https://en.wikipedia.org/wiki/Spermatogenesis
Spermatogenesis
Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testicle. The process starts with the mitotic division of the stem cells located close to the basement membrane of the tubules, called spermatogonial stem cells. Their mitotic division produces two types of cells: type A cells replenish the stem cells, and type B cells differentiate into primary spermatocytes. The primary spermatocyte divides meiotically (meiosis I) into two secondary spermatocytes; each secondary spermatocyte divides into two equal haploid spermatids by meiosis II. The spermatids are transformed into mature spermatozoa (sperm cells) by the process of spermiogenesis. Thus, the primary spermatocyte gives rise to two secondary spermatocytes, and the two secondary spermatocytes by their subdivision produce four haploid spermatids, which mature into four spermatozoa. Spermatozoa are the mature male gametes in many sexually reproducing organisms. Thus, spermatogenesis is the male version of gametogenesis, of which the female equivalent is oogenesis. In mammals it occurs in the seminiferous tubules of the male testes in a stepwise fashion. Spermatogenesis is highly dependent upon optimal conditions for the process to occur correctly, and is essential for sexual reproduction. DNA methylation and histone modification have been implicated in the regulation of this process. It starts during puberty and usually continues uninterrupted until death, although a slight decrease in the quantity of produced sperm can be discerned with increasing age (see Male infertility). Spermatogenesis begins at the wall of the seminiferous tubules; the developing cells progressively move toward the lumen and along the tubule until mature spermatozoa reach the lumen, where they are deposited. Division happens asynchronously; if the tubule is cut transversally, one can observe cells in different maturation states. A group of cells at different maturation states that are being generated at the same time is called a spermatogenic wave.

Purpose

Spermatogenesis produces mature male gametes, commonly called sperm but more specifically known as spermatozoa, which are able to fertilize the counterpart female gamete, the oocyte, during conception to produce a single-celled individual known as a zygote. This is the cornerstone of sexual reproduction and involves the two gametes each contributing half the normal set of chromosomes (haploid) to result in a chromosomally normal (diploid) zygote. To preserve the number of chromosomes in the offspring, which differs between species, each gamete must have half the usual number of chromosomes present in other body cells. Otherwise, the offspring will have twice the normal number of chromosomes, and serious abnormalities may result. In humans, chromosomal abnormalities arising from incorrect spermatogenesis result in congenital defects (such as Down syndrome and Klinefelter syndrome) and, in most cases, spontaneous abortion of the developing foetus.

Location in humans

Spermatogenesis takes place within several structures of the male reproductive system. The initial stages occur within the testes and progress to the epididymis, where the developing gametes mature and are stored until ejaculation.
The seminiferous tubules of the testes are the starting point for the process: spermatogonial stem cells adjacent to the inner tubule wall divide in a centripetal direction, beginning at the walls and proceeding into the innermost part, or lumen, to produce immature sperm. Maturation occurs in the epididymis. The location of the testes within the scrotum is important because spermatogenesis requires a temperature 1–8 °C below the normal body temperature of 37 °C (98.6 °F) to produce viable sperm. Clinically, small fluctuations in temperature, such as from an athletic support strap, cause no impairment in sperm viability or count.

Duration

For humans, the entire process of spermatogenesis is variously estimated as taking between 72 and 74 days (according to tritium-labelled biopsies) or approximately 120 days (according to DNA clock measurements). Including transport through the ductal system, it takes about three months. The testes produce 200 to 300 million spermatozoa daily, but only about half, roughly 100 million, become viable sperm.

Stages

The entire process of spermatogenesis can be broken up into several distinct stages, each corresponding to a particular type of cell in humans. For each stage, ploidy, copy number and chromosome/chromatid counts are given for one cell, generally prior to DNA synthesis and division (in G1 if applicable); a stage-by-stage summary sketch appears at the end of this article. The primary spermatocyte is arrested after DNA synthesis and prior to division.

Spermatocytogenesis

Spermatocytogenesis is the male form of gametocytogenesis and results in the formation of spermatocytes possessing half the normal complement of genetic material. In spermatocytogenesis, a diploid spermatogonium, which resides in the basal compartment of the seminiferous tubules, divides mitotically, producing two diploid intermediate cells called primary spermatocytes. Each primary spermatocyte then moves into the adluminal compartment of the seminiferous tubules, duplicates its DNA, and subsequently undergoes meiosis I to produce two haploid secondary spermatocytes, which will later divide once more into haploid spermatids. This division introduces sources of genetic variation, such as the random inclusion of either parental chromosome and chromosomal crossover, which increase the genetic variability of the gamete. The DNA damage response (DDR) machinery plays an important role in spermatogenesis. The protein FMRP binds to meiotic chromosomes and regulates the dynamics of the DDR machinery during spermatogenesis; FMRP appears to be necessary for the repair of DNA damage. During spermatocytogenesis, meiosis employs special DNA repair processes that remove DNA damages and help maintain the integrity of the genome that is passed on to progeny. These DNA repair processes include homologous recombinational repair and non-homologous end joining. Each cell division from a spermatogonium to a spermatid is incomplete; the cells remain connected to one another by bridges of cytoplasm to allow synchronous development. Not all spermatogonia divide to produce spermatocytes; otherwise, the supply of spermatogonia would run out. Instead, spermatogonial stem cells divide mitotically to produce copies of themselves, ensuring a constant supply of spermatogonia to fuel spermatogenesis.

Spermatidogenesis

Spermatidogenesis is the creation of spermatids from secondary spermatocytes. Secondary spermatocytes produced earlier rapidly enter meiosis II and divide to produce haploid spermatids.
The brevity of this stage means that secondary spermatocytes are rarely seen in histological studies.

Spermiogenesis

During spermiogenesis, the spermatids begin to form a tail by growing microtubules on one of the centrioles, which turns into the basal body. These microtubules form an axoneme. Later, the centriole is modified in the process of centrosome reduction. The anterior part of the tail (called the midpiece) thickens because mitochondria are arranged around the axoneme to ensure energy supply. Spermatid DNA also undergoes packaging, becoming highly condensed. The DNA is packaged first with specific nuclear basic proteins, which are subsequently replaced with protamines during spermatid elongation. The resultant tightly packed chromatin is transcriptionally inactive. The Golgi apparatus surrounds the now condensed nucleus, becoming the acrosome. Maturation then takes place under the influence of testosterone, which removes the remaining unnecessary cytoplasm and organelles. The excess cytoplasm, known as residual bodies, is phagocytosed by surrounding Sertoli cells in the testes. The resulting spermatozoa are now mature but lack motility. The mature spermatozoa are released from the protective Sertoli cells into the lumen of the seminiferous tubule in a process called spermiation. The non-motile spermatozoa are transported to the epididymis in testicular fluid secreted by the Sertoli cells, with the aid of peristaltic contraction. In the epididymis the spermatozoa gain motility and become capable of fertilization. However, transport of the mature spermatozoa through the remainder of the male reproductive system is achieved via muscle contraction rather than the spermatozoon's recently acquired motility.

Role of Sertoli cells

At all stages of differentiation, the spermatogenic cells are in close contact with Sertoli cells, which are thought to provide structural and metabolic support to the developing sperm cells. A single Sertoli cell extends from the basement membrane to the lumen of the seminiferous tubule, although the cytoplasmic processes are difficult to distinguish at the light microscopic level. Sertoli cells serve a number of functions during spermatogenesis; they support the developing gametes in the following ways:

- Maintain the environment necessary for development and maturation, via the blood–testis barrier
- Secrete substances initiating meiosis
- Secrete supporting testicular fluid
- Secrete androgen-binding protein (ABP), which concentrates testosterone in close proximity to the developing gametes; testosterone is needed in very high quantities for maintenance of the reproductive tract, and ABP allows a much higher level of fertility
- Secrete hormones affecting pituitary gland control of spermatogenesis, particularly the polypeptide hormone inhibin
- Phagocytose residual cytoplasm left over from spermiogenesis
- Secrete anti-Müllerian hormone, which causes deterioration of the Müllerian duct
- Protect spermatids from the immune system of the male, via the blood–testis barrier
- Contribute to the spermatogonial stem cell niche

The intercellular adhesion molecules ICAM-1 and soluble ICAM-1 have antagonistic effects on the tight junctions forming the blood–testis barrier. ICAM-2 molecules regulate spermatid adhesion on the apical side of the barrier (towards the lumen).

Influencing factors

The process of spermatogenesis is highly sensitive to fluctuations in the environment, particularly hormones and temperature.
Testosterone is required in large local concentrations to maintain the process, which is achieved via the binding of testosterone by androgen-binding protein present in the seminiferous tubules. Testosterone is produced by interstitial cells, also known as Leydig cells, which reside adjacent to the seminiferous tubules. The seminiferous epithelium is sensitive to elevated temperature in humans and some other species, and will be adversely affected by temperatures as high as normal body temperature. In addition, spermatogonia do not reach maturity at body temperature in most mammals, as β-polymerase and spermatogenic recombinase need a specific optimal temperature. Consequently, the testes are located outside the body in a sac of skin called the scrotum. The optimal temperature is maintained at about 2 °C below body temperature in humans, and 8 °C below in mice. This is achieved by regulation of blood flow and by positioning the testes towards and away from the heat of the body using the cremasteric muscle and the dartos smooth muscle in the scrotum. One important mechanism is a thermal exchange between testicular arterial and venous blood streams. Specialized anatomic arrangements consist of two zones of coiling along the internal spermatic artery. This arrangement prolongs the time of contact and the thermal exchange between the testicular arterial and venous blood streams and may, in part, explain the temperature gradient between aortic and testicular arterial blood reported in dogs and rams. Moreover, a reduction in pulse pressure occurs in the proximal one-third of the coiled length of the internal spermatic artery. At elevated temperatures, the activity of spermatogenic recombinase decreases, and this is thought to be an important factor in testicular degeneration. Dietary deficiencies (such as of vitamins B, E and A), anabolic steroids, metals (cadmium and lead), x-ray exposure, dioxin, alcohol, and infectious diseases will also adversely affect the rate of spermatogenesis. In addition, the male germ line is susceptible to DNA damage caused by oxidative stress, and this damage likely has a significant impact on fertilization and pregnancy. According to a study by Omid Mehrpour et al., exposure to pesticides also affects spermatogenesis.

Hormonal control

Hormonal control of spermatogenesis varies among species. In humans the mechanism is not completely understood; however, it is known that initiation of spermatogenesis occurs at puberty due to the interaction of the hypothalamus, pituitary gland and Leydig cells. If the pituitary gland is removed, spermatogenesis can still be initiated by follicle-stimulating hormone (FSH) and testosterone. In contrast to FSH, luteinizing hormone (LH) appears to have little role in spermatogenesis outside of inducing gonadal testosterone production. FSH stimulates both the production of androgen-binding protein (ABP) by Sertoli cells and the formation of the blood–testis barrier. ABP is essential to concentrating testosterone at levels high enough to initiate and maintain spermatogenesis. Intratesticular testosterone levels are 20–100 or 50–200 times higher than the concentration found in blood, although there is variation over a 5- to 10-fold range amongst healthy men. Testosterone production does not remain constant throughout the day but follows a circadian rhythm, peaking at around 8 a.m., which explains why men frequently experience morning erections. In younger men, testosterone peaks are higher.
FSH may initiate the sequestering of testosterone in the testes, but once developed only testosterone is required to maintain spermatogenesis. However, increasing the levels of FSH will increase the production of spermatozoa by preventing the apoptosis of type A spermatogonia. The hormone inhibin acts to decrease the levels of FSH. Studies in rodent models suggest that gonadotropins (both LH and FSH) support the process of spermatogenesis by suppressing proapoptotic signals and therefore promoting spermatogenic cell survival. The Sertoli cells themselves mediate parts of spermatogenesis through hormone production; they are capable of producing the hormones estradiol and inhibin. The Leydig cells are also capable of producing estradiol in addition to their main product, testosterone. Estrogen has been found to be essential for spermatogenesis in animals. However, a man with estrogen insensitivity syndrome (a defective ERα) was found to produce sperm with a normal sperm count, albeit abnormally low sperm viability; whether he was sterile or not is unclear. Levels of estrogen that are too high can be detrimental to spermatogenesis due to suppression of gonadotropin secretion and, by extension, intratesticular testosterone production. The connection between spermatogenesis and prolactin levels appears to be moderate, with optimal prolactin levels reflecting efficient sperm production.

Disorders

Disorders of spermatogenesis may cause oligospermia, which is semen with a low concentration of sperm and is a common finding in male infertility.
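The stage-by-stage chromosome bookkeeping described under Stages can be summarized in a short sketch. The values below are standard human textbook figures (46 chromosomes; "C" denoting one haploid complement of DNA) rather than numbers taken from this article, so treat them as an illustrative assumption.

```python
# Illustrative summary of human spermatogenesis stages (standard textbook
# values; counts are per cell, as described in the Stages section above).
# "C" denotes one haploid complement of DNA.

STAGES = [
    # (cell type,              ploidy,    chromosomes, DNA content)
    ("spermatogonium",         "diploid", 46,          "2C"),
    ("primary spermatocyte",   "diploid", 46,          "4C"),  # after S phase, before meiosis I
    ("secondary spermatocyte", "haploid", 23,          "2C"),  # after meiosis I
    ("spermatid",              "haploid", 23,          "1C"),  # after meiosis II
    ("spermatozoon",           "haploid", 23,          "1C"),  # after spermiogenesis
]

def summarize() -> None:
    """Print the stages and check the one-to-four gamete arithmetic."""
    for cell, ploidy, chromosomes, dna in STAGES:
        print(f"{cell:24s} {ploidy:8s} {chromosomes:2d} chromosomes  {dna}")
    # One primary spermatocyte -> 2 secondary spermatocytes -> 4 spermatids.
    assert 2 * 2 == 4

if __name__ == "__main__":
    summarize()
```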
https://en.wikipedia.org/wiki/Blood%20donation
Blood donation
A blood donation occurs when a person voluntarily has blood drawn and used for transfusions and/or made into biopharmaceutical medications by a process called fractionation (separation of whole-blood components). A donation may be of whole blood, or of specific components directly (apheresis). Blood banks often participate in the collection process as well as the procedures that follow it. Today in the developed world, most blood donors are unpaid volunteers who donate blood for a community supply. In some countries, established supplies are limited and donors usually give blood when family or friends need a transfusion (directed donation). People donate for a variety of reasons, including charity, general awareness regarding the demand for blood, increased confidence in oneself, helping a personal friend or relative, and social pressure. Despite these many motivations, not enough potential donors actively donate. This is reversed during disasters, however, when blood donations increase, often creating an excess supply that must later be discarded. In countries that allow paid donation some people are paid, and in some cases there are incentives other than money, such as paid time off from work. People can also have blood drawn for their own future use (autologous donation). Donating is relatively safe, but some donors have bruising where the needle is inserted or may feel faint. Potential donors are evaluated for anything that might make their blood unsafe to use. The screening includes testing for diseases that can be transmitted by a blood transfusion, including HIV and viral hepatitis. The donor must also answer questions about medical history and take a short physical examination to make sure the donation is not hazardous to their health. How often a donor can donate varies from days to months based on what component they donate and the laws of the country where the donation takes place. For example, in the United States, donors must wait 56 days (eight weeks) between whole-blood donations but only seven days between platelet apheresis donations, and may give plasma by plasmapheresis twice per seven-day period (an illustrative interval calculation appears after the Screening section below). The amount of blood drawn and the methods vary. The collection can be done manually or with automated equipment that takes only specific components of the blood. Most of the components of blood used for transfusions have a short shelf life, and maintaining a constant supply is a persistent problem. This has led to some increased interest in autotransfusion, whereby a patient's blood is salvaged during surgery for continuous reinfusion, or alternatively is self-donated prior to when it will be needed. Generally, the notion of donation does not refer to giving to one's self, though in this context it has become somewhat acceptably idiomatic.

History

Charles Richard Drew (1904–1950) was an American surgeon and medical researcher. He researched blood transfusions, developed improved techniques for blood storage, and applied his expert knowledge to developing large-scale blood banks early in World War II, allowing medics to save thousands of lives among the Allied forces. As the most prominent African American in the field, Drew protested against the practice of racial segregation in the donation of blood, as it lacked scientific foundation, and resigned his position with the American Red Cross, which maintained the policy until 1950.

Types of donation

Blood donations are divided into groups based on who will receive the collected blood.
An allogeneic (also called homologous) donation is when a donor gives blood for storage at a blood bank for transfusion to an unknown recipient. A directed donation is when a person, often a family member, donates blood for transfusion to a specific individual. Directed donations are relatively rare when an established supply exists. A replacement donor donation is a hybrid of the two and is common in developing countries: a friend or family member of the recipient donates blood to replace the stored blood used in a transfusion, ensuring a consistent supply. When a person has blood stored that will be transfused back to the donor at a later date, usually after surgery, that is called an autologous donation. Blood that is used to make medications can be made from allogeneic donations or from donations exclusively used for manufacturing. Sometimes there are specific reasons for preferring one form over another. Allogeneic donations have a lower risk of some complications than blood from a family member; neonatal alloimmune thrombocytopenia, by contrast, may require a transfusion of the mother's own platelets. Autologous (self) donations may be preferred for someone with a rare blood type facing a planned surgery. Blood is sometimes collected using similar methods for therapeutic phlebotomy, a practice descended from ancient bloodletting, which is used to treat conditions such as hereditary hemochromatosis or polycythemia vera. This blood is sometimes treated as a blood donation, but may be immediately discarded if it cannot be used for transfusion or further manufacturing. The actual process varies according to the laws of the country, and recommendations to donors vary according to the collecting organization. The World Health Organization gives recommendations for blood donation policies, but in developing countries many of these are not followed. For example, the recommended testing requires laboratory facilities, trained staff, and specialized reagents, which may be unavailable or too expensive in developing countries. A blood drive or blood donor session is an event in which donors come to donate allogeneic blood. These can occur at a blood bank, but they are often set up at a location in the community, such as a shopping center, workplace, school, or house of worship.

Screening

Donors are typically required to give consent for the process and to meet certain criteria, such as minimum weight and hemoglobin levels; the consent requirement means minors cannot donate without permission from a parent or guardian. In some countries, answers are associated with the donor's blood, but not their name, to provide anonymity; in others, such as the United States, names are kept to create lists of ineligible donors. If a potential donor does not meet these criteria, they are 'deferred'. This term is used because many donors who are ineligible may be allowed to donate later. Blood banks in the United States may be required to label the blood if it is from a therapeutic donor, so some do not accept donations from donors with any blood disease. Others, such as the Australian Red Cross Blood Service, accept blood from donors with hemochromatosis, a genetic disorder that does not affect the safety of the blood. The donor's race or ethnic background is sometimes important, since certain blood types, especially rare ones, are more common in certain ethnic groups. Historically, in the United States donors were segregated or excluded on the basis of race, religion, or ethnicity, but this is no longer a standard practice.
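The interval rules quoted in the introduction (United States figures: 56 days between whole-blood donations, seven days between platelet apheresis donations, plasmapheresis up to twice per seven-day period) can be illustrated with a minimal deferral-date calculator. The function name and structure below are hypothetical, and the sketch deliberately ignores the medical screening described above; it only encodes the waiting periods stated in this article.

```python
from datetime import date, timedelta

# Minimum waiting periods between donations of the same component, using the
# United States figures quoted in this article. Rules vary by country and by
# collecting organization; these values are illustrative only.
MIN_INTERVAL_DAYS = {
    "whole_blood": 56,  # eight weeks between whole-blood donations
    "platelets": 7,     # seven days between platelet apheresis donations
    "plasma": 2,        # plasmapheresis: up to twice per seven-day period
}

def next_eligible_date(last_donation: date, component: str) -> date:
    """Return the earliest date a donor could give the same component again.

    Hypothetical helper for illustration; a real blood bank would also apply
    the full screening criteria (hemoglobin, blood pressure, travel history,
    and so on) before accepting a donation.
    """
    return last_donation + timedelta(days=MIN_INTERVAL_DAYS[component])

# Example: a whole-blood donor who gave on 1 March 2024 may return on 26 April.
print(next_eligible_date(date(2024, 3, 1), "whole_blood"))  # 2024-04-26
```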
Recipient safety

Donors are screened for health risks that could make the donation unsafe for the recipient. Some of these restrictions are controversial, such as restricting donations from men who have sex with men (MSM) because of the risk of transmitting HIV. In 2011, the UK (excluding Northern Ireland) reduced its blanket ban on MSM donors to a narrower restriction that only prevented MSM from donating blood if they had had sex with other men within the past year. A similar change was made in the US in late 2015 by the Food and Drug Administration (FDA). The UK further reduced its restriction to three months in 2017, and the US did the same in 2020. In 2023, the FDA announced new policies easing restrictions on gay and bisexual men donating blood. These updated guidelines stipulate that men in monogamous relationships with other men, or who have not recently had sex, can donate; individuals who report having had sex with an HIV-positive partner, or anal sex with a new partner, are still barred from blood donation. Autologous donors are not always screened for recipient safety problems, since the donor is the only person who will receive the blood. Since donated blood may be given to pregnant women or women of child-bearing age, donors taking teratogenic (birth-defect-causing) medications are deferred. These medications include acitretin, etretinate, isotretinoin, finasteride, and dutasteride. Donors are examined for signs and symptoms of diseases that can be transmitted in a blood transfusion, such as HIV, malaria, and viral hepatitis. Screening may include questions about risk factors for various diseases, such as travel to countries at risk for malaria or variant Creutzfeldt–Jakob disease (vCJD). These questions vary from country to country. For example, while blood centers in Québec and the rest of Canada, Poland, and many other places defer donors who lived in the United Kingdom because of the risk of vCJD, donors in the United Kingdom are only restricted for vCJD risk if they have had a blood transfusion in the United Kingdom. Australia removed its UK-donor deferral in July 2022. Directed donations from family members (e.g., a father donating blood to his child) carry extra risks for the recipient. Any blood transfusion carries some risk of a transfusion reaction, but between genetically related family members there are additional risks: the donated blood must be irradiated to prevent potentially deadly graft-versus-host disease, which is more likely between genetically related people, and not all healthcare facilities have the equipment to do this on site. Alloimmunization is a particular risk for directed granulocyte donations. It is a common misconception that directed donations are safer for the recipient; in fact, family members and close friends, especially parents who have not previously donated blood, frequently feel pressured into lying about disqualifying risk factors (e.g., drug use or prior sexual relationships) and their eligibility, which can result in a higher risk of infection with bloodborne pathogens. Additionally, in the less common case of a person with leukemia or other bone marrow disease, a familial blood transfusion can trigger the production of alloantibodies against HLA proteins, which can cause a future bone marrow transplant from that donor to fail. (Closely related family members are usually the best match for a bone marrow transplant.) A directed donation from an unrelated friend would not carry the same risk.
Donor safety

The donor is also examined and asked specific questions about their medical history to make sure that donating blood is not hazardous to their health. The donor's hematocrit or hemoglobin level is tested to make sure that the loss of blood will not make them anemic; this check is the most common reason that a donor is ineligible. The American Red Cross accepts hemoglobin levels from 12.5 g/dL (for females) or 13.0 g/dL (for males) up to 20.0 g/dL; anyone with a hemoglobin level outside this range cannot donate. Pulse, blood pressure, and body temperature are also evaluated. Elderly donors are sometimes deferred on age alone because of health concerns. In addition to age, weight and height are important factors in donor eligibility; for example, the American Red Cross sets a minimum weight for whole blood and platelet donation and, for power red donations (double red erythrocytapheresis), minimum height and weight requirements that differ for males and females. The safety of donating blood during pregnancy has not been studied thoroughly, and pregnant women are usually deferred until six weeks after the pregnancy. Donors with aortic stenosis have traditionally been deferred out of concern that the acute volume depletion (475 mL) of blood donation might compromise cardiac output.

Blood testing

The donor's blood type must be determined if the blood will be used for transfusions. The collecting agency usually identifies whether the blood is type A, B, AB, or O, determines the donor's Rh (D) type, and screens for antibodies to less common antigens. More testing, including a crossmatch, is usually done before a transfusion. Type O negative is often cited as the "universal donor", but this only refers to red cell and whole blood transfusions. For plasma and platelet transfusions the system is reversed: AB positive is the universal platelet donor type, while both AB positive and AB negative are universal plasma donor types. Most blood is tested for diseases, including some STDs. The tests used are high-sensitivity screening tests, and no actual diagnosis is made. Some of the test results are later found to be false positives using more specific testing. False negatives are rare, but donors are discouraged from using blood donation for the purpose of anonymous STD screening because a false negative could mean a contaminated unit. The blood is usually discarded if these tests are positive, but there are some exceptions, such as autologous donations. The donor is generally notified of the test result. Donated blood is tested by many methods, but the core tests recommended by the World Health Organization are these four:

- Hepatitis B surface antigen
- Antibody to hepatitis C
- Antibody to HIV, usually subtypes 1 and 2
- Serologic test for syphilis

The WHO reported in 2006 that 56 out of 124 countries surveyed did not use these basic tests on all blood donations. A variety of other tests for transfusion-transmitted infections are often used based on local requirements. Additional testing is expensive, and in some cases the tests are not implemented because of the cost. These additional tests cover other infectious diseases such as West Nile virus and babesiosis. Sometimes multiple tests are used for a single disease to cover the limitations of each test. For example, the HIV antibody test will not detect a recently infected donor, so some blood banks use a p24 antigen or HIV nucleic acid test in addition to the basic antibody test to detect infected donors.
Cytomegalovirus is a special case in donor testing in that many donors will test positive for it. The virus is not a hazard to a healthy recipient, but it can harm infants and other recipients with weak immune systems. Blood testing in the US takes at least 48 hours; because of the time required for testing, directed donations are not practical in emergencies.

Obtaining the blood

There are two main methods of obtaining blood from a donor. The most frequent is to simply take the blood from a vein as whole blood. This blood is typically separated into parts, usually red blood cells and plasma, since most recipients (other than trauma patients) need only a specific component for transfusions. The amount of blood donated in one session, generally called a 'unit', is defined by the WHO as 450 millilitres. Some countries, like Canada, follow this standard, but others have set their own rules, and sometimes there is variation even among different agencies within a country. For example, whole blood donations in the United States are in the 460–500 ml range, while those in the EU must be in the 400–500 ml range. Other countries have smaller units: India uses 350 ml, Singapore 350 or 450 ml, and Japan 200 or 400 ml. Historically, donors in the People's Republic of China would donate only 200 ml, though larger 300 and 400 ml donations have become more common, particularly in northern China and for heavier donors. In any case, an additional 5–10 ml of blood may be collected separately for testing. The other method is to draw blood from the donor, separate it using a centrifuge or a filter, store the desired part, and return the rest to the donor. This process is called apheresis, and it is often done with a machine specifically designed for this purpose. It is especially common for plasma, platelets, and red blood cells. For direct transfusions a vein can be used, but the blood may be taken from an artery instead; in this case, the blood is not stored but is pumped directly from the donor into the recipient. This was an early method for blood transfusion and is rarely used in modern practice. It was phased out during World War II because of logistical problems, and doctors who had treated wounded soldiers set up banks for stored blood when they returned to civilian life.

Site preparation and drawing blood

The blood is drawn from a large arm vein close to the skin, usually the median cubital vein on the inside of the elbow. The skin over the blood vessel is cleaned with an antiseptic such as iodine or chlorhexidine to prevent skin bacteria from contaminating the collected blood and also to prevent infection where the needle pierces the donor's skin. A large needle (16 to 17 gauge) is used to minimize shearing forces that may physically damage red blood cells as they flow through the needle. A tourniquet is sometimes wrapped around the upper arm to increase the pressure of the blood in the arm veins and speed up the process. The donor may also be prompted to hold an object and squeeze it repeatedly to increase the blood flow through the vein.

Whole blood

The most common method is collecting the blood from the donor's vein into a container. The amount of blood drawn varies from 200 millilitres to 550 millilitres depending on the country, but 450 millilitres is typical. The blood is usually stored in a flexible plastic bag that also contains sodium citrate, phosphate, dextrose, and adenine.
This combination keeps the blood from clotting and preserves it during storage for up to 42 days. Other chemicals are sometimes added during processing. The plasma from whole blood can be used for transfusions, or it can be processed into other medications using a process called fractionation. This was a development of the dried plasma used to treat the wounded during World War II, and variants on the process are still used to make a variety of other medications.

Apheresis

Apheresis is a blood donation method in which the blood is passed through an apparatus that separates out one particular constituent and returns the remainder to the donor. Usually the component returned is the red blood cells, the portion of the blood that takes the longest to replace. Using this method an individual can donate plasma or platelets much more frequently than they can safely donate whole blood. These can be combined, with a donor giving both plasma and platelets in the same donation. Platelets can also be separated from whole blood, but they must be pooled from multiple donations: from three to ten units of whole blood are required for a therapeutic dose, whereas plateletpheresis provides at least one full dose from each donation. During a platelet donation, the blood is drawn from the donor and the platelets are separated from the other blood components. The remainder of the blood (red blood cells, plasma, and white blood cells) is returned to the donor. This process is repeated several times over a period of up to two hours to collect a single donation. Plasmapheresis is frequently used to collect source plasma that is used for manufacturing into medications, much like the plasma from whole blood. Plasma collected at the same time as plateletpheresis is sometimes called concurrent plasma. Apheresis is also used to collect more red blood cells than usual in a single donation (commonly known as "double reds") and to collect white blood cells for transfusion.

Recovery and time between donations

Donors are usually kept at the donation site for 10–15 minutes after donating, since most adverse reactions take place during or immediately after the donation. Blood centers typically provide light refreshments, such as orange juice and cookies, or a lunch allowance to help the donor recover. The needle site is covered with a bandage, and the donor is directed to keep the bandage on for several hours. In hot climates, donors are advised to avoid dehydration and to refrain from strenuous exercise and alcohol for a few hours after donation. Donated plasma is replaced after 2–3 days. Red blood cells are replaced by the bone marrow at a slower rate: on average 36 days in healthy adult males, with one study finding a range of 20 to 59 days for recovery. These replacement rates are the basis of how frequently a donor can donate blood. Plasmapheresis and plateletpheresis donors can donate much more frequently because they do not lose significant amounts of red cells. The exact rate of how often a donor can donate differs from country to country. For example, plasmapheresis donors in the United States are allowed to donate large volumes twice a week and could nominally donate 83 litres (about 22 gallons) in a year, whereas the same donor in Japan may only donate every other week and could only donate about 16 litres (about 4 gallons) in a year. Iron supplementation decreases the rates of donor deferral due to low hemoglobin, both at the first donation visit and at subsequent donations.
Iron-supplemented donors have higher hemoglobin and iron stores. On the other hand, iron supplementation frequently causes diarrhea, constipation, and epigastric abdominal discomfort. The long-term effects of iron supplementation without measurement of iron stores are unknown.

Complications

Donors are screened for health problems that would put them at risk for serious complications from donating. First-time donors, teenagers, and women are at a higher risk of a reaction. One study showed that 2% of donors had an adverse reaction to donation; most of these reactions are minor. A study of 194,000 donations found only one donor with long-term complications. In the United States, a blood bank is required to report any death that might possibly be linked to a blood donation. An analysis of all reports from October 2008 to September 2009 evaluated six events; five of the deaths were clearly unrelated to donation, and in the remaining case no evidence was found that the donation was the cause of death. Hypovolemic reactions can occur because of a rapid change in blood pressure. Fainting is generally the worst problem encountered, though falls due to loss of consciousness may result in injuries in rare cases. The risk of dizziness with fainting is increased in female and young donors. The process has similar risks to other forms of phlebotomy. Bruising of the arm from the needle insertion is the most common concern; one study found that less than 1% of donors had this problem. A number of less common complications of blood donation are known to occur. These include arterial puncture, delayed bleeding, nerve irritation, nerve injury, tendon injury, thrombophlebitis, and allergic reactions. Donors sometimes have adverse reactions to the sodium citrate used in apheresis collection procedures to keep the blood from clotting. Since the anticoagulant is returned to the donor along with blood components that are not being collected, it can bind the calcium in the donor's blood and cause hypocalcemia. These reactions tend to cause tingling in the lips, but may cause convulsions, seizures, hypertension, or more serious problems. Donors are sometimes given calcium supplements during the donation to prevent these side effects. In apheresis procedures, the red blood cells are returned. If this is done manually and the donor receives blood from a different donor, a transfusion reaction can take place. Manual apheresis is extremely rare in the developed world because of this risk, and automated procedures are as safe as whole blood donations. The final risk to blood donors is from equipment that has not been properly sterilized. In most cases, the equipment that comes in direct contact with blood is discarded after use. Re-used equipment was a significant problem in China in the 1990s, when up to 250,000 blood plasma donors may have been exposed to HIV from shared equipment.

Storage, supply and demand

Storage and blood shelf life

The collected blood is usually stored in a blood bank as separate components, and some of these have short shelf lives. There are no storage methods to keep platelets for extended periods of time, though some were being studied as of 2008. The longest shelf life used for platelets is seven days. Red blood cells (RBC), the most frequently used component, have a shelf life of 35–42 days at refrigerated temperatures.
For (relatively rare) long-term storage applications, this can be extended by freezing the blood with a mixture of glycerol, but this process is expensive and requires an extremely cold freezer for storage. Plasma can be stored frozen for an extended period of time and is typically given an expiration date of one year, so maintaining a supply is less of a problem.

Demand for blood

The American Red Cross states that in the United States, someone needs blood every two seconds and someone needs platelets every thirty seconds. Demand is not consistent across blood types: one type being in stock does not guarantee that another type is, and blood banks may have some units in stock but lack others, ultimately causing patients who need specific blood types to have procedures delayed or canceled. Additionally, demand for transfusions grows by around 5–7% every year without a matching increase in donors, and the growing population of elderly people will need more transfusions in the future without a predicted increase in donations to match. This was illustrated in 1998, when blood donations to the Red Cross increased by 8%, totaling 500,000 units, but hospitals' need for donations increased by 11%. Blood is persistently in high demand, with numerous accounts of periodic shortages over the decades. However, this trend is disrupted during national disasters: people donate the most during catastrophes, when, arguably, donations are least needed compared to periods without disasters. From 1988 to 2013, it was reported that every national disaster produced a surplus of donations of over 100 units. One of the most notable examples of this pattern was the September 11 attacks. One study estimated that, compared to the four weeks before September 11, first-time donations rose by about 18,700 in the first week after the attack (from an average of about 4,000 per week to about 22,700), while repeat donors increased their donations by about 10,000 per week (from roughly 16,400 to 26,400). In total, therefore, the first week after the attack saw an estimated 28,700 more donations than the average week in the preceding month. Increases in donations were observed in all blood donation centers, beginning on the day of the attack. While blood donations were above average for the first few weeks following 9/11, the number of donations fell from an estimated 49,000 in the first week to 26,000–28,000 between the second and fourth weeks after 9/11. Despite the substantial increase in donors, the rate at which first-time donors became repeat donors was the same before and after the attack. The limited storage time means that it is difficult to have a stockpile of blood to prepare for a disaster. The subject was discussed at length after the September 11 attacks in the United States, and the consensus was that collecting during a disaster was impractical and that efforts should be focused on maintaining an adequate supply at all times. Blood centers in the U.S. often have difficulty maintaining even a three-day supply for routine transfusion demands.
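The first-week donation figures quoted above reduce to simple arithmetic; the sketch below merely re-derives the quoted totals from the baseline and increase values given in the text, as a consistency check.

```python
# Re-deriving the post-9/11 first-week donation figures quoted above.
first_time_before, first_time_increase = 4_000, 18_700
repeat_before, repeat_increase = 16_400, 10_000

first_time_after = first_time_before + first_time_increase  # ~22,700 per week
repeat_after = repeat_before + repeat_increase               # ~26,400 per week
total_increase = first_time_increase + repeat_increase       # ~28,700 per week

print(first_time_after, repeat_after, total_increase)  # 22700 26400 28700
```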
Donation levels

The World Health Organization (WHO) recognizes World Blood Donor Day on 14 June each year to promote blood donation. This is the birthday of Karl Landsteiner, the scientist who discovered the ABO blood group system. The theme of the 2012 World Blood Donor Day campaign, "Every blood donor is a hero", focused on the idea that everyone can become a hero by giving blood. Based on data reported by 180 countries between 2011 and 2013, the WHO estimated that approximately 112.5 million units of blood were being collected annually. In the United States an estimated 111 million citizens, or 37% of the population, are eligible blood donors; however, fewer than 10% of those eligible donate annually. In the UK the NHS reports blood donation levels at "only 4%", while in Canada the rate is 3.5%.

Donor incentives and deterrents

Multiple studies have shown that the main reasons people donate are prosocial motivators (e.g., altruism, selflessness, charity), general awareness regarding the demand for blood, increased confidence in oneself, helping a personal friend or relative, and social pressure. On the other hand, people may be deterred from donating by fear, lack of faith in medical professionals, inconvenience, lack of consideration for donating, or perceived racial discrimination. Pathologist Leo McCarthy states that blood shortages routinely occur in the United States between July 4 and Labor Day and between Christmas and New Year.

Donor health benefits

In patients prone to iron overload, blood donation prevents the accumulation of toxic quantities of iron. Donating blood may reduce the risk of heart disease for men, but the link has not been firmly established and may reflect selection bias, because donors are screened for health problems. Research published in 2012 demonstrated that, in patients with metabolic syndrome, repeated blood donation is effective in reducing blood pressure, blood glucose, HbA1c, the low-density/high-density lipoprotein ratio, and heart rate. A study published in JAMA Network Open tracked PFAS levels in a clinical trial and showed that regular blood or plasma donations resulted in a significant reduction in participants' PFAS levels.

Donor compensation

The World Health Organization set a goal in 1997 for all blood donations to come from unpaid volunteer donors, but as of 2006, only 49 of 124 countries surveyed had established this as a standard. Some countries, such as Tanzania, have made great strides in moving towards this standard, with unpaid volunteers rising from 20 percent of donors in 2005 to 80 percent in 2007, but 68 of 124 countries surveyed by WHO had made little or no progress. Most plasmapheresis donors in the United States are still paid for their donations, now between $25 and $50 per donation. In some countries, for example Brazil and the United Kingdom, it is illegal to receive any compensation, monetary or otherwise, for the donation of blood or other human tissues. Regular donors are often given some sort of non-monetary recognition, and time off from work is a common benefit. For example, in Italy blood donors receive the donation day as a paid holiday from work. In 2023, Poland introduced legislation granting employed donors two days off work: the donation day and the following day.
Blood centers will also sometimes add incentives such as assurances that donors will have priority during shortages, free T-shirts, first aid kits, windshield scrapers, pens, and similar trinkets. There are also incentives for the people who recruit potential donors, such as prize drawings for donors and rewards for organizers of successful drives. Recognition of dedicated donors is common. For example, the Singapore Red Cross Society presents awards to voluntary donors who have made a certain number of donations under the Blood Donor Recruitment Programme, starting with a "bronze award" for 25 donations. In Ireland, the Irish Blood Transfusion Service awards a silver pin or pendant for 10 donations, a gold pin or pendant for 20 donations, and a gold lapel pin for 50 donations, while those reaching 100 donations attend a dinner ceremony where they are presented with a small porcelain statue depicting the logo of the IBTS (a pelican). The government of Malaysia also offers free outpatient and hospitalization benefits for blood donors; for example, four months of free outpatient treatment and hospitalization benefits after every donation. In Poland, after donating a specific amount of blood (18 litres for men and 15 litres for women), a person is awarded the title of "Distinguished Honorary Blood Donor" as well as a medal. In addition, a popular privilege in larger Polish cities is the right to free use of public transport, although the conditions for obtaining this privilege may vary depending on the city. Also in Poland, Poznań's Teatr Nowy offers donors standing discounts on theatre tickets. During the COVID-19 pandemic, many US blood centers advertised free COVID-19 antibody testing as an incentive to donate; these antibody tests were also useful to blood centers for flagging donors for convalescent plasma donations. Most allogeneic blood donors donate as an act of charity and do not expect to receive any direct benefit from the donation. The sociologist Richard Titmuss, in his 1970 book The Gift Relationship: From Human Blood to Social Policy, compared the merits of the commercial and non-commercial blood donation systems of the US and the UK, coming down in favor of the latter. The book became a bestseller in the US, resulting in legislation to regulate the private market in blood, and it is still referenced in modern debates about turning blood into a commodity. It was republished in 1997, and the same ideas and principles have been applied to analogous donation programs, such as organ donation and sperm donation. In low-resource countries, directed donation from family members and friends is common because there is no other realistic option. The practice raises ethical challenges, because the donor may feel coerced or may be receiving undisclosed payments. Additionally, relying on a social network means that people with large networks of healthy adults have a better chance of receiving life-saving treatment than people without the same social privileges.
https://en.wikipedia.org/wiki/Wolfdog
Wolfdog
A wolfdog is a canine produced by the mating of a domestic dog (Canis familiaris) with a gray wolf (Canis lupus), eastern wolf (Canis lycaon), red wolf (Canis rufus), or Ethiopian wolf (Canis simensis).

Admixture

A range of experts believe that they can tell the difference between a wolf, a dog, and a wolfdog, but they have been proven incorrect when presenting their evidence before courts of law. Admixture between domestic dogs and other subspecies of gray wolves produces the most common wolfdogs, since dogs and gray wolves are considered the same species, are genetically very close, and have shared vast portions of their ranges for millennia. Such admixture in the wild has been detected in many populations scattered throughout Europe and North America, usually occurring in areas where wolf populations have declined from human impacts and persecution. At the same time, wolfdogs are also often bred in captivity for various purposes. Mixing between dogs and the two other North American wolf species has also occurred historically in the wild, although it is often difficult for biologists to discriminate the dog genes in the eastern timber and red wolves from the gray wolf genes also present in these wolf species, due to their historical overlaps with North American gray wolves as well as with coyotes, both of which have introgressed into the eastern timber and red wolf gene pools. Because many isolated populations of the three wolf species in North America have also mixed with coyotes in the wild, some biologists have speculated that some of the coywolf hybrids in the northeastern third of the continent may have both coydogs and wolfdogs in their gene pool. Hybrids between dogs and Ethiopian wolves discovered in the Ethiopian Highlands likely originated from past interactions between free-roaming feral dogs and Ethiopian wolves living in isolated areas. The wolfdog breeds recognized by the FCI are the Czechoslovakian Wolfdog and the Saarloos Wolfdog.

History

Whole-genome sequencing has been used to study gene flow between wild and domestic species. There is evidence of widespread gene flow from dogs into wolf populations, while deliberate crossings of wolves with dogs, such as the Saarloos Wolfdog, have been very few. However, the global dog population forms a genetic cluster with little evidence of gene flow from wolves into dogs. Ancient DNA shows that dogs from Europe over 5,000 years ago likewise show little evidence of interbreeding with wild canids.

Prehistoric wolfdogs

A 1982 study of canine skulls from Wyoming dating back 10,000 years identified some that match the morphology of wolfdogs. Four years later, this study was rebutted as not providing convincing evidence.

Teotihuacan wolfdogs

In 2010, archeologists found the remains of wolfdogs, dating to about two thousand years ago, in a warrior's burial in Mexico's central valley; as a result, animals once thought to be coyotes depicted in Teotihuacan art are being re-examined.

New World black wolves

Genetic research revealed that wolves with black pelts owe their distinctive coloration to a mutation that entered the wolf population through admixture with the domestic dog. Adolph Murie was among the first wolf biologists to speculate that the wide color variation in wolves was due to interbreeding with dogs. In 2008, it was discovered that a mutation in the gene for the protein beta-defensin 3 is responsible for the black coat color in dogs.
The same mutation was responsible for black wolves in North America and the Italian Apennines. Comparison of large sections of wolf, dog, and coyote genomes indicates that the mutation arose in dogs 13,000 to 120,000 years ago, with a preferred date of 47,000 years ago. Robert K. Wayne, a canine evolutionary biologist, stated that he believed dogs were the first to have the mutation, and that even if it originally arose in Eurasian wolves, it was passed on to dogs who, soon after their arrival, brought it to the New World and passed it on to wolves and coyotes there. Black wolves with recent dog ancestry tend to retain black pigment longer as they age.

North American gray wolf–domestic dog admixture

As of 1999, over 100,000 wolfdogs existed in the United States. In first-generation wolfdogs, gray wolves are most often crossed with wolf-like dogs (such as German Shepherd Dogs, Siberian Huskies, and Alaskan Malamutes) for an appearance most appealing to owners desiring an exotic pet.

Documented breeding

The first record of wolfdog breeding in Great Britain comes from the year 1766, when what is thought to have been a male wolf mated with a dog identified in the language of the day as a "Pomeranian", although it may have differed from the modern Pomeranian breed. The union resulted in a litter of nine pups. Wolfdogs were occasionally purchased by English noblemen, who viewed them as a scientific curiosity, and were popular exhibits in British menageries and zoos. Six dog breeds acknowledge a significant amount of recent wolf-dog admixture in their creation. One is the "wolamute", a.k.a. "malawolf", a cross between an Alaskan Malamute and a timber wolf. Four breeds were the result of intentional crosses with German Shepherd Dogs and have distinguishing characteristics of appearance that may reflect the varying subspecies of wolf that contributed to their foundation stock. Other, more unusual crosses have occurred; recent experiments in Germany crossed wolves with Poodles. The intent behind creating these breeds has ranged widely, from the desire for a recognizable companion high-content wolfdog to professional military working dogs.

The Saarloos Wolfdog

In 1932, Dutch breeder Leendert Saarloos crossed a male German Shepherd Dog with a female European wolf, then bred the female offspring back to the male German Shepherd, creating the Saarloos Wolfdog. The breed was created to be a hardy, self-reliant companion and house dog. The Dutch Kennel Club recognized the breed in 1975 and, to honor its creator, changed the name to "Saarloos Wolfdog". In 1981, the breed was recognized by the Fédération Cynologique Internationale (FCI).

The Czechoslovakian Wolfdog

In the 1950s, the Czechoslovakian Wolfdog was created to work on border patrol in the countries now known as Slovakia and the Czech Republic. It was originally bred from lines of German Shepherds with Carpathian grey wolves. It was officially recognized as a national breed in Czechoslovakia in 1982, and was later recognized by the Fédération Cynologique Internationale, the American Kennel Club's Foundation Stock Service, and the United Kennel Club. Today it is used in agility, obedience, search and rescue, police work, therapy work, and herding in Europe and the United States.

Volkosob

The Volkosob was initially developed in the 1990s, after the fall of the Soviet Union.
Russian border guards wanted a dog that would possess the trainability and pack mentality of the German Shepherd, combined with the strength, superior senses, and cold-resistance of a wild wolf, able to cope with the harsh conditions of the vast Russian borders. In 2000, a Caspian steppe wolf, noted for being unusually friendly and cooperative towards humans, was bred with German Shepherds of an East European Shepherd line until an F3 generation was standardised. Unlike previous hybrids, the Volkosob was the only breed to prove an effective border guardian, being renowned for not being overly shy. Livestock guardian dogs A 2014 study in Georgia found that 20% of wolves and 37% of dogs shared the same mitochondrial haplotypes. More than 13% of the studied wolves had detectable dog ancestry, and more than 10% of the dogs had detectable wolf ancestry. The results of the study suggest that admixture between wolves and dogs is a common event in areas where large livestock guardian dogs are kept in a traditional way, and that gene flow between dogs and gray wolves has been an important force influencing the gene pool of dogs for millennia since early domestication events. Wolfdogs in the wild Hybridization between wolves and dogs typically occurs when the wolf population is under strong hunting pressure, its structure is disrupted, and free-ranging dogs are numerous. Wolves typically display aggressiveness toward dogs, but a wolf can change its behaviour and become playful or submissive when it becomes socially isolated. Admixture in the wild usually occurs near human habitations where wolf density is low and dogs are common. However, several cases of wolfdogs were reported in areas with normal wolf densities in the former Soviet Union. Wild wolfdogs were occasionally hunted by European aristocracy and were termed lycisca to distinguish them from common wolves. Noted historic cases of large wolves that were abnormally aggressive toward humans (such as the Beast of Gévaudan) may be attributable to wolf-dog mating. In Europe, unintentional mating of dogs and wild wolves has been confirmed in some populations through genetic testing. As the survival of some Continental European wolf packs is severely threatened, scientists fear that the creation of wolfdog populations in the wild is a threat to the continued existence of European wolf populations. However, extensive admixture between wolf and dog is not supported by morphological evidence, and analyses of mtDNA sequences have revealed that such mating is rare. In 1997, during the Mexican wolf reintroduction in Arizona, controversy arose when a captive pack at Carlsbad designated for release was found to be largely composed of wolf-dogs by Roy McBride, who had captured many wolves for the recovery programme in the 1970s. Though staff initially argued that the animals' odd appearance was due to captivity and diet, it was later decided to euthanise them. In 2018, a study compared the sequences of 61,000 single-nucleotide polymorphisms (mutations) taken from across the genome of grey wolves. The study indicated that individual wolves of dog-wolf ancestry exist in most wolf populations of Eurasia, though less so in North America. The admixture has occurred across different time scales and was not a recent event. Low-level admixture did not reduce the wolves' distinctiveness.
Between September and November 2018, in the Osogovo mountains along the border between Bulgaria and North Macedonia, a putative grey wolf was recorded by camera living with a pack of 10 feral dogs; from its behaviour and phenotype, it was assumed to be a wolf-dog hybrid. Breed-specific legislation The wolfdog has been the center of controversy for much of its history, and most breed-specific legislation is either the result of the animal's perceived danger or its categorization as protected native wildlife. The Humane Society of the United States, the RSPCA, the Ottawa Humane Society, the Dogs Trust, and the Wolf Specialist Group of the IUCN Species Survival Commission consider wolfdogs to be wild animals and therefore unsuitable as pets, and support an international ban on the private possession, breeding, and sale of wolfdogs. According to the National Wolfdog Alliance, 40 U.S. states effectively forbid the ownership, breeding, and importation of wolfdogs, while others impose some form of regulation on ownership. In Canada, the provinces of Alberta, Manitoba, Newfoundland and Labrador, and Prince Edward Island prohibit wolfdogs as pets. Most European nations have either outlawed the animal entirely or put restrictions on ownership. Wolfdogs were among the breeds banned from the U.S. Marine Corps base at Camp Pendleton and elsewhere after a fatal attack by a pit bull on a child. Description The physical characteristics of an animal created by breeding a wolf to a dog are not predictable, similar to those of mixed-breed dogs. In many cases the resulting adult wolfdog may be larger than either of its parents due to the genetic phenomenon of heterosis (commonly known as hybrid vigor). Breeding experiments in Germany with poodles and wolves, and later with the resulting wolfdogs, showed unrestricted fertility, mating via free choice, and no significant problems of communication (even after a few generations). However, the offspring of poodles with either coyotes or jackals all showed a decrease in fertility, significant communication problems, and an increase in genetic diseases after three generations of interbreeding between the hybrids. The researchers therefore concluded that domestic dogs and wolves are the same species. Wolfdogs display a wide variety of appearances, ranging from a resemblance to dogs without wolf blood to animals that are often mistaken for full-blooded wolves. A lengthy study by DEFRA and the RSPCA found several examples of misrepresentation by breeders and indeterminate levels of actual wolf pedigree in many animals sold as wolfdogs. The report noted that uneducated citizens misidentify dogs with a wolf-like appearance as wolfdogs. Wolfdogs tend to have somewhat smaller heads than pure wolves, with larger, pointier ears that lack the dense fur commonly seen in those of wolves. Fur markings also tend to be very distinctive and not well blended. Black-colored wolfdogs tend to retain black pigment longer as they age than black wolves do. In some cases, the presence of dewclaws on the hind feet is considered a useful, but not absolute, indicator of dog gene contamination in wild wolves. Dewclaws are the vestigial first toes, which are common on the hind legs of domestic dogs but thought absent from pure wolves, which have only four hind toes. Observations of wild wolfdogs in the former Soviet Union indicate that in a wild state they may form larger packs than pure wolves and have greater endurance when chasing prey.
High wolf-content wolfdogs typically have longer canine teeth than dogs of comparable size, with some officers in the South African Defence Force commenting that the animals are capable of biting through the toughest padding "like a knife through butter". Tests undertaken at the Perm Institute of Interior Forces in Russia demonstrated that high wolf-content wolfdogs took 15–20 seconds to track down a target in training sessions, whereas ordinary police dogs took 3–4 minutes. The scientific evidence supporting the claims of wolfdog researchers is minimal, and more research has been called for. Health Wolfdogs are generally said to be naturally healthy animals, affected by fewer inherited diseases than most breeds of dog, and are usually healthier than either parent due to heterosis. There is some controversy over the effectiveness of the standard dog/cat rabies vaccine on wolfdogs. The USDA has not to date approved any rabies vaccine for use in wolfdogs, though it does recommend off-label use of the vaccine. Wolfdog owners and breeders contend that the lack of official approval is a political move to avoid condoning wolfdog ownership. Temperament and behavior Wolfdogs are a mixture of genetic traits, which results in less predictable behavior patterns compared to either the wolf or the dog. The adult behavior of wolfdog pups cannot be predicted with certainty comparable to that of dog pups, even in third-generation pups produced by wolfdogs mating with dogs, nor from the behavior of the parent animals. Thus, though the behavior of a single individual wolfdog may be predictable, the behavior of the type as a whole is not. Due to the variability inherent in their admixture, whether a wolf–dog cross should be considered more dangerous than a dog depends on behavior specific to the individual alone rather than to wolfdogs as a group. The view that aggressive characteristics are inherently a part of wolfdog temperament has been contested in recent years by wolfdog breeders and other advocates of wolfdogs as pets. In popular culture Jed was a Canadian timber wolf–Alaskan Malamute cross and animal actor, known for his roles in such movies as White Fang (1991), White Fang 2: Myth of the White Wolf (1994), The Journey of Natty Gann (1985), and The Thing (1982); he was born in 1977 and died in June 1995, aged 18. Balto, Aleu, and Kodi are fictitious wolfdogs in the animated films Balto (1995), Balto II: Wolf Quest (2002), and Balto III: Wings of Change (2004), respectively. The actual Balto was not a wolfdog but a Siberian Husky. White Fang is the titular character of Jack London's eponymous 1906 novel, first serialized in Outing magazine, which details the wild wolfdog's journey to domestication in the Yukon Territory and the Northwest Territories during the 1890s Klondike Gold Rush. The Wolf Dog (1933) is an American Pre-Code Mascot film serial starring Frankie Darro and Rin Tin Tin Jr. Wolf Dog (1958), also known as A Boy and His Dog, is a Northwestern movie directed by Sam Newfield and produced by Regal Films. Wolfdogs Magazine self-describes as a progressive "community based publication for wolfdog enthusiasts".
Biology and health sciences
Canines
Animals
506239
https://en.wikipedia.org/wiki/Woolly%20rhinoceros
Woolly rhinoceros
The woolly rhinoceros (Coelodonta antiquitatis) is an extinct species of rhinoceros that inhabited northern Eurasia during the Pleistocene epoch. The woolly rhinoceros was a member of the Pleistocene megafauna. It was large, comparable in size to the largest living rhinoceros species, the white rhinoceros (Ceratotherium simum), and covered with long, thick hair that allowed it to survive in the extremely cold, harsh mammoth steppe. It had a massive hump rising from its shoulder and fed mainly on herbaceous plants that grew in the steppe. Mummified carcasses preserved in permafrost and many bone remains of woolly rhinoceroses have been found. Images of woolly rhinoceroses are found among cave paintings in Europe and Asia, and evidence has been found suggesting that the species was hunted by humans. The range of the woolly rhinoceros contracted towards Siberia beginning around 17,000 years ago, with the youngest known records being around 14,000 years old in northeast Siberia, coinciding with the Bølling–Allerød warming, which likely disrupted its habitat; environmental DNA records possibly extend the species' survival to around 9,800 years ago. Its closest living relative is the Sumatran rhinoceros (Dicerorhinus sumatrensis). Taxonomy Woolly rhinoceros remains were known long before the species was described and were the basis for some mythical creatures. Native peoples of Siberia believed their horns were the claws of giant birds. A rhinoceros skull was found in Klagenfurt, Austria, in 1335, and was believed to be that of a dragon. In 1590, it was used as the basis for the head on a statue of a lindworm. Gotthilf Heinrich von Schubert maintained the belief that the horns were the claws of giant birds, and classified the animal under the name Gryphus antiquitatis, meaning "griffin of antiquity". One of the earliest scientific descriptions of an ancient rhinoceros species was made in 1769, when the naturalist Peter Simon Pallas wrote a report on his expeditions to Siberia, where he found a skull and two horns in the permafrost. In 1772, Pallas acquired a head and two legs of a rhinoceros from the locals in Irkutsk, and named the species Rhinoceros lenensis (after the Lena River). In 1799, Johann Friedrich Blumenbach studied rhinoceros bones from the collection of the University of Göttingen, and proposed the scientific name Rhinoceros antiquitatis. The geologist Heinrich Georg Bronn moved the species to Coelodonta in 1831 because of its differences in dental formation from members of the Rhinoceros genus. This name comes from the Greek words κοιλος (koilos, "hollow") and ὀδούς (odoús, "tooth"), referring to the depression in the rhino's molar structure, giving the scientific name Coelodonta antiquitatis, "hollow-tooth of antiquity". Evolution The woolly rhinoceros was the most recent species of the genus Coelodonta. The closest living relative of Coelodonta is the Sumatran rhinoceros, and the genus is also closely related to the extinct genus Stephanorhinus. (Cladograms showing the relationships of C. antiquitatis to other Late Pleistocene-recent rhinoceros species, based on genomic data and on morphology, are omitted here.) The ancestors of Coelodonta are suggested to have diverged from those of the Sumatran rhinoceros around 9.4 million years ago, with Coelodonta diverging from Stephanorhinus around 5.5 million years ago.
The oldest known species of Coelodonta, Coelodonta thibetana, is known from the Pliocene of Tibet, dating to approximately 3.7 million years ago; the genus was present in Siberia, Mongolia, and China during the Early Pleistocene. The woolly rhinoceros first appeared during the early Middle Pleistocene in China, and its oldest remains in Europe (where it is the only species of Coelodonta known to have been present) date to approximately 450,000 years ago. The woolly rhinoceros is divided into two chrono-subspecies: C. a. praecursor from the middle Pleistocene and C. a. antiquitatis from the late Pleistocene. Mitochondrial genomes suggest that the last mitochondrial ancestor of Late Pleistocene woolly rhinoceroses lived around 570,000 years ago. Description Structure and appearance An adult woolly rhinoceros typically measured from head to tail, stood tall at the shoulder, and weighed up to (with some sources placing the body mass of the species as high as ), making it comparable in size to the largest living rhinoceros species, the white rhinoceros (Ceratotherium simum). Both males and females had two horns made of keratin, with one long horn reaching forward and a smaller horn between the eyes. The front horn would have measured long in individuals 25 to 35 years of age, while the second horn would have measured up to long. Unlike in modern rhinos, the large nasal horn was often flattened in cross-section, and abrasion patterns on the horn indicate its possible use in brushing away snow when grazing. Compared to other rhinoceroses, the woolly rhinoceros had a longer head and body and shorter legs. Its shoulder was raised with a powerful hump, used to support the animal's massive front horn. The hump also contained a fat reserve to aid survival through the desolate winters of the mammoth steppe. Frozen specimens indicate that the rhino's long fur coat was reddish-brown, with a thick undercoat beneath a layer of long, coarse guard hair that was thickest on the withers and neck. Shorter hair covered the limbs, keeping snow from attaching. The body ended in a tail with a brush of coarse hair at the tip. Females had two nipples on the udder. The woolly rhinoceros had several features which reduced the body's surface area and minimized heat loss. Its ears were no longer than , while those of rhinos in hot climates are about . Their tails were also relatively shorter. It also had thick skin, ranging from , heaviest on the chest and shoulders. Skull and dentition The skull had a length between . It was longer than those of other rhinoceroses, giving the head a deep, downward-facing slanting position, similar to its fossil relatives Stephanorhinus hemitoechus and Elasmotherium as well as the white rhinoceros. Strong muscles on its long occipital bone formed its neck hump and held the massive skull. Its massive lower jaw measured up to long and high. The teeth of the woolly rhinoceros had thickened enamel and an open internal cavity. Like other rhinos, adults did not have incisors. It had 3 premolars and 3 molars in both jaws. The molars were high-crowned and had a thick coat of cementum. The nasal septum of the woolly rhinoceros was ossified, unlike that of modern rhinos. This was most common in adult males. This adaptation probably evolved in response to the heavy pressure on the horn and face when the rhinoceros grazed beneath thick snow.
Unique to this rhino, the nasal bones were fused to the premaxillae, which is not the case in older Coelodonta species or in today's rhinoceroses. This ossification inspired the specific name of the junior synonym tichorhinus, from Greek τειχος (teikhos) "wall" and ῥις (ῥιν-) (rhis (rhin-)) "nose". Paleobiology and palaeoecology The woolly rhinoceros had a life history similar to that of modern rhinos. Studies on milk teeth show that individuals developed similarly to both the white and black rhinoceros. The two teats of the female suggest that she raised one calf, or more rarely two, every two to three years. With their massive horns and size, adults had few predators, but young individuals could have been attacked by cave hyenas and cave lions. A skull was found with trauma indicating an attack by a feline, but the animal survived to adulthood. Woolly rhinoceros remains bearing gnaw marks are frequently found in cave hyena dens, indicating that they were consumed by hyenas, though to a large degree this likely reflects scavenging of the carcasses of already dead rhinoceroses. Woolly rhinos may have used their horns for combat, probably including intraspecific combat as recorded in cave paintings, as well as for moving snow to uncover vegetation during winter. They may also have been used to attract mates. Bull woolly rhinos were probably territorial like their modern counterparts, defending themselves from competitors, particularly during the rutting season. Fossil skulls indicate damage from the front horns of other rhinos, and lower jaws and back ribs show signs of having been broken and re-formed, which may also have come from fighting. The apparent frequency of intraspecific combat, compared to recent rhinos, was likely a result of rapid climatic change during the last glacial period, when the animal faced increased stress from competition with other large herbivores. Diet Woolly rhinoceroses mostly fed on grasses and sedges that grew in the mammoth steppe. Its long, slanted head, downward-facing posture, and tooth structure all helped it graze on vegetation. It had a wide upper lip like that of the white rhinoceros, which allowed it to easily pluck vegetation directly from the ground. A strain vector biomechanical investigation of the skull, mandible, and teeth of a well-preserved last cold stage individual recovered from Whitemoor Haye, Staffordshire, revealed musculature and dental characteristics that support a grazing feeding preference. In particular, the enlargement of the temporalis and neck muscles is consistent with that required to resist the large tugging forces generated when taking large mouthfuls of fodder from the ground. The presence of a large diastema supports this theory. Comparisons with living perissodactyls confirm that the woolly rhinoceros was a hindgut fermenter with a single stomach, consuming cellulose-rich, protein-poor fodder. It had to consume large amounts of food to compensate for the low nutritive content of its diet. Woolly rhinos living in the Arctic during the Last Glacial Maximum consumed approximately equal volumes of forbs, such as Artemisia, and graminoids. Pollen analysis shows it also ate woody plants (including conifers, willows, and alders), along with flowers, forbs, and mosses. Isotope studies on horns show that the woolly rhinoceros had a seasonal diet; different areas of horn growth suggest that it mainly grazed in summer, while it browsed on shrubs and branches in winter.
Dental mesowear measurements further show that the woolly rhinoceros's diet was composed heavily of abrasive grasses. Growth and pathologies It is estimated that woolly rhinoceroses could reach around 40 years of age, like their modern relatives. In 2014, Shpansky analysed the growth of the woolly rhinoceros from its early life stages based on several lower jaw fragments and limb bones. A one-month-old calf was about in length and tall at the shoulder. The most intensive growth in woolly rhinos occurred during the juvenile stage, around 3 to 4 years of age, with a shoulder height of . At 7 to 10 years of age, woolly rhinos became young adults with a shoulder height of . By more than 14 years of age, woolly rhinos became fully mature, old adults with a shoulder height of . C. antiquitatis individuals of old age display extensive wear and loss of their anterior premolars as a result of tooth abrasion from their intensive grazing lifestyle. Habitat and distribution The woolly rhinoceros lived mainly in lowlands, plateaus, and river valleys with dry to arid climates, and migrated to higher elevations in favourable climate phases. It avoided mountain ranges due to heavy snow and steep terrain that the animal could not easily cross. The rhino's main habitat was the mammoth steppe, a large, open landscape covered with wide ranges of grass and bushes. The woolly rhinoceros lived alongside other large herbivores, such as the woolly mammoth, giant deer, reindeer, saiga antelope, and bison – an assortment of animals known as the Mammuthus-Coelodonta Faunal Complex. With its wide distribution, the woolly rhinoceros lived in some areas alongside the other rhinoceroses Stephanorhinus and Elasmotherium. By the end of the Riss glaciation about 130,000 years ago, the woolly rhinoceros lived throughout northern Eurasia, spanning most of Europe, the Russian Plain, Siberia, and the Mongolian Plateau, ranging to extremes of 72°N to 33°N. Fossils have been found as far north as the New Siberian Islands. Even during the very warm Eemian interglacial, the range of the woolly rhinoceros extended into temperate regions such as Poland. It had the widest range of any rhinoceros species. It seemingly did not cross the Bering land bridge (which connected Asia to North America) during the last ice age, its easternmost occurrence being at the Chukotka Peninsula, probably due to the low grass density and lack of suitable habitat in the Yukon combined with competition from other large herbivores on the frigid land bridge. Relationship with humans Hunting Woolly rhinoceroses shared their habitat with humans, but direct evidence that they interacted is relatively rare. Only 11% of the known sites of prehistoric Siberian tribes have remains or images of the animal. Many rhinoceros remains are found in caves (such as the Kůlna Cave in Central Europe), which were not the natural habitat of either rhinos or humans, and large predators such as hyenas may have carried rhinoceros parts there. Sometimes, only individual teeth or bone fragments are uncovered, which usually came from only one animal. Most rhinoceros remains in Western Europe are found in the same places where human remains or artifacts were found, but this may have occurred naturally. Signs that early humans hunted or scavenged the rhinoceros come from markings on the animal's bones. One specimen had injuries caused by human weaponry, with traces of a wound from a sharp object marking the shoulder and thigh, and a preserved spear was found near the carcass.
A few sites from the early phase of the Last Glacial Period in the late Middle Paleolithic, such as the Gudenus Cave (Austria) and the open-air site of Königsaue (Saxony-Anhalt, Germany), have heavily battered rhinoceros bones bearing slash marks; the battering was done partly to extract the nutritious bone marrow. Both horns and bones of the rhinoceros were used as raw materials for tools and weapons, as were the remains of other animals. In what is now Zwoleń, Poland, a device was made from a battered woolly rhinoceros pelvis. Half-meter spear throwers, made from woolly rhinoceros horn about 27,000 years ago, came from the Yana Rhinoceros Horn Site on the banks of the Yana River. A 13,300-year-old spear found on Bolshoy Lyakhovsky Island has a tip made of rhinoceros horn, the furthest north a human artifact has ever been found. The Pinhole Cave Man is a late Paleolithic figure of a man engraved on a rib bone of a woolly rhinoceros, found at Creswell Crags in England. Ancient art Many cave paintings from the Upper Paleolithic depict woolly rhinoceroses. The animal's defining features are prominently drawn, complete with the raised back and hump contrasting with its low-lying head. Two curved lines represent the ears. The animal's horns are drawn with their long curvature, and in some cases the coat is also indicated. Many paintings show a black band dividing the body. About 20 Paleolithic drawings of woolly rhinos were known before the discovery of the Chauvet Cave in France. These are dated to over 31,000 years old, probably from the Aurignacian, and are engraved on cave walls or drawn in red or black. One scene depicts two rhinos fighting each other with their horns. Other illustrations are found in the Rouffignac and Lascaux caves. One drawing from Font-de-Gaume shows a noticeably higher head posture, and others were drawn in red pigments in the Kapova Cave in the Ural Mountains. Some images show rhinoceroses struck with spears or arrows, signifying human hunting. The site of Dolní Věstonice in Moravia, Czech Republic, has yielded more than seven hundred statuettes of animals, many of woolly rhinoceroses. Extinction Analysis of the nuclear genome suggests that the woolly rhinoceros experienced a population expansion beginning around 30,000 years ago. The end of the last glacial period shows a progressive contraction of the range of the woolly rhinoceros, with the species disappearing from Europe between 17,000 and 15,000 years ago; its youngest confirmed records are from the Urals, dating to 14,200 years ago, and northeast Siberia, dating to around 14,000 years ago. The youngest records of the species coincide with the onset of the Bølling–Allerød warming, which likely resulted in increased precipitation (including snowfall) that transformed the woolly rhinoceros' preferred low-growing grass and herb habitat into one dominated by shrubs and trees. The woolly rhinoceros was likely intolerant of deep snow, through which its short limbs were inefficient at moving. Population fragmentation is likely to have played a role in its extinction. The presence of large numbers of cervical ribs in specimens from the Netherlands may have been due to inbreeding or harsh environmental conditions. A genetic study of woolly rhinoceros remains from northeast Siberia dating to around 18,500 years ago, a few thousand years before the species' extinction, found that the population size was stable and relatively large, despite long-term co-existence with humans in the region.
A Holocene survival of the species has been suggested by the finding of environmental DNA of the woolly rhinoceros in sediments of the Kolyma region of Northeast Siberia dating to 9,800 ± 200 years ago. However, it has been demonstrated that ancient DNA in permafrost can be reworked into sediment layers dating to well after the extinction of the originating species, though other authors have argued that this particular environmental DNA record is unlikely to have been reworked. Low-level human hunting may have played a decisive role in the extinction by reducing the ability of woolly rhinoceros populations to colonise newly suitable habitat, thereby exacerbating the population fragmentation brought on by environmental change. Frozen specimens Many rhinoceros remains have been found preserved in the permafrost region. In 1771, a head, two legs, and hide were found on the Vilyuy River in eastern Siberia and sent to the Kunstkamera in Saint Petersburg. Later, in 1877, a Siberian trader recovered a head and one leg from a tributary of the Yana River. In October 1907, miners in Starunia (in present-day Ukraine) found a mammoth carcass buried in an ozokerite pit. A month later, a rhinoceros was found underneath. Both were sent to the Dzieduszycki Museum, where a detailed description was published in the museum's monograph. Photographs were published in paleontological journals and textbooks, and the first modern paintings of the species were based on the mounted specimen. The rhino is now located in the Lviv National Museum along with the mammoth. Later, in 1929, the Polish Academy of Arts and Sciences sent an expedition to Starunia, finding the mummified remains of three rhinos. One specimen, missing only its horns and fur, was taken to the Aquarium and Natural History Museum in Kraków. A plaster cast was made soon afterwards, which is now held in the Natural History Museum in London. Skull and rib fragments of a rhinoceros were found in 1972 in Churapcha, between the Lena and Amga rivers. A whole skeleton was found soon afterwards, with preserved skin, fur, and stomach contents. In 1976, schoolchildren on a class trip found a 20,000-year-old rhinoceros skeleton on the Aldan River's left bank, uncovering a skull with both horns, a spine, ribs, and limb bones. In 2007, a partial rhinoceros carcass was found in the lower reaches of the Kolyma River. Its upward-facing position indicates that the animal probably fell into mud and sank. The next year, in 2008, a nearly complete skeleton came from the Chukochya River. That same year, locals near the Amga discovered mummified rhinoceros remains, and over the next two years pelvic bones, tail vertebrae, and ribs were excavated, along with forelimbs and hind limbs with toes intact. In September 2014, a mummified young rhinoceros was discovered by two hunters, Alexander "Sasha" Banderov and Simeon Ivanov, at a tributary of the Semyulyakh River in the Abyysky District in Yakutia, Russia. Its head and horns, fur, and soft tissues were recovered. Some parts had thawed and been eaten, since they were not covered by permafrost. The body was handed over to the Yakutia Academy of Sciences, where it was named "Sasha" after one of its discoverers. Dental analysis shows that the calf was about seven months old at the time of its death. Given its excellent preservation, scientists proceeded with DNA analysis. In August 2020, another rhinoceros was found after being revealed by melting permafrost, close to the site of the 2014 discovery.
The rhino was between three and four years old and it is thought that the cause of death was drowning. It is one of the best-preserved animals recovered from the region, having most of its internal organs intact. The discovery was also notable for the preservation of a small nasal horn, a rarity as these normally decompose quickly.
Biology and health sciences
Perissodactyla
Animals
507190
https://en.wikipedia.org/wiki/Steel%20and%20tin%20cans
Steel and tin cans
A steel can, tin can, tin (especially in British English, Australian English, Canadian English, and South African English), or can is a container made of thin metal, for distribution or storage of goods. Some cans are opened by removing the top panel with a can opener or other tool; others have covers removable by hand without a tool. Cans can store a broad variety of contents: food, beverages, oil, chemicals, etc. In a broad sense, any metal container is sometimes called a "tin can", even if it is made, for example, of aluminium. Steel cans were traditionally made of tinplate; the tin coating stopped the contents from rusting the steel. Tinned steel is still used, especially for fruit juices and pale canned fruit. Modern cans are often made from steel lined with transparent films made from assorted plastics, instead of tin. Early cans were often soldered with neurotoxic high-lead solders. High-lead solders were banned in the 1990s in the United States, but smaller amounts of lead were still often present in both the solder used to seal cans and in the mostly-tin linings. Cans are highly recyclable, and around 65% of steel cans are recycled. History The tin canning process was conceived by the Frenchman Philippe de Girard, who had British merchant Peter Durand patent the idea in 1810. The canning concept was based on experimental food preservation work in glass containers the year before by the French inventor Nicholas Appert. Durand did not pursue food canning, but, in 1812, sold his patent to two Englishmen, Bryan Donkin and John Hall, who refined the process and product and set up the world's first commercial canning factory on Southwark Park Road, London. By 1813 they were producing their first tin canned goods for the Royal Navy. By 1820, tin canisters or cans were being used for gunpowder, seeds, and turpentine. Early tin cans were sealed by soldering with a tin–lead alloy, which could lead to lead poisoning. Automated soldering machines started to arrive in the 1870s. Steel began to displace iron as a material for cans at the very end of the 19th century. The locking side seam was invented by Max Ams in 1888, giving rise to the "sanitary can" design, in which the solder was found only on the outside of the can and never touched the food. The modern three-piece design dates back to 1904 (Sanitary Can Company of New York). In 1901, the American Can Company was founded in the United States, at the time producing 90% of the country's tin cans. It bought out the Sanitary Can Company of New York in 1908, and the three-piece design displaced all other cans by the early 1920s. The can has seen very little change since then, although better technology has brought a 20% reduction in the use of steel and a 50% reduction in the use of tin (modern cans are 99.5% steel). Canned food in tin cans was already quite popular in various countries when technological advancements in the 1920s lowered the cost of the cans even further. In 1935, the first beer in metal cans was sold; it was an instant sales success. Production of these three-piece cans by the American Can Company stopped during World War II due to lack of material; after the war, the first two-piece cans with no side seams and a cone top were introduced. The use of aluminum started in 1958 with Primo beer.
About 600 different types of cans were in use in the early 21st century, the most popular being the three-piece design with a side seam and two double-seamed ends, followed by the two-piece construction with sides and bottom drawn as one piece. Description Most cans are right circular cylinders with identical and parallel round tops and bottoms and vertical sides. However, in cans for small volumes or particularly shaped contents, the top and bottom may be rounded-corner rectangles or ovals. Other contents may suit a can that is somewhat conical in shape. Fabrication of most cans results in at least one rim, a narrow ring slightly larger than the outside diameter of the rest of the can. The flat surfaces of rimmed cans are recessed from the edge of any rim (toward the middle of the can) by about the width of the rim; the inside diameter of a rim, adjacent to this recessed surface, is slightly smaller than the inside diameter of the rest of the can. Three-piece can construction results in top and bottom rims. In two-piece construction, one piece is a flat top and the other a deep-drawn cup-shaped piece that combines the (at least roughly) cylindrical wall and the round base. The transition between wall and base is usually gradual. Such cans have a single rim at the top. Some cans have a separate cover that slides onto the top or is hinged. Two-piece steel cans can be made by "drawing" to form the bottom and sides and adding an "end" at the top: these do not have side seams. Cans can be fabricated with separate slip-on or friction-fit covers and with covers attached by hinges. Various easy-opening methods are available. In the mid-20th century, a few milk products were packaged in nearly rimless cans, reflecting a different construction; in this case, one flat surface had a hole (for filling the nearly complete can) that was sealed after filling with a quickly solidifying drop of molten solder. Concern arose that the milk contained unsafe levels of lead leached from this solder plug. Advantages of steel cans A number of factors make steel cans ideal containers for beverages. Steel cans are stronger than cartons or plastic, and less fragile than glass, protecting the product in transit and preventing leakage or spillage, while also reducing the need for secondary packaging. Steel and aluminium packaging offer complete protection against light, water, and air, and metal cans without resealable closures are among the most tamper-evident of all packaging materials. Food and drink packed in steel cans has a vitamin content equivalent to freshly prepared food, without needing preserving agents. Steel cans also extend the product's shelf-life, allowing longer sell-by and use-by dates and reducing waste. As an ambient packaging medium, steel cans do not require cooling in the supply chain, simplifying logistics and storage, and saving energy and cost. At the same time, steel's relatively high thermal conductivity means canned drinks chill much more rapidly and easily than those in glass or plastic bottles. Materials and health issues Tin No cans currently in wide use are composed primarily or wholly of tin. Until the second half of the 20th century, almost all cans were made of tinplate steel. The steel was cheap and structurally strong, but prone to rust; the tin coating prevented wet food from corroding the steel. Corrosion-resistant coatings on almost all steel food cans are now made from plastic, not tin. Some manufacturers use vitreous enamel instead.
Dissolution of tin into the food Tin is corrosion-resistant, but acidic foods like fruits and vegetables can corrode the tin layer. Nausea, vomiting, and diarrhea have been reported after ingesting canned food containing 200 mg/kg of tin. A 2002 study showed that 99.5% of 1,200 tested cans contained tin below the UK regulatory limit of 200 mg/kg, an improvement over most previous studies largely attributed to the increased use of fully lacquered cans for acidic foods, and concluded that the results do not raise any long-term food safety concerns for consumers. The two non-compliant products were voluntarily recalled. Evidence of tin impurities can be indicated by color, as in the case of pears, but lack of color change does not guarantee that a food is not tainted with tin. Lead Lead is harmful to health in any quantity, and harm increases with the amount ingested. Infants and children are more severely affected, as lead harms brain development. In November 1991, US can manufacturers voluntarily eliminated lead seams in food cans. Imported food cans continued to include lead-soldered seams. In 1995, the US FDA issued a rule prohibiting lead-soldered food cans, covering both domestic and imported food cans. However, the FDA did not give a definition of "lead solder" or a quantitative limit on permissible lead levels, and some solders and tin linings used on tin cans still contained significant amounts of lead. In 2017, quantitative limits were set, but they are high enough to permit intentionally adding lead, and FDA measurements showed measurable levels of lead in many US canned foods in the 2010s. Plastic linings Many metal food cans are lined with plastic to prevent the food from corroding the can. These linings can leach contaminants into the canned food. Some of these contaminants are substances with known health harms, though whether they are ingested in canned food at levels sufficient to cause harm is not known. Among other substances, the plastic linings in food cans often contain bisphenol A (BPA). Pregnant women who eat more canned food have higher levels of BPA in their urine. Other constituents in can linings, including newer BPA-free can linings, have also been identified as having known health harms. Bisphenol-A Bisphenol-A (BPA) is a controversial chemical compound present in commercially available tin can plastic linings and transferred to canned food. The inside of the can is coated with an epoxy coating, in an attempt to prevent food or beverage from coming into contact with the metal. The longer food is in a can, and the warmer and more acidic it is, the more BPA leaches into it. In September 2010, Canada became the first country to declare BPA a toxic substance. In the European Union and Canada, BPA use is banned in baby bottles. The FDA does not regulate BPA (see BPA controversy#Public health regulatory history in the United States). Several companies, like Campbell's Soup, announced plans to eliminate BPA from the linings of their cans, but have not said which chemical they plan to replace it with. (See BPA controversy#Chemical manufacturers reactions to bans.) Canada In 2016, BPA was common in food can linings in Canada. Health Canada's Food Directorate has concluded that "the current dietary exposure to BPA through food packaging uses is not expected to pose a health risk to the general population, including newborns and infants".
They also stated that, as some animal studies had shown effects from low levels of BPA, they were seeking to minimize BPA levels in food packaged for infants and newborns (especially formula). They also cited a WHO review. UK In modern times, the majority of food cans in the UK have been lined with a plastic coating containing bisphenol A (BPA). The coating prevents acids and other substances from corroding the tin or aluminium of the can, but leaching of BPA into the can's contents was investigated as a potential health hazard. The UK Food Standards Agency currently considers can-derived BPA levels to be acceptable, but is investigating its safe-level thresholds; it currently applies a temporary threshold for BPA. US A 2016 market survey, using Fourier-transform infrared spectroscopy to identify materials, found that BPA and other substances known to have health harms were common in food can linings in the US. A similar survey done by food can manufacturers in 2020 found BPA only in some imported cans; it did not discuss potential harms from lining substances other than BPA. Labels A can traditionally has a printed label glued to the outside of the curved surface, indicating its contents. Some labels contain additional information, such as recipes, on the reverse side. More recently, labels are sometimes printed directly onto the metal before or after the metal sheet is formed into the individual cans. Traditionally, labels were glued on with casein glue, which dissolved easily in hot water. Some other glues may make the label harder to remove for recycling. Standard sizes Cans come in a variety of shapes and sizes. Walls are often stiffened with rib bulges, especially on larger cans, to help the can resist dents that can cause seams to split. Can sizes in the United States have an assortment of designations and sizes. For example, size 7/8 contains one serving of half a cup with an estimated weight of 4 ounces; size 1 "picnic" has two or three servings totalling one and a quarter cups with an estimated weight of 10 ounces; size 303 has four servings totalling 2 cups weighing 15 ounces; and size 10 cans, most widely used by food services selling to cafeterias and restaurants, have twenty-five servings totalling 13 cups with an estimated weight of 103 ounces (the size of a roughly 3-pound coffee can). These are U.S. customary cups, not British Imperial standard. In the United States, cookbooks sometimes reference cans by size. The Can Manufacturers Institute defines these sizes, expressing them in three-digit numbers measured in whole inches and sixteenths of an inch for the container's nominal outside dimensions: a 307 × 512 can would thus measure 3 7/16 inches in diameter by 5 12/16 inches (5 3/4 inches) in height. Older can numbers are often expressed as single digits, their contents being calculated for room-temperature water as approximately eleven ounces (#1 "picnic" can), twenty ounces (#2), thirty-two ounces (#3), fifty-eight ounces (#5), and one-hundred-ten ounces (#10 "coffee" can).
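To make the three-digit dimension convention concrete, here is a minimal sketch in Python (the function name is ours, for illustration only, not an industry API):

```python
def cmi_size_to_inches(code: str) -> float:
    """Decode a Can Manufacturers Institute three-digit size code:
    the first digit is whole inches, the last two digits are
    additional sixteenths of an inch."""
    return int(code[0]) + int(code[1:]) / 16

# A "307 x 512" can: 3 7/16 in. diameter by 5 12/16 (= 5 3/4) in. height.
print(cmi_size_to_inches("307"))  # 3.4375
print(cmi_size_to_inches("512"))  # 5.75
```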
In parts of the world using the metric system, tins are made in 250 ml, 500 ml, 750 ml, and 1 L sizes (250 ml is approximately 1 cup or 8 ounces). Cans imported from the US often have odd sizes such as 3.8 L (1 US gallon), 1.9 L (1/2 US gallon), and 946 ml (2 US pints / 1 quart). In the UK and Australia, cans are usually measured by net weight. A standard size tin can holds roughly 400 g, though the weight can vary between 385 g and 425 g depending on the density of the contents. The smaller half-sized can holds roughly 200 g, typically varying between 170 g and 225 g. Fabrication of cans Rimmed three-piece can construction involves several stages: forming a tube and welding or soldering the seam of the sides; joining the bottom end to the tube; printing or attaching labels to the can; filling the can with content (sterilization or retorting is required for many food products); and joining the wall and top "end". Double-seam rims are crucial to the joining of the wall to a top or bottom surface. An extremely tight fit between the pieces must be accomplished to prevent leakage; the process of accomplishing this radically deforms the rims of the parts. Part of the tube that forms the wall is bent, almost at its end, turning outward through 90 degrees, and then bent further, toward the middle of the tube, until it is parallel to the rest of the tube, a total bend of 180 degrees. The outer edge of the flat piece is bent against this toward the middle of the tubular wall, until parallel with the wall, turning inward through 90 degrees. The edge of the bent portion is bent further through another 90 degrees, inward now toward the axis of the tube and parallel to the main portion of the flat piece, making a total bend of 180 degrees. It is bent far enough inward that its circular edge is now slightly smaller in diameter than the edge of the tube. Bending it yet further, until it is parallel with the tube's axis, gives it a total bend of 270 degrees. It now envelops the outward rim of the tube. Looking outward from the axis of the tube, the first surface is the unbent portion of the tube. Slightly further out is a narrow portion of the top, including its edge. The outward-bent portion of the tube, including its edge, is still slightly further out. Furthest out is the 90-degree-bent portion of the flat surface. The combined interacting forces, as the portion of the flat surface adjacent to the interior of the tube is indented toward the middle of the tube and then toward the tube's axis, and as the other bent portions of the flat piece and the tube are all forced toward the axis of the tube, drive these five thicknesses of metal against each other from inside and out, forming a "dry" joint so tight that welding or solder is not needed to strengthen or seal it. Illustrations of this process can be found on pages 20–22 of the FAO Fisheries Technical Paper 285 "Manual on fish canning". Design and manufacture Steel for can making The majority of steel used in packaging is tinplate: steel that has been coated with a thin layer of tin, whose functionality is required for the production process. The tin layer is usually applied by electroplating. Two-piece steel can design Most steel beverage cans are two-piece designs, made from 1) a disc re-formed into a cylinder with an integral end, double-seamed after filling, and 2) a loose end to close it. Steel cans are made in many different diameters and volumes, with opening mechanisms that vary from ring pulls and tab openers to wide-open mouths. Drawn-and-ironed (DWI) steel cans The process of re-forming sheet metal without changing its thickness is known as "drawing". Thinning the walls of a two-piece can by passing it through circular dies is called "ironing". Steel beverage cans are therefore generally referred to as drawn-and-ironed, or DWI, cans (sometimes D&I).
The DWI process is used for making cans whose height is greater than their diameter, and is particularly suited to making large volumes of cans of the same basic specification. Steel can walls are now 30% thinner, and cans weigh 40% less, than 30 years ago, reducing the amounts of raw materials and energy required to make them. They are also up to 40% thinner than aluminium. Magnetic properties Steel is a ferrous metal and is therefore magnetic; among beverage packaging this is unique. It allows the use of magnetic conveyor systems to transfer empty cans through the filling and packing processes, increasing accuracy and reducing potential spillage and waste. In recycling facilities, steel cans may be readily separated from other waste using magnetic equipment, including cross-belt separators (also known as overband magnets) and drum magnets. Opening cans The first cans were heavy-weight containers that required ingenuity to open, with instructions directing the use of a hammer and chisel; during the War of 1812, British soldiers resorted to bayonets and knives. After the introduction of much thinner cans in the 1850s, specialized openers became possible, and were introduced in 1855 (Robert Yeates) and 1858 (Ezra J. Warner). The latter unwieldy design saw limited use by soldiers in the American Civil War. A push-lever opener similar to modern ones was introduced in 1860 ("Bull's Head"), and the cutting wheel was invented in 1870 (William W. Lyman). The serrated wheel, common in rotating can openers, was first used by the Star Can Opener Company in 1925. While beverage cans or cans of liquid such as broth can be punctured (as with a church key) to pour out the contents, solid or semisolid contents require removing one end of the can. This can be accomplished with a heavy knife or other sharp tool, but can openers are safer, easier, and more convenient. Some cans, such as those used for sardines, have a specially scored lid so that the user can break out the metal by the leverage of winding it around a slotted twist-key. Until the mid-20th century, some sardine tins had solder-attached lids, and the twist-key worked by forcing the solder joint apart. The advent of pull tabs in beverage cans spread to the canning of various food products, such as pet food or nuts (and non-food products such as motor oil and tennis balls). The ends are known as easy-open lids because they open without any tools or implements. An additional innovation developed specifically for food cans uses a tab that is bent slightly upwards, creating a larger surface area for easier finger access. Cans can be made with easy-open features. Some cans have screw caps for pouring liquids and resealing. Some have hinged covers or slip-on covers for easy access. Paint cans usually have a lid with an interference fit, removable and replaceable any number of times so the paint may be stored between uses. Recycling and re-use Steel from cans and other sources is the most recycled packaging material. Around 65% of steel cans are recycled. In the United States, 63% of steel cans are recycled, compared to 52% of aluminium cans. In Europe, the recycling rate in 2016 was 79.5%. Most can recycling occurs at the smelters, but individual consumers also directly reuse cans in various ways. For instance, some people use two tin cans to form a camp or survival stove to cook small meals.
Sustainability and recycling of steel beverage cans Steel recycling From an ecological perspective, steel may be regarded as a closed-loop material: post-consumer waste can be collected, recycled, and used to make new cans or other products. Each tonne of scrap steel recycled saves 1.5 tonnes of CO2, 1.4 tonnes of iron ore, and 740 kg of coal. Steel is the world's most recycled material, with more than 85% of all the world's steel products being recycled at the end of their life: an estimated 630 million tonnes of steel scrap were recycled in 2017, saving 945 million tonnes of CO2. Steel can recycling A steel can can be recycled again and again without loss of quality; however, for food-grade steel the tin must be removed from the scrap metal, which is done electrochemically: the tin is leached out in a high-pH solution at a low negative voltage. Recycling a single can saves the equivalent of the power for one laundry load, 1 hour of TV, or 24 hours of lighting (10 W LED bulb). Steel beverage cans are recycled by being melted down in an electric arc furnace or basic oxygen furnace. Most steel cans also carry some form of recycling identification, such as the Metal Recycles Forever mark, Recyclable Steel, and the Choose Steel campaign logo. There is also a campaign in Europe called Every Can Counts, encouraging can recycling in the workplace. Smaller carbon footprint All beverage packaging creates CO2 emissions at every stage in the production process, from raw material extraction, processing, and manufacture through to recycling. However, steel cans are an ecological top performer, as they can always be recycled. The steel industry needs the used cans and will use them in the production of new steel products. By recycling the cans and closing the loop, CO2 emissions are dramatically reduced. There is also the potential for higher global steel recycling rates as consumers become more aware of the benefits.
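As a back-of-envelope check that the recycling figures quoted above are mutually consistent (a sketch using only those figures; the variable names are ours):

```python
# Savings per tonne of scrap steel recycled, as quoted above.
CO2_T_PER_T_SCRAP = 1.5       # tonnes of CO2 avoided
IRON_ORE_T_PER_T_SCRAP = 1.4  # tonnes of iron ore saved
COAL_T_PER_T_SCRAP = 0.74     # tonnes (740 kg) of coal saved

scrap_2017 = 630e6  # tonnes of steel scrap recycled in 2017 (estimate)

print(f"CO2 avoided:    {scrap_2017 * CO2_T_PER_T_SCRAP / 1e6:,.0f} Mt")   # 945 Mt, matching the figure above
print(f"Iron ore saved: {scrap_2017 * IRON_ORE_T_PER_T_SCRAP / 1e6:,.0f} Mt")
print(f"Coal saved:     {scrap_2017 * COAL_T_PER_T_SCRAP / 1e6:,.0f} Mt")
```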
Technology
Containers
null
507709
https://en.wikipedia.org/wiki/Azolla
Azolla
Azolla (commonly called mosquito fern, water fern, and fairy moss) is a genus of seven species of aquatic ferns in the family Salviniaceae. They are extremely reduced in form and specialized, looking nothing like other typical ferns and more closely resembling some mosses or even duckweeds. Azolla filiculoides is one of two fern species for which a reference genome has been published. It is believed that this genus grew so prolifically during the Eocene (and thus absorbed such a large amount of carbon) that it triggered a global cooling event that has lasted to the present. Azolla may establish as an invasive plant in areas where it is not native. In such situations it can substantially alter aquatic ecosystems and biodiversity. Phylogeny (A cladogram of the phylogeny of Azolla and a list of the other extant species are omitted here.) At least six extinct species are known from the fossil record:
Azolla intertrappea Sahni & H.S. Rao, 1934 (Eocene, India)
Azolla berryi Brown, 1934 (Eocene, Green River Formation, Wyoming)
Azolla prisca Chandler & Reid, 1926 (Oligocene, London Clay, Isle of Wight)
Azolla tertiaria Berry, 1927 (Pliocene, Esmeralda Formation, Nevada)
Azolla primaeva (Penhallow) Arnold, 1955 (Eocene, Allenby Formation, British Columbia)
Azolla boliviensis Vajda & McLoughlin, 2005 (Maastrichtian – Paleocene, Eslabón Formation and Flora Formation, Bolivia)
Ecology Azolla is a highly productive plant that can double its biomass in 1.9 days, depending on growing conditions. Yields can reach 8–10 tonnes of fresh matter per hectare in Asian paddy fields, and 37.8 tonnes fresh weight/ha (2.78 t/ha dry weight) has been reported for A. pinnata in India.
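To illustrate what a 1.9-day doubling time implies, here is a minimal sketch assuming unconstrained exponential growth (the starting biomass is hypothetical, and real mats saturate once they cover the water surface):

```python
def azolla_biomass(initial_t_per_ha: float, days: float,
                   doubling_time: float = 1.9) -> float:
    """Idealized exponential growth: biomass doubles every `doubling_time` days."""
    return initial_t_per_ha * 2 ** (days / doubling_time)

# Starting from a hypothetical 0.5 t/ha of fresh matter:
print(round(azolla_biomass(0.5, 7), 1))   # ~6.4 t/ha after one week
print(round(azolla_biomass(0.5, 14), 1))  # ~82.6 t/ha after two weeks; unrealistic,
                                          # since growth slows as the mat closes over
```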
Herbivore feeding induces accumulation of deoxyanthocyanins and leads to a reduction in the proportion of polyunsaturated fatty acids in the fronds, thus lowering their palatability and nutritive value. Azolla cannot survive winters with prolonged freezing, so it is often grown as an ornamental plant at high latitudes, where it cannot establish itself firmly enough to become a weed. It is also not tolerant of salinity; normal plants cannot survive at salinities greater than 1–1.6‰, and even conditioned organisms die if grown in water with a salinity above 5.5‰. Azolla filiculoides Azolla filiculoides (red azolla) is the only member of the family Azollaceae found in Tasmania, where it is a common native aquatic plant. It is often found in farm dams and other still waterbodies. The plants are small (usually only a few cm across) and float, but they are fast-growing and can be abundant, forming large mats. The plants are typically red, and have small, water-repellent leaves. Reproduction Azolla reproduces sexually, and asexually by splitting. Like all ferns, sexual reproduction leads to spore formation, but unlike other members of this group, Azolla is heterosporous, producing spores of two kinds. During the summer months, numerous spherical structures called sporocarps form on the undersides of the branches. The male sporocarp is greenish or reddish and looks like the egg mass of an insect or spider. It is two millimeters in diameter, and bears numerous male sporangia. Male spores (microspores) are extremely small and are produced inside each microsporangium. Microspores tend to adhere in clumps called massulae. Female sporocarps are much smaller, containing one sporangium and one functional spore. Since an individual female spore is considerably larger than a male spore, it is termed a megaspore. Azolla has microscopic male and female gametophytes that develop inside the male and female spores. The female gametophyte protrudes from the megaspore and bears a small number of archegonia, each containing a single egg. The microspore forms a male gametophyte with a single antheridium, which produces eight swimming sperm. The barbed glochidia on the male spore clusters cause them to cling to the female megaspores, thus facilitating fertilization. Applications Food and animal feed In addition to its traditional cultivation as a bio-fertilizer for wetland paddies, Azolla is finding increasing use in the sustainable production of livestock feed. Azolla is rich in protein, essential amino acids, vitamins, and minerals. Studies describe feeding Azolla to dairy cattle, pigs, ducks, and chickens, with reported increases in milk production, weight of broiler chickens, and egg production of layers, as compared to conventional feed. One FAO study describes how Azolla integrates into a tropical biomass agricultural system, reducing the need for feed supplements. Concerns related to BMAA Concerns about biomagnification exist because the plant may contain the neurotoxin BMAA, which remains present in the bodies of animals consuming it and has been documented as passing along the food chain. BMAA is a possible cause of neurodegenerative diseases, including ALS, Alzheimer's, and Parkinson's. Azolla has been suggested as a foodstuff for human consumption; however, no long-term studies of the safety of eating Azolla have been conducted in humans. Previous studies attributed neurotoxin production to Anabaena flos-aquae, another nitrogen-fixing cyanobacterium.
Studies published in 2024 have found that “the Azolla–Nostoc azollae superorganism does not contain BMAA or their isomers DAB and AEG and that Azolla and N. azollae do not synthesize other common cyanotoxins.” Further research may be needed to ascertain whether A. azollae is a healthy foodstuff for humans. Companion plant Azolla has been used for at least one thousand years in rice paddies as a companion plant, to fix nitrogen and to block out light to prevent competition from other plants. Rice is planted when tall enough to poke through the Azolla layer. Mats of mature Azolla can also be used as a weed-suppressing mulch. Rice farmers used Azolla as a rice biofertilizer 1,500 years ago. The earliest known written record of this practice is in a book written by Jia Ssu Hsieh (Jia Si Xue) in 540 AD, The Art of Feeding the People (Chih Min Tao Shu). By the end of the Ming dynasty in the early 17th century, Azolla's use as a green compost was documented in local records. Larvicide The belief that no mosquito can penetrate the fern's coating to lay its eggs in the water gives the plant its common name "mosquito fern". Azolla has been used to control mosquito larvae in rice fields: the plant grows in a thick mat on the surface of the water, making it more difficult for the larvae to reach the surface to breathe, effectively choking them. Paleoclimatology and climate change Azolla has been proposed as a carbon sequestration modality. The proposal draws upon the hypothesized Azolla event, which asserts that 55 million years ago Azolla covered the Arctic – at the time a hot, tropical, freshwater environment – and then sank, permanently sequestering teratons of carbon that would otherwise have contributed to the planet's greenhouse effect. This ended a warming event during which temperatures exceeded present-day averages, eventually leading to the formation of ice sheets in Antarctica and the current "icehouse period". Large Azolla blooms could thus contribute significantly to decreasing atmospheric CO2 levels. Invasive species This fern has been introduced to other parts of the world, including the United Kingdom, where it has become a pest in some areas. A nominally tropical plant, it has adapted to the colder climate. It can form thick mats covering 100% of a water surface, preventing local insects and amphibians from reaching the surface. Bioremediation Azolla can remove chromium, nickel, copper, zinc, and lead from effluent. It can also remove lead from solutions containing 1–1000 ppm of lead.
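As an aside, the doubling time quoted in the Ecology section implies simple exponential growth while conditions stay favourable; a minimal Python sketch (assuming unconstrained growth, which real mats do not sustain once the water surface is covered; names are invented for illustration):

```python
# Minimal exponential-growth sketch based on the ~1.9-day doubling time
# quoted above; real growth saturates once the surface is covered and
# nutrients (especially phosphorus) run short.
def azolla_biomass(initial_kg: float, days: float, doubling_days: float = 1.9) -> float:
    """Biomass after `days` of unconstrained doubling-time growth."""
    return initial_kg * 2 ** (days / doubling_days)

# Starting from 1 kg, roughly two weeks of unconstrained growth:
print(round(azolla_biomass(1.0, 14), 1))  # ~165 kg
```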
Biology and health sciences
Ferns
Plants
507854
https://en.wikipedia.org/wiki/Helium%20flash
Helium flash
A helium flash is a very brief thermal runaway nuclear fusion of large quantities of helium into carbon through the triple-alpha process in the core of low-mass stars (between 0.8 and 2.0 solar masses) during their red giant phase. The Sun is predicted to experience a flash 1.2 billion years after it leaves the main sequence. A much rarer runaway helium fusion process can also occur on the surface of accreting white dwarf stars. Low-mass stars do not produce enough gravitational pressure to initiate normal helium fusion. As the hydrogen in the core is exhausted, some of the helium left behind is instead compacted into degenerate matter, supported against gravitational collapse by quantum mechanical pressure rather than thermal pressure. Subsequent hydrogen shell fusion further increases the mass of the core until it reaches a temperature of approximately 100 million kelvin, which is hot enough to initiate helium fusion (or "helium burning") in the core. However, a property of degenerate matter is that increases in temperature do not produce an increase in pressure until the thermal pressure becomes so high that it exceeds the degeneracy pressure. In main sequence stars, thermal expansion regulates the core temperature, but in degenerate cores this does not occur. Helium fusion increases the temperature, which increases the fusion rate, which further increases the temperature, in a runaway reaction which quickly spans the entire core. This produces a flash of very intense helium fusion that lasts only a few minutes, but during that time produces energy at a rate comparable to that of the entire Milky Way galaxy. In the case of normal low-mass stars, the vast energy release causes much of the core to come out of degeneracy, allowing it to thermally expand. This consumes most of the total energy released by the helium flash, and any left-over energy is absorbed into the star's upper layers. Thus the helium flash is mostly undetectable by observation, and is described solely by astrophysical models. After the core's expansion and cooling, the star's surface rapidly cools and contracts, in as little as 10,000 years, until it is at roughly 2% of its former radius and luminosity. It is estimated that the electron-degenerate helium core makes up about 40% of the star's mass and that 6% of the core is converted into carbon. Subflashes Subflashes are pulsational instabilities that occur after the main helium flash. They occur in stars that lack well-defined convective or radiative boundaries. Subflashes can last several hours to days and can recur for many years, with each subsequent flash generally being weaker. Subflashes can be detected by applying Fourier transforms to light curve data. Red giants During the red giant phase of stellar evolution in stars of less than 2.0 solar masses, the nuclear fusion of hydrogen ceases in the core as it is depleted, leaving a helium-rich core. Hydrogen fusion continues in the star's shell, so helium continues to accumulate and the core grows denser, but the temperature is still unable to reach the level required for helium fusion, as happens in more massive stars. Thus the thermal pressure from fusion is no longer sufficient to counter gravitational collapse and create the hydrostatic equilibrium found in most stars. This causes the star to start contracting and increasing in temperature until it eventually becomes compressed enough for the helium core to become degenerate matter.
This degeneracy pressure is finally sufficient to stop further collapse of the most central material, but the rest of the core continues to contract and the temperature continues to rise until it reaches a point at which the helium can ignite and start to fuse. The explosive nature of the helium flash arises from its taking place in degenerate matter. Once the temperature reaches 100 million–200 million kelvin and helium fusion begins via the triple-alpha process, the temperature rapidly increases, further raising the helium fusion rate and, because degenerate matter is a good conductor of heat, widening the reaction region. However, since degeneracy pressure (which is purely a function of density) dominates thermal pressure (proportional to the product of density and temperature), the total pressure is only weakly dependent on temperature. Thus, the dramatic increase in temperature causes only a slight increase in pressure, so there is no stabilizing cooling expansion of the core. This runaway reaction quickly climbs to about 100 billion times the star's normal energy production (for a few seconds) until the temperature increases to the point that thermal pressure again becomes dominant, eliminating the degeneracy. The core can then expand and cool down, and stable burning of helium will continue. A star with mass greater than about 2.25 solar masses starts to burn helium without its core becoming degenerate, and so does not exhibit this type of helium flash. In a very low-mass star (less than about 0.5 solar masses), the core is never hot enough to ignite helium. The degenerate helium core will keep on contracting, and finally becomes a helium white dwarf. The helium flash is not directly observable on the surface by electromagnetic radiation. The flash occurs in the core deep inside the star, and the net effect will be that all released energy is absorbed by the entire core, causing the degenerate state to become nondegenerate. Earlier computations indicated that a nondisruptive mass loss would be possible in some cases, but later stellar modeling taking neutrino energy loss into account indicates no such mass loss. In a one solar mass star, the helium flash is estimated to release an amount of energy equal to about 0.3% of the energy release of a type Ia supernova, which is triggered by an analogous ignition of carbon fusion in a carbon–oxygen white dwarf. Binary white dwarfs When hydrogen gas is accreted onto a white dwarf from a binary companion star, the hydrogen can fuse to form helium for a narrow range of accretion rates, but most systems develop a layer of hydrogen over the degenerate white dwarf interior. This hydrogen can build up to form a shell near the surface of the star. When the mass of hydrogen becomes sufficiently large, runaway fusion causes a nova. In a few binary systems where the hydrogen fuses on the surface, the mass of helium built up can burn in an unstable helium flash. In certain binary systems the companion star may have lost most of its hydrogen and donate helium-rich material to the compact star. Similar flashes occur on neutron stars. Helium shell flash Helium shell flashes are a somewhat analogous but much less violent, nonrunaway helium ignition event, taking place in the absence of degenerate matter. They occur periodically in asymptotic giant branch stars, in a shell outside the core. This is late in the life of a star in its giant phase. The star has burnt most of the helium available in the core, which is now composed of carbon and oxygen.
Helium fusion continues in a thin shell around this core, but then turns off as helium becomes depleted. This allows hydrogen fusion to start in a layer above the helium layer. After enough additional helium accumulates, helium fusion is reignited, leading to a thermal pulse which eventually causes the star to expand and brighten temporarily (the pulse in luminosity is delayed because it takes a number of years for the energy from restarted helium fusion to reach the surface). Such pulses may last a few hundred years, and are thought to occur periodically every 10,000 to 100,000 years. After the flash, helium fusion continues at an exponentially decaying rate for about 40% of the cycle as the helium shell is consumed. Thermal pulses may cause a star to shed circumstellar shells of gas and dust.
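The runaway mechanism described in the sections above can be summarized in two standard relations (a sketch in conventional notation, not taken from the source text; the triple-alpha temperature exponent is approximate):

```latex
% Sketch of the flash condition (conventional notation; exponent approximate).
% Non-relativistic electron degeneracy pressure depends on density alone,
% while the triple-alpha energy generation rate is extremely
% temperature-sensitive near 10^8 K:
\[
  P_{\mathrm{deg}} \simeq K \rho^{5/3}
  \quad\text{(independent of } T\text{)},
  \qquad
  \epsilon_{3\alpha} \propto \rho^{2} T^{\nu}, \quad \nu \approx 40 .
\]
% Since dP/dT is nearly zero in degenerate matter, a rise in T boosts
% the fusion rate without the expansion and cooling that would stabilize
% an ideal-gas core; this is the runaway.
```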
Physical sciences
Stellar astronomy
Astronomy
507960
https://en.wikipedia.org/wiki/Solid%20geometry
Solid geometry
Solid geometry or stereometry is the geometry of three-dimensional Euclidean space (3D space). A solid figure is the region of 3D space bounded by a two-dimensional closed surface; for example, a solid ball consists of a sphere and its interior. Solid geometry deals with the measurement of the volumes of various solids, including pyramids, prisms (and other polyhedrons), cubes, cylinders, and cones (including truncated cones). History The Pythagoreans dealt with the regular solids, but the pyramid, prism, cone and cylinder were not studied until the Platonists. Eudoxus established their measurement, proving the pyramid and cone to have one-third the volume of a prism and cylinder on the same base and of the same height. He was probably also the discoverer of a proof that the volume enclosed by a sphere is proportional to the cube of its radius. Topics Basic topics in solid geometry and stereometry include: incidence of planes and lines dihedral angle and solid angle the cube, cuboid, parallelepiped the tetrahedron and other pyramids prisms octahedron, dodecahedron, icosahedron cones and cylinders the sphere other quadrics: spheroid, ellipsoid, paraboloid and hyperboloids. Advanced topics include: projective geometry of three dimensions (leading to a proof of Desargues' theorem by using an extra dimension) further polyhedra descriptive geometry. List of solid figures Whereas a sphere is the surface of a ball, for other solid figures it is sometimes ambiguous whether the term refers to the surface of the figure or the volume enclosed therein, notably for a cylinder. Techniques Various techniques and tools are used in solid geometry. Among them, analytic geometry and vector techniques have a major impact by allowing the systematic use of linear equations and matrix algebra, which are also important for higher dimensions. Applications A major application of solid geometry and stereometry is in 3D computer graphics.
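For reference, the measurements attributed to Eudoxus above correspond to the familiar volume formulas, stated here in modern notation (standard results, not from the source text):

```latex
% Standard volume formulas (B = base area, h = height, r = radius):
\[
  V_{\text{prism}} = Bh, \qquad
  V_{\text{pyramid}} = \tfrac{1}{3}Bh, \qquad
  V_{\text{cylinder}} = \pi r^{2} h, \qquad
  V_{\text{cone}} = \tfrac{1}{3}\pi r^{2} h, \qquad
  V_{\text{sphere}} = \tfrac{4}{3}\pi r^{3}.
\]
% The pyramid and cone carry the factor 1/3 proved by Eudoxus, and the
% sphere's volume is proportional to the cube of its radius.
```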
Mathematics
Three-dimensional space
null
293758
https://en.wikipedia.org/wiki/Loperamide
Loperamide
Loperamide, sold under the brand name Imodium, among others, is a medication of the opioid receptor agonist class used to decrease the frequency of diarrhea. It is often used for this purpose in irritable bowel syndrome, inflammatory bowel disease, short bowel syndrome, Crohn's disease, and ulcerative colitis. It is not recommended for those with blood in the stool, mucus in the stool, or fevers. The medication is taken by mouth. Common side effects include abdominal pain, constipation, sleepiness, vomiting, and dry mouth. It may increase the risk of toxic megacolon. Loperamide's safety in pregnancy is unclear, but no evidence of harm has been found. It appears to be safe in breastfeeding. It is an opioid with no significant absorption from the gut, and it does not cross the blood–brain barrier when used at normal doses. It works by slowing the contractions of the intestines. Loperamide was first made in 1969 and used medically in 1976. It is on the World Health Organization's List of Essential Medicines. Loperamide is available as a generic medication. In 2022, it was the 297th most commonly prescribed medication in the United States, with more than 400,000 prescriptions. Medical uses Loperamide is effective for the treatment of a number of types of diarrhea. These include acute nonspecific diarrhea, mild traveler's diarrhea, irritable bowel syndrome, chronic diarrhea due to bowel resection, and chronic diarrhea secondary to inflammatory bowel disease. It is also useful for reducing ileostomy output. Off-label uses for loperamide include chemotherapy-induced diarrhea, especially that related to irinotecan use. Loperamide should not be used as the primary treatment in cases of bloody diarrhea, acute exacerbation of ulcerative colitis, or bacterial enterocolitis. Loperamide is often compared to diphenoxylate. Studies suggest that loperamide is more effective and has fewer neurological side effects. Side effects Adverse drug reactions most commonly associated with loperamide are constipation (which occurs in 1.7–5.3% of users), dizziness (up to 1.4%), nausea (0.7–3.2%), and abdominal cramps (0.5–3.0%). Rare, but more serious, side effects include toxic megacolon, paralytic ileus, angioedema, anaphylaxis/allergic reactions, toxic epidermal necrolysis, Stevens–Johnson syndrome, erythema multiforme, urinary retention, and heat stroke. The most frequent symptoms of loperamide overdose are drowsiness, vomiting, and abdominal pain or burning. High doses may result in heart problems such as abnormal heart rhythms. Contraindications Treatment should be avoided in the presence of high fever or if the stool is bloody. Treatment is not recommended for people who could have negative effects from rebound constipation. If suspicion exists of diarrhea associated with organisms that can penetrate the intestinal walls, such as E. coli O157:H7 or Salmonella, loperamide is contraindicated as a primary treatment. Loperamide is not used in symptomatic C. difficile infections, as it increases the risk of toxin retention and precipitation of toxic megacolon. Loperamide should be administered with caution to people with liver failure, due to reduced first-pass metabolism. Additionally, caution should be used when treating people with advanced HIV/AIDS, as cases of both viral and bacterial toxic megacolon have been reported. If abdominal distension is noted, therapy with loperamide should be discontinued. Children The use of loperamide in children under two years is not recommended.
There have been rare reports of fatal paralytic ileus associated with abdominal distension. Most of these reports occurred in the setting of acute dysentery, overdose, or very young children less than two years of age. A review of loperamide in children under 12 years old found that serious adverse events occurred only in children under three years old. The study reported that the use of loperamide should be contraindicated in children who are under 3, systemically ill, malnourished, moderately dehydrated, or have bloody diarrhea. In 1990, all children's formulations of the antidiarrheal loperamide were banned in Pakistan. The National Health Service in the United Kingdom recommends that loperamide should only be given to children under the age of twelve if prescribed by a doctor. Formulations for children are only available on prescription in the UK. Pregnancy and breast feeding Loperamide is not recommended in the United Kingdom for use during pregnancy or by nursing mothers. Studies in rat models have shown no teratogenicity, but sufficient studies in humans have not been conducted. One controlled, prospective study of 89 women exposed to loperamide during their first trimester of pregnancy showed no increased risk of malformations; this, however, was only one study with a small sample size. Loperamide can be present in breast milk and is not recommended for breastfeeding mothers. Drug interactions Loperamide is a substrate of P-glycoprotein; therefore, the concentration of loperamide increases when it is given with a P-glycoprotein inhibitor. Common P-glycoprotein inhibitors include quinidine, ritonavir, and ketoconazole. Loperamide can decrease the absorption of some other drugs. For example, saquinavir concentrations can decrease by half when given with loperamide. Loperamide is an antidiarrheal agent that decreases intestinal movement. As such, when combined with other antimotility drugs, the risk of constipation is increased. These drugs include other opioids, antihistamines, antipsychotics, and anticholinergics. Mechanism of action Loperamide is an opioid-receptor agonist and acts on the μ-opioid receptors in the myenteric plexus of the large intestine. It works like morphine, decreasing the activity of the myenteric plexus, which decreases the tone of the longitudinal and circular smooth muscles of the intestinal wall. This increases the time material stays in the intestine, allowing more water to be absorbed from the fecal matter. It also decreases colonic mass movements and suppresses the gastrocolic reflex. Loperamide's circulation in the bloodstream is limited in two ways: efflux by P-glycoprotein in the intestinal wall reduces the passage of loperamide, and the fraction of drug crossing is then further reduced through first-pass metabolism by the liver. Loperamide is metabolized into an MPTP-like compound, but is unlikely to exert neurotoxicity. Blood–brain barrier Efflux by P-glycoprotein also prevents circulating loperamide from effectively crossing the blood–brain barrier, so it can generally only antagonize muscarinic receptors in the peripheral nervous system, and it currently has a score of one on the anticholinergic cognitive burden scale. Concurrent administration of P-glycoprotein inhibitors such as quinidine potentially allows loperamide to cross the blood–brain barrier and produce central morphine-like effects. At high doses (>70 mg), loperamide can saturate P-glycoprotein (thus overcoming the efflux) and produce euphoric effects.
Loperamide taken with quinidine was found to produce respiratory depression, indicative of central opioid action. High doses of loperamide have been shown to cause mild physical dependence in preclinical studies, specifically in mice, rats, and rhesus monkeys. Symptoms of mild opiate withdrawal were observed following abrupt discontinuation of long-term treatment of animals with loperamide. Chemistry Synthesis On a lab scale, loperamide is synthesized starting from the lactone 3,3-diphenyldihydrofuran-2(3H)-one and ethyl 4-oxopiperidine-1-carboxylate. On a large scale a similar synthesis is followed, except that the lactone and piperidinone are produced from cheaper materials rather than purchased. Physical properties Loperamide is typically manufactured as the hydrochloride salt. Its main polymorph has a melting point of 224 °C, and a second polymorph exists with a melting point of 218 °C. A tetrahydrate form has been identified which melts at 190 °C. History Loperamide hydrochloride was first synthesized in 1969 by Paul Janssen of Janssen Pharmaceuticals in Beerse, Belgium, following his earlier discoveries of diphenoxylate hydrochloride (1956) and fentanyl citrate (1960). The first clinical reports on loperamide were published in 1973 in the Journal of Medicinal Chemistry, with the inventor as one of the authors. The trial name for the compound was "R-18553"; loperamide oxide has a different research code, R-58425. The trial against placebo was conducted from December 1972 to February 1974, with its results published in 1977 in the journal Gut. In 1973, Janssen started to promote loperamide under the brand name Imodium. In December 1976, Imodium received US FDA approval. During the 1980s, Imodium became the best-selling prescription antidiarrheal in the United States. In March 1988, McNeil Pharmaceutical began selling loperamide as an over-the-counter drug under the brand name Imodium A-D. In the 1980s, loperamide also existed in the form of drops (Imodium Drops) and syrup. Initially, these were intended for children's use, but Johnson & Johnson voluntarily withdrew them from the market in 1990 after 18 cases of paralytic ileus (resulting in six deaths) were registered in Pakistan and reported by the World Health Organization (WHO). In the following years (1990–1991), products containing loperamide were restricted for children's use in several countries (with age cutoffs ranging from two to five years). In the late 1980s, before the US patent expired on 30 January 1990, McNeil started to develop Imodium Advanced, containing loperamide and simethicone for treating both diarrhea and gas. In March 1997, the company patented this combination. The drug was approved in June 1997 by the FDA as Imodium Multi-Symptom Relief in the form of a chewable tablet. A caplet formulation was approved in November 2000. In November 1993, loperamide was launched as an orally disintegrating tablet based on Zydis technology. In 2013, loperamide in the form of 2-mg tablets was added to the WHO Model List of Essential Medicines. In 2020, researchers at Goethe University found loperamide to be effective at killing glioblastoma cells. Society and culture Legal status United States Loperamide was formerly a controlled substance in the United States, initially in Schedule II and later lowered to Schedule V. Loperamide was finally removed from control by the Drug Enforcement Administration in 1982, under then-Administrator Francis M. Mullen Jr.
UK Loperamide can be sold freely to the public by chemists (pharmacies) for the treatment of acute diarrhoea, and of diarrhoea associated with medically diagnosed irritable bowel syndrome, in adults over 18 years of age. Economics Loperamide is sold as a generic medication. In 2016, Imodium was one of the biggest-selling branded over-the-counter medications sold in Great Britain, with sales of £32.7 million. Brand names Loperamide was originally marketed as Imodium, and many generic brands are sold. Off-label/unapproved use Loperamide has typically been deemed to have a relatively low risk of misuse. In 2012, no reports of loperamide abuse were made. In 2015, however, case reports of extremely high-dose loperamide use were published. The primary intent of users has been to manage symptoms of opioid withdrawal such as diarrhea, although a small proportion seek psychoactive effects at these higher doses. At these higher doses central nervous system penetration occurs, and long-term use may lead to tolerance, dependence, and withdrawal on abrupt cessation. Dubbing it "the poor man's methadone", clinicians warned that increased restrictions on the availability of prescription opioids, passed in response to the opioid epidemic, were prompting recreational users to turn to loperamide as an over-the-counter treatment for withdrawal symptoms. The FDA responded to these warnings by calling on drug manufacturers to voluntarily limit the package size of loperamide for public-safety reasons. However, there is no quantity restriction on the number of packages that can be purchased, and most pharmacies do not feel capable of restricting its sale, so it is unclear whether this intervention will have any impact without further regulation to place loperamide behind the counter. Since 2015, several reports of sometimes-fatal cardiotoxicity due to high-dose loperamide abuse have been published.
Biology and health sciences
Specific drugs
Health
293802
https://en.wikipedia.org/wiki/Closure%20%28mathematics%29
Closure (mathematics)
In mathematics, a subset of a given set is closed under an operation of the larger set if performing that operation on members of the subset always produces a member of that subset. For example, the natural numbers are closed under addition, but not under subtraction: 1 − 2 is not a natural number, although both 1 and 2 are. Similarly, a subset is said to be closed under a collection of operations if it is closed under each of the operations individually. The closure of a subset is the result of a closure operator applied to the subset. The closure of a subset under some operations is the smallest superset that is closed under these operations. It is often called the span (for example linear span) or the generated set. Definitions Let X be a set equipped with one or several methods for producing elements of X from other elements of X. A subset S of X is said to be closed under these methods if, whenever all input elements are in S, all possible results are also in S. Sometimes, one may also say that S has the closure property. The main property of closed sets, which results immediately from the definition, is that every intersection of closed sets is a closed set. It follows that for every subset S of X, there is a smallest closed subset C of X containing S (it is the intersection of all closed subsets that contain S). Depending on the context, C is called the closure of S or the set generated or spanned by S. The concepts of closed sets and closure are often extended to any property of subsets that is stable under intersection; that is, every intersection of subsets that have the property also has the property. For example, a Zariski-closed set, also known as an algebraic set, is the set of the common zeros of a family of polynomials, and the Zariski closure of a set V of points is the smallest algebraic set that contains V. In algebraic structures An algebraic structure is a set equipped with operations that satisfy some axioms. These axioms may be identities. Some axioms may contain existential quantifiers; in this case it is worth adding auxiliary operations so that all axioms become identities or purely universally quantified formulas. See Algebraic structure for details. A set with a single binary operation that is closed is called a magma. In this context, given an algebraic structure X, a substructure of X is a subset that is closed under all operations of X, including the auxiliary operations that are needed for avoiding existential quantifiers. A substructure is an algebraic structure of the same type as X. It follows that, in a specific example, once closure under the operations is proved, there is no need to check the axioms for proving that a substructure is a structure of the same type. Given a subset S of an algebraic structure X, the closure of S is the smallest substructure of X that is closed under all operations of X. In the context of algebraic structures, this closure is generally called the substructure generated or spanned by S, and one says that S is a generating set of the substructure. For example, a group is a set with an associative operation, often called multiplication, with an identity element, such that every element has an inverse element. Here, the auxiliary operations are the nullary operation that results in the identity element and the unary operation of inversion. A subset of a group that is closed under multiplication and inversion is also closed under the nullary operation (that is, it contains the identity) if and only if it is non-empty.
So, a non-empty subset of a group that is closed under multiplication and inversion is itself a group, called a subgroup. The subgroup generated by a single element, that is, the closure of this element, is called a cyclic group. In linear algebra, the closure of a non-empty subset of a vector space (under the vector-space operations, that is, addition and scalar multiplication) is the linear span of this subset. It is a vector space by the preceding general result, and it can be proved easily that it is the set of linear combinations of elements of the subset. Similar examples can be given for almost every algebraic structure, sometimes with specific terminology. For example, in a commutative ring, the closure of a single element under the ideal operations is called a principal ideal. Binary relations A binary relation R on a set A can be defined as a subset of A × A, the set of ordered pairs of elements of A. The notation x R y is commonly used for (x, y) ∈ R. Many properties or operations on relations can be used to define closures. Some of the most common ones follow: Reflexivity A relation R on the set A is reflexive if x R x holds for every x in A. As every intersection of reflexive relations is reflexive, this defines a closure. The reflexive closure of a relation R is thus R together with all pairs (x, x) for x in A. Symmetry Symmetry is the unary operation on A × A that maps (x, y) to (y, x). A relation is symmetric if it is closed under this operation, and the symmetric closure of a relation R is its closure under this operation. Transitivity Transitivity is defined by the partial binary operation on A × A that maps (x, y) and (y, z) to (x, z). A relation is transitive if it is closed under this operation, and the transitive closure of a relation is its closure under this operation. A preorder is a relation that is reflexive and transitive. It follows that the reflexive transitive closure of a relation is the smallest preorder containing it. Similarly, the reflexive transitive symmetric closure or equivalence closure of a relation is the smallest equivalence relation that contains it. Other examples In matroid theory, the closure of X is the largest superset of X that has the same rank as X. The transitive closure of a set. The algebraic closure of a field. The integral closure of an integral domain in a field that contains it. The radical of an ideal in a commutative ring. In geometry, the convex hull of a set S of points is the smallest convex set of which S is a subset. In formal languages, the Kleene closure of a language can be described as the set of strings that can be made by concatenating zero or more strings from that language. In group theory, the conjugate closure or normal closure of a set of group elements is the smallest normal subgroup containing the set. In mathematical analysis and in probability theory, the closure of a collection of subsets of X under countably many set operations is called the σ-algebra generated by the collection. Closure operator In the preceding sections, closures are considered for subsets of a given set. The subsets of a set form a partially ordered set (poset) for inclusion. Closure operators allow generalizing the concept of closure to any partially ordered set. Given a poset P whose partial order is denoted with ≤, a closure operator on P is a function f from P to P that is increasing (x ≤ f(x) for all x in P), idempotent (f(f(x)) = f(x)), and monotonic (x ≤ y implies f(x) ≤ f(y)). Equivalently, a function f from P to P is a closure operator if, for all x and y in P, x ≤ f(y) if and only if f(x) ≤ f(y). An element x of P is closed if it is its own closure, that is, if f(x) = x. By idempotency, an element is closed if and only if it is the closure of some element of P.
An example is the topological closure operator; in Kuratowski's characterization, axioms K2, K3, K4' correspond to the above defining properties. An example not operating on subsets is the ceiling function, which maps every real number x to the smallest integer that is not smaller than x. Closure operator vs. closed sets A closure on the subsets of a given set may be defined either by a closure operator or by a set of closed sets that is stable under intersection and includes the given set. These two definitions are equivalent. Indeed, the defining properties of a closure operator imply that an intersection of closed sets is closed: if C is an intersection of closed sets Ci, then the closure of C must contain C and be contained in every Ci; this implies that the closure of C equals C, by the definition of the intersection. Conversely, if closed sets are given and every intersection of closed sets is closed, then one can define a closure operator f such that f(X) is the intersection of the closed sets containing X. This equivalence remains true for partially ordered sets with the greatest-lower-bound property, if one replaces "closed sets" by "closed elements" and "intersection" by "greatest lower bound".
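For a finite set, the relation closures defined above can be computed directly from their definitions; the following is an illustrative Python sketch (function names are invented), where each closure is obtained by applying the corresponding operation until nothing new is produced:

```python
# Closures of a binary relation R on a finite set A, following the
# definitions above. Illustrative sketch; names are not from the article.
def reflexive_closure(R, A):
    """R together with all pairs (x, x) for x in A."""
    return set(R) | {(x, x) for x in A}

def symmetric_closure(R):
    """R together with the reversed pair (y, x) for each (x, y) in R."""
    return set(R) | {(y, x) for (x, y) in R}

def transitive_closure(R):
    """Smallest transitive relation containing R: add (x, z) whenever
    (x, y) and (y, z) are present, repeating until nothing changes."""
    closure = set(R)
    while True:
        new_pairs = {(x, z) for (x, y1) in closure
                            for (y2, z) in closure if y1 == y2}
        if new_pairs <= closure:
            return closure
        closure |= new_pairs

R = {(1, 2), (2, 3)}
print(transitive_closure(R))            # {(1, 2), (2, 3), (1, 3)}
print(reflexive_closure(R, {1, 2, 3}))  # adds (1, 1), (2, 2), (3, 3)
```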
Mathematics
Set theory
null
293921
https://en.wikipedia.org/wiki/Seoul%20Metropolitan%20Subway
Seoul Metropolitan Subway
The Seoul Metropolitan Subway is a metropolitan railway system consisting of 23 rapid transit, light metro, commuter rail and people mover lines located in northwest South Korea. The system serves most of the Seoul Metropolitan Area, including the Incheon metropolis and satellite cities in Gyeonggi province. Some regional lines in the network stretch beyond the Seoul Metropolitan Area to rural areas in northern Chungnam province and western Gangwon Province that lie well away from the capital. The network consists of multiple systems that form a larger, coherent whole. These are the Seoul Metro proper, consisting of Seoul Metro lines 1 through 9 and certain light rail lines, which serves Seoul city proper and its surroundings; Korail regional rail lines, which serve the greater metropolitan region and beyond; Incheon Metro lines, operated by Incheon Transit Corporation, which serve Incheon city proper; and miscellaneous light rail lines, such as Gimpo Goldline and Yongin Everline, which connect lower-density areas of their respective cities to the rest of the network. Most of the system is operated by three companies – Seoul Metro, Korail (Korea Railroad Corporation), and Incheon Metro – with the rest operated by an assortment of local municipal corporations and private rail companies. Its first metro line, Line 1, started construction in 1971 and began operations in 1974, with through-operation to Korail's suburban railways. As of 2022, lines 1–9 alone comprise hundreds of kilometres of track. Most of the trains were built by Hyundai Rotem, South Korea's leading train manufacturer. Overview The first line of the Seoul Subway network started construction in 1971. The first section of subway was built using the cheaper cut-and-cover construction method. Initial lines relied heavily on Japanese technology, and subsequent lines (until the early 2000s) procured technological imports from Japan and the United Kingdom (in particular, GEC Traction equipment used on the wide-width Line 2, 3 and 4 rolling stock from the 1980s). For example, Line 1 opened in 1974 with through services joining surrounding Korail suburban railway lines, influenced by the Tokyo subway. Today, many of the Seoul Metropolitan Subway's lines are operated by Korail, South Korea's national rail operator. The subway has free WiFi accessible in all stations and trains. All stations have platform screen doors; these safety doors were completed by 2017, though many stations had metal barriers installed decades beforehand. The world's first virtual mart for smartphone users opened at Seolleung station in 2011. All directional signs in the system are written in Korean using Hangul, as well as in English, with katakana and Chinese characters for Japanese and Mandarin Chinese. However, the maps on the walls are in Korean and English only. In the trains, there are in addition many LCD screens giving service announcements, upcoming stop names, YTN news, stock prices and animated shorts. There are also prerecorded voice announcements that give the upcoming station, any possible line transfer, and the exiting side, in Korean followed by English. At major stations, this is followed by Japanese and then Mandarin Chinese as well. Seoul Subway uses full-color LCD screens at all stations to display real-time subway arrival times, which are also available on apps for smartphones. Most trains have digital TV screens, and all of them have air conditioning and climate-controlled seats that are automatically heated in the winter.
In 2014, it became the world's first metro operator to use transparent displays for ads when it installed 48 transparent displays at major stations of Line 2 in Gangnam District. All lines use the T-money smart payment system, using RFID and NFC technology for automatic payment by T-money smart cards, smartphones, or credit cards, and one can transfer to any other line within the system for free. Trains on numbered lines and light rail lines generally run on the right-hand track, while trains on the named heavy-rail lines (e.g. Shinbundang Line, Suin–Bundang Line, and AREX) run on the left-hand track. The exceptions are the trains on Line 1, as well as those on Line 4 south of Namtaeryeong station. These lines run on the left-hand track because they are government-owned via Korail, or through-run to government-owned lines, and so follow a different standard from the metro. That standard is followed by all national rail lines (with the exception of the otherwise self-contained Ilsan Line) because much of the Korean Peninsula's early rail network was constructed during Japanese rule. History Line 1, from Seongbuk station to Incheon station and Suwon station, opened on 15 August 1974. On 9 December 1978, the Yongsan-Cheongnyangni line via Wangsimni (now part of the Jungang Line) was added to Line 1. Line 2 opened on 10 October 1980. Line 4 opened on 20 April 1985, and Line 3 on 12 July. On 1 April 1994, the Indeogwon-Namtaeryeong extension of Line 4 opened. The Bundang Line, from Suseo station to Ori station, opened on 1 September. On 15 November 1995, Line 5 opened. The Jichuk-Daehwa extension of Line 3 opened on 30 January 1996. On 20 March, the Kkachisan-Sindorim extension of Line 2 opened. Line 7 opened on 11 October, and Line 8 on 23 November. On 6 October 1999, Incheon Subway Line 1 opened. Seoul Subway Line 6 opened on 7 August 2000. In 2004, the fare system reverted to charging by distance, and free bus transfers were introduced. The Byeongjeom-Cheonan extension of Line 1 opened on 20 January 2005. On 16 December, the Jungang Line from Yongsan station to Deokso station opened. The Uijeongbu-Soyosan extension of Line 1 opened, and shuttle service from Yongsan station to Gwangmyeong station began (with the route now shortened from Yeongdeungpo to Gwangmyeong), on 15 December 2006. On 23 March 2007, AREX opened. The Deokso-Paldang extension of the Jungang Line opened on 27 December. On 15 December 2008, the Cheonan-Sinchang extension of Line 1 opened. The magnetic paper ticket was replaced by an RFID-based card on 1 May 2009. On 1 July, the Gyeongui Line from Seoul Station to Munsan station opened, and on 24 July, Line 9 from Gaehwa station to Sinnonhyeon station opened. The Byeongjeom-Seodongtan extension of Line 1 opened on 26 February 2010, and the Gyeongchun Line opened on 21 December. On 28 October 2011, the Shinbundang Line from Gangnam station to Jeongja station opened. The Suin Line, from Oido station to Songdo station, opened on 30 June 2012. The U Line opened on 1 July, the Onsu-Bupyeong-gu Office extension of Line 7 on 27 October, and the Gongdeok-Gajwa extension of the Gyeongui Line on 15 December; on 26 April 2013, EverLine opened. On 27 December 2014, the Gyeongui Line was extended to Yongsan and started through running to the Jungang Line, forming the Gyeongui–Jungang Line. The Sinnonhyeon-Sports Complex extension of Line 9 opened on 28 March 2015.
On 30 January 2016, the Jeongja-Gwanggyo extension of the Shinbundang Line opened, followed by the Songdo-Incheon extension of the Suin Line on 27 February. Incheon Subway Line 2 opened on 30 July, and the Gyeonggang Line on 24 September. The Gyeongui–Jungang Line was extended one station east to Jipyeong station on 21 January 2017, with four daily round trips to Jipyeong station. On 16 June 2018, the Seohae Line opened. Magongnaru station on Line 9 became an interchange station with AREX on 29 September 2018. The Bundang Line was extended northeastward to Cheongnyangni station, allowing for connections to the Gyeongchun Line and regional rail services, on 31 December 2018. On 28 September 2019, the Gimpo Goldline opened. On 12 September 2020, the Suin Line extension between Hanyang Univ. at Ansan and Suwon opened, beginning the interlining with Line 4 between Oido and Hanyang Univ. at Ansan, as well as through-running with the Bundang Line to form the Suin–Bundang Line. On 24 May 2022, the Sillim Line opened, becoming the newest addition to the Seoul Metropolitan Subway. Lines and branches The system is organized such that numbered lines, with some exceptions, are urban rapid transit lines located within the Seoul Metropolitan Area, whereas wide-area commuter lines operated by Korail provide a metro-like commuter rail service that usually extends far beyond the boundaries of the metropolitan area, rather similar to the RER in Paris. The AREX is an airport rail link that connects Incheon International Airport and Gimpo Airport to central Seoul, offering both express service directly to Incheon International Airport and all-stop commuter service for people living along the vicinity of the line. While operating hours may vary depending on the line and station in question, the Seoul Metropolitan Subway generally operates every day from 5:30 a.m. until midnight, with some lines operated by Seoul Metro ending services around 1 a.m. on weekdays. Fares and ticketing The Seoul Metropolitan Subway system operates on a unified transportation fare system, meaning that subways and buses in Seoul, Incheon and Gyeonggi Province are treated as one system when it comes to fares. For example, a subway rider can transfer to any other line for free (with the exception of the Shinbundang Line, EverLine and U Line, the latter two adding a flat charge of 200 and 300 won respectively). One can also transfer to any Seoul, Incheon, Gyeonggi-do, or some South Chungcheong Province city buses for free, and get discounted fares on the more expensive express buses. In the case of the Shinbundang Line, charges vary depending on the section used: the Sinsa - Gangnam section always charges 500 won, while the Gangnam - Jeongja section or the Jeongja - Gwanggyo section charges 1,000 won when used alone, and 1,400 won altogether when used in conjunction with the other. In total, the maximum added fee one can be charged is 1,900 won, which results from using all three sections. From 1974 until 1985, the subway's fare system was distance-based, and Edmondson railway tickets, originally introduced for the Korean railways during Japanese rule, were used for fare validation. In 1985, the fare system changed to a zone-based system and magnetic-stripe paper tickets were introduced to replace the Edmondson system. In 1996, the Seoul Metropolitan Subway became the first subway system in the world to roll out contactless smart cards, called Upass, for fare validation.
These cards were issued until October 2014, when they were discontinued in favour of the newer T-money cards. Currently, the fare system is distance-based, and accepted payment methods are single-use tickets and transportation cards, including T-money and Cash Bee. Transportation cards can also be used on buses, in taxis, at convenience stores and at many other popular retail places. Riders must tap in at the entry gates with a smartphone (KakaoPay and Samsung Pay/Wallet only), a contactless-equipped credit or debit card, or another prepaid metro card. Popular methods of payment are NFC-enabled Android smartphones (topped up or billed to the owner's credit/debit card via the T-money app) and credit or check (debit) cards with built-in RFID technology issued by the bank or card company. The current single-use ticket is a credit card-sized plastic card with RFID technology, which can be obtained from automated machines in every subway station. A 500 won deposit fee is included in the price, and is refunded when the ticket is returned at any station. Multiple-use cards are sold in convenience stores, and the functionality is included in many credit/debit cards. Fares (except for single-use tickets) are currently 1,400 won for a trip up to 10 km (6.2 mi), with 100 won added for each subsequent 5 km (3.1 mi). Once 50 km (31.1 mi) has been passed, 100 won is added every 8 km (5.0 mi). Single-use ticket users must pay the 500 won RFID deposit plus a 100 won surcharge on the fare. Half-priced children's tickets are available. The city government also offers Seoul Citypass as a transportation card. Senior citizens and disabled people qualify for free transit and can get a free ticket with a valid ID card, or enter with a registered transportation card without having the fare deducted. International travelers can also use a Metropolitan Pass (MPASS), which provides up to 20 trips per day during the prepaid duration of 1 to 7 days. Depending on where the card is purchased, the service is limited to the Seoul metropolitan area or Jeju Island, and it does not work in taxis or certain convenience stores. Current construction Opening 2025 The Incheon Subway Line 1 will be extended north in June 2025 by 6.8 km (4.2 mi), from Gyeyang station to Geomdan Lake Park station, with 3 new stations. Geomdan Lake Park station is later expected to become a transfer station with the Gimpo Goldline and Incheon Subway Line 2, for which extensions are in planning. The Wirye Line, a light metro line in southeastern Seoul, will open in September 2025, starting from Macheon station on Line 5, and will have two branches: one heading to Bokjeong station, on Line 8 and the Suin–Bundang Line, and one to Namwirye station, also a station on Line 8, with 12 stations planned in total. While technically part of the subway system, the Wirye Line will actually be a tramway line. Opening 2026 The Seohae Line is set to extend south from Wonsi station to Seohwaseongnamyang station by March 2026. The Dongbuk Line, a light metro line in northeastern Seoul, is scheduled to open in July 2026 with 14 stations between Wangsimni station and Eunhaeng Sageori station. The Sinansan Line will open in December 2026. The line will start at Yeouido station and split into two branches: one to Hanyang University ERICA Campus station, and one to Songsan station on the Seohae Line. The latter branch will partially share tracks with the Seohae Line and the Gyeonggang Line.
Hagik station, between Songdo station and Inha University station on the Suin–Bundang Line, will open as an in-fill station once the redevelopment of the surrounding area is completed. This area will feature cultural, commercial, and medical facilities along with new residential areas. GTX-A will open its central section between Seoul station and Suseo station (15.3 km) by September 2026, completing the entire GTX-A Line. However, Samseong station will still be under construction at the time of the opening, and there will be no intermediate stop between Seoul station and Suseo station. Line 7 will be extended by 2 stations northwards to Goeup station in Yangju by 2026, with a transfer to the U Line at Tapseok station. Gwacheon Information Town station, between Indeogwon station and Government Complex Gwacheon station on Line 4, will open as an in-fill station in December 2026, once the redevelopment of the surrounding area is completed. Opening 2027 Changneung station, between Daegok station and Yeonsinnae station, will open as an in-fill station on the GTX-A Line, to go along with urban development in the area. Hoecheonjungang station, between Deokgye station and Deokjeong station, will open as an in-fill station on Line 1, to go along with urban development in the area. Buseong station, between Jiksan station and Dujeong station, will open as an in-fill station on Line 1, to go along with urban development in the area. Line 7 will be extended from Seongnam station to Cheongna Int'l Business Complex station in 2027. The new extension will have 6 stations and a total length of 6.8 km. Opening 2028 Line 9 will be extended 4 stations eastwards from VHS Medical Center station to Saemteo Park station, with a transfer to Godeok station on Line 5, by 2028. Samseong station will open as an in-fill station on GTX-A, between Seoul station and Suseo station, in April 2028. The opening of the GTX-A part of the station was delayed due to the construction delay of the Yeongdongdaero Transfer Complex, a complex that will connect Samseong station on Line 2, Samseong station on GTX-A, GTX-C, and the Wirye-Sinsa Line, and Bongeunsa station on Line 9. GTX-C will open between Deokjeong station in the north and Suwon station and Sangnoksu station in the south, splitting into 2 branches. The line will feature new dedicated tracks on its central section and share tracks with Line 1 at its ends. In total, the length of the line will be 85.9 km, with 14 stations. Opening 2029 or later Line 7 will be extended from Cheongna Int'l Business Complex station to Cheongna International City station in 2029, connecting with the AREX Line. The new extension will have 2 stations and a total length of 3.1 km. Line 7 will be extended from Goeup station to Pocheon station in 2030. The new extension will have 4 stations and a total length of 19.3 km. The Dongtan-Indeogwon Line will open between Dongtan station and Indeogwon station by 2029. The line will have 17 stops and a length of 38.1 km. The Gyeonggang Line will be extended to the west, from Pangyo station to Wolgot station, by December 2029. The extension will be 49.6 km long, and will partly share tracks with the Sinansan Line. There will be 11 additional stations on the line, with transfers available at Wolgot station (Suin-Bundang Line), Siheung City Hall station (Seohae Line, Sinansan Line), Gwangmyeong station (Line 1, Sinansan Line), Anyang station (Line 1), and Indeogwon station (Line 4, Dongtan-Indeogwon Line).
Service may then be extended further west towards downtown Incheon using the tracks of the Suin-Bundang Line. The Shinbundang Line will be extended south from Gwanggyo Jungang station to Homaesil station in 2029, with 5 new stations and 11 km of tracks. The Line 1 branch to Seodongtan station will be extended to Dongtan station by 2029, as part of the construction of the Dongtan-Indeogwon Line. The extension will have 2 stops and a length of 4.6 km. GTX-B will open in 2030 between Incheon National University station in the west and Maseok station in the east. The line will feature new dedicated tracks, except east of Mangu station, where it will share tracks with the Gyeongchun Line. In total, the length of the line will be 80.3 km, with 13 stations. Approved for construction The following lines have not started construction, but are considered approved, as their plans and financing have been finalized. Most of these lines are scheduled to start construction in the next couple of years. The Daejang-Hongdae Line will be a medium-capacity line between Hongik University station and Daejang station in the city of Bucheon, scheduled to begin construction in 2024. The line will have a length of 20.1 km and 12 stations. The Wirye–Sinsa Line, a light metro line in southeastern Seoul, will open between Sinsa station and Wirye, with 11 stations planned. Construction has been delayed due to issues with the contractors. Line 9 will also be further extended to the east, with 6 new stations, from Ogeum station to Hanam City Hall station, for a length of 11.7 km. Completion is planned for 2032. The Shinbundang Line will be extended north from Sinsa station to Yongsan station, with 3 new stations over 5.3 km. Construction will begin in 2026 for completion in 2032, upon the transfer of ownership of the Yongsan Garrison to the Korean government. The Seobu Line is a new light metro line, which will have a length of 18 km and serve 16 stations, starting at Gwanaksan station, the terminus of the Sillim Line, and then running north-west across the Han River up to Saejeol station on Line 6. Construction will begin in 2025. The Myeonmok Line is a light metro line in the northeastern area of Seoul running between Cheongnyangni station and Sinnae station, with 12 stations and connections to the Gyeongchun Line and Line 6. The line was approved in June 2024. Dongtan Metro is a set of 2 tramway lines which will be part of the Seoul Metropolitan Subway, under the names Dongtan Line 1 and Dongtan Line 2, with both lines connecting at Dongtan station. Dongtan Line 1 will have 17 stations over 16.9 km, while Dongtan Line 2 will have 19 stations over 15.5 km. Construction will begin in early 2025 for an opening by December 2027. The Ui LRT will have a new northern branch, starting from Solbat Park station and reaching Banghak station on Line 1, for a length of 3.5 km and 3 new stations. Construction will begin in 2025 for completion in 2031. Line 3 will be extended to the east, with 8 new stations across the Han River and northwards from Saemteo Park station to Pungyang station, for a length of 17.4 km. Construction should begin in 2025 for completion in 2031. Planned Seoul City The Seoul Metropolitan government published a ten-year plan for expansion of the subway with the following projects under consideration. Gangbukhoengdan Line, a new line running in an arc north of Seoul between Cheongnyangni station and Mok-dong station, with 19 stations planned.
The line will provide transfers to Lines 1, 3, 4, 5, 6, and 9, as well as the AREX, Gyeongui–Jungang, Gyeongchun, Bundang, and Ui lines. The Nangok Line is a branch of the light metro Sillim Line in southwestern Seoul, running between Nangok-dong and Boramae Park, with 5 stations planned. The Mok-dong Line is a light metro line in southwestern Seoul running between Sinwol-dong and Dangsan station on Line 2, with 12 stations planned. Line 4 will start running express services between Danggogae station and Namtaeryeong station. Line 5 will start running shuttle services connecting Gubeundari station on the main line and Dunchon-dong station on the Macheon branch. The Sillim Line will be connected to the Seobu Line with a track between Seoul National University station (Line 2) and Gwanaksan (Seoul National Univ.) station.

Incheon City
The Incheon Metropolitan Government is working on the Second Incheon Metro Network Construction Plan, which succeeds the Incheon Metro Network Construction Plan published in 2016. It includes the construction of five new tram lines. The draft was expected to be released in October 2020. Incheon Subway Line 3 is planned to be a semi-circular subway line in Incheon. It will intersect Seoul Line 1 at Dowon station and Incheon Line 1 at Dongmak station.
Aleph number
In mathematics, particularly in set theory, the aleph numbers are a sequence of numbers used to represent the cardinality (or size) of infinite sets that can be well-ordered. They were introduced by the mathematician Georg Cantor and are named after the symbol he used to denote them, the Hebrew letter aleph (ℵ). The cardinality of the natural numbers is ℵ0 (read aleph-nought, aleph-zero, or aleph-null), the next larger cardinality of a well-ordered set is aleph-one ℵ1, then ℵ2, and so on. Continuing in this manner, it is possible to define a cardinal number ℵα for every ordinal number α, as described below. The concept and notation are due to Georg Cantor, who defined the notion of cardinality and realized that infinite sets can have different cardinalities. The aleph numbers differ from the infinity (∞) commonly found in algebra and calculus, in that the alephs measure the sizes of sets, while infinity is commonly defined either as an extreme limit of the real number line (applied to a function or sequence that "diverges to infinity" or "increases without bound"), or as an extreme point of the extended real number line.

Aleph-zero
ℵ0 (aleph-nought, aleph-zero, or aleph-null) is the cardinality of the set of all natural numbers, and is an infinite cardinal. The set of all finite ordinals, called ω or ω0 (where ω is the lowercase Greek letter omega), has cardinality ℵ0. A set has cardinality ℵ0 if and only if it is countably infinite, that is, there is a bijection (one-to-one correspondence) between it and the natural numbers. Examples of such sets are the set of natural numbers (irrespective of including or excluding zero), the set of all integers, any infinite subset of the integers such as the set of all square numbers or the set of all prime numbers, the set of all rational numbers, the set of all constructible numbers (in the geometric sense), the set of all algebraic numbers, the set of all computable numbers, the set of all computable functions, the set of all binary strings of finite length, and the set of all finite subsets of any given countably infinite set. The infinite ordinals ω, ω + 1, ω⋅2, and ω² are among the countably infinite sets. For example, the sequence (with ordinality ω⋅2) of all positive odd integers followed by all positive even integers, {1, 3, 5, 7, 9, ...; 2, 4, 6, 8, 10, ...}, is an ordering of the set (with cardinality ℵ0) of positive integers. If the axiom of countable choice (a weaker version of the axiom of choice) holds, then ℵ0 is smaller than any other infinite cardinal, and is therefore the (unique) least infinite cardinal.

Aleph-one
ℵ1 is, by definition, the cardinality of the set of all countable ordinal numbers. This set is denoted by ω1 (or sometimes Ω). The set ω1 is itself an ordinal number larger than all countable ones, so it is an uncountable set. Therefore, ℵ1 is distinct from ℵ0. The definition of ℵ1 implies (in ZF, Zermelo–Fraenkel set theory without the axiom of choice) that no cardinal number is between ℵ0 and ℵ1. If the axiom of choice is used, it can be further proved that the class of cardinal numbers is totally ordered, and thus ℵ1 is the second-smallest infinite cardinal number. One can show one of the most useful properties of the set ω1: any countable subset of ω1 has an upper bound in ω1 (this follows from the fact that the union of a countable number of countable sets is itself countable).
This fact is analogous to the situation in ℵ0: every finite set of natural numbers has a maximum which is also a natural number, and finite unions of finite sets are finite. The ordinal ω1 is actually a useful concept, if somewhat exotic-sounding. An example application is "closing" with respect to countable operations; e.g., trying to explicitly describe the σ-algebra generated by an arbitrary collection of subsets (see e.g. Borel hierarchy). This is harder than most explicit descriptions of "generation" in algebra (vector spaces, groups, etc.) because in those cases we only have to close with respect to finite operations – sums, products, etc. The process involves defining, for each countable ordinal, via transfinite induction, a set by "throwing in" all possible countable unions and complements, and taking the union of all that over all of ω1.

Continuum hypothesis
The cardinality of the set of real numbers (cardinality of the continuum) is 2^ℵ0. It cannot be determined from ZFC (Zermelo–Fraenkel set theory augmented with the axiom of choice) where this number fits exactly in the aleph number hierarchy, but it follows from ZFC that the continuum hypothesis (CH) is equivalent to the identity 2^ℵ0 = ℵ1. The CH states that there is no set whose cardinality is strictly between that of the integers and the real numbers. CH is independent of ZFC: it can be neither proven nor disproven within that axiom system (provided that ZFC is consistent). That CH is consistent with ZFC was demonstrated by Kurt Gödel in 1940, when he showed that its negation is not a theorem of ZFC. That it is independent of ZFC was demonstrated by Paul Cohen in 1963, when he showed conversely that the CH itself is not a theorem of ZFC – by the (then-novel) method of forcing.

Aleph-omega
Aleph-omega is ℵω = sup{ ℵn | n ∈ ω } = sup{ ℵn | n ∈ {0, 1, 2, ...} }, where the smallest infinite ordinal is denoted ω. That is, the cardinal number ℵω is the least upper bound of { ℵn | n ∈ {0, 1, 2, ...} }. Notably, ℵω is the first uncountable cardinal number that can be demonstrated within Zermelo–Fraenkel set theory not to be equal to the cardinality of the set of all real numbers, 2^ℵ0: for any natural number n ≥ 1, we can consistently assume that 2^ℵ0 = ℵn, and moreover it is possible to assume that 2^ℵ0 is at least as large as any cardinal number we like. The main restriction ZFC puts on the value of 2^ℵ0 is that it cannot equal certain special cardinals with cofinality ℵ0. An uncountably infinite cardinal κ having cofinality ℵ0 means that there is a (countable-length) sequence κ0 ≤ κ1 ≤ κ2 ≤ ... of cardinals κi < κ whose limit (i.e. its least upper bound) is κ (see Easton's theorem). As per the definition above, ℵω is the limit of a countable-length sequence of smaller cardinals.

Aleph-α for general α
To define ℵα for an arbitrary ordinal number α, we must define the successor cardinal operation, which assigns to any cardinal number ρ the next larger well-ordered cardinal ρ+ (if the axiom of choice holds, this is the unique next larger cardinal). We can then define the aleph numbers as follows: ℵ0 = ω, ℵα+1 = (ℵα)+, and ℵλ = ⋃{ ℵα | α < λ } for λ an infinite limit ordinal. The α-th infinite initial ordinal is written ωα; its cardinality is written ℵα. Informally, the aleph function ℵ: On → Cd is a bijection from the ordinals to the infinite cardinals. Formally, in ZFC, ℵ is not a function, but a function-like class, as it is not a set (due to the Burali-Forti paradox).

Fixed points of omega
For any ordinal α we have α ≤ ωα.
In many cases ωα is strictly greater than α. For example, this is true for any successor ordinal: α + 1 < ωα+1 holds. There are, however, some limit ordinals which are fixed points of the omega function, because of the fixed-point lemma for normal functions. The first such is the limit of the sequence ω, ω_ω, ω_(ω_ω), ..., sometimes denoted by an infinitely descending tower of subscripts. Any weakly inaccessible cardinal is also a fixed point of the aleph function. This can be shown in ZFC as follows. Suppose κ = ℵλ is a weakly inaccessible cardinal. If λ were a successor ordinal, then ℵλ would be a successor cardinal and hence not weakly inaccessible. If λ were a limit ordinal less than κ, then its cofinality (and thus the cofinality of ℵλ) would be less than κ, so κ would not be regular and thus not weakly inaccessible. Thus λ ≥ κ, and consequently λ = κ, which makes it a fixed point.

Role of axiom of choice
The cardinality of any infinite ordinal number is an aleph number. Every aleph is the cardinality of some ordinal. The least of these is its initial ordinal. Any set whose cardinality is an aleph is equinumerous with an ordinal and is thus well-orderable. Each finite set is well-orderable, but does not have an aleph as its cardinality. Over ZF, the assumption that the cardinality of each infinite set is an aleph number is equivalent to the existence of a well-ordering of every set, which in turn is equivalent to the axiom of choice. ZFC set theory, which includes the axiom of choice, implies that every infinite set has an aleph number as its cardinality (i.e. is equinumerous with its initial ordinal), and thus the initial ordinals of the aleph numbers serve as a class of representatives for all possible infinite cardinal numbers. When cardinality is studied in ZF without the axiom of choice, it is no longer possible to prove that each infinite set has some aleph number as its cardinality; the sets whose cardinality is an aleph number are exactly the infinite sets that can be well-ordered. The method of Scott's trick is sometimes used as an alternative way to construct representatives for cardinal numbers in the setting of ZF. For example, one can define card(S) to be the set of sets with the same cardinality as S of minimum possible rank. This has the property that card(S) = card(T) if and only if S and T have the same cardinality. (The set card(S) does not have the same cardinality as S in general, but all its elements do.)
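In LaTeX notation, the transfinite recursion defining the aleph hierarchy (restating the definition from the section "Aleph-α for general α" above; only the notation is new here) can be written compactly as

\begin{align*}
\aleph_0 &= \omega, \\
\aleph_{\alpha+1} &= \aleph_\alpha^{+}, \\
\aleph_\lambda &= \bigcup_{\alpha<\lambda} \aleph_\alpha \quad \text{for } \lambda \text{ an infinite limit ordinal,}
\end{align*}

and in this notation the continuum hypothesis is the statement $2^{\aleph_0} = \aleph_1$.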
Grains of paradise
Grains of paradise (Aframomum melegueta) is a species in the ginger family, Zingiberaceae, closely related to cardamom. Its seeds are used as a spice (ground or whole); it imparts a pungent, black-pepper-like flavor with hints of citrus. It is also known as melegueta pepper, Guinea grains, ossame, or fom wisa, and is sometimes confused with alligator pepper. The terms African pepper and Guinea pepper have also been used, but are ambiguous as they can apply to other spices such as grains of Selim (Xylopia aethiopica). It is native to West Africa, which is sometimes named the Pepper Coast (or Grain Coast) because of this commodity. It is also an important cash crop in the Basketo district of southern Ethiopia.

Characteristics
Aframomum melegueta is an herbaceous perennial plant native to swampy habitats along the West African coast. Its trumpet-shaped, purple flowers develop into pods containing numerous small, reddish-brown seeds. The pungent, peppery taste of the seeds is caused by aromatic ketones such as (6)-paradol (systematic name: 1-(4-hydroxy-3-methoxyphenyl)-decan-3-one). Essential oils, which are the dominant flavor components in the closely related cardamom, occur only in traces. The stem can at times be short, and usually shows signs of scarring and dropped leaves. The leaves are narrow and similar to those of bamboo, with a well-structured vascular system. The flowers are aromatic, with an orange-colored lip and a rich pinkish-orange upper part. The fruits contain numerous small, golden red-brown seeds.

Ecology
Melegueta is a major component of the diet of the western lowland gorilla, making up around 80 to 90 percent of it. The gorillas eat the entire fruit and act as a means of seed dispersal for melegueta. In addition to food, the plant is also the material the gorillas most commonly use to make nests and beds.

Uses
Melegueta pepper is commonly used in the cuisines of West and North Africa, from where it was traditionally transported by camel caravan routes through the Sahara desert and distributed to Sicily and the rest of Italy. Mentioned by Pliny as "African pepper" but subsequently forgotten in Europe, the grains were renamed "grains of paradise" and became a popular substitute for black pepper in Europe in the 14th and 15th centuries. The Ménagier de Paris recommends them for improving wine that "smells stale". Through the Middle Ages and into the early modern period, the theory of the four humors governed theories about nourishment on the part of doctors, herbalists, and druggists. In this context, John Russell characterized grains of paradise in The Boke of Nurture as "hot and moist". In 1469, King Afonso V of Portugal granted the monopoly of trade in the Gulf of Guinea to the Lisbon merchant Fernão Gomes. This included exclusivity in the trade of Aframomum melegueta, then called malagueta pepper. The grant came at the cost of a payment of 100,000 annually and an agreement to explore the coast of Africa for five years; this gives some indication of the European value of the spice. After Christopher Columbus reached the New World in 1492 and brought the first samples of the chili pepper (Capsicum frutescens) back with him to Europe, the name malagueta, in its Spanish and Portuguese spelling, was applied to the new chili "pepper" because its piquancy was reminiscent of grains of paradise. Malagueta, thanks to its low price, remained popular in Europe even after the Portuguese opened the direct maritime route to the Spice Islands around 1500.
This namesake, the malagueta chili, remains popular in Brazil, the Caribbean, Portugal, and Mozambique. The importance of the A. melegueta spice is shown by the designation of the area from the St. John River (near present-day Buchanan) to Harper in Liberia as the Grain Coast or Pepper Coast, in honor of the availability of grains of paradise. Later, the craze for the spice waned, and its uses were reduced to a flavoring for sausages and beer. In the 18th century, its importation to Great Britain collapsed after an act of Parliament during the reign of George III forbade its use in alcoholic beverages. In 1855, England was still importing a quantity of the spice legally (duty paid) each year. By 1880, the 9th edition of the Encyclopædia Britannica reported: "Grains of paradise are to some extent used in veterinary practice, but for the most part illegally to give a fictitious strength to malt liquors, gin, and cordials". The presence of the seeds in the diets of lowland gorillas in the wild seems to have some sort of beneficial effect on their cardiovascular health. They also eat the leaves, and use them for bedding material. The absence of the seeds in the diets of captive lowland gorillas may contribute to their occasionally poor cardiovascular health in zoos. Today the condiment is sometimes used in gourmet cuisine as a replacement for pepper, and to give a unique flavor to some craft beers, gins, and Norwegian akvavit. Grains of paradise are starting to enjoy a slight resurgence in popularity in North America due to their use by some well-known chefs. Alton Brown is a fan of the condiment, using it in okra stew and in his apple-pie recipe on an episode of the TV cooking show Good Eats. Grains of paradise are also used by people on certain diets, such as a raw food diet, because they are considered less irritating to digestion than black pepper.

Folk medicine and ritual uses
In West African folk medicine, grains of paradise are valued for their warming and digestive properties, and among the Efik people in Nigeria they have been used for divination and in ordeals to determine guilt. A. melegueta has been introduced to the Caribbean and Latin America, where it is used in Voodoo religious rites. It is also found widely among Protestant Christian practitioners of African-American hoodoo and rootwork, where the seeds are employed in luck-bringing and may be held in the mouth or chewed to prove sincerity.
Observable
In physics, an observable is a physical property or physical quantity that can be measured. In classical mechanics, an observable is a real-valued "function" on the set of all possible system states, e.g., position and momentum. In quantum mechanics, an observable is an operator, or gauge, where the property of the quantum state can be determined by some sequence of operations. For example, these operations might involve submitting the system to various electromagnetic fields and eventually reading a value. Physically meaningful observables must also satisfy transformation laws that relate observations performed by different observers in different frames of reference. These transformation laws are automorphisms of the state space, that is, bijective transformations that preserve certain mathematical properties of the space in question.

Quantum mechanics
In quantum mechanics, observables manifest as self-adjoint operators on a separable complex Hilbert space representing the quantum state space. Observables assign values to outcomes of particular measurements, corresponding to the eigenvalues of the operator. If these outcomes represent physically allowable states (i.e. those that belong to the Hilbert space), the eigenvalues are real; however, the converse is not necessarily true. As a consequence, only certain measurements can determine the value of an observable for some state of a quantum system. In classical mechanics, any measurement can be made to determine the value of an observable. The relation between the state of a quantum system and the value of an observable requires some linear algebra for its description. In the mathematical formulation of quantum mechanics, up to a phase constant, pure states are given by non-zero vectors in a Hilbert space V. Two vectors v and w are considered to specify the same state if and only if w = cv for some non-zero complex number c. Observables are given by self-adjoint operators on V. Not every self-adjoint operator corresponds to a physically meaningful observable. Also, not all physical observables are associated with non-trivial self-adjoint operators. For example, in quantum theory, mass appears as a parameter in the Hamiltonian, not as a non-trivial operator. In the case of transformation laws in quantum mechanics, the requisite automorphisms are unitary (or antiunitary) linear transformations of the Hilbert space V. Under Galilean relativity or special relativity, the mathematics of frames of reference is particularly simple, considerably restricting the set of physically meaningful observables. In quantum mechanics, measurement of observables exhibits some seemingly unintuitive properties. Specifically, if a system is in a state described by a vector in a Hilbert space, the measurement process affects the state in a non-deterministic but statistically predictable way. In particular, after a measurement is applied, the state description by a single vector may be destroyed, being replaced by a statistical ensemble. The irreversible nature of measurement operations in quantum physics is sometimes referred to as the measurement problem and is described mathematically by quantum operations. By the structure of quantum operations, this description is mathematically equivalent to that offered by the relative-state interpretation, where the original system is regarded as a subsystem of a larger system and the state of the original system is given by the partial trace of the state of the larger system.
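These two facts, that self-adjoint operators have real eigenvalues and that a vector and any non-zero scalar multiple of it specify the same state, can be checked numerically. The following is a minimal sketch, not from the article, assuming Python with NumPy and a toy two-dimensional Hilbert space; the matrix chosen (the Pauli-Z matrix) is an arbitrary example of a self-adjoint operator.

import numpy as np

# A self-adjoint ("Hermitian") operator on a 2-dimensional Hilbert space.
Z = np.array([[1, 0],
              [0, -1]], dtype=complex)
assert np.allclose(Z, Z.conj().T)  # self-adjoint: equal to its conjugate transpose

# Eigenvalues of a self-adjoint operator are real.
print(np.linalg.eigvalsh(Z))  # [-1.  1.]

# v and w = c*v (c a non-zero complex scalar) specify the same state:
# normalized expectation values of any observable agree.
v = np.array([1, 1j], dtype=complex)
w = (2 - 3j) * v

def expectation(op, state):
    state = state / np.linalg.norm(state)   # normalize to a unit vector
    return np.vdot(state, op @ state).real  # <state|op|state>

assert np.isclose(expectation(Z, v), expectation(Z, w))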
In quantum mechanics, dynamical variables such as position, translational (linear) momentum, orbital angular momentum, spin, and total angular momentum are each associated with a self-adjoint operator that acts on the state of the quantum system. The eigenvalues of the operator correspond to the possible values that the dynamical variable can be observed as having. For example, suppose |ψ_a⟩ is an eigenket (eigenvector) of the observable A, with eigenvalue a, existing in a Hilbert space. Then

A|ψ_a⟩ = a|ψ_a⟩.

This eigenket equation says that if a measurement of the observable A is made while the system of interest is in the state |ψ_a⟩, then the observed value of that particular measurement must return the eigenvalue a with certainty. However, if the system of interest is in the general state |ϕ⟩ (where |ϕ⟩ and |ψ_a⟩ are unit vectors, and the eigenspace of a is one-dimensional), then the eigenvalue a is returned with probability |⟨ψ_a|ϕ⟩|², by the Born rule.

Compatible and incompatible observables in quantum mechanics
A crucial difference between classical quantities and quantum mechanical observables is that some pairs of quantum observables may not be simultaneously measurable, a property referred to as complementarity. This is mathematically expressed by the non-commutativity of their corresponding operators, to the effect that the commutator

[A, B] := AB − BA ≠ 0.

This inequality expresses a dependence of measurement results on the order in which measurements of observables A and B are performed. A measurement of A alters the quantum state in a way that is incompatible with the subsequent measurement of B, and vice versa. Observables corresponding to commuting operators are called compatible observables. For example, momentum along, say, the x and the y axis are compatible. Observables corresponding to non-commuting operators are called incompatible observables or complementary variables. For example, the position and momentum along the same axis are incompatible. Incompatible observables cannot have a complete set of common eigenfunctions. Note that there can be some simultaneous eigenvectors of A and B, but not enough in number to constitute a complete basis.
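Continuing the same toy setup (again a hedged sketch assuming NumPy, not from the article; the Pauli matrices are standard textbook examples of a non-commuting pair), the commutator and the Born rule can both be computed directly:

import numpy as np

# Two self-adjoint operators that do not commute: incompatible observables.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
commutator = X @ Z - Z @ X
print(commutator)  # non-zero: [[0, -2], [2, 0]]

# Born rule: P(a) = |<psi_a|phi>|^2 for each eigenvalue a of the observable.
eigenvalues, eigenvectors = np.linalg.eigh(Z)  # eigenvectors are the columns
phi = np.array([3, 4], dtype=complex)
phi = phi / np.linalg.norm(phi)  # a general unit state

for a, psi_a in zip(eigenvalues, eigenvectors.T):
    prob = abs(np.vdot(psi_a, phi)) ** 2
    print(f"P(a = {a:+.0f}) = {prob:.2f}")  # P(-1) = 0.64, P(+1) = 0.36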
Lead poisoning
Lead poisoning, also known as plumbism and saturnism, is a type of metal poisoning caused by lead in the body. Symptoms may include abdominal pain, constipation, headaches, irritability, memory problems, infertility, and tingling in the hands and feet. It causes almost 10% of intellectual disability of otherwise unknown cause and can result in behavioral problems. Some of the effects are permanent. In severe cases, anemia, seizures, coma, or death may occur. Exposure to lead can occur by contaminated air, water, dust, food, or consumer products. Lead poisoning poses a significantly increased risk to children, as they are far more likely to ingest lead indirectly by chewing on toys or other objects that are coated in lead paint. Additionally, children absorb greater quantities of lead from ingested sources than adults. Exposure at work is a common cause of lead poisoning in adults, with certain occupations at particular risk. Diagnosis is typically by measurement of the blood lead level. The Centers for Disease Control and Prevention (US) has set the upper limit for blood lead for adults at 10 μg/dL (10 μg/100 g) and for children at 3.5 μg/dL; before October 2021 the limit was 5 μg/dL. Elevated lead may also be detected by changes in red blood cells or dense lines in the bones of children as seen on X-ray. Lead poisoning is preventable. Prevention includes individual efforts such as removing lead-containing items from the home; workplace efforts such as improved ventilation and monitoring; and state and national policies that ban lead in products such as paint, gasoline, ammunition, wheel weights, and fishing weights, reduce allowable levels in water or soil, and provide for cleanup of contaminated soil. Workers' education could be helpful as well. The major treatments are removal of the source of lead and the use of medications that bind lead so it can be eliminated from the body, known as chelation therapy. Chelation therapy in children is recommended when blood levels are greater than 40–45 μg/dL. Medications used include dimercaprol, edetate calcium disodium, and succimer. In 2021, 1.5 million deaths worldwide were attributed to lead exposure. It occurs most commonly in the developing world. An estimated 800 million children have blood lead levels over 5 μg/dL in low- and middle-income nations, though comprehensive public health data remains inadequate. Thousands of American communities may have higher lead burdens than those seen during the peak of the Flint water crisis. Those who are poor are at greater risk. Lead is believed to result in 0.6% of the world's disease burden. Half of the US population has been exposed to substantially detrimental lead levels in early childhood – mainly from car exhaust, whose lead pollution peaked in the 1970s and caused widespread loss of cognitive ability. Globally, over 15% of children are known to have blood lead levels (BLL) of over 10 μg/dL, at which point clinical intervention is strongly indicated. People have been mining and using lead for thousands of years. Descriptions of lead poisoning date to at least 200 BC, while efforts to limit lead's use date back to at least the 16th century. Concerns about low levels of exposure began in the 1970s, with the recognition that there is no safe threshold for lead exposure.

Classification
Classically, "lead poisoning" or "lead intoxication" has been defined as exposure to high levels of lead typically associated with severe health effects.
Poisoning is a pattern of symptoms that occur with toxic effects from mid to high levels of exposure; toxicity is a wider spectrum of effects, including subclinical ones (those that do not cause symptoms). However, professionals often use "lead poisoning" and "lead toxicity" interchangeably, and official sources do not always restrict the use of "lead poisoning" to refer only to symptomatic effects of lead. The amount of lead in the blood and tissues, as well as the time course of exposure, determine toxicity. Lead poisoning may be acute (from intense exposure of short duration) or chronic (from repeat low-level exposure over a prolonged period), but the latter is much more common. Diagnosis and treatment of lead exposure are based on blood lead level (the amount of lead in the blood), measured in micrograms of lead per deciliter of blood (μg/dL). Urine lead levels may be used as well, though less commonly. In cases of chronic exposure, lead often sequesters in the highest concentrations first in the bones, then in the kidneys. If a provider is performing a provocative excretion test, or "chelation challenge", a measurement obtained from urine rather than blood is likely to provide a more accurate representation of total lead burden to a skilled interpreter. The US Centers for Disease Control and Prevention and the World Health Organization state that a blood lead level of 10 μg/dL or above is a cause for concern; however, lead may impair development and have harmful health effects even at lower levels, and there is no known safe exposure level. Authorities such as the American Academy of Pediatrics define lead poisoning as blood lead levels higher than 10 μg/dL. Lead forms a variety of compounds and exists in the environment in various forms. Features of poisoning differ depending on whether the agent is an organic compound (one that contains carbon) or an inorganic one. Organic lead poisoning is now very rare, because countries across the world have phased out the use of organic lead compounds as gasoline additives, but such compounds are still used in industrial settings. Organic lead compounds, which cross the skin and respiratory tract easily, affect the central nervous system predominantly.

Signs and symptoms
Lead poisoning can cause a variety of symptoms and signs which vary depending on the individual and the duration of lead exposure. Symptoms are nonspecific and may be subtle, and someone with elevated lead levels may have no symptoms. Symptoms usually develop over weeks to months as lead builds up in the body during a chronic exposure, but acute symptoms from brief, intense exposures also occur. Symptoms from exposure to organic lead, which is probably more toxic than inorganic lead due to its lipid solubility, occur rapidly. Poisoning by organic lead compounds has symptoms predominantly in the central nervous system, such as insomnia, delirium, cognitive deficits, tremor, hallucinations, and convulsions. Symptoms may be different in adults and children; the main symptoms in adults are headache, abdominal pain, memory loss, kidney failure, male reproductive problems, and weakness, pain, or tingling in the extremities. Early symptoms of lead poisoning in adults are commonly nonspecific and include depression, loss of appetite, intermittent abdominal pain, nausea, diarrhea, constipation, and muscle pain. Other early signs in adults include malaise, fatigue, decreased libido, and problems with sleep. An unusual taste in the mouth and personality changes are also early signs.
In adults, symptoms can occur at levels above 40 μg/dL, but are more likely to occur only above 50–60 μg/dL. Symptoms begin to appear in children generally at around 60 μg/dL. However, the lead levels at which symptoms appear vary widely depending on unknown characteristics of each individual. At blood lead levels between 25 and 60 μg/dL, neuropsychiatric effects such as delayed reaction times, irritability, and difficulty concentrating, as well as slowed motor nerve conduction and headache, can occur. Anemia may appear at blood lead levels higher than 50 μg/dL. In adults, abdominal colic, involving paroxysms of pain, may appear at blood lead levels greater than 80 μg/dL. Signs that occur in adults at blood lead levels exceeding 100 μg/dL include wrist drop and foot drop, and signs of encephalopathy (a condition characterized by brain swelling), such as those that accompany increased pressure within the skull, delirium, coma, seizures, and headache. In children, signs of encephalopathy such as bizarre behavior, discoordination, and apathy occur at lead levels exceeding 70 μg/dL. For both adults and children, it is rare to be asymptomatic if blood lead levels exceed 100 μg/dL.

Acute poisoning
In acute poisoning, typical neurological signs are pain, muscle weakness, numbness and tingling, and, rarely, symptoms associated with inflammation of the brain. Abdominal pain, nausea, vomiting, diarrhea, and constipation are other acute symptoms. Lead's effects on the mouth include astringency and a metallic taste. Gastrointestinal problems, such as constipation, diarrhea, poor appetite, or weight loss, are common in acute poisoning. Absorption of large amounts of lead over a short time can cause shock (insufficient fluid in the circulatory system) due to loss of water from the gastrointestinal tract. Hemolysis (the rupture of red blood cells) due to acute poisoning can cause anemia and hemoglobin in the urine. Damage to the kidneys can cause changes in urination such as acquired Fanconi syndrome and decreased urine output. People who survive acute poisoning often go on to display symptoms of chronic poisoning.

Chronic poisoning
Chronic poisoning usually presents with symptoms affecting multiple systems, but is associated with three main types of symptoms: gastrointestinal, neuromuscular, and neurological. Central nervous system and neuromuscular symptoms usually result from intense exposure, while gastrointestinal symptoms usually result from exposure over longer periods. Signs of chronic exposure include loss of short-term memory or concentration, depression, nausea, abdominal pain, loss of coordination, and numbness and tingling in the extremities. Fatigue, problems with sleep, headaches, stupor, slurred speech, and anemia are also found in chronic lead poisoning. A "lead hue" of the skin with pallor and/or lividity is another feature. A blue line along the gum with bluish-black edging to the teeth, known as a Burton line, is another indication of chronic lead poisoning. Children with chronic poisoning may refuse to play or may have hyperkinetic or aggressive behavior disorders. Visual disturbance may present with gradually progressing blurred vision as a result of central scotoma, caused by toxic optic neuritis.

Effects on children
A pregnant woman who has elevated blood lead levels is at greater risk of a premature birth or of having a baby with a low birth weight. Children are more at risk for lead poisoning because their smaller bodies are in a continuous state of growth and development.
Young children are much more vulnerable to lead poisoning, as they absorb 4 to 5 times more lead than an adult from a given source. Furthermore, children, especially as they are learning to crawl and walk, are constantly on the floor and therefore more prone to ingesting and inhaling dust that is contaminated with lead. The classic signs and symptoms in children are loss of appetite, abdominal pain, vomiting, weight loss, constipation, anemia, kidney failure, irritability, lethargy, learning disabilities, and behavioral problems. Slow development of normal childhood behaviors, such as talking and use of words, and permanent intellectual disability are both commonly seen. Although less common, it is possible for fingernails to develop leukonychia striata if exposed to abnormally high lead concentrations. On July 30, 2020, a report by UNICEF and Pure Earth revealed that lead poisoning is affecting children on a "massive and previously unknown scale". According to the report, one in three children, up to 800 million globally, have blood lead levels at or above 5 micrograms per decilitre (μg/dL), which is the commonly accepted threshold beyond which action is required.

By organ system
Lead affects every one of the body's organ systems, especially the nervous system, but also the bones and teeth, the kidneys, and the cardiovascular, immune, and reproductive systems. Hearing loss and tooth decay have been linked to lead exposure, as have cataracts. Intrauterine and neonatal lead exposure promote tooth decay. Aside from the developmental effects unique to young children, the health effects experienced by adults are similar to those in children, although the thresholds are generally higher.

Kidneys
Kidney damage occurs with exposure to high levels of lead, and evidence suggests that lower levels can damage kidneys as well. The toxic effect of lead causes nephropathy and may cause Fanconi syndrome, in which the proximal tubular function of the kidney is impaired. Long-term exposure at levels lower than those that cause lead nephropathy has also been reported as nephrotoxic in patients from developed countries who had chronic kidney disease or were at risk because of hypertension or diabetes mellitus. Lead poisoning inhibits excretion of the waste product urate and causes a predisposition for gout, in which urate builds up. This condition is known as saturnine gout.

Cardiovascular system
Evidence suggests lead exposure is associated with high blood pressure, and studies have also found connections between lead exposure and coronary heart disease, heart rate variability, and death from stroke, but this evidence is more limited. People who have been exposed to higher concentrations of lead may be at a higher risk for cardiac autonomic dysfunction on days when ozone and fine particles are higher.

Reproductive system
Lead affects both the male and female reproductive systems. In men, when blood lead levels exceed 40 μg/dL, sperm count is reduced and changes occur in the volume of sperm, their motility, and their morphology. A pregnant woman's elevated blood lead level can lead to miscarriage, prematurity, low birth weight, and problems with development during childhood. Lead is able to pass through the placenta and into breast milk, and blood lead levels in mothers and infants are usually similar. A fetus may be poisoned in utero if lead from the mother's bones is subsequently mobilized by the changes in metabolism due to pregnancy; increased calcium intake in pregnancy may help mitigate this phenomenon.
Nervous system
Lead affects the peripheral nervous system (especially motor nerves) and the central nervous system. Peripheral nervous system effects are more prominent in adults, and central nervous system effects are more prominent in children. Lead causes the axons of nerve cells to degenerate and lose their myelin coats. Lead exposure in young children has been linked to learning disabilities, and children with blood lead concentrations greater than 10 μg/dL are in danger of developmental disabilities. Increased blood lead level in children has been correlated with decreases in intelligence, nonverbal reasoning, short-term memory, attention, reading and arithmetic ability, fine motor skills, emotional regulation, and social engagement. The effect of lead on children's cognitive abilities takes place at very low levels. There is no apparent lower threshold to the dose-response relationship (unlike other heavy metals such as mercury). Reduced academic performance has been associated with lead exposure even at blood lead levels lower than 5 μg/dL. Blood lead levels below 10 μg/dL have been reported to be associated with lower IQ and behavior problems such as aggression, in proportion with blood lead levels. Between blood lead levels of 5 and 35 μg/dL, an IQ decrease of 2–4 points for each μg/dL increase is reported in children. However, studies that show associations between low-level lead exposure and health effects in children may be affected by confounding and overestimate the effects of low-level lead exposure. High blood lead levels in adults are also associated with decreases in cognitive performance and with psychiatric symptoms such as depression and anxiety. It was found in a large group of current and former inorganic lead workers in Korea that blood lead levels in the range of 20–50 μg/dL were correlated with neurocognitive defects. Increases in blood lead levels from about 50 to about 100 μg/dL in adults have been found to be associated with persistent, and possibly permanent, impairment of central nervous system function. Lead exposure in children is also correlated with neuropsychiatric disorders such as attention deficit hyperactivity disorder and anti-social behaviour. Elevated lead levels in children are correlated with higher scores on aggression and delinquency measures. A correlation has also been found between prenatal and early childhood lead exposure and violent crime in adulthood. Countries with the highest air lead levels have also been found to have the highest murder rates, after adjusting for confounding factors. A May 2000 study by economic consultant Rick Nevin theorizes that lead exposure explains 65% to 90% of the variation in violent crime rates in the US. A 2007 paper by the same author claims to show a strong association between preschool blood lead and subsequent crime rate trends over several decades across nine countries. Lead exposure in childhood appears to increase school suspensions and juvenile detention among boys. It is believed that the US ban on lead paint in buildings in the late 1970s, as well as the phaseout of leaded gasoline in the 1970s and 1980s, helped contribute to the decline of violent crime in the United States since the early 1990s.

Exposure routes
Lead is a common environmental pollutant. Causes of environmental contamination include lead-based paint that is deteriorating (e.g.
peeling, chipping, chalking, cracking, damp, or damaged); renovation, repair, or painting activities (disturbing or demolishing painted surfaces generates toxic lead dust); industrial use of lead, such as in facilities that process lead-acid batteries or produce lead wire or pipes; metal recycling and foundries; and burning of joss paper. In the US as of 2013, storage batteries and ammunition accounted for the largest amounts of lead consumed in the economy each year. Children living near facilities that process lead, such as lead smelters, have been found to have unusually high blood lead levels. In August 2009, parents rioted in China after lead poisoning was found in nearly 2000 children living near zinc and manganese smelters. Lead exposure can occur from contact with lead in air, household dust, soil, water, and commercial products. Leaded gasoline has also been linked to increases in lead pollution. Some research has suggested a link between leaded gasoline and crime rates. Man-made lead pollution has been elevated in the air for the past 2000 years. Lead pollution in the air is entirely due to human activity (mining and smelting, as well as gasoline).

Occupational exposure
In adults, occupational exposure is the main cause of lead poisoning. People can be exposed when working in facilities that produce a variety of lead-containing products; these include radiation shields, ammunition, certain surgical equipment, developing dental X-ray films prior to digital X-rays (each film packet had a lead liner to prevent the radiation from going through), fetal monitors, plumbing, circuit boards, jet engines, and ceramic glazes. In addition, lead miners and smelters, plumbers and fitters, auto mechanics, glass manufacturers, construction workers, battery manufacturers and recyclers, firing range workers, and plastic manufacturers are at risk for lead exposure. Other occupations that present lead exposure risks include welding, manufacture of rubber, printing, zinc and copper smelting, processing of ore, combustion of solid waste, and production of paints and pigments. Lead exposure can also occur with intense use of gun ranges, regardless of whether these ranges are indoor or outdoor. Parents who are exposed to lead in the workplace can bring lead dust home on clothes or skin and expose their children. Occupational exposure to lead increases the risk of cardiovascular disease, in particular stroke and high blood pressure.

Food
Lead may be found in food when food is grown in soil that is high in lead, airborne lead contaminates the crops, animals eat lead in their diet, or lead enters the food from what it was stored or cooked in. Ingestion of lead paint and batteries is also a route of exposure for livestock, which can subsequently affect humans. Milk produced by contaminated cattle can be diluted to a lower lead concentration and sold for consumption. In Bangladesh, lead chromate has been added to turmeric to make it more yellow. This is believed to have started in the 1980s and to have been one of the main sources of high lead levels in the country. Following a 2019 report identifying adulterated turmeric as the main cause of lead poisoning in Bangladesh, the government began a rapid crackdown and public service campaign on it. By 2021, leaded turmeric had vanished from the Bangladeshi market, and blood lead levels in workers at turmeric mills had dropped by a median of 30%.
In Hong Kong, the maximum allowed lead content is 6 parts per million in solid foods and 1 part per million in liquid foods. In December 2022, 28 dark chocolate brands were tested by Consumer Reports, which found that 23 of them contained cadmium, lead, or both. When cocoa beans are set outside near polluting industrial plants, they can be contaminated by dust containing lead.

Paint
Some lead compounds are colorful and are used widely in paints, and lead paint is a major route of lead exposure in children. A study conducted in 1998–2000 found that 38 million housing units in the US had lead-based paint, down from a 1990 estimate of 64 million. Deteriorating lead paint can produce dangerous lead levels in household dust and soil. Deteriorating lead paint and lead-containing household dust are the main causes of chronic lead poisoning. The lead breaks down into the dust, and since children are more prone to crawling on the floor, it is easily ingested. Many young children display pica, eating things that are not food. Even a small amount of a lead-containing product such as a paint chip or a sip of glaze can contain tens or hundreds of milligrams of lead. Eating chips of lead paint presents a particular hazard to children, generally producing more severe poisoning than lead-contaminated dust. Because removing lead paint from dwellings, e.g. by sanding or torching, creates lead-containing dust and fumes, it is generally safer to seal the lead paint under new paint (excepting moveable windows and doors, which create paint dust when operated). Alternatively, special precautions must be taken if the lead paint is to be removed. In oil painting, it was once common for colours such as yellow or white to be made with lead carbonate. Lead white oil colour was the main white of oil painters until superseded by compounds containing zinc or titanium in the mid-20th century. It is speculated that the painter Caravaggio, and possibly Francisco Goya and Vincent Van Gogh, had lead poisoning due to overexposure or carelessness when handling this colour.

Soil
Residual lead in soil contributes to lead exposure in urban areas. It has been thought that the more polluted an area is with various contaminants, the more likely it is to contain lead. However, this is not always the case, as there are several other reasons for lead contamination in soil. Lead content in soil may be caused by broken-down lead paint, residues from lead-containing gasoline, used engine oil, tire weights, or pesticides used in the past, contaminated landfills, or nearby industries such as foundries or smelters. For example, in the Montevideo neighborhood of La Teja, former industrial sites became important sources of exposure in local communities in the early 2000s. Although leaded soil is less of a problem in countries that no longer have leaded gasoline, it remains prevalent, raising concerns about the safety of urban agriculture; eating food grown in contaminated soil can present a lead hazard. Interfacial solar evaporation has recently been studied as a technique for remediating lead-contaminated sites, which involves the evaporation of heavy metal ions from moist soil.

Water
Lead from the atmosphere or soil can end up in groundwater and surface water. It is also potentially in drinking water, e.g. from plumbing and fixtures that are either made of lead or have lead solder. Since acidic water breaks down lead in plumbing more readily, chemicals can be added to municipal water to increase the pH and thus reduce the corrosivity of the public water supply.
Chloramines, which were adopted as a substitute for chlorine disinfectants due to fewer health concerns, increase corrosivity. In the US, 14–20% of total lead exposure is attributed to drinking water. In 2004, a team of seven reporters from The Washington Post discovered high levels of lead in the drinking water in Washington, DC, and won an award for investigative reporting for a series of articles about this contamination. In the water crisis in Flint, Michigan, a switch to a more corrosive municipal water source caused elevated lead levels in domestic tap water. As in Flint, Michigan, and Washington, D.C., a similar situation affects the state of Wisconsin, where estimates call for replacement of up to 176,000 underground pipes made of lead, known as lead service lines. The City of Madison, Wisconsin, addressed the issue and replaced all of its lead service lines, but there are still other cities that have yet to follow suit. While there are chemical methods that could help reduce the amount of lead in the water distributed, a permanent fix would be to replace the pipes completely. While the state may replace the pipes below ground, homeowners must replace the pipes on their property, at an average cost of $3,000. Experts say that if the city were to replace its pipes and the citizens were to keep the old pipes located within their homes, there would be a potential for more lead to dissolve into their drinking water. The US Congress authorized the EPA to dedicate funds to assist states and nonprofits with the costs of lead service line removal under Section 50105 of the Safe Drinking Water Act. Collected rainwater from roof runoff used as potable water may contain lead, if there are lead contaminants on the roof or in the storage tank. The Australian Drinking Water Guidelines allow a maximum of 0.01 mg/L (10 ppb) lead in water. Lead wheel weights have been found to accumulate on roads and interstates and erode in traffic, entering the water runoff through drains. Leaded fishing weights accumulate in rivers, streams, ponds, and lakes.

Gasoline
Tetraethyllead was first added to gasoline in 1923, as it helped prevent engine knocking. Automotive exhaust represented a major way for lead to be inhaled, invade the bloodstream, and pass into the brain. The use of lead in gasoline peaked in the 1970s. By the next decade most high-income countries had prohibited the use of leaded petrol. As late as 2002, almost all low- and middle-income countries, including some OECD members, still used it. The UN Environment Programme (UNEP) thus launched a campaign in 2002 to eliminate its use, leading to Algeria being the last country to stop its use in July 2021.

Lead-containing products
Lead can be found in products such as kohl, an ancient cosmetic from the Middle East, South Asia, and parts of Africa that has many other names, and in some toys. In 2007, millions of toys made in China were recalled from multiple countries owing to safety hazards including lead paint. Vinyl mini-blinds, found especially in older housing, may contain lead. Lead is commonly incorporated into herbal remedies such as Indian Ayurvedic preparations and remedies of Chinese origin. There are also risks of elevated blood lead levels caused by folk remedies like azarcón and greta, powders containing lead tetroxide and lead oxide, respectively, which each contain about 95% lead. Ingestion of metallic lead, such as small lead fishing lures, increases blood lead levels and can be fatal. Ingestion of lead-contaminated food is also a threat.
Ceramic glaze often contains lead, and dishes that have been improperly fired can leach the metal into food, potentially causing severe poisoning. In some places, the solder in cans used for food contains lead. When manufacturing medical instruments and hardware, solder containing lead may be present. People who eat animals hunted with lead bullets may be at risk for lead exposure. Bullets lodged in the human body rarely cause significant levels of lead, but bullets lodged in the joints are the exception, as they deteriorate and release lead into the body over time. In May 2015, Indian food safety regulators in the state of Uttar Pradesh found that samples of Maggi 2 Minute Noodles contained lead up to 17 times beyond permissible limits. On 3 June 2015, the New Delhi Government banned the sale of Maggi noodles in New Delhi stores for 15 days because it was found to contain lead beyond the permissible limit. The Gujarat FDA on 4 June 2015 banned the noodles for 30 days after 27 out of 39 samples were found to contain objectionable levels of metallic lead, among other things. Some of India's biggest retailers, like Future Group, Big Bazaar, Easyday, and Nilgiris, imposed a nationwide ban on Maggi noodles, and many other states also banned them.

Bullets
Contact with ammunition is a source of lead exposure. Lead-based ammunition production is the second largest annual use of lead in the US, accounting for over 84,800 metric tons in 2013, second only to the manufacture of storage batteries. The Environmental Protection Agency (EPA) cannot regulate cartridges and shells, as a matter of law. Lead birdshot is banned in some areas, but this is primarily for the benefit of the birds and their predators, rather than humans. Contamination from heavily used gun ranges is of concern to those who live nearby. Non-lead alternatives include copper, zinc, steel, tungsten-nickel-iron, bismuth-tin, and polymer blends such as tungsten-polymer and copper-polymer. Because game animals can be shot using lead bullets, the potential for lead ingestion from game meat consumption has been studied clinically and epidemiologically. In a study conducted by the CDC, a cohort from North Dakota was enrolled and asked to self-report historical consumption of game meat and participation in other activities that could cause lead exposure. The study found that participants' age, sex, housing age, current hobbies with potential for lead exposure, and game consumption were all associated with blood lead level (PbB). According to a study published in 2008, 1.1% of the 736 persons tested who consumed wild game meat had PbB ≥5 μg/dL. In November 2015, the US Health and Human Services (HHS), Centers for Disease Control and Prevention (CDC), and National Institute for Occupational Safety and Health (NIOSH) designated 5 μg/dL (five micrograms per deciliter) of whole blood, in a venous blood sample, as the reference blood lead level for adults. An elevated blood lead level (BLL) is defined as a BLL ≥5 μg/dL. This case definition is used by the Adult Blood Lead Epidemiology and Surveillance (ABLES) program, the Council of State and Territorial Epidemiologists (CSTE), and CDC's National Notifiable Diseases Surveillance System (NNDSS). Previously (i.e. from 2009 until November 2015), the case definition for an elevated BLL was a BLL ≥10 μg/dL. To virtually eliminate the potential for lead contamination, some researchers have suggested the use of lead-free copper non-fragmenting bullets.
Bismuth is an element used as a lead replacement for shotgun pellets in waterfowl hunting, although shotshells made from bismuth are nearly ten times the cost of lead.

Opium
Lead-contaminated opium has been the source of poisoning in Iran and other Middle Eastern countries. This has also appeared in the illicit narcotic supply in North America, resulting in confirmed lead poisoning.

Cannabis
In 2007, a mass poisoning due to adulterated marijuana was uncovered in Leipzig, Germany, where 29 young adults were hospitalized with lead poisoning for several months after having smoked marijuana that had been tainted with small lead particles. One hypothesis from the police was that lead, with its high specific gravity, was used to increase the weight of street marijuana sold by the gram, thereby maximizing the dealers' profits. The researchers estimated that the profit per kilogram increased by as much as $1,500 with the lead added. It is common for drugs to be cut with less-expensive substances to increase the profits of dealers or distributors (e.g., cocaine is routinely adulterated with sugars, talcum powder, magnesium salts, and even other drugs). It is thought that the adverse reactions to many of these drugs are a result of poor manufacturing rather than face-value overdoses. Besides adulteration, cannabis plants have an inherent ability to absorb heavy metals from the soil. This makes them useful for remediating contaminated sites, but it may also make cannabis dangerous for consumers who ingest it. Some cannabis strains have been bred specifically to remove pollutants from soil, air, or water, a method known as phytoremediation. In 2022, around 40% of cannabis products sold at unlicensed storefronts in New York City were found to contain heavy metals (e.g., lead, nickel), pesticides, and bacteria.

Toxicokinetics
Toxicokinetics describes how the body handles the toxin over time, including absorption, distribution, metabolism, and excretion. Exposure occurs through inhalation, ingestion, or occasionally skin contact. Lead may be taken in through direct contact with the mouth, nose, and eyes (mucous membranes), and through breaks in the skin. Tetraethyllead, which was a gasoline additive and is still used in aviation gasoline, passes through the skin; other forms of lead, including inorganic lead, are also absorbed through the skin. The main sources of absorption of inorganic lead are ingestion and inhalation. In adults, about 35–40% of inhaled lead dust is deposited in the lungs, and about 95% of that goes into the bloodstream. Of ingested inorganic lead, about 15% is absorbed, but this percentage is higher in children, pregnant women, and people with deficiencies of calcium, zinc, or iron. Infants may absorb about 50% of ingested lead, but little is known about absorption rates in children. The main body tissues that store lead are the blood, soft tissues, and bone; the half-life of lead in these tissues is measured in weeks for blood, months for soft tissues, and years for bone. Lead in the bones, teeth, hair, and nails is bound tightly and not available to other tissues, and is generally thought not to be harmful. In adults, 94% of absorbed lead is deposited in the bones and teeth, but children only store 70% in this manner, a fact which may partially account for the more serious health effects on children. The half-life of lead in bone has been estimated as years to decades, and bone can introduce lead into the bloodstream long after the initial exposure is gone.
The half-life of lead in the blood in men is about 40 days, but it may be longer in children and pregnant women, whose bones are undergoing remodeling, which allows the lead to be continuously re-introduced into the bloodstream. Also, if lead exposure takes place over years, clearance is much slower, partly due to the re-release of lead from bone. Many other tissues store lead, but those with the highest concentrations (other than blood, bone, and teeth) are the brain, spleen, kidneys, liver, and lungs. Lead is removed from the body very slowly, mainly through urine. Smaller amounts of lead are also eliminated through the feces, and very small amounts in hair, nails, and sweat. Toxicodynamics Toxicodynamics describes how the toxin affects the body, including the mechanisms causing its symptoms. Lead has no known physiologically necessary role in the body, and its harmful effects are myriad. Lead and other heavy metals create reactive radicals which damage cell structures, including DNA and cell membranes. Lead also interferes with DNA transcription, with enzymes that help in the synthesis of vitamin D, and with enzymes that maintain the integrity of the cell membrane. Anemia may result when the membranes of red blood cells become more fragile as a result of this damage. Lead interferes with the metabolism of bones and teeth and alters the permeability of blood vessels and collagen synthesis. Lead may also be harmful to the developing immune system, causing production of excessive inflammatory proteins; this mechanism may mean that lead exposure is a risk factor for asthma in children. Lead exposure has also been associated with a decrease in activity of immune cells such as polymorphonuclear leukocytes. Lead also interferes with the normal metabolism of calcium in cells and causes it to build up within them. Enzymes The primary cause of lead's toxicity is its interference with a variety of enzymes, because it binds to sulfhydryl groups found on many enzymes. Part of lead's toxicity results from its ability to mimic other metals that take part in biological processes as cofactors in many enzymatic reactions, displacing them at the enzymes on which they act. Lead is able to bind to and interact with many of the same enzymes as these metals but, due to its differing chemistry, does not properly function as a cofactor, thus interfering with the enzyme's ability to catalyze its normal reaction or reactions. Among the essential metals which lead displaces in this way are calcium, iron, and zinc. The lead ion has a lone pair in its electronic structure, which can result in a distortion in the coordination of ligands and which was hypothesized in 2007 to be important in lead poisoning's effects on enzymes. One of the main causes of the pathology of lead is that it interferes with the activity of an essential enzyme called delta-aminolevulinic acid dehydratase, or ALAD, which is important in the biosynthesis of heme, the cofactor found in hemoglobin. Lead also inhibits ferrochelatase, another enzyme involved in the formation of heme. Ferrochelatase catalyzes the joining of protoporphyrin and Fe2+ to form heme. Lead's interference with heme synthesis results in production of zinc protoporphyrin and the development of anemia. Another effect of lead's interference with heme synthesis is the buildup of heme precursors, such as aminolevulinic acid, which may be directly or indirectly harmful to neurons.
Elevation of aminolevulinic acid results in lead poisoning having symptoms similar to those of acute porphyria. Neurons The brain is the organ most sensitive to lead exposure. Lead is able to pass through the endothelial cells at the blood-brain barrier because it can substitute for calcium ions and be taken up by calcium-ATPase pumps. Lead poisoning interferes with the normal development of a child's brain and nervous system; therefore children are at greater risk of lead neurotoxicity than adults are. In a child's developing brain, lead interferes with synapse formation in the cerebral cortex, neurochemical development (including that of neurotransmitters), and the organization of ion channels. It causes loss of neurons' myelin sheaths, reduces numbers of neurons, interferes with neurotransmission, and decreases neuronal growth. Lead ions (Pb2+), like magnesium ions (Mg2+), block NMDA receptors. Therefore, an increase in Pb2+ concentration will effectively inhibit ongoing long-term potentiation (LTP) and lead to an abnormal increase in long-term depression (LTD) in neurons in the affected parts of the nervous system. These abnormalities lead to the indirect downregulation of NMDA receptors, effectively initiating a positive feedback loop for LTD. The targeting of NMDA receptors is thought to be one of the main causes of lead's toxicity to neurons. Diagnosis Diagnosis includes determining the clinical signs and the medical history, with inquiry into possible routes of exposure. Clinical toxicologists, medical specialists in the area of poisoning, may be involved in diagnosis and treatment. The main tool in diagnosing and assessing the severity of lead poisoning is laboratory analysis of the blood lead level (BLL). Blood film examination may reveal basophilic stippling of red blood cells (dots in red blood cells visible through a microscope), as well as the changes normally associated with iron-deficiency anemia (microcytosis and hypochromasia); this picture is sometimes described as a sideroblastic anemia. However, basophilic stippling is also seen in unrelated conditions, such as megaloblastic anemia caused by vitamin B12 (cobalamin) and folate deficiencies. Unlike in other sideroblastic anemias, there are no ring sideroblasts in a bone marrow smear. Exposure to lead also can be evaluated by measuring erythrocyte protoporphyrin (EP) in blood samples. EP is a component of red blood cells known to increase when the amount of lead in the blood is high, with a delay of a few weeks. Thus EP levels in conjunction with blood lead levels can suggest the time period of exposure; if blood lead levels are high but EP is still normal, this finding suggests exposure was recent. However, the EP level alone is not sensitive enough to identify elevated blood lead levels below about 35 μg/dL. Due to this higher threshold for detection and the fact that EP levels also increase in iron deficiency, use of this method for detecting lead exposure has decreased. Blood lead levels are an indicator mainly of recent or current lead exposure, not of total body burden. Lead in bones can be measured noninvasively by X-ray fluorescence; this may be the best measure of cumulative exposure and total body burden. However, this method is not widely available and is mainly used for research rather than routine diagnosis. Another radiographic sign of elevated lead levels is the presence of radiodense lines called lead lines at the metaphysis in the long bones of growing children, especially around the knees.
These lead lines, caused by increased calcification due to disrupted metabolism in the growing bones, become wider as the duration of lead exposure increases. X-rays may also reveal lead-containing foreign materials such as paint chips in the gastrointestinal tract. Fecal lead content measured over the course of a few days may also be an accurate way to estimate the overall amount of childhood lead intake, and may serve as a useful way to gauge the extent of oral lead exposure from all dietary and environmental sources of lead. Lead poisoning shares symptoms with other conditions and may be easily missed. Conditions that present similarly and must be ruled out in diagnosing lead poisoning include carpal tunnel syndrome, Guillain–Barré syndrome, renal colic, appendicitis, encephalitis in adults, and viral gastroenteritis in children. Other differential diagnoses in children include constipation, abdominal colic, iron deficiency, subdural hematoma, neoplasms of the central nervous system, emotional and behavior disorders, and intellectual disability. Reference levels The current reference range for acceptable blood lead concentrations in healthy persons without excessive exposure to environmental sources of lead is less than 3.5 μg/dL for children and less than 25 μg/dL for adults; before 2012, the reference value for children was 10 μg/dL. Lead-exposed workers in the US are required to be removed from work when their level is greater than 50 μg/dL if they do construction work, and otherwise when it is greater than 60 μg/dL. In November 2015, the US Health and Human Services (HHS), Centers for Disease Control and Prevention (CDC), and National Institute for Occupational Safety and Health (NIOSH) designated 5 μg/dL (five micrograms per deciliter) of whole blood, in a venous blood sample, as the reference blood lead level for adults. An elevated blood lead level (BLL) is defined as a BLL ≥5 μg/dL. This case definition is used by the Adult Blood Lead Epidemiology and Surveillance (ABLES) program, the Council of State and Territorial Epidemiologists (CSTE), and CDC's National Notifiable Diseases Surveillance System (NNDSS). Previously (i.e. from 2009 until November 2015), the case definition for an elevated BLL was a BLL ≥10 μg/dL. The US national BLL geometric mean among adults was 1.2 μg/dL in 2009–2010. Blood lead concentrations in poisoning victims have ranged from 30 to 80 μg/dL in children exposed to lead paint in older houses, 77–104 μg/dL in persons working with pottery glazes, 90–137 μg/dL in individuals consuming contaminated herbal medicines, 109–139 μg/dL in indoor shooting range instructors, and as high as 330 μg/dL in those drinking fruit juices from glazed earthenware containers. Prevention In most cases, lead poisoning is preventable by avoiding exposure to lead. Prevention strategies can be divided into individual (measures taken by a family), preventive medicine (identifying and intervening with high-risk individuals), and public health (reducing risk on a population level). Recommended steps by individuals to reduce the blood lead levels of children include increasing their frequency of hand washing and their intake of calcium and iron, discouraging them from putting their hands to their mouths, vacuuming frequently, and eliminating the presence of lead-containing objects such as blinds and jewellery in the house. In houses with lead pipes or plumbing solder, these can be replaced.
Less permanent but cheaper methods include running water in the morning to flush out the most contaminated water, or adjusting the water's chemistry to prevent corrosion of pipes. Lead testing kits are commercially available for detecting the presence of lead in the household. Testing kit accuracy depends on the user testing all layers of paint and on the quality of the kit; the US Environmental Protection Agency (EPA) only approves kits with an accuracy rating of at least 95%. Professional lead testing companies caution that DIY test kits can create health risks for users who do not understand their limitations, as well as liability issues for employers with regard to worker protection. As hot water is more likely than cold water to contain higher amounts of lead, only cold water from the tap should be used for drinking, cooking, and making baby formula. Since most of the lead in household water usually comes from plumbing in the house and not from the local water supply, using cold water can avoid lead exposure. Measures such as dust control and household education do not appear to be effective in changing children's blood lead levels. Prevention measures also exist on national and municipal levels. Recommendations by health professionals for lowering childhood exposures include banning the use of lead where it is not essential and strengthening regulations that limit the amount of lead in soil, water, air, household dust, and products. Regulations exist to limit the amount of lead in paint; for example, a 1978 law in the US restricted the lead in paint for residences, furniture, and toys to 0.06% or less. In October 2008, the US EPA reduced the allowable lead level in air by a factor of ten, to 0.15 micrograms per cubic meter, giving states five years to comply with the standards. The European Union's Restriction of Hazardous Substances Directive limits amounts of lead and other toxic substances in electronics and electrical equipment. In some places, remediation programs exist to reduce the presence of lead when it is found to be high, for example in drinking water. As a more radical solution, entire towns located near former lead mines have been "closed" by the government and their populations resettled elsewhere, as was the case with Picher, Oklahoma, in 2009. Removing lead from aviation fuel would also reduce exposure. Screening Screening may be an important method of prevention for those at high risk, such as those who live near lead-related industries. The United States Preventive Services Task Force (USPSTF) has stated that general screening of asymptomatic children and pregnant women is of unclear benefit. The American College of Obstetricians and Gynecologists (ACOG) and the American Academy of Pediatrics (AAP), however, recommend asking about risk factors and testing those who have them. Education Educating workers about lead, its dangers, and how workplace exposure can be decreased, especially when initial blood and urine lead levels are high, could help reduce the risk of lead poisoning in the workplace. Treatment The mainstays of treatment are removal from the source of lead and, for people who have significantly high blood lead levels or who have symptoms of poisoning, chelation therapy. Treatment of iron, calcium, and zinc deficiencies, which are associated with increased lead absorption, is another part of treatment for lead poisoning.
When lead-containing materials are present in the gastrointestinal tract (as evidenced by abdominal X-rays), whole bowel irrigation, cathartics, endoscopy, or even surgical removal may be used to eliminate the lead from the gut and prevent further exposure. Lead-containing bullets and shrapnel may also present a threat of further exposure and may need to be surgically removed if they are in or near fluid-filled or synovial spaces. If lead encephalopathy is present, anticonvulsants may be given to control seizures, and treatments to control swelling of the brain include corticosteroids and mannitol. Treatment of organic lead poisoning involves removing the lead compound from the skin, preventing further exposure, treating seizures, and possibly chelation therapy for people with high blood lead concentrations. Before the advent of organic chelating agents, salts of iodide were given orally, a practice popularized by Louis Melsens and many nineteenth- and early twentieth-century doctors. A chelating agent is a molecule with at least two negatively charged groups that allow it to form complexes with metal ions with multiple positive charges, such as lead. The chelate that is thus formed is nontoxic and can be excreted in the urine, initially at up to 50 times the normal rate. The chelating agents used for treatment of lead poisoning are edetate disodium calcium (CaNa2EDTA) and dimercaprol (BAL), which are injected, and succimer and d-penicillamine, which are administered orally. Chelation therapy is used in cases of acute lead poisoning, severe poisoning, and encephalopathy, and is considered for people with blood lead levels above 25 μg/dL. While the use of chelation for people with symptoms of lead poisoning is widely supported, use in asymptomatic people with high blood lead levels is more controversial. Chelation therapy is of limited value for cases of chronic exposure to low levels of lead. Chelation therapy is usually stopped when symptoms resolve or when blood lead levels return to premorbid levels. When lead exposure has taken place over a long period, blood lead levels may rise after chelation is stopped because lead is leached into the blood from stores in the bone; thus repeated treatments are often necessary. People receiving dimercaprol need to be assessed for peanut allergies, since the commercial formulation contains peanut oil. Calcium EDTA is also effective if administered four hours after the administration of dimercaprol. Administering dimercaprol, DMSA (succimer), or DMPS prior to calcium EDTA is necessary to prevent the redistribution of lead into the central nervous system. Dimercaprol used alone may also redistribute lead to the brain and testes. An adverse side effect of calcium EDTA is renal toxicity. Succimer (DMSA) is the preferred agent in mild to moderate lead poisoning cases, for example in children with a blood lead level greater than 25 μg/dL. The most commonly reported adverse effect of succimer is gastrointestinal disturbance. It is also important to note that chelation therapy lowers only blood lead levels and may not prevent the lead-induced cognitive problems associated with lead remaining in tissue; this may be because of the inability of these agents to remove sufficient amounts of lead from tissue, or their inability to reverse preexisting damage. Chelating agents can have adverse effects; for example, chelation therapy can lower the body's levels of necessary nutrients like zinc.
Chelating agents taken orally can increase the body's absorption of lead through the intestine. Chelation challenge, also known as provocation testing, is used to indicate an elevated and mobilizable body burden of heavy metals, including lead. This testing involves collecting urine before and after administering a one-off dose of chelating agent to mobilize heavy metals into the urine. The urine is then analyzed by a laboratory for levels of heavy metals, and from this analysis the overall body burden is inferred. Chelation challenge mainly measures the burden of lead in soft tissues, though whether it accurately reflects long-term exposure or the amount of lead stored in bone remains controversial. Although the technique has been used to determine whether chelation therapy is indicated and to diagnose heavy metal exposure, some evidence does not support these uses, as levels measured after chelation are not comparable to the reference ranges typically used to diagnose heavy metal poisoning. The single chelation dose could also redistribute the heavy metals to more sensitive areas, such as central nervous system tissue. Epidemiology Since lead has been used widely for centuries, the effects of exposure are worldwide. Environmental lead is ubiquitous, and everyone has some measurable blood lead level. Atmospheric lead pollution increased dramatically beginning in the 1950s as a result of the widespread use of leaded gasoline. Lead is one of the largest environmental medicine problems in terms of numbers of people exposed and the public health toll it takes. Lead exposure accounts for about 0.2% of all deaths and 0.6% of disability-adjusted life years globally. Although regulation reducing lead in products has greatly reduced exposure in the developed world since the 1970s, lead is still allowed in products in many developing countries. According to the World Health Organization, as of June 2022, only 45% of countries had confirmed legally binding controls on the production and use of lead paint. Significant disparities exist in the enactment of bans, with regions such as the Middle East, North Africa, and Sub-Saharan Africa currently the most likely to have countries lacking such measures. Despite the phase-out in much of the Global North, exposure in the Global South has increased by nearly three times. In all countries that have banned leaded gasoline, average blood lead levels have fallen sharply. However, some developing countries still allow leaded gasoline, which is the primary source of lead exposure in most developing countries. Beyond exposure from gasoline, the frequent use of pesticides in developing countries adds a risk of lead exposure and subsequent poisoning. Poor children in developing countries are at especially high risk for lead poisoning. Of North American children, 7% have blood lead levels above 10 μg/dL, whereas among Central and South American children, the percentage is 33–34%. About one fifth of the world's disease burden from lead poisoning occurs in the Western Pacific, and another fifth is in Southeast Asia. In developed countries, people with low levels of education living in poorer areas are most at risk for elevated lead. In the US, the groups most at risk for lead exposure are the impoverished, city-dwellers, and immigrants. African-American children and those living in old housing have also been found to be at elevated risk for high blood lead levels in the US.
Low-income people often live in old housing with lead paint, which may begin to peel, exposing residents to high levels of lead-containing dust. Risk factors for elevated lead exposure include alcohol consumption and smoking (possibly because of contamination of tobacco leaves with lead-containing pesticides). Adults with certain risk factors might be more susceptible to toxicity; these include calcium and iron deficiencies, old age, disease of organs targeted by lead (e.g. the brain, the kidneys), and possibly genetic susceptibility. Differences in vulnerability to lead-induced neurological damage between males and females have also been found: some studies have found males to be at greater risk, while others have found females to be at greater risk. In adults, blood lead levels steadily increase with increasing age. In adults of all ages, men have higher blood lead levels than women do. Children are more sensitive to elevated blood lead levels than adults are. Children may also have a higher intake of lead than adults; they breathe faster and may be more likely to have contact with and ingest soil. Children of ages one to three tend to have the highest blood lead levels, possibly because at that age they begin to walk and explore their environment, and they use their mouths in their exploration. Blood levels usually peak at about 18–24 months of age. In many countries, including the US, household paint and dust are the major route of exposure in children. Notable cases Cases of mass lead poisoning can occur. In 2009, plans were made to relocate 15,000 people from Jiyuan in central Henan province to other locations after 1,000 children living around China's largest smelter plant (owned and operated by Yuguang Gold and Lead) were found to have excess lead in their blood. The total cost of the project was estimated at around 1 billion yuan ($150 million), of which about 70% was to be paid by the local government and the smelter company, with the rest paid by the residents themselves. The government suspended production at 32 of 35 lead plants. The affected area includes people from 10 different villages. The Zamfara State lead poisoning epidemic occurred in Nigeria in 2010; as of 5 October 2010, at least 400 children had died from the effects of lead poisoning. Sex-specific susceptibility Neuroanatomical pathology due to lead exposure is more pronounced in males, suggesting that lead-related toxicity has a disparate impact across sexes. Prognosis Reversibility Outcome is related to the extent and duration of lead exposure. Effects of lead on the physiology of the kidneys and blood are generally reversible; its effects on the central nervous system are not. While peripheral effects in adults often go away when lead exposure ceases, evidence suggests that most of lead's effects on a child's central nervous system are irreversible. Children with lead poisoning may thus have adverse health, cognitive, and behavioral effects that follow them into adulthood. Encephalopathy Lead encephalopathy is a medical emergency and causes permanent brain damage in 70–80% of children affected by it, even in those who receive the best treatment. The mortality rate for people who develop cerebral involvement is about 25%, and among survivors who had lead encephalopathy symptoms by the time chelation therapy was begun, about 40% have permanent neurological problems such as cerebral palsy. Long-term Exposure to lead may also decrease lifespan and have health effects in the long term.
Death rates from a variety of causes have been found to be higher in people with elevated blood lead levels; these include deaths from cancer, stroke, and heart disease, as well as death rates from all causes combined. Lead is considered a possible human carcinogen based on evidence from animal studies. Evidence also suggests that age-related mental decline and psychiatric symptoms are correlated with lead exposure. Cumulative exposure over a prolonged period may have a more important effect on some aspects of health than recent exposure. Some health effects, such as high blood pressure, are only significant risks when lead exposure is prolonged (over about one year). Furthermore, the neurological effects of lead exposure have been shown to be exacerbated and longer lasting in low-income children in comparison with those of higher economic standing; this does not imply that wealth can prevent lead from causing long-term mental health issues. Violence Lead poisoning in children has been linked to changes in brain function that can result in low IQ and increased impulsivity and aggression. These effects of childhood lead exposure are associated with crimes of passion, such as aggravated assault, in young adults. An increase in lead exposure in children was linked to an increase in aggravated assault rates 22 years later. For instance, the peak in leaded gasoline use in the late 1970s correlates with a peak in aggravated assault rates in the late 1990s in urban areas across the United States. History Lead poisoning was among the first known and most widely studied work-related environmental hazards. One of the first metals to be smelted and used, lead is thought to have been discovered and first mined in Anatolia around 6500 BC. Its density, workability, and corrosion resistance were among the metal's attractions. In the 2nd century BC the Greek botanist Nicander described the colic and paralysis seen in lead-poisoned people. Dioscorides, a Greek physician who lived in the 1st century AD, wrote that lead makes the mind "give way". Lead was used extensively in Roman aqueducts from about 500 BC to 300 AD. Julius Caesar's engineer, Vitruvius, reported, "water is much more wholesome from earthenware pipes than from lead pipes. For it seems to be made injurious by lead, because white lead is produced by it, and this is said to be harmful to the human body." Gout, prevalent in affluent Rome, is thought to be the result of lead, or of leaded eating and drinking vessels. Sugar of lead (lead(II) acetate) was used to sweeten wine, and the gout that resulted from this was known as "saturnine" gout. It is even hypothesized that lead poisoning may have contributed to the decline of the Roman Empire, a hypothesis that has been thoroughly disputed. However, recent research supports the idea that the lead found in the water came from the supply pipes, rather than another source of contamination. It was not unknown for locals to punch holes in the pipes to draw water off, increasing the number of people exposed to the lead. Romans also consumed lead through defrutum, carenum, and sapa, musts made by boiling down fruit in lead cookware; defrutum and its relatives were used in ancient Roman cuisine and cosmetics, including as food preservatives. The use of leaden cookware, though popular, was not the general standard, and copper cookware was used far more widely. There is also no indication of how often these musts were added to food or in what quantity.
In 1983, environmental chemist Jerome Nriagu argued in a milestone paper that Roman civilization collapsed as a result of lead poisoning. Clair Patterson, the scientist who convinced governments to ban lead from gasoline, enthusiastically endorsed this idea, which nevertheless triggered a volley of publications aimed at refuting it. In 1984, John Scarborough, a pharmacologist and classicist, criticized Nriagu's book as "so full of false evidence, miscitations, typographical errors, and a blatant flippancy regarding primary sources that the reader cannot trust the basic arguments." Although today lead is no longer seen as the prime culprit of Rome's demise, its status in the system of water distribution by lead pipes still stands as a major public health issue. By measuring Pb isotope compositions of sediments from the Tiber River and the Trajanic Harbor, one study showed that "tap water" from ancient Rome had 100 times more lead than local spring waters. After antiquity, mention of lead poisoning was absent from medical literature until the end of the Middle Ages. In 1656 the German physician Samuel Stockhausen recognized dust and fumes containing lead compounds as the cause of the disease, known since ancient Roman times to afflict miners, smelter workers, potters, and others whose work exposed them to the metal. The painter Caravaggio might have died of lead poisoning: bones with high lead levels were found in a grave thought likely to be his, and paints used at the time contained high amounts of lead salts. Caravaggio is known to have exhibited violent behavior, a symptom commonly associated with lead poisoning. In 17th-century Germany, the physician Eberhard Gockel discovered lead-contaminated wine to be the cause of an epidemic of colic. He had noticed that monks who did not drink wine were healthy, while wine drinkers developed colic, and traced the cause to sugar of lead, made by simmering litharge with vinegar. As a result, Eberhard Ludwig, Duke of Württemberg, issued an edict in 1696 banning the adulteration of wines with litharge. In the 18th century lead poisoning was fairly frequent on account of the widespread drinking of rum, which was made in stills with a lead component (the "worm"). It was a significant cause of mortality amongst slaves and sailors in the colonial West Indies. Lead poisoning from rum was also noted in Boston. Benjamin Franklin suspected lead to be a risk in 1786. Also in the 18th century, "Devonshire colic" was the name given to the symptoms experienced by people of Devon who drank cider made in presses that were lined with lead. Lead was added to cheap wine illegally in the 18th and early 19th centuries as a sweetener. The composer Beethoven, a heavy wine drinker, had elevated lead levels (as later detected in his hair), possibly due to this; lead poisoning is a possible factor in his hearing loss and death, the cause of which is still debated. With the Industrial Revolution in the 19th century, lead poisoning became common in the work setting. The introduction of lead paint for residential use in the 19th century increased childhood exposure to lead; for millennia before this, most lead exposure had been occupational. The first legislation in the UK to limit pottery workers' exposure to lead was included in the Factories Act Extension Act in 1864, with further legislation introduced in 1899.
William James Furnival (1853–1928), research ceramist of the City & Guilds London Institute, appeared before Parliament in 1901 and presented a decade's evidence to convince the nation's leaders to remove lead completely from the British ceramic industry. His 852-page treatise of 1904, Leadless Decorative Tiles, Faience, and Mosaic, documented that campaign and provided recipes to promote lead-free ceramics. At the request of the Illinois state government in the US, Alice Hamilton (1869–1970) documented lead toxicity in Illinois industry and in 1911 presented results to the 23rd Annual Meeting of the American Economic Association. Hamilton was a founder of the field of occupational safety and health and published the first edition of her manual, Industrial Toxicology, in 1934; it remains in print in revised form. An important step in the understanding of childhood lead poisoning occurred when toxicity in children from lead paint was recognized in Australia in 1897. France, Belgium, and Austria banned white lead interior paints in 1909; the League of Nations followed suit in 1922. However, in the United States, laws banning lead house paint were not passed until 1971, and lead paint was phased out gradually, not being fully banned until 1978. The 20th century saw an increase in worldwide lead exposure levels due to the increased widespread use of the metal. Beginning in the 1920s, lead was added to gasoline to improve its combustion; lead from this exhaust persists today in soil and dust in buildings. The midcentury ceramicist Carol Janeway provides a case history of lead poisoning in an artist who used lead glazes to decorate tiles in the 1940s; her case suggests that other artists' potential for lead poisoning be investigated, for example Vally Wieselthier and Dora Carrington. Blood lead levels worldwide have been declining sharply since the 1980s, when leaded gasoline began to be phased out. In those countries that have banned lead in solder for food and drink cans and have banned leaded gasoline additives, blood lead levels have fallen sharply since the mid-1980s. Even so, the levels found today in most people are orders of magnitude greater than those of pre-industrial society. Due to reductions of lead in products and the workplace, acute lead poisoning is rare in most countries today, but low-level lead exposure is still common. It was not until the second half of the 20th century that subclinical lead exposure became understood to be a problem. During the end of the 20th century, the blood lead levels deemed acceptable steadily declined. Blood lead levels once considered safe are now considered hazardous, with no known safe threshold. From the late 1950s through the 1970s, Herbert Needleman and Clair Cameron Patterson conducted research seeking to establish lead's toxicity to humans. In the 1980s Needleman was falsely accused of scientific misconduct by lead industry associates. In 2002 Tommy Thompson, United States Secretary of Health and Human Services, appointed at least two persons with conflicts of interest to the CDC's Lead Advisory Committee. In 2014, a case brought by the State of California against a number of companies was decided against Sherwin-Williams, NL Industries, and ConAgra, which were ordered to pay $1.15 billion. The appeal, The People v. ConAgra Grocery Products Company et al., was decided in the California 6th Appellate District Court on 14 November 2017; on 6 December 2017, the petitions for rehearing from NL Industries, Inc., ConAgra Grocery Products Company, and The Sherwin-Williams Company were denied.
Studies have found a weak link between lead from leaded gasoline and crime rates. In the United States, lead paint in rental housing remains a hazard to children. Both landlords and insurance companies have adopted strategies which limit the chance of recovery for damages due to lead poisoning: insurance companies by excluding coverage for lead poisoning from policies, and landlords by crafting barriers to the collection of money damages by plaintiffs. Other species Humans are not alone in suffering from lead's effects; plants and animals are also affected by lead toxicity to varying degrees depending on species. Animals experience many of the same effects of lead exposure as humans do, such as abdominal pain, peripheral neuropathy, and behavioral changes such as increased aggression. Much of what is known about human lead toxicity and its effects is derived from animal studies. Animals are used to test the effects of treatments, such as chelating agents, and to provide information on the pathophysiology of lead, such as how it is absorbed and distributed in the body. Farm animals such as cows and horses, as well as pet animals, are also susceptible to the effects of lead toxicity. Sources of lead exposure in pets can be the same as those that present health threats to humans sharing the environment, such as paint and blinds, and there is sometimes lead in toys made for pets. Lead poisoning in a pet dog may indicate that children in the same household are at increased risk for elevated lead levels. Wildlife Lead, one of the leading causes of toxicity in waterfowl, has been known to cause die-offs of wild bird populations. When hunters use lead shot, waterfowl such as ducks can ingest the spent pellets later and be poisoned; predators that eat these birds are also at risk. Lead shot-related waterfowl poisonings were first documented in the US in the 1880s. By 1919, spent lead pellets from waterfowl hunting had been positively identified as the source of waterfowl deaths. Lead shot has been banned for hunting waterfowl in several countries, including the US in 1991 and Canada in 1997. Other threats to wildlife include lead paint, sediment from lead mines and smelters, and lead weights from fishing lines. Lead in some fishing gear has been banned in several countries. The critically endangered California condor has also been affected by lead poisoning. As scavengers, condors eat carcasses of game that have been shot but not retrieved, and with them the fragments from lead bullets; this increases their lead levels. Among condors around the Grand Canyon, lead poisoning due to eating lead shot is the most frequently diagnosed cause of death. In an effort to protect this species, in areas designated as the California condor's range, the use of lead-containing projectiles to hunt deer, feral pigs, elk, pronghorn antelope, coyotes, ground squirrels, and other non-game wildlife has been banned. Also, conservation programs exist which routinely capture condors, check their blood lead levels, and treat cases of poisoning.
Biology and health sciences
Types
Health
294340
https://en.wikipedia.org/wiki/Poisson%20bracket
Poisson bracket
In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. A "canonical coordinate system" consists of canonical position and momentum variables (below symbolized by $q_i$ and $p_i$, respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian itself as one of the new canonical momentum coordinates. In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples, as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups. All of these objects are named in honor of Siméon Denis Poisson. He introduced the Poisson bracket in his 1809 treatise on mechanics. Properties Given two functions $f$ and $g$ that depend on phase space and time, their Poisson bracket $\{f, g\}$ is another function that depends on phase space and time. The following rules hold for any three functions $f$, $g$, $h$ of phase space and time: Anticommutativity: $\{f, g\} = -\{g, f\}$. Bilinearity: $\{af + bg, h\} = a\{f, h\} + b\{g, h\}$ for constants $a$, $b$. Leibniz's rule: $\{fg, h\} = f\{g, h\} + g\{f, h\}$. Jacobi identity: $\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0$. Also, if a function $k$ is constant over phase space (but may depend on time), then $\{f, k\} = 0$ for any $f$. Definition in canonical coordinates In canonical coordinates (also known as Darboux coordinates) $(q_i, p_i)$ on the phase space, given two functions $f(q_i, p_i, t)$ and $g(q_i, p_i, t)$, the Poisson bracket takes the form $\{f, g\} = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} \right)$. The Poisson brackets of the canonical coordinates are $\{q_i, q_j\} = 0$, $\{p_i, p_j\} = 0$, and $\{q_i, p_j\} = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. Hamilton's equations of motion Hamilton's equations of motion have an equivalent expression in terms of the Poisson bracket. This may be most directly demonstrated in an explicit coordinate frame. Suppose that $f(q, p, t)$ is a function on the solution's trajectory-manifold. Then from the multivariable chain rule, $\frac{d}{dt} f(q, p, t) = \frac{\partial f}{\partial q} \frac{dq}{dt} + \frac{\partial f}{\partial p} \frac{dp}{dt} + \frac{\partial f}{\partial t}$. Further, one may take $q = q(t)$ and $p = p(t)$ to be solutions to Hamilton's equations; that is, $\frac{dq}{dt} = \frac{\partial H}{\partial p} = \{q, H\}$ and $\frac{dp}{dt} = -\frac{\partial H}{\partial q} = \{p, H\}$. Then $\frac{d}{dt} f(q, p, t) = \frac{\partial f}{\partial q} \frac{\partial H}{\partial p} - \frac{\partial f}{\partial p} \frac{\partial H}{\partial q} + \frac{\partial f}{\partial t} = \{f, H\} + \frac{\partial f}{\partial t}$. Thus, the time evolution of a function $f$ on a symplectic manifold can be given as a one-parameter family of symplectomorphisms (i.e., canonical transformations, area-preserving diffeomorphisms), with the time $t$ being the parameter: Hamiltonian motion is a canonical transformation generated by the Hamiltonian. That is, Poisson brackets are preserved in it, so that at any time $t$ in the solution to Hamilton's equations, the variables $q(t), p(t)$ can serve as the bracket coordinates. Poisson brackets are canonical invariants. Dropping the coordinates, $\frac{df}{dt} = \left( \frac{\partial}{\partial t} + \{\cdot, H\} \right) f$. The operator in the convective part of the derivative, $\{\cdot, H\}$, is sometimes referred to as the Liouvillian (see Liouville's theorem (Hamiltonian)). Poisson matrix in canonical transformations The concept of Poisson brackets can be expanded to that of matrices by defining the Poisson matrix. Consider the following canonical transformation: $\eta = (q_1, \ldots, q_N, p_1, \ldots, p_N)^T \to \varepsilon = (Q_1, \ldots, Q_N, P_1, \ldots, P_N)^T$. Defining $M := \frac{\partial \varepsilon}{\partial \eta}$, the Jacobian of the transformation, the Poisson matrix is defined as $\mathcal{P}(\varepsilon) = M J M^T$, where $J$ is the symplectic matrix under the same conventions used to order the set of coordinates.
It follows from the definition that the entries of the Poisson matrix are the Poisson brackets of the new coordinates computed in the old: $\mathcal{P}_{ij}(\varepsilon) = \{\varepsilon_i, \varepsilon_j\}_\eta$. The Poisson matrix satisfies the following known properties: $\mathcal{P}^T = -\mathcal{P}$ (antisymmetry), $|\mathcal{P}| = |M|^2$, and $\mathcal{P}(\varepsilon)\,\mathcal{L}(\varepsilon) = -\mathbb{1}$, where $\mathcal{L}(\varepsilon) := (M^{-1})^T J M^{-1}$ is known as a Lagrange matrix and whose elements correspond to Lagrange brackets, $\mathcal{L}_{ij}(\varepsilon) = [\varepsilon_i, \varepsilon_j]_\eta$. The last identity can also be stated as the following: $\sum_{k=1}^{2N} \{\varepsilon_i, \varepsilon_k\}\,[\varepsilon_k, \varepsilon_j] = -\delta_{ij}$. Note that the summation here involves generalized coordinates as well as generalized momenta. The invariance of the Poisson bracket can be expressed as $\{\varepsilon_i, \varepsilon_j\}_\eta = \{\varepsilon_i, \varepsilon_j\}_\varepsilon = J_{ij}$, which directly leads to the symplectic condition: $M J M^T = J$. Constants of motion An integrable system will have constants of motion in addition to the energy. Such constants of motion will commute with the Hamiltonian under the Poisson bracket. Suppose some function $f(q, p)$ is a constant of motion. This implies that if $q(t), p(t)$ is a trajectory or solution to Hamilton's equations of motion, then $0 = \frac{df}{dt}$ along that trajectory. Then $0 = \frac{d}{dt} f(q, p) = \{f, H\} + \frac{\partial f}{\partial t} = \{f, H\}$, where, as above, the intermediate step follows by applying the equations of motion and we assume that $f$ does not explicitly depend on time. This equation is known as the Liouville equation. The content of Liouville's theorem is that the time evolution of a measure given by a distribution function is given by the above equation. If the Poisson bracket of $f$ and $g$ vanishes ($\{f, g\} = 0$), then $f$ and $g$ are said to be in involution. In order for a Hamiltonian system to be completely integrable, $n$ independent constants of motion must be in mutual involution, where $n$ is the number of degrees of freedom. Furthermore, according to Poisson's Theorem, if two quantities $A$ and $B$ are explicitly time-independent ($\frac{\partial A}{\partial t} = \frac{\partial B}{\partial t} = 0$) constants of motion, so is their Poisson bracket $\{A, B\}$. This does not always supply a useful result, however, since the number of possible constants of motion is limited ($2n - 1$ for a system with $n$ degrees of freedom), and so the result may be trivial (a constant, or a function of $A$ and $B$). The Poisson bracket in coordinate-free language Let $M$ be a symplectic manifold, that is, a manifold equipped with a symplectic form: a 2-form $\omega$ which is both closed (i.e., its exterior derivative $d\omega$ vanishes) and non-degenerate. For example, in the treatment above, take $M$ to be $\mathbb{R}^{2n}$ and take $\omega = \sum_{i=1}^{n} dq_i \wedge dp_i$. If $\iota_v \omega$ is the interior product or contraction operation defined by $(\iota_v \omega)(u) = \omega(v, u)$, then non-degeneracy is equivalent to saying that for every one-form $\alpha$ there is a unique vector field $\Omega_\alpha$ such that $\iota_{\Omega_\alpha} \omega = \alpha$. Alternatively, one may write $\Omega_\alpha = \omega^{-1}(\alpha)$, regarding the non-degenerate form $\omega$ as an isomorphism between $TM$ and $T^*M$. Then if $H$ is a smooth function on $M$, the Hamiltonian vector field $X_H$ can be defined to be $\Omega_{dH}$. It is easy to see that $X_{p_i} = \frac{\partial}{\partial q_i}$ and $X_{q_i} = -\frac{\partial}{\partial p_i}$. The Poisson bracket on $(M, \omega)$ is a bilinear operation on differentiable functions, defined by $\{f, g\} = \omega(X_f, X_g)$; the Poisson bracket of two functions on $M$ is itself a function on $M$. The Poisson bracket is antisymmetric because $\{f, g\} = \omega(X_f, X_g) = -\omega(X_g, X_f) = -\{g, f\}$. Furthermore, $\{f, g\} = (\iota_{X_f} \omega)(X_g) = df(X_g) = X_g f = \mathcal{L}_{X_g} f$. Here $X_g f$ denotes the vector field $X_g$ applied to the function $f$ as a directional derivative, and $\mathcal{L}_{X_g} f$ denotes the (entirely equivalent) Lie derivative of the function $f$. If $\alpha$ is an arbitrary one-form on $M$, the vector field $\Omega_\alpha$ generates (at least locally) a flow $\phi_x(t)$ satisfying the boundary condition $\phi_x(0) = x$ and the first-order differential equation $\frac{d\phi_x}{dt} = \Omega_\alpha \big|_{\phi_x(t)}$. The $\phi_x(t)$ will be symplectomorphisms (canonical transformations) for every $t$ as a function of $x$ if and only if $\mathcal{L}_{\Omega_\alpha} \omega = 0$; when this is true, $\Omega_\alpha$ is called a symplectic vector field. Recalling Cartan's identity $\mathcal{L}_X \omega = d(\iota_X \omega) + \iota_X d\omega$ and $d\omega = 0$, it follows that $\mathcal{L}_{\Omega_\alpha} \omega = d(\iota_{\Omega_\alpha} \omega) = d\alpha$. Therefore, $\Omega_\alpha$ is a symplectic vector field if and only if α is a closed form. Since $d(df) = d^2 f = 0$, it follows that every Hamiltonian vector field $X_f$ is a symplectic vector field, and that the Hamiltonian flow consists of canonical transformations. From above, under the Hamiltonian flow $X_H$, $\frac{d}{dt} f(\phi_x(t)) = X_H f = \{f, H\}$. This is a fundamental result in Hamiltonian mechanics, governing the time evolution of functions defined on phase space. As noted above, when $\{f, H\} = 0$, $f$ is a constant of motion of the system.
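As a concrete check of the statements above, the following sketch uses Python's sympy to verify, for a free particle, that two angular-momentum components are constants of motion and that their bracket closes into the third, exactly as Poisson's Theorem predicts. The helper name pb is our own, not a sympy API.

```python
import sympy as sp

q1, q2, q3, p1, p2, p3 = sp.symbols('q1 q2 q3 p1 p2 p3')
qs, ps = (q1, q2, q3), (p1, p2, p3)

def pb(f, g):
    # canonical Poisson bracket {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(qs, ps))

H = (p1**2 + p2**2 + p3**2) / 2      # free particle of unit mass
L1 = q2*p3 - q3*p2                   # components of angular momentum
L2 = q3*p1 - q1*p3
L3 = q1*p2 - q2*p1

print(pb(q1, p1))                    # 1: the canonical relation {q1, p1} = 1
print(sp.simplify(pb(L1, H)))        # 0: L1 is a constant of motion
print(sp.simplify(pb(L2, H)))        # 0: so is L2
print(sp.simplify(pb(L1, L2) - L3))  # 0: {L1, L2} = L3, itself a constant of motion
```

Here {L1, L2} = L3 reproduces the familiar angular-momentum algebra and illustrates the caveat in the text: the "new" constant produced by Poisson's Theorem may be one already known.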
In addition, in canonical coordinates (with $\{p_i, p_j\} = \{q_i, q_j\} = 0$ and $\{q_i, p_j\} = \delta_{ij}$), Hamilton's equations for the time evolution of the system follow immediately from this formula. It also follows from the definition that the Poisson bracket is a derivation; that is, it satisfies a non-commutative version of Leibniz's product rule: $\{fg, h\} = f\{g, h\} + g\{f, h\}$ and $\{f, gh\} = g\{f, h\} + h\{f, g\}$. The Poisson bracket is intimately connected to the Lie bracket of the Hamiltonian vector fields. Because the Lie derivative is a derivation, $\mathcal{L}_v \iota_u \omega = \iota_{\mathcal{L}_v u} \omega + \iota_u \mathcal{L}_v \omega = \iota_{[v, u]} \omega + \iota_u \mathcal{L}_v \omega$. Thus if $u$ and $v$ are symplectic, using $\mathcal{L}_v \omega = 0$, Cartan's identity, and the fact that $\iota_u \omega$ is a closed form, $\iota_{[v, u]} \omega = \mathcal{L}_v \iota_u \omega = d(\iota_v \iota_u \omega) + \iota_v d(\iota_u \omega) = d(\iota_v \iota_u \omega)$. It follows that $[v, u] = X_{\omega(u, v)}$, so that $[X_f, X_g] = X_{\omega(X_g, X_f)} = -X_{\{f, g\}}$. Thus, the Poisson bracket on functions corresponds, up to a sign, to the Lie bracket of the associated Hamiltonian vector fields. We have also shown that the Lie bracket of two symplectic vector fields is a Hamiltonian vector field and hence is also symplectic. In the language of abstract algebra, the symplectic vector fields form a subalgebra of the Lie algebra of smooth vector fields on $M$, and the Hamiltonian vector fields form an ideal of this subalgebra. The symplectic vector fields are the Lie algebra of the (infinite-dimensional) Lie group of symplectomorphisms of $M$. It is widely asserted that the Jacobi identity for the Poisson bracket, $\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0$, follows from the corresponding identity for the Lie bracket of vector fields, but this is true only up to a locally constant function. However, to prove the Jacobi identity for the Poisson bracket, it is sufficient to show that $\operatorname{ad}_{\{f, g\}} = [\operatorname{ad}_f, \operatorname{ad}_g]$, where the operator $\operatorname{ad}_g$ on smooth functions on $M$ is defined by $\operatorname{ad}_g(f) = \{g, f\}$ and the bracket on the right-hand side is the commutator of operators, $[A, B] = AB - BA$. By the antisymmetry of the bracket and the identity $\{f, g\} = X_g f$, the operator $\operatorname{ad}_g$ is equal to the operator $-X_g$. The proof of the Jacobi identity then follows, because, up to the factor of $-1$, the Lie bracket of vector fields is just their commutator as differential operators: $[\operatorname{ad}_f, \operatorname{ad}_g] = [X_f, X_g] = -X_{\{f, g\}} = \operatorname{ad}_{\{f, g\}}$. The algebra of smooth functions on $M$, together with the Poisson bracket, forms a Poisson algebra, because it is a Lie algebra under the Poisson bracket, which additionally satisfies Leibniz's rule. We have shown that every symplectic manifold is a Poisson manifold, that is, a manifold with a "curly-bracket" operator on smooth functions such that the smooth functions form a Poisson algebra. However, not every Poisson manifold arises in this way, because Poisson manifolds allow for degeneracy which cannot arise in the symplectic case. A result on conjugate momenta Given a smooth vector field $X$ on the configuration space, let $P_X$ be its conjugate momentum. The conjugate momentum mapping is a Lie algebra anti-homomorphism from the Lie bracket to the Poisson bracket: $\{P_X, P_Y\} = -P_{[X, Y]}$. This important result is worth a short proof. Write a vector field $X$ at point $q$ in the configuration space as $X_q = \sum_i X^i(q) \frac{\partial}{\partial q^i}$, where the $\frac{\partial}{\partial q^i}$ is the local coordinate frame. The conjugate momentum to $X$ has the expression $P_X(q, p) = \sum_i X^i(q)\, p_i$, where the $p_i$ are the momentum functions conjugate to the coordinates. One then has, for a point $(q, p)$ in the phase space, $\{P_X, P_Y\}(q, p) = \sum_i \sum_j \left( Y^j \frac{\partial X^i}{\partial q^j} - X^j \frac{\partial Y^i}{\partial q^j} \right) p_i = -\sum_i [X, Y]^i(q)\, p_i = -P_{[X, Y]}(q, p)$. The above holds for all $(q, p)$, giving the desired result. Quantization Poisson brackets deform to Moyal brackets upon quantization, that is, they generalize to a different Lie algebra, the Moyal algebra, or, equivalently in Hilbert space, quantum commutators. The Wigner-İnönü group contraction of these (the classical limit, $\hbar \to 0$) yields the above Lie algebra. To state this more explicitly and precisely, the universal enveloping algebra of the Heisenberg algebra is the Weyl algebra (modulo the relation that the center be the unit). The Moyal product is then a special case of the star product on the algebra of symbols.
An explicit definition of the algebra of symbols and of the star product is given in the article on the universal enveloping algebra.
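As a sketch of the conjugate-momentum result proved above, the following sympy fragment checks the anti-homomorphism property $\{P_X, P_Y\} = -P_{[X, Y]}$ for a rotation field and a translation field in the plane; all helper names are our own illustrative choices.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
qs, ps = (q1, q2), (p1, p2)

def pb(f, g):
    # canonical Poisson bracket in two degrees of freedom
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(qs, ps))

def momentum(V):
    # conjugate momentum P_V = sum_i V^i(q) p_i of a configuration-space field V
    return sum(v*p for v, p in zip(V, ps))

def lie_bracket(X, Y):
    # [X, Y]^i = X^j dY^i/dq^j - Y^j dX^i/dq^j
    return [sum(X[j]*sp.diff(Y[i], qs[j]) - Y[j]*sp.diff(X[i], qs[j])
                for j in range(2)) for i in range(2)]

X = [q2, -q1]                       # generator of rotations in the plane
Y = [sp.Integer(1), sp.Integer(0)]  # generator of translations along q1

lhs = pb(momentum(X), momentum(Y))  # evaluates to -p2
rhs = -momentum(lie_bracket(X, Y))  # also -p2
print(sp.simplify(lhs - rhs))       # 0: the anti-homomorphism property holds
```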
Physical sciences
Classical mechanics
Physics
294390
https://en.wikipedia.org/wiki/Commutative%20property
Commutative property
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Perhaps most familiar as a property of arithmetic, e.g. $3 + 4 = 4 + 3$ or $2 \times 5 = 5 \times 2$, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, $3 - 5 \neq 5 - 3$); such operations are not commutative, and so are referred to as noncommutative operations. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many years implicitly assumed. Thus, this property was not named until the 19th century, when mathematics started to become formalized. A similar property exists for binary relations; a binary relation is said to be symmetric if the relation applies regardless of the order of its operands; for example, equality is symmetric as two equal mathematical objects are equal regardless of their order. Mathematical definitions A binary operation $*$ on a set $S$ is called commutative if $x * y = y * x$ for all $x, y \in S$. In other words, an operation is commutative if every two elements commute. An operation that does not satisfy the above property is called noncommutative. One says that $x$ commutes with $y$, or that $x$ and $y$ commute under $*$, if $x * y = y * x$. That is, a specific pair of elements may commute even if the operation is (strictly) noncommutative. Examples Commutative operations Addition and multiplication are commutative in most number systems, and, in particular, between natural numbers, integers, rational numbers, real numbers and complex numbers. This is also true in every field. Addition is commutative in every vector space and in every algebra. Union and intersection are commutative operations on sets. "And" and "or" are commutative logical operations. Noncommutative operations Division, subtraction, and exponentiation Division is noncommutative, since $1 \div 2 \neq 2 \div 1$. Subtraction is noncommutative, since $0 - 1 \neq 1 - 0$. However it is classified more precisely as anti-commutative, since $x - y = -(y - x)$. Exponentiation is noncommutative, since $2^3 = 8 \neq 9 = 3^2$. This property leads to two different "inverse" operations of exponentiation (namely, the nth-root operation and the logarithm operation), whereas multiplication only has one inverse operation. Truth functions Some truth functions are noncommutative, since the truth tables for the functions are different when one changes the order of the operands. For example, the truth tables for the material conditional $A \Rightarrow B$ and its converse $B \Rightarrow A$ differ: when $A$ is true and $B$ is false, $A \Rightarrow B$ is false while $B \Rightarrow A$ is true. Function composition of linear functions Function composition of linear functions from the real numbers to the real numbers is almost always noncommutative. For example, let $f(x) = 2x + 1$ and $g(x) = 3x + 7$. Then $f(g(x)) = 2(3x + 7) + 1 = 6x + 15$ and $g(f(x)) = 3(2x + 1) + 7 = 6x + 10$. This also applies more generally for linear and affine transformations from a vector space to itself (see below for the Matrix representation). Matrix multiplication Matrix multiplication of square matrices is almost always noncommutative, for example: $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \neq \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. Vector product The vector product (or cross product) of two vectors in three dimensions is anti-commutative; i.e., $\mathbf{b} \times \mathbf{a} = -(\mathbf{a} \times \mathbf{b})$. History and etymology Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements. Formal uses of the commutative property arose in the late 18th and early 19th centuries, when mathematicians began to work on a theory of functions.
Today the commutative property is a well-known and basic property used in most branches of mathematics. The first recorded use of the term commutative was in a memoir by François Servois in 1814, which used the word commutatives when describing functions that have what is now called the commutative property. Commutative is the feminine form of the French adjective commutatif, which is derived from the French noun commutation and the French verb commuter, meaning "to exchange" or "to switch", a cognate of to commute. The term then appeared in English in 1838, in Duncan Gregory's article entitled "On the real nature of symbolical algebra", published in 1840 in the Transactions of the Royal Society of Edinburgh. Propositional logic Rule of replacement In truth-functional propositional logic, commutation, or commutativity, refers to two valid rules of replacement. The rules allow one to transpose propositional variables within logical expressions in logical proofs. The rules are $(P \lor Q) \Leftrightarrow (Q \lor P)$ and $(P \land Q) \Leftrightarrow (Q \land P)$, where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a proof with". Truth functional connectives Commutativity is a property of some logical connectives of truth functional propositional logic. The following logical equivalences demonstrate that commutativity is a property of particular connectives. The following are truth-functional tautologies. Commutativity of conjunction: $(P \land Q) \leftrightarrow (Q \land P)$. Commutativity of disjunction: $(P \lor Q) \leftrightarrow (Q \lor P)$. Commutativity of implication (also called the law of permutation): $(P \to (Q \to R)) \leftrightarrow (Q \to (P \to R))$. Commutativity of equivalence (also called the complete commutative law of equivalence): $(P \leftrightarrow Q) \leftrightarrow (Q \leftrightarrow P)$. Set theory In group and set theory, many algebraic structures are called commutative when certain operands satisfy the commutative property. In higher branches of mathematics, such as analysis and linear algebra, the commutativity of well-known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs. Mathematical structures and commutativity A commutative semigroup is a set endowed with a total, associative and commutative operation. If the operation additionally has an identity element, we have a commutative monoid. An abelian group, or commutative group, is a group whose group operation is commutative. A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.) In a field, both addition and multiplication are commutative. Related properties Associativity The associative property is closely related to the commutative property. The associative property of an expression containing two or more occurrences of the same operator states that the order operations are performed in does not affect the final result, as long as the order of terms does not change. In contrast, the commutative property states that the order of the terms does not affect the final result. Most commutative operations encountered in practice are also associative. However, commutativity does not imply associativity. A counterexample is the function $f(x, y) = \frac{x + y}{2}$, which is clearly commutative (interchanging $x$ and $y$ does not affect the result), but it is not associative (since, for example, $f(-4, f(0, 4)) = -1$ but $f(f(-4, 0), 4) = 1$). More such examples may be found in commutative non-associative magmas. Furthermore, associativity does not imply commutativity either – for example multiplication of quaternions or of matrices is always associative but not always commutative. Distributivity Symmetry Some forms of symmetry can be directly linked to commutativity.
When a commutative operation is written as a binary function $z = f(x, y)$, then this function is called a symmetric function, and its graph in three-dimensional space is symmetric across the plane $y = x$. For example, if the function $f$ is defined as $f(x, y) = x + y$, then $f$ is a symmetric function. For relations, a symmetric relation is analogous to a commutative operation, in that if a relation $R$ is symmetric, then $a R b \Leftrightarrow b R a$. Non-commuting operators in quantum mechanics In quantum mechanics as formulated by Schrödinger, physical variables are represented by linear operators such as $x$ (meaning multiply by $x$) and $\frac{d}{dx}$. These two operators do not commute, as may be seen by considering the effect of their compositions $x \cdot \frac{d}{dx}$ and $\frac{d}{dx} \cdot x$ (also called products of operators) on a one-dimensional wave function $\psi(x)$: $x \cdot \frac{d}{dx} \psi = x \psi' \neq \psi + x \psi' = \frac{d}{dx}(x \cdot \psi)$. According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured or known precisely. For example, the position and the linear momentum in the $x$-direction of a particle are represented by the operators $x$ and $-i\hbar \frac{\partial}{\partial x}$, respectively (where $\hbar$ is the reduced Planck constant). This is the same example except for the constant $-i\hbar$, so again the operators do not commute and the physical meaning is that the position and linear momentum in a given direction are complementary.
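The two noncommutativity examples above, composition of linear functions and the Schrödinger-picture operators $x$ and $\frac{d}{dx}$, can be checked symbolically. Here is a minimal sketch using Python's sympy, with psi a generic wave function:

```python
import sympy as sp

x = sp.symbols('x')

# Composition of the linear functions f(t) = 2t + 1 and g(t) = 3t + 7
f = lambda t: 2*t + 1
g = lambda t: 3*t + 7
print(sp.expand(f(g(x))))    # 6*x + 15
print(sp.expand(g(f(x))))    # 6*x + 10, so f o g != g o f

# The operators "multiply by x" and d/dx applied to a wave function psi(x)
psi = sp.Function('psi')(x)
a = x * sp.diff(psi, x)      # (x . d/dx) psi
b = sp.diff(x * psi, x)      # (d/dx . x) psi
print(sp.simplify(b - a))    # psi(x): the two orders differ by psi itself
```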
Mathematics
Algebra
null
294408
https://en.wikipedia.org/wiki/Penrose%E2%80%93Hawking%20singularity%20theorems
Penrose–Hawking singularity theorems
The Penrose–Hawking singularity theorems (after Roger Penrose and Stephen Hawking) are a set of results in general relativity that attempt to answer the question of when gravitation produces singularities. The Penrose singularity theorem is a theorem in semi-Riemannian geometry and its general relativistic interpretation predicts a gravitational singularity in black hole formation. The Hawking singularity theorem is based on the Penrose theorem and it is interpreted as a gravitational singularity in the Big Bang situation. Penrose shared half of the Nobel Prize in Physics in 2020 "for the discovery that black hole formation is a robust prediction of the general theory of relativity". Singularity A singularity in solutions of the Einstein field equations is one of three things: Spacelike singularities: The singularity lies in the future or past of all events within a certain region. The Big Bang singularity and the typical singularity inside a non-rotating, uncharged Schwarzschild black hole are spacelike. Timelike singularities: These are singularities that can be avoided by an observer because they are not necessarily in the future of all events. An observer might be able to move around a timelike singularity. These are less common in known solutions of the Einstein field equations. Null singularities: These singularities occur on light-like or null surfaces. An example might be found in certain types of black hole interiors, such as the Cauchy horizon of a charged (Reissner–Nordström) or rotating (Kerr) black hole. A singularity can be either strong or weak: Weak singularities: A weak singularity is one where the tidal forces (which are responsible for the spaghettification in black holes) are not necessarily infinite. An observer falling into a weak singularity might not be torn apart before reaching the singularity, although the laws of physics would still break down there. The Cauchy horizon inside a charged or rotating black hole might be an example of a weak singularity. Strong singularities: A strong singularity is one where tidal forces become infinite. In a strong singularity, any object would be destroyed by infinite tidal forces as it approaches the singularity. The singularity at the center of a Schwarzschild black hole is an example of a strong singularity. Space-like singularities are a feature of non-rotating uncharged black holes as described by the Schwarzschild metric, while time-like singularities are those that occur in charged or rotating black hole exact solutions. Both of them have the property of geodesic incompleteness, in which either some light-path or some particle-path cannot be extended beyond a certain proper time or affine parameter (affine parameter being the null analog of proper time). The Penrose theorem guarantees that some sort of geodesic incompleteness occurs inside any black hole whenever matter satisfies reasonable energy conditions. The energy condition required for the black-hole singularity theorem is weak: it says that light rays are always focused together by gravity, never drawn apart, and this holds whenever the energy of matter is non-negative. Hawking's singularity theorem is for the whole universe, and works backwards in time: it guarantees that the (classical) Big Bang has infinite density. This theorem is more restricted and only holds when matter obeys a stronger energy condition, called the strong energy condition, in which the energy is larger than the pressure. 
All ordinary matter, with the exception of a vacuum expectation value of a scalar field, obeys this condition. During inflation, the universe violates the strong energy condition, and it was initially argued (e.g. by Starobinsky) that inflationary cosmologies could avoid the initial big-bang singularity. However, it has since been shown that inflationary cosmologies are still past-incomplete, and thus require physics other than inflation to describe the past boundary of the inflating region of spacetime. It is still an open question whether (classical) general relativity predicts spacelike singularities in the interior of realistic charged or rotating black holes, or whether these are artefacts of high-symmetry solutions and turn into null or timelike singularities when perturbations are added. Interpretation and significance In general relativity, a singularity is a place that objects or light rays can reach in a finite time where the curvature becomes infinite, or spacetime stops being a manifold. Singularities can be found in all the black-hole spacetimes, the Schwarzschild metric, the Reissner–Nordström metric, the Kerr metric and the Kerr–Newman metric, and in all cosmological solutions that do not have a scalar field energy or a cosmological constant. One cannot predict what might come "out" of a big-bang singularity in our past, or what happens to an observer that falls "in" to a black-hole singularity in the future, so they require a modification of physical law. Before Penrose, it was conceivable that singularities only form in contrived situations. For example, in the collapse of a star to form a black hole, if the star is spinning and thus possesses some angular momentum, maybe the centrifugal force partly counteracts gravity and keeps a singularity from forming. The singularity theorems prove that this cannot happen, and that a singularity will always form once an event horizon forms. In the collapsing star example, since all matter and energy is a source of gravitational attraction in general relativity, the additional angular momentum only pulls the star together more strongly as it contracts: the part outside the event horizon eventually settles down to a Kerr black hole (see No-hair theorem). The part inside the event horizon necessarily has a singularity somewhere. The proof is somewhat constructive: it shows that the singularity can be found by following light-rays from a surface just inside the horizon. But the proof does not say what type of singularity occurs: spacelike, timelike, null, orbifold, or a jump discontinuity in the metric. It only guarantees that if one follows the time-like geodesics into the future, it is impossible for the boundary of the region they form to be generated by the null geodesics from the surface. This means that the boundary must either come from nowhere or the whole future ends at some finite extension. An interesting "philosophical" feature of general relativity is revealed by the singularity theorems. Because general relativity predicts the inevitable occurrence of singularities, the theory is not complete without a specification for what happens to matter that hits the singularity. One can extend general relativity to a unified field theory, such as the Einstein–Maxwell–Dirac system, where no such singularities occur. Elements of the theorems There is a deep historical connection between the curvature of a manifold and its topology.
The Bonnet–Myers theorem states that a complete Riemannian manifold that has Ricci curvature everywhere greater than a certain positive constant must be compact. The condition of positive Ricci curvature is most conveniently stated in the following way: for every geodesic there is a nearby initially parallel geodesic that will bend toward it when extended, and the two will intersect at some finite length. When two nearby parallel geodesics intersect (see conjugate point), the extension of either one is no longer the shortest path between the endpoints. The reason is that two parallel geodesic paths necessarily collide after an extension of equal length, and if one path is followed to the intersection and then the other, the endpoints are connected by a non-geodesic path of equal length. This means that for a geodesic to be a shortest length path, it must never intersect neighboring parallel geodesics. Starting with a small sphere and sending out parallel geodesics from the boundary, assuming that the manifold has a Ricci curvature bounded below by a positive constant, none of the geodesics are shortest paths after a while, since they all collide with a neighbor. This means that after a certain amount of extension, all potentially new points have been reached. If all points in a connected manifold are at a finite geodesic distance from a small sphere, the manifold must be compact. Roger Penrose argued analogously in relativity. If null geodesics, the paths of light rays, are followed into the future, points in the future of the region are generated. If a point is on the boundary of the future of the region, it can only be reached by going at the speed of light, no slower, so null geodesics include the entire boundary of the proper future of a region. When the null geodesics intersect, they are no longer on the boundary of the future, they are in the interior of the future. So, if all the null geodesics collide, there is no boundary to the future. In relativity, the Ricci curvature, which determines the collision properties of geodesics, is determined by the energy tensor, and its projection on light rays is equal to the null-projection of the energy–momentum tensor and is always non-negative. This implies that the volume of a congruence of parallel null geodesics, once it starts decreasing, will reach zero in a finite time. Once the volume is zero, there is a collapse in some direction, so every geodesic intersects some neighbor. Penrose concluded that whenever there is a sphere where all the outgoing (and ingoing) light rays are initially converging, the boundary of the future of that region will end after a finite extension, because all the null geodesics will converge. This is significant, because the outgoing light rays for any sphere inside the horizon of a black hole solution are all converging, so the boundary of the future of this region is either compact or comes from nowhere. The future of the interior either ends after a finite extension, or has a boundary that is eventually generated by new light rays that cannot be traced back to the original sphere. Nature of a singularity The singularity theorems use the notion of geodesic incompleteness as a stand-in for the presence of infinite curvatures. Geodesic incompleteness is the notion that there are geodesics, paths of observers through spacetime, that can only be extended for a finite time as measured by an observer traveling along one.
Presumably, at the end of the geodesic the observer has fallen into a singularity or encountered some other pathology at which the laws of general relativity break down. Assumptions of the theorems Typically a singularity theorem has three ingredients: An energy condition on the matter, A condition on the global structure of spacetime, Gravity is strong enough (somewhere) to trap a region. There are various possibilities for each ingredient, and each leads to different singularity theorems. Tools employed A key tool used in the formulation and proof of the singularity theorems is the Raychaudhuri equation, which describes the divergence $\theta$ of a congruence (family) of geodesics. The divergence of a congruence is defined as the derivative of the log of the determinant of the congruence volume. The Raychaudhuri equation is $\dot{\theta} = -\sigma_{ab}\sigma^{ab} - \tfrac{1}{3}\theta^{2} - E[\vec{X}]^{a}{}_{a}$, where $\sigma_{ab}$ is the shear tensor of the congruence and $E[\vec{X}]^{a}{}_{a} = R_{mn}X^{m}X^{n}$ is also known as the Raychaudhuri scalar (see the congruence page for details). The key point is that $E[\vec{X}]^{a}{}_{a}$ will be non-negative provided that the Einstein field equations hold and the null energy condition holds and the geodesic congruence is null, or the strong energy condition holds and the geodesic congruence is timelike. When these hold, the divergence becomes infinite at some finite value of the affine parameter. Thus all geodesics leaving a point will eventually reconverge after a finite time, provided the appropriate energy condition holds, a result also known as the focusing theorem.
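The finite-parameter blow-up can be read off directly from the equation above. A worked sketch in LaTeX (standard, but the presentation is illustrative): dropping the two manifestly non-negative terms gives

\[
\dot{\theta} \;\le\; -\tfrac{1}{3}\theta^{2}
\qquad\Longrightarrow\qquad
\frac{d}{d\lambda}\!\left(\frac{1}{\theta}\right) \;=\; -\frac{\dot{\theta}}{\theta^{2}} \;\ge\; \frac{1}{3},
\]

so for an initially converging congruence, $\theta(0) = \theta_{0} < 0$, the quantity $1/\theta$ rises from $1/\theta_{0} < 0$ and reaches zero no later than $\lambda = 3/|\theta_{0}|$; that is, $\theta \to -\infty$ within finite affine parameter, which is the reconvergence claimed above.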
This is relevant for singularities thanks to the following argument: Suppose we have a spacetime that is globally hyperbolic, and two points $p$ and $q$ that can be connected by a timelike or null curve. Then there exists a geodesic of maximal length connecting $p$ and $q$. Call this geodesic $\gamma$. The geodesic $\gamma$ can be varied to a longer curve if another geodesic from $p$ intersects $\gamma$ at another point, called a conjugate point. From the focusing theorem, we know that all geodesics from $p$ have conjugate points at finite values of the affine parameter. In particular, this is true for the geodesic of maximal length. But this is a contradiction: one can therefore conclude that the spacetime is geodesically incomplete. In general relativity, there are several versions of the Penrose–Hawking singularity theorem. Most versions state, roughly, that if there is a trapped null surface and the energy density is nonnegative, then there exist geodesics of finite length that cannot be extended. These theorems, strictly speaking, prove that there is at least one non-spacelike geodesic that is only finitely extendible into the past, but there are cases in which the conditions of these theorems obtain in such a way that all past-directed spacetime paths terminate at a singularity. Versions There are many versions; below is the null version: Assume: The null energy condition holds. We have a noncompact connected Cauchy surface. We have a closed trapped null surface $\mathcal{T}$. Then, we either have null geodesic incompleteness, or closed timelike curves. Sketch of proof: Proof by contradiction. The boundary of the future of $\mathcal{T}$, denoted $\partial J^{+}(\mathcal{T})$, is generated by null geodesic segments originating from $\mathcal{T}$ with tangent vectors orthogonal to it. Being a trapped null surface, by the null Raychaudhuri equation, both families of null rays emanating from $\mathcal{T}$ will encounter caustics. (A caustic by itself is unproblematic. For instance, the boundary of the future of two spacelike separated points is the union of two future light cones with the interior parts of the intersection removed. Caustics occur where the light cones intersect, but no singularity lies there.) The null geodesics generating $\partial J^{+}(\mathcal{T})$ have to terminate, however, i.e. reach their future endpoints at or before the caustics. Otherwise, we can take two null geodesic segments, changing at the caustic, and then deform them slightly to get a timelike curve connecting a point on the boundary to a point on $\mathcal{T}$, a contradiction. But as $\mathcal{T}$ is compact, given a continuous affine parameterization of the geodesic generators, there exists a lower bound to the absolute value of the expansion parameter. So, we know caustics will develop for every generator before a uniform bound in the affine parameter has elapsed. As a result, $\partial J^{+}(\mathcal{T})$ has to be compact. Either we have closed timelike curves, or we can construct a congruence by timelike curves, and every single one of them has to intersect the noncompact Cauchy surface exactly once. Consider all such timelike curves passing through $\partial J^{+}(\mathcal{T})$ and look at their image on the Cauchy surface. Being the image of a compact set under a continuous map, the image also has to be compact. Being a timelike congruence, the timelike curves can't intersect, and so, the map is injective. If the Cauchy surface were noncompact, then the image has a boundary. We're assuming spacetime comes in one connected piece. But $\partial J^{+}(\mathcal{T})$ is compact and boundariless because the boundary of a boundary is empty. A continuous injective map can't create a boundary, giving us our contradiction. Loopholes: If closed timelike curves exist, then timelike curves don't have to intersect the partial Cauchy surface. If the Cauchy surface were compact, i.e. space is compact, the null geodesic generators of the boundary can intersect everywhere because they can intersect on the other side of space. Other versions of the theorem involving the weak or strong energy condition also exist. Modified gravity In modified gravity, the Einstein field equations do not hold and so these singularities do not necessarily arise. For example, in Infinite Derivative Gravity, it is possible for the Raychaudhuri scalar $R_{mn}X^{m}X^{n}$ to be negative even if the Null Energy Condition holds.
Physical sciences
Theory of relativity
Physics
294419
https://en.wikipedia.org/wiki/Sunscreen
Sunscreen
Sunscreen, also known as sunblock, sun lotion or sun cream, is a photoprotective topical product for the skin that helps protect against sunburn and prevent skin cancer. Sunscreens come as lotions, sprays, gels, foams (such as an expanded foam lotion or whipped lotion), sticks, powders and other topical products. Sunscreens are common supplements to clothing, particularly sunglasses, sunhats and special sun protective clothing, and other forms of photoprotection (such as umbrellas). Sunscreens may be classified according to the type of active ingredient(s) present in the formulation (inorganic compounds or organic molecules) as: Mineral sunscreens (also referred to as physical), which use only inorganic compounds (zinc oxide and/or titanium dioxide) as active ingredients. These ingredients primarily work by absorbing UV rays but also through reflection and refraction. Chemical sunscreens, which use organic molecules as active ingredients. These products are sometimes referred to as petrochemical sunscreens since the active organic molecules are synthesized starting from building blocks typically derived from petroleum. Chemical sunscreen ingredients also mainly work by absorbing the UV rays. Over the years, some organic UV absorbers have been heavily scrutinised to assess their toxicity, and a few of them have been banned in places such as Hawaii and Thailand for their impact on aquatic life and the environment. Hybrid sunscreens, which contain a combination of organic and inorganic UV filters. Medical organizations such as the American Cancer Society recommend the use of sunscreen because it aids in the prevention of squamous cell carcinomas. The routine use of sunscreens may also reduce the risk of melanoma. To effectively protect against all the potential damage of UV light, the use of broad-spectrum sunscreens (covering both UVA and UVB radiation) has been recommended. History Early civilizations used a variety of plant products to help protect the skin from sun damage. For example, ancient Greeks used olive oil for this purpose, and ancient Egyptians used extracts of rice, jasmine, and lupine plants whose products are still used in skin care today. Zinc oxide paste has also been popular for skin protection for thousands of years. Among the nomadic sea-going Sama-Bajau people of the Philippines, Malaysia, and Indonesia, a common type of sun protection is a paste called borak or burak, made from water weeds, rice and spices. It is used most commonly by women to protect the face and exposed skin areas from the harsh tropical sun at sea. In Myanmar, thanaka, a yellow-white cosmetic paste made of ground bark, is traditionally used for sun protection. In Madagascar, a ground wood paste called masonjoany has been worn for sun protection, as well as decoration and insect repellent, since the 18th century, and is ubiquitous in the Northwest coastal regions of the island to this day. The first ultraviolet B filters were produced in 1928. The first sunscreen followed in 1932, invented in Australia by chemist H.A. Milton Blake and formulated with the UV filter 'salol' (phenyl salicylate) at a concentration of 10%. Its protection was verified by the University of Adelaide. In 1936, L'Oréal released its first sunscreen product, formulated by French chemist Eugène Schueller. The US military was an early adopter of sunscreen.
In 1944, as the hazards of sun overexposure became apparent to soldiers stationed in the Pacific tropics at the height of World War II, Benjamin Green, an airman and later a pharmacist, produced Red Vet Pet (for red veterinary petrolatum) for the US military. Sales boomed when Coppertone improved and commercialized the substance under the Coppertone girl and Bain de Soleil branding in the early 1950s. In 1946, Austrian chemist Franz Greiter introduced a product called Gletscher Crème (Glacier Cream), which subsequently became the basis for the company Piz Buin, named in honor of the mountain where Greiter allegedly received the sunburn. In 1974, Greiter adapted earlier calculations from Friedrich Ellinger and Rudolf Schulze and introduced the "sun protection factor" (SPF), which has become the global standard for measuring UVB protection. It has been estimated that Gletscher Crème had an SPF of 2. Water-resistant sunscreens were introduced in 1977, and recent development efforts have focused on overcoming later concerns by making sunscreen protection both longer-lasting and broader-spectrum (protection from both UVA & UVB rays), more environmentally friendly, more appealing to use, and addressing the safety concerns of petrochemical sunscreens, i.e. FDA studies showing their systemic absorption into the bloodstream. Health effects Benefits Sunscreen use can help prevent melanoma and squamous cell carcinoma, two types of skin cancer. There is little evidence that it is effective in preventing basal cell carcinoma. A 2013 study concluded that the diligent, everyday application of sunscreen could slow or temporarily prevent the development of wrinkles and sagging skin. The study involved 900 white people in Australia and required some of them to apply a broad-spectrum sunscreen every day for four and a half years. It found that people who did so had noticeably more resilient and smoother skin than those assigned to continue their usual practices. A study on 32 subjects showed that daily use of sunscreen (SPF 30) reversed photoaging of the skin within 12 weeks and the amelioration continued until the end of the investigation period of one year. Sunscreen is inherently anti-ageing as the sun is the number one cause of premature ageing; it therefore may slow or temporarily prevent the development of wrinkles, dark spots, and sagging skin. Minimizing UV damage is especially important for children and fair-skinned individuals and those who have sun sensitivity for medical reasons. Risks In February 2019, the US Food and Drug Administration (FDA) started classifying already approved UV filter molecules into three categories: those which are generally recognized as safe and effective (GRASE), those which are non-GRASE due to safety issues, and those requiring further evaluation. As of 2021, only zinc oxide and titanium dioxide are recognized as GRASE. Two previously approved UV filters, para-aminobenzoic acid (PABA) and trolamine salicylate, were banned in 2021 due to safety concerns. The remaining FDA approved active ingredients were put in the third category as their manufacturers have yet to produce sufficient safety data, despite the fact that some of the chemicals have been sold in sunscreen products for more than 40 years.
Some researchers argue that the risk of sun-induced skin cancer outweighs concerns about toxicity and mutagenicity, although environmentalists say this ignores "ample safer alternatives available on the market containing the active ingredient minerals zinc oxide or titanium dioxide", which are also safer for the environment. Regulators can investigate and ban UV filters over safety concerns (such as PABA), which can result in withdrawal of products from the consumer market. Regulators, such as the TGA and the FDA, have also been concerned with recent reports of contamination of sunscreen products with known or possible human carcinogens such as benzene and benzophenone. Independent laboratory testing carried out by Valisure found benzene contamination in 27% of the sunscreens they tested, with some batches having up to triple the FDA's conditionally restricted limit of 2 parts per million (ppm). This resulted in a voluntary recall by some major sunscreen brands implicated in the testing; regulators also help publicise and coordinate such voluntary recalls. VOCs (volatile organic compounds) such as benzene are particularly harmful in sunscreen formulations, as many active and inactive ingredients can increase permeation across the skin. Butane, which is used as a propellant in spray sunscreens, has been found to have benzene impurities from the refinement process. There is a risk of an allergic reaction to sunscreen for some individuals, as "Typical allergic contact dermatitis may occur in individuals allergic to any of the ingredients that are found in sunscreen products or cosmetic preparations that have a sunscreen component. The rash can occur anywhere on the body where the substance has been applied and sometimes may spread to unexpected sites." Vitamin D production There are some concerns about potential vitamin D deficiency arising from prolonged use of sunscreen. The typical use of sunscreen does not usually result in vitamin D deficiency; however, extensive usage may. Sunscreen prevents ultraviolet light from reaching the skin, and even moderate protection can substantially reduce vitamin D synthesis. However, adequate amounts of vitamin D can be obtained via diet or supplements. Vitamin D overdose is impossible from UV exposure due to an equilibrium the skin reaches in which vitamin D degrades as quickly as it is created. High-SPF sunscreens filter out most UVB radiation, which triggers vitamin D production in the skin. However, clinical studies show that regular sunscreen use does not lead to vitamin D deficiency. Even high-SPF sunscreens allow a small amount of UVB to reach the skin, sufficient for vitamin D synthesis. Additionally, brief, unprotected sun exposure can produce ample vitamin D, but this exposure also risks significant DNA damage and skin cancer. To avoid these risks, vitamin D can be obtained safely through diet and supplements. Foods like fatty fish, fortified milk, and orange juice, along with supplements, provide necessary vitamin D without harmful sun exposure. Studies have shown that sunscreen with a high UVA protection factor enabled significantly higher vitamin D synthesis than a low UVA protection factor sunscreen, likely because it allows more UVB transmission. Measurements of protection Sun protection factor and labeling The sun protection factor (SPF rating, introduced in 1974) is a measure of the fraction of sunburn-producing UV rays that reach the skin.
For example, "SPF 15" means that 1/15 of the burning radiation will reach the skin, assuming sunscreen is applied evenly at a thick dosage of 2 milligrams per square centimeter (mg/cm2). It is important to note that sunscreens with higher SPF do not last or remain effective on the skin any longer than lower-SPF products and must be continually reapplied as directed, usually every two hours. The SPF is an imperfect measure of skin damage because invisible damage and skin malignant melanomas are also caused by ultraviolet A (UVA, wavelengths 315–400 or 320–400 nm), which does not primarily cause reddening or pain. Conventional sunscreen blocks very little UVA radiation relative to the nominal SPF; broad-spectrum sunscreens are designed to protect against both UVB and UVA. According to a 2004 study, UVA also causes DNA damage to cells deep within the skin, increasing the risk of malignant melanomas. Even some products labeled "broad-spectrum UVA/UVB protection" have not always provided good protection against UVA rays. Titanium dioxide probably gives good protection but does not completely cover the UVA spectrum, with early 2000s research suggesting that zinc oxide is superior to titanium dioxide at wavelengths 340–380 nm. Owing to consumer confusion over the real degree and duration of protection offered, labelling restrictions are enforced in several countries. In the EU, sunscreen labels can only go up to SPF 50+ (initially listed as 30 but soon revised to 50). Australia's Therapeutic Goods Administration increased the upper limit to 50+ in 2012. In its 2007 and 2011 draft rules, the US Food and Drug Administration (FDA) proposed a maximum SPF label of 50, to limit unrealistic claims. (As of August 2019, the FDA has not adopted the SPF 50 limit.) Others have proposed restricting the active ingredients to an SPF of no more than 50, due to lack of evidence that higher dosages provide more meaningful protection, despite a common misconception that protection scales directly with SPF, doubling when SPF is doubled. Different sunscreen ingredients have different effectiveness against UVA and UVB. The SPF can be measured by applying sunscreen to the skin of a volunteer and measuring how long it takes before sunburn occurs when exposed to an artificial sunlight source. In the US, such an in vivo test is required by the FDA. It can also be measured in vitro with the help of a specially designed spectrometer. In this case, the actual transmittance of the sunscreen is measured, along with the degradation of the product due to being exposed to sunlight. For this, the transmittance of the sunscreen must be measured over all wavelengths in sunlight's UVB–UVA range (290–400 nm), along with a table of how effective various wavelengths are in causing sunburn (the erythemal action spectrum) and the standard intensity spectrum of sunlight. Such in vitro measurements agree very well with in vivo measurements. Numerous methods have been devised for evaluation of UVA and UVB protection. The most-reliable spectrophotochemical methods eliminate the subjective nature of grading erythema. The ultraviolet protection factor (UPF) is a similar scale developed for rating fabrics for sun protective clothing. According to recent testing by Consumer Reports, UPF ~30+ is typical for protective fabrics, while UPF ~20 is typical for standard summer fabrics.
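The in vitro procedure just described reduces to weighting the measured transmittance by the erythemal action spectrum and the solar spectrum (the formula is stated formally in the next paragraph). A numerical sketch in Python, with made-up tabulated spectra standing in for the standardized CIE data:

import numpy as np

# Hypothetical inputs on a uniform 290-400 nm grid; real calculations use the
# standardized solar spectrum and the CIE erythemal action spectrum.
wl = np.arange(290, 401, 5, dtype=float)                   # wavelength, nm
E = np.interp(wl, [290.0, 320.0, 400.0], [0.1, 1.0, 1.5])  # solar irradiance (made up)
A = np.where(wl <= 298, 1.0, 10.0 ** (0.094 * (298 - wl))) # simplified CIE-like weighting
T = np.full(wl.shape, 0.05)                                # measured film transmittance
MPF = 1.0 / T                                              # monochromatic protection factor

# SPF = integral(A*E) / integral(A*E/MPF); on a uniform grid the d-lambda cancels.
spf = np.sum(A * E) / np.sum(A * E / MPF)
print(round(spf, 1))  # 20.0 here, since a flat 5% transmittance means MPF = 20 everywhere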
Mathematically, the SPF (or the UPF) is calculated from measured data as $\mathrm{SPF} = \frac{\int A(\lambda)\,E(\lambda)\,d\lambda}{\int A(\lambda)\,E(\lambda)/\mathrm{MPF}(\lambda)\,d\lambda}$, where $E(\lambda)$ is the solar irradiance spectrum, $A(\lambda)$ the erythemal action spectrum, and $\mathrm{MPF}(\lambda)$ the monochromatic protection factor, all functions of the wavelength $\lambda$. The MPF is roughly the inverse of the transmittance at a given wavelength. The combined SPF of two layers of sunscreen may be lower than the square of the single-layer SPF. UVA protection Persistent pigment darkening The persistent pigment darkening (PPD) method is a method of measuring UVA protection, similar to the SPF method of measuring sunburn protection. Originally developed in Japan, it is the preferred method used by manufacturers such as L'Oréal. Instead of measuring erythema, the PPD method uses UVA radiation to cause a persistent darkening or tanning of the skin. Theoretically, a sunscreen with a PPD rating of 10 should allow a person 10 times as much UVA exposure as they could receive without protection. The PPD method is an in vivo test like SPF. In addition, the European Cosmetic and Perfumery Association (Colipa) has introduced a method that, it is claimed, can measure this in vitro and provide parity with the PPD method. SPF equivalence As part of revised guidelines for sunscreens in the EU, there is a requirement to provide the consumer with a minimum level of UVA protection in relation to the SPF. This should be a UVA protection factor of at least 1/3 of the SPF to carry the UVA seal. The 1/3 threshold derives from the European Commission recommendation 2006/647/EC. This Commission recommendation specifies that the UVA protection factor should be measured using the PPD method as modified by the French health agency AFSSAPS (now ANSM) "or an equivalent degree of protection obtained with any in vitro method". A set of final US FDA rules effective from summer 2012 defines the phrase "broad spectrum" as providing UVA protection proportional to the UVB protection, using a standardized testing method. Star rating system In the UK and Ireland, the Boots star rating system is a proprietary in vitro method used to describe the ratio of UVA to UVB protection offered by sunscreen creams and sprays. Based on original work by Brian Diffey at Newcastle University, the Boots Company in Nottingham, UK, developed a method that has been widely adopted by companies marketing these products in the UK. One-star products provide the lowest ratio of UVA protection, five-star products the highest. The method was revised in light of the Colipa UVA PF test and the revised EU recommendations regarding UVA PF. The method still uses a spectrophotometer to measure absorption of UVA versus UVB; the difference stems from a requirement to pre-irradiate samples (where this was not previously required) to give a better indication of UVA protection and photostability when the product is used. With the current methodology, the lowest rating is three stars, the highest being five stars. In August 2007, the FDA put out for consultation the proposal that a version of this protocol be used to inform users of American products of the protection that they give against UVA; but this was not adopted, for fear it would be too confusing. PA system Asian brands, particularly Japanese ones, tend to use the Protection Grade of UVA (PA) system to measure the UVA protection that a sunscreen provides. The PA system is based on the PPD reaction and is now widely adopted on the labels of sunscreens.
According to the Japan Cosmetic Industry Association, PA+ corresponds to a UVA protection factor between two and four, PA++ between four and eight, and PA+++ more than eight. This system was revised in 2013 to include PA++++, which corresponds to a PPD rating of sixteen or above. Expiration date Some sunscreens include an expiration date, indicating when they may become less effective. Active ingredients Sunscreen formulations contain UV absorbing compounds (the active ingredients) dissolved or dispersed in a mixture of other ingredients, such as water, oils, moisturizers, and antioxidants. The UV filters can be either: Organic compounds which absorb ultraviolet light. Some organic compounds (bisoctrizole and phenylene bis-diphenyltriazine) also partially reflect incident light. These are also referred to as "chemical" UV filters. Inorganic compounds (zinc oxide and titanium dioxide), which reflect, scatter, and absorb UV light. These are also referred to as "mineral" filters. The organic compounds used as UV filters are often aromatic molecules conjugated with carbonyl groups. This general structure allows the molecule to absorb high-energy ultraviolet rays and release the energy as lower-energy rays, thereby preventing the skin-damaging ultraviolet rays from reaching the skin. So, upon exposure to UV light, most of the ingredients (with the notable exception of avobenzone) do not undergo significant chemical change, allowing these ingredients to retain the UV-absorbing potency without significant photodegradation. A chemical stabilizer is included in some sunscreens containing avobenzone to slow its breakdown. The stability of avobenzone can also be improved by bemotrizinol, octocrylene and various other photostabilisers. Most organic compounds in sunscreens slowly degrade and become less effective over the course of several years even if stored properly, resulting in the expiration dates calculated for the product. Sunscreening agents are used in some hair care products such as shampoos, conditioners and styling agents to protect against protein degradation and color loss. Currently, benzophenone-4 and ethylhexyl methoxycinnamate are the two sunscreens most commonly used in hair products. The common sunscreens used on skin are rarely used for hair products due to their texture and weight effects. UV filters usually need to be approved by local agencies (such as the FDA in the United States) to be used in sunscreen formulations. As of 2023, 29 compounds are approved in the European Union and 17 in the USA. No UV filters have been approved by the FDA for use in cosmetics since 1999. The following are the FDA allowable active ingredients in sunscreens: Zinc oxide was approved as a UV filter by the EU in 2016. Other ingredients approved within the EU and other parts of the world that have not been included in the current FDA Monograph: * Time and Extent Application (TEA), Proposed Rule on FDA approval originally expected 2009, now expected 2015. Many of the ingredients awaiting approval by the FDA are relatively new, and developed to absorb UVA. The 2014 Sunscreen Innovation Act was passed to accelerate the FDA approval process. Inactive ingredients It is known that SPF is affected not only by the choice of active ingredients and the percentage of active ingredients but also by the formulation of the vehicle/base.
Final SPF is also impacted by the distribution of active ingredients in the sunscreen, how evenly the sunscreen applies on the skin, how well it dries down on the skin and the pH value of the product, among other factors. Changing any inactive ingredient may potentially alter a sunscreen's SPF. When combined with UV filters, added antioxidants can work synergistically to affect the overall SPF value positively. Furthermore, adding antioxidants to sunscreen can amplify its ability to reduce markers of extrinsic photoaging, grant better protection from UV induced pigment formation, mitigate skin lipid peroxidation, improve the photostability of the active ingredients, neutralize reactive oxygen species formed by irradiated photocatalysts (e.g., uncoated TiO₂) and aid in DNA repair post-UVB damage, thus enhancing the efficiency and safety of sunscreens. Compared with sunscreen alone, it has been shown that the addition of antioxidants has the potential to suppress ROS formation by an additional 1.7-fold for SPF 4 sunscreens and 2.4-fold for SPF 15-to-SPF 50 sunscreens, but the efficacy depends on how well the sunscreen in question has been formulated. Sometimes osmolytes are also incorporated into commercially available sunscreens in addition to antioxidants, since they also aid in protecting the skin from the detrimental effects of UVR. Examples include the osmolyte taurine, which has demonstrated the ability to protect against UVB-radiation-induced immunosuppression, and the osmolyte ectoine, which aids in counteracting accelerated cellular aging & UVA-radiation-induced premature photoaging. Other inactive ingredients can also assist in photostabilizing unstable UV filters. Cyclodextrins have demonstrated the ability to reduce photodecomposition, protect antioxidants and limit skin penetration past the uppermost skin layers, allowing them to longer maintain the protection factor of sunscreens with UV filters that are highly unstable and/or easily permeate to the lower layers of the skin. Similarly, film-forming polymers like polyester-8 and polycrylene S1 have the ability to protect the efficacy of older petrochemical UV filters by preventing them from destabilizing due to extended light exposure. These kinds of ingredients also increase the water resistance of sunscreen formulations. In the 2010s and 2020s, there has been increasing interest in sunscreens that protect the wearer from the sun's high-energy visible light and infrared light as well as ultraviolet light. This is due to newer research revealing that blue & violet visible light and certain wavelengths of infrared light (e.g., NIR, IR-A) work synergistically with UV light in contributing to oxidative stress, free radical generation, dermal cellular damage, suppressed skin healing, decreased immunity, erythema, inflammation, dryness, and several aesthetic concerns, such as wrinkle formation, loss of skin elasticity and dyspigmentation. Increasingly, a number of commercial sunscreens are being produced that have manufacturer claims regarding skin protection from blue light, infrared light and even air pollution. However, as of 2021 there are no regulatory guidelines or mandatory testing protocols that govern these claims.
Historically, the American FDA has only recognized protection from sunburn (via UVB protection) and protection from skin cancer (via SPF 15+ with some UVA protection) as drug/medicinal sunscreen claims, so it does not have regulatory authority over sunscreen claims regarding protecting the skin from damage from these other environmental stressors. Since sunscreen claims not related to protection from ultraviolet light are treated as cosmeceutical claims rather than drug/medicinal claims, the innovative technologies and additive ingredients used to allegedly reduce the damage from these other environmental stressors may vary widely from brand to brand. Some studies show that mineral sunscreens primarily made with substantially large particles (i.e., neither nano nor micronized) may help protect from visible and infrared light to some degree, but these sunscreens are often unacceptable to consumers due to leaving an obligatory opaque white cast on the skin. Further research has shown that sunscreens with added iron oxide pigments and/or pigmentary titanium dioxide can provide the wearer with a substantial amount of HEVL protection. Cosmetic chemists have found that other cosmetic-grade pigments can be functional filler ingredients. Mica was discovered to have significant synergistic effects with UVR filters when formulated in sunscreens, in that it can notably increase the formula's ability to protect the wearer from HEVL. There is a growing amount of research demonstrating that adding various vitamer antioxidants (e.g., retinol, alpha tocopherol, gamma tocopherol, tocopheryl acetate, ascorbic acid, ascorbyl tetraisopalmitate, ascorbyl palmitate, sodium ascorbyl phosphate, ubiquinone) and/or a mixture of certain botanical antioxidants (e.g., epigallocatechin-3-gallate, β-carotene, Vitis vinifera, silymarin, spirulina extract, chamomile extract and possibly others) to sunscreens efficaciously aids in reducing damage from the free radicals produced by exposure to solar ultraviolet radiation, visible light, near infrared radiation and infrared-A radiation. Since sunscreen's active ingredients work preventatively by creating a shielding film on the skin that absorbs, scatters, and reflects light before it can reach the skin, UV filters have been deemed an ideal “first line of defense” against sun damage when exposure can't be avoided. Antioxidants have been deemed a good “second line of defense” since they work responsively by decreasing the overall burden of free radicals that do reach the skin. The degree of free radical protection from the entire solar spectral range that a sunscreen can offer has been termed the "radical protection factor" (RPF) by some researchers. Application SPF 30 or above must be used to effectively prevent UV rays from damaging skin cells. This is the amount that is recommended to protect against skin cancer. Sunscreen must also be applied thoroughly and re-applied during the day, especially after being in the water. Special attention should be paid to areas like the ears and nose, which are common spots of skin cancer. Dermatologists may be able to advise about what sunscreen is best to use for specific skin types. The dose used in FDA sunscreen testing is 2 mg/cm2 of exposed skin. If one assumes an "average" adult build of height 5 ft 4 in (163 cm) and weight 150 lb (68 kg) with a 32-inch (82-cm) waist, that adult wearing a bathing suit covering the groin area should apply approximately 30 g (or 30 ml, approximately 1 oz) evenly to the uncovered body area.
This can be more easily thought of as a "golf ball" size amount of product per body, or at least six teaspoonfuls. Larger or smaller individuals should scale these quantities accordingly. Considering only the face, this translates to about 1/4 to 1/3 of a teaspoon for the average adult face. Some studies have shown that people commonly apply only 1/4 to 1/2 of the amount recommended for achieving the rated sun protection factor (SPF), and in consequence the effective SPF should be downgraded to a 4th root or a square root of the advertised value, respectively. A later study found a significant exponential relation between SPF and the amount of sunscreen applied, and the results are closer to linearity than expected by theory.
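Those root rules follow from a simple exponential model in which the rated SPF enters as a power of the applied fraction of the 2 mg/cm2 test dose. A small Python sketch (illustrative only; as noted above, the later study found behaviour closer to linear, so this model is a pessimistic bound):

def effective_spf(labeled_spf: float, applied_mg_per_cm2: float) -> float:
    # Exponential model: protection = SPF ** (applied dose / test dose of 2 mg/cm2).
    return labeled_spf ** (applied_mg_per_cm2 / 2.0)

for dose in (2.0, 1.0, 0.5):  # full, half, and quarter of the test dose
    print(f"{dose} mg/cm2 -> effective SPF {effective_spf(50, dose):.1f}")
# 2.0 -> 50.0; 1.0 -> 7.1 (square root); 0.5 -> 2.7 (fourth root)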
Claims that substances in pill form can act as sunscreen are false and disallowed in the United States. Regulation Palau On 1 January 2020, Palau banned the manufacturing and selling of sun cream products containing any of the following ingredients: benzophenone-3, octyl methoxycinnamate, octocrylene, 4-methyl-benzylidene camphor, triclosan, methylparaben, ethylparaben, butylparaben, benzyl paraben, and phenoxyethanol. The decision was taken to protect the local coral reef and sea life. Those compounds are known or suspected to be harmful to coral or other sea life. United States Sunscreen labeling standards have been evolving in the United States since the FDA first adopted the SPF calculation in 1978. The FDA issued a comprehensive set of rules in June 2011, taking effect in 2012–2013, designed to help consumers identify and select suitable sunscreen products offering protection from sunburn, early skin aging, and skin cancer. However, unlike other countries, the United States classifies sunscreen as an over-the-counter drug rather than a cosmetic product. As FDA approval of a new drug is typically far slower than for a cosmetic, the result is fewer ingredients available for sunscreen formulations in the US compared with many other countries. In 2019, the FDA proposed tighter regulations on sun protection and general safety, including the requirement that sunscreen products with SPF greater than 15 must be broad spectrum, and imposing a prohibition on products with SPF greater than 60. To be classified as "broad spectrum", sunscreen products must provide protection against both UVA and UVB, with specific tests required for both. Claims of products being "waterproof" or "sweatproof" are prohibited, while the terms "sunblock", "instant protection" and "protection for more than 2 hours" are all prohibited without specific FDA approval. "Water resistance" claims on the front label must indicate how long the sunscreen remains effective and specify whether this applies to swimming or sweating, based on standard testing. Sunscreens must include standardized "Drug Facts" information on the container. However, there is no regulation that deems it necessary to mention whether the contents contain nanoparticles of mineral ingredients. Furthermore, US products do not require the expiration date of products to be displayed on the label. In 2021, the FDA introduced an additional administrative order regarding the safety classification of cosmetic UV filters, to categorize a given ingredient as either: Generally recognized as safe and effective (GRASE); Not GRASE due to safety issues; Not GRASE because additional safety data are needed. To be considered a GRASE active ingredient, the FDA requires it to have undergone both non-clinical animal studies as well as human clinical studies. The animal studies evaluate the potential for inducing carcinogenesis, genetic or reproductive harm, and any toxic effects of the ingredient once absorbed and distributed in the body. The human trials expand upon the animal trials, providing additional information on safety in the pediatric population, protection against UVA and UVB, and the potential for skin reactions after application. Two previously approved UV filters, para-aminobenzoic acid (PABA) and trolamine salicylate, were reclassified as not GRASE due to safety concerns and have consequently been removed from the market. Europe In Europe, sunscreens are considered a cosmetic product rather than an over-the-counter drug. These products are regulated by the Cosmetic Regulation (EC) No 1223/2009, which was created in July 2013. The recommendations for formulating sunscreen products are guided by the Scientific Committee on Consumer Safety (SCCS). The regulation of cosmetic products in Europe requires the producer to follow six domains when formulating their product: I. A cosmetic safety report must be conducted by qualified personnel. II. The product must not contain substances banned for cosmetic products. III. The product must not contain substances restricted for cosmetic products. IV. The product must adhere to the approved list of colourants for cosmetic products. V. The product must adhere to the approved list of preservatives for cosmetic products. VI. The product must contain UV filters approved in Europe. According to the EC, sunscreens at a minimum must exhibit: an SPF of 6; a UVA/UVB ratio ≥ 1/3; a critical wavelength of at least 370 nanometers (indicating that the product is "broad-spectrum"); instructions for use and precautions; and evidence that the sunscreen meets UVA and SPF requirements. Labels of European sunscreens must disclose the use of nanoparticles in addition to the shelf life of the product. Canada Regulation of sunscreen depends on the ingredients used; a product is then classified and follows the regulations for either natural health products or drug products. Companies must complete a product licensing application prior to introducing their sunscreen on the market. ASEAN (Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, Vietnam) The regulation of sunscreen for ASEAN countries closely follows European regulations. However, products are regulated by the ASEAN scientific community rather than the SCCS. Additionally, there are minor differences in the allowed phrasing printed on sunscreen packages. Japan Sunscreen is considered a cosmetic product, and is regulated under the Japan Cosmetic Industry Association (JCIA). Products are regulated mostly for the type of UV filter and SPF. SPF may range from 2 to 50. China Sunscreen is regulated as a cosmetic product under the State Food and Drug Administration (SFDA). The list of approved filters is the same as it is in Europe. However, sunscreen in China requires safety testing in animal studies prior to approval. Australia Sunscreens are divided into therapeutic and cosmetic sunscreens. Therapeutic sunscreens are classified into primary sunscreens (SPF ≥ 4) and secondary sunscreens (SPF < 4). Therapeutic sunscreens are regulated by the Therapeutic Goods Administration (TGA). Cosmetic sunscreens are products that contain a sunscreen ingredient, but do not protect from the sun.
These products are regulated by the National Industrial Chemicals Notification and Assessment Scheme (NICNAS). New Zealand Sunscreen is classified as a cosmetic product, and closely follows EU regulations. However, New Zealand has a more extensive list of approved UV filters than Europe. Mercosur Mercosur is an international group consisting of Argentina, Brazil, Paraguay, and Uruguay. Regulation of sunscreen as a cosmetic product began in 2012, and is similar in structure to the European regulations. Sunscreens must meet specific standards including water resistance, sun protection factor, and a UVA/UVB ratio of 1/3. The list of approved sunscreen ingredients is greater than in Europe or the US. Environmental effects Some sunscreen active ingredients have been shown to cause toxicity towards marine life and coral, resulting in bans in different states, countries and ecological areas. Coral reefs, comprising organisms in delicate ecological balances, are vulnerable to even minor environmental disturbances. Factors like temperature changes, invasive species, pollution, and detrimental fishing practices have previously been highlighted as threats to coral health. In 2018, Hawaii passed legislation that prohibits the sale of sunscreens containing oxybenzone and octinoxate. In sufficient concentrations, oxybenzone and octinoxate can damage coral DNA, induce deformities in coral larvae, heighten the risk of viral infections, and make corals more vulnerable to bleaching. Such threats are even more concerning given that coral ecosystems are already compromised by climate change, pollution, and other environmental stressors. While there is ongoing debate regarding the real-world concentrations of these chemicals versus laboratory settings, an assessment in Kahaluu Bay in Hawaii showed oxybenzone concentrations to be 262 times higher than what the U.S. Environmental Protection Agency designates as high-risk. Another study in Hanauma Bay found levels of the chemical ranging from 30 ng/L to 27,880 ng/L, noting that concentrations beyond 63 ng/L could induce toxicity in corals. Echoing Hawaii's initiative, other regions including Key West, Florida, the U.S. Virgin Islands, Bonaire, and Palau have also instituted bans on sunscreens containing oxybenzone and octinoxate. The environmental implications of sunscreen usage on marine ecosystems are multi-faceted and vary in severity. In a 2015 study, titanium dioxide nanoparticles, when introduced to water and subjected to ultraviolet light, were shown to amplify the production of hydrogen peroxide, a compound known to damage phytoplankton. In 2002, research indicated that sunscreens might escalate virus abundance in seawater, compromising the marine environment in a manner akin to other pollutants. Further probing the matter, a 2008 investigation examining a variety of sunscreen brands, protective factors, and concentrations revealed unanimous bleaching effects on hard corals. Alarmingly, the degree of bleaching magnified with increasing sunscreen quantities. When assessing individual compounds prevalent in sunscreens, substances such as butylparaben, ethylhexylmethoxycinnamate, benzophenone-3, and 4-methylbenzylidene camphor induced complete coral bleaching at even minimal concentrations. 
A 2020 study in the journal Current Dermatology Reports summarized the situation: the US FDA currently approves only zinc oxide (ZnO) and titanium dioxide (TiO2) as safe ultraviolet filters, and those concerned with coral bleaching should use non-nano ZnO or TiO2, since these have the most consistent safety data. Research and Development New products are in development, such as sunscreens based on bioadhesive nanoparticles. These function by encapsulating commercially used UV filters, while being not only adherent to the skin but also non-penetrant. This strategy inhibits primary UV-induced damage as well as secondary free radicals. UV filters based on sinapate esters are also under study. Sunscreens with natural and sustainable connotations are increasingly being developed, as a result of increased environmental concern.
Biology and health sciences
Hygiene products
Health
294656
https://en.wikipedia.org/wiki/Andrology
Andrology
Andrology (from Ancient Greek ἀνήρ, anēr, genitive ἀνδρός, andros, 'man', and -λογία, -logia) is the medical specialty that deals with male health, particularly relating to the problems of the male reproductive system and urological problems that are unique to men. It is the parallel to gynecology, which deals with medical issues which are specific to female health, especially reproductive and urologic health. Process Andrology covers anomalies in the connective tissues pertaining to the genitalia, as well as changes in the volume of cells, such as in genital hypertrophy or macrogenitosomia. From reproductive and urologic viewpoints, male-specific medical and surgical procedures include vasectomy, vasovasostomy (one of the vasectomy reversal procedures), orchidopexy, circumcision, sperm/semen cryopreservation, surgical sperm retrieval, semen analysis (for fertility or post-vasectomy), and sperm preparation for assisted reproductive technology (ART), as well as intervention to deal with male genitourinary disorders such as the following: History Unlike gynaecology, which has a plethora of medical board certification programs worldwide, andrology has none. Andrology has only been studied as a distinct specialty since the late 1960s: the first specialist journal on the subject was the German periodical Andrologie (now called Andrologia), published from 1969 onwards. The next specialty journal, covering both basic and clinical andrology, was the International Journal of Andrology, established in 1978, which became the official journal of the European Academy of Andrology in 1992. In 1980 the American Society of Andrology launched the Journal of Andrology. In 2012, these two society journals merged into one premier journal in the field, named Andrology, with the first issue published in January 2013.
Biology and health sciences
Fields of medicine
Health
294752
https://en.wikipedia.org/wiki/Quince
Quince
The quince (Cydonia oblonga) is the sole member of the genus Cydonia in the Malinae subtribe (which contains apples, pears, and other fruits) of the Rosaceae family. It is a deciduous tree that bears hard, aromatic bright golden-yellow pome fruit, similar in appearance to a pear. Ripe quince fruits are hard, tart, and astringent. They are eaten raw or processed into jam, quince cheese, or alcoholic drinks. The quince tree is sometimes grown as an ornamental plant for its attractive pale pink blossoms and as a miniature bonsai plant. In ancient Greece, the word for quince was used slightly ribaldly to signify teenage breasts. Description Quinces are shrubs or small trees up to tall and wide. Young twigs are covered in a grey down. The leaves are oval, and are downy on the underside. The solitary flowers, produced in late spring after the leaves, are white or pink. The ripe fruit is aromatic but remains hard; gritty stone cells are dispersed through the flesh. It is larger than many apples, weighing as much as , often pear-shaped but sometimes roughly spherical. The seeds contain nitriles, common in the seeds of the rose family. In the stomach, enzymes or stomach acid or both cause some of the nitriles to be hydrolysed and produce toxic hydrogen cyanide, which is a volatile gas. The seeds are toxic only if eaten in large quantities. History Quince is native to the Hyrcanian forests south of the Caspian Sea. From that centre of origin it was spread radially by Neolithic farmers, 5000 to 3000 BC, to secondary centres including Turkey, Azerbaijan, Turkmenistan, Afghanistan, Iran, and Syria. In turn, landraces of quince were then distributed across Europe, Russia, China, India, and North Africa. It reached Britain in the 16th century. Settlers brought it to North America in the 17th century, and to Central and South America in the 18th century. The fruit was known in the Akkadian language as supurgillu, "quinces" (collective plural), which was borrowed into Aramaic as sparglin; it was known in Judea, in Mishnaic Hebrew, as prishin (a loanword from Jewish Palestinian Aramaic, "the miraculous [fruit]"); quince flourished in the heat of the Mesopotamian plain, where apples did not. It was cultivated from an archaic period around the Mediterranean. Some ancients called the fruit "golden apples". The Greeks associated it with Kydonia on Crete, as the "Cydonian pome", and Theophrastus, in his Enquiry into Plants, noted that quince was one of many fruiting plants that do not come true from seed. As a sacred emblem of Aphrodite, a quince figured in a lost poem of Callimachus that survives in a prose epitome: seeing his beloved in the courtyard of the temple of Aphrodite, Acontius plucks a quince from the "orchard of Aphrodite", inscribes its skin and furtively rolls it at the feet of her illiterate nurse, who, her curiosity aroused, hands it to the girl to read aloud, and the girl finds herself saying "I swear by Aphrodite that I will marry Acontius". A vow thus spoken in the goddess's temenos cannot be broken. Pliny the Elder mentions "numerous varieties" of quince in his Natural History and describes four. Quinces are ripe on the tree only briefly: the Roman cookbook De re coquinaria of Apicius specifies that, to keep quinces, one should select perfect unbruised fruits and keep stems and leaves intact, submerged in honey and reduced wine. Taxonomy Cydonia is in the subfamily Amygdaloideae.
The modern name originated in the 14th century as a plural of quoyn, via Old French cooin, from Latin cotoneum malum / cydonium malum, ultimately from Greek κυδώνιον μῆλον, kydonion melon, "Kydonian apple". Cultivation Quince is a hardy, drought-tolerant shrub which adapts to many soils of low to medium pH. It tolerates both shade and sun, but sunlight is required to produce larger flowers and ensure fruit ripening. It is a hardy plant that does not require much maintenance and tolerates years without pruning or major insect and disease problems. Quince is cultivated on all continents in warm-temperate and temperate climates. It requires a cool period of the year to flower properly. Propagation is done by cuttings or layering; the former method produces better plants, but they take longer to mature than those produced by the latter. Named cultivars are propagated by cuttings or layers grafted on quince rootstock. Propagation by seed is not used commercially. Quince forms thick bushes, which must be pruned and reduced to a single stem to grow fruit-bearing trees for commercial use. The tree is self-pollinating, but it produces better yields when cross-pollinated. Fruits are typically left on the tree to ripen fully. In warmer climates, the fruit may become soft to the point of being edible, but additional ripening may be required in cooler climates. Fruits are harvested in late autumn, before the first frosts. Quince is used as rootstock for certain pear cultivars. In Europe, quinces are commonly grown in central and southern areas where the summers are sufficiently hot for the fruit to fully ripen. They are not grown in large amounts; typically one or two quince trees are grown in a mixed orchard with several apples and other fruit trees. In the 18th-century New England colonies, for example, there was always a quince at the lower corner of the vegetable garden, Ann Leighton notes in records of Portsmouth, New Hampshire and Newburyport, Massachusetts. Charlemagne directed that quinces be planted in well-stocked orchards. Quinces in England are first recorded in about 1275, when Edward I had some planted at the Tower of London. Pests and diseases Quince is subject to a variety of pest insects including aphids, scale insects, mealybugs, and moth caterpillars such as leafrollers (Tortricidae) and codling moths. While quince is a hardy shrub, it may develop fungal diseases in hot weather, resulting in premature leaf fall. Quince leaf blight, caused by the fungus Diplocarpon mespili, presents a threat in wet summers, causing severe leaf spotting and early defoliation, and affecting fruit to a lesser extent. Cedar-quince rust, caused by Gymnosporangium clavipes, requires two hosts to complete its life cycle, one usually a juniper and the other a member of the Rosaceae. Appearing as a red excrescence on various parts of the plant, it may affect quinces grown near junipers. Production In 2021, world production of quinces was 697,563 tonnes, with Turkey and China together accounting for 43% of the world total. Cultivars The cultivars 'Vranja' Nenadovic and 'Serbian Gold' have gained the Royal Horticultural Society's Award of Garden Merit. Uses Nutrition A raw quince is 84% water and 15% carbohydrates, with negligible fat and protein. The fruit provides a moderate amount of vitamin C (18% of the Daily Value), with no other micronutrients in significant percentage of the Daily Value.
Culinary use Quinces are appreciated for their intense aroma, flavour, and tartness. However, most varieties are too hard and tart to be eaten raw. They may be cooked or roasted and used for jams, marmalade, jellies, or pudding. A few varieties, such as 'Aromatnaya' and 'Kuganskaya', can be eaten raw. High in pectin, they are used to make jam, jelly and quince pudding, or they may be peeled, then roasted, baked or stewed; pectin levels diminish as the fruit ripens. Long cooking with sugar turns the flesh of the fruit red due to the presence of pigmented anthocyanins. The strong flavour means they can be added in small quantities to apple pies and jam. Adding a diced quince to apple sauce enhances its taste. The term "marmalade", originally meaning a quince jam, derives from marmelo, the Portuguese word for this fruit. Quince cheese or quince jelly originated on the Iberian peninsula and is a firm, sticky, sweet, reddish hard paste made by slowly cooking down the quince fruit with sugar. It is called dulce de membrillo in the Spanish-speaking world, where it is eaten with manchego cheese. Quince is used in the Levant, especially in Syria, where it is added to either chicken or kibbeh to create an intense and unique taste, as with kibbeh safarjaliyeh. Alcoholic drink In the Balkans, quince eau-de-vie (rakija) is made. Ripe fruits of sweeter varieties are washed and cleared of rot and seeds, then crushed or minced, mixed with cold or boiling sweetened water and yeast, and left for several weeks to ferment. The fermented mash is distilled once, giving a distillate of 20–30% ABV, or twice, producing an approximately 60% ABV liquor. The two distillates may be mixed or diluted with distilled water to obtain the final product, containing 42–43% ABV. In the Alsace region of France and the Valais region of Switzerland, liqueur de coing made from quince is used as a digestif. In Carolina in 1709, John Lawson allowed that he was "not a fair judge of the different sorts of Quinces, which they call Brunswick, Portugal and Barbary", but he noted "of this fruit they make a wine or liquor which they call Quince-Drink, and which I approve of beyond any that their country affords, though a great deal of cider and perry is there made. The Quince-Drink most commonly purges." Cultural associations Ancient Greek poets such as Ibycus and Aristophanes used quinces (kydonia) as a mildly ribald term for teenage breasts. In Plutarch's Lives, Solon is said to have decreed that "bride and bridegroom shall be shut into a chamber, and eat a quince together." The hero Hercules is associated with golden apples; these are thought by some scholars probably to have been quinces. When a baby is born in the Balkans, a quince tree is planted as a symbol of fertility, love and life. Edward Lear's 1870 nonsense poem The Owl and the Pussycat contains the lines "They dined on mince, and slices of quince, which they ate with a runcible spoon". Kate Young writes in The Guardian that the poem may be nonsense, but that slices of quince work well with a meringue and whipped cream dessert.
Biology and health sciences
Pomes
Plants
294812
https://en.wikipedia.org/wiki/Barbecue%20grill
Barbecue grill
A barbecue grill or barbeque grill (known as a barbecue or barbie in Australia and New Zealand) is a device that cooks food by applying heat from below. There are several varieties of grills, with most falling into one of three categories: gas-fueled, charcoal, or electric. There is debate over which method yields superior results. History in the Americas Grilling has existed in the Americas since pre-colonial times. The Arawak people of South America roasted meat on a wooden structure the Spanish called a barbacoa. For centuries, the term barbacoa referred to the wooden structure rather than the act of grilling, but it was eventually modified to "barbecue". It was also applied to the pit-style cooking techniques now frequently used in the southeastern United States. Barbecue was originally used to slow-cook hogs; however, different ways of preparing food led to regional variations. Over time, other foods were cooked in a similar fashion, with hamburgers and hot dogs being recent additions. Edward G. Kingsford invented the modern charcoal briquette. Kingsford was a relative of Henry Ford, who assigned him the task of establishing a Ford auto parts plant and sawmill in northern Michigan, a challenge that Kingsford embraced. The local community grew and was named Kingsford in his honor. Kingsford noticed that Ford's Model T production lines were generating a large amount of wood scraps that were being discarded. He suggested to Ford that a charcoal manufacturing facility be established next to the assembly line to process and sell charcoal under the Ford name at Ford dealerships. Several years after Kingsford's death, the chemical company was sold to local businessmen and renamed the Kingsford Chemical Company. George Stephen created the iconic hemispherical grill design, jokingly called "Sputnik" by Stephen's neighbors. Stephen, a welder, worked for Weber Brothers Metal Works, a metal fabrication shop primarily concerned with welding steel spheres together to make buoys. Stephen was tired of the wind blowing ash onto his food when he grilled, so he took the lower half of a buoy, welded three steel legs onto it, and fabricated a shallower hemisphere for use as a lid. He took the results home and, following some initial success, started the Weber-Stephen Products Company. The gas grill was invented in the late 1930s by Don McGlaughlin, owner of the Chicago Combustion Corporation, known today as LazyMan. McGlaughlin developed the first built-in gas grill from his successful gas broiler known as the BROILBURGER. These first Lazy-Man grills were marketed as "open-fire charcoal-type gas broilers" which featured "permanent coals", otherwise known as lava rock. In the 1950s, most residential households did not have a barbecue, so the term "broiler" was used when marketing to commercial establishments. The gas open-broiler design was adapted into the first portable gas grill in 1954 by the Chicago Combustion Corporation as the Model AP. McGlaughlin's portable design was the first to use 20-lb propane cylinders, which had previously been used exclusively by plumbers as a fuel source. Types Electric With an electric grill, the heat comes from an electric heating element. Neither coal nor briquettes are needed. Gas Gas-fueled grills typically use propane, butane (liquefied petroleum gas), or natural gas as their fuel source, with the gas flame either cooking food directly or heating grilling elements which in turn radiate the heat necessary to cook food.
Gas grills are available in sizes ranging from small single-steak grills up to large industrial-sized restaurant grills able to cook enough meat to feed a hundred or more people. According to Better Homes and Gardens magazine, "gas grills are easier to start and generally heat up faster than charcoal grills." Some gas grills can be switched between liquefied petroleum gas and natural gas fuel, although this requires physically changing key components, including burners and regulator valves. The majority of gas grills follow the cart grill design concept: the grill unit itself is attached to a wheeled frame that holds the fuel tank. The wheeled frame may also support side tables, storage compartments, and other features. A recent trend in gas grills is for manufacturers to add an infrared radiant burner to the back of the grill enclosure. This radiant burner provides an even heat across the burner and is intended for use with a horizontal rotisserie. A meat item (whole chicken, beef roast, pork loin roast) is placed on a metal skewer that is rotated by an electric motor. Smaller cuts of meat can be grilled in this manner using a round metal basket that slips over the metal skewer. Another type of gas grill gaining popularity is called a flattop grill. According to Hearth and Home magazine, flattop grills, "on which food cooks on a griddle-like surface and is not exposed to an open flame at all", are an emerging trend in the outdoor grilling market. A small metal "smoker box" containing wood chips may be used on a gas grill to give a smoky flavor to the grilled foods. Barbecue purists would argue that to get a true smoky flavor (and smoke ring) the user has to cook low and slow, indirectly, using wood or charcoal; gas grills are difficult to maintain at the low temperatures required (about 225-250 °F, or 107-121 °C), especially for extended periods. Infrared Infrared grills work by igniting a gas fuel to heat a ceramic tile, causing it to emit infrared radiation by which the food is cooked. The thermal radiation is generated when heat from the movement of charged particles within atoms is converted to electromagnetic radiation in the infrared frequency range. Infrared grills allow users to adjust cooking temperature more easily than charcoal grills, and are usually able to reach higher temperatures than standard gas grills, making them popular for searing items quickly. Charcoal Charcoal grills use either charcoal briquettes or natural lump charcoal as their fuel source. When burned, the charcoal transforms into embers radiating the heat necessary to cook food. There is contention among grilling enthusiasts over which type of charcoal is best for grilling. Users of charcoal briquettes emphasize the uniformity in size, burn rate, heat creation, and quality exemplified by briquettes. Users of all-natural lump charcoal emphasize its subtle smoky aromas, high heat production, and the lack of binders and fillers often present in briquettes. There are many different charcoal grill configurations. Grills can be square, round, or rectangular; some have lids while others do not; and they may or may not have a venting system for heat control. The majority of charcoal grills, however, fall into the following categories: Brazier The simplest and most inexpensive of charcoal grills, the brazier grill is made of wire and sheet metal and composed of a cooking grid placed over a charcoal pan. Usually the grill is supported by legs attached to the charcoal pan.
The brazier grill does not have a lid or venting system. Heat is adjusted by moving the cooking grid up or down over the charcoal pan. Even after George Stephen invented the kettle grill in the early 1950s, the brazier grill remained a dominant charcoal grill type for a number of years. Brazier grills are available at most discount department stores during the summer. Pellet grill Pellet technology is widely used in home heating in certain parts of North America, where softer woods, including pine, are often burned. Pellets made for home heating are not cooking-grade and should not be used in pellet grills. Square charcoal The square charcoal grill is a hybrid of the brazier and the kettle grill. It has a shallow pan like the brazier and normally, at most, a simple method of adjusting the heat. However, it has a lid like a kettle grill and basic adjustable vents. The square charcoal grill is, as expected, priced between the brazier and kettle grill, with the most basic models priced around the same as the most expensive braziers and the most expensive models competing with basic kettle grills. These grills are available at discount stores and have largely displaced most larger braziers. Square charcoal grills almost exclusively have four legs, with two wheels on the back so the grill can be tilted back using the lid handles and rolled. More expensive examples have baskets and shelves mounted on the grill. Shichirin (hibachi) The traditional Japanese hibachi is a heating device and not usually used for cooking. In English, however, "hibachi" often refers to small cooking grills typically made of aluminum or cast iron, with the latter generally being of a higher quality. Owing to their small size, hibachi grills are popular as a form of portable barbecue. They resemble traditional Japanese charcoal-heated cooking utensils called shichirin. Alternatively, "hibachi-style" is often used in the U.S. as a term for Japanese teppanyaki cooking, in which gas-heated hotplates are integrated into tables around which many people (often multiple parties) can sit and eat at once. The chef performs the cooking in front of the diners, typically with theatrical flair, such as setting a volcano-shaped stack of raw onion rings alight. In its most common form, the hibachi is an inexpensive grill made of either sheet steel or cast iron and composed of a charcoal pan and two small, independent cooking grids. Like the brazier grill, heat is adjusted by moving the cooking grids up and down. Also like the brazier grill, the hibachi does not have a lid, though some hibachi designs have venting systems for heat control. The hibachi is a good grill choice for those who do not have much space for a larger grill, or those who wish to take their grill traveling. Binchō-tan is the most suitable fuel for a shichirin. Kettle The kettle grill is considered the classic American grill design. The original and often-copied Weber kettle grill was invented in 1952 by George Stephen. Ceramic cooker The ceramic cooker design has been around for roughly 3,000 years. The shichirin, a Japanese grill traditionally of ceramic construction, has existed in its current form since the Edo period; however, more recent designs have been influenced by the mushikamado, a traditional Japanese cooking appliance that gained recognition among Americans during World War II. Now it is more commonly referred to as a kamado, a Japanese word meaning "stove" or "cooking range".
The ceramic cooker is more versatile than the kettle grill, as the ceramic chamber retains heat and moisture more efficiently. Ceramic cookers are equally adept at grilling, smoking, and barbecuing foods. Tandoor oven A tandoor is used for cooking certain types of Iranian, Indian and Pakistani food, such as tandoori chicken and naan. In a tandoor, the wood fire is kept in the bottom of the oven and the food to be cooked is put on long skewers and inserted into the oven from an opening at the top, so the meat items are above the coals of the fire. This method of cooking involves both grilling and oven cooking, as the item to be cooked is exposed both to high direct infrared heat and to the heat of the air in the oven. Tandoor ovens operate at very high temperatures and cook the meat items very quickly. Portable charcoal The portable charcoal grill normally falls into either the brazier or kettle grill category. Some are rectangular in shape. A portable charcoal grill is usually quite compact and has features that make it easier to transport, making it a popular grill for tailgating. Often the legs fold up and lock into place so the grill will fit into a car trunk more easily. Most portable charcoal grills have venting, legs, and lids, though some models do not have lids (making them, technically, braziers). There are also grills designed without venting to prevent ash fallout, for use in locations where ash may damage ground surfaces. Some portable grills are designed to replicate the function of a larger, more traditional grill or brazier and may include spit roasting as well as a hood and additional grill areas under the hood. Gravity-fed Gravity-fed charcoal grills have a hopper that is filled with charcoal briquettes or lump charcoal; a fire is then lit at the bottom of the hopper. A digitally controlled fan is used to control the intensity and temperature of the fire. The heat and smoke are routed underneath the food to cook and smoke it. Commercial A commercial barbecue typically has a larger cooking capacity than traditional household grills, as well as featuring a variety of accessories for added versatility. End users of commercial barbecue grills include for-profit operations such as restaurants, caterers, food vendors and grilling operations at food fairs, golf tournaments and other charity events, as well as competition cookers. The category lends itself to originality, and many commercial barbecue grills feature designs unique to their respective manufacturer. Commercial barbecue grills can be stationary or transportable. An example of a stationary grill is a built-in pit grill, for indoor or outdoor use. Construction materials include bricks, mortar, concrete, tile and cast iron. Most commercial barbecue grills, however, are mobile, allowing the operator to take the grill wherever the job is. Transportable commercial barbecue grills can be units with removable legs, grills that fold, and grills mounted entirely on trailers. Trailer-mounted commercial barbecue grills run the gamut from basic grill cook tops to pit barbecue grills and smokers, to specialized roasting units that cook whole pigs, chicken, ribs, corn and other vegetables. Charcoal barbecue trolley type Trolley-type charcoal grills are mounted on a sturdy wheeled base that makes them easy to move around a yard or patio and to store.
They typically combine a large grilling surface, suited to cooking for bigger groups, with heat-resistant handles and built-in shelves or compartments for storing charcoal, tools, and condiments, and they are built to withstand heat and outdoor weather. Parts Many gas grill components can be replaced with new parts, extending the useful life of the grill. Though charcoal grills can sometimes require new cooking grids and charcoal grates, gas grills are much more complex, and require additional components such as burners, valves, and heat shields. Burners A gas grill burner is the central source of heat for cooking food. Gas grill burners are typically constructed of stainless steel, aluminized steel, or cast iron, occasionally porcelain-coated. Burners are hollow, with gas inlet holes and outlet 'ports'. For each inlet there is a separate control on the control panel of the grill. The most common type of gas grill burner is called an 'H' burner and resembles the capital letter 'H' turned on its side. Another popular shape is oval. There are also 'Figure 8', 'Bowtie' and 'Bar' burners. Other grills have a separate burner for each control. These burners can be referred to as 'Pipe', 'Tube', or 'Rail' burners. They are mostly straight, since they are only required to heat one portion of the grill. Gas is mixed with air in venturi tubes, or simply 'venturis'. Venturis can be permanently attached to the burner or removable. At the other end of the venturi is the gas valve, which is connected to the control knob on the front of the grill. A metal screen covers the fresh air intake of each venturi to keep spiders from clogging the tube with their nests. Cooking grate Cooking grates, also known as cooking grids, are the surface on which the food is cooked in a grill. They are typically made of: stainless steel - usually the most expensive and longest-lasting option, which may carry a lifetime warranty; porcelain-coated cast iron - the next best option after stainless steel, usually thick and good for searing meat; porcelain-coated steel - will typically last as long as porcelain-coated cast iron, but is not as good for searing; cast iron - more commonly used for charcoal grills, and must be coated with cooking oil between uses to protect it from rusting; chrome-plated steel - usually the least expensive and shortest-lasting material. Cooking grates used over gas or charcoal barbecues allow fat and oil to drop between the grill bars. This can cause the fat or oil to ignite in a 'flare-up', the flames from which can blacken or burn the food on the grate. In an attempt to combat this problem, some barbecues are fitted with plates, baffles or other means to deflect the dripping flammable fluids away from the burners. Rock grate Rock grates are placed directly above the burner and are designed to hold lava rock or ceramic briquettes.
These materials serve a dual purpose: they protect the burner from drippings, which can accelerate its deterioration, and they disperse the heat from the burner more evenly throughout the grill. Heat shield Heat shields are also known as burner shields, heat plates, heat tents, radiation shields, or heat angles. They serve the same purpose as a rock grate and rock, protecting the burner from corrosive meat drippings and dispersing heat, and they are more common in newer grills. Heat shields are lighter, easier to replace, and harbor fewer bacteria than rocks. Like lava rock or ceramic briquettes, heat shields also vaporize the meat drippings and 'infuse' the meat with more flavor. Valves Valves can wear out, or become rusted and too difficult to operate, requiring replacement. Unlike a burner, a replacement valve usually must be an exact match to the original in order to fit properly. As a consequence, many grills are disposed of when valves fail, due to a lack of available replacements. If a valve seems to be moving properly but no gas is getting to the burner, the most common cause is debris in the venturi. This impediment can be cleared using a long flexible object. Cover A barbecue cover is a textile product specially designed to fit over a grill so as to protect it from outdoor elements such as sun, wind, rain and snow, and outdoor contaminants such as dust, pollution, and bird droppings. Barbecue covers are commonly made with a vinyl outer shell and a heat-resistant inner lining, as well as adjustable straps to secure the cover in windy conditions. The cover may have a polyester surface, often with a polyurethane coating on the outer surface and a polyvinyl chloride liner.
Technology
Household appliances
null
294813
https://en.wikipedia.org/wiki/Internet%20forum
Internet forum
An Internet forum, or message board, is an online discussion site where people can hold conversations in the form of posted messages. Forums differ from chat rooms in that messages are often longer than one line of text and are at least temporarily archived. Also, depending on the access level of a user or the forum set-up, a posted message might need to be approved by a moderator before it becomes publicly visible. Forums have a specific set of jargon associated with them; for example, a single conversation is called a "thread", or topic. The name comes from the forums of ancient Rome. A discussion forum is hierarchical or tree-like in structure; a forum can contain a number of subforums, each of which may have several topics. Within a forum's topic, each new discussion started is called a thread and can be replied to by as many people as wish to. Depending on the forum's settings, users can be anonymous or may have to register with the forum and then subsequently log in to post messages. On most forums, users do not have to log in to read existing messages. History The modern forum originated from bulletin boards and so-called computer conferencing systems, which are a technological evolution of the dial-up bulletin board system (BBS). From a technological standpoint, forums or boards are web applications that manage user-generated content. Early Internet forums could be described as a web version of an electronic mailing list or newsgroup (such as those that exist on Usenet), allowing people to post messages and comment on other messages. Later developments emulated the different newsgroups or individual lists, providing more than one forum dedicated to a particular topic. Internet forums are prevalent in several developed countries. Japanese users post the most, with over two million posts per day on the country's largest forum, 2channel. China also has millions of posts on forums such as Tianya Club. Some of the first forum systems were the Planet-Forum system, developed at the beginning of the 1970s; the EIES system, first operational in 1976; and the KOM system, first operational in 1977. One of the first forum sites, which is still active today, is Delphi Forums, once called Delphi. The service, with four million members, dates to 1983. Forums perform a function similar to that of the dial-up bulletin board systems and Usenet networks first created in the late 1970s. Early web-based forums date back as far as 1994, with the WIT project from the W3 Consortium, and starting at this time, many alternatives were created. A sense of virtual community often develops around forums that have regular users. Technology, video games, sports, music, fashion, religion, and politics are popular areas for forum themes, but there are forums for a huge number of topics. Internet slang and image macros popular across the Internet are abundant and widely used in Internet forums. Forum software packages are widely available on the Internet and are written in a variety of programming languages, such as PHP, Perl, Java, and ASP. The configuration and records of posts can be stored in text files or in a database. Each package offers different features, from the most basic, providing text-only postings, to more advanced packages offering multimedia support and formatting code (usually known as BBCode). Many packages can be integrated easily into an existing website to allow visitors to post comments on articles. Several other web applications, such as blog software, also incorporate forum features.
WordPress comments at the bottom of a blog post allow for a single-threaded discussion of any given blog post. Slashcode, on the other hand, is far more complicated, allowing fully threaded discussions and incorporating a robust moderation and meta-moderation system, as well as many of the profile features available to forum users. Some stand-alone threads on forums have reached fame and notability, such as the "I am lonely will anyone speak to me" thread on MovieCodec.com's forums, which was described as the "web's top hangout for lonely folk" by Wired magazine, or Stevan Harnad's Subversive Proposal. Structure A forum consists of a tree-like directory structure. The top level consists of categories; a forum can be divided into categories for the relevant discussions. Under the categories are sub-forums, and these sub-forums can further have more sub-forums. The topics (commonly called threads) come under the lowest level of sub-forums, and these are the places under which members can start their discussions or posts. Logically, forums are organized into a finite set of generic topics (usually with one main topic), driven and updated by a group known as members and governed by a group known as moderators. A forum can also have a graph structure. All message boards use one of three basic display formats: non-threaded, semi-threaded, or fully threaded, each with its own advantages and disadvantages. If messages are not related to one another at all, a non-threaded format is best. If a user has a message topic and multiple replies to that topic, a semi-threaded format is best. If a user has a message topic, replies to that topic, and responses to replies, then a fully threaded format is best. (A minimal code model of this hierarchy and its threaded replies is sketched below.) User groups Internally, Western-style forums organize visitors and logged-in members into user groups. Privileges and rights are given based on these groups. A user of the forum can automatically be promoted to a more privileged user group based on criteria set by the administrator. A person viewing a closed thread as a member will see a box saying he does not have the right to submit messages there, but a moderator will likely see the same box granting him access to more than just posting messages. An unregistered user of the site is commonly known as a guest or visitor. Guests are typically granted access to all functions that do not require database alterations or breach privacy. A guest can usually view the contents of the forum or use such features as read marking, but occasionally an administrator will disallow visitors to read their forum as an incentive to become a registered member. A person who is a very frequent visitor of the forum, a section, or even a thread is referred to as a lurker, and the habit is referred to as lurking. Registered members often will refer to themselves as lurking in a particular location, which is to say they have no intention of participating in that section but enjoy reading the contributions to it.
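The category/subforum/thread/post hierarchy and the threaded reply structure described above can be modelled with a few nested data types. The following Python sketch is purely illustrative; all of its names are invented for this example and are not drawn from any real forum package:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Post:
    author: str
    body: str
    # In a fully threaded view, a post may point at the post it answers;
    # a non-threaded view simply ignores this field.
    reply_to: Optional["Post"] = None

@dataclass
class Thread:
    title: str
    posts: list[Post] = field(default_factory=list)

@dataclass
class Forum:
    name: str
    subforums: list["Forum"] = field(default_factory=list)  # nested sub-forums
    threads: list[Thread] = field(default_factory=list)

# A category is just a top-level Forum that holds sub-forums.
hardware = Forum("Hardware", threads=[Thread("Which GPU?")])
technology = Forum("Technology", subforums=[hardware])
print(technology.subforums[0].threads[0].title)  # prints: Which GPU?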
Moderators The moderators (short singular form: "mod") are users (or employees) of the forum who are granted access to the posts and threads of all members for the purpose of moderating discussion (similar to arbitration) and also keeping the forum clean (neutralizing spam and spambots, etc.). Moderators also answer users' concerns about the forum and general questions, as well as respond to specific complaints. Common privileges of moderators include: deleting, merging, moving, and splitting posts and threads; locking, renaming, and stickying threads; banning, unbanning, suspending, unsuspending, and warning members; and adding, editing, and removing the polls of threads. "Junior modding", "backseat modding", or "forum copping" can refer negatively to the behavior of ordinary users who take a moderator-like tone in criticizing other members. Essentially, it is the duty of the moderator to manage the day-to-day affairs of a forum or board as it applies to the stream of user contributions and interactions. The relative effectiveness of this user management directly impacts the quality of a forum in general, its appeal, and its usefulness as a community of interrelated users. Moderators act as unpaid volunteers on many websites, which has sparked controversies and community tensions. On Reddit, some moderators have prominently expressed dissatisfaction with their unpaid labor being underappreciated, while other site users have accused moderators of abusing special access privileges to act as a "cabal" of "petty tyrants". On 4chan, moderators are subject to notable levels of mockery and contempt. There, they are often referred to as janitors (or, more pejoratively, "jannies") given their job, which is tantamount to cleaning up the imageboards' infamous shitposting. Administrators The administrators (short form: "admin") manage the technical details required for running the site. As such, they have the authority to appoint and revoke members as moderators, manage the rules, create sections and sub-sections, and perform any database operations (database backup, etc.). Administrators often also act as moderators. Administrators may also make forum-wide announcements or change the appearance (known as the skin) of a forum. There are also many forums where administrators share their knowledge. Post A post is a user-submitted message enclosed in a block containing the user's details and the date and time it was submitted. Members are usually allowed to edit or delete their own posts. Posts are contained in threads, where they appear as blocks one after another. The first post starts the thread; it may be called the TS (thread starter) or OP (original post). Posts that follow in the thread are meant to continue discussion about that post or respond to other replies; it is not uncommon for discussions to be derailed. On Western forums, the classic way to show a member's own details (such as name and avatar) has been on the left side of the post, in a narrow column of fixed width, with the post controls located on the right, at the bottom of the main body, above the signature block. In more recent forum software implementations, the Asian style of displaying the members' details above the post has been copied. Posts have an internal length limit, usually measured in characters; often, one is required to have a message of at least 10 characters. There is always an upper limit, but it is rarely reached – most boards have it at either 10,000, 20,000, 30,000, or 50,000 characters. Most forums keep track of a user's postcount. The postcount is a measurement of how many posts a certain user has made. Users with higher postcounts are often considered more reputable than users with lower postcounts, but not always. For instance, some forums have disabled postcounts in the hope that doing so will emphasize the quality of information over quantity.
Thread A thread (sometimes called a topic) is a collection of posts, usually displayed from oldest to latest, although this is typically configurable: options for newest-to-oldest ordering and for a threaded view (a tree-like view applying logical reply structure before chronological order) can be available. A thread is defined by a title, an additional description that may summarize the intended discussion, and an opening or original post (common abbreviation OP, which can also be used to refer to the original poster), which opens whatever dialogue or makes whatever announcement the poster wishes. A thread can contain any number of posts, including multiple posts from the same members, even if they are one after the other. Bumping A thread is contained in a forum and may have an associated date that is taken as the date of the last post (options to order threads by other criteria are generally available). When a member posts in a thread, the thread will jump to the top of the listing, since it is the most recently updated thread. Similarly, other threads will jump in front of it when they receive posts. When a member posts in a thread for no reason but to have it go to the top, it is referred to as a bump or bumping. It has been suggested that "bump" is an acronym of "bring up my post"; however, this is almost certainly a backronym, and the usage is entirely consistent with the verb "bump", meaning "to knock to a new position". On some messageboards, users can choose to sage a post if they wish to reply to a thread without bumping it. The word "sage" derives from the 2channel terminology 下げる sageru, meaning "to lower". Stickying Threads that are important but rarely receive posts are stickied (or, in some software, "pinned"). A sticky thread will always appear in front of normal threads, often in its own section. (A minimal code model of this ordering behaviour is sketched at the end of this section.) A "threaded discussion group" is simply any group of individuals who use a forum for threaded, or asynchronous, discussion purposes. The group may or may not be the only users of the forum. A thread's popularity is measured on forums by its reply count (in most default forum settings, the total number of posts minus one, the opening post). Some forums also track page views. Threads meeting a set number of posts or a set number of views may receive a designation such as "hot thread" and be displayed with a different icon compared to other threads. This icon may stand out more to emphasize the thread. If the forum's users have lost interest in a particular thread, it becomes a dead thread.
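As an illustration of the thread-ordering rules described above, here is a minimal Python sketch of a forum index in which sticky threads always come first, an ordinary reply bumps a thread to the top, and a sage reply leaves its position unchanged. All names here are invented for illustration and do not come from any real forum package:

import time

class Thread:
    def __init__(self, title, sticky=False):
        self.title = title
        self.sticky = sticky
        self.posts = []
        self.bumped_at = time.time()  # time of the last bumping post

    def reply(self, body, sage=False):
        self.posts.append(body)
        if not sage:
            # An ordinary reply "bumps" the thread; a sage reply does not.
            self.bumped_at = time.time()

def thread_listing(threads):
    # Sticky threads first, then the most recently bumped threads.
    return sorted(threads, key=lambda t: (not t.sticky, -t.bumped_at))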
Discussion Forums favor the premise of open and free discussion and often adopt de facto standards. The most common topics on forums include questions, comparisons, polls of opinion, and debates. It is not uncommon for nonsense or antisocial behavior to sprout as people lose their temper, especially if the topic is controversial. Poor understanding of the differences in values among the participants is a common problem on forums. Because replies to a topic are often worded to target someone's point of view, discussion will usually go slightly off in several directions as people question each other's validity, sources, and so on. Circular discussion and ambiguity in replies can extend for several tens of posts in a thread, eventually ending when everyone gives up or attention spans waver and a more interesting subject takes over. It is not uncommon for debate to end in ad hominem attacks. Liabilities of owners and moderators Several lawsuits have been brought against forum owners and moderators, claiming libel and damage. One such case is the Scubaboard lawsuit, in which a business in the Maldives filed a suit against Scubaboard for libel and defamation in January 2010. For the most part, though, forum owners and moderators in the United States are protected by Section 230 of the Communications Decency Act, which states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". In 2019, Facebook was faced with a class action lawsuit set forth by moderators diagnosed with post-traumatic stress disorder. It was settled for $52 million the following year. Common features By default, to be an Internet forum, the web application needs the ability to submit threads and replies. Typically, threads are in a newer-to-older view, and replies in an older-to-newer view. Tripcodes and capcodes Most imageboards and 2channel-style discussion boards allow (and encourage) anonymous posting and use a system of tripcodes instead of registration. A tripcode is the hashed result of a password that allows one's identity to be recognized without storing any data about the user. In a tripcode system, a secret password is added to the user's name following a separator character (often a number sign). This password, or tripcode, is hashed into a special key, or trip, distinguishable from the name by HTML styles. Tripcodes cannot be faked, but on some types of forum software they are insecure and can be guessed. On other types, they can be brute-forced with software designed to search for tripcodes, such as Tripcode Explorer. Moderators and administrators will frequently assign themselves capcodes or tripcodes where the guessable trip is replaced with a special notice (such as "# Administrator") or cap.
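A simplified sketch of the tripcode idea in Python: the server stores nothing about the poster, yet anyone who appends the same secret after the separator gets the same short, recognizable code. Note that classic 2channel-style boards derive the trip with the Unix crypt() function; the generic hash below is an assumption used only to show the principle, and the function name is invented:

import hashlib

def format_name(raw_name: str) -> str:
    # Split "name#secret" at the separator character.
    name, sep, secret = raw_name.partition("#")
    if not sep:
        return name  # no tripcode requested
    # Hash the secret and keep a short prefix as the displayed trip.
    trip = hashlib.sha256(secret.encode("utf-8")).hexdigest()[:10]
    return name + " !" + trip

print(format_name("anon#hunter2"))  # prints: anon !f52fbd32b2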
Personal message A personal or private message, or PM for short, is a message sent in private from a member to one or more other members. The ability to send so-called blind carbon copies (BCC) is sometimes available. When sending a BCC, the users to whom the message is sent directly will not be aware of the recipients of the BCC, or even whether one was sent in the first place. Private messages are generally used for personal conversations. They can also be used with tripcodes: a message is addressed to a public trip and can be picked up by typing in the tripcode. Attachment An attachment can be almost any file. When someone attaches a file to a post, they are uploading that particular file to the forum's server. Forums usually have very strict limits on what can be attached and what cannot (among which is the size of the files in question). Attachments can be part of a thread, a social group, etc. BBCode and HTML HyperText Markup Language (HTML) is sometimes allowed, but usually its use is discouraged or, when allowed, extensively filtered. Modern bulletin board systems often have it disabled altogether, or allow only administrators to use it, as allowing it at any normal user level is considered a security risk due to the high rate of XSS vulnerabilities. When HTML is disabled, Bulletin Board Code (BBCode) is the most common preferred alternative. BBCode usually consists of tags similar to HTML, but instead of < and >, the tag name is enclosed within square brackets ([ and ]). Commonly, [i] is used for italic type, [b] for bold, [u] for underline, [color="value"] for color, and [list] for lists, as well as [img] for images and [url] for links. For example, the BBCode [b]This[/b] is [i]clever[/i] [b][i]text[/i][/b] is rendered to HTML when the post is viewed and appears as "This is clever text", with the corresponding words in bold and italics. Many forum packages offer a way to create custom BBCodes, or BBCodes that are not built into the package, where the administrator of the board can create complex BBCodes to allow the use of JavaScript or iframe functions in posts, for example embedding a YouTube or Google Video clip complete with viewer directly into a post.
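A minimal sketch, in Python, of the BBCode-to-HTML rendering just described. It handles only simple paired tags; a real forum package would additionally escape raw HTML in the post, validate nesting, and support parameterized tags such as [color="value"] and [url]. The function name and tag set are illustrative assumptions:

import re

SIMPLE_TAGS = ("b", "i", "u")  # bold, italic, underline

def render_bbcode(text: str) -> str:
    for tag in SIMPLE_TAGS:
        # Replace each [tag]...[/tag] pair with the matching HTML element.
        pattern = r"\[" + tag + r"\](.*?)\[/" + tag + r"\]"
        text = re.sub(pattern, "<" + tag + r">\1</" + tag + ">",
                      text, flags=re.DOTALL)
    return text

print(render_bbcode("[b]This[/b] is [i]clever[/i] [b][i]text[/i][/b]."))
# prints: <b>This</b> is <i>clever</i> <b><i>text</i></b>.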
Emoticon An emoticon, or smiley, is a symbol or combination of symbols used to convey emotional content in written or message form. Forums implement a system through which some of the text representations of emoticons (e.g., xD, :p) are rendered as a small image. Depending on what part of the world the forum's topic originates from (since most forums are international), smilies can be replaced by other forms of similar graphics; an example would be kaoani (e.g., *(^O^)*, (^-^)b), or even text between special symbols (e.g., :blink:, :idea:). Poll Most forums implement an opinion poll system for threads. Most implementations allow for single-choice or multiple-choice voting (sometimes limited to a certain number of options), as well as private or public display of voters. Polls can be set to expire after a certain date or, in some cases, after a number of days from their creation. Members vote in a poll, and the results are displayed graphically. Other features An ignore list allows members to hide the posts of other members that they do not want to see or have a problem with. In most implementations, it is referred to as a foe list or ignore list. The posts are usually not hidden, but minimized, with only a small bar indicating that a post from the user on the ignore list is there. Almost all Internet forums include a member list, which allows display of all forum members, with an integrated search feature. Some forums will not list members with zero posts, even if they have activated their accounts. Many forums allow users to give themselves an avatar. An avatar is an image that appears beside all of a user's posts, in order to make the user more recognizable. The user may upload the image to the forum database or provide a link to an image on a separate website. Each forum has limits on the height, width, and data size of avatars that may be used; if the user tries to use an avatar that is too big, it may be scaled down or rejected. Similarly, most forums allow users to define a signature (sometimes called a sig), which is a block of text, possibly with BBCode, that appears at the bottom of all of the user's posts. There is a character limit on signatures, though it may be so high that it is rarely hit. Often the forum's moderators impose manual rules on signatures to prevent them from being obnoxious (for example, being extremely long or having flashing images), and issue warnings or bans to users who break these rules. Like avatars, signatures may improve the recognizability of a poster. They may also allow the user to attach information to all of their posts, such as proclaiming support for a cause, noting facts about themselves, or quoting humorous things that have previously been said on the forum. A subscription is a form of automated notification integrated into the software of most forums. It usually notifies the member either by email or on the site when the member returns. The option to subscribe is available for every thread while logged in. Subscriptions work with read marking, namely the "unread" property given to content that the software has never served to the user. Recent developments in some popular implementations of forum software have brought social network features and functionality, such as personal galleries and pages, as well as social-networking features such as chat systems. Most forum software is now fully customizable, with "hacks" or "modifications" readily available to customize a forum to its owner's and members' needs. Often forums use "cookies", information about the user's behavior on the site that is sent to the user's browser and used upon re-entry into the site. This is done to facilitate automatic login and to show a user whether a thread or forum has received new posts since his or her last visit. These may be disabled or cleared at any time. Rules and policies Forums are governed by a set of individuals, collectively referred to as staff, made up of administrators and moderators, who are responsible for the forums' conception, technical maintenance, and policies (their creation and enforcement). Most forums have a list of rules detailing the wishes, aims, and guidelines of the forums' creators. There is usually also a FAQ section containing basic information for new members and people not yet familiar with the use and principles of a forum (generally tailored for specific forum software). Rules on forums usually apply to the entire user body and often have preset exceptions, most commonly designating a section as an exception. For example, in an IT forum, any discussion regarding anything but computer programming languages may be against the rules, with the exception of a general chat section. Forum rules are maintained and enforced by the moderation team, but users are allowed to help out via what is known as a report system. Most Western forum platforms automatically provide such a system. It consists of a small function applicable to each post (including one's own). Using it will notify all currently available moderators of the post's location, and subsequent action or judgment can be carried out immediately, which is particularly desirable in large or very developed boards. Generally, moderators encourage members to also use the private message system if they wish to report behavior. Moderators will generally frown upon attempts at moderation by non-moderators, especially when the would-be moderators do not even issue a report. Messages from non-moderators acting as moderators generally declare a post as being against the rules or predict punishment. While not harmful, statements that attempt to enforce the rules are discouraged. When rules are broken, several steps are commonly taken. First, a warning is usually given; this is commonly in the form of a private message, but recent developments have made it possible for it to be integrated into the software. Subsequently, if the act is ignored and warnings do not work, the member is usually first exiled from the forum for a number of days. Denying someone access to the site is called a ban. Bans can mean the person can no longer log in or even view the site anymore. If the offender, after the warning sentence, repeats the offense, another ban is given, usually this time a longer one. Continuous harassment of the site eventually leads to a permanent ban.
In most cases, this means simply that the account is locked. In extreme cases where the offender, after being permanently banned, creates another account and continues to harass the site, administrators will apply an IP address ban or block (this can also be applied at the server level): if the IP address is static, the offender's machine is prevented from accessing the site. In some extreme circumstances, IP address range bans or country bans can be applied, usually for political, licensing, or other reasons.
Technology
Internet
null
294943
https://en.wikipedia.org/wiki/Carotenoid
Carotenoid
Carotenoids () are yellow, orange, and red organic pigments that are produced by plants and algae, as well as several bacteria, archaea, and fungi. Carotenoids give the characteristic color to pumpkins, carrots, parsnips, corn, tomatoes, canaries, flamingos, salmon, lobster, shrimp, and daffodils. Over 1,100 identified carotenoids can be categorized into two classes: xanthophylls (which contain oxygen) and carotenes (which are purely hydrocarbons and contain no oxygen). All are derivatives of tetraterpenes, meaning that they are produced from 8 isoprene units and contain 40 carbon atoms. In general, carotenoids absorb wavelengths ranging from 400 to 550 nanometers (violet to green light); because the longer, unabsorbed wavelengths are reflected, the compounds appear deeply colored yellow, orange, or red. Carotenoids are the dominant pigment in the autumn leaf coloration of about 15-30% of tree species, but many plant colors, especially reds and purples, are due to polyphenols. Carotenoids serve two key roles in plants and algae: they absorb light energy for use in photosynthesis, and they provide photoprotection via non-photochemical quenching. Carotenoids that contain unsubstituted beta-ionone rings (including β-carotene, α-carotene, β-cryptoxanthin, and γ-carotene) have vitamin A activity (meaning that they can be converted to retinol). In the eye, lutein, meso-zeaxanthin, and zeaxanthin are present as macular pigments whose importance in visual function, as of 2016, remains under clinical research. Structure and function Carotenoids are produced by all photosynthetic organisms and are primarily used as accessory pigments to chlorophyll in the light-harvesting part of photosynthesis. They are highly unsaturated, with conjugated double bonds, which enables carotenoids to absorb light of various wavelengths. At the same time, the terminal groups regulate the polarity and properties within lipid membranes. Most carotenoids are tetraterpenoids, regular C40 isoprenoids. Several modifications to these structures exist, including cyclization, varying degrees of saturation or unsaturation, and other functional groups. Carotenes typically contain only carbon and hydrogen, i.e., they are hydrocarbons; prominent members include α-carotene, β-carotene, and lycopene. Carotenoids containing oxygen, such as lutein and zeaxanthin, are known as xanthophylls. Their color, ranging from pale yellow through bright orange to deep red, is directly related to their structure, especially the length of the conjugated system. Xanthophylls are often yellow, giving their class its name. Carotenoids also participate in different types of cell signaling. They are able to signal the production of abscisic acid, which regulates plant growth, seed dormancy, embryo maturation and germination, cell division and elongation, floral growth, and stress responses. Photophysics The length of the conjugated double-bond system determines carotenoids' color and photophysics. After absorbing a photon, the carotenoid transfers its excited electron to chlorophyll for use in photosynthesis. Upon absorption of light, carotenoids transfer excitation energy to and from chlorophyll. The singlet-singlet energy transfer is a lower-energy-state transfer and is used during photosynthesis. The triplet-triplet transfer involves a higher energy state and is essential in photoprotection. Light produces damaging species during photosynthesis, the most damaging being reactive oxygen species (ROS).
As these high-energy ROS are produced in the chlorophyll, the energy is transferred to the carotenoid's polyene tail and undergoes a series of reactions in which electrons move between the carotenoid bonds to find the most balanced (lowest-energy) state for the carotenoid. Carotenoids defend plants against singlet oxygen by both energy transfer and chemical reactions. They also protect plants by quenching triplet chlorophyll. By protecting lipids from free-radical damage, which generates charged lipid peroxides and other oxidised derivatives, carotenoids support the crystalline architecture and hydrophobicity of lipoproteins and cellular lipid structures, and hence oxygen solubility and diffusion therein. Structure-property relationships Like some fatty acids, carotenoids are lipophilic due to the presence of long unsaturated aliphatic chains. As a consequence, carotenoids are typically present in plasma lipoproteins and cellular lipid structures. Morphology Carotenoids are located primarily outside the cell nucleus in different cytoplasmic organelles, lipid droplets, cytosomes and granules. They have been visualised and quantified by Raman spectroscopy in an algal cell. With the development of monoclonal antibodies to trans-lycopene, it has been possible to localise this carotenoid in different animal and human cells. Foods Beta-carotene, found in pumpkins, sweet potato, carrots and winter squash, is responsible for their orange-yellow colors. Dried carrots have the highest amount of carotene of any food per 100-gram serving, measured in retinol activity equivalents (provitamin A equivalents). Vietnamese gac fruit contains the highest known concentration of the carotenoid lycopene. Although green, kale, spinach, collard greens, and turnip greens contain substantial amounts of beta-carotene. The diet of flamingos is rich in carotenoids, imparting the orange color of these birds' feathers. Reviews of preliminary research in 2015 indicated that foods high in carotenoids may reduce the risk of head and neck cancers and prostate cancer. There is no correlation between consumption of foods high in carotenoids and vitamin A and the risk of Parkinson's disease. Humans and other animals are mostly incapable of synthesizing carotenoids and must obtain them through their diet. Carotenoids are a common and often ornamental feature in animals. For example, the pink color of salmon, the red coloring of cooked lobsters, and the scale color of the yellow morph of common wall lizards are due to carotenoids. It has been proposed that carotenoids are used in ornamental traits (for extreme examples, see puffin birds) because, given their physiological and chemical properties, they can serve as visible indicators of individual health, and hence are used by animals when selecting potential mates. Carotenoids from the diet are stored in the fatty tissues of animals, and exclusively carnivorous animals obtain the compounds from animal fat. In the human diet, absorption of carotenoids is improved when they are consumed with fat in a meal. Cooking carotenoid-containing vegetables in oil and shredding the vegetable both increase carotenoid bioavailability. Plant colors The most common carotenoids include lycopene and the vitamin A precursor β-carotene. In plants, the xanthophyll lutein is the most abundant carotenoid, and its role in preventing age-related eye disease is currently under investigation.
Lutein and the other carotenoid pigments found in mature leaves are often not obvious because of the masking presence of chlorophyll. When chlorophyll is not present, as in autumn foliage, the yellows and oranges of the carotenoids are predominant. For the same reason, carotenoid colors often predominate in ripe fruit after being unmasked by the disappearance of chlorophyll.

Carotenoids are responsible for the brilliant yellows and oranges that tint the deciduous foliage (such as dying autumn leaves) of certain hardwood species such as hickories, ash, maple, yellow poplar, aspen, birch, black cherry, sycamore, cottonwood, sassafras, and alder. Carotenoids are the dominant pigment in autumn leaf coloration of about 15-30% of tree species. However, the reds, the purples, and their blended combinations that decorate autumn foliage usually come from another group of pigments in the cells called anthocyanins. Unlike the carotenoids, these pigments are not present in the leaf throughout the growing season, but are actively produced towards the end of summer.

Bird colors and sexual selection
Dietary carotenoids and their metabolic derivatives are responsible for bright yellow to red coloration in birds. Studies estimate that around 2956 modern bird species display carotenoid coloration, and that the ability to utilize these pigments for external coloration has evolved independently many times throughout avian evolutionary history. Carotenoid coloration exhibits high levels of sexual dimorphism, with adult male birds generally displaying more vibrant coloration than females of the same species. These differences arise from the selection of yellow and red coloration in males by female preference. In many species of birds, females invest greater time and resources into raising offspring than their male partners, so it is imperative that female birds carefully select high-quality mates. Current literature supports the theory that vibrant carotenoid coloration is correlated with male quality, either through direct effects on immune function and oxidative stress, or through a connection between carotenoid-metabolizing pathways and pathways for cellular respiration.

It is generally considered that sexually selected traits, such as carotenoid-based coloration, evolve because they are honest signals of phenotypic and genetic quality. For instance, among males of the bird species Parus major, the more colorfully ornamented males produce sperm that is better protected against oxidative stress due to the increased presence of carotenoid antioxidants. However, there is also evidence that attractive male coloration may be a faulty signal of male quality. Among stickleback fish, males that are more attractive to females due to carotenoid colorants appear to under-allocate carotenoids to their germline cells. Since carotenoids are beneficial antioxidants, their under-allocation to germline cells can lead to increased oxidative DNA damage in those cells. Therefore, female sticklebacks may risk fertility and the viability of their offspring by choosing redder, but more deteriorated, partners with reduced sperm quality.

Aroma chemicals
Products of carotenoid degradation such as ionones, damascones and damascenones are also important fragrance chemicals that are used extensively in the perfume and fragrance industry. Both β-damascenone and β-ionone, although low in concentration in rose distillates, are the key odor-contributing compounds in flowers.
In fact, the sweet floral smells present in black tea, aged tobacco, grapes, and many fruits are due to aromatic compounds resulting from carotenoid breakdown.

Disease
Some carotenoids are produced by bacteria to protect themselves from oxidative immune attack. The aureus (golden) pigment that gives some strains of Staphylococcus aureus their name is a carotenoid called staphyloxanthin. This carotenoid is a virulence factor with an antioxidant action that helps the microbe evade death by the reactive oxygen species used by the host immune system.

Biosynthesis
The basic building blocks of carotenoids are isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP). These two isoprene isomers are used to create various compounds depending on the biological pathway used to synthesize the isomers. Plants are known to use two different pathways for IPP production: the cytosolic mevalonic acid pathway (MVA) and the plastidic methylerythritol 4-phosphate (MEP) pathway. In animals, the production of cholesterol starts by creating IPP and DMAPP using the MVA pathway. For carotenoid production, plants use the MEP pathway to generate IPP and DMAPP. The MEP pathway results in a 5:1 mixture of IPP:DMAPP. IPP and DMAPP undergo several reactions, resulting in the major carotenoid precursor, geranylgeranyl diphosphate (GGPP). GGPP can be converted into carotenes or xanthophylls by undergoing a number of different steps within the carotenoid biosynthetic pathway.

MEP pathway
Glyceraldehyde 3-phosphate and pyruvate, intermediates of photosynthesis, are converted to 1-deoxy-D-xylulose 5-phosphate (DXP) in a reaction catalyzed by DXP synthase (DXS). DXP reductoisomerase catalyzes the reduction by NADPH and a subsequent rearrangement, yielding MEP. The resulting MEP is converted to 4-(cytidine 5'-diphospho)-2-C-methyl-D-erythritol (CDP-ME), in the presence of CTP, by the enzyme MEP cytidylyltransferase. CDP-ME is then converted, in the presence of ATP, to 2-phospho-4-(cytidine 5'-diphospho)-2-C-methyl-D-erythritol (CDP-ME2P). The conversion to CDP-ME2P is catalyzed by CDP-ME kinase. Next, CDP-ME2P is converted to 2-C-methyl-D-erythritol 2,4-cyclodiphosphate (MECDP); in this reaction, catalyzed by MECDP synthase, CMP is eliminated from the CDP-ME2P molecule. MECDP is then converted to (E)-4-hydroxy-3-methylbut-2-en-1-yl diphosphate (HMBDP) via HMBDP synthase in the presence of flavodoxin and NADPH. HMBDP is reduced to IPP in the presence of ferredoxin and NADPH by the enzyme HMBDP reductase. The last two steps, involving HMBDP synthase and reductase, can occur only in completely anaerobic environments. IPP is then able to isomerize to DMAPP via IPP isomerase.

Carotenoid biosynthetic pathway
Two GGPP molecules condense via phytoene synthase (PSY), forming the 15-cis isomer of phytoene. PSY belongs to the squalene/phytoene synthase family and is homologous to squalene synthase, which takes part in steroid biosynthesis. The subsequent conversion of phytoene into all-trans-lycopene depends on the organism. Bacteria and fungi employ a single enzyme, the bacterial phytoene desaturase (CRTI), for the catalysis. Plants and cyanobacteria, however, utilize four enzymes for this process. The first of these enzymes is a plant-type phytoene desaturase, which introduces two additional double bonds into 15-cis-phytoene by dehydrogenation and isomerizes two of its existing double bonds from trans to cis, producing 9,15,9'-tri-cis-ζ-carotene.
The central double bond of this tri-cis-ζ-carotene is isomerized by the zeta-carotene isomerase Z-ISO, and the resulting 9,9'-di-cis-ζ-carotene is dehydrogenated again via a ζ-carotene desaturase (ZDS). This again introduces two double bonds, resulting in 7,9,7',9'-tetra-cis-lycopene. CRTISO, a carotenoid isomerase, is needed to convert the cis-lycopene into an all-trans lycopene in the presence of reduced FAD. This all-trans lycopene is then cyclized; cyclization gives rise to carotenoid diversity, which can be distinguished based on the end groups. There can be either a beta ring or an epsilon ring, each generated by a different enzyme (lycopene beta-cyclase [beta-LCY] or lycopene epsilon-cyclase [epsilon-LCY]). α-Carotene is produced when the all-trans lycopene first undergoes a reaction with epsilon-LCY and then a second reaction with beta-LCY, whereas β-carotene is produced by two reactions with beta-LCY. α- and β-Carotene are the most common carotenoids in the plant photosystems, but they can still be further converted into xanthophylls by beta-hydrolase and epsilon-hydrolase, leading to a variety of xanthophylls.

Regulation
It is believed that both DXS and DXR are rate-determining enzymes, allowing them to regulate carotenoid levels. This was discovered in an experiment in which DXS and DXR were genetically overexpressed, leading to increased carotenoid expression in the resulting seedlings. Also, J-protein (J20) and heat shock protein 70 (Hsp70) chaperones are thought to be involved in post-transcriptional regulation of DXS activity, such that mutants with defective J20 activity exhibit reduced DXS enzyme activity while accumulating inactive DXS protein. Regulation may also be caused by external toxins that affect enzymes and proteins required for synthesis. Ketoclomazone is derived from herbicides applied to soil and binds to DXP synthase. This inhibits DXP synthase, preventing synthesis of DXP and halting the MEP pathway. The use of this toxin leads to lower levels of carotenoids in plants grown in the contaminated soil. Fosmidomycin, an antibiotic, is a competitive inhibitor of DXP reductoisomerase, owing to its structural similarity to the enzyme's substrate. Application of this antibiotic prevents reduction of DXP, again halting the MEP pathway.

Naturally occurring carotenoids

Hydrocarbons
Lycopersene 7,8,11,12,15,7',8',11',12',15'-Decahydro-γ,γ-carotene
Phytofluene
Lycopene
Hexahydrolycopene 15-cis-7,8,11,12,7',8'-Hexahydro-γ,γ-carotene
Torulene 3',4'-Didehydro-β,γ-carotene
α-Zeacarotene 7',8'-Dihydro-ε,γ-carotene
α-Carotene
β-Carotene
γ-Carotene
δ-Carotene
ε-Carotene
ζ-Carotene

Alcohols
Alloxanthin
Bacterioruberin 2,2'-Bis(3-hydroxy-3-methylbutyl)-3,4,3',4'-tetradehydro-1,2,1',2'-tetrahydro-γ,γ-carotene-1,1'-diol
Cynthiaxanthin
Pectenoxanthin
Cryptomonaxanthin (3R,3'R)-7,8,7',8'-Tetradehydro-β,β-carotene-3,3'-diol
Crustaxanthin β,β-Carotene-3,4,3',4'-tetrol
Gazaniaxanthin (3R)-5'-cis-β,γ-Caroten-3-ol
OH-Chlorobactene 1',2'-Dihydro-φ,γ-caroten-1'-ol
Loroxanthin β,ε-Carotene-3,19,3'-triol
Lutein (3R,3'R,6'R)-β,ε-carotene-3,3'-diol
Lycoxanthin γ,γ-Caroten-16-ol
Rhodopin 1,2-Dihydro-γ,γ-caroten-1-ol
Rhodopinol a.k.a. Warmingol 13-cis-1,2-Dihydro-γ,γ-carotene-1,20-diol
Saproxanthin 3',4'-Didehydro-1',2'-dihydro-β,γ-carotene-3,1'-diol
Zeaxanthin

Glycosides
Oscillaxanthin 2,2'-Bis(β-L-rhamnopyranosyloxy)-3,4,3',4'-tetradehydro-1,2,1',2'-tetrahydro-γ,γ-carotene-1,1'-diol
Phleixanthophyll 1'-(β-D-Glucopyranosyloxy)-3',4'-didehydro-1',2'-dihydro-β,γ-caroten-2'-ol

Ethers
Rhodovibrin 1'-Methoxy-3',4'-didehydro-1,2,1',2'-tetrahydro-γ,γ-caroten-1-ol
Spheroidene 1-Methoxy-3,4-didehydro-1,2,7',8'-tetrahydro-γ,γ-carotene

Epoxides
Diadinoxanthin 5,6-Epoxy-7',8'-didehydro-5,6-dihydro-β,β-carotene-3,3'-diol
Luteoxanthin 5,6:5',8'-Diepoxy-5,6,5',8'-tetrahydro-β,β-carotene-3,3'-diol
Mutatoxanthin
Citroxanthin
Zeaxanthin furanoxide 5,8-Epoxy-5,8-dihydro-β,β-carotene-3,3'-diol
Neochrome 5',8'-Epoxy-6,7-didehydro-5,6,5',8'-tetrahydro-β,β-carotene-3,5,3'-triol
Foliachrome
Trollichrome
Vaucheriaxanthin 5',6'-Epoxy-6,7-didehydro-5,6,5',6'-tetrahydro-β,β-carotene-3,5,19,3'-tetrol

Aldehydes
Rhodopinal
Warmingone 13-cis-1-Hydroxy-1,2-dihydro-γ,γ-caroten-20-al
Torularhodinaldehyde 3',4'-Didehydro-β,γ-caroten-16'-al

Acids and acid esters
Torularhodin 3',4'-Didehydro-β,γ-caroten-16'-oic acid
Torularhodin methyl ester Methyl 3',4'-didehydro-β,γ-caroten-16'-oate

Ketones
Astacene
Astaxanthin
Canthaxanthin a.k.a. Aphanicin, Chlorellaxanthin β,β-Carotene-4,4'-dione
Capsanthin (3R,3'S,5'R)-3,3'-Dihydroxy-β,κ-caroten-6'-one
Capsorubin (3S,5R,3'S,5'R)-3,3'-Dihydroxy-κ,κ-carotene-6,6'-dione
Cryptocapsin (3'R,5'R)-3'-Hydroxy-β,κ-caroten-6'-one
2,2'-Diketospirilloxanthin 1,1'-Dimethoxy-3,4,3',4'-tetradehydro-1,2,1',2'-tetrahydro-γ,γ-carotene-2,2'-dione
Echinenone β,β-Caroten-4-one
3'-Hydroxyechinenone
Flexixanthin 3,1'-Dihydroxy-3',4'-didehydro-1',2'-dihydro-β,γ-caroten-4-one
3-OH-Canthaxanthin a.k.a. Adonirubin a.k.a. Phoenicoxanthin 3-Hydroxy-β,β-carotene-4,4'-dione
Hydroxyspheriodenone 1'-Hydroxy-1-methoxy-3,4-didehydro-1,2,1',2',7',8'-hexahydro-γ,γ-caroten-2-one
Okenone 1'-Methoxy-1',2'-dihydro-χ,γ-caroten-4'-one
Pectenolone 3,3'-Dihydroxy-7',8'-didehydro-β,β-caroten-4-one
Phoeniconone a.k.a. Dehydroadonirubin 3-Hydroxy-2,3-didehydro-β,β-carotene-4,4'-dione
Phoenicopterone β,ε-Caroten-4-one
Rubixanthone 3-Hydroxy-β,γ-caroten-4'-one
Siphonaxanthin 3,19,3'-Trihydroxy-7,8-dihydro-β,ε-caroten-8-one

Esters of alcohols
Astacein 3,3'-Bispalmitoyloxy-2,3,2',3'-tetradehydro-β,β-carotene-4,4'-dione or 3,3'-dihydroxy-2,3,2',3'-tetradehydro-β,β-carotene-4,4'-dione dipalmitate
Fucoxanthin 3'-Acetoxy-5,6-epoxy-3,5'-dihydroxy-6',7'-didehydro-5,6,7,8,5',6'-hexahydro-β,β-caroten-8-one
Isofucoxanthin 3'-Acetoxy-3,5,5'-trihydroxy-6',7'-didehydro-5,8,5',6'-tetrahydro-β,β-caroten-8-one
Physalien
Siphonein 3,3'-Dihydroxy-19-lauroyloxy-7,8-dihydro-β,ε-caroten-8-one or 3,19,3'-trihydroxy-7,8-dihydro-β,ε-caroten-8-one 19-laurate

Apocarotenoids
β-Apo-2'-carotenal 3',4'-Didehydro-2'-apo-β-caroten-2'-al
Apo-2-lycopenal
Apo-6'-lycopenal 6'-Apo-γ-caroten-6'-al
Azafrinaldehyde 5,6-Dihydroxy-5,6-dihydro-10'-apo-β-caroten-10'-al
Bixin 6'-Methyl hydrogen 9'-cis-6,6'-diapocarotene-6,6'-dioate
Citranaxanthin 5',6'-Dihydro-5'-apo-β-caroten-6'-one or 5',6'-dihydro-5'-apo-18'-nor-β-caroten-6'-one or 6'-methyl-6'-apo-β-caroten-6'-one
Crocetin 8,8'-Diapo-8,8'-carotenedioic acid
Crocetinsemialdehyde 8'-Oxo-8,8'-diapo-8-carotenoic acid
Crocin Digentiobiosyl 8,8'-diapo-8,8'-carotenedioate
Hopkinsiaxanthin 3-Hydroxy-7,8-didehydro-7',8'-dihydro-7'-apo-β-carotene-4,8'-dione or 3-hydroxy-8'-methyl-7,8-didehydro-8'-apo-β-carotene-4,8'-dione
Methyl apo-6'-lycopenoate Methyl 6'-apo-γ-caroten-6'-oate
Paracentrone 3,5-Dihydroxy-6,7-didehydro-5,6,7',8'-tetrahydro-7'-apo-β-caroten-8'-one or 3,5-dihydroxy-8'-methyl-6,7-didehydro-5,6-dihydro-8'-apo-β-caroten-8'-one
Sintaxanthin 7',8'-Dihydro-7'-apo-β-caroten-8'-one or 8'-methyl-8'-apo-β-caroten-8'-one

Nor- and seco-carotenoids
Actinioerythrin 3,3'-Bisacyloxy-2,2'-dinor-β,β-carotene-4,4'-dione
β-Carotenone 5,6:5',6'-Diseco-β,β-carotene-5,6,5',6'-tetrone
Peridinin 3'-Acetoxy-5,6-epoxy-3,5'-dihydroxy-6',7'-didehydro-5,6,5',6'-tetrahydro-12',13',20'-trinor-β,β-caroten-19,11-olide
Pyrrhoxanthininol 5,6-Epoxy-3,3'-dihydroxy-7',8'-didehydro-5,6-dihydro-12',13',20'-trinor-β,β-caroten-19,11-olide
Semi-α-carotenone 5,6-Seco-β,ε-carotene-5,6-dione
Semi-β-carotenone 5,6-Seco-β,β-carotene-5,6-dione or 5',6'-seco-β,β-carotene-5',6'-dione
Triphasiaxanthin 3-Hydroxysemi-β-carotenone 3'-Hydroxy-5,6-seco-β,β-carotene-5,6-dione or 3-hydroxy-5',6'-seco-β,β-carotene-5',6'-dione

Retro-carotenoids and retro-apo-carotenoids
Eschscholtzxanthin 4',5'-Didehydro-4,5'-retro-β,β-carotene-3,3'-diol
Eschscholtzxanthone 3'-Hydroxy-4',5'-didehydro-4,5'-retro-β,β-caroten-3-one
Rhodoxanthin 4',5'-Didehydro-4,5'-retro-β,β-carotene-3,3'-dione
Tangeraxanthin 3-Hydroxy-5'-methyl-4,5'-retro-5'-apo-β-caroten-5'-one or 3-hydroxy-4,5'-retro-5'-apo-β-caroten-5'-one

Higher carotenoids
Nonaprenoxanthin 2-(4-Hydroxy-3-methyl-2-butenyl)-7',8',11',12'-tetrahydro-ε,γ-carotene
Decaprenoxanthin 2,2'-Bis(4-hydroxy-3-methyl-2-butenyl)-ε,ε-carotene
C.p. 450 2-[4-Hydroxy-3-(hydroxymethyl)-2-butenyl]-2'-(3-methyl-2-butenyl)-β,β-carotene
C.p. 473 2'-(4-Hydroxy-3-methyl-2-butenyl)-2-(3-methyl-2-butenyl)-3',4'-didehydro-1',2'-dihydro-β,γ-caroten-1'-ol
Bacterioruberin 2,2'-Bis(3-hydroxy-3-methylbutyl)-3,4,3',4'-tetradehydro-1,2,1',2'-tetrahydro-γ,γ-carotene-1,1'-diol
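The enzymatic route described under Biosynthesis can be summarized as a simple ordered data structure. The following is a minimal illustrative sketch in Python; the enzyme and metabolite names are taken from the text above, but the representation itself (a plain list of conversion steps) is an assumption made for illustration, not a model of reaction kinetics:

# Illustrative summary of the plant carotenoid route described above.
# Each step is (substrate, enzyme, product); names follow the text.
MEP_PATHWAY = [
    ("glyceraldehyde 3-phosphate + pyruvate", "DXS", "DXP"),
    ("DXP", "DXP reductoisomerase", "MEP"),
    ("MEP", "MEP cytidylyltransferase", "CDP-ME"),
    ("CDP-ME", "CDP-ME kinase", "CDP-ME2P"),
    ("CDP-ME2P", "MECDP synthase", "MECDP"),
    ("MECDP", "HMBDP synthase", "HMBDP"),
    ("HMBDP", "HMBDP reductase", "IPP"),
    ("IPP", "IPP isomerase", "DMAPP"),
]

CAROTENOID_PATHWAY = [
    ("2 x GGPP", "phytoene synthase (PSY)", "15-cis-phytoene"),
    ("15-cis-phytoene", "phytoene desaturase", "9,15,9'-tri-cis-zeta-carotene"),
    ("9,15,9'-tri-cis-zeta-carotene", "Z-ISO", "9,9'-di-cis-zeta-carotene"),
    ("9,9'-di-cis-zeta-carotene", "ZDS", "7,9,7',9'-tetra-cis-lycopene"),
    ("7,9,7',9'-tetra-cis-lycopene", "CRTISO", "all-trans-lycopene"),
    ("all-trans-lycopene", "beta-LCY (twice)", "beta-carotene"),
    ("all-trans-lycopene", "epsilon-LCY then beta-LCY", "alpha-carotene"),
]

def trace(pathway):
    """Print each conversion of a pathway in order."""
    for substrate, enzyme, product in pathway:
        print(f"{substrate} --[{enzyme}]--> {product}")

if __name__ == "__main__":
    trace(MEP_PATHWAY)
    trace(CAROTENOID_PATHWAY)

Listing the steps this way makes the branch point explicit: everything up to all-trans-lycopene is linear, and carotenoid diversity arises only at the cyclization step.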
Mistral (wind)
The mistral (Corsican: maestrale) is a strong, cold, northwesterly wind that blows from southern France into the Gulf of Lion in the northern Mediterranean. It produces sustained winds averaging 31 miles an hour (50 kilometres an hour), sometimes reaching 60 miles an hour (100 kilometres an hour). It can last for several days; mistral episodes lasting more than sixty-five hours have been reported. It is most common in the winter and spring, and strongest in the transition between the two seasons. It affects the northeast of the plain of Languedoc and Provence to the east of Toulon, where it is felt as a strong west wind. It has a major influence all along the Mediterranean coast of France, and often causes sudden storms in the Mediterranean between Corsica and the Balearic Islands.

The name mistral comes from the Languedoc dialect of Occitan and means "masterly". The same wind is called mistrau in the Provençal variant of Occitan, mestral in Catalan, maestrale in Italian and Corsican, maistràle or bentu maestru in Sardinian, and majjistral in Maltese.

The mistral is usually accompanied by clear, fresh weather, and it plays an important role in creating the climate of Provence. It can reach high speeds, particularly in the Rhône Valley, and while strong during the day it calms noticeably at night. The mistral usually blows in winter or spring, though it occurs in all seasons. It sometimes lasts only one or two days, frequently lasts several days, and sometimes lasts more than a week.

Cause
The mistral occurs whenever there is an anticyclone, or area of high pressure, in the Bay of Biscay, and an area of low pressure around the Gulf of Genoa. When this happens, the flow of air between the high and low pressure areas draws in a current of cold air from the north, which accelerates through the lower elevations between the foothills of the Alps and the Cévennes. The conditions for a mistral are even more favorable when a cold rainy front has crossed France from the northwest to the southeast as far as the Mediterranean. This cold, dry wind usually brings a period of cloudless skies and luminous sunshine, which gives the mistral its reputation for making the sky especially clear. There is also, however, the mistral noir, which brings clouds and rain. The mistral noir occurs when the Azores High is extended and draws in unusually moist air from the northwest.

The long and enclosed shape of the Rhône Valley, and the Venturi effect of funnelling the air through a narrowing space, is frequently cited as the reason for the speed and force of the mistral, but the reasons are apparently more complex. The mistral reaches its maximum speed not at the narrowest part of the Rhône Valley, south of Valence, but much farther south, where the valley has widened. Also, the wind occurs not just in the valley, but high above it in the atmosphere, up to the troposphere. The mistral is very strong at the summit of Mont Ventoux, 1900 meters (6300') in elevation, though the plain below is very wide. Another contributing factor to the strength of the mistral is the accumulation of masses of cold air which, being denser, pour down the mountains and valleys to the lower elevations. This is similar to a foehn wind, but unlike a foehn wind the descent in altitude does not significantly warm the mistral. The causes and characteristics of the mistral are very similar to those of the tramontane, another wind of the French Mediterranean region.
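The pressure-difference mechanism described above can be made concrete with a rough order-of-magnitude estimate. The sketch below computes a geostrophic wind speed from an assumed pressure difference between a Biscay high and a Genoa low; the pressure values, separation distance, air density, and latitude are all illustrative assumptions, not measured data:

import math

# Illustrative values (assumptions, not observations):
p_high = 102_500.0  # assumed pressure in the Bay of Biscay high, Pa
p_low = 100_500.0   # assumed pressure in the Gulf of Genoa low, Pa
distance = 1.0e6    # assumed separation of the two centers, m
rho = 1.25          # typical near-surface air density, kg/m^3
lat = 44.0          # approximate latitude of southern France, degrees

# Coriolis parameter f = 2 * omega * sin(latitude)
omega = 7.2921e-5   # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(lat))

# Geostrophic balance: V = (1 / (rho * f)) * dP/dn
pressure_gradient = (p_high - p_low) / distance
v = pressure_gradient / (rho * f)

print(f"Geostrophic wind estimate: {v:.0f} m/s ({v * 3.6:.0f} km/h)")

With these assumed numbers the estimate comes out near 16 m/s (roughly 57 km/h), of the same order as the sustained averages quoted above; the channelling and cold-air drainage described in the text explain why actual mistral winds can run considerably stronger.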
In France, the mistral particularly affects Provence, Languedoc east of Montpellier, all of the Rhône Valley from Lyon to Marseille, and as far southeast as Corsica and Sardinia. The mistral usually blows from the north or northwest, but in certain pre-alpine valleys and along the Côte d'Azur the wind is channelled by the mountains so that it blows from east to west. Sometimes it also blows from the north-northeast toward the east of Languedoc as far as Cap Béar. Frequently, the mistral will affect only one part of the region.

In the Languedoc area, where the tramontane is the strongest wind, the mistral and the tramontane blow together onto the Gulf of Lion and the northwest of the western Mediterranean, and can be felt to the east of the Balearic Islands, in Sardinia, and sometimes as far as the coast of Africa. When the mistral blows from the west, the mass of air is not so cold, and the wind affects only the plain of the Rhône delta and the Côte d'Azur. The good weather is confined to the coast of the Mediterranean, while it can rain in the interior. The Côte d'Azur generally has a clear sky and warmer temperatures. This type of mistral usually blows for no more than one to three days.

The mistral originating from the northeast has a very different character; it is felt only in the west of Provence and as far as Montpellier, with the wind coming from either a northerly or north-northeasterly direction. In the winter this is by far the coldest form of the mistral. The wind can blow for more than a week. This kind of mistral is often connected with a low pressure area in the Gulf of Genoa, and it can bring unstable weather to the Côte d'Azur and the east of Provence, sometimes bringing heavy snow to low altitudes in winter. When the flow of air comes from the northeast due to a widespread low pressure area over the Atlantic and atmospheric disturbances over France, the air is even colder at both high altitudes and ground level, the mistral is even stronger, and the weather worse, with the creation of cumulus clouds bringing weak storms. This kind of mistral is weaker in the east of Provence and the Côte d'Azur.

The mistral is not always synonymous with clear skies. When a low pressure front over the Mediterranean approaches the coast from the southeast, the weather can change quickly for the worse, and the mistral with its clear sky changes rapidly to an east wind bringing humid air and threatening clouds. The position of the low-pressure front creates a flow of air from the northwest or the northeast, channeled through the Rhône Valley. If this low-pressure area moves back toward the southeast, the mistral will quickly clear the air and the good weather will return; but if the cold-weather front continues to approach the land, bad weather will continue for several days in the entire Mediterranean basin, sometimes transforming into what French meteorologists call an épisode cévenol, a succession of torrential rains and floods, particularly in the areas west of the Rhône Valley: the Ardèche, the Gard, Hérault and Lozère.

The summer mistral, unlike the others, is created by purely local conditions. It usually happens in July, and only in the valley of the Rhône and on the coast of Provence. It is caused by a thermal depression over the interior of Provence (the Var and Alpes-de-Haute-Provence), created when the land is overheated. This creates a flow of air from the north toward the east of Provence.
This wind is frequently cancelled out close to the coast by breezes from the sea. It does not blow for more than a single day, but it is feared in Provence, because it dries the vegetation and can spread forest fires.

Effects
The mistral helps explain the unusually sunny climate (2700 to 2900 hours of sunshine a year) and the clarity of the air of Provence. When other parts of France have clouds and storms, Provence is rarely affected for long, since the mistral quickly clears the sky. In less than two hours, the sky can change from completely covered to completely clear. The mistral also blows away dust and makes the air particularly clear, so that during the mistral it is possible to see distant mountains. This clarity of the air and light is one of the features that attracted many French impressionist and post-impressionist artists to the South of France.

The mistral has the reputation of bringing good health, since the dry air dries stagnant water and mud, giving the mistral the local name mange-fange (Eng. "mud-eater"). It also blows away pollution from the skies over the large cities and industrial areas.

The sunshine and dryness brought by the mistral have an important effect on the local vegetation. The vegetation in Provence, which is already dry because of the small amount of rainfall, is made even drier by the wind, and this makes it particularly susceptible to fires, which the wind spreads very rapidly, sometimes devastating vast expanses of mountainside before being extinguished. During the summer, thousands of hectares can burn when the mistral is blowing. In the Rhône Valley and on the plain of la Crau, the regularity and force of the mistral cause the trees to grow leaning to the south. Once a forest has been razed by fire, the strong wind makes it difficult for new trees to grow. The farmers of the Rhône Valley have long planted rows of cypress trees to shelter their crops from the dry force of the mistral.

The mistral can also have beneficial effects: the moving air can save crops from the spring frosts, which can last until the end of April. As summer visitors to the beach in Provence learn, the summer mistral can quickly lower the temperature of the sea, as the wind pushes the warm water near the surface out to sea, and it is replaced by colder water from greater depths.

Beyond France
The mistral regularly affects the weather in Sardinia, and sometimes also affects the weather in North Africa, Sicily, Malta and other parts of the Mediterranean, particularly when low-pressure areas form in the Gulf of Genoa. The winds create a cold, salty body of water that sinks in the Gulf of Lion when certain weather conditions are present.

Maestral or maestro in the Adriatic
Similar names, maestral or maestro, are used for a quite different (although also mostly northwesterly) wind in the Adriatic Sea. It is an anabatic sea-breeze wind which blows in the summer when the east Adriatic coast gets warmer than the sea. It is thus a mild sea-to-coast wind, unlike the mistral. The strong katabatic wind there is the northeastern bora. In Greece, it is also known as maïstros or maïstráli. In southwestern Crete, it is considered the most beneficial wind, said to blow only in daytime.

In Provençal culture
The mistral has played an important part in the life and culture of Provence from the beginning. Excavations at the prehistoric site called Terra Amata, at the foot of Mount Boron in Nice, showed that in about 40,000 B.C.
the inhabitants had built a low wall of rocks and beach stones to the northwest of their fireplace to protect their fire from the power of the mistral. The mas (farmhouse) traditionally faces south, with its back to the mistral. The bell towers of villages in Provence are often open iron frameworks, which allow the wind to pass through. The traditional Provençal Nativity scene usually includes a figure of a shepherd holding his hat, with his cloak blowing in the mistral. A Fête du Vent (Festival of Wind) is held periodically on the Prado Beach in Marseille.
Euler–Lagrange equation
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.

Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, which states that at any point where a differentiable function attains a local extremum its derivative is zero.

In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context the Euler equations are usually called Lagrange equations. In classical mechanics, this formulation is equivalent to Newton's laws of motion; indeed, the Euler–Lagrange equations produce the same equations as Newton's laws. This is particularly useful when analyzing systems whose force vectors are complicated. The formulation has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field.

History
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766.

Statement
Let (X, L) be a real dynamical system with n degrees of freedom. Here X is the configuration space and L = L(t, q, v) the Lagrangian, i.e. a smooth real-valued function such that q ∈ X and v is an n-dimensional "vector of speed". (For those familiar with differential geometry, X is a smooth manifold, and L : ℝ × TX → ℝ, where TX is the tangent bundle of X.)

Let P(a, b, x_a, x_b) be the set of smooth paths q : [a, b] → X for which q(a) = x_a and q(b) = x_b. The action functional S : P(a, b, x_a, x_b) → ℝ is defined via

  S[q] = ∫_a^b L(t, q(t), q̇(t)) dt.

A path q is a stationary point of S if and only if

  ∂L/∂q_i (t, q(t), q̇(t)) − (d/dt) ∂L/∂q̇_i (t, q(t), q̇(t)) = 0,  for i = 1, ..., n.

Here, q̇(t) is the time derivative of q(t). When we say stationary point, we mean a stationary point of S with respect to any small perturbation in q. See proofs below for more rigorous detail.

Example
A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible:

  s = ∫_a^b √(1 + y′(x)²) dx,

the integrand function being L(x, y, y′) = √(1 + y′²). The partial derivatives of L are:

  ∂L/∂y′ = y′/√(1 + y′²)  and  ∂L/∂y = 0.

By substituting these into the Euler–Lagrange equation, we obtain

  (d/dx) [y′(x)/√(1 + y′(x)²)] = 0  ⇒  y′(x)/√(1 + y′(x)²) = C  ⇒  y′(x) = constant,

that is, the function must have a constant first derivative, and thus its graph is a straight line.

Generalizations

Single function of single variable with higher derivatives
The stationary values of the functional

  I[f] = ∫_{x₀}^{x₁} L(x, f, f′, f″, ..., f⁽ᵏ⁾) dx

can be obtained from the Euler–Lagrange equation

  ∂L/∂f − (d/dx) ∂L/∂f′ + (d²/dx²) ∂L/∂f″ − ... + (−1)ᵏ (dᵏ/dxᵏ) ∂L/∂f⁽ᵏ⁾ = 0,

under fixed boundary conditions for the function itself as well as for the first k − 1 derivatives (i.e. for all f⁽ⁱ⁾ with i < k). The endpoint values of the highest derivative f⁽ᵏ⁾ remain flexible.
Several functions of single variable with single derivative
If the problem involves finding several functions f₁, ..., f_m of a single independent variable x that define an extremum of the functional

  I[f₁, ..., f_m] = ∫_{x₀}^{x₁} L(x, f₁, ..., f_m, f₁′, ..., f_m′) dx,

then the corresponding Euler–Lagrange equations are

  ∂L/∂f_i − (d/dx) ∂L/∂f_i′ = 0,  for i = 1, ..., m.

Single function of several variables with single derivative
A multi-dimensional generalization comes from considering a function on n variables. If Ω is some surface, then

  I[f] = ∫_Ω L(x₁, ..., x_n, f, f_{x₁}, ..., f_{x_n}) dx

is extremized only if f satisfies the partial differential equation

  ∂L/∂f − Σ_{i=1}^{n} (∂/∂x_i) ∂L/∂f_{x_i} = 0.

When n = 2 and the functional is the energy functional, this leads to the soap-film minimal surface problem.

Several functions of several variables with single derivative
If there are several unknown functions f₁, ..., f_m to be determined and several variables x₁, ..., x_n such that

  I[f₁, ..., f_m] = ∫_Ω L(x₁, ..., x_n, f₁, ..., f_m, f_{1,x₁}, ..., f_{m,x_n}) dx,

the system of Euler–Lagrange equations is

  ∂L/∂f_j − Σ_{i=1}^{n} (∂/∂x_i) ∂L/∂f_{j,x_i} = 0,  for j = 1, ..., m.

Single function of two variables with higher derivatives
If there is a single unknown function f to be determined that is dependent on two variables x₁ and x₂, and if the functional depends on higher derivatives of f up to n-th order, then the Euler–Lagrange equation is

  ∂L/∂f + Σ_{j=1}^{n} Σ_{μ₁ ≤ ... ≤ μ_j} (−1)ʲ (∂ʲ/∂x_{μ₁}...∂x_{μ_j}) ∂L/∂f_{μ₁...μ_j} = 0,

wherein μ₁, ..., μ_j are indices that span the number of variables, that is, here they go from 1 to 2. Here summation over the indices is only over μ₁ ≤ μ₂ ≤ ... ≤ μ_j in order to avoid counting the same partial derivative multiple times; for example f_{12} = f_{21} appears only once in the previous equation.

Several functions of several variables with higher derivatives
If there are p unknown functions f_i to be determined that are dependent on m variables x₁ ... x_m, and if the functional depends on higher derivatives of the f_i up to n-th order, where μ₁, ..., μ_j are indices that span the number of variables (that is, they go from 1 to m), then the Euler–Lagrange equations are

  ∂L/∂f_i + Σ_{j=1}^{n} Σ_{μ₁ ≤ ... ≤ μ_j} (−1)ʲ (∂ʲ/∂x_{μ₁}...∂x_{μ_j}) ∂L/∂f_{i,μ₁...μ_j} = 0,

where the summation over the μ avoids counting the same derivative several times, just as in the previous subsection. This can be expressed more compactly as

  Σ_{j=0}^{n} Σ_{μ₁ ≤ ... ≤ μ_j} (−1)ʲ ∂ʲ_{μ₁...μ_j} (∂L/∂f_{i,μ₁...μ_j}) = 0.

Field theories

Generalization to manifolds
Let M be a smooth manifold, and let C^∞([a, b]) denote the space of smooth functions f : [a, b] → M. Then, for functionals S : C^∞([a, b]) → ℝ of the form

  S[f] = ∫_a^b (L ∘ ḟ)(t) dt,

where L : TM → ℝ is the Lagrangian, the statement dS_f = 0 is equivalent to the statement that, for all t ∈ [a, b], each coordinate frame trivialization (xⁱ, Xⁱ) of a neighborhood of ḟ(t) yields the following equations:

  (d/dt) ∂L/∂Xⁱ |_{ḟ(t)} = ∂L/∂xⁱ |_{ḟ(t)}.

Euler–Lagrange equations can also be written in a coordinate-free form as

  𝓛_Δ θ_L = dL,

where θ_L is the canonical momenta 1-form corresponding to the Lagrangian L. The vector field generating time translations is denoted by Δ and the Lie derivative is denoted by 𝓛. One can use local charts (qᵅ, q̇ᵅ), in which θ_L = (∂L/∂q̇ᵅ) dqᵅ, and use coordinate expressions for the Lie derivative to see equivalence with coordinate expressions of the Euler–Lagrange equation. The coordinate-free form is particularly suitable for geometrical interpretation of the Euler–Lagrange equations.
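As a concrete check of the shortest-path example above, the Euler–Lagrange expression can be derived symbolically. A minimal sketch, assuming Python with the sympy library is available (the variable names are illustrative):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Integrand of the arc-length functional: L = sqrt(1 + y'^2)
yp = y(x).diff(x)
L = sp.sqrt(1 + yp**2)

# Euler-Lagrange expression: dL/dy - d/dx (dL/dy')
el = sp.diff(L, y(x)) - sp.diff(sp.diff(L, yp), x)

# Setting this to zero forces y'' = 0, i.e. a straight line.
print(sp.simplify(el))   # proportional to y'' / (1 + y'^2)**(3/2)

sympy also ships a helper, euler_equations in sympy.calculus.euler, that automates the same computation for general Lagrangians.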
Shoal
In oceanography, geomorphology, and geoscience, a shoal is a natural submerged ridge, bank, or bar that consists of, or is covered by, sand or other unconsolidated material and rises from the bed of a body of water to near the surface or above it, posing a danger to navigation. Shoals are also known as sandbanks, sandbars, or gravelbars. Two or more shoals that are either separated by shared troughs or interconnected by past or present sedimentary and hydrographic processes are referred to as a shoal complex.

The term shoal is also used in a number of ways that can be either similar to, or quite different from, how it is used in the geologic, geomorphic, and oceanographic literature. Sometimes the term refers to any relatively shallow place in a stream, lake, sea, or other body of water; to a rocky area on the seafloor within an area mapped for navigation purposes; or to a growth of vegetation on the bottom of a deep lake that occurs at any depth. It is also used as a verb for the process of proceeding from a greater to a lesser depth of water.

Description
Shoals are characteristically long and narrow (linear) ridges. They can develop where a stream, river, or ocean current promotes deposition of sediment and granular material, resulting in localized shallowing (shoaling) of the water. Marine shoals also develop either by the in-place drowning of barrier islands as the result of episodic sea level rise, or by the erosion and submergence of inactive delta lobes.

Shoals can appear as a coastal landform in the sea, where they are classified as a type of ocean bank, or as fluvial landforms in rivers, streams, and lakes. A shoal-sandbar may seasonally separate a smaller body of water from the sea, such as:
Marine lagoons
Brackish water estuaries
Freshwater seasonal stream and river mouths and deltas

The term bar can apply to landform features spanning a considerable range in size, from a length of a few meters in a small stream to marine depositions stretching for hundreds of kilometers along a coastline, often called barrier islands.

Composition
Shoals are typically composed of sand, although they can consist of any granular matter that the moving water has access to and is capable of shifting around (for example, soil, silt, gravel, cobble, shingle, or even boulders). The grain size of the material comprising a bar is related to the size of the waves or the strength of the currents moving the material, but the availability of material to be worked by waves and currents is also important.

Formation
Wave shoaling is the process by which surface waves, as they move towards shallow water such as a beach, slow down, their wave height increases, and the distance between waves decreases. This behavior is called shoaling, and the waves are said to shoal. The waves may or may not build to the point where they break, depending on how large they were to begin with and how steep the slope of the beach is. In particular, waves shoal as they pass over submerged sandbanks or reefs. This can be treacherous for boats and ships.

Shoaling can also refract waves, so that the waves change direction. For example, if waves pass over a sloping bank which is shallower at one end than the other, then the shoaling effect will result in the waves slowing more at the shallow end. Thus, the wave fronts will refract, changing direction like light passing through a prism. Refraction also occurs as waves move towards a beach if the waves come in at an angle to the beach, or if the beach slopes more gradually at one end than the other.
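The slowing described above can be quantified in the shallow-water limit, where wave speed depends only on depth (c = sqrt(g·h)) and, by conservation of energy flux, wave height grows as depth shrinks (Green's law, H proportional to h^(-1/4)). The following is a minimal illustrative sketch in Python; the depths and initial wave height are assumed values chosen only to show the effect:

import math

g = 9.81  # gravitational acceleration, m/s^2

def wave_speed(depth_m: float) -> float:
    """Shallow-water wave speed c = sqrt(g * h)."""
    return math.sqrt(g * depth_m)

def shoaled_height(h0: float, d0: float, d1: float) -> float:
    """Green's law: wave height scales as depth^(-1/4)."""
    return h0 * (d0 / d1) ** 0.25

# Illustrative numbers: a 1 m wave moving from 10 m to 2 m depth.
print(f"speed at 10 m depth: {wave_speed(10):.1f} m/s")          # ~9.9 m/s
print(f"speed at  2 m depth: {wave_speed(2):.1f} m/s")           # ~4.4 m/s
print(f"height at 2 m depth: {shoaled_height(1.0, 10, 2):.2f} m")  # ~1.50 m

The same depth-dependent slowing is what drives the refraction described above: the end of a wave front sitting over shallower water travels more slowly, so the front pivots toward the shallows.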
Types

Sandbars and longshore bars
Sandbars, also known as trough bars, form where the waves are breaking, because the breaking waves set up a shoreward current with a compensating counter-current along the bottom. Sometimes this occurs seaward of a trough (marine landform). Sand carried by the offshore-moving bottom current is deposited where the current reaches the wave break. Other longshore bars may lie further offshore, representing the break point of even larger waves, or the break point at low tide.

Peresyp
In the Russian tradition of geomorphology, a peresyp is a sandbar that rises above the water level (like a spit) and separates a liman or a lagoon from the sea. Unlike tombolo bars, a peresyp seldom forms a contiguous strip and usually has one or several channels that connect the liman and the sea. (Федченко Г.П., О самосадочной соли и соляных озерах Каспийского и Азовского бассейнов [On self-deposited salt and the salt lakes of the Caspian and Azov basins], 1870, p. 54)

Harbor and river bars
A harbor or river bar is a sedimentary deposit formed at a harbor entrance or river mouth by the deposition of freshwater sediment or by the action of waves on the sea floor or on up-current beaches. Where beaches are suitably mobile, or the river's suspended or bed loads are large enough, deposition can build up a sandbar that completely blocks a river mouth and dams the river. This can be a seasonally natural process of aquatic ecology, causing the formation of estuaries and wetlands in the lower course of the river. The situation will persist until the bar is eroded by the sea, or the dammed river develops sufficient head to break through the bar.

The formation of harbor bars that prevent access for boats and shipping can be the result of:
construction up-coast or at the harbor, e.g. breakwaters, dune habitat destruction;
upriver development, e.g. dams and reservoirs, riparian zone destruction, river bank alterations, agricultural practices on land adjacent to the river, water diversions;
watershed erosion from habitat alterations, e.g. deforestation, wildfires, grading for development;
artificially created or deepened harbors that require periodic dredging maintenance.

Nautical navigation
In a nautical sense, a bar is a shoal, similar to a reef: a shallow formation of (usually) sand that is a navigation or grounding hazard. The term therefore applies to a silt accumulation that shallows the entrance to, or course of, a river or creek. A bar can form a dangerous obstacle to shipping, preventing access to the river or harbor in poor weather conditions or at some states of the tide.

Geological units
In addition to the longshore bars discussed above, which are relatively small features of a beach, the term shoal can be applied to larger geological units that form off a coastline as part of the process of coastal erosion, such as spits and baymouth bars that form across the front of embayments and rias. A tombolo is a bar that forms an isthmus between an island or offshore rock and a mainland shore. In places of re-entrance along a coastline (such as inlets, coves, rias, and bays), sediments carried by a longshore current will fall out where the current dissipates, forming a spit. An area of water isolated behind a large bar is called a lagoon. Over time, lagoons may silt up, becoming salt marshes. In some cases, shoals may be precursors to beach expansion and dune formation, providing a source of windblown sediment to augment such beach or dune landforms.
Human habitation
Since prehistoric times, humans have chosen some shoals as sites of habitation. In some early cases, the locations provided easy access to marine resources. In modern times, these sites are sometimes chosen for the water amenity or view, but many such locations are prone to storm damage. An area in northwest Alabama is commonly referred to as "The Shoals" by local inhabitants; one of its cities, Muscle Shoals, is named for such a landform and its abundance of mussels.
Galilean transformation
In physics, a Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics. These transformations, together with spatial rotations and translations in space and time, form the inhomogeneous Galilean group (assumed throughout below). Without the translations in space and time, the group is the homogeneous Galilean group. The Galilean group is the group of motions of Galilean relativity acting on the four dimensions of space and time, forming the Galilean geometry. This is the passive transformation point of view. In special relativity the homogeneous and inhomogeneous Galilean transformations are, respectively, replaced by the Lorentz transformations and Poincaré transformations; conversely, the group contraction in the classical limit of Poincaré transformations yields Galilean transformations.

The equations below are only physically valid in a Newtonian framework, and not applicable to coordinate systems moving relative to each other at speeds approaching the speed of light. Galileo formulated these concepts in his description of uniform motion. The topic was motivated by his description of the motion of a ball rolling down a ramp, by which he measured the numerical value for the acceleration of gravity near the surface of the Earth.

Translation
Although the transformations are named for Galileo, it is absolute time and space as conceived by Isaac Newton that provides their domain of definition. In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities as vectors.

The notation below describes the relationship under the Galilean transformation between the coordinates (x, y, z, t) and (x′, y′, z′, t′) of a single arbitrary event, as measured in two coordinate systems S and S′, in uniform relative motion (velocity v) in their common x and x′ directions, with their spatial origins coinciding at time t = t′ = 0:

  x′ = x − vt
  y′ = y
  z′ = z
  t′ = t

Note that the last equation holds for all Galilean transformations up to addition of a constant, and expresses the assumption of a universal time independent of the relative motion of different observers.

In the language of linear algebra, this transformation is considered a shear mapping, and is described with a matrix acting on a vector. With motion parallel to the x-axis, the transformation acts on only two components:

  ( x′ )   ( 1  −v ) ( x )
  ( t′ ) = ( 0   1 ) ( t )

Though matrix representations are not strictly necessary for the Galilean transformation, they provide the means for direct comparison to transformation methods in special relativity.

Galilean transformations
The Galilean symmetries can be uniquely written as the composition of a rotation, a translation and a uniform motion of spacetime. Let x represent a point in three-dimensional space, and t a point in one-dimensional time. A general point in spacetime is given by an ordered pair (x, t). A uniform motion, with velocity v, is given by (x, t) ↦ (x + tv, t), where v ∈ ℝ³. A translation is given by (x, t) ↦ (x + a, t + s), where a ∈ ℝ³ and s ∈ ℝ. A rotation is given by (x, t) ↦ (Rx, t), where R : ℝ³ → ℝ³ is an orthogonal transformation. As a Lie group, the group of Galilean transformations has dimension 10.

Galilean group
Two Galilean transformations G(R, v, a, s) and G(R′, v′, a′, s′) compose to form a third Galilean transformation,

  G(R′, v′, a′, s′) · G(R, v, a, s) = G(R′R, R′v + v′, R′a + a′ + v′s, s′ + s).

The set of all Galilean transformations forms a group with composition as the group operation. The group is sometimes represented as a matrix group with spacetime events (x, t, 1) as vectors. The action is given by

  (x, t, 1) ↦ (Rx + vt + a, t + s, 1),

where s is real, x, v, a ∈ ℝ³, and R is a rotation matrix.
The composition of transformations is then accomplished through matrix multiplication. Care must be taken in the discussion whether one restricts oneself to the connected component group of the orthogonal transformations.

Gal(3) has named subgroups; the identity component is denoted SGal(3). In terms of the transformation parameters (R, v, a, s), these include:
the anisotropic transformations;
the isochronous transformations;
the spatial Euclidean transformations;
the uniformly special transformations / homogeneous transformations, isomorphic to Euclidean transformations;
the shifts of origin / translations in Newtonian spacetime;
the rotations (of reference frame) (see SO(3)), a compact group;
the uniform frame motions / boosts.

The parameters span ten dimensions. Since the transformations depend continuously on s, v, R, a, Gal(3) is a continuous group, also called a topological group. The structure of Gal(3) can be understood by reconstruction from subgroups; the semidirect product combination of groups is required, one factor being a normal subgroup.

Origin in group contraction
The Lie algebra of the Galilean group is spanned by H, P_i, C_i and L_ij (an antisymmetric tensor), subject to commutation relations, where H is the generator of time translations (Hamiltonian), P_i is the generator of translations (momentum operator), C_i is the generator of rotationless Galilean transformations (Galilean boosts), and L_ij stands for a generator of rotations (angular momentum operator). This Lie algebra is seen to be a special classical limit of the algebra of the Poincaré group, in the limit c → ∞. Technically, the Galilean group is a celebrated group contraction of the Poincaré group (which, in turn, is a group contraction of the de Sitter group SO(4,1)). Formally, renaming the generators of momentum and boost of the latter as in P₀ ↦ H/c, K_i ↦ c·C_i, where c is the speed of light (or any unbounded function thereof), the commutation relations (structure constants) in the limit c → ∞ take on the relations of the former. Generators of time translations and rotations are identified. Also note the group invariants L_mn L_mn and P_i P_i.

In matrix form, for d = 3, one may consider the regular representation (embedded in GL(5; ℝ), from which it could be derived by a single group contraction, bypassing the Poincaré group):

  G(R, v, a, s) =
  ( R  v  a )
  ( 0  1  s )
  ( 0  0  1 )

The infinitesimal group element is then the identity matrix plus the corresponding matrix of infinitesimal parameters (δR, δv, δa, δs).

Central extension of the Galilean group
One may consider a central extension of the Lie algebra of the Galilean group, spanned by H′, P′_i, C′_i, L′_ij and an operator M. The so-called Bargmann algebra is obtained by imposing [C′_i, P′_j] = iMδ_ij, such that M lies in the center, i.e. commutes with all other operators. In full, this algebra is given as

  [H′, P′_i] = 0, [P′_i, P′_j] = 0, [L′_ij, H′] = 0, [C′_i, C′_j] = 0,
  [L′_ij, L′_kl] = i(δ_ik L′_jl − δ_il L′_jk − δ_jk L′_il + δ_jl L′_ik),
  [L′_ij, P′_k] = i(δ_ik P′_j − δ_jk P′_i),
  [L′_ij, C′_k] = i(δ_ik C′_j − δ_jk C′_i),
  [C′_i, H′] = i P′_i,

and finally

  [C′_i, P′_j] = iMδ_ij,

where the new parameter M shows up. This extension, and the projective representations that it enables, is determined by its group cohomology.
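The matrix representation above can be exercised numerically. A minimal sketch, assuming Python with numpy is available (the velocities, translations, and event coordinates are arbitrary illustrative values):

import numpy as np

def galilean_matrix(R, v, a, s):
    """5x5 matrix G(R, v, a, s) acting on events (x, t, 1)."""
    G = np.eye(5)
    G[:3, :3] = R
    G[:3, 3] = v
    G[:3, 4] = a
    G[3, 4] = s
    return G

# A boost of 3 m/s along x, with no rotation or translation:
boost = galilean_matrix(np.eye(3), np.array([3.0, 0.0, 0.0]),
                        np.zeros(3), 0.0)

# An event at x = (1, 2, 0), t = 10:
event = np.array([1.0, 2.0, 0.0, 10.0, 1.0])

print(boost @ event)   # x -> x + v t = (31, 2, 0); t unchanged

# Composition of two transformations is just matrix multiplication:
shift = galilean_matrix(np.eye(3), np.zeros(3),
                        np.array([0.0, 0.0, 5.0]), 2.0)
print((shift @ boost) @ event)

Applying the two matrices in sequence reproduces the composition rule G(R′, v′, a′, s′) · G(R, v, a, s) stated above, which is the point of the matrix embedding.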
Bowden cable
A Bowden cable is a type of flexible cable used to transmit mechanical force or energy by the movement of an inner cable relative to a hollow outer cable housing. The housing is generally of composite construction, consisting of an inner lining, a longitudinally incompressible layer such as a helical winding or a sheath of steel wire, and a protective outer covering.

The linear movement of the inner cable may be used to transmit pull force, or both push and pull forces. Many light aircraft use a push/pull Bowden cable for the throttle control, and here it is normal for the inner element to be a solid wire, rather than a multi-strand cable.

Usually, provision is made for adjusting the cable tension using an inline hollow bolt (often called a "barrel adjuster"), which lengthens or shortens the cable housing relative to a fixed anchor point. Lengthening the housing (turning the barrel adjuster out) tightens the cable; shortening the housing (turning the barrel adjuster in) loosens the cable.

History
The origin and invention of the Bowden cable are open to some dispute, confusion and myth. The invention of the Bowden cable has been popularly attributed to Sir Frank Bowden, one-time owner of the Raleigh Bicycle Company who, circa 1902, was reputed to have started replacing the rigid rods used for brakes with a flexible wound cable, but no evidence for this exists.

The Bowden mechanism was invented by Irishman Ernest Monnington Bowden (1860 to April 3, 1904) of 35 Bedford Place, London, W.C. The first patent was granted in 1896 (English Patent 25,325 and U.S. Pat. No. 609,570), and the invention was reported in the Automotor Journal of 1897, where Bowden's address was given as 9 Fopstone Rd, Earls Court. The two Bowdens are not known to be closely related. The principal element of the invention was a flexible tube (made from hard-wound wire and fixed at each end) containing a length of fine wire rope that could slide within the tube, directly transmitting pulling, pushing or turning movements on the wire rope from one end to the other without the need of pulleys or flexible joints. The cable was particularly intended for use in conjunction with bicycle brakes.

The Bowden Brake was launched amidst a flurry of enthusiasm in the cycle press in 1896. It consisted of a stirrup, pulled up by the cable from a handlebar-mounted lever, with rubber pads acting against the rear wheel rim. At this date bicycles were fixed-wheel (no freewheel), additional braking being offered by a 'plunger' brake pressing on the front tyre. The Bowden offered extra braking power still, and was novel enough to appeal to riders who scorned the plunger arrangement, which was heavy and potentially damaging to the (expensive) pneumatic tyre. The problem for Bowden was his failure to develop effective distribution networks; the brake was often incorrectly or inappropriately fitted, resulting in a good number of complaints being aired in the press. Its most effective application was on those machines fitted with Westwood-pattern steel rims, which offered flat contact surfaces for the brake pads.

The potential of the Bowden cable and the associated brake was not to be fully realised until the freewheel sprocket became a standard feature of bicycles over the period 1899-1901, and increasing numbers of applications were found for it, such as gear-change mechanisms. Importantly, in 1903 Hendee developed the twist-grip throttle using a similar cable for his 'Indian' motorcycles.
Its lightness and flexibility recommended it to further automotive uses such as clutch and speedometer drive cables. It is reported that "on 12th January 1900 E. M. Bowden granted a licence to The Raleigh Cycle Company of Nottingham", whose directors were Frank Bowden and Edward Harlow. At this signing they became members of 'E. M. Bowden's Patent Syndicate Limited'. The syndicate included, among others, R. H. Lea & Graham I. Francis of Lea & Francis Ltd, and William Riley of the Riley Cycle Company. The Raleigh company were soon offering the Bowden Brake as an accessory, and were quick to incorporate the cable into handlebar-mounted Sturmey-Archer gear changes (a company in which they had a major interest). Undoubtedly this is why E. Bowden and F. Bowden are sometimes confused today.

Early Bowden cable, from the 1890s and the first years of the twentieth century, is characterised by the outer tube being wound from round wire and being uncovered. At the ends it is usually fitted with a brass collar marked 'BOWDEN PATENT' (this legend is also stamped into other components of the original brake). More modern versions have their outer tube wound from square-section wire. The cable later came to be covered in a waterproofed fabric sheath; in the early post-war period this was replaced by plastic.

Possible contribution by Larkin
An unpublished typescript exists in the archives of the National Motor Museum, written by the son of one of Bowden's employees, that attempts to claim the invention of the cable for his father, to the point of suggesting that it was never applied to bicycles before 1902. Although this is easily disproved by reference to 'Cycling' or the other UK cycle press through 1896-97, it serves to remind one of the attempts made to rewrite cycle history through priority claims.

In this narrative, recorded in the British National Archives, a flexible cable brake for cycles was separately 'invented' by George Frederick Larkin, a skilled automobile and motorcycle engineer, who patented his design in 1902. He was subsequently recruited by, and worked for, E.M. Bowden until 1917 as General Works Manager.

"George Larkin is known for his invention of the flexible cable brake for cycles, which was patented in 1902. The original patent for a similar invention known as the 'Bowden mechanism' was granted to Ernest Monnington Bowden in 1896. The following year E.M. Bowden's Patents Syndicate Ltd. was formed to market the device but initially the project was a failure because all the company could offer was a flimsy mechanism capable of transmitting comparatively enormous power. The Bowden Mechanism was not developed in connection with a cycle brake as there is no record of the cable having been associated with the cycle industry until 1902, when George Larkin's invention was patented."

"During Larkin's employment with Bassett Motor Syndicate his duties included the assembly of motor cars and motor cycles, and a major difficulty was the assembly of the braking systems which at that time comprised steel rods, not easily adaptable to the contour of the chassis. He designed a flexible cable brake and approached S.J. Withers, Patent Agent, to have the design patented. Withers noticed the similarity of Larkin's idea to the Bowden Mechanism and introduced him to the Bowden Syndicate, who agreed to manufacture and market the invention with the proviso that it should be patented jointly in the names of the inventor and themselves. Within a few months, Larkin, then aged 23, was engaged as Motor Department Manager with E.M.
Bowden's Patents Syndicate, and he was appointed General Works Manager on 1 May 1904."

Parts and variations

Housing
The original, standard Bowden cable housing consists of a close-wound helix of round or square steel wire. This makes a flexible housing but causes the length to change as the housing flexes. Because on the inside of a bend the turns of a close-wound helix cannot get any closer together, bending causes the turns to separate on the outside of the bend, and so at the centerline of the housing there must be an increase of length with increasing bend.

In order to support indexed shifting, Shimano developed a type of housing that does not change length as it is flexed. This housing has several wire strands running in a multiple helix, with a pitch short enough that bends in the cable are shared by all strands, but long enough that the housing's flexibility comes from bending the individual strands rather than from twisting them. A consequence of a long winding pitch in a support helix is that it approaches the case of parallel strands, where the wires are bound only by the plastic jacket. Housings with a long helix cannot withstand the high compression that is associated with high cable tensions, and on overload tend to fail by the buckling of the housing strands. For this reason, helical support for brake cables is close-wound, while housings with a longer helix are used for less critical applications. Longitudinally arranged support wires are used in applications such as bicycle gear-shifting.

A third type of housing consists of short hollow rigid aluminum or carbon fiber cylinders slid over a flexible liner. Claimed benefits over steel wire housing include less weight, tighter curves, and less compression under load.

Inner wire
Inner wire ropes for push applications have an additional winding that runs in the opposite direction to the wind of the actual inner wire. The winding may be like that of a spring, or a winding with a flat strip; these are called spring wrap and spiral wrap respectively. Some applications such as lawn mower throttles, automobile manual chokes, and some bicycle shifting systems require significant pushing ability and so use a cable with a solid inner wire. These cables are usually less flexible than ones with stranded inner wires.

Ends
One end of the inner cable may have a small shaped piece of metal, known (from the pear-shaped soldered terminations used in some cases) as a nipple (as can be seen in the BMX rear brake detangler picture), that fits into a shifter or brake lever mechanism. The other end is often clamped (as can be seen in the rear derailleur picture) to the part of the brake or shifter that needs to be moved, or, as is most common with motorcycle control cables, fitted with another nipple.

Traditionally, in bicycles, shifter cables are anchored on the shifter with a small cylindrical nipple, concentric with the cable. Bicycle brake nipples, however, vary between mountain bikes (MTB), with straight handlebars, and road bikes, with drop handlebars. MTB bikes use a barrel-shaped (cylindrical) nipple to anchor the brake cable at the brake lever, while road bikes have a pear-shaped nipple. Some replacement brake cables for bicycles come with both styles, one on each end; the unneeded end is cut off and discarded upon installation.

In bicycle applications, for both brakes and gear-shifting, the outer dimension of the cap or ferrule that terminates a housing is selected to make a loose fit within the barrel adjuster's end.
In this way the barrel will slip on the ferrule as it is turned during adjustments. If the ferrule were to be jammed into the barrel end, then the cable would twist during fruitless attempts at adjustment. Nipples are also available separately from the cable, for purposes of repair or custom cable construction. They are fitted to the cable by soldering. Where free rotation of nipples relative to the cable axis is required, the cable end may be finished with a brass ferrule or "trumpet" soldered to the cable. The barrel nipple will be a sliding fit over the brass ferrule, and can thus rotate, ensuring alignment of the nipples at each end of the cable and avoiding twisting of the inner cable. Applying heat to the inner cable for soldering may weaken the steel, and although soft soldering is less strong than silver solder, a lower temperature is required to form the joint, and there is less likelihood of the inner cable being damaged as a result. Silver soldering may require additional heat treatment of the wire to preserve its temper and prevent it from becoming too soft or too brittle. Nipples that clamp to the cable by means of a screw are also available, for emergency repair purposes or where removal is required for maintenance. A small ferrule, also called a 'crimp' (seen in the rear derailleur picture), may be crimped on to prevent cable ends from fraying. Other methods to prevent fraying include soft or silver soldering the wire ends, or, ideally, flash cutting the wires. If the inner wire is solid, as in automotive and lawnmower throttle and choke applications, it may simply have a bend at one or both ends to engage whatever it pushes or pulls. Donuts Small rubber tori, called donuts, can be threaded onto a bare run of the inner cable to prevent it from striking the bicycle frame, causing rattles or abrasion. Common practice The indexed shifting of a bicycle derailleur needs to be exact. A typical 7-speed shifter changes the cable length by either 2.9 mm (Shimano 2:1) or 4.5 mm (SRAM 1:1) for each shift, and any length errors accumulate with the number of shifts. To this end, the housing needs to behave as if it were a solid tube, so it and its end-pieces have to be compressionless. Currently, the most commonly used compressionless housing for gear shifting has longitudinally arranged steel wires. The flat-cut ends of such housing are tight-terminated with end caps or ferrules, and the end caps are sized to fit either into a fixture on the frame, or as a loose fitting in the end of a barrel adjuster. Housings for bicycle brakes need not be quite so compressionless, but need to be stronger, and items currently sold for this purpose use a close-wound spiral support wire. One end of a brake housing is tight-terminated in an end cap or ferrule that makes a loose fit within a barrel adjuster, and the other in any of a variety of fittings that include end caps or parts to effect a smooth change of direction. In any case, a nipple is fitted at the point where the cable emerges for attachment to the brake arms. In view of the wide array of cable constructions on offer, confusion as to the best housing can easily result. In general, a housing sold for one purpose should not be used for any other, and in any case the advice of the manufacturer should be followed. In particular, longitudinally reinforced housings should not be used for bicycle brakes, since they are weaker than the spirally wound housings.
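To make the indexing arithmetic above concrete, the following sketch uses the per-shift pulls quoted above together with an assumed, illustrative per-shift compression loss (not a measured figure) to show how quickly housing compression accumulates into a mis-shift:

```python
# Illustrative sketch: accumulation of indexing error when housing compresses.
# Per-shift pulls match the figures quoted above; the compression loss per
# shift is an assumed value for illustration only.

PULL_SHIMANO_MM = 2.9        # cable pulled per shift, Shimano 2:1 (7-speed)
PULL_SRAM_MM = 4.5           # cable pulled per shift, SRAM 1:1
LOSS_PER_SHIFT_MM = 0.3      # assumed housing compression per shift

for shifts in (1, 3, 6):
    error = shifts * LOSS_PER_SHIFT_MM
    print(f"{shifts} shift(s): cumulative error {error:.1f} mm "
          f"= {error / PULL_SHIMANO_MM:.0%} of one Shimano shift")
```

After six shifts the assumed loss already amounts to well over half of one shift's cable travel, which is why compressionless housing and snugly fitted end caps matter for indexed systems.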
Housings for bicycles are made in two main diameters; most often 4 mm diameter is used for gear-shifting and 5 mm for brakes. Both shifting and brake housings are manufactured in both sizes. However, some care is needed when changing cables, since, for example, the 4 mm barrel-adjuster end of an existing shifter is probably made only for that housing size. Although the individual parts for cable assembly can be acquired, ready-made cables for both brakes and shifting are available. These usually consist of an inner wire within a length of housing and, depending on their purpose, with one or more end caps fitted. However, because of the wide range of fittings in use, it is probable that these universal cables' caps, although suiting many situations, will not suit every purpose, despite what the name 'universal' implies. The shortening of housings requires the use of a special hand tool, designed to make a square cut without closing the cable entry. The same tool is used to cut the internal steel cable. To avoid unraveling of the cable's wires during installation, manufacturers weld or crimp the ends. Cable housings have traditionally been made only in black, though some colored housings can also be found. Maintenance Bowden cables can cease to function smoothly, particularly if water or contaminants get into the housing. Modern lined and stainless steel cables are less prone to these problems; unlined housings should be lubricated with a light machine oil. In cold climates Bowden cable mechanisms are prone to malfunction due to water freezing. Cables also wear with long use, and can be damaged through kinking or unraveling. A common failure occurs on bicycles at the point where the housing enters a barrel adjuster; loose housing ends tend to fray the housing, making adjustments uncertain. Fraying due to fatigue is most likely if the cable passes over a pulley, which on bicycles is often below the recommended diameter, or where the cable is bent repeatedly where it attaches to the brake lever or caliper. A cable passing around a sharp bend tends to furrow the inner cable sleeve, leading to contact with the outer housing and rub fraying. A frayed cable can suddenly break when force is applied strongly, e.g. during emergency braking. The specifications for cables and housings rarely give any details other than dimensions and the purpose of the products. The specific resistance to compression or bending is never quoted, so there is much anecdotal evidence and comment as to the performance and durability of products, but little available science for the consumer's use. A particularly severe quality test for housings is at or near the hinge of a folding bicycle, where a sharp bend is made repeatedly. The radius of curvature of cables on a folded bicycle can be as low as 1.5 inches (4 cm); therefore it is advisable to shift to the gear with the lowest cable tension before folding, to minimize any adverse effects on housings or derailleurs. This gear is usually the one with the highest index number on the gear shifter. There is some controversy surrounding the existence of the phenomenon known as "cable stretch". Newly installed cables can seem to elongate, requiring readjustment. While it is generally agreed that inner wires actually stretch very little, if at all, housings and linings may compress slightly, and all parts may generally "settle in". Lightweight assemblies such as those used on bicycles are more susceptible to this phenomenon.
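The housing length change described under Parts and variations, and the tight bends at a folding bicycle's hinge, can be roughly quantified. Under the simplifying assumption that the inside edge of a bent close-wound housing cannot compress, so the neutral axis sits at the inside edge, the centerline lengthens by about (d/2)·θ for a bend angle θ and housing diameter d:

```python
import math

# Simplified geometric model (an assumption, not a manufacturer figure):
# if the inside edge of a bent close-wound housing keeps constant length,
# the centerline grows by (d/2) * theta for a bend of theta radians.

HOUSING_DIAMETER_MM = 5.0  # typical brake housing diameter

for degrees in (45, 90, 180):
    theta = math.radians(degrees)
    growth_mm = (HOUSING_DIAMETER_MM / 2) * theta
    print(f"{degrees:>3} degree bend -> centerline grows ~{growth_mm:.1f} mm")
```

On this crude model a single 90-degree bend in 5 mm housing adds roughly 3.9 mm of effective length, comparable to a full indexed shift, which is consistent with the advice above to relieve cable tension before folding.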
Uses
Sustain pedal linkage on Wurlitzer electric pianos
Bicycle brake and gear-shift cables
Photographic shutter release cables
Automotive clutch, throttle/cruise control, emergency brake, and various latch release cables
Aircraft engine controls, including throttle or power control, propeller pitch or RPM, fuel mixture, carburetor heat, and cowl flaps
Motorcycle throttle, clutch and (now rarely) brake cables
Control surfaces on small aircraft
Remote hi-hats in drum kits
Operating the terminal device hook on prosthetic arms
Lawn mower throttle and dead man's switch
Interlocking in electrical switchgear
Trigger activation for remotely located machine guns on French 1930s-era Char B1 bis tanks, and for the DISA tripod used with the Madsen machine gun
Filament feed in many 3-D printers of the type that extrude plastic filament: a "Bowden extruder" feeds the filament through a tube, typically PTFE ('Teflon') to minimize friction, to the 'hot end', where it is melted and deposited through a nozzle. This has the advantage of shifting the mass of the extruder's mechanism and stepper motor from the moving hot end to a fixed mount on the printer's frame, allowing greater printing speed and accuracy (a toy calculation follows this list).
Flexible drive shafts for 'Dremel'-type rotary multi-tools, die grinder attachments for bench grinders, and the like
Connecting the flush control to the flapper in some flush toilets
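The moving-mass advantage of the Bowden extruder mentioned above can be illustrated with a toy calculation; all masses and the drive force below are invented for illustration, but the principle (F = m·a at a fixed force) is general:

```python
# Toy comparison of print-head acceleration, direct drive vs. Bowden.
# All values are assumed for illustration; F = m * a at a fixed drive force.

FORCE_N = 8.0                  # assumed force from the motion system
HOTEND_MASS_KG = 0.10          # assumed hot-end and carriage mass
EXTRUDER_MOTOR_MASS_KG = 0.30  # assumed extruder stepper and drive mass

setups = {
    "direct drive": HOTEND_MASS_KG + EXTRUDER_MOTOR_MASS_KG,
    "Bowden": HOTEND_MASS_KG,
}
for name, mass in setups.items():
    print(f"{name:>12}: a = {FORCE_N / mass:.0f} m/s^2")
```

With these assumed numbers the Bowden arrangement quadruples the available acceleration, at the cost of the longer, springier filament path the tube introduces.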
Urban ecology
Urban ecology is the scientific study of the relation of living organisms with each other and their surroundings in an urban environment. An urban environment refers to environments dominated by high-density residential and commercial buildings, paved surfaces, and other urban-related factors that create a unique landscape. The goal of urban ecology is to achieve a balance between human culture and the natural environment. Urban ecology is a recent field of study compared to ecology. Currently, most of the information in this field is based on easier-to-study mammal and bird species. To close the gap in knowledge, attention should be paid to all species in the urban space, such as insects and fish. Study should also expand to suburban spaces, with their unique mix of development and surrounding nature. The methods and studies of urban ecology are a subset of ecology. The study of urban ecology carries increasing importance because more than 50% of the world's population today lives in urban areas. It is also estimated that within the next 40 years, two-thirds of the world's population will be living in expanding urban centers. The ecological processes in the urban environment are comparable to those outside the urban context. However, the types of urban habitats and the species that inhabit them are poorly documented, which is why more research should be done in urban ecology. History Historically, ecology has focused on natural environments, but by the 1970s many ecologists began to turn their interest towards ecological interactions taking place in, and caused by, urban environments. In the nineteenth century, naturalists such as Malthus, De Candolle, Lyell, and Darwin found that competition for resources was crucial in controlling population growth and driving extinction. This concept was the basis of evolutionary ecology. Jean-Marie Pelt's 1977 book The Re-Naturalized Human, Brian Davis' 1978 publication Urbanization and the diversity of insects, and Sukopp et al.'s 1979 article "The soil, flora and vegetation of Berlin's wastelands" are some of the first publications to recognize the importance of urban ecology as a separate and distinct form of ecology, in the same way one might see landscape ecology as different from population ecology. Forman and Godron's 1986 book Landscape Ecology first distinguished urban settings and landscapes from other landscapes by dividing all landscapes into five broad types. These types were divided by the intensity of human influence, ranging from pristine natural environments to urban centers. Early ecologists defined ecology as the study of organisms and their environment. As time progressed, urban ecology was recognized as a diverse and complex concept that differs in application between North America and Europe. The European concept of urban ecology examines the biota of urban areas, the North American concept has traditionally examined the social sciences of the urban landscape, as well as the ecosystem fluxes and processes, and the Latin American concept examines the effect of human activity on the biodiversity and fluxes of urban ecosystems. A renaissance in the development of urban ecology occurred in the 1990s, initiated by the US National Science Foundation's funding of two urban long-term ecological research (LTER) sites, which promoted the study of urban ecology. The field of urban ecology is rapidly expanding, with an increasing number of dedicated research centers emerging.
Among the pioneers are the Urban Ecology Research Laboratory (UERL) at the University of Washington, established in 2001, and the Urban Ecology Laboratory (LEU) at the Costa Rican Distance University, founded in 2008. The UERL in Washington specializes in analyzing urban landscape patterns, ecosystem functions, modeling land cover changes, and developing scenarios for urban adaptation within the state. In contrast, Costa Rica's LEU holds the distinction of being the world's first research center exclusively devoted to studying tropical urban ecosystems. Research conducted there spans various facets of urban ecology, including biodiversity, the impacts of climate change on cities and their surrounding areas (particularly tropical highlands), and the intricate interactions between human activities and urban environments. Methods Since urban ecology is a subfield of ecology, many of the techniques are similar to those of ecology. Ecological study techniques have been developed over centuries, but many of the techniques used for urban ecology were developed more recently. Methods used for studying urban ecology involve chemical and biochemical techniques, temperature recording, heat mapping, remote sensing, and long-term ecological research sites. Chemical and biochemical techniques Chemical techniques may be used to determine pollutant concentrations and their effects. Tests can be as simple as dipping a manufactured test strip, as in the case of pH testing, or be more complex, as in the case of examining the spatial and temporal variation of heavy metal contamination due to industrial runoff. In that particular study, livers of birds from many regions of the North Sea were ground up and mercury was extracted. Additionally, mercury bound in feathers was extracted from both live birds and from museum specimens to test for mercury levels across many decades. Through these two different measurements, researchers were able to build a detailed picture of the spatial and temporal spread of mercury due to industrial runoff. Other chemical techniques include tests for nitrates, phosphates, sulfates, etc., which are commonly associated with urban pollutants such as fertilizer and industrial byproducts. These biochemical fluxes are studied in the atmosphere (e.g. greenhouse gases), aquatic ecosystems, and soil nematodes. Broad-reaching effects of these biochemical fluxes can be seen in various aspects of both the urban and surrounding rural ecosystems. Temperature data and heat mapping Temperature data can be used for various kinds of studies. An important aspect of temperature data is the ability to correlate temperature with various factors that may be affecting or occurring in the environment. Oftentimes, temperature data is collected long-term by the Office of Oceanic and Atmospheric Research (OAR), and made available to the scientific community through the National Oceanic and Atmospheric Administration (NOAA). Data can be overlaid with maps of terrain, urban features, and other spatial areas to create heat maps. These heat maps can be used to view trends and distribution over time and space. Remote sensing Remote sensing is the technique in which data is collected from distant locations through the use of satellite imaging, radar, and aerial photographs. In urban ecology, remote sensing is used to collect data about terrain, weather patterns, light, and vegetation.
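A common productivity measure derived from such imagery, relevant to the application described in the next paragraph, is the normalized difference vegetation index (NDVI). The sketch below uses invented reflectance values rather than any real satellite product:

```python
import numpy as np

# NDVI = (NIR - red) / (NIR + red), computed per pixel.
# The 2x2 reflectance grids below are invented for illustration.

red = np.array([[0.08, 0.30],
                [0.10, 0.35]])   # red-band reflectance
nir = np.array([[0.50, 0.32],
                [0.55, 0.36]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi)  # values near 1 suggest dense vegetation; near 0, paved surfaces
```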
One application of remote sensing for urban ecology is to detect the productivity of an area by measuring the photosynthetic wavelengths of emitted light. Satellite images can also be used to detect differences in temperature and landscape diversity to detect the effects of urbanization. LTERs and long-term data sets Long-term ecological research (LTER) sites are research sites funded by the government that collect reliable long-term data in order to identify climatic or ecological trends. These sites provide long-term temporal and spatial data such as average temperature, rainfall, and other ecological processes. The main purpose of LTERs for urban ecologists is the collection of vast amounts of data over long periods of time. These long-term data sets can then be analyzed to find trends relating to the effects of the urban environment on various ecological processes, such as species diversity and abundance over time. Another example is the examination of temperature trends that accompany the growth of urban centers. There are currently two active urban LTERs: Central Arizona-Phoenix (CAP), first launched in 1997 and housed at Arizona State University, and the Minneapolis-St. Paul Metropolitan Area (MSP). The Baltimore Ecosystem Study (BES) was originally funded in 1998 as an urban LTER but, as of 2021, is no longer funded by the National Science Foundation. Urban effects on the environment Humans are the driving force behind urban ecology and influence the environment in a variety of ways, urbanization being a key example. Urbanization is tied to social, economic and environmental processes. Its core aspects include air pollution, ecosystems, land use, biogeochemical cycles, water pollution, solid waste management, and the climate. Urbanization, driven by migration into cities, brought rapid environmental consequences: increased carbon emissions, greater energy consumption, and impaired ecology, all primarily negative. Despite the impacts, the perception of urbanization at present is shifting from challenges to solutions. Cities are home to many financially well-off, knowledgeable, and innovative actors who are increasing the involvement of science in urban policy processes and concepts. An integrated, multiple-process systems approach, which can readily emerge within a city, has several characteristics that can support this fundamental shift at low cost. These solutions are integrated, comprehensive, multifunctional approaches that speak to the social, economic, and cultural contexts of cities. They take into account the chemical, biophysical, and ecological aspects that define urban systems, including lifestyle choices that are interlinked with the culture of a city. However, despite these opportunities, the results of the concepts that researchers have developed remain uncertain. Modification of land and waterways Humans place high demand on land not only to build urban centers, but also to build surrounding suburban areas for housing. Land is also allocated for agriculture to sustain the growing population of the city. Expanding cities and suburban areas necessitate corresponding deforestation to meet the land-use and resource requirements of urbanization. Key examples of this are deforestation in the United States and Europe.
Along with manipulation of land to suit human needs, natural water resources such as rivers and streams are also modified in urban establishments. Modification can come in the form of dams, artificial canals, and even the reversal of rivers. Reversing the flow of the Chicago River is a major example of urban environmental modification. Urban areas in natural desert settings often bring in water from far areas to maintain the human population and will likely have effects on the local desert climate. Modification of aquatic systems in urban areas also results in decreased stream diversity and increased pollution. Trade, shipping, and spread of invasive species Both local shipping and long-distance trade are required to meet the resource demands important in maintaining urban areas. Carbon dioxide emissions from the transport of goods also contribute to accumulating greenhouse gases and nutrient deposits in the soil and air of urban environments. In addition, shipping facilitates the unintentional spread of living organisms, and introduces them to environments that they would not naturally inhabit. Introduced or alien species are populations of organisms living in a range in which they did not naturally evolve, due to intentional or inadvertent human activity. Increased transportation between urban centers furthers the incidental movement of animal and plant species. Alien species often have no natural predators and pose a substantial threat to the dynamics of existing ecological populations in the environment into which they are introduced. Invasive species are successful when they reproduce prolifically thanks to short life cycles, possess or adapt traits that suit the environment, and occur in high densities. Such invasive species are numerous and include house sparrows, ring-necked pheasants, European starlings, brown rats, Asian carp, American bullfrogs, emerald ash borers, kudzu vines, and zebra mussels, among numerous others, most notably domesticated animals. Brown rats are a highly invasive species in urban environments, and are commonly seen in the streets and subways of New York City, where they have multiple negative effects on infrastructure, native species, and human health. Brown rats carry several types of parasites and pathogens that can possibly infect humans and other animals. In New York City, a genetic study exploring genome-wide variation concluded that multiple rats were originally from Great Britain. In Australia, it has been found that removing Lantana (L. camara, an alien species) from urban green spaces can have negative impacts on bird diversity locally, as it provides refugia for species like the superb fairywren (Malurus cyaneus) and silvereye (Zosterops lateralis) in the absence of native plant equivalents. However, there seems to be a density threshold beyond which too much Lantana (and thus homogeneity in vegetation cover) can lead to a decrease in bird species richness or abundance. Effects of urban animals on humans Positive effects Some urban animals can have a positive impact on the lives of humans. Studies show that the presence of domestic animals can reduce stress, anxiety, and loneliness. Additionally, some urban animals act as predators of organisms, such as insects, that can be harmful to humans. Urban species can also serve many other purposes, including agriculture, transport, and protection. Negative effects Some urban species have a negative impact on humans.
For example, pests' urine, fecal matter, and skin fragments can spread germs if ingested by humans. Diseases caused by pests or insects can be fatal. They include salmonella, meningitis, Weil's disease, and Lyme disease, among others. Some people are allergic to certain insects, such as bees and wasps, and exposure to them can cause serious allergic responses (rashes, for example). According to Seth Magle, attacks from wildlife in an urban setting, while rare, are detrimental to societal views on wildlife. Due to media coverage of these rare attacks, urban populations assume these interactions are more common than they really are, which further affects tolerance of urban wildlife. Human effects on biogeochemical pathways Urbanization results in a large demand for chemical use by industry, construction, agriculture, and energy-providing services. Such demands have a substantial impact on biogeochemical cycles, resulting in phenomena such as acid rain, eutrophication, and global warming. Furthermore, natural biogeochemical cycles in the urban environment can be impeded due to impermeable surfaces that prevent nutrients from returning to the soil, water, and atmosphere. Demand for fertilizers to meet agricultural needs exerted by expanding urban centers can alter the chemical composition of soil. Such effects often result in abnormally high concentrations of compounds including sulfur, phosphorus, nitrogen, and heavy metals. In addition, nitrogen and phosphorus used in fertilizers have caused severe problems in the form of agricultural runoff, which alters the concentration of these compounds in local rivers and streams, often resulting in adverse effects on native species. A well-known effect of agricultural runoff is the phenomenon of eutrophication. When the fertilizer chemicals from agricultural runoff reach the ocean, an algal bloom results, then rapidly dies off. The dead algae biomass is decomposed by bacteria that also consume large quantities of oxygen, which they obtain from the water, creating a "dead zone" without oxygen for fish or other organisms. A classic example is the dead zone in the Gulf of Mexico due to agricultural runoff into the Mississippi River. Just as pollutants and alterations in the biogeochemical cycle alter river and ocean ecosystems, they exert similar effects in the air. Some of this stems from the accumulation of chemicals and pollution, and it often manifests in urban settings, with a great impact on local plants and animals. Because urban centers are often point sources for pollution, local plants have had to adapt to withstand such conditions. Urban effects on climate Urban environments and outlying areas have been found to exhibit unique local temperatures, precipitation, and other characteristic activity due to a variety of factors such as pollution and altered geochemical cycles. Some examples of the urban effects on climate are the urban heat island, the oasis effect, greenhouse gases, and acid rain. This further stirs the debate as to whether urban areas should be considered a unique biome. Despite common trends among all urban centers, the surrounding local environment heavily influences much of the climate. One such example of regional differences can be seen through the urban heat island and oasis effect. Urban heat island effect The urban heat island is a phenomenon in which central regions of urban centers exhibit higher mean temperatures than surrounding rural areas.
Much of this effect can be attributed to low city albedo, the reflecting power of a surface, and the increased surface area of buildings available to absorb solar radiation. Concrete, cement, and metal surfaces in urban areas tend to absorb heat energy rather than reflect it, contributing to higher urban temperatures. Brazel et al. found that the urban heat island effect demonstrates a positive correlation with population density in the city of Baltimore. The heat island effect has corresponding ecological consequences for resident species. However, this effect has only been seen in temperate climates. Greenhouse gases Greenhouse gases make the earth habitable for humans because they trap heat from the sun, keeping the climate temperate. In 1896, the Swedish scientist Svante Arrhenius established that burning fossil fuels causes carbon dioxide emissions (carbon dioxide being the most abundant and harmful greenhouse gas). In the 20th century, the American climate scientist James E. Hansen concluded that the greenhouse effect is changing the climate for the worse. Carbon dioxide is the most abundant greenhouse gas and accounts for about three-quarters of emissions. It is emitted by burning coal, oil, gas, wood, and other organic material. Another greenhouse gas is methane, which can come from landfills and the natural gas and petroleum industries. Nitrous oxide accounts for about 6% of emissions, and can come from fertilizers, manure, burning of agricultural residues, and fuel combustion. Finally, fluorinated gases account for 2% of greenhouse gas emissions and can come from refrigerants, solvents, etc. The excessive emission of greenhouse gases is responsible for much of the harm that can be observed today, including global warming, respiratory diseases due to pollution, and the extinction or migration of certain species. These issues can be reduced, if not resolved, by eliminating the use of fossil fuels in favor of renewable energy sources. Acid rain and pollution Processes related to urban areas result in the emission of numerous pollutants, which change the corresponding nutrient cycles of carbon, sulfur, nitrogen, and other elements. Ecosystems in and around the urban center are especially influenced by these point sources of pollution. High sulfur dioxide concentrations resulting from the industrial demands of urbanization cause rainwater to become more acidic. Such an effect has been found to have a significant influence on locally affected populations, especially in aquatic environments. Wastes from urban centers, especially large urban centers in developed nations, can drive biogeochemical cycles on a global scale. Urban environment as an anthropogenic biome The urban environment has been classified as an anthropogenic biome, which is characterized by the predominance of certain species and climate trends, such as the urban heat island, across many urban areas. Examples of species characteristic of many urban environments include cats, dogs, mosquitoes, rats, flies, and pigeons, which are all generalists. Many of these are dependent on human activity and have adapted accordingly to the niche created by urban centers. However, the large number of wild species being discovered in urban areas around the world suggests that a bewildering diversity of life is able to call urban areas home. The relationship between urbanisation and wildlife diversity may not be as straightforward as previously imagined.
This change in understanding has been made possible by coverage of a much larger number of cities in varied parts of the world, which now shows that past trends and assumptions were largely due to a bias towards cities in temperate, developed countries. Biodiversity and urbanization Research in countries of temperate areas indicates that, on a small scale, urbanization often increases the biodiversity of non-native species while reducing that of native species. This normally results in an overall reduction in species richness and an increase in total biomass and species abundance. Urbanization reduces diversity on a large scale in cities of developed countries, but tropical and subtropical cities, despite their high human densities, can retain very high diversity if small patches of habitat are retained across the city. Even domestic urban and suburban properties near high-density city centres can support well over a thousand macro-organism species, many of which are often native. These urban landscapes can also support many complex ecosystem interactions. However, urbanization disrupts many other species interactions that would occur in undeveloped natural habitat. Urban stream syndrome is a consistently observed trait of urbanization, characterized by high nutrient and contaminant concentrations, altered stream morphology, increased dominance of tolerant species, and decreased biodiversity. The two primary causes of urban stream syndrome are stormwater runoff and wastewater treatment plant effluent. Changes in diversity Diversity is normally reduced at intermediate-low levels of urbanization but is always reduced at high levels of urbanization. These effects have been observed in vertebrates and invertebrates, while plant species tend to increase at intermediate-low levels of urbanization, but these general trends do not apply to all organisms within those groups. For example, McKinney's (2006) review did not include the effects of urbanization on fishes, and of the 58 studies on invertebrates, 52 included insects while only 10 included spiders. There is also a geographical bias, as most of the studies took place in either North America or Europe. The effects of urbanization also depend on the type and range of resources used by the organism. Generalist species, those that use a wide range of resources and can thrive under a large range of living conditions, are likely to survive in uniform environments. Specialist species, those that use a narrow range of resources and can only cope with a narrow range of living conditions, are unlikely to cope with uniform environments. There will likely be a variable effect on these two groups of organisms as urbanization alters habitat uniformity. Endangered plant species have been reported to occur throughout a wide range of urban ecosystems, many of them being novel ecosystems. A study of 463 bird species reported that urban species share dietary traits. Specifically, urban species were larger, consumed more vertebrates and carrion, fed more frequently on the ground or aerially, and had broader diets than non-urban species. Cause of diversity change The urban environment can decrease diversity through habitat removal and species homogenization, the increasing similarity between two previously distinct biological communities. Habitat degradation and habitat fragmentation reduce the amount of suitable habitat through urban development and separate suitable patches with inhospitable terrain such as roads, neighborhoods, and open parks.
Although this replacement of suitable habitat with unsuitable habitat will result in extinctions of native species, some shelter may be artificially created and promote the survival of non-native species (e.g. house sparrow and house mouse nests). Urbanization promotes species homogenization through the extinction of native endemic species and the introduction of non-native species that already have a widespread abundance. Changes to the habitat may promote both the extinction of native endemic species and the introduction of non-native species. The effects of habitat change will likely be similar in all urban environments, as urban environments are all built to cater to the needs of humans. Wildlife in cities is more susceptible to suffering ill effects from exposure to toxicants (such as heavy metals and pesticides). In China, fish that were exposed to industrial wastewater had poorer body condition; being exposed to toxicants can increase susceptibility to infection. Humans have the potential to induce patchy food distribution, which can promote animal aggregation by attracting a high number of animals to common food sources; "this aggregation may increase the spread of parasites transmitted through close contact; parasite deposition on soil, water, or artificial feeders; and stress through inter- and intraspecific competition." A study by Maureen Murray et al., a phylogenetic meta-analysis of 516 comparisons of overall wildlife condition reported in 106 studies, confirmed these results: "our meta-analysis suggests an overall negative relationship between urbanization and wildlife health, mainly driven by considerably higher toxicant loads and greater parasite abundance, greater parasite diversity, and/or greater likelihood of infection by parasites transmitted through close contact." The urban environment can also increase diversity in a number of ways. Many foreign organisms are introduced and dispersed naturally or artificially in urban areas. Artificial introductions may be intentional, where organisms have some form of human use, or accidental, where organisms attach themselves to transportation vehicles. Humans provide food sources (e.g. birdfeeder seeds, trash, garden compost) and reduce the numbers of large natural predators in urban environments, allowing large populations to be supported where food and predation would normally limit the population size. A variety of different habitats are available within the urban environment as a result of differences in land use, allowing more species to be supported than in more uniform habitats. Ways to improve urban ecology: civil engineering and sustainability Cities should be planned and constructed in such a way as to minimize urban effects on the surrounding environment (urban heat island, precipitation, etc.) and to optimize ecological activity. For example, increasing the albedo, or reflective power, of urban surfaces can lower the magnitude of the urban heat island effect. By minimizing these abnormal temperature trends and others, ecological activity would likely be improved in the urban setting. Need for remediation Urbanization has indeed had a profound effect on the environment, on both local and global scales. Difficulties in actively constructing habitat corridors and returning biogeochemical cycles to normal raise the question as to whether such goals are feasible.
However, some groups are working to return areas of land affected by the urban landscape to a more natural state. This includes using landscape architecture to model natural systems and restore rivers to pre-urban states. It is becoming increasingly critical that conservation action be enacted within urban landscapes. Space in cities is limited; urban infill threatens the existence of green spaces. Green spaces that are in close proximity to cities are also vulnerable to urban sprawl. It is common that urban development comes at the cost of valuable land that could host wildlife species. Natural and financial resources are limited; a larger focus must be placed on conservation opportunities that factor in feasibility and maximization of expected benefits. Since the securing of land as a protected area is a luxury that cannot be extensively implemented, alternative approaches must be explored in order to prevent mass extinction of species. Borgström et al. (2006) hold that urban ecosystems are especially prone to "scale mismatch", whereby the right course of action is heavily dependent on species size. For some species, conservation can be achieved in a single isolated garden because their small size permits a large population, e.g. soil microorganisms. Meanwhile, that is the wrong scale for species that are more mobile and/or larger, e.g. pollinators and seed dispersers, which will require larger and/or connected spaces. The need to pursue conservation outcomes in urban environments is most pronounced for species whose global distribution is contained within a human-modified landscape. In fact, many threatened wildlife species are prevalent on land types that were not originally intended for conservation. Of Australia's 39 urban-restricted threatened species, 11 occur at roadsides, 10 on private lands, 5 on military lands, 4 in schools, 4 on golf courses, 4 at utility easements (such as railways), 3 at airports, and 1 at hospitals. The spiked rice-flower (Pimelea spicata) persists mainly at a golf course, while the guinea flower Hibbertia puberula glabrescens is known mainly from the grounds of an airport. Such unconventional landscapes are the ones that must be prioritized. The goal in the management of these areas is to bring about a "win-win" situation in which conservation efforts are practiced without compromising the original use of the space. While being near to large human populations can pose risks to endangered species inhabiting urban environments, such closeness can prove to be an advantage as long as the human community is conscious of, and engaged in, local conservation efforts. Species reintroduction Reintroduction of species to urban settings can help restore local biodiversity previously lost; however, the following guidelines should be followed in order to avoid undesired effects:
No predators capable of killing children will be reintroduced to urban areas.
There will be no introduction of species that significantly threaten human health, pets, crops, or property.
Reintroduction will not be done when it implies significant suffering to the organisms being reintroduced, for example stress from capture or captivity.
Organisms that carry pathogens will not be reintroduced.
Organisms whose genes threaten the genetic pool of other organisms in the urban area will not be reintroduced.
Organisms will only be reintroduced when scientific data support a reasonable chance of long-term survival (if funds are insufficient for the long-term effort, reintroduction will not be attempted).
Reintroduced organisms will receive food supplementation and veterinary assistance as needed.
Reintroduction will be done in both experimental and control areas to produce reliable assessments (monitoring must continue afterwards to trigger interventions if necessary).
Reintroduction must be done in several places and repeated over several years to buffer for stochastic events.
People in the areas affected must participate in the decision process, and will receive education to make reintroduction sustainable (but final decisions must be based on objective information gathered according to scientific standards).
Sustainability With the ever-increasing demands for resources necessitated by urbanization, recent campaigns to move toward sustainable energy and resource consumption, such as LEED certification of buildings, Energy Star-certified appliances, and zero-emission vehicles, have gained momentum. Sustainability reflects techniques and consumption ensuring reasonably low resource use as a component of urban ecology. Techniques such as carbon capture may also be used to sequester carbon compounds produced in urban centers rather than continually emitting more of the greenhouse gas. The use of other types of renewable energy, like bioenergy, solar energy, geothermal energy, and wind energy, would also help to reduce greenhouse gas emissions. Green Infrastructure Implementation Urban areas can be converted into areas that are more conducive to hosting wildlife through the application of green infrastructure. Although the opportunities for green infrastructure (GI) to benefit human populations have been recognized, there are also opportunities to conserve wildlife diversity. Green infrastructure has the potential to support wildlife robustness by providing a more suitable habitat than conventional "grey" infrastructure, as well as to aid in stormwater management and air purification. GI can be defined as features that were engineered with natural elements or natural features. This natural constitution helps prevent wildlife exposure to man-made toxicants. Although research on the benefits of GI for biodiversity has increased exponentially in the last decade, these effects have rarely been quantified. In a study by Alessandro Filazzola et al., 1,883 published manuscripts were examined and 33 relevant studies were meta-analyzed in order to determine the effect of GI on wildlife. Although there was variability in the findings, it was determined that the implementation of GI improved biodiversity compared to conventional infrastructure. In some cases, GI even preserved measures of biodiversity comparable to natural components. Urban green space A barrier associated with finding the right amount of urban green space is the variety of space needed by different species to complete their life cycles. This is compounded by man-made barriers neighboring green spaces that can restrict the movement of certain species between nearby urban green spaces. Increasing Wildlife Habitat Connectivity The implementation of wildlife corridors throughout urban areas (and in between wildlife areas) would promote wildlife habitat connectivity. Habitat connectivity is critical for ecosystem health and wildlife conservation, yet is being compromised by increasing urbanization.
Urban development has caused green spaces to become increasingly fragmented and has had adverse effects on genetic variation within species, population abundance, and species richness. Urban green spaces that are linked by ecosystem corridors have higher ecosystem health and resilience to global environmental change. Employment of corridors can form an ecosystem network that facilitates movement and dispersal. However, planning these networks requires a comprehensive spatial plan. One approach is to target "shrinking" cities (such as Detroit, Michigan, USA) that have an abundance of vacant lots and land that could be repurposed into greenways to provide ecosystem services (although even cities with growing populations typically have vacant land as well). However, even cities with high vacancy rates can present social and environmental challenges. For instance, vacant land that stands on polluted soils may contain heavy metals or construction debris; this must be addressed before repurposing. Once land has been repurposed for ecosystem services, avenues must be pursued that could allow this land to contribute to structural or functional connectivity. Structural connectivity refers to parts of the landscape that are physically connected. Functional connectivity refers to species-specific tendencies that indicate interaction with other parts of the landscape. Throughout the City of Detroit, spatial patterns were detected that could promote structural connectivity. The research performed by Zhang "integrates landscape ecology and graph theory, spatial modeling, and landscape design to develop a methodology for planning multifunctional green infrastructure that fosters social-ecological sustainability and resilience". Using a functional connectivity index, a high correlation was found between the two measures (structural and functional connectivity), suggesting that they could be indicators of each other and could guide green space planning. Although urban wildlife corridors could serve as a potential mitigation tool, it is important that they are constructed so as to facilitate wildlife movement without restriction. As humans may be perceived as a threat, the success of the corridors depends on human population density and proximity to roads. In a study by Tempe Adams et al., remote-sensor camera traps and data from GPS collars were used to assess whether the African elephant would use narrow urban wildlife corridors. The study was performed in three different urban-dominated land-use types in Botswana, southern Africa, over a span of two years. The results of the study indicated that elephants tended to move through unprotected areas more quickly, spending less time in those areas. Using vehicular traffic as a measure of human activity, the study indicated that elephant presence was higher during times when human activity was at a minimum. It was determined that "formal protection and designation of urban corridors by the relevant governing bodies would facilitate coexistence between people and wildlife at small spatial scales." However, the only way this co-existence could be feasible is by creating structural connectivity (and thus promoting functional connectivity) by implementing proper wildlife corridors that facilitate easy movement between habitat patches.
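The graph-theoretic view of structural connectivity mentioned above can be sketched as follows. Patch names, coordinates, and the dispersal threshold are all invented; the threshold stands in, crudely, for the species-specific movement capacity that functional connectivity is concerned with:

```python
import math
from itertools import combinations

# Green spaces as graph nodes; an edge links two patches lying within an
# assumed dispersal distance. Connected components approximate which patches
# function together as one habitat network.

patches = {"parkA": (0.0, 0.0), "vacant1": (0.6, 0.4),
           "greenway": (1.1, 0.9), "parkB": (3.0, 3.0)}  # coordinates in km
DISPERSAL_KM = 1.0  # assumed species-specific movement capacity

edges = [(a, b) for a, b in combinations(patches, 2)
         if math.dist(patches[a], patches[b]) <= DISPERSAL_KM]

def components(nodes, edges):
    """Depth-first grouping of nodes into connected components."""
    comps, seen = [], set()
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(v for e in edges for v in e if node in e and v != node)
        comps.append(comp)
        seen |= comp
    return comps

print(components(patches, edges))  # parkB remains isolated at this threshold
```

Re-running the sketch with a larger DISPERSAL_KM merges parkB into the network, mirroring how a corridor, or a more mobile species, changes which patches are effectively connected.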
The usage of green infrastructure that is connected to natural habitats has been shown to reap greater biodiversity benefits than GI implemented in areas far from natural habitats. GI close to natural areas may also increase functional connectivity in natural environments. Roadkill Mitigation In the United States, roadkill takes the lives of hundreds of thousands to hundreds of millions of mammals, birds, and amphibians each year. Roadkill mortality has detrimental effects on the persistence probability, abundance, and genetic diversity of wildlife populations (more so than reduced movement through habitat patches). Roadkill also has an effect on driver safety. If green areas cannot be reserved, the presence of wildlife habitats in close proximity to urban roads must be addressed. The optimal situation would be to avoid constructing roads next to these natural habitats, but other preventative measures can be pursued to reduce animal mortality. One way these effects could be mitigated is through the implementation of wildlife fencing in prioritized areas. Many countries utilize underpasses and overpasses combined with wildlife fencing to reduce roadkill mortality in an attempt to restore habitat connectivity. It is unrealistic to try to fence entire road networks because of financial constraints. Therefore, fencing should focus on the areas with the highest rates of mortality. Indigenous knowledge Urban sprawl is one of many ways that Indigenous peoples' land is taken and developed in cities of the global north; thus, intimate knowledge of the native area's ecology is often lost due to the effects of colonization or because the land has been greatly altered. Urban development occurs around areas where Indigenous Peoples lived, as these areas are convenient for transport and the natural environment is fruitful. When developing areas of urban land, consideration should go towards the intimate levels of knowledge held by Indigenous Peoples and the biocultural and linguistic diversity of the place. Urban ecology follows western science frameworks and compartmentalizes nature. Urban ecology has the opportunity to be viewed in an interconnected and holistic way, through "Two-Eyed Seeing", and to be inclusive of the Traditional Ecological Knowledge held by the local Indigenous Peoples of the area. Urban restoration ecology would be enriched by partnerships with the local Indigenous Peoples, if done in a respectful way that addresses the currently inequitable relationship. Non-indigenous people can support their local Indigenous communities by learning about the history of the land and ecosystems being restored or studied. Ecological restoration built with strong Indigenous partnerships benefits Indigenous culture and identity, as well as all urban dwellers. Summary Urbanization results in a series of both local and far-reaching effects on biodiversity, biogeochemical cycles, hydrology, and climate, among other stresses. Many of these effects are not fully understood, as urban ecology has only recently emerged as a scientific discipline and more research remains to be done. Research on cities outside the US and Europe remains limited. Observations on the impact of urbanization on biodiversity and species interactions are consistent across many studies, but definitive mechanisms have yet to be established. Urban ecology constitutes an important and highly relevant subfield of ecology, and further study must be pursued to more fully understand the effects of human urban areas on the environment.
Molybdenum disulfide
Molybdenum disulfide (or moly) is an inorganic compound composed of molybdenum and sulfur. Its chemical formula is MoS2. The compound is classified as a transition metal dichalcogenide. It is a silvery black solid that occurs as the mineral molybdenite, the principal ore for molybdenum. MoS2 is relatively unreactive. It is unaffected by dilute acids and oxygen. In appearance and feel, molybdenum disulfide is similar to graphite. It is widely used as a dry lubricant because of its low friction and robustness. Bulk MoS2 is a diamagnetic, indirect-bandgap semiconductor similar to silicon, with a bandgap of 1.23 eV. Production MoS2 is naturally found as either molybdenite, a crystalline mineral, or jordisite, a rare low-temperature form of molybdenite. Molybdenite ore is processed by flotation to give relatively pure MoS2. The main contaminant is carbon. MoS2 also arises by thermal treatment of virtually all molybdenum compounds with hydrogen sulfide or elemental sulfur and can be produced by metathesis reactions from molybdenum pentachloride. Structure and physical properties Crystalline phases All forms of MoS2 have a layered structure, in which a plane of molybdenum atoms is sandwiched by planes of sulfide ions. These three strata form a monolayer of MoS2. Bulk MoS2 consists of stacked monolayers, which are held together by weak van der Waals interactions. Crystalline MoS2 exists in one of two phases, 2H-MoS2 and 3R-MoS2, where the "H" and the "R" indicate hexagonal and rhombohedral symmetry, respectively. In both of these structures, each molybdenum atom exists at the center of a trigonal prismatic coordination sphere and is covalently bonded to six sulfide ions. Each sulfur atom has pyramidal coordination and is bonded to three molybdenum atoms. Both the 2H- and 3R-phases are semiconducting. A third, metastable crystalline phase known as 1T-MoS2 was discovered by intercalating 2H-MoS2 with alkali metals. This phase has trigonal symmetry and is metallic. The 1T-phase can be stabilized through doping with electron donors such as rhenium, or converted back to the 2H-phase by microwave radiation. The 2H/1T-phase transition can be controlled via the incorporation of sulfur (S) vacancies. Allotropes Nanotube-like and buckyball-like molecules composed of MoS2 are known. Exfoliated flakes While bulk MoS2 in the 2H-phase is known to be an indirect-bandgap semiconductor, monolayer MoS2 has a direct band gap. The layer-dependent optoelectronic properties of MoS2 have promoted much research in two-dimensional MoS2-based devices. 2D MoS2 can be produced by exfoliating bulk crystals to produce single-layer to few-layer flakes, either through a dry micromechanical process or through solution processing. Micromechanical exfoliation, also pragmatically called "Scotch-tape exfoliation", involves using an adhesive material to repeatedly peel apart a layered crystal by overcoming the van der Waals forces. The crystal flakes can then be transferred from the adhesive film to a substrate. This facile method was first used by Konstantin Novoselov and Andre Geim to obtain graphene from graphite crystals. However, it cannot be employed to produce uniform single layers of MoS2, because of the weaker adhesion of MoS2 to the substrate (either silicon, glass or quartz); the aforementioned scheme works well for graphene only. While Scotch tape is generally used as the adhesive tape, PDMS stamps can also satisfactorily cleave MoS2 if it is important to avoid contaminating the flakes with residual adhesive. Liquid-phase exfoliation can also be used to produce monolayer to multi-layer MoS2 in solution.
A few methods include lithium intercalation to delaminate the layers and sonication in a high-surface-tension solvent. Mechanical properties MoS2 excels as a lubricating material (see below) due to its layered structure and low coefficient of friction. Interlayer sliding dissipates energy when a shear stress is applied to the material. Extensive work has been performed to characterize the coefficient of friction and shear strength of MoS2 in various atmospheres. The shear strength of MoS2 increases as the coefficient of friction increases. This property is called superlubricity. At ambient conditions, the coefficient of friction for MoS2 was determined to be 0.150, with a corresponding estimated shear strength of 56.0 MPa. Direct methods of measuring the shear strength indicate that the value is closer to 25.3 MPa. The wear resistance of MoS2 in lubricating applications can be increased by doping with Cr. Microindentation experiments on nanopillars of Cr-doped MoS2 found that the yield strength increased from an average of 821 MPa for pure MoS2 (at 0% Cr) to 1017 MPa at 50% Cr. The increase in yield strength is accompanied by a change in the failure mode of the material. While the pure MoS2 nanopillar fails through a plastic bending mechanism, brittle fracture modes become apparent as the material is loaded with increasing amounts of dopant. The widely used method of micromechanical exfoliation has been carefully studied in MoS2 to understand the mechanism of delamination in few-layer to multi-layer flakes. The exact mechanism of cleavage was found to be layer dependent. Flakes thinner than 5 layers undergo homogeneous bending and rippling, while flakes around 10 layers thick delaminate through interlayer sliding. Flakes with more than 20 layers exhibit a kinking mechanism during micromechanical cleavage. The cleavage of these flakes was also determined to be reversible due to the nature of van der Waals bonding. In recent years, MoS2 has been utilized in flexible electronic applications, promoting more investigation into the elastic properties of this material. Nanoscopic bending tests using AFM cantilever tips were performed on micromechanically exfoliated MoS2 flakes that were deposited on a holey substrate. The Young's modulus of monolayer flakes was 270 GPa, while thicker flakes were also stiffer, with a Young's modulus of 330 GPa. Molecular dynamics simulations found the in-plane Young's modulus of MoS2 to be 229 GPa, which matches the experimental results within error. Bertolazzi and coworkers also characterized the failure modes of the suspended monolayer flakes. The strain at failure ranges from 6 to 11%. The average yield strength of monolayer MoS2 is 23 GPa, which is close to the theoretical fracture strength for defect-free MoS2. The band structure of MoS2 is sensitive to strain. Chemical reactions Molybdenum disulfide is stable in air and attacked only by aggressive reagents. It reacts with oxygen upon heating, forming molybdenum trioxide:
2 MoS2 + 7 O2 → 2 MoO3 + 4 SO2
Chlorine attacks molybdenum disulfide at elevated temperatures to form molybdenum pentachloride:
2 MoS2 + 7 Cl2 → 2 MoCl5 + 2 S2Cl2
Intercalation reactions Molybdenum disulfide is a host for the formation of intercalation compounds. This behavior is relevant to its use as a cathode material in batteries. One example is a lithiated material, LixMoS2. With butyl lithium, the product is LiMoS2. Applications Lubricant Due to weak van der Waals interactions between the sheets of sulfide atoms, MoS2 has a low coefficient of friction. MoS2 in particle sizes in the range of 1–100 μm is a common dry lubricant.
Few alternatives exist that confer high lubricity and stability at up to 350 °C in oxidizing environments. Sliding friction tests of MoS2 using a pin-on-disc tester at low loads (0.1–2 N) give friction coefficient values of <0.1. MoS2 is often a component of blends and composites that require low friction. For example, it is added to graphite to improve sticking. A variety of MoS2-containing oils and greases are used, because they retain their lubricity even in cases of almost complete oil loss, thus finding a use in critical applications such as aircraft engines. When added to plastics, MoS2 forms a composite with improved strength as well as reduced friction. Polymers that may be filled with MoS2 include nylon (trade name Nylatron), Teflon and Vespel. Self-lubricating composite coatings for high-temperature applications consist of molybdenum disulfide and titanium nitride, using chemical vapor deposition. Examples of applications of MoS2-based lubricants include two-stroke engines (such as motorcycle engines), bicycle coaster brakes, automotive CV and universal joints, ski waxes and bullets. Other layered inorganic materials that exhibit lubricating properties (collectively known as solid lubricants or dry lubricants) include graphite, which requires volatile additives, and hexagonal boron nitride. Catalysis MoS2 is employed as a cocatalyst for desulfurization in petrochemistry, for example, hydrodesulfurization. The effectiveness of the catalysts is enhanced by doping with small amounts of cobalt or nickel. The intimate mixture of these sulfides is supported on alumina. Such catalysts are generated in situ by treating molybdate/cobalt- or nickel-impregnated alumina with H2S or an equivalent reagent. Catalysis does not occur at the regular sheet-like regions of the crystallites, but instead at the edge of these planes. MoS2 finds use as a hydrogenation catalyst for organic synthesis. As it is derived from a common transition metal, rather than a group 10 metal, MoS2 is chosen when price or resistance to sulfur poisoning are of primary concern. MoS2 is effective for the hydrogenation of nitro compounds to amines and can be used to produce secondary amines via reductive amination. The catalyst can also effect hydrogenolysis of organosulfur compounds, aldehydes, ketones, phenols and carboxylic acids to their respective alkanes. However, it suffers from low activity, often requiring hydrogen pressures above 96 MPa and temperatures above 185 °C. Research MoS2 plays an important role in condensed matter physics research. Hydrogen evolution MoS2 and related molybdenum sulfides are efficient catalysts for hydrogen evolution, including the electrolysis of water; thus, they are possibly useful to produce hydrogen for use in fuel cells. Oxygen reduction and evolution A MoS2@Fe-N-C core/shell nanosphere with an atomically Fe-doped surface and interface (MoS2/Fe-N-C) can be used as a bifunctional electrocatalyst for the oxygen reduction and evolution reactions (ORR and OER), because of the reduced energy barrier due to Fe-N4 dopants and the unique nature of the MoS2/Fe-N-C interface. Microelectronics As in graphene, the layered structures of MoS2 and other transition metal dichalcogenides exhibit electronic and optical properties that can differ from those in bulk. Bulk MoS2 has an indirect band gap of 1.2 eV, while MoS2 monolayers have a direct 1.8 eV electronic bandgap, supporting switchable transistors and photodetectors. MoS2 nanoflakes can be used for solution-processed fabrication of layered memristive and memcapacitive devices through engineering a MoS2/MoOx heterostructure sandwiched between silver electrodes. 
MoS2-based memristors are mechanically flexible, optically transparent and can be produced at low cost. The sensitivity of a graphene field-effect transistor (FET) biosensor is fundamentally restricted by the zero band gap of graphene, which results in increased leakage and reduced sensitivity. In digital electronics, transistors control current flow throughout an integrated circuit and allow for amplification and switching. In biosensing, the physical gate is removed and the binding between embedded receptor molecules and the charged target biomolecules to which they are exposed modulates the current. MoS2 has been investigated as a component of flexible circuits. In 2017, a 115-transistor, 1-bit microprocessor implementation was fabricated using two-dimensional MoS2. MoS2 has been used to create 2D 2-terminal memristors and 3-terminal memtransistors. Valleytronics Due to the lack of spatial inversion symmetry, odd-layer MoS2 is a promising material for valleytronics, because both the conduction band minimum (CBM) and the valence band maximum (VBM) have two energy-degenerate valleys at the corners of the first Brillouin zone, providing an exciting opportunity to store the information of 0s and 1s at different discrete values of the crystal momentum. Because the Berry curvature is even under spatial inversion (P) and odd under time reversal (T), the valley Hall effect cannot survive when both P and T symmetries are present. To excite the valley Hall effect in specific valleys, circularly polarized light has been used to break the T symmetry in atomically thin transition-metal dichalcogenides. In monolayer MoS2, the T and mirror symmetries lock the spin and valley indices of the sub-bands split by the spin-orbit couplings, both of which are flipped under T; the spin conservation suppresses the inter-valley scattering. Therefore, monolayer MoS2 has been deemed an ideal platform for realizing an intrinsic valley Hall effect without extrinsic symmetry breaking. Photonics and photovoltaics MoS2 also possesses mechanical strength and electrical conductivity, and can emit light, opening possible applications such as photodetectors. MoS2 has been investigated as a component of photoelectrochemical (e.g. for photocatalytic hydrogen production) applications and for microelectronics applications. Superconductivity of monolayers Under an electric field, MoS2 monolayers have been found to superconduct at temperatures below 9.4 K.
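As a quick numerical illustration of the layer-dependent band gaps quoted above (1.2 eV indirect for bulk MoS2, 1.8 eV direct for a monolayer), the short Python sketch below converts a band-gap energy to the matching photon wavelength via λ = hc/E. It is an illustrative aid, not from the source; the helper function name is our own.

```python
# Convert a semiconductor band gap (eV) to the corresponding photon
# wavelength (nm) via E = h*c / lambda. The band-gap values are those
# quoted above for MoS2; everything else here is illustrative.
H_C_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def gap_to_wavelength_nm(gap_ev: float) -> float:
    """Photon wavelength (nm) matching a band-gap energy (eV)."""
    return H_C_EV_NM / gap_ev

for label, gap in [("bulk MoS2 (indirect)", 1.2), ("monolayer MoS2 (direct)", 1.8)]:
    print(f"{label}: {gap} eV -> {gap_to_wavelength_nm(gap):.0f} nm")
# The monolayer gap corresponds to roughly 689 nm (red light), consistent
# with monolayer MoS2 emitting in the visible range.
```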
Physical sciences
Sulfide salts
Chemistry
295829
https://en.wikipedia.org/wiki/Reflection%20%28mathematics%29
Reflection (mathematics)
In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as the set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example, the mirror image of the small Latin letter p for a reflection with respect to a vertical axis (a vertical reflection) would look like q. Its image by reflection in a horizontal axis (a horizontal reflection) would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state. The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself, namely the non-identity isometries that are involutions. The set of fixed points (the "mirror") of such an isometry is an affine subspace, but is possibly smaller than a hyperplane. For instance a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it would look like a d. This operation is also known as a central inversion, and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term "reflection" means reflection in a hyperplane. Some mathematicians use "flip" as a synonym for "reflection". Construction In plane (or, respectively, 3-dimensional) geometry, to find the reflection of a point, drop a perpendicular from the point to the line (plane) used for reflection, and extend it the same distance on the other side. To find the reflection of a figure, reflect each point in the figure. To reflect point P through the line AB using compass and straightedge, proceed as follows (see figure): Step 1 (red): construct a circle with center at P and some fixed radius r to create points A′ and B′ on the line AB, which will be equidistant from P. Step 2 (green): construct circles centered at A′ and B′ having radius r. P and Q will be the points of intersection of these two circles. Point Q is then the reflection of point P through line AB. Properties The matrix for a reflection is orthogonal with determinant −1 and eigenvalues −1, 1, 1, ..., 1. The product of two such matrices is a special orthogonal matrix that represents a rotation. Every rotation is the result of reflecting in an even number of hyperplanes through the origin, and every improper rotation is the result of reflecting in an odd number. Thus reflections generate the orthogonal group, and this result is known as the Cartan–Dieudonné theorem. Similarly the Euclidean group, which consists of all isometries of Euclidean space, is generated by reflections in affine hyperplanes. In general, a group generated by reflections in affine hyperplanes is known as a reflection group. The finite groups generated in this way are examples of Coxeter groups. Reflection across a line in the plane Reflection across an arbitrary line through the origin in two dimensions can be described by the formula $\operatorname{Ref}_l(v) = 2\frac{v \cdot l}{l \cdot l}\,l - v,$ where $v$ denotes the vector being reflected, $l$ denotes any vector in the line across which the reflection is performed, and $v \cdot l$ denotes the dot product of $v$ with $l$. 
Note the formula above can also be written as $\operatorname{Ref}_l(v) = 2\operatorname{Proj}_l(v) - v,$ saying that the reflection of $v$ across $l$ is equal to 2 times the projection of $v$ on $l$, minus the vector $v$. Reflections in a line have eigenvalues 1 and −1. Reflection through a hyperplane in n dimensions Given a vector $v$ in Euclidean space $\mathbb{R}^n$, the formula for the reflection in the hyperplane through the origin, orthogonal to $a$, is given by $\operatorname{Ref}_a(v) = v - 2\frac{v \cdot a}{a \cdot a}\,a,$ where $v \cdot a$ denotes the dot product of $v$ with $a$. Note that the second term in the above equation is just twice the vector projection of $v$ onto $a$. One can easily check that $\operatorname{Ref}_a(v) = -v$ if $v$ is parallel to $a$, and $\operatorname{Ref}_a(v) = v$ if $v$ is perpendicular to $a$. Using the geometric product, the formula is $\operatorname{Ref}_a(v) = -\frac{a v a}{a^2}.$ Since these reflections are isometries of Euclidean space fixing the origin, they may be represented by orthogonal matrices. The orthogonal matrix corresponding to the above reflection is the matrix $R = I - 2\frac{a a^{\mathsf{T}}}{a^{\mathsf{T}} a},$ where $I$ denotes the identity matrix and $a^{\mathsf{T}}$ is the transpose of $a$. Its entries are $R_{ij} = \delta_{ij} - 2\frac{a_i a_j}{\|a\|^2},$ where $\delta_{ij}$ is the Kronecker delta. The formula for the reflection in the affine hyperplane $v \cdot a = c$ not through the origin is $\operatorname{Ref}_{a,c}(v) = v - 2\frac{v \cdot a - c}{a \cdot a}\,a.$
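To make the hyperplane-reflection formula concrete, here is a small Python sketch (an illustration under our own naming, not from the source) that builds the matrix R = I − 2aaᵀ/(aᵀa) with NumPy and checks the properties stated above: R is orthogonal, has determinant −1, and is an involution, and composing two reflections gives a rotation.

```python
import numpy as np

def reflection_matrix(a: np.ndarray) -> np.ndarray:
    """Matrix reflecting across the hyperplane through the origin orthogonal to a."""
    a = a.reshape(-1, 1).astype(float)
    n = a.shape[0]
    return np.eye(n) - 2.0 * (a @ a.T) / (a.T @ a)

a = np.array([1.0, 2.0, 2.0])       # normal vector of the mirror hyperplane
R = reflection_matrix(a)

assert np.allclose(R @ R.T, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(R), -1.0)   # determinant -1
assert np.allclose(R @ R, np.eye(3))        # involution: applying twice is identity
assert np.allclose(R @ a, -a)               # the normal vector is sent to its negative

# Composing two reflections yields a special orthogonal matrix (a rotation):
S = reflection_matrix(np.array([0.0, 1.0, 0.0]))
assert np.isclose(np.linalg.det(R @ S), 1.0)
print("all reflection properties verified")
```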
Mathematics
Geometry: General
null
295844
https://en.wikipedia.org/wiki/Inversive%20geometry
Inversive geometry
In geometry, inversive geometry is the study of inversion, a transformation of the Euclidean plane that maps circles or lines to other circles or lines and that preserves the angles between crossing curves. Many difficult problems in geometry become much more tractable when an inversion is applied. Inversion seems to have been discovered by a number of people contemporaneously, including Steiner (1824), Quetelet (1825), Bellavitis (1836), Stubbs and Ingram (1842–3) and Kelvin (1845). The concept of inversion can be generalized to higher-dimensional spaces. Inversion in a circle Inverse of a point To invert a number in arithmetic usually means to take its reciprocal. A closely related idea in geometry is that of "inverting" a point. In the plane, the inverse of a point P with respect to a reference circle (Ø) with center O and radius r is a point P′, lying on the ray from O through P, such that $OP \cdot OP' = r^2.$ This is called circle inversion or plane inversion. The inversion taking any point P (other than O) to its image P′ also takes P′ back to P, so the result of applying the same inversion twice is the identity transformation, which makes it a self-inversion (i.e. an involution). To make the inversion a total function that is also defined for O, it is necessary to introduce a point at infinity, a single point placed on all the lines, and extend the inversion, by definition, to interchange the center O and this point at infinity. It follows from the definition that the inversion of any point inside the reference circle must lie outside it, and vice versa, with the center and the point at infinity changing positions, whilst any point on the circle is unaffected (is invariant under inversion). In summary, for a point inside the circle, the nearer the point to the center, the further away its transformation. While for any point (inside or outside the circle), the nearer the point to the circle, the closer its transformation. Compass and straightedge construction Point outside circle To construct the inverse P′ of a point P outside a circle Ø: Draw the segment from O (center of circle Ø) to P. Let M be the midpoint of OP. (Not shown) Draw the circle c with center M going through P. (Not labeled. It's the blue circle) Let N and N′ be the points where Ø and c intersect. Draw segment NN′. P′ is where OP and NN′ intersect. Point inside circle To construct the inverse P′ of a point P inside a circle Ø: Draw ray r from O (center of circle Ø) through P. (Not labeled, it's the horizontal line) Draw line s through P perpendicular to r. (Not labeled. It's the vertical line) Let N be one of the points where Ø and s intersect. Draw the segment ON. Draw line t through N perpendicular to ON. P′ is where ray r and line t intersect. Dutta's construction There is a construction of the inverse point to A with respect to a circle P that is independent of whether A is inside or outside P. Consider a circle P with center O and a point A which may lie inside or outside the circle P. Take the intersection point C of the ray OA with the circle P. Connect the point C with an arbitrary point B on the circle P (different from C and from the point on P antipodal to C). Let h be the reflection of ray BA in line BC. Then h cuts ray OC in a point A′. A′ is the inverse point of A with respect to circle P. Properties The inversion of a set of points in the plane with respect to a circle is the set of inverses of these points. The following properties make circle inversion useful. 
A circle that passes through the center O of the reference circle inverts to a line not passing through O, but parallel to the tangent to the original circle at O, and vice versa; whereas a line passing through O is inverted into itself (but not pointwise invariant). A circle not passing through O inverts to a circle not passing through O. If the circle meets the reference circle, these invariant points of intersection are also on the inverse circle. A circle (or line) is unchanged by inversion if and only if it is orthogonal to the reference circle at the points of intersection. Additional properties include: If a circle q passes through two distinct points A and A' which are inverses with respect to a circle k, then the circles k and q are orthogonal. If the circles k and q are orthogonal, then a straight line passing through the center O of k and intersecting q, does so at inverse points with respect to k. Given a triangle OAB in which O is the center of a circle k, and points A' and B' inverses of A and B with respect to k, then $\angle OAB = \angle OB'A'$ and $\angle OBA = \angle OA'B'.$ The points of intersection of two circles p and q orthogonal to a circle k, are inverses with respect to k. If M and M' are inverse points with respect to a circle k on two curves m and m', also inverses with respect to k, then the tangents to m and m' at the points M and M' are either perpendicular to the straight line MM' or form with this line an isosceles triangle with base MM'. Inversion leaves the measure of angles unaltered, but reverses the orientation of oriented angles. Examples in two dimensions Inversion of a line is a circle containing the center of inversion; or it is the line itself if it contains the center Inversion of a circle is another circle; or it is a line if the original circle contains the center Inversion of a parabola is a cardioid Inversion of a hyperbola is a lemniscate of Bernoulli Application For a circle not passing through the center of inversion, the center of the circle being inverted and the center of its image under inversion are collinear with the center of the reference circle. This fact can be used to prove that the Euler line of the intouch triangle of a triangle coincides with its OI line. The proof roughly goes as below: Invert with respect to the incircle of triangle ABC. The medial triangle of the intouch triangle is inverted into triangle ABC, meaning the circumcenter of the medial triangle, that is, the nine-point center of the intouch triangle, the incenter and circumcenter of triangle ABC are collinear. Any two non-intersecting circles may be inverted into concentric circles. Then the inversive distance (usually denoted δ) is defined as the natural logarithm of the ratio of the radii of the two concentric circles. In addition, any two non-intersecting circles may be inverted into congruent circles, using a circle of inversion centered at a point on the circle of antisimilitude. The Peaucellier–Lipkin linkage is a mechanical implementation of inversion in a circle. It provides an exact solution to the important problem of converting between linear and circular motion. Pole and polar If point R is the inverse of point P, then the line perpendicular to the line PR through one of the points is the polar of the other point (the pole). Poles and polars have several useful properties: If a point P lies on a line l, then the pole L of the line l lies on the polar p of point P. If a point P moves along a line l, its polar p rotates about the pole L of the line l. 
If two tangent lines can be drawn from a pole to the circle, then its polar passes through both tangent points. If a point lies on the circle, its polar is the tangent through this point. If a point P lies on its own polar line, then P is on the circle. Each line has exactly one pole. In three dimensions Circle inversion is generalizable to sphere inversion in three dimensions. The inversion of a point P in 3D with respect to a reference sphere centered at a point O with radius R is a point P′ on the ray with direction OP such that $OP \cdot OP' = R^2.$ As with the 2D version, a sphere inverts to a sphere, except that if a sphere passes through the center O of the reference sphere, then it inverts to a plane. Any plane not passing through O inverts to a sphere touching at O. A circle, that is, the intersection of a sphere with a secant plane, inverts into a circle, except that if the circle passes through O it inverts into a line. This reduces to the 2D case when the secant plane passes through O, but is a true 3D phenomenon if the secant plane does not pass through O. Examples in three dimensions Sphere The simplest surface (besides a plane) is the sphere. The first picture shows a non-trivial inversion (the center of the sphere is not the center of inversion) of a sphere together with two orthogonal intersecting pencils of circles. Cylinder, cone, torus The inversion of a cylinder, cone, or torus results in a Dupin cyclide. Spheroid A spheroid is a surface of revolution and contains a pencil of circles which is mapped onto a pencil of circles (see picture). The inverse image of a spheroid is a surface of degree 4. Hyperboloid of one sheet A hyperboloid of one sheet, which is a surface of revolution, contains a pencil of circles which is mapped onto a pencil of circles. A hyperboloid of one sheet additionally contains two pencils of lines, which are mapped onto pencils of circles. The picture shows one such line (blue) and its inversion. Stereographic projection as the inversion of a sphere A stereographic projection usually projects a sphere from a point N (north pole) of the sphere onto the tangent plane at the opposite point S (south pole). This mapping can be performed by an inversion of the sphere onto its tangent plane. If the sphere (to be projected) has the equation $x^2 + y^2 + z^2 = -z$ (alternately written $x^2 + y^2 + (z + \tfrac{1}{2})^2 = \tfrac{1}{4}$; center $(0, 0, -\tfrac{1}{2})$, radius $\tfrac{1}{2}$, green in the picture), then it will be mapped by the inversion at the unit sphere (red) onto the tangent plane at point $S = (0, 0, -1)$. The lines through the center of inversion (the origin) are mapped onto themselves. They are the projection lines of the stereographic projection. 6-sphere coordinates The 6-sphere coordinates are a coordinate system for three-dimensional space obtained by inverting the Cartesian coordinates. Axiomatics and generalization One of the first to consider foundations of inversive geometry was Mario Pieri in 1911 and 1912. Edward Kasner wrote his thesis on "Invariant theory of the inversion group". More recently the mathematical structure of inversive geometry has been interpreted as an incidence structure where the generalized circles are called "blocks": In incidence geometry, any affine plane together with a single point at infinity forms a Möbius plane, also known as an inversive plane. The point at infinity is added to all the lines. These Möbius planes can be described axiomatically and exist in both finite and infinite versions. A model for the Möbius plane that comes from the Euclidean plane is the Riemann sphere. Invariant The cross-ratio between 4 points is invariant under an inversion. 
In particular, if O is the centre of the inversion and $r_1$ and $r_2$ are the distances to the ends of a line L, then the length of the line will become $\frac{L}{r_1 r_2}$ under an inversion with radius 1. The invariant for four points $x_1, x_2, x_3, x_4$ is the cross-ratio of distances $\frac{|x_1 - x_3|\,|x_2 - x_4|}{|x_1 - x_4|\,|x_2 - x_3|}.$ Relation to Erlangen program According to Coxeter, the transformation by inversion in circle was invented by L. I. Magnus in 1831. Since then this mapping has become an avenue to higher mathematics. Through some steps of application of the circle inversion map, a student of transformation geometry soon appreciates the significance of Felix Klein's Erlangen program, an outgrowth of certain models of hyperbolic geometry. Dilation The combination of two inversions in concentric circles results in a similarity, homothetic transformation, or dilation characterized by the ratio of the circle radii. Reciprocation When a point in the plane is interpreted as a complex number $z = x + iy$, with complex conjugate $\bar{z} = x - iy$, then the reciprocal of $z$ is $\frac{1}{z} = \frac{\bar{z}}{|z|^2}.$ Consequently, the algebraic form of the inversion in a unit circle is given by $z \mapsto w$, where $w = \frac{1}{\bar{z}} = \overline{\left(\frac{1}{z}\right)}.$ Reciprocation is key in transformation theory as a generator of the Möbius group. The other generators are translation and rotation, both familiar through physical manipulations in the ambient 3-space. Introduction of reciprocation (dependent upon circle inversion) is what produces the peculiar nature of Möbius geometry, which is sometimes identified with inversive geometry (of the Euclidean plane). However, inversive geometry is the larger study since it includes the raw inversion in a circle (not yet made, with conjugation, into reciprocation). Inversive geometry also includes the conjugation mapping. Neither conjugation nor inversion-in-a-circle are in the Möbius group since they are non-conformal (see below). Möbius group elements are analytic functions of the whole plane and so are necessarily conformal. Transforming circles into circles Consider, in the complex plane, the circle of radius $r$ around the point $a$, $|z - a| = r$, where without loss of generality $a \ge 0$. Using the definition of inversion $w = 1/\bar{z}$, it is straightforward to show that $w$ obeys the equation $(a^2 - r^2)\,w\bar{w} - a(w + \bar{w}) + 1 = 0,$ and hence that $w$ describes the circle of center $\frac{a}{a^2 - r^2}$ and radius $\frac{r}{|a^2 - r^2|}$. When $a^2 = r^2$, the circle passes through the origin and transforms into the line parallel to the imaginary axis given by $\operatorname{Re}(w) = \frac{1}{2a}$. For $a = 0$ and $r \ne 1$, the result for $w$ is $w\bar{w} = \frac{1}{r^2}$, showing that $w$ describes the circle of center $0$ and radius $\frac{1}{r}$. When $a = 0$ and $r = 1$, the equation for $w$ becomes $w\bar{w} = 1$: the unit circle is mapped onto itself. Higher geometry As mentioned above, zero, the origin, requires special consideration in the circle inversion mapping. The approach is to adjoin a point at infinity designated ∞ or 1/0. In the complex number approach, where reciprocation is the apparent operation, this procedure leads to the complex projective line, often called the Riemann sphere. It was subspaces and subgroups of this space and group of mappings that were applied to produce early models of hyperbolic geometry by Beltrami, Cayley, and Klein. Thus inversive geometry includes the ideas originated by Lobachevsky and Bolyai in their plane geometry. Furthermore, Felix Klein was so overcome by this facility of mappings to identify geometrical phenomena that he delivered a manifesto, the Erlangen program, in 1872. Since then many mathematicians reserve the term geometry for a space together with a group of mappings of that space. The significant properties of figures in the geometry are those that are invariant under this group. For example, Smogorzhevsky develops several theorems of inversive geometry before beginning Lobachevskian geometry. 
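Before moving to higher dimensions, here is a short numerical check (illustrative only; the function name is our own) of the circle-to-circle mapping just derived, using Python's complex arithmetic for the unit-circle inversion z ↦ 1/z̄:

```python
import cmath

def invert_unit_circle(z: complex) -> complex:
    """Inversion in the unit circle: z -> 1 / conjugate(z)."""
    return 1 / z.conjugate()

# Sample points on the circle |z - a| = r with a = 3, r = 1 (so a^2 != r^2),
# and check their images lie on the predicted circle of
# center a/(a^2 - r^2) = 3/8 and radius r/|a^2 - r^2| = 1/8.
a, r = 3.0, 1.0
center = a / (a**2 - r**2)
radius = r / abs(a**2 - r**2)
for k in range(8):
    z = a + r * cmath.exp(2j * cmath.pi * k / 8)   # point on the original circle
    w = invert_unit_circle(z)
    assert abs(abs(w - center) - radius) < 1e-12   # image sits on predicted circle
print("images lie on the circle of center", center, "and radius", radius)
```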
In higher dimensions In a real n-dimensional Euclidean space, an inversion in the sphere of radius $r$ centered at the point $a$ is a map of an arbitrary point $x$ found by inverting the length of the displacement vector $x - a$ and multiplying by $r^2$: $x \mapsto a + \frac{r^2}{\|x - a\|^2}(x - a).$ The transformation by inversion in hyperplanes or hyperspheres in E^n can be used to generate dilations, translations, or rotations. Indeed, two concentric hyperspheres, used to produce successive inversions, result in a dilation or homothety about the hyperspheres' center. When two parallel hyperplanes are used to produce successive reflections, the result is a translation. When two hyperplanes intersect in an (n–2)-flat, successive reflections produce a rotation where every point of the (n–2)-flat is a fixed point of each reflection and thus of the composition. Any combination of reflections, translations, and rotations is called an isometry. Any combination of reflections, dilations, translations, and rotations is a similarity. All of these are conformal maps, and in fact, where the space has three or more dimensions, the mappings generated by inversion are the only conformal mappings; this is Liouville's theorem, a classical theorem of conformal geometry. The addition of a point at infinity to the space obviates the distinction between hyperplane and hypersphere; higher-dimensional inversive geometry is frequently studied then in the presumed context of an n-sphere as the base space. The transformations of inversive geometry are often referred to as Möbius transformations. Inversive geometry has been applied to the study of colorings, or partitionings, of an n-sphere. Anticonformal mapping property The circle inversion map is anticonformal, which means that at every point it preserves angles and reverses orientation (a map is called conformal if it preserves oriented angles). Algebraically, a map is anticonformal if at every point the Jacobian is a scalar times an orthogonal matrix with negative determinant: in two dimensions the Jacobian must be a scalar times a reflection at every point. This means that if $J$ is the Jacobian, then $J \cdot J^{\mathsf{T}} = kI$ and $\det(J) = \pm k^{n/2}$. Computing the Jacobian in the case $z_i = x_i/\|x\|^2$, where $\|x\|^2 = x_1^2 + \cdots + x_n^2$, gives $JJ^{\mathsf{T}} = kI$ with $k = 1/\|x\|^4$, and additionally $\det(J)$ is negative; hence the inversive map is anticonformal. In the complex plane, the most obvious circle inversion map (i.e., using the unit circle centered at the origin) is the complex conjugate of the complex inverse map taking $z$ to $1/z$. The complex analytic inverse map is conformal and its conjugate, circle inversion, is anticonformal. In this case a homography is conformal while an anti-homography is anticonformal. Inversive geometry and hyperbolic geometry The (n − 1)-sphere with equation $x_1^2 + \cdots + x_n^2 + 2a_1 x_1 + \cdots + 2a_n x_n + c = 0$ will have a positive radius if $a_1^2 + \cdots + a_n^2$ is greater than $c$, and on inversion gives the sphere $x_1^2 + \cdots + x_n^2 + 2\frac{a_1}{c} x_1 + \cdots + 2\frac{a_n}{c} x_n + \frac{1}{c} = 0.$ Hence, it will be invariant under inversion if and only if $c = 1$. But this is the condition of being orthogonal to the unit sphere. Hence we are led to consider the (n − 1)-spheres with equation $x_1^2 + \cdots + x_n^2 + 2a_1 x_1 + \cdots + 2a_n x_n + 1 = 0,$ which are invariant under inversion, orthogonal to the unit sphere, and have centers outside of the sphere. These together with the subspace hyperplanes separating hemispheres are the hypersurfaces of the Poincaré disc model of hyperbolic geometry. Since inversion in the unit sphere leaves the spheres orthogonal to it invariant, the inversion maps the points inside the unit sphere to the outside and vice versa. This is therefore true in general of orthogonal spheres, and in particular inversion in one of the spheres orthogonal to the unit sphere maps the unit sphere to itself. 
It also maps the interior of the unit sphere to itself, with points outside the orthogonal sphere mapping inside, and vice versa; this defines the reflections of the Poincaré disc model if we also include with them the reflections through the diameters separating hemispheres of the unit sphere. These reflections generate the group of isometries of the model, which tells us that the isometries are conformal. Hence, the angle between two curves in the model is the same as the angle between two curves in the hyperbolic space.
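As a companion to the n-dimensional formula given under "In higher dimensions", the following Python sketch (our own illustration, with hypothetical names) implements the sphere inversion x ↦ a + r²(x − a)/|x − a|² and verifies two properties stated in the article: the map is an involution, and it exchanges the interior and exterior of the reference sphere.

```python
import numpy as np

def sphere_inversion(x: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """Inversion in the sphere of the given center and radius:
    x -> center + r^2 * (x - center) / |x - center|^2."""
    d = x - center
    return center + (radius**2 / np.dot(d, d)) * d

rng = np.random.default_rng(0)
a, r = np.zeros(3), 1.0          # unit sphere centered at the origin
x = rng.normal(size=3)           # an arbitrary point (not the center)

y = sphere_inversion(x, a, r)
# involution: inverting twice returns the original point
assert np.allclose(sphere_inversion(y, a, r), x)
# points inside the sphere map outside, and vice versa
assert (np.linalg.norm(x) - r) * (np.linalg.norm(y) - r) <= 0
print("inversion verified for", x)
```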
Mathematics
Non-Euclidean geometry
null
295971
https://en.wikipedia.org/wiki/Wolframite
Wolframite
Wolframite is an iron, manganese, and tungstate mineral with a chemical formula of (Fe,Mn)WO4 that is the intermediate mineral between ferberite (the Fe-rich end member, FeWO4) and hübnerite (the Mn-rich end member, MnWO4). Along with scheelite, the wolframite series are the most important tungsten ore minerals. Wolframite is found in quartz veins and pegmatites associated with granitic intrusives. Associated minerals include cassiterite, scheelite, bismuth, quartz, pyrite, galena, sphalerite, and arsenopyrite. This mineral was historically found in Europe in Bohemia, Saxony, and in the UK in Devon and Cornwall. China reportedly has the world's largest supply of tungsten ore, with about 60%. Other producers are Spain, Canada, Portugal, Russia, Australia, Thailand, South Korea, Rwanda, Bolivia, the United States, and the Democratic Republic of the Congo. Properties The wolframite series is mainly formed through magmatic-hydrothermal processes associated with felsic magmas, namely skarns, or through metamorphic processes. In the more common granitic deposits, wolframite minerals can be found in both greisen and veins, as its formation is tied to these two structures. Crystal structure The wolframite series consists of two end members, ferberite (the Fe2+ end member) and hübnerite (the Mn2+ end member), with wolframite, (Fe,Mn)WO4, itself being a solid solution between the two. These two end members can be present in any proportion within wolframite, from 100% ferberite to 100% hübnerite. Wolframite contains the following percentages of its components by weight: 60.63% W6+, 9.21% Fe2+, 9.06% Mn2+, and 21.10% O2−. Wolframite ore exhibits massive form with a dark grey to reddish black coloration. Wolframite in its pure crystal form exhibits a monoclinic crystal system with a perfect cleavage of {010} and an iron black color. Wolframite in its crystalline form also displays lamellar and prismatic habit. Name The name "wolframite" is derived from German "wolf rahm", the name given to tungsten by Johan Gottschalk Wallerius in 1747. This, in turn, derives from "Lupi spuma", the name Georg Agricola used for the element in 1546, which translates into English as "wolf's froth" or "wolf's cream". The etymology is not entirely certain but seems to be a reference to the large amounts of tin consumed by the mineral during tin extraction, the phenomenon being likened to a wolf eating a sheep. Wolfram is the basis for the chemical symbol W for tungsten as a chemical element. World mine production and reserves As of 2022, estimated world mine production was 84,000 metric tons of tungsten. The foremost producer of tungsten is China, with an estimated 71,000 metric tons produced; as such, world tungsten supply is dominated by China and Chinese exports. The next highest producers are Vietnam, Russia, Bolivia, and Rwanda, with an estimated 4,800, 2,300, 1,400, and 1,100 metric tons, respectively. As of 2022, the estimated world reserves of tungsten are 3,800,000 metric tons. Again, China holds the greatest reserves, at 1,800,000 metric tons of tungsten. The following countries have the next highest reserves: Russia, Vietnam, Spain, and Austria, with estimated reserves of 400,000, 100,000, 56,000, and 10,000 metric tons, respectively. Use Wolframite is highly valued as the main source of the metal tungsten, a strong and very dense material with a high melting temperature used for electric filaments and armor-piercing ammunition, as well as hard tungsten carbide machine tools. During World War II, wolframite mines were a strategic asset, due to the metal's use in munitions and tools. 
Tungsten salts were used in the 19th century to dye cotton and to make stage costumes that were fire retardant. Additionally, in the 19th century, tungsten sulfides were sparingly used as lubricants for machining. Wolframite is also used to make tungstic acid, which is used in the textile industry. A major modern-day use of tungsten is as a catalyst for various chemical reactions. One such catalytic use of tungsten is as a hydrocracking catalyst, which is used to improve the yield of organic components such as gasoline in hydrocarbon refinement, as well as to reduce harmful pollution and by-products. Another catalytic use of tungsten is as a de-NOx catalyst, which is used in the treatment of nitrogen oxide emissions to convert harmful nitrogen oxides into inert N2 gas. Another modern-day use of tungsten is as a lubricant. Tungsten disulfide (WS2) is a lubricant with a dynamic coefficient of friction of ~0.03. Tungsten disulfide can be used at temperatures of up to 583 °C in air and up to 1316 °C in vacuum. These characteristics allow this lubricant to operate in extreme conditions. Wolframite was considered to be a conflict mineral due to the unethical mining practices observed in the Democratic Republic of the Congo during the Congo Wars.
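As a check on the component percentages quoted in the Properties section, the short Python sketch below (illustrative only, using our own variable names) recomputes the mass fractions of an idealized 50:50 wolframite, (Fe0.5Mn0.5)WO4, from standard atomic masses:

```python
# Recompute wolframite's component mass fractions for the idealized
# 50:50 solid solution (Fe0.5 Mn0.5)WO4, using standard atomic masses (g/mol).
ATOMIC_MASS = {"Fe": 55.845, "Mn": 54.938, "W": 183.84, "O": 15.999}

composition = {"Fe": 0.5, "Mn": 0.5, "W": 1.0, "O": 4.0}  # atoms per formula unit
molar_mass = sum(ATOMIC_MASS[el] * n for el, n in composition.items())

for el, n in composition.items():
    pct = 100.0 * ATOMIC_MASS[el] * n / molar_mass
    print(f"{el}: {pct:.2f}%")
# Prints approximately Fe: 9.21%, Mn: 9.06%, W: 60.63%, O: 21.10%,
# matching the percentages quoted above.
```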
Physical sciences
Minerals
Earth science
296445
https://en.wikipedia.org/wiki/Impala
Impala
The impala or rooibok (Aepyceros melampus) is a medium-sized antelope found in eastern and southern Africa. The only extant member of the genus Aepyceros, and tribe Aepycerotini, it was first described to Europeans by German zoologist Hinrich Lichtenstein in 1812. Two subspecies are recognised—the grassland-dwelling common impala (sometimes referred to as the Kenyan impala), and the larger and darker black-faced impala, which lives in slightly more arid, scrubland environments. The impala reaches at the shoulder and weighs . It features a glossy, reddish brown coat. The male's slender, lyre-shaped horns are long. Active mainly during the day, the impala may be gregarious or territorial depending upon the climate and geography. Three distinct social groups can be observed: the territorial males, bachelor herds and female herds. The impala is known for two characteristic leaps that constitute an anti-predator strategy. Browsers as well as grazers, impala feed on monocots, dicots, forbs, fruits and acacia pods (whenever available). An annual, three-week-long rut takes place toward the end of the wet season, typically in May. Rutting males fight over dominance, and the victorious male courts females in oestrus. Gestation lasts six to seven months, following which a single calf is born and immediately concealed in cover. Calves are suckled for four to six months; young males—forced out of the all-female groups—join bachelor herds, while females may stay back. The impala is found in woodlands and sometimes on the interface (ecotone) between woodlands and savannahs; it inhabits places near water. While the black-faced impala is confined to southwestern Angola and Kaokoland in northwestern Namibia, the common impala is widespread across its range and has been reintroduced in Gabon and southern Africa. The International Union for Conservation of Nature (IUCN) classifies the impala as a species of least concern; the black-faced subspecies has been classified as a vulnerable species, with fewer than 1,000 individuals remaining in the wild as of 2008. Etymology The first attested English name, in 1802, was palla or pallah, from the Tswana 'red antelope'; the name impala, also spelled impalla or mpala, is first attested in 1875, and is directly from Zulu. Its Afrikaans name, 'red buck', is also sometimes used in English. The scientific generic name Aepyceros ( ‘high-horned’) comes from Ancient Greek (, 'high, steep') + (, 'horn'); the specific name melampus ( ‘black-foot’) from (, 'black') + (, 'foot'). Taxonomy and evolution The impala is the sole member of the genus Aepyceros and belongs to the family Bovidae. It was first described by German zoologist Martin Hinrich Carl Lichtenstein in 1812. In 1984, palaeontologist Elisabeth Vrba opined that the impala is a sister taxon to the alcelaphines, given its resemblance to the hartebeest. A 1999 phylogenetic study by Alexandre Hassanin (of the National Centre for Scientific Research, Paris) and colleagues, based on mitochondrial and nuclear analyses, showed that the impala forms a clade with the suni (Neotragus moschatus). This clade is sister to another formed by the bay duiker (Cephalophus dorsalis) and the klipspringer (Oreotragus oreotragus). An rRNA and β-spectrin nuclear sequence analysis in 2003 also supported an association between Aepyceros and Neotragus. The following cladogram is based on the 1999 study: Up to six subspecies have been described, although only two are generally recognised on the basis of mitochondrial data. 
Though morphologically similar, the subspecies show a significant genetic distance between them, and no hybrids between them have been reported. A. m. melampus Lichtenstein, 1812: Known as the common impala, it occurs across eastern and southern Africa. The range extends from central Kenya to South Africa and westward into southeastern Angola. A. m. petersi Bocage, 1879: Known as the black-faced impala, it is restricted to southwestern Africa, occurring in northwestern Namibia and southwestern Angola. According to Vrba, the impala evolved from an alcelaphine ancestor. She noted that while this ancestor has diverged at least 18 times into various morphologically different forms, the impala has continued in its basic form for at least five million years. Several fossil species have been discovered, including A. datoadeni from the Pliocene of Ethiopia. The oldest fossil discovered suggests its ancient ancestors were slightly smaller than the modern form, but otherwise very similar in all aspects to the latter. This implies that the impala has efficiently adapted to its environment since prehistoric times. Its gregarious nature, variety in diet, positive population trend, defence against ticks and symbiotic relationship with the tick-feeding oxpeckers could have played a role in preventing major changes in morphology and behaviour. Description The impala is a medium-sized, slender-bodied antelope, comparable to the kob, puku and Grant's gazelle in size and build. The head-and-body length is around . Males reach approximately at the shoulder, while females are tall. Males typically weigh and females . Sexually dimorphic, females are hornless and smaller than males. Males grow slender, lyre-shaped horns long. The horns, strongly ridged and divergent, are circular in section and hollow at the base. Their arch-like structure allows interlocking of horns, which helps a male throw off his opponent during fights; horns also protect the skull from damage. The glossy coat of the impala shows two-tone colouration: the reddish brown back and the tan flanks; these are in sharp contrast to the white underbelly. Facial features include white rings around the eyes and a light chin and snout. The ears, long, are tipped with black. Black streaks run from the buttocks to the upper hindlegs. The bushy white tail, long, features a solid black stripe along the midline. The impala's colouration bears a strong resemblance to the gerenuk, which has shorter horns and lacks the black thigh stripes of the impala. The impala has scent glands covered by a black tuft of hair on the hindlegs. 2-Methylbutanoic acid and 2-nonanone have been identified from this gland. Sebaceous glands concentrated on the forehead and dispersed on the torso of dominant males are most active during the mating season, while those of females are only partially developed and do not undergo seasonal changes. There are four nipples. Of the subspecies, the black-faced impala is significantly larger and darker than the common impala; melanism is responsible for the black colouration. Distinctive of the black-faced impala is a dark stripe, on either side of the nose, that runs upward to the eyes and thins as it reaches the forehead. Other differences include the larger black tip on the ear, and a bushier and nearly 30% longer tail in the black-faced impala. 
The impala has a special dental arrangement on the front lower jaw similar to the toothcomb seen in strepsirrhine primates, which is used during allogrooming to comb the fur on the head and the neck and remove ectoparasites. Ecology and behaviour The impala is diurnal (active mainly during the day), though activity tends to cease during the hot midday hours; they feed and rest at night. Three distinct social groups can be observed: the territorial males, bachelor herds and female herds. The territorial males hold territories where they may form harems of females; territories are demarcated with urine and faeces and defended against juvenile or male intruders. Bachelor herds tend to be small, with less than 30 members. Individuals maintain distances of from one another; while young and old males may interact, middle-aged males generally avoid one another except to spar. Female herds vary in size from 6 to 100; herds occupy home ranges of . The mother–calf bond is weak, and breaks soon after weaning; juveniles leave the herds of their mothers to join other herds. Female herds tend to be loose and have no obvious leadership. Allogrooming is an important means of social interaction in bachelor and female herds; in fact, the impala appears to be the only ungulate to display self-grooming as well as allogrooming. In allogrooming, females typically groom related impalas, while males associate with unrelated ones. Each partner grooms the other six to twelve times. Social behaviour is influenced by the climate and geography; as such, the impala are territorial at certain times of the year and gregarious at other times, and the length of these periods can vary broadly among populations. For instance, populations in southern Africa display territorial behaviour only during the few months of the rut, whereas in eastern African populations, territoriality is relatively minimal despite a protracted mating season. Moreover, territorial males often tolerate bachelors, and may even alternate between bachelorhood and territoriality at different times of the year. A study of impala in the Serengeti National Park showed that in 94% of the males, territoriality was observed for less than four months. The impala is an important prey species for Africa's large carnivores, such as cheetahs, leopards, wild dogs (its main predator), lions, hyenas, crocodiles and pythons. The antelope displays two characteristic leaps: it can jump up to , over vegetation and even other impala, covering distances of up to ; the other type of leap involves a series of jumps in which the animal lands on its forelegs, moves its hindlegs mid-air in a kicking fashion, lands on all fours (stotting) and then rebounds. It leaps in either manner in different directions, probably to confuse predators. At times, the impala may also conceal itself in vegetation to escape the eye of the predator. The most prominent vocalisation is the loud roar, delivered through one to three loud snorts with the mouth closed, followed by two to ten deep grunts with the mouth open and the chin and tail raised; a typical roar can be heard up to away. Scent gland secretions identify a territorial male. Impalas are sedentary; adult and middle-aged males, in particular, can hold their territories for years. Parasites Common ixodid ticks collected from impala include Amblyomma hebraeum, Boophilus decoloratus, Hyalomma marginatum, Ixodes cavipalpus, Rhipicephalus appendiculatus and R. evertsi. In Zimbabwe, heavy infestation by ticks such as R. 
appendiculatus has proved to be a major cause behind the high mortality of ungulates, as they can lead to tick paralysis. Impala have special adaptations for grooming, such as their characteristic dental arrangement, to manage ticks before they engorge; however, the extensive grooming needed to keep the tick load under control involves the risk of dehydration during summer, lower vigilance against predators and gradual wearing out of the teeth. A study showed that impala adjust the time devoted to grooming and the number of grooming bouts according to the seasonal prevalence of ticks. Impala are symbiotically related to oxpeckers, which feed on ticks from those parts of the antelope's body which the animal cannot access by itself (such as the ears, neck, eyelids, forehead and underbelly). The impala is the smallest ungulate with which oxpeckers are associated. In a study it was observed that oxpeckers selectively attended to impala despite the presence of other animals such as Coke's hartebeest, Grant's gazelle, Thomson's gazelle and topi. A possible explanation for this could be that because the impala inhabits woodlands (which can have a high density of ticks), the impala could have a greater mass of ticks per unit area of the body surface. Another study showed that the oxpeckers prefer the ears over other parts of the body, probably because these parts show maximum tick infestation. The bird has also been observed to perch on the udder of a female and pilfer her milk. Lice recorded from impala include Damalinia aepycerus, D. elongata, Linognathus aepycerus and L. nevilli; in a study, ivermectin (a medication against parasites) was found to have an effect on Boophilus decoloratus and Linognathus species, though not on Damalinia species. In a study of impala in South Africa, the number of worms in juveniles showed an increase with age, reaching a peak when impala turned a year old. This study recorded worms of genera such as Cooperia, Cooperoides, Fasciola, Gongylonema, Haemonchus, Impalaia, Longistrongylus and Trichostrongylus; some of these showed seasonal variations in density. Impala show a high frequency of defensive behaviours towards flying insects. This is probably why Vale (1977) and Clausen et al. (1998) found only trace levels of feeding by Glossina (tsetse flies) upon impala. Theileria of impala in Kenya are not cross-infectious to cattle: Grootenhuis et al. (1975) were not able to induce infection in cattle, and Fawcett et al. (1987) did not find it occurring naturally. Diet Impala browse as well as graze; either may predominate, depending upon the availability of resources. The diet comprises monocots, dicots, forbs, fruits and acacia pods (whenever available). Impala prefer places close to water sources, and resort to succulent vegetation if water is scarce. An analysis showed that the diet of impala is composed of 45% monocots, 45% dicots and 10% fruits; the proportion of grasses in the diet increases significantly (to as high as 90%) after the first rains, but declines in the dry season. Browsing predominates in the late wet and dry season, and diets are nutritionally poor in the mid-dry season, when impala feed mostly on woody dicots. Another study showed that the dicot proportion in the diet is much higher in bachelors and females than in territorial males. Impala feed on soft and nutritious grasses such as Digitaria macroblephara; tough, tall grasses, such as Heteropogon contortus and Themeda triandra, are typically avoided. 
Impala on the periphery of the herds are generally more vigilant against predators than those feeding in the centre; a foraging individual will try to defend the patch it is feeding on by lowering its head. A study revealed that time spent foraging reaches a maximum of 75.5% of the day in the late dry season, decreases through the rainy season, and is minimal in the early dry season (57.8%). Reproduction Males are sexually mature by the time they are a year old, though successful mating generally occurs only after four years. Mature males start establishing territories and try to gain access to females. Females can conceive after they are a year and a half old; oestrus lasts for 24 to 48 hours, and occurs every 12–29 days in non-pregnant females. The annual three-week-long rut (breeding season) begins toward the end of the wet season, typically in May. Gonadal growth and hormone production in males begin a few months before the breeding season, resulting in greater aggressiveness and territoriality. The bulbourethral glands are heavier, testosterone levels are nearly twice as high in territorial males as in bachelors, and the neck of a territorial male tends to be thicker than that of a bachelor during the rut. Mating tends to take place between full moons. Rutting males fight over dominance, often giving out noisy roars and chasing one another; they walk stiffly and display their neck and horns. Males desist from feeding and allogrooming during the rut, probably to devote more time to garnering females in oestrus; the male checks the female's urine to ensure that she is in oestrus. On coming across such a female, the excited male begins the courtship by pursuing her, keeping a distance of from her. The male flicks his tongue and may nod vigorously; the female allows him to lick her vulva, and holds her tail to one side. The male tries mounting the female, holding his head high and clasping her sides with his forelegs. Mounting attempts may be repeated every few seconds to every minute or two. The male loses interest in the female after the first copulation, though she is still active and can mate with other males. Gestation lasts six to seven months. Births generally occur at midday; the female will isolate herself from the herd when labour pains begin. The perception that females can delay giving birth for an additional month if conditions are harsh may, however, not be realistic. A single calf is born, and is immediately concealed in cover for the first few weeks of its life. The fawn then joins a nursery group within its mother's herd. Calves are suckled for four to six months; young males, forced out of the group, join bachelor herds, while females may stay back. Distribution and habitat The impala inhabits woodlands due to its preference for shade; it can also be found on the interface (ecotone) between woodlands and savannahs. Places near water sources are preferred. In southern Africa, populations tend to be associated with Colophospermum mopane and Acacia woodlands. Habitat choices differ seasonally: Acacia senegal woodlands are preferred in the wet season, and A. drepanolobium savannahs in the dry season. Another factor that could influence habitat choice is vulnerability to predators; impala tend to keep away from areas with tall grasses as predators could be concealed there. A study found that the reduction of woodland cover and creation of shrublands by the African bush elephants has favoured the impala population by increasing the availability of more dry-season browse. 
Earlier, the Baikiaea woodland, which has now declined due to elephants, provided minimum browsing for impala. The newly formed Capparis shrubland, on the other hand, could be a key browsing habitat. Impala are generally not associated with montane habitats; however, in KwaZulu-Natal, impala have been recorded at altitudes of up to above sea level. The historical range of the impala – spanning across southern and eastern Africa – has remained intact to a great extent, although it has disappeared from a few places, such as Burundi. The range extends from central and southern Kenya and northeastern Uganda in the east to northern KwaZulu-Natal in the south, and westward up to Namibia and southern Angola. The black-faced impala is confined to southwestern Angola and Kaokoland in northwestern Namibia; the status of this subspecies has not been monitored since the 2000s. The common impala has a wider distribution, and has been introduced in protected areas in Gabon and across southern Africa. Threats and conservation The International Union for Conservation of Nature and Natural Resources (IUCN) classifies the impala as a species of least concern overall. The black-faced impala, however, is classified as a vulnerable species; as of 2008, fewer than 1,000 were estimated in the wild. Though there are no major threats to the survival of the common impala, poaching and natural calamities have significantly contributed to the decline of the black-faced impala. As of 2008, the population of the common impala has been estimated at around two million. According to some studies, translocation of the black-faced impala can be highly beneficial in its conservation. Around a quarter of the common impala populations occur in protected areas, such as the Okavango Delta (Botswana); Masai Mara and Kajiado (Kenya); Kruger National Park (South Africa); the Ruaha and Serengeti National Parks and Selous Game Reserve (Tanzania); Luangwa Valley (Zambia); Hwange, Sebungwe and Zambezi Valley (Zimbabwe). The rare black-faced impala has been introduced into private farms in Namibia and the Etosha National Park. Population densities vary largely from place to place; from less than one impala per square kilometre in Mkomazi National Park (Tanzania) to as high as 135 per square kilometre near Lake Kariba (Zimbabwe).
Biology and health sciences
Artiodactyla
null
296542
https://en.wikipedia.org/wiki/Chrysanthemum
Chrysanthemum
Chrysanthemums, sometimes called mums or chrysanths, are flowering plants of the genus Chrysanthemum in the family Asteraceae. They are native to East Asia and northeastern Europe. Most species originate from East Asia, and the center of diversity is in China. Countless horticultural varieties and cultivars exist. Description The genus Chrysanthemum comprises perennial herbaceous flowering plants, sometimes subshrubs. The leaves are alternate, divided into leaflets and may be pinnatisect, lobed, or serrate (toothed) but rarely entire; they are connected to stalks with hairy bases. The compound inflorescence is an array of several flower heads, or sometimes a solitary head. The head has a base covered in layers of phyllaries. The simple row of ray florets is white, yellow, or red. The disc florets are yellow. Pollen grains are approximately 34 microns in diameter. The fruit is a ribbed achene. Etymology The name "chrysanthemum" is derived from the Ancient Greek chrysos (gold) and anthemon (flower). Taxonomy The genus Chrysanthemum was first formally described by Linnaeus in 1753, with 14 species, and hence bears his name (L.) as the botanical authority. The genus once included more species, but was split several decades ago into several genera, putting the economically important florist's chrysanthemums in the genus Dendranthema. The naming of these genera has been contentious, but a ruling of the International Botanical Congress in 1999 changed the defining species of the genus to Chrysanthemum indicum, restoring the florist's chrysanthemums to the genus Chrysanthemum. Genera now separated from Chrysanthemum include Argyranthemum, Glebionis, Leucanthemopsis, Leucanthemum, Rhodanthemum, and Tanacetum. Species Plants of the World Online accepted the following species: Chrysanthemum aphrodite Kitam. Chrysanthemum arcticum L. Chrysanthemum argyrophyllum Ling Chrysanthemum arisanense Hayata Chrysanthemum chalchingolicum Grubov Chrysanthemum chanetii H.Lév. Chrysanthemum crassum (Kitam.) Kitam. Chrysanthemum cuneifolium Kitam. Chrysanthemum daucifolium Pers. Chrysanthemum dichrum (C.Shih) H.Ohashi & Yonek. Chrysanthemum foliaceum (G.F.Peng, C.Shih & S.Q.Zhang) J.M.Wang & Y.T.Hou Chrysanthemum glabriusculum (W.W.Sm.) Hand.-Mazz. Chrysanthemum horaimontanum Masam. Chrysanthemum hypargyreum Diels Chrysanthemum indicum L. Chrysanthemum integrifolium Richardson Chrysanthemum japonense (Makino) Nakai Chrysanthemum × konoanum Makino Chrysanthemum lavandulifolium Makino Chrysanthemum leucanthum (Makino) Makino Chrysanthemum longibracteatum (C.Shih, G.F.Peng & S.Y.Jin) J.M.Wang & Y.T.Hou Chrysanthemum maximoviczii Kom. Chrysanthemum miyatojimense Kitam. Chrysanthemum × morifolium (Ramat.) Hemsl. Chrysanthemum morii Hayata Chrysanthemum naktongense Nakai Chrysanthemum ogawae Kitam. Chrysanthemum okiense Kitam. Chrysanthemum oreastrum Hance Chrysanthemum ornatum Hemsl. Chrysanthemum parvifolium C.C.Chang Chrysanthemum potentilloides Hand.-Mazz. Chrysanthemum rhombifolium (Y.Ling & C.Shih) H.Ohashi & Yonek. Chrysanthemum × rubellum Sealy Chrysanthemum × shimotomaii Makino Chrysanthemum sinuatum Ledeb. Chrysanthemum vestitum (Hemsl.) Kitam. 
Chrysanthemum yantaiense M.Sun & J.T.Chen Chrysanthemum yoshinaganthum Makino Chrysanthemum zawadskii Herbich Chrysanthemum zhuozishanense L.Q.Zhao & Jie Yang Former species include: Chrysanthemum carinatum = Ismelia carinata Chrysanthemum cinerariifolium = Tanacetum cinerariifolium Chrysanthemum coccineum = Tanacetum coccineum Chrysanthemum coronarium = Glebionis coronaria Chrysanthemum frutescens = Argyranthemum frutescens Chrysanthemum maximum = Leucanthemum maximum Chrysanthemum pacificum = Ajania pacifica Chrysanthemum segetum = Glebionis segetum Ecology Chrysanthemums start blooming in early autumn. They are also known as a flower associated with the month of November. Cultivation Chrysanthemums were first cultivated in China as a flowering herb as far back as the 15th century BCE. Over 500 cultivars had been recorded by 1630. By 2014, it was estimated that there were over 20,000 cultivars in the world and about 7,000 cultivars in China. The plant is renowned as one of the Four Gentlemen in Chinese and East Asian art. The plant is particularly significant during the Double Ninth Festival. Chrysanthemum cultivation in Japan began during the Nara and Heian periods (early 8th to late 12th centuries) and gained popularity in the Edo period (early 17th to late 19th century). Many flower shapes, colours, and varieties were created. The way the flowers were grown and shaped also developed, and chrysanthemum culture flourished. Various cultivars of chrysanthemums created in the Edo period were characterized by a remarkable variety of flower shapes. They were exported to China from the end of the Edo period, changing the way Chinese chrysanthemum cultivars were grown and their popularity. In addition, from the Meiji period (late 19th to early 20th century), many cultivars with flowers over in diameter, called the Ogiku (lit., great chrysanthemum) style, were created, which influenced the subsequent trend of chrysanthemums. The Imperial Seal of Japan is a chrysanthemum, and the institution of the monarchy is also called the Chrysanthemum Throne. A number of festivals and shows take place throughout Japan in autumn when the flowers bloom. Chrysanthemum Day (Kiku no Sekku) is one of the five ancient sacred festivals. It is celebrated on the 9th day of the 9th month. It was started in 910, when the imperial court held its first chrysanthemum show. Chrysanthemums entered American horticulture in 1798 when Colonel John Stevens imported a cultivated variety known as Dark Purple from England. The introduction was part of an effort to grow attractions within Elysian Fields in Hoboken, New Jersey. Uses Ornamental uses Modern cultivated chrysanthemums are usually brighter and more striking than their wild relatives. Many horticultural specimens have been bred to bear many rows of ray florets in a great variety of colors. The flower heads occur in various forms, and can be daisy-like or decorative, like pompons or buttons. This genus contains many hybrids and thousands of cultivars developed for horticultural purposes. In addition to the traditional yellow, other colors are available, such as white, purple, and red. The most important hybrid is Chrysanthemum × morifolium (syn. C. × grandiflorum), derived primarily from C. indicum, but also involving other species. Over 140 cultivars of chrysanthemum have gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017). In Japan, a form of bonsai chrysanthemum was developed over the centuries. 
The cultivated flower has a lifespan of about 5 years and can be kept in miniature size. Another method is to use pieces of dead wood, over which the flower grows along the back, giving the illusion from the front that the miniature tree is blooming.

Culinary uses
Yellow or white chrysanthemum flowers of the species C. morifolium are boiled to make a tea in some parts of East Asia. The resulting beverage is known simply as chrysanthemum tea (菊花茶, pinyin: júhuā chá, in Chinese). In Korea, a rice wine flavored with chrysanthemum flowers is called gukhwaju (국화주). Chrysanthemum leaves are steamed or boiled and used as greens, especially in Chinese cuisine. The flowers may be added to dishes such as mixian in broth or thick snakemeat soup (蛇羹) to enhance the aroma. They are commonly used in hot pot and stir fries. In Japanese cuisine, small chrysanthemums are used as a garnish for sashimi.

Insecticidal uses
Pyrethrum (Chrysanthemum [or Tanacetum] cinerariifolium) is economically important as a natural source of insecticide. The flowers are pulverized, and the active components, called pyrethrins, which occur in the achenes, are extracted and sold in the form of an oleoresin. This is applied as a suspension in water or oil, or as a powder. Pyrethrins attack the nervous systems of all insects and inhibit female mosquitoes from biting. In sublethal doses, they have an insect-repellent effect. They are harmful to fish, but are far less toxic to mammals and birds than many synthetic insecticides. They are not persistent, being biodegradable, and also decompose easily on exposure to light. Pyrethroids such as permethrin are synthetic insecticides based on natural pyrethrum. Despite this, chrysanthemum leaves are still a major host for destructive pests, such as leafminer flies including L. trifolii. Persian powder is an example of an industrial insecticide product derived from chrysanthemums.

Environmental uses
In the NASA Clean Air Study, chrysanthemum plants were shown to reduce indoor air pollution.

In culture
In some European countries (e.g., France, Belgium, Italy, Spain, Poland, Hungary, Croatia), incurve chrysanthemums symbolize death and are used only for funerals or on graves, while other types carry no such symbolism; similarly, in China, Japan, and Korea, white chrysanthemums symbolize adversity, lamentation, and grief. In some other countries, they represent honesty. In the United States, the flower is usually regarded as positive and cheerful, with New Orleans as a notable exception. In the Victorian language of flowers, the chrysanthemum had several meanings: the Chinese chrysanthemum meant cheerfulness, the red chrysanthemum stood for "I love", and the yellow chrysanthemum symbolized slighted love. The chrysanthemum is also the flower of November.

East Asia
China
The chrysanthemum is the city flower of Beijing and Kaifeng. The tradition of cultivating different varieties of chrysanthemums stretches back 1,600 years, and the scale reached a phenomenal level during the Song dynasty until its loss to the Jurchens in 1126. The city has held the Kaifeng Chrysanthemum Cultural Festival since 1983 (renamed the China Kaifeng Chrysanthemum Cultural Festival in 1994). The event is the largest chrysanthemum festival in China; it has been a yearly feature since, taking place between 18 October and 18 November every year. The chrysanthemum is one of the "Four Gentlemen" of China (the others being the plum blossom, the orchid, and bamboo).
The chrysanthemum is said to have been favored by Tao Qian, an influential Chinese poet, and is symbolic of nobility. It is also one of the four symbolic seasonal flowers. A chrysanthemum festival is held each year in Tongxiang, near Hangzhou, China. Chrysanthemums are the subject of hundreds of Chinese poems. The "golden flower" referred to in the 2006 movie Curse of the Golden Flower is a chrysanthemum. "Chrysanthemum Gate" (jú huā mén), often abbreviated as Chrysanthemum (菊花), is taboo slang meaning "anus" (with sexual connotations). An ancient Chinese city (Xiaolan Town of Zhongshan City) was named Ju-Xian, meaning "chrysanthemum city". The plant is particularly significant during the Chinese Double Ninth Festival. In Chinese culture, the chrysanthemum is a symbol of autumn and the flower of the ninth moon. During the Han dynasty, people even drank chrysanthemum wine on the ninth day of the ninth lunar month to prolong their lives. It is a symbol of longevity because of its health-giving properties, and for this reason the flower was often worn on funeral attire. The Pharmacopoeia of the People's Republic of China lists two kinds of chrysanthemum for medical use, Yejuhua and Juhua. Historically, Yejuhua is said to treat carbuncle, furuncle, conjunctivitis, headache, and vertigo; Juhua is said to treat cold, headache, vertigo, and conjunctivitis.

Japan
Chrysanthemums first arrived in Japan by way of China in the 5th century. The chrysanthemum has been used as a theme of waka (Japanese traditional poetry) since around the 10th century in the Heian period, with Kokin Wakashū the most famous example. In the 12th century, during the Kamakura period, when the Retired Emperor Go-Toba adopted it as the mon (family crest) of the Imperial family, it became a flower that symbolized autumn in Japan. During the Edo period, from the 17th century to the 19th century, the development of the economy and culture made the cultivation of chrysanthemums, cherry blossoms, Japanese iris, morning glory, and other flowers popular; many cultivars were created and many chrysanthemum exhibitions were held. From the Meiji period in the latter half of the 19th century, due to the growing importance of the chrysanthemum as a symbol of the Imperial family, the creation of ogiku-style cultivars with a diameter of 20 cm or more became popular. In the present day, each autumn there are chrysanthemum exhibitions at Shinjuku Gyoen, Meiji Shrine, and Yasukuni Shrine in Tokyo. The Yasukuni Shrine, formerly a state-endowed shrine (官国弊社, kankokuheisha), has adopted the chrysanthemum crest. Culinary-grade chrysanthemums are used to decorate food, and they remain a common motif for traditional Japanese arts such as porcelain, lacquerware, and kimono. Chrysanthemum growing is still practised actively as a hobby by many Japanese people, who enter prize plants in contests. Chrysanthemum "dolls", often depicting fictional characters from both traditional sources like kabuki and contemporary sources like Disney, are displayed throughout the fall months, and the city of Nihonmatsu hosts the "Nihonmatsu Chrysanthemum Dolls Exhibition" every autumn in the historical ruins of Nihonmatsu Castle. Chrysanthemums are also grown into bonsai forms.

In Japan, the chrysanthemum is a symbol of the Emperor and the Imperial family. In particular, a "chrysanthemum crest" (菊花紋章, kikukamonshō or kikkamonshō), i.e. a mon of chrysanthemum blossom design, indicates a link to the Emperor; there are more than 150 patterns of this design.
Notable uses of and references to the Imperial chrysanthemum include:
The Imperial Seal of Japan, used by members of the Japanese imperial family. In 1869, a two-layered, 16-petal design was designated as the symbol of the emperor. Princes used a simpler, single-layer pattern.
The Chrysanthemum Throne, the name given to the position of Japanese Emperor and the throne.
The Supreme Order of the Chrysanthemum, a Japanese honor awarded by the emperor on the advice of the Japanese government.
In Imperial Japan, small arms were required to be stamped with the imperial chrysanthemum, as they were considered the personal property of the emperor.

The Nagoya Castle Chrysanthemum Competition started after the end of the Pacific War. The event at the castle has become a tradition for the city. With three categories, it is one of the largest events of its kind in the region by both scale and content. The first category is the exhibition of cultivated flowers. The second category is for bonsai flowers, which are combined with dead pieces of wood to give the illusion of miniature trees. The third category is the creation of miniature landscapes.

Korea
The flower is found extensively in inlaid Goryeo ware and was reproduced in stamp form in Buncheong wares. Several twentieth-century potters, especially Kim Se-yong, created double-wall wares featuring each individual petal painted in white clay against a celadon background. A vase produced using this technique and presented in 1999 to Queen Elizabeth II can be found in the Royal Collection. Laying a wreath of white chrysanthemums to mourn at funerals has been common since the early 20th century. Before the 20th century, white clothing was traditionally worn in funeral settings; the introduction of Western culture then made black the prevalent color, and white chrysanthemums were instead used to preserve the tradition of using white to mourn at funerals. Korea has a number of flower shows that exhibit the chrysanthemum, such as the Masan Gagopa Chrysanthemum Festival.

West Asia
Iran
In Iran, chrysanthemums are associated with the Zoroastrian spiritual being Ashi Vanghuhi (lit. 'good blessings, rewards'), a female Yazad (angel) presiding over blessings.

Oceania
Australia
In Australia, on Mother's Day, which falls in May when the flower is in season, people traditionally wear a white chrysanthemum, or a similar white flower, to honour their mothers. Chrysanthemums are often given as Mother's Day presents.

North America
United States
On 5 and 6 November 1883, in Philadelphia, the Pennsylvania Horticultural Society (PHS), at the request of the Florists and Growers Society, held its first Chrysanthemum Show in Horticultural Hall. This would be the first of several chrysanthemum events presented by PHS to the public. The founding of the chrysanthemum industry dates back to 1884, when the Enomoto brothers of Redwood City, California, grew the first chrysanthemums cultivated in America. In 1913, Sadakasu Enomoto (of San Mateo County) astounded the flower world by successfully shipping a carload of Turner chrysanthemums to New Orleans for the All Saints Day celebration. The chrysanthemum was recognized as the official flower of the city of Chicago by Mayor Richard J. Daley in 1966. The chrysanthemum is the official flower of the city of Salinas, California. It is also the official flower of several fraternities and sororities, including Chi Phi, Phi Kappa Sigma, Phi Mu Alpha Sinfonia, Lambda Kappa Sigma, Sigma Alpha, and Triangle Fraternity.
Europe
Italy
Italian composer Giacomo Puccini wrote Crisantemi (1890), a movement for string quartet, in memory of his friend Amedeo di Savoia, Duca d'Aosta. In Italy (and other European countries) the chrysanthemum is the flower that people traditionally bring to their deceased loved ones at the cemetery, and it is generally associated with mourning. A probable reason for this is the fact that the plant flowers between the end of October and the beginning of November, coinciding with the Day of the Dead (2 November).

Poland
Chrysanthemums are placed on graves to honor the dead during All Saints' Day and All Souls' Day in Poland.

United Kingdom
The UK National Collection of hardy chrysanthemums is at Hill Close Gardens near Warwick.
Arrow of time
The arrow of time, also called time's arrow, is the concept positing the "one-way direction" or "asymmetry" of time. The concept was developed in 1927 by the British astrophysicist Arthur Eddington, and it remains an unsolved question in general physics. This direction, according to Eddington, could be determined by studying the organization of atoms, molecules, and bodies, and might be drawn upon a four-dimensional relativistic map of the world ("a solid block of paper"). The arrow of time paradox was originally recognized in the 1800s for gases (and other substances) as a discrepancy between the microscopic and macroscopic descriptions of thermodynamics and statistical physics: at the microscopic level, physical processes are believed to be either entirely or mostly time-symmetric; if the direction of time were to reverse, the theoretical statements that describe them would remain true. Yet at the macroscopic level it often appears that this is not the case: there is an obvious direction (or flow) of time.

Overview
The symmetry of time (T-symmetry) can be understood simply as follows: if time were perfectly symmetrical, a video of real events would seem realistic whether played forwards or backwards. Gravity, for example, is a time-reversible force. A ball that is tossed up, slows to a stop, and falls is a case where recordings would look equally realistic forwards and backwards. The system is T-symmetrical. However, the process of the ball bouncing and eventually coming to a stop is not time-reversible. While going forward, kinetic energy is dissipated and entropy is increased. The increase of entropy may be one of the few processes that is not time-reversible. According to the statistical notion of increasing entropy, the "arrow" of time is identified with a decrease of free energy.

In his book The Big Picture, physicist Sean M. Carroll compares the asymmetry of time to the asymmetry of space: while physical laws are in general isotropic, near Earth there is an obvious distinction between "up" and "down", due to proximity to this huge body, which breaks the symmetry of space. Similarly, physical laws are in general symmetric to the flipping of time direction, but near the Big Bang (i.e., in the first many trillions of years following it), there is an obvious distinction between "forward" and "backward" in time, due to relative proximity to this special event, which breaks the symmetry of time. Under this view, all the arrows of time are a result of our relative proximity in time to the Big Bang and the special circumstances that existed then. (Strictly speaking, the weak interactions are asymmetric to both spatial reflection and to flipping of the time direction; however, they obey a more complicated symmetry that includes both.)

Conception by Eddington
In the 1928 book The Nature of the Physical World, which helped to popularize the concept, Eddington stated:

Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone. I shall use the phrase 'time's arrow' to express this one-way property of time which has no analogue in space.

Eddington then gives three points to note about this arrow:
It is vividly recognized by consciousness.
It is equally insisted on by our reasoning faculty, which tells us that a reversal of the arrow would render the external world nonsensical.
It makes no appearance in physical science except in the study of organization of a number of individuals. (In other words, it is only observed in entropy, a statistical mechanics phenomenon arising from a system.)

Arrows
Psychological/perceptual arrow of time
A related mental arrow arises because one has the sense that one's perception is a continuous movement from the known past to the unknown future. This phenomenon has two aspects: memory (we remember the past but not the future) and volition (we feel we can influence the future but not the past). The two aspects are a consequence of the causal arrow of time: past events (but not future events) are the cause of our present memories, as more and more correlations are formed between the outer world and our brain (see correlations and the arrow of time); and our present volitions and actions are causes of future events. This is because the increase of entropy is thought to be related to the increase of both the correlations between a system and its surroundings and the overall complexity, under an appropriate definition; thus all increase together with time.

Past and future are also psychologically associated with additional notions. English, along with other languages, tends to associate the past with "behind" and the future with "ahead", with expressions such as "to look forward to welcoming you", "to look back to the good old times", or "to be years ahead". However, this association of "behind ⇔ past" and "ahead ⇔ future" is culturally determined. For example, the Aymara language associates "ahead ⇔ past" and "behind ⇔ future" both in terms of terminology and gestures, corresponding to the past being observed and the future being unobserved. Similarly, the Chinese term for "the day after tomorrow", 後天 ("hòutiān"), literally means "after (or behind) day", whereas "the day before yesterday", 前天 ("qiántiān"), is literally "preceding (or in front) day", and Chinese speakers spontaneously gesture in front for the past and behind for the future, although there are conflicting findings on whether they perceive the ego to be in front of or behind the past. There are no languages that place the past and future on a left–right axis (e.g., there is no expression in English such as *the meeting was moved to the left), although at least English speakers associate the past with the left and the future with the right, which seems to have its origin in the left-to-right writing system.

The words "yesterday" and "tomorrow" both translate to the same word in Hindi: कल ("kal"), meaning "[one] day remote from today"; the ambiguity is resolved by verb tense. परसों ("parson") is used for both "day before yesterday" and "day after tomorrow", or "two days from today". तरसों ("tarson") is used for "three days from today" and नरसों ("narson") for "four days from today".

The other side of the psychological passage of time is in the realm of volition and action. We plan and often execute actions intended to affect the course of events in the future. From the Rubaiyat:

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.
— Omar Khayyam (translation by Edward FitzGerald)
In June 2022, researchers reported in Physical Review Letters that salamanders demonstrated counter-intuitive responses to the arrow of time in how their eyes perceived different stimuli.

Thermodynamic arrow of time
The arrow of time is the "one-way direction" or "asymmetry" of time. The thermodynamic arrow of time is provided by the second law of thermodynamics, which says that in an isolated system, entropy tends to increase with time. Entropy can be thought of as a measure of microscopic disorder; thus the second law implies that time is asymmetrical with respect to the amount of order in an isolated system: as a system advances through time, it becomes more statistically disordered. This asymmetry can be used empirically to distinguish between future and past, though measuring entropy does not accurately measure time. Also, in an open system, entropy can decrease with time; one might ask, as a thought experiment, whether a local decrease of entropy in an open system would make the arrow of time flip in polarity and point towards the past.

British physicist Sir Alfred Brian Pippard wrote: "There is thus no justification for the view, often glibly repeated, that the Second Law of Thermodynamics is only statistically true, in the sense that microscopic violations repeatedly occur, but never violations of any serious magnitude. On the contrary, no evidence has ever been presented that the Second Law breaks down under any circumstances." However, there are a number of paradoxes regarding violation of the second law of thermodynamics, one of them due to the Poincaré recurrence theorem.

This arrow of time seems to be related to all other arrows of time and arguably underlies some of them, with the exception of the weak arrow of time.

Harold Blum's 1951 book Time's Arrow and Evolution discusses "the relationship between time's arrow (the second law of thermodynamics) and organic evolution." This influential text explores "irreversibility and direction in evolution and order, negentropy, and evolution." Blum argues that evolution followed specific patterns predetermined by the inorganic nature of the earth and its thermodynamic processes.

Cosmological arrow of time
The cosmological arrow of time points in the direction of the universe's expansion. It may be linked to the thermodynamic arrow, with the universe heading towards a heat death (Big Chill) as the amount of thermodynamic free energy becomes negligible. Alternatively, it may be an artifact of our place in the universe's evolution (see anthropic bias), with this arrow reversing as gravity pulls everything back into a Big Crunch. If this arrow of time is related to the other arrows of time, then the future is by definition the direction towards which the universe becomes bigger. Thus, the universe expands, rather than shrinks, by definition.

The thermodynamic arrow of time and the second law of thermodynamics are thought to be a consequence of the initial conditions in the early universe. Therefore, they ultimately result from the cosmological set-up.

Radiative arrow of time
Waves, from radio waves to sound waves to those on a pond from throwing a stone, expand outward from their source, even though the wave equations accommodate solutions of convergent waves as well as radiative ones.
This arrow has been reversed in carefully worked experiments that created convergent waves, so this arrow probably follows from the thermodynamic arrow, in that meeting the conditions to produce a convergent wave requires more order than the conditions for a radiative wave. Put differently, the probability for initial conditions that produce a convergent wave is much lower than the probability for initial conditions that produce a radiative wave. In fact, normally a radiative wave increases entropy, while a convergent wave decreases it, making the latter contradictory to the second law of thermodynamics in usual circumstances.

Causal arrow of time
A cause precedes its effect: the causal event occurs before the event it causes or affects. Birth, for example, follows a successful conception and not vice versa. Thus causality is intimately bound up with time's arrow.

An epistemological problem with using causality as an arrow of time is that, as David Hume maintained, the causal relation per se cannot be perceived; one only perceives sequences of events. Furthermore, it is surprisingly difficult to provide a clear explanation of what the terms cause and effect really mean, or to define the events to which they refer. However, it does seem evident that dropping a cup of water is a cause while the cup subsequently shattering and spilling the water is the effect.

Physically speaking, correlations between a system and its surroundings are thought to increase with entropy, and have been shown to be equivalent to it in a simplified case of a finite system interacting with the environment. The assumption of low initial entropy is indeed equivalent to assuming no initial correlations in the system; thus correlations can only be created as we move forward in time, not backwards. Controlling the future, or causing something to happen, creates correlations between the doer and the effect, and therefore the relation between cause and effect is a result of the thermodynamic arrow of time, a consequence of the second law of thermodynamics. Indeed, in the above example of the cup dropping, the initial conditions have high order and low entropy, while the final state has high correlations between relatively distant parts of the system: the shattered pieces of the cup, the spilled water, and the object that caused the cup to drop.

Quantum arrow of time
Quantum evolution is governed by equations of motion that are time-symmetric (such as the Schrödinger equation in the non-relativistic approximation), and by wave function collapse, which is a time-irreversible process and is either real (by the Copenhagen interpretation of quantum mechanics) or apparent only (by the many-worlds interpretation and the relational quantum mechanics interpretation). The theory of quantum decoherence explains why wave function collapse happens in a time-asymmetric fashion due to the second law of thermodynamics, thus deriving the quantum arrow of time from the thermodynamic arrow of time. In essence, following any particle scattering or interaction between two larger systems, the relative phases of the two systems are at first orderly related, but subsequent interactions (with additional particles or systems) make them less so, so that the two systems become decoherent. Thus decoherence is a form of increase in microscopic disorder; in short, decoherence increases entropy.
Two decoherent systems can no longer interact via quantum superposition, unless they become coherent again, which is normally impossible, by the second law of thermodynamics. In the language of relational quantum mechanics, the observer becomes entangled with the measured state, and this entanglement increases entropy. As stated by Seth Lloyd, "the arrow of time is an arrow of increasing correlations".

However, under special circumstances, one can prepare initial conditions that will cause a decrease in decoherence and in entropy. This was shown experimentally in 2019, when a team of Russian scientists reported the reversal of the quantum arrow of time on an IBM quantum computer, in an experiment supporting the understanding of the quantum arrow of time as emerging from the thermodynamic one. By observing the state of the quantum computer, made of two and later three superconducting qubits, they found that in 85% of the cases the two-qubit computer returned to the initial state. The state's reversal was made by a special program, similar to the random microwave background fluctuation in the case of the electron. However, according to the estimates, throughout the age of the universe (13.7 billion years) such a reversal of the electron's state would only happen once, for 0.06 nanoseconds. The scientists' experiment led to the possibility of a quantum algorithm that reverses a given quantum state through complex conjugation of the state.

Note that quantum decoherence merely allows the process of quantum wave collapse; it is a matter of dispute whether the collapse itself actually takes place or is redundant and apparent only. However, since the theory of quantum decoherence is now widely accepted and has been supported experimentally, this dispute can no longer be considered as related to the arrow of time question.

Particle physics (weak) arrow of time
Certain subatomic interactions involving the weak nuclear force violate the conservation of both parity and charge conjugation, but only very rarely. An example is the kaon decay. According to the CPT theorem, this means they should also be time-irreversible, and so establish an arrow of time. Such processes should be responsible for matter creation in the early universe. That the combination of parity and charge conjugation is broken so rarely means that this arrow only "barely" points in one direction, setting it apart from the other arrows whose direction is much more obvious. This arrow had not been linked to any large-scale temporal behaviour until the work of Joan Vaccaro, who showed that T violation could be responsible for conservation laws and dynamics.
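The statistical character of the thermodynamic arrow discussed above lends itself to a small demonstration. The following sketch is purely illustrative (an Ehrenfest-style two-urn model, not drawn from any of the works cited here): particles are moved one at a time between two boxes, each elementary move being exactly as probable forwards as backwards, yet an ordered start relaxes in practice toward an even split and stays there.

```python
import random

def ehrenfest(n=100, steps=2000, seed=1):
    """Ehrenfest urn model: at each step pick one of n particles
    uniformly at random and move it to the other box. The dynamics
    are microscopically reversible, but the occupancy drifts from
    an ordered start (all particles in the left box) toward n/2."""
    random.seed(seed)
    left = n                       # low-entropy initial condition
    history = [left]
    for _ in range(steps):
        if random.random() < left / n:
            left -= 1              # chosen particle was in the left box
        else:
            left += 1              # chosen particle was in the right box
        history.append(left)
    return history

h = ehrenfest()
print(h[0], h[-1])  # 100 -> roughly 50; the ordered state never recurs in practice
```

By the Poincaré recurrence theorem the ordered state does recur eventually, but for realistic particle numbers the recurrence time is astronomically long; the asymmetry visible in the simulation comes entirely from the special low-entropy initial condition, echoing Carroll's point about our proximity to the Big Bang.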
Tropical year
A tropical year or solar year (or tropical period) is the time that the Sun takes to return to the same position in the sky, as viewed from the Earth or another celestial body of the Solar System, thus completing a full cycle of astronomical seasons. For example, it is the time from vernal equinox to the next vernal equinox, or from summer solstice to the next summer solstice. It is the type of year used by tropical solar calendars. The tropical year is one type of astronomical year and particular orbital period. Another type is the sidereal year (or sidereal orbital period), which is the time it takes Earth to complete one full orbit around the Sun as measured with respect to the fixed stars; it lasts about 20 minutes longer than the tropical year, because of the precession of the equinoxes.

Since antiquity, astronomers have progressively refined the definition of the tropical year. The entry for "year, tropical" in the Astronomical Almanac Online Glossary gives one modern definition; an equivalent, more descriptive definition is: "The natural basis for computing passing tropical years is the mean longitude of the Sun reckoned from the precessionally moving equinox (the dynamical equinox or equinox of date). Whenever the longitude reaches a multiple of 360 degrees the mean Sun crosses the vernal equinox and a new tropical year begins".

The mean tropical year in 2000 was 365.24219 ephemeris days, each ephemeris day lasting 86,400 SI seconds. This is 365.24217 mean solar days. For this reason, the calendar year is an approximation of the solar year: the Gregorian calendar (with its rules for catch-up leap days) is designed so as to resynchronise the calendar year with the solar year at regular intervals.

History
Origin
The word "tropical" comes from the Greek tropikos meaning "turn". Thus, the tropics of Cancer and Capricorn mark the extreme north and south latitudes where the Sun can appear directly overhead, and where it appears to "turn" in its annual seasonal motion. Because of this connection between the tropics and the seasonal cycle of the apparent position of the Sun, the word "tropical" was lent to the period of the seasonal cycle. The early Chinese, Hindus, Greeks, and others made approximate measures of the tropical year.

Early value, precession discovery
In the 2nd century BC Hipparchus measured the time required for the Sun to travel from an equinox to the same equinox again. He reckoned the length of the year to be 1/300 of a day less than 365.25 days (365 days, 5 hours, 55 minutes, 12 seconds, or 365.24667 days). Hipparchus used this method because he was better able to detect the time of the equinoxes, compared to that of the solstices.

Hipparchus also discovered that the equinoctial points moved along the ecliptic (the plane of the Earth's orbit, or what Hipparchus would have thought of as the plane of the Sun's orbit about the Earth) in a direction opposite that of the movement of the Sun, a phenomenon that came to be named "precession of the equinoxes". He reckoned the value as 1° per century, a value that was not improved upon until about 1000 years later, by Islamic astronomers. Since this discovery a distinction has been made between the tropical year and the sidereal year.

Middle Ages and the Renaissance
During the Middle Ages and Renaissance a number of progressively better tables were published that allowed computation of the positions of the Sun, Moon and planets relative to the fixed stars. An important application of these tables was the reform of the calendar.
The Alfonsine Tables, published in 1252, were based on the theories of Ptolemy and were revised and updated after the original publication. The length of the tropical year was given as 365 solar days 5 hours 49 minutes 16 seconds (≈ 365.24255 days). This length was used in devising the Gregorian calendar of 1582.

In Uzbekistan, Ulugh Beg's Zij-i Sultani was published in 1437 and gave an estimate of 365 solar days 5 hours 49 minutes 15 seconds (365.242535 days).

In the 16th century Copernicus put forward a heliocentric cosmology. Erasmus Reinhold used Copernicus' theory to compute the Prutenic Tables in 1551, and gave a tropical year length of 365 solar days, 5 hours, 55 minutes, 58 seconds (365.24720 days), based on the length of a sidereal year and the presumed rate of precession. This was actually less accurate than the earlier value of the Alfonsine Tables.

Major advances in the 17th century were made by Johannes Kepler and Isaac Newton. In 1609 and 1619 Kepler published his three laws of planetary motion. In 1627, Kepler used the observations of Tycho Brahe and Waltherus to produce the most accurate tables up to that time, the Rudolphine Tables. He evaluated the mean tropical year as 365 solar days, 5 hours, 48 minutes, 45 seconds (365.24219 days). Newton's three laws of dynamics and theory of gravity were published in his Philosophiæ Naturalis Principia Mathematica in 1687. Newton's theoretical and mathematical advances influenced tables by Edmond Halley published in 1693 and 1749, and provided the underpinnings of all solar system models until Albert Einstein's theory of general relativity in the 20th century.

18th and 19th century
From the time of Hipparchus and Ptolemy, the year was based on two equinoxes (or two solstices) a number of years apart, to average out both observational errors and periodic variations (caused by the gravitational pull of the planets, and the small effect of nutation on the equinox). These effects did not begin to be understood until Newton's time. To model short-term variations of the time between equinoxes (and prevent them from confounding efforts to measure long-term variations) requires precise observations and an elaborate theory of the apparent motion of the Sun. The necessary theories and mathematical tools came together in the 18th century due to the work of Pierre-Simon de Laplace, Joseph Louis Lagrange, and other specialists in celestial mechanics. They were able to compute periodic variations and separate them from the gradual mean motion. They could express the mean longitude of the Sun in a polynomial such as

L₀ = A₀ + A₁T + A₂T²,

where T is the time in Julian centuries. The derivative of this formula is an expression of the mean angular velocity, and the inverse of this gives an expression for the length of the tropical year as a linear function of T (a numerical sketch of this procedure appears at the end of this section). Two equations are given in the table. Both equations estimate that the tropical year gets roughly a half second shorter each century.

Newcomb's tables were sufficiently accurate that they were used by the joint American-British Astronomical Almanac for the Sun, Mercury, Venus, and Mars through 1983.
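To make the differentiation step above concrete, here is a minimal sketch in Python. The coefficients are approximate, Newcomb-style stand-ins quoted only for illustration (the constant term A₀ plays no role in the derivative); an authoritative computation would take them from a published ephemeris.

```python
# Sketch: length of the mean tropical year from a mean-longitude polynomial
# L0(T) = A0 + A1*T + A2*T**2, in degrees, with T in Julian centuries
# (36,525 days). Coefficients below are illustrative approximations.

A1 = 36000.76892      # deg per Julian century: the Sun's mean motion
A2 = 0.0003025        # deg per century squared

def tropical_year_days(T):
    """Differentiate L0 to get the mean angular speed, then invert:
    the tropical year is the time needed to cover 360 degrees."""
    speed = A1 + 2 * A2 * T            # dL0/dT, degrees per century
    years_per_century = speed / 360.0
    return 36525.0 / years_per_century # length of the year in days

print(round(tropical_year_days(0), 5))  # ~365.24220 at the epoch
print(round(tropical_year_days(1), 5))  # slightly shorter a century later
```

With these stand-in coefficients the quadratic term shortens the year by roughly half a second per century, consistent with the figures quoted above.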
20th and 21st centuries
The length of the mean tropical year is derived from a model of the Solar System, so any advance that improves the solar system model potentially improves the accuracy of the mean tropical year. Many new observing instruments became available, including:
artificial satellites
tracking of deep space probes such as Pioneer 4, beginning in 1959
radars able to measure the distance to other planets, beginning in 1961
lunar laser ranging since 1969, when Apollo 11 left the first of a series of retroreflectors which allow greater accuracy than reflectorless measurements
artificial satellites such as LAGEOS (1976) and the Global Positioning System (initial operation in 1993)
very long baseline interferometry, which finds precise directions to quasars in distant galaxies and allows determination of the Earth's orientation with respect to these objects, whose distance is so great that they can be considered to show minimal space motion

The complexity of the model used for the Solar System must be limited to the available computation facilities. In the 1920s punched card equipment came into use by L. J. Comrie in Britain. For the American Ephemeris an electromagnetic computer, the IBM Selective Sequence Electronic Calculator, was used from 1948. When modern computers became available, it was possible to compute ephemerides using numerical integration rather than general theories; numerical integration came into use in 1984 for the joint US-UK almanacs.

Albert Einstein's general theory of relativity provided a more accurate theory, but the accuracy of theories and observations did not require the refinement provided by this theory (except for the advance of the perihelion of Mercury) until 1984. Time scales incorporated general relativity beginning in the 1970s.

A key development in understanding the tropical year over long periods of time is the discovery that the rate of rotation of the Earth, or equivalently, the length of the mean solar day, is not constant. William Ferrel in 1864 and Charles-Eugène Delaunay in 1865 predicted that the rotation of the Earth is being retarded by tides. This could be verified by observation only in the 1920s, with the very accurate Shortt-Synchronome clock, and later in the 1930s, when quartz clocks began to replace pendulum clocks as time standards.

Time scales and calendar
Apparent solar time is the time indicated by a sundial, and is determined by the apparent motion of the Sun caused by the rotation of the Earth around its axis as well as the revolution of the Earth around the Sun. Mean solar time is corrected for the periodic variations in the apparent velocity of the Sun as the Earth revolves in its orbit. The most important such time scale is Universal Time, which is the mean solar time at 0 degrees longitude (the IERS Reference Meridian). Civil time is based on UT (actually UTC), and civil calendars count mean solar days.

However, the rotation of the Earth itself is irregular and is slowing down, with respect to more stable time indicators: specifically, the motion of planets, and atomic clocks.

Ephemeris time (ET) is the independent variable in the equations of motion of the Solar System, in particular the equations from Newcomb's work, and this ET was in use from 1960 to 1984. These ephemerides were based on observations made in solar time over a period of several centuries, and as a consequence represent the mean solar second over that period. The SI second, defined in atomic time, was intended to agree with the ephemeris second based on Newcomb's work, which in turn makes it agree with the mean solar second of the mid-19th century.
ET as counted by atomic clocks was given a new name, Terrestrial Time (TT), and for most purposes ET = TT = International Atomic Time + 32.184 SI seconds. Since the era of the observations, the rotation of the Earth has slowed down and the mean solar second has grown somewhat longer than the SI second. As a result, the time scales of TT and UT1 build up a growing difference: the amount that TT is ahead of UT1 is known as ΔT, or Delta T. Currently, TT is ahead of UT1 by 69.28 seconds.

As a consequence, the tropical year following the seasons on Earth as counted in solar days of UT is increasingly out of sync with expressions for equinoxes in ephemerides in TT.

As explained below, long-term estimates of the length of the tropical year were used in connection with the reform of the Julian calendar, which resulted in the Gregorian calendar. Participants in that reform were unaware of the non-uniform rotation of the Earth, but now this can be taken into account to some degree. The table below gives Morrison and Stephenson's estimates and standard errors (σ) for ΔT at dates significant in the process of developing the Gregorian calendar. The low-precision extrapolations are computed with an expression provided by Morrison and Stephenson:

ΔT = −20 + 32t² seconds,

where t is measured in Julian centuries from 1820. The extrapolation is provided only to show that ΔT is not negligible when evaluating the calendar for long periods; Borkowski cautions that "many researchers have attempted to fit a parabola to the measured ΔT values in order to determine the magnitude of the deceleration of the Earth's rotation. The results, when taken together, are rather discouraging."

Length of tropical year
One definition of the tropical year would be the time required for the Sun, beginning at a chosen ecliptic longitude, to make one complete cycle of the seasons and return to the same ecliptic longitude.

Mean time interval between equinoxes
Before considering an example, the equinox must be examined. There are two important planes in solar system calculations: the plane of the ecliptic (the Earth's orbit around the Sun), and the plane of the celestial equator (the Earth's equator projected into space). These two planes intersect in a line. One direction points to the so-called vernal, northward, or March equinox, which is given the symbol ♈ (the symbol looks like the horns of a ram because it used to be toward the constellation Aries). The opposite direction is given the symbol ♎ (because it used to be toward Libra). Because of the precession of the equinoxes and nutation these directions change, compared to the direction of distant stars and galaxies, whose directions have no measurable motion due to their great distance (see International Celestial Reference Frame).

The ecliptic longitude of the Sun is the angle between ♈ and the Sun, measured eastward along the ecliptic. This creates a relative and not an absolute measurement, because as the Sun is moving, the direction the angle is measured from is also moving. It is convenient to have a fixed (with respect to distant stars) direction to measure from; the direction of ♈ at noon January 1, 2000 fills this role and is given the symbol ♈₀.

There was an equinox on March 20, 2009, 11:44:43.6 TT. The 2010 March equinox was March 20, 17:33:18.1 TT, which gives an interval, and a duration of the tropical year, of 365 days 5 hours 48 minutes 34.5 seconds. While the Sun moves, ♈ moves in the opposite direction.
When the Sun and ♈ met at the 2010 March equinox, the Sun had moved east 359°59'09" while ♈ had moved west 51" for a total of 360° (all with respect to ♈₀). This is why the tropical year is 20 min. shorter than the sidereal year.

When tropical year measurements from several successive years are compared, variations are found which are due to the perturbations by the Moon and planets acting on the Earth, and to nutation. Meeus and Savoie provided the following examples of intervals between March (northward) equinoxes:

Until the beginning of the 19th century, the length of the tropical year was found by comparing equinox dates that were separated by many years; this approach yielded the mean tropical year.

Different tropical year definitions
If a different starting longitude for the Sun is chosen than 0° (i.e. ♈), then the duration for the Sun to return to the same longitude will be different. This is a second-order effect of the circumstance that the speed of the Earth (and conversely the apparent speed of the Sun) varies in its elliptical orbit: faster in the perihelion, slower in the aphelion. The equinox moves with respect to the perihelion (and both move with respect to the fixed sidereal frame). From one equinox passage to the next, or from one solstice passage to the next, the Sun completes not quite a full elliptic orbit. The time saved depends on where it starts in the orbit. If the starting point is close to the perihelion (such as the December solstice), then the speed is higher than average, and the apparent Sun saves little time for not having to cover a full circle: the "tropical year" is comparatively long. If the starting point is near aphelion, then the speed is lower and the time saved for not having to run the same small arc that the equinox has precessed is longer: that tropical year is comparatively short.

The "mean tropical year" is based on the mean sun, and is not exactly equal to any of the times taken to go from an equinox to the next or from a solstice to the next.

The following values of time intervals between equinoxes and solstices were provided by Meeus and Savoie for the years 0 and 2000. These are smoothed values which take account of the Earth's orbit being elliptical, using well-known procedures (including solving Kepler's equation). They do not take into account periodic variations due to factors such as the gravitational force of the orbiting Moon and gravitational forces from the other planets. Such perturbations are minor compared to the positional difference resulting from the orbit being elliptical rather than circular.

Mean tropical year current value
The mean tropical year on January 1, 2000, was 365.24219 ephemeris days, or 365 ephemeris days, 5 hours, 48 minutes, 45.19 seconds. This changes slowly; an expression suitable for calculating the length of a tropical year in ephemeris days, between 8000 BC and 12000 AD, is Laskar's expression

365.242 189 6698 − 6.153 59 × 10⁻⁶ T − 7.29 × 10⁻¹⁰ T² + 2.64 × 10⁻¹⁰ T³,

where T is in Julian centuries of 36,525 days of 86,400 SI seconds measured from noon January 1, 2000 TT.

Modern astronomers define the tropical year as the time for the Sun's mean longitude to increase by 360°. The process for finding an expression for the length of the tropical year is to first find an expression for the Sun's mean longitude (with respect to ♈), such as Newcomb's expression given above, or Laskar's expression. When viewed over a one-year period, the mean longitude is very nearly a linear function of Terrestrial Time.
To find the length of the tropical year, the mean longitude is differentiated, to give the angular speed of the Sun as a function of Terrestrial Time, and this angular speed is used to compute how long it would take for the Sun to move 360°. The above formulae give the length of the tropical year in ephemeris days (equal to 86,400 SI seconds), not solar days. It is the number of solar days in a tropical year that is important for keeping the calendar in sync with the seasons (see below).

Calendar year
The Gregorian calendar, as used for civil and scientific purposes, is an international standard. It is a solar calendar that is designed to maintain synchrony with the mean tropical year. It has a cycle of 400 years (146,097 days). Each cycle repeats the months, dates, and weekdays. The average year length is 146,097/400 = 365.2425 days per year, a close approximation to the mean tropical year of 365.2422 days (a sketch of this arithmetic appears at the end of this section).

The Gregorian calendar is a reformed version of the Julian calendar, organized by the Catholic Church and enacted in 1582. By the time of the reform, the date of the vernal equinox had shifted about 10 days, from about March 21 at the time of the First Council of Nicaea in 325, to about March 11. The motivation for the change was the correct observance of Easter. The rules used to compute the date of Easter used a conventional date for the vernal equinox (March 21), and it was considered important to keep March 21 close to the actual equinox.

If society in the future still attaches importance to the synchronization between the civil calendar and the seasons, another reform of the calendar will eventually be necessary. According to Blackburn and Holford-Strevens (who used Newcomb's value for the tropical year), if the tropical year remained at its 1900 value of about 365.2422 days, the Gregorian calendar would be 3 days, 17 min, 33 s behind the Sun after 10,000 years. Aggravating this error, the length of the tropical year (measured in Terrestrial Time) is decreasing at a rate of approximately 0.53 s per century, and the mean solar day is getting longer at a rate of about 1.5 ms per century. These effects will cause the calendar to be nearly a day behind in 3200. The number of solar days in a "tropical millennium" is decreasing by about 0.06 per millennium (neglecting the oscillatory changes in the real length of the tropical year). This means there should be fewer and fewer leap days as time goes on. A possible reform could omit the leap day in 3200, keep 3600 and 4000 as leap years, and thereafter make all centennial years common except 4500, 5000, 5500, 6000, etc., but the quantity ΔT is not sufficiently predictable to form more precise proposals.
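The 400-year arithmetic above is easy to verify. Below is a minimal sketch of the Gregorian leap-year rule; the function name is ours, but the rule itself (every fourth year, except centurial years not divisible by 400) is the standard one.

```python
def is_gregorian_leap(year: int) -> bool:
    """Gregorian rule: a leap year every 4 years, except centurial
    years, which are leap only when divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# One full 400-year cycle contains 97 leap days:
days = sum(366 if is_gregorian_leap(y) else 365 for y in range(2000, 2400))
print(days, days / 400)   # 146097 365.2425 -- the mean Gregorian year
```

The residual error of 365.2425 − 365.2422 ≈ 0.0003 days per year, roughly 26 seconds, is what accumulates into the multi-day drift over ten millennia discussed above.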
Coordinated Universal Time
Coordinated Universal Time (UTC) is the primary time standard used globally to regulate clocks and time. It establishes a reference for the current time, forming the basis for civil time and time zones. UTC facilitates international communication, navigation, scientific research, and commerce.

UTC has been widely embraced by most countries and is the effective successor to Greenwich Mean Time (GMT) in everyday usage and common applications. In specialized domains such as scientific research, navigation, and timekeeping, other standards such as UT1 and International Atomic Time (TAI) are also used alongside UTC. UTC is based on TAI, which is a weighted average of hundreds of atomic clocks worldwide. UTC is within about one second of mean solar time at 0° longitude, the currently used prime meridian, and is not adjusted for daylight saving time.

The coordination of time and frequency transmissions around the world began on 1 January 1960. UTC was first officially adopted as a standard in 1963, and "UTC" became the official abbreviation of Coordinated Universal Time in 1967. The current version of UTC is defined by the International Telecommunication Union. Since adoption, UTC has been adjusted several times, notably by the addition of leap seconds starting in 1972. Recent years have seen significant developments in the realm of UTC, particularly in discussions about eliminating leap seconds from the timekeeping system, because leap seconds occasionally disrupt timekeeping systems worldwide. The General Conference on Weights and Measures adopted a resolution to alter UTC with a new system that would eliminate leap seconds by 2035.

Etymology
The official abbreviation for Coordinated Universal Time is UTC. This abbreviation comes as a result of the International Telecommunication Union and the International Astronomical Union wanting to use the same abbreviation in all languages. The compromise that emerged was UTC, which conforms to the pattern for the abbreviations of the variants of Universal Time (UT0, UT1, UT2, UT1R, etc.). McCarthy described the origin of the abbreviation: in 1967 the CCIR adopted the names Coordinated Universal Time and Temps Universel Coordonné for the English and French names, with the acronym UTC to be used in both languages. The name "Coordinated Universal Time (UTC)" was approved by a resolution of IAU Commissions 4 and 31 at the 13th General Assembly in 1967 (Trans. IAU, 1968).

Uses
Time zones around the world are expressed using positive, zero, or negative offsets from UTC, as in the list of time zones by UTC offset. The westernmost time zone uses UTC−12, being twelve hours behind UTC; the easternmost time zone uses UTC+14, being fourteen hours ahead of UTC. In 1995, the island nation of Kiribati moved those of its atolls in the Line Islands from UTC−10 to UTC+14 so that Kiribati would all be on the same day.

UTC is used in many Internet and World Wide Web standards. The Network Time Protocol (NTP), designed to synchronise the clocks of computers over the Internet, transmits time information from the UTC system. If only millisecond precision is needed, clients can obtain the current UTC from a number of official internet UTC servers. For sub-microsecond precision, clients can obtain the time from satellite signals.

UTC is also the time standard used in aviation, e.g. for flight plans and air traffic control. In this context it is frequently referred to as Zulu time, as described below.
Weather forecasts and maps all use UTC to avoid confusion about time zones and daylight saving time. The International Space Station also uses UTC as a time standard. Amateur radio operators often schedule their radio contacts in UTC, because transmissions on some frequencies can be picked up in many time zones.

Mechanism
UTC divides time into days, hours, minutes, and seconds. Days are conventionally identified using the Gregorian calendar, but Julian day numbers can also be used. Each day contains 24 hours and each hour contains 60 minutes. The number of seconds in a minute is usually 60, but with an occasional leap second, it may be 61 or 59 instead. Thus, in the UTC time scale, the second and all smaller time units (millisecond, microsecond, etc.) are of constant duration, but the minute and all larger time units (hour, day, week, etc.) are of variable duration. Decisions to introduce a leap second are announced at least six months in advance in "Bulletin C", produced by the International Earth Rotation and Reference Systems Service. Leap seconds cannot be predicted far in advance owing to the unpredictable rate of the rotation of the Earth.

Nearly all UTC days contain exactly 86,400 SI seconds, with exactly 60 seconds in each minute. UTC is within about one second of mean solar time (such as UT1) at 0° longitude (at the IERS Reference Meridian). The mean solar day is slightly longer than 86,400 SI seconds, so occasionally the last minute of a UTC day is adjusted to have 61 seconds. The extra second is called a leap second. It accounts for the grand total of the extra length (about 2 milliseconds each) of all the mean solar days since the previous leap second. The last minute of a UTC day is permitted to contain 59 seconds to cover the remote possibility of the Earth rotating faster, but that has not yet been necessary. The irregular day lengths mean that fractional Julian days do not work properly with UTC.

Since 1972, UTC may be calculated by subtracting the accumulated leap seconds from International Atomic Time (TAI), which is a coordinate time scale tracking notional proper time on the rotating surface of the Earth (the geoid). In order to maintain a close approximation to UT1, UTC occasionally has discontinuities where it changes from one linear function of TAI to another. These discontinuities take the form of leap seconds implemented by a UTC day of irregular length. Discontinuities in UTC have occurred only at the end of June or December, although there is provision for them to happen at the end of March and September as well, as a second preference. The International Earth Rotation and Reference Systems Service (IERS) tracks and publishes the difference between UTC and Universal Time, DUT1 = UT1 − UTC, and introduces discontinuities into UTC to keep DUT1 in the interval (−0.9 s, +0.9 s).

As with TAI, UTC is only known with the highest precision in retrospect. Users who require an approximation in real time must obtain it from a time laboratory, which disseminates an approximation using techniques such as GPS or radio time signals. Such approximations are designated UTC(k), where k is an abbreviation for the time laboratory. The time of events may be provisionally recorded against one of these approximations; later corrections may be applied using the International Bureau of Weights and Measures (BIPM) monthly publication of tables of differences between canonical TAI/UTC and TAI(k)/UTC(k) as estimated in real time by participating laboratories.
(See the article on International Atomic Time for details.)

Because of time dilation, a standard clock not on the geoid, or in rapid motion, will not maintain synchronicity with UTC. Therefore, telemetry from clocks with a known relation to the geoid is used to provide UTC when required, in locations such as those of spacecraft.

It is impossible to compute the exact time interval elapsed between two UTC timestamps without consulting a table showing how many leap seconds occurred during that interval. By extension, it is not possible to compute the precise duration of a time interval that ends in the future and may encompass an unknown number of leap seconds (for example, the number of TAI seconds between "now" and 2099-12-31 23:59:59). Therefore, many scientific applications that require precise measurement of long (multi-year) intervals use TAI instead. TAI is also commonly used by systems that cannot handle leap seconds. GPS time always remains exactly 19 seconds behind TAI (neither system is affected by the leap seconds introduced in UTC).

Time zones
Time zones are usually defined as differing from UTC by an integer number of hours, although the laws of each jurisdiction would have to be consulted if sub-second accuracy was required. Several jurisdictions have established time zones that differ by an odd integer number of half-hours or quarter-hours from UT1 or UTC. Current civil time in a particular time zone can be determined by adding or subtracting the number of hours and minutes specified by the UTC offset, which ranges from UTC−12:00 in the west to UTC+14:00 in the east (see List of UTC offsets).

The time zone using UTC is sometimes denoted UTC+00:00 or by the letter Z, a reference to the equivalent nautical time zone (GMT), which has been denoted by a Z since about 1950. Time zones were identified by successive letters of the alphabet, and the Greenwich time zone was marked by a Z as it was the point of origin. The letter also refers to the "zone description" of zero hours, which has been used since 1920 (see time zone history). Since the NATO phonetic alphabet word for Z is "Zulu", UTC is sometimes known as "Zulu time". This is especially true in aviation, where "Zulu" is the universal standard. This ensures that all pilots, regardless of location, are using the same 24-hour clock, thus avoiding confusion when flying between time zones. See the list of military time zones for letters used in addition to Z in qualifying time zones other than Greenwich.

On electronic devices which only allow the time zone to be configured using maps or city names, UTC can be selected indirectly by selecting cities such as Accra in Ghana or Reykjavík in Iceland, as they are always on UTC and do not currently use daylight saving time (which Greenwich and London do, and so could be a source of error).

Daylight saving time
UTC does not change with a change of seasons, but local time or civil time may change if a time zone jurisdiction observes daylight saving time (summer time). For example, local time on the east coast of the United States is five hours behind UTC during winter, but four hours behind while daylight saving is observed there.

History
In 1928, the term Universal Time (UT) was introduced by the International Astronomical Union to refer to GMT, with the day starting at midnight. Until the 1950s, broadcast time signals were based on UT, and hence on the rotation of the Earth. In 1955, the caesium atomic clock was invented.
History

In 1928, the term Universal Time (UT) was introduced by the International Astronomical Union to refer to GMT, with the day starting at midnight. Until the 1950s, broadcast time signals were based on UT, and hence on the rotation of the Earth. In 1955, the caesium atomic clock was invented. This provided a form of timekeeping that was both more stable and more convenient than astronomical observations. In 1956, the U.S. National Bureau of Standards and U.S. Naval Observatory started to develop atomic frequency time scales; by 1959, these time scales were used in generating the WWV time signals, named for the shortwave radio station that broadcasts them. In 1960, the U.S. Naval Observatory, the Royal Greenwich Observatory, and the UK National Physical Laboratory coordinated their radio broadcasts so that time steps and frequency changes were coordinated, and the resulting time scale was informally referred to as "Coordinated Universal Time".

In a controversial decision, the frequency of the signals was initially set to match the rate of UT, but then kept at the same frequency by the use of atomic clocks and deliberately allowed to drift away from UT. When the divergence grew significantly, the signal was phase shifted (stepped) by 20 ms to bring it back into agreement with UT. Twenty-nine such steps were used before 1960.

In 1958, data was published linking the frequency for the caesium transition, newly established, with the ephemeris second. The ephemeris second is a unit in the system of time that, when used as the independent variable in the laws of motion that govern the movement of the planets and moons in the solar system, enables the laws of motion to accurately predict the observed positions of solar system bodies. Within the limits of observable accuracy, ephemeris seconds are of constant length, as are atomic seconds. This publication allowed a value to be chosen for the length of the atomic second that would accord with the celestial laws of motion.

The coordination of time and frequency transmissions around the world began on 1 January 1960. UTC was first officially adopted in 1963 as CCIR Recommendation 374, Standard-Frequency and Time-Signal Emissions, and "UTC" became the official abbreviation of Coordinated Universal Time in 1967. In 1961, the Bureau International de l'Heure began coordinating the UTC process internationally (but the name Coordinated Universal Time was not formally adopted by the International Astronomical Union until 1967). From then on, there were time steps every few months, and frequency changes at the end of each year. The jumps increased in size to 0.1 seconds. This UTC was intended to permit a very close approximation to UT2.

In 1967, the SI second was redefined in terms of the frequency supplied by a caesium atomic clock. The length of second so defined was practically equal to the second of ephemeris time. This was the frequency that had been provisionally used in TAI since 1958. It was soon decided that having two types of second with different lengths, namely the UTC second and the SI second used in TAI, was a bad idea. It was thought better for time signals to maintain a consistent frequency, and that this frequency should match the SI second. Thus it would be necessary to rely on time steps alone to maintain the approximation of UT. This was tried experimentally in a service known as "Stepped Atomic Time" (SAT), which ticked at the same rate as TAI and used jumps of 0.2 seconds to stay synchronised with UT2.

There was also dissatisfaction with the frequent jumps in UTC (and SAT). In 1968, Louis Essen, the inventor of the caesium atomic clock, and G. M. R. Winkler both independently proposed that steps should be of 1 second only, to simplify future adjustments.
This system was eventually approved as leap seconds in a new UTC in 1970 and implemented in 1972, along with the idea of maintaining the UTC second equal to the TAI second. CCIR Recommendation 460 stated that "(a) carrier frequencies and time intervals should be maintained constant and should correspond to the definition of the SI second; (b) step adjustments, when necessary, should be exactly 1 s to maintain approximate agreement with Universal Time (UT); and (c) standard signals should contain information on the difference between UTC and UT."

As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that TAI − UTC was exactly ten seconds at the start of 1972, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2. Some time signals started to broadcast the DUT1 correction (UT1 − UTC) for applications requiring a closer approximation of UT1 than UTC now provided.

The current version of UTC is defined by International Telecommunication Union Recommendation (ITU-R TF.460-6), Standard-frequency and time-signal emissions, and is based on International Atomic Time (TAI) with leap seconds added at irregular intervals to compensate for the accumulated difference between TAI and time measured by Earth's rotation. Leap seconds are inserted as necessary to keep UTC within 0.9 seconds of the UT1 variant of universal time. See the "Current number of leap seconds" section for the number of leap seconds inserted to date.

Current number of leap seconds

The first leap second occurred on 30 June 1972. Since then, leap seconds have occurred on average about once every 19 months, always on 30 June or 31 December. To date, there have been 27 leap seconds in total, all positive, putting UTC 37 seconds behind TAI. A study published in March 2024 in Nature concluded that accelerated melting of ice in Greenland and Antarctica due to climate change has decreased Earth's rotational velocity, affecting UTC adjustments and causing problems for computer networks that rely on UTC.

Rationale

Earth's rotational speed is very slowly decreasing because of tidal deceleration; this increases the length of the mean solar day. The length of the SI second was calibrated on the basis of the second of ephemeris time and can now be seen to have a relationship with the mean solar day observed between 1750 and 1892, analysed by Simon Newcomb. As a result, the SI second is close to 1/86,400 of a mean solar day in the mid‑19th century. In earlier centuries, the mean solar day was shorter than 86,400 SI seconds, and in more recent centuries it is longer than 86,400 seconds. Near the end of the 20th century, the length of the mean solar day (also known simply as "length of day" or "LOD") was approximately 86,400.0013 s. For this reason, UT is now "slower" than TAI by the difference (or "excess" LOD) of 1.3 ms/day.

The excess of the LOD over the nominal 86,400 s accumulates over time, causing the UTC day, initially synchronised with the mean sun, to become desynchronised and run ahead of it. Near the end of the 20th century, with the LOD at 1.3 ms above the nominal value, UTC ran faster than UT by 1.3 ms per day, getting a second ahead roughly every 800 days. Thus, leap seconds were inserted at approximately this interval, retarding UTC to keep it synchronised in the long term.
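The 800-day figure is simple arithmetic on the numbers just quoted, as a quick check shows:

```python
# Days for an excess length-of-day (LOD) to accumulate to one full second.
def days_per_leap_second(excess_lod_ms: float) -> float:
    return 1000.0 / excess_lod_ms

print(days_per_leap_second(1.3))  # ~769 days: the "roughly every 800 days" above
print(days_per_leap_second(4.0))  # ~250 days: the projected end-of-21st-century rate
```

The second call anticipates the projection discussed under "Future" below.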
The actual rotational period varies due to unpredictable factors such as tectonic motion and has to be observed, rather than computed. Just as adding a leap day every four years does not mean the year is getting longer by one day every four years, the insertion of a leap second every 800 days does not indicate that the mean solar day is getting longer by a second every 800 days. It will take about 50,000 years for a mean solar day to lengthen by one second (at a rate of 2 ms per century). This rate fluctuates within the range of 1.7–2.3 ms/cy. While the rate due to tidal friction alone is about 2.3 ms/cy, the uplift of Canada and Scandinavia by several metres since the last ice age has temporarily reduced this to 1.7 ms/cy over the last 2,700 years. The correct reason for leap seconds, then, is not the current difference between actual and nominal LOD, but rather the accumulation of this difference over a period of time: near the end of the 20th century, this difference was about 1/800 of a second per day; therefore, after about 800 days, it accumulated to 1 second (and a leap second was then added).

In the graph of DUT1, the excess of LOD above the nominal 86,400 s corresponds to the downward slope of the graph between vertical segments. (The slope became shallower in the 1980s, 2000s and late 2010s to 2020s because of slight accelerations of Earth's rotation temporarily shortening the day.) Vertical position on the graph corresponds to the accumulation of this difference over time, and the vertical segments correspond to leap seconds introduced to match this accumulated difference. Leap seconds are timed to keep DUT1 within the permitted range. The frequency of leap seconds therefore corresponds to the slope of the diagonal graph segments, and thus to the excess LOD. Time periods when the slope reverses direction (slopes upwards, not the vertical segments) are times when the excess LOD is negative, that is, when the LOD is below 86,400 s.

Future

As the Earth's rotation continues to slow, positive leap seconds will be required more frequently. The long-term rate of change of LOD is approximately +1.7 ms per century. At the end of the 21st century, LOD will be roughly 86,400.004 s, requiring leap seconds every 250 days. Over several centuries, the frequency of leap seconds will become problematic.

A change in the trend of the UT1 − UTC values began around June 2019: instead of slowing down (with leap seconds keeping the difference between UT1 and UTC under 0.9 seconds), the Earth's rotation has sped up, causing this difference to increase. If the trend continues, a negative leap second may be required, which has never been used before; it may not be needed until about 2025. Some time in the 22nd century, two leap seconds will be required every year. The current practice of only allowing leap seconds in June and December will be insufficient to maintain a difference of less than 1 second, and it might be decided to introduce leap seconds in March and September. In the 25th century, four leap seconds are projected to be required every year, so the current quarterly options would be insufficient. In April 2001, Rob Seaman of the National Optical Astronomy Observatory proposed that leap seconds be allowed to be added monthly rather than twice yearly.
In 2022, a resolution was adopted by the General Conference on Weights and Measures to redefine UTC and abolish leap seconds, while keeping the civil second constant and equal to the SI second, so that sundials would slowly get further and further out of sync with civil time. The leap seconds will be eliminated by 2035. The resolution does not break the connection between UTC and UT1, but it increases the maximum allowable difference. The details of what the maximum difference will be and how corrections will be implemented are left for future discussions. This will result in a shift of the sun's movements relative to civil time, with the difference increasing quadratically with time (i.e., proportional to elapsed centuries squared). This is analogous to the shift of seasons relative to the yearly calendar that results from the calendar year not precisely matching the tropical year length. This would be a change in civil timekeeping, with a slow effect at first but one becoming drastic over several centuries. UTC (and TAI) would run further and further ahead of UT; it would coincide with local mean time along a meridian drifting eastward faster and faster. Thus, the time system would lose its fixed connection to the geographic coordinates based on the IERS meridian. The difference between UTC and UT would reach 0.5 hours after the year 2600 and 6.5 hours around 4600.

ITU-R Study Group 7 and Working Party 7A were unable to reach consensus on whether to advance the proposal to the 2012 Radiocommunications Assembly; the chairman of Study Group 7 elected to advance the question to the 2012 Radiocommunications Assembly (20 January 2012), but consideration of the proposal was postponed by the ITU until the World Radio Conference in 2015. This conference, in turn, considered the question, but no permanent decision was reached; it only chose to engage in further study with the goal of reconsideration in 2023. A proposed alternative to the leap second is the leap hour or leap minute, which requires changes only once every few centuries. ITU World Radiocommunication Conference 2023 (WRC-23), held in Dubai (United Arab Emirates) from 20 November to 15 December 2023, formally recognized Resolution 4 of the 27th CGPM (2022), which decided that the maximum value for the difference (UT1 − UTC) will be increased in, or before, 2035.
Atomic clock
An atomic clock is a clock that measures time by monitoring the resonant frequency of atoms. It is based on atoms having different energy levels. Electron states in an atom are associated with different energy levels, and in transitions between such states they interact with a very specific frequency of electromagnetic radiation. This phenomenon serves as the basis for the International System of Units' (SI) definition of a second:

The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹.

This definition is the basis for the system of International Atomic Time (TAI), which is maintained by an ensemble of atomic clocks around the world. The system of Coordinated Universal Time (UTC) that is the basis of civil time implements leap seconds to allow clock time to track changes in Earth's rotation to within one second while being based on clocks that are based on the definition of the second, though leap seconds will be phased out in 2035.

The accurate timekeeping capabilities of atomic clocks are also used for navigation by satellite networks such as the European Union's Galileo Programme and the United States' GPS. The timekeeping accuracy of the involved atomic clocks is important because the smaller the error in time measurement, the smaller the error in the distance obtained by multiplying the time by the speed of light: a timing error of a nanosecond, or one billionth of a second (10⁻⁹ s), translates into a distance and hence positional error of almost 30 centimetres.

The main variety of atomic clock uses caesium atoms cooled to temperatures that approach absolute zero. The primary standard for the United States, the National Institute of Standards and Technology (NIST)'s caesium fountain clock named NIST-F2, measures time with an uncertainty of 1 second in 300 million years (a relative uncertainty of about 10⁻¹⁶). NIST-F2 was brought online on 3 April 2014.
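The navigation claim above is direct arithmetic on the speed of light; a one-line check:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_error_m(timing_error_s: float) -> float:
    """Positional (range) error produced by a given clock error."""
    return C * timing_error_s

print(range_error_m(1e-9))    # ~0.30 m: one nanosecond is almost 30 cm
print(range_error_m(100e-9))  # ~30 m for a 100 ns receiver timing error
```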
History

The Scottish physicist James Clerk Maxwell proposed measuring time with the vibrations of light waves in his 1873 Treatise on Electricity and Magnetism: 'A more universal unit of time might be found by taking the periodic time of vibration of the particular kind of light whose wave length is the unit of length.' Maxwell argued this would be more accurate than the Earth's rotation, which defines the mean solar second for timekeeping.

During the 1930s, the American physicist Isidor Isaac Rabi built equipment for atomic beam magnetic resonance frequency clocks. The accuracy of mechanical, electromechanical and quartz clocks is reduced by temperature fluctuations. This led to the idea of measuring the frequency of an atom's vibrations to keep time much more accurately, as proposed by James Clerk Maxwell, Lord Kelvin, and Isidor Rabi. Rabi proposed the concept in 1945, which led to a demonstration of a clock based on ammonia in 1949, and then to the first practical accurate atomic clock, using caesium atoms, built at the National Physical Laboratory in the United Kingdom in 1955 by Louis Essen in collaboration with Jack Parry.

In 1949, Alfred Kastler and Jean Brossel developed a technique called optical pumping for electron energy level transitions in atoms using light. This technique is useful for creating much stronger magnetic resonance and microwave absorption signals. Unfortunately, it has a side effect: a light shift of the resonant frequency. Claude Cohen-Tannoudji and others managed to reduce the light shifts to acceptable levels.

Norman Ramsey developed a method, now commonly known as Ramsey interferometry, for higher frequencies and narrower resonances in the oscillating fields. Kolsky, Phipps, Ramsey, and Silsbee used this technique for molecular beam spectroscopy in 1950.

After 1956, atomic clocks were studied by many groups, such as the National Institute of Standards and Technology (formerly the National Bureau of Standards) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the National Research Council (NRC) in Canada, the National Physical Laboratory in the United Kingdom, the International Time Bureau (French: Bureau International de l'Heure, abbreviated BIH) at the Paris Observatory, the National Radio Company, Bomac, Varian, Hewlett–Packard and Frequency & Time Systems. During the 1950s, the National Radio Company sold more than 50 units of the first atomic clock, the Atomichron. In 1964, engineers at Hewlett-Packard released the 5060 rack-mounted model of caesium clocks.

Definition of the second

In 1968, the SI defined the duration of the second to be 9 192 631 770 vibrations of the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom. Prior to that, it was defined by there being 31,556,925.9747 seconds in the tropical year 1900. In 1997, the International Committee for Weights and Measures (CIPM) added that the preceding definition refers to a caesium atom at rest at a temperature of absolute zero. Following the 2019 revision of the SI, the definition of every base unit except the mole and almost every derived unit relies on the definition of the second. Timekeeping researchers are currently working on an even more stable atomic reference for the second, with a plan to adopt a more precise definition (based on optical clocks or the Rydberg constant) around 2030 as atomic clocks improve.

Metrology advancements and optical clocks

Technological developments such as lasers and optical frequency combs in the 1990s led to increasing accuracy of atomic clocks. Lasers enable optical-range control over atomic state transitions, at a much higher frequency than that of microwaves, while the optical frequency comb measures such high-frequency oscillations in light very accurately. The first advance beyond the precision of caesium clocks occurred at NIST in 2010 with the demonstration of a "quantum logic" optical clock that used aluminum ions. Optical clocks are a very active area of research in the field of metrology as scientists work to develop clocks based on the elements ytterbium, mercury, aluminum, and strontium. Scientists at JILA demonstrated a strontium clock with record frequency precision in 2015. Scientists at NIST developed a quantum logic clock that measured a single aluminum ion in 2019 with a then-unmatched frequency uncertainty. At JILA in September 2021, scientists demonstrated an optical strontium clock that resolved the differential frequency between atomic ensembles separated by only a millimetre. The second is expected to be redefined when the field of optical clocks matures, sometime around the year 2030 or 2034. In order for this to occur, optical clocks must be consistently capable of measuring frequency with extremely high accuracy.
In addition, methods for reliably comparing different optical clocks around the world in national metrology labs must be demonstrated, and the comparisons must show agreement at the clocks' claimed relative frequency accuracies.

Chip-scale atomic clocks

In addition to increased accuracy, the development of chip-scale atomic clocks has expanded the number of places atomic clocks can be used. In August 2004, NIST scientists demonstrated a chip-scale atomic clock that was 100 times smaller than an ordinary atomic clock and had much lower power consumption. The atomic clock was about the size of a grain of rice with a frequency of about 9 GHz. This technology became available commercially in 2011. Atomic clocks on the scale of one chip require less than 30 milliwatts of power. The National Institute of Standards and Technology created a program, NIST on a Chip, to develop compact ways of measuring time with a device just a few millimetres across. Metrologists are currently (2022) designing atomic clocks that implement new developments such as ion traps and optical combs to reach greater accuracies.

Measuring time with atomic clocks

Clock mechanism

An atomic clock is based on a system of atoms which may be in one of two possible energy states. A group of atoms in one state is prepared, then subjected to microwave radiation. If the radiation is of the correct frequency, a number of atoms will transition to the other energy state. The closer the frequency is to the inherent oscillation frequency of the atoms, the more atoms will switch states. Such correlation allows very accurate tuning of the frequency of the microwave radiation. Once the microwave radiation is adjusted to a known frequency where the maximum number of atoms switch states, the atom, and thus its associated transition frequency, can be used as a timekeeping oscillator to measure elapsed time.

All timekeeping devices use oscillatory phenomena to accurately measure time, whether it is the rotation of the Earth for a sundial, the swinging of a pendulum in a grandfather clock, the vibrations of springs and gears in a watch, or voltage changes in a quartz crystal watch. However, all of these are easily affected by temperature changes and are not very accurate. The most accurate clocks use atomic vibrations to keep track of time. Clock transition states in atoms are insensitive to temperature and other environmental factors, and the oscillation frequency is much higher than that of any of the other clocks (in the microwave frequency regime and higher).

One of the most important factors in a clock's performance is the atomic line quality factor, Q, defined as the ratio of the absolute frequency ν of the resonance to the linewidth of the resonance itself, Δν, so that Q = ν/Δν. Atomic resonance has a much higher Q than mechanical devices. Atomic clocks can also be isolated from environmental effects to a much higher degree. Atomic clocks have the benefit that atoms are universal, which means that the oscillation frequency is also universal. This is different from quartz and mechanical time measurement devices that do not have a universal frequency.

A clock's quality can be specified by two parameters: accuracy and stability. Accuracy is a measurement of the degree to which the clock's ticking rate can be counted on to match some absolute standard such as the inherent hyperfine frequency of an isolated atom or ion. Stability describes how the clock performs when averaged over time to reduce the impact of noise and other short-term fluctuations (see precision).

The instability of an atomic clock is specified by its Allan deviation σy(τ). The limiting instability due to atom or ion counting statistics is given by

σy(τ) ≈ (Δν/ν) √(Tc/(N τ)),

where Δν is the spectroscopic linewidth of the clock system, ν is its frequency, N is the number of atoms or ions used in a single measurement, Tc is the time required for one cycle, and τ is the averaging period. This means instability is smaller when the linewidth Δν is smaller and when the signal-to-noise ratio (here √N) is larger. The stability improves as the time τ over which the measurements are averaged increases from seconds to hours to days. The stability is most heavily affected by the oscillator frequency ν. This is why optical clocks such as strontium clocks (429 terahertz) are much more stable than caesium clocks (9.19 GHz).
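This statistic can be estimated directly from a record of fractional-frequency readings. Below is a minimal non-overlapping estimator (production analyses typically use overlapping variants); the simulated white-frequency-noise input is illustrative:

```python
import numpy as np

def allan_deviation(y: np.ndarray, m: int) -> float:
    """Non-overlapping Allan deviation at averaging time tau = m * tau0,
    for fractional-frequency samples y taken every tau0 seconds."""
    n_bins = len(y) // m
    bin_means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(bin_means) ** 2))

# White frequency noise: the deviation should fall roughly as 1/sqrt(tau).
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(1_000_000)
for m in (1, 100, 10_000):
    print(f"tau = {m:>6} * tau0  ->  sigma_y = {allan_deviation(y, m):.2e}")
```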
Modern clocks such as atomic fountains or optical lattices that use sequential interrogation are found to generate a type of noise that mimics and adds to the instability inherent in atom or ion counting. This effect is called the Dick effect and is typically the primary stability limitation for the newer atomic clocks. It is an aliasing effect; high-frequency noise components in the local oscillator ("LO") are heterodyned to near zero frequency by harmonics of the repeating variation in feedback sensitivity to the LO frequency. The effect places new and stringent requirements on the LO, which must now have low phase noise in addition to high stability, thereby increasing the cost and complexity of the system. For the case of an LO with flicker frequency noise, whose own Allan deviation is independent of the averaging period, the Dick-limited Allan deviation depends on the interrogation time and on the duty factor (the fraction of each cycle spent interrogating the atoms); it shows the same dependence on the averaging period τ as the counting-statistics limit and, for many of the newer clocks, is significantly larger. Analysis of the effect and its consequence as applied to optical standards has been treated in a major review (Ludlow, et al., 2015), which lamented "the pernicious influence of the Dick effect", and in several other papers.

Tuning and optimization

The core of the traditional radio frequency atomic clock is a tunable microwave cavity containing a gas. In a hydrogen maser clock the gas emits microwaves (the gas mases) on a hyperfine transition, the field in the cavity oscillates, and the cavity is tuned for maximum microwave amplitude. Alternatively, in a caesium or rubidium clock, the beam or gas absorbs microwaves and the cavity contains an electronic amplifier to make it oscillate. For both types, the atoms in the gas are prepared in one hyperfine state before they enter the cavity. For the second type, the number of atoms that change hyperfine state is detected and the cavity is tuned for a maximum of detected state changes. Most of the complexity of the clock lies in this adjustment process. The adjustment tries to correct for unwanted side-effects, such as frequencies from other electron transitions, temperature changes, and the spreading in frequencies caused by the vibration of molecules, including Doppler broadening. One way of doing this is to sweep the microwave oscillator's frequency across a narrow range to generate a modulated signal at the detector. The detector's signal can then be demodulated to apply feedback to control long-term drift in the radio frequency. In this way, the quantum-mechanical properties of the atomic transition frequency of the caesium can be used to tune the microwave oscillator to the same frequency, except for a small amount of experimental error.
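A toy version of this modulate-and-demodulate servo is sketched below. The Lorentzian line shape, linewidth, gain and modulation depth are all invented for illustration, not taken from any real instrument:

```python
F_CS = 9_192_631_770.0   # caesium hyperfine transition frequency, Hz
LINEWIDTH = 500.0        # assumed width of the atomic response, Hz (illustrative)

def detector_signal(f_hz: float) -> float:
    """Detected state changes vs. applied frequency: a toy Lorentzian line."""
    return 1.0 / (1.0 + ((f_hz - F_CS) / LINEWIDTH) ** 2)

f = F_CS + 1000.0        # oscillator starts 1 kHz off resonance
F_MOD = 100.0            # modulation depth, Hz
GAIN = 1.0               # servo gain (assumed)

for _ in range(200):
    # "Demodulate": compare the atomic response on either side of the carrier.
    error = detector_signal(f + F_MOD) - detector_signal(f - F_MOD)
    f += GAIN * LINEWIDTH * error    # steer toward the line centre

print(f"settled {f - F_CS:+.6f} Hz from resonance")  # ~0: locked
```

The sign of the demodulated error tells the servo which side of the resonance it sits on, which is exactly what sweeping and demodulating achieve in the real instrument.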
When a clock is first turned on, it takes a while for the oscillator to stabilize. In practice, the feedback and monitoring mechanism is much more complex.

Many of the newer clocks, including microwave clocks such as trapped ion or fountain clocks, and optical clocks such as lattice clocks, use a sequential interrogation protocol rather than the frequency modulation interrogation described above. An advantage of sequential interrogation is that it can accommodate much higher Qs, with ringing times of seconds rather than milliseconds. These clocks also typically have a dead time, during which the atom or ion collections are analyzed, renewed and driven into a proper quantum state, after which they are interrogated with a signal from a local oscillator (LO) for a time of perhaps a second or so. Analysis of the final state of the atoms is then used to generate a correction signal to keep the LO frequency locked to that of the atoms or ions.

Accuracy

The accuracy of atomic clocks has improved continuously since the first prototype in the 1950s. The first generation of atomic clocks were based on measuring caesium, rubidium, and hydrogen atoms. From 1959 to 1998, NIST developed a series of seven caesium-133 microwave clocks, named NBS-1 to NBS-6 and NIST-7 after the agency changed its name from the National Bureau of Standards to the National Institute of Standards and Technology. Accuracy improved by several orders of magnitude from the first clock in the series to the last. The clocks were the first to use a caesium fountain, which was introduced by Jerrold Zacharias, and laser cooling of atoms, which was demonstrated by Dave Wineland and his colleagues in 1978.

The next step in atomic clock advances involves pushing accuracy beyond that of the caesium standards. The goal is to redefine the second when clocks become so accurate that they will not lose or gain more than a second in the age of the universe. To do so, scientists must demonstrate the accuracy of clocks that use strontium and ytterbium and optical lattice technology. Such clocks are called optical clocks, where the energy level transitions used are in the optical regime (giving rise to an even higher oscillation frequency) and which thus have much higher accuracy than traditional atomic clocks.

The goal of an atomic clock accurate to about one part in 10¹⁶ was first reached at the United Kingdom's National Physical Laboratory's NPL-CsF2 caesium fountain clock and the United States' NIST-F2. The increase in precision from NIST-F1 to NIST-F2 is due to liquid nitrogen cooling of the microwave interaction region; the largest source of uncertainty in NIST-F1 is the effect of black-body radiation from the warm chamber walls.

The performance of primary and secondary frequency standards contributing to International Atomic Time (TAI) is evaluated. The evaluation reports of individual (mainly primary) clocks are published online by the International Bureau of Weights and Measures (BIPM).

Comparing atomic clocks

Time standards

A number of national metrology laboratories maintain atomic clocks, including the Paris Observatory, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the National Institute of Standards and Technology (NIST) in Colorado and Maryland, USA, JILA at the University of Colorado Boulder, the National Physical Laboratory (NPL) in the United Kingdom, and the All-Russian Scientific Research Institute for Physical-Engineering and Radiotechnical Metrology.
They do this by designing and building frequency standards that produce electric oscillations at a frequency whose relationship to the transition frequency of caesium-133 is known, in order to achieve a very low uncertainty. These primary frequency standards estimate and correct various frequency shifts, including relativistic Doppler shifts linked to atomic motion, the thermal radiation of the environment (blackbody shift) and several other factors. The best primary standards currently produce the SI second with a relative uncertainty approaching one part in 10¹⁶. At this level of accuracy, the differences in the gravitational field within the device cannot be ignored. The standard is then considered in the framework of general relativity to provide a proper time at a specific point.

The International Bureau of Weights and Measures (BIPM) provides a list of frequencies that serve as secondary representations of the second. This list contains the frequency values and respective standard uncertainties for the rubidium microwave transition and other optical transitions, including neutral atoms and single trapped ions. These secondary frequency standards can be even more accurate in their own right; the uncertainties given in the list, however, are limited by the uncertainty of the central caesium standard against which they are calibrated.

Primary frequency standards can be used to calibrate the frequency of other clocks used in national laboratories. These are usually commercial caesium clocks having very good long-term frequency stability, maintaining their frequency to within a very small deviation over a few months. Hydrogen masers, which rely on the 1.4 GHz hyperfine transition in atomic hydrogen, are also used in time metrology laboratories. Masers outperform any commercial caesium clock in terms of short-term frequency stability. In the past, these instruments have been used in all applications that require a steady reference across time periods of less than one day. Because some active hydrogen masers have a modest but predictable frequency drift with time, they have become an important part of the BIPM's ensemble of commercial clocks that implement International Atomic Time.

Synchronization with satellites

The time readings of clocks operated in metrology labs working with the BIPM need to be known very accurately. Some operations require synchronization of atomic clocks separated by great distances, over thousands of kilometers. Global Navigation Satellite Systems (GNSS) provide a satisfactory solution to the problem of time transfer. Atomic clocks are used to broadcast time signals in the United States Global Positioning System (GPS), the Russian Federation's Global Navigation Satellite System (GLONASS), the European Union's Galileo system and China's BeiDou system. The signal received from one satellite in a metrology laboratory equipped with a receiver with an accurately known position allows the time difference between the local time scale and the GNSS system time to be determined with an uncertainty of a few nanoseconds when averaged over 15 minutes. Receivers allow the simultaneous reception of signals from several satellites, and make use of signals transmitted on two frequencies.
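The single-satellite comparison just described is the classic "common-view" technique: two laboratories measure their own clock against the same satellite signal at the same scheduled epoch, and differencing the two measurements cancels the satellite's own clock error. A schematic sketch, with invented numbers:

```python
# Common-view GNSS time transfer (illustrative numbers only).
# Each lab logs (local UTC(k) - GNSS system time) from the same satellite pass.
lab_a_minus_gnss = 134.2e-9   # seconds, measured at laboratory A
lab_b_minus_gnss = 101.7e-9   # seconds, measured at laboratory B, same epoch

# The satellite clock term is common to both and drops out of the difference.
a_minus_b = lab_a_minus_gnss - lab_b_minus_gnss
print(f"UTC(A) - UTC(B) = {a_minus_b * 1e9:.1f} ns")   # 32.5 ns
```

Averaging many such differences over a 15-minute track is what brings the uncertainty down to the few-nanosecond level quoted above.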
As more satellites are launched and start operations, time measurements will become more accurate. These methods of time comparison must make corrections for the effects of special relativity and general relativity amounting to a few nanoseconds.

In June 2015, the National Physical Laboratory (NPL) in Teddington, UK; the French department of Time-Space Reference Systems at the Paris Observatory (LNE-SYRTE); the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), in Braunschweig; and Italy's Istituto Nazionale di Ricerca Metrologica (INRiM) in Turin started tests to improve the accuracy of current state-of-the-art satellite comparisons by a factor of 10, although the accuracy will still be limited by the satellite link. These four European labs develop and host a variety of experimental optical clocks that harness different elements in different experimental set-ups, and want to compare their optical clocks against each other and check whether they agree.

International timekeeping

National laboratories usually operate a range of clocks. These are operated independently of one another and their measurements are sometimes combined to generate a scale that is more stable and more accurate than that of any individual contributing clock. This scale allows for time comparisons between different clocks in the laboratory. These atomic time scales are generally referred to as TA(k) for laboratory k.

Coordinated Universal Time (UTC) is the result of comparing clocks in national laboratories around the world to International Atomic Time (TAI), then adding leap seconds as necessary. TAI is a weighted average of around 450 clocks in some 80 time institutions, and its relative stability correspondingly exceeds that of any single contributing clock. Before TAI is published, the frequency of the result is compared with the SI second at various primary and secondary frequency standards. This requires relativistic corrections to be applied to the location of the primary standard, which depend on the gravity potential at the standard relative to the rotating geoid of Earth. The values of the rotating geoid and the TAI change slightly each month and are available in the BIPM Circular T publication. The TAI time-scale is deferred by a few weeks while the average of atomic clocks around the world is calculated.

TAI is not distributed in everyday timekeeping. Instead, an integer number of leap seconds is added or subtracted to correct for the Earth's rotation, producing UTC. The number of leap seconds is changed so that mean solar noon at the prime meridian (Greenwich) does not deviate from UTC noon by more than 0.9 seconds.

National metrology institutions maintain an approximation of UTC referred to as UTC(k) for laboratory k. UTC(k) is distributed by the BIPM's Consultative Committee for Time and Frequency. The offset UTC − UTC(k) is calculated every 5 days, and the results are published monthly. Laboratories keep UTC(k) within about 100 nanoseconds of UTC. In some countries, UTC(k) is the legal time that is distributed by radio, television, telephone, Internet, fiber-optic cables, time signal transmitters, and speaking clocks. In addition, GNSS provides time information accurate to a few tens of nanoseconds or better.

Fibre optics

In a next phase, these labs strive to transmit comparison signals in the visible spectrum through fibre-optic cables. This will allow their experimental optical clocks to be compared with an accuracy similar to the expected accuracies of the optical clocks themselves.
Some of these labs have already established fibre-optic links, and tests have begun on sections between Paris and Teddington, and between Paris and Braunschweig. Fibre-optic links between experimental optical clocks also exist between the American NIST lab and its partner lab JILA, both in Boulder, Colorado, but these span much shorter distances than the European network and connect just two labs. According to Fritz Riehle, a physicist at PTB, "Europe is in a unique position as it has a high density of the best clocks in the world". In August 2016, the French LNE-SYRTE in Paris and the German PTB in Braunschweig reported the comparison and agreement of two fully independent experimental strontium lattice optical clocks, via a newly established phase-coherent frequency link over telecom fibre-optic cable connecting the two cities. The fractional uncertainty of the whole link was assessed to be low enough to make comparisons of even more accurate clocks possible. In 2021, NIST compared transmission of signals from a series of experimental atomic clocks located at the NIST lab, its partner lab JILA, and the University of Colorado, all in Boulder, Colorado, over air and fibre-optic cable.

Microwave atomic clocks

Caesium

The SI second is defined as a fixed number of periods of the radiation corresponding to the unperturbed ground-state hyperfine transition of the caesium-133 atom. Caesium standards are therefore regarded as primary time and frequency standards. Caesium clocks include the NIST-F1 clock, developed in 1999, and the NIST-F2 clock, developed in 2013.

Caesium has several properties that make it a good choice for an atomic clock. Whereas a hydrogen atom moves at 1,600 m/s at room temperature and a nitrogen atom moves at 510 m/s, a caesium atom moves at a much slower speed of 130 m/s due to its greater mass. The hyperfine frequency of caesium (~9.19 GHz) is also higher than that of other elements such as rubidium (~6.8 GHz) and hydrogen (~1.4 GHz). The high frequency of caesium allows for more accurate measurements. Caesium reference tubes suitable for national standards currently last about seven years and cost about US$35,000. Primary frequency and time standards like the United States Time Standard atomic clocks, NIST-F1 and NIST-F2, use far higher power.

Block diagram

In a caesium beam frequency reference, timing signals are derived from a high-stability voltage-controlled quartz crystal oscillator (VCXO) that is tunable over a narrow range. The output frequency of the VCXO (typically 5 MHz) is multiplied by a frequency synthesizer to obtain microwaves at the frequency of the caesium atomic hyperfine transition (about 9.19 GHz). The output of the frequency synthesizer is amplified and applied to a chamber containing caesium gas, which absorbs the microwaves. The output current of the caesium chamber increases as absorption increases. The remainder of the circuitry simply adjusts the running frequency of the VCXO to maximize the output current of the caesium chamber, which keeps the oscillator tuned to the resonance frequency of the hyperfine transition.

Rubidium

The BIPM defines the unperturbed ground-state hyperfine transition frequency of the rubidium-87 atom, 6 834 682 610.904 312 6 Hz, in terms of the caesium standard frequency. Atomic clocks based on rubidium standards are therefore regarded as secondary representations of the second. Rubidium standard clocks are prized for their low cost, small size and short-term stability.
They are used in many commercial, portable and aerospace applications. Modern rubidium standard tubes last more than ten years, and can cost as little as US$50. Some commercial applications use a rubidium standard periodically corrected by a global positioning system receiver (see GPS disciplined oscillator). This achieves excellent short-term accuracy, with long-term accuracy equal to (and traceable to) the US national time standards.

Hydrogen

The BIPM defines the unperturbed optical transition frequency of the hydrogen-1 neutral atom, 1 233 030 706 593 514 Hz, in terms of the caesium standard frequency. Atomic clocks based on hydrogen standards are therefore regarded as secondary representations of the second. Hydrogen masers have superior short-term stability compared to other standards, but lower long-term accuracy. The long-term stability of hydrogen maser standards decreases because of changes in the cavity's properties over time. The relative error of hydrogen masers is 5 × 10⁻¹⁶ for periods of 1,000 seconds. This makes hydrogen masers good for radio astronomy, in particular for very long baseline interferometry. Hydrogen masers are used as flywheel oscillators in laser-cooled atomic frequency standards and for broadcasting time signals from national standards laboratories, although they need to be corrected as they drift from the correct frequency over time. The hydrogen maser is also useful for experimental tests of the effects of special relativity and general relativity, such as gravitational red shift.

Other types of atomic clocks

Quantum clocks

In March 2008, physicists at NIST described a quantum logic clock based on individual ions of beryllium and aluminium. This clock was compared to NIST's mercury ion clock. These were the most accurate clocks that had been constructed, with neither clock gaining nor losing time at a rate that would exceed a second in over a billion years. In February 2010, NIST physicists described a second, enhanced version of the quantum logic clock based on individual ions of magnesium and aluminium. Considered the world's most precise clock in 2010, it offered more than twice the precision of the original. In July 2019, NIST scientists demonstrated such an Al+ quantum logic clock with a total uncertainty below 10⁻¹⁸, the first demonstration of such a clock at that level.

Nuclear clock concept

One theoretical possibility for improving the performance of atomic clocks is to use a nuclear energy transition (between different nuclear isomers) rather than the atomic electron transitions which current atomic clocks measure. Most nuclear transitions operate at far too high a frequency to be measured, but the exceptionally low excitation energy of thorium-229m produces "gamma rays" in the ultraviolet frequency range. In 2003, Ekkehard Peik and Christian Tamm noted that this makes a clock possible with current optical frequency-measurement techniques. In 2012, it was shown that a nuclear clock based on a single thorium ion could provide a total fractional frequency inaccuracy better than that of existing optical atomic clock technology. Although a precise nuclear clock remains an unrealized theoretical possibility, efforts through the 2010s to measure the transition energy culminated in the 2024 measurement of the optical frequency with sufficient accuracy that an experimental optical nuclear clock can now be constructed.
Although neutral thorium-229m atoms decay in microseconds by internal conversion, this pathway is energetically forbidden in ions, because the second and higher ionization energies are greater than the nuclear excitation energy; the ions therefore have a long radiative half-life. It is the large ratio between transition frequency and isomer lifetime which gives the clock a high quality factor. A nuclear energy transition offers the following potential advantages:

Higher frequency. All other things being equal, a higher-frequency transition offers greater stability for simple statistical reasons (fluctuations are averaged over more cycles).

Insensitivity to environmental effects. Due to its small size and the shielding effect of the surrounding electrons, an atomic nucleus is much less sensitive to ambient electromagnetic fields than is an electron in an orbital.

Greater number of atoms. Because of the aforementioned insensitivity to ambient fields, it is not necessary to have the clock atoms well-separated in a dilute gas. Current measurements take advantage of the Mössbauer effect and place the thorium ions in a solid, which allows billions of atoms to be interrogated.

Potential for redefining the second

In 2022, the best realisation of the second is achieved with caesium primary standard clocks such as IT-CsF2, NIST-F2, NPL-CsF2, PTB-CSF2, SU–CsFO2 or SYRTE-FO2. These clocks work by laser-cooling a cloud of caesium atoms to a microkelvin in a magneto-optic trap. These cold atoms are then launched vertically by laser light. The atoms then undergo Ramsey excitation in a microwave cavity. The fraction of excited atoms is then detected by laser beams. These clocks have a systematic uncertainty equivalent to about 50 picoseconds per day. A system of several fountains worldwide contributes to International Atomic Time. These caesium clocks also underpin optical frequency measurements.

The advantage of optical clocks can be explained by the statement that the instability σ satisfies σ ∝ Δf/(f · (S/N)), where Δf is the linewidth, f is the frequency, and S/N is the signal-to-noise ratio; a higher transition frequency therefore gives a lower instability. Optical clocks are based on forbidden optical transitions in ions or atoms. They have frequencies of hundreds of terahertz, with a natural linewidth of typically 1 Hz, so the Q-factor can reach about 10¹⁵, or even higher. They have better stabilities than microwave clocks, which means that they can facilitate evaluation of lower uncertainties. They also have better time resolution, which means the clock "ticks" faster. Optical clocks use either a single ion, or an optical lattice with many thousands of atoms.
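The numbers already quoted in this article make the comparison concrete. The caesium fountain linewidth below is an assumed round value for illustration only:

```python
# Quality factor Q = f / linewidth; instability scales roughly as 1/(Q * S/N).
cs_f, cs_linewidth = 9.192631770e9, 1.0   # caesium fountain; ~1 Hz Ramsey linewidth assumed
sr_f, sr_linewidth = 429e12, 1.0          # strontium optical transition, ~1 Hz linewidth

q_cs = cs_f / cs_linewidth
q_sr = sr_f / sr_linewidth
print(f"Q(caesium)   ~ {q_cs:.1e}")       # ~9.2e9
print(f"Q(strontium) ~ {q_sr:.1e}")       # ~4.3e14
print(f"optical advantage at equal S/N: ~{q_sr / q_cs:.0f}x")
```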
Rydberg constant

A definition based on the Rydberg constant would involve fixing its numerical value. The Rydberg constant describes the energy levels in a hydrogen atom via the nonrelativistic approximation En ≈ −hcR∞/n². The only viable way to fix the Rydberg constant involves trapping and cooling hydrogen. Unfortunately, this is difficult because hydrogen is very light and the atoms move very fast, causing Doppler shifts. The radiation needed to cool the hydrogen (at the 121.6 nm Lyman-alpha line) is also difficult to produce. Another hurdle involves improving the uncertainty in quantum electrodynamics (QED) calculations.

In the Report of the 25th meeting of the Consultative Committee for Units (2021), three options were considered for the redefinition of the second sometime around 2026, 2030, or 2034. The first redefinition approach considered was a definition based on a single atomic reference transition. The second was a definition based on a collection of frequencies. The third was a definition based on fixing the numerical value of a fundamental constant, such as making the Rydberg constant the basis for the definition. The committee concluded there was no feasible way to redefine the second with the third option, since no physical constant is currently known to enough digits to enable realizing the second from it.

Requirements

A redefinition must include improved optical clock reliability. TAI must be contributed to by optical clocks before the BIPM affirms a redefinition. A consistent method of sending signals, such as fibre optics, must also be developed before the second is redefined.

Secondary representations of the second

Representations of the second other than the SI caesium standard are motivated by the increasing accuracy of other atomic clocks. In particular, the high frequencies and small linewidths of optical clocks promise significantly improved signal-to-noise ratio and instability. Further secondary representations would aid in the preparation of a future redefinition of the second. A list of frequencies recommended for secondary representations of the second has been maintained by the International Bureau of Weights and Measures (BIPM) since 2006 and is available online. The list contains the frequency values and the respective standard uncertainties for the rubidium microwave transition and for several optical transitions. These secondary frequency standards can be extremely accurate in their own right; the uncertainties provided in the list, however, are limited by the linking to the caesium primary standard that currently (2018) defines the second.

Twenty-first century experimental atomic clocks that provide non-caesium-based secondary representations of the second are becoming so precise that they are likely to be used as extremely sensitive detectors for other things besides measuring frequency and time. For example, the frequency of atomic clocks is altered slightly by gravity, magnetic fields, electrical fields, force, motion, temperature and other phenomena. The experimental clocks tend to continue to improve, and leadership in performance has shifted back and forth between various types of experimental clocks.

Applications

The development of atomic clocks has led to many scientific and technological advances such as precise global and regional navigation satellite systems, and applications in the Internet, which depend critically on frequency and time standards. Atomic clocks are installed at sites of time signal radio transmitters. They are used at some long-wave and medium-wave broadcasting stations to deliver a very precise carrier frequency. Atomic clocks are used in many scientific disciplines, such as for long-baseline interferometry in radio astronomy.

Global navigation satellite systems

The Global Positioning System (GPS) operated by the United States Space Force provides very accurate timing and frequency signals. A GPS receiver works by measuring the relative time delay of signals from a minimum of four, but usually more, GPS satellites, each of which has at least two onboard caesium and as many as two rubidium atomic clocks. The relative times are mathematically transformed into three absolute spatial coordinates and one absolute time coordinate. GPS Time (GPST) is a continuous time scale and theoretically accurate to about 14 nanoseconds. However, most receivers lose accuracy in the interpretation of the signals and are only accurate to 100 nanoseconds.
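The "three spatial coordinates and one time coordinate" statement can be sketched as the standard pseudorange least-squares problem. The satellite geometry, receiver position, and clock bias below are all invented for illustration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position_and_time(sats, pseudoranges, iterations=10):
    """Gauss-Newton solve of rho_i = |sat_i - x| + C*b for position x and clock bias b."""
    x, b = np.zeros(3), 0.0          # start at Earth's centre with zero bias
    for _ in range(iterations):
        vectors = sats - x
        ranges = np.linalg.norm(vectors, axis=1)
        residuals = pseudoranges - (ranges + C * b)
        # Jacobian of the model: d(rho)/dx = -unit vector to satellite, d(rho)/db = C
        J = np.hstack([-vectors / ranges[:, None], np.full((len(sats), 1), C)])
        step = np.linalg.lstsq(J, residuals, rcond=None)[0]
        x, b = x + step[:3], b + step[3]
    return x, b

# Four invented satellites at roughly GPS orbital radius (~26,600 km from Earth's centre).
sats = np.array([[26.6e6, 0.0, 0.0],
                 [0.0, 26.6e6, 0.0],
                 [0.0, 0.0, 26.6e6],
                 [15.4e6, 15.4e6, 15.4e6]])
true_pos = np.array([6.371e6, 0.0, 0.0])   # a point on Earth's surface
true_bias = 1e-3                           # 1 ms receiver clock error

rho = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias
pos, bias = solve_position_and_time(sats, rho)
print(np.round(pos), f"{bias:.6f} s")      # recovers the position and the 1 ms bias
```

The receiver's clock error b falls out of the solve along with the position, which is why even inexpensive GNSS receivers are, in effect, atomic-clock-quality time sources.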
GPST is related to but differs from TAI (International Atomic Time) and UTC (Coordinated Universal Time). GPST remains at a constant offset from TAI (TAI − GPST = 19 seconds) and, like TAI, does not implement leap seconds. Periodic corrections are performed to the on-board clocks in the satellites to keep them synchronized with ground clocks. The GPS navigation message includes the difference between GPST and UTC. As of July 2015, GPST was 17 seconds ahead of UTC because of the leap second added to UTC on 30 June 2015. Receivers subtract this offset from GPS Time to calculate UTC.

The GLObal NAvigation Satellite System (GLONASS) operated by the Russian Aerospace Defence Forces provides an alternative to the Global Positioning System (GPS) and is the second navigational system in operation with global coverage and of comparable precision. GLONASS Time (GLONASST) is generated by the GLONASS Central Synchroniser, and its deviation is typically better than 1,000 nanoseconds. Unlike GPS, the GLONASS time scale implements leap seconds, like UTC.

The Galileo Global Navigation Satellite System is operated by the European GNSS Agency and the European Space Agency. Galileo started offering global Early Operational Capability (EOC) on 15 December 2016, providing the third, and first non-military operated, global navigation satellite system. Galileo System Time (GST) is a continuous time scale generated on the ground at the Galileo Control Centre in Fucino, Italy, by the Precise Timing Facility, based on averages of different atomic clocks, maintained by the Galileo Central Segment and synchronised with TAI with a nominal offset below 50 nanoseconds. According to the European GNSS Agency, Galileo offers 30 nanoseconds timing accuracy. The March 2018 Quarterly Performance Report by the European GNSS Service Centre reported that the UTC Time Dissemination Service Accuracy was ≤ 7.6 nanoseconds, computed by accumulating samples over the previous 12 months, exceeding the ≤ 30 ns target. Each Galileo satellite has two passive hydrogen masers and two rubidium atomic clocks for onboard timing. The Galileo navigation message includes the differences between GST, UTC and GPST, to promote interoperability. In the summer of 2021, the European Union settled on a passive hydrogen maser for the second generation of Galileo satellites, starting in 2023, with an expected lifetime of 12 years per satellite. The masers are about 2 feet long and weigh about 40 pounds.

The BeiDou-2/BeiDou-3 satellite navigation system is operated by the China National Space Administration. BeiDou Time (BDT) is a continuous time scale starting at 1 January 2006 at 0:00:00 UTC and synchronised with UTC to within 100 ns. BeiDou became operational in China in December 2011, with 10 satellites in use, and began offering services to customers in the Asia-Pacific region in December 2012. On 27 December 2018, the BeiDou Navigation Satellite System started to provide global services with a reported timing accuracy of 20 ns. The final, 35th, BeiDou-3 satellite for global coverage was launched into orbit on 23 June 2020.

Experimental space clock

In April 2015, NASA announced that it planned to deploy a Deep Space Atomic Clock (DSAC), a miniaturized, ultra-precise mercury-ion atomic clock, into outer space. NASA said that the DSAC would be much more stable than other navigational clocks. The clock was successfully launched on 25 June 2019, activated on 23 August 2019, and deactivated two years later on 18 September 2021.
Military usage

In 2022, DARPA announced a drive to upgrade the U.S. military's timekeeping systems for greater precision over time when sensors do not have access to GPS satellites. The Robust Optical Clock Network will balance usability and accuracy as it is developed over four years.

Time signal radio transmitters

A radio clock is a clock that automatically synchronizes itself by means of radio time signals received by a radio receiver. Some manufacturers may label radio clocks as atomic clocks, because the radio signals they receive originate from atomic clocks. Normal low-cost consumer-grade receivers that rely on amplitude-modulated time signals have a practical accuracy uncertainty of ±0.1 second. This is sufficient for many consumer applications. Instrument-grade time receivers provide higher accuracy. Radio clocks incur a propagation delay of approximately 1 ms for every 300 kilometres (186 mi) of distance from the radio transmitter. Many governments operate transmitters for timekeeping purposes.

General relativity

General relativity predicts that clocks tick slower deeper in a gravitational field, and this gravitational redshift effect has been well documented. Atomic clocks are effective at testing general relativity on ever smaller scales. A project to observe twelve atomic clocks from 11 November 1999 to October 2014 resulted in a further demonstration that Einstein's theory of general relativity is accurate at small scales. In 2021, a team of scientists at JILA measured the difference in the passage of time due to gravitational redshift between two layers of atoms separated by one millimeter, using a strontium optical clock cooled to 100 nanokelvins. Given its quantum nature, and the fact that time is a relativistic quantity, atomic clocks can be used to see how time is influenced by general relativity and quantum mechanics at the same time.

Financial systems

Atomic clocks keep accurate records of transactions between buyers and sellers to the millisecond or better, particularly in high-frequency trading. Accurate timekeeping is needed to prevent illegal trading ahead of time, in addition to ensuring fairness to traders on the other side of the globe. The current system, known as NTP, is only accurate to a millisecond.

Transportable optical clocks

Many of the most accurate optical clocks are big and only available in large metrology labs, so they are not readily useful for space-limited factories or other industrial environments that could use an atomic clock for GPS-level accuracy. Researchers have designed a strontium optical lattice clock that can be moved around in an air-conditioned car trailer, achieving a low relative uncertainty in comparisons against a stationary clock.
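As a worked illustration of the gravitational redshift discussed above: to first order near Earth's surface, the fractional frequency shift between two heights is Δf/f ≈ gΔh/c², which comes to about 10⁻¹⁹ across the millimetre resolved in the 2021 JILA measurement:

```python
G_SURFACE = 9.81      # gravitational acceleration at Earth's surface, m/s^2
C = 299_792_458.0     # speed of light, m/s

def gravitational_shift(delta_h_m: float) -> float:
    """First-order fractional frequency shift between two heights near Earth's surface."""
    return G_SURFACE * delta_h_m / C**2

print(gravitational_shift(1e-3))  # ~1.1e-19 across one millimetre
print(gravitational_shift(1.0))   # ~1.1e-16 per metre of elevation
```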
Technology
Timekeeping
null
1942366
https://en.wikipedia.org/wiki/Air%20quality%20index
Air quality index
An air quality index (AQI) is an indicator developed by government agencies to communicate to the public how polluted the air currently is or how polluted it is forecast to become. As air pollution levels rise, so does the AQI, along with the associated public health risk. Children, the elderly and individuals with respiratory or cardiovascular problems are typically the first groups affected by poor air quality. When the AQI is high, governmental bodies generally encourage people to reduce physical activity outdoors, or even avoid going out altogether. When wildfires result in a high AQI, the use of a mask (such as an N95 respirator) outdoors and of an air purifier (incorporating both HEPA and activated carbon filters) indoors is also encouraged. Different countries have their own air quality indices, corresponding to different national air quality standards. Some of these are Canada's Air Quality Health Index, Malaysia's Air Pollution Index, and Singapore's Pollutant Standards Index. Overview Computation of the AQI requires an air pollutant concentration over a specified averaging period, obtained from an air monitor or model. Taken together, concentration and time represent the dose of the air pollutant. Health effects corresponding to a given dose are established by epidemiological research. Air pollutants vary in potency, and the function used to convert from air pollutant concentration to AQI varies by pollutant. An index's values are typically grouped into ranges. Each range is assigned a descriptor, a color code, and a standardized public health advisory. The AQI can increase due to an increase of air emissions, for example during rush-hour traffic or when there is an upwind forest fire, or from a lack of dilution of air pollutants. Stagnant air, often caused by an anticyclone, temperature inversion, or low wind speeds, lets air pollution remain in a local area, leading to high concentrations of pollutants, chemical reactions between air contaminants, and hazy conditions. On a day when the AQI is predicted to be elevated due to fine particle pollution, an agency or public health organization might: advise sensitive groups, such as the elderly, children, and those with respiratory or cardiovascular problems, to avoid outdoor exertion; declare an "action day" to encourage voluntary measures to curtail air emissions, such as using public transportation; or recommend the use of masks outdoors and air purifiers indoors to prevent fine particles from entering the lungs. During a period of very poor air quality, such as an air pollution episode, when the AQI indicates that acute exposure may cause significant harm to the public health, agencies may invoke emergency plans that allow them to order major emitters (such as coal burning industries) to curtail emissions until the hazardous conditions abate. Most air contaminants do not have an associated AQI. Many countries monitor ground-level ozone, particulates, sulfur dioxide, carbon monoxide and nitrogen dioxide, and calculate air quality indices for these pollutants. The definition of the AQI in a particular nation reflects the discourse surrounding the development of national air quality standards in that nation. A website allowing government agencies anywhere in the world to submit their real-time air monitoring data for display using a common definition of the air quality index has recently become available.
Indices by location Australia Each of the states and territories of Australia is responsible for monitoring air quality and publishing data in accordance with the National Environment Protection (Ambient Air Quality) Measure (NEPM) standards. Each state and territory publishes air quality data for individual monitoring locations, and most states and territories publish air quality indexes for each monitoring location. Across Australia, a consistent approach is taken with air quality indexes, using a simple linear scale where 100 represents the maximum concentration standard for each pollutant, as set by the NEPM; a pollutant measured at half its standard therefore scores 50. These maximum concentration standards are: The air quality index (AQI) for an individual location is simply the highest of the air quality index values for each pollutant being monitored at that location. There are six AQI bands, with health advice for each: Canada Air quality in Canada has been reported for many years with provincial air quality indices (AQIs). Significantly, AQI values reflect air quality management objectives, which are based on the lowest achievable emissions rate, rather than exclusive concern for human health. The Air Quality Health Index (AQHI) is a scale designed to help understand the impact of air quality on health. It is a health protection tool used to make decisions to reduce short-term exposure to air pollution by adjusting activity levels during periods of increased air pollution. The Air Quality Health Index also provides advice on how to improve air quality by proposing a behavioral change to reduce the environmental footprint. This index pays particular attention to people who are sensitive to air pollution. It provides them with advice on how to protect their health during air quality levels associated with low, moderate, high and very high health risks. The AQHI provides a number from 1 to 10+ to indicate the level of health risk associated with local air quality. On occasion, when the amount of air pollution is abnormally high, the number may exceed 10. The AQHI provides a current local air quality value as well as a forecast of local air quality maximums for today, tonight, and tomorrow, with associated health advice. China Hong Kong On December 30, 2013, Hong Kong replaced the Air Pollution Index with a new index called the Air Quality Health Index. This index, reported by the Environmental Protection Department, is measured on a scale of 1 to 10+ and considers four air pollutants: ozone; nitrogen dioxide; sulfur dioxide and particulate matter (including PM10 and PM2.5). For any given hour the AQHI is calculated from the sum of the percentage excess risk of daily hospital admissions attributable to the 3-hour moving average concentrations of these four pollutants. The AQHIs are grouped into five AQHI health risk categories with health advice provided: Mainland China China's Ministry of Environmental Protection (MEP) is responsible for measuring the level of air pollution in China. As of January 1, 2013, MEP monitors daily pollution levels in 163 of its major cities. The AQI level is based on the level of six atmospheric pollutants, namely sulfur dioxide (SO2), nitrogen dioxide (NO2), suspended particulates smaller than 10 μm in aerodynamic diameter (PM10), suspended particulates smaller than 2.5 μm in aerodynamic diameter (PM2.5), carbon monoxide (CO), and ozone (O3), measured at the monitoring stations throughout each city.
AQI mechanics An individual score (Individual Air Quality Index, IAQI) is calculated for each pollutant from the breakpoint concentrations below, using the same piecewise linear function to calculate intermediate values as the US AQI scale. The final AQI value can be calculated either per hour or per 24 hours and is the maximum of these six scores. The score for each pollutant is non-linear, as is the final AQI score. Thus an AQI of 300 does not mean twice the pollution of an AQI of 150, nor does it mean the air is twice as harmful. The concentration of a pollutant when its IAQI is 100 does not equal twice its concentration when its IAQI is 50, nor does it mean the pollutant is twice as harmful. While an AQI of 50 from day 1 to 182 and an AQI of 100 from day 183 to 365 does provide an annual average of 75, it does not mean the pollution is acceptable even if the benchmark of 100 is deemed safe. Because the benchmark is a 24-hour target, and the annual average must match the annual target, it is entirely possible to have safe air every day of the year but still fail the annual pollution benchmark. Europe The Common Air Quality Index (CAQI) is an air quality index used in Europe since 2006. In November 2017, the European Environment Agency announced the European Air Quality Index (EAQI) and started encouraging its use on websites and for other ways of informing the public about air quality. CAQI The EU-supported project CiteairII argued that the CAQI had been evaluated on a "large set" of data, and described the CAQI's motivation and definition. CiteairII stated that having an air quality index that would be easy to present to the general public was a major motivation, leaving aside the more complex question of a health-based index, which would require, for example, effects of combined levels of different pollutants. The main aim of the CAQI was to have an index that would encourage wide comparison across the EU, without replacing local indices. CiteairII stated that the "main goal of the CAQI is not to warn people for possible adverse health effects of poor air quality but to attract their attention to urban air pollution and its main source (traffic) and help them decrease their exposure." The CAQI is a number on a scale from 0 to 100, where a low value means good air quality and a high value means extremely poor air quality. The index is defined in both hourly and daily versions, and separately near roads (a "roadside" or "traffic" index) or away from roads (a "background" index). The CAQI had two mandatory components for the roadside index, NO2 and PM10, and three mandatory components for the background index, NO2, PM10 and O3. It also included optional pollutants PM2.5, CO and SO2. A "sub-index" is calculated for each of the mandatory (and optional if available) components. The CAQI is defined as the sub-index that represents the worst quality among those components. Some of the key pollutant concentrations in μg/m3 for the hourly background index, the corresponding sub-indices, and five CAQI ranges and verbal descriptions are as follows. Frequently updated CAQI values and maps are shown on www.airqualitynow.eu and other websites. A separate Year Average Common Air Quality Index (YACAQI) is also defined, in which different pollutant sub-indices are separately normalised to a value typically near unity. For example, the yearly averages of NO2, PM10 and PM2.5 are divided by 40 μg/m3, 40 μg/m3 and 20 μg/m3, respectively. The overall background or traffic YACAQI for a city is the arithmetic mean of a defined subset of these sub-indices.
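As a minimal illustration of the YACAQI normalisation just described, the Python sketch below divides annual mean concentrations by the divisors quoted above and averages the resulting sub-indices; the city values are made-up numbers for demonstration only, and a result near 1.0 indicates annual averages near the target values:

# Divisors are the 40, 40 and 20 ug/m3 quoted above; city data is hypothetical.
YACAQI_DIVISORS = {"NO2": 40.0, "PM10": 40.0, "PM2.5": 20.0}

def yacaqi(annual_means):
    # Arithmetic mean of the normalised pollutant sub-indices.
    subs = [annual_means[p] / d for p, d in YACAQI_DIVISORS.items()]
    return sum(subs) / len(subs)

print(yacaqi({"NO2": 32.0, "PM10": 28.0, "PM2.5": 15.0}))  # 0.75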
India The National Air Quality Index (NAQI) was launched in New Delhi on September 17, 2014, under the Swachh Bharat Abhiyan. The highest AQI recorded in India was 1,081, in New Delhi on 18 November 2024, driven by extreme concentrations of PM2.5 – particulate matter measuring 2.5 microns or less in diameter, which can be carried into the lungs and cause serious respiratory and cardiac disease. The Central Pollution Control Board along with State Pollution Control Boards has been operating the National Air Monitoring Program (NAMP), covering 240 cities of the country with more than 342 monitoring stations. An Expert Group comprising medical professionals, air quality experts, academia, advocacy groups, and SPCBs was constituted and a technical study was awarded to IIT Kanpur. IIT Kanpur and the Expert Group recommended an AQI scheme in 2014. While the earlier measuring index was limited to three indicators, the new index measures eight parameters. The continuous monitoring systems that provide data on a near real-time basis are installed in New Delhi, Mumbai, Pune, Kolkata and Ahmedabad. There are six NAQI categories, namely Good, Satisfactory, Moderate, Poor, Very Poor and Severe. The proposed NAQI will consider eight pollutants (PM10, PM2.5, NO2, SO2, CO, O3, NH3, and Pb) for which short-term (up to 24-hourly averaging period) National Ambient Air Quality Standards are prescribed. Based on the measured ambient concentrations, corresponding standards and likely health impact, a sub-index is calculated for each of these pollutants. The worst sub-index determines the overall NAQI. Likely health impacts for different NAQI categories and pollutants have also been suggested, with primary inputs from the medical experts in the group. The NAQI values and corresponding ambient concentrations (health breakpoints) as well as associated likely health impacts for the identified eight pollutants are as follows: Japan According to the Japan Weather Association, Japan uses a different scale to measure the air quality index. Mexico The air quality in Mexico City is reported in IMECAs. The IMECA is calculated using the measurements of average times of the chemicals ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), particles smaller than 2.5 micrometers (PM2.5), and particles smaller than 10 micrometers (PM10). Singapore Singapore uses the Pollutant Standards Index to report on its air quality, with details of the calculation similar but not identical to those used in Malaysia and Hong Kong. The PSI chart below is grouped by index values and descriptors, according to the National Environment Agency. South Korea The Ministry of Environment of South Korea uses the Comprehensive Air-quality Index (CAI) to describe the ambient air quality based on the health risks of air pollution. The index aims to help the public easily understand the air quality and protect people's health. The CAI is on a scale from 0 to 500, which is divided into six categories. The higher the CAI value, the greater the level of air pollution. The highest of the five air pollutant sub-index values determines the CAI value. The index also has associated health effects and a colour representation of the categories as shown below.
The N Seoul Tower on Namsan Mountain in central Seoul, South Korea, is illuminated in blue, from sunset to 23:00 and 22:00 in winter, on days where the air quality in Seoul is 45 or less. During the spring of 2012, the Tower was lit up for 52 days, which is four days more than in 2011. United Kingdom The most commonly used air quality index in the UK is the Daily Air Quality Index recommended by the Committee on the Medical Effects of Air Pollutants (COMEAP). This index has ten points, which are further grouped into four bands: low, moderate, high and very high. Each of the bands comes with advice for at-risk groups and the general population. The index is calculated from the concentrations of five pollutants: ozone, nitrogen dioxide, sulfur dioxide, PM2.5 and PM10. The breakpoints between index values are defined for each pollutant separately, and the overall index is the maximum of the pollutant sub-indices. Different averaging periods are used for different pollutants. United States The United States Environmental Protection Agency (EPA) has developed an Air Quality Index that is used to report air quality. This AQI is divided into six categories indicating increasing levels of health concern. The AQI is based on the five "criteria" pollutants regulated under the Clean Air Act: ground-level ozone, particulate matter, carbon monoxide, sulfur dioxide, and nitrogen dioxide. The EPA has established National Ambient Air Quality Standards (NAAQS) for each of these pollutants in order to protect public health. An AQI value of 100 generally corresponds to the level of the NAAQS for the pollutant. The Clean Air Act (USA) (1990) requires the EPA to review its National Ambient Air Quality Standards every five years to reflect evolving health effects information. The Air Quality Index is adjusted periodically to reflect these changes. Computing the AQI The air quality index is a piecewise linear function of the pollutant concentration. At the boundary between AQI categories, there is a discontinuous jump of one AQI unit. To convert from concentration to AQI this equation is used: I = (I_high − I_low) / (C_high − C_low) × (C − C_low) + I_low (If multiple pollutants are measured, the calculated AQI is the highest value calculated from the above equation applied for each pollutant.) where: I = the (Air Quality) index, C = the pollutant concentration, C_low = the concentration breakpoint that is ≤ C, C_high = the concentration breakpoint that is ≥ C, I_low = the index breakpoint corresponding to C_low, I_high = the index breakpoint corresponding to C_high. The EPA's table of breakpoints is: Suppose a monitor records a 24-hour average fine particle (PM2.5) concentration of 26.4 micrograms per cubic meter, falling in the breakpoint interval 9.1–35.4 μg/m3, which maps to the index interval 51–100. The equation above results in an AQI of: I = (100 − 51) / (35.4 − 9.1) × (26.4 − 9.1) + 51 ≈ 83.2, which rounds to an index value of 83, corresponding to air quality in the "Moderate" range. To convert an air pollutant concentration to an AQI, EPA has developed a calculator. If multiple pollutants are measured at a monitoring site, then the largest or "dominant" AQI value is reported for the location. The ozone AQI between 100 and 300 is computed by selecting the larger of the AQI calculated with a 1-hour ozone value and the AQI computed with the 8-hour ozone value. Eight-hour ozone averages do not define AQI values greater than 300; AQI values of 301 or greater are calculated with 1-hour ozone concentrations. 1-hour SO2 values do not define AQI values greater than 200; AQI values of 201 or greater are calculated with 24-hour SO2 concentrations.
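The piecewise-linear conversion above is straightforward to implement. The following Python sketch uses PM2.5 breakpoint rows consistent with the worked example (EPA revises its breakpoint tables periodically, so these rows should be treated as illustrative rather than authoritative; concentrations are assumed truncated to the table's 0.1 μg/m3 resolution):

# (C_low, C_high, I_low, I_high) for 24-hour PM2.5 in ug/m3, assumed values.
PM25_BREAKPOINTS = [
    (0.0, 9.0, 0, 50),
    (9.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 125.4, 151, 200),
    (125.5, 225.4, 201, 300),
    (225.5, 325.4, 301, 500),
]

def aqi_from_concentration(c, table=PM25_BREAKPOINTS):
    for c_lo, c_hi, i_lo, i_hi in table:
        if c_lo <= c <= c_hi:
            # Linear interpolation within the matching breakpoint interval.
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("concentration outside the breakpoint table")

print(aqi_from_concentration(26.4))  # 83, i.e. "Moderate"

When several pollutants are measured at a site, the same function is applied to each pollutant with its own breakpoint table, and the maximum result is reported.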
Real-time monitoring data from continuous monitors are typically available as 1-hour averages. However, computation of the AQI for some pollutants requires averaging over multiple hours of data. (For example, calculation of the ozone AQI requires computation of an 8-hour average, and computation of the PM2.5 or PM10 AQI requires a 24-hour average.) To accurately reflect the current air quality, the multi-hour average used for the AQI computation should be centered on the current time, but as concentrations of future hours are unknown and are difficult to estimate accurately, EPA uses surrogate concentrations to estimate these multi-hour averages. For reporting the PM2.5, PM10 and ozone air quality indices, this surrogate concentration is called the NowCast. The NowCast is a particular type of weighted average that provides more weight to the most recent air quality data when air pollution levels are changing. Public availability of the AQI Real-time monitoring data and forecasts of air quality that are color-coded in terms of the air quality index are available from EPA's AirNow web site. Other organizations provide monitoring for members of sensitive groups such as asthmatics, children and adults over the age of 65. Historical air monitoring data including AQI charts and maps are available at EPA's AirData website. There is a free email subscription service for New York inhabitants – AirNYC. Subscribers get notifications about changes in the AQI values for a selected location (e.g. home address), based on air quality conditions. A detailed map containing current AQI levels and a two-day AQI forecast is available at the Aerostate web site. Regulatory Air Monitors and Low Cost Sensors Historically, EPA has only allowed data from regulatory monitors operated by regulatory or public health professionals to be included in its real-time national maps. In the past decade, low cost sensors (LCSs) have become increasingly popular with citizen scientists, and large LCS networks have sprung up in the US and across the globe. Recently, EPA has developed a data correction algorithm for a particular brand of PM2.5 LCS (the PurpleAir monitor) that makes the LCS data comparable to regulatory data for the purpose of computing the AQI. This corrected LCS data currently appears alongside regulatory data on EPA's national fire map.
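Returning to the NowCast surrogate described earlier in this section, the following Python sketch follows the general shape of EPA's published PM NowCast method (12-hour window, weight factor floored at 1/2); treat it as illustrative rather than an authoritative implementation:

def nowcast(hourly):
    # hourly[0] is the most recent 1-hour concentration; use up to 12 hours.
    c = hourly[:12]
    w = max(min(c) / max(c), 0.5)  # weight shrinks when levels vary; floor at 0.5
    num = sum(ci * w**i for i, ci in enumerate(c))
    den = sum(w**i for i in range(len(c)))
    return num / den

print(nowcast([10.0] * 12))                 # steady air: 10.0
print(round(nowcast([40, 30, 20, 10]), 1))  # rising pollution: 32.7

Because recent hours dominate when the weight factor is small, the NowCast responds quickly to a sudden smoke plume while still smoothing routine hour-to-hour noise.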
History of the AQI The AQI made its debut in 1968, when the National Air Pollution Control Administration undertook an initiative to develop an air quality index and to apply the methodology to Metropolitan Statistical Areas. The impetus was to draw public attention to the issue of air pollution and indirectly push responsible local public officials to take action to control sources of pollution and enhance air quality within their jurisdictions. Jack Fensterstock, the head of the National Inventory of Air Pollution Emissions and Control Branch, was tasked to lead the development of the methodology and to compile the air quality and emissions data necessary to test and calibrate resultant indices. The initial iteration of the air quality index used standardized ambient pollutant concentrations to yield individual pollutant indices. These indices were then weighted and summed to form a single total air quality index. The overall methodology could use concentrations that are taken from ambient monitoring data or are predicted by means of a diffusion model. The concentrations were then converted into a standard statistical distribution with a preset mean and standard deviation. The resultant individual pollutant indices were assumed to be equally weighted, although values other than unity could be used. Likewise, the index could incorporate any number of pollutants, although it was only used to combine SOx, CO, and TSP because of a lack of available data for other pollutants. While the methodology was designed to be robust, the practical application for all metropolitan areas proved to be inconsistent due to the paucity of ambient air quality monitoring data, lack of agreement on weighting factors, and non-uniformity of air quality standards across geographical and political boundaries. Despite these issues, the publication of lists ranking metropolitan areas achieved the public policy objectives and led to the future development of improved indices and their routine application.
Physical sciences
Other_2
Basics and measurement
1945208
https://en.wikipedia.org/wiki/Burgundy%20%28color%29
Burgundy (color)
Burgundy is a purplish red. The color burgundy takes its name from the Burgundy wine in France. When referring to the color, "burgundy" is not usually capitalized. The color burgundy is similar to Bordeaux (Web color code #4C1C24), Merlot (#73343A), Berry (#A01641), and Redberry (#701f28). Burgundy is made of 50% red, 0% green, and 13% blue. The CMYK percentages are 0% cyan, 100% magenta, 75% yellow, 50% black. The first recorded use of "burgundy" as a color name in English was in 1881. Variations Vivid burgundy In cosmetology, a brighter tone of burgundy called vivid burgundy is used for coloring hair. Old burgundy The color old burgundy is a dark tone of burgundy. The first recorded use of old burgundy as a color name in English was in 1926.
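The RGB and CMYK figures above are consistent with each other, as a quick check with the standard RGB-to-CMYK conversion shows. A Python sketch, assuming burgundy's RGB fractions of 0.50, 0.0 and 0.125 (approximately the hex value #800020; the conversion assumes the colour is not pure black):

def rgb_to_cmyk(r, g, b):
    # Standard conversion: K is the darkness, C/M/Y are the scaled residuals.
    k = 1 - max(r, g, b)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(0.50, 0.0, 0.125))  # (0.0, 1.0, 0.75, 0.5)

This reproduces the quoted 0% cyan, 100% magenta, 75% yellow and 50% black.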
Physical sciences
Colors
Physics
21059721
https://en.wikipedia.org/wiki/Ulmus%20parvifolia
Ulmus parvifolia
Ulmus parvifolia, commonly known as the Chinese elm or lacebark elm, is a species native to eastern Asia, including China, India, Japan, Korea, and Vietnam. It has been described as "one of the most splendid elms, having the poise of a graceful Nothofagus". Description A small to medium deciduous or semideciduous (rarely semievergreen) tree, it grows to tall and wide with a slender trunk and crown. The leathery, lustrous green, single-toothed leaves are small, 2–5 cm long by 1–3 cm broad, and often retained as late as December or even January in Europe and North America. In some years the leaves take on a purplish-red autumn colour. The apetalous wind-pollinated perfect flowers are produced in early autumn, small and inconspicuous. The fruit is a samara, elliptical to ovate-elliptical, 10–13 mm long by 6–8 mm broad. The samara is mostly glabrous; the seed, at the centre or toward the apex, is borne on a stalk 1–3 mm in length; it matures rapidly and disperses by late autumn. The trunk has a handsome, flaking bark of mottled greys with tans and reds, giving rise to its other common name, the lacebark elm, although scarring from major branch loss can lead to large, canker-like wounds. Ploidy: 2n = 28. Many nurserymen and foresters mistakenly refer to Ulmus pumila, the rapidly growing, disease-ridden, relatively short-lived, weak-wooded Siberian elm, as "Chinese elm". This has given the true Chinese elm an undeserved bad reputation. The two elms are very distinct and different species. The Siberian elm's bark becomes deeply ridged and furrowed with age, among other obvious differences. It possesses a very rough, greyish-black appearance, while the Chinese elm's smooth bark becomes flaky and blotchy, exposing very distinctive, light-coloured mottling, hence the synonym lacebark elm for the real Chinese elm. Wood and timber Elms, hickory, and ash all have remarkably hard, tough wood, making them popular for tool handles, bows, and baseball bats. Chinese elm is considered the hardest of the elms. Chinese elm is said to be the best of all woods for chisel handles and similar uses due to its superior hardness, toughness, and resistance to splitting. Chinese elm lumber is used most for furniture, cabinets, veneer, hardwood flooring, and specialty uses such as longbow construction and tool handles. Most commercially milled lumber goes directly to manufacturers rather than to retail lumber outlets. Chinese elm heartwood ranges in tone from reddish-brown to light tan, while the sapwood approaches off-white. The grain is often handsome and dramatic. Unlike other elms, freshly cut Chinese elm has a peppery or spicy odour. While it turns easily and will take a nice polish off the lathe without any finish, and it holds detail well, the fibrous wood is usually considered too tough for carving or hand tools. Chinese elm contains silica, which is hard on planer knives and chainsaws, but it sands fairly easily. As with other woods with interlocking grain, planes should be kept extra sharp to prevent tearing at the grain margins. It steam-bends easily and holds screws well, but pilot holes and countersinking are needed. It tends to be a "lively" wood, tending to warp and distort while drying. This water-resistant wood easily takes most finishes and stains. Taxonomy Subspecies, varieties, and forms: Ulmus parvifolia var. coreana Nakai Ulmus parvifolia f. lanceolata Ueki Pests and diseases The Chinese elm is highly resistant, but not immune, to Dutch elm disease.
It is also very resistant to the elm leaf beetle Xanthogaleruca luteola, but has a moderate susceptibility to elm yellows. In trials at the Sunshine Nursery, Oklahoma, the species was adjudged as having the best pest resistance of about 200 taxa. However, foliage was regarded as only "somewhat resistant" to black spot by the Plant Diagnostic Clinic of the University of Missouri. Cottony cushion scale or mealy bugs, often protected and "herded" by ants, exude sticky, sweet honeydew, which can mildew leaves and be a minor annoyance by dripping on cars and furniture. However, severe infestations on or obvious damage to otherwise healthy trees are uncommon. In some regions of the Southern United States, a fungus known as Phymatotrichopsis omnivora is known to cause sudden death of lacebark elms when infected. Alan Mitchell reported (1984) that established trees at Kew Gardens and at Royal Victoria Park, Bath, had been killed by honey fungus. Cultivation The Chinese elm is a tough landscape tree, hardy enough for use in harsh planting situations such as parking lots, small planters along streets, and plazas or patios. The tree is arguably the most ubiquitous elm, now found on all continents except Antarctica. It was introduced to Europe at the end of the 18th century as an ornamental and is found in many botanical gardens and arboreta. The tree was introduced to the UK in 1794 by James Main, who collected in China for Gilbert Slater of Low Layton, Essex. It was also introduced to the United States in 1794, where, before the introduction of cold-hardy forms from the 1990s, it was mainly planted in southern States and in California. It has proved very popular in recent years as a replacement for American elms killed by Dutch elm disease. The tree was distributed in Victoria, Australia, from 1857. At the beginning of the 20th century, Searl's Garden Emporium, in Sydney, marketed it. Three U. parvifolia were supplied in 1902 by Späth to the Royal Botanic Garden Edinburgh. In New Zealand, it was found to be particularly suitable for windswept locations along the coast. The tree is commonly planted as an ornamental in Japan, notably around Osaka Castle. Ulmus parvifolia is one of the cold-hardiest of the Chinese species. In artificial freezing tests at the Morton Arboretum, the LT50 (temp. at which 50% of tissues die) was found to be . Bonsai Owing to its versatility and ability to tolerate a wide range of temperatures, light, and humidity conditions, the Chinese elm is a popular choice as a bonsai species, and is perhaps the single most widely available. It is considered a good choice for beginners because of its high tolerance of pruning. Cultivars Numerous cultivars have been raised, mostly in North America: Hybrid cultivars It is an autumn-flowering species, whereas most other elms flower in the spring. Hybrids include: Frontier, Rebella. Naturalisation U. parvifolia has become naturalised in various parts of the US, including Idaho, West Virginia, and Kentucky. It is listed as invasive in District of Columbia, North Carolina, Nebraska, New Jersey, Virginia, and Wisconsin. Notable trees The tree in Central Park, New York, planted c.1873, from which the cultivar was cloned, was believed to be the oldest specimen of lacebark elm in the US at the time of its death in the 1990s, with a diameter at breast height of 1.4 m. Accessions North America Arnold Arboretum, US. Acc. nos. 1353-73, 17917, 195-90, 197-90. Bartlett Tree Experts, US. Acc. nos. 5546, 8109. Brenton Arboretum, Dallas Center, Iowa, US.
No details available. Brooklyn Botanic Garden, New York, US. Acc. nos. 000880, 160001, 20020466, 850222, X00450, X00485, X02727, X02771. Chicago Botanic Garden, US. 2 trees, no other details available. Dominion Arboretum, Ottawa, Ontario, Canada. No acc. details. Fullerton Arboretum, California State University, US. Acc. no. 80-036. Holden Arboretum, US. Acc. nos. 57-1241, 80-665, 84-1214, 90-323. Longwood Gardens, US. Acc. nos. 1957-1058, 1959-1500, 1960-1138, 1991-0981. Missouri Botanical Garden, St. Louis, US. Acc. nos. 1986-0108, 1986-0276, 1986-0277, 1987-0019, 199-3195, 1996-3462. Morris Arboretum, University of Pennsylvania, US. Acc. no. 32-0052-A. Morton Arboretum, US. Acc. nos. 991-27, 772-54, 1231–57, 558-83, 52-96. New York Botanical Garden, US. Acc. nos. 195/56, 486/91, 68072. Phipps Conservatory, US. Acc. nos. 83-006, 83-058, 91-050, 2001-212UN. Scott Arboretum, US. Acc. nos. 62210, 71765, 71767, 71771, 75152, 64441. Smith College, US. Acc. no. 42894. U S National Arboretum, Washington, D.C., US. Acc. nos. 58000/1/2/3/4/5/6/7/8. Europe Brighton & Hove City Council, UK. NCCPG Elm Collection. Cambridge Botanic Garden, University of Cambridge, UK. No accession details available. Dyffryn Gardens, Glamorgan. UK champion, 13 m high, 37 cm d.b.h., last surveyed 1997. Grange Farm Arboretum, Sutton St. James, Spalding, Lincolnshire, UK. Acc. no. 516. Great Fontley Butterfly Conservation Elm Trials plantation, UK. One seedling planted 2019. Hortus Botanicus Nationalis, Salaspils, Latvia. Acc. nos. 18150, 18151. Linnaean Gardens of Uppsala, Sweden. Acc. no. 2002-1542. Royal Botanic Gardens Kew. Acc. nos. 1979-1613, 1979-1614, 1982–8479, 1982-8505, 1982-6280, 1982-6284, 2002-137, 2003-1267, 2005-1076. Royal Botanic Gardens Kew Wakehurst Place, UK. Acc. nos. 1969-33664, 1969-35133, 1973-21049, 1973-21525. Royal Horticultural Society Gardens, Wisley, UK. No details are available. Wijdemeren City Council Elm arboretum: 4 cv. ‘UPMTF’ planted Molenmeent Loosdrecht in 2017. Strona Arboretum, University of Life Sciences, Warsaw, Poland. No accession details are available. Tallinn Botanic Garden, Estonia. No accession details available. Thenford House arboretum, Banbury, UK. No details are available. University of Copenhagen Botanic Garden. Denmark. Acc. nos. S1956-1338, S1997-1304. Westonbirt Arboretum, Tetbury, Glos., UK. Planted 1981. No acc. no. Australasia Eastwoodhill Arboretum, Gisborne, New Zealand. 9 trees, details not known.
Biology and health sciences
Rosales
Plants
6485164
https://en.wikipedia.org/wiki/Aquaculture%20of%20catfish
Aquaculture of catfish
Catfish are easy to farm in warm climates, leading to inexpensive and safe food at local grocers. Catfish raised in inland tanks or channels are considered safe for the environment, since their waste and disease should be contained and not spread to the wild. Asia In Asia, many catfish species are important as food. Several airbreathing catfish (Clariidae) and shark catfish (Pangasiidae) species are heavily cultured in Africa and Asia. Exports of one particular shark catfish species from Vietnam, Pangasius bocourti, have met with pressures from the U.S. catfish industry. In 2003, the United States Congress passed a law preventing the imported fish from being labeled as catfish. As a result, the Vietnamese exporters of this fish now label their products sold in the U.S. as "basa fish". United States Ictalurids are cultivated in North America, especially in the Deep South, with Mississippi being the largest domestic catfish producer. Channel catfish (Ictalurus punctatus) supported a $450 million/yr aquaculture industry in 2003. The US farm-raised catfish industry began in the early 1960s in Kansas, Oklahoma and Arkansas. Channel catfish quickly became the major catfish grown, as it was hardy and easily spawned in earthen ponds. By the late 1960s, the industry moved into the Mississippi Delta as farmers struggled with sagging profits in cotton, rice and soybeans, especially on those farm areas where soils had a very high clay content. The Mississippi Delta became the home of the catfish industry, as it had the soils, climate and shallow aquifers to provide water for the earthen ponds that grow 360-380 million pounds (160,000 to 170,000 tons) of catfish annually. Catfish are fed a grain-based diet that includes soybean meal. Fish are fed daily through the summer, at rates of 1-6% of body weight, with pelleted floating feed. Catfish need about two pounds of feed to produce one pound of live weight. Mississippi is home to of catfish ponds, the largest of any state. Other states important in growing catfish include Alabama, Arkansas and Louisiana. Aquarium There is a large and growing ornamental fish trade, with hundreds of species of catfish, such as Corydoras and armored suckermouth catfish (often called plecos), being a popular component of many aquaria. Other catfish commonly found in the aquarium trade are banjo catfish, talking catfish, and long-whiskered catfish.
Technology
Aquaculture
null
6487947
https://en.wikipedia.org/wiki/Bullet%20Cluster
Bullet Cluster
The Bullet Cluster (1E 0657-56) consists of two colliding clusters of galaxies. Strictly speaking, the name Bullet Cluster refers to the smaller subcluster, moving away from the larger one. It is at a comoving radial distance of . The object is of particular note for astrophysicists, because gravitational lensing studies of the Bullet Cluster are claimed to provide strong evidence for the existence of dark matter. Observations of other galaxy cluster collisions, such as MACS J0025.4-1222, similarly support the existence of dark matter. Overview The major components of the cluster pair—stars, gas and the putative dark matter—behave differently during collision, allowing them to be studied separately. The stars of the galaxies, observable in visible light, were not greatly affected by the collision, and most passed right through, gravitationally slowed but not otherwise altered. The hot gas of the two colliding components, seen in X-rays, represents most of the baryonic, or "ordinary", matter in the cluster pair. The gases of the intracluster medium interact electromagnetically, causing the gases of both clusters to slow much more than the stars. The third component, the dark matter, was detected indirectly by the gravitational lensing of background objects. In theories without dark matter, such as modified Newtonian dynamics (MOND), the lensing would be expected to follow the baryonic matter; i.e. the X-ray gas. However, the lensing is strongest in two separated regions near (possibly coincident with) the visible galaxies. This provides support for the idea that most of the gravitation in the cluster pair is in the form of two regions of dark matter, which bypassed the gas regions during the collision. This accords with predictions of dark matter as interacting only gravitationally, apart from possible weak interactions. The Bullet Cluster is one of the hottest-known clusters of galaxies. It provides an observable constraint for cosmological models, which may diverge at temperatures beyond their predicted critical cluster temperature. Observed from Earth, the subcluster passed through the cluster center 150 million years ago, creating a "bow-shaped shock wave located near the right side of the cluster" formed as "70 million kelvin gas in the sub-cluster plowed through 100 million kelvin gas in the main cluster at a speed of nearly 10 million km/h (6 million miles per hour)". The bow shock radiation output is equivalent to the energy of 10 typical quasars. Significance to dark matter The Bullet Cluster provides strong evidence for the nature of dark matter and provides "evidence against some of the more popular versions of modified Newtonian dynamics (MOND)" as applied to large galactic clusters. At a statistical significance of 8σ, it was found that the spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law alone. According to Greg Madejski: According to Eric Hayashi: A 2010 study claimed that the velocities of the collision are "incompatible with the prediction of a LCDM model". However, subsequent work has found the collision to be consistent with LCDM simulations, with the previous discrepancy stemming from small simulations and the methodology of identifying pairs. Earlier work claiming the Bullet Cluster was inconsistent with standard cosmology was based on an erroneous estimate of the in-fall velocity based on the speed of the shock in the X-ray-emitting gas.
Based on the analysis of the shock driven by the merger, it was recently argued that a lower merger velocity of ~3,950 km/s is consistent with the Sunyaev–Zeldovich effect and X-ray data, provided that the equilibration of the electron and ion downstream temperatures is not instantaneous. Alternative interpretations Mordehai Milgrom, the original proposer of modified Newtonian dynamics, has posted an online rebuttal of claims that the Bullet Cluster proves the existence of dark matter. He contends that the observed characteristics of the Bullet Cluster could just as well be caused by undetected standard matter. Another study in 2006 cautions against "simple interpretations of the analysis of weak lensing in the bullet cluster", leaving it open that even in the non-symmetrical case of the Bullet Cluster, MOND, or rather its relativistic version TeVeS (tensor–vector–scalar gravity), could account for the observed gravitational lensing. Other alternative theories of gravity, such as MOG and many-body gravity (MBG), also claim to be able to explain the Bullet Cluster's weak gravitational lensing.
Physical sciences
Notable galaxy clusters
Astronomy
6493704
https://en.wikipedia.org/wiki/D%C3%A9collement
Décollement
Décollement () is a gliding plane between two rock masses, also known as a basal detachment fault. Décollements are a deformational structure, resulting in independent styles of deformation in the rocks above and below the fault. They are associated with both compressional settings (involving folding and overthrusting) and extensional settings. Origin The term was first used by geologists studying the structure of the Swiss Jura Mountains, coined in 1907 by A. Buxtorf, who released a paper that theorized that the Jura is the frontal part of a décollement at the base of a nappe, rooted in the faraway Swiss Alps. Marcel Alexandre Bertrand published a paper in 1884 that dealt with Alpine nappism. Thin-skinned tectonics was implied in that paper but the actual term was not used until Buxtorf's 1907 publication. Formation Décollements are caused by surface forces, which 'push' at converging plate boundaries, facilitated by body forces (gravity sliding). Mechanically weak layers in strata allow the development of stepped thrusts (either over- or underthrusts), which originate at subduction zones and emerge deep in the foreland. Rock bodies with differing lithologies have different characteristics of tectonic deformation. They can behave in a brittle manner above the décollement surface, with intense ductile deformation below the décollement surface. Décollement horizons may be at depths as great as 10 km and form due to high compressibility between differing rock bodies or along planes of high pore pressures. Typically, the basal detachment of the foreland part of a fold-thrust belt lies in a weak shale or evaporite at or near the basement. Rocks above the décollement are allochthonous, rocks below are autochthonous. If material is transported along a décollement greater than 2 km, it may be considered a nappe. The faulting and folding that occurs with a regional basal detachment may be referred to as "thin-skinned tectonics", but décollements occur in 'thick-skinned' deformational regimes as well. Compressional setting In a fold-thrust belt, the décollement is the lowest detachment (see Fig 1.) and forms in the foreland basin of a subduction zone. A fold-thrust belt may contain other detachments above the décollement—an imbricate fan of thrust faults and duplexes as well as other detachment horizons. In compressional settings, the layer directly above the décollement will develop more intense deformation than other layers, and weaker deformation below the décollement. Effect of friction Décollements are responsible for duplex formation, the geometry of which greatly influences the dynamics of the thrust wedge. The amount of friction along the décollement affects the shape of the wedge; a low-angle slope reflects a low-friction décollement, whereas a higher-angle slope reflects a higher-friction basal detachment. Types of folding Two different types of folding may occur at a décollement. Concentric folding is identified by uniform bed thickness throughout the fold, and is necessarily accompanied by detachment or a décollement as part of the deformation that occurs with a thrust fault. Disharmonic folding does not have uniform bed thickness throughout the fold. Extensional setting Décollements in extensional settings are accompanied by tectonic denudation and high cooling rates. They can form by several methods: The megalandslide model predicts extension with normal faults near the original fault source and shortening further away from the source. 
The in situ model predicts numerous normal faults overlying one large décollement. The rooted, low angle normal fault model predicts that the décollement is created when two thin sheets of rock decouple at depth. Near the thickest part of the upper plate, extensional faulting may be negligible or absent, but as the upper plate thins, it loses its ability to remain coherent and may behave as a thin-skinned extensional terrane. Décollements can form from high angle normal faults. Uplift in a second stage of extension allows the exhumation of a metamorphic core complex (see Fig. 2). A half graben forms, but stress orientation is not perturbed due to high fault friction. Next, elevated pore pressure (Pp) leads to low effective friction that forces σ1 to be parallel to the fault in the footwall. A low-angle fault forms and is ready to act as décollement. Then, the upper crust is thinned above the décollement by normal faulting. New high-angle faults control the propagation of the décollement and help crustal exhumation. Finally, major and rapid horizontal extension lifts the terrain isostatically and isothermally. A décollement develops as an antiform that migrates toward shallower depths. Examples Jura Décollement Located in the Jura Mountains, north of the Alps, it was originally thought to be a folded décollement nappe. The thin-skinned nappe was sheared off on 1000 meter-thick deposits of Triassic evaporites. The frontal basal detachment of the Jura fold-and-thrust belt forms the most external limit of the Alpine orogenic wedge with the youngest fold-and-thrust activity. The Mesozoic and Cenozoic cover of the fold-and-thrust belt and the adjacent Molasse Basin have been deformed over the weak basal décollement and displaced by some 20 km and more toward the northwest. Appalachian-Ouachita Décollement The Appalachian-Ouachita orogen along the southeastern margin of the North American craton includes a late Paleozoic fold-thrust belt with a thin-skinned flat-and-ramp geometry, related to lateral and vertical variations in rock lithologies. The décollement surface varies along and across strike. Promontories and embayments in the late Precambrian-early Paleozoic rifted margin are preserved in the décollement geometry.
Physical sciences
Tectonics
Earth science
8453671
https://en.wikipedia.org/wiki/Subtractor
Subtractor
In electronics, a subtractor – a digital circuit that performs subtraction of numbers – can be designed using the same approach as that of an adder. The binary subtraction process is summarized below. As with an adder, in the general case of calculations on multi-bit numbers, three bits are involved in performing the subtraction for each bit of the difference: the minuend (X_i), subtrahend (Y_i), and a borrow in from the previous (less significant) bit order position (B_i). The outputs are the difference bit (D_i) and borrow bit (B_i+1). The subtractor is best understood by considering that the subtrahend and both borrow bits have negative weights, whereas the X and D bits are positive. The operation performed by the subtractor is to rewrite X_i − Y_i − B_i (which can take the values −2, −1, 0, or 1) as the sum −2·B_i+1 + D_i. The difference bit is D_i = X_i ⊕ Y_i ⊕ B_i, where ⊕ represents exclusive or. Subtractors are usually implemented within a binary adder for only a small cost when using the standard two's complement notation, by providing an addition/subtraction selector to the carry-in and to invert the second operand: −Y = ¬Y + 1 (definition of two's complement notation), so that X − Y = X + ¬Y + 1. Half subtractor The half subtractor can be designed through combinational Boolean logic circuits as shown in Figures 1 and 2. The half subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, the minuend X and subtrahend Y, and two outputs, the difference D and borrow out B_out. The borrow out signal is set when the subtractor needs to borrow from the next digit in a multi-digit subtraction. That is, B_out = 1 when X < Y. Since X and Y are bits, X < Y if and only if X = 0 and Y = 1. An important point worth mentioning is that the half subtractor diagram aside implements X − Y and not Y − X, since B_out on the diagram is given by B_out = ¬X·Y. This is an important distinction to make since subtraction itself is not commutative, but the difference bit D is calculated using an XOR gate, which is commutative. The truth table for the half subtractor is: Using the table above and a Karnaugh map, we find the following logic equations for D and B_out: D = X ⊕ Y and B_out = ¬X·Y. Consequently, a simplified half-subtract circuit, advantageously avoiding crossed traces in particular as well as a negate gate, is:
X ── XOR ─┬─────── |X-Y|, is 0 if X equals Y, 1 otherwise
┌──┘ └──┐
Y ─┴─────── AND ── borrow, is 1 if Y > X, 0 otherwise
where lines to the right are outputs and others (from the top, bottom or left) are inputs. Full subtractor The full subtractor is a combinational circuit which is used to perform subtraction of three input bits: the minuend X, subtrahend Y, and borrow in B_in. The full subtractor generates two output bits: the difference D and borrow out B_out. B_in is set when the previous digit is borrowed from. Thus, B_in is also subtracted from X, as well as the subtrahend Y. Or in symbols: X − Y − B_in. Like the half subtractor, the full subtractor generates a borrow out when it needs to borrow from the next digit. Since we are subtracting Y and B_in from X, a borrow out needs to be generated when X − Y − B_in < 0. When a borrow out is generated, 2 is added in the current digit. (This is similar to the subtraction algorithm in decimal. Instead of adding 2, we add 10 when we borrow.) Therefore, D = X − Y − B_in + 2·B_out. The truth table for the full subtractor is: Therefore the equations are: D = X ⊕ Y ⊕ B_in and B_out = ¬X·Y + ¬X·B_in + Y·B_in.
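A short Python sketch can verify the reconstructed full-subtractor equations against the arithmetic definition X − Y − B_in = D − 2·B_out over all eight input combinations:

def full_subtractor(x, y, b_in):
    d = x ^ y ^ b_in  # D = X xor Y xor Bin
    # Bout is the majority of (not X), Y and Bin; "1 - x" acts as NOT on a bit.
    b_out = ((1 - x) & y) | ((1 - x) & b_in) | (y & b_in)
    return d, b_out

for x in (0, 1):
    for y in (0, 1):
        for b in (0, 1):
            d, b_out = full_subtractor(x, y, b)
            assert x - y - b == d - 2 * b_out
print("full subtractor verified")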
Technology
Digital logic
null
4946686
https://en.wikipedia.org/wiki/Relative%20velocity
Relative velocity
The relative velocity of an object B relative to an observer A, denoted v_B|A (also v_BA or v_B rel A), is the velocity vector of B measured in the rest frame of A. The relative speed is the vector norm of the relative velocity. Classical mechanics In one dimension (non-relativistic) We begin with relative motion in the classical (or non-relativistic, or Newtonian) approximation, in which all speeds are much less than the speed of light. This limit is associated with the Galilean transformation. The figure shows a man on top of a train, at the back edge. At 1:00 pm he begins to walk forward at a walking speed of 10 km/h (kilometers per hour). The train is moving at 40 km/h. The figure depicts the man and train at two different times: first, when the journey began, and also one hour later at 2:00 pm. The figure suggests that the man is 50 km from the starting point after having traveled (by walking and by train) for one hour. This, by definition, is 50 km/h, which suggests that the prescription for calculating relative velocity in this fashion is to add the two velocities. The diagram displays clocks and rulers to remind the reader that while the logic behind this calculation seems flawless, it makes false assumptions about how clocks and rulers behave. (See The train-and-platform thought experiment.) To recognize that this classical model of relative motion violates special relativity, we generalize the example into an equation: v_M|E = v_M|T + v_T|E, where: v_M|E is the velocity of the Man relative to Earth, v_M|T is the velocity of the Man relative to the Train, v_T|E is the velocity of the Train relative to Earth. Fully legitimate expressions for "the velocity of A relative to B" include "the velocity of A with respect to B" and "the velocity of A in the coordinate system where B is always at rest". The violation of special relativity occurs because this equation for relative velocity falsely predicts that different observers will measure different speeds when observing the motion of light. In two dimensions (non-relativistic) The figure shows two objects A and B moving at constant velocity. The equations of motion are: r_A = r_Ai + v_A·t and r_B = r_Bi + v_B·t, where the subscript i refers to the initial displacement (at time t equal to zero). The difference between the two displacement vectors, r_B − r_A, represents the location of B as seen from A. Hence: r_B − r_A = (r_Bi − r_Ai) + (v_B − v_A)·t. After making the substitutions r_B|Ai = r_Bi − r_Ai and v_B|A = v_B − v_A, we have: r_B|A = r_B|Ai + v_B|A·t. Galilean transformation (non-relativistic) To construct a theory of relative motion consistent with the theory of special relativity, we must adopt a different convention. Continuing to work in the (non-relativistic) Newtonian limit we begin with a Galilean transformation in one dimension: x′ = x − v·t and t′ = t, where x′ is the position as seen by a reference frame that is moving at speed v in the "unprimed" (x) reference frame. Taking the differential of the first of the two equations above, we have dx′ = dx − v·dt, and using what may seem like the obvious statement that dt′ = dt, we have: dx′/dt′ = dx/dt − v. To recover the previous expressions for relative velocity, we assume that particle A is following the path defined by dx/dt in the unprimed reference (and hence dx′/dt′ in the primed frame). Thus dx/dt = v_A|O and dx′/dt′ = v_A|O′, where v_A|O and v_A|O′ refer to the motion of A as seen by an observer in the unprimed and primed frame, respectively. Recall that v is the motion of a stationary object in the primed frame, as seen from the unprimed frame. Thus we have v = v_O′|O, and: v_A|O′ = v_A|O − v_O′|O = v_A|O + v_O|O′, where the latter form has the desired (easily learned) symmetry.
Special relativity As in classical mechanics, in special relativity the relative velocity v_B|A is the velocity of an object or observer B in the rest frame of another object or observer A. However, unlike the case of classical mechanics, in special relativity it is generally not the case that v_B|A = −v_A|B. This peculiar lack of symmetry is related to Thomas precession and the fact that two successive Lorentz transformations rotate the coordinate system. This rotation has no effect on the magnitude of a vector, and hence relative speed is symmetrical: |v_B|A| = |v_A|B|. Parallel velocities In the case where two objects are traveling in parallel directions, the relativistic formula for relative velocity is similar in form to the formula for addition of relativistic velocities: v_B|A = (v_B − v_A) / (1 − v_A·v_B/c²). The relative speed is given by the formula: |v_B|A| = |v_B − v_A| / (1 − v_A·v_B/c²). Perpendicular velocities In the case where two objects are traveling in perpendicular directions, with A moving along x at speed v_A and B moving along y at speed v_B in the lab frame, the relativistic relative velocity is given by the formula: v_B|A = (−v_A, v_B/γ_A), where γ_A = 1/√(1 − v_A²/c²) is the Lorentz factor of A. The relative speed is given by the formula: |v_B|A| = √(v_A² + v_B² − v_A²·v_B²/c²). General case The general formula for the relative velocity of an object or observer B in the rest frame of another object or observer A is given by the formula: v_B|A = [ v_B/γ_A − v_A + (γ_A/(γ_A + 1))·(v_A·v_B/c²)·v_A ] / (1 − v_A·v_B/c²), where γ_A = 1/√(1 − |v_A|²/c²) is the Lorentz factor of A. The relative speed is given by the formula: |v_B|A| = √( |v_A − v_B|² − |v_A × v_B|²/c² ) / (1 − v_A·v_B/c²).
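As a numerical illustration of the parallel-velocity formula reconstructed above, a small Python sketch (the 0.8c and 0.6c values are arbitrary example speeds, not taken from the article):

C = 299_792_458.0  # speed of light in m/s

def relative_speed_parallel(v_b, v_a):
    # Speed of B in A's rest frame when both move along the same line.
    return abs(v_b - v_a) / (1 - v_a * v_b / C**2)

print(relative_speed_parallel(0.8 * C, 0.6 * C) / C)  # ~0.385, not 0.2

The naive Galilean answer would be 0.2c; the relativistic relative speed is about 0.385c, illustrating why velocities cannot simply be subtracted near the speed of light.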
Physical sciences
Classical mechanics
Physics
4948091
https://en.wikipedia.org/wiki/Port%20of%20New%20York%20and%20New%20Jersey
Port of New York and New Jersey
The Port of New York and New Jersey is the port district of the New York-Newark metropolitan area, encompassing the region within approximately a radius of the Statue of Liberty National Monument. It includes the system of navigable waterways in the New York–New Jersey Harbor Estuary, which runs along over of shoreline in the vicinity of New York City and northeastern New Jersey, and is considered one of the largest natural harbors in the world. Having long been the busiest port on the East Coast, it became the busiest port by maritime cargo volume in the United States in 2022 and is a major economic engine for the region. The region's airports make the port the nation's top gateway for international flights and its busiest center for overall passenger and air freight flights. There are two foreign-trade zones (FTZ) within the port. Geography Port district Encompassing an area within an approximate radius of the Statue of Liberty National Monument, the port district comprises all or part of seventeen counties in the region. The nine that are completely within the district are Hudson, Bergen, Essex, Union (in New Jersey), and the five boroughs of New York City, which are coterminous with the counties of New York, Bronx, Kings, Queens, and Richmond. Abutting sections of Passaic, Middlesex, Monmouth, Morris, and Somerset in New Jersey, and Nassau, Westchester, and Rockland in New York are also within the district. Waterways Bodies of water New York Harbor is one of the world's largest natural harbors. The Atlantic Ocean is to the southeast of the port. The sea at the entrance to the port is called the New York Bight; it lies between the peninsulas of Sandy Hook and Rockaway. In Lower New York Bay and its western arm, Raritan Bay, vessels orient themselves for passage to the west into Arthur Kill or Raritan River or to the north to The Narrows. To the east lies the Rockaway Inlet, which leads to Jamaica Bay. The Narrows connects to the Upper New York Bay at the mouth of the Hudson River, which is sometimes (particularly in navigation) called the North River. Large ships are able to navigate upstream to the Port of Albany-Rensselaer. To the west lies Kill van Kull, the strait leading to Newark Bay, fed by the Passaic River and Hackensack River, and the northern entrance of Arthur Kill. The Gowanus Canal and Buttermilk Channel are entered from the east. The East River is a broad strait that travels north to Newtown Creek and the Harlem River, turning east at Hell Gate before opening to Long Island Sound, which provides an outlet to the open sea. Channels The port consists of a complex of approximately of shipping channels, as well as anchorages and port facilities. Most vessels require pilotage, and larger vessels require tugboat assistance for the sharper channel turns. The Ambrose leads from the sea to the Upper Bay, where it becomes the Anchorage Channel. Connecting channels are the Bay Ridge, the Red Hook, the Buttermilk, the Claremont, the Port Jersey, the Kill Van Kull, the Newark Bay, the Port Newark, the Elizabeth, and the Arthur Kill. Anchorages are known as Stapleton, Bay Ridge and Gravesend. The natural depth of the harbor is about , but it was deepened over the years, to a controlling depth of about in 1880. By 1891, the Main Ship Channel was minimally deep. Following the Rivers and Harbors Act of 1899, over $1.2 million of initial funding was appropriated for the dredging of 40 ft (12.2 m)-deep channels at Bay Ridge, Red Hook, and Sandy Hook.
In 1914, Ambrose Channel became the main entrance to the port, at deep and wide. During World War II the main channel was dredged to deep to accommodate larger ships up to Panamax size. In 2016, the Army Corps of Engineers completed a $2.1 billion dredging project, deepening harbor channels to in order to accommodate Post-Panamax container vessels, which can pass through the widened Panama Canal as well as the Suez Canal. This has been a source of environmental concern along channels connecting the container facilities in Port Newark to the Atlantic. PCBs and other pollutants lie in a blanket just underneath the soil. In June 2009 it was announced that 200,000 cubic yards of dredged PCBs would be "cleaned" and stored en masse at the site of the former Yankee Stadium and at Brooklyn Bridge Park. In many areas the sandy bottom has been excavated down to rock and now requires blasting, after which dredging equipment picks up the rock and disposes of it. At one point in 2005, there were 70 pieces of dredging equipment working to deepen channels, the largest fleet of dredging equipment anywhere in the world. The channel of the Hudson is the Anchorage Channel, approximately 50 feet deep at the midpoint of the Upper Bay. A project to replace two water mains between Brooklyn and Staten Island, which will eventually allow for dredging of the channel to nearly , was begun in April 2012. The Army Corps has recommended that most channels in the port be maintained at 50 feet deep. Dredging of the channels to 50 feet was completed in August 2016. The channels are also crossed by bridges that limit the heights of vessels that can use the harbor. The Verrazzano-Narrows Bridge has a clearance of at mean high water. The Brooklyn Bridge has of clearance, while the Bayonne Bridge has been raised from to .

Pilotage
The Sandy Hook Pilots are licensed maritime pilots who go aboard oceangoing vessels, passenger liners, freighters, and tankers and are responsible for the navigation of larger ships through the port district.

History

Early history
The estuary was originally the territory of the Lenape, a seasonally migrational people who would relocate summer encampments along its shore and use its waterways for transport and fishing. Many of the tidal salt marshes supported vast oyster banks that remained a major source of food for the region until the end of the 19th century, by which time contamination and landfilling had obliterated most of them. The first recorded European visit was that of Giovanni da Verrazzano, who anchored in The Narrows in 1524. For the next hundred years, the region was visited sporadically by ships on fishing trips and slave raids. European colonization began after Henry Hudson's 1609 exploration of the region, with the establishment of New Amsterdam, the capital of the Dutch province of New Netherland, at the tip of Manhattan. The British colonial era saw a concerted effort to expand the port in the triangular trade between Europe, Africa, and North America, with a concentration of wharves along the mouth of the East River. After the Battle of Brooklyn, the British controlled the harbor for the duration of the American Revolutionary War, and prison ships housed thousands at Wallabout Bay.

19th century
In the early 19th century, the Erie Canal (often used for grain) and Morris Canal (mostly used for anthracite) gave the port access to the American interior, leading to transshipment operations, manufacturing, and industrialization.
The invention of the steam engine led to expansion of the railroads and vast terminals along the western banks of the Hudson River, complemented by an extensive network of ferries and carfloats, with a large cluster along the Harlem River. The era of the ocean liner around the turn of the 20th century led to the creation of berths at the North River piers and Hoboken. This coincided with the immigration of millions, processed at Castle Clinton and later at Ellis Island, some staying in the region, others boarding barges, ships, and trains to points across the United States. In 1910, the port was the busiest in the world.

20th century
During the World Wars the waterfront supported shipyards and military installations such as the Federal Shipbuilding and Drydock Company and the Brooklyn Navy Yard, and played an important role in troop transport as a Port of Embarkation. The mid-century also saw the construction of major highways such as the Belt Parkway, East River Drive, and Major Deegan Expressway along parts of the shoreline. After the end of World War I, the 1919 New York City Harbor Strike shut down the port for weeks. The era of the longshoreman, captured in the classic film On the Waterfront, faded by the 1970s as much of the waterfront became obsolete due to changing transportation patterns. The nation's first facility for container shipping, which became the prototype, opened in 1962. Expanded intermodal freight transport systems and the Interstate Highway System effected a shift to new terminals at Newark Bay. Since the 1980s, sections of waterfront in the traditional harbor have been redeveloped to include public access to the water's edge, with the creation of linear park greenways such as Hudson River Park, Hudson River Waterfront Walkway, and Brooklyn Bridge Park.

21st century
The CMA CGM Theodore Roosevelt, the largest ship to call at an East Coast port, passed under the raised Bayonne Bridge in July 2017.

Jurisdiction and regulation
Responsibilities within the port are divided among all levels of government, from municipal to federal, as well as public and private agencies. Established in 1921, the bi-state Port Authority of New York and New Jersey, in addition to overseeing maritime facilities, is responsible for the vehicular crossings and the rapid transit system between New York and New Jersey, several of the region's airports, and other transportation and real estate development projects. The Port Authority maintains its own police force, as does the Waterfront Commission, created in 1953 to investigate, prosecute, and prevent criminal activity. The United States Army Corps of Engineers, which has been involved in harbor maintenance since about 1826, when Congress passed an omnibus rivers and harbors act, is responsible for bulkhead and channel maintenance. The United States Coast Guard deals with issues such as floatable debris, spills, vessel rescues, and counter-terrorism. Both states, and some municipal governments (New York City, in particular), maintain maritime police units. The United States Park Police monitors federal properties. The National Park Service oversees some of the region's historic sites, nature reserves, and parks. The port is a port of entry. The United States Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) regulate international imports and passenger arrivals. Under the "green lane" program, trusted shippers have fewer of their containers inspected.
There are two foreign trade zones in the port: FTZ 1, the first in the nation, established in 1937, on the New York side of the port; and FTZ 49, on the New Jersey side. In March 2006, some of the passenger facilities management was to be transferred to Dubai Ports World. There was considerable controversy over security and ownership by a foreign corporation, particularly one of Arab origin, of a U.S. port operation, despite the fact that the existing operator was the British-based P&O Ports. DP World later sold P&O's American operations to American International Group's asset management division, Global Investment Group, for an undisclosed sum. The Seamen's Church Institute of New York and New Jersey, the Teamsters, and the International Longshoremen's Association assist and represent some of the port's mariners and dockworkers.

Cargo infrastructure

Airports
The airports in the Port of New York and New Jersey combine to create the largest airport system in the United States, the second in the world in terms of passenger traffic, and the first in the world in terms of total flight operations. JFK's air freight cargo operations make it the busiest in the US. FedEx Express, the world's busiest cargo airline, uses Newark Liberty International Airport as its regional hub.

Container terminals
There are four container terminals in the port:
Howland Hook Marine Terminal
Port Jersey Marine Terminal
Port Newark-Elizabeth Marine Terminal
Red Hook Marine Terminal
Terminals are leased to different port operators, such as A. P. Moller-Maersk Group, American Stevedoring, NYCT, and Global Marine Terminal. In June 2010, the Port Authority of New York and New Jersey agreed to purchase from Bayonne of land at the Military Ocean Terminal at Bayonne, indicating that additional container port facilities would be created. The agency was expected to develop a terminal capable of handling the larger container ships to be in service once the new, wider Panama Canal opened (then expected in 2014), some of which would not have passed under the original Bayonne Bridge at the Kill van Kull. A project to raise the roadway of the bridge within the existing arch was completed in May 2019. The terminals' combined volume makes the port the largest on the East Coast and the third-busiest in the United States, handling a cargo volume of over 7.8 million TEUs in 2023 and benefitting from the post-Panamax expansion of the Panama Canal. In 2023, the terminals experienced a more severe reduction in cargo volume than the California seaports, resulting in the Port of Los Angeles reclaiming its position as the nation's busiest.

ExpressRail
ExpressRail is the rail network supporting intermodal freight transport at the major container terminals of the port. The development of dockside trackage and railyards for transloading has been overseen by the Port Authority of New York and New Jersey, which works in partnership with other public and private stakeholders. Various switching and terminal railroads, including the Conrail Shared Assets Operations (CRCX) on the Chemical Coast Secondary, connect to the East Coast rail freight network carriers Norfolk Southern (NS), CSX Transportation (CSX), and Canadian Pacific (CP). The network is partially financed by a surcharge on all containers passing through the port by train or truck.
Bulk cargo and marine transfer
While most consumer goods are transported in containers, other commodities such as petroleum and scrap metal are handled at facilities for marine transfer operations, bulk cargo, and break bulk cargo throughout the port, many along its straits and canals. At some locations, water pollution has led to inclusion on the list of Superfund sites in the United States. Affected waterways and facilities include:
Arthur Kill, along whose shore lie the Bayway Refinery and the Chemical Coast
Kill van Kull at Constable Hook
Gowanus Canal in South Brooklyn
Newtown Creek, East River, at Greenpoint and Hunter's Point
Passaic River from Newark Bay to Passaic
South Brooklyn Marine Terminal

Car float and Cross-Harbor Rail Tunnel
At one time, nearly 600,000 railcars were transferred annually by barge between the region's extensive rail facilities. Today, approximately 1,600 cars are "floated" on the remaining car float in the port. The New York New Jersey Rail, LLC transfers freight cars across the Upper Bay between the Greenville Yard in Jersey City and the 65th Street Yard and the Bush Terminal Yard in Brooklyn. At the Greenville end, CSX Transportation operates through Conrail's North Jersey Shared Assets Area along the National Docks Secondary. At the Brooklyn end, connections are made to the New York and Atlantic Railway's Bay Ridge Branch and the South Brooklyn Railway. The crossing takes approximately 45 minutes. The equivalent truck trip would be 35 to long. Freight rail has never used the New York Tunnel Extension under the Hudson Palisades, Hudson River, Manhattan, and East River due to the electrified lines and lack of ventilation. Overland travel crosses the Hudson River 140 miles (225 km) to the north using a right of way known as the Selkirk hurdle. The Cross-Harbor Rail Tunnel is a proposed rail tunnel under the Upper Bay. The western portal would be located at the Greenville Yard, while the eastern portal is undetermined and a source of controversy. In May 2010, the Port Authority announced that it would purchase the Greenville Yard and build a new barge-to-rail facility there, as well as improve the existing railcar float system. The barge-to-rail facility is expected to handle an estimated 60,000 to 90,000 containers of solid waste per year from New York City, eliminating up to 360,000 trash truck trips a year. The authority's board authorized $118.1 million for the project. The National Docks Secondary rail line is being upgraded in anticipation of expanded volumes. In September 2014, the PANYNJ announced a $356 million capital project to upgrade and expand the facility, including roll-on/roll-off operations. The facility was expected to be operational around July 2016, with a projected initial capacity of at least 125,000 cargo container lifts a year.

Port Inland Distribution Network
The Port Inland Distribution Network (PIDN) involves new or expanded transportation systems for redistribution by barge and rail of the goods and containers delivered at area ports, in an effort to curtail the use of trucks and their burden on the environment, traffic, and highway systems. The Port Authority of New York and New Jersey (PANYNJ), New Jersey Department of Transportation (NJDOT), and Delaware Valley Regional Planning Commission (DVRPC) are involved in initiatives to review and develop this network. To implement the PIDN, the PANYNJ signed an agreement on November 29, 2003, with the Port of Albany to provide twice-weekly barge service. By 2014, the service had been discontinued.
In 2018, service from Newark and Brooklyn to the Port of Davisville in Rhode Island was initiated.

America's Marine Highway
America's Marine Highway is a similar United States Department of Transportation initiative to capitalize on U.S. waterways for the transport of goods. In 2016, MARAD made a grant of $1.6 million to improve the terminal at Red Hook as part of the Marine Highway program. Barges carrying containers on a route between Red Hook and Newark began operation in September 2016. In 2010, a private sector service provider began short sea shipping of aggregate products with a barge service between Tremley Point, Linden, on the Arthur Kill and the Port of Salem, addressing a critical yet weak link in freight transport with ports in the Delaware Valley.

Cruise terminals and ferries

Cruise terminals
The golden age of the North Atlantic ocean liner lasted from the end of the 19th century to the post–World War II period, after which innovations in air travel became commercially viable. Many berths for the great ships that lined the North River (Hudson River) were more or less abandoned by the 1970s. Nowadays most travel is recreational. While many cruises are to points in the Caribbean and to the Southern Hemisphere, there are also ships calling at the port that sail transatlantically, notably with a scheduled service to Southampton, England. The passenger cruise ship terminals in the port are located in the traditional, or "inner", harbor. Collectively the cruise terminals in the Port of New York and New Jersey are the sixth-busiest in the United States and 16th-busiest in the world for passenger travel.
Cape Liberty Cruise Port, MOTBY, Upper Bay
Brooklyn Cruise Terminal, Buttermilk Channel, Upper Bay
New York Passenger Ship Terminal, Hudson River

Ferries and sightseeing
There has been continuous ferry service between Staten Island and Lower Manhattan since the 18th century. Travelling across the Upper Bay between South Ferry and St. George Ferry Terminal, the free Staten Island Ferry transports on average 75,000 passengers per day. Service on the East River ended in the early 20th century and on the Hudson River in the 1960s. It has since been restored and has grown significantly since the 1980s, providing regular service to points in Manhattan, mostly below 42nd Street. Major terminals are Hoboken Terminal, Battery Park City Ferry Terminal at the World Financial Center, Paulus Hook Ferry Terminal, Weehawken Port Imperial, Pier 11/Wall Street, West Midtown Ferry Terminal, and the East 34th Street Ferry Landing. There also are numerous ferry slips that each serve one route only, including the historic Fulton Ferry. In addition to regular and rush hour routes, there are excursions, trips, and seasonal service to Gateway National Recreation Area beaches. Sightseeing boats circumnavigate Manhattan or make excursions into the Upper New York Bay. Operators include:
Circle Line Downtown
Circle Line Sightseeing
Ellis Island and Liberty Island
Governor's Island Ferry (seasonal)
Liberty Water Taxi
New York Water Taxi
NYC Ferry
NY Waterway
New York Beach Ferry
SeaStreak
Staten Island Ferry

Lights and lighthouses
There are both historic and modern lighthouses throughout the port, some of which have been decommissioned:
Ambrose Light, Lower Bay (dismantled 2008)
Bergen Point Light, Newark Bay (replaced)
Blackwell Island Light, Roosevelt Island, East River (retired 1940)
Chapel Hill Rear Range Light, Sandy Hook Bay (deactivated 1957)
Conover Beacon (front range light), Leonardo
Coney Island (Nortons Point) Light, Lower New York Bay, Sea Gate, Brooklyn
Elm Tree Beacon Light, The Narrows, New Dorp, Staten Island
Execution Rocks Light, Long Island Sound
Fort Tompkins Light, The Narrows, Staten Island (retired)
Fort Wadsworth Light, The Narrows, Brooklyn
Great Beds Light, Raritan Bay, South Amboy
Kings Point Light, Long Island Sound, Great Neck
Kingsborough Community College Light, Sheepshead Bay, Brooklyn
Little Gull Island Light, Long Island Sound
Little Red Lighthouse (Jeffrey's Hook Lighthouse), Fort Washington, Manhattan
Navesink Twin Lights, Sandy Hook Bay, Highlands
New Dorp Light, The Narrows, Staten Island Swash Channel (retired)
North Brother Island Light, East River
Old Orchard Shoal Light, Gedney Channel, Lower Bay
Princes Bay Light, Staten Island
Robbins Reef Light, Constable Hook, Upper Bay
Romer Shoal Light, Lower Bay near Sandy Hook Bay
Sandy Hook Light, Sandy Hook
Staten Island Light, Lighthouse Hill, Staten Island
Statue of Liberty, Liberty Island, Upper Bay (until 1902)
Stepping Stones Light, Long Island Sound, near City Island
Stony Point Light, Hudson River
Throgs Neck Light, Throggs Neck, East River (decommissioned)
Titanic Memorial, South Street Seaport, East River
West Bank Light, Ambrose Channel, Lower Bay (range front)
Whitestone Point Light, Whitestone Point, southerly side of East River

Land reclamation and ocean dumping
Channelization and landfilling began in the colonial era and continued well into the 20th century. The expansion of the land area of Lower Manhattan through encroachment began in the 17th-century Dutch settlement of New Amsterdam and continued into the 20th century. Early fill materials were shellfish and other refuse, and later construction debris from projects such as the New York City Subway and Pennsylvania Station. Rubble from the bombing of London was transported for ballast during World War II. New land has been created throughout the port, including large swaths that are now Battery Park City, Ellis Island, Liberty State Park, Flushing Meadows–Corona Park, and the Meadowlands Sports Complex. From 1924 until 1986, sewage sludge was hauled by tugboat and barge to a point offshore in the Atlantic. From 1986 to 1992 it was dumped at a site 106 nautical miles from Atlantic City, after which ocean dumping was banned. Barges were also used to transport waste to Fresh Kills Landfill, the world's largest, which operated from 1948 to 2001. Both operations were known to be detrimental to Long Island and Jersey Shore beaches, notably in the 1987 Syringe Tide.

Shipwrecks and abandoned boats
The port has many sunken ships, some of which can be seen, while others lie on the floor of the port's waterways. The Staten Island boat graveyard is a marine scrapyard located in the Arthur Kill near the Fresh Kills Landfill, on the West Shore of Staten Island.
Tourism and recreation
Harbor-related historic sites, promenades, and nature preserves within the port district include:
South Street Seaport
USS Intrepid Sea, Air & Space Museum (Pier 86)
Gateway National Recreation Area
Statue of Liberty National Monument, Ellis Island, Liberty Island, and Governors Island
Hudson River Park, Hudson River Waterfront Walkway, Brooklyn Bridge Park
Liberty State Park and Communipaw Terminal
Battery Park and Castle Clinton
Hackensack Meadowlands, Riverwalk, and Environment Center
Pier 63, New York Central Railroad 69th Street Transfer Bridge, and 79th Street Boat Basin
Gantry Plaza State Park
Manhattan Waterfront Greenway
Hoboken Terminal
City Island

Economy
In 2010, 4,811 ships entered the harbor carrying over 32.2 million metric tons of cargo valued at over $175 billion. In 2010, the New York–New Jersey port industry supported:
170,770 direct jobs
279,200 total jobs in the NY–NJ region
Nearly $11.6 billion in personal income
Over $37.1 billion in business income
Almost $5.2 billion in federal, state and local tax revenues (local and state tax revenue: $1.6 billion; federal tax revenue: $3.6 billion)
Approximately 3.2 million twenty-foot equivalent units (TEU) of containers and 700,000 automobiles are handled per year. In the first half of 2014, the port handled 1,583,449 containers, a 35,000-container increase above the six-month record set in 2012, while the port handled a monthly record of 306,805 containers in October 2014. In 2014, the port handled 3,342,286 containers and 393,931 automobiles. From January through June 2015, the top 10 imports that went through the Port of New York and New Jersey were:
Petroleum: $6.78 billion
Appliances: $3.80 billion
Vehicles: $2.59 billion
Plastics: $1.72 billion
Electronics: $1.46 billion
Chemicals: $1.45 billion
Oils and perfumes: $928.7 million
Pharmaceuticals: $897.5 million
Optical and photographic: $801.8 million
Pearls and precious gems and metals: $562.4 million
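To put these categories in proportion, the short Python sketch below tabulates the first-half 2015 import values listed above and computes each category's share of the top-10 total. The figures are taken directly from the list; the script itself is purely illustrative.

# Tabulate the H1 2015 top-10 import values listed above (USD)
imports_h1_2015 = {
    "Petroleum": 6.78e9,
    "Appliances": 3.80e9,
    "Vehicles": 2.59e9,
    "Plastics": 1.72e9,
    "Electronics": 1.46e9,
    "Chemicals": 1.45e9,
    "Oils and perfumes": 928.7e6,
    "Pharmaceuticals": 897.5e6,
    "Optical and photographic": 801.8e6,
    "Pearls and precious gems and metals": 562.4e6,
}

total = sum(imports_h1_2015.values())
print(f"Top-10 import value, H1 2015: ${total / 1e9:.2f} billion")  # ~$20.99 billion
for name, value in imports_h1_2015.items():
    # Share of the top-10 total (not of all imports through the port)
    print(f"{name}: {100 * value / total:.1f}%")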
Technology
Specific piers and ports
24066956
https://en.wikipedia.org/wiki/Alluvial%20river
Alluvial river
An alluvial river is one in which the bed and banks are made up of mobile sediment and/or soil. Alluvial rivers are self-formed, meaning that their channels are shaped by the magnitude and frequency of the floods that they experience, and the ability of these floods to erode, deposit, and transport sediment. For this reason, alluvial rivers can assume a number of forms based on the properties of their banks; the flows they experience; the local riparian ecology; and the amount, size, and type of sediment that they carry. At smaller spatial scales and shorter time scales, the patterns of water movement from events such as seasonal flooding create different patches of soils that range from aerobic to anaerobic and have differing nutrients and decomposition rates and dynamics. At larger spatial scales, topographic features have been created by events such as glaciation and deglaciation, changes in sea level, tectonic movements, and other processes that occur over longer time scales. Together, these short- and long-term scales determine the patterns and characteristics of alluvial rivers. These rivers also exhibit characteristic topographic features, including hillslopes that form the valley sides, terraces (the remains of old floodplains at higher elevations than the currently active floodplain), natural levees, meander scrolls, natural drainage channels, and both temporary and permanent floodplains.

Alluvial channel patterns
Natural alluvial channels have a variety of morphological patterns, but can be generally described as straight, meandering, braided, or anastomosing. Different channel patterns result from differences in bankfull discharge, gradient, sediment supply, and bank material. Channel patterns can be described based on their level of sinuosity, which is the ratio of the channel length measured along its center to the straight-line distance measured down the valley axis (a simple classification sketch appears at the end of this article).

Straight/sinuous channels
Straight channels (sinuosity <1.3) are relatively rare in natural systems because sediment and flow are rarely distributed evenly across a landscape. Irregularities in the deposition and erosion of sediments lead to the formation of alternate bars that are on opposite sides of the channel in succession. Alternating bar sequences cause flow to be directed in a sinuous pattern, leading to the formation of sinuous channels (sinuosity of 1.3–1.5).

Meandering channels
Meandering channels are more sinuous (sinuosity >1.5) than straight or sinuous channels, and are defined by the meander wavelength morphological unit. The meander wavelength is the distance from the apex of one bend to the next on the same side of the channel, and is described further under Geomorphic units below. Meandering channels are widespread today, but no geomorphic evidence of their existence before the evolution of land plants has been found. This is largely attributed to the effect of vegetation in increasing bank stability and maintaining meander formation.

Braided channels
Braided channels are characterized by multiple, active streams within a broad, low-sinuosity channel. The smaller strands of streams diverge around sediment bars and then converge in a braiding pattern. Braided channels are dynamic, with strands moving within the channel. Braided channels are caused by sediment loads that exceed the capacity of stream transport.
They are found downstream of glaciers and mountain slopes in conditions of high slope, variable discharge, and high loads of coarse sediment.

Anastomosing channels
Anastomosing channels are similar to braided channels in that they are composed of complex strands that diverge and then converge downstream. However, anastomosing channels are distinct from braided channels in that they flow around relatively stable, typically vegetated islands. They also have generally lower gradients, are narrower and deeper, and have more permanent strands.

Geomorphic units

Meander wavelength
The meander wavelength, or alternate bar sequence, is considered the primary ecological and morphological unit of meandering alluvial rivers. The meander wavelength is composed of two alternating bar units, each with a pool scoured out from a cutbank, an aggradational lobe or point bar, and a riffle that connects the pool and point bar. In an idealized channel, the meander wavelength is around 10 to 11 channel widths. This equates to pools (and riffles and point bars) being separated by an average of 5 to 6 channel widths. The radius of curvature of a meander bend describes the tightness of a meander arc, and is measured by the radius of a circle that fits the meander arc. The radius of curvature is typically between 2 and 3 times the channel width.

Landforms

Floodplains
Floodplains are the land areas adjacent to alluvial river channels that are frequently flooded. Floodplains are built up by deposition of suspended load from overbank flow, bedload deposition from lateral river migration, and landscape processes such as landslides.

Natural levees
Natural levees occur when the floodplain of an alluvial river is primarily shaped by overbank deposition and when relatively coarse materials are deposited near the main channel. The natural levees become higher than the adjacent floodplain, leading to the formation of backswamps and yazoo channels, in which tributary streams are forced to flow parallel to the main channel rather than converge with it.

Terraces
Terraces are sediment storage features that record an alluvial river's past sediment delivery. Many changes in boundary conditions can form terraces in alluvial river systems. The most basic reason for their formation is that the river does not have the transport capacity to move the sediment supplied to it by its watershed. Past climate during the Quaternary has been linked to the aggradation and incision of floodplains, leaving step-like terrace features behind. Uplift as well as sea level retreat can also cause terraces to form as the river cuts into its underlying bed and preserves sediment in its floodplain.

Geomorphic processes

Natural hydrograph components
Natural hydrograph components such as storm events (floods), baseflows, snowmelt peaks, and recession limbs are the river-specific catalysts that shape alluvial river ecosystems and provide for important geomorphic and ecological processes. Preserving annual variations in a river's hydrologic regime – patterns of magnitude, duration, frequency, and timing of flows – is essential for sustaining ecological integrity within alluvial river ecosystems.

Channel migration
Bank erosion at cutbanks on the outside of meanders, combined with deposition of point bars on the inside of meanders, causes channel migration. The greatest bank erosion often occurs just downstream of the meander apex, causing downstream migration as the high-velocity flow eats away at the bank as it is forced around the meander curve.
Avulsion is another process of channel migration that occurs much more rapidly than the gradual migration process of cutbank erosion and point bar deposition. Avulsion occurs when lateral migration causes two meanders to become so close that the river bank between them is breached, causing the joining of the meanders and the creation of two channels. When the original channel is cut off from the new channel by the deposition of sediments, oxbow lakes are formed. Channel migration is important to sustaining diverse aquatic and riparian habitats. The migration causes sediments and woody debris to enter the river, and creates areas of new floodplain on the inside of the meander.

Sediment budgets
Dynamic steady states of sediment erosion and deposition work to sustain alluvial channel morphology, as river reaches import and export fine and coarse sediments at approximately equal rates. At the apex of meander curves, high-velocity flows scour out sediment and form pools. The mobilized sediment is then deposited at the point bar directly across the channel or downstream. Flows of high magnitude and duration can be seen as important thresholds that drive channelbed mobility. Channel aggradation or degradation indicates sediment budget imbalances.

Flooding
Flooding is an important component that shapes channel morphology in alluvial river systems. Seasonal flooding also enhances productivity and connectivity of the floodplain. Large floods that exceed the 10-to-20-year recurrence interval form and maintain main channels and, through avulsion, form side channels, wetlands, and oxbow lakes. Floodplain inundation occurs on average every 1–2 years at flows above bankfull stage; it moderates flood severity and channel scour and helps to cycle nutrients between the river and the surrounding landscape. Flooding is important to aquatic and riparian habitat complexity because it forms a diversity of habitat features that vary in their ecosystem function.

Biologic components

Riparian habitats
Riparian habitats are especially dynamic in alluvial river ecosystems due to the constantly changing fluvial environment. Alternate bar scour, channel migration, floodplain inundation, and channel avulsion create variable habitat conditions that riparian vegetation must adapt to. Seedling establishment and forest stand development depend on favorable substrate, which in turn is dependent on how sediment is sorted along the channel banks. In general, young riparian vegetation and pioneer species will establish in areas that are subjected to active channel processes such as point bars, where coarser sediments such as gravels and cobbles are present but are seasonally mobilized. Mature riparian vegetation can establish farther upslope, where finer sediments such as sands and silts dominate and disturbance from active river processes is less frequent.

Aquatic habitats
Aquatic habitats in alluvial rivers are sculpted by the complex interplay between sediment, flow, vegetation, and woody debris. Pools provide deeper areas of relatively cool water and provide shelter for fish and other aquatic organisms. Pool habitats are improved by complex structures such as large woody debris or boulders. Riffles provide shallower, highly turbulent aquatic habitat of primarily cobbles. Here, water mixes with the air at the water surface, increasing dissolved oxygen levels within the stream. Benthic macroinvertebrates thrive in riffles, living on the surfaces and interstitial spaces between rocks.
Many species also depend on low-energy backwater areas for feeding and important life cycle stages.

Human impacts

Land use impacts

Logging
Logging of timberland in alluvial watersheds has been shown to increase sediment yields to rivers, causing aggradation of the streambed, increasing turbidity, and altering sediment size and sediment distribution along the channel. The increase in sediment yield is attributed to increased runoff, erosion, and slope failure, a result of removing vegetation from the landscape as well as building roads.

Agriculture
Agricultural land uses divert water from alluvial rivers for crop production, as well as constrain the river's ability to meander or migrate through levee construction or other forms of armoring. The result is simplified channel morphology with lower baseflows.

Dams and diversions
Dams and diversions alter the natural hydrologic regime of rivers, both upstream and downstream, with widespread effects that alter the watershed ecosystem. Since alluvial river morphology and fluvial ecosystem processes are largely shaped by the complex interplay of hydrograph components such as the magnitude, frequency, duration, timing, and rate of change of flow, any change in one of these components can be associated with a tangible alteration of the ecosystem. Dams are often associated with reduced wet season flood magnitudes and altered (oftentimes reduced) dry season baseflow. This can negatively affect aquatic organisms that are specifically adapted to natural flow conditions. By altering the natural hydrograph components, particularly reducing flow magnitudes, dams and other diversions reduce the river's ability to mobilize sediment, resulting in sediment-choked channels. Conversely, dams are a physical barrier to the naturally continuous movement of sediment from headwaters to the river mouth, and can create sediment-deficient conditions and incision directly downstream. Understanding the natural attributes of alluvial rivers is necessary when restoring their function on small-scale levels below dams. Though the function of the rivers may never be fully restored, it is possible to recreate and preserve their integrity with proper planning and consideration of their necessary attributes. Restoration efforts should focus on restoring the connectivity between the main channel and other floodplain bodies that were lost due to dam creation and flow regulation. The preservation and reconstruction of these alluvial river habitats is necessary in maintaining and sustaining the ecological integrity of river-floodplain ecosystems.
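As a concrete illustration of the sinuosity classification and the idealized meander dimensions described earlier in this article, the following minimal Python sketch encodes those thresholds and width ratios. The function names, the example measurements, and the choice to return ranges are illustrative assumptions rather than anything from the geomorphological literature; note too that sinuosity alone cannot identify braided or anastomosing channels, which are multi-threaded.

def classify_single_thread_channel(channel_length_m, valley_length_m):
    """Classify a single-thread alluvial channel by sinuosity.

    Sinuosity is the channel centerline length divided by the straight-line
    distance down the valley axis. Thresholds follow the article:
    <1.3 straight, 1.3-1.5 sinuous, >1.5 meandering.
    """
    sinuosity = channel_length_m / valley_length_m
    if sinuosity < 1.3:
        pattern = "straight"
    elif sinuosity <= 1.5:
        pattern = "sinuous"
    else:
        pattern = "meandering"
    return sinuosity, pattern

def idealized_meander_geometry(channel_width_m):
    """Rule-of-thumb meander dimensions for an idealized channel,
    returned as (low, high) ranges in metres."""
    return {
        "meander_wavelength_m": (10 * channel_width_m, 11 * channel_width_m),
        "pool_spacing_m": (5 * channel_width_m, 6 * channel_width_m),
        "radius_of_curvature_m": (2 * channel_width_m, 3 * channel_width_m),
    }

# Example: a 2.6 km channel centerline along a 1.5 km valley reach has a
# sinuosity of about 1.73 and is classified as meandering.
print(classify_single_thread_channel(2600, 1500))
# A 30 m wide channel would be expected to show meander wavelengths of
# roughly 300-330 m, with pools spaced every 150-180 m.
print(idealized_meander_geometry(30))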
Physical sciences
Hydrology
Earth science
24066971
https://en.wikipedia.org/wiki/Bedrock%20river
Bedrock river
A bedrock river is a river that has little to no alluvium mantling the bedrock over which it flows. However, most bedrock rivers are not pure forms; they are a combination of a bedrock channel and an alluvial channel. Bedrock rivers are distinguished from alluvial rivers by the extent of sediment cover, which depends on the sediment flux supplied to the channel and the channel's transport capacity. Bedrock rivers are typically found in upland or mountainous regions. Several erosional factors can contribute to their formation. Bedrock rivers are also one of the only settings in which incision into bedrock that is not related to glaciers can be studied.

Forming and erosion
Bedrock incision can be caused by tectonic plate movement. As the land is uplifted, the river is forced to incise into the bedrock to keep flowing. Incision can be carried out through a variety of erosional processes. The type of bedrock may change as a river flows downstream, affecting erosional processes. The main processes are stream power, abrasion, quarrying, wedging, and dissolution. Bedrock rivers experience a combination of all of these processes, in proportions that depend on the individual river and its type of bedrock.

Stream power
Stream power describes the rate at which the energy of flowing water is converted into kinetic energy as the water descends the channel slope. When water travels down a channel, it is driven by gravitational potential energy. By the law of conservation of energy, the potential energy lost traveling downstream must be transformed into another form: the kinetic energy of the water beating on the bedrock. The rate of this potential energy loss is the stream power of the river. The stream power equation is

Ω = ρgQS

where Ω is the stream power, ρ is the water density, g is the gravitational acceleration, Q is the hydraulic discharge of the stream (m³/s), and S is the slope of the channel (a worked numerical example appears at the end of this article). This equation suggests that stream power might be the single most important factor in bedrock incision. In an alluvial river, stream power is expended more on transport – picking up loose material and depositing it – so with a constant influx of sediment the channel would not be incising.

Abrasion
Abrasion is erosion caused by sediments transported in the flow. The rate of erosion by abrasion is affected by the strength of the bedrock. Forms of erosion include "abrasion, plucking, cavitation, debris-flow scour and weathering" (4). Abrasion is also affected by the amount of sediment load present in the river: with too much sediment, most of the particles will not have enough energy; with too little, not enough of the particles will come into contact with the bed. The process can erode individual grains, or flakes, from the rock's surface. The most common indicators of abrasion are potholes in the bedrock or a trough-like shape to the river. There are three types of sediment transport in a fluvial process: dissolved load, suspended load, and bed load. The process that most affects a bedrock river is the suspended load. The suspended load consists of grains light enough to be carried in the water, which do not contact the bed of the river unless there is an obstruction or topographic change in the bed. These particles erode a bedrock river through contact with such obstructions.
Because they are carried as part of the river flow, these particles have significantly higher kinetic energy, and contact with an abnormality in the river bed can cause more damage than a larger grain with lower energy. The grain size normally held in the suspended load ranges from very fine to fine – clays and silts. Bedload erosion can also be a major factor in bedrock erosion. It is caused by saltating grains or by traction. Saltation occurs where grains are lifted up by the water and then tossed back down; most of the time this involves gravels and, if the stream power is great enough, pebbles. Clays and silts, however, have too much cohesion to be transported by this method. When the particles come into contact with the bedrock, they slowly wear away at its surface. They can gradually form micro-cracks or extend already existing cracks. The physics behind this erosional process states that the mass of rock worn away by an incoming particle is directly proportional to the kinetic energy of that particle. Traction occurs where the sediment is too large to be picked up by the river flow but small enough to be pushed or rolled along at a slower rate; it is covered under quarrying.

Quarrying
Quarrying (also known as plucking) is the process by which a block of bedrock is removed from the bed of the river and then forced along the planar surface of the riverbed. This process is the most similar to glacial erosion. It is most effective in rivers where the jointing is close enough to allow the blocks to be moved by river flow. The removal of a piece of bedrock can be caused by many different factors. A crack or a flex in the bedrock will initially create a disconnected piece of bedrock. Then, either by hydraulic wedging or frost-cracking, the block can be forced out. If the bedrock is already highly jointed, fractured, or bedded, it will be easier for the block to be removed, since such blocks can more readily be lifted or shifted out of their position. Scientists believe that this happens because of the weathering of joint surfaces. Subsequently, the joints are wedged apart and may be weakened by bombardment with bedload particles. Once the block of bedrock is removed, it must then be pushed along the bed of the river. For this to happen, the shear stress of the river on the top of the rock must exceed the frictional forces on the bottom of the rock. The blocks will eventually erode, but will cause headward erosion of the river while they exist.

Wedging
Wedging is the process by which small cracks appear in the bed of the river and are enlarged by smaller particles. It can cause large blocks of the river bed to be removed, starting the quarrying process. The initial cracks appear due to a flexing of the bedrock itself, caused by a "rapid and large pressure variation"; such variations can result from mass movements or heavy storms. After the initial crack is made, a small amount of sediment, sometimes no more than a grain, is passively deposited in the crack. When the bedrock flexes back into its original position, the crack is held open by the wedged sediment. Gradually, as more sediment accumulates in the crack, it widens and deepens. This is more common in an already jointed river bed.

Dissolution
Dissolution is the process by which the downstream change in the solute concentration is controlled by the dissolution rate of the rock.
This process typically only affects a bedrock river when the rock is already prone to dissolution, such as a sandstone. One would be most likely to see this in caves made up of carbonate rocks. Other factors that this process depends on are "the ratio of mineral surface area to water volume, the degree of chemical under-saturation, and the time it takes a water parcel to move through a reach." It is one of the least likely forms of incision, but it does play a role in the process.

Transport and deposition
Bedrock rivers are by definition floored in bedrock, but that does not prevent them from transporting all types of sediment or from having sediment patches along their beds. Sediment is more likely to accumulate in patches than as individual grains because grains tend to settle where grain stability is increased – where the bedrock is rougher and where there is less kinetic energy in the water. Even though grains can be deposited in bedrock rivers, most of the time they will be transported through a bedrock section of a river to a more alluvial section. The cohesion between particles also makes it easier for them to be deposited in a patch. With nothing to hold the particles down in the bedrock section, they will continually be picked up by the river and carried further downstream, eventually developing into "alluvial bed forms or bars". The deeper and wider the river is, the more likely it is for grains to be deposited along its bed; however, this also depends on the slope and the inflow of water.
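To make the stream power relation in the Stream power section concrete, here is a minimal numerical sketch of Ω = ρgQS in Python. The discharge and slope values are illustrative assumptions chosen for the example, not measurements from any particular river.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def stream_power(discharge_m3_s, slope):
    """Stream power per unit channel length (W/m): omega = rho * g * Q * S."""
    return RHO_WATER * G * discharge_m3_s * slope

# Example: a steep mountain river with a discharge of 20 m^3/s on a 2% slope
# dissipates roughly 3.9 kW per metre of channel, energy that is available
# for incision where the bed is not shielded by sediment.
print(stream_power(20.0, 0.02))  # ~3924.0 W/m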
Physical sciences
Hydrology
Earth science
24071884
https://en.wikipedia.org/wiki/Haneda%20Airport
Haneda Airport
Haneda Airport, sometimes referred to as Tokyo-Haneda, is the busier of the two international airports serving the Greater Tokyo Area, the other one being Narita International Airport (NRT). It serves as the primary domestic base of Japan's two largest airlines, Japan Airlines (Terminal 1) and All Nippon Airways (Terminal 2), as well as RegionalPlus Wings Corp. (Air Do and Solaseed Air), Skymark Airlines, and StarFlyer. It is located in Ōta, Tokyo, south of Tokyo Station. The facility covers 1,522 hectares (3,761 acres) of land. Haneda previously carried the IATA airport code TYO, which is now used by airline reservation systems for the Greater Tokyo Area as a whole, and was the primary international airport serving Tokyo until 1978. From 1978 to 2010, Haneda handled almost all domestic flights to and from Tokyo as well as "scheduled charter" flights to a small number of major cities in East and Southeast Asia, while Narita handled the vast majority of international flights to more distant destinations. In 2010, a dedicated international terminal, currently Terminal 3, was opened at Haneda in conjunction with the completion of a fourth runway, allowing long-haul flights to operate during night-time hours. Haneda opened up to long-haul service during the daytime in March 2014, with carriers offering nonstop service to 25 cities in 17 countries. Since the resumption of international flights, airlines in Japan have positioned Haneda as the "Hub of Japan", providing connections between intercontinental flights and Japanese domestic flights, while envisioning Narita as the "Hub of Asia", connecting intercontinental and Asian destinations. The Japanese government encourages the use of Haneda for premium business routes and the use of Narita for leisure routes and by low-cost carriers; however, the major full-service carriers may choose to fly to both airports. Haneda handled 87,098,683 passengers in 2018; by passenger throughput, it was the third-busiest airport in Asia and the fourth-busiest in the world. In 2023 it returned to being the second-busiest airport in Asia, after Dubai International Airport, in the Airports Council International rankings. It is able to handle 90 million passengers per year following its expansion in 2018. With Haneda and Narita combined, Tokyo has the third-busiest city airport system in the world, after London and New York. In 2020, Haneda was named the world's second-best airport, after Singapore's Changi Airport, as well as the World's Best Domestic Airport. It maintained its second place in Skytrax's world's top 100 airports for 2021 and 2022, between Qatar's Hamad International Airport and Singapore's Changi Airport, and retained its Best Domestic Airport title from the previous year.

History
Before the construction of Haneda, the area was a prosperous resort centered around Anamori Inari Shrine, and Tokyo's primary airport was Tachikawa Airfield, the main operating base of Japan Air Transport, then the country's flag carrier. But as Tachikawa was a military base far from central Tokyo, aviators in Tokyo used various beaches of Tokyo Bay as airstrips, including beaches near the current site of Haneda (Haneda was a town located on Tokyo Bay, which merged into the Tokyo ward of Kamata in 1932). In 1930, the Japanese postal ministry purchased a portion of reclaimed land from a private individual in order to construct an airport.

Empire era (1931–1945)
The airport first opened in 1931 on a small piece of reclaimed land at the west end of today's airport complex.
A concrete runway, a small airport terminal, and two hangars were constructed. The first flight from the airport, on August 25, 1931, carried a load of insects to Dairen in the Kwantung Leased Territory (now part of China). During the 1930s, Haneda handled flights to destinations in mainland Japan, Taiwan, Korea (both under Japanese rule), and Manchuria (ruled as the Japanese puppet state of Manchukuo). The major Japanese newspapers also established their first flight departments at Haneda during this time, and Manchukuo National Airways began service between Haneda and Hsingking, the capital of Manchukuo. Japan Air Transport was renamed Imperial Japanese Airways following its nationalization in 1938. Passenger and freight traffic grew dramatically in these early years. In 1939, Haneda's first runway was extended to in length and a second runway was completed. The airport's size grew to using land purchased by the postal ministry from a nearby exercise ground. During World War II, both Imperial Japanese Airways and Haneda Airport shifted to almost exclusively military transport services. Haneda Airport was also used by the Imperial Japanese Navy Air Service for flight training during the war. In the late 1930s, the Tokyo government planned a new Tokyo Municipal Airport on an artificial island in Koto Ward. At , the airport would have been five times the size of Haneda at the time, and significantly larger than Tempelhof Airport in Berlin, which was said to be the largest airport in the world at the time. The airport plan was finalized in 1938 and work on the island began in 1939 for completion in 1941, but the project fell behind schedule due to resource constraints during World War II. The plan was officially abandoned following the war, as the Allied occupation authorities favored expanding Haneda rather than building a new airport; the island was later expanded by dumping garbage into the bay, and is now known as Yumenoshima.

Occupation era (1945–1952)
On September 12, 1945, General Douglas MacArthur, Supreme Commander for the Allied Powers and head of the Occupation of Japan following World War II, ordered that Haneda be handed over to the occupation forces. On the following day, he took delivery of the airport, which was renamed Haneda Army Air Base, and ordered the eviction of many nearby residents in order to make room for various construction projects, including extending one runway to and the other to . On September 21, Anamori Inari Shrine and over 3,000 residents received orders to leave their homes within 48 hours. Many resettled on the other side of a river in the Haneda district of Ota, surrounding Anamoriinari Station, and some still live in the area today. The expansion work commenced in October 1945 and was completed in June 1946, at which point the airport covered . Haneda Army Air Base was designated as a port of entry to Japan. Haneda was mainly a military and civilian transportation base used by the U.S. Army and Air Force as a stop-over for C-54 transport planes flying between San Francisco and the Far East. A number of C-54s based at Haneda participated in the Berlin Blockade airlift; these planes were specially outfitted for hauling coal to German civilians, and many were decommissioned after their participation due to coal dust contamination. Several US Army and Air Force generals regularly parked their personal planes at Haneda while visiting Tokyo, including General Ennis Whitehead.
During the Korean War, Haneda was the main regional base for United States Navy flight nurses, who evacuated patients from Korea to Haneda for treatment at military hospitals in Tokyo and Yokosuka. US military personnel based at Haneda were generally housed at the Washington Heights residential complex in central Tokyo (now Yoyogi Park). Haneda Air Force Base received its first international passenger flights in 1947, when Northwest Orient Airlines began DC-4 flights across the North Pacific to the United States, and within Asia to China, South Korea, and the Philippines. Pan American World Airways made Haneda a stop on its "round the world" route later in 1947, with westbound DC-4 service to Shanghai, Hong Kong, Kolkata, Karachi, Damascus, Istanbul, London and New York, and eastbound Constellation service to Wake Island, Honolulu and San Francisco. The U.S. military gave part of the base back to Japan in 1952; this portion became known as Tokyo International Airport. The US military maintained a base at Haneda until 1958, when the remainder of the property was returned to the Japanese government.

First international era (1952–1978)
Japan's flag carrier Japan Airlines began its first domestic operations from Haneda in 1951. For a few postwar years, Tokyo International Airport did not have a passenger terminal building. The Japan Airport Terminal Co., Ltd. was founded in 1953 to develop the first passenger terminal, which opened in 1955; an extension for international flights opened in 1963. European carriers began service to Haneda in the 1950s. Air France arrived at Haneda for the first time in November 1952. BOAC de Havilland Comet flights to London via the southern route began in 1953, and SAS DC-7 flights to Copenhagen via Anchorage began in 1957. JAL and Aeroflot began cooperative service from Haneda to Moscow in 1967. Pan Am and Northwest Orient used Haneda as a hub. The August 1957 Official Airline Guide shows 86 domestic and 8 international departures each week on Japan Air Lines. Other international departures per week: seven Civil Air Transport, three Thai DC-4s, two Hong Kong Airways Viscounts (and perhaps three DC-6Bs), two Air India and one QANTAS. Northwest had 16 departures a week, Pan Am had 12 and Canadian Pacific had four; Air France three, KLM three, SAS five, Swissair two and BOAC three. As of 1966, the airport had three runways: 15L/33R (), 15R/33L () and 4/22 (). The Tokyo Monorail opened between Haneda and central Tokyo in 1964, in time for the Tokyo Olympics. During 1964, Japan lifted travel restrictions on its citizens, causing passenger traffic at the airport to swell. The introduction of jet aircraft in the 1960s, followed by the Boeing 747 in 1970, also required various facility improvements at Haneda, including extending Runway 4/22 over the water and repurposing part of Runway 15R/33L as an airport apron. A new international arrivals facility opened in June 1970. Around 1961, the government began considering further expansion of Haneda with a third runway and additional apron space, but forecast that the expansion would only meet capacity requirements for about ten years following completion. In 1966, the government decided to build a new airport for international flights. In 1978, Narita Airport opened, taking over almost all international service in the Greater Tokyo Area, and Haneda became a domestic airport.
Domestic era (1978–2010)
While most international flights moved from Haneda to Narita in 1978, airlines of the Republic of China (Taiwan) remained at Haneda for many years due to the ongoing political conflict between Taiwan and the People's Republic of China (mainland China) and the danger of potential conflict if carriers of both nations crossed paths at any Japanese airport. Taipei and Honolulu flights from Haneda were served by China Airlines and were the airport's only international routes until the early 2000s. The Transport Ministry released an expansion plan for Haneda in 1983 under which the airport would be expanded onto new landfill in Tokyo Bay, with the aim of increasing capacity, reducing noise, and making use of the large amount of garbage generated by Tokyo. In July 1988, a new runway opened on the landfill. In September 1993, the old airport terminal was replaced by a new West Passenger Terminal, nicknamed "Big Bird", which was built farther out on the landfill. New runways 16L/34R (parallel) and 4/22 (cross) were completed in March 1997 and March 2000, respectively. A new international terminal opened next to the domestic terminal in March 1998. Taiwan's second major airline, EVA Air, joined China Airlines at Haneda in 1999. All Taiwan flights were moved to Narita in 2002, and Haneda–Honolulu services ceased. In 2003, JAL, ANA, Korean Air and Asiana began service to Gimpo Airport in Seoul, providing a "scheduled charter" city-to-city service. In 2004, Terminal 2 opened at Haneda for ANA and Air Do; the 1993 terminal, now known as Terminal 1, became the base for JAL, Skymark and Skynet Asia Airways, and JAL expanded its footprint into the northern wing of the terminal. In October 2006, Japanese Prime Minister Shinzo Abe and Chinese Premier Wen Jiabao reached an informal agreement to launch bilateral talks regarding an additional city-to-city service between Haneda and Shanghai Hongqiao International Airport. On 25 June 2007, the two governments concluded an agreement allowing the Haneda–Hongqiao service to commence in October 2007. In August 2015, Haneda also began flight services to Shanghai's other airport, Shanghai Pudong International Airport, most of whose Tokyo flights operate from Narita International Airport; as a result, there is no longer a city-to-city service between Tokyo and Hongqiao Airport, as flights between Haneda and Shanghai are focused at Pudong Airport. In December 2007, Japan and the People's Republic of China reached a basic agreement on opening charter services between Haneda and Beijing Nanyuan Airport. However, because of difficulties in negotiating with the Chinese military operators of Nanyuan, the first charter flights in August 2008 (coinciding with the 2008 Summer Olympics) used Beijing Capital International Airport instead, as did subsequent scheduled charters to Beijing. In June 2007, Haneda gained the right to host international flights that depart between 8:30 pm and 11:00 pm and arrive between 6 am and 8:30 am. The airport allows departures and arrivals between 11 pm and 6 am, as Narita Airport is closed during these hours. Macquarie Bank and Macquarie Airports owned a 19.9% stake in Japan Airport Terminal until 2009, when they sold their stake back to the company.

Second international era (2010–present)
A third terminal for international flights was completed in October 2010.
The cost to construct the five-story terminal building and attached 2,300-car parking deck was covered by a private finance initiative process, revenues from duty-free concessions and a facility use charge of ¥2,000 per passenger. Both the Tokyo Monorail and the Keikyū Airport Line added stops at the new terminal, and an international air cargo facility was constructed nearby. The fourth runway (05/23), called the D Runway, was also completed in 2010, having been constructed via land reclamation to the south of the existing airfield. This runway was designed to increase Haneda's operational capacity from 285,000 movements to 407,000 movements per year, permitting increased frequencies on existing routes as well as routes to new destinations. In particular, Haneda would offer additional slots to handle 60,000 overseas flights a year (30,000 during the day and 30,000 during late night and early morning hours). In May 2008, the Japanese Ministry of Transport announced that international flights would be allowed between Haneda and any overseas destination, provided that they operated between 11 pm and 7 am. The Ministry of Transport originally planned to allocate a number of the newly available landing slots to international flights no longer than the distance to Ishigaki, the longest domestic route operating from Haneda. 30,000 annual international slots became available upon the opening of the International Terminal, the current Terminal 3, in October 2010, and were allocated to government authorities in several countries for further allocation to airlines. While service to Seoul, Taipei, Shanghai and other regional destinations continued to be allowed during the day, long-haul services were initially limited to overnight hours. Many long-haul services from Haneda struggled, such as British Airways' service to London (temporarily suspended and then restored on a less-than-daily basis before becoming a daily daytime service) and Air Canada's service to Vancouver (announced but never commenced; Air Canada instead placed its code on ANA's Haneda–Vancouver flight). Delta Air Lines replaced its initial service to Detroit with service to Seattle before cancelling the service entirely in favor of daytime services to Los Angeles and Minneapolis (although both the Detroit and the Seattle services have since resumed as daytime services). In October 2013, American Airlines announced the cancellation of its service between Haneda and New York JFK, stating that it was "quite unprofitable" owing to the schedule constraints at Haneda. Haneda's new International Terminal received numerous complaints from passengers using it during night hours. One complaint was the lack of amenities in the building, as most restaurants and shops are closed at night. Another was the absence of affordable public transportation at night: the Keikyu Airport Line, Tokyo Monorail and most bus operators stop running services out of Haneda by midnight, so passengers landing late at night are forced to continue to their destinations by car or taxi. A Haneda spokesperson said that the airport would work with transportation operators and the government to improve the situation. Daytime international slots were allocated in October 2013. 
In the allocation among Japanese carriers, All Nippon Airways argued that it should receive more international slots than Japan Airlines due to JAL's recent government-supported bankruptcy restructuring, and ultimately won 11 daily slots to JAL's five. Nine more daytime slot pairs were allocated for service to the United States in February 2016. They were intended to be allocated along with the other daytime slots, but allocation talks stalled in 2014, leading the Japanese government to release these slots for charter services to other countries. The new daytime slots led to increased flight capacity between Tokyo and many Asian markets, but did not have a major effect on capacity between Japan and Europe, as several carriers simply transferred flights from Narita to Haneda (most notably ANA and Lufthansa services to Germany, which almost entirely shifted to Haneda). In an effort to combat this effect, the Ministry of Land, Infrastructure and Transport gave non-binding guidance to airlines that any new route at Haneda should not lead to the discontinuation of a route at Narita, although it was possible for airlines to meet this requirement through cooperation with a code-sharing partner (for instance, ANA moved its London flight to Haneda while maintaining a code share on Virgin Atlantic's Narita–London flight). An expansion of the new international terminal was completed at the end of March 2014. The expansion included a new 8-gate pier to the northwest of the existing terminal, an expansion of the adjacent apron with four new aircraft parking spots, a hotel inside the international terminal, and expanded check-in, customs/immigration/quarantine and baggage claim areas. The Ministry of Land, Infrastructure and Transport constructed a new road tunnel between Terminals 1/2 and Terminal 3 to shorten the connection time; construction began in 2015 and concluded in 2020. In addition to its international slot restrictions, Haneda remains subject to domestic slot restrictions; domestic slots are reallocated by MLIT every five years, and each slot is valued at 2–3 billion yen in annual income. Haneda Innovation City, a new business hub, was built on the site of the old terminal near Tenkūbashi Station and opened on November 16, 2023. Facilities Haneda has four runways, arranged in two parallel pairs. The critical facilities of the airport, such as runways, taxiways and aprons, are managed by the Ministry of Land, Infrastructure, Transport and Tourism. The Safety Promotion Center is a museum and educational center operated by Japan Airlines to promote airline safety. Due to the airport's position between Yokota Air Base and NAF Atsugi to the west, Narita International Airport to the east, and densely populated areas of Tokyo and Kanagawa to the immediate north and west, most Haneda flights arrive and depart using circular routes over Tokyo Bay. During north wind operations (60% of the time), aircraft arrive from the south on 34L and 34R and depart to the east from 34R and 05. During south wind operations (40% of the time), aircraft depart to the south from 16L and 16R, as well as 22 between 15:00 and 18:00, and arrive either on a high-angle approach from the north on 16L and 16R over west-central Tokyo (15:00 to 18:00 only) or from the east on 22 and 23 over Tokyo Bay (all other times). Haneda Airport has three passenger terminals with a total of 71 gates with jet bridges. Terminals 1 and 2 are connected by an underground walkway. 
A free inter-terminal shuttle bus connects all terminals on the landside. Terminal 1 and the domestic flight areas of Terminal 2 are only open from 5:00 am to 12:00 am. Terminal 3 and the international flight area of Terminal 2 are open 24 hours a day. Terminal 1 Terminal 1, nicknamed the "Big Bird", opened in 1993, replacing the smaller 1970 terminal complex, which was located where the current Terminal 3 stands. It is exclusively used for domestic flights within Japan and is served by Japan Airlines, Skymark Airlines, and StarFlyer. The terminal has 23 gates with jet bridges and is managed by Japan Airport Terminal Co. The linear building features a six-story restaurant, shopping area and conference rooms in its center section and a large rooftop observation deck with an open-air rooftop café. The terminal has gates 1 through 24 assigned for jet bridges and gates 31–40 and 84–90 assigned for ground boarding by bus. Terminal 2 Terminal 2 opened on December 1, 2004. The construction of Terminal 2 was financed by levying a ¥170 (from 1 April 2011) passenger service facility charge on tickets, the first domestic Passenger Service Facilities Charge (PSFC) in Japan. This terminal is also managed by Japan Airport Terminal Co. Terminal 2 is served by All Nippon Airways, Air Do, and Solaseed Air for their domestic flights. On March 29, 2020, some international flights operated by All Nippon Airways were relocated to Terminal 2 after the addition of international departure halls and CIQ facilities (Customs, Immigration, Quarantine) in preparation for the 2020 Summer Olympics in Tokyo. However, the international departures and check-in hall was closed indefinitely on April 11, 2020, less than two weeks after its opening, due to the COVID-19 pandemic. International flights at Terminal 2 resumed from 19 July 2023 with the easing of COVID-19 restrictions and border controls. The terminal has 27 gates with jet bridges, and features an open-air rooftop restaurant, a six-story shopping area with restaurants and the 387-room Haneda Excel Hotel Tokyu. The terminal has gates 51 through 73 assigned with jet bridges (gates 51 to 65 for domestic flights, gates 66 to 70 for domestic or international flights, gates 71 to 73 for international flights), gates 46–48 in a satellite building, and gates 500 through 511 (for domestic flights) and gates 700 through 702 (for international flights) assigned for ground boarding by bus. Terminal 3 Terminal 3, formerly known as the International Terminal, opened on October 21, 2010 (occupying the site of the former 1970 terminal complex), replacing the much smaller 1998 international terminal adjacent to Terminal 2. The terminal serves most of the airport's international flights, with the exception of some All Nippon Airways flights departing from Terminal 2. The first two long-haul flights were scheduled to depart after midnight on October 31, 2010, from the new terminal, but both flights departed ahead of schedule, before midnight on October 30. Terminal 3 is managed by Tokyo International Air Terminal Co. (TIAT). Terminal 3 has 20 gates with jet bridges, and has airline lounges operated by oneworld members Japan Airlines and Cathay Pacific, Star Alliance member All Nippon Airways, and SkyTeam member Delta Air Lines. The terminal has gates 105–114 and 140–149 assigned with jet bridges and gates 131 through 139 assigned for ground boarding by bus. Of these, gate 107 has three jet bridges, technically enabling Haneda to handle the Airbus A380. Even so, no A380 services are regularly scheduled at Haneda, due to wake turbulence concerns during busy hours. 
The International Terminal was renamed Terminal 3 on March 14, 2020, as Terminal 2 began handling some international flights operated by All Nippon Airways from March 29, 2020. Airlines and destinations The following airlines operate scheduled passenger flights at Haneda Airport: Statistics Source: Japanese Ministry of Land, Infrastructure, Transport and Tourism Busiest domestic routes (2024) Number of landings Number of passengers Cargo volume in tonnes On-time performance In 2022, Haneda Airport was the most punctual international airport in the world, with the fewest delays. Flights departing Haneda had a 90.3% on-time departure rate across 373,264 total flights, according to aviation analytics firm Cirium. Ground transport Haneda Airport is served by the Keikyu Airport Line and Tokyo Monorail. In addition, East Japan Railway Company's Haneda Airport Access Line is under construction and will connect Terminals 1 and 2 to central Tokyo by 2031. The airport is bisected by the Shuto Expressway Bayshore Route and Japan National Route 357, while Shuto Expressway Route 1 and Tokyo Metropolitan Route 311 (Kampachi-dori Ave) run along the western perimeter. Tamagawa Sky Bridge connects the airport with Japan National Route 409 and Shuto Expressway Route K6 to the southwest across the Tama River. The airport has five parking garages. Scheduled bus service to various points in the Kanto region is provided by Airport Transport Service (Airport Limousine) and Keihin Kyuko Bus. Tokyo City Air Terminal, Shinjuku Expressway Bus Terminal and Yokohama City Air Terminal are major limousine bus terminals. Emirates operates bus services to Shinagawa Station and Tokyo Station. Keisei runs direct suburban trains (called "Access Express") between Haneda and Narita in 93 minutes. There are also direct buses between the airports operated by Airport Limousine Bus; the journey takes 65–85 minutes or longer depending on traffic. Accidents and incidents 24 August 1938: two civilian aircraft originating from Haneda, one operated by Japan Air Transport and the other by Japan Flight School, collided with each other in mid-air. All five crew members aboard the two aircraft died, as did 80 people on the ground in the Ōmori area of Tokyo. In the span of a month in 1966, three accidents occurred at, or on flights inbound to or outbound from, Haneda. 4 February 1966: All Nippon Airways Flight 60, a Boeing 727-81, crashed into Tokyo Bay short of Haneda in clear weather conditions while on an evening approach. All 133 passengers and crew were killed. The accident held the death toll record for a single-plane accident until 1969. 4 March 1966: Canadian Pacific Air Lines Flight 402, a Douglas DC-8-43 registered CF-CPK, descended below the glide path and struck the approach lights and a seawall during a night landing attempt in poor visibility. The flight had departed Hong Kong Kai Tak Airport and had almost diverted to Taipei due to the poor weather at Haneda. Of the 62 passengers and 10 crew, only 8 passengers survived. On 5 March 1966, less than 24 hours after the Canadian Pacific crash, BOAC Flight 911, a Boeing 707-436 registered G-APFE, broke up in flight en route from Haneda Airport to Hong Kong Kai Tak Airport, on a segment of an around-the-world flight. The bad weather that had caused the Canadian Pacific crash the day before also caused exceptionally strong winds around Mt. 
Fuji, and the BOAC jet encountered severe turbulence that caused the aircraft to break up in mid-air near the city of Gotemba, Shizuoka Prefecture, killing all 113 passengers and 11 crew. The wreckage was scattered over a long debris field. The aircraft carried no cockpit voice recorder and the crew made no distress calls, but investigators found an 8 mm film shot by one of the passengers that, when developed, confirmed the accident was consistent with an in-flight breakup and loss of control caused by severe turbulence. A well-known photograph shows the BOAC aircraft taxiing out for its final takeoff past the still-smouldering wreckage of the Canadian Pacific DC-8. 26 August 1966: A Japan Air Lines Convair 880, leased from Japan Domestic Airlines and on a training flight, crashed on takeoff: after the nose lifted off, the aircraft yawed to the left, ran off the runway, and shed all of its engines as well as the nose and left main gear before catching fire. All five occupants died. The cause of the yaw was never determined. 17 March 1977: a Boeing 727-81 flight departing Haneda for Sendai was hijacked by a yakuza member shortly after takeoff. After the hijacker fired his pistol, the aircraft quickly returned to the airport; the hijacker then locked himself inside a lavatory and killed himself. 9 February 1982: Japan Air Lines Flight 350, a McDonnell Douglas DC-8-61, crashed on approach in shallow water short of the runway when the captain, experiencing a mental aberration, deliberately engaged the thrust-reversers for two of the four engines. Twenty-four passengers were killed. 12 August 1985: Japan Air Lines Flight 123, a Boeing 747-100SR, lost its rear pressure bulkhead and vertical stabilizer 12 minutes after takeoff due to improperly repaired tailstrike damage that had occurred seven years earlier. The plane flew for 32 minutes until it crashed into Mount Takamagahara. 520 of the 524 passengers and crew on board were killed, making the crash the deadliest single-aircraft accident of all time. 23 July 1999: All Nippon Airways Flight 61 was hijacked shortly after takeoff. The hijacker killed the captain before he was subdued; the aircraft landed safely. 27 May 2016: Korean Air Flight 2708, a Boeing 777-3B5 bound for Gimpo Airport, suffered an engine fire as it was taking off from Runway 34R. The takeoff was aborted and all passengers and crew aboard were swiftly evacuated. Investigators later attributed the fire to an uncontained engine failure resulting from a maintenance oversight. 10 June 2023: Thai Airways International Flight 683, an Airbus A330-300 due to depart Haneda for Bangkok–Suvarnabhumi, collided on the ground with EVA Air Flight 189, an Airbus A330-300 headed for Taipei–Songshan. No injuries were reported, but both aircraft sustained minor damage, and the collision forced one of Haneda's four runways to close for approximately two hours. 2 January 2024: A ground collision occurred between Japan Airlines Flight 516, an Airbus A350-941 arriving from Sapporo–Chitose, and a Japan Coast Guard De Havilland Canada DHC-8-315. All 379 occupants aboard the Japan Airlines flight were evacuated, while five of the six occupants aboard the Coast Guard aircraft were killed. Both aircraft were written off.
Technology
Asia
null
19973790
https://en.wikipedia.org/wiki/Hawksbill%20sea%20turtle
Hawksbill sea turtle
The hawksbill sea turtle (Eretmochelys imbricata) is a critically endangered sea turtle belonging to the family Cheloniidae. It is the only extant species in the genus Eretmochelys. The species has a global distribution that is largely limited to tropical and subtropical marine and estuary ecosystems. The appearance of the hawksbill is similar to that of other marine turtles. In general, it has a flattened body shape, a protective carapace, and flipper-like limbs, adapted for swimming in the open ocean. E. imbricata is easily distinguished from other sea turtles by its sharp, curving beak with prominent tomium, and the saw-like appearance of its shell margins. Hawksbill shells slightly change colors, depending on water temperature. While this turtle lives part of its life in the open ocean, it spends more time in shallow lagoons and coral reefs. Primarily as a result of human fishing practices, the World Conservation Union classifies E. imbricata as critically endangered. Hawksbill shells were the primary source of tortoiseshell material used for decorative purposes. The Convention on International Trade in Endangered Species (CITES) regulates the international trade of hawksbill sea turtles and products derived from them. Taxonomy Linnaeus described the hawksbill sea turtle as Testudo imbricata in 1766, in the 12th edition of his Systema Naturae. In 1843, Austrian zoologist Leopold Fitzinger moved it into the genus Eretmochelys. In 1857, the species was temporarily misdescribed as Eretmochelys imbricata squamata. Neither the IUCN nor the United States Endangered Species Act assessment processes recognize any formal subspecies, but instead recognize one globally distributed species with populations, subpopulations, or regional management units. Fitzinger derived the genus name Eretmochelys from the Ancient Greek roots eretmo and chelys, corresponding to "oar" and "turtle", respectively, in reference to the turtles' oar-like front flippers. The species name imbricata is Latin, corresponding to the English term imbricate, in reference to the turtles' shingle-like, overlapping carapace scutes. Description Adult hawksbill sea turtles typically grow to about 1 m (3 ft) in length, weighing around 80 kg (180 lb) on average. The heaviest hawksbill ever captured weighed 127 kg (280 lb). The turtle's shell, or carapace, has an amber background patterned with an irregular combination of light and dark streaks, with predominantly black and mottled-brown colors radiating to the sides. Several characteristics of the hawksbill sea turtle distinguish it from other sea turtle species. Its elongated, tapered head ends in a beak-like mouth (from which its common name is derived), and its beak is more sharply pronounced and hooked than those of other sea turtles. The hawksbill's forelimbs have two visible claws on each flipper. A readily distinguished characteristic of the hawksbill is the pattern of thick scutes that make up its carapace. While its carapace has five central scutes and four pairs of lateral scutes like several members of its family, the posterior scutes of E. imbricata overlap in such a way as to give the rear margin of its carapace a serrated look, similar to the edge of a saw or a steak knife. The turtle's carapace can reach almost 1 m (3 ft) in length. The hawksbill appears to frequently employ its sturdy shell to insert its body into tight spaces in reefs. Because hawksbills crawl with an alternating gait, the tracks they leave in the sand are asymmetrical. In contrast, the green sea turtle and the leatherback turtle crawl with a more symmetrical gait. 
Due to its consumption of venomous cnidarians, hawksbill sea turtle flesh can become toxic. The hawksbill is biofluorescent and is the first reptile recorded with this characteristic. It is unknown if the effect is due to the turtle's diet, which includes biofluorescent organisms like the hard coral Physogyra lichtensteini. Males have more intense pigmentation than females, and a behavioral role for these differences is speculated. Distribution Hawksbill sea turtles have a wide range, found predominantly in tropical reefs of the Indian, Pacific, and Atlantic Oceans. Of all the sea turtle species, E. imbricata is the one most associated with warm tropical waters. Two significant subpopulations are known, in the Atlantic and Indo-Pacific. Atlantic subpopulation In the Atlantic, hawksbill populations range as far west as the Gulf of Mexico and as far southeast as the Cape of Good Hope in South Africa. They live off the Brazilian coast (specifically Bahia, Fernando de Noronha). Along the East Coast of the United States, hawksbill sea turtles range from Virginia to Florida. In Florida, according to the Florida Fish and Wildlife Conservation Commission, hawksbills are found primarily on reefs in the Florida Keys and along the southeastern Atlantic coast. Several major nesting sites are found in coastal Palm Beach, Broward, and Dade counties. The Florida Hawksbill Project is a comprehensive research and conservation program to study and protect the region's hawksbill sea turtles and the habitats in which they live. Within the scope of this project, numerous studies have been undertaken to characterize the hawksbill aggregations found in southeast Florida waters, and educational programs have been developed to engage the local dive community in the protection of hawksbill sea turtles and coral reef habitats. The program is hosted by the National Save the Sea Turtle Foundation, located in Fort Lauderdale, Florida. Throughout their global range, hawksbill turtles are known to closely associate with coral reef habitats, mostly due to their preference for eating sponges and corals. Due to the large extent of Florida's barrier reefs (approximately 350 linear miles), the Hawksbill Project focuses on representative sites in the northern, central, and southern sections of the Southeast Florida Reef Tract: the barrier reefs of northern Palm Beach County, the patch reefs of the northern Keys, and the finger reefs of Key West are the primary locations for its sampling efforts. In the Caribbean, the main nesting beaches are in the Lesser Antilles, Barbados, Guadeloupe, Tortuguero in Costa Rica, and the Yucatan. They feed in the waters off Cuba and around Mona Island near Puerto Rico, among other places. Indo-Pacific subpopulation In the Indian Ocean, hawksbills are a common sight along the east coast of Africa. They are found in the seas surrounding Madagascar and Mozambique, and around island groups like Primeiras e Segundas, which include the turtle-protection island of Ilha do Fogo. Hawksbills are also common along the southern Asian coast, including the Persian Gulf, the Red Sea, and the coasts of the Indian subcontinent and Southeast Asia. They are present across the Malay Archipelago and northern Australia. Their Pacific range is limited to the ocean's tropical and subtropical regions. In the west, it extends from the southwestern tips of the Korean Peninsula and the Japanese Archipelago south to northern New Zealand. 
The Philippines hosts several nesting sites, including the island of Boracay and Punta Dumalag in Davao City. Dahican Beach in Mati City, Davao Oriental, hosts one of the most important hatcheries of its kind in the Philippines, serving hawksbills alongside olive ridley sea turtles. A small group of islands in the southwest of the archipelago is named the "Turtle Islands" because two species of sea turtles nest there: the hawksbill and the green sea turtle. In January 2016, a juvenile was seen in the Gulf of Thailand. A 2018 article by The Straits Times reported that around 120 juvenile hawksbill turtles had recently hatched at Pulau Satumu, Singapore. Commonly found in Singapore waters, hawksbill turtles have returned to areas such as East Coast Park and Pulau Satumu to nest. In Hawaii, hawksbills mostly nest on the "main" islands of Oahu, Maui, Molokai, and Hawaii. In Australia, hawksbills are known to nest on Milman Island in the Great Barrier Reef. Hawksbill sea turtles nest as far west as Cousine Island in the Seychelles, where the species has been legally protected since 1994, and the population is showing some recovery. The Seychelles' islands and islets, such as Aldabra, are popular feeding grounds for immature hawksbills. Eastern Pacific subpopulation In the eastern Pacific, hawksbills are known to occur from the Baja Peninsula in Mexico south along the coast to southern Peru. Nonetheless, as recently as 2007, the species was considered largely extirpated in the region. Important remnant nesting and foraging sites have since been discovered in Mexico, El Salvador, Nicaragua, and Ecuador, providing new research and conservation opportunities. In contrast to their habits in other parts of the world, where hawksbills primarily inhabit coral reefs and rocky substrate areas, in the eastern Pacific they tend to forage and nest principally in mangrove estuaries, such as those present in the Bahia de Jiquilisco (El Salvador), Gulf of Fonseca (Nicaragua, El Salvador, and Honduras), Estero Padre Ramos (Nicaragua), and the Gulf of Guayaquil (Ecuador). Multi-national initiatives, such as the Eastern Pacific Hawksbill Initiative, are currently pushing efforts to research and conserve the population, which remains poorly understood. Habitat and feeding Habitat Adult hawksbill sea turtles are primarily found in tropical coral reefs. They are usually seen resting in caves and ledges in and around these reefs throughout the day. As a highly migratory species, they inhabit a wide range of habitats, from the open ocean to lagoons and even mangrove swamps in estuaries. Little is known about the habitat preferences of early life-stage E. imbricata; like other young sea turtles, they are assumed to be completely pelagic, remaining at sea until they mature. Feeding While they are omnivorous, sea sponges are their principal food; they constitute 70–95% of the turtles' diets. However, like many spongivores, they feed only on select species, ignoring many others. Caribbean populations feed primarily on the orders Astrophorida, Spirophorida, and Hadromerida in the class Demospongiae. 
Aside from sponges, hawksbills feed on algae, marine plants (seagrasses), woody plant remains, mangrove fruits and seeds, cnidarians (comb jellies and other jellyfish, hydrozoans, hard corals, corallimorphs, zoanthids, and sea anemones), bryozoans, mollusks (squid, snails, nudibranchs, bivalves, and tusk shells), echinoderms (sea cucumbers and sea urchins), tunicates, fish and their eggs, crustaceans, and arthropods (crabs, lobsters, barnacles, copepods, and beetles). They also feed on the dangerous, jellyfish-like hydrozoan known as the Portuguese man o' war (Physalia physalis). Hawksbills close their unprotected eyes when they feed on these cnidarians, and the man o' war's stinging cells cannot penetrate the turtles' armored heads. Hawksbills are highly resistant to the toxins of their prey. Some of the sponges they eat, such as Aaptos aaptos, Chondrilla nucula, Tethya actinia, Spheciospongia vesparium, and Suberites domuncula, are highly (often lethally) toxic to other organisms. In addition, hawksbills choose sponge species with significant numbers of siliceous spicules, such as Ancorina, Geodia (G. gibberosa), Ecionemia, and Placospongia. Life history Less is known about the life history of hawksbills than about several other sea turtle species. Their life history may be divided into three phases: (i) an early life history phase, from approximately 4–30 cm straight carapace length; (ii) a benthic phase, when the immature turtles recruit to foraging areas; and (iii) a reproductive phase, when individuals reach sexual maturity and begin periodically migrating to breeding grounds. The early life history phase is not as well geographically resolved as in other sea turtle species. This phase appears to vary across ocean regions and may occur in both pelagic and nearshore waters, possibly lasting from 0–4 years of age. One study of the central Pacific Ocean population used bomb radiocarbon (14C) dating and von Bertalanffy growth models to estimate that hawksbills reach sexual maturity at ~72 cm straight carapace length and about 29 years of age (range 23–36 years). Hawksbills show a degree of site fidelity after recruiting to the benthic phase; however, movement to other similar habitats is possible. Breeding Hawksbills mate biannually in secluded lagoons off their nesting beaches on remote islands throughout their range. The most significant nesting beaches are in Mexico, the Seychelles, Indonesia, Sri Lanka, and Australia. The mating season for Atlantic hawksbills usually spans April to November. Indian Ocean populations, such as the Seychelles hawksbill population, mate from September to February. After mating, females drag their heavy bodies high onto the beach during the night. They clear an area of debris and dig a nesting hole using their rear flippers, then lay clutches of eggs and cover them with sand. Caribbean and Florida nests of E. imbricata typically contain around 140 eggs. After the hours-long process, the female returns to the sea. Their nests can be found on beaches in about 60 countries. Hatchlings emerge at night after around two months of incubation. These newly emergent hatchlings are dark-colored, with heart-shaped carapaces only a few centimetres long. They instinctively crawl into the sea, attracted by the moon's reflection on the water (a cue that can be disrupted by artificial light sources such as street lamps). While they emerge under the cover of darkness, hatchlings that do not reach the water by daybreak are preyed upon by shorebirds, shore crabs, and other predators. 
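The central Pacific maturity estimate mentioned above comes from fitting a von Bertalanffy growth function to length-at-age data. In its standard form (the symbols below follow the usual convention; the study's fitted parameter values are not reproduced here), the curve is

L(t) = L_\infty \left(1 - e^{-k (t - t_0)}\right)

where L(t) is the straight carapace length at age t, L_\infty is the asymptotic maximum length, k is the growth coefficient, and t_0 is the theoretical age at zero length. Inverting the curve gives the age at which a given length is reached, such as the ~72 cm maturity threshold: t = t_0 - \ln(1 - L/L_\infty)/k. 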
Maturity Hawksbills evidently reach maturity after 20 years. Their lifespan is unknown. Like other sea turtles, hawksbills are solitary for most of their lives; they meet only to mate. They are highly migratory. Because of their tough carapaces, adults' only predators are humans, sharks, estuarine crocodiles, octopuses, and some pelagic fish species. A series of biotic and abiotic cues, such as individual genetics, foraging quantity and quality, or population density, may trigger the maturation of the reproductive organs and the production of gametes and thus determine sexual maturity. As with many reptiles, the marine turtles of an aggregation are highly unlikely to all reach sexual maturity at the same size, and thus at the same age. Age at maturity has been estimated at between 10 and 25 years for Caribbean hawksbills. Turtles nesting in the Indo-Pacific region may reach maturity at a minimum of 30 to 35 years. Evolutionary history Within the sea turtles, E. imbricata has several unique anatomical and ecological traits. It is the only primarily spongivorous reptile. Because of this, its evolutionary position is somewhat unclear. Molecular analyses support the placement of Eretmochelys within the taxonomic tribe Carettini, which includes the carnivorous loggerhead and ridley sea turtles, rather than in the tribe Chelonini, which includes the herbivorous green turtle. The hawksbill probably evolved from carnivorous ancestors. Exploitation by humans Throughout the world, hawksbill turtles have been hunted by humans, though it is illegal to capture, kill, and trade hawksbills in many countries today. In some parts of the world, hawksbill turtles and their eggs continue to be exploited as food. As far back as the fifth century BCE, sea turtles, including the hawksbill, were eaten as delicacies in China. Beyond direct consumption for food, many cultures have also exploited hawksbill populations for their ornate carapace shells, known variously as tortoiseshell, turtle shell, and bekko. In China, the hawksbill is called dai mei or dai mao ("tortoise-shell turtle"), and its shell was used to make and decorate a variety of small items, as it was in the West. Along the south coast of Java, stuffed hawksbill turtles are sold in souvenir shops, though numbers have decreased in the last two decades. In Japan, the turtles are harvested for their shell scutes, called bekko in Japanese. Bekko is used in various personal implements, such as eyeglass frames and picks for the shamisen (a traditional Japanese three-stringed instrument). In 1994, Japan stopped importing hawksbill shells from other nations. Prior to this, the Japanese hawksbill shell trade took in a substantial tonnage of raw shells per year. In Europe, hawksbill sea turtle shells were harvested by the ancient Greeks and ancient Romans for jewellery, such as combs, brushes, and rings. More recently, processed shells were regularly available in large amounts in countries including the Dominican Republic and Colombia. Global estimates of the historical exploitation of hawksbills have received recent attention. One pioneering study estimated that, from 1950 to 1992, as many as 1.37 million adult hawksbills were killed in the international tortoiseshell trade alone. With the aid of substantial additional trade data, including official trade records from the imperial Japanese archives, this estimate was recently updated: the international tortoiseshell trade is now thought to have killed approximately 8.98 million hawksbills (range 4.64 to 9.83 million) between 1844 and 1992. 
Most of the trade occurred in the Pacific Ocean basin, and the countries of origin and trade routes bore similarity to what is known of illegal, unreported and unregulated fishing (IUU fishing). Conservation Consensus has determined sea turtles, including E. imbricata, to be at least threatened, because of their slow growth, late maturity, and low reproductive rates. Humans have killed many adult turtles, both accidentally and deliberately. Their existence is further threatened by pollution and by the loss of nesting areas to coastal development. Biologists estimate that the hawksbill population has declined 80 percent in the past 100–135 years. Human and animal encroachment threatens nesting sites, and small mammals dig up the eggs to eat. In the US Virgin Islands, mongooses raid hawksbill nests (along with those of other sea turtles, such as Dermochelys coriacea) right after the eggs are laid. In 1982, the IUCN Red List of Threatened Species first listed E. imbricata as endangered. This endangered status continued through several reassessments in 1986, 1988, 1990, and 1994, until the species was uplisted to critically endangered in 1996. Prior to this, two petitions had challenged its status as an endangered species, claiming the turtle (along with three other species) had several significant stable populations worldwide. These petitions were rejected based on analysis of data submitted by the Marine Turtle Specialist Group (MTSG). The MTSG data showed the worldwide hawksbill sea turtle population had declined by 80% in the three most recent generations, and no significant population increase had occurred as of 1996. CR A2 status was denied, however, because the IUCN did not find sufficient data to show the population likely to decrease by a further 80%. The species (along with the entire Cheloniidae family) has been listed in Appendix I of the Convention on International Trade in Endangered Species. This means commercial international trade (including in parts and derivatives) is prohibited and non-commercial international trade is regulated. Hawksbill turtles are listed in Annex II of the Protocol Concerning Specially Protected Areas and Wildlife to the Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region (SPAW), part of the Cartagena Convention. The United States Fish and Wildlife Service and National Marine Fisheries Service have classified hawksbills as endangered under the Endangered Species Act since 1970. The US government has established several recovery plans for protecting E. imbricata. The Zoological Society of London lists the reptile as an EDGE species, meaning that it is both endangered and highly genetically distinct, and therefore of particular concern for conservation efforts. The World Wildlife Fund Australia (WWF-Australia) has several ongoing projects aimed at protecting the reptile. On Rosemary Island, an island in the Dampier Archipelago off the Pilbara coast of Western Australia, volunteers have been monitoring hawksbill turtles since 1986. In November 2020, a 60-year-old turtle, first tagged in November 1990 and tagged again in 2011, returned to the same location.
Biology and health sciences
Turtles
Animals
19975098
https://en.wikipedia.org/wiki/Pull-apart%20basin
Pull-apart basin
In geology, a basin is a region where subsidence generates accommodation space for the deposition of sediments. A pull-apart basin is a structural basin where two overlapping (en echelon) strike-slip faults or a fault bend create an area of crustal extension undergoing tension, which causes the basin to sink down. Frequently, the basins are rhombic or sigmoidal in shape. Dimensionally, basins are limited to the distance between the faults and the length of overlap. Mechanics and fault configuration The inhomogeneity and structural complexity of continental crust cause faults to deviate from a straight course and frequently cause bends or step-overs in fault paths. Bends and step-overs of adjacent faults become favorable locations for extensional or compressional stress, or for transtensional and transpressional stress if the shear motion is oblique. Pull-apart basins form in extensional to transtensional environments along fault bends or between two adjacent left-lateral faults or two right-lateral faults. The step-over or bend in the fault must be in the same direction as the sense of motion on the fault; otherwise, the area will be subject to transpression. For example, two overlapping left-lateral faults must have a left step-over to create a pull-apart basin. This is illustrated in the accompanying figures. A regional strike-slip fault is referred to as a principal displacement zone (PDZ). Bounding basin sidewall faults connect the tips of the stepping faults to the opposite fault. The tectonic subsidence of strike-slip basins is mainly episodic and short-lived (typically less than 10 Ma) and ends abruptly, with tectonic subsidence rates that are commonly very high (greater than 0.5 km/Ma) compared to all other basin types. Recent sandbox models have shown that the geometry and evolution of pull-apart basins vary greatly between pure strike-slip and transtensional settings. Transtensional settings are believed to generate greater surface subsidence than pure strike-slip alone. Examples Famous localities for continental pull-apart basins are the Dead Sea, the Salton Sea, and the Sea of Marmara. Pull-apart basins are amenable to research because sediments deposited in the basin provide a timeline of activity along the fault. The Salton Trough is an active pull-apart basin located in a step-over between the dextral San Andreas Fault and the Imperial Fault. Displacement on the fault is approximately 6 cm/yr. The current transtensional state generates normal growth faults and some strike-slip motion. The growth faults in the region strike N15°E, have steep dips (~70°), and show vertical displacements of 1–4 mm/yr. Eight large slip events have occurred on these faults, with throw ranging from 0.2 to 1.0 meters. These produce earthquakes greater than magnitude six and are responsible for the majority of extension in the basin and, consequently, for thermal anomalies, subsidence, and the localization of rhyolite buttes such as the Salton Buttes. Economic significance Pull-apart basins represent an important exploration target for oil and gas, porphyry copper mineralisation, and geothermal fields. The Matzen fault system in the Matzen oil field has been recast as extensional grabens produced by pull-apart basins of the Vienna Basin. The Dead Sea has been studied extensively, and thinning of the crust in pull-aparts may generate differential loading and instigate the rise of salt diapirs, a frequent trap for hydrocarbons. 
Likewise, intense deformation and rapid subsidence and deposition in pull-aparts create numerous structural and stratigraphic traps, enhancing their viability as hydrocarbon reservoirs. The shallow extensional regime of pull-apart basins also facilitates the emplacement of felsic intrusive rocks with high copper mineralisation; this setting is believed to be the main structural control on the giant Escondida deposit in Chile. Geothermal fields are located in pull-aparts for a similar reason: the high heat flow associated with rising magmas.
Physical sciences
Tectonics
Earth science
26950550
https://en.wikipedia.org/wiki/Type-cD%20galaxy
Type-cD galaxy
The type-cD galaxy (also cD-type galaxy, cD galaxy) is a galaxy morphology classification, a subtype of type-D giant elliptical galaxy. Characterized by a large halo of stars, they can be found near the centres of some rich galaxy clusters. They are also known as supergiant ellipticals or central dominant galaxies. Characteristics The cD-type is a classification in the Yerkes galaxy classification scheme, one of two Yerkes classifications still in common use, along with D-type. The "c" in "cD" refers to the fact that the galaxies are very large, hence the adjective supergiant, while the "D" refers to the fact that the galaxies appear diffuse. The abbreviation is also frequently read, as a backronym, to mean "central Dominant galaxy". cDs are also frequently considered the largest galaxies. cD galaxies are similar to lenticular galaxies (S0) or elliptical galaxies (E#), but many times larger, some having envelopes that exceed one million light years in radius. They appear elliptical-like, with large low-surface-brightness envelopes which may belong as much to the galaxy cluster as to the cD galaxy itself. It is currently thought that cDs are the result of galaxy mergers. Some cDs have multiple galactic nuclei. cD galaxies are one of the types frequently found to be the brightest cluster galaxy (BCG) of a cluster. Many fossil group galaxies are similar to cD BCG galaxies, leading some to theorize that a cD results when a fossil group forms and a new cluster then accumulates around it. However, cDs themselves are not found as field galaxies, unlike fossil groups. cDs form around 20% of BCGs. Importance Massive galaxies such as supergiant elliptical galaxies are important to understanding the evolution of the Universe, because they, along with other large early-type galaxies, account for half of the Universe's stellar mass, contribute substantially to its chemical enrichment, and provide clues to the star formation history of the Universe. Growth cD galaxies are believed to grow via mergers of galaxies that spiral in to the center of a galaxy cluster, a theory first proposed by Herbert J. Rood in 1965. This "cannibalistic" mode of growth leads to the large diameter and luminosity of the cDs. The second-brightest galaxy in the cluster is usually under-luminous, a consequence of its having been "eaten". Remains of "eaten" galaxies sometimes appear as a diffuse halo of gas and dust, or tidal streams, or undigested off-center nuclei in the cD galaxy. The envelope or halo may also consist of the "intra-cluster light", originating from stars stripped away from their original galaxy, and it can be up to 3 million light years in diameter. It is estimated that the cD galaxy alone contributes 1–7%, depending on the cluster mass, of the total baryon mass within 12.5 virial radii. Dynamical friction Dynamical friction is believed to play an important role in the formation of cD galaxies at the centres of galaxy clusters. This process begins when the motion of a large galaxy in a cluster attracts smaller galaxies and dark matter into a wake behind it. This over-density follows behind the larger galaxy and exerts a constant gravitational force on it, causing it to slow down. As it loses kinetic energy, the large galaxy gradually spirals toward the centre of the cluster. Once there, the stars, gas, dust and dark matter of the large galaxy and its trailing galaxies will join with those of the other galaxies that preceded them in the same fate. 
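The magnitude of this drag can be estimated with Chandrasekhar's dynamical friction formula. In the idealized textbook case of a body of mass M moving at velocity v_M through a uniform, isotropic background of much lighter stars with a Maxwellian velocity distribution (a simplification, not a model of any particular cluster), the deceleration is approximately

\frac{d\vec{v}_M}{dt} \approx -\frac{4\pi G^2 M \rho \ln\Lambda}{v_M^3} \left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}} e^{-X^2}\right] \vec{v}_M, \qquad X = \frac{v_M}{\sqrt{2}\,\sigma},

where ρ is the local background mass density, σ is the velocity dispersion of the background stars, and ln Λ is the Coulomb logarithm. Because the drag scales with M, the most massive galaxies lose orbital energy fastest and are the first to settle into the cluster centre. 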
A giant or supergiant diffuse or elliptical galaxy will result from this accumulation. The centers of merged or merging galaxies can remain recognizable for a long time, appearing as multiple "nuclei" of the cD galaxy. cD clusters Type-cD galaxies are also used to define clusters. A galaxy cluster with a cD at its centre is termed a "cD cluster" or "cD galaxy cluster" or "cD cluster of galaxies". Examples 3C 401 Abell 1201 BCG Abell 1413 BCG ESO 383-76, the large central galaxy of Abell 3571 Holmberg 15A (home to one of the largest black holes currently known) IC 1101, the large central galaxy of the massive cluster Abell 2029. Messier 87, the central galaxy in the Virgo Cluster NeVe 1, the host galaxy of the Ophiuchus Supercluster eruption event, the most energetic outburst known. NGC 1399 in the Fornax Cluster NGC 4889, also known as Caldwell 35, a supergiant class-4 elliptical galaxy and the brightest of the Caldwell objects in the constellation Coma Berenices NGC 6086 NGC 6166 Perseus A QSO 0957, the first identified gravitationally lensed object
Physical sciences
Galaxy classification
Astronomy
26959489
https://en.wikipedia.org/wiki/Tenaille
Tenaille
A tenaille (archaic tenalia) is an advanced defensive work, placed in front of the main defences of a fortress, which takes its name from its resemblance to the lip of a pair of pincers. The word is "from French, literally: tongs, from Late Latin tenācula, pl of tenaculum". In a letter to John Bradshaw, President of the Council of State in London, Oliver Cromwell, writing from Dublin on 16 September 1649, described one such tenaille that played a significant part during the storming of Drogheda. Tenailles were a development in fortification formalised by Vauban, among others. A postern gate was placed low down in the curtain wall, close to the centre, in order to allow the defenders to access the ditches that front the wall. To protect the postern, an outwork, originally V-shaped, was placed in front of the gate, providing an area where the defenders could leave the fortification without being seen or directly shot at. A simple tenaille is shown in the top image to the right; it is the chevron between the two corner bastions. The design also evolved a version in which the tenaille possesses projections at each end, as seen in the middle image to the right. The name was also used for some other V-shaped parts of outworks; the bottom-most image, a priest's cap, has two tenailles. Also shown is another approach to protecting a gate: the roughly triangular outwork seen in the middle of the bottom drawing is a ravelin.
Technology
Fortification
null
21076226
https://en.wikipedia.org/wiki/Plant%20tissue%20culture
Plant tissue culture
Plant tissue culture is a collection of techniques used to maintain or grow plant cells, tissues, or organs under sterile conditions on a nutrient culture medium of known composition. It is widely used to produce clones of a plant in a method known as micropropagation. Different techniques in plant tissue culture may offer certain advantages over traditional methods of propagation, including: The production of exact copies of plants that produce particularly good flowers, fruits, or other desirable traits. To quickly produce mature plants. To produce a large number of plants in a reduced space. The production of multiples of plants in the absence of seeds or necessary pollinators to produce seeds. The regeneration of whole plants from plant cells that have been genetically modified. The production of plants in sterile containers allows them to be moved with greatly reduced chances of transmitting diseases, pests, and pathogens. The production of plants from seeds that otherwise have very low chances of germinating and growing, e.g., orchids and Nepenthes. To clean particular plants of viral and other infections and to quickly multiply these plants as 'cleaned stock' for horticulture and agriculture. Reproduce recalcitrant plants required for land restoration Storage of genetic plant material to safeguard native plant species. Plant tissue culture relies on the fact that many plant parts have the ability to regenerate into a whole plant (cells of those regenerative plant parts are called totipotent cells which can differentiate into various specialized cells). Single cells, plant cells without cell walls (protoplasts), pieces of leaves, stems or roots can often be used to generate a new plant on culture media given the required nutrients and plant hormones. Techniques used for plant tissue culture in vitro Preparation of plant tissue for tissue culture is performed under aseptic conditions under HEPA filtered air provided by a laminar flow cabinet. Thereafter, the tissue is grown in sterile containers, such as Petri dishes or flasks in a growth room with controlled temperature and light intensity. Living plant materials from the environment are naturally contaminated on their surfaces (and sometimes interiors) with microorganisms, so their surfaces are sterilized in chemical solutions (usually alcohol and sodium or calcium hypochlorite) before suitable samples (known as explants) are taken. The sterile explants are then usually placed on the surface of a sterile solid culture medium but are sometimes placed directly into a sterile liquid medium, particularly when cell suspension cultures are desired. Solid and liquid media are generally composed of inorganic salts plus a few organic nutrients, vitamins, and plant hormones. Solid media are prepared from liquid media with the addition of a gelling agent, usually purified agar. The composition of the medium, particularly the plant hormones and the nitrogen source (nitrate versus ammonium salts or amino acids) have profound effects on the morphology of the tissues that grow from the initial explant. For example, an excess of auxin will often result in a proliferation of roots, while an excess of cytokinin may yield shoots. A balance of both auxin and cytokinin will often produce an unorganised growth of cells, or callus, but the morphology of the outgrowth will depend on the plant species as well as the medium composition. 
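As a rough, illustrative sketch of the auxin–cytokinin balance just described (the function name and threshold ratios below are invented for illustration; real responses vary widely by species, explant, and medium), the classic rule of thumb can be written as:

```python
def predict_morphology(auxin_mg_per_l: float, cytokinin_mg_per_l: float) -> str:
    """Toy rule of thumb for explant response to hormone balance.

    Illustrative only: actual thresholds depend on the plant species,
    the explant, and the rest of the medium composition.
    """
    if cytokinin_mg_per_l == 0:
        return "root proliferation likely" if auxin_mg_per_l > 0 else "little response"
    ratio = auxin_mg_per_l / cytokinin_mg_per_l
    if ratio > 10:        # auxin strongly dominant -> roots
        return "root proliferation"
    if ratio < 0.1:       # cytokinin strongly dominant -> shoots
        return "shoot formation"
    return "unorganised callus growth"  # balanced hormones -> callus


# Example: a balanced medium tends to yield callus.
print(predict_morphology(auxin_mg_per_l=1.0, cytokinin_mg_per_l=1.0))
```

The point of the sketch is only that the *ratio* of the two hormone classes, not their absolute amounts alone, steers the outgrowth toward roots, shoots, or callus. 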
As cultures grow, pieces are typically sliced off and subcultured onto new media to allow for growth or to alter the morphology of the culture. The skill and experience of the tissue culturist are important in judging which pieces to culture and which to discard. As shoots emerge from a culture, they may be sliced off and rooted with auxin to produce plantlets which, when mature, can be transferred to potting soil for further growth in the greenhouse as normal plants. Regeneration pathways The specific differences in the regeneration potential of different organs and explants have various explanations. The significant factors include differences in the stage of the cells in the cell cycle, the availability of or ability to transport endogenous growth regulators, and the metabolic capabilities of the cells. The most commonly used tissue explants are the meristematic ends of the plants like the stem tip, axillary bud tip, and root tip. These tissues have high rates of cell division and either concentrate or produce required growth-regulating substances including auxins and cytokinins. Shoot regeneration efficiency in tissue culture is usually a quantitative trait that often varies between plant species and within a plant species among subspecies, varieties, cultivars, or ecotypes. Therefore, tissue culture regeneration can become complicated especially when many regeneration procedures have to be developed for different genotypes within the same species. The three common pathways of plant tissue culture regeneration are propagation from preexisting meristems (shoot culture or nodal culture), organogenesis, and non-zygotic embryogenesis. The propagation of shoots or nodal segments is usually performed in four stages for mass production of plantlets through in vitro vegetative multiplication but organogenesis is a standard method of micropropagation that involves tissue regeneration of adventitious organs or axillary buds directly or indirectly from the explants. Non-zygotic embryogenesis is a noteworthy developmental pathway that is highly comparable to that of zygotic embryos and it is an important pathway for producing somaclonal variants, developing artificial seeds, and synthesizing metabolites. Due to the single-cell origin of non-zygotic embryos, they are preferred in several regeneration systems for micropropagation, ploidy manipulation, gene transfer, and synthetic seed production. Nonetheless, tissue regeneration via organogenesis has also proved to be advantageous for studying regulatory mechanisms of plant development. Choice of explant The tissue obtained from a plant to be cultured is called an explant. Explants can be taken from many different parts of a plant, including portions of shoots, leaves, stems, flowers, roots, single undifferentiated cells, and from many types of mature cells provided they still contain living cytoplasm and nuclei and are able to de-differentiate and resume cell division. This has given rise to the concept of totipotency of plant cells. However, this is not true for all cells or for all plants. In many species explants of various organs vary in their rates of growth and regeneration, while some do not grow at all. The choice of explant material also determines if the plantlets developed via tissue culture are haploid or diploid. Also, the risk of microbial contamination is increased with inappropriate explants. 
The first method involving the meristems and induction of multiple shoots is the preferred method for the micropropagation industry since the risks of somaclonal variation (genetic variation induced in tissue culture) are minimal when compared to the other two methods. Somatic embryogenesis is a method that has the potential to be several times higher in multiplication rates and is amenable to handling in liquid culture systems like bioreactors. Some explants, like the root tip, are hard to isolate and are contaminated with soil microflora that becomes problematic during the tissue culture process. Certain soil microflora can form tight associations with the root systems, or even grow within the root. Soil particles bound to roots are difficult to remove without injury to the roots that then allows a microbial attack. These associated microflora will generally overgrow the tissue culture medium before there is significant growth of plant tissue. Some cultured tissues are slow in their growth. For them there would be two options: (i) Optimizing the culture medium; (ii) Culturing highly responsive tissues or varieties. Necrosis can spoil cultured tissues. Generally, plant varieties differ in susceptibility to tissue culture necrosis. Thus, by culturing highly responsive varieties (or tissues) it can be managed. Aerial (above soil) explants are also rich in undesirable microflora. However, they are more easily removed from the explant by gentle rinsing, and the remainder usually can be killed by surface sterilization. Most of the surface microflora do not form tight associations with the plant tissue. Such associations can usually be found by visual inspection as a mosaic, de-colorization, or localized necrosis on the surface of the explant. An alternative for obtaining uncontaminated explants is to take explants from seedlings which are aseptically grown from surface-sterilized seeds. The hard surface of the seed is less permeable to the penetration of harsh surface sterilizing agents, such as hypochlorite, so the acceptable conditions of sterilization used for seeds can be much more stringent than for vegetative tissues. Tissue-cultured plants are clones. If the original mother plant used to produce the first explants is susceptible to a pathogen or environmental condition, the entire crop would be susceptible to the same problem. Conversely, any positive traits would remain within the line also. Applications of plant tissue culture Plant tissue culture is used widely in the plant sciences, forestry, and horticulture. Applications include: The commercial production of plants used as potting, landscape, and florist subjects, which uses meristem and shoot culture to produce large numbers of identical individuals. To conserve rare or endangered plant species. A plant breeder may use tissue culture to screen cells rather than plants for advantageous characters, e.g. herbicide resistance/tolerance, detection of chimeras at an early developmental stage. Large-scale growth of plant cells in liquid culture in bioreactors for production of valuable compounds, like plant-derived secondary metabolites and recombinant proteins used as biopharmaceuticals. To cross distantly related species by protoplast fusion and regeneration of the novel hybrid. To rapidly study the molecular basis for physiological, biochemical, and reproductive mechanisms in plants, for example in vitro selection for stress-tolerant plants. 
To cross-pollinate distantly related species and then tissue culture the resulting embryo which would otherwise normally die (Embryo Rescue). For chromosome doubling and induction of polyploidy, for example doubled haploids, tetraploids, and other forms of polyploids. This is usually achieved by the application of antimitotic agents such as colchicine or oryzalin. As a tissue for transformation, followed by either short-term testing of genetic constructs or regeneration of transgenic plants. Certain techniques such as meristem tip culture can be used to produce clean plant material from virused stock, such as sugarcane, potatoes and many species of soft fruit. Production of identical sterile hybrid species can be obtained. Large scale production of artificial seeds through somatic embryogenesis Examples Developing Somaclonal variation Climate resilience - As in Kaveri Vaman (by NRCB , Tamil Nadu) , a Tissue Culture Banana Mutant to withstand heavy rains. Secondary metabolites production - Such as Caffeine from coffea arabica, Nicotine from Nicotiana rustica or phenolic acids from Echinacea purpurea. Induction of flowering - In trees with delay in flowering or Bamboo - where some species flower once in their life but may live longer than 50 years.
Technology
Biotechnology
null
21076839
https://en.wikipedia.org/wiki/Web%20page
Web page
A web page (or webpage) is a document on the Web that is accessed in a web browser. A website typically consists of many web pages linked together under a common domain name. The term "web page" is therefore a metaphor of paper pages bound together into a book. Navigation Each web page is identified by a distinct Uniform Resource Locator (URL). When the user inputs a URL into their web browser, the browser retrieves the necessary content from a web server and then transforms it into an interactive visual representation on the user's screen. If the user clicks or taps a link, the browser repeats this process to load the new URL, which could be part of the current website or a different one. The browser has features, such as the address bar, that indicate which page is displayed. Elements A web page is a structured document. The core element is a text file written in the HyperText Markup Language (HTML). This specifies the content of the page, including images and video. Cascading Style Sheets (CSS) specify the presentation of the page. CSS rules can be in separate text files or embedded within the HTML file. The vast majority of pages have JavaScript programs, enabling a wide range of behavior. The newer WebAssembly language can also be used as a supplement. The most sophisticated web pages, known as web apps, combine these elements in a complex manner. Deployment From the perspective of server-side website deployment, there are two types of web pages: static and dynamic. Static pages are retrieved from the web server's file system without any modification, while dynamic pages must be created by the server on the fly, typically reading from a database to fill out a template, before being sent to the user's browser. An example of a dynamic page is a search engine results page.
Technology
Internet
null
3654207
https://en.wikipedia.org/wiki/List%20of%20Chinese%20star%20names
List of Chinese star names
Chinese star names (Chinese: , xīng míng) are named according to ancient Chinese astronomy and astrology. The sky is divided into star mansions (, xīng xiù, also translated as "lodges") and asterisms (, xīng guān). The ecliptic is divided into four sectors that are associated with the Four Symbols, guardians in Chinese mythology, and further into 28 mansions. Stars around the north celestial pole are grouped into three enclosures (, yuán). The system of 283 asterisms under the Three Enclosures and Twenty-Eight Mansions was established by Chen Zhuo of the Three Kingdoms period, who synthesized ancient constellations and the asterisms created by early astronomers Shi Shen, Gan De and Wuxian. Since the Han and Jin dynasties, stars have been given reference numbers within their asterisms in a system similar to the Bayer or Flamsteed designations, so that individual stars can be identified. For example, Deneb (α Cyg) is referred to as (Tiān Jīn Sì, the Fourth Star of Celestial Ford). In the Qing dynasty, Chinese knowledge of the sky was improved by the arrival of European star charts. Yixiang Kaocheng, compiled in mid-18th century by then deputy Minister of Rites Ignaz Kögler, expanded the star catalogue to more than 3000 stars. The newly added stars (, zēng xīng) were named as (zēng yī, 1st added star), (zēng èr, 2nd added star) etc. For example, γ Cephei is referred to as (Shào Wèi Zēng Bā, 8th Added Star of Second Imperial Guard). Some stars may have been assigned more than one name due to the inaccuracies of traditional star charts. While there is little disagreement on the correspondence between traditional Chinese and Western star names for brighter stars, many asterisms, in particular those originally from Gan De, were created primarily for astrological purposes and can only be mapped to very dim stars. The first attempt to fully map the Chinese constellations was made by Paul Tsuchihashi in late 19th century. In 1981, based on Yixiang Kaocheng and Yixiang Kaocheng Xubian, the first complete map of Chinese stars and constellations was published by Yi Shitong (伊世同). The list is based on Atlas Comparing Chinese and Western Star Maps and Catalogues by Yi Shitong (1981) and Star Charts in Ancient China by Chen Meidong (1996). In a few cases, meanings of the names are vague due to their antiquity. In this article, the translation by Hong Kong Space Museum is used. Three Enclosures Purple Forbidden Enclosure The Purple Forbidden Enclosure ( Zǐ Wēi Yuán) occupies the region around the north celestial pole and represents the imperial palace. It corresponds to constellations Auriga, Boötes, Camelopardalis, Canes Venatici, Cassiopeia, Cepheus, Draco, Hercules, Leo Minor, Lynx, Ursa Major, and Ursa Minor. Added Stars Supreme Palace Enclosure The Supreme Palace Enclosure (, Tài Wēi Yuán) represents the imperial court. It corresponds to constellations Canes Venatici, Coma Berenices, Leo, Leo Minor, Lynx, Sextans, Ursa Major and Virgo. Added Stars Heavenly Market Enclosure The Heavenly Market Enclosure (, Tiān Shì Yuán) represents the emperor's realm. It corresponds to constellations Aquila, Boötes, Corona Borealis, Draco, Hercules, Ophiuchus, Sagitta, Serpens and Vulpecula. Added Stars Azure Dragon Horn The Horn mansion represents the Dragon's horns. It corresponds to constellations Centaurus, Circinus, Coma Berenices, Hydra, Lupus and Virgo. Added Stars Neck The Neck mansion represents the Dragon's neck. It corresponds to constellations Boötes, Centaurus, Hydra, Libra, Lupus and Virgo. 
Added Stars Root The Root mansion represents the Dragon's chest. It corresponds to constellations Boötes, Centaurus, Hydra, Libra, Lupus, Serpens and Virgo. Added Stars Room The Room mansion represents the Dragon's abdomen. It corresponds to constellations Libra, Lupus, Ophiuchus and Scorpius. Added Stars Heart The Heart mansion represents the Dragon's heart. It corresponds to constellations Lupus, Ophiuchus and Scorpius. Added Stars Tail The Tail mansion represents the Dragon's tail. It corresponds to constellations Ara, Ophiuchus and Scorpius. Added Stars Winnowing Basket The Winnowing Basket mansion is the last of the Azure Dragon mansions. It corresponds to constellations Ara, Ophiuchus and Sagittarius. Added Stars Black Turtle Dipper The Dipper mansion is the first of the Black Turtle mansions. It corresponds to constellations Aquila, Corona Australis, Ophiuchus, Sagittarius, Scutum and Telescopium. Added Stars Ox The Ox mansion corresponds to constellations Aquila, Capricornus, Cygnus, Delphinus, Lyra, Microscopium, Sagitta, Sagittarius and Vulpecula. Its name derives from the Cowherd Star. Added Stars Girl The Girl mansion corresponds to constellations Aquarius, Aquila, Capricornus, Cygnus, Draco and Delphinus. Added Stars Ruins The Ruins mansion (also translated as Emptiness) corresponds to constellations Aquarius, Capricornus, Delphinus, Equuleus, Grus, Microscopium, Pegasus and Piscis Austrinus. Added Stars Rooftop The Rooftop mansion corresponds to constellations Andromeda, Aquarius, Cepheus, Cygnus, Draco, Lacerta, Pegasus, Piscis Austrinus and Vulpecula. Added Stars Encampment The Encampment mansion corresponds to constellations Andromeda, Aquarius, Capricornus, Cassiopeia, Cepheus, Cygnus, Lacerta, Pegasus, Pisces and Piscis Austrinus. Added Stars Wall The Wall mansion corresponds to constellations Andromeda, Cetus, Pegasus and Pisces. Added Stars White Tiger Legs The Legs mansion represents the tail of White Tiger. It corresponds to constellations Andromeda, Cassiopeia, Cetus, Pisces and Triangulum. Added Stars Bond The Bond mansion represents the body of White Tiger. It corresponds to constellations Andromeda, Aries, Cetus, Fornax, Perseus, Pisces and Triangulum. Added Stars Stomach The Stomach mansion represents the body of White Tiger. It corresponds to constellations Aries, Camelopardalis, Cetus, Eridanus, Perseus, Taurus and Triangulum. Added Stars Hairy Head The Hairy Head mansion represents the body of White Tiger. It corresponds to constellations Aries, Cetus, Eridanus, Fornax, Perseus and Taurus. Added Stars Net The Net mansion represents the body of White Tiger. It corresponds to constellations Auriga, Eridanus, Horologium, Lepus, Orion, Perseus and Taurus. Added Stars Turtle Beak The Turtle Beak mansion represents the head of White Tiger. It corresponds to constellations Auriga, Gemini, Lynx, Orion and Taurus. Added Stars Three Stars The Three Stars mansion represents the body of White Tiger. It corresponds to constellations Columba, Eridanus, Lepus, Monoceros and Orion. Added Stars Vermilion Bird The final seven mansions represents the Vermilion Bird, creature of the direction south and the element Fire. Well The Well Mansion corresponds to constellations Auriga, Cancer, Canis Major, Canis Minor, Carina, Columba, Gemini, Monoceros, Orion, Pictor, Puppis and Taurus. Added Stars Ghosts The Ghosts Mansion corresponds to constellations Cancer, Gemini, Hydra, Monoceros, Puppis, Pyxis and Vela. 
Added Stars Willow The Willow Mansion corresponds to constellations Cancer, Hydra and Leo. Added Stars Star The Star Mansion corresponds to constellations Cancer, Hydra, Leo, Leo Minor, Lynx, Sextans. Added Stars Extended Net The Extended Net Mansion corresponds to constellation Hydra. Added Stars Wings The Wings Mansion corresponds to constellations Crater and Hydra. Added Stars Chariot The Chariot Mansion corresponds to constellations Corvus, Crater, Hydra and Virgo. Added Stars Southern asterisms Stars near the south celestial pole had not been catalogued in China until the arrival of western star charts. In the early 17th century, 23 new asterisms were designated during the compilation of the Chongzhen calendar. Added Stars Individual stars with traditional names Names listed above are all enumerations within the respective Chinese constellations. The following stars have traditional proper names. Single star asterisms Proper names of individual stars
Physical sciences
Celestial sphere: General
Astronomy
3656192
https://en.wikipedia.org/wiki/Screw%20axis
Screw axis
A screw axis (helical axis or twist axis) is a line that is simultaneously the axis of rotation and the line along which translation of a body occurs. Chasles' theorem shows that each Euclidean displacement in three-dimensional space has a screw axis, and the displacement can be decomposed into a rotation about and a slide along this screw axis. Plücker coordinates are used to locate a screw axis in space, and consist of a pair of three-dimensional vectors. The first vector identifies the direction of the axis, and the second locates its position. The special case when the first vector is zero is interpreted as a pure translation in the direction of the second vector. A screw axis is associated with each pair of vectors in the algebra of screws, also known as screw theory. The spatial movement of a body can be represented by a continuous set of displacements. Because each of these displacements has a screw axis, the movement has an associated ruled surface known as a screw surface. This surface is not the same as the axode, which is traced by the instantaneous screw axes of the movement of a body. The instantaneous screw axis, or 'instantaneous helical axis' (IHA), is the axis of the helicoidal field generated by the velocities of every point in a moving body. When a spatial displacement specializes to a planar displacement, the screw axis becomes the displacement pole, and the instantaneous screw axis becomes the velocity pole, or instantaneous center of rotation, also called an instant center. The term centro is also used for a velocity pole, and the locus of these points for a planar movement is called a centrode. History The proof that a spatial displacement can be decomposed into a rotation around, and translation along, a line in space is attributed to Michel Chasles in 1830. Recently the work of Giulio Mozzi has been identified as presenting a similar result in 1763. Screw axis symmetry A screw displacement (also screw operation or rotary translation) is the composition of a rotation by an angle φ about an axis (called the screw axis) with a translation by a distance d along this axis. A positive rotation direction usually means one that corresponds to the translation direction by the right-hand rule. This means that if the rotation is clockwise, the displacement is away from the viewer. Except for φ = 180°, we have to distinguish a screw displacement from its mirror image. Unlike for rotations, a righthand and lefthand screw operation generate different groups. The combination of a rotation about an axis and a translation in a direction perpendicular to that axis is a rotation about a parallel axis. However, a screw operation with a nonzero translation vector along the axis cannot be reduced like that. Thus the effect of a rotation combined with any translation is a screw operation in the general sense, with as special cases a pure translation, a pure rotation and the identity. Together these are all the direct isometries in 3D. In crystallography, a screw axis symmetry is a combination of rotation about an axis and a translation parallel to that axis which leaves a crystal unchanged. If φ = for some positive integer n, then screw axis symmetry implies translational symmetry with a translation vector which is n times that of the screw displacement. Applicable for space groups is a rotation by about an axis, combined with a translation along the axis by a multiple of the distance of the translational symmetry, divided by n. This multiple is indicated by a subscript. 
So, 63 is a rotation of 60° combined with a translation of one half of the lattice vector, implying that there is also 3-fold rotational symmetry about this axis. The possibilities are 21, 31, 41, 42, 61, 62, and 63, and the enantiomorphous 32, 43, 64, and 65. Considering a screw axis n, if g is the greatest common divisor of n and m, then there is also a g-fold rotation axis. When screw operations have been performed, the displacement will be , which since it is a whole number means one has moved to an equivalent point in the lattice, while carrying out a rotation by . So 4, 6 and 6 create two-fold rotation axes, while 6 creates a three-fold axis. A non-discrete screw axis isometry group contains all combinations of a rotation about some axis and a proportional translation along the axis (in rifling, the constant of proportionality is called the twist rate); in general this is combined with k-fold rotational isometries about the same axis (k ≥ 1); the set of images of a point under the isometries is a k-fold helix; in addition there may be a 2-fold rotation about a perpendicularly intersecting axis, and hence a k-fold helix of such axes. Screw axis of a spatial displacement Geometric argument Let be an orientation-preserving rigid motion of R3. The set of these transformations is a subgroup of Euclidean motions known as the special Euclidean group SE(3). These rigid motions are defined by transformations of x in R3 given by consisting of a three-dimensional rotation A followed by a translation by the vector d. A three-dimensional rotation A has a unique axis that defines a line L. Let the unit vector along this line be S so that the translation vector d can be resolved into a sum of two vectors, one parallel and one perpendicular to the axis L, that is, In this case, the rigid motion takes the form Now, the orientation preserving rigid motion D* = A(x) + d⊥ transforms all the points of R3 so that they remain in planes perpendicular to L. For a rigid motion of this type there is a unique point c in the plane P perpendicular to L through 0, such that The point C can be calculated as because d⊥ does not have a component in the direction of the axis of A. A rigid motion D* with a fixed point must be a rotation of around the axis Lc through the point c. Therefore, the rigid motion consists of a rotation about the line Lc followed by a translation by the vector dL in the direction of the line Lc. Conclusion: every rigid motion of R3 is the result of a rotation of R3 about a line Lc followed by a translation in the direction of the line. The combination of a rotation about a line and translation along the line is called a screw motion. Computing a point on the screw axis A point C on the screw axis satisfies the equation: Solve this equation for C using Cayley's formula for a rotation matrix where [B] is the skew-symmetric matrix constructed from Rodrigues' vector such that Use this form of the rotation A to obtain which becomes This equation can be solved for C on the screw axis P(t) to obtain, The screw axis of this spatial displacement has the Plücker coordinates . Dual quaternion The screw axis appears in the dual quaternion formulation of a spatial displacement . 
The dual quaternion is constructed from the dual vector defining the screw axis and the dual angle , where φ is the rotation about and d the slide along this axis, which defines the displacement D to obtain, A spatial displacement of points q represented as a vector quaternion can be defined using quaternions as the mapping where d is translation vector quaternion and S is a unit quaternion, also called a versor, given by that defines a rotation by 2θ around an axis S. In the proper Euclidean group E+(3) a rotation may be conjugated with a translation to move it to a parallel rotation axis. Such a conjugation, using quaternion homographies, produces the appropriate screw axis to express the given spatial displacement as a screw displacement, in accord with Chasles’ theorem. Mechanics The instantaneous motion of a rigid body may be the combination of rotation about an axis (the screw axis) and a translation along that axis. This screw move is characterized by the velocity vector for the translation and the angular velocity vector in the same or opposite direction. If these two vectors are constant and along one of the principal axes of the body, no external forces are needed for this motion (moving and spinning]]). As an example, if gravity and drag are ignored, this is the motion of a bullet fired from a rifled gun. Biomechanics This parameter is often used in biomechanics, when describing the motion of joints of the body. For any period of time, joint motion can be seen as the movement of a single point on one articulating surface with respect to the adjacent surface (usually distal with respect to proximal). The total translation and rotations along the path of motion can be defined as the time integrals of the instantaneous translation and rotation velocities at the IHA for a given reference time. In any single plane, the path formed by the locations of the moving instantaneous axis of rotation (IAR) is known as the 'centroid', and is used in the description of joint motion.
Physical sciences
Basics_4
Physics
2694386
https://en.wikipedia.org/wiki/Swietenia
Swietenia
Swietenia is a genus of trees in the chinaberry family, Meliaceae. It occurs natively in the Neotropics, from southern Florida, the Caribbean, Mexico and Central America south to Bolivia. The genus is named for Dutch-Austrian physician Gerard van Swieten (1700–1772). The wood of Swietenia trees is known as mahogany. Overview The genus was introduced into several Asian countries as a replacement source of mahogany timber around the time it was restricted in its native locations in the late 1990s. Trade in Asian grown plantation mahogany is not restricted. Fiji and India are the largest exporters of plantation mahogany and wild mahogany remains commercially unavailable to this day. It is usually taken to consist of three species, geographically separated. They are medium-sized to large trees growing to 20–45 m tall, and up to trunk diameter. The leaves are 10–30 cm long, pinnate, with 3-6 pairs of leaflets, the terminal leaflet absent; each leaflet is 5–15 cm long. The leaves are deciduous to semi-evergreen, falling shortly before the new foliage grows. The flowers are produced in loose inflorescences, each flower small, with five white to greenish-yellowish petals. The fruit is a pear-shaped five-valved capsule 8–20 cm long, containing numerous winged seeds about 5–9 cm long. The three species are poorly defined biologically, in part because they hybridize freely when grown in proximity. Species Formerly placed here now in Meliaceae Chloroxylon swietenia DC. (as S. chloroxylon Roxb.) Khaya senegalensis (Desr.) A.Juss. (as S. senegalensis Desr.) Soymida febrifuga (Roxb.) A.Juss. (as S. febrifuga Roxb.) Toona sureni (Blume) Merr. (as S. sureni Blume) Uses The genus is famed as the supplier of mahogany, at first yielded by Swietenia mahagoni, a Caribbean species, which was so extensively used locally and exported that its trade ended by the 1950s. These days almost all mahogany is yielded by the mainland species, Swietenia macrophylla, although no longer from its native locations due to the restrictions set by CITES (see following.) As a timber, both Swietenia macrophylla and Swietenia mahogoni are both grown in plantations in several Asian countries such as Fiji, Indonesia, India, and Bangladesh and this plantation mahogany timber is the main source of the world's current supply of "genuine mahogany", due to cultivation and trade of it in its native locations being restricted by the Convention On International Trade in Endangered Species of Wild Flora and Fauna (CITES) since the late 1990s. Trade in Swietenia grown and harvested in these non-native locations is not restricted. Species of this genus are only occasionally plantation-grown in Central America, in spite of the good growing conditions and high price of the wood, due to the ubiquitous presence of the mahogany shoot borer moth (also known as the cedar tip moth), Hypsipyla grandella, which damages the form of the tree by killing the terminal shoot, causing excessive branching. Control requires extensive and frequent spraying with pesticides, rendering the genus relatively uneconomic wherever the shoot borer is present. The fruits of Swietenia macrophylla are called "sky fruit", because they seem to hang upwards from the tree. The "sky fruit" concentrate is sold as a natural remedy that is said to improve blood circulation and skin. It is also said to have Viagra-like qualities regarding erectile dysfunction. A somewhat comparable wood is yielded by the related African genus Khaya. 
This is traded as African mahogany and is from the same family as Swietenia. Conservation All species of Swietenia are CITES-listed. Swietenia timber that crosses a border needs its paperwork in order. International environmental organizations such as Greenpeace, Friends of the Earth, and Rainforest Action Network have focused on Swietenia so as to expose illegal traffic in the wood, notably from Brazil.
Biology and health sciences
Sapindales
Plants
2696466
https://en.wikipedia.org/wiki/Demand%20response
Demand response
Demand response is a change in the power consumption of an electric utility customer to better match the demand for power with the supply. Until the 21st century decrease in the cost of pumped storage and batteries, electric energy could not be easily stored, so utilities have traditionally matched demand and supply by throttling the production rate of their power plants, taking generating units on or off line, or importing power from other utilities. There are limits to what can be achieved on the supply side, because some generating units can take a long time to come up to full power, some units may be very expensive to operate, and demand can at times be greater than the capacity of all the available power plants put together. Demand response, a type of energy demand management, seeks to adjust in real-time the demand for power instead of adjusting the supply. Utilities may signal demand requests to their customers in a variety of ways, including simple off-peak metering, in which power is cheaper at certain times of the day, and smart metering, in which explicit requests or changes in price can be communicated to customers. The customer may adjust power demand by postponing some tasks that require large amounts of electric power, or may decide to pay a higher price for their electricity. Some customers may switch part of their consumption to alternate sources, such as on-site solar panels and batteries. In many respects, demand response can be put simply as a technology-enabled economic rationing system for electric power supply. In demand response, voluntary rationing is accomplished by price incentives—offering lower net unit pricing in exchange for reduced power consumption in peak periods. The direct implication is that users of electric power capacity not reducing usage (load) during peak periods will pay "surge" unit prices, whether directly, or factored into general rates. Involuntary rationing, if employed, would be accomplished via rolling blackouts during peak load periods. Practically speaking, summer heat waves and winter deep freezes might be characterized by planned power outages for consumers and businesses if voluntary rationing via incentives fails to reduce load adequately to match total power supply. Background As of 2011, according to the US Federal Energy Regulatory Commission, demand response (DR) was defined as: "Changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to induce lower electricity use at times of high wholesale market prices or when system reliability is jeopardized." DR includes all intentional modifications to consumption patterns of electricity to induce customers that are intended to alter the timing, level of instantaneous demand, or the total electricity consumption. In 2013, it was expected that demand response programs will be designed to decrease electricity consumption or shift it from on-peak to off-peak periods depending on consumers' preferences and lifestyles. In 2016 demand response was defined as "a wide range of actions which can be taken at the customer side of the electricity meter in response to particular conditions within the electricity system such as peak period network congestion or high prices". In 2010, demand response was defined as a reduction in demand designed to reduce peak demand or avoid system emergencies. 
It can be a more cost-effective alternative than adding generation capabilities to meet the peak and occasional demand spikes. The underlying objective of DR is to actively engage customers in modifying their consumption in response to pricing signals. The goal is to reflect supply expectations through consumer price signals or controls and enable dynamic changes in consumption relative to price. In electricity grids, DR is similar to dynamic demand mechanisms to manage customer consumption of electricity in response to supply conditions, for example, having electricity customers reduce their consumption at critical times or in response to market prices. The difference is that demand response mechanisms respond to explicit requests to shut off, whereas dynamic demand devices passively shut off when stress in the grid is sensed. Demand response can involve actually curtailing power used or by starting on-site generation which may or may not be connected in parallel with the grid. This is a quite different concept from energy efficiency, which means using less power to perform the same tasks, on a continuous basis or whenever that task is performed. At the same time, demand response is a component of smart energy demand, which also includes energy efficiency, home and building energy management, distributed renewable resources, and electric vehicle charging. Current demand response schemes are implemented with large and small commercial as well as residential customers, often through the use of dedicated control systems to shed loads in response to a request by a utility or market price conditions. Services (lights, machines, air conditioning) are reduced according to a preplanned load prioritization scheme during the critical time frames. An alternative to load shedding is on-site generation of electricity to supplement the power grid. Under conditions of tight electricity supply, demand response can significantly decrease the peak price and, in general, electricity price volatility. Demand response is generally used to refer to mechanisms used to encourage consumers to reduce demand, thereby reducing the peak demand for electricity. Since electrical generation and transmission systems are generally sized to correspond to peak demand (plus margin for forecasting error and unforeseen events), lowering peak demand reduces overall plant and capital cost requirements. Depending on the configuration of generation capacity, however, demand response may also be used to increase demand (load) at times of high production and low demand. Some systems may thereby encourage energy storage to arbitrage between periods of low and high demand (or low and high prices). Bitcoin mining is an electricity intensive process to convert computer hardware infrastructure, software skills and electricity into electronic currency. Bitcoin mining is used to increase the demand during surplus hours by consuming cheaper power. There are three types of demand response - emergency demand response, economic demand response and ancillary services demand response. Emergency demand response is employed to avoid involuntary service interruptions during times of supply scarcity. Economic demand response is employed to allow electricity customers to curtail their consumption when the productivity or convenience of consuming that electricity is worth less to them than paying for the electricity. 
Ancillary services demand response consists of a number of specialty services that are needed to ensure the secure operation of the transmission grid and which have traditionally been provided by generators. Electricity pricing In most electric power systems, some or all consumers pay a fixed price per unit of electricity independent of the cost of production at the time of consumption. The consumer price may be established by the government or a regulator, and typically represents an average cost per unit of production over a given timeframe (for example, a year). Consumption therefore is not sensitive to the cost of production in the short term (e.g. on an hourly basis). In economic terms, consumers' usage of electricity is inelastic in short time frames since the consumers do not face the actual price of production; if consumers were to face the short run costs of production they would be more inclined to change their use of electricity in reaction to those price signals. A pure economist might extrapolate the concept to hypothesize that consumers served under these fixed-rate tariffs are endowed with theoretical "call options" on electricity, though in reality, like any other business, the customer is simply buying what is on offer at the agreed price. A customer in a department store buying a $10 item at 9.00 am might notice 10 sales staff on the floor but only one occupied serving him or her, while at 3.00 pm the customer could buy the same $10 article and notice all 10 sales staff occupied. In a similar manner, the department store cost of sales at 9.00 am might therefore be 5-10 times that of its cost of sales at 3.00 pm, but it would be far-fetched to claim that the customer, by not paying significantly more for the article at 9.00 am than at 3.00 pm, had a 'call option' on the $10 article. In virtually all power systems electricity is produced by generators that are dispatched in merit order, i.e., generators with the lowest marginal cost (lowest variable cost of production) are used first, followed by the next cheapest, etc., until the instantaneous electricity demand is satisfied. In most power systems the wholesale price of electricity will be equal to the marginal cost of the highest cost generator that is injecting energy, which will vary with the level of demand. Thus the variation in pricing can be significant: for example, in Ontario between August and September 2006, wholesale prices (in Canadian Dollars) paid to producers ranged from a peak of $318 per MW·h to a minimum of - (negative) $3.10 per MW·h. It is not unusual for the price to vary by a factor of two to five due to the daily demand cycle. A negative price indicates that producers were being charged to provide electricity to the grid (and consumers paying real-time pricing may have actually received a rebate for consuming electricity during this period). This generally occurs at night when demand falls to a level where all generators are operating at their minimum output levels and some of them must be shut down. The negative price is the inducement to bring about these shutdowns in a least-cost manner. Two Carnegie Mellon studies in 2006 looked at the importance of demand response for the electricity industry in general terms and with specific application of real-time pricing for consumers for the PJM Interconnection Regional Transmission authority, serving 65 million customers in the US with 180 gigawatts of generating capacity. 
The latter study found that even small shifts in peak demand would have a large effect on savings to consumers and avoided costs for additional peak capacity: a 1% shift in peak demand would result in savings of 3.9%, billions of dollars at the system level. An approximately 10% reduction in peak demand (achievable depending on the elasticity of demand) would result in systems savings of between $8 and $28 billion. In a discussion paper, Ahmad Faruqui, a principal with the Brattle Group, estimates that a 5 percent reduction in US peak electricity demand could produce approximately $35 billion in cost savings over a 20-year period, exclusive of the cost of the metering and communications needed to implement the dynamic pricing needed to achieve these reductions. While the net benefits would be significantly less than the claimed $35 billion, they would still be quite substantial. In Ontario, Canada, the Independent Electricity System Operator has noted that in 2006, peak demand exceeded 25,000 megawatts during only 32 system hours (less than 0.4% of the time), while maximum demand during the year was just over 27,000 megawatts. The ability to "shave" peak demand based on reliable commitments would therefore allow the province to reduce built capacity by approximately 2,000 megawatts. Electricity grids and peak demand response In an electricity grid, electricity consumption and production must balance at all times; any significant imbalance could cause grid instability or severe voltage fluctuations, and cause failures within the grid. Total generation capacity is therefore sized to correspond to total peak demand with some margin of error and allowance for contingencies (such as plants being off-line during peak demand periods). Operators will generally plan to use the least expensive generating capacity (in terms of marginal cost) at any given period, and use additional capacity from more expensive plants as demand increases. Demand response in most cases is targeted at reducing peak demand to reduce the risk of potential disturbances, avoid additional capital cost requirements for additional plants, and avoid use of more expensive or less efficient operating plants. Consumers of electricity will also pay higher prices if generation capacity is used from a higher-cost source of power generation. Demand response may also be used to increase demand during periods of high supply and low demand. Some types of generating plant must be run at close to full capacity (such as nuclear), while other types may produce at negligible marginal cost (such as wind and solar). Since there is usually limited capacity to store energy, demand response may attempt to increase load during these periods to maintain grid stability. For example, in the province of Ontario in September 2006, there was a short period of time when electricity prices were negative for certain users. Energy storage such as pumped-storage hydroelectricity is a way to increase load during periods of low demand for use during later periods. Use of demand response to increase load is less common, but may be necessary or efficient in systems where there are large amounts of generating capacity that cannot be easily cycled down. Some grids may use pricing mechanisms that are not real-time, but easier to implement (users pay higher prices during the day and lower prices at night, for example) to provide some of the benefits of the demand response mechanism with less demanding technological requirements. 
In the UK, Economy 7 and similar schemes that attempt to shift demand associated with electric heating to overnight off-peak periods have been in operation since the 1970s. More recently, in 2006 Ontario began implementing a "smart meter" program that implements "time-of-use" (TOU) pricing, which tiers pricing according to on-peak, mid-peak and off-peak schedules. During the winter, on-peak is defined as morning and early evening, mid-peak as midday to late afternoon, and off-peak as nighttime; during the summer, the on-peak and mid-peak periods are reversed, reflecting air conditioning as the driver of summer demand. As of May 1, 2015, most Ontario electrical utilities have completed converting all customers to "smart meter" time-of-use billing with on-peak rates about 200% and mid-peak rates about 150% of the off-peak rate per kWh. Australia has national standards for Demand Response (AS/NZS 4755 series), which has been implemented nationwide by electricity distributors for several decades, e.g. controlling storage water heaters, air conditioners and pool pumps. In 2016, how to manage electrical energy storage (e.g., batteries) has been added into the series of standards. Load shedding When the loss of load happens (generation capacity falls below the load), utilities may impose load shedding (also known as emergency load reduction program, ELRP) on service areas via targeted blackouts, rolling blackouts or by agreements with specific high-use industrial consumers to turn off equipment at times of system-wide peak demand. Incentives to shed loads Energy consumers need some incentive to respond to such a request from a demand response provider. Demand response incentives can be formal or informal. The utility might create a tariff-based incentive by passing along short-term increases in the price of electricity, or they might impose mandatory cutbacks during a heat wave for selected high-volume users, who are compensated for their participation. Other users may receive a rebate or other incentive based on firm commitments to reduce power during periods of high demand, sometimes referred to as negawatts (the term was coined by Amory Lovins in 1985). For example, California introduced its own ELRP, where upon an emergency declaration enrolled customers get a credit for lowering their electricity use ($1 per kWh in 2021, $2 in 2022). Commercial and industrial power users might impose load shedding on themselves, without a request from the utility. Some businesses generate their own power and wish to stay within their energy production capacity to avoid buying power from the grid. Some utilities have commercial tariff structures that set a customer's power costs for the month based on the customer's moment of highest use, or peak demand. This encourages users to flatten their demand for energy, known as energy demand management, which sometimes requires cutting back services temporarily. Smart metering has been implemented in some jurisdictions to provide real-time pricing for all types of users, as opposed to fixed-rate pricing throughout the demand period. In this application, users have a direct incentive to reduce their use at high-demand, high-price periods. Many users may not be able to effectively reduce their demand at various times, or the peak prices may be lower than the level required to induce a change in demand during short time periods (users have low price sensitivity, or elasticity of demand is low). 
Automated control systems exist, which, although effective, may be too expensive to be feasible for some applications. Smart grid application Smart grid applications improve the ability of electricity producers and consumers to communicate with one another and make decisions about how and when to produce and consume electrical power. This emerging technology will allow customers to shift from an event-based demand response where the utility requests the shedding of load, towards a more 24/7-based demand response where the customer sees incentives for controlling load all the time. Although this back-and-forth dialogue increases the opportunities for demand response, customers are still largely influenced by economic incentives and are reluctant to relinquish total control of their assets to utility companies. One advantage of a smart grid application is time-based pricing. Customers who traditionally pay a fixed rate for consumed energy (kWh) and requested peak load can set their threshold and adjust their usage to take advantage of fluctuating prices. This may require the use of an energy management system to control appliances and equipment and can involve economies of scale. Another advantage, mainly for large customers with generation, is being able to closely monitor, shift, and balance load in a way that allows the customer to save peak load and not only save on kWh and kW/month but be able to trade what they have saved in an energy market. Again, this involves sophisticated energy management systems, incentives, and a viable trading market. Smart grid applications increase the opportunities for demand response by providing real time data to producers and consumers, but the economic and environmental incentives remain the driving force behind the practice. One of the most important means of demand response in the future smart grids is electric vehicles. Aggregation of this new source of energy, which is also a new source of uncertainty in the electrical systems, is critical to preserving the stability and quality of smart grids, consequently, the electric vehicle parking lots can be considered a demand response aggregation entity. Application for intermittent renewable distributed energy resources The modern power grid is making a transition from the traditional vertically integrated utility structures to distributed systems as it begins to integrate higher penetrations of renewable energy generation. These sources of energy are often diffusely distributed and intermittent by nature. These features introduce problems in grid stability and efficiency which lead to limitations on the amount of these resources which can be effectively added to the grid. In a traditional vertically integrated grid, energy is provided by utility generators which are able to respond to changes in demand. Generation output by renewable resources is governed by environmental conditions and is generally not able to respond to changes in demand. Responsive control over noncritical loads that are connected to the grid has been shown to be an effective strategy able to mitigate undesirable fluctuations introduced by these renewable resources. In this way instead of the generation responding to changes in demand, the demand responds to changes in generation. This is the basis of demand response. In order to implement demand response systems, coordination of large numbers of distributed resources through sensors, actuators, and communications protocols becomes necessary. 
To be effective, the devices need to be economical, robust, and yet still effective at managing their tasks of control. In addition, effective control requires a strong capability to coordinate large networks of devices, managing and optimizing these distributed systems from both an economic and a security standpoint. In addition, the increased presence of variable renewable generation drives a greater need for authorities to procure more ancillary services for grid balance. One of these services is contingency reserve, which is used to regulate the grid frequency in contingencies. Many independent system operators are structuring the rules of ancillary service markets such that demand response can participate alongside traditional supply-side resources - the available capacity of the generators can be used more efficiently when operated as designed, resulting in lower costs and less pollution. As the ratio of inverter-based generation compared to conventional generation increases, the mechanical inertia used to stabilize frequency decreases. When coupled with the sensitivity of inverter-based generation to transient frequencies, the provision of ancillary services from other sources than generators becomes increasingly important. Technologies for demand reduction Technologies are available, and more are under development, to automate the process of demand response. Such technologies detect the need for load shedding, communicate the demand to participating users, automate load shedding, and verify compliance with demand-response programs. GridWise and EnergyWeb are two major federal initiatives in the United States to develop these technologies. Universities and private industry are also doing research and development in this arena. Scalable and comprehensive software solutions for DR enable business and industry growth. Some utilities are considering and testing automated systems connected to industrial, commercial and residential users that can reduce consumption at times of peak demand, essentially delaying draw marginally. Although the amount of demand delayed may be small, the implications for the grid (including financial) may be substantial, since system stability planning often involves building capacity for extreme peak demand events, plus a margin of safety in reserve. Such events may only occur a few times per year. The process may involve turning down or off certain appliances or sinks (and, when demand is unexpectedly low, potentially increasing usage). For example, heating may be turned down or air conditioning or refrigeration may be turned up (turning up to a higher temperature uses less electricity), delaying slightly the draw until a peak in usage has passed. In the city of Toronto, certain residential users can participate in a program (Peaksaver AC) whereby the system operator can automatically control hot water heaters or air conditioning during peak demand; the grid benefits by delaying peak demand (allowing peaking plants time to cycle up or avoiding peak events), and the participant benefits by delaying consumption until after peak demand periods, when pricing should be lower. Although this is an experimental program, at scale these solutions have the potential to reduce peak demand considerably. The success of such programs depends on the development of appropriate technology, a suitable pricing system for electricity, and the cost of the underlying technology. 
Bonneville Power experimented with direct-control technologies in Washington and Oregon residences, and found that the avoided transmission investment would justify the cost of the technology. Other methods to implementing demand response approach the issue of subtly reducing duty cycles rather than implementing thermostat setbacks. These can be implemented using customized building automation systems programming, or through swarm-logic methods coordinating multiple loads in a facility (e.g. Encycle's EnviroGrid controllers). Similar approach can be implemented for managing air conditioning peak demand in summer peak regions. Pre-cooling or maintaining slightly higher thermostat setting can help with the peak demand reduction. In 2008 it was announced that electric refrigerators will be sold in the UK sensing dynamic demand which will delay or advance the cooling cycle based on monitoring grid frequency but they are not readily available as of 2018. Industrial customers Industrial customers are also providing demand response. Compared with commercial and residential loads, industrial loads have the following advantages: the magnitude of power consumption by an industrial manufacturing plant and the change in power it can provide are generally very large; besides, the industrial plants usually already have the infrastructures for control, communication and market participation, which enables the provision of demand response; moreover, some industrial plants such as the aluminum smelter are able to offer fast and accurate adjustments in their power consumption. For example, Alcoa's Warrick Operation is participating in MISO as a qualified demand response resource, and the Trimet Aluminium uses its smelter as a short-term nega-battery. The selection of suitable industries for demand response provision is typically based on an assessment of the so-called value of lost load. Some data centers are located far apart for redundancy and can migrate loads between them, while also performing demand response. Short-term inconvenience for long-term benefits Shedding loads during peak demand is important because it reduces the need for new power plants. To respond to high peak demand, utilities build very capital-intensive power plants and lines. Peak demand happens just a few times a year, so those assets run at a mere fraction of their capacity. Electric users pay for this idle capacity through the prices they pay for electricity. According to the Demand Response Smart Grid Coalition, 10%–20% of electricity costs in the United States are due to peak demand during only 100 hours of the year. DR is a way for utilities to reduce the need for large capital expenditures, and thus keep rates lower overall; however, there is an economic limit to such reductions because consumers lose the productive or convenience value of the electricity not consumed. Thus, it is misleading to only look at the cost savings that demand response can produce without also considering what the consumer gives up in the process. Importance for the operation of electricity markets It is estimated that a 5% lowering of demand would have resulted in a 50% price reduction during the peak hours of the California electricity crisis in 2000–2001. With consumers facing peak pricing and reducing their demand, the market should become more resilient to intentional withdrawal of offers from the supply side. 
Residential and commercial electricity use often vary drastically during the day, and demand response attempts to reduce the variability based on pricing signals. There are three underlying tenets to these programs: Unused electrical production facilities represent a less efficient use of capital (little revenue is earned when not operating). Electric systems and grids typically scale total potential production to meet projected peak demand (with sufficient spare capacity to deal with unanticipated events). By "smoothing" demand to reduce peaks, less investment in operational reserve will be required, and existing facilities will operate more frequently. In addition, significant peaks may only occur rarely, such as two or three times per year, requiring significant capital investments to meet infrequent events. US Energy Policy Act regarding demand response The United States Energy Policy Act of 2005 has mandated the Secretary of Energy to submit to the US Congress "a report that identifies and quantifies the national benefits of demand response and makes a recommendation on achieving specific levels of such benefits by January 1, 2007." Such a report was published in February 2006. The report estimates that in 2004 potential demand response capability equaled about 20,500 megawatts (MW), 3% of total U.S. peak demand, while actual delivered peak demand reduction was about 9,000 MW (1.3% of peak), leaving ample margin for improvement. It is further estimated that load management capability has fallen by 32% since 1996. Factors affecting this trend include fewer utilities offering load management services, declining enrollment in existing programs, the changing role and responsibility of utilities, and changing supply/demand balance. To encourage the use and implementation of demand response in the United States, the Federal Energy Regulatory Commission (FERC) issued Order No. 745 in March 2011, which requires a certain level of compensation for providers of economic demand response that participate in wholesale power markets. The order is highly controversial and has been opposed by a number of energy economists, including Professor William W. Hogan at Harvard University's Kennedy School. Professor Hogan asserts that the order overcompensates providers of demand response, thereby encouraging the curtailment of electricity whose economic value exceeds the cost of producing it. Professor Hogan further asserts that Order No. 745 is anticompetitive and amounts to "...an application of regulatory authority to enforce a buyer's cartel." Several affected parties, including the State of California, have filed suit in federal court challenging the legality of Order 745. A debate regarding the economic efficiency and fairness of Order 745 appeared in a series of articles published in The Electricity Journal. On May 23, 2014, the D.C. Circuit Court of Appeals vacated Order 745 in its entirety. On May 4, 2015, the United States Supreme Court agreed to review the DC Circuit's ruling, addressing two questions: Whether the Federal Energy Regulatory Commission reasonably concluded that it has authority under the Federal Power Act, 16 U. S. C. 791a et seq., to regulate the rules used by operators of wholesale electricity markets to pay for reductions in electricity consumption and to recoup those payments through adjustments to wholesale rates. Whether the Court of Appeals erred in holding that the rule issued by the Federal Energy Regulatory Commission is arbitrary and capricious. 
On January 25, 2016, the United States Supreme Court in a 6-2 decision in FERC v. Electric Power Supply Ass'n concluded that the Federal Energy Regulatory Commission acted within its authority to ensure "just and reasonable" rates in the wholesale energy market. FERC issued its Order No. 2222 on September 17, 2020, enabling distributed energy resources to participate in regional wholesale electricity markets. Market operators submitted initial compliance plans by early 2022. Demand reduction and the use of diesel generators in the British National Grid As of December 2009 National Grid had 2369 MW contracted to provide demand response, known as STOR, the demand side provides 839 MW (35%) from 89 sites. Of this 839 MW approximately 750 MW is back-up generation with the remaining being load reduction. A paper based on extensive half-hourly demand profiles and observed electricity demand shifting for different commercial and industrial buildings in the UK shows that only a small minority engaged in load shifting and demand turn-down, while the majority of demand response is provided by stand-by generators.
Technology
Electricity transmission and distribution
null
6499752
https://en.wikipedia.org/wiki/Electrical%20fault
Electrical fault
In an electric power system, a fault or fault current is any abnormal electric current. For example, a short circuit is a fault in which a live wire touches a neutral or ground wire. An open-circuit fault occurs if a circuit is interrupted by a failure of a current-carrying wire (phase or neutral) or a blown fuse or circuit breaker. In three-phase systems, a fault may involve one or more phases and ground, or may occur only between phases. In a "ground fault" or "earth fault", current flows into the earth. The prospective short-circuit current of a predictable fault can be calculated for most situations. In power systems, protective devices can detect fault conditions and operate circuit breakers and other devices to limit the loss of service due to a failure. In a polyphase system, a fault may affect all phases equally, which is a "symmetric fault". If only some phases are affected, the resulting "asymmetric fault" becomes more complicated to analyse. The analysis of these types of faults is often simplified by using methods such as symmetrical components. The design of systems to detect and interrupt power system faults is the main objective of power-system protection. Transient fault A transient fault is a fault that is no longer present if power is disconnected for a short time and then restored; or an insulation fault which only temporarily affects a device's dielectric properties which are restored after a short time. Many faults in overhead power lines are transient in nature. When a fault occurs, equipment used for power system protection operate to isolate the area of the fault. A transient fault will then clear and the power-line can be returned to service. Typical examples of transient faults include: momentary tree contact bird or other animal contact lightning strike conductor clashing Transmission and distribution systems use an automatic re-close function which is commonly used on overhead lines to attempt to restore power in the event of a transient fault. This functionality is not as common on underground systems as faults there are typically of a persistent nature. Transient faults may still cause damage both at the site of the original fault or elsewhere in the network as fault current is generated. Persistent fault A persistent fault is present regardless of power being applied. Faults in underground power cables are most often persistent due to mechanical damage to the cable, but are sometimes transient in nature due to lightning. Types of fault Asymmetric fault An asymmetric or unbalanced fault does not affect each of the phases equally. Common types of asymmetric fault, and their causes: line-to-line fault - a short circuit between lines, caused by ionization of air, or when lines come into physical contact, for example due to a broken insulator. In transmission line faults, roughly 5% - 10% are asymmetric line-to-line faults. line-to-ground fault - a short circuit between one line and ground, very often caused by physical contact, for example due to lightning or other storm damage. In transmission line faults, roughly 65% - 70% are asymmetric line-to-ground faults. double line-to-ground fault - two lines come into contact with the ground (and each other), also commonly due to storm damage. In transmission line faults, roughly 15% - 20% are asymmetric double line-to-ground. Symmetric fault A symmetric or balanced fault affects each of the phases equally. In transmission line faults, roughly 5% are symmetric. These faults are rare compared to asymmetric faults. 
Two kinds of symmetric fault are line-to-line-to-line (L-L-L) and line-to-line-to-line-to-ground (L-L-L-G). Symmetric faults account for 2 to 5% of all system faults. However, they can cause very severe damage to equipment even though the system remains balanced. Bolted fault One extreme is where the fault has zero impedance, giving the maximum prospective short-circuit current. Notionally, all the conductors are considered connected to ground as if by a metallic conductor; this is called a "bolted fault". It would be unusual in a well-designed power system to have a metallic short circuit to ground, but such faults can occur by mischance. In one type of transmission line protection, a "bolted fault" is deliberately introduced to speed up operation of protective devices. Ground fault (earth fault) A ground fault (earth fault) is any failure that allows unintended connection of power circuit conductors with the earth. Such faults can cause objectionable circulating currents, or may energize the housings of equipment at a dangerous voltage. Some special power distribution systems may be designed to tolerate a single ground fault and continue in operation. Wiring codes may require an insulation monitoring device to give an alarm in such a case, so the cause of the ground fault can be identified and remedied. If a second ground fault develops in such a system, it can result in overcurrent or failure of components. Even in systems that are normally connected to ground to limit overvoltages, some applications require a ground-fault interrupter or similar device to detect faults to ground. Realistic faults Realistically, the resistance in a fault can range from close to zero to fairly high relative to the load resistance. A large amount of power may be consumed in the fault, compared with the zero-impedance case where the power is zero. Also, arcs are highly non-linear, so a simple resistance is not a good model. All possible cases need to be considered for a good analysis. Arcing fault Where the system voltage is high enough, an electric arc may form between power system conductors and ground. Such an arc can have a relatively high impedance (compared to the normal operating levels of the system) and can be difficult to detect by simple overcurrent protection. For example, an arc of several hundred amperes on a circuit normally carrying a thousand amperes may not trip overcurrent circuit breakers, but can do enormous damage to bus bars or cables before it becomes a complete short circuit. Utility, industrial, and commercial power systems have additional protection devices to detect relatively small but undesired currents escaping to ground. In residential wiring, electrical regulations may now require arc-fault circuit interrupters on building wiring circuits, to detect small arcs before they cause damage or a fire. For example, these measures are taken in locations involving running water. Analysis Symmetric faults can be analyzed via the same methods as any other phenomena in power systems, and in fact many software tools exist to accomplish this type of analysis automatically (see power flow study). However, there is another method which is as accurate and is usually more instructive. First, some simplifying assumptions are made. It is assumed that all electrical generators in the system are in phase and operating at the nominal voltage of the system. Electric motors can also be considered to be generators, because when a fault occurs, they usually supply rather than draw power.
The voltages and currents are then calculated for this base case. Next, the location of the fault is considered to be supplied with a negative voltage source, equal to the voltage at that location in the base case, while all other sources are set to zero. This method makes use of the principle of superposition. To obtain a more accurate result, these calculations should be performed separately for three separate time ranges: the subtransient period, which comes first and is associated with the largest currents; the transient period, which comes between the subtransient and steady-state periods; and the steady state, which occurs after all the transients have had time to settle. An asymmetric fault breaks the underlying assumption used in three-phase power, namely that the load is balanced on all three phases. Consequently, it is impossible to directly use tools such as the one-line diagram, where only one phase is considered. However, due to the linearity of power systems, it is usual to consider the resulting voltages and currents as a superposition of symmetrical components, to which three-phase analysis can be applied. In the method of symmetrical components, the power system is seen as a superposition of three components: a positive-sequence component, in which the phases are in the same order as the original system, i.e., a-b-c; a negative-sequence component, in which the phases are in the opposite order to the original system, i.e., a-c-b; and a zero-sequence component, which is not truly a three-phase system, but instead has all three phases in phase with each other. To determine the currents resulting from an asymmetric fault, one must first know the per-unit zero-, positive-, and negative-sequence impedances of the transmission lines, generators, and transformers involved. Three separate circuits are then constructed using these impedances. The individual circuits are then connected together in a particular arrangement that depends upon the type of fault being studied (this can be found in most power systems textbooks). Once the sequence circuits are properly connected, the network can then be analyzed using classical circuit analysis techniques. The solution results in voltages and currents that exist as symmetrical components; these must be transformed back into phase values by using the A matrix, as sketched below. Analysis of the prospective short-circuit current is required for selection of protective devices such as fuses and circuit breakers. If a circuit is to be properly protected, the fault current must be high enough to operate the protective device within as short a time as possible; the protective device must also be able to withstand the fault current and extinguish any resulting arcs without itself being destroyed or sustaining the arc for any significant length of time. The magnitude of fault currents differs widely depending on the type of earthing system used, the installation's supply type, and its proximity to the supply. For example, for a domestic UK 230 V, 60 A TN-S or USA 120 V/240 V supply, fault currents may be a few thousand amperes. Large low-voltage networks with multiple sources may have fault levels of 300,000 amperes. A high-resistance-grounded system may restrict line-to-ground fault current to only 5 amperes. Prior to selecting protective devices, the prospective fault current must be measured reliably at the origin of the installation and at the furthest point of each circuit, and this information must be applied properly when specifying the devices that protect each circuit.
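As a concrete illustration of the symmetrical-component transform referenced above, the sketch below builds the A matrix and converts between phase and sequence quantities. This is a minimal numpy example, not part of the original text; the variable names are illustrative.

```python
# A minimal numpy sketch (illustrative, not from the article) of the
# symmetrical-component method: phase quantities (a-b-c) are expressed
# as a superposition of zero-, positive-, and negative-sequence
# components via the Fortescue "A matrix".
import numpy as np

a = np.exp(2j * np.pi / 3)  # the 120-degree rotation operator

# [Va, Vb, Vc]^T = A @ [V0, V1, V2]^T
A = np.array([[1, 1,    1   ],
              [1, a**2, a   ],
              [1, a,    a**2]])
A_inv = np.linalg.inv(A)  # numerically equals (1/3) * A.conj() here

def to_sequence(phase_values):
    """Phase phasors (a, b, c) -> sequence phasors (zero, pos, neg)."""
    return A_inv @ np.asarray(phase_values, dtype=complex)

def to_phase(sequence_values):
    """Sequence phasors -> phase phasors (the inverse transform)."""
    return A @ np.asarray(sequence_values, dtype=complex)

# A balanced a-b-c set contains only a positive-sequence component:
balanced = [1.0, a**2, a]  # unit phasors 120 degrees apart
print(np.round(to_sequence(balanced), 6))  # -> [0, 1, 0] (approximately)
```

A balanced set maps to a pure positive-sequence component, which is why an asymmetric fault shows up as non-zero negative- or zero-sequence terms.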
Detecting and locating faults Overhead power lines are easiest to diagnose, since the problem is usually obvious, e.g., a tree has fallen across the line, or a utility pole is broken and the conductors are lying on the ground. Locating faults in a cable system can be done either with the circuit de-energized, or in some cases, with the circuit under power. Fault location techniques can be broadly divided into terminal methods, which use voltages and currents measured at the ends of the cable, and tracer methods, which require inspection along the length of the cable. Terminal methods can be used to locate the general area of the fault, to expedite tracing on a long or buried cable. In very simple wiring systems, the fault location is often found through inspection of the wires. In complex wiring systems (for example, aircraft wiring) where the wires may be hidden, wiring faults are located with a time-domain reflectometer. The time-domain reflectometer sends a pulse down the wire and then analyzes the returning reflected pulse to identify faults within the electrical wire. In historic submarine telegraph cables, sensitive galvanometers were used to measure fault currents; by testing at both ends of a faulted cable, the fault location could be isolated to within a few miles, which allowed the cable to be grappled up and repaired. The Murray loop and the Varley loop were two types of connections for locating faults in cables. Sometimes an insulation fault in a power cable will not show up at lower voltages. A "thumper" test set applies a high-energy, high-voltage pulse to the cable. Fault location is done by listening for the sound of the discharge at the fault. While this test contributes to damage at the cable site, it is practical because the faulted location would have to be re-insulated when found in any case. In a high-resistance-grounded distribution system, a feeder may develop a fault to ground but the system continues in operation. The faulted, but energized, feeder can be found with a ring-type current transformer encircling all the phase wires of the circuit; only the circuit containing a fault to ground will show a net unbalanced current. To make the ground fault current easier to detect, the grounding resistor of the system may be switched between two values so that the fault current pulses. Batteries The prospective fault current of larger batteries, such as deep-cycle batteries used in stand-alone power systems, is often given by the manufacturer. In Australia, when this information is not given, the prospective fault current in amperes "should be considered to be 6 times the nominal battery capacity at the C A·h rate," according to AS 4086 part 2 (Appendix H).
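To make the time-domain reflectometry idea above concrete: the distance to a fault is half the product of the pulse's propagation velocity in the cable and the measured round-trip time. A minimal sketch follows; the velocity factor and delay are assumed example values, not figures from the article.

```python
# Illustrative sketch of time-domain reflectometry (TDR) distance
# estimation: distance = (propagation velocity * round-trip time) / 2.
# The velocity factor and delay below are assumptions for the example.
C = 299_792_458.0          # speed of light in vacuum, m/s

velocity_factor = 0.66     # assumed, plausible for polyethylene-insulated cable
round_trip_time = 1.2e-6   # seconds between sent pulse and reflection (assumed)

propagation_velocity = velocity_factor * C
fault_distance = propagation_velocity * round_trip_time / 2
print(f"Estimated distance to fault: {fault_distance:.1f} m")
# 0.66 * 3e8 * 1.2e-6 / 2 is roughly 119 m
```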
Technology
Concepts
null
6502961
https://en.wikipedia.org/wiki/IAU%20definition%20of%20planet
IAU definition of planet
The International Astronomical Union (IAU) defined in August 2006 that, in the Solar System, a planet is a celestial body that: is in orbit around the Sun, has sufficient mass to assume hydrostatic equilibrium (a nearly round shape), and has "cleared the neighbourhood" around its orbit. A non-satellite body fulfilling only the first two of these criteria (such as Pluto, which had hitherto been considered a planet) is classified as a dwarf planet. According to the IAU, "planets and dwarf planets are two distinct classes of objects" – in other words, "dwarf planets" are not planets. A non-satellite body fulfilling only the first criterion is termed a small Solar System body (SSSB). An alternate proposal included dwarf planets as a subcategory of planets, but IAU members voted against this proposal. The decision was a controversial one, and has drawn both support and criticism from astronomers. The IAU has stated that there are eight known planets in the Solar System. It has been argued that the definition is problematic because it depends on the location of the body: if a Mars-sized body were discovered in the inner Oort cloud, it would not have enough mass to clear out a neighbourhood that size and meet criterion 3. The requirement for hydrostatic equilibrium (criterion 2) is also universally treated loosely, as simply a requirement for roundness; Mercury is not actually in hydrostatic equilibrium, but is explicitly included by the IAU definition as a planet. The working definition of an exoplanet is as follows: Background The process of new discoveries spurring a contentious refinement of Pluto's categorization echoed a debate in the 19th century that began with the discovery of Ceres on January 1, 1801. Astronomers immediately declared the tiny object to be the "missing planet" between Mars and Jupiter. Within four years, however, the discovery of two more objects with comparable sizes and orbits had cast doubt on this new thinking. By 1851, the number of planets had grown to 23 (the 8 major planets, plus 15 minor planets between Mars and Jupiter), and it was clear that hundreds more would eventually be discovered. Astronomers began cataloguing them separately and began calling them "asteroids" instead of "planets". With the discovery of Pluto by Clyde Tombaugh in 1930, astronomers considered the Solar System to have nine planets, along with thousands of smaller bodies such as asteroids and comets. Pluto was initially thought to be larger than Mercury. Tombaugh discovered Pluto while working at the Lowell Observatory founded by Percival Lowell, one of many astronomers who had theorized about the existence of the large trans-Neptunian object Planet X, and Tombaugh had been searching for Planet X when he found Pluto. Almost immediately after its discovery, however, astronomers questioned whether Pluto could be Planet X. Willy Ley wrote a column in 1956 titled "The Demotion of Pluto", stating that it "simply failed to live up to the advance publicity it received as 'Planet X' before its discovery. It has been a disappointment all along, for it did not turn out to be what one could reasonably have expected". In 1978, Pluto's moon Charon was discovered. By measuring Charon's orbital period, astronomers could accurately calculate Pluto's mass for the first time, which they found to be much smaller than expected.
Pluto's mass was roughly one twenty-fifth of Mercury's, making it by far the smallest planet, smaller even than the Earth's Moon, although it was still over ten times as massive as the largest asteroid, Ceres. In the 1990s, astronomers began finding other objects at least as far away as Pluto, known as Kuiper Belt objects, or KBOs. Many of these shared some of Pluto's key orbital characteristics and are consequently called plutinos. Pluto came to be seen as the largest member of a new class of objects, and some astronomers stopped referring to Pluto as a planet. Pluto's eccentric and inclined orbit, while very unusual for a planet in the Solar System, fits in well with the other KBOs. New York City's newly renovated Hayden Planetarium did not include Pluto in its exhibit of the planets when it reopened as the Rose Center for Earth and Space in 2000. Starting in 2000, with the discovery of at least three bodies (Quaoar, Sedna, and Eris) all comparable to Pluto in terms of size and orbit, it became clear that either they all had to be called planets or Pluto would have to be reclassified. Astronomers also thought it likely that more objects as large as Pluto would be discovered, and the number of planets would start growing quickly. They were also concerned about the classification of planets in other planetary systems. In 2006, the first measurement of the size of Eris erroneously showed it to be slightly larger than Pluto (an error that stood until the New Horizons mission to Pluto), and so Eris was thought to be equally deserving of the status of "planet". Because new planets are discovered infrequently, the IAU did not have any mechanism for their definition and naming. After the discovery of Sedna, it set up a 19-member committee in 2005, with the British astronomer Iwan Williams in the chair, to consider the definition of a planet. It proposed three definitions that could be adopted: Cultural: a planet is a planet if enough people say it is; Structural: a planet is large enough to form a sphere; Dynamical: the object is large enough to cause all other objects to eventually leave its orbit. Another committee, chaired by Owen Gingerich, a historian of astronomy and astronomer emeritus at Harvard University, and consisting of five planetary scientists and the science writer Dava Sobel, was set up to make a firm proposal. Proposals First draft proposal The IAU published the original definition proposal on August 16, 2006. Its form followed loosely the second of three options proposed by the original committee. It stated that: This definition would have led to three more celestial bodies being recognized as planets, in addition to the previously accepted nine: Ceres, which had been considered a planet at the time of its discovery but was subsequently treated as an asteroid; Charon, a moon of Pluto, with the Pluto-Charon system considered a double planet; and Eris, a body in the scattered disk of the outer Solar System. A further twelve bodies, pending refinements of knowledge regarding their physical properties, were possible candidates to join the list under this definition. Some objects in this second list were more likely eventually to be adopted as 'planets' than others. Despite what had been claimed in the media, the proposal did not necessarily leave the Solar System with only twelve planets.
Mike Brown, the discoverer of Sedna and Eris, has said that at least 53 known bodies in the Solar System probably fit the definition, and that a complete survey would probably reveal more than 200. The definition would have considered a pair of objects to be a double planet system if each component independently satisfied the planetary criteria and the common center of gravity of the system (known as the barycenter) was located outside of both bodies. Pluto and Charon would have been the only known double planet in the Solar System. Other planetary satellites (such as the Moon or Ganymede) might be in hydrostatic equilibrium, but would still not have been defined as a component of a double planet, since the barycenter of the system lies within the more massive celestial body. The term "minor planet" would have been abandoned, replaced by the categories "small Solar System body" (SSSB) and a new classification of "pluton". The former would have described those objects below the "spherical" threshold. The latter would have been applied to those planets with highly inclined orbits, large eccentricities, and an orbital period of more than 200 Earth years (that is, those orbiting beyond Neptune). Pluto would have been the prototype for this class. The term "dwarf planet" would have been available to describe all planets smaller than the eight "classical planets" in orbit around the Sun, though it would not have been an official IAU classification. The IAU did not make recommendations in the draft resolution on what separated a planet from a brown dwarf. A vote on the proposal was scheduled for August 24, 2006. Such a definition of the term "planet" could also have led to changes in classification for several trans-Neptunian objects, including Sedna, Orcus, Quaoar, Varuna, and Ixion, and for the asteroids Vesta, Pallas, and Hygiea. On 18 August the Committee of the Division of Planetary Sciences (DPS) of the American Astronomical Society endorsed the draft proposal. The DPS Committee represents a small subset of the DPS members, and no resolution in support of the IAU definition was considered or approved by the DPS membership. According to an IAU draft resolution, the roundness condition generally results in the need for a mass of at least 5×10^20 kg, or a diameter of at least 800 km. However, Mike Brown claimed that these numbers are only right for rocky bodies like asteroids, and that icy bodies like Kuiper Belt objects reach hydrostatic equilibrium at much smaller sizes, probably somewhere between 200 and 400 km in diameter. It all depends on the rigidity of the material that makes up the body, which is in turn strongly influenced by its internal temperature. Assuming that Methone's shape reflects the balance between the tidal force exerted by Saturn and the moon's gravity, its tiny 3 km diameter suggests Methone is composed of icy fluff. The IAU's stated radius and mass limits are not too far off from what, as of 2019, is believed to be the approximate limit for objects beyond Neptune that are fully compact, solid bodies, with a few borderline cases both for the 2006 Q&A expectations and in more recent evaluations. Advantages The proposed definition found support among many astronomers as it used the presence of a physical qualitative factor (the object being round) as its defining feature.
Most other potential definitions depended on a limiting quantity (e.g., a minimum size or maximum orbital inclination) tailored for the Solar System. According to members of the IAU committee, this definition did not use human-made limits but instead deferred to "nature" in deciding whether or not an object was a planet. It also had the advantage of measuring an observable quality. Suggested criteria involving the nature of formation would have been more likely to see accepted planets later declassified as scientific understanding improved. Additionally, the definition kept Pluto as a planet. Pluto's planetary status was, and is, fondly regarded by many, especially in the United States, since Pluto was found by the American astronomer Clyde Tombaugh, and demoting it risked alienating the general public from professional astronomers; there was considerable uproar when the media last suggested, in 1999, that Pluto might be demoted, which was a misunderstanding of a proposal to catalog all trans-Neptunian objects uniformly. Criticism The proposed definition was criticised as ambiguous: astronomer Phil Plait and NCSE writer Nick Matzke both wrote about why they thought the definition was not, in general, a good one. It defined a planet as orbiting a star, which would have meant that any planet ejected from its star system or formed outside of one (a rogue planet) could not have been called a planet, even if it fit all other criteria. However, a similar situation already applies to the term 'moon'—such bodies ceasing to be moons on being ejected from planetary orbit—and this usage has widespread acceptance. Another criticism was that the definition did not differentiate between planets and brown dwarfs. Any attempt to clarify this differentiation was to be left until a later date. There had also been criticism of the proposed definition of double planet: at present the Moon is defined as a satellite of the Earth, but over time the Earth-Moon barycenter will drift outwards (see tidal acceleration) and could eventually become situated outside of both bodies. This development would then upgrade the Moon to planetary status, according to the definition. The time taken for this to occur, however, would be billions of years, long after many astronomers expect the Sun to expand into a red giant and destroy both Earth and Moon. In an 18 August 2006 Science Friday interview, Mike Brown expressed doubt that a scientific definition was even necessary. He stated, "The analogy that I always like to use is the word "continent". You know, the word "continent" has no scientific definition ... they're just cultural definitions, and I think the geologists are wise to leave that one alone and not try to redefine things so that the word "continent" has a big, strict definition." On 18 August, Owen Gingerich said that correspondence he had received had been evenly divided for and against the proposal. Alternative proposal According to Alan Boss of the Carnegie Institution of Washington, a subgroup of the IAU met on August 18, 2006, and held a straw poll on the draft proposal: only 18 were in favour of it, with over 50 against. The 50 in opposition preferred an alternative proposal drawn up by Uruguayan astronomers Gonzalo Tancredi and Julio Ángel Fernández. Under this proposal, Pluto would have been demoted to a dwarf planet. Revised draft proposal On 22 August 2006 the draft proposal was rewritten with two changes from the previous draft.
The first was a generalisation of the name of the new class of planets (previously the draft resolution had explicitly opted for the term pluton), with a decision on the name to be used postponed. Many geologists had been critical of the choice of name for Pluto-like planets, being concerned about the term pluton, which has been used for years within the geological community to denote a form of magmatic intrusion; such formations are fairly common bodies of intrusive rock. Confusion was thought undesirable due to the status of planetology as a field closely allied to geology. Further concerns surrounded use of the word pluton because in major languages such as French and Spanish, Pluto is itself called Pluton, potentially adding to confusion. The second change was a redrawing of the planetary definition in the case of a double planet system. There had been a concern that, in extreme cases where a double body had its secondary component in a highly eccentric orbit, there could have been a drift of the barycenter in and out of the primary body, leading to a shift in the classification of the secondary body between satellite and planet depending on where the system was in its orbit. Thus the definition was reformulated so as to consider a double planet system to exist if its barycenter lay outside both bodies for a majority of the system's orbital period. Later on August 22, two open meetings were held which ended in an abrupt about-face on the basic planetary definition. The position of astronomer Julio Ángel Fernández gained the upper hand among the members attending and was described as unlikely to lose its hold by August 24. This position would result in only eight major planets, with Pluto ranking as a "dwarf planet". The discussion at the first meeting was heated and lively, with IAU members in vocal disagreement with one another over such issues as the relative merits of static and dynamic physics; the main sticking point was whether or not to include a body's orbital characteristics among the definition criteria. In an indicative vote, members heavily defeated the proposals on Pluto-like objects and double planet systems, and were evenly divided on the question of hydrostatic equilibrium. The debate was said to be "still open", with private meetings being held ahead of a vote scheduled for the following day. At the second meeting of the day, following "secret" negotiations, a compromise began to emerge after the Executive Committee moved explicitly to exclude consideration of extra-solar planets and to bring into the definition a criterion concerning the dominance of a body in its neighbourhood. Final draft proposal The final, third draft definition proposed on 24 August 2006 read: Plenary session debate Voting on the definition took place at the Assembly plenary session during the afternoon. Following a reversion to the previous rules on 15 August, as a planetary definition is a primarily scientific matter, every individual member of the Union attending the Assembly was eligible to vote. The plenary session was chaired by astronomer Jocelyn Bell Burnell. During this session, IAU members cast votes on each resolution by raising yellow cards. A team of students counted the votes in each section of the auditorium, and astronomer Virginia Trimble compiled and tallied the vote counts. The IAU Executive Committee presented four Resolutions to the Assembly, each concerning a different aspect of the debate over the definition. Minor amendments were made on the floor for the purposes of clarification.
Resolution 5A constituted the definition itself as stated above. There was much discussion among members about the appropriateness of using the expression "cleared the neighbourhood" instead of the earlier reference to "dominant body", and about the implications of the definition for satellites. The Resolution was ultimately approved by a near-unanimous vote. Resolution 5B sought to amend the above definition by the insertion of the word classical before the word planet in paragraph (1) and footnote [1]. This represented a choice between having a set of three distinct categories of body (planet, "dwarf planet" and SSSB) and the opening of an umbrella of 'planets' over the first two such categories. The Resolution proposed the latter option; it was defeated convincingly, with only 91 members voting in its favour. Resolution 6A proposed a statement concerning Pluto: "Pluto is a dwarf planet by the above definition and is recognized as the prototype of a new category of trans-Neptunian objects." After a little quibbling over the grammar involved and questions of exactly what constituted a "trans-Neptunian object", the Resolution was approved by a vote of 237–157, with 30 abstentions. A new category of dwarf planet was thus established. It would be named "plutoid" and more narrowly defined by the IAU Executive Committee on 11 June 2008. Resolution 6B sought to insert an additional sentence at the end of the statement in 6A: "This category is to be called 'plutonian objects'." There was no debate on the question, and in the vote the proposed name was defeated by 186–183; a proposal to conduct a re-vote was rejected. An IAU process was then to be put in motion to determine the name for the new category. On a literal reading of the Resolution, "dwarf planets" are by implication of paragraph (1) excluded from the status of "planet". Use of the word planet in their title may, however, cause some ambiguity. Final definition The final definition, as passed on 24 August 2006 under Resolution 5A of the 26th General Assembly, is: The IAU...resolves that planets and other bodies, except satellites, in the Solar System be defined into three distinct categories in the following way: (1) A planet [1] is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit. (2) A "dwarf planet" is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape [2], (c) has not cleared the neighbourhood around its orbit, and (d) is not a satellite. (3) All other objects [3], except satellites, orbiting the Sun shall be referred to collectively as "Small Solar System Bodies".
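The final definition can be read as a small decision procedure. The sketch below is purely illustrative, with the three criteria supplied as booleans; establishing them for a real body (especially whether it has "cleared the neighbourhood") is of course the hard part.

```python
# Illustrative encoding of IAU Resolution 5A as a decision procedure.
# The criteria are taken as given booleans; assessing them for a real
# body is the actual scientific work, not modeled here.
def classify(orbits_sun, is_round, cleared_neighbourhood, is_satellite):
    if is_satellite or not orbits_sun:
        return "not covered by Resolution 5A"
    if is_round and cleared_neighbourhood:
        return "planet"
    if is_round:
        return "dwarf planet"
    return "small Solar System body"

print(classify(True, True, True, False))    # e.g. Jupiter -> planet
print(classify(True, True, False, False))   # e.g. Pluto   -> dwarf planet
print(classify(True, False, False, False))  # a typical asteroid -> SSSB
```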
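Similarly, the earlier draft's double-planet criterion reduces to a computation: the barycenter of a two-body system lies at a distance a·m2/(m1+m2) from the primary's center, and the question is whether that exceeds the primary's radius. A minimal sketch using rounded published figures, supplied here only for illustration:

```python
# Illustrative check of the draft double-planet criterion. The masses,
# separations, and radii below are rounded published values.
def barycenter_outside_primary(m1, m2, separation_km, primary_radius_km):
    r_bary = separation_km * m2 / (m1 + m2)  # distance from primary's center
    return r_bary, r_bary > primary_radius_km

# Pluto-Charon: barycenter lies outside Pluto -> double planet under the draft
print(barycenter_outside_primary(1.30e22, 1.59e21, 19_600, 1_188))
# -> (~2140 km, True)

# Earth-Moon: barycenter lies inside Earth -> the Moon stays a satellite
print(barycenter_outside_primary(5.97e24, 7.35e22, 384_400, 6_371))
# -> (~4670 km, False)
```

The Earth-Moon case also shows why the criticism about tidal drift mattered: as the separation grows over billions of years, the computed barycenter distance eventually exceeds Earth's radius.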
Physical sciences
Planetary science
Astronomy
856761
https://en.wikipedia.org/wiki/Muricidae
Muricidae
Muricidae is a large and varied taxonomic family of small to large predatory sea snails, marine gastropod mollusks, commonly known as murex snails or rock snails. With over 1,700 living species, the Muricidae represent almost 10% of the Neogastropoda. Additionally, 1,200 fossil species have been recognized. Numerous subfamilies are recognized, although experts disagree about the subfamily divisions and the definitions of the genera. Many muricids have unusual shells which are considered attractive by shell collectors and by interior designers. Shell description Muricid shells are variably shaped, generally with a raised spire and strong sculpture with spiral ridges and often axial varices (typically three or more varices on each whorl), also frequently bearing spines, tubercles, or blade-like processes. The periostracum is absent in this family. The aperture is variable in shape; it may be ovate to more or less contracted, with a well-marked anterior siphonal canal that may be very long. The shell's outer lip is often denticulated inside, sometimes with a tooth-like process on its margin. The columella is smoothish to weakly ridged. The operculum is corneous and of variable thickness, with the nucleus near the anterior end or at about midlength of the outer margin. Many muricids have episodic growth, which means their shells grow in spurts, remaining the same size for a while (during which time the varix develops) before rapidly growing to the next size stage. The result is the series of above-mentioned varices on each whorl. Life habits Most species of muricids are carnivorous, active predators that feed on other gastropods, bivalves, and barnacles. Access to the soft parts of the prey is typically obtained by boring a hole through the shell by means of a softening secretion and the scraping action of the radula. Because of this carnivory, some species are considered pests, as they can cause considerable destruction both in exploited natural beds of bivalves and in farmed areas of commercial bivalves. Muricids lay eggs in protective, corneous capsules, the size and shape of which vary by species. From these capsules the crawling juveniles, or more rarely planktonic larvae, hatch. Historical value Members of the family were harvested by early Mediterranean peoples, with the Phoenicians possibly the first to do so, to extract an expensive, vivid, stable dye known as Tyrian purple, imperial purple, or royal purple. The fossil record The family Muricidae first appears in the fossil record during the Aptian age of the Cretaceous period.
Subfamilies According to the taxonomy of the Gastropoda by Bouchet & Rocroi (2005) the family Muricidae consists of these subfamilies: Aspellinae Keen, 1971 Coralliophilinae Chenu, 1859 - synonym: Magilidae Thiele, 1925 Ergalataxinae Kuroda, Habe & Oyama, 1971 Haustrinae Tan, 2003 Muricinae Rafinesque, 1815 Muricopsinae Radwin & d'Attilio, 1971: synonym of Aspellinae Keen, 1971 (junior subjective synonym) Ocenebrinae Cossmann, 1903 Pagodulinae Barco, Schiaparelli, Houart & Oliverio, 2012 Rapaninae Gray, 1853 - synonym: Thaididae Jousseaume, 1888 Tripterotyphinae d'Attilio & Hertz, 1988: synonym of Muricopsinae Radwin & D'Attilio, 1971, itself a synonym of Aspellinae Keen, 1971 (junior subjective synonym) Trophoninae Cossmann, 1903: synonym of Ocenebrinae Cossmann, 1903 (junior subjective synonym) Typhinae Cossmann, 1903 [unassigned] Muricidae Synonyms Subfamily Drupinae Wenz, 1938: synonym of Rapaninae Gray, 1853 Genus Drupinia [sic]: synonym of Drupina Dall, 1923 Genus Galeropsis Hupé, 1860: synonym of Coralliophila H. Adams & A. Adams, 1853 Tritoninae Gray, 1847: synonym of Ranellidae Gray, 1854 (Invalid: type genus placed on the Official Index by Opinion 886 [junior homonym of Triton Linnaeus, 1758])
Biology and health sciences
Gastropods
Animals
857235
https://en.wikipedia.org/wiki/Equivalence%20principle
Equivalence principle
The equivalence principle is the hypothesis that the observed equivalence of gravitational and inertial mass is a consequence of nature. The weak form, known for centuries, relates to masses of any composition in free fall taking the same trajectories and landing at identical times. The extended form by Albert Einstein requires special relativity to also hold in free fall, and requires the weak equivalence to be valid everywhere. This form was a critical input for the development of the theory of general relativity. The strong form requires Einstein's form to work for stellar objects. Highly precise experimental tests of the principle limit possible deviations from equivalence to be very small. Concept In classical mechanics, Newton's equation of motion in a gravitational field, written out in full, is: inertial mass × acceleration = gravitational mass × gravitational acceleration. Careful experiments have shown that the inertial mass on the left side and the gravitational mass on the right side are numerically equal and independent of the material composing the masses. The equivalence principle is the hypothesis that this numerical equality of inertial and gravitational mass is a consequence of their fundamental identity. The equivalence principle can be considered an extension of the principle of relativity, the principle that the laws of physics are invariant under uniform motion. An observer in a windowless room cannot distinguish between being on the surface of the Earth and being in a spaceship in deep space accelerating at 1 g; the laws of physics are unable to distinguish these cases. History By experimenting with the acceleration of different materials, Galileo determined that gravitation is independent of the amount of mass being accelerated. Newton, just 50 years after Galileo, investigated whether gravitational and inertial mass might be different concepts. He compared the periods of pendulums composed of different materials and found them to be identical. From this, he inferred that gravitational and inertial mass are the same thing. The form of this assertion, where the equivalence principle is taken to follow from empirical consistency, later became known as "weak equivalence". A version of the equivalence principle consistent with special relativity was introduced by Albert Einstein in 1907, when he observed that identical physical laws are observed in two systems, one subject to a constant gravitational field causing acceleration and the other subject to constant acceleration, like a rocket far from any gravitational field. Since the physical laws are the same, Einstein assumed the gravitational field and the acceleration were "physically equivalent". Einstein stated this hypothesis by saying he would: In 1911 Einstein demonstrated the power of the equivalence principle by using it to predict that clocks run at different rates in a gravitational potential, and light rays bend in a gravitational field. He connected the equivalence principle to his earlier principle of special relativity: Soon after completing work on his theory of gravity (known as general relativity) and then also in later years, Einstein recalled the importance of the equivalence principle to his work: Einstein's development of general relativity necessitated some means of empirically discriminating the theory from other theories of gravity compatible with special relativity.
Accordingly, Robert Dicke developed a test program incorporating two new principles—the Einstein equivalence principle and the strong equivalence principle—each of which assumes the weak equivalence principle as a starting point. Definitions Three main forms of the equivalence principle are in current use: weak (Galilean), Einsteinian, and strong. Some proposals also suggest finer divisions or minor alterations. Weak equivalence principle The weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle, can be stated in many ways. The strong equivalence principle, a generalization of the weak equivalence principle, includes astronomic bodies with gravitational self-binding energy. Instead, the weak equivalence principle assumes falling bodies are self-bound by non-gravitational forces only (e.g. a stone). Either way: "All uncharged, freely falling test particles follow the same trajectories, once an initial position and velocity have been prescribed". "... in a uniform gravitational field all objects, regardless of their composition, fall with precisely the same acceleration." "The weak equivalence principle implicitly assumes that the falling objects are bound by non-gravitational forces." "... in a gravitational field the acceleration of a test particle is independent of its properties, including its rest mass." Mass (measured with a balance) and weight (measured with a scale) are locally in identical ratio for all bodies (the opening page to Newton's Philosophiæ Naturalis Principia Mathematica, 1687). Uniformity of the gravitational field eliminates measurable tidal forces originating from a radially divergent gravitational field (e.g., the Earth) upon finite-sized physical bodies. Einstein equivalence principle What is now called the "Einstein equivalence principle" states that the weak equivalence principle holds, and that the outcome of any local non-gravitational experiment is independent of the velocity of the freely falling apparatus and of where and when in the universe it is performed. Here local means that the experimental setup must be small compared to variations in the gravitational field, called tidal forces. The test experiment must be small enough so that its gravitational potential does not alter the result. The two additional constraints added to the weak principle to get the Einstein form − (1) the independence of the outcome on relative velocity (local Lorentz invariance) and (2) independence of "where", known as local positional invariance − have far-reaching consequences. With these constraints alone Einstein was able to predict the gravitational redshift. Theories of gravity that obey the Einstein equivalence principle must be "metric theories", meaning that trajectories of freely falling bodies are geodesics of a symmetric metric. Around 1960 Leonard I. Schiff conjectured that any complete and consistent theory of gravity that embodies the weak equivalence principle implies the Einstein equivalence principle; the conjecture cannot be proven but has several plausibility arguments in its favor. Nonetheless, the two principles are tested with very different kinds of experiments. The Einstein equivalence principle has been criticized as imprecise, because there is no universally accepted way to distinguish gravitational from non-gravitational experiments (see for instance Hadley and Durand). Strong equivalence principle The strong equivalence principle applies the same constraints as the Einstein equivalence principle, but allows the freely falling bodies to be massive gravitating objects as well as test particles.
Thus this is a version of the equivalence principle that applies to objects that exert a gravitational force on themselves, such as stars, planets, black holes, or Cavendish experiments. It requires that the gravitational constant be the same everywhere in the universe and is incompatible with a fifth force. It is much more restrictive than the Einstein equivalence principle. Like the Einstein equivalence principle, the strong equivalence principle requires gravity to be geometrical by nature, but in addition it forbids any extra fields, so the metric alone determines all of the effects of gravity. If an observer measures a patch of space to be flat, then the strong equivalence principle suggests that it is absolutely equivalent to any other patch of flat space elsewhere in the universe. Einstein's theory of general relativity (including the cosmological constant) is thought to be the only theory of gravity that satisfies the strong equivalence principle. A number of alternative theories, such as Brans–Dicke theory and the Einstein-aether theory, add additional fields. Active, passive, and inertial masses Some of the tests of the equivalence principle use names for the different ways mass appears in physical formulae. In nonrelativistic physics three kinds of mass can be distinguished: inertial mass, intrinsic to an object, the sum of all of its mass–energy; passive mass, the response to gravity, the object's weight; and active mass, the mass that determines the object's gravitational effect. By definition of active and passive gravitational mass, the force on a mass $m_1$ due to the gravitational field of a mass $m_0$ is $F_1 = \frac{m_0^{\mathrm{act}} m_1^{\mathrm{pass}}}{r^2}$. Likewise the force on a second object of arbitrary mass $m_2$ due to the gravitational field of $m_0$ is $F_2 = \frac{m_0^{\mathrm{act}} m_2^{\mathrm{pass}}}{r^2}$. By definition of inertial mass, $F = m^{\mathrm{inert}} a$. If $m_1$ and $m_2$ are the same distance $r$ from $m_0$ then, by the weak equivalence principle, they fall at the same rate; i.e., their accelerations are the same, $a_1 = \frac{F_1}{m_1^{\mathrm{inert}}} = a_2 = \frac{F_2}{m_2^{\mathrm{inert}}}$. Hence $\frac{m_0^{\mathrm{act}} m_1^{\mathrm{pass}}}{r^2\, m_1^{\mathrm{inert}}} = \frac{m_0^{\mathrm{act}} m_2^{\mathrm{pass}}}{r^2\, m_2^{\mathrm{inert}}}$, and therefore $\frac{m_1^{\mathrm{pass}}}{m_1^{\mathrm{inert}}} = \frac{m_2^{\mathrm{pass}}}{m_2^{\mathrm{inert}}}$. In other words, passive gravitational mass must be proportional to inertial mass for all objects, independent of their material composition, if the weak equivalence principle is obeyed. The dimensionless Eötvös-parameter or Eötvös ratio is the difference of the ratios of gravitational and inertial masses divided by their average for the two sets of test masses "A" and "B": $\eta(A,B) = 2\,\frac{\left(\frac{m_g}{m_i}\right)_A - \left(\frac{m_g}{m_i}\right)_B}{\left(\frac{m_g}{m_i}\right)_A + \left(\frac{m_g}{m_i}\right)_B}$. Values of this parameter are used to compare tests of the equivalence principle. A similar parameter can be used to compare passive and active mass. By Newton's third law of motion, the force $F_1 = \frac{m_0^{\mathrm{act}} m_1^{\mathrm{pass}}}{r^2}$ must be equal and opposite to the force $F_0 = \frac{m_1^{\mathrm{act}} m_0^{\mathrm{pass}}}{r^2}$. It follows that $\frac{m_0^{\mathrm{act}}}{m_0^{\mathrm{pass}}} = \frac{m_1^{\mathrm{act}}}{m_1^{\mathrm{pass}}}$. In words, passive gravitational mass must be proportional to active gravitational mass for all objects. The difference, $\frac{m_0^{\mathrm{act}}}{m_0^{\mathrm{pass}}} - \frac{m_1^{\mathrm{act}}}{m_1^{\mathrm{pass}}}$, is used to quantify differences between passive and active mass. Experimental tests Tests of the weak equivalence principle Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects and verifying that they land at the same time. Historically this was the first approach—though probably not by Galileo's Leaning Tower of Pisa experiment, but instead earlier by Simon Stevin, who dropped lead balls of different masses off the Delft church tower and listened for the sound of them hitting a wooden plank. Isaac Newton measured the period of pendulums made with different materials as an alternative test giving the first precision measurements. Loránd Eötvös's approach in 1908 used a very sensitive torsion balance to give precision approaching 1 in a billion.
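As a small worked illustration of the Eötvös parameter defined above: since two bodies in the same gravitational field accelerate in proportion to their gravitational-to-inertial mass ratios, measured accelerations can stand in for the ratios themselves. The numbers below are invented to show the arithmetic, not real data.

```python
# Illustrative evaluation of the Eötvös parameter
#   eta = 2 * (rA - rB) / (rA + rB),  where r = m_grav / m_inert.
# In a shared field, each body's acceleration is proportional to its
# ratio r, so accelerations substitute for the ratios. The measured
# values below are invented for the example.
def eotvos_parameter(accel_a, accel_b):
    return 2.0 * (accel_a - accel_b) / (accel_a + accel_b)

a_A = 9.80665000000   # m/s^2, test mass A (assumed measurement)
a_B = 9.80665000049   # m/s^2, test mass B (assumed measurement)
print(f"eta = {eotvos_parameter(a_A, a_B):.2e}")  # about -5.0e-11
```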
Modern experiments have improved on this precision by another factor of a million. A popular exposition of this measurement was done on the Moon by David Scott in 1971. He dropped a falcon feather and a hammer at the same time, showing on video that they landed at the same time. Experiments are still being performed at the University of Washington which have placed limits on the differential acceleration of objects towards the Earth, the Sun, and towards dark matter in the Galactic Center. Future satellite experiments – Satellite Test of the Equivalence Principle and Galileo Galilei – will test the weak equivalence principle in space, to much higher accuracy. With the first successful production of antimatter, in particular anti-hydrogen, a new approach to test the weak equivalence principle has been proposed. Experiments to compare the gravitational behavior of matter and antimatter are currently being developed. Proposals that may lead to a quantum theory of gravity, such as string theory and loop quantum gravity, predict violations of the weak equivalence principle because they contain many light scalar fields with long Compton wavelengths, which should generate fifth forces and variation of the fundamental constants. Heuristic arguments suggest that the magnitude of these equivalence principle violations could be in the 10^−13 to 10^−18 range. Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification. Tests of the Einstein equivalence principle In addition to the tests of the weak equivalence principle, the Einstein equivalence principle requires testing the local Lorentz invariance and local positional invariance conditions. Testing local Lorentz invariance amounts to testing special relativity, a theory with a vast number of existing tests. Nevertheless, attempts to look for quantum gravity require even more precise tests. The modern tests include looking for directional variations in the speed of light (called "clock anisotropy tests") and new forms of the Michelson–Morley experiment. The anisotropy measures less than one part in 10^20. Testing local positional invariance divides into tests in space and in time. Space-based tests use measurements of the gravitational redshift; the classic example is the Pound–Rebka experiment of the 1960s. The most precise measurement was done in 1976 by flying a hydrogen maser and comparing it to one on the ground. The Global Positioning System requires compensation for this redshift to give accurate position values. Time-based tests search for variation of dimensionless constants and mass ratios. For example, Webb et al. reported detection of variation (at the 10^−5 level) of the fine-structure constant from measurements of distant quasars. Other researchers dispute these findings. The present best limits on the variation of the fundamental constants have mainly been set by studying the naturally occurring Oklo natural nuclear fission reactor, where nuclear reactions similar to ones we observe today have been shown to have occurred underground approximately two billion years ago.
These reactions are extremely sensitive to the values of the fundamental constants. Tests of the strong equivalence principle The strong equivalence principle can be tested by 1) finding orbital variations in massive bodies (Sun–Earth–Moon), 2) looking for variations in the gravitational constant (G) depending on nearby sources of gravity or on motion, or 3) searching for a variation of Newton's gravitational constant over the life of the universe. Orbital variations due to gravitational self-energy should cause a "polarization" of solar system orbits called the Nordtvedt effect. This effect has been sensitively tested by the Lunar Laser Ranging Experiment. Up to the limit of one part in 10^13 there is no Nordtvedt effect. A tight bound on the effect of nearby gravitational fields on the strong equivalence principle comes from modeling the orbits of binary stars and comparing the results to pulsar timing data. In 2014, astronomers discovered a stellar triple system containing a millisecond pulsar, PSR J0337+1715, and two white dwarfs orbiting it. The system provided them with a chance to test the strong equivalence principle in a strong gravitational field with high accuracy. Most alternative theories of gravity predict a change in the gravity constant over time. Studies of Big Bang nucleosynthesis, analysis of pulsars, and the lunar laser ranging data have shown that G cannot have varied by more than 10% since the creation of the universe. The best data comes from studies of the ephemeris of Mars, based on three successive NASA missions: Mars Global Surveyor, Mars Odyssey, and Mars Reconnaissance Orbiter.
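To put a number on the gravitational-redshift tests mentioned above: in the weak-field limit, two clocks separated by a height h near the Earth's surface differ in rate by roughly gh/c². A minimal sketch reproducing the size of the Pound–Rebka effect, assuming the commonly quoted 22.5 m tower height:

```python
# Weak-field gravitational redshift between two clocks separated by a
# height h near Earth's surface: delta_f / f ≈ g * h / c^2.
# Reproduces the order of magnitude of the Pound-Rebka experiment;
# the 22.5 m height is the commonly quoted value, assumed here.
g = 9.81            # m/s^2, surface gravity
c = 299_792_458.0   # m/s, speed of light

def fractional_shift(height_m):
    return g * height_m / c**2

print(f"22.5 m tower: {fractional_shift(22.5):.2e}")  # about 2.5e-15
```

The smallness of this number (parts in 10^15 over a building-sized drop) is why the measurement required the Mössbauer effect, and why satellite clocks, with their much larger potential difference, need routine compensation.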
Physical sciences
Theory of relativity
null
858097
https://en.wikipedia.org/wiki/Salt%20lake
Salt lake
A salt lake or saline lake is a landlocked body of water that has a concentration of salts (typically sodium chloride) and other dissolved minerals significantly higher than most lakes (often defined as at least three grams of salt per liter). In some cases, salt lakes have a higher concentration of salt than sea water; such lakes can also be termed hypersaline lakes, and may also be pink lakes on account of their color. An alkaline salt lake that has a high content of carbonate is sometimes termed a soda lake. Salt lakes are classified according to salinity levels. The formation of these lakes is influenced by processes such as evaporation and deposition. Salt lakes face serious conservation challenges due to climate change, pollution and water diversion. Classification The primary method of classification for salt lakes involves assessing the chemical composition of the water within the lakes, specifically its salinity, pH, and the dominant ions present. Subsaline Subsaline lakes have a salinity lower than that of seawater but higher than freshwater, typically ranging from 0.5 to 3 grams per liter (g/L). Hyposaline Hyposaline lakes exhibit salinities from 0.5 to 3 g/L, which allows for the presence of freshwater species along with some salt-tolerant aquatic organisms. Lake Alchichica in Mexico is a hyposaline lake. Mesosaline Mesosaline lakes have a salinity level ranging from 3 to 35 g/L. An example of a mesosaline lake is Redberry Lake in Saskatchewan, Canada. Hypersaline Hypersaline lakes possess salinities greater than 35 g/L, often reaching levels that can exceed 200 g/L. The extreme salinity levels create harsh conditions that limit the diversity of life, primarily supporting specialized organisms such as halophilic bacteria and certain species of brine shrimp. These lakes can have high concentrations of sodium salts and minerals, such as lithium, making such lakes vulnerable to mining interests. Hypersaline lakes can be found in the McMurdo Dry Valleys in Antarctica, where salinity can reach ≈440‰. Formation Salt lakes form through complex chemical, geological, and biological processes, influenced by environmental conditions like high evaporation rates and restricted water outflow. As water carrying dissolved minerals (sodium, potassium, and magnesium) enters these basins, it gradually evaporates, concentrating these minerals until they precipitate as salt deposits. Specific ions then interact at particular temperatures, leading to solid-solution formation and salt-crystal deposition within the lake bed. This cycle of evaporation and deposition is the main process behind the unique saline environment that characterizes a salt lake. Environmental factors further shape the composition and formation of salt lakes. Seasonal variations in temperature and evaporation drive mineral saturation and promote salt crystallization. In dry regions, water loss during warmer seasons concentrates the lake's salts. This creates a dynamic environment where seasonal shifts affect the salt lake's mineral layers, contributing to its evolving structure and composition. Groundwater rich in dissolved ions often serves as a primary mineral source that, combined with processes like evaporation and deposition, contributes to salt lake development. Biodiversity Salt lakes host a diverse range of animals, despite high levels of salinity acting as significant environmental constraints.
Increased salinity worsens oxygen levels and thermal conditions, raising the water's density and viscosity, which demands greater energy for animal movement. Despite these challenges, salt lakes support biota adapted to such conditions with specialized physiological and biochemical mechanisms. Common salt lake invertebrates include various parasites, with around 85 parasite species found in saline waters, including crustaceans and monogeneans. Among them, the filter-feeding brine shrimp plays a crucial role as a keystone species by regulating phytoplankton and bacterioplankton levels. The Artemia species also serves as an intermediate host for helminth parasites that affect migratory water birds like flamingos, grebes, gulls, shorebirds, and ducks. Vertebrates in saline lakes include certain fish and bird species, though they are sensitive to fluctuations in salinity. Many saline lakes are also alkaline, which imposes physiological challenges for fish, especially in managing nitrogenous waste excretion. Fish species vary by lake; for instance, the Salton Sea is home to species such as carp, striped mullet, humpback sucker, and rainbow trout. Stratification Stratification in salt lakes occurs as a result of the unique chemical and environmental processes that cause water to separate into layers based on density. In these lakes, high rates of evaporation often concentrate salts, leading to denser, saltier water sinking to the lake's bottom, while fresher water remains nearer the surface. Seasonal changes influence the lake's structure, making stratification more pronounced during warmer months, when increased evaporation drives the separation between saline and fresher layers. This leads to a phenomenon known as meromixis (a meromictic state), which prevents oxygen from penetrating the deeper layers and creates hypoxic (low-oxygen) or anoxic (no-oxygen) zones. This separation influences the lake's chemistry, supporting only specialized microbial life adapted to extreme environments with high salinity and low oxygen levels. The restricted vertical mixing limits nutrient cycling, creating a favorable ecosystem for halophiles (salt-loving organisms) that rely on these saline conditions for stability and balance. The extreme conditions within stratified salt lakes have a profound effect on aquatic life, as oxygen levels are severely limited due to the lack of vertical mixing. Extremophiles, including specific bacteria and archaea, inhabit the hypersaline and oxygen-deficient zones at lower depths. These microorganisms rely on alternative metabolic processes that do not depend on oxygen, and they play a critical role in nutrient cycling within salt lakes, as they break down organic material and release by-products that support other microbial communities. The restrictive environment limits biodiversity, allowing only specially adapted life forms to survive, which creates unique, highly specialized ecosystems that are distinct from freshwater or less saline habitats. Conservation Salt lakes have declined worldwide in recent years. The Aral Sea, once one of the largest saline lakes, with a surface area of 67,499 km² in 1960, diminished to approximately 6,990 km² in 2016. This trend is not limited to the Aral Sea; salt lakes around the world are shrinking due to excessive water diversion, dam construction, pollution, urbanization, and rising temperatures associated with climate change.
The resulting declines cause severe disruptions to local ecosystems and biodiversity, degrade the environment, threaten economic stability, and displace communities dependent on these lakes for resources and livelihood. In Utah, if the Great Salt Lake is not conserved, the state could face potential economic and public health crises, with consequences for air quality, local agriculture, and wildlife. According to Utah's Great Salt Lake Strike Team, in order to increase the lake's level within the next 30 years, average inflows must increase by 472,000 acre-feet per year, which is about a 33% increase over the amount that has reached the lake in recent years. Water conservation is viewed as the most cost-effective and practical strategy to save salt lakes like the Great Salt Lake. Implementing strong water management policies, improving community awareness, and ensuring the return of water flow to these lakes are additional ways that may restore ecological balance. Other proposed methods of maintaining lake levels include cloud seeding and the mitigation of dust transmission hotspots. List Note: Some of the following are also partly fresh and/or brackish water. Aral Sea Aralsor Aydar Lake Bakhtegan Lake Caspian Sea Chilika Lake Chott el Djerid Dabusun Lake Dead Sea Devil's Lake Don Juan Pond Garabogazköl Goose Lake Great Salt Lake Grevelingen Khyargas Nuur Laguna Colorada Laguna Verde Lake Abert Lake Alakol Lake Assal Lake Balkhash Lake Barlee Lake Baskunchak Lake Bumbunga Lake Elton Lake Enriquillo Lake Eyre Lake Gairdner Lake Hillier Lake Karum Lake Mackay Lake Natron Lake Paliastomi Lake Pontchartrain Lake Texoma Lake Torrens Lake Tuz Lake Tyrrell Lake Urmia Lake Van Lake Vanda Larnaca Salt Lake Little Manitou Lake Lonar Lake Lough Hyne Maharloo Lake Mar Chiquita Lake Mono Lake Nam Lake Pangong Lake Pulicat Lake Qarhan Playa Redberry Lake Salton Sea Sambhar Salt Lake Sarygamysh Lake Sawa Lake Siling Lake South Hulsan Lake Sutton Salt Lake Uvs Lake Gallery
Physical sciences
Hydrology
Earth science
859077
https://en.wikipedia.org/wiki/Beige
Beige
Beige is variously described as a pale sandy fawn color, a grayish tan, a light-grayish yellowish brown, or a pale to grayish yellow. It takes its name from French, where the word originally meant natural wool that has been neither bleached nor dyed, hence also the color of natural wool. The word "beige" has come to be used to describe a variety of light tints chosen for their neutral or pale warm appearance. Beige began to be commonly used as a color term in France in approximately 1855–60; the writer Edmond de Goncourt used it in the novel La Fille Élisa in 1877. The first recorded use of beige as a color name in English was in 1887. Beige is notoriously difficult to produce in traditional offset CMYK printing because of the low levels of ink used on each plate; often it will print in purple or green and vary within a print run. Beige is also a popular color in clothing, such as for men's trousers, as well as for interior design. Various beige colors Cosmic latte Cosmic latte is a name assigned in 2002 to the average color of the universe (derived from a sampling of the electromagnetic radiation from 200,000 galaxies), given by a team of astronomers from Johns Hopkins University. Cream Cream is the color of the cream produced by cattle grazing on natural pasture with plants rich in yellow carotenoid pigments, some of which are incorporated into the cream, giving a yellow tone to the white. The first recorded use of cream as a color name in English was in 1590. Unbleached silk Unbleached silk is one of the traditional Japanese colors, in use since 660 CE in the form of various dyes used in designing kimonos. Tuscan The first recorded use of Tuscan as a color name in English was in 1887. Buff Buff is a pale yellow-brown color that got its name from the color of buffed leather. According to the Oxford English Dictionary, buff as a descriptor of a color was first used in the London Gazette of 1686, describing a uniform to be "A Red Coat with a Buff-colour'd lining". Desert sand The color desert sand may be regarded as a deep shade of beige. It is a pale tint of a color called desert. The color name "desert" was first used in 1920. In the 1960s, the American Telephone & Telegraph Company (AT&T) marketed desert sand–colored telephones for offices and homes. However, they described the color as "beige". It is therefore common for many people to refer to the color desert sand as "beige". Ecru Originally in the 19th century and up to at least 1930, the color ecru meant exactly the same color as beige (i.e. the pale cream color described above as beige), and the word is often used to refer to such fabrics as silk and linen in their unbleached state. Ecru comes from the French word écru, which means literally "raw" or "unbleached". Since at least the 1950s, however, the color ecru has been regarded as a different color from beige, presumably in order to allow interior designers a wider palette of colors to choose from. Khaki Khaki was designated in the 1930 book A Dictionary of Color, the standard for color nomenclature before the introduction of computers. The first recorded use of khaki as a color name in English was in 1848. French beige The first recorded use of French beige as a color name in English was in 1927. The normalized color coordinates for French beige are identical to café au lait and Tuscan tan, which were first recorded as color names in English in 1839 and 1926, respectively. Mode beige Mode beige is a very dark shade of beige. 
The first recorded use of mode beige as a color name in English was in 1928. The normalized color coordinates for mode beige are identical to the color names drab, sand dune, and bistre brown, which were first recorded as color names in English, respectively, in 1686, 1925, and 1930. In nature Fish Beige catshark Mammal Beige rabbit Metaphor Beige is sometimes used as a metaphor for something which is bland, boring, conventional, or even sad. In this sense, it is used in contradistinction to more vibrant and exciting (or more individual) colors.
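The difficulty of printing beige mentioned above can be illustrated numerically. The following minimal Python sketch is not from the article: it assumes the CSS/X11 definition of beige (#F5F5DC), which is the web-standard value and not necessarily the swatch any particular source describes, and it uses the textbook naive RGB-to-CMYK formula rather than a color-managed print profile.

# Naive RGB -> CMYK conversion for the CSS/X11 color "beige" (#F5F5DC).
# Both the hex value and the conversion formula are assumptions for
# illustration, not a statement about any specific press workflow.
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB to CMYK fractions using the naive formula."""
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)
    if k == 1:                      # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return c, m, y, k

c, m, y, k = rgb_to_cmyk(0xF5, 0xF5, 0xDC)
print(f"C={c:.3f} M={m:.3f} Y={y:.3f} K={k:.3f}")
# -> C=0.000 M=0.000 Y=0.102 K=0.039

The ink coverage on every plate comes out very low, which is consistent with the claim that small press variations can visibly shift the printed hue.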
Physical sciences
Colors
Physics
859275
https://en.wikipedia.org/wiki/Displacement%20%28geometry%29
Displacement (geometry)
In geometry and mechanics, a displacement is a vector whose length is the shortest distance from the initial to the final position of a point P undergoing motion. It quantifies both the distance and direction of the net or total motion along a straight line from the initial position to the final position of the point trajectory. A displacement may be identified with the translation that maps the initial position to the final position. Displacement is the shift in location when an object in motion changes from one position to another. For motion over a given interval of time, the displacement divided by the length of the time interval defines the average velocity (a vector), whose magnitude is the average speed (a scalar quantity). Formulation A displacement may be formulated as a relative position (resulting from the motion), that is, as the final position x_f of a point relative to its initial position x_i. The corresponding displacement vector can be defined as the difference between the final and initial positions: s = x_f − x_i. Rigid body In dealing with the motion of a rigid body, the term displacement may also include the rotations of the body. In this case, the displacement of a particle of the body is called linear displacement (displacement along a line), while the rotation of the body is called angular displacement. Derivatives For a position vector s that is a function of time t, the derivatives can be computed with respect to t. The first two derivatives are frequently encountered in physics. Velocity is the first derivative, v = ds/dt. Acceleration is the second derivative, a = dv/dt = d²s/dt². Jerk is the third derivative, j = da/dt = d³s/dt³. These common names correspond to terminology used in basic kinematics. By extension, the higher order derivatives can be computed in a similar fashion. Study of these higher order derivatives can improve approximations of the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite series, enabling several analytical techniques in engineering and physics. The fourth order derivative is called jounce. Discussion In considering motions of objects over time, the instantaneous velocity of the object is the rate of change of the displacement as a function of time. The instantaneous speed, then, is distinct from velocity, or the time rate of change of the distance travelled along a specific path. The velocity may be equivalently defined as the time rate of change of the position vector. If one considers a moving initial position, or equivalently a moving origin (e.g. an initial position or origin which is fixed to a train wagon, which in turn moves on its rail track), the velocity of P (e.g. a point representing the position of a passenger walking on the train) may be referred to as a relative velocity; this is opposed to an absolute velocity, which is computed with respect to a point and coordinate axes which are considered to be at rest (an inertial frame of reference such as, for instance, a point fixed on the floor of the train station and the usual vertical and horizontal directions).
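The distinction between displacement and distance travelled, and hence between average velocity and average speed, can be made concrete with a small numerical sketch. The following Python example is illustrative only; the sampled positions and times are invented values.

# Displacement vs. distance travelled for a point moving through sampled
# positions, with average velocity (vector) and average speed (scalar).
import math

positions = [(0.0, 0.0), (3.0, 4.0), (3.0, 0.0)]   # sampled path of point P
t_initial, t_final = 0.0, 2.0                      # seconds (assumed)

# Displacement: straight-line vector from initial to final position.
dx = positions[-1][0] - positions[0][0]
dy = positions[-1][1] - positions[0][1]

# Distance travelled: sum of segment lengths along the actual path.
distance = sum(
    math.hypot(x2 - x1, y2 - y1)
    for (x1, y1), (x2, y2) in zip(positions, positions[1:])
)

dt = t_final - t_initial
avg_velocity = (dx / dt, dy / dt)   # displacement / time interval
avg_speed = distance / dt           # path length / time interval

print(f"displacement = ({dx}, {dy}), magnitude = {math.hypot(dx, dy)}")  # (3, 0), 3.0
print(f"distance travelled = {distance}")                                # 9.0
print(f"average velocity = {avg_velocity}, average speed = {avg_speed}") # (1.5, 0.0), 4.5

Because the path bends, the magnitude of the average velocity (1.5) differs from the average speed (4.5), exactly the distinction drawn in the Discussion section.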
Physical sciences
Basics_4
Physics
859283
https://en.wikipedia.org/wiki/Cylinder
Cylinder
A cylinder has traditionally been a three-dimensional solid, one of the most basic of curvilinear geometric shapes. In elementary geometry, it is considered a prism with a circle as its base. A cylinder may also be defined as an infinite curvilinear surface in various modern branches of geometry and topology. The shift in the basic meaning—solid versus surface (as in a solid ball versus sphere surface)—has created some ambiguity with terminology. The two concepts may be distinguished by referring to solid cylinders and cylindrical surfaces. In the literature the unadorned term cylinder could refer to either of these or to an even more specialized object, the right circular cylinder. Types The definitions and results in this section are taken from the 1913 text Plane and Solid Geometry by George A. Wentworth and David Eugene Smith. A cylindrical surface is a surface consisting of all the points on all the lines which are parallel to a given line and which pass through a fixed plane curve in a plane not parallel to the given line. Any line in this family of parallel lines is called an element of the cylindrical surface. From a kinematics point of view, given a plane curve, called the directrix, a cylindrical surface is that surface traced out by a line, called the generatrix, not in the plane of the directrix, moving parallel to itself and always passing through the directrix. Any particular position of the generatrix is an element of the cylindrical surface. A solid bounded by a cylindrical surface and two parallel planes is called a (solid) cylinder. The line segment determined by an element of the cylindrical surface between the two parallel planes is called an element of the cylinder. All the elements of a cylinder have equal lengths. The region bounded by the cylindrical surface in either of the parallel planes is called a base of the cylinder. The two bases of a cylinder are congruent figures. If the elements of the cylinder are perpendicular to the planes containing the bases, the cylinder is a right cylinder, otherwise it is called an oblique cylinder. If the bases are disks (regions whose boundary is a circle) the cylinder is called a circular cylinder. In some elementary treatments, a cylinder always means a circular cylinder. The height (or altitude) of a cylinder is the perpendicular distance between its bases. The cylinder obtained by rotating a line segment about a fixed line that it is parallel to is a cylinder of revolution. A cylinder of revolution is a right circular cylinder. The height of a cylinder of revolution is the length of the generating line segment. The line that the segment is revolved about is called the axis of the cylinder and it passes through the centers of the two bases. Right circular cylinders The bare term cylinder often refers to a solid cylinder with circular ends perpendicular to the axis, that is, a right circular cylinder. The cylindrical surface without the ends is called an open cylinder. The formulae for the surface area and the volume of a right circular cylinder have been known from early antiquity. A right circular cylinder can also be thought of as the solid of revolution generated by rotating a rectangle about one of its sides. These cylinders are used in an integration technique (the "disk method") for obtaining volumes of solids of revolution. A tall and thin needle cylinder has a height much greater than its diameter, whereas a short and wide disk cylinder has a diameter much greater than its height. Properties Cylindric sections A cylindric section is the intersection of a cylinder's surface with a plane. 
They are, in general, curves and are special types of plane sections. The cylindric section by a plane that contains two elements of a cylinder is a parallelogram. Such a cylindric section of a right cylinder is a rectangle. A cylindric section in which the intersecting plane intersects and is perpendicular to all the elements of the cylinder is called a right section. If a right section of a cylinder is a circle then the cylinder is a circular cylinder. In more generality, if a right section of a cylinder is a conic section (parabola, ellipse, hyperbola) then the solid cylinder is said to be parabolic, elliptic and hyperbolic, respectively. For a right circular cylinder, there are several ways in which planes can meet a cylinder. First, planes that intersect a base in at most one point. A plane is tangent to the cylinder if it meets the cylinder in a single element. The right sections are circles and all other planes intersect the cylindrical surface in an ellipse. If a plane intersects a base of the cylinder in exactly two points then the line segment joining these points is part of the cylindric section. If such a plane contains two elements, it has a rectangle as a cylindric section, otherwise the sides of the cylindric section are portions of an ellipse. Finally, if a plane contains more than two points of a base, it contains the entire base and the cylindric section is a circle. In the case of a right circular cylinder with a cylindric section that is an ellipse, the eccentricity e of the cylindric section and semi-major axis a of the cylindric section depend on the radius r of the cylinder and the angle α between the secant plane and the cylinder axis, in the following way: e = cos α, a = r/sin α. Volume If the base of a circular cylinder has a radius r and the cylinder has height h, then its volume is given by V = πr²h. This formula holds whether or not the cylinder is a right cylinder. This formula may be established by using Cavalieri's principle. In more generality, by the same principle, the volume of any cylinder is the product of the area of a base and the height. For example, an elliptic cylinder with a base having semi-major axis a, semi-minor axis b and height h has a volume V = Ah, where A is the area of the base ellipse (= πab). This result for right elliptic cylinders can also be obtained by integration, where the axis of the cylinder is taken as the positive x-axis and A(x) = πab the area of each elliptic cross-section, thus: V = ∫ A(x) dx from 0 to h, which equals πabh. Using cylindrical coordinates, the volume of a right circular cylinder can be calculated by integration of the volume element s ds dφ dz over 0 ≤ s ≤ r, 0 ≤ φ ≤ 2π, 0 ≤ z ≤ h, again giving V = πr²h. Surface area Having radius r and altitude (height) h, the surface area of a right circular cylinder, oriented so that its axis is vertical, consists of three parts: the area of the top base: πr²; the area of the bottom base: πr²; the area of the side: 2πrh. The area of the top and bottom bases is the same, and is called the base area, B. The area of the side is known as the lateral area, L. An open cylinder does not include either top or bottom elements, and therefore has surface area (lateral area) L = 2πrh. The surface area of the solid right circular cylinder is made up of the sum of all three components: top, bottom and side. Its surface area is therefore A = L + 2B = 2πrh + 2πr² = 2πr(h + r) = πd(h + r), where d = 2r is the diameter of the circular top or bottom. For a given volume, the right circular cylinder with the smallest surface area has h = 2r. Equivalently, for a given surface area, the right circular cylinder with the largest volume has h = 2r, that is, the cylinder fits snugly in a cube of side length = altitude (= diameter of the base circle). 
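The formulas above lend themselves to a quick numerical check. The following minimal Python sketch is not part of the source text; the sample radius, height, and target volume are invented, and the optimum is found by brute-force search rather than calculus.

# Volume V = pi*r^2*h, lateral area L = 2*pi*r*h, total area A = 2*pi*r*(r + h),
# plus a numerical confirmation that for fixed volume the minimal-area
# right circular cylinder has h = 2r.
import math

def cylinder_volume(r, h):
    return math.pi * r**2 * h

def cylinder_surface_area(r, h, open_cylinder=False):
    lateral = 2 * math.pi * r * h
    return lateral if open_cylinder else lateral + 2 * math.pi * r**2

r, h = 2.0, 5.0
print(cylinder_volume(r, h))        # 62.83...
print(cylinder_surface_area(r, h))  # 87.96...

# For fixed volume V, h = V/(pi*r^2), so A(r) = 2*pi*r^2 + 2*V/r.
V = 100.0
candidates = [i / 1000 for i in range(100, 5000)]
best_r = min(candidates, key=lambda r: 2 * math.pi * r**2 + 2 * V / r)
h_over_r = (V / (math.pi * best_r**2)) / best_r
print(best_r, h_over_r)             # h/r comes out close to 2, matching h = 2r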
The lateral area, L, of a circular cylinder, which need not be a right cylinder, is more generally given by L = e × p, where e is the length of an element and p is the perimeter of a right section of the cylinder. This produces the previous formula for lateral area when the cylinder is a right circular cylinder. Right circular hollow cylinder (cylindrical shell) A right circular hollow cylinder (or cylindrical shell) is a three-dimensional region bounded by two right circular cylinders having the same axis and two parallel annular bases perpendicular to the cylinders' common axis. Let the height be h, internal radius r, and external radius R. The volume is given by V = π(R² − r²)h = 2π((R + r)/2)(R − r)h. Thus, the volume of a cylindrical shell equals 2π × average radius × height × thickness. The surface area, including the top and bottom, is given by A = 2π(R + r)h + 2π(R² − r²). Cylindrical shells are used in a common integration technique for finding volumes of solids of revolution. On the Sphere and Cylinder In the treatise by this name, written c. 225 BCE, Archimedes obtained the result of which he was most proud, namely obtaining the formulas for the volume and surface area of a sphere by exploiting the relationship between a sphere and its circumscribed right circular cylinder of the same height and diameter. The sphere has a volume two-thirds that of the circumscribed cylinder and a surface area two-thirds that of the cylinder (including the bases). Since the values for the cylinder were already known, he obtained, for the first time, the corresponding values for the sphere. The volume of a sphere of radius r is (4/3)πr³. The surface area of this sphere is 4πr². A sculpted sphere and cylinder were placed on the tomb of Archimedes at his request. Cylindrical surfaces In some areas of geometry and topology the term cylinder refers to what has been called a cylindrical surface. A cylinder is defined as a surface consisting of all the points on all the lines which are parallel to a given line and which pass through a fixed plane curve in a plane not parallel to the given line. Such cylinders have, at times, been referred to as generalized cylinders. Through each point of a generalized cylinder there passes a unique line that is contained in the cylinder. Thus, this definition may be rephrased to say that a cylinder is any ruled surface spanned by a one-parameter family of parallel lines. A cylinder having a right section that is an ellipse, parabola, or hyperbola is called an elliptic cylinder, parabolic cylinder and hyperbolic cylinder, respectively. These are degenerate quadric surfaces. When the principal axes of a quadric are aligned with the reference frame (always possible for a quadric), a general equation of the quadric in three dimensions is given by f(x, y, z) = Ax² + By² + Cz² + Dx + Ey + Fz + G = 0, with the coefficients being real numbers and not all of A, B and C being 0. If at least one variable does not appear in the equation, then the quadric is degenerate. If one variable is missing, we may assume by an appropriate rotation of axes that the variable z does not appear, and the general equation of this type of degenerate quadric can be written (for A, B ≠ 0) as A(x + D/(2A))² + B(y + E/(2B))² = ρ, where ρ = −G + D²/(4A) + E²/(4B). Elliptic cylinder If AB > 0, this is the equation of an elliptic cylinder. Further simplification can be obtained by translation of axes and scalar multiplication. If ρ has the same sign as the coefficients A and B, then the equation of an elliptic cylinder may be rewritten in Cartesian coordinates as: (x/a)² + (y/b)² = 1. This equation of an elliptic cylinder is a generalization of the equation of the ordinary, circular cylinder (a = b). Elliptic cylinders are also known as cylindroids, but that name is ambiguous, as it can also refer to the Plücker conoid. 
If ρ has a different sign than the coefficients, we obtain the imaginary elliptic cylinders: (x/a)² + (y/b)² = −1, which have no real points on them. ((x/a)² + (y/b)² = 0 gives a single real point.) Hyperbolic cylinder If A and B have different signs and ρ ≠ 0, we obtain the hyperbolic cylinders, whose equations may be rewritten as: (x/a)² − (y/b)² = 1. Parabolic cylinder Finally, if AB = 0, assume, without loss of generality, that B = 0 and A = 1 to obtain the parabolic cylinders with equations that can be written as: x² + 2ay = 0. Projective geometry In projective geometry, a cylinder is simply a cone whose apex (vertex) lies on the plane at infinity. If the cone is a quadratic cone, the plane at infinity (which passes through the vertex) can intersect the cone at two real lines, a single real line (actually a coincident pair of lines), or only at the vertex. These cases give rise to the hyperbolic, parabolic or elliptic cylinders respectively. This concept is useful when considering degenerate conics, which may include the cylindrical conics. Prisms A solid circular cylinder can be seen as the limiting case of an n-gonal prism where n approaches infinity. The connection is very strong and many older texts treat prisms and cylinders simultaneously. Formulas for surface area and volume are derived from the corresponding formulas for prisms by using inscribed and circumscribed prisms and then letting the number of sides of the prism increase without bound. One reason for the early emphasis (and sometimes exclusive treatment) on circular cylinders is that a circular base is the only type of geometric figure for which this technique works with the use of only elementary considerations (no appeal to calculus or more advanced mathematics). Terminology about prisms and cylinders is identical. Thus, for example, since a truncated prism is a prism whose bases do not lie in parallel planes, a solid cylinder whose bases do not lie in parallel planes would be called a truncated cylinder. From a polyhedral viewpoint, a cylinder can also be seen as a dual of a bicone as an infinite-sided bipyramid.
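The case analysis for degenerate quadrics can be summarized in code. The following Python sketch is an illustration, not an established algorithm: it applies only the sign tests on A, B, and ρ described above for the equation Ax² + By² + Dx + Ey + G = 0, and it ignores further degeneracies (for example, a parabolic case with no surviving linear term).

# Classify the degenerate quadric A*x^2 + B*y^2 + D*x + E*y + G = 0 (z absent)
# using the completed-square constant rho = -G + D^2/(4A) + E^2/(4B).
def classify_cylinder(A, B, D=0.0, E=0.0, G=0.0):
    if A == 0 and B == 0:
        return "no quadratic terms: not a quadric cylinder"
    if A * B != 0:
        rho = -G + D**2 / (4 * A) + E**2 / (4 * B)
        if A * B > 0:
            if rho == 0:
                return "degenerate (single real point in the cross-section)"
            # rho with the same sign as A and B gives a real elliptic cylinder
            return "elliptic cylinder" if rho * A > 0 else "imaginary elliptic cylinder"
        # A and B of opposite sign:
        # rho == 0 is a further degeneracy (two intersecting planes), beyond the text
        return "hyperbolic cylinder" if rho != 0 else "pair of intersecting planes"
    return "parabolic cylinder"  # exactly one of A, B is zero

print(classify_cylinder(1, 1, G=-1))    # x^2 + y^2 = 1 -> elliptic cylinder
print(classify_cylinder(1, -1, G=-1))   # x^2 - y^2 = 1 -> hyperbolic cylinder
print(classify_cylinder(1, 0, E=2))     # x^2 + 2y = 0  -> parabolic cylinder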
Mathematics
Three-dimensional space
null
859981
https://en.wikipedia.org/wiki/Biochip
Biochip
In molecular biology, biochips are engineered substrates ("miniaturized laboratories") that can host large numbers of simultaneous biochemical reactions. One of the goals of biochip technology is to efficiently screen large numbers of biological analytes, with potential applications ranging from disease diagnosis to detection of bioterrorism agents. For example, digital microfluidic biochips are under investigation for applications in biomedical fields. In a digital microfluidic biochip, a group of (adjacent) cells in the microfluidic array can be configured to work as storage, functional operations, as well as for transporting fluid droplets dynamically. History The development started with early work on the underlying sensor technology. One of the first portable, chemistry-based sensors was the glass pH electrode, invented in 1922 by Hughes. The basic concept of using exchange sites to create permselective membranes was used to develop other ion sensors in subsequent years. For example, a K+ sensor was produced by incorporating valinomycin into a thin membrane. In 1953, Watson and Crick announced their discovery of the now familiar double helix structure of DNA molecules and set the stage for genetics research that continues to the present day. The development of sequencing techniques in 1977 by Gilbert and Sanger (working separately) enabled researchers to directly read the genetic codes that provide instructions for protein synthesis. This research showed how hybridization of complementary single oligonucleotide strands could be used as a basis for DNA sensing. Two additional developments enabled the technology used in modern DNA-based biochips. First, in 1983 Kary Mullis invented the polymerase chain reaction (PCR) technique, a method for amplifying DNA concentrations. This discovery made possible the detection of extremely small quantities of DNA in samples. Secondly, in 1986 Hood and co-workers devised a method to label DNA molecules with fluorescent tags instead of radiolabels, thus enabling hybridization experiments to be observed optically. In a typical biochip platform, the actual sensing component (or "chip") is just one piece of a complete analysis system. Transduction must be done to translate the actual sensing event (DNA binding, oxidation/reduction, etc.) into a format understandable by a computer (voltage, light intensity, mass, etc.), which then enables additional analysis and processing to produce a final, human-readable output. The multiple technologies needed to make a successful biochip—from sensing chemistry, to microarraying, to signal processing—require a true multidisciplinary approach, making the barrier to entry steep. One of the first commercial biochips was introduced by Affymetrix. Their "GeneChip" products contain thousands of individual DNA sensors for use in sensing defects, or single nucleotide polymorphisms (SNPs), in genes such as p53 (a tumor suppressor) and BRCA1 and BRCA2 (related to breast cancer). The chips are produced by using microlithography techniques traditionally used to fabricate integrated circuits (see below). Microarray fabrication The microarray—the dense, two-dimensional grid of biosensors—is the critical component of a biochip platform. Typically, the sensors are deposited on a flat substrate, which may either be passive (e.g. silicon or glass) or active, the latter consisting of integrated electronics or micromechanical devices that perform or assist signal transduction. 
Surface chemistry is used to covalently bind the sensor molecules to the substrate medium. The fabrication of microarrays is non-trivial and is a major economic and technological hurdle that may ultimately decide the success of future biochip platforms. The primary manufacturing challenge is the process of placing each sensor at a specific position (typically on a Cartesian grid) on the substrate. Various means exist to achieve the placement, but typically robotic micro-pipetting or micro-printing systems are used to place tiny spots of sensor material on the chip surface. Because each sensor is unique, only a few spots can be placed at a time. The low-throughput nature of this process results in high manufacturing costs. Fodor and colleagues developed a unique fabrication process (later used by Affymetrix) in which a series of microlithography steps is used to combinatorially synthesize hundreds of thousands of unique, single-stranded DNA sensors on a substrate one nucleotide at a time. One lithography step is needed per base type; thus, a total of four steps is required per nucleotide level. Although this technique is very powerful in that many sensors can be created simultaneously, it is currently only feasible for creating short DNA strands (15–25 nucleotides). Reliability and cost factors limit the number of photolithography steps that can be done. Furthermore, light-directed combinatorial synthesis techniques are not currently possible for proteins or other sensing molecules. As noted above, most microarrays consist of a Cartesian grid of sensors. This approach is used chiefly to map or "encode" the coordinate of each sensor to its function. Sensors in these arrays typically use a universal signalling technique (e.g. fluorescence), thus making coordinates their only identifying feature. These arrays must be made using a serial process (i.e. requiring multiple, sequential steps) to ensure that each sensor is placed at the correct position. "Random" fabrication, in which the sensors are placed at arbitrary positions on the chip, is an alternative to the serial method. The tedious and expensive positioning process is not required, enabling the use of parallelized self-assembly techniques. In this approach, large batches of identical sensors can be produced; sensors from each batch are then combined and assembled into an array. A non-coordinate based encoding scheme must be used to identify each sensor. Such a design was first demonstrated (and later commercialized by Illumina) using functionalized beads placed randomly in the wells of an etched fiber optic cable. Each bead was uniquely encoded with a fluorescent signature. However, this encoding scheme is limited in the number of unique dye combinations that can be used and successfully differentiated. Protein biochip array and other microarray technologies Microarrays are not limited to DNA analysis; protein microarrays, antibody microarray, chemical compound microarray can also be produced using biochips. Randox Laboratories Ltd. launched Evidence, the first protein Biochip Array Technology analyzer in 2003. In protein Biochip Array Technology, the biochip replaces the ELISA plate or cuvette as the reaction platform. The biochip is used to simultaneously analyze a panel of related tests in a single sample, producing a patient profile. The patient profile can be used in disease screening, diagnosis, monitoring disease progression or monitoring treatment. 
Performing multiple analyses simultaneously, described as multiplexing, allows a significant reduction in processing time and the amount of patient sample required. Biochip Array Technology is a novel application of a familiar methodology, using sandwich, competitive and antibody-capture immunoassays. The difference from conventional immunoassays is that the capture ligands are covalently attached to the surface of the biochip in an ordered array rather than in solution. In sandwich assays an enzyme-labelled antibody is used; in competitive assays an enzyme-labelled antigen is used. On antibody-antigen binding a chemiluminescence reaction produces light. Detection is by a charge-coupled device (CCD) camera. The CCD camera is a sensitive and high-resolution sensor able to accurately detect and quantify very low levels of light. The test regions are located using a grid pattern, then the chemiluminescence signals are analysed by imaging software to rapidly and simultaneously quantify the individual analytes. Biochips are also used in the field of microphysiometry, e.g. in skin-on-a-chip applications. For details about other array technologies, see Antibody microarray.
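The grid-based quantification just described can be sketched in a few lines of code. The following Python example is purely illustrative: the analyte names, grid positions, intensity readings, and linear calibration constants are all invented, and real analyzers use vendor-specific image analysis and calibration curves.

# Map grid positions to analytes (the array's coordinate encoding), then
# convert per-spot chemiluminescence intensities to concentrations with an
# assumed linear calibration: concentration = slope * (signal - blank).
layout = {(0, 0): "CRP", (0, 1): "IL-6", (1, 0): "TNF-alpha", (1, 1): "control"}
intensities = {(0, 0): 1520.0, (0, 1): 310.0, (1, 0): 95.0, (1, 1): 12.0}
calibration = {"CRP": (0.004, 10.0), "IL-6": (0.02, 8.0), "TNF-alpha": (0.05, 9.0)}

profile = {}
for pos, analyte in layout.items():
    if analyte == "control":
        continue  # control spot checks the assay, not the patient sample
    slope, blank = calibration[analyte]
    profile[analyte] = max(0.0, slope * (intensities[pos] - blank))

print(profile)  # a multiplexed "patient profile" from one sample in one pass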
Technology
Biotechnology
null
860000
https://en.wikipedia.org/wiki/Smokeless%20powder
Smokeless powder
Smokeless powder is a type of propellant used in firearms and artillery that produces less smoke and less fouling when fired compared to black powder. Because of their similar use, both the original black powder formulation and the smokeless propellant which replaced it are commonly described as gunpowder. The combustion products of smokeless powder are mainly gaseous, compared to around 55% solid products (mostly potassium carbonate, potassium sulfate, and potassium sulfide) for black powder. In addition, smokeless powder does not leave the thick, heavy fouling of hygroscopic material associated with black powder that causes rusting of the barrel. Despite its name, smokeless powder is not completely free of smoke; while there may be little noticeable smoke from small-arms ammunition, smoke from artillery fire can be substantial. Smokeless powder was invented in 1884 by Paul Vieille. The most common formulations are based on nitrocellulose, but during the late 19th century the term was also used to describe various picrate mixtures with nitrate, chlorate, or dichromate oxidizers, before the advantages of nitrocellulose became evident. Smokeless powders are typically classified as division 1.3 explosives under the UN Recommendations on the Transport of Dangerous Goods – Model Regulations, regional regulations (such as ADR) and national regulations. However, they are used as solid propellants; in normal use, they undergo deflagration rather than detonation. Smokeless powder made autoloading firearms with many moving parts feasible (which would otherwise jam or seize under heavy black powder fouling). Smokeless powder allowed the development of modern semi- and fully automatic firearms and lighter breeches and barrels for artillery. History Before the widespread introduction of smokeless powder the use of gunpowder or black powder caused many problems on the battlefield. Military commanders since the Napoleonic Wars reported difficulty with giving orders on a battlefield obscured by the smoke of firing. Visual signals could not be seen through the thick smoke from the gunpowder used by the guns. Unless there was a strong wind, after a few shots, soldiers using gunpowder ammunition would have their view obscured by a huge cloud of smoke, and this problem became worse with increasing rate of fire. In 1884, during the Battle of Tamai, Sudanese troops were able to break the square of British infantry armed with Martini–Henry rifles because this smoke obscured the defenders' view. Sharpshooters firing from concealed positions risked revealing their locations with a cloud of smoke. Gunpowder burns in a relatively inefficient process that produces lower pressures, making it about one-third as powerful as the same amount of smokeless powder. A significant portion of the combustion products from gunpowder are solids that are hygroscopic, i.e. they attract moisture from the air and make cleaning mandatory after every use, in order to prevent water accumulation in the barrel that can lead to corrosion and premature failure. These solids are also behind gunpowder's tendency to produce severe fouling that causes breech-loading actions to jam and can make reloading difficult. Nitroglycerine and guncotton Nitroglycerine was synthesized by the Italian chemist Ascanio Sobrero in 1847. 
It was subsequently developed and manufactured by Alfred Nobel as an industrial explosive under the trademark "Dynamite", but even then it was unsuitable as a propellant: despite its energetic and smokeless qualities, it detonates at supersonic speed, as opposed to deflagrating smoothly at subsonic speeds, making it more liable to shatter a gun barrel than to propel a projectile out of it. Nitroglycerine is also highly shock-sensitive, making it unfit to be carried in battlefield conditions. A major step forward was the invention of guncotton, a nitrocellulose-based material, by German chemist Christian Friedrich Schönbein in 1846. He promoted its use as a blasting explosive and sold manufacturing rights to the Austrian Empire. Guncotton was more powerful than gunpowder, but at the same time somewhat more unstable. John Taylor obtained an English patent for guncotton, and John Hall & Sons began manufacture in Faversham. English interest languished after an explosion destroyed the Faversham factory in 1847. Austrian Baron Wilhelm Lenk von Wolfsberg built two guncotton plants producing artillery propellant, but it too was dangerous under field conditions, and guns that could fire thousands of rounds using black powder would reach the end of their service life after only a few hundred shots with the more powerful guncotton. Small arms could not withstand the pressures generated by guncotton. After one of the Austrian factories blew up in 1862, Thomas Prentice & Company began manufacturing guncotton in Stowmarket in 1863, and British War Office chemist Sir Frederick Abel began thorough research at Waltham Abbey Royal Gunpowder Mills, leading to a manufacturing process that eliminated the impurities in nitrocellulose, making it safer to produce and yielding a stable product that was safer to handle. Abel patented this process in 1865, when the second Austrian guncotton factory exploded. After the Stowmarket factory exploded in 1871, Waltham Abbey began production of guncotton for torpedo and mine warheads. Improvements In 1863, Prussian artillery captain Johann F. E. Schultze patented a small-arms propellant of nitrated hardwood impregnated with saltpeter or barium nitrate. Prentice received an 1866 patent for a sporting powder of nitrated paper manufactured at Stowmarket, but ballistic uniformity suffered as the paper absorbed atmospheric moisture. In 1871, Frederick Volkmann received an Austrian patent for a colloided version of Schultze powder called Collodin, which he manufactured near Vienna for use in sporting firearms. Austrian patents were not published at the time, and the Austrian Empire considered the operation a violation of the government monopoly on explosives manufacture and closed the Volkmann factory in 1875. In 1882, the Explosives Company at Stowmarket patented an improved formulation of nitrated cotton gelatinised by ether-alcohol with nitrates of potassium and barium. These propellants were suitable for shotguns but not rifles, because rifling results in resistance to a smooth expansion of the gas, which is reduced in smoothbore shotguns. In 1884, Paul Vieille invented a smokeless powder called Poudre B (short for poudre blanche, white powder, as distinguished from black powder) made from 68.2% insoluble nitrocellulose, 29.8% soluble nitrocellulose gelatinized with ether and 2% paraffin. This was adopted for the Lebel rifle chambered in 8×50mmR Lebel. It was passed through rollers to form paper-thin sheets, which were cut into flakes of the desired size. 
The resulting propellant, known as pyrocellulose, contains somewhat less nitrogen than guncotton does, and is less volatile. A particularly good feature of the propellant is that it will not detonate unless it is compressed, making it very safe to handle under normal conditions. Vieille's powder revolutionized the effectiveness of small guns because it gave off almost no smoke and was three times more powerful than black powder. Higher muzzle velocity meant a flatter trajectory and less wind drift and bullet drop, making shots practicable. Since less powder was needed to propel a bullet, the cartridge could be made smaller and lighter. This allowed troops to carry more ammunition for the same weight. Also, it would burn even when wet. Black powder ammunition had to be kept dry and was almost always stored and transported in watertight cartridges. Other European countries swiftly followed and started using their own versions of Poudre B, the first being Germany and Austria, which introduced new weapons in 1888. Subsequently, Poudre B was modified several times with various compounds being added and removed. Krupp began adding diphenylamine as a stabilizer in 1888. Meanwhile, in 1887, Alfred Nobel obtained an English patent for a smokeless gunpowder he called ballistite. In this propellant the fibrous structure of cotton (nitro-cellulose) was destroyed by a nitroglycerine solution instead of a solvent. In England in 1889, a similar powder was patented by Hiram Maxim, and in the United States in 1890 by Hudson Maxim. Ballistite was patented in the United States in 1891. The Germans adopted ballistite for naval use in 1898, calling it WPC/98. The Italians adopted it as filite, in cord instead of flake form—but, realising its drawbacks, changed to a formulation with nitroglycerine that they called solenite. In 1891 the Russians tasked the chemist Mendeleev with finding a suitable propellant. He created nitrocellulose gelatinised by ether-alcohol, which produced more nitrogen and more uniform colloidal structure than the French use of nitro-cottons in Poudre B. He called it pyrocollodion. Britain conducted trials on all the various types of propellant brought to its attention, but was dissatisfied with them all and sought something superior to all existing types. In 1889, Sir Frederick Abel, James Dewar and Dr W Kellner patented (Nos 5614 and 11,664 in the names of Abel and Dewar) a new formulation that was manufactured at the Royal Gunpowder Factory at Waltham Abbey. It entered British service in 1891 as Cordite Mark 1. Its main composition was 58% nitroglycerine, 37% guncotton and 3% mineral jelly. A modified version, Cordite MD, entered service in 1901, with the guncotton percentage increased to 65% and nitroglycerine reduced to 30%. This change reduced the combustion temperature and hence erosion and barrel wear. Cordite's advantages over gunpowder were reduced maximum pressure in the chamber (hence lighter breeches, etc.) but longer high pressure. Cordite could be made in any desired shape or size. The creation of cordite led to a lengthy court battle between Nobel, Maxim, and another inventor over alleged British patent infringement. The Anglo-American Explosives Company began manufacturing its shotgun powder in Oakland, New Jersey, in 1890. DuPont began producing guncotton at Carneys Point Township, New Jersey, in 1891. Charles E. Munroe of the Naval Torpedo Station in Newport, Rhode Island, patented a formulation of guncotton colloided with nitrobenzene, called Indurite, in 1891. 
Several United States firms began producing smokeless powder when Winchester Repeating Arms Company started loading sporting cartridges with Explosives Company powder in 1893. California Powder Works began producing a mixture of nitroglycerine and nitrocellulose with ammonium picrate as Peyton Powder, Leonard Smokeless Powder Company began producing nitroglycerine–nitrocellulose Ruby powders, Laflin & Rand negotiated a license to produce Ballistite, and DuPont started producing smokeless shotgun powder. The United States Army evaluated 25 varieties of smokeless powder and selected Ruby and Peyton Powders as the most suitable for use in the Krag–Jørgensen service rifle. Ruby was preferred, because tin-plating was required to protect brass cartridge cases from picric acid in the Peyton Powder. Rather than paying the required royalties for Ballistite, Laflin & Rand financed Leonard's reorganization as the American Smokeless Powder Company. United States Army Lieutenant Whistler assisted American Smokeless Powder Company factory superintendent Aspinwall in formulating an improved powder named W.A. for their efforts. W.A. smokeless powder was the standard for United States military service rifles from 1897 until 1908. In 1897, United States Navy Lieutenant John Bernadou patented a nitrocellulose powder colloided with ether-alcohol. The Navy licensed or sold patents for this formulation to DuPont and the California Powder Works while retaining manufacturing rights for the Naval Powder Factory, Indian Head, Maryland constructed in 1900. The United States Army adopted the Navy single-base formulation in 1908 and began manufacture at Picatinny Arsenal. By that time Laflin & Rand had taken over the American Powder Company to protect their investment, and Laflin & Rand had been purchased by DuPont in 1902. Upon securing a 99-year lease of the Explosives Company in 1903, DuPont enjoyed use of all significant smokeless powder patents in the United States, and was able to optimize production of smokeless powder. When government anti-trust action forced divestiture in 1912, DuPont retained the nitrocellulose smokeless powder formulations used by the United States military and released the double-base formulations used in sporting ammunition to the reorganized Hercules Powder Company. These newer and more powerful propellants were more stable and thus safer to handle than Poudre B. Characteristics The properties of the propellant are greatly influenced by the size and shape of its pieces. The specific surface area of the propellant influences the speed of burning, and the size and shape of the particles determine the specific surface area. By manipulation of the shape it is possible to influence the burning rate and hence the rate at which pressure builds during combustion. Smokeless powder burns only on the surfaces of the pieces. Larger pieces burn more slowly, and the burn rate is further controlled by flame-deterrent coatings that retard burning slightly. The intent is to regulate the burn rate so that a more or less constant pressure is exerted on the propelled projectile as long as it is in the barrel so as to obtain the highest velocity. The perforations stabilize the burn rate because as the outside burns inward (thus shrinking the burning surface area) the inside is burning outward (thus increasing the burning surface area, but faster, so as to fill up the increasing volume of barrel presented by the departing projectile). 
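The geometric cancellation described just above can be made concrete with a small sketch. The following Python example is illustrative only: the grain dimensions are invented, the model is idealized (uniform linear regression rate on all surfaces, a single central perforation), and it is not drawn from any ballistics reference.

# Burning surface of a single-perforated cylindrical grain as the web burns.
# The outer surface regresses inward while the perforation burns outward at
# the same linear rate, so the two lateral areas sum to a constant for a
# given length; the remaining decline comes only from the shrinking length
# and end faces.
import math

R0, r0, L0 = 1.0, 0.2, 5.0   # assumed initial outer radius, bore radius, length

def burning_area(x):
    """Total burning surface after the flame front has regressed a depth x."""
    R, r, L = R0 - x, r0 + x, L0 - 2 * x   # the ends burn too
    if R <= r or L <= 0:
        return 0.0                          # grain consumed
    outer = 2 * math.pi * R * L             # shrinking outer lateral surface
    inner = 2 * math.pi * r * L             # growing perforation surface
    ends = 2 * math.pi * (R**2 - r**2)      # two annular end faces
    return outer + inner + ends

for x in [0.0, 0.1, 0.2, 0.3]:
    print(f"depth {x:.1f}: area {burning_area(x):.2f}")
# outer + inner = 2*pi*(R0 + r0)*L regardless of x: the lateral terms cancel,
# which is the stabilizing effect the text describes.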
Fast-burning pistol powders are made by extruding shapes with more area such as flakes or by flattening the spherical granules. Drying is usually performed under a vacuum. The solvents are condensed and recycled. The granules are also coated with graphite to prevent static electricity sparks from causing undesired ignitions. Smokeless powder does not leave the thick, heavy fouling of hygroscopic material associated with black powder that causes rusting of the barrel (though some primer compounds can leave hygroscopic salts that have a similar effect; non-corrosive primer compounds were introduced in the 1920s). Faster-burning propellants generate higher temperatures and higher pressures; however, they also increase wear on gun barrels. Nitrocellulose deteriorates with time, yielding acidic byproducts. Those byproducts catalyze the further deterioration, increasing its rate. The released heat, in the case of bulk storage of the powder, or of too-large blocks of solid propellant, can cause self-ignition of the material. Single-base nitrocellulose propellants are hygroscopic and most susceptible to degradation; double-base and triple-base propellants tend to deteriorate more slowly. To neutralize the decomposition products, which could otherwise cause corrosion of the metals of cartridges and gun barrels, calcium carbonate is added to some formulations. To prevent buildup of the deterioration products, stabilizers are added. Diphenylamine is one of the most common stabilizers used. Nitrated analogs of diphenylamine formed in the process of stabilizing decomposing powder are sometimes used as stabilizers themselves. The stabilizers are added in the amount of 0.5–2% of the total amount of the formulation; higher amounts tend to degrade its ballistic properties. The stabilizer is depleted with time, with correspondingly substantial changes in ballistic properties. Propellants in storage should be periodically tested for the amount of stabilizer remaining, as its depletion may lead to auto-ignition of the propellant. Moisture changes the rate of stabilizer consumption over time. Composition Propellants using nitrocellulose (detonation velocity , RE factor 1.10) (typically an ether-alcohol colloid of nitrocellulose) as the sole explosive propellant ingredient are described as single-base powder. Propellant mixtures containing nitrocellulose and nitroglycerin (detonation velocity , RE factor 1.54) as explosive propellant ingredients are known as double-base powder. Alternatively diethylene glycol dinitrate (detonation velocity , RE factor 1.17) can be used as a nitroglycerin replacement when reduced flame temperatures without sacrificing chamber pressure are of importance. Reduction of flame temperature significantly reduces barrel erosion and hence wear. During the 1930s, triple-base propellants containing nitrocellulose, nitroglycerin or diethylene glycol dinitrate, and a substantial quantity of nitroguanidine (detonation velocity , RE factor 0.95) as explosive propellant ingredients were commercialized. The first triple-base propellant, featuring 20–25% nitroguanidine and 30–45% nitroglycerine, was developed at the Dynamit Nobel factory at Avigliana by its director Dr. Modesto Abelli (1859–1911) and patented in 1905. These "cold propellant" mixtures have reduced flash and flame temperature without sacrificing chamber pressure compared to single- and double-base propellants, albeit at the cost of more smoke. 
In practice, triple-base propellants are, due to their higher price, reserved mainly for high-velocity large-caliber ammunition such as that used in (naval) artillery and tank guns, which suffer the most from bore erosion. During WWII they had some use by British and German artillery, and after the war they became the standard propellants in all British large-caliber ammunition designs except small arms. Most Western nations, except the United States, followed a similar path. In the late 20th century new propellant formulations started to appear. These are based on nitroguanidine and high explosives of the RDX type (detonation velocity , RE factor 1.60). Detonation velocities are of limited value in assessing the reaction rates of nitrocellulose propellants formulated to avoid detonation. Although the slower reaction is often described as burning because of similar gaseous end products at elevated temperatures, the decomposition differs from combustion in an oxygen atmosphere. Conversion of nitrocellulose propellants to high-pressure gas proceeds from the exposed surface to the interior of each solid particle in accordance with Piobert's law. Studies of solid single- and double-base propellant reactions suggest the reaction rate is controlled by heat transfer through the temperature gradient across a series of zones or phases as the reaction proceeds from the surface into the solid. The deepest portion of the solid experiencing heat transfer melts and begins phase transition from solid to gas in a foam zone. The gaseous propellant decomposes into simpler molecules in a surrounding fizz zone. Energy is released in a luminous outer flame zone where the simpler gas molecules react to form conventional combustion products like steam and carbon monoxide. The foam zone acts as an insulator, slowing the rate of heat transfer from the flame zone into the unreacted solid. Reaction rates vary with pressure: the foam allows less effective heat transfer at low pressure, with greater heat transfer as higher pressures compress the gas volume of the foam. Propellants designed for a minimum heat transfer pressure may fail to sustain the flame zone at lower pressures. The energetic components used in smokeless propellants include nitrocellulose (the most common), nitroglycerin, nitroguanidine, DINA (bis-nitroxyethylnitramine; diethanolamine dinitrate, DEADN; DHE), Fivonite (2,2,5,5-tetramethylol-cyclopentanone tetranitrate, CyP), DGN (diethylene glycol dinitrate), and acetyl cellulose. Deterrents (or moderants) are used to slow the burning rate. Deterrents include centralites (symmetrical diphenyl urea—primarily diethyl or dimethyl), dibutyl phthalate, dinitrotoluene (toxic and carcinogenic), akardite (asymmetrical diphenyl urea), ortho-tolyl urethane, and polyester adipate. Camphor was formerly used but is now obsolete. Stabilizers prevent or slow down self-decomposition. These include diphenylamine, petroleum jelly, calcium carbonate, magnesium oxide, sodium bicarbonate, and beta-naphthol methyl ether. Obsolete stabilizers include amyl alcohol and aniline. Decoppering additives hinder the buildup of copper residues from the gun barrel rifling. These include tin metal and compounds (e.g., tin dioxide), and bismuth metal and compounds (e.g., bismuth trioxide, bismuth subcarbonate, bismuth nitrate, bismuth antimonide); the bismuth compounds are favored because copper dissolves in molten bismuth, forming a brittle and easily removable alloy. Lead foil and lead compounds have been phased out due to toxicity. 
Wear reduction materials including wax, talc and titanium dioxide are added to lower the wear of the gun barrel liners. Large guns use polyurethane jackets over the powder bags. Other additives include ethyl acetate (a solvent for manufacture of spherical powder), rosin (a surfactant to hold the grain shape of spherical powder) and graphite (a lubricant to cover the grains and prevent them from sticking together, and to dissipate static electricity). Flash reduction Flash reducers dim muzzle flash, the light emitted in the vicinity of the muzzle by the hot propellant gases and the chemical reactions that follow as the gases mix with the surrounding air. Before projectiles exit, a slight pre-flash may occur from gases leaking past the projectiles. Following muzzle exit, the heat of gases is usually sufficient to emit visible radiation: the primary flash. The gases expand but as they pass through the Mach disc, they are re-compressed to produce an intermediate flash. Hot, combustible gases (e.g. hydrogen and carbon monoxide) may follow when they mix with oxygen in the surrounding air to produce the secondary flash, the brightest. The secondary flash does not usually occur with small arms. Nitrocellulose contains insufficient oxygen to completely oxidize its carbon and hydrogen. The oxygen deficit is increased by addition of graphite and organic stabilizers. Products of combustion within the gun barrel include flammable gases like hydrogen and carbon monoxide. At high temperature, these flammable gases will ignite when turbulently mixed with atmospheric oxygen beyond the muzzle of the gun. During night engagements, the flash produced by ignition can reveal the location of the gun to enemy forces and cause temporary night-blindness among the gun crew by photo-bleaching visual purple. Flash suppressors are commonly used on small arms to reduce the flash signature, but this approach is not practical for artillery. Artillery muzzle flash up to from the muzzle has been observed, and can be reflected off clouds and be visible for distances up to . For artillery, the most effective method is a propellant that produces a large proportion of inert nitrogen at relatively low temperatures that dilutes the combustible gases. Triple-base propellants are used for this because of the nitrogen in the nitroguanidine. Flash reducers include potassium chloride, potassium nitrate, potassium sulfate, and potassium bitartrate (potassium hydrogen tartrate: a byproduct of wine production formerly used by French artillery). Before the use of triple-base propellants, the usual method of flash reduction was to add inorganic salts like potassium chloride so their specific heat capacity might reduce the temperature of combustion gases and their finely divided particulate smoke might block visible wavelengths of radiant energy of combustion. All flash reducers have a disadvantage: the production of smoke. Manufacturing Smokeless powder may be corned into small spherical balls or extruded into cylinders or strips with many cross-sectional shapes (strips with various rectangular proportions, single or multi-hole cylinders, slotted cylinders) using solvents such as ether. These extrusions can be cut into short pieces ("flakes") or long pieces ("cords" many inches long). Cannon powder has the largest pieces. The United States Navy manufactured single-base tubular powder for naval artillery at Indian Head, Maryland, beginning in 1900. 
Similar procedures were used for United States Army production at Picatinny Arsenal beginning in 1907 and for manufacture of smaller grained Improved Military Rifle (IMR) powders after 1914. Short-fiber cotton linter was boiled in a solution of sodium hydroxide to remove vegetable waxes, and then dried before conversion to nitrocellulose by mixing with concentrated nitric and sulfuric acids. Nitrocellulose still resembles fibrous cotton at this point in the manufacturing process, and was typically identified as pyrocellulose because it would spontaneously ignite in air until unreacted acid was removed. The term guncotton was also used; although some references identify guncotton as a more extensively nitrated and refined product used in torpedo and mine warheads prior to use of TNT. Unreacted acid was removed from pyrocellulose pulp by a multistage draining and water washing process similar to that used in paper mills during production of chemical woodpulp. Pressurized alcohol removed remaining water from drained pyrocellulose prior to mixing with ether and diphenylamine. The mixture was then fed through a press extruding a long tubular cord form to be cut into grains of the desired length. Alcohol and ether were then evaporated from "green" powder grains to a remaining solvent concentration between 3 percent for rifle powders and 7 percent for large artillery powder grains. Burning rate is inversely proportional to solvent concentration. Grains were coated with electrically conductive graphite to minimize generation of static electricity during subsequent blending. "Lots" containing more than ten tonnes of powder grains were mixed through a tower arrangement of blending hoppers to minimize ballistic differences. Each blended lot was then subjected to testing to determine the correct loading charge for the desired performance. Military quantities of old smokeless powder were sometimes reworked into new lots of propellants. Through the 1920s Fred Olsen worked at Picatinny Arsenal experimenting with ways to salvage tons of single-base cannon powder manufactured for World War I. Olsen was employed by Western Cartridge Company in 1929 and developed a process for manufacturing spherical smokeless powder by 1933. Reworked powder or washed pyrocellulose can be dissolved in ethyl acetate containing small quantities of desired stabilizers and other additives. The resultant syrup, combined with water and surfactants, can be heated and agitated in a pressurized container until the syrup forms an emulsion of small spherical globules of the desired size. Ethyl acetate distills off as pressure is slowly reduced to leave small spheres of nitrocellulose and additives. The spheres can be subsequently modified by adding nitroglycerine to increase energy, flattening between rollers to a uniform minimum dimension, coating with phthalate deterrents to slow ignition, and/or glazing with graphite to improve flow characteristics during blending. Modern smokeless powder is produced in the United States by St. Marks Powder, Inc. owned by General Dynamics.
Technology
Ammunition
null
22576085
https://en.wikipedia.org/wiki/Arrow%20pushing
Arrow pushing
Arrow pushing or electron pushing is a technique used to describe the progression of organic chemistry reaction mechanisms. It was first developed by Sir Robert Robinson. In using arrow pushing, "curved arrows" or "curly arrows" are drawn on the structural formulae of reactants in a chemical equation to show the reaction mechanism. The arrows illustrate the movement of electrons as bonds between atoms are broken and formed. Arrow pushing never directly shows the movement of atoms; it is used to show the movement of electron density, which indirectly shows the movement of atoms themselves. Arrow pushing is also used to describe how positive and negative charges are distributed around organic molecules through resonance. It is important to remember, however, that arrow pushing is a formalism and electrons (or rather, electron density) do not move around so neatly and discretely in reality. Arrow pushing has been extended to inorganic chemistry, especially to the chemistry of s- and p-block elements. It has been shown to work well for hypervalent compounds. Notation The representation of reaction mechanisms using curved arrows to indicate electron flow was developed by Sir Robert Robinson in 1922. Organic chemists use two types of arrows within molecular structures to describe electron movements. Single electrons' trajectories are designated with single-barbed arrows, whereas double-barbed arrows show movement of electron pairs. The arrow's tail is drawn at either a lone pair of electrons on an atom or a bond between atoms, an electron source or area where there is relatively high electron density. Its head points towards electron sinks, or areas of relatively low electron density. When a bond is broken, electrons leave where the bond was; this is represented by a curved arrow pointing away from the bond and ending with the arrow pointing towards the next unoccupied molecular orbital. The electrons can be transferred to a specific atom or can be transferred to a single (sigma) bond, thus making it a double (pi) bond, but the arrow is always pointing towards a specific atom, because electrons always move to a new atom whenever they are "pushed". Organic chemists represent the formation of a bond by a curved arrow pointing between two species. For clarity, when pushing arrows, it is best to draw the arrows starting from a lone pair of electrons or a σ or π bond and ending in a position that can accept a pair of electrons, allowing the reader to know exactly which electrons are moving and where they are ending. Bonds are broken in places where a corresponding antibonding orbital is filled. Some authorities allow the simplification that an arrow can originate at a formal negative charge that corresponds to a lone pair. However, not all formal negative charges correspond to the presence of a lone pair (e.g., the B in F4B−), and care needs to be taken with this usage. Breaking of bonds A covalent bond joining atoms in an organic molecule consists of a group of two electrons. Such a group is referred to as an electron pair. Reactions in organic chemistry proceed through the sequential breaking and formation of such bonds. Organic chemists recognize two processes for the breaking of a chemical bond. These processes are known as homolytic cleavage and heterolytic cleavage. Homolytic bond cleavage Homolytic bond cleavage is a process where the electron pair comprising a bond is split, causing the bond to break. This is denoted by two single-barbed curved arrows pointing away from the bond. 
The consequence of this process is the retention of a single unpaired electron, denoted by a dot, on each of the atoms that were formerly joined by the bond. The movement of a single electron is denoted by a single-barbed curved arrow commonly referred to as a fish hook. The resulting single-electron species are known as free radicals. Heat or light is required to provide enough energy for this process to occur. For example, ultraviolet light causes the chlorine-chlorine bond to break homolytically. The electron pair is split, denoted by two fish-hook arrows starting at the bond, one pointing to each chlorine atom. After the reaction, each chlorine atom is left with a single unpaired electron. This is the initiation stage of free-radical halogenation.
Heterolytic bond cleavage
Heterolytic bond cleavage is a process in which the electron pair that comprised a bond moves to one of the two atoms that were formerly joined by the bond. The bond breaks, forming a negatively charged species (an anion) and a positively charged species (a cation). The anion is the species that retains the electrons from the bond, while the cation is stripped of them. The anion usually forms on the more electronegative atom, because the more electronegative atom attracts the bonding electrons towards itself more strongly, leading to its negative charge.
Acid-base reactions
A Lewis acid-base reaction occurs when a molecule with a lone electron pair (a base) donates its electrons to an electron-pair acceptor (an acid). This is shown with a curved arrow pointing from the nonbonding electron pair to the electron acceptor. In a reaction involving Brønsted-Lowry acids and bases, the arrows are used in the same manner, and they help to indicate the transfer of the proton. In a Brønsted-Lowry acid-base reaction, the arrow begins at the base (the proton acceptor) and points towards the acid (the proton donor).
SN1 reactions
An SN1 reaction occurs when a molecule separates into a positively charged component and a negatively charged component. This generally occurs in highly polar solvents through a process called solvolysis. The positively charged component then reacts with a nucleophile, forming a new compound. SN1 reactions are reactions whose rate depends only on the concentration of the haloalkane. In the first stage of this reaction (solvolysis), the C-LG bond breaks and both electrons from that bond join LG (the leaving group) to form LG− and R3C+ ions. This is represented by a curved arrow pointing away from the C-LG bond and towards LG. The nucleophile Nu−, being attracted to the R3C+, then donates a pair of electrons, forming a new C-Nu bond. Because an SN1 reaction proceeds with the Substitution of a leaving group by a Nucleophile, the SN designation is used. Because the initial solvolysis step involves a single molecule dissociating from its leaving group, the initial stage of this process is considered a unimolecular reaction; the involvement of only one species in this rate-determining phase gives the "1" in the SN1 designation. An SN1 reaction has two steps.
SN2 reactions
An SN2 reaction occurs when a nucleophile displaces a leaving group residing on a molecule from the backside of the leaving group. This displacement or substitution results in the formation of a substitution product with inversion of stereochemical configuration. The nucleophile forms a bond with its lone pair as the electron source.
The electron sink which ultimately accepts the electron density is the nucleofuge (leaving group), with bond forming and bond breaking occurring simultaneously at the transition state (marked with a double dagger). The rates of SN2 reactions depend on the concentrations of both the haloalkane and the nucleophile. Because an SN2 reaction proceeds with the substitution of a leaving group by a nucleophile, the SN designation is used. Because this mechanism proceeds with the interaction of two species at the transition state, it is a bimolecular process, giving the "2" in the SN2 designation. An SN2 reaction is a concerted process: the bonds break and form concurrently, so the electron movements shown by the arrows happen simultaneously. An SN2 reaction has one step.
E1 eliminations
An E1 elimination occurs when a proton adjacent to a positive charge leaves and generates a double bond. Because initial formation of a cation is necessary for E1 reactions to occur, E1 reactions are often observed as side reactions to SN1 mechanisms. E1 eliminations proceed with the Elimination of a leaving group, leading to the E designation. Because this mechanism proceeds with the initial dissociation of a single starting material to form a carbocation, the process is considered unimolecular; the involvement of only one species in the initial phase gives the "1" in the E1 designation.
E2 eliminations
An E2 elimination occurs when a proton adjacent to a leaving group is abstracted by a base, with simultaneous elimination of the leaving group and generation of a double bond. Similar to the relationship between E1 eliminations and SN1 mechanisms, E2 eliminations often occur in competition with SN2 reactions, most often when the base is also a nucleophile. To minimize this competition, non-nucleophilic bases are commonly used to effect E2 eliminations. E2 eliminations proceed through initial abstraction of a proton by a base or nucleophile, leading to Elimination of a leaving group and justifying the E designation. Because this mechanism proceeds through the interaction of two species (substrate and base/nucleophile), E2 reactions are recognized as bimolecular; the involvement of two species in the initial phase gives the "2" in the E2 designation.
Addition reactions
Addition reactions occur when nucleophiles react with carbonyls. When a nucleophile adds to a simple aldehyde or ketone, the result is a 1,2-addition. When a nucleophile adds to a conjugated carbonyl system, the result is a 1,4-addition. The designations 1,2 and 1,4 are derived from numbering the atoms of the starting compound, where the oxygen is labeled "1" and each atom adjacent to the oxygen is sequentially numbered out to the site of nucleophilic addition. A 1,2-addition occurs with nucleophilic addition to position 2, while a 1,4-addition occurs with nucleophilic addition to position 4.
Addition-elimination reactions
Addition-elimination reactions are addition reactions immediately followed by elimination reactions. In general, these reactions take place when esters (or related functional groups) react with nucleophiles. In fact, the only requirement for an addition-elimination reaction to proceed is that the group being eliminated is a better leaving group than the incoming nucleophile.
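To make the electron bookkeeping behind these arrow conventions concrete, the short Python sketch below (an illustration added here, not part of the article) models the two cleavage modes as simple electron accounting: homolysis leaves one electron of the bonding pair on each atom (two neutral radicals), while heterolysis gives both electrons to the more electronegative atom (an anion/cation pair). The electronegativity table and the Fragment class are illustrative assumptions, not a real chemistry library.

from dataclasses import dataclass

# Pauling electronegativities for a few common elements (illustrative subset).
ELECTRONEGATIVITY = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "Cl": 3.16, "Br": 2.96}

@dataclass
class Fragment:
    element: str    # atomic symbol
    electrons: int  # electrons this atom keeps from the broken bonding pair
    charge: int     # resulting formal charge relative to the bonded state

def homolysis(a, b):
    # Split the bonding pair evenly: two neutral radicals
    # (one fish-hook arrow from the bond to each atom).
    return Fragment(a, 1, 0), Fragment(b, 1, 0)

def heterolysis(a, b):
    # Both electrons go to the more electronegative atom, the electron sink
    # (one double-barbed arrow from the bond onto that atom).
    anion, cation = (a, b) if ELECTRONEGATIVITY[a] >= ELECTRONEGATIVITY[b] else (b, a)
    return Fragment(anion, 2, -1), Fragment(cation, 0, +1)

print(homolysis("Cl", "Cl"))   # two Cl radicals: the initiation step of radical halogenation
print(heterolysis("C", "Br"))  # Br keeps the pair (anion); C becomes a carbocation

Running it reproduces the two textbook outcomes described above: homolysis("Cl", "Cl") yields two neutral chlorine radicals, and heterolysis("C", "Br") puts the electron pair, and hence the negative charge, on the bromine.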
Physical sciences
Basics_3
Chemistry
1318086
https://en.wikipedia.org/wiki/Revolutions%20per%20minute
Revolutions per minute
Revolutions per minute (abbreviated rpm, RPM, rev/min, r/min, or r⋅min−1) is a unit of rotational speed (or rotational frequency) for rotating machines. One revolution per minute is equivalent to 1/60 hertz (about 0.0167 Hz).
Standards
ISO 80000-3:2019 defines a physical quantity called rotation (or number of revolutions), dimensionless, whose instantaneous rate of change is called rotational frequency (or rate of rotation), with units of reciprocal seconds (s−1). A related but distinct quantity for describing rotation is angular frequency (or angular speed, the magnitude of angular velocity), for which the SI unit is the radian per second (rad/s). Although they have the same dimensions (reciprocal time) and base unit (s−1), the hertz (Hz) and radians per second (rad/s) are special names used to express two different but proportional ISQ quantities: frequency and angular frequency, respectively. The conversions between a frequency f and an angular frequency ω are ω = 2πf and f = ω/(2π). Thus a disc rotating at 60 rpm is said to have an angular speed of 2π rad/s and a rotation frequency of 1 Hz. The International System of Units (SI) does not recognize rpm as a unit. It defines units of angular frequency and angular velocity as rad s−1, and units of frequency as Hz, equal to s−1.
Examples
For a wheel, a pump, or a crank shaft, the number of times that it completes one full cycle in one minute is given in revolutions per minute. A revolution is one complete period of motion, whether this be circular, reciprocating or some other periodic motion. On many kinds of disc recording media, the rotational speed of the medium under the read head is a standard given in rpm. Phonograph (gramophone) records, for example, typically rotate steadily at 16⅔, 33⅓, 45 or 78 rpm (0.28, 0.55, 0.75, or 1.3, respectively, in Hz). An air turbine can rotate at up to 1,500,000 rpm (25 kHz). Modern air turbine dental drills can rotate at over 800,000 rpm (13.3 kHz). The second hand of a conventional analog clock rotates at 1 rpm. Audio CD players read their discs at a precise, constant rate (4.3218 Mbit/s of raw physical data for 1.4112 Mbit/s (176.4 KB/s) of usable audio data) and thus must vary the disc's rotational speed from 8 Hz (480 rpm) when reading at the innermost edge to 3.5 Hz (210 rpm) at the outer edge. DVD players also usually read discs at a constant linear rate. The disc's rotational speed varies from 25.5 Hz (1530 rpm) when reading at the innermost edge, to 10.5 Hz (630 rpm) at the outer edge. A washing machine's drum may rotate at 500 rpm to about 2,800 rpm (8 Hz – 46 Hz) during the spin cycles. A baseball thrown by a Major League Baseball pitcher can rotate at over 2,500 rpm (41.7 Hz); faster rotation yields more movement on breaking balls. A power-generation turbine (with a two-pole alternator) rotates at 3000 rpm (50 Hz), 3600 rpm (60 Hz), or over 4000 rpm (66.7 Hz). Modern automobile engines are typically operated around 2,000 rpm – 3,000 rpm (33 Hz – 50 Hz) when cruising, with a minimum (idle) speed around 750 rpm – 900 rpm (12.5 Hz – 15 Hz), and an upper limit anywhere from 4500 rpm up to 10,000 rpm (75 Hz – 166 Hz) for a road car, very rarely reaching around 12,000 rpm for certain cars (such as the GMA T.50), or around 20,000 rpm for racing engines such as those in Formula 1 cars (during the 2006 season, with the 2.4 L N/A V8 engine configuration; limited to 15,000 rpm with the 1.6 L V6 turbo-hybrid engine configuration). The exhaust note of V8, V10, and V12 F1 cars has a much higher pitch than an I4 engine, because each of the cylinders of a four-stroke engine fires once for every two revolutions of the crankshaft.
Thus an eight-cylinder engine turning 300 times per second will have an exhaust note of 1,200 Hz. A piston aircraft engine typically rotates at a rate between 2,500 rpm and 10,000 rpm (42 Hz – 166 Hz). Computer hard drives typically rotate at 7,500 rpm – 10,000 rpm (125 Hz – 166 Hz), the most common speeds for the ATA or SATA-based drives in consumer models. High-performance drives (used in fileservers and enthusiast-gaming PCs) rotate at 10,000 rpm – 15,000 rpm (160 Hz – 250 Hz), usually with higher-level SATA, SCSI or Fibre Channel interfaces and smaller platters to allow these higher speeds; the reduction in storage capacity and ultimate outer-edge speed pays off in much quicker access times and average transfer speeds thanks to the high spin rate. Until recently, lower-end and power-efficient laptop drives could be found with 4,200 rpm or even 3,600 rpm spindle speeds (70 Hz or 60 Hz), but these have fallen out of favour due to their lower performance, improvements in the energy efficiency of faster models, and the takeup of solid-state drives for use in slimline and ultraportable laptops. Similar to CD and DVD media, the amount of data that can be stored or read for each turn of the disc is greater at the outer edge than near the spindle; however, hard drives keep a constant rotational speed, so the effective data rate is faster at the edge (conventionally, the "start" of the disc, opposite to a CD or DVD). Floppy disc drives typically ran at a constant 300 rpm or occasionally 360 rpm (a relatively slow 5 Hz or 6 Hz) with a constant per-revolution data density, which was simple and inexpensive to implement, though inefficient. Some designs, such as those used with older Apple computers (Lisa, early Macintosh, later II's), were more complex and used variable rotational speeds and per-track storage density (at a constant read/record rate) to store more data per disc; for example, between 394 rpm (with 12 sectors per track) and 590 rpm (8 sectors) with the Mac's 800 kB double-density drive at a constant 39.4 kB/s (max) – versus 300 rpm, 720 kB and 23 kB/s (max) for double-density drives in other machines. A Zippe-type centrifuge for enriching uranium spins at 90,000 rpm (1,500 Hz) or faster. Gas turbine engines rotate at tens of thousands of rpm. JetCat model aircraft turbines are capable of over 100,000 rpm (1,700 Hz), with the fastest reaching 165,000 rpm (2,750 Hz). A flywheel energy storage system works in the 60,000 rpm – 500,000 rpm (1 kHz – 8.3 kHz) range using a passively magnetically levitated flywheel in a vacuum. The flywheel material is chosen not for maximum density but for the ability to pulverise most safely on failure, at surface speeds of about 7 times the speed of sound. A typical 80 mm, 30 CFM computer fan will spin at 2,600 rpm – 3,000 rpm (43 Hz – 50 Hz) on 12 V DC power. A millisecond pulsar can rotate at nearly 50,000 rpm (833 Hz). A turbocharger can reach 1,000,000 rpm (16.6 kHz), while 60,000 rpm – 180,000 rpm (1 kHz – 3 kHz) is common. A supercharger can spin at speeds between 50,000 rpm and 100,000 rpm (833 Hz – 1666 Hz). In molecular microbiology, bacterial flagella act as molecular engines; their rotation rates have been measured at 10,200 rpm (170 Hz) for Salmonella typhimurium, 16,200 rpm (270 Hz) for Escherichia coli, and up to 102,000 rpm (1,700 Hz) for the polar flagellum of Vibrio alginolyticus, allowing the latter organism to move in simulated natural conditions at a maximum speed of 540 mm/h.
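The conversions used throughout these examples are simple arithmetic: frequency in hertz is rpm/60, angular speed in rad/s is 2π times the frequency, and a four-stroke engine fires each cylinder once per two revolutions. A minimal Python sketch, added here as an illustration rather than taken from the article:

import math

def rpm_to_hz(rpm):
    # One minute is 60 seconds, so rotational frequency in Hz is rpm / 60.
    return rpm / 60.0

def rpm_to_rad_per_s(rpm):
    # One revolution is 2*pi radians, so angular speed is 2*pi times the frequency.
    return 2.0 * math.pi * rpm_to_hz(rpm)

def four_stroke_exhaust_hz(rpm, cylinders):
    # Each cylinder of a four-stroke engine fires once every two revolutions.
    return rpm_to_hz(rpm) * cylinders / 2.0

print(rpm_to_hz(60))                     # 1.0 Hz, the rotating-disc example above
print(rpm_to_rad_per_s(60))              # ~6.283 rad/s, i.e. 2*pi
print(four_stroke_exhaust_hz(18000, 8))  # 1200.0 Hz for the eight-cylinder example (300 rev/s)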
Physical sciences
Angular velocity
Basics and measurement
1318099
https://en.wikipedia.org/wiki/Kronosaurus
Kronosaurus
Kronosaurus is an extinct genus of large short-necked pliosaur that lived during the Aptian to Albian stages of the Early Cretaceous in what is now Australia. The first known specimen was received in 1899 and consists of a partially preserved mandibular symphysis, which Charles De Vis initially thought came from an ichthyosaur. It was not until 1924 that Albert Heber Longman formally described this specimen as the holotype of an imposing pliosaurid, to which he gave the scientific name K. queenslandicus, still the only recognized species. The genus name, meaning "lizard of Kronos", refers to its large size and possible ferocity, reminiscent of the Titan of Greek mythology, while the species name alludes to Queensland, the Australian state of its discovery. In the early 1930s, the Harvard Museum of Comparative Zoology sent an expedition to Australia that recovered two specimens historically attributed to the taxon, including a well-known skeleton that is now heavily restored in plaster. Several attributed fossils were subsequently discovered, including two large, more or less partial skeletons. As the holotype specimen presents no diagnostic features that concretely distinguish Kronosaurus from other pliosaurids, these same two skeletons have been proposed as potential neotypes for future redescriptions. Two additional species were proposed, but these are now seen as unlikely or as belonging to another genus. Kronosaurus is one of the largest known pliosaurs identified to date. Initial estimates set its maximum size at around 12.8 m (42 ft) long, based on the Harvard skeleton. However, because that skeleton was reconstructed with an exaggerated number of vertebrae, estimates published from the early 2000s reduce the size of the animal from to more than long. Like all plesiosaurs, Kronosaurus has four paddle-like limbs, a short tail and, like most pliosaurids, a long head and a short neck. The largest identified skulls of Kronosaurus dwarf those of the largest known theropod dinosaurs. The front of the skull is elongated into a rostrum (snout). The mandibular symphysis, where the front ends of each side of the mandible (lower jaw) fuse, is elongated in Kronosaurus, and contains up to six pairs of teeth. The large cone-shaped teeth of Kronosaurus would have been used for a diet consisting of large prey. The front teeth are larger than the back teeth. The limbs of Kronosaurus were modified into flippers, with the back pair larger than the front. The flippers would have given a wingspan of more than for the largest representatives. Phylogenetic classifications published since 2013 recover Kronosaurus within the subfamily Brachaucheninae, a lineage which includes numerous pliosaurids that lived during different stages of the Cretaceous. Based on its stratigraphic distribution in the fossil record, Kronosaurus inhabited the Eromanga Sea, an ancient inland sea that covered a large part of Australia during the Early Cretaceous. This inland sea reached cold temperatures, close to freezing. Kronosaurus would likely have been an apex predator in this sea, with fossil evidence showing that it preyed on sea turtles and other plesiosaurs. Estimates of its bite force suggest that the animal would have reached between , surpassing the placoderm Dunkleosteus and rivaling Tyrannosaurus, but falling far short of the megalodon.
The skull of a juvenile specimen shows that it was attacked by an adult, indicating intraspecific aggression or even potential cannibalism within the genus. Kronosaurus would have faced interspecific competition with other large predators in this sea, with one attributed specimen showing bite marks from a Cretoxyrhina-like shark.
Research history
Initial finds and research
In 1899, a partial fossil of a marine reptile was sent on behalf of one Andrew Crombie to the Queensland Museum in Brisbane, Australia, where it was received by the zoologist Charles De Vis, the museum's director at the time. No information regarding the fossil's locality of origin is known, but it was probably discovered near Hughenden, Queensland, the town Crombie came from. Queensland Museum records show that De Vis sent a letter to Crombie acknowledging receipt of the material. The fossil in question, cataloged as QM F1609, consists of a partial mandibular symphysis bearing six conical teeth. Based on his observations, De Vis considered the fossil to come from a representative of the Enaliosauria, a now obsolete taxon which included plesiosaurs and ichthyosaurs. De Vis initially thought the specimen came from an ichthyosaur, specifically Ichthyosaurus australis, a species today placed in the genus Platypterygius. However, the specimen's distinctive dentition quickly made him change his mind about this assignment. The fossil was officially described by De Vis's successor, Albert Heber Longman, in a scientific article published in 1924 in the journal of the Queensland Museum. Longman deduced that the fossil came from a large pliosaur, to which he gave the genus and species name Kronosaurus queenslandicus. The generic name comes from Kronos, a Titan of Greek mythology, and from Ancient Greek σαῦρος (saûros, "lizard"), to give literally "lizard of Kronos". Longman apparently chose this name in reference to the imposing size and possible ferocity of the animal, recalling the story of Kronos, who is known in Greek mythology for having devoured his own children, notably Zeus. The specific epithet queenslandicus refers to Queensland, the Australian state where the holotype specimen was most likely discovered. In August 1929, fifteen more or less partial fossils were discovered nearly 3.2 km south of Hughenden. These fossils, all catalogued as QM F2137, were identified as coming from the Toolebuc Formation, dating from the Albian stage of the Early Cretaceous; the holotype was very probably also discovered in this same locality. Most of the recovered material was very incomplete; the only two elements that could be concretely described were proximal parts of propodials (upper limb bones), which Longman analyzed in more detail the following year. In 1932, in an effort to make the animal's fossils "attractive", Longman published one of the oldest known reconstructions of Kronosaurus. The illustration was drawn in 1931 by Wilfrid Morden, who drew in particular on the anatomical features of Peloneustes to fill in the still-unknown parts of the animal. In April and May 1935, J. Edgar Young, collecting for the Queensland Museum, recovered several fossils from the Toolebuc Formation, more precisely from Telemon station, about 30 km west of Hughenden.
Among the fossils Young helped exhume were additional remains attributed to Kronosaurus, including the first somewhat more complete cranial parts identified within the genus. In his article published in October 1935, Longman suggested, given the large number of fossils, that they came from at least two or three individuals. Because the fossils were not fully prepared at the time, his description was only preliminary. The most notable specimen, cataloged as QM F2446, consists of a partial middle portion of a skull which preserves an occipital condyle, the back of the neurocranium, the external nostrils and the orbits.
Harvard expedition
In 1931, the Museum of Comparative Zoology sent an expedition to Australia with the dual aim of obtaining specimens of both living and extinct animals, in particular marsupial mammals. This decision came from the fact that the museum had relatively few Australian animals and therefore wanted to collect more. Thus began the Harvard Australian Expedition, undertaken by a team of six men. The team consisted of coleopterologist P. Jackson Darlington Jr., zoologist Glover Morrill Allen and his student Ralph Nicholson Ellis, chief physician Ira M. Dixon, paleontologist William E. Schevill, and their leader, entomologist William Morton Wheeler. The following year, in 1932, Schevill took over as expedition leader, making long journeys and recruiting local help when he could. The Queensland Museum was also invited to participate in this expedition, but its participation was never approved, due to lack of funds and/or interest from the state government. However, Longman, who had described the first known fossils of Kronosaurus, nevertheless assisted the expedition, storing specimens as they were sent to him, securing collecting permits, and maintaining correspondence with Schevill. Schevill then ventured into the Rolling Downs geological group, north of the town of Richmond, where he collected two large pliosaur specimens. Both specimens come from the Doncaster Member of the Wallumbilla Formation, dating back approximately 112 million years. The first specimen he exhumed, cataloged as MCZ 1284 and discovered on a property called Grampian Valley, consisted of a well-preserved piece of the anterior rostrum closely connected to the entire mandibular symphysis, in addition to several other fragmentary pieces. The story of the discovery, exhumation and exhibition of the second specimen, cataloged as MCZ 1285, is recounted in much more detail in historical sources. This specimen was discovered long before the Harvard Expedition was even launched, by a rancher named Ralph William Haslam Thomas, in a locality known as Army Downs. Thomas had been aware for many years of the presence of "something strange coming out of the ground" in a small horse enclosure. These "strange things" were actually a row of vertebrae contained in nodules. Thomas informed the members of the Harvard expedition, notably Schevill, of his discovery. Schevill then contacted a British migrant trained in the use of explosives, nicknamed "The Maniac" by local residents, in order to extract the specimen from the rock constituting its geological matrix. When the specimen was unearthed, its fossils were sent to the United States in 86 crates weighing a total of . According to the export permit, the specimen was transported aboard the SS Canadian Constructor around 1 December 1932.
Once they arrived at Harvard, the fossils, which represent approximately 60% of the skeleton, took several years to extract from the limestone. Lack of money, manpower and space within the museum caused the long delays, and it was not until 1939 that even the skull was mounted and exhibited. However, a first scientific description of the skull was made by Theodore E. White in 1935. One year earlier, in 1934, Schevill had asked Longman to send a cast of the holotype mandibular symphysis for comparison with the new specimen. It was Longman's assistant, Tom Marshall, who undertook to fulfil Schevill's request. The researchers then realized that the characters of the holotype (QM F1609) were identical to those of the Harvard specimen (MCZ 1285). Longman, in his letters to Schevill, suggested that he would have enjoyed seeing the specimen during its preparation in the late 1930s, but he never left Australian territory. The rest of the skeleton was kept in the basement of the museum for more than fifteen years. This interim period ended when the fossils attracted the attention of Godfrey Lowell Cabot, a Boston industrialist, philanthropist and founder of the Cabot Corporation. Cabot's family had a history of sighting large sea serpents in the coastal waters around his home town. When Cabot questioned the museum's director, Alfred Sherwood Romer, about the existence and reports of sea serpents, it occurred to Romer to tell Cabot about the skeleton kept in the museum's basement. Cabot asked about the cost of a restoration, and Romer said "about $10,000". Romer may not have been serious, but Cabot clearly was, because a check for that sum arrived shortly after. Given that Romer's primary interest was the study of non-mammalian synapsids, it is possible that he had little regard for the skeleton as a subject of scientific study. After two years of careful preparation with chisel and acid by Arnold Lewis and James A. Jensen under Romer's direction, the work ultimately cost slightly more than Cabot's original check had covered. The Harvard skeleton was exhibited for the first time on 10 June 1958, and a detailed scientific description by Romer and Lewis was published the following year in the museum's journal. When the completion of the specimen was announced in the Australian press, Longman, the original describer of the taxon, was not mentioned. In response, professor and geologist Walter Heywood Bryan sent a telegraph informing journalists that it would be regrettable if such an important announcement made no mention of Longman and his interpretation of the initially fragmentary fossil material. At the age of 93, Thomas, the original discoverer of the specimen, was able to see the mounted skeleton of what he considered "his dinosaur", and to meet the leader of the museum's former expedition again, each having believed the other long dead. New knowledge in the field of paleontology subsequently called into question the restoration of the skeleton as directed by Romer. Because many bones were incomplete, Romer had ordered Lewis and Jensen to add plaster wherever he deemed it necessary. This decision has made it difficult for paleontologists to access the real fossils, to the point that some of them use the nickname "Plasterosaurus" to refer to the specimen. In addition, it seems that the skeleton was reconstructed with the wrong proportions.
According to Australian paleontologist Colin McHenry, the specimen has eight extra vertebrae added to the spine, and the skull should not have a bulbous sagittal crest on top. In his thesis revising the genus Kronosaurus, published in 2009, McHenry called the Harvard skeleton "a rather disappointing restoration of what must have been an excellent fossil specimen". For this reason, many researchers have expressed a desire to analyze the real fossils using CT scans.
Later discoveries and genus validity
Given that the holotype specimen of K. queenslandicus (QM F1609) is fragmentary and does not present any unique characteristics that would qualify the genus as distinct from other pliosaurs, the validity of this taxon has been questioned. As early as 1962, Samuel Paul Welles considered Kronosaurus a nomen vanum and recommended the designation of a neotype from the Harvard University material to preserve the genus's validity. From 1979 onwards, a good number of large pliosaur fossils were discovered in various localities in Australia, mainly in the geological strata of the Toolebuc Formation, the formation from which the first fossils attributed to the genus were discovered. Among other formations, only one additional attributed specimen was discovered in the Doncaster Member of the Wallumbilla Formation, while three specimens, including one attributed to the type species, were discovered in the Allaru Formation. Two specimens with no specific affiliation were identified in the Bulldog Shale. In his 2009 thesis, McHenry describes in detail many fossils attributed to Kronosaurus, including most of the new specimens that he judged could belong to this genus. Of the numerous fossil specimens he analyzed, McHenry proposed that two partial skeletons, cataloged as QM F10113 and QM F18827, both from the Toolebuc Formation, could be candidate neotypes, because they present features that seem to fit with the holotype. However, no formal ICZN petition to designate a neotype was submitted. In 2022, Leslie Francis Noè and Marcela Gómez-Pérez published a study revising most of the specimens historically attributed to Kronosaurus. The two authors limited Kronosaurus to the holotype alone and considered it a nomen dubium. Since the holotype specimen does not possess any diagnostic features, the other attributed fossils were provisionally moved to a new taxon that the two authors named Eiectus longmani, in homage to Longman, the paleontologist who named the original genus. The Harvard skeleton (MCZ 1285) was designated the holotype of this new genus. In 2023, Valentin Fischer and colleagues criticized the reassignments, arguing that they stand contrary to ICZN Articles 75.5 and 75.6 and that the aforementioned multiple-species possibility cannot justify a tentative reassignment of all specimens to Eiectus. The authors instead opted to refer to all relevant fossils as Kronosaurus-Eiectus. The same year, Stephen F. Poropat and colleagues maintained K. queenslandicus as a nominally valid taxon that includes all fossils from the Toolebuc and Allaru Formations pending an official ICZN petition, recommending specimen QM F18827 as neotype.
The authors also criticized the reassignment of the Toolebuc specimens, on the grounds that Noè and Gómez-Pérez apparently ignored the conclusion of McHenry's 2009 thesis that only one species of large pliosaur exists in the formation and that, therefore, all of its specimens can reliably be considered conspecific with the holotype. As for Eiectus, Poropat and colleagues limited it to MCZ 1285 and the referred specimen MCZ 1284, but this assignment without formal redescription also remains subject to debate, given that the holotype is so massively restored with plaster that all apparently diagnostic features are probably unreliable without comprehensive CT scans.
Species proposed or formerly classified
Although the only currently recognized species of Kronosaurus is K. queenslandicus, several authors have suggested the existence of additional species within the genus. In 1982 and again in 1991, Ralph Molnar expressed doubts as to whether the Harvard skeleton (MCZ 1285) belonged to the species K. queenslandicus, given that it was discovered in a locality distinct from that of the first known specimens, namely in the older Wallumbilla Formation. Molnar therefore suggested that this specimen might belong to another species of Kronosaurus, characterized by a deeper and more robust skull than those from the Toolebuc Formation. A study published in 1993 likewise referred to the specimen as Kronosaurus sp., its authors following the same opinion as Molnar. However, as White indicated in his 1935 description of the specimen, much of the skull roof is not preserved and is mostly restored in plaster, so its real proportions are uncertain. In his 2009 thesis, McHenry nevertheless continued to refer the specimen to K. queenslandicus because of its stratigraphic distribution and certain traits that may be consistent with other specimens discovered in the Toolebuc Formation. Only a CT scan of this heavily plastered specimen could reveal whether genuinely distinguishing features are present. In 1977, an almost complete skeleton of a large pliosaur was discovered by local residents of the town of Villa de Leyva, Colombia. The specimen, nicknamed "El Fósil" and dating from the Upper Aptian of the Paja Formation, was provisionally referred to the genus Kronosaurus two years later, in 1979. In 1992, the German paleontologist Oliver Hampe established a second species of the genus under the name K. boyacensis, the specific name referring to Boyacá, the department surrounding the discovery site. However, these descriptions were made from photographs and remote imaging techniques, in particular because access to the specimen was prohibited by the local community. In addition, the state of preservation of the specimen and anatomical characteristics different from those of K. queenslandicus also raised doubts about the assignment of this species to Kronosaurus. In 2022, Noè and Gómez-Pérez redescribed this specimen and concluded that it belonged to a distinct genus, which they named Monquirasaurus, in reference to Monquirá, the administrative division where the specimen was discovered.
Description
Because the holotype specimen of Kronosaurus is non-diagnostic, the majority of anatomical descriptions are based on observations of more complete fossils later assigned to the genus.
The majority of descriptions come from McHenry's 2009 thesis, although some specimens have been described in other works. Kronosaurus has a morphology typical of the pliosaurids of the thalassophonean group: a large elongated skull connected to a short neck, unlike many other plesiosaurs, which have a long neck and a small head. Like all other plesiosaurs, Kronosaurus has a short tail, a massive trunk and two pairs of large flippers.
Size
Kronosaurus is one of the largest pliosaurs identified to date, but several estimates of its exact size have been proposed over the course of research. As early as 1930, Longman, in his description of the propodials, considered that Kronosaurus would have exceeded in size the imposing Megalneusaurus, a North American pliosaurid dating from the Late Jurassic. After the collection of fossils assigned to the genus by the Harvard Expedition, the maximum size of Kronosaurus was generally set at around 12.8 m (42 ft) long, based on specimen MCZ 1285. Kronosaurus was then considered the largest known marine reptile until 1995, when Theagarten Lingham-Soliar suggested that the Late Cretaceous aquatic squamate Mosasaurus hoffmannii reached around long, although more recent estimates have reduced the latter's size to around . Currently, the largest marine reptile identified to date is the Late Triassic ichthyosaur Ichthyotitan, which is thought to have reached around in length. Since the Harvard skeleton's restoration is erroneous, McHenry gives a smaller size for this specimen, between long, for a weight of . These measurements are seen as the maximum possible estimates for the genus as a whole. Even before McHenry's thesis was published, paleontologist Benjamin P. Kear and marine biologist Richard Ellis had proposed comparable estimates in their respective works, both published in 2003, ranging from according to Kear to according to Ellis. In 2024, Ruizhe Jackevan Zhao revised the measurements of MCZ 1285 to . Other specimens have been given body size estimates, although some of these are known only from more limited fossil remains. QM F1609, the holotype specimen, although very fragmentary, would have measured long with a body mass of . The proposed neotype specimen QM F18827 would have reached a length of with a body mass of . The most complete known attributed specimen, QM F10113, would have reached slightly smaller measurements, namely long with a body mass of . The largest specimens of Kronosaurus discovered in the Toolebuc Formation, QM F2446 and QM F2454, would have reached measurements almost identical to those of the Harvard skeleton. Respectively, these two specimens would have reached in length, with body masses estimated at .
Skull
Since the holotype of K. queenslandicus (QM F1609) consists of only a partial mandibular symphysis, very little can be said about it. However, more complete fossil skulls assigned to the taxon show unique traits. The skulls of the various known specimens of Kronosaurus vary in size. The holotype, although partial and fragmentary, comes from a skull that would have measured a total of long. Candidate neotype specimens QM F10113 and QM F18827 have cranial lengths reaching , respectively. The skull of the Harvard skeleton is estimated to be long. The cranial measurements of the last three specimens cited surpass the skull of any known theropod dinosaur in size. The snout and the mandibular rostrum are long and narrow in shape.
The rostrum in general appears to be arched and relatively elongated, possessing a distinct median dorsal crest. The eye sockets face obliquely posteriorly and are located laterally on the anterior half of the skull. The temporal fossae (openings in the top back of the cranium) are very large, but the skull does not have an anterior interpterygoid vacuity. One of the many traits identified as unique to Kronosaurus is that the premaxilla (front upper tooth-bearing bone) has four rather than five or more caniniform teeth. The frontal bones (bones bordering the eye sockets) do not come into contact with the margin of the eye sockets, due to the connection between the postfrontal and prefrontal bones. The frontal bones also do not come into contact with the middle part of the skull roof, due to the connection between the parietal bones and the posterior facial processes of the premaxillae. The prefrontals are large and contact the anteromedial part of the eye sockets as well as the posterior border of the nostrils. The lacrimal bones (bones bordering the lower front edges of the eye sockets) are present in small specimens, but tend to be fused in adults. The dorsal surface of the median dorsal crest is formed by the premaxillae and nasal bones (bones bordering the external nares), which in adults are fused. The hyoid bones are robust. The mandibular symphysis of Kronosaurus is elongated and spatulate (spoon-shaped), and, like those of its close relatives Brachauchenius and Megacephalosaurus, it contains up to six pairs of teeth. Each dentary (the tooth-bearing bone in the mandible) has up to 26 teeth. The mandibular glenoid (socket of the jaw joint) is kidney-shaped and angled upwards and inwards. The main autapomorphy of Kronosaurus teeth is that they are conical in shape, roughly ridged, and lack distinct carinae. The dentition of Kronosaurus is heterodont, that is, it has teeth of different shapes. The larger teeth are caniniform and located at the front of the jaws, while the smaller teeth are more sharply recurved, stouter, and located further back.
Postcranial skeleton
The Harvard skeleton historically attributed to Kronosaurus received a study detailing its postcranial anatomy by Romer and Lewis in 1959. However, as the skeleton was massively restored in plaster, it is currently difficult to discern the real fossil material. Additionally, the specimen is temporarily referred to Eiectus; CT scans may in time reveal whether or not it belongs to Kronosaurus. Many Kronosaurus specimens preserve postcranial material. The most complete specimen known, catalogued as QM F10113, preserves a substantial part of the postcranial anatomy, which could reveal important information for a more in-depth diagnosis of the taxon. This specimen is also expected to be described in more detail in a future study. Some features of the postcranial anatomy of the genus have nevertheless been noted, both in McHenry's thesis and in other articles. Based on the different specimens analyzed, McHenry estimates that Kronosaurus would have had at least 35 presacral vertebrae, including thirteen cervical and five pectoral vertebrae. Unlike in Pliosaurus, the cervical centra (vertebral bodies) are wider than the dorsals. The anterior dorsal vertebrae are higher than wide. The zygapophyses would have been absent from the anterior dorsal vertebrae and the caudal vertebrae. In the thoracic region, the ribs would have been robust, as suggested by the equally robust transverse processes.
The ribs would also have been single-headed. Although the tail of Kronosaurus is unknown from articulated specimens, the end of the caudal vertebrae would have supported a small caudal fin, as in other plesiosaurs. The coracoid and pubis are both elongated from front to back. The hindlimbs of Kronosaurus are longer than its forelimbs, with the femur being longer and more robust than the humerus. This suggests that the largest representatives of Kronosaurus would have had rear flippers forming a wingspan exceeding .
Classification and evolution
De Vis initially suggested that the Kronosaurus holotype specimen belonged to an ichthyosaur. However, when Longman described the taxon in 1924, he assigned it to the family Pliosauridae based on multiple anatomical features, an assignment that has been widely accepted by the scientific community throughout the 20th and 21st centuries. Nevertheless, some alternative classifications have been proposed over the course of research. For example, in 1962, Welles suggested that Kronosaurus possibly belonged to the family Dolichorhynchopidae. However, this family is today recognized as polyphyletic (an unnatural grouping) and is considered invalid. The exact phylogenetic position of Kronosaurus within the Pliosauridae has also been debated. In 1992, Hampe proposed classifying Kronosaurus with its close relative Brachauchenius in the proposed family Brachaucheniidae. Kenneth Carpenter agreed with Hampe in 1996, while noting some notable cranial differences between the two genera. The family Brachaucheniidae was originally erected in 1925 by Samuel Wendell Williston to include only Brachauchenius. In 2001, F. Robin O'Keefe revised the classification of Pliosauridae and classified Kronosaurus as a basal representative distantly related to Brachauchenius. In 2008, two studies and a thesis proposed alternative classifications for Kronosaurus. Patrick S. Druckenmiller and Anthony P. Russell classified Kronosaurus as a derived pliosaurid, while Hilary F. Ketchum classified it as a sister taxon of Brachauchenius within this family. Adam S. Smith and Gareth J. Dyke reclassified both genera within the Brachaucheniidae, but treated the family as the sister taxon of the Pliosauridae. McHenry suggested that if Ketchum's proposal were proved valid, it would be preferable to relegate Brachaucheniidae to a subfamily of the Pliosauridae, renamed Brachaucheninae. McHenry nevertheless retained the name Brachaucheniidae in his thesis, pending further phylogenetic results. In 2013, Roger B. S. Benson and Druckenmiller named a new clade within Pliosauridae, Thalassophonea. This clade included the "classic", short-necked pliosaurids while excluding the earlier, long-necked, more gracile forms. The authors thus demoted the family Brachaucheniidae to a subfamily, renaming it Brachaucheninae, and classified many Cretaceous pliosaurids within it, including Kronosaurus. Within this subfamily, Kronosaurus appears to be one of the most derived representatives, being generally placed in a clade including Brachauchenius and, more recently, Megacephalosaurus. Subsequent studies have recovered a similar position for Kronosaurus. The cladogram below is modified from Madzia et al. (2018): The subfamily Brachaucheninae brings together the majority of pliosaurids dating from the Cretaceous, with phylogenetic analyses often uniting them within this clade.
However, it is possible that this was not the only lineage of thalassophoneans to survive beyond the Jurassic. Indeed, Lower Cretaceous pliosaur teeth displaying characteristics distinct from those of the Brachaucheninae suggest that at least one other lineage crossed the Jurassic-Cretaceous boundary. Members of the Brachaucheninae are variable, and only one characteristic uniting them all is known: the possession of somewhat circular teeth rather than the fully or somewhat trihedral teeth seen in some Jurassic pliosaurs. Some characteristics shared by most brachauchenines, like Megacephalosaurus, include skull features (such as an elongated snout, gracile rostrum, and consistently sized teeth) that are better adapted for a general evolutionary shift towards smaller prey. However, there are notable exceptions: Kronosaurus is one of the few representatives of this group that shares none of these traits, having teeth that are each shaped differently. This type of dentition therefore indicates that Kronosaurus was a genus specialized in hunting large prey, unlike most other representatives of this group.
Paleobiology
Plesiosaurs were well adapted to marine life. They grew at rates comparable to those of birds and had high metabolisms, indicating homeothermy or even endothermy. Endothermy is considered especially probable in the plesiosaurs that lived in Australia, including Kronosaurus, as the southernmost areas had particularly cold temperatures. A 2019 study by palaeontologist Corinna Fleischle and colleagues found that plesiosaurs had enlarged red blood cells, based on the morphology of their vascular canals, which would have aided them while diving. The short tail, while unlikely to have been used to propel the animal, could have helped stabilise or steer the plesiosaur.
Feeding
Due to its imposing size, morphology and distribution, Kronosaurus would most likely have been the apex predator of the ancient Eromanga inland sea. Stomach contents have been found in some Kronosaurus specimens. The most notable of these is specimen QM F10113, the most complete known, which contains the remains of a sea turtle. The position of the turtle within the skeleton indicates that the specimen died of suffocation after swallowing its prey. The fossil remains are too fragmentary to determine what genus this turtle belongs to, but its measurements are similar to those of the protostegid Notochelone, the most widespread sea turtle of the Albian strata of Queensland. In 1993, Tony Thulborn and Susan Turner analyzed the severely crushed skull of an elasmosaurid, today recognized as belonging to Eromangasaurus. In their study, the authors discovered multiple bite marks made by large teeth. These traces correspond to the dentition of specimens referred to its contemporary Kronosaurus, demonstrating its predation on this animal. This is also the first reported evidence of a pliosaur attack on an elasmosaurid. As elasmosaurids have a very elongated neck and a small head, the injuries found in Eromangasaurus suggest that Kronosaurus would have regularly attacked this region of the body. Although no direct fossil evidence of such feeding is known, the animal would likely also have preyed on leptocleidids.
Intraspecific combat
The smallest specimen attributed to Kronosaurus, cataloged as QM F51291, shows bite marks on its skull.
In his 2009 thesis, McHenry highlights that the maximum possible size of Kronosaurus is , and suggests that the three known specimens not reaching the minimum size of represent juveniles or subadults. After analysis, he therefore suggests that this specimen was a juvenile fatally bitten by an adult, indicating intraspecific aggression or even cannibalism in Kronosaurus. He supports this hypothesis with the common observation that many adult crocodilians do not hesitate to attack juveniles. However, McHenry notes that it is also possible that the bites were made shortly after the specimen died of another cause.
Bite force
A large part of McHenry's 2009 thesis is dedicated to the bite force of Kronosaurus, investigated using biomechanical analyses. Using these techniques, McHenry found that Kronosaurus exceeded the bite force of any living animal, being only slightly surpassed in some estimates by the well-known theropod dinosaur Tyrannosaurus. Based on specimen QM F10113, the bite force of Kronosaurus is estimated to be between . Based on the same specimen, a 2014 study by Foffa et al. re-estimated the bite force at between , corresponding to that of its close Jurassic relative Pliosaurus kevani. The estimates of this study regarding the bite force of these two pliosaurids exceed that of the predatory placoderm fish Dunkleosteus but are far from equaling that of the megalodon, which would have reached between .
Paleoecology
Contemporaneous biota
All the geological formations in which fossils attributed to Kronosaurus have been discovered are located in the Great Artesian Basin (GAB). During the Early Cretaceous, this geographical area was flooded by an inland sea known as the Eromanga Sea. The sedimentary record shows that this sea was relatively shallow, muddy and stagnant. Temperatures in this sea would have been particularly cold, approaching freezing, and seasonal ice may have formed in some areas. Sea temperatures during the Albian would nevertheless have been warmer than during the Aptian. Many invertebrates are known from the fossil record dating from the Late Aptian to Late Albian of the GAB, mainly represented by molluscs. Free-swimming organisms include cephalopods, among them many ammonites, belemnites, and squids. Benthic zones are mainly dominated by bivalves, with gastropods and scaphopods being less diverse. Other types of invertebrates are known, such as crinoid echinoderms, decapod crustaceans, brachiopods, polychaete annelids and one species of glass sponge. The diversity of fish within the Eromanga Sea seems to vary according to geological period: they are poorly represented in the Albian strata but abundant in the Aptian record, particularly the Upper Aptian. These include actinopterygians such as Australopachycormus, Richmondichthys, Flindersichthys, Cooyoo and Pachyrhizodus. The only known sarcopterygians are the lungfish Ceratodus and Neoceratodus. Chondrichthyans are also present, represented by Archaeolamna, Carcharias, Cretolamna, Cretoxyrhina, Edaphodon, Echinorhinus, Leptostyrax, Microcorax, Notorynchus, Pseudocorax, Pristiophorus, Scapanorhynchus and several species of orectolobiforms and palaeospinacids. These fish include surface-dwelling, midwater, and benthic varieties of various sizes, some of which could get quite large. They filled a variety of niches, including invertebrate eaters, piscivores, and, in the case of Cretoxyrhina, large apex predators.
The Eromanga Sea is known for its great diversity of marine reptiles. Identified marine turtles include the protostegids Cratochelone, Bouliachelys and Notochelone, the latter being the most diverse within the inland sea. Several ichthyosaur fossils have been discovered in Queensland and were historically assigned to several different genera. It is now known that these fossils probably belong to the species Platypterygius australis, one of the youngest ichthyosaurs known in the fossil record. Other fossils attributable to this species have been discovered in other formations of the GAB, notably in the Bulldog Shale, but they have proved too fragmentary for a clear diagnosis. Several plesiosaurians have been identified, but most fossils are either too fragmentary or non-diagnostic to be assigned to a specific genus or species. Kronosaurus is stratigraphically the most widespread plesiosaurian in Australia, and would be the only large pliosaurid known to date in the country, excluding the proposed genus Eiectus. The only known cryptoclidid is Opallionectes. Elasmosaurids include Eromangasaurus and numerous indeterminate representatives. Some representatives of the clade Leptocleidia, which includes Leptocleididae and Polycotylidae, are known. Leptocleidids include Leptocleidus, Umoonasaurus, and a few specimens with undetermined attributions. Polycotylids are known only from undetermined or not yet described specimens, the most notable of them being the Richmond specimen. Some archosaurs from various groups have also been identified in the fossil record of the Eromanga Sea. Numerous fragmentary remains of dinosaurs, from individuals that probably perished after drowning in the waters of the Eromanga Sea, are known; these have been identified as coming from the sauropod Austrosaurus, the ankylosaurian Minmi and the ornithopod Muttaburrasaurus. In addition to dinosaurs, many pterosaur fossils are known, and these animals could have been predators comparable to many modern-day seabirds. However, their fossils are often fragmentary, and few taxa have been named. Among the erected genera are Aussiedraco, Mythunga and Thapunngaka.
Interspecific competition
Despite its status as an apex predator, Kronosaurus was sometimes attacked by other contemporary predators. A mandible cataloged as KK F0630, possibly representing a large subadult or a small adult specimen, shows bite marks that would have been made by lamniform sharks belonging to the family Cretoxyrhinidae. Injuries of this type are not unlikely, as several sharks attributed to this family have been identified in the various geological formations where Kronosaurus is known. The grooves of the bite marks are surrounded by aberrant raised osseous growth, indicating that the specimen healed during its lifetime.
Biology and health sciences
Prehistoric marine reptiles
Animals
1319795
https://en.wikipedia.org/wiki/Deinosuchus
Deinosuchus
Deinosuchus is an extinct genus of alligatoroid crocodilian, related to modern alligators and caimans, that lived 82 to 73 million years ago (Ma), during the Late Cretaceous period. The name translates as "terrible crocodile" and is derived from the Greek deinos (δεινός), "terrible", and soukhos (σοῦχος), "crocodile". The first remains were discovered in North Carolina (United States) in the 1850s; the genus was named and described in 1909. Additional fragments were discovered in the 1940s and were later incorporated into an influential, though inaccurate, skull reconstruction at the American Museum of Natural History. Knowledge of Deinosuchus remains incomplete, but better cranial material found in recent years has expanded scientific understanding of this massive predator. Although Deinosuchus was far larger than any modern crocodile or alligator, with the largest adults measuring in total length, its overall appearance was fairly similar to its smaller relatives. It had large, robust teeth built for crushing, and its back was covered with thick hemispherical osteoderms. One study indicated Deinosuchus may have lived for up to 50 years, growing at a rate similar to that of modern crocodilians, but maintaining this growth over a much longer time. Deinosuchus fossils have been discovered in 12 U.S. states, including Texas, Montana, and many along the East Coast. Fossils have also been found in northern Mexico. It lived on both sides of the Western Interior Seaway, and was an opportunistic apex predator in the coastal regions of eastern North America. Deinosuchus reached its largest size in its western habitat, but the eastern populations were far more abundant. Opinion remains divided as to whether these two populations represent separate species. Deinosuchus was probably capable of killing and eating large dinosaurs. It may have also fed upon sea turtles, fish, and other aquatic and terrestrial prey.
Discovery and naming
In 1858, geologist Ebenezer Emmons described two large fossil teeth found in the Tar Heel Formation of Bladen County, North Carolina. Emmons assigned these teeth to Polyptychodon, which he then believed to be "a genus of crocodilian reptiles". Later discoveries showed that Polyptychodon was actually a pliosaur, a type of marine reptile. The teeth described by Emmons were thick, slightly curved, and covered with vertically grooved enamel; he assigned them a new species name, P. rugosus. Although not initially recognized as such, these teeth were probably the first Deinosuchus remains to be scientifically described. Another large tooth that likely came from Deinosuchus, discovered in Tar Heel sediments from neighboring Sampson County, was named Polydectes biturgidus by Edward Drinker Cope in 1869. In 1903, at Willow Creek, Montana, several fossil osteoderms were discovered "lying upon the surface of the soil" by John Bell Hatcher and T.W. Stanton. These osteoderms were initially attributed to the ankylosaurid dinosaur Euoplocephalus. Excavation at the site, carried out by W.H. Utterback, yielded further fossils, including additional osteoderms, as well as vertebrae, ribs, and a pubis. When these specimens were examined, it became clear that they belonged to a large crocodilian and not a dinosaur; upon learning this, Hatcher "immediately lost interest" in the material. After Hatcher died in 1904, his colleague W. J. Holland studied and described the fossils. Holland assigned these specimens to a new genus and species, Deinosuchus hatcheri, in 1909.
Deinosuchus comes from the Greek δεινός/deinos, meaning "terrible", and σοῦχος/suchos, meaning "crocodile". A 1940 expedition by the American Museum of Natural History yielded more fossils of giant crocodilians, this time from Big Bend National Park in Texas. These specimens were described by Edwin H. Colbert and Roland T. Bird in 1954, under the name Phobosuchus riograndensis. Donald Baird and Jack Horner later assigned the Big Bend remains to Deinosuchus, an assignment accepted by most modern authorities. The genus name Phobosuchus, initially created by Baron Franz Nopcsa in 1924, has since been discarded because it contained a variety of crocodilian species that turned out not to be closely related to each other.

The American Museum of Natural History incorporated the skull and jaw fragments into a plaster restoration, modeled after the present-day Cuban crocodile. Colbert and Bird stated this was a "conservative" reconstruction, since an even greater length could have been obtained if a long-skulled modern species, such as the saltwater crocodile, had been used as the template. Because it was not then known that Deinosuchus had a broad snout, Colbert and Bird miscalculated the proportions of the skull, and the reconstruction greatly exaggerated its overall width and length. Despite its inaccuracies, the reconstructed skull became the best-known specimen of Deinosuchus, and brought public attention to this giant crocodilian for the first time.

Numerous additional specimens of Deinosuchus were discovered over the next several decades. Most were quite fragmentary, but they expanded knowledge of the giant predator's geographic range. As noted by Chris Brochu, the osteoderms are distinctive enough that even "bone granola" can adequately confirm the presence of Deinosuchus. Better cranial material was also found; by 2002, David R. Schwimmer was able to create a composite computer reconstruction of 90% of the skull.

Classification and species

From the discovery of the earliest fragmentary remains that would come to be known as Deinosuchus, the genus was considered a relative of the crocodiles, and in 1954 it was placed in their family (Crocodylidae) on the basis of dental features. However, the discovery of new specimens from Texas and Georgia in 1999 led to a phylogenetic analysis placing Deinosuchus in a basal position within the clade Alligatoroidea, along with Leidyosuchus. This classification was bolstered in 2005 by the discovery of a well-preserved Deinosuchus braincase from the Blufftown Formation of Alabama, which shows some features reminiscent of those in the modern American alligator, although Deinosuchus was not considered a direct ancestor of modern alligators.

Since the resurrection of the generic name in 1979, two species of Deinosuchus have traditionally been recognized: D. rugosus from Appalachia and the larger D. hatcheri/riograndensis from Laramidia, distinguished by differences in the shape of their osteoderms and teeth. However, because there are few distinctive differences between them beyond size, they have increasingly been considered a single species. In their overview of crocodyliform material from the Kaiparowits Formation of Utah, Irmis et al. (2013) noted that D. rugosus is dubious because its holotype teeth are undiagnostic, and recommended using Deinosuchus hatcheri for Deinosuchus material from Laramidia, while stressing that cranial Deinosuchus material from Appalachia had not been described. In a 2020 study, Cossette and Brochu agreed that
D. rugosus is dubious and undiagnostic, rendering it a nomen dubium, and instead named a new species from Appalachia, D. schwimmeri (honoring paleontologist David R. Schwimmer), which included several specimens previously ascribed to D. rugosus. They also noted that the highly incomplete D. hatcheri holotype can be distinguished by the unique shape of the edge of its indented osteoderms, although this may not be reliable because the osteoderms of the other species may simply not be as well preserved. Because of the incomplete nature of the type species D. hatcheri, Cossette and Brochu proposed that the better-preserved D. riograndensis replace it as the type species, which would allow for improved identification and differentiation of the Deinosuchus species. Phylogenetic analysis places Deinosuchus as a basal member of Alligatoroidea.

Description

Morphology

Despite its large size, the overall appearance of Deinosuchus was not considerably different from that of modern crocodilians. Deinosuchus had a broad, alligator-like snout with a slightly bulbous tip. Each premaxilla contained four teeth, with the pair nearest to the tip of the snout being significantly smaller than the other two. Each maxilla (the main tooth-bearing bone in the upper jaw) contained 21 or 22 teeth. The tooth count for each dentary (tooth-bearing bone in the lower jaw) was at least 22. All the teeth were very thick and robust; those close to the rear of the jaws were short, rounded, and blunt. They appear to have been adapted for crushing, rather than piercing. When the mouth was closed, only the fourth tooth of the lower jaw would have been visible. The skull of Deinosuchus was of a unique shape not seen in any other living or extinct crocodilian: broad, but inflated at the front around the nares. Two holes in the premaxilla in front of the nares are autapomorphies unique to this genus among crocodilians, but nothing is known at present regarding their function.

Modern saltwater crocodiles (Crocodylus porosus) have the strongest recorded bite of any living animal, with a maximum force of for a specimen. The bite force of Deinosuchus has been estimated to be to . Deinosuchus had a secondary bony palate, which would have permitted it to breathe through its nostrils while the rest of the head remained submerged underwater. The vertebrae were articulated in a procoelous manner, meaning they had a concave hollow on the front end and a convex bulge on the rear; these would have fit together to produce a ball-and-socket joint. The secondary palate and procoelous vertebrae are advanced features also found in modern eusuchian crocodilians.

The osteoderms (scutes) covering the back of Deinosuchus were unusually large, heavy, and deeply pitted; some were of a roughly hemispherical shape. Deep pits and grooves on these osteoderms served as attachment points for connective tissue. Together, the osteoderms and connective tissue would have served as load-bearing reinforcement to support the massive body of Deinosuchus out of water. These deeply pitted osteoderms have been used to suggest that, despite its bulk, Deinosuchus could probably have walked on land much like modern-day crocodiles.

Size

The large size of Deinosuchus has generally been recognized despite the fragmentary nature of the fossils assigned to it. However, estimates of how large it really was have varied considerably over the years.
The original estimate from 1954 for the type specimen of the then-named "Phobosuchus riograndensis" was based on a skull of and a lower jaw of long, reconstructed with proportions similar to those of the Cuban crocodile, giving a total estimated length of . However, this reconstruction is now considered inaccurate. Using more complete remains, it was estimated in 1999 that the size attained by specimens of Deinosuchus varied from with weights from . This was later corroborated when it was noted that most known specimens of D. rugosus usually had skulls of about with estimated total lengths of and weights of . A reasonably well-preserved skull specimen discovered in Texas indicated the animal's head measured about , and its body length was estimated at .

Schwimmer (2002) suggested the very largest individuals of D. riograndensis could reach sizes up to , 1.5 times that of the average D. rugosus, based on isometrically scaling vertebral lengths from the type specimens of "Phobosuchus riograndensis" (AMNH 3073) and Deinosuchus hatcheri, which he estimated would represent animals nearly . However, Iijima and Kubo (2020) estimated AMNH 3073 to measure in length using regression equations based on modern crocodilians, as the vertebrae of crocodilians scale with positive allometry. A particularly large mandibular fragment from a D. riograndensis specimen was estimated to have come from an individual with a skull length of . This length was used in conjunction with a regression equation relating skull length to total length in the American alligator to estimate a total length of for this particular specimen. This is only slightly lower than previous estimates for the species. Deinosuchus has often been described as the largest crocodyliform of all time, but other crocodyliforms, such as Purussaurus, Gryposuchus, Rhamphosuchus, Euthecodon, and Sarcosuchus, may have equaled or exceeded it in size.

Paleobiology

Diet

In 1954, Edwin H. Colbert and Roland T. Bird speculated that Deinosuchus "may very well have hunted and devoured some of the dinosaurs with which it was contemporaneous". Colbert restated this hypothesis more confidently in 1961: "Certainly this crocodile must have been a predator of dinosaurs; otherwise why would it have been so overwhelmingly gigantic? It hunted in the water where the giant theropods could not go." David R. Schwimmer proposed in 2002 that several hadrosaurid tail vertebrae found near Big Bend National Park show evidence of Deinosuchus tooth marks, strengthening the hypothesis that Deinosuchus fed on dinosaurs in at least some instances. In 2003, Christopher A. Brochu agreed that Deinosuchus "probably dined on ornithopods from time to time." Deinosuchus is generally thought to have employed hunting tactics similar to those of modern crocodilians, ambushing dinosaurs and other terrestrial animals at the water's edge and then submerging them until they drowned. A 2014 study suggested that it would have been able to perform a "death roll", like modern crocodiles.

Schwimmer and G. Dent Williams proposed in 1996 that Deinosuchus may have preyed on marine turtles. Deinosuchus would probably have used the robust, flat teeth near the back of its jaws to crush the turtle shells. The "side-necked" sea turtle Bothremys was especially common in the eastern habitat of Deinosuchus, and several of its shells have been found with bite marks that were most likely inflicted by the giant crocodilian.
Schwimmer concluded in 2002 that the feeding patterns of Deinosuchus most likely varied by geographic location; the smaller Deinosuchus specimens of eastern North America would have been opportunistic feeders in an ecological niche similar to that of the modern American alligator. They would have consumed marine turtles, large fish, and smaller dinosaurs. The bigger, but less common, Deinosuchus that lived in Texas and Montana might have been more specialized hunters, capturing and eating large dinosaurs. Schwimmer noted that no theropod dinosaurs in the eastern range of Deinosuchus approached its size, indicating the massive crocodilian could have been the region's apex predator.

Growth rates

A 1999 study by Gregory M. Erickson and Christopher A. Brochu suggested the growth rate of Deinosuchus was comparable to that of modern crocodilians, but was maintained over a far longer time. Their estimates, based on growth rings in the dorsal osteoderms of various specimens, indicated a Deinosuchus might have taken over 35 years to reach full adult size, and the oldest individuals may have lived for more than 50 years. This was a completely different growth strategy from that of large dinosaurs, which reached adult size much more quickly and had shorter lifespans. According to Erickson, a full-grown Deinosuchus "must have seen several generations of dinosaurs come and go".

Schwimmer noted in 2002 that Erickson and Brochu's assumptions about growth rates are only valid if the osteodermal rings reflect annual periods, as they do in modern crocodilians. According to Schwimmer, the growth ring patterns observed could have been affected by a variety of factors, including "migrations of their prey, wet-dry seasonal climate variations, or oceanic circulation and nutrient cycles". If the ring cycle were biannual rather than annual, this might indicate Deinosuchus grew faster than modern crocodilians and had a similar maximum lifespan.

Paleoecology

Deinosuchus was present on both sides of the Western Interior Seaway. Specimens have been described from 12 U.S. states: Utah, Montana, Wyoming, New Mexico, New Jersey (Marshalltown Formation), Delaware, Georgia, Alabama, Mississippi, Texas, and North and South Carolina (Tar Heel/Coachman and Bladen Formations). A Deinosuchus osteoderm from the San Carlos Formation was also reported in 2006, so the giant crocodilian's range may have included parts of northern Mexico. There is also a report describing a possible Deinosuchus scute from Colorado. Deinosuchus fossils are most abundant in the Gulf Coastal Plain region of Georgia, near the Alabama border. All known specimens of Deinosuchus were found in rocks dated to the Campanian stage of the Late Cretaceous period. The oldest examples of this genus lived approximately 82 Ma, and the youngest lived around 73 Ma.

The distribution of Deinosuchus specimens indicates these giant crocodilians may have preferred estuarine environments. In the Aguja Formation of Texas, where some of the largest specimens of Deinosuchus have been found, these massive predators probably inhabited brackish-water bays. Although some specimens have also been found in marine deposits, it is not clear whether Deinosuchus ventured out into the ocean (like modern-day saltwater crocodiles); these remains might have been displaced after the animals died. Deinosuchus has been described as a "conspicuous" component of a purportedly distinct biome occupying the southern half of Late Cretaceous North America.
It has been suggested that the presence of Deinosuchus may have been responsible for the lack of very large predatory theropods from the Late Cretaceous of Appalachia, with the giant crocodilian replacing such large theropods as the top predator of the Appalachian coastal plains.
https://en.wikipedia.org/wiki/Resplendent%20quetzal
Resplendent quetzal
The resplendent quetzal (Pharomachrus mocinno) is a small bird found in Central America and southern Mexico that lives in tropical forests, particularly montane cloud forests. It is part of the family Trogonidae and has two recognized subspecies, P. m. mocinno and P. m. costaricensis. Like other quetzals, the resplendent is mostly omnivorous; its diet mainly consists of fruits of plants in the laurel family, Lauraceae, but it occasionally also preys on insects, lizards, frogs, and snails.

The species is well known for its colorful and complex plumage, which differs substantially between the sexes. Males have iridescent green plumes, a red lower breast and belly, black innerwings, and a white undertail, whilst females are duller and have a shorter tail. Grey lower breasts, bellies, and bills, along with bronze-green heads, are characteristic of females. These birds hollow out holes in decaying trees, or use holes already made by woodpeckers, as nest sites. They are known to take turns while incubating, males throughout the day and females at night. The female usually lays one to three eggs, which hatch in 17 to 19 days. The quetzal is an altitudinal migrant, migrating from the slopes to the canopy of the forest. This occurs during the breeding season, which varies depending on the location, but usually commences in March and extends as far as August.

The resplendent quetzal is considered near threatened on the IUCN Red List, with habitat destruction being the main threat. It has an important role in Mesoamerican mythology, and is closely associated with the deity Quetzalcoatl. It is the national animal of Guatemala, being pictured on the flag and coat of arms; it also gives its name to the country's currency, the Guatemalan quetzal.

Taxonomy

The resplendent quetzal was first described by Mexican naturalist Pablo de La Llave in 1832. It is one of five species of the genus Pharomachrus, commonly known as quetzals. On its own, "quetzal" usually refers to the resplendent quetzal, though the word can apply to all members of the genera Pharomachrus and Euptilotis. Some scholars regard the crested quetzal as a very close relative of the resplendent quetzal, and either consider it a subspecies of the resplendent or treat the two as forming a superspecies. The quetzal clade is thought to have spread out from where it emerged in the Andes, the resplendent quetzal being the youngest species.

The name of the genus, Pharomachrus, refers to the physical characteristics of the bird, with pharos meaning 'mantle' and makros meaning 'long' in Ancient Greek. The word 'quetzal' came from Nahuatl (Aztec), where quetzalli (from the root quetza, meaning 'stand') means 'tall upstanding plume' and then 'quetzal tail feather'; from that, Nahuatl quetzaltotōtl means 'quetzal-feather bird' and thus 'quetzal'. Two subspecies are recognized, P. m. mocinno and P. m. costaricensis, although there is an ongoing debate about whether costaricensis should be recognized as a distinct species. The specific epithet mocinno is a Latinization of the name of the biologist José Mariano Mociño, a mentor of La Llave.

Description

The resplendent quetzal is the largest trogon. It is long; in the nominate subspecies, the tail streamers measure between and , with the median being for males. The nominate subspecies weighs about , while the subspecies costaricensis is slightly smaller, with shorter wings and bill. Its tail plumes are also shorter and narrower, measuring between and , with the median being .
Resplendent quetzals have a green body (showing iridescence from green-gold to blue-violet) and a red lower breast and belly. Depending on the light, quetzal feathers can shine in a variety of colors: from green, cobalt, lime, and yellow to ultramarine. Their green upper tail coverts hide their tails and are particularly splendid in breeding males, being longer than the rest of the body. Though the quetzal's plumage appears green, its feathers are actually brown due to the pigment melanin. The primary wing coverts are also unusually long and have a fringed appearance. The male has a helmet-like crest. The bill, which is partly covered by green filamentous feathers, is yellow in mature males and grey in females. Their iridescent feathers, which cause them to appear shiny and green like the canopy leaves, are a camouflage adaptation for hiding within the canopy during rainy weather. The quetzal's skin is very thin and easily torn, so it has evolved thick plumage for protection. It has large eyes, adapted to see in the dim light of the forest. Their song is an array of full-toned, mellow, slurred notes in plain patterns and is often remarkably melodious: keow, kowee, keow, k'loo, keeloo.

Distribution and habitat

This species lives amid lush vegetation, especially in moist rainforests at high elevations. It populates trees that make up the canopy and subcanopy of the rainforest, though it can also be found in ravines and on cliffs. It prefers to nest in decaying trees, stumps, and abandoned woodpecker hollows. The vivid colors of the quetzal are disguised by the rainforest. The resplendent quetzal can be found from southern Mexico (southernmost Oaxaca and Chiapas) to western Panama (Chiriquí). The ranges of the two subspecies differ: P. m. mocinno is found in southern Mexico, northern El Salvador, northwestern Nicaragua, Guatemala, and Honduras, while P. m. costaricensis is found in Costa Rica and western Panama. The geographical isolation of the two subspecies is caused by the Nicaraguan depression, a wide, long bottomland that contains the two largest lakes in Central America, Lake Managua and Lake Nicaragua, and by the scarcity of suitable breeding habitat in the adjoining regions.

The quetzal migrates from its breeding areas in the lower montane rainforest to the premontane rainforest on the Pacific slopes for three to four months (July–October), after which it moves across the continental divide to the Atlantic slopes. The quetzal's abundance in its mating areas is correlated with the total number of fruiting species, although the correlation between quetzal abundance and the number of fruiting Lauraceae species is only marginal.

Behavior

Resplendent quetzals are generally shy and quiet, behavior that helps them elude predators. In contrast, they are rather vocal during the mating season, when their behavior serves to display to and attract mates. Their known predators include the ornate hawk-eagle, golden eagle, and other hawks and owls as adults, along with emerald toucanets, brown jays, long-tailed weasels, squirrels, and kinkajous as nestlings or eggs. The resplendent quetzal plays an important ecological role in the cloud forests, helping disseminate the seeds of at least 32 tree species.

Feeding

Resplendent quetzals are considered specialized fruit-eaters, feeding on 41 to 43 species, although they also feed on insects (primarily wasps, ants, and larvae), frogs, lizards, and snails.
Particularly important are Symplococarpon purpusii and wild avocados, as well as other fruits of the laurel family, which the birds swallow whole before regurgitating the pits; this helps to disperse these trees. Quetzals feed more frequently in the midday hours. The adults eat a more fruit-based diet than the chicks, which eat primarily insects and some fruits. Over fifty percent of the fruits they eat are laurels. Quetzals use the methods of "hovering" and "stalling" in order to selectively pick the fruit from near the tips of the branches.

Breeding

Resplendent quetzals create their nests over up in the air and court in the air with specific calls. Six specific vocal calls have been recorded: the two-note whistle gee-gee, wahc-ah-wahc, wec-wec, the whistle coouee, uwac, chatter, and buzzing. The first call is related to male territorial behavior, while the coouee whistle is a mating call. Resplendent quetzals usually live alone when not breeding. They are monogamous territorial breeders, with the size of their territory in Guatemala being . They are also seasonal breeders, with the breeding season lasting from March to April in Mexico, May to June in El Salvador, and March to May in Guatemala. When breeding, females lay one to three pale blue eggs, with a mean size of x , in a nest placed in a hole which they carve in a rotten tree. Resplendent quetzals tend to lay two clutches per year and are known to have a high rate of nest failure, 67–78%. One of the most important factors in nest-site choice for the quetzal is that the tree must be in a stage of decomposition and decay. They often reuse their previous sites. The height of nest stubs is and the nest holes .

Both parents take turns at incubating, with their long tail coverts folded forwards over the back and out of the hole, giving them the appearance of a bunch of fern growing out of the hole. The incubation period lasts about 17 to 19 days, during which the male generally incubates the eggs during the day while the female incubates them at night. When the eggs hatch, both parents take care of the young, feeding them entire fruits, such as berries and avocados, as early as the second day. However, chicks are primarily fed insects, lizards, snails, and small frogs. Males have been observed to bring more food, namely insects, than females. Nestlings are often neglected and even abandoned by females near the end of the rearing period, leaving it up to the male to continue caring for the offspring until they are ready to survive on their own. During the incubation period, parents land and rotate their heads side to side before entering the nest, a process known as "bowing in". This process ends when the chicks hatch. Young quetzals begin flying after a month, but the distinctive long tail feathers can take three years to develop in males.

Conservation status

The population trend varies between subpopulations but is generally decreasing, although certain populations may be increasing or at least stable. The species is classified as near threatened on the IUCN Red List, with an estimated population of 20,000–49,999 individuals. Because of the quetzal's remote habitat, more monitoring is required to confirm the rate of decline, and depending on the results it could be moved to a higher threat category. In 2001, the quetzal survived only in 11 small, isolated patches of forest. Its biggest threats are habitat loss from deforestation, forest fragmentation, and agricultural clearing.
The quetzal is also sometimes hunted for food and trapped for illegal trading. Cloud forests, the resplendent quetzal's habitat, are among the most threatened ecosystems in the world, but the species occurs in several protected areas and is a sought-after species for birdwatchers and ecotourists. It was long thought that the resplendent quetzal could not be bred or held in captivity for any length of time, and the bird was noted for usually dying soon after being captured or caged; this is now understood to result from the assimilation of iron through water ingestion, so captive birds are given tannic acid and iron is avoided in their diet. Because of its reputation for dying in captivity, the quetzal is a traditional symbol of liberty; the national anthem of Guatemala even includes a verse meaning "Be rather dead than a slave". The scientific discovery about the bird's susceptibility to iron has allowed some zoos, including Miguel Álvarez del Toro Zoo in Tuxtla Gutiérrez, Chiapas, to keep this species. Breeding in captivity was announced in 2004.

In culture

The resplendent quetzal is of great importance to Guatemalan culture, being present in various legends and myths. It was considered divine and associated with Quetzalcoatl, a feathered serpent and god of life, light, knowledge, and the winds, by pre-Columbian Mesoamerican civilizations. Its scintillating green tail feathers, symbols of spring plant growth, were venerated by the Aztec and Maya. The Maya also regarded the quetzal as representing freedom and wealth, on account of quetzals dying in captivity and the worth of their feathers alongside jade, respectively. Mesoamerican rulers and some high-ranking nobles wore diadems created from quetzal feathers, symbolically linking them to Quetzalcoatl. Since the killing of quetzals was forbidden under Maya and Aztec criminal law, the bird was merely seized, its long tail feathers plucked, and then set loose. In ancient Mayan culture, quetzal feathers were considered so precious that they were even used as a medium of exchange; hence the name of the Guatemalan currency, the quetzal. In various Mesoamerican languages, the word quetzal can also mean precious or sacred, or refer to a king, warrior, or prince.

One Mayan legend has it that a resplendent quetzal accompanied the hero Tecún Umán, prince of the Quiché (K'iche') Maya, during his battle against the Spanish conquistador Pedro de Alvarado. Tecún, equipped with just a bow and arrow, was nevertheless able to incapacitate Alvarado's horse on the first strike. Alvarado was then given a second horse and counter-charged against Tecún, running him through the chest with a spear. A quetzal flew down and alighted on Tecún's body, drenching its chest in his blood. It was then that the species, which had previously been completely green, obtained its characteristic red chest feathers. From that day on, according to the legend, the quetzal, which sang delightfully before the Spanish conquest, has been mute; it will sing anew only when the land is fully liberated.
https://en.wikipedia.org/wiki/Utility%20pole
Utility pole
A utility pole, commonly referred to as a transmission pole, telephone pole, telecommunication pole, power pole, hydro pole, telegraph pole, or telegraph post, is a column or post used to support overhead power lines and various other public utilities, such as electrical cable, fiber optic cable, and related equipment such as transformers and street lights, depending on its application. They are used for two different types of power lines: subtransmission lines, which carry higher-voltage power between substations, and distribution lines, which distribute lower-voltage power to customers.

Electrical wires and cables are routed overhead on utility poles as an inexpensive way to keep them insulated from the ground and out of the way of people and vehicles. Utility poles are usually made of wood, aluminum alloy, steel, concrete, or composites like fiberglass. A Stobie pole is a multi-purpose pole made of two steel joists held apart by a slab of concrete in the middle, generally found in South Australia.

The first poles were used in 1843 by telegraph pioneer William Fothergill Cooke, who used them on a line along the Great Western Railway. Utility poles were first used in the mid-19th century in America with telegraph systems, starting with Samuel Morse, who attempted to bury a line between Baltimore and Washington, D.C., but moved it above ground when this system proved faulty. Today, underground distribution lines are increasingly used as an alternative to utility poles in residential neighborhoods, due to poles' perceived ugliness, as well as safety concerns in areas with large amounts of snow or ice build-up. Underground lines have also been suggested for areas prone to hurricanes and blizzards as a way to reduce power outages.

Use

Utility poles are commonly used to carry two types of electric power lines: distribution lines (or "feeders") and subtransmission lines. Distribution lines carry power from local substations to customers. They generally carry voltages from 4.6 to 33 kilovolts (kV) for distances up to , and include transformers to step the voltage down from the primary voltage to the lower secondary voltage used by the customer. A service drop carries this lower voltage to the customer's premises.

Subtransmission lines carry higher-voltage power from regional substations to local substations. They usually carry 46 kV, 69 kV, or 115 kV for distances up to . 230 kV lines are often supported on H-shaped towers made with two or three poles. Transmission lines carrying voltages above 230 kV are usually not supported by poles, but by metal pylons (known as transmission towers in the US). For economic or practical reasons, such as to save space in urban areas, a distribution line is often carried on the same poles as a subtransmission line but mounted under the higher-voltage lines, a practice called "underbuild". Telecommunication cables may be carried on the same poles that support power lines (poles shared in this fashion are known as joint-use poles) or may have their own dedicated poles.

Description

The standard utility pole in the United States is about tall and is buried about in the ground. In order to meet clearance regulations, however, poles can reach heights of at least 120 feet (40 meters). They are typically spaced about apart in urban areas, or about in rural areas, but distances vary widely based on terrain. Joint-use poles are usually owned by one utility, which leases space on the pole for other utilities' cables.
In the United States, the National Electrical Safety Code, published by the Institute of Electrical and Electronics Engineers (IEEE) (not to be confused with the National Electrical Code published by the National Fire Protection Association [NFPA]), sets the standards for the construction and maintenance of utility poles and their equipment.

Pole materials

Most utility poles are made of wood, pressure-treated with some type of preservative for protection against rot, fungi, and insects. Southern yellow pine is the most widely used species in the United States; however, many species of long, straight trees are used to make utility poles, including Douglas fir, jack pine, lodgepole pine, western red cedar, and Pacific silver fir. Traditionally, the preservative used was creosote, but due to environmental concerns, alternatives such as pentachlorophenol, copper naphthenate, and borates are becoming widespread in the United States. In the United States, standards for wood preservative materials and wood preservation processes, along with test criteria, are set by ANSI, ASTM, and American Wood Protection Association (AWPA) specifications. Despite the preservatives, wood poles decay and have a life of approximately 25 to 50 years depending on climate and soil conditions, and therefore require regular inspection and remedial preservative treatments. Woodpecker damage to wood poles is the most significant cause of pole deterioration in the U.S. Other common utility pole materials are aluminum, steel, and concrete, with composites (such as fiberglass) also becoming more prevalent. One particular patented utility pole variant used in Australia is the Stobie pole, made up of two vertical steel posts with a slab of concrete between them.

Power distribution wires and equipment

On poles carrying both electrical and communications wiring, the electric power distribution lines and associated equipment are mounted at the top of the pole above the communication cables, for safety. The vertical space on the pole reserved for this equipment is called the supply space. The wires themselves are usually uninsulated and supported by insulators, commonly mounted on a horizontal beam (crossarm). Power is transmitted using the three-phase system, with three wires, or phases, labeled "A", "B", and "C". Subtransmission lines comprise only these three wires, plus sometimes an overhead ground wire (OGW), also called a "static line" or a "neutral", suspended above them. The OGW acts like a lightning rod, providing a low-resistance path to ground and thus protecting the phase conductors from lightning.

Distribution lines use two systems, either grounded-wye ("Y" on electrical schematics) or delta (Greek letter "Δ" on electrical schematics). A delta system requires only one conductor for each of the three phases. A grounded-wye system requires a fourth conductor, the neutral, whose source is the center of the "Y" and which is grounded. However, "spur lines" branching off the main line to provide power to side streets often carry only one or two phase wires, plus the neutral. A wide range of standard distribution voltages are used, from 2,400 V to 34,500 V. On poles near a service drop, there is a pole-mounted step-down distribution transformer to transform the high distribution voltage to the lower secondary voltage provided to the customer. In North America, service drops provide 240/120 V split-phase power for residential and light commercial service, using cylindrical single-phase transformers.
In Europe and most other countries, 230 V three-phase (230Y400) service drops are used; in such a system the line-to-line voltage is √3 times the 230 V phase voltage, or about 400 V. The transformer's primary is connected to the distribution line through protective devices called fuse cutouts. In the event of an overload, the fuse melts and the device pivots open to provide a visual indication of the problem. Cutouts can also be opened manually by linemen using a long insulated rod called a hot stick to disconnect the transformer from the line.

The pole may be grounded with a heavy bare copper or copper-clad steel wire running down the pole, attached to the metal pin supporting each insulator, and connected at the bottom to a metal rod driven into the ground. Some countries ground every pole, while others ground only every fifth pole and any pole with a transformer on it. This provides a path to ground for leakage currents across the surface of the insulators, preventing the current from flowing through the wooden pole, which could create a fire or shock hazard. It provides similar protection in the case of flashovers and lightning strikes. A surge arrester (also called a lightning arrester) may also be installed between the line (ahead of the cutout) and the ground wire for lightning protection. The purpose of the device is to conduct extremely high voltages present on the line directly to ground. If uninsulated conductors touch due to wind or fallen trees, the resultant sparks can start wildfires. To reduce this problem, aerial bundled conductors are being introduced.

Communication cables

The communications cables are attached below the electric power lines, in a vertical space along the pole designated the communications space. The communications space is separated from the lowest electrical conductor by the communication worker safety zone, which provides room for workers to maneuver safely while servicing the communication cables, avoiding contact with the power lines. The most common communication cables found on utility poles are copper or fiber-optic cable (FOC) for telephone lines and coaxial cable for cable television (CATV). Coaxial or optical fiber cables linking computer networks are also increasingly found on poles in urban areas. The cable linking the telephone exchange to local customers is a thick cable lashed to a thin supporting cable, containing hundreds of twisted-pair subscriber lines. Each twisted-pair line provides a single telephone circuit or local loop to a customer. There may also be FOCs interconnecting telephone exchanges. Like electrical distribution lines, communication cables connect to service drops when used to provide local service to customers.

Other equipment

Utility poles may also carry other equipment such as street lights, supports for traffic lights and overhead wires for electric trolleys, and cellular network antennas. They can also carry fixtures and decorations for holidays or events specific to the city where they are located. Solar panels mounted on utility poles may power auxiliary equipment where the expense of a power line connection is unwanted. Streetlights and holiday fixtures are powered directly from secondary distribution.

Pole attachment hardware

The primary purpose of pole attachment hardware is to secure the cable and associated aerial plant facilities to poles and to help facilitate necessary plant rearrangements.
An aerial plant network requires high-quality, reliable hardware to:
- structurally support the distribution cable plant
- provide directional guying to accommodate lateral stresses created on the pole by pole line and pole loading configurations
- provide physical support and protection for drop cable plant from the pole to the customer premises
- transition cable plant from the aerial network to underground and buried plant
- provide the means for safe and effective grounding, bonding, and isolation connections for the metallic and dielectric components of the network.

Functional performance requirements common to pole line hardware for utility poles made of wood, steel, concrete, or fiber-reinforced composite (FRC) materials are contained in Telcordia GR-3174, Generic Requirements for Hardware Attachments for Utility Poles.

Attachment hardware by pole type

Wood poles

The traditional wood pole material provides great flexibility during placement of hardware and cable apparatus. Holes are easily drilled to fit exact hardware needs and requirements. In addition, fasteners such as lags and screws are easily applied to wood structures to support outside plant (OSP) apparatus.

Non-wood poles

There are three main non-wood pole materials and structures on which the attachment hardware may be mounted: concrete, steel, and fiber-reinforced composite (FRC). Each material has intrinsic characteristics that need to be considered during the design and manufacture of the attachment hardware.

Concrete poles

The most widespread use of concrete poles is in marine environments and coastal zones, where excellent corrosion resistance is required to reduce the impact of sea water, salt fog, and corrosive soil conditions (e.g., marsh). Their heavy weight also helps concrete poles resist the high winds possible in coastal areas. The various designs for concrete poles include tapered structures and round poles made of solid concrete; pre-stressed concrete (spun-cast or statically cast); and a hybrid of concrete and steel. The drilling of installed concrete poles is not feasible, so users may wish to have the attachment hardware cast into the concrete during pole manufacture. As a result of these operational difficulties, banded hardware has become the more popular means of attaching cable plant to concrete poles. Design criteria and requirements for concrete poles can be derived from various industry documents including, but not limited to, ASCE-111, ACI-318, ASTM C935, and ASTM C1089.

Steel poles

Steel poles can provide advantages for high-voltage lines, where taller poles are required for enhanced clearances and longer span requirements. Tubular steel poles are typically made from 11-gauge galvanized steel, with thicker 10- or 7-gauge material used for some taller poles because of its higher strength and rigidity. For tall tower-type structures, 5-gauge material is used. Although steel poles can be drilled on-site with an annular drill bit or standard twist drill, this is not a recommended practice. As with concrete poles, bolt holes can be built into the steel pole during manufacture for use as general attachment points or places for steps to be bolted onto the pole. Welding attachment hardware or attachment ledges to steel poles may be a feasible alternative approach to providing reliable attachment points; however, the operational and practical hazards of welding in the field may make this process undesirable or uneconomical.
Steel poles should meet industry specifications such as TIA/EIA-222-G, Structural Standard for Antenna Supporting Structures and Antennas (current); TIA/EIA-222, Structural Standards for Steel; and TIA/EIA-RS-222; or an equivalent requirement set, to help ensure a robust, good-quality pole is being used.

Fiber-reinforced composite (FRC) poles

FRC poles cover a family of pole materials that combine fiberglass (fiber) strength members with a cross-linked polyester resin and a variety of chemical additives to produce a lightweight, weather-resistant structure. FRC poles are hollow and similar to tubular steel poles, with a typical wall thickness of and an outer polyurethane coating that is only ~ thick. As with all the other non-wood poles, FRC poles cannot be climbed with the traditional climbing hardware of hooks and gaffs. FRC poles can be pre-drilled by the manufacturer, or holes can be drilled on site. Attachments using lag bolts, teeth, nails, and staples are unacceptable for FRC poles; through-bolts are used instead of lag bolts for maximum bonding to the pole and to avoid loosening of hardware. The relevant industry documents covering FRC poles include ASTM D4923, ANSI C136.20, OPCS-03-02, and Telcordia GR-3159, Generic Requirements for Fiber-Reinforced Composite (FRC), Concrete, and Steel Utility Poles.

Access

In some countries, such as the United Kingdom, utility poles have sets of brackets arranged in a standard pattern up the pole to act as hand and foot holds, so that maintenance and repair workers can climb the pole to work on the lines. In the United States, such steps have been determined to be a public hazard and are no longer allowed on new poles. Linemen may use climbing spikes called gaffs to ascend wooden poles without steps on them. In the UK, boots fitted with steel loops that go around the pole (known as "Scandinavian Climbers") are also used for climbing poles. In the US, linemen use bucket trucks for the vast majority of poles that are accessible by vehicle.

Dead-end poles

The poles at the end of a straight section of utility line, where the line ends or angles off in another direction, are called dead-end poles in the United States. Elsewhere they may be referred to as anchor or termination poles. These must carry the lateral tension of the long straight sections of wire, so they are usually of heavier construction. The power lines are attached to the pole by horizontal strain insulators, either placed on crossarms (which are doubled, tripled, or replaced with a steel crossarm to provide more resistance to the tension forces) or attached directly to the pole itself.

Dead-end and other poles that support lateral loads have guy wires to support them. The guys always have strain insulators inserted in their length, to prevent any high voltages caused by electrical faults from reaching the lower portion of the cable, which is accessible by the public. In populated areas, guy wires are often encased in a yellow plastic or wood tube with reflectors attached at the lower end, so that they can be seen more easily, reducing the chance of people and animals walking into them or vehicles crashing into them. Another means of providing support for lateral loads is a push brace pole, a second, shorter pole attached to the side of the first and running at an angle to the ground. If there is no space for a lateral support, a stronger pole, e.g. one of concrete or iron construction, is used.
History

The system of suspending telegraph wires from poles with ceramic insulators was invented and patented by British telegraph pioneer William Fothergill Cooke. Cooke was the driving force in establishing the electrical telegraph on a commercial basis. With Charles Wheatstone he invented the Cooke and Wheatstone telegraph and founded the world's first telegraph company, the Electric Telegraph Company. Telegraph poles were first used on the Great Western Railway in 1843, when the Cooke and Wheatstone telegraph line was extended to Slough. The line had previously used buried cables, but that system had proved troublesome, with failing insulation. In Britain, the trees used for telegraph poles were either native larch or pine from Sweden and Norway. Poles in early installations were treated with tar, but these were found to last only around seven years. Later poles were treated instead with creosote or copper sulphate as the preservative.

Utility poles were first used in the mid-19th century in America with telegraph systems. In 1844, the United States Congress granted Samuel Morse $30,000 to build a 40-mile telegraph line between Baltimore, Maryland, and Washington, D.C. Morse began by having a lead-sheathed cable made. After laying it underground, he tested it, and found so many faults with this system that he dug up his cable, stripped off its sheath, bought poles, and strung his wires overhead. On February 7, 1844, Morse inserted the following advertisement in the Washington newspaper: "Sealed proposals will be received by the undersigned for furnishing 700 straight and sound chestnut posts with the bark on and of the following dimensions to wit: 'Each post must not be less than eight inches in diameter at the butt and tapering to five or six inches at the top. Six hundred and eighty of said posts to be 24 feet in length, and 20 of them 30 feet in length.'"

In some parts of Australia, wooden poles are rapidly destroyed by termites, so metal poles must be used instead; in much of the interior, wooden poles are also vulnerable to fire. The Oppenheimer pole is a collapsible wrought iron pole in three sections. It is named after Oppenheimer and Company in Germany, but the poles were mostly manufactured in England under license. They were used on the Australian Overland Telegraph Line, built in 1872, which connected the continent north to south directly through the centre and linked to the rest of the world through a submarine cable at Darwin. The Stobie pole was invented in 1924 by James Cyril Stobie of the Adelaide Electric Supply Company and first used in South Terrace, Adelaide.

One of the early Bell System lines was the Washington DC–Norfolk line, which was, for the most part, made of square-sawn tapered poles of yellow pine, probably treated to refusal with creosote. "Treated to refusal" means that the manufacturer forces preservative into the wood until it refuses to accept more, but performance is not guaranteed. Some of these poles were still in service after 80 years.

The building of pole lines was resisted in some urban areas in the late 19th century, and political pressure for undergrounding remains powerful in many countries. In Eastern Europe, Russia, and developing countries, many utility poles still carry bare communication wires mounted on insulators, not only along railway lines but also along roads and sometimes even in urban areas. Errant traffic being uncommon on railways, their poles are usually shorter.
In the United States, electricity is predominantly carried on unshielded aluminum conductors wound around a solid steel core and affixed to rated insulators made from glass, ceramic, or polymer. Telephone, CATV, and FOCs are generally attached directly to the pole without insulators. In the United Kingdom, much of the rural electricity distribution system is carried on wooden poles. These normally carry electricity at 11 or 33 kV (three phases) from 132 kV substations supplied from pylons to distribution substations or pole-mounted transformers. Wooden poles have also been used for 132 kV lines since the early 1980s; one such design is called the trident. These are usually used on short sections, though the line from Melbourne, Cambridgeshire, to near Buntingford, Hertfordshire, is quite long. The conductors on these are bare metal connected to the posts by insulators. Wood poles can also be used for low-voltage distribution to customers.

Today, utility poles may hold much more than the uninsulated copper wire that they originally supported. Thicker cables holding many twisted pairs, coaxial cable, or even optical fibre may be carried. Simple analogue repeaters or other outside plant equipment have long been mounted against poles, and new digital equipment for multiplexing/demultiplexing or digital repeaters may now be seen. In many places, providers of electricity, television, telephone, street light, traffic signal, and other services share poles, either in joint ownership or by renting space to each other.

In the United States, ANSI standard O5.1.2008 governs wood pole sizes and strength loading. Utilities that fall under the Rural Electrification Act must also follow the guidelines set forth in RUS Bulletin 1724E-150 (from the US Department of Agriculture) for pole strength and loading. Steel utility poles are becoming more prevalent in the United States thanks to improvements in engineering and corrosion prevention coupled with lowered production costs. However, premature failure due to corrosion is a concern when compared to wood. The National Association of Corrosion Engineers (NACE) is developing inspection, maintenance, and prevention procedures, similar to those used on wood utility poles, to identify and prevent decay.

Markings

Pole brandings

British Telecom posts are usually marked with the following information:
- 'BT' – marking it as a British Telecom UK pole (this can also be PO (Post Office) or GPO (General Post Office), depending on the age of the pole)
- a horizontal line marking 3 metres from the bottom of the pole
- the pole length, typically 8 to 10 metres, and size: 9L denotes a 9-metre, light pole; other letters used are 'M' (medium) and 'S' (stout)
- the year of treatment, and therefore generally the year of installation
- the batch and type of wood used
- the date of the last official inspection
- an alphanumeric designation, e.g. DP 242, where DP is an initialism of Distribution Point
- if relevant, a red D plate meaning 'Dangerous', indicating that the pole is structurally unsafe to climb or is too close to other hazards.

The date on the pole is applied by the manufacturer and refers to the date the pole was "preserved" (treated to withstand the elements). In the United States, utility poles are marked with information concerning the manufacturer, pole height, ANSI strength class, wood species, original preservative, and year manufactured (vintage), in accordance with ANSI standard O5.1.2008.
This marking is called branding, as it is usually burned into the surface; the resulting mark is sometimes called the "birth mark". Although the position of the brand is determined by ANSI specification, it is essentially just below "eye level" after installation. As a rule of thumb for reading a pole's brand, the manufacturer's name or logo is at the top, with a two-digit date beneath (sometimes preceded by a month). Below the date is a two-character wood species abbreviation and a one- to three-character preservative abbreviation. Wood species may be marked "SP" for southern pine, "WC" for western cedar, or "DF" for Douglas fir. Common preservative abbreviations are "C" for creosote, "P" for pentachlorophenol, and "SK" for chromated copper arsenate (which originally referred to salts type K). The next line of the brand is usually the pole's ANSI class, used to determine maximum load; this number ranges from 10 to H6, with a smaller number meaning higher strength. The pole's height (from butt to top) in 5-foot increments is usually to the right of the class, separated by a hyphen, although it is not uncommon for older brands to have the height on a separate line. The pole brand is sometimes an aluminum tag nailed in place.

Before the practice of branding, many utilities would set a 2- to 4-digit date nail into the pole upon installation. The use of date nails went out of favor during World War II due to war shortages, but a few utilities still use them. These nails are considered valuable to collectors; older dates are more valuable, and unique markings, such as the utility's name, also increase the value. Regardless of their value to collectors, however, all attachments on a utility pole are the property of the utility company, and unauthorized removal is a misdemeanor or felony (California state law is cited as an example).

Coordinates on pole tags

A practice in some areas is to place poles on coordinates upon a grid. For example, a Delmarva Power pole in a rural area of the state of Maryland in the United States carries four tags. The lower two tags are the "X" and "Y" coordinates along said grid; just as in a coordinate plane used in geometry, X increases as one travels east and Y increases as one travels north. The upper two tags are specific to the subtransmission section of the pole; the first refers to the route number, the second to the specific pole along the route. However, not all power lines follow the road. In the East Anglia region of Britain, EDF Energy Networks often add the Ordnance Survey Grid Reference coordinates of the pole or substation to the name sign. In some areas, utility pole name plates may thus provide valuable coordinate information: a poor man's GPS.

Pole route

A pole route (or pole line in the US) is a telephone link or electrical power line between two or more locations, carried as multiple uninsulated wires suspended between wooden utility poles. This method of linking is common especially in rural areas, where burying the cables would be expensive. Another situation in which pole routes were extensively used was on the railways, to link signal boxes. Traditionally, prior to around 1965, pole routes were built with open wires along non-electrically operated railways; the wires required insulation where they passed over each pole, to prevent the signal from becoming attenuated. On electrically operated railways, pole routes were usually not built, as too much interference from the overhead wire would occur.
To provide this insulation, cables were separated using spars with insulators spaced along them; in general, four insulators were used per spar. Only one such pole route still exists on the UK rail network, in the Highlands of Scotland. There was also a long section in place between Wymondham, Norfolk, and Brandon in Suffolk, United Kingdom; however, this was de-wired and removed during March 2009.

Environmental impact

Utility poles are used by birds for nesting and resting. Utility poles and related structures are regarded by some as a form of visual pollution. Many lines are placed underground for this reason, in places of high population density or scenic beauty that justify the expense. Architects design some pylons to be aesthetically pleasing, thus avoiding visual pollution. Some chemicals used to preserve wood poles, including creosote and pentachlorophenol, are toxic and have been found in the environment. The considerable improvement in weathering resistance offered by creosote infusion thus has long-term drawbacks: in recent years, concerns have been raised about the toxicity of creosote-treated wood waste, such as utility poles. Specifically, its biodegradation can release phenolic compounds, which are considered toxic, into soil. Research continues to explore methods to render this waste safe for disposal. Historically, pole-mounted transformers were filled with a polychlorinated biphenyl (PCB) liquid. PCBs persist in the environment and have adverse effects on animals.
Technology
Electricity transmission and distribution
null
4955826
https://en.wikipedia.org/wiki/Ara%20%28bird%29
Ara (bird)
Ara is a Neotropical genus of macaws with eight extant species and at least two extinct species. The genus name was coined by French naturalist Bernard Germain de Lacépède in 1799. It gives its name to, and is part of, the Arini, a tribe of Neotropical parrots. The genus name Ara is derived from the Tupi word ará, an onomatopoeia of the sound a macaw makes. The Ara macaws are large, striking parrots with long tails, long narrow wings and vividly coloured plumage. They all have a characteristic bare face patch around the eyes. Males and females have similar plumage. Many of its members are popular in the pet trade, and bird smuggling is a threat to several species. Taxonomy The genus Ara was erected by the French naturalist Bernard Germain de Lacépède in 1799. The type species was designated as the scarlet macaw (Ara macao) by Robert Ridgway in 1916. The genus name is from ará, meaning "macaw" in the Tupi language of Brazil. The word is an onomatopoeia based on the sound of their call. For many years the genus contained additional species, but it was later split to create three additional genera: Orthopsittaca, Primolius, and Diopsittaca. Orthopsittaca and Diopsittaca are monotypic and are morphologically and behaviourally different, whereas the three Primolius macaws are green and smaller. There are eight surviving species, two extinct species that died out during modern times, and a third extinct species known only from subfossil remains. The last confirmed sighting of the extinct Cuban macaw was in 1864, when one was shot. Several skins of the Cuban macaw are preserved in museums, but none of its eggs have survived. Several hypothetical extinct species of the genus Ara have been postulated based on very little evidence. They may have been distinct species, or parrots of known species that had been imported onto an island and were later presumed to have a separate identity. Morphology and appearance The Ara macaws are large parrots, ranging in size from the chestnut-fronted macaw, at 285 to 287 g (10 oz) in weight, to the considerably larger green-winged macaw. The wings of these macaws are long and narrow, which is typical for parrot species that travel long distances in order to forage. They have a massive, downward-curved upper mandible and a patch of pale skin around the eye that extends to the base of the beak. The skin patch bears minute feathers arranged in lines that form a pattern over the otherwise bare skin in all species of the genus except the scarlet macaw, in which the skin is bare. In most species the bill is black, but the scarlet macaw and green-winged macaw have a predominantly horn-coloured upper mandible and a black lower one. The colours in the plumage of the Ara macaws are spectacular. Four species are predominantly green, two species are mostly blue and yellow, and three species (including the extinct Cuban macaw) are mostly red. There is no sexual dimorphism in the plumage, and the plumage of juveniles is similar to that of adults, although slightly duller in some species. Distribution and habitat The Ara macaws have a Neotropical distribution from Mexico to Argentina. The centres of Ara distribution are the Amazon Basin and the Panama–Colombia border region, each with as many as four species found together (marginally five where the military macaw approaches the western Amazon). Seven species are found in Bolivia, but no single locality in that (or any other) country surpasses four species.
The most widespread species, the scarlet macaw, is (or was) distributed throughout large parts of Central America and the Amazon. On the other hand, the blue-throated macaw and the red-fronted macaw have tiny distributions in Bolivia. The overall range of many species, and of the genus as a whole, has declined in historical times due to human activities. The military macaw is distributed from northern Mexico to northern Argentina, but the distribution is discontinuous, with populations in Mexico, a large gap, then a population in the Venezuelan Coastal Range and a population along the Andes from western Venezuela to northern Argentina. The blue-and-yellow macaw was extirpated from Trinidad in the 1960s (but was later reintroduced), and several hypothetical species apparently became extinct on the islands of the Caribbean. The Ara macaws are generally fairly adaptable in their habitat requirements; this reaches its extreme in the scarlet macaw, which, as suggested by its widespread distribution, uses most habitat types from humid rainforest to open woodland to savannah. The only requirement is a sufficient number of large trees, from which they obtain their food and breeding holes. The other species are slightly narrower in their habitat choices, but the need for large trees is universal. The blue-throated macaw generally inhabits forest "islands" in the savanna, and the red-fronted macaw prefers arid scrub and cactus woodland. Within their range, birds may travel widely with the seasons in search of food. They do not undertake large-scale migrations, but instead make more local movements among a range of different habitats. Feeding and diet As in all macaws and most parrots, seeds and fruit form the major part of the diet of the genus Ara. The particular items and breadth of the diet vary from species to species. Unlike many birds, macaws are seed predators rather than seed dispersers, and use their immensely strong beaks to open even the hardest shells. Their diet overlaps with that of some monkey species; in one study of green-winged macaws in Venezuela, the macaws shared many of the same trees as bearded sakis, although in some cases they ate the seeds at an earlier stage of ripeness than the sakis, when the seeds contained more toxins. Macaws, like other parrots, may consume clay to absorb toxic compounds present in some of their foods. In addition, the toxic compounds of some foods may be neutralized by compounds, such as tannins, found in other foods consumed at the same time. Breeding Like almost all parrots, the Ara macaws are cavity nesters. The majority of species nest in cavities in trees, either live or dead. Natural holes in trees may be used, particularly those in dead trees, or otherwise holes created by other species; in Mexico, military macaws still use the cavities excavated by the now critically endangered imperial woodpecker. In addition to nesting in trees, the military macaw and green-winged macaw will also nest in natural fissures in cliffs. This nesting habitat is the only one used by the red-fronted macaw, as sufficiently large trees are absent from its arid range. Species Hypothetical extinct Ara Macaws are known to have been transported between the Caribbean islands, and from mainland South America, both in historic times by Europeans and in prehistoric times by Paleoamericans. Parrots were important to the culture of native Caribbean peoples. The birds were traded between islands, and were among the gifts offered to Christopher Columbus when he reached the Bahamas in 1492.
It is therefore difficult to determine whether the numerous historical records of macaws on these islands refer to distinct, endemic species, since they could have been escaped individuals or feral populations of foreign macaws of known species that had been transported there. As many as thirteen extinct macaw species have at times been suggested to have lived on the islands until relatively recently. Only three endemic Caribbean Ara macaw species are known from physical remains: the Cuban macaw (Ara tricolor) is known from nineteen museum skins and subfossils, the Saint Croix macaw (Ara autochthones) is known only from subfossils, and the Lesser Antillean macaw (Ara guadeloupensis) is known from subfossils and reports. No endemic Caribbean macaws remain today, and they were likely all driven to extinction by humans, some in historic and others in prehistoric times. In addition to the three species known from remains, several hypothetical extinct Ara macaws were based only on contemporary accounts and are considered dubious today. Many of these were named in the early 20th century by Walter Rothschild, who had a tendency to name species based on little tangible evidence. Among others, the red-headed macaw (Ara erythrocephala) and Jamaican red macaw (Ara gossei) were named for accounts of macaws on Jamaica, the Martinique macaw (Ara martinicus) was said to be from Martinique, and the Dominican green-and-yellow macaw (Ara atwoodi) was supposed to come from Dominica. Other species have been mentioned as well, but many never received binomials, or are considered junior synonyms of other species. Woods and Steadman defended the validity of most named Caribbean macaw species, and believed each Greater and Lesser Antillean island had its own endemic species. Olson and Maíz doubted the validity of all the hypothetical macaws, but suggested that the island of Hispaniola would be the most likely place for another macaw species to have existed, due to its large land area, though no descriptions or remains of such a bird are known. They suggested such a species could have been driven to extinction prior to the arrival of Europeans. The identity and distribution of indigenous macaws in the Caribbean are only likely to be further resolved through palaeontological discoveries and the examination of contemporary reports and artwork.
Biology and health sciences
Psittaciformes
null
4958097
https://en.wikipedia.org/wiki/55%20Cancri%20e
55 Cancri e
55 Cancri e (abbreviated 55 Cnc e, also known as Janssen) is an exoplanet orbiting a Sun-like host star, 55 Cancri A. The mass of the exoplanet is about eight Earth masses and its diameter is about twice that of the Earth. 55 Cancri e was discovered on 30 August 2004, making it the first super-Earth discovered around a main-sequence star, predating Gliese 876 d by a year. It is the innermost planet in its planetary system, taking less than 18 hours to complete an orbit. However, until the 2010 observations and recalculations, this planet had been thought to take about 2.8 days to orbit the star. Due to its proximity to its star, 55 Cancri e is extremely hot, with temperatures on the day side exceeding 3,000 K. The planet's thermal emission is observed to be variable, possibly as a result of volcanic activity. It has been proposed that 55 Cancri e could be a carbon planet. The atmosphere of 55 Cancri e has been extensively studied, with varying results. Initial studies suggested an atmosphere rich in hydrogen and helium, but later studies failed to confirm this, instead supporting an atmosphere composed of heavier molecules, possibly only a thin atmosphere of vaporized rock. Most recently, as of 2024, JWST observations have ruled out the rock-vapor atmosphere scenario and provided evidence for a substantial atmosphere rich in carbon dioxide or carbon monoxide. Name In July 2014 the International Astronomical Union (IAU) launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced that the winning name for this planet was Janssen. The winning name was submitted by the Royal Netherlands Association for Meteorology and Astronomy. It honors the spectacle-maker Zacharias Janssen, who is sometimes associated with the invention of the telescope. Discovery Like the majority of extrasolar planets found prior to the Kepler mission, 55 Cancri e was discovered by detecting variations in its star's radial velocity. This was achieved by making sensitive measurements of the Doppler shift of the spectrum of 55 Cancri A. At the time of its discovery, three other planets were known to orbit the star. After accounting for these planets, a signal at around 2.8 days remained, which could be explained by a planet of at least 14.2 Earth masses in a very close orbit. The same measurements were used to confirm the existence of the uncertain planet 55 Cancri c. 55 Cancri e was one of the first extrasolar planets with a mass comparable to that of Neptune to be discovered. It was announced at the same time as Gliese 436 b, another "hot Neptune" orbiting the red dwarf star Gliese 436. Planet challenged In 2005, the existence of planet e was questioned by Jack Wisdom in a reanalysis of the data. He suggested that the 2.8-day planet was an alias and, separately, that there was a 260-day planet in orbit around 55 Cancri. In 2008, Fischer et al. published a new analysis that appeared to confirm the existence of both the 2.8-day planet and the 260-day planet. However, the 2.8-day planet was shown to be an alias by Dawson and Fabrycky in 2010; its true period was 0.7365 days. Transit The planet's transit of its host star was announced on 27 April 2011, based on two weeks of nearly continuous photometric monitoring with the MOST space telescope. The transits occur with the period (0.74 days) and phase that had been predicted by Dawson and Fabrycky.
This is one of the few planetary transits to be confirmed around a well-known star, and it allowed investigations into the planet's composition. Orbit and rotation 55 Cancri e orbits very close to its parent star; with an average orbital distance of 0.01544 ± 0.00005 AU, it takes only 18 hours to complete an orbit. Analysis of its transits reveals that its orbital inclination is about 83.6°, and the orbit appears to be close to alignment with the rotation of its parent star, with an obliquity of about 23°, favouring dynamically gentle inward-migration scenarios for this planet. 55 Cancri e may also be coplanar with the next planet in the system, 55 Cancri b. Due to its old age and proximity to the star, the planet is extremely likely to be tidally locked, meaning that one hemisphere, referred to as the dayside, permanently faces the star, while the other, the nightside, always faces away from it. Characteristics 55 Cancri e receives more radiation than Gliese 436 b. The side of the planet facing its star has temperatures of more than 2,000 K (approximately 1,700 degrees Celsius or 3,100 degrees Fahrenheit), hot enough to melt iron. Infrared mapping with the Spitzer Space Telescope indicated that the average day-side temperature is far higher than the average night-side temperature. Reanalysis of the Spitzer data in 2022 found an even hotter day-side temperature and set a tighter upper limit on the night-side temperature. It was initially unknown whether 55 Cancri e was a small gas giant like Neptune or a large rocky terrestrial planet. In 2011, a transit of the planet was confirmed, allowing scientists to calculate its density. At first it was suspected to be a water planet. As initial observations showed no hydrogen in its Lyman-alpha signature during transit, Ehrenreich speculated that its volatile materials might be carbon dioxide instead of water or hydrogen. An alternative possibility is that 55 Cancri e is a solid planet made of carbon-rich material rather than the oxygen-rich material that makes up the terrestrial planets in the Solar System. In this case, roughly a third of the planet's mass would be carbon, much of which may be in the form of diamond as a result of the temperatures and pressures in the planet's interior. Further observations are necessary to confirm the nature of the planet. A third hypothesis is that tidal forces, together with the orbital and rotational centrifugal forces, could partially confine a hydrogen-rich atmosphere on the nightside. Assuming an atmosphere dominated by volcanic species and a large hydrogen component, the heavier molecules could be confined within latitudes < 80°, while the volatile hydrogen is not. Because of this disparity, the hydrogen would have to slowly diffuse out onto the dayside, where X-ray and ultraviolet irradiation would destroy it. For this mechanism to have taken effect, 55 Cancri e must have become tidally locked before losing the whole of its hydrogen envelope. This model is consistent with spectroscopic measurements claiming to have detected hydrogen, and with other studies that were unable to measure a significant hydrogen-destruction rate. In February 2016, it was announced that NASA's Hubble Space Telescope had detected hydrogen cyanide, but no water vapor, in the atmosphere of 55 Cancri e, which is only possible if the atmosphere is predominantly hydrogen or helium. This was the first time the atmosphere of a super-Earth exoplanet was analyzed successfully.
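As a rough consistency check on the orbital parameters given above, Kepler's third law ties the quoted semi-major axis to the ~18-hour period. A minimal sketch, assuming a stellar mass of about 0.9 solar masses for 55 Cancri A (a value not stated in this article):

```python
import math

# Kepler's third law: P^2 = 4 * pi^2 * a^3 / (G * M)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

M_star = 0.9 * M_SUN   # assumed mass of 55 Cancri A (not given in the text)
a = 0.01544 * AU       # semi-major axis from the article

P = 2 * math.pi * math.sqrt(a**3 / (G * M_star))
print(f"Orbital period: {P / 3600:.1f} hours")  # ~17.7 h, matching the quoted ~18 h
```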
In November 2017, it was announced that infrared observations with the Spitzer Space Telescope indicated the presence of a global lava ocean obscured by an atmosphere with a pressure of about 1.4 bar, slightly thicker than that of Earth. The atmosphere may contain chemicals similar to those in Earth's atmosphere, such as nitrogen and possibly oxygen, in order to account for the infrared data observed by Spitzer. In contradiction to the February 2016 findings, a spectroscopic study in 2012 failed to detect escaping hydrogen from the atmosphere, and a spectroscopic study in 2020 failed to detect escaping helium, indicating that the planet probably has no primordial atmosphere. Atmospheres made of heavier molecules such as oxygen and nitrogen are not ruled out by these data. A study published in May 2024 used observations from the James Webb Space Telescope's Near-InfraRed Camera and Mid-Infrared Instrument to produce a thermal emission spectrum of 55 Cancri e within the range of 4 to 12 μm. These measurements ruled out the hypothesis that the planet is a lava world covered by a "tenuous atmosphere made of vaporized rock", as previously proposed, and indicated a "bona fide volatile atmosphere likely rich in CO2 or CO". The authors stated that 55 Cancri e's magma ocean could be outgassing and sustaining this atmosphere. Volcanism Large surface-temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions. By 2022, observations had shown large variability in the planet's transit depths, which can be attributed to large-scale volcanism or to the presence of a variable gas torus co-orbital with the planet. Search for radio emissions Since 55 Cancri e orbits less than 0.1 AU from its host star, some scientists hypothesized that it may cause stellar flaring synchronized to the orbital period of the exoplanet. A 2011 search for these magnetic star–planet interactions, which would result in coronal radio emissions, found no detectable signal.
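The transit observations described above can also be given a sense of scale: for a planet about twice Earth's diameter crossing a Sun-like star, the fractional dimming is tiny. A back-of-the-envelope sketch, assuming a stellar radius of about 0.94 solar radii for 55 Cancri A (an assumed value, not taken from this article):

```python
R_EARTH = 6.371e6   # Earth radius, m
R_SUN = 6.957e8     # solar radius, m

r_planet = 2.0 * R_EARTH   # "diameter about twice that of the Earth"
r_star = 0.94 * R_SUN      # assumed radius of the Sun-like 55 Cancri A

# Transit depth is the fraction of the stellar disc the planet covers.
depth = (r_planet / r_star) ** 2
print(f"Transit depth: {depth * 100:.4f}%")  # ~0.04%, a few hundred parts per million
```

A dip this small is why the detection required nearly continuous space-based photometry rather than ground-based monitoring.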
Physical sciences
Notable exoplanets
Astronomy
19980507
https://en.wikipedia.org/wiki/Hybodontiformes
Hybodontiformes
Hybodontiformes, commonly called hybodonts, are an extinct group of shark-like cartilaginous fish (chondrichthyans) which existed from the late Devonian to the Late Cretaceous. Hybodonts share a close common ancestry with modern sharks and rays (Neoselachii) as part of the clade Euselachii. They are distinguished from other chondrichthyans by their distinctive fin spines and by the cephalic spines present on the heads of males. An ecologically diverse group, they were abundant in marine and freshwater environments during the late Paleozoic and early Mesozoic, but were rare in open marine environments by the end of the Jurassic, having been largely replaced by modern sharks, though they were still common in freshwater and marginal marine habitats. They survived until the end of the Cretaceous, before going extinct. Etymology The term hybodont comes from the Greek ὕβος or ὑβός, meaning "hump" or "hump-backed", and ὀδούς/ὀδοντ-, meaning "tooth". This name was given based on their conical, compressed teeth. Taxonomic history Hybodonts were first described in the nineteenth century based on isolated fossil teeth (Agassiz, 1837). Hybodonts were first separated from living sharks by Zittel (1911). Although historically argued to have a close relationship with the modern shark order Heterodontiformes, this has been refuted. Hybodontiformes are total-group elasmobranchs and the sister group to Neoselachii, which includes modern sharks and rays. Hybodontiformes and Neoselachii are grouped together in the clade Euselachii, to the exclusion of other total-group elasmobranchs like Xenacanthiformes. Hybodonts are divided into a number of families, but the higher-level taxonomy of hybodonts, especially of Mesozoic taxa, is poorly resolved. Description Hybodonts varied considerably in size: the largest species were large-bodied animals, while other hybodonts were much smaller as adults. Hybodonts had a generally robust body form. Because their cartilaginous skeletons usually disintegrated after death, as in other chondrichthyans, hybodonts are generally described and identified from teeth and fin-spine fossils, which are more likely to be preserved. Rare partial or complete skeletons are known from areas of exceptional preservation. Hybodonts are recognized as having teeth with a prominent cusp which is higher than the lateral cusplets. Hybodont teeth are often preserved as incomplete fossils because the base of the tooth is not well attached to the crown. Hybodonts were initially divided into two groups based on their tooth shape. One group had teeth with acuminate cusps that lacked a pulp cavity; these are called osteodont teeth. The other group had a different cusp arrangement and a pulp cavity; these are called orthodont teeth. For example, the hybodont species Heteroptychodus steinmanni has osteodont teeth with vascular canals of dentine arranged vertically parallel to each other, also called "tubular dentine". The crowns of these osteodont teeth are covered with a single layer of enameloid. Hybodont teeth served a variety of functions depending on the species, including grinding, crushing (durophagy), tearing, clutching, and even cutting. Hybodonts are characterized by having two dorsal fins, each preceded by a fin spine. The fin-spine morphology is unique to each hybodont species.
The fin spines are elongate and gently curved towards the rear, with the posterior part of the spine being covered in hooked denticles, typically in two parallel rows running along the length of the spine, sometimes with a ridge between them. Part of the front of the spines is often covered in a ribbed ornamentation, while in some other hybodonts this region is covered in rows of small bumps. The spines are mineralised and primarily composed of osteodentine, while the ornamentation is formed of enamel. Similar fin spines are also found in many extinct chondrichthyan groups as well as in some modern sharks like Heterodontus and squalids. Male hybodonts had either one or two pairs of cephalic spines on their heads, a characteristic distinctive to hybodonts. These spines, while variable in placement, were always positioned posterior to the eye socket, and consisted of a base divided into three lobes and a backwardly curved main spine, which in most specimens bore a barb near the apex. These spines, like the fin spines, were mineralised, with the base composed of osteodentine, while the main part of the spine was covered in enamel. Male hybodonts possessed fin claspers used in mating, like modern sharks. Hybodonts had a fully heterocercal tail fin, in which the upper lobe was much larger than the lower one due to the vertebral column extending into it. Like living sharks and rays, the skin of hybodonts was covered with dermal denticles. Hybodonts laid egg cases, similar to those produced by living cartilaginous fish. Most hybodont egg cases are assigned to the genus Palaeoxyris, which tapers towards both ends; one end bears a tendril that attached to the substrate, and the middle section is composed of at least three twisted bands. Ecology Hybodont fossils are found in depositional environments ranging from marine to fluvial (river deposits). Many hybodonts are thought to have been euryhaline, able to tolerate a wide range of salinities. Hybodonts inhabited freshwater environments from early in their evolutionary history, from the Carboniferous onwards. Based on isotopic analysis, some species of hybodonts are likely to have lived permanently in freshwater environments, while others may have migrated between marine and freshwater environments. One genus of hybodont, Onychoselache of the Lower Carboniferous of Scotland, is suggested to have been capable of amphibious locomotion, similar to modern orectolobiform sharks such as bamboo and epaulette sharks, due to its well-developed pectoral fins. It has been suggested that male hybodonts used their cephalic spines to grip females during mating. Preserved egg cases of hybodonts assigned to Palaeoxyris indicate that at least some hybodonts laid their eggs in freshwater and brackish environments, with the eggs being attached to vegetation via a tendril. Laying of eggs in freshwater is not known in any living cartilaginous fish. At least some hybodonts are suggested to have utilized specific sites as nurseries, such as in the Triassic lake deposits of the Madygen Formation of Kyrgyzstan, where eggs of Lonchidion are suggested to have been laid on the lakeshore or in upriver areas, where the juveniles hatched and matured before migrating deeper into the lake as adults. Hybodonts are thought to have been generally relatively slow swimmers, though capable of fast bursts of locomotion.
Some hybodonts like Hybodus are thought to have been active predators capable of feeding on swiftly moving prey, with the preserved stomach contents of a specimen of Hybodus hauffianus indicating that it fed on belemnites (a group of extinct squid-like cephalopods). Hybodonts have a wide variety of tooth shapes, which suggests that they took advantage of multiple food sources. It is thought that some hybodonts with wider, flatter teeth specialized in crushing or grinding hard-shelled prey (durophagy), with some hybodonts like Asteracanthus probably consuming both hard- and soft-bodied prey. Often, multiple species of hybodonts with different prey preferences coexisted within the same ecosystem. Evolutionary history The earliest hybodont remains are from the latest Devonian (Famennian, ~360 million years ago) of Iran, belonging to the genus Roongodus, as well as remains of the same age assigned to Lissodus from Belgium. Carboniferous hybodonts include both durophagous and non-durophagous forms, while durophagous forms were dominant during the Permian period. By the Permian period, hybodonts had a global distribution. The Permian–Triassic extinction event had only a limited effect on hybodont diversity. Maximum hybodont diversity is observed during the Triassic. During the Triassic and Early Jurassic, hybodontiforms were the dominant elasmobranchs in both marine and non-marine environments. A shift in hybodont faunas is seen during the Middle Jurassic, a transition between the distinctly different assemblages of the Triassic – Early Jurassic and of the Late Jurassic – Cretaceous. As neoselachians (the group containing modern sharks and rays) diversified further during the Late Jurassic, hybodontiforms became less prevalent in open marine conditions but remained diverse in fluvial and restricted settings during the Cretaceous. Possible reasons for the replacement of hybodonts by modern sharks include the more effective locomotory and jaw mechanisms of the latter group. By the end of the Cretaceous, hybodonts had declined to only a handful of species, including members of Lonchidion and Meristodonoides. The last hybodonts disappeared, seemingly abruptly, as part of the Cretaceous–Paleogene extinction event approximately 66 million years ago. Families and genera The taxonomy of hybodonts is considered poorly resolved, so the classification presented here should not be taken as authoritative.
Lonchidiidae Herman, 1977: Baharyodon, Diplolonchidion, Vectiselachos, Hylaeobatis, Isanodus, Parvodus, Lissodus?, Lonchidion, Luopingselache, Jiaodontus, Pristrisodus
Distobatidae: Distobatus, Reticulodus, Tribodus?, Aegyptobatus
Acrodontidae: Acrodus, Strophodus?
Hybodontidae: Dicrenodus, Egertonodus, Hybodus, Meristodonoides, Planohybodus, Priohybodus, Sphenonchus, Durnonovariaodus, Crassodus
Incertae sedis: Tribodus, Strophodus, Asteracanthus, Roongodus, Polyacrodus, Palaeobates, Bdellodus, Thaiodus, Acrorhizodus, Khoratodus, Arctacanthus, Reesodus, Steinbachodus, Onychoselache, Omanoselache, Pororhiza, Mukdahanodus, Secarodus, Hamiltonichthys, Gansuselache, Dabasacanthus, Teresodus, Diablodontus, Gunnellodus?, Heteroptychodus, Lissodus, Columnaodus, Carinacanthus
Form genera: Palaeoxyris (a genus used for the egg capsules of hybodonts)
Biology and health sciences
Prehistoric chondrichthyans
Animals
508239
https://en.wikipedia.org/wiki/Trichromacy
Trichromacy
Trichromacy or trichromatism is the possession of three independent channels for conveying color information, derived from the three different types of cone cells in the eye. Organisms with trichromacy are called trichromats. The usual explanation of trichromacy is that the organism's retina contains three types of color receptors (called cone cells in vertebrates) with different absorption spectra. In actuality, the number of such receptor types may be greater than three, since different types may be active at different light intensities. In vertebrates with three types of cone cells, at low light intensities the rod cells may contribute to color vision. Humans and other animals that are trichromats Humans and some other mammals have evolved trichromacy based partly on pigments inherited from early vertebrates. In fish and birds, for example, four pigments are used for vision. These extra cone receptor visual pigments detect energy of other wavelengths, sometimes including ultraviolet. Eventually two of these pigments were lost (in placental mammals) and another was gained, resulting in trichromacy among some primates. Humans and closely related primates are usually trichromats, as are some of the females of most species of New World monkeys, and both male and female howler monkeys. Recent research suggests that trichromacy may also be quite general among marsupials. A study of trichromacy in Australian marsupials suggests that the medium-wavelength-sensitive (MWS) cones of the honey possum (Tarsipes rostratus) and the fat-tailed dunnart (Sminthopsis crassicaudata) are features derived from the inherited reptilian retinal arrangement. Trichromacy in marsupials potentially has a different evolutionary basis from that of primates. Further biological and behavioural tests may verify whether trichromacy is a common characteristic of marsupials. Most other mammals are currently thought to be dichromats, with only two types of cone (though limited trichromacy is possible at low light levels where the rods and cones are both active). Most studies of carnivores, as of other mammals, reveal dichromacy; examples include the domestic dog, the ferret, and the spotted hyena. Some species of insects (such as honeybees) are also trichromats, being sensitive to ultraviolet, blue and green instead of blue, green and red. Research indicates that trichromacy allows animals to distinguish brightly colored fruit and young leaves from other vegetation that is not beneficial to their survival. Another theory is that detecting skin flushing, and thereby mood, may have influenced the development of primate trichromatic vision. The color red also has other effects on primate and human behavior, as discussed in the color psychology article. Types of cones specifically found in primates Primates are the only placental mammals known to be trichromats. Their eyes include three different kinds of cones, each containing a different photopigment (opsin). Their peak sensitivities lie in the blue (short-wavelength S cones), green (medium-wavelength M cones) and yellow-green (long-wavelength L cones) regions of the color spectrum. S cones make up 5–10% of the cones and form a regular mosaic. Special bipolar and ganglion cells pass signals from the S cones, and there is evidence that they have a separate signal pathway through the thalamus to the visual cortex as well.
The L and M cones, on the other hand, are hard to distinguish by their shapes or other anatomical means – their opsins differ in only 15 out of 363 amino acids, so no one has yet succeeded in producing specific antibodies to them. Mollon and Bowmaker did find, however, that L cones and M cones are randomly distributed and present in equal numbers. Mechanism of trichromatic color vision Trichromatic color vision is the ability of humans and some other animals to see different colors, mediated by interactions among three types of color-sensing cone cells. The trichromatic color theory began in the 18th century, when Thomas Young proposed that color vision was the result of three different photoreceptor cells. In the middle of the 19th century, in his Treatise on Physiological Optics, Hermann von Helmholtz expanded on Young's ideas using color-matching experiments, which showed that people with normal vision needed three wavelengths to create the normal range of colors. Physiological evidence for the trichromatic theory was later given by Gunnar Svaetichin (1956). Each of the three types of cones in the retina of the eye contains a different type of photosensitive pigment, which is composed of a transmembrane protein called opsin and a light-sensitive molecule called 11-cis retinal. Each pigment is especially sensitive to a certain wavelength of light (that is, the pigment is most likely to produce a cellular response when it is hit by a photon with the specific wavelength to which it is most sensitive). The three types of cones are L, M, and S, which have pigments that respond best to light of long (especially 560 nm), medium (530 nm), and short (420 nm) wavelengths respectively. Since the likelihood of response of a given cone varies not only with the wavelength of the light that hits it but also with its intensity, the brain would not be able to discriminate different colors if it had input from only one type of cone. Thus, interaction between at least two types of cone is necessary to produce the ability to perceive color. With at least two types of cones, the brain can compare the signals from each type and determine both the intensity and color of the light. For example, moderate stimulation of a medium-wavelength cone cell could mean that it is being stimulated by very bright red (long-wavelength) light, or by not very intense yellowish-green light. But very bright red light would produce a stronger response from L cones than from M cones, while not very intense yellowish-green light would produce a stronger response from M cones than from other cones. Thus trichromatic color vision is accomplished by using combinations of cell responses. It is estimated that the average human can distinguish up to ten million different colors.
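The ambiguity argument above can be made concrete with toy numbers. The sketch below uses crude Gaussian approximations to the M and L sensitivity curves; the peak wavelengths follow the text, but the curve width and the stimulus intensities are arbitrary choices for illustration:

```python
import math

def cone_response(wavelength_nm, intensity, peak_nm, width_nm=60.0):
    """Toy univariance model: a cone's output depends only on how much
    light it absorbs, which scales with intensity and with how close
    the wavelength is to the cone's peak sensitivity."""
    return intensity * math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

stimuli = [("bright red light", 620, 3.3),
           ("dim yellow-green light", 560, 0.45)]
for label, wavelength, intensity in stimuli:
    m = cone_response(wavelength, intensity, peak_nm=530)  # M cone, peak ~530 nm
    l = cone_response(wavelength, intensity, peak_nm=560)  # L cone, peak ~560 nm
    print(f"{label}: M={m:.2f}, L={l:.2f}")

# Both stimuli drive the M cone almost identically (~0.35), so the M
# signal alone is ambiguous; comparing it with the L signal (1.21 vs
# 0.45 here) resolves which light is present.
```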
Biology and health sciences
Visual system
Biology
508455
https://en.wikipedia.org/wiki/Oxygen%20therapy
Oxygen therapy
Oxygen therapy, also referred to as supplemental oxygen, is the use of oxygen as medical treatment. Supplemental oxygen can also refer to the use of oxygen-enriched air at altitude. Acute indications for therapy include hypoxemia (low blood oxygen levels), carbon monoxide toxicity and cluster headache. It may also be given prophylactically to maintain blood oxygen levels during the induction of anesthesia. Oxygen therapy is often useful in chronic hypoxemia caused by conditions such as severe COPD or cystic fibrosis. Oxygen can be delivered via nasal cannula, face mask, or endotracheal intubation at normal atmospheric pressure, or in a hyperbaric chamber. It can also be given by bypassing the airway, as in ECMO therapy. Oxygen is required for normal cellular metabolism. However, excessively high concentrations can result in oxygen toxicity, leading to lung damage and respiratory failure. Higher oxygen concentrations can also increase the risk of airway fires, particularly while smoking. Oxygen therapy can also dry out the nasal mucosa if given without humidification. In most conditions, an oxygen saturation of 94–96% is adequate, while in those at risk of carbon dioxide retention, saturations of 88–92% are preferred. In cases of carbon monoxide toxicity or cardiac arrest, saturations should be as high as possible. While air is typically 21% oxygen by volume, oxygen therapy can increase the oxygen content of inhaled gas up to 100%. The medical use of oxygen first became common around 1917, and it is the most common hospital treatment in the developed world. It is currently on the World Health Organization's List of Essential Medicines. Home oxygen can be provided either by oxygen tanks or by an oxygen concentrator. Medical uses Oxygen is widely used by hospitals, EMS, and first-aid providers in a variety of conditions and settings. A few indications frequently requiring high-flow oxygen include resuscitation, major trauma, anaphylaxis, major bleeding, shock, active convulsions, and hypothermia. Acute conditions In the context of acute hypoxemia, oxygen therapy should be titrated to a target level based on pulse oximetry (94–96% in most patients, or 88–92% in people with COPD). This can be performed by increasing oxygen delivery, described as the FIO2 (fraction of inspired oxygen). In 2018, the British Medical Journal recommended that oxygen therapy be stopped for saturations greater than 96% and not started for saturations above 90 to 93%. This may be due to an association between excessive oxygenation in the acutely ill and increased mortality. Exceptions to these recommendations include carbon monoxide poisoning, cluster headaches, sickle cell crisis, and pneumothorax. Oxygen therapy has also long been used as an emergency treatment for decompression sickness. Recompression in a hyperbaric chamber with 100% oxygen is the standard treatment for decompression illness. The success of recompression therapy is greatest if given within four hours after resurfacing, with earlier treatment associated with a decreased number of recompression treatments required for resolution. It has been suggested in the literature that heliox may be a better alternative to oxygen therapy. In the context of stroke, oxygen therapy may be beneficial as long as hyperoxic environments are avoided. People receiving outpatient oxygen therapy for hypoxemia following acute illness or hospitalization should be re-assessed by a physician prior to prescription renewal to gauge the necessity of ongoing oxygen therapy.
If the initial hypoxemia has resolved, additional treatment may be an unnecessary use of resources. Chronic conditions Common conditions which may require a baseline of supplementary oxygen include chronic obstructive pulmonary disease (COPD), chronic bronchitis, and emphysema. Patients may also require additional oxygen during acute exacerbations. Oxygen may also be prescribed for breathlessness, end-stage cardiac failure, respiratory failure, advanced cancer, or neurodegenerative disease, in spite of relatively normal blood oxygen levels. Physiologically, it may be indicated in people with an arterial oxygen partial pressure PaO2 ≤ 55 mmHg (7.3 kPa) or an arterial oxygen saturation SaO2 ≤ 88%. Careful titration of oxygen therapy should be considered in patients with chronic conditions predisposing them to carbon dioxide retention (e.g., COPD, emphysema). In these instances, oxygen therapy may decrease respiratory drive, leading to accumulation of carbon dioxide (hypercapnia), acidemia, and increased mortality secondary to respiratory failure. Improved outcomes have been observed with titrated oxygen treatment, largely due to gradual improvement of the ventilation/perfusion ratio. The risks associated with loss of respiratory drive are far outweighed by the risks of withholding emergency oxygen, so emergency administration of oxygen is never contraindicated. Transfer from the field to definitive care with titrated oxygen typically occurs long before significant reductions to the respiratory drive are observed. Contraindications There are certain situations in which oxygen therapy has been shown to negatively impact a person's condition. Oxygen therapy can exacerbate the effects of paraquat poisoning and should be withheld unless severe respiratory distress or respiratory arrest is present. Paraquat poisoning is rare, with about 200 deaths globally from 1958 to 1978. Oxygen therapy is not recommended for people with pulmonary fibrosis or bleomycin-associated lung damage. ARDS caused by acid aspiration may be exacerbated by oxygen therapy, according to some animal studies. Hyperoxic environments should be avoided in cases of sepsis. Adverse effects In some instances, oxygen delivery can lead to particular complications in population subsets. In infants with respiratory failure, administration of high levels of oxygen can sometimes promote overgrowth of new blood vessels in the eye, leading to blindness. This phenomenon is known as retinopathy of prematurity (ROP). In rare instances, people receiving hyperbaric oxygen therapy (HBOT) have had seizures, which has previously been attributed to oxygen toxicity. There is some evidence that extended HBOT can accelerate the development of cataracts. Alternative medicine Some practitioners of alternative medicine have promoted "oxygen therapy" as a cure for many human ailments, including AIDS, Alzheimer's disease and cancer. According to the American Cancer Society, "available scientific evidence does not support claims that putting oxygen-releasing chemicals into a person's body is effective in treating cancer", and some of these treatments can be dangerous. Physiologic effects Oxygen supplementation has a variety of physiologic effects on the human body. Whether or not these effects are adverse to a patient depends on the clinical context. A state in which an excess amount of oxygen is available to organs is known as hyperoxia.
While the following effects may be observed with noninvasive high-dose oxygen therapy (i.e., not ECMO), delivery of oxygen at higher pressures is associated with exacerbation of the effects described below. Absorption atelectasis It has been hypothesized that oxygen therapy may promote accelerated development of atelectasis (partial or complete lung collapse), as well as denitrogenation of gas cavities (e.g., pneumothorax, pneumocephalus). This concept is based on the idea that oxygen is absorbed more quickly than nitrogen within the body, so that gas in poorly ventilated, oxygen-rich regions is rapidly absorbed, leading to atelectasis. Higher fractions of inhaled oxygen (FIO2) are thought to be associated with increasing rates of atelectasis in the clinical scenario. In clinically healthy adults, absorption atelectasis is believed to have no significant implications when managed properly. Airway inflammation In regard to the airway, both tracheobronchitis and mucositis have been observed with high levels of oxygen delivery (typically >40% O2). Within the lungs, these elevated concentrations of oxygen have been associated with increased alveolar toxicity (termed the Lorrain Smith effect). Mucosal damage is observed to increase with elevated atmospheric pressure and oxygen concentrations, which may result in the development of ARDS and possibly death. Central nervous system effects Decreased cerebral blood flow and intracranial pressure (ICP) have been reported in hyperoxic conditions, with mixed results regarding the impact on cognition. Hyperoxia has also been associated with seizures, cataract formation, and reversible myopia. Hypercapnia Among carbon dioxide retainers, excess exposure to oxygen, via the Haldane effect, causes decreased binding of CO2 to deoxyhemoglobin in the blood. This unloading of CO2 may contribute to the development of acid–base disorders due to the associated increase in PaCO2 (hypercapnia). Patients with underlying lung disease such as COPD may not be able to adequately clear the additional CO2 produced by this effect, worsening their condition. In addition, oxygen therapy has also been shown to decrease respiratory drive, further contributing to possible hypercapnia. Immunological effects Hyperoxic environments have been observed to decrease granulocyte rolling and diapedesis in specific circumstances in humans. In regard to anaerobic infections, cases of necrotizing fasciitis have been observed to require fewer debridement operations and to show improved mortality in patients treated with hyperbaric oxygen therapy. This may stem from the oxygen intolerance of otherwise anaerobic microorganisms. Oxidative stress Sustained exposure to oxygen may overwhelm the body's capacity to deal with oxidative stress. The rate of oxidative stress appears to be influenced by both oxygen concentration and length of exposure, with general toxicity observed to occur within hours in certain hyperoxic conditions. Reduction in erythropoiesis Hyperoxia is observed to result in a reduction in serum erythropoietin, resulting in a reduced stimulus for erythropoiesis. Hyperoxia in normobaric environments does not appear to be able to halt erythropoiesis completely. Pulmonary vasodilation Within the lungs, hypoxia is observed to be a potent pulmonary vasoconstrictor, due to inhibition of an outward potassium current and activation of an inward sodium current, leading to pulmonary vascular muscular contraction.
However, hyperoxia does not seem to have a particularly strong vasodilatory effect in the few studies that have been performed on patients with pulmonary hypertension. As a result, an effect appears to be present but minor. Systemic vasoconstriction In the systemic vasculature, oxygen serves as a vasoconstrictor, leading to mildly increased blood pressure and decreased cardiac output and heart rate. Hyperbaric conditions do not seem to have a significant impact on these overall physiologic effects. Clinically, this may lead to increased left-to-right shunting in certain patient populations, such as those with an atrial septal defect. While the mechanism of the vasoconstriction is unknown, one proposed theory is that increased reactive oxygen species from oxygen therapy accelerate the degradation of endothelial nitric oxide, a vasodilator. These vasoconstrictive effects are thought to be the underlying mechanism helping to abort cluster headaches. Dissolved oxygen in hyperoxic conditions may also make a significant contribution to total gas transport. Storage and sources Oxygen can be separated by a number of methods (e.g., chemical reaction, fractional distillation) to enable immediate or future use. The main methods utilized for oxygen therapy include: Liquid storage – Liquid oxygen is stored in insulated tanks at low temperature and allowed to boil (at a temperature of 90.188 K (−182.96 °C)) during use, releasing gaseous oxygen. This method is widely utilized at hospitals due to their high oxygen requirements. See Vacuum Insulated Evaporator for more information on this method of storage. Compressed gas storage – Oxygen gas is compressed in a gas cylinder, which provides a convenient storage method (refrigeration is not required). Large oxygen cylinders can last about two days at a flow rate of 2 litres per minute (LPM), while small portable M6 (B) cylinders hold far less but are light enough to carry. These small tanks can last 4–6 hours with a conserving regulator, which adjusts flow based on a person's breathing rate (a worked duration example appears at the end of this article). Conserving regulators may not be effective for patients who breathe through their mouth. Instant usage – The use of an electrically powered oxygen concentrator or a chemical-reaction-based unit can create sufficient oxygen for immediate personal use. These units (especially the electrically powered versions) are widely used for home oxygen therapy and as portable personal oxygen. One particular advantage is a continuous supply without the need for bulky oxygen cylinders. Hazards and risk Highly concentrated sources of oxygen also increase the risk of rapid combustion. Oxygen itself is not flammable, but the addition of concentrated oxygen to a fire greatly increases its intensity, and can aid the combustion of materials that are relatively inert under normal conditions. Fire and explosion hazards exist when concentrated oxidants and fuels are brought into close proximity, although an ignition event (e.g., heat or spark) is needed to trigger combustion. Concentrated oxygen allows combustion to proceed rapidly and energetically. Even the steel pipes and storage vessels used to store and transmit gaseous and liquid oxygen will act as a fuel; the design and manufacture of oxygen systems therefore requires special training to ensure that ignition sources are minimized. Highly concentrated oxygen in a high-pressure environment can spontaneously ignite hydrocarbons such as oil and grease, resulting in a fire or explosion.
The heat generated by rapid pressurization serves as the ignition source. For this reason, storage vessels, regulators, piping and any other equipment used with highly concentrated oxygen must be "oxygen-clean" prior to use, to ensure the absence of potential fuels. This applies not only to pure oxygen; any concentration significantly higher than atmospheric (approximately 21%) carries a potential ignition risk. Some hospitals have instituted "no-smoking" policies, which can help keep ignition sources away from medically piped oxygen. These policies do not eliminate the risk of injury among patients with portable oxygen systems, especially among smokers. Other potential sources of ignition include candles, aromatherapy, medical equipment, cooking, and deliberate vandalism. Delivery Various devices are used for oxygen administration. In most cases, the oxygen will first pass through a pressure regulator, used to reduce the high pressure of oxygen delivered from a cylinder (or other source) to a lower pressure. This lower pressure is then controlled by a flowmeter (which may be preset or selectable), which meters the flow at a measured rate (e.g., litres per minute [LPM]). The typical flowmeter range for medical oxygen is between 0 and 15 LPM, with some units capable of delivering up to 25 LPM. Many wall flowmeters using a Thorpe tube design can be dialed to "flush" oxygen, which is beneficial in emergency situations. Low-dose oxygen Many people only require a slight increase in inhaled oxygen, rather than pure or near-pure oxygen. These requirements can be met through a number of devices, depending on the situation, flow requirements, and personal preference. A nasal cannula (NC) is a thin tube with two small nozzles inserted into a person's nostrils. It can provide oxygen at low flow rates, 1–6 litres per minute (LPM), delivering an oxygen concentration of 24–40%. There are also a number of face mask options, such as the simple face mask, often used at between 5 and 10 LPM, capable of delivering oxygen concentrations between 35% and 55%. This is closely related to the more controlled air-entrainment masks, also known as Venturi masks, which can accurately deliver a predetermined oxygen concentration from 24 to 50%. In some instances, a partial rebreathing mask can be used, which is based on a simple mask but features a reservoir bag, and which can provide oxygen concentrations of 40–70% at 5–15 LPM. Demand oxygen delivery systems (DODS) or oxygen resuscitators deliver oxygen only when the person inhales or when the caregiver presses a button on the mask (e.g., for a non-breathing patient). These systems greatly conserve oxygen compared to steady-flow masks, and are useful in emergency situations when a limited supply of oxygen is available and there is a delay in transporting the person to higher care. Because a variety of methods are used to meet oxygenation requirements, performance differences arise between devices. They are very useful in CPR, as the caregiver can deliver rescue breaths composed of 100% oxygen with the press of a button. Care must be taken not to over-inflate the person's lungs, for which some systems employ safety valves. These systems may not be appropriate for people who are unconscious or in respiratory distress because of the required respiratory effort. High flow oxygen delivery For patients requiring high concentrations of oxygen, a number of devices are available. The most commonly utilized device is the non-rebreather mask (or reservoir mask).
Non-rebreather masks draw oxygen from attached reservoir bags, with one-way valves that direct exhaled air out of the mask. If the flow rate is not sufficient (~10 L/min), the bag may collapse on inspiration. This type of mask is indicated for acute medical emergencies. The delivered FIO2 (inhaled volumetric fraction of molecular oxygen) of this system is 60–80%, depending on the oxygen flow and breathing pattern. Another type of device is the humidified high-flow nasal cannula, which enables flows exceeding a person's peak inspiratory flow demand to be delivered via nasal cannula, thus providing an FIO2 of up to 100%, because there is no entrainment of room air. This also allows the person to continue to talk, eat, and drink while still receiving therapy. This type of delivery method is associated with greater overall comfort, improved oxygenation and respiratory rates, and reduced sputum stasis compared with face-mask oxygen. In specialist applications such as aviation, tight-fitting masks can be used. These masks also have applications in anaesthesia, carbon monoxide poisoning treatment and hyperbaric oxygen therapy. Positive pressure delivery Patients who are unable to breathe on their own require positive pressure to move oxygen into their lungs for gaseous exchange to take place. Systems for delivery vary in complexity and cost, starting with a basic pocket mask adjunct, which can be used to manually deliver artificial respiration with supplemental oxygen delivered through a mask port. Many emergency medical service members, first aid personnel, and hospital staff may use a bag-valve-mask (BVM), which is a malleable bag attached to a face mask (or an invasive airway such as an endotracheal tube or laryngeal mask airway), usually with a reservoir bag attached, which is manually manipulated by the healthcare professional to push oxygen (or air) into the lungs. This is the only procedure allowed for the initial treatment of cyanide poisoning in the UK workplace. Automated versions of the BVM system, known as resuscitators or pneupacs, can also deliver measured and timed doses of oxygen directly to people through a facemask or airway. These systems are related to the anaesthetic machines used in operations under general anaesthesia, which allow a variable amount of oxygen to be delivered, along with other gases including air, nitrous oxide and inhalational anaesthetics. Drug delivery Oxygen and other compressed gases are used in conjunction with a nebulizer to allow the delivery of medications to the upper and/or lower airways. Nebulizers use compressed gas to propel liquid medication into therapeutically sized aerosol droplets for deposition in the appropriate part of the airway. A typical compressed gas flow rate of 8–10 L/min is used to nebulize medications, saline, sterile water, or a combination of these into a therapeutic aerosol for inhalation. In the clinical setting, room air (the ambient mix of several gases), molecular oxygen, and heliox are the most common gases used to nebulize a bolus treatment or a continuous volume of therapeutic aerosols. Exhalation filters for oxygen masks Filtered oxygen masks have the ability to prevent exhaled particles from being released into the surrounding environment. These masks are normally of a closed design, such that leaks are minimized and breathing of room air is controlled through a series of one-way valves.
Filtration of exhaled breaths is accomplished either by placing a filter on the exhalation port or through an integral filter that is part of the mask itself. These masks first became popular in the Toronto (Canada) healthcare community during the 2003 SARS crisis. SARS was identified as being respiratory-based, and it was determined that conventional oxygen therapy devices were not designed for the containment of exhaled particles. In 2003, the HiOx80 oxygen mask was released for sale. The HiOx80 mask is a closed-design mask that allows a filter to be placed on the exhalation port. Several new designs have since emerged in the global healthcare community for the containment and filtration of potentially infectious particles. Other designs include the ISO- oxygen mask, the Flo2Max oxygen mask, and the O-Mask. Typical oxygen masks allow a person to breathe in a mixture of room air and therapeutic oxygen. However, because filtered oxygen masks use a closed design that minimizes or eliminates the person's contact with, and ability to inhale, room air, delivered oxygen concentrations in such devices have been found to be elevated, approaching 99% with adequate oxygen flows. Because all exhaled particles are contained within the mask, nebulized medications are also prevented from being released into the surrounding atmosphere, decreasing occupational exposure for healthcare staff and other people. Aircraft In the United States, most airlines restrict the devices allowed on board an aircraft. As a result, passengers are restricted in what devices they can use. Some airlines will provide cylinders for passengers, with an associated fee. Other airlines allow passengers to carry on approved portable concentrators. However, the lists of approved devices vary by airline, so passengers may need to check with any airline they are planning to fly on. Passengers are generally not allowed to carry on personal cylinders. In all cases, passengers need to notify the airline of their equipment in advance. Effective May 13, 2009, the Department of Transportation and the FAA ruled that a select number of portable oxygen concentrators are approved for use on all commercial flights. FAA regulations require larger airplanes to carry D-cylinders of oxygen for use in case of an emergency. Oxygen conserving devices Since the 1980s, devices have been available which conserve stored oxygen by delivering it during the portion of the breathing cycle in which it is used most effectively. This makes stored oxygen last longer, or makes a smaller, and therefore lighter, portable oxygen delivery system practicable. This class of device can also be used with portable oxygen concentrators, making them more efficient. The delivery of supplemental oxygen is most effective if it is made at a point in the breathing cycle when it will be inhaled to the alveoli, where gas transfer occurs. Oxygen delivered later in the cycle will be inhaled into physiological dead space, where it serves no useful purpose, as it cannot diffuse into the blood. Oxygen delivered during stages of the breathing cycle in which it is not inhaled at all is also wasted. A continuous constant flow rate uses a simple regulator, but is inefficient, as a high percentage of the delivered gas does not reach the alveoli, and over half is not inhaled at all.
A system which accumulates free-flow oxygen during resting and exhalation stages, (reservoir cannulas) makes a larger part of the oxygen available for inhalation, and it will be selectively inhaled during the initial part of inhalation, which reaches furthest into the lungs. A similar function is provided by a mechanical demand regulator which provides gas only during inhalation, but requires some physical effort by the user, and also ventilates dead space with oxygen. A third class of system (pulse dose oxygen conserving device, or demand pulse devices) senses the start of inhalation and provides a metered bolus, which if correctly matched to requirements, will be sufficient and effectively inhaled into the alveoli.Such systems can be pneumatically or electrically controlled. Adaptive demand systems A development in pulse demand delivery are devices that automatically adjust the volume of the pulsed bolus to suit the activity level of the user. This adaptive response in intended to reduce desaturation responses caused by exercise rate variation. Pulsed delivery devices are available as stand alone modules or integrated into a system specifically designed to use compressed gas, liquid oxygen or oxygen concentrator sources. Integrated design usually allows optimisation of the system for the source type at the cost of versatility. Transtracheal oxygen catheters are inserted directly into the trachea through a small opening in the front of the neck for that purpose. The opening is directed downward, towards the bifurcation of the bronchi. Oxygen introduced through the catheter bypasses the dead spaces of the nose, pharynx and upper trachea during inhalation, and during continuous flow, will accumulate in the anatomic dead space at the end of exhalation and be available for immediate inhalation to the alveoli on the following inhalation. This reduces wastage and provides efficiency roughly three times greater than with external continuous flow. This is roughly equivalent to a reservoir cannula. Transtracheal catheters have been found to be effective during rest, exercise and sleep.
Biology and health sciences
Treatments
Health
508504
https://en.wikipedia.org/wiki/Compressor
Compressor
A compressor is a mechanical device that increases the pressure of a gas by reducing its volume. An air compressor is a specific type of gas compressor. Many compressors can be staged, that is, the gas is compressed several times in steps or stages, to increase discharge pressure. Often, the second stage is physically smaller than the primary stage, to accommodate the already compressed gas without reducing its pressure. Each stage further compresses the gas and increases its pressure and also temperature (if inter cooling between stages is not used). Types Compressors are similar to pumps: both increase the pressure on a fluid (such as a gas) and both can transport the fluid through a pipe. The main distinction is that the focus of a compressor is to change the density or volume of the fluid, which is mostly only achievable on gases. Gases are compressible, while liquids are relatively incompressible, so compressors are rarely used for liquids. The main action of a pump is to pressurize and transport liquids. The main and important types of gas compressors are illustrated and discussed below: Positive displacement A positive displacement compressor is a system that compresses the air by the displacement of a mechanical linkage reducing the volume (since the reduction in volume due to a piston in thermodynamics is considered as positive displacement of the piston). Put another way, a positive displacement compressor is one that operates by drawing in a discrete volume of gas from its inlet then forcing that gas to exit via the compressor's outlet. The increase in the pressure of the gas is due, at least in part, to the compressor pumping it at a mass flow rate which cannot pass through the outlet at the lower pressure and density of the inlet. Reciprocating compressors Reciprocating compressors use pistons driven by a crankshaft. They can be either stationary or portable, can be single or multi-staged, and can be driven by electric motors or internal combustion engines. Small reciprocating compressors from 5 to 30 horsepower (hp) are commonly seen in automotive applications and are typically for intermittent duty. Larger reciprocating compressors well over are commonly found in large industrial and petroleum applications. Discharge pressures can range from low pressure to very high pressure (>18000 psi or 124 MPa). In certain applications, such as air compression, multi-stage double-acting compressors are said to be the most efficient compressors available, and are typically larger, and more costly than comparable rotary units. Another type of reciprocating compressor, usually employed in automotive cabin air conditioning systems, is the swash plate or wobble plate compressor, which uses pistons moved by a swash plate mounted on a shaft (see axial piston pump). Household, home workshop, and smaller job site compressors are typically reciprocating compressors or less with an attached receiver tank. A linear compressor is a reciprocating compressor with the piston being the rotor of a linear motor. This type of compressor can compress a wide range of gases, including refrigerant, hydrogen, and natural gas. Because of this, it finds use in a wide range of applications in many different industries and can be designed to a wide range of capacities, by varying size, number of cylinders, and cylinder unloading. 
However, it suffers from higher losses due to clearance volumes, resistance due to discharge and suction valves, weighs more, is difficult to maintain due to having a large number of moving parts, and it has inherent vibration. Ionic liquid piston compressor An ionic liquid piston compressor, ionic compressor or ionic liquid piston pump is a hydrogen compressor based on an ionic liquid piston instead of a metal piston as in a piston-metal diaphragm compressor. Rotary screw compressors Rotary screw compressors use two meshed rotating positive-displacement helical screws to force the gas into a smaller space. These are usually used for continuous operation in commercial and industrial applications and may be either stationary or portable. Their application can be from to over and from low pressure to moderately high pressure (>). The classifications of rotary screw compressors vary based on stages, cooling methods, and drive types among others. Rotary screw compressors are commercially produced in Oil Flooded, Water Flooded and Dry type. The efficiency of rotary compressors depends on the air drier, and the selection of air drier is always 1.5 times volumetric delivery of the compressor. Designs with a single screw or three screws instead of two exist. Screw compressors have fewer moving components, larger capacity, less vibration and surging, can operate at variable speeds, and typically have higher efficiency. Small sizes or low rotor speeds are not practical due to inherent leaks caused by clearance between the compression cavities or screws and compressor housing. They depend on fine machining tolerances to avoid high leakage losses and are prone to damage if operated incorrectly or poorly serviced. Rotary vane compressors Rotary vane compressors consist of a rotor with a number of blades inserted in radial slots in the rotor. The rotor is mounted offset in a larger housing that is either circular or a more complex shape. As the rotor turns, blades slide in and out of the slots keeping contact with the outer wall of the housing. Thus, a series of increasing and decreasing volumes is created by the rotating blades. Rotary vane compressors are, with piston compressors one of the oldest of compressor technologies. With suitable port connections, the devices may be either a compressor or a vacuum pump. They can be either stationary or portable, can be single or multi-staged, and can be driven by electric motors or internal combustion engines. Dry vane machines are used at relatively low pressures (e.g., ) for bulk material movement while oil-injected machines have the necessary volumetric efficiency to achieve pressures up to about in a single stage. A rotary vane compressor is well suited to electric motor drive and is significantly quieter in operation than the equivalent piston compressor. Rotary vane compressors can have mechanical efficiencies of about 90%. Rolling piston The Rolling piston in a rolling piston style compressor plays the part of a partition between the vane and the rotor. Rolling piston forces gas against a stationary vane. 2 of these compressors can be mounted on the same shaft to increase capacity and reduce vibration and noise. A design without a spring is known as a swing compressor. In refrigeration and air conditioning, this type of compressor is also known as a rotary compressor, with rotary screw compressors being also known simply as screw compressors. 
It offers higher efficiency than reciprocating compressors due to less losses from the clearance volume between the piston and the compressor casing, it's 40% to 50% smaller and lighter for a given capacity (which can impact material and shipping costs when used in a product), causes less vibration, has fewer components and is more reliable than a reciprocating compressor. But its structure does not allow capacities beyond 5 refrigeration tons, is less reliable than other compressor types, and is less efficient than other compressor types due to losses from the clearance volume. Scroll compressors A scroll compressor, also known as scroll pump and scroll vacuum pump, uses two interleaved spiral-like vanes to pump or compress fluids such as liquids and gases. The vane geometry may be involute, archimedean spiral, or hybrid curves. They operate more smoothly, quietly, and reliably than other types of compressors in the lower volume range. Often, one of the scrolls is fixed, while the other orbits eccentrically without rotating, thereby trapping and pumping or compressing pockets of fluid between the scrolls. Due to minimum clearance volume between the fixed scroll and the orbiting scroll, these compressors have a very high volumetric efficiency. These compressors are extensively used in air conditioning and refrigeration because they are lighter, smaller and have fewer moving parts than reciprocating compressors and they are also more reliable. They are more expensive though, so peltier coolers or rotary and reciprocating compressors may be used in applications where cost is the most important or one of the most important factors to consider when designing a refrigeration or air conditioning system. This type of compressor was used as the supercharger on Volkswagen G60 and G40 engines in the early 1990s. When compared with reciprocating and rolling piston compressors, scroll compressors are more reliable since they have fewer components and have a simpler structure, are more efficient since they have no clearance volume nor valves, and possess the advantages both of surging less and not vibrating so much. But, when compared with screw and centrifugal compressors, scroll compressors have lower efficiencies and smaller capacities. Diaphragm compressors A diaphragm compressor (also known as a membrane compressor) is a variant of the conventional reciprocating compressor. The compression of gas occurs by the movement of a flexible membrane, instead of an intake element. The back-and-forth movement of the membrane is driven by a rod and a crankshaft mechanism. Only the membrane and the compressor box come in contact with the gas being compressed. The degree of flexing and the material constituting the diaphragm affects the maintenance life of the equipment. Generally stiff metal diaphragms may only displace a few cubic centimeters of volume because the metal cannot endure large degrees of flexing without cracking, but the stiffness of a metal diaphragm allows it to pump at high pressures. Rubber or silicone diaphragms are capable of enduring deep pumping strokes of very high flexion, but their low strength limits their use to low-pressure applications, and they need to be replaced as plastic embrittlement occurs. Diaphragm compressors are used for hydrogen and compressed natural gas (CNG) as well as in a number of other applications. 
The photograph on the right depicts a three-stage diaphragm compressor used to compress hydrogen gas to for use in a prototype compressed hydrogen and compressed natural gas (CNG) fueling station built in downtown Phoenix, Arizona by the Arizona Public Service company (an electric utilities company). Reciprocating compressors were used to compress the natural gas. The reciprocating natural gas compressor was developed by Sertco. The prototype alternative fueling station was built in compliance with all of the prevailing safety, environmental and building codes in Phoenix to demonstrate that such fueling stations could be built in urban areas. Dynamic Air bubble compressor Also known as a trompe. A mixture of air and water generated through turbulence is allowed to fall into a subterranean chamber where the air separates from the water. The weight of falling water compresses the air in the top of the chamber. A submerged outlet from the chamber allows water to flow to the surface at a lower height than the intake. An outlet in the roof of the chamber supplies the compressed air to the surface. A facility on this principle was built on the Montreal River at Ragged Shutes near Cobalt, Ontario in 1910 and supplied 5,000 horsepower to nearby mines. Centrifugal compressors Centrifugal compressors use a rotating disk or impeller in a shaped housing to force the gas to the rim of the impeller, increasing the velocity of the gas. A diffuser (divergent duct) section converts the velocity energy to pressure energy. They are primarily used for continuous, stationary service in industries such as oil refineries, chemical and petrochemical plants and natural gas processing plants. Their application can be from to thousands of horsepower. With multiple staging, they can achieve high output pressures greater than . This type of compressor, along with screw compressors, are extensively used in large refrigeration and air conditioning systems. Magnetic bearing (magnetically levitated) and air bearing centrifugal compressors exist. Many large snowmaking operations (like ski resorts) use this type of compressor. They are also used in internal combustion engines as superchargers and turbochargers. Centrifugal compressors are used in small gas turbine engines or as the final compression stage of medium-sized gas turbines. Centrifugal compressors are the largest available compressors, offer higher efficiencies under partial loads, may be oil-free when using air or magnetic bearings which increases the heat transfer coefficient in evaporators and condensers, weigh up to 90% less and occupy 50% less space than reciprocating compressors, are reliable and cost less to maintain since less components are exposed to wear, and only generate minimal vibration. But, their initial cost is higher, require highly precise CNC machining, the impeller needs to rotate at high speeds making small compressors impractical, and surging becomes more likely. Surging is gas flow reversal, meaning that the gas goes from the discharge to the suction side, which can cause serious damage, specially in the compressor bearings and its drive shaft. It is caused by a pressure on the discharge side that is higher than the output pressure of the compressor. This can cause gases to flow back and forth between the compressor and whatever is connected to its discharge line, causing oscillations. 
Diagonal or mixed-flow compressors Diagonal or mixed-flow compressors are similar to centrifugal compressors, but have a radial and axial velocity component at the exit from the rotor. The diffuser is often used to turn diagonal flow to an axial rather than radial direction. Comparative to the conventional centrifugal compressor (of the same stage pressure ratio), the value of the speed of the mixed flow compressor is 1.5 times larger. Axial compressors Axial compressors are dynamic rotating compressors that use arrays of fan-like airfoils to progressively compress a fluid. They are used where high flow rates or a compact design are required. The arrays of airfoils are set in rows, usually as pairs: one rotating and one stationary. The rotating airfoils, also known as blades or rotors, accelerate the fluid. The stationary airfoils, also known as stators or vanes, decelerate and redirect the flow direction of the fluid, preparing it for the rotor blades of the next stage. Axial compressors are almost always multi-staged, with the cross-sectional area of the gas passage diminishing along the compressor to maintain an optimum axial Mach number. Beyond about 5 stages or a 4:1 design pressure ratio a compressor will not function unless fitted with features such as stationary vanes with variable angles (known as variable inlet guide vanes and variable stators), the ability to allow some air to escape part-way along the compressor (known as interstage bleed) and being split into more than one rotating assembly (known as twin spools, for example). Axial compressors can have high efficiencies; around 90% polytropic at their design conditions. However, they are relatively expensive, requiring a large number of components, tight tolerances and high quality materials. Axial compressors are used in medium to large gas turbine engines, natural gas pumping stations, and some chemical plants. Hermetically sealed, open, or semi-hermetic Compressors used in refrigeration systems must exhibit near-zero leakage to avoid the loss of the refrigerant if they are to function for years without service. This necessitates the use of very effective seals, or even the elimination of all seals and openings to form a hermetic system. These compressors are often described as being either hermetic, open, or semi-hermetic, to describe how the compressor is enclosed and how the motor drive is situated in relation to the gas or vapor being compressed. Some compressors outside of refrigeration service may also be hermetically sealed to some extent, typically when handling toxic, polluting, or expensive gasses, with most non-refrigeration applications being in the petrochemical industry. In hermetic and most semi-hermetic compressors, the compressor and motor driving the compressor are integrated, and operate within the pressurized gas envelope of the system. The motor is designed to operate in, and be cooled by, the refrigerant gas being compressed. Open compressors have an external motor driving a shaft that passes through the body of the compressor and rely on rotary seals around the shaft to retain the internal pressure. The difference between the hermetic and semi-hermetic, is that the hermetic uses a one-piece welded steel casing that cannot be opened for repair; if the hermetic fails it is simply replaced with an entire new unit. A semi-hermetic uses a large cast metal shell with gasketed covers with screws that can be opened to replace motor and compressor components. 
The primary advantage of a hermetic and semi-hermetic is that there is no route for the gas to leak out of the system. The main advantages of open compressors is that they can be driven by any motive power source, allowing the most appropriate motor to be selected for the application, or even non-electric power sources such as an internal combustion engine or steam turbine, and secondly the motor of an open compressor can be serviced without opening any part of the refrigerant system. An open pressurized system such as an automobile air conditioner can be more susceptible to leak its operating gases. Open systems rely on lubricant in the system to splash on pump components and seals. If it is not operated frequently enough, the lubricant on the seals slowly evaporates, and then the seals begin to leak until the system is no longer functional and must be recharged. By comparison, a hermetic or semi-hermetic system can sit unused for years, and can usually be started up again at any time without requiring maintenance or experiencing any loss of system pressure. Even well lubricated seals will leak a small amount of gas over time, particularly if the refrigeration gasses are soluble in the lubricating oil, but if the seals are well manufactured and maintained this loss is very low. The disadvantage of hermetic compressors is that the motor drive cannot be repaired or maintained, and the entire compressor must be replaced if a motor fails. A further disadvantage is that burnt-out windings can contaminate the whole systems, thereby requiring the system to be entirely pumped down and the gas replaced (This can also happen in semi hermetic compressors where the motor operates in the refrigerant). Typically, hermetic compressors are used in low-cost factory-assembled consumer goods where the cost of repair and labor is high compared to the value of the device, and it would be more economical to just purchase a new device or compressor. Semi-hermetic compressors are used in mid-sized to large refrigeration and air conditioning systems, where it is cheaper to repair and/or refurbish the compressor compared to the price of a new one. A hermetic compressor is simpler and cheaper to build than a semi-hermetic or open compressor. Thermodynamics of gas compression Isentropic compressor A compressor can be idealized as internally reversible and adiabatic, thus an isentropic steady state device, meaning the change in entropy is 0. The enthalpy change for a flow process can be calculated. dH = VdP +TdS Isentropic dS is zero. dH = VdP Non flow isentropic processes like some positive displacement compressors may use a different equation. dH = PdV By defining the compression cycle as isentropic, an ideal efficiency for the process can be attained, and the ideal compressor performance can be compared to the actual performance of the machine. Isotropic Compression as used in ASME PTC 10 Code refers to a reversible, adiabatic compression process Isentropic efficiency of Compressors: is the enthalpy at the initial state is the enthalpy at the final state for the actual process is the enthalpy at the final state for the isentropic process Minimizing work required by a compressor Comparing reversible to irreversible compressors Comparison of the differential form of the energy balance for each device. Let be heat, be work, be kinetic energy, and be potential energy. 
Actual Compressor: Furthermore, and T is [absolute temperature] () which produces: or Therefore, work-consuming devices such as pumps and compressors (work is negative) require less work when they operate reversibly. Effect of cooling during the compression process isentropic process: involves no cooling, polytropic process: involves some cooling isothermal process: involves maximum cooling By making the following assumptions the required work for the compressor to compress a gas from to is the following for each process: and Flow processes VdP All processes are internally reversible The gas behaves like an ideal gas with constant specific heats Isentropic (, where ): Polytropic (): Isothermal ( or ): By comparing the three internally reversible processes compressing an ideal gas from to , the results show that isentropic compression () requires the most work in and the isothermal compression( or ) requires the least amount of work in. For the polytropic process () work decreases as the exponent, n, decreases, by increasing the heat rejection during the compression process. One common way of cooling the gas during compression is to use cooling jackets around the casing of the compressor. Compressors in ideal thermodynamic cycles Ideal Rankine Cycle 1->2 Isentropic compression in a pump Ideal Carnot Cycle 4->1 Isentropic compression Ideal Otto Cycle 1->2 Isentropic compression Ideal Diesel Cycle 1->2 Isentropic compression Ideal Brayton Cycle 1->2 Isentropic compression in a compressor Ideal Vapor-compression refrigeration Cycle 1->2 Isentropic compression in a compressor NOTE: The isentropic assumptions are only applicable with ideal cycles. Real world cycles have inherent losses due to inefficient compressors and turbines. The real world system are not truly isentropic but are rather idealized as isentropic for calculation purposes. Temperature Compression of a gas increases its temperature. For a polytropic transformation of a gas: The work done for polytropic compression (or expansion) of a gas into a closed cylinder. so in which p is pressure, V is volume, n takes different values for different compression processes (see below), and 1 & 2 refer to initial and final states. Adiabatic – This model assumes that no energy (heat) is transferred to or from the gas during the compression, and all supplied work is added to the internal energy of the gas, resulting in increases of temperature and pressure. Theoretical temperature rise is: with T1 and T2 in degrees Rankine or kelvins, p2 and p1 being absolute pressures and ratio of specific heats (approximately 1.4 for air). The rise in air and temperature ratio means compression does not follow a simple pressure to volume ratio. This is less efficient, but quick. Adiabatic compression or expansion more closely model real life when a compressor has good insulation, a large gas volume, or a short time scale (i.e., a high power level). In practice there will always be a certain amount of heat flow out of the compressed gas. Thus, making a perfect adiabatic compressor would require perfect heat insulation of all parts of the machine. For example, even a bicycle tire pump's metal tube becomes hot as you compress the air to fill a tire. The relation between temperature and compression ratio described above means that the value of for an adiabatic process is (the ratio of specific heats). Isothermal – This model assumes that the compressed gas remains at a constant temperature throughout the compression or expansion process. 
In this cycle, internal energy is removed from the system as heat at the same rate that it is added by the mechanical work of compression. Isothermal compression or expansion more closely models real life when the compressor has a large heat exchanging surface, a small gas volume, or a long time scale (i.e., a small power level). Compressors that utilize inter-stage cooling between compression stages come closest to achieving perfect isothermal compression. However, with practical devices perfect isothermal compression is not attainable. For example, unless you have an infinite number of compression stages with corresponding intercoolers, you will never achieve perfect isothermal compression. For an isothermal process, is 1, so the value of the work integral for an isothermal process is: When evaluated, the isothermal work is found to be lower than the adiabatic work. Polytropic – This model takes into account both a rise in temperature in the gas as well as some loss of energy (heat) to the compressor's components. This assumes that heat may enter or leave the system, and that input shaft work can appear as both increased pressure (usually useful work) and increased temperature above adiabatic (usually losses due to cycle efficiency). Compression efficiency is then the ratio of temperature rise at theoretical 100 percent (adiabatic) vs. actual (polytropic). Polytropic compression will use a value of between 0 (a constant-pressure process) and infinity (a constant volume process). For the typical case where an effort is made to cool the gas compressed by an approximately adiabatic process, the value of will be between 1 and . Staged compression In the case of centrifugal compressors, commercial designs currently do not exceed a compression ratio of more than 3.5 to 1 in any one stage (for a typical gas). Since compression raises the temperature, the compressed gas is to be cooled between stages making the compression less adiabatic and more isothermal. The inter-stage coolers (intercoolers) typically result in some partial condensation that is removed in vapor–liquid separators. In the case of small reciprocating compressors, the compressor flywheel may drive a cooling fan that directs ambient air across the intercooler of a two or more stage compressor. Because rotary screw compressors can make use of cooling lubricant to reduce the temperature rise from compression, they very often exceed a 9 to 1 compression ratio. For instance, in a typical diving compressor the air is compressed in three stages. If each stage has a compression ratio of 7 to 1, the compressor can output 343 times atmospheric pressure (7 × 7 × 7 = 343 atmospheres). () Drive motors There are many options for the motor that powers the compressor: Gas turbines power the axial and centrifugal flow compressors that are part of jet engines. Steam turbines or water turbines are possible for large compressors. Electric motors are cheap and quiet for static compressors. Small motors suitable for domestic electrical supplies use single-phase alternating current. Larger motors can only be used where an industrial electrical three phase alternating current supply is available. Diesel engines or petrol engines are suitable for portable compressors and support compressors. In automobiles and other types of vehicles (including piston-powered airplanes, boats, trucks, etc.), diesel or gasoline engine's power output can be increased by compressing the intake air, so that more fuel can be burned per cycle. 
These engines can power compressors using their own crankshaft power (this setup known as a supercharger), or, use their exhaust gas to drive a turbine connected to the compressor (this setup known as a turbocharger). Lubrication Compressors that are driven by an electric motor can be controlled using a VFD or power inverter, however many hermetic and semi-hermetic compressors can only work in a range of or at fixed speeds, since they may include built-in oil pumps. The built-in oil pump is connected to the same shaft that drives the compressor, and forces oil into the compressor and motor bearings. At low speeds, insufficient quantities of oil reach the bearings, eventually leading to bearing failure, while at high speeds, excessive amounts of oil may be lost from the bearings and compressor and potentially into the discharge line due to splashing. Eventually the oil runs out and the bearings are left unlubricated, leading to failure, and the oil may contaminate the refrigerant, air or other working gas. Applications Gas compressors are used in various applications where either higher pressures or lower volumes of gas are needed: In pipeline transport of purified natural gas from the production site to the consumer, a compressor is driven by a motor fueled by gas bled from the pipeline. Thus, no external power source is necessary. In maritime cargo transport and cargo operations by gas carriers. Petroleum refineries, natural gas processing plants, petrochemical and chemical plants, and similar large industrial plants require compressing for intermediate and end-product gases. Refrigeration and air conditioner equipment use compressors to move heat in refrigerant cycles (see vapor-compression refrigeration). Gas turbine systems compress the intake combustion air. Small-volume purified or manufactured gases require compression to fill high pressure cylinders for medical, welding, and other uses. Various industrial, manufacturing, and building processes require compressed air to power pneumatic tools. In the manufacturing and blow moulding of PET plastic bottles and containers. Some aircraft require compressors to maintain cabin pressurization at altitude. Some types of jet engines—such as turbojets and turbofans—compress the air required for fuel combustion. The jet engine's turbines power the combustion air compressor. In underwater diving, self-contained breathing apparatus, hyperbaric oxygen therapy, and other life support equipment, compressors provide pressurised breathing gas either directly or via high pressure gas storage containers, such as diving cylinders. In surface supplied diving, an air compressor is generally used to supply low pressure air (10 to 20 bar) for breathing. Submarines use compressors to store air for later use in displacing water from buoyancy chambers to adjust buoyancy. Turbochargers and superchargers are compressors that increase internal combustion engine performance by increasing the mass flow of air inside the cylinder, so the engine can burn more fuel and hence produce more power. Rail and heavy road transport vehicles use compressed air to operate rail vehicle or road vehicle brakes—and various other systems (doors, windscreen wipers, engine, gearbox control, etc.). Service stations and auto repair shops use compressed air to fill pneumatic tires and power pneumatic tools. Fire pistons and heat pumps exist to heat air or other gasses, and compressing the gas is only a means to that end. 
Rotary lobe compressors are often used to provide air in pneumatic conveying lines for powder or solids. Pressure reached can range from 0.5 to 2 bar g.
Technology
Hydraulics and pneumatics
null
508602
https://en.wikipedia.org/wiki/Reactivity%20%28chemistry%29
Reactivity (chemistry)
In chemistry, reactivity is the impulse for which a chemical substance undergoes a chemical reaction, either by itself or with other materials, with an overall release of energy. Reactivity refers to: the chemical reactions of a single substance, the chemical reactions of two or more substances that interact with each other, the systematic study of sets of reactions of these two kinds, methodology that applies to the study of reactivity of chemicals of all kinds, experimental methods that are used to observe these processes, and theories to predict and to account for these processes. The chemical reactivity of a single substance (reactant) covers its behavior in which it: decomposes, forms new substances by addition of atoms from another reactant or reactants, and interacts with two or more other reactants to form two or more products. The chemical reactivity of a substance can refer to the variety of circumstances (conditions that include temperature, pressure, presence of catalysts) in which it reacts, in combination with the: variety of substances with which it reacts, equilibrium point of the reaction (i.e., the extent to which all of it reacts), and rate of the reaction. The term reactivity is related to the concepts of chemical stability and chemical compatibility. An alternative point of view Reactivity is a somewhat vague concept in chemistry. It appears to embody both thermodynamic factors and kinetic factors (i.e., whether or not a substance reacts, and how fast it reacts). Both factors are actually distinct, and both commonly depend on temperature. For example, it is commonly asserted that the reactivity of alkali metals (Na, K, etc.) increases down the group in the periodic table, or that hydrogen's reactivity is evidenced by its reaction with oxygen. In fact, the rate of reaction of alkali metals (as evidenced by their reaction with water for example) is a function not only of position within the group but also of particle size. Hydrogen does not react with oxygen—even though the equilibrium constant is very large—unless a flame initiates the radical reaction, which leads to an explosion. Restriction of the term to refer to reaction rates leads to a more consistent view. Reactivity then refers to the rate at which a chemical substance tends to undergo a chemical reaction in time. In pure compounds, reactivity is regulated by the physical properties of the sample. For instance, grinding a sample to a higher specific surface area increases its reactivity. In impure compounds, the reactivity is also affected by the inclusion of contaminants. In crystalline compounds, the crystalline form can also affect reactivity. However, in all cases, reactivity is primarily due to the sub-atomic properties of the compound. Although it is commonplace to make statements that "substance X is reactive," each substance reacts with its own set of reagents. For example, the statement that "sodium metal is reactive" suggests that sodium reacts with many common reagents (including pure oxygen, chlorine, hydrochloric acid, and water), either at room temperature or when using a Bunsen burner. The concept of stability should not be confused with reactivity. For example, an isolated molecule of an electronically excited state of the oxygen molecule spontaneously emits light after a statistically defined period. The half-life of such a species is another manifestation of its stability, but its reactivity can only be ascertained via its reactions with other species. 
Causes of reactivity The second meaning of reactivity (i.e., whether or not a substance reacts) can be rationalized at the atomic and molecular level using older and simpler valence bond theory and also atomic and molecular orbital theory. Thermodynamically, a chemical reaction occurs because the products (taken as a group) are at a lower free energy than the reactants; the lower energy state is referred to as the "more stable state." Quantum chemistry provides the most in-depth and exact understanding of the reason this occurs. Generally, electrons exist in orbitals that are the result of solving the Schrödinger equation for specific situations. All things (values of the n and ml quantum numbers) being equal, the order of stability of electrons in a system from least to greatest is; unpaired, and with no other electrons in similar orbitals, unpaired, and with all degenerate orbitals half-filled, (and the most stable is) a filled set of orbitals. To achieve one of these orders of stability, an atom reacts with another atom to stabilize both. For example, a lone hydrogen atom has a single electron in its 1s orbital. It becomes significantly more stable (as much as 100 kilocalories per mole, or 420 kilojoules per mole) when reacting to form H2. It is for this same reason that carbon almost always forms four bonds. Its ground-state valence configuration is 2s2 2p2, half-filled. However, the activation energy to go from half-filled to fully-filled p orbitals is negligible, and as such, carbon forms them almost instantaneously. Meanwhile, the process releases a significant amount of energy (exothermic). This four equal bond configuration is called sp3 hybridization. The above three paragraphs rationalize, albeit very generally, the reactions of some common species, particularly atoms. One approach to generalize the above is the activation strain model of chemical reactivity which provides a causal relationship between, the reactants' rigidity and their electronic structure, and the height of the reaction barrier. The rate of any given reaction: Reactants -> Products is governed by the rate law: where the rate is the change in the molar concentration in one second in the rate-determining step of the reaction (the slowest step), is the product of the molar concentration of all the reactants raised to the correct order (known as the reaction order), and is the reaction constant, which is constant for one given set of circumstances (generally temperature and pressure) and independent of concentration. The reactivity of a compound is directly proportional to both the value of and the rate. For instance, if A + B -> C + D, then where is the reaction order of , is the reaction order of , is the reaction order of the full reaction, and is the reaction constant.
Physical sciences
Reaction
Chemistry
509033
https://en.wikipedia.org/wiki/Rivet
Rivet
A rivet is a permanent mechanical fastener. Before being installed, a rivet consists of a smooth cylindrical shaft with a head on one end. The end opposite the head is called the tail. On installation, the deformed end is called the shop head or buck-tail. Because there is effectively a head on each end of an installed rivet, it can support tension loads. However, it is much more capable of supporting shear loads (loads perpendicular to the axis of the shaft). Fastenings used in traditional wooden boat building, such as copper nails and clinch bolts, work on the same principle as the rivet but were in use long before the term rivet was introduced and, where they are remembered, are usually classified among nails and bolts respectively. History Rivet holes have been found in Egyptian spearheads dating back to the Naqada culture of between 4400 and 3000 B.C. Archeologists have also uncovered many Bronze Age swords and daggers with rivet holes where the handles would have been. The rivets themselves were essentially short rods of metal, which metalworkers hammered into a pre-drilled hole on one side and deformed on the other to hold them in place. Types There are several types of rivets, designed to meet different cost, accessibility, and strength requirements: Solid/round head rivets Solid rivets are one of the oldest and most reliable types of fasteners, having been found in archaeological findings dating back to the Bronze Age. Solid rivets consist simply of a shaft and head that are deformed with a hammer or rivet gun. A rivet compression or crimping tool can also deform this type of rivet. This tool is mainly used on rivets close to the edge of the fastened material since the tool is limited by the depth of its frame. A rivet compression tool does not require two people and is generally the most foolproof way to install solid rivets. Solid rivets are used in applications where reliability and safety count. A typical application for solid rivets can be found within the structural parts of aircraft. Hundreds of thousands of solid rivets are used to assemble the frame of a modern aircraft. Such rivets come with rounded (universal) or 100° countersunk heads. Typical materials for aircraft rivets are aluminium alloys (2017, 2024, 2117, 7050, 5056, 55000, V-65), titanium, and nickel-based alloys (e.g., Monel). Some aluminium alloy rivets are too hard to buck and must be softened by solution treating (precipitation hardening) prior to being bucked. "Ice box" aluminium alloy rivets harden with age, and must likewise be annealed and then kept at sub-freezing temperatures (hence the name "ice box") to slow the age-hardening process. Steel rivets can be found in static structures such as bridges, cranes, and building frames. The setting of these fasteners requires access to both sides of a structure. Solid rivets are driven using a hydraulically, pneumatically, or electromagnetically actuated squeezing tool or even a handheld hammer. Applications where only one side is accessible require "blind" rivets. Solid rivets are also used by some artisans in the construction of modern reproduction of medieval armour, jewellery and metal couture. High-strength structural steel rivets Until relatively recently, structural steel connections were either welded or riveted. High-strength bolts have largely replaced structural steel rivets. Indeed, the latest steel construction specifications published by AISC (the 14th Edition) no longer cover their installation. 
The reason for the change is primarily due to the expense of skilled workers required to install high-strength structural steel rivets. Whereas two relatively unskilled workers can install and tighten high-strength bolts, it normally takes four skilled workers to install rivets (warmer, catcher, holder, basher). At a central location near the areas being riveted, a furnace was set up. Rivets were placed in the furnace and heated to approximately 900 °C or "cherry red". The rivet warmer or cook used tongs to remove individual rivets and throw them to a catcher stationed near the joints to be riveted. The catcher (usually) caught the rivet in a leather or wooden bucket with an ash-lined bottom. The catcher inserted the rivet into the hole to be riveted, then quickly turned to catch the next rivet. The holder up or holder on would hold a heavy bucking bar or dolly or another (larger) pneumatic jack against the round "shop head" of the rivet, while the riveter (sometimes two riveters) applied a hammer or pneumatic rivet hammer with a "rivet set" to the tail of the rivet, making it mushroom against the joint forming the "field head" into its final domed shape. Alternatively, the buck is hammered more or less flush with the structure in a counter-sunk hole. On cooling, the rivet contracted axially exerting the clamping force on the joint. Before the use of pneumatic hammers, e.g. in the construction of RMS Titanic, the person who hammered the rivet was known as the "basher". The last commonly used high-strength structural steel rivets were designated ASTM A502 Grade 1 rivets. Such riveted structures may be insufficient to resist seismic loading from earthquakes if the structure was not engineered for such forces, a common problem of older steel bridges. This is because a hot rivet cannot be properly heat treated to add strength and hardness. In the seismic retrofit of such structures, it is common practice to remove critical rivets with an oxygen torch, precision ream the hole, then insert a machined and heat-treated bolt. Semi-tubular rivets Semi-tubular rivets (also known as tubular rivets) are similar to solid rivets, except they have a partial hole (opposite the head) at the tip. The purpose of this hole is to reduce the amount of force needed for application by rolling the tubular portion outward. The force needed to apply a semi-tubular rivet is about 1/4 of the amount needed to apply a solid rivet. Tubular rivets are sometimes preferred for pivot points (a joint where movement is desired) since the swelling of the rivet is only at the tail. The type of equipment used to apply semi-tubular rivets ranges from prototyping tools to fully automated systems. Typical installation tools (from lowest to highest price) are hand set, manual squeezer, pneumatic squeezer, kick press, impact riveter, and finally PLC-controlled robotics. The most common machine is the impact riveter and the most common use of semi-tubular rivets is in lighting, brakes, ladders, binders, HVAC duct-work, mechanical products, and electronics. They are offered from 1/16-inch (1.6 mm) to 3/8-inch (9.5 mm) in diameter (other sizes are considered highly special) and can be up to 8 inches (203 mm) long. A wide variety of materials and platings are available, most common base metals are steel, brass, copper, stainless, aluminum and the most common platings are zinc, nickel, brass, tin. Tubular rivets are normally waxed to facilitate proper assembly. 
An installed tubular rivet has a head on one side, with a rolled-over and exposed shallow blind hole on the other. Blind rivets Blind rivets, commonly referred to as "pop" rivets (POP is the brand name of the original manufacturer, now owned by Stanley Engineered Fastening, a division of Stanley Black & Decker) are tubular and are supplied with a nail-like mandrel through the center which has a "necked" or weakened area near the head. The rivet assembly is inserted into a hole drilled through the parts to be joined and a specially designed tool is used to draw the mandrel through the rivet. The compression force between the head of the mandrel and the tool expands the diameter of the tube throughout its length, locking the sheets being fastened if the hole was the correct size. The head of the mandrel also expands the blind end of the rivet to a diameter greater than that of the drilled hole, compressing the fastened sheets between the head of the rivet and the head of the mandrel. At a predetermined tension, the mandrel breaks at the necked location. With open tubular rivets, the head of the mandrel may or may not remain embedded in the expanded portion of the rivet, and can come loose at a later time. More expensive closed-end tubular rivets are formed around the mandrel so the head of the mandrel is always retained inside the blind end after installation. "Pop" rivets can be fully installed with access to only one side of a part or structure. Prior to the invention of blind rivets, installation of a rivet typically required access to both sides of the assembly: a rivet hammer on one side and a bucking bar on the other side. In 1916, Royal Navy reservist and engineer Hamilton Neil Wylie filed a patent for an "improved means of closing tubular rivets" (granted May 1917). In 1922 Wylie joined the British aircraft manufacturer Armstrong-Whitworth Ltd to advise on metal construction techniques; here he continued to develop his rivet design with a further 1927 patent that incorporated the pull-through mandrel and allowed the rivet to be used blind. By 1928, the George Tucker Eyelet Company, of Birmingham, England, produced a "cup" rivet based on the design. It required a separate GKN mandrel and the rivet body to be hand-assembled prior to use for the building of the Siskin III aircraft. Together with Armstrong-Whitworth, the Geo. Tucker Co. further modified the rivet design to produce a one-piece unit incorporating a mandrel and rivet. This product was later developed in aluminium and trademarked as the "POP" rivet. The United Shoe Machinery Co. produced the design in the U.S. as inventors such as Carl Cherry and Lou Huck experimented with other techniques for expanding solid rivets. They are available in flat head, countersunk head, and modified flush head with standard diameters of 1/8, 5/32, and 3/16 inch. Blind rivets are made from soft aluminum alloy, steel (including stainless steel), copper, and Monel. There are also , which are designed to take shear and tensile loads. The rivet body is normally manufactured using one of three methods: There is a vast array of specialty blind rivets that are suited for high strength or plastic applications. Typical types include: Internally and externally locked structural blind rivets can be used in aircraft applications because, unlike other types of blind rivets, the locked mandrels cannot fall out and are watertight. 
Since the mandrel is locked into place, they have the same or greater shear-load-carrying capacity as solid rivets and may be used to replace solid rivets on all but the most critical stressed aircraft structures. The typical assembly process requires the operator to install the rivet in the nose of the tool by hand and then actuate the tool. However, in recent years automated riveting systems have become popular in an effort to reduce assembly costs and repetitive disorders. The cost of such tools ranges from US$1,500 for auto-feed pneumatics to US$50,000 for fully robotic systems. While structural blind rivets using a locked mandrel are common, there are also aircraft applications using "non-structural" blind rivets where the reduced, but still predictable, strength of the rivet without the mandrel is used as the design strength. A method popularized by Chris Heintz of Zenith Aircraft uses a common flat-head (countersunk) rivet which is drawn into a specially machined nosepiece that forms it into a round-head rivet, taking up much of the variation inherent in hole size found in amateur aircraft construction. Aircraft designed with these rivets use rivet strength figures measured with the mandrel removed. Oscar rivets Oscar rivets are similar to blind rivets in appearance and installation but have splits (typically three) along the hollow shaft. These splits cause the shaft to fold and flare out (similar to the wings on a toggle bolt's nut) as the mandrel is drawn into the rivet. This flare (or flange) provides a wide bearing surface that reduces the chance of rivet pull-out. This design is ideal for high-vibration applications where the back surface is inaccessible. A version of the Oscar rivet is the Olympic rivet which uses an aluminum mandrel that is drawn into the rivet head. After installation, the head and mandrel are shaved off flush resulting in an appearance closely resembling a brazier head-driven rivet. They are used in the repair of Airstream trailers to replicate the look of the original rivets. Drive rivet A drive rivet is a form of blind rivet that has a short mandrel protruding from the head that is driven in with a hammer to flare out the end inserted in the hole. This is commonly used to rivet wood panels into place since the hole does not need to be drilled all the way through the panel, producing an aesthetically pleasing appearance. They can also be used with plastic, metal, and other materials and require no special setting tool other than a hammer and possibly a backing block (steel or some other dense material) placed behind the location of the rivet while hammering it into place. Drive rivets have less clamping force than most other rivets. Drive screws, possibly another name for drive rivets, are commonly used to hold nameplates into blind holes. They typically have spiral threads that grip the side of the hole. Flush rivet A flush rivet is used primarily on external metal surfaces where good appearance and the elimination of unnecessary aerodynamic drag are important. A flush rivet takes advantage of a countersunk or dimpled hole; they are also commonly referred to as countersunk rivets. Countersunk or flush rivets are used extensively on the exterior of aircraft for aerodynamic reasons such as reduced drag and turbulence. Additional post-installation machining may be performed to perfect the airflow. Flush riveting was invented in America in the 1930s by Vladimir Pavlecka and his team at Douglas Aircraft. 
The technology was used by Howard Hughes in the design and production of his H-1 plane, the Hughes H-1 Racer. Friction-lock rivet These resemble an expanding bolt except the shaft snaps below the surface when the tension is sufficient. The blind end may be either countersunk ('flush') or dome-shaped. One early form of blind rivet that was the first to be widely used for aircraft construction and repair was the Cherry friction-lock rivet. Originally, Cherry friction locks were available in two styles, hollow shank pull-through and self-plugging types. The pull-through type is no longer common; however, the self-plugging Cherry friction-lock rivet is still used for repairing light aircraft. Cherry friction-lock rivets are available in two head styles, universal and 100-degree countersunk. Furthermore, they are usually supplied in three standard diameters, 1/8, 5/32 and 3/16 inch. A friction-lock rivet cannot replace a solid shank rivet, size for size. When a friction lock is used to replace a solid shank rivet, it must be at least one size larger in diameter because the friction-lock rivet loses considerable strength if its center stem falls out due to vibrations or damage. Rivet alloys, shear strengths, and driving condition Self-piercing rivets Self-pierce riveting (SPR) is a process of joining two or more materials using an engineered rivet. Unlike solid, blind and semi-tubular rivets, self-pierce rivets do not require a drilled or punched hole. SPRs are cold-forged to a semi-tubular shape and contain a partial hole to the opposite end of the head. The end geometry of the rivet has a chamfered poke that helps the rivet pierce the materials being joined. A hydraulic or electric servo rivet setter drives the rivet into the material, and an upsetting die provides a cavity for the displaced bottom sheet material to flow. The SPR process is described in here SPR process. The self-pierce rivet fully pierces the top sheet material(s) but only partially pierces the bottom sheet. As the tail end of the rivet does not break through the bottom sheet it provides a water or gas-tight joint. With the influence of the upsetting die, the tail end of the rivet flares and interlocks into the bottom sheet forming a low profile button. Rivets need to be harder than the materials being joined. they are heat treated to various levels of hardness depending on the material's ductility and hardness. Rivets come in a range of diameters and lengths depending on the materials being joined; head styles are either flush countersunk or pan heads. Depending on the rivet setter configuration, i.e. hydraulic, servo, stroke, nose-to-die gap, feed system etc., cycle times can be as quick as one second. Rivets are typically fed to the rivet setter nose from tape and come in cassette or spool form for continuous production. Riveting systems can be manual or automated depending on the application requirements; all systems are very flexible in terms of product design and ease of integration into a manufacturing process. SPR joins a range of dissimilar materials such as steel, aluminum, plastics, composites and pre-coated or pre-painted materials. Benefits include low energy demands, no heat, fumes, sparks or waste and very repeatable quality. Compression rivets Compression rivets are commonly used for functional or decorative purposes on clothing, accessories, and other items. They have male and female halves that press together, through a hole in the material. Double cap rivets have aesthetic caps on both sides. 
Single cap rivets have caps on just one side; the other side is low-profile with a visible hole. Cutlery rivets are commonly used to attach handles to knife blades and other utensils.
Sizes
Rivets come in both inch series and metric series:
Imperial units (fractions of inches), with diameters such as 1/8″ or 5/16″.
Système international (SI) units, with diameters such as 3 mm or 8 mm.
The main official standards relate more to technical parameters such as ultimate tensile strength and surface finishing than to physical length and diameter.
Imperial
Rivet diameters are commonly measured in -inch increments and their lengths in -inch increments, expressed as "dash numbers" at the end of the rivet identification number. A "dash 3 dash 4" (XXXXXX-3-4) designation indicates a -inch diameter and -inch (or -inch) length. Some rivet lengths are also available in half sizes, and have a dash number such as –3.5 ( inch) to indicate that they are half-size. The letters and digits in a rivet's identification number that precede its dash numbers indicate the specification under which the rivet was manufactured and the head style. On many rivets, a size in 32nds may be stamped on the rivet head. Other markings on the rivet head, such as small raised or depressed dimples or small raised bars, indicate the rivet's alloy. To become a proper fastener, a rivet should be placed in a hole ideally 4–6 thousandths of an inch larger in diameter. This allows the rivet to be easily and fully inserted; setting then expands the rivet, tightly filling the gap and maximizing strength. (A short illustrative sketch of this dash-number scheme appears at the end of this article.)
Metric
Rivet diameters and lengths are measured in millimeters. Conveniently, the quoted rivet diameter relates to the drill required to make a hole to accept the rivet, rather than to the actual diameter of the rivet, which is slightly smaller. This facilitates the use of a simple drill gauge to check that rivet and drill are compatible. For general use, diameters between 2 mm and 20 mm and lengths from 5 mm to 50 mm are common. The design type, material and any finish are usually expressed in plain language (often English).
Applications
Before welding techniques and bolted joints were developed, metal-framed buildings and structures such as the Eiffel Tower, Shukhov Tower and the Sydney Harbour Bridge were generally held together by riveting, as were automobile chassis. Riveting is still widely used in applications where light weight and high strength are critical, such as in aircraft. Sheet metal alloys used in aircraft skins are generally not welded, because the skin of an aircraft in high-speed flight is repeatedly stretched, and welding could introduce deformation and changes in the material's properties. Riveted joints also transmit less vibration, thereby reducing the risk of cracking, and remain more reliable under such repeated stress changes. To reduce air resistance, countersunk rivets are generally used in aircraft skins. A large number of countries used rivets in the construction of armored tanks during World War II, including the M3 Lee (General Grant) manufactured in the United States. However, many countries soon learned that rivets were a large weakness in tank design, since if a tank was hit by a large projectile it would dislocate the rivets and they would fly around the inside of the tank and injure or kill the crew, even if the projectile did not penetrate the armor.
Some countries such as Italy, Japan, and Britain used rivets in some or all of their tank designs throughout the war for various reasons, such as a lack of welding equipment or an inability to weld very thick plates of armor effectively. Blind rivets are used almost universally in the construction of plywood road cases. Common but more exotic uses of rivets are to reinforce jeans and to produce the distinctive sound of a sizzle cymbal.
Joint analysis
The stress and shear in a rivet are analyzed like those in a bolted joint. However, it is not wise to combine rivets with bolts and screws in the same joint. Rivets fill the hole where they are installed to establish a very tight fit (often called an interference fit). It is difficult or impossible to obtain such a tight fit with other fasteners. The result is that rivets in the same joint with loose fasteners carry more of the load, because they are effectively stiffer. The rivet can then fail before it can redistribute load to the other, loose-fit fasteners such as bolts and screws. This often causes catastrophic failure of the joint when the fasteners unzip. In general, a joint composed of similar fasteners is the most efficient, because all fasteners reach capacity simultaneously.
Installation
Solid and semi-tubular rivets
There are several methods for installing solid rivets:
Manual, with hammer and handset or bucking bar
Pneumatic hammers
Handheld squeezers
Riveting machines
Pin hammer and rivet set
Rivets small enough and soft enough are often bucked. In this process, the installer places a rivet gun against the factory head and holds a bucking bar against the tail or a hard working surface. The bucking bar is a specially shaped solid block of metal. The rivet gun provides a series of high-impulse forces that upset and work-harden the tail of the rivet between the work and the inertia of the bucking bar. Rivets that are large or hard may be more easily installed by squeezing instead. In this process, a tool in contact with each end of the rivet squeezes the rivet to deform it. Rivets may also be upset by hand, using a ball-peen hammer. The head is placed in a special hole made to accommodate it, known as a rivet set. The hammer is applied to the buck-tail of the rivet, rolling an edge so that it is flush against the material.
Testing
Solid rivets for construction
A hammer is also used to "ring" an installed rivet, as a non-destructive test for tightness and imperfections. The inspector taps the head (usually the factory head) of the rivet with the hammer while touching the rivet and base plate lightly with the other hand, and judges the quality of the audibly returned sound and the feel of the sound traveling through the metal to the operator's fingers. A rivet tightly set in its hole returns a clean and clear ring, while a loose rivet produces a recognizably different sound.
Testing of blind rivets
A blind rivet has strength properties that can be measured in terms of shear and tensile strength. Occasionally rivets also undergo performance testing for other critical features, such as pushout force, break load and salt spray resistance. A standardized destructive test according to the Inch Fastener Standards is widely accepted. The shear test involves installing a rivet into two plates of specified hardness and thickness and measuring the force necessary to shear the plates. The tensile test is basically the same, except that it measures the pullout strength. All blind rivets produced must meet the IFI-135 standard.
These tests determine the strength of the rivet, not the strength of the assembly. To determine the strength of the assembly, a user must consult an engineering guide or the Machinery's Handbook.
Alternatives
Adhesives
Bolted joints
Brazing
Clinching
Folded joints
Nails
Screws
Soldering
Welding
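As a rough illustration of the imperial dash-number scheme described under Sizes, the sketch below decodes a dash number and applies the 4–6 thousandths-of-an-inch hole clearance mentioned there. It is a minimal sketch assuming the common convention of 1/32-inch diameter increments and 1/16-inch length increments; the part number XXXXXX-3-4 and the helper functions are hypothetical, and any real part must be checked against its governing specification.

```python
from fractions import Fraction

def decode_dash_numbers(diameter_dash: float, length_dash: float):
    """Decode rivet dash numbers, assuming the common convention that
    diameter dashes count 1/32-inch increments and length dashes count
    1/16-inch increments (verify against the governing specification)."""
    diameter = Fraction(int(diameter_dash), 32)
    # Half-size lengths such as -3.5 are allowed, so double before converting.
    length = Fraction(int(length_dash * 2), 32)
    return diameter, length

def hole_size_range(diameter: Fraction):
    """Suggested hole diameter: 4-6 thousandths of an inch over the
    nominal rivet diameter, per the clearance noted above."""
    d = float(diameter)
    return round(d + 0.004, 4), round(d + 0.006, 4)

# Example: a hypothetical XXXXXX-3-4 rivet
dia, length = decode_dash_numbers(3, 4)
print(f'diameter {dia} in, length {length} in')   # 3/32 in, 1/4 in
print('hole range (in):', hole_size_range(dia))   # (0.0978, 0.0998)
```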
Ostracod
Ostracods, or ostracodes, are a class of the Crustacea (class Ostracoda), sometimes known as seed shrimp. Some 33,000 species (only 13,000 of which are extant) have been identified, grouped into 7 valid orders. They are small crustaceans, typically around in size, but varying from in the case of the marine Gigantocypris. The largest known freshwater species is Megalocypris princeps, which reaches 8 mm in length. In most cases, their bodies are flattened from side to side and protected by a bivalve-like valve or "shell" made of chitin, and often calcium carbonate. The family Entocytheridae and many planktonic forms do not have calcium carbonate. The hinge of the two valves is in the upper (dorsal) region of the body. Ostracods are grouped together based on shell and soft-part morphology, and molecular studies have not unequivocally supported the group's monophyly. They have a wide range of diets, and the class includes carnivores, herbivores, scavengers and filter feeders, but most ostracods are deposit feeders.
Etymology
Ostracod comes from the Greek óstrakon, meaning shell or tile.
Fossils
Ostracods are "by far the most common arthropods in the fossil record", with fossils being found from the early Ordovician to the present. An outline microfaunal zonal scheme based on both Foraminifera and Ostracoda was compiled by M. B. Hart. Freshwater ostracods have even been found in Baltic amber of Eocene age, having presumably been washed onto trees during floods. Ostracods have been particularly useful for the biozonation of marine strata on a local or regional scale, and they are invaluable indicators of paleoenvironments because of their widespread occurrence, small size, and easily preservable, generally moulted, calcified bivalve carapaces; the valves are a commonly found microfossil. A find in Queensland, Australia, in 2013, announced in May 2014, at the Bicentennary Site in the Riversleigh World Heritage area, revealed both male and female specimens with very well preserved soft tissue. This set the Guinness World Record for the oldest penis. The males had observable sperm, the oldest yet seen, which when analysed showed internal structures and has been assessed as the largest sperm (relative to body size) of any animal recorded. It was assessed that the fossilisation was achieved within several days, due to phosphorus in the bat droppings of the cave where the ostracods were living.
Description
The body of an ostracod is encased by a carapace originating from the head region, and consists of two valves superficially resembling the shell of a clam. A distinction is made between the valve (hard parts) and the body with its appendages (soft parts). Studies of the embryonic development in Myodocopida show that the bivalved carapace develops from two independent buds of the carapace valves. As the two halves grow, they meet in the middle. In Manawa, an ostracod in the order Palaeocopida, the carapace originates as a single element and folds at the midline during growth.
Body parts
The body consists of a head and thorax, separated by a slight constriction. Unlike many other crustaceans, the body is not clearly divided into segments. Most species have completely or partly lost their trunk segmentation, and there are no boundaries between the thorax and abdomen; it has therefore been impossible to tell whether the first pair of limbs after the maxillae belongs to the head or the thorax.
With a few exceptions, such as the platycopids with their 11-segmented trunk, the abdomen in ostracods has no visible segments. The head is the largest part of the body, and bears four pairs of appendages. Two pairs of well-developed antennae are used to swim through the water. In addition, there are a pair of mandibles and a pair of maxillae. The thorax has three primary pairs of appendages. The first of these has different functions in different groups. It can be used for feeding (Cypridoidea) or for walking (Cytheroidea), and in some species it has evolved into a male clasping organ. The second pair is mainly used for locomotion, and the third is used for walking or cleaning, but can also be reduced or absent. Both the second and third pairs are absent in the suborder Cladocopina. In the Myodocopina the third pair is a multisegmented cleaning organ that resembles a worm. The external genitals seem to originate from the fusion of three to five appendages. The two "rami", or projections, from the tip of the tail point downward and slightly forward from the rear of the shell. All ostracods have a pair of "ventilatory appendages" that beat rhythmically, creating a water current between the body and the inner surface of the carapace. Podocopa, the largest subclass, have no gills, heart or circulatory system, so gas exchange takes place over the entire body surface. The other subclass of ostracods, the Myodocopa, do have a heart, and the family Cylindroleberididae also has 6–8 lamellar gills. Certain other larger members of Myodocopa, even if they lack gills, have a circulatory system in which hemolymph sinuses absorb oxygen through special areas on the inner wall of the carapace. In addition, the respiratory protein hemocyanin has been found in the two orders Myodocopida and Platycopida. Nitrogenous waste is excreted through glands on the maxillae, antennae, or both. The primary sense of ostracods is likely touch, as they have several sensitive hairs on their bodies and appendages. Compound eyes are only found in Myodocopina within the Myodocopa. The order Halocyprida in the same subclass is eyeless. Podocopid ostracods have just a naupliar eye consisting of two lateral ocelli and a single ventral ocellus, but the ventral one is absent in some species. Platycopida was assumed to be completely eyeless, but two species, Keijcyoidea infralittoralis and Cytherella sordida, have both been found to possess a nauplius eye as well.
Palaeoclimatic reconstruction
A new method is in development called the mutual ostracod temperature range (MOTR), similar to the mutual climatic range (MCR) used for beetles, which can be used to infer palaeotemperatures (an illustrative sketch of the underlying idea appears at the end of this article). The ratio of oxygen-18 to oxygen-16 (δ18O) and the ratio of magnesium to calcium (Mg/Ca) in the calcite of ostracod valves can be used to infer information about past hydrological regimes, global ice volume and water temperatures.
Distribution
Ecologically, marine ostracods can be part of the zooplankton or (most commonly) are part of the benthos, living on or inside the upper layer of the sea floor. Ostracods have been found as deep as 9,307 m (genus Krithe in family Krithidae). The subclass Myodocopa and the two podocop orders Palaeocopida and Platycopida are restricted to marine environments (except for a few brackish-water species of Platycopida), but non-marine forms occur in the four superfamilies Terrestricytheroidea, Cypridoidea, Darwinuloidea, and Cytheroidea in the order Podocopida.
Terrestricytheroidea is semi-terrestrial and usually found in brackish and marine-influenced environments such as salt marshes, but not in freshwater. The other three superfamilies also live in freshwater (Darwinuloidea is exclusively non-marine). Of these three, only Cypridoidea have freshwater species able to swim. Representatives living in terrestrial habitats are also found in all three freshwater groups, such as the genus Mesocypris, which is known from humid forest soils of South Africa, Australia and New Zealand. As of 2008, around 2,000 species and 200 genera of non-marine ostracods were known. However, a large portion of this diversity is still undescribed, as indicated by undocumented diversity hotspots of temporary habitats in Africa and Australia. Non-marine species have been found to live in sulfidic cave ecosystems such as the Movile Cave, deep groundwaters, hypersaline waters, acidic waters with pH as low as 3.4, phytotelmata in plants like bromeliads, and in temperatures varying from almost freezing to more than 50 °C in hot springs. Of the known specific and generic diversity of non-marine ostracods, half (1,000 species, 100 genera) belongs to one family (of 13 families), Cyprididae. Many Cyprididae occur in temporary water bodies and have drought-resistant eggs, mixed/parthenogenetic reproduction, and the ability to swim. These biological attributes preadapt them to form successful radiations in these habitats.
Ecology
Lifecycle
Male ostracods have two penises, corresponding to two genital openings (gonopores) on the female. The individual sperm are often large, and are coiled up within the testis prior to mating; in some cases, the uncoiled sperm can be up to six times the length of the male ostracod itself. Mating typically occurs during swarming, with large numbers of females swimming to join the males. Some species are partially or wholly parthenogenetic. The superfamily Darwinuloidea was assumed to have reproduced asexually for the last 200 million years, but rare males have since been discovered in one of its species. In the subclass Myodocopa, all members of the order Myodocopida have brood care, releasing their offspring as first instars, allowing a pelagic lifestyle. In the order Halocyprida the eggs are released directly into the sea, except for a single genus with brood care. In the subclass Podocopa, brood care is only found in Darwinulocopina and some Cytherocopina in the order Podocopida. In the remaining Podocopa it is common to glue the eggs to a firm surface, such as vegetation or the substratum. These eggs are often resting eggs, and remain dormant during desiccation and extreme temperatures, only hatching when exposed to more favorable conditions. Species adapted to vernal pools can reach sexual maturity in just 30 days after hatching. There is no larval stage or metamorphosis (development is direct). Instead, they hatch from the egg as juveniles with the bivalved carapace and at least three functional limbs. As the juvenile grows through a series of molts, it acquires more limbs and further develops the existing ones. It reaches sexual maturity in the final instar and then never molts again. The number of instars before adulthood varies: in Podocopa it is eight or nine (but the family Entocytheridae and suborder Bairdiocopina have only seven), Halocyprida go through six or seven, and Myodocopida only four to six. As adults they are able to produce offspring many times (iteroparity).
Predators
A variety of fauna prey upon ostracods in both aquatic and terrestrial environments. An example of predation in the marine environment is the action of certain cuspidariid clams, which detect ostracods with cilia protruding from their inhalant structures and then draw the ostracod prey in by a violent suction action. Predation by higher animals also occurs; for example, amphibians such as the rough-skinned newt prey upon certain ostracods. Whale sharks also seem to eat them as part of their filter feeding.
Bioluminescence
Some ostracods, such as Vargula hilgendorfii, have a light organ in which they produce luminescent chemicals. These ostracods are called "blue sand" or "blue tears" and glow blue in the dark. Their bioluminescent properties made them valuable to the Japanese during World War II, when the Japanese army collected large amounts from the ocean to use as a convenient light for reading maps and other papers at night. The light from these ostracods, called umihotaru in Japanese, was sufficient to read by but not bright enough to give away the troops' position to enemies. Bioluminescence has evolved twice in ostracods: once in Cypridinidae, and once in Halocyprididae. In bioluminescent Halocyprididae a green light is produced within carapace glands, and in Cypridinidae a blue light is produced and extruded from the upper lip. Most species use the light as a defense against predation, but the males of at least 75 known species of the Cypridinidae, restricted to the Caribbean, use pulses of light to attract females. In some species the roles are reversed, with the females using pulses of light to attract males, a reversal also seen in other light-signalling animals such as glow-worms. This bioluminescent courtship display has only evolved once in ostracods, in a cypridinid group named Luxorina that originated at least 151 million years ago. Ostracods with bioluminescent courtship show higher rates of speciation than those that simply use light as protection against predators. The male will continue to swim after releasing its small ball of bioluminescent mucus, but the female is able to read the display to pinpoint the male's location. In one species hundreds of thousands of males synchronize their light display, and when one male creates a pattern of light, the new pattern spreads out as the neighboring males repeat it.
Classification
Early work indicated that Ostracoda may not be monophyletic, and early molecular phylogeny was ambiguous on this front. Recent combined analyses of molecular and morphological data suggested monophyly in the analyses with the broadest taxon sampling, but this monophyly had no or very little support (bootstrap values of 0, 17 and 46, where values above 95 are usually considered sufficient support for a taxon).
Class Ostracoda is divided into the following living clades:
Subclass Myodocopa
Order Myodocopida
Suborder Myodocopina
Superfamily Cypridinoidea (1 family)
Superfamily Cylindroleberidoidea (1 family)
Superfamily Sarsielloidea (3 families)
Order Halocyprida
Suborder Halocypridina
Superfamily Thaumatocypridoidea (1 family)
Superfamily Halocypridoidea (1 family)
Suborder Cladocopina
Superfamily Cladocopoidea (1 family)
Subclass Podocopa
Order Palaeocopida
Suborder Kirkbyocopina
Superfamily Puncioidea (1 family)
Order Platycopida
Suborder Platycopina
Superfamily Cytherelloidea (1 family)
Order Podocopida
Suborder Cytherocopina
Superfamily Cytheroidea (27 families)
Superfamily Terrestricytheroidea (1 family)
Suborder Cypridocopina
Superfamily Macrocypridoidea (1 family)
Superfamily Pontocypridoidea (1 family)
Superfamily Cypridoidea (4 families)
Suborder Darwinulocopina
Superfamily Darwinuloidea (1 family)
Suborder Bairdiocopina
Superfamily Bairdioidea (3 families)
Suborder Sigilliocopina
Superfamily Sigillioidea (1 family)
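The mutual ostracod temperature range method mentioned under Palaeoclimatic reconstruction is, at heart, an intersection of modern tolerance ranges. The sketch below is a minimal illustration of that idea only; the species tolerances are invented, and the published method involves careful calibration against modern distributions.

```python
def mutual_temperature_range(tolerances):
    """Infer a palaeotemperature interval as the overlap of the modern
    temperature tolerances of all species found together in a fossil
    assemblage (the idea behind MOTR; the values used here are invented)."""
    lows, highs = zip(*tolerances)
    low, high = max(lows), min(highs)
    if low > high:
        raise ValueError("no mutual range: tolerances do not overlap")
    return low, high

# Hypothetical modern tolerances (degrees C) for three co-occurring species
assemblage = [(4.0, 18.0), (9.0, 25.0), (7.5, 21.0)]
print(mutual_temperature_range(assemblage))  # (9.0, 18.0)
```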
Newtonian telescope
The Newtonian telescope, also called the Newtonian reflector or just a Newtonian, is a type of reflecting telescope invented by the English scientist Sir Isaac Newton, using a concave primary mirror and a flat diagonal secondary mirror. Newton's first reflecting telescope was completed in 1668 and is the earliest known functional reflecting telescope. The Newtonian telescope's simple design has made it very popular with amateur telescope makers.
Description
A Newtonian telescope is composed of a primary mirror or objective, usually parabolic in shape, and a smaller flat secondary mirror. The primary mirror collects light from the region of sky at which the telescope is pointed, while the secondary mirror redirects the light out of the optical axis at a right angle so it can be viewed with an eyepiece.
Advantages of the Newtonian design
They are free of the chromatic aberration found in refracting telescopes.
Newtonian telescopes are usually less expensive for any given objective diameter (or aperture) than comparable quality telescopes of other types.
Since there is only one surface that needs to be ground and polished into a complex shape, overall fabrication is much simpler than for other telescope designs (Gregorians, Cassegrains, and early refractors had two surfaces that needed figuring; later achromatic refractor objectives had four surfaces that have to be figured).
A short focal ratio can be more easily obtained, leading to a wider field of view.
The eyepiece is located at the top end of the telescope. Combined with short f-ratios this can allow for a much more compact mounting system, reducing cost and adding to portability.
Disadvantages of the Newtonian design
Newtonians, like other reflecting telescope designs using parabolic mirrors, suffer from coma, an off-axis aberration which causes imagery to flare inward and towards the optical axis (stars towards the edge of the field of view take on a comet-like shape). This flare is zero on-axis, is linear with increasing field angle, and is inversely proportional to the square of the mirror focal ratio (the mirror focal length divided by the mirror diameter). The formula for third-order tangential coma is 3θ / 16F², where θ is the angle off axis to the image in radians and F is the focal ratio (a worked numerical example appears at the end of this article). Newtonians with a focal ratio of f/6 or lower (f/5, for example) are considered to have increasingly serious coma for visual or photographic use. Low-focal-ratio primary mirrors can be combined with lenses that correct for coma to increase image sharpness over the field.
Newtonians have a central obstruction due to the secondary mirror in the light path. This obstruction, and also the diffraction spikes caused by the support structure (called the "spider") of the secondary mirror, reduce contrast. Visually, these effects can be reduced by using a two- or three-legged curved spider. This reduces the diffraction sidelobe intensities by a factor of about four and helps to improve image contrast, with the potential penalty that curved spiders are more prone to wind-induced vibration.
For portable Newtonians collimation can be a problem. The primary and secondary mirrors can get out of alignment from the shocks associated with transport and handling, meaning the telescope may need to be re-aligned (collimated) every time it is set up. Other designs such as refractors and catadioptrics (specifically Maksutov–Cassegrains) have fixed collimation.
The focal plane is at an asymmetrical point and at the top of the optical tube assembly.
For visual observing, most notably on equatorial mounts, tube orientation can put the eyepiece in a very poor viewing position, and larger telescopes require ladders or support structures to access it. Some designs provide mechanisms for rotating the eyepiece mount or the entire tube assembly to a better position. For research telescopes, counterbalancing very heavy instruments mounted at this focus has to be taken into consideration.
Variations
There are several variations on the Newtonian design that add a lens to the system, creating a catadioptric telescope. This is done to correct spherical aberration or to reduce cost.
Schmidt–Newtonian
A Schmidt–Newtonian telescope combines the Newtonian optical design with a full-aperture Schmidt corrector plate in front of the primary mirror that not only corrects spherical aberration but can also support the secondary mirror. The resulting system has less coma and fewer diffraction effects induced by the secondary mirror support.
Maksutov–Newtonian
Similarly, a Maksutov telescope's meniscus-shaped corrector can be added to the Newtonian configuration, which gives it minimal aberration over a wide field of view, with one-fourth the coma of a similar standard Newtonian and one-half the coma of a Schmidt–Newtonian. Diffraction can also be minimized by using a high focal ratio with a proportionally small diagonal mirror mounted on the corrector.
Jones–Bird
A Jones–Bird Newtonian (sometimes called a Bird–Jones) uses a spherical primary mirror in place of a parabolic one, with spherical aberration corrected by a sub-aperture corrector lens, usually mounted inside the focuser tube or in front of the secondary mirror. This design reduces the size and cost of the telescope, with a shorter overall tube length (the corrector extending the focal length in a "telephoto"-type layout) combined with a less costly spherical mirror. Commercially produced versions of this design have been noted to be optically compromised, due to the difficulty of producing a correctly shaped sub-aperture corrector, and are targeted at the inexpensive end of the telescope market.
History
Newton's idea for a reflecting telescope was not new. Galileo Galilei and Giovanni Francesco Sagredo had discussed using a mirror as the image-forming objective soon after the invention of the refracting telescope, and others, such as Niccolò Zucchi, claimed to have experimented with the idea as far back as 1616. Newton may even have read James Gregory's 1663 book Optica Promota, which described reflecting telescope designs using parabolic mirrors (a telescope Gregory had been trying unsuccessfully to build). Newton built his reflecting telescope because he suspected it could prove his theory that white light is composed of a spectrum of colours. Colour distortion (chromatic aberration) was the primary fault of refracting telescopes of Newton's day, and there were many theories as to what caused it. During the mid-1660s, through his work on the theory of colour, Newton concluded that this defect was caused by the lens of the refracting telescope behaving the same as the prisms he was experimenting with, breaking white light into a rainbow of colours around bright astronomical objects. If this were true, then chromatic aberration could be eliminated by building a telescope which did not use a lens – a reflecting telescope. In late 1668 Isaac Newton built his first reflecting telescope. He chose an alloy (speculum metal) of tin and copper as the most suitable material for his objective mirror.
He later devised means for shaping and grinding the mirror and may have been the first to use a pitch lap to polish the optical surface. He chose a spherical shape for his mirror instead of a parabola to simplify construction; even though it would introduce spherical aberration, it would still correct chromatic aberration. He added to his reflector what is the hallmark of the design of a Newtonian telescope, a secondary diagonally mounted mirror near the primary mirror's focus to reflect the image at a 90° angle to an eyepiece mounted on the side of the telescope. This unique addition allowed the image to be viewed with minimal obstruction of the objective mirror. He also made the tube, mount, and fittings. Newton's first version had a primary mirror diameter of and a focal ratio of f/5. He found that the telescope worked without colour distortion and that he could see the four Galilean moons of Jupiter and the crescent phase of the planet Venus with it. Newton's friend Isaac Barrow showed a second telescope to a small group from the Royal Society of London at the end of 1671. They were so impressed with it that they demonstrated it to Charles II in January 1672. Newton was admitted as a fellow of the society in the same year. Like Gregory before him, Newton found it hard to construct an effective reflector. It was difficult to grind the speculum metal to a regular curvature. The surface also tarnished rapidly; the consequent low reflectivity of the mirror and also its small size meant that the view through the telescope was very dim compared to contemporary refractors. Because of these difficulties in construction, the Newtonian reflecting telescope was initially not widely adopted. In 1721 John Hadley showed a much-improved model to the Royal Society. Hadley had solved many of the problems of making a parabolic mirror. His Newtonian with a mirror diameter of compared favourably with the large aerial refracting telescopes of the day.
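As a rough numerical illustration of the coma formula quoted above (angular tangential coma = 3θ / 16F², with θ in radians), the sketch below compares an f/4 and an f/8 Newtonian at the same field angle. The numbers are illustrative only and show the inverse-square dependence on focal ratio described in the text.

```python
import math

def tangential_coma(theta_deg: float, focal_ratio: float) -> float:
    """Third-order angular tangential coma, 3*theta / (16*F**2),
    for field angle theta (converted to radians) and focal ratio F.
    Returns the blur angle in radians."""
    theta = math.radians(theta_deg)
    return 3.0 * theta / (16.0 * focal_ratio ** 2)

for F in (4, 8):
    coma_rad = tangential_coma(0.5, F)            # 0.5 degrees off-axis
    coma_arcsec = math.degrees(coma_rad) * 3600   # convert to arcseconds
    print(f"f/{F}: ~{coma_arcsec:.0f} arcsec of coma at 0.5 deg off-axis")
# f/4: ~21 arcsec; f/8: ~5 arcsec -- doubling the focal ratio
# reduces coma by a factor of four.
```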
Parking lot
A parking lot or car park (British English), also known as a car lot, is a cleared area intended for parking vehicles. The term usually refers to an area dedicated only for parking, with a durable or semi-durable surface. In most jurisdictions where cars are the dominant mode of transportation, parking lots are a major feature of cities and suburban areas. Shopping malls, sports stadiums, and other similar venues often have immense parking lots.
Pinus elliottii
Pinus elliottii, commonly known as slash pine, is a conifer tree native to the Southeastern United States. Slash pine is named after the "slashes" – swampy ground overgrown with trees and bushes – that constitute its habitat. Other common names include swamp pine, yellow slash pine, and southern Florida pine. Slash pine has two different varieties: P. e. var. elliottii and P. e. var. densa. Historically, slash pine has been an important economic timber for naval stores, turpentine, and resin. The wood of slash pine is known for its unusually high strength, especially for a pine. It exceeds many hardwoods and is even comparable to very dense woods such as black ironwood.
Description and taxonomy
This tree is fast-growing, but not very long-lived by pine standards (to 200 years). It reaches heights of with a trunk diameter of . The leaves are needle-like, very slender, in clusters of two or three, and long. The cones are glossy red-brown, in length, with a short (), thick prickle on each scale. It is known for its conical shape and unusually high strength, especially for a pine. Its wood has an average crush strength of 8,140 lb/in² (56.1 MPa), which exceeds that of many hardwoods such as white ash (7,410 lb/in²) and black maple (6,680 lb/in²). It is not as strong as black ironwood (9,940 lb/in²), but because its average density is less than half that of ironwood, slash pine has a far greater strength-to-weight ratio (a back-of-the-envelope comparison appears at the end of this article). It may be distinguished from the related loblolly pine (P. taeda) by its somewhat longer, glossier needles and larger red-brown cones, and from longleaf pine (P. palustris) by its shorter, more slender needles and smaller cones with less broad scales. Two varieties of P. elliottii are described, but recent genetic studies have indicated that the varieties may not be more closely related to each other than they are to other pines in the Southeast. If this is the case, reclassifying these varieties as separate species would be warranted. P. elliottii can hybridize with P. taeda, sand pine (Pinus clausa), and P. palustris. The two commonly accepted varieties are the following:
P. e. var. elliottii (typical slash pine) ranges from South Carolina to Louisiana, and south to central Florida. Its leaves occur in bundles, fascicles of twos and threes, mostly threes, and the cones are larger, .
P. e. var. densa (South Florida slash pine, Dade County pine) is found in the pine rocklands of southern Florida and the Florida Keys, including the Everglades. Leaves are nearly all in bundles of two, with longer needles. The cones are smaller, , the wood is denser, and the tree has a thicker taproot. Unlike the typical variety of slash pine, seedlings of P. e. var. densa have a "grass stage" similar to longleaf pine. P. e. var. densa is not frost-tolerant, which limits its range to South Florida.
Range and habitat
Communities dominated by slash pine are termed "slash pine forests". Slash pine is predominantly found in Florida and Georgia, and extends from South Carolina west to southeastern Louisiana, and south to the Florida Keys. It is common in East Texas, where it was first planted at the E.O. Siecke State Forest in 1926. The natural habitat is sandy subtropical maritime forests and wet flatwoods. Slash pine generally grows better in warm, humid areas where the average annual temperature is above , with extreme ranges from . Factors such as competition, fire, and precipitation may limit the natural distribution of these trees.
Slash pines are able to grow in an array of soils, but pine stands that are close to bodies of water such as swamps and ponds grow better because of higher soil moisture and seedling protection from wildfire. These forests have been managed through controlled fires since the beginning of the 20th century. Within its first year, P. elliottii is particularly susceptible to seedling mortality caused by fire. P. e. var. densa is more fire-resistant than P. e. var. elliottii because it has thicker bark.
Fire ecology
History
Fire has long been an important element in Southeastern forests. Native Americans burned land to improve grass growth for grazing and visibility for hunting. When European settlers arrived in the New World, they brought new diseases that severely diminished the Native American populations. Over time, with the lack of consistent burning, much of the open land of the South reverted to forest land. Logging began to increase in the Southeast, which created some tension between the loggers and local farmers. The loggers wanted to continue to burn the forest, but the local farmers were concerned about how burning would affect cattle grazing and turpentine production. Fire maintenance has long been a controversial issue. In the 1940s, the Smokey Bear campaign to prevent wildfires promoted a shift toward fire suppression. Subsequently, many of these fire-dependent ecosystems became increasingly dominated by more shade-tolerant tree species (hardwoods). Despite many reports from the U.S. Forest Service about the benefits of fire for forage production, pine regeneration, control of tree pathogens, and reducing the risk of wildfires, controlled burning did not begin to regain traction until the 1950s and 1960s.
Uses
Without regular fire intervals in slash pine forests, the ecosystem can change over time. For example, in the northern range of slash pine, forests can convert from mesic flatwoods to denser mixed-hardwood canopies with trees such as oaks, hickory, and southern magnolia. In South Florida, the pine rocklands can convert to a rockland hammock dominated by woody shrubs and invasive plants. Invasive species are a major management issue in the South. Many pine trees and native plants are adapted to fire, meaning they require fire disturbance to open their pine cones, germinate seeds, and cue other metabolic processes. Fire can be a good management strategy against invasive species because many invasive plants are not adapted to fire; it can therefore eliminate the parent plants or reduce seed viability. Controlled burning is also used to help reduce pathogen load in an ecosystem. For example, fire can eliminate pest populations or resting fungal spores that could infect new seedlings. Low-intensity burns can also clear space in the understory and provide nutrient pulses that benefit the understory vegetation. Fire is also used to prevent "fuel" buildup, the highly flammable plants such as grasses and scrub under the canopy that could burn easily in a wildfire. Most prescribed burn intervals are about every 2–5 years, which allows the ecosystem to regenerate after each burn. Much of the South Florida pine rockland ecosystem is highly fragmented and has not been burned because of its proximity to buildings. Risks such as smoke, air quality, and residual particulate matter in the environment pose safety issues for controlled burns near homes and businesses.
Diseases and pests
Fusiform rust
Starting in the late 1950s, the emergence of fusiform rust on Southeastern pine trees, including slash pine, loblolly pine, and longleaf pine, led to massive tree mortality within the pine industry. This obligate parasitic pathogen is notorious for infecting young trees in newly planted areas within the first few years of growth. The pine industry was still rather new at the time of this initial outbreak, so many newly planted forests suffered large-scale mortality because the trees were not yet old enough to be resilient to the disease or to be harvested. Florida's pine industry in particular was booming, with an increase in plantation acreage from in 1952 to upwards of in 1990. Because of the complicated lifecycle of Cronartium quercuum f. sp. fusiforme, the fungal causal agent of fusiform rust, the management strategies of pruning diseased stems, reducing fertilization, and discarding infected seed were not sufficient to prevent million-dollar annual losses. Rust pathogens are difficult to manage because of their complicated reproductive lifecycles. C. quercuum f. sp. fusiforme is heteroecious, requiring two different plant hosts for reproduction, and is macrocyclic, meaning it produces all five spore stages typical of rust infections: basidiospores, teliospores, urediniospores, aeciospores, and spermatia. Oak trees are the secondary host for this pathogen. The primary inocula on pine are basidiospores, which infect the pine needles between March and May. The basidiospores germinate and grow into the stems of the tree, where the fungus can overwinter for 4–6 months in the wood. In the fall, the spermatia form, and they fertilize the aeciospores in the following spring. The aeciospores are released from the pine and are the primary inocula that infect the oak trees in the following growing season. Aeciospores grow through the oak leaves, producing urediniospores on the undersides of the leaves. These urediniospores reproduce clonally (asexually) and can continue to infect oaks as a secondary inoculum. Within two weeks of the primary urediniospore inoculation on the oak tree, teliospores are formed, which germinate into the basidiospores that infect the pine trees and complete the rust's life cycle. Symptoms on the pine include gall formation, stem swelling, cankers, bushiness, and dieback. The cankers in the stem allow secondary fungal infections or other pests to enter the trees easily. Understanding the climate conditions that can lead to rust outbreaks is an important component of management strategies, but this was not well understood in the early decades of the epidemic. More recent information has shown that certain weather patterns, such as high humidity, wet pine needles, and temperatures around for about 18 days, can increase the spread of basidiospores and so increase disease severity.
Managing fusiform rust
There are many ways to reduce high-hazard areas for fusiform rust, but it starts with understanding why fusiform rust occurs more often in some circumstances than others. Although newer genetic work from seedling nurseries has helped loblolly and slash pine become more resilient to fusiform rust, landowners do not always want, or cannot always afford, the improved seedlings, so there are several other ways to help reduce the possibility of fusiform rust infection.
The first step to reduce fusiform rust infection is to reduce the amount of site preparation used to establish the stand. Site preparation, while desirable, causes rapid growth in young pines; while growth is rapid, the outer layer of bark is thin enough for fusiform rust to infect the tree, often on the main stem. Once loblolly pines reach about eight years of age, more fertilization and other forest prescriptions can be used, because by then fusiform rust is less likely to infect the main stem. Because oaks are the alternate host for fusiform rust, hosting three of its spore stages, it is advisable to remove any hardwoods adjacent to a loblolly stand. This can be difficult, considering that oaks are also economically and environmentally important in the Southeast, but doing so leaves the pathogen at a dead end. In an older loblolly plantation, it is safe to keep the trees in rotation if the disease is not present along the main stem.
Pitch canker
Pitch canker, a monocyclic disease caused by the fungus Fusarium circinatum (previously named Fusarium moniliforme var. subglutinans), was first described in 1946 by Hepting and Roth. Disease levels remained low until the 1970s, when a massive epidemic of pitch canker caused mass tree mortality in Florida slash pine. Some hypotheses suggest that the pathogen may have originated in Mexico and was then introduced into Florida and later transmitted to California on diseased seed. The pathogen has been reported in Mexico; the high fungal diversity and low tree mortality from the disease there suggest that the pathogen may have co-evolved in Mexico before being introduced to other parts of the world. Many reports describe the pathogen as endemic to Florida, likely because the disease was introduced long ago and the population has since become more diverse. By 1974, over half of the slash pine population in Florida was infected with pitch canker. In areas where the pathogen is newly introduced, the fungal population is mostly clonal, because fewer mating types are present within the population, so there may be less sexual reproduction. Pitch canker infects nearly all pine species, including longleaf pine, shortleaf pine, and eastern white pine. The disease continues to be a problem in nurseries, and has been reported in other countries. A major problem in Florida is that artificial replanting of pines may be contributing to high disease incidence. The disease can be passed through seed and spores, but the fungus requires open wounds to infect the tree, such as those from insect damage, mechanical damage, or hail and other weather damage. The predominant symptoms include needle chlorosis and a reddening of shoots (called "flagging") that later die. Cankers or lesions that form on the trunks can turn the bark yellow or dark brown and cause resin to exude. Stems may die and become crystallized in resin-soaked lesions. Resin is generally produced in plants to protect against pathogens. Sometimes, the tissue above the canker dies, causing girdling of the stem. The severity of the disease depends on weather conditions, and infection may require moisture together with insect wounds or hail damage. Some insects such as bark beetles, spittle bugs, weevils, pine tip moths, and needle midges may vector the disease into the tree. F. circinatum was used to inoculate P. e. var. densa trees in an attempt to increase resin production for extraction, but this approach was ineffective.
Other fungi
The fungus species Thozetella pinicola was found on leaf litter of Pinus elliottii in Hong Kong in 2009.
Uses
This tree is widely grown in tree plantations. It is also used in horticulture.
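The strength-to-weight claim made in the Description and taxonomy section can be illustrated with simple arithmetic. The sketch below uses the crush strengths quoted in this article; the densities are rough illustrative assumptions (slash pine about 0.64 g/cm³, black ironwood about 1.3 g/cm³), chosen only to match the article's statement that slash pine's density is less than half that of ironwood.

```python
# Back-of-the-envelope strength-to-weight comparison. Crush strengths
# are from the article above; the densities are assumed values for
# illustration only, not measured figures.
woods = {
    "slash pine":     {"crush_psi": 8140, "density_g_cm3": 0.64},
    "black ironwood": {"crush_psi": 9940, "density_g_cm3": 1.30},
}

for name, w in woods.items():
    ratio = w["crush_psi"] / w["density_g_cm3"]
    print(f"{name}: {ratio:,.0f} psi per g/cm^3")
# Under these assumed densities, slash pine comes out with roughly
# 1.7 times the strength-to-weight ratio of black ironwood.
```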
Intermediate-mass black hole
An intermediate-mass black hole (IMBH) is a class of black hole with mass in the range of one hundred to one hundred thousand (10²–10⁵) solar masses: significantly higher than stellar black holes but lower than the one hundred thousand to more than one billion (10⁵–10⁹) solar mass supermassive black holes. Several IMBH candidate objects have been discovered in the Milky Way galaxy and others nearby, based on indirect gas cloud velocity and accretion disk spectra observations of various evidentiary strength.
Observational evidence
The gravitational wave signal GW190521, which occurred on 21 May 2019 at 03:02:29 UTC and was published on 2 September 2020, resulted from the merger of two black holes. They had masses of about 85 and 66 solar masses and merged to form a black hole of about 142 solar masses, with around 9 solar masses radiated away as gravitational waves. Before that, the strongest evidence for IMBHs came from a few low-luminosity active galactic nuclei. Due to their activity, these galaxies almost certainly contain accreting black holes, and in some cases the black hole masses can be estimated using the technique of reverberation mapping. For instance, the spiral galaxy NGC 4395 at a distance of about 4 Mpc appears to contain a black hole with mass of about solar masses. The largest sample of intermediate-mass black holes to date includes 305 candidates selected by sophisticated analysis of one million optical spectra of galaxies collected by the Sloan Digital Sky Survey. X-ray emission was detected from 10 of these candidates, supporting their classification as IMBHs. Some ultraluminous X-ray sources (ULXs) in nearby galaxies are suspected to be IMBHs, with masses of a hundred to a thousand solar masses. The ULXs are observed in star-forming regions (e.g., in the starburst galaxy M82), and are seemingly associated with young star clusters which are also observed in these regions. However, only a dynamical mass measurement from the analysis of the optical spectrum of the companion star can unveil the presence of an IMBH as the compact accretor of a ULX. A few globular clusters have been claimed to contain IMBHs, based on measurements of the velocities of stars near their centers. However, none of the claimed detections has stood up to scrutiny; for instance, the data for M31 G1, one such candidate object, can be fit equally well without a massive central object. Additional evidence for the existence of IMBHs could be obtained from observation of gravitational radiation emitted from a binary containing an IMBH and a compact remnant or another IMBH. Finally, the M–sigma relation predicts the existence of black holes with masses of 10⁴ to 10⁶ solar masses in low-luminosity galaxies (an illustrative calculation appears at the end of this article). The smallest black hole predicted from the M–sigma relation is the nucleus of the galaxy RGG 118, with only about 50,000 solar masses.
Potential discoveries
In November 2004 a team of astronomers reported the discovery of GCIRS 13E, the first intermediate-mass black hole candidate in the Milky Way galaxy, orbiting three light-years from Sagittarius A*. This intermediate-mass black hole of 1,300 solar masses is within a cluster of seven stars, possibly the remnant of a massive star cluster that has been stripped down by the Galactic Center. This observation may add support to the idea that supermassive black holes grow by absorbing nearby smaller black holes and stars.
However, in 2005, a German research group claimed that the presence of an IMBH near the galactic center is doubtful, based on a dynamical study of the star cluster in which the IMBH was said to reside. An IMBH near the galactic center could also be detected via its perturbations of stars orbiting the supermassive black hole. In January 2006 a team led by Philip Kaaret of the University of Iowa announced the discovery of a quasiperiodic oscillation from an intermediate-mass black hole candidate, located using NASA's Rossi X-ray Timing Explorer. The candidate, M82 X-1, is orbited by a red giant star that is shedding its atmosphere into the black hole. Neither the existence of the oscillation nor its interpretation as the orbital period of the system is fully accepted by the rest of the scientific community, as the claimed periodicity is based on only about four cycles, meaning that it could be random variation. If the period is real, it could be either the orbital period, as suggested, or a super-orbital period in the accretion disk, as is seen in many other systems. In 2009, a team of astronomers led by Sean Farrell discovered HLX-1, an intermediate-mass black hole with a smaller cluster of stars around it, in the galaxy ESO 243-49. This evidence suggested that ESO 243-49 had a galactic collision with HLX-1's galaxy and absorbed the majority of the smaller galaxy's matter. A team at the CSIRO radio telescope in Australia announced on 9 July 2012 that it had discovered the first intermediate-mass black hole. In 2015 a team at Keio University in Japan found a gas cloud (CO-0.40-0.22) with a very wide velocity dispersion. They performed simulations and concluded that a model with a black hole of around 100,000 solar masses would be the best fit for the velocity distribution. However, a later work pointed out some difficulties with the association of high-velocity-dispersion clouds with intermediate-mass black holes, and proposed that such clouds might be generated by supernovae. Further theoretical studies of the gas cloud and nearby IMBH candidates have been inconclusive but have reopened the possibility. In 2017, it was announced that a black hole of a few thousand solar masses may be located in the globular cluster 47 Tucanae. This was based on the accelerations and distributions of pulsars in the cluster; however, a later analysis of an updated and more complete data set on these pulsars found no positive evidence for this. In 2018, the Keio University team found several molecular gas streams orbiting around an invisible object near the galactic center, designated HCN-0.009-0.044, and suggested that it is a black hole of 32,000 solar masses and, if so, the third IMBH discovered in the region. Observations in 2019 found evidence for a gravitational wave event (GW190521) arising from the merger of two intermediate-mass black holes, with masses of 66 and 85 times that of the Sun. In September 2020 it was announced that the resulting merged black hole weighed 142 solar masses, with 9 solar masses being radiated away as gravitational waves. In 2020, astronomers reported the possible finding of an intermediate-mass black hole, named 3XMM J215022.4-055108, in the direction of the Aquarius constellation, about 740 million light years from Earth. In 2021 the discovery of a 100,000-solar-mass intermediate-mass black hole in the globular cluster B023-G78 in the Andromeda Galaxy was reported in a preprint posted to arXiv.
In 2023, an analysis of proper motions in the closest known globular cluster, Messier 4, revealed an excess mass of roughly 800 solar masses in the center, which appears not to be extended and could thus be considered kinematic evidence for an IMBH (even if an unusually compact cluster of compact objects, i.e. white dwarfs, neutron stars or stellar-mass black holes, cannot be completely discounted). A study from July 10, 2024, examined seven fast-moving stars from the center of the globular cluster Omega Centauri, finding that these stars were consistent with being bound to an intermediate-mass black hole of at least 8,200 solar masses.
Origin
Intermediate-mass black holes are too massive to be formed by the collapse of a single star, which is how stellar black holes are thought to form. Their environments lack the extreme conditions (the high densities and velocities observed at the centers of galaxies) which seemingly lead to the formation of supermassive black holes. There are three postulated formation scenarios for IMBHs. The first is the merging of stellar-mass black holes and other compact objects by means of accretion. The second is the runaway collision of massive stars in dense stellar clusters and the collapse of the collision product into an IMBH. The third is that they are primordial black holes formed in the Big Bang. Scientists have also considered the possibility of the creation of intermediate-mass black holes through mechanisms involving the collapse of a single star, such as the possibility of direct collapse into black holes of stars with a pre-supernova helium core mass of > (to avoid a pair-instability supernova which would completely disrupt the star), requiring an initial total stellar mass of > , but there may be little chance of observing such a high-mass supernova remnant. Recent theories suggest that such massive stars, which could lead to the formation of intermediate-mass black holes, may form in young star clusters via multiple stellar collisions.
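The M–sigma relation mentioned under Observational evidence maps a galaxy's stellar velocity dispersion to a central black-hole mass via a power law of the form M ≈ M₀ (σ / 200 km s⁻¹)^α. The sketch below uses one representative calibration (normalization ≈ 10^8.1 solar masses, slope ≈ 4.24); both numbers are assumptions here, as published fits differ, with slopes of roughly 4 to 5.6.

```python
def m_sigma_mass(sigma_km_s: float,
                 norm_solar: float = 10**8.1,
                 slope: float = 4.24) -> float:
    """Black-hole mass from the M-sigma relation,
    M = norm * (sigma / 200 km/s)**slope.
    The default normalization and slope are representative values
    only; published calibrations vary."""
    return norm_solar * (sigma_km_s / 200.0) ** slope

# A low-dispersion dwarf-galaxy nucleus (sigma ~ 30 km/s, illustrative)
print(f"{m_sigma_mass(30):.1e} solar masses")
# ~4e4 solar masses, squarely in the IMBH range quoted in the text.
```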
Sumatran rhinoceros
The Sumatran rhinoceros (Dicerorhinus sumatrensis), also known as the Sumatran rhino, hairy rhinoceros or Asian two-horned rhinoceros, is a rare member of the family Rhinocerotidae and one of five extant species of rhinoceros; it is the only extant species of the genus Dicerorhinus. It is the smallest rhinoceros, although it is still a large mammal; it stands high at the shoulder, with a head-and-body length of and a tail of . The weight is reported to range from , averaging . Like both African species, it has two horns; the larger is the nasal horn, typically , while the other horn is typically a stub. A coat of reddish-brown hair covers most of the Sumatran rhino's body. The Sumatran rhinoceros once inhabited rainforests, swamps and cloud forests in India, Bhutan, Bangladesh, Myanmar, Laos, Thailand, Malaysia, Indonesia and southwestern China, particularly in Sichuan. It is now critically endangered, with only five substantial populations in the wild: four in Sumatra and one in Borneo, with an estimated total population of fewer than 80 mature individuals. The species was extirpated in Malaysia in 2019, and one of the Sumatran populations may already be extinct. In 2015, researchers announced that the Bornean rhinoceros had become extinct in the northern part of Borneo, in Sabah, Malaysia; a tiny population was later discovered in East Kalimantan in early 2016. The Sumatran rhino is a mostly solitary animal except for courtship and offspring-rearing. It is the most vocal rhino species and also communicates by marking soil with its feet, twisting saplings into patterns, and leaving excrement. The species is much better studied than the similarly reclusive Javan rhinoceros, in part because of a program that brought 40 Sumatran rhinos into captivity with the goal of preserving the species. At the time there was little or no information about procedures that might assist in ex situ breeding, and though a number of rhinos died once at the various destinations and no offspring were produced for nearly 20 years, the rhinos were all doomed in their soon-to-be-logged forest. In March 2016, a Sumatran rhinoceros (of the Bornean rhinoceros subspecies) was spotted in Indonesian Borneo. The Indonesian Ministry of Environment began an official count of the Sumatran rhino in February 2019, planned to be completed in three years. Malaysia's last known bull and cow Sumatran rhinos died in May and November 2019, respectively. The species is now considered locally extinct in that country and survives only in Indonesia. There are fewer than 80 left in existence.
Taxonomy and naming
The first documented Sumatran rhinoceros was shot outside Fort Marlborough, near the west coast of Sumatra, in 1793. Drawings of the animal, and a written description, were sent to the naturalist Joseph Banks, then president of the Royal Society of London, who published a paper on the specimen that year. In 1814, the species was given a scientific name by Johann Fischer von Waldheim. The specific epithet sumatrensis signifies "of Sumatra", the Indonesian island where the rhinos were first discovered. Carl Linnaeus originally classified all rhinos in the genus Rhinoceros; therefore, the species was originally identified as Rhinoceros sumatrensis or sumatranus. Joshua Brookes considered the Sumatran rhinoceros, with its two horns, a distinct genus from the one-horned Rhinoceros, and gave it the name Didermocerus in 1828. Constantin Wilhelm Lambert Gloger proposed the name Dicerorhinus in 1841.
In 1868, John Edward Gray proposed the name Ceratorhinus. Normally, the oldest name would be used, but a 1977 ruling by the International Commission on Zoological Nomenclature established the proper genus name as Dicerorhinus. Dicerorhinus comes from the Greek terms di ("two"), cero ("horn"), and rhinos ("nose"). The three subspecies are: D. s. sumatrensis, known as the western Sumatran rhinoceros, which has only 75 to 85 rhinos remaining, mostly in the Bukit Barisan Selatan, Kerinci Seblat and Gunung Leuser national parks in Sumatra, but also in Way Kambas National Park in small numbers. The subspecies has recently gone extinct in Peninsular Malaysia. The main threats to this subspecies are habitat loss and poaching. A slight genetic difference is noted between the western Sumatran and Bornean rhinos. The rhinos in Peninsular Malaysia were once known as D. s. niger, but that name was later recognized to be a synonym of D. s. sumatrensis. Three bulls and five cows currently live in captivity at the Sumatran Rhino Sanctuary at Way Kambas, the youngest bull having been bred and born there in 2012. Another calf, a female, was born at the sanctuary in May 2016. The sanctuary's two bulls were born at the Cincinnati Zoo and Botanical Garden. A third calf, a female, was born in March 2022. D. s. harrissoni, known as the Bornean rhinoceros or eastern Sumatran rhinoceros, which was once common throughout Borneo; now only about 15 individuals are estimated to survive. The known population lives in East Kalimantan, the subspecies having recently gone extinct in Sabah. Reports of animals surviving in Sarawak are unconfirmed. This subspecies is named after Tom Harrisson, who worked extensively with Bornean zoology and anthropology in the 1960s. The Bornean subspecies is markedly smaller in body size than the other two. The captive population consisted of one bull and two cows at the Borneo Rhinoceros Sanctuary in Sabah; the bull died in 2019 and the cows died in 2017 and 2019, respectively. D. s. lasiotis, known as the northern Sumatran rhinoceros or Chittagong rhinoceros, which once roamed India and Bangladesh, has been declared extinct in those countries. Unconfirmed reports suggest a small population may still survive in Myanmar, but the political situation in that country has prevented verification. The name lasiotis is derived from the Greek for "hairy-ears". Later studies showed that their ear hair was no longer than that of other Sumatran rhinos, but D. s. lasiotis remained recognized as a subspecies because it was significantly larger than the other subspecies. Evolution Ancestral rhinoceroses first diverged from other perissodactyls in the Early Eocene. Mitochondrial DNA comparison suggests the ancestors of modern rhinos split from the ancestors of Equidae around 50 million years ago. The extant family, the Rhinocerotidae, first appeared in the Late Eocene in Eurasia, and the ancestors of the extant rhino species dispersed from Asia beginning in the Miocene. 
Although the relationships of modern rhinoceros species to each other were long controversial, modern genetic evidence places the Sumatran rhinoceros as more closely related to the Asian one-horned rhinoceroses (the Indian rhinoceros and Javan rhinoceros of the genus Rhinoceros) than to the living African rhinoceros species. The split between Rhinoceros and Dicerorhinus is estimated to have occurred around 14.8 million years ago, shortly after the split between the ancestors of Dicerorhinus and Rhinoceros on the one hand and the African rhinoceroses on the other, which is placed around 15.6 million years ago. Based on morphological and genetic evidence, the Sumatran rhinoceros is believed to be closely related to the extinct woolly rhinoceros (Coelodonta antiquitatis) and Stephanorhinus, with the split from their last common ancestor estimated at around 9.5 million years ago. The woolly rhinoceros, so named for the coat of hair it shares with the Sumatran rhinoceros, first appeared in China; by the Upper Pleistocene, it ranged across the Eurasian continent from Korea to Spain. The woolly rhinoceros survived until its extinction near the end of the last ice age, around 14,000 years ago. Stephanorhinus species are well known in Europe from the Late Pliocene through the Pleistocene, and in China from the Pleistocene, with two species, Stephanorhinus kirchbergensis and Stephanorhinus hemitoechus, surviving into the last glacial period, until at least 40,000 years ago and possibly later. Although historically many fossil species were assigned to Dicerorhinus, today only two fossil species are confidently placed in the genus: Dicerorhinus fusuiensis from the Early Pleistocene of South China, and Dicerorhinus gwebinensis from the Pliocene-Early Pleistocene of Myanmar. Fossils of the modern Sumatran rhinoceros are known from the Early Pleistocene onwards. Pairwise sequentially Markovian coalescent (PSMC) analysis of a complete nuclear genome of a Sumatran specimen suggested strong fluctuations in population size, with a general trend of decline over the course of the Middle to Late Pleistocene: an estimated peak effective population size of 57,800 individuals 950,000 years ago declined to around 500–1,300 individuals at the start of the Holocene, with a slight rebound during the Eemian Interglacial. This was likely due to climate change limiting suitable habitat for the rhinoceros, causing severe population fluctuations, as well as population fragmentation due to the flooding of Sundaland. Human-induced habitat change and hunting may also have played a role in the Late Pleistocene. The study was later criticised for not including DNA from extinct mainland populations, which would have provided a more complete picture. A Bayesian skyline plot of complete mitochondrial genomes from multiple individuals across the range of the species suggested that the population had been relatively stable, with an effective population size of 40,000 individuals over the last 400,000 years, followed by a sharp decline starting around 25,000 years ago. A cladogram based on whole nuclear genomes (after Liu et al., 2021) resolves the relationships of recent and Late Pleistocene rhinoceros species (excluding Stephanorhinus hemitoechus) along these lines. Description A mature Sumatran rhino stands about 112–145 cm high at the shoulder, has a body length of around 2.36–3.18 m, and weighs 500–1,000 kg, though the largest individuals in zoos have been known to weigh as much as 2,000 kg. Like the two African species, it has two horns. 
The larger is the nasal horn, typically only 15–25 cm, though the longest recorded specimen was much longer, at 81 cm. The posterior horn is much smaller, usually less than 10 cm long, and often little more than a knob. The larger nasal horn is also known as the anterior horn; the smaller posterior horn is known as the frontal horn. The horns are dark grey or black in color. Bulls have larger horns than cows, though the species is not otherwise sexually dimorphic. The Sumatran rhino lives an estimated 30–45 years in the wild; the record in captivity is held by a female D. lasiotis that lived for 32 years and 8 months before dying at London Zoo in 1900. Two thick folds of skin encircle the body behind the front legs and before the hind legs, and a smaller fold of skin surrounds the neck. The skin itself is thin, 10–16 mm, and in the wild the rhino appears to have no subcutaneous fat. Hair cover ranges from dense (densest in young calves) to scarce, and is usually a reddish brown. In the wild, this hair is hard to observe because the rhinos are often covered in mud. In captivity, however, the hair grows out and becomes much shaggier, likely because of less abrasion from walking through vegetation. The rhino has a patch of long hair around its ears and a thick clump of hair at the end of its tail. Like all rhinos, it has very poor vision. The Sumatran rhinoceros is fast and agile; it climbs mountains easily and comfortably traverses steep slopes and riverbanks. Distribution and habitat The Sumatran rhinoceros lives in both lowland and highland secondary rainforest, swamps, and cloud forests. It inhabits hilly areas close to water, particularly steep upper valleys with copious undergrowth. The Sumatran rhinoceros once inhabited a continuous range as far north as Myanmar, eastern India, and Bangladesh. Unconfirmed reports also placed it in Cambodia, Laos, and Vietnam. All known living animals occur on the island of Sumatra. Some conservationists hope Sumatran rhinos may still survive in Burma, though it is considered unlikely. Political turmoil in Burma has prevented any assessment or study of possible survivors. The last reports of stray animals within Indian limits were in the 1990s. Remains of the Sumatran rhinoceros have been found in Chinese Neolithic sites of Zhejiang, Henan, Fujian, and the northeastern Tibetan Plateau. The Sumatran rhino is widely scattered across its range, much more so than the other Asian rhinos, which has made it difficult for conservationists to protect members of the species effectively. Only four areas are known to contain Sumatran rhinoceros: Bukit Barisan Selatan National Park, Gunung Leuser National Park, and Way Kambas National Park on Sumatra, and an area of Indonesian Borneo west of Samarinda. Kerinci Seblat National Park, Sumatra's largest, was estimated to contain a population of around 500 rhinos in the 1980s, but due to poaching, this population is now considered extinct. The survival of any animals in Peninsular Malaysia is extremely unlikely. Genetic analysis of Sumatran rhino populations has identified three distinct genetic lineages. The channel between Sumatra and Malaysia was not as significant a barrier for the rhinos as the Barisan Mountains along the length of Sumatra: rhinos in eastern Sumatra and Peninsular Malaysia are more closely related to each other than to the rhinos on the other side of the mountains in western Sumatra. 
In fact, the eastern Sumatran and Malaysian rhinos show so little genetic variance that the populations were likely not separate during the Pleistocene, when sea levels were much lower and Sumatra formed part of the mainland. The Sumatran and Malaysian populations are close enough genetically that interbreeding would not be problematic. The rhinos of Borneo are sufficiently distinct that conservation geneticists have advised against crossing their lineage with the other populations. Conservation geneticists have recently begun to study the diversity of the gene pool within these populations by identifying microsatellite loci. Initial testing found levels of variability within Sumatran rhino populations comparable to those in the populations of the less endangered African rhinos, but the genetic diversity of Sumatran rhinos is an area of continuing study. Although the rhino had been thought extinct in Kalimantan since the 1990s, in March 2013 the World Wildlife Fund (WWF) announced that a team monitoring orangutan activity in West Kutai Regency, East Kalimantan, had found several fresh rhino foot trails, mud holes, rhino-rubbed trees, traces of rhino horns on the walls of mud holes, and rhino bites on small branches. The team also determined that the rhinos ate more than 30 species of plants. On 2 October 2013, the WWF released video images, made with camera traps, showing the Sumatran rhino in Kutai Barat, Kalimantan. Experts believe the videos show two different animals, but are not certain. Indonesia's Minister of Forestry, Zulkifli Hasan, called the video evidence "very important" and mentioned Indonesia's "target of rhino population growth by three percent per year". On 22 March 2016, the WWF announced that a live Sumatran rhino had been found in Kalimantan, the first contact in over 40 years. The rhino, a female, was captured and transported to a nearby sanctuary to ensure her survival. Iman, the last known Sumatran rhino in Malaysia, died in November 2019; stem cell technology is being used in an attempt to revitalize the species' population and reverse its extinction in the country. In 2023, there were two births at the Sumatran Rhino Sanctuary at Way Kambas National Park, Indonesia. Behavior and ecology Sumatran rhinos are solitary creatures except for pairing before mating and during offspring rearing. Individuals have home ranges; bulls hold territories as large as 50 km2, whereas cows' ranges are 10–15 km2. The ranges of cows appear to be spaced apart, while bulls' ranges often overlap. No evidence indicates Sumatran rhinos defend their territories through fighting. They mark their territories by scraping soil with their feet, bending saplings into distinctive patterns, and leaving excrement. The Sumatran rhino is usually most active when eating, at dawn and just after dusk. During the day, it wallows in mud baths to cool down and rest. In the rainy season, the rhinos move to higher elevations; in the cooler months, they return to lower areas in their range. When mud holes are unavailable, the rhino will deepen puddles with its feet and horns. The wallowing behaviour helps the rhino maintain its body temperature and protects its skin from ectoparasites and other insects. Captive specimens deprived of adequate wallowing have quickly developed broken and inflamed skin, suppurations, eye problems, inflamed nails, and hair loss, and have eventually died. 
One 20-month study of wallowing behavior found rhinos will visit no more than three wallows at any given time. After two to 12 weeks of using a particular wallow, the rhino will abandon it. Typically, the rhino wallows around midday for two to three hours at a time before venturing out for food. Although in zoos the Sumatran rhino has been observed wallowing less than 45 minutes a day, the study of wild animals found 80–300 minutes (an average of 166 minutes) per day spent in wallows. There has been little opportunity to study epidemiology in the Sumatran rhinoceros. Ticks and Gyrostigma botflies were reported to cause deaths in captive animals in the 19th century. The rhino is also known to be vulnerable to the blood disease surra, which can be spread by horse-flies carrying parasitic trypanosomes; in 2004, all five rhinos at the Sumatran Rhinoceros Conservation Centre died over an 18-day period after becoming infected by the disease. The Sumatran rhino has no known predators other than humans. Tigers and wild dogs may be capable of killing a calf, but calves stay close to their mothers, and the frequency of such killings is unknown. Although the rhino's range overlaps with those of elephants and tapirs, the species do not appear to compete for food or habitat. Asian elephants (Elephas maximus) and Sumatran rhinos are even known to share trails, and many smaller species, such as deer, boars, and wild dogs, will use the trails the rhinos and elephants create. The Sumatran rhino maintains two types of trails across its range. Main trails are used by generations of rhinos to travel between important areas in the rhino's range, such as between salt licks, or through corridors in inhospitable terrain that separates ranges. In feeding areas, the rhinos make smaller trails, still covered by vegetation, to areas containing the food they eat. Sumatran rhino trails have been found crossing rivers more than 1.5 m deep and about 50 m across. The currents of these rivers are known to be strong, but the rhino is a strong swimmer. A relative absence of wallows near rivers in the range of the Sumatran rhinoceros indicates it may occasionally bathe in rivers in lieu of wallowing. Diet Most feeding occurs just before nightfall and in the morning. The Sumatran rhino is a folivore, with a diet of young saplings, leaves, twigs, and shoots; it usually consumes up to 50 kg of food a day. Primarily by measuring dung samples, researchers have identified more than 100 food species consumed by the Sumatran rhinoceros. The largest portion of the diet is tree saplings with a trunk diameter of 1–6 cm. The rhinoceros typically pushes these saplings over with its body, walking over the sapling without stepping on it, to eat the leaves. Many of the plant species the rhino consumes are eaten in only small amounts, which indicates the rhino frequently changes its diet and feeds in different locations. Among the most common plants the rhino eats are many species from the families Euphorbiaceae, Rubiaceae, and Melastomataceae; the genus it consumes most often is Eugenia. The vegetal diet of the Sumatran rhinoceros is high in fiber and only moderate in protein. Salt licks are very important to the nutrition of the rhino. These licks can be small hot springs, seepages of salty water, or mud volcanoes. The salt licks also serve an important social purpose for the rhinos: bulls visit the licks to pick up the scent of cows in oestrus. 
Some Sumatran rhinos, however, live in areas where salt licks are not readily available, or have not been observed using the licks; these rhinos may meet their mineral requirements by consuming mineral-rich plants. Communication The Sumatran rhinoceros is the most vocal of the rhinoceros species. Observations of the species in zoos show the animal almost constantly vocalizing, and it is known to do so in the wild as well. The rhino makes three distinct noises: eeps, whales, and whistle-blows. The eep, a short, one-second-long yelp, is the most common sound. The whale, named for its similarity to vocalizations of the humpback whale, is the most song-like vocalization and the second-most common; it varies in pitch and lasts from four to seven seconds. The whistle-blow is so named because it consists of a two-second-long whistling noise and a burst of air in immediate succession. It is the loudest of the vocalizations, loud enough to make the iron bars of the zoo enclosure where the rhinos were studied vibrate. The purpose of the vocalizations is unknown, though they are theorized to convey danger, sexual readiness, and location, as do other ungulate vocalizations. The whistle-blow can be heard at a great distance, even in the dense brush in which the Sumatran rhino lives; a vocalization of similar volume from elephants has been shown to carry 9.8 km, and the whistle-blow may carry as far. The Sumatran rhinoceros will sometimes twist saplings it does not eat. This twisting behavior is believed to be a form of communication, frequently indicating a junction in a trail. Reproduction Cows become sexually mature at the age of six to seven years, while bulls become sexually mature at about 10 years old. The gestation period is around 15–16 months. The calf, which typically weighs 40–60 kg, is weaned after about 15 months and stays with its mother for the first two to three years of its life. In the wild, the birth interval for this species is estimated at four to five years; its natural offspring-rearing behavior is unstudied. The reproductive habits of the Sumatran rhinoceros have been studied in captivity. Mating begins with a courtship period characterized by increased vocalization, tail raising, urination, and increased physical contact, with both bull and cow using their snouts to bump the other in the head and genitals. The pattern of courtship is most similar to that of the black rhinoceros. Young Sumatran rhino bulls are often too aggressive with cows, sometimes injuring and even killing them during courtship. In the wild, the cow could run away from an overly aggressive bull, but in their smaller captive enclosures they cannot; this inability to escape aggressive bulls may partly explain the low success rate of captive-breeding programs. The period of oestrus itself, when the cow is receptive to the bull, lasts about 24 hours, and observations have placed its recurrence between 21 and 25 days. Sumatran rhinos at the Cincinnati Zoo have been observed copulating for 30–50 minutes, similar in length to other rhinos; observations at the Sumatran Rhinoceros Conservation Centre in Malaysia have shown a briefer copulation cycle. As the Cincinnati Zoo has had successful pregnancies, and other rhinos also have lengthy copulatory periods, a lengthy copulation may be the natural behavior. 
Though researchers observed successful conceptions, all early pregnancies ended in failure for a variety of reasons until the first successful captive birth in 2001; studies of these failures at the Cincinnati Zoo discovered that the Sumatran rhino's ovulation is induced by mating and that its progesterone levels are unpredictable. Breeding success was finally achieved in 2001, 2004, and 2007 by providing the pregnant rhino with supplementary progestin. In 2016, a calf was born in captivity in western Indonesia, only the fifth such birth in a breeding facility. In March 2022 and on 1 October 2023, female calves were born at the Sumatran Rhino Sanctuary (SRS) in Way Kambas National Park, Lampung province, Indonesia, followed by a male calf on 25 November 2023. Conservation In the wild Sumatran rhinos were once quite numerous throughout Southeast Asia; fewer than 100 individuals are now estimated to remain. The species is classed as critically endangered, primarily due to illegal poaching, whereas the last survey, in 2008, estimated that around 250 individuals survived. From the early 1990s, the population decline was estimated at more than 50% per decade (roughly a halving every ten years; see the sketch below), and the small, scattered populations now face high risks of inbreeding depression. Most remaining habitat is in relatively inaccessible mountainous areas of Indonesia. Poaching of Sumatran rhinos is a cause for concern because of the high market price of their horns. The species has been overhunted for many centuries, leading to the current greatly reduced and still declining population. The rhinos are difficult to observe and hunt directly (one field researcher spent seven weeks in a tree hide near a salt lick without ever observing a rhino directly), so poachers make use of spear traps and pit traps. In the 1970s, uses of the rhinoceros's body parts among the local people of Sumatra were documented, such as the use of rhino horns in amulets and a folk belief that the horns offer some protection against poison. Dried rhinoceros meat was used as medicine for diarrhea, leprosy, and tuberculosis. "Rhino oil", a concoction made by leaving a rhino's skull in coconut oil for several weeks, may be used to treat skin diseases. The extent of use of, and belief in, these practices is not known. Rhinoceros horn was once believed to be widely used as an aphrodisiac; in fact, traditional Chinese medicine never used it for this purpose. Nevertheless, hunting of this species has primarily been driven by demand for rhino horn, whose medicinal properties are unproven. The rainforests of Indonesia and Malaysia, which the Sumatran rhino inhabits, are also targets for legal and illegal logging because of the desirability of their hardwoods. Rare woods such as merbau, meranti and semaram are valuable on international markets, fetching as much as $1,800 per m3 ($1,375 per cu yd). Enforcement of illegal-logging laws is difficult because humans live within or near many of the same forests as the rhino. The 2004 Indian Ocean earthquake has been used to justify new logging: although the hardwoods in the rainforests of the Sumatran rhino are destined for international markets and not widely used in domestic construction, the number of logging permits for these woods has increased dramatically because of the tsunami. However, while the species has been suggested to be highly sensitive to habitat disturbance, habitat loss appears to matter little compared with hunting, as the rhino can withstand more or less any forest condition. 
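As a rough consistency check of the figures quoted above, the numbers can be tied together with simple exponential decline. This is an illustrative sketch only, assuming a constant per-decade loss rate; the inputs (about 250 animals in 2008, a decline of more than 50% per decade) are the estimates cited in this section.

```python
# Illustrative sketch only: project a population assuming a fixed
# fractional loss per decade, using the estimates quoted in the text
# (~250 individuals in 2008; decline of more than 50% per decade).

def project(n0: float, years: float, decade_loss: float = 0.5) -> float:
    """Population after `years`, losing `decade_loss` of it every 10 years."""
    return n0 * (1 - decade_loss) ** (years / 10)

for year in (2008, 2018, 2025):
    print(year, round(project(250, year - 2008)))
# Prints roughly 250, 125, 77: a 50%-per-decade decline from the 2008
# estimate is consistent with the "fewer than 80" figure cited above.
```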
The main driver of the species' drastic reduction, however, is likely the Allee effect. The Bornean rhino in Sabah was confirmed extinct in the wild in April 2015, with only three individuals left in captivity. The mainland Sumatran rhino in Malaysia was confirmed extinct in the wild in August 2015. In March 2016, there was a rare sighting of a Sumatran rhino in East Kalimantan, the Indonesian part of Borneo, the first in the area in approximately 40 years. The optimism was met with despair when the same rhino, named Najaq, was found dead several weeks after the sighting; the cause of death was an infection of a wound caused by a snare. In captivity Sumatran rhinos do not thrive outside their native ecosystem. London Zoo acquired a bull and a cow in 1872 that had been captured in Chittagong in 1868. The cow, named "Begum", survived until 1900, a record lifetime for a captive rhino. Begum was one of at least seven specimens of the extinct subspecies D. s. lasiotis held in zoos and circuses. In 1972, Subur, the only Sumatran rhino remaining in captivity, died at the Copenhagen Zoo. Despite the species' persistent lack of reproductive success in captivity, in the early 1980s some conservation organizations began a captive-breeding program for the Sumatran rhinoceros. Between 1984 and 1996, this ex situ conservation program transported 40 Sumatran rhinos from their native habitats to zoos and reserves across the world. While hopes were initially high, and much research was conducted on the captive specimens, by the late 1990s not a single rhino had been born in the program, and most of its proponents agreed it had been a failure. In 1997, the IUCN's Asian rhino specialist group, which had once endorsed the program, declared it had failed "even maintaining the species within acceptable limits of mortality", noting that, in addition to the lack of births, 20 of the captured rhinos had died. In 2004, a surra outbreak at the Sumatran Rhinoceros Conservation Centre killed all the captive rhinos in Peninsular Malaysia, reducing the total captive population to eight. Seven of the program's rhinos had been sent to the United States and three to Port Lympne Zoo in the United Kingdom (another was kept in Southeast Asia), but by 1997 the number in the United States had dwindled to three: a cow at the Los Angeles Zoo, a bull at the Cincinnati Zoo, and a cow at the Bronx Zoo. In a final effort, the three rhinos were united in Cincinnati. After years of failed attempts, the cow from Los Angeles, Emi, became pregnant for the sixth time, by the zoo's bull Ipuh; all five of her previous pregnancies had ended in failure. The Cincinnati Zoo's reproductive physiologist, Terri Roth, had learned from the previous failures, though, and with the aid of special hormone treatments, Emi gave birth to a healthy male calf named Andalas (an Indonesian literary word for Sumatra) in September 2001. Andalas's birth was the first successful captive birth of a Sumatran rhino in 112 years. A female calf, named "Suci" (Indonesian for "pure"), followed on 30 July 2004. On 29 April 2007, Emi gave birth a third time, to her second male calf, named Harapan (Indonesian for "hope") or Harry. 
In 2007, Andalas, who had been living at the Los Angeles Zoo, was returned to Sumatra to take part in breeding programs with healthy females. Mated with Ratu, a wild-born cow living at the rhino sanctuary in Way Kambas National Park, he sired a male calf, Andatu, born on 23 June 2012, the fourth captive-born calf of the era. Despite the successes in Cincinnati, the captive-breeding program has remained controversial. Proponents argue that the zoos have aided the conservation effort by studying the species' reproductive habits, raising public awareness of and education about the rhinos, and helping to raise financial resources for conservation efforts in Sumatra, and moreover that they have established a small captive breeding group. Opponents of the captive-breeding program argue that the losses are too great; the program is too expensive; removing rhinos from their habitat, even temporarily, alters their ecological role; and captive populations cannot match the rate of recovery seen in well-protected native habitats. In October 2015, Harapan, the last Sumatran rhino in the Western Hemisphere, left the Cincinnati Zoo for Indonesia. In August 2016, only three Sumatran rhinos were left in Malaysia, all in captivity in the eastern state of Sabah: a bull named Tam and two cows named Puntung and Iman. In June 2017, Puntung was put down due to skin cancer. Tam died on 27 May 2019, and Iman died of cancer on 23 November 2019 at the Borneo Rhino Sanctuary. The species thus became extinct in Malaysia, part of its native range, in 2019. In Indonesia, meanwhile, a seventh rhino joined the group at the Sumatran Rhino Sanctuary in Way Kambas National Park: a female named Delilah, born on 12 May 2016. Another female, daughter of Andatu and Rosa, was born on 24 March 2022 and named Sedah Mirah. A female was born on 30 September 2023, the third calf of the Andalas-Ratu pair, and a male calf, son of Delilah and Harapan, was born on 26 November 2023. In Indonesian East Kalimantan, only one elderly female (estimated at 35 to 40 years old), named Pahu, lives at the Sumatran Rhino Sanctuary (SRS) Kelian in West Kutai, having been captured in 2018; another identified individual is Pari, a female living in the wild in the Sungai Ratah-Sungai Nyuatan-Sungai Lawa protected forest. On 31 October 2023, conservationists in Indonesia said they had extracted eggs from Pahu, who was considered too old and small to breed naturally; the eggs are planned to be fertilized with sperm from a captive male Sumatran rhino before being implanted in a female at SRS Way Kambas. Cultural depictions Aside from the few individuals kept in zoos and pictured in books, the Sumatran rhinoceros has remained little known, overshadowed by the more common Indian, black and white rhinos. Recently, however, video footage of the Sumatran rhinoceros in its native habitat and in breeding centers has been featured in several nature documentaries. Extensive footage can be found in the Asia Geographic documentary The Littlest Rhino. Natural History New Zealand showed footage of a Sumatran rhino, shot by the freelance Indonesia-based cameraman Alain Compost, in the 2001 documentary The Forgotten Rhino, which featured mainly Javan and Indian rhinos. Though they had been documented by droppings and tracks, pictures of the Bornean rhinoceros were first taken and widely distributed by modern conservationists in April 2006, when camera traps photographed a healthy adult in the jungles of Sabah in Malaysian Borneo. 
On 24 April 2007, it was announced that cameras had captured the first-ever video footage of a wild Bornean rhino. The night-time footage showed the rhino eating, peering through jungle foliage, and sniffing the film equipment. The World Wildlife Fund, which took the video, has used it in efforts to convince local governments to turn the area into a rhino conservation zone. Monitoring has continued; 50 new cameras have been set up, and in February 2010, what appeared to be a pregnant rhino was filmed. A number of folk tales about the Sumatran rhino were collected by colonial naturalists and hunters from the mid-19th century to the early 20th century. In Burma, the belief was once widespread that the Sumatran rhino ate fire; tales described the fire-eating rhino following smoke to its source, especially campfires, and then attacking the camp. There was also a Burmese belief that the best time to hunt was every July, when the Sumatran rhinos would congregate beneath the full moon. In Malaya, it was said that the Sumatran rhino's horn was hollow and could be used as a sort of hose for breathing air and squirting water. In Malaya and Sumatra, it was once believed that the rhino shed its horn every year and buried it under the ground. In Borneo, the rhino was said to have a strange carnivorous practice: after defecating in a stream, it would turn around and eat fish that had been stupefied by the excrement.
Biology and health sciences
Perissodactyla
Animals
510340
https://en.wikipedia.org/wiki/Stellar%20black%20hole
Stellar black hole
A stellar black hole (or stellar-mass black hole) is a black hole formed by the gravitational collapse of a star. They have masses ranging from about 5 to several tens of solar masses. They are the remnants of supernova explosions, which may be observed as a type of gamma-ray burst. These black holes are also referred to as collapsars. Properties By the no-hair theorem, a black hole can only have three fundamental properties: mass, electric charge, and angular momentum. The angular momentum of a stellar black hole is due to the conservation of angular momentum of the star or objects that produced it. The gravitational collapse of a star is a natural process that can produce a black hole. It is inevitable at the end of the life of a massive star, when all stellar energy sources are exhausted. If the mass of the collapsing part of the star is below the Tolman–Oppenheimer–Volkoff (TOV) limit for neutron-degenerate matter, the end product is a compact star: either a white dwarf (for masses below the Chandrasekhar limit), a neutron star, or a (hypothetical) quark star. If the collapsing star has a mass exceeding the TOV limit, the collapse continues until zero volume is achieved and a black hole forms around that point in space. The maximum mass that a neutron star can possess before further collapsing into a black hole is not fully understood. In 1939, this maximum, the TOV limit, was estimated at 0.7 solar masses. In 1996, a different estimate put the upper mass in a range from 1.5 to 3 solar masses. The maximum observed mass of neutron stars is about 2.14 solar masses, for PSR J0740+6620, discovered in September 2019. In the theory of general relativity, a black hole of any mass could exist. The lower the mass, the higher the density of matter has to be in order to form a black hole (see, for example, the discussion in Schwarzschild radius, the radius of a black hole). There are no known stellar processes that can produce black holes with mass less than a few times the mass of the Sun; if black holes that small exist, they are most likely primordial black holes. Until 2016, the largest known stellar black hole was 15.65 solar masses. In September 2015, a rotating black hole of 62 solar masses was discovered by gravitational waves as it formed in a merger event of two smaller black holes. In 2019, the binary system 2MASS J05215658+4359220 was reported to host the smallest-mass black hole currently known to science, with a mass of about 3.3 solar masses and a corresponding diameter of only 19.5 kilometers. There is observational evidence for two other types of black holes, which are much more massive than stellar black holes: intermediate-mass black holes (in the centers of globular clusters) and supermassive black holes in the center of the Milky Way and other galaxies. X-ray compact binary systems Stellar black holes in close binary systems are observable when matter is transferred from a companion star to the black hole; the energy released in the fall toward the compact star is so large that the matter heats up to temperatures of several hundred million degrees and radiates in X-rays. The black hole, therefore, is observable in X-rays, whereas the companion star can be observed with optical telescopes. The energy release for black holes and neutron stars is of the same order of magnitude, so black holes and neutron stars are often difficult to distinguish. The derived masses come from observations of compact X-ray sources (combining X-ray and optical data). 
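The mass-radius relation referred to above is linear: the Schwarzschild radius is r_s = 2GM/c^2, about 2.95 km per solar mass, so a lower-mass black hole is smaller and requires a far higher matter density to form. A minimal Python sketch (rounded physical constants; the masses are the figures quoted in this article) illustrates the scales involved:

```python
# Minimal sketch of the Schwarzschild radius r_s = 2GM/c^2.
# Constants are rounded standard values; the sample masses are the
# figures quoted in the surrounding text.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(mass_solar: float) -> float:
    """Schwarzschild radius in km for a mass given in solar masses."""
    return 2 * G * mass_solar * M_SUN / C**2 / 1000.0

for m in (1.0, 3.3, 15.65, 62.0):
    r = schwarzschild_radius_km(m)
    print(f"{m:6.2f} M_sun: radius {r:7.1f} km, diameter {2 * r:7.1f} km")
# The 3.3 M_sun entry gives a diameter of ~19.5 km, matching the
# 2MASS J05215658+4359220 figure quoted above.
```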
All identified neutron stars have a mass below 3.0 solar masses, and none of the compact systems with a mass above 3.0 solar masses display the properties of a neutron star. The combination of these facts makes it increasingly likely that the class of compact stars with a mass above 3.0 solar masses are in fact black holes. Note that this proof of the existence of stellar black holes is not entirely observational but relies on theory: we can think of no other object that could account for these massive compact systems in stellar binaries besides a black hole. Direct proof would be the observation of the orbit of a particle (or a cloud of gas) falling into the black hole. Black hole kicks The large distances above the galactic plane achieved by some binaries are the result of black hole natal kicks. The velocity distribution of black hole natal kicks seems similar to that of neutron star kick velocities. One might have expected the momenta, rather than the velocities, to be equal, with black holes receiving lower kick velocities than neutron stars due to their higher mass, but that does not seem to be the case; this may be due to the fall-back of asymmetrically expelled matter increasing the momentum of the resulting black hole. Mass gaps It is predicted by some models of stellar evolution that black holes with masses in two ranges cannot be directly formed by the gravitational collapse of a star. These are sometimes distinguished as the "lower" and "upper" mass gaps, roughly representing the ranges of 2 to 5 and 50 to 150 solar masses, respectively. Another range given for the upper gap is 52 to 133 solar masses; about 150 solar masses has been regarded as the upper mass limit for stars in the current era of the universe. Lower mass gap A lower mass gap is suspected on the basis of a scarcity of observed candidates with masses within a few solar masses above the maximum possible neutron star mass. The existence and theoretical basis for this possible gap are uncertain. The situation may be complicated by the fact that any black holes found in this mass range may have been created via the merging of binary neutron star systems, rather than stellar collapse. The LIGO/Virgo collaboration has reported three candidate events among their gravitational wave observations in run O3 with component masses that fall in this lower mass gap. There has also been a reported observation of a bright, rapidly rotating giant star in a binary system with an unseen companion that emits no light, including X-rays, but has a mass of about 3.3 solar masses. This is interpreted to suggest that there may be many such low-mass black holes that are not currently consuming any material and are hence undetectable via the usual X-ray signature. Upper mass gap The upper mass gap is predicted by comprehensive models of late-stage stellar evolution. It is expected that with increasing mass, supermassive stars reach a stage where a pair-instability supernova occurs, during which pair production, the production of free electrons and positrons in the collision between atomic nuclei and energetic gamma rays, temporarily reduces the internal pressure supporting the star's core against gravitational collapse. This pressure drop leads to a partial collapse, which in turn causes greatly accelerated burning in a runaway thermonuclear explosion, resulting in the star being blown completely apart without leaving a stellar remnant behind. 
Pair-instability supernovae can only happen in stars with a mass range from around 130 to 250 solar masses and low to moderate metallicity (low abundance of elements other than hydrogen and helium, a situation common in Population III stars). However, this mass gap is expected to extend down to about 45 solar masses through the process of pulsational pair-instability mass loss, which occurs before a "normal" supernova explosion and core collapse. In nonrotating stars, the lower bound of the upper mass gap may be as high as 60 solar masses. The possibility of direct collapse into black holes of stars with core masses greater than 133 solar masses, requiring total stellar masses greater than 260 solar masses, has been considered, but there may be little chance of observing such a high-mass supernova remnant; i.e., the lower bound of the upper mass gap may represent a mass cutoff. Observations of the LB-1 system of a star and unseen companion were initially interpreted in terms of a black hole with a mass of about 70 solar masses, which would be excluded by the upper mass gap, but further investigations have weakened this claim. Black holes may also be found in the mass gap through mechanisms other than those involving a single star, such as the merger of black holes. Candidates Our Milky Way galaxy contains several stellar-mass black hole candidates (BHCs), which are closer to us than the supermassive black hole in the galactic center region. Most of these candidates are members of X-ray binary systems in which the compact object draws matter from its partner via an accretion disk. The probable black holes in these pairs range from three to more than a dozen solar masses. Extragalactic Candidates outside our galaxy come from gravitational wave detections and from X-ray binaries. The disappearance of N6946-BH1 following a failed supernova in NGC 6946 may have resulted in the formation of a black hole.
Physical sciences
Basics_2
Astronomy
511394
https://en.wikipedia.org/wiki/Opioid
Opioid
Opioids are a class of drugs that derive from, or mimic, natural substances found in the opium poppy plant. Opioids work on opioid receptors in the brain and other organs to produce a variety of morphine-like effects, including pain relief. The terms 'opioid' and 'opiate' are sometimes used interchangeably, but 'opioid' designates all substances, both natural and synthetic, that bind to opioid receptors in the brain, whereas opiates are the alkaloid compounds naturally found in the opium poppy plant Papaver somniferum. Medically, opioids are primarily used for pain relief, including anesthesia. Other medical uses include suppression of diarrhea, replacement therapy for opioid use disorder, and suppression of cough. The opioid receptor antagonist naloxone is used to reverse opioid overdose. Extremely potent opioids such as carfentanil are approved only for veterinary use. Opioids are also frequently used recreationally for their euphoric effects or to prevent withdrawal. Opioids can cause death and have been used, alone and in combination, in a small number of executions in the United States. Side effects of opioids may include itchiness, sedation, nausea, respiratory depression, constipation, and euphoria. Long-term use can cause tolerance, meaning that increased doses are required to achieve the same effect, and physical dependence, meaning that abruptly discontinuing the drug leads to unpleasant withdrawal symptoms. The euphoria attracts recreational use, and frequent, escalating recreational use of opioids typically results in addiction. An overdose, or concurrent use with other depressant drugs like benzodiazepines, can result in death from respiratory depression. Opioids act by binding to opioid receptors, which are found principally in the central and peripheral nervous system and the gastrointestinal tract. These receptors mediate both the psychoactive and the somatic effects of opioids. Partial agonists, like the anti-diarrhea drug loperamide, and antagonists, like naloxegol for opioid-induced constipation, do not cross the blood–brain barrier, but can displace other opioids from binding to receptors in the myenteric plexus. Because opioids are addictive and may result in fatal overdose, most are controlled substances. In 2013, between 28 and 38 million people used opioids illicitly (0.6% to 0.8% of the global population between the ages of 15 and 65); by 2021, that number had risen to 60 million. In 2011, an estimated 4 million people in the United States used opioids recreationally or were dependent on them. As of 2015, increased rates of recreational use and addiction were attributed to over-prescription of opioid medications and inexpensive illicit heroin. Conversely, fears about overprescribing, exaggerated side effects, and addiction are similarly blamed for under-treatment of pain. Terminology Opioids include opiates, an older term that refers to drugs derived from opium, including morphine itself. Opiate is properly limited to the natural alkaloids found in the resin of the opium poppy, although some definitions include semi-synthetic derivatives. Other opioids are semi-synthetic and synthetic drugs such as hydrocodone, oxycodone, and fentanyl; antagonist drugs such as naloxone; and endogenous peptides such as endorphins. The terms opiate and narcotic are sometimes encountered as synonyms for opioid. Narcotic, derived from words meaning 'numbness' or 'sleep', originally referred to any psychoactive compound with numbing or paralyzing properties. 
As an American legal term, narcotic refers to cocaine and opioids and their source materials; it is also loosely applied to any illegal or controlled psychoactive drug. In some jurisdictions all controlled drugs are legally classified as narcotics. The term can have pejorative connotations, and its use is generally discouraged where that is the case. Medical uses Pain The weak opioid codeine, in low doses and combined with one or more other drugs, is commonly available in prescription medicines and without a prescription to treat mild pain. Other opioids are usually reserved for the relief of moderate to severe pain. Acute pain Opioids are effective for the treatment of acute pain (such as pain following surgery). For immediate relief of moderate to severe acute pain, opioids are frequently the treatment of choice due to their rapid onset, efficacy, and reduced risk of dependence. However, a newer report showed a clear risk of prolonged opioid use when opioid analgesics are initiated for acute pain management following surgery or trauma. Opioids have also been found to be important in palliative care to help with the severe, chronic, disabling pain that may occur in some terminal conditions such as cancer, and in degenerative conditions such as rheumatoid arthritis. In many cases opioids are a successful long-term care strategy for those with chronic cancer pain. Just over half of all states in the US have enacted laws that restrict the prescribing or dispensing of opioids for acute pain. Chronic non-cancer pain Guidelines have suggested that the risk of opioids is likely greater than their benefit when used for most non-cancer chronic conditions, including headaches, back pain, and fibromyalgia. Thus they should be used cautiously in chronic non-cancer pain. If used, the benefits and harms should be reassessed at least every three months. In treating chronic pain, opioids are an option to be tried after other, less risky pain relievers have been considered, including paracetamol or NSAIDs like ibuprofen or naproxen. Some types of chronic pain, including the pain caused by fibromyalgia or migraine, are preferentially treated with drugs other than opioids. The efficacy of using opioids to lessen chronic neuropathic pain is uncertain. Opioids are contraindicated as a first-line treatment for headache because they impair alertness, bring risk of dependence, and increase the risk that episodic headaches will become chronic. Opioids can also cause heightened sensitivity to headache pain. When other treatments fail or are unavailable, opioids may be appropriate for treating headache if the patient can be monitored to prevent the development of chronic headache. Opioids are being used more frequently in the management of non-malignant chronic pain. This practice has led to a new and growing problem with addiction and misuse of opioids. Because of various negative effects, the use of opioids for long-term management of chronic pain is not indicated unless other, less risky pain relievers have been found ineffective. Chronic pain which occurs only periodically, such as that from nerve pain, migraines, and fibromyalgia, is frequently better treated with medications other than opioids. Paracetamol and nonsteroidal anti-inflammatory drugs, including ibuprofen and naproxen, are considered safer alternatives. 
They are frequently used in combination with opioids, such as paracetamol combined with oxycodone (Percocet) and ibuprofen combined with hydrocodone (Vicoprofen), which boosts the pain relief but is also intended to deter recreational use. Other Cough Codeine was once viewed as the "gold standard" in cough suppressants, but this position is now questioned. Some recent placebo-controlled trials have found that it may be no better than a placebo for some causes, including acute cough in children. As a consequence, it is not recommended for children. Additionally, there is no evidence that hydrocodone is useful in children, and a 2012 Dutch guideline on the treatment of acute cough does not recommend its use. (The opioid analogue dextromethorphan, long claimed to be as effective a cough suppressant as codeine, has similarly demonstrated little benefit in several recent studies.) Low-dose morphine may help chronic cough, but its use is limited by side effects. Diarrhea In cases of diarrhea-predominant irritable bowel syndrome, opioids may be used to suppress diarrhea. Loperamide is a peripherally selective opioid available without a prescription that is used to suppress diarrhea. The ability to suppress diarrhea also produces constipation when opioids are used beyond several weeks. Naloxegol, a peripherally selective opioid antagonist, is now available to treat opioid-induced constipation. Shortness of breath Opioids may help with shortness of breath, particularly in advanced diseases such as cancer and COPD, among others. However, two recent systematic reviews of the literature found that opioids were not necessarily more effective in treating shortness of breath in patients with advanced cancer. Restless legs syndrome Though not typically a first line of treatment, opioids such as oxycodone and methadone are sometimes used in the treatment of severe and refractory restless legs syndrome. Hyperalgesia Opioid-induced hyperalgesia (OIH) has been observed in patients after chronic opioid exposure. Adverse effects Each year 69,000 people worldwide die of opioid overdose, and 15 million people have an opioid addiction. In older adults, opioid use is associated with increased adverse effects such as "sedation, nausea, vomiting, constipation, urinary retention, and falls". As a result, older adults taking opioids are at greater risk for injury. Opioids do not cause any specific organ toxicity, unlike many other drugs such as aspirin and paracetamol; they are not associated with upper gastrointestinal bleeding or kidney toxicity. Prescription of opioids for acute low back pain and for management of osteoarthritis, however, seems to have long-term adverse effects. According to the US CDC, methadone was involved in 31% of opioid-related deaths in the US between 1999 and 2010, and in 40% as the sole drug involved, far higher than other opioids. Studies of long-term opioid use have found that many people stop taking them and that minor side effects were common; addiction occurred in about 0.3% of users. In the United States in 2016, opioid overdose resulted in the death of 1.7 in 10,000 people. Reinforcement disorders Tolerance Tolerance is a process characterized by neuroadaptations that result in reduced drug effects. While receptor upregulation may often play an important role, other mechanisms are also known. 
Tolerance is more pronounced for some effects than for others: it occurs slowly for the effects on mood, itching, urinary retention, and respiratory depression, but more quickly for the analgesia and other physical side effects. However, tolerance does not develop to constipation or miosis (the constriction of the pupil of the eye to less than or equal to two millimeters), though this idea has been challenged, with some authors arguing that tolerance does develop to miosis. Tolerance to opioids is attenuated by a number of substances, including calcium channel blockers; intrathecal magnesium and zinc; NMDA antagonists such as dextromethorphan, ketamine, and memantine; and cholecystokinin antagonists such as proglumide. Newer agents such as the phosphodiesterase inhibitor ibudilast have also been researched for this application. Tolerance is a physiologic process in which the body adjusts to a medication that is frequently present, usually requiring higher doses of the same medication over time to achieve the same effect. It is a common occurrence in individuals taking high doses of opioids for extended periods, but does not predict any relationship to misuse or addiction. Physical dependence Physical dependence is the physiological adaptation of the body to the presence of a substance, in this case opioid medication. It is defined by the development of withdrawal symptoms when the substance is discontinued, when the dose is reduced abruptly or, specifically in the case of opioids, when an antagonist (e.g., naloxone) or an agonist-antagonist (e.g., pentazocine) is administered. Physical dependence is a normal and expected aspect of certain medications and does not necessarily imply that the patient is addicted. The withdrawal symptoms for opiates may include severe dysphoria, craving for another opiate dose, irritability, sweating, nausea, rhinorrhea, tremor, vomiting, and myalgia. Slowly reducing the intake of opioids over days and weeks can reduce or eliminate the withdrawal symptoms. The speed and severity of withdrawal depend on the half-life of the opioid; heroin and morphine withdrawal occur more quickly than methadone withdrawal. The acute withdrawal phase is often followed by a protracted phase of depression and insomnia that can last for months. The symptoms of opioid withdrawal can be treated with other medications, such as clonidine. Physical dependence does not predict drug misuse or true addiction, and is closely related to the same mechanism as tolerance. While there are anecdotal claims of benefit with ibogaine, the data supporting its use in substance dependence are poor. Critically ill patients who receive regular doses of opioids frequently experience iatrogenic withdrawal. Addiction Drug addiction is a complex set of behaviors typically associated with misuse of certain drugs, developing over time and with higher drug dosages. Addiction includes psychological compulsion, to the extent that the affected person persists in actions leading to dangerous or unhealthy outcomes. Opioid addiction can include insufflation or injection of the drug, rather than taking it orally as prescribed for medical reasons. In European nations such as Austria, Bulgaria, and Slovakia, slow-release oral morphine formulations are used in opiate substitution therapy (OST) for patients who do not tolerate the side effects of buprenorphine or methadone well. Buprenorphine can also be used together with naloxone for longer-term treatment of addiction. 
In other European countries, including the UK, this is also legally used for OST, although acceptance varies. Slow-release formulations of medications are intended to curb misuse and lower addiction rates while still providing legitimate pain relief and ease of use to pain patients. Questions remain, however, about the efficacy and safety of these types of preparations. Further tamper-resistant medications are currently under consideration, with trials for market approval by the FDA. The available evidence permits only a weak conclusion, but it suggests that a physician properly managing opioid use in patients with no history of substance use disorder can provide long-term pain relief with little risk of addiction or other serious side effects. Problems with opioids include the following: some people find that opioids do not relieve all of their pain; some find that the side effects cause problems which outweigh the therapy's benefit; some build tolerance over time, which requires them to increase their dosage to maintain the benefit, in turn also increasing the unwanted side effects; and long-term opioid use can cause opioid-induced hyperalgesia, a condition in which the patient has increased sensitivity to pain. All of the opioids can cause side effects. Common adverse reactions in patients taking opioids for pain relief include nausea and vomiting, drowsiness, itching, dry mouth, dizziness, and constipation. Nausea and vomiting Tolerance to nausea occurs within 7–10 days, during which antiemetics (e.g. low-dose haloperidol once at night) are very effective. Due to severe side effects such as tardive dyskinesia, haloperidol is now rarely used; a related drug, prochlorperazine, is more often used, although it has similar risks. Stronger antiemetics such as ondansetron or tropisetron are sometimes used when nausea is severe or continuous and disturbing, despite their greater cost. A less expensive alternative is dopamine antagonists such as domperidone and metoclopramide. Domperidone does not cross the blood–brain barrier and so does not produce adverse central antidopaminergic effects, but it blocks the opioid emetic action in the chemoreceptor trigger zone. (The drug is not available in the U.S.) Some antihistamines with anticholinergic properties (e.g. orphenadrine, diphenhydramine) may also be effective. The first-generation antihistamine hydroxyzine is very commonly used, with the added advantages of not causing movement disorders and of possessing analgesic-sparing properties. Δ9-tetrahydrocannabinol relieves nausea and vomiting and also produces analgesia that may allow lower doses of opioids with reduced nausea and vomiting. In summary, the main options against opioid-induced nausea are 5-HT3 antagonists (e.g. ondansetron), dopamine antagonists (e.g. domperidone), anticholinergic antihistamines (e.g. diphenhydramine), and Δ9-tetrahydrocannabinol (e.g. dronabinol). Vomiting is due to gastric stasis (large-volume vomiting, brief nausea relieved by vomiting, oesophageal reflux, epigastric fullness, early satiation), in addition to direct action on the chemoreceptor trigger zone of the area postrema, the vomiting centre of the brain. Vomiting can thus be prevented by prokinetic agents (e.g. domperidone or metoclopramide); if vomiting has already started, these drugs need to be administered by a non-oral route (e.g. subcutaneously for metoclopramide, rectally for domperidone). Useful classes here are thus prokinetic agents (e.g. domperidone) and anticholinergic agents (e.g. orphenadrine). 
Evidence suggests that opioid-inclusive anaesthesia is associated with postoperative nausea and vomiting. Patients with chronic pain using opioids had small improvements in pain and physical functioning and an increased risk of vomiting.
Drowsiness
Tolerance to drowsiness usually develops over 5–7 days, but if troublesome, switching to an alternative opioid often helps. Certain opioids such as fentanyl, morphine, and diamorphine (heroin) tend to be particularly sedating, while others such as oxycodone, tilidine, and meperidine (pethidine) tend to produce comparatively less sedation; however, individual patients' responses can vary markedly, and some degree of trial and error may be needed to find the most suitable drug for a particular patient. Otherwise, treatment with CNS stimulants is generally effective.
Stimulants (e.g. caffeine, modafinil, amphetamine, methylphenidate)
Itching
Itching tends not to be a severe problem when opioids are used for pain relief, but antihistamines are useful for counteracting itching when it occurs. Non-sedating antihistamines such as fexofenadine are often preferred, as they avoid increasing opioid-induced drowsiness. However, some sedating antihistamines such as orphenadrine can produce a synergistic pain-relieving effect, permitting smaller doses of opioids to be used. Consequently, several opioid/antihistamine combination products have been marketed, such as Meprozine (meperidine/promethazine) and Diconal (dipipanone/cyclizine), and these may also reduce opioid-induced nausea.
Antihistamines (e.g. fexofenadine)
Constipation
Opioid-induced constipation (OIC) develops in 90 to 95% of people taking opioids long-term. Since tolerance to this problem does not generally develop, most people on long-term opioids need to take a laxative or enemas. Treatment of OIC is stepwise and dependent on severity. The first mode of treatment is non-pharmacological, and includes lifestyle modifications like increasing dietary fiber, fluid intake, and physical activity. If non-pharmacological measures are ineffective, laxatives, including stool softeners (e.g., docusate), bulk-forming laxatives (e.g., fiber supplements), stimulant laxatives (e.g., bisacodyl, senna), and/or enemas, may be used. A common laxative regimen for OIC is the combination of docusate and bisacodyl. Osmotic laxatives, including lactulose, polyethylene glycol, and milk of magnesia (magnesium hydroxide), as well as mineral oil (a lubricant laxative), are also commonly used for OIC. If laxatives are insufficiently effective (which is often the case), opioid formulations or regimens that include a peripherally selective opioid antagonist, such as methylnaltrexone bromide, naloxegol, alvimopan, or naloxone (as in oxycodone/naloxone), may be tried. A 2018 Cochrane review (updated in 2022) found moderate evidence for alvimopan, naloxone, or methylnaltrexone bromide, but with an increased risk of adverse events. Naloxone by mouth appears to be the most effective. A daily 0.2 mg dose of naldemedine has been shown to significantly improve symptoms in patients with OIC. Opioid rotation is one method suggested to minimise the impact of constipation in long-term users. While all opioids cause constipation, there are some differences between drugs, with studies suggesting tramadol, tapentadol, methadone, and fentanyl may cause relatively less constipation, while with codeine, morphine, oxycodone, or hydromorphone constipation may be comparatively more severe.
Respiratory depression
Respiratory depression is the most serious adverse reaction associated with opioid use, but it is usually seen with the use of a single, intravenous dose in an opioid-naïve patient. In patients taking opioids regularly for pain relief, tolerance to respiratory depression occurs rapidly, so that it is not a clinical problem. Several drugs have been developed which can partially block respiratory depression, although the only respiratory stimulant currently approved for this purpose is doxapram, which has only limited efficacy in this application. Newer drugs such as BIMU-8 and CX-546 may be much more effective.
Respiratory stimulants: carotid chemoreceptor agonists (e.g. doxapram), 5-HT4 agonists (e.g. BIMU8), δ-opioid agonists (e.g. BW373U86), and AMPAkines (e.g. CX717) can all reduce respiratory depression caused by opioids without affecting analgesia, but most of these drugs are only moderately effective or have side effects which preclude use in humans. 5-HT1A agonists such as 8-OH-DPAT and repinotan also counteract opioid-induced respiratory depression, but at the same time reduce analgesia, which limits their usefulness for this application.
Opioid antagonists (e.g. naloxone, nalmefene, diprenorphine)
The initial 24 hours after opioid administration appear to be the most critical with regard to life-threatening opioid-induced respiratory depression (OIRD), which may be preventable with a more cautious approach to opioid use. Patients with cardiac disease, respiratory disease, and/or obstructive sleep apnoea are at increased risk for OIRD.
Increased pain sensitivity
Opioid-induced hyperalgesia – where individuals using opioids to relieve pain paradoxically experience more pain as a result of that medication – has been observed in some people. This phenomenon, although uncommon, is seen in some people receiving palliative care, most often when the dose is increased rapidly. If encountered, rotation between several different opioid pain medications may decrease the development of increased pain. Opioid-induced hyperalgesia more commonly occurs with chronic use or brief high doses, but some research suggests that it may also occur with very low doses.
Side effects such as hyperalgesia and allodynia, sometimes accompanied by a worsening of neuropathic pain, may be consequences of long-term treatment with opioid analgesics, especially when increasing tolerance has resulted in loss of efficacy and consequent progressive dose escalation over time. This appears largely to be a result of actions of opioid drugs at targets other than the three classic opioid receptors, including the nociceptin receptor, sigma receptor, and Toll-like receptor 4, and can be counteracted in animal models by antagonists at these targets, such as J-113,397, BD-1047, or (+)-naloxone, respectively. No drugs are currently approved specifically for counteracting opioid-induced hyperalgesia in humans, and in severe cases the only solution may be to discontinue use of opioid analgesics and replace them with non-opioid analgesic drugs. However, since individual sensitivity to the development of this side effect is highly dose-dependent and may vary with which opioid analgesic is used, many patients can avoid this side effect simply through dose reduction of the opioid drug (usually accompanied by the addition of a supplemental non-opioid analgesic), by rotating between different opioid drugs, or by switching to a milder opioid with a mixed mode of action that also counteracts neuropathic pain, particularly tramadol or tapentadol.
NMDA receptor antagonists such as ketamine
SNRIs such as milnacipran
Anticonvulsants such as gabapentin or pregabalin
Other adverse effects
Low sex hormone levels
Clinical studies have consistently associated medical and recreational opioid use with hypogonadism (low sex hormone levels) in both sexes. The effect is dose-dependent. Most studies suggest that the majority (perhaps as much as 90%) of chronic opioid users develop hypogonadism. A 2015 systematic review and meta-analysis found that opioid therapy suppressed testosterone levels in men by about 165 ng/dL (5.7 nmol/L) on average, a reduction in testosterone level of almost 50%. Conversely, opioid therapy did not significantly affect testosterone levels in women. However, opioids can also interfere with menstruation in women by limiting the production of luteinizing hormone (LH). Opioid-induced hypogonadism likely causes the strong association of opioid use with osteoporosis and bone fracture, due to deficiency in estradiol. It also may increase pain and thereby interfere with the intended clinical effect of opioid treatment. Opioid-induced hypogonadism is likely caused by agonism of opioid receptors in the hypothalamus and the pituitary gland. One study found that the depressed testosterone levels of heroin addicts returned to normal within one month of abstinence, suggesting that the effect is readily reversible and not permanent. The effect of low-dose or acute opioid use on the endocrine system remains unclear. Long-term use of opioids can affect other hormonal systems as well.
Disruption of work
Use of opioids may be a risk factor for failing to return to work. Persons performing any safety-sensitive task should not use opioids. Health care providers should not recommend that workers who drive or use heavy equipment, including cranes or forklifts, treat chronic or acute pain with opioids. Workplaces which manage workers who perform safety-sensitive operations should assign those workers to less sensitive duties for as long as they are treated by their physician with opioids. People who take opioids long-term have an increased likelihood of being unemployed. Taking opioids may further disrupt the patient's life, and the adverse effects of opioids themselves can become a significant barrier to patients having an active life, gaining employment, and sustaining a career. In addition, lack of employment may be a predictor of aberrant use of prescription opioids.
Increased accident-proneness
Opioid use may increase accident-proneness. Opioids may increase the risk of traffic accidents and accidental falls.
Reduced attention
Opioids have been shown to reduce attention, more so when used with antidepressants and/or anticonvulsants.
Rare side effects
Infrequent adverse reactions in patients taking opioids for pain relief include: dose-related respiratory depression (especially with more potent opioids), confusion, hallucinations, delirium, urticaria, hypothermia, bradycardia/tachycardia, orthostatic hypotension, dizziness, headache, urinary retention, ureteric or biliary spasm, muscle rigidity, myoclonus (with high doses), and flushing (due to histamine release, except with fentanyl and remifentanil). Both therapeutic and chronic use of opioids can compromise the function of the immune system. Opioids decrease the proliferation of macrophage progenitor cells and lymphocytes, and affect cell differentiation (Roy & Loh, 1996). Opioids may also inhibit leukocyte migration.
However, the relevance of this in the context of pain relief is not known.
Interactions
Physicians treating patients who use opioids in combination with other drugs should keep continual documentation that further treatment is indicated, and should remain aware of opportunities to adjust treatment if the patient's condition changes to merit less risky therapy.
With other depressant drugs
The concurrent use of opioids with other depressant drugs such as benzodiazepines or ethanol increases the rates of adverse events and overdose. Despite this, opioids and benzodiazepines are concurrently dispensed in many settings. As with an overdose of an opioid alone, the combination of an opioid and another depressant may precipitate respiratory depression, often leading to death. These risks are lessened with close monitoring by a physician, who may conduct ongoing screening for changes in patient behavior and treatment compliance.
Opioid antagonist
Opioid effects (adverse or otherwise) can be reversed with an opioid antagonist such as naloxone or naltrexone. These competitive antagonists bind to the opioid receptors with higher affinity than agonists but do not activate the receptors. This displaces the agonist, attenuating or reversing the agonist effects. However, the elimination half-life of naloxone can be shorter than that of the opioid itself, so repeat dosing or continuous infusion may be required, or a longer-acting antagonist such as nalmefene may be used. In patients taking opioids regularly, it is essential that the opioid is only partially reversed, to avoid a severe and distressing reaction of waking in excruciating pain. This is achieved by not giving a full dose of the antagonist, but giving it in small doses until the respiratory rate has improved. An infusion is then started to keep the reversal at that level, while maintaining pain relief. Opioid antagonists remain the standard treatment for respiratory depression following opioid overdose, with naloxone being by far the most commonly used, although the longer-acting antagonist nalmefene may be used for treating overdoses of long-acting opioids such as methadone, and diprenorphine is used for reversing the effects of extremely potent opioids used in veterinary medicine, such as etorphine and carfentanil. However, since opioid antagonists also block the beneficial effects of opioid analgesics, they are generally useful only for treating overdose; using opioid antagonists alongside opioid analgesics to reduce side effects requires careful dose titration and is often poorly effective at doses low enough to allow analgesia to be maintained. Naltrexone does not appear to increase the risk of serious adverse events, supporting the safety of oral naltrexone. Mortality or serious adverse events due to rebound toxicity in patients treated with naloxone were rare.
Pharmacology
Opioids bind to specific opioid receptors in the nervous system and other tissues. There are three principal classes of opioid receptors, μ, κ, δ (mu, kappa, and delta), although up to seventeen have been reported, including the ε, ι, λ, and ζ (epsilon, iota, lambda, and zeta) receptors. Conversely, σ (sigma) receptors are no longer considered to be opioid receptors because their activation is not reversed by the opioid inverse agonist naloxone, they do not exhibit high-affinity binding for classical opioids, and they are stereoselective for dextrorotatory isomers, while the other opioid receptors are stereoselective for levorotatory isomers.
In addition, there are three subtypes of μ-receptor: μ1 and μ2, and the newly discovered μ3. Another receptor of clinical importance is the opioid-receptor-like receptor 1 (ORL1), which is involved in pain responses as well as having a major role in the development of tolerance to μ-opioid agonists used as analgesics. These are all G-protein coupled receptors acting on GABAergic neurotransmission. The pharmacodynamic response to an opioid depends upon the receptor to which it binds, its affinity for that receptor, and whether the opioid is an agonist or an antagonist. For example, the supraspinal analgesic properties of the opioid agonist morphine are mediated by activation of the μ1 receptor; respiratory depression and physical dependence by the μ2 receptor; and sedation and spinal analgesia by the κ receptor. Each group of opioid receptors elicits a distinct set of neurological responses, with the receptor subtypes (such as μ1 and μ2) providing even more specific responses. Unique to each opioid is its distinct binding affinity for the various classes of opioid receptors (e.g. the μ, κ, and δ opioid receptors are activated at different magnitudes according to the specific receptor binding affinities of the opioid). For example, the opiate alkaloid morphine exhibits high-affinity binding to the μ-opioid receptor, while ketazocine exhibits high affinity for κ receptors. It is this combinatorial mechanism that allows for such a wide class of opioids and molecular designs to exist, each with its own unique effect profile. Their individual molecular structure is also responsible for their different durations of action, whereby metabolic breakdown (such as N-dealkylation) is responsible for opioid metabolism.
Functional selectivity
A new strategy of drug development takes receptor signal transduction into consideration. This strategy strives to increase the activation of desirable signalling pathways while reducing the impact on undesirable pathways. This differential strategy has been given several names, including functional selectivity and biased agonism. The first opioid intentionally designed as a biased agonist and placed into clinical evaluation is the drug oliceridine. It displays analgesic activity and reduced adverse effects.
Opioid comparison
Extensive research has been conducted to determine equivalence ratios comparing the relative potency of opioids. Given a dose of an opioid, an equianalgesic table is used to find the equivalent dosage of another. Such tables are used in opioid rotation practices, and to describe an opioid by comparison to morphine, the reference opioid. Equianalgesic tables typically list drug half-lives, and sometimes equianalgesic doses of the same drug by route of administration, such as morphine: oral and intravenous.
Binding profiles
Usage
Opioid prescriptions in the US increased from 76 million in 1991 to 207 million in 2013. In the 1990s, opioid prescribing increased significantly. Once used almost exclusively for the treatment of acute pain or pain due to cancer, opioids are now prescribed liberally for people experiencing chronic pain. This has been accompanied by rising rates of accidental addiction and accidental overdoses leading to death. According to the International Narcotics Control Board, the United States and Canada lead the per capita consumption of prescription opioids.
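As an illustration of how an equianalgesic table can drive a dose conversion of the kind described under "Opioid comparison" above, here is a minimal sketch in Python. The conversion factors, drug keys, and function are hypothetical round numbers chosen for demonstration only, not clinical guidance; real tables also adjust for incomplete cross-tolerance, route of administration, and patient factors.

```python
# Minimal sketch of equianalgesic dose conversion for opioid rotation.
# All factors below are illustrative assumptions, NOT clinical guidance.

# Hypothetical oral-morphine-equivalent factors
# (mg of drug -> equivalent mg of oral morphine).
MORPHINE_EQUIVALENT_FACTOR = {
    "morphine_oral": 1.0,       # reference opioid
    "oxycodone_oral": 1.5,      # assumed ratio for illustration
    "hydromorphone_oral": 4.0,  # assumed ratio for illustration
}

def convert_dose(dose_mg: float, from_drug: str, to_drug: str) -> float:
    """Convert a dose by passing through the morphine-equivalent reference."""
    morphine_equiv = dose_mg * MORPHINE_EQUIVALENT_FACTOR[from_drug]
    return morphine_equiv / MORPHINE_EQUIVALENT_FACTOR[to_drug]

# Example: 30 mg oral morphine expressed as oral hydromorphone.
print(convert_dose(30, "morphine_oral", "hydromorphone_oral"))  # 7.5 (mg)
```

Keying every entry to a single reference opioid keeps the table linear in the number of drugs, which mirrors the morphine-referenced layout the text describes.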
The number of opioid prescriptions per capita in the United States and Canada is double the consumption in the European Union, Australia, and New Zealand. Certain populations have been affected by the opioid addiction crisis more than others, including First World communities and low-income populations. Public health specialists say that this may result from the unavailability or high cost of alternative methods for addressing chronic pain. Opioids have been described as a cost-effective treatment for chronic pain, but the impact of the opioid epidemic and deaths caused by opioid overdoses should be considered in assessing their cost-effectiveness. Data from 2017 suggest that about 3.4 percent of the U.S. population are prescribed opioids for daily pain management. Calls for opioid deprescribing have led to broad-scale opioid tapering practices with little scientific evidence to support the safety or benefit for patients with chronic pain.
History
Naturally occurring opioids
Opioids are among the world's oldest known drugs. The earliest known evidence of Papaver somniferum in a human archaeological site dates to the Neolithic period, around 5,700–5,500 BCE. Its seeds have been found at Cueva de los Murciélagos in the Iberian Peninsula and La Marmotta in the Italian Peninsula.
Use of the opium poppy for medical, recreational, and religious purposes can be traced to the fourth century BC, when ideograms on Sumerian clay tablets mention the use of "Hul Gil", a "plant of joy". Opium was known to the Egyptians, and is mentioned in the Ebers Papyrus as an ingredient in a mixture for the soothing of children, and for the treatment of breast abscesses.
Opium was also known to the Greeks. It was valued by Hippocrates and his students for its sleep-inducing properties, and used for the treatment of pain. The Latin saying "Sedare dolorem opus divinum est", trans. "Alleviating pain is the work of the divine", has been variously ascribed to Hippocrates and to Galen of Pergamum. The medical use of opium is later discussed by Pedanius Dioscorides (died c. 90 AD), a Greek physician serving in the Roman army, in his five-volume work, De Materia Medica.
During the Islamic Golden Age, the use of opium was discussed in detail by Avicenna (died June 1037 AD) in The Canon of Medicine. The book's five volumes include information on opium's preparation, an array of physical effects, its use to treat a variety of illnesses, contraindications for its use, its potential danger as a poison, and its potential for addiction. Avicenna discouraged opium's use except as a last resort, preferring to address the causes of pain rather than trying to minimize it with analgesics. Many of Avicenna's observations have been supported by modern medical research.
Exactly when the world became aware of opium in India and China is uncertain, but opium was mentioned in the Chinese medical work K'ai-pao-pen-tsao (973 AD). By 1590 AD, opium poppies were a staple spring crop in the Subahs of Agra region.
The physician Paracelsus (died 1541) is often credited with reintroducing opium into medical use in Western Europe during the German Renaissance. He extolled opium's benefits for medical use. He also claimed to have an "arcanum", a pill which he called laudanum, that was superior to all others, particularly when death was to be cheated.
("Ich hab' ein Arcanum – heiss' ich Laudanum, ist über das Alles, wo es zum Tode reichen will.", roughly: "I have an arcanum; I call it laudanum; it surpasses all else when death is at hand.") Later writers have asserted that Paracelsus' recipe for laudanum contained opium, but its composition remains unknown.
Laudanum
The term laudanum was used generically for a useful medicine until the 17th century. After Thomas Sydenham introduced the first liquid tincture of opium, "laudanum" came to mean a mixture of opium and alcohol. Sydenham's 1669 recipe for laudanum mixed opium with wine, saffron, clove, and cinnamon. Sydenham's laudanum was used widely in both Europe and the Americas until the 20th century. Other popular medicines based on opium included Paregoric, a much milder liquid preparation for children; Black-drop, a stronger preparation; and Dover's powder.
The opium trade
Opium became a major colonial commodity, moving legally and illegally through trade networks involving India, the Portuguese, the Dutch, the British, and China, among others. The British East India Company saw the opium trade as an investment opportunity in 1683 AD. In 1773 the Governor of Bengal established a monopoly on the production of Bengal opium on behalf of the East India Company. The cultivation and manufacture of Indian opium was further centralized and controlled through a series of acts between 1797 and 1949. The British balanced an economic deficit from the importation of Chinese tea by selling Indian opium, which was smuggled into China in defiance of Chinese government bans. This led to the First (1839–1842) and Second Opium Wars (1856–1860) between China and Britain.
Morphine
In the 19th century, two major scientific advances were made that had far-reaching effects. Around 1804, the German pharmacist Friedrich Sertürner isolated morphine from opium. He described its crystallization, structure, and pharmacological properties in a well-received paper in 1817. Morphine was the first alkaloid to be isolated from any medicinal plant, marking the beginning of modern scientific drug discovery.
The second advance, nearly fifty years later, was the refinement of the hypodermic needle by Alexander Wood and others. The development of a glass syringe with a subcutaneous needle made it possible to easily administer controlled, measurable doses of a primary active compound.
Morphine was initially hailed as a wonder drug for its ability to ease pain. It could help people sleep, and had other useful side effects, including control of coughing and diarrhea. It was widely prescribed by doctors, and dispensed without restriction by pharmacists. During the American Civil War, opium and laudanum were used extensively to treat soldiers. It was also prescribed frequently for women, for menstrual pain and diseases of a "nervous character". At first it was assumed (wrongly) that this new method of application would not be addictive.
Codeine
Codeine was discovered in 1832 by Pierre Jean Robiquet. Robiquet was reviewing a method for morphine extraction described by the Scottish chemist William Gregory (1803–1858). Processing the residue left from Gregory's procedure, Robiquet isolated a crystalline substance from the other active components of opium. He wrote of his discovery: "Here is a new substance found in opium ... We know that morphine, which so far has been thought to be the only active principle of opium, does not account for all the effects, and for a long time the physiologists have been claiming that there is a gap that has to be filled."
His discovery of the alkaloid led to the development of a generation of antitussive and antidiarrheal medicines based on codeine.
Semi-synthetic and synthetic opioids
Synthetic opioids were invented, and biological mechanisms for their actions discovered, in the 20th century. Scientists have searched for non-addictive forms of opioids, but have created stronger ones instead. In England, Charles Romley Alder Wright developed hundreds of opiate compounds in his search for a nonaddictive opium derivative. In 1874 he became the first person to synthesize diamorphine (heroin), using a process called acetylation, which involved boiling morphine with acetic anhydride for several hours. Heroin received little attention until it was independently synthesized by Felix Hoffmann (1868–1946), working for Heinrich Dreser (1860–1924) at Bayer Laboratories. Dreser brought the new drug to market as an analgesic and a cough treatment for tuberculosis, bronchitis, and asthma in 1898. Bayer ceased production in 1913, after heroin's addictive potential was recognized.
Several semi-synthetic opioids were developed in Germany in the 1910s. The first, oxymorphone, was synthesized from thebaine, an opioid alkaloid in opium poppies, in 1914. Next, Martin Freund and Edmund Speyer developed oxycodone, also from thebaine, at the University of Frankfurt in 1916. In 1920, hydrocodone was prepared by Carl Mannich and Helene Löwenheim, who derived it from codeine. In 1924, hydromorphone was synthesized by adding hydrogen to morphine. Etorphine was synthesized in 1960, from the oripavine in opium poppy straw. Buprenorphine was discovered in 1972.
The first fully synthetic opioid was meperidine (Demerol), found serendipitously by the German chemist Otto Eisleb (or Eislib) at IG Farben in 1932. Meperidine was the first opioid to have a structure unrelated to morphine, but with opioid-like properties. Its analgesic effects were discovered by Otto Schaumann in 1939. Gustav Ehrhart and Max Bockmühl, also at IG Farben, built on the work of Eisleb and Schaumann. They developed "Hoechst 10820" (later methadone) around 1937. In 1959 the Belgian physician Paul Janssen developed fentanyl, a synthetic opioid with 30 to 50 times the potency of heroin. Nearly 150 synthetic opioids are now known.
Criminalization and medical use
Non-clinical use of opium was criminalized in the United States by the Harrison Narcotics Tax Act of 1914, and by many other laws. The use of opioids was stigmatized, and they were seen as dangerous substances, to be prescribed only as a last resort for dying patients. The Controlled Substances Act of 1970 eventually relaxed the harshness of the Harrison Act.
In the United Kingdom, the 1926 report of the Departmental Committee on Morphine and Heroin Addiction, under the chairmanship of the President of the Royal College of Physicians, reasserted medical control and established the "British system" of control, which lasted until the 1960s.
In the 1980s the World Health Organization published guidelines for prescribing drugs, including opioids, for different levels of pain. In the U.S., Kathleen Foley and Russell Portenoy became leading advocates for the liberal use of opioids as painkillers for cases of "intractable non-malignant pain". With little or no scientific evidence to support their claims, industry scientists and advocates suggested that people with chronic pain would be resistant to addiction.
The release of OxyContin in 1996 was accompanied by an aggressive marketing campaign promoting the use of opioids for pain relief. Increasing prescription of opioids fueled a growing black market for heroin. Between 2000 and 2014 there was an "alarming increase in heroin use across the country and an epidemic of drug overdose deaths". As a result, health care organizations and public health groups, such as Physicians for Responsible Opioid Prescribing, have called for decreases in the prescription of opioids. In 2016, the Centers for Disease Control and Prevention (CDC) issued a new set of guidelines for the prescription of opioids "for chronic pain outside of active cancer treatment, palliative care, and end-of-life care", which was followed by an increase in opioid tapering.
"Remove the Risk"
In April 2019 the U.S. Food and Drug Administration announced the launch of a new education campaign to help Americans understand the important role they play in removing and properly disposing of unused prescription opioids from their homes. This initiative is part of the FDA's continued efforts to address the nationwide opioid crisis (see below) and aims to help decrease unnecessary exposure to opioids and prevent new addiction. The "Remove the Risk" campaign targets women ages 35–64, who are most likely to oversee household health care decisions and often serve as the gatekeepers to opioids and other prescription medications in the home.
Society and culture
Definition
The term "opioid" originated in the 1950s. It combines "opium" + "-oid", meaning "opiate-like" ("opiates" being morphine and similar drugs derived from opium). The first scientific publication to use it, in 1963, included a footnote stating, "In this paper, the term, 'opioid', is used in the sense originally proposed by George H. Acheson (personal communication) to refer to any chemical compound with morphine-like activities". By the late 1960s, research found that opiate effects are mediated by activation of specific molecular receptors in the nervous system, which were termed "opioid receptors". The definition of "opioid" was later refined to refer to substances that have morphine-like activities mediated by the activation of opioid receptors. One modern pharmacology textbook states: "the term opioid applies to all agonists and antagonists with morphine-like activity, and also the naturally occurring and synthetic opioid peptides". Another pharmacology reference eliminates the morphine-like requirement: "Opioid, a more modern term, is used to designate all substances, both natural and synthetic, that bind to opioid receptors (including antagonists)". Some sources define the term opioid to exclude opiates, and others use opiate comprehensively instead of opioid, but opioid used inclusively is considered modern and preferred, and is in wide use.
Efforts to reduce recreational use in the US
In 2011, the Obama administration released a white paper describing the administration's plan to deal with the opioid crisis. The administration's concerns about addiction and accidental overdosing have been echoed by numerous other medical and government advisory groups around the world. As of 2015, prescription drug monitoring programs exist in every state except Missouri. These programs allow pharmacists and prescribers to access patients' prescription histories in order to identify suspicious use.
However, a survey of US physicians published in 2015 found that only 53% of doctors used these programs, while 22% were not aware that the programs were available to them. The Centers for Disease Control and Prevention was tasked with establishing and publishing a new guideline, and was heavily lobbied. In 2016, the United States Centers for Disease Control and Prevention published its Guideline for Prescribing Opioids for Chronic Pain, recommending that opioids be used only when the benefits for pain and function are expected to outweigh the risks, and then at the lowest effective dosage, with avoidance of concurrent opioid and benzodiazepine use whenever possible. Research suggests that the prescription of high doses of opioids related to chronic opioid therapy (COT) can at times be prevented through state legislative guidelines and efforts by health plans that devote resources and establish shared expectations for reducing higher doses. On 10 August 2017, Donald Trump declared the opioid crisis a (non-FEMA) national public health emergency.
Global shortages
Morphine and other poppy-based medicines have been identified by the World Health Organization as essential in the treatment of severe pain. As of 2002, seven countries (the USA, UK, Italy, Australia, France, Spain, and Japan) used 77% of the world's morphine supplies, leaving many emerging countries lacking in pain relief medication. The current system of supply of raw poppy materials to make poppy-based medicines is regulated by the International Narcotics Control Board (INCB) under the provisions of the 1961 Single Convention on Narcotic Drugs. The amount of raw poppy material that each country can demand annually under these provisions must correspond to an estimate of the country's needs, taken from the national consumption within the preceding two years. In many countries, underprescription of morphine is rampant because of the high prices and the lack of training in the prescription of poppy-based drugs. The World Health Organization is now working with administrations from various countries to train health workers and to develop national regulations regarding drug prescription to facilitate a greater prescription of poppy-based medicines.
Another idea for increasing morphine availability has been proposed by the Senlis Council. In its proposal for Afghan Morphine, the council suggests that Afghanistan could provide cheap pain relief to emerging countries as part of a second-tier system of supply. Such a system would complement the current INCB-regulated system, maintaining the balance and the closed system that it establishes, while providing finished-product morphine to those in severe pain who are unable to access poppy-based drugs under the current system.
Recreational use
Opioids can produce strong feelings of euphoria and are frequently used recreationally. Although recreational use is traditionally associated with illicit opioids such as heroin, prescription opioids are also misused recreationally. Drug misuse and non-medical use include the use of drugs for reasons or at doses other than prescribed. Opioid misuse can also include providing medications to persons for whom they were not prescribed. Such diversion may be treated as a crime, punishable by imprisonment in many countries. In 2014, almost 2 million Americans abused or were dependent on prescription opioids.
Classification
There are a number of broad classes of opioids:
Natural opiates: alkaloids contained in the resin of the opium poppy, primarily morphine, codeine, and thebaine, but not papaverine and noscapine, which have a different mechanism of action.
Esters of morphine opiates: slightly chemically altered but more natural than the semi-synthetics, as most are morphine prodrugs: diacetylmorphine (morphine diacetate; heroin), nicomorphine (morphine dinicotinate), dipropanoylmorphine (morphine dipropionate), desomorphine, acetylpropionylmorphine, dibenzoylmorphine, and diacetyldihydromorphine.
Semi-synthetic opioids: created from either the natural opiates or morphine esters, such as hydromorphone, hydrocodone, oxycodone, oxymorphone, ethylmorphine, and buprenorphine.
Fully synthetic opioids: such as fentanyl, pethidine, levorphanol, methadone, tramadol, tapentadol, and dextropropoxyphene.
Endogenous opioid peptides: produced naturally in the body, such as endorphins, enkephalins, dynorphins, and endomorphins.
Endogenous opioids, non-peptide: morphine and some other opioids, which are produced in small amounts in the body, are included in this category.
Natural opioids, non-animal, non-opiate: the leaves of Mitragyna speciosa (kratom) contain a few naturally occurring opioids, active via μ- and δ-receptors. Salvinorin A, found naturally in the Salvia divinorum plant, is a κ-opioid receptor agonist.
Tramadol and tapentadol, which act as monoamine uptake inhibitors, also act as mild and potent agonists (respectively) of the μ-opioid receptor. Both drugs produce analgesia even when naloxone, an opioid antagonist, is administered.
Some minor opium alkaloids and various substances with opioid action are also found elsewhere, including molecules present in kratom, Corydalis, and Salvia divinorum plants and in some species of poppy aside from Papaver somniferum. There are also poppy strains which produce copious amounts of thebaine, an important raw material for making many semi-synthetic and synthetic opioids. Of all of the more than 120 poppy species, only two produce morphine.
Amongst analgesics there are a small number of agents which act on the central nervous system but not on the opioid receptor system, and therefore have none of the other (narcotic) qualities of opioids, although they may produce euphoria by relieving pain; this euphoria, because of the way it is produced, does not form the basis of habituation, physical dependence, or addiction. Foremost amongst these are nefopam, orphenadrine, and perhaps phenyltoloxamine or some other antihistamines. Tricyclic antidepressants have a painkilling effect as well, but they are thought to act by indirectly activating the endogenous opioid system. Paracetamol is predominantly a centrally acting (non-narcotic) analgesic which mediates its effect by action on descending serotoninergic (5-hydroxytryptaminergic) pathways, increasing 5-HT release (which inhibits the release of pain mediators). It also decreases cyclo-oxygenase activity. It has recently been discovered that most or all of the therapeutic efficacy of paracetamol is due to a metabolite, AM404, which enhances the release of serotonin and inhibits the uptake of anandamide.
Other analgesics work peripherally (i.e., not on the brain or spinal cord). Research is starting to show that morphine and related drugs may indeed have peripheral effects as well, such as morphine gel working on burns. Recent investigations have discovered opioid receptors on peripheral sensory neurons.
A significant fraction (up to 60%) of opioid analgesia can be mediated by such peripheral opioid receptors, particularly in inflammatory conditions such as arthritis and traumatic or surgical pain. Inflammatory pain is also blunted by endogenous opioid peptides activating peripheral opioid receptors.
It was discovered in 1953 that humans and some animals naturally produce minute amounts of morphine, codeine, and possibly some of their simpler derivatives like heroin and dihydromorphine, in addition to endogenous opioid peptides. Some bacteria are capable of producing some semi-synthetic opioids such as hydromorphone and hydrocodone when living in a solution containing morphine or codeine, respectively.
Many of the alkaloids and other derivatives of the opium poppy are not opioids or narcotics; the best example is the smooth-muscle relaxant papaverine. Noscapine is a marginal case, as it does have CNS effects, but not necessarily ones similar to morphine's, and it is probably in a category all its own.
Dextromethorphan (the stereoisomer of levomethorphan, a semi-synthetic opioid agonist) and its metabolite dextrorphan have no opioid analgesic effect at all despite their structural similarity to other opioids; instead they are potent NMDA antagonists and sigma-1 and sigma-2 receptor agonists, and are used in many over-the-counter cough suppressants.
Salvinorin A is a unique, selective, powerful κ-opioid receptor agonist. Nevertheless, it is not properly considered an opioid, because chemically it is not an alkaloid, and it has no typical opioid properties, with absolutely no anxiolytic or cough-suppressant effects. It is instead a powerful hallucinogen.
Endogenous opioids
Opioid peptides that are produced in the body include:
Endorphins
Enkephalins
Dynorphins
Endomorphins
β-endorphin is expressed in pro-opiomelanocortin (POMC) cells in the arcuate nucleus, in the brainstem, and in immune cells, and acts through μ-opioid receptors. β-endorphin has many effects, including on sexual behavior and appetite. β-endorphin is also secreted into the circulation from pituitary corticotropes and melanotropes. α-neoendorphin is also expressed in POMC cells in the arcuate nucleus.
Met-enkephalin is widely distributed in the CNS and in immune cells; [met]-enkephalin is a product of the proenkephalin gene, and acts through μ and δ-opioid receptors.
Leu-enkephalin, also a product of the proenkephalin gene, acts through δ-opioid receptors.
Dynorphin acts through κ-opioid receptors, and is widely distributed in the CNS, including in the spinal cord and hypothalamus, in particular in the arcuate nucleus and in both oxytocin and vasopressin neurons in the supraoptic nucleus.
Endomorphin acts through μ-opioid receptors, and is more potent than other endogenous opioids at these receptors.
Opium alkaloids and derivatives
Opium alkaloids
Phenanthrenes naturally occurring in opium:
Codeine
Morphine
Thebaine
Oripavine
Preparations of mixed opium alkaloids, including papaveretum, are still occasionally used.
Esters of morphine
Diacetylmorphine (morphine diacetate; heroin)
Nicomorphine (morphine dinicotinate)
Dipropanoylmorphine (morphine dipropionate)
Diacetyldihydromorphine
Acetylpropionylmorphine
Desomorphine
Methyldesorphine
Dibenzoylmorphine
Ethers of morphine
Dihydrocodeine
Ethylmorphine
Heterocodeine
Semi-synthetic alkaloid derivatives
Buprenorphine
Etorphine
Hydrocodone
Hydromorphone
Oxycodone (sold as OxyContin)
Oxymorphone
Synthetic opioids
Anilidopiperidines
Fentanyl (see also list of fentanyl analogues)
Alphamethylfentanyl
Alfentanil
Sufentanil
Remifentanil
Carfentanil
Ohmefentanyl
Ohmecarfentanil
Benzimidazoles
Benzimidazole opioids are also known as nitazenes.
Metodesnitazene (Metazene)
Etodesnitazene (Etazene)
Metonitazene
Etonitazene
Etonitazepyne
Etonitazepipne
Isotonitazene
Clonitazene
Phenylpiperidines
Pethidine (meperidine)
Ketobemidone
MPPP
Allylprodine
Prodine
PEPAP
Promedol
Diphenylpropylamine derivatives
Propoxyphene
Dextropropoxyphene
Dextromoramide
Bezitramide
Piritramide
Methadone
Dipipanone
Levomethadyl acetate (LAAM)
Difenoxin
Diphenoxylate
Loperamide (crosses the blood–brain barrier but is quickly pumped back into the non-central nervous system by P-glycoprotein; mild opiate withdrawal after sustained and prolonged use has been seen in animal models, including rhesus monkeys, mice, and rats)
Benzomorphan derivatives
Dezocine (agonist/antagonist)
Pentazocine (agonist/antagonist)
Phenazocine
Oripavine derivatives
Buprenorphine (partial agonist)
Dihydroetorphine
Etorphine
Morphinan derivatives
Butorphanol (agonist/antagonist)
Nalbuphine (agonist/antagonist)
Levorphanol
Levomethorphan
Racemethorphan
Others
Lefetamine
Meptazinol
Mitragynine
Tilidine
Tramadol
Tapentadol
Eluxadoline
Bucinnazine
7-Hydroxymitragynine
Allosteric modulators
Plain allosteric modulators do not belong to the opioids; instead they are classified as opioidergics.
Opioid antagonists
Nalmefene
Naloxone
Naltrexone
Methylnaltrexone (only peripherally active, as it does not cross the blood–brain barrier in sufficient quantities to be centrally active; it can thus be considered the antithesis of loperamide)
Naloxegol (only peripherally active, as it does not cross the blood–brain barrier in sufficient quantities to be centrally active; it can thus be considered the antithesis of loperamide)
Tables of opioids
Table of morphinan opioids
Table of non-morphinan opioids
Biology and health sciences
Pain treatments
Health
1947467
https://en.wikipedia.org/wiki/Radical%20polymerization
Radical polymerization
In polymer chemistry, radical polymerization (RP) is a method of polymerization by which a polymer forms by the successive addition of free-radical building blocks (repeat units). Radicals can be formed by a number of different mechanisms, usually involving separate initiator molecules. Following its generation, the initiating radical adds (nonradical) monomer units, thereby growing the polymer chain.
Radical polymerization is a key synthesis route for obtaining a wide variety of different polymers and material composites. The relatively non-specific nature of radical chemical interactions makes this one of the most versatile forms of polymerization available and allows facile reactions of polymeric radical chain ends with other chemicals or substrates. In 2001, 40 billion of the 110 billion pounds of polymers produced in the United States were produced by radical polymerization. Radical polymerization is a type of chain polymerization, along with anionic, cationic, and coordination polymerization.
Initiation
Initiation is the first step of the polymerization process. During initiation, an active center is created from which a polymer chain is generated. Not all monomers are susceptible to all types of initiators. Radical initiation works best on the carbon–carbon double bond of vinyl monomers and the carbon–oxygen double bond in aldehydes and ketones. Initiation has two steps. In the first step, one or two radicals are created from the initiating molecules. In the second step, radicals are transferred from the initiator molecules to the monomer units present. Several choices are available for these initiators.
Types of initiation and the initiators
Thermal decomposition
The initiator is heated until a bond is homolytically cleaved, producing two radicals (Figure 1). This method is used most often with organic peroxides or azo compounds.
Photolysis
Radiation cleaves a bond homolytically, producing two radicals (Figure 2). This method is used most often with metal iodides, metal alkyls, and azo compounds. Photoinitiation can also occur by bimolecular H abstraction when the radical is in its lowest triplet excited state. An acceptable photoinitiator system should fulfill the following requirements:
High absorptivity in the 300–400 nm range.
Efficient generation of radicals capable of attacking the alkene double bond of vinyl monomers.
Adequate solubility in the binder system (prepolymer + monomer).
Should not impart yellowing or unpleasant odors to the cured material.
The photoinitiator and any byproducts resulting from its use should be non-toxic.
Redox reactions
Reduction of hydrogen peroxide or an alkyl hydroperoxide by iron (Figure 3). Other reductants such as Cr2+, V2+, Ti3+, Co2+, and Cu+ can be employed in place of ferrous ion in many instances.
Persulfates
The dissociation of a persulfate in the aqueous phase (Figure 4). This method is useful in emulsion polymerizations, in which the radical diffuses into a hydrophobic monomer-containing droplet.
Ionizing radiation
α-, β-, γ-, or X-rays cause ejection of an electron from the initiating species, followed by dissociation and electron capture to produce a radical (Figure 5).
Electrochemical
Electrolysis of a solution containing both monomer and electrolyte. A monomer molecule will receive an electron at the cathode to become a radical anion, and a monomer molecule will give up an electron at the anode to form a radical cation (Figure 6). The radical ions then initiate free radical (and/or ionic) polymerization.
This type of initiation is especially useful for coating metal surfaces with polymer films.
Plasma
A gaseous monomer is placed in an electric discharge at low pressures under conditions where a plasma (ionized gaseous molecules) is created. In some cases, the system is heated and/or placed in a radiofrequency field to assist in creating the plasma.
Sonication
High-intensity ultrasound at frequencies beyond the range of human hearing (above 16 kHz) can be applied to a monomer. Initiation results from the effects of cavitation (the formation and collapse of cavities in the liquid). The collapse of the cavities generates very high local temperatures and pressures. This results in the formation of excited electronic states, which in turn lead to bond breakage and radical formation.
Ternary initiators
A ternary initiator is the combination of several types of initiators into one initiating system. The types of initiators are chosen based on the properties they are known to induce in the polymers they produce. For example, poly(methyl methacrylate) has been synthesized by the ternary system of benzoyl peroxide, 3,6-bis(o-carboxybenzoyl)-N-isopropylcarbazole, and di-η5-indenylzirconium dichloride (Figure 7). This type of initiating system contains a metallocene, an initiator, and a heteroaromatic diketo carboxylic acid. Metallocenes in combination with initiators accelerate the polymerization of poly(methyl methacrylate) and produce a polymer with a narrower molecular weight distribution. The example shown here consists of indenylzirconium (a metallocene) and benzoyl peroxide (an initiator). Also, initiating systems containing heteroaromatic diketo carboxylic acids, such as 3,6-bis(o-carboxybenzoyl)-N-isopropylcarbazole in this example, are known to catalyze the decomposition of benzoyl peroxide. Initiating systems with this particular heteroaromatic diketo carboxylic acid are also known to affect the microstructure of the polymer. The combination of all of these components (a metallocene, an initiator, and a heteroaromatic diketo carboxylic acid) yields a ternary initiating system that has been shown to accelerate polymerization and produce polymers with enhanced heat resistance and regular microstructure.
Initiator efficiency
Due to side reactions, not all radicals formed by the dissociation of initiator molecules actually add monomers to form polymer chains. The efficiency factor f is defined as the fraction of the original initiator which contributes to the polymerization reaction. The maximal value of f is 1, but typical values range from 0.3 to 0.8. The following types of reactions can decrease the efficiency of the initiator.
Primary recombination
Two radicals recombine before initiating a chain (Figure 8). This occurs within the solvent cage, meaning that no solvent has yet come between the new radicals.
Other recombination pathways
Two radical initiators recombine before initiating a chain, but not in the solvent cage (Figure 9).
Side reactions
One radical is produced instead of the three radicals that could be produced (Figure 10).
Propagation
During polymerization, a polymer spends most of its time increasing its chain length, or propagating. After the radical initiator is formed, it attacks a monomer (Figure 11). In an ethene monomer, one electron pair is held securely between the two carbons in a sigma bond. The other is more loosely held in a pi bond. The free radical uses one electron from the pi bond to form a more stable bond with the carbon atom.
The other electron returns to the second carbon atom, turning the whole molecule into another radical. This begins the polymer chain. Figure 12 shows how the orbitals of an ethylene monomer interact with a radical initiator.
Once a chain has been initiated, the chain propagates (Figure 13) until there are no more monomers (living polymerization) or until termination occurs. There may be anywhere from a few to thousands of propagation steps, depending on several factors such as radical and chain reactivity, the solvent, and temperature. The mechanism of chain propagation is as follows:
Termination
Chain termination is inevitable in radical polymerization due to the high reactivity of radicals. Termination can occur by several different mechanisms. If longer chains are desired, the initiator concentration should be kept low; otherwise, many shorter chains will result.
Combination of two active chain ends: one or both of the following processes may occur.
Combination: two chain ends simply couple together to form one long chain (Figure 14). One can determine if this mode of termination is occurring by monitoring the molecular weight of the propagating species: combination will result in a doubling of molecular weight. Also, combination will result in a polymer that is C2-symmetric about the point of combination.
Radical disproportionation: a hydrogen atom from one chain end is abstracted by another, producing a polymer with a terminal unsaturated group and a polymer with a terminal saturated group (Figure 15).
Combination of an active chain end with an initiator radical (Figure 16).
Interaction with impurities or inhibitors. Oxygen is the most common inhibitor. The growing chain will react with molecular oxygen, producing an oxygen radical, which is much less reactive (Figure 17). This significantly slows down the rate of propagation. Nitrobenzene, butylated hydroxytoluene, and diphenylpicrylhydrazyl (DPPH, Figure 18) are a few other inhibitors. The latter is an especially effective inhibitor because of the resonance stabilization of the radical.
Chain transfer
In contrast to the other modes of termination, chain transfer results in the destruction of only one radical while also creating another radical. Often, however, this newly created radical is not capable of further propagation. Similar to disproportionation, all chain-transfer mechanisms involve the abstraction of a hydrogen or other atom. There are several types of chain-transfer mechanisms.
To solvent: a hydrogen atom is abstracted from a solvent molecule, resulting in the formation of a radical on the solvent molecule, which will not propagate further (Figure 19). The effectiveness of chain transfer involving solvent molecules depends on the amount of solvent present (more solvent leads to a greater probability of transfer), the strength of the bond involved in the abstraction step (a weaker bond leads to a greater probability of transfer), and the stability of the solvent radical that is formed (greater stability leads to a greater probability of transfer). Halogens, except fluorine, are easily transferred.
To monomer: a hydrogen atom is abstracted from a monomer. While this does create a radical on the affected monomer, resonance stabilization of this radical discourages further propagation (Figure 20).
To initiator: a polymer chain reacts with an initiator, which terminates that polymer chain but creates a new radical initiator (Figure 21). This initiator can then begin new polymer chains.
Therefore, in contrast to the other forms of chain transfer, chain transfer to the initiator does allow for further propagation. Peroxide initiators are especially sensitive to chain transfer.
To polymer: the radical of a polymer chain abstracts a hydrogen atom from somewhere on another polymer chain (Figure 22). This terminates the growth of one polymer chain, but allows the other to branch and resume growing. This reaction step changes neither the number of polymer chains nor the number of monomers which have been polymerized, so the number-average degree of polymerization is unaffected.
Effects of chain transfer: The most obvious effect of chain transfer is a decrease in the polymer chain length. If the rate of transfer is much larger than the rate of propagation, then very small polymers are formed, with chain lengths of 2–5 repeating units (telomerization). The Mayo equation estimates the influence of chain transfer on the chain length (xn):
1/xn = 1/xn0 + (ktr/kp)·([S]/[M])
where ktr is the rate constant for chain transfer, kp is the rate constant for propagation, and xn0 is the chain length in the absence of chain transfer. The Mayo equation assumes that transfer to solvent is the major termination pathway.
Methods
There are four industrial methods of radical polymerization:
Bulk polymerization: the reaction mixture contains only initiator and monomer, no solvent.
Solution polymerization: the reaction mixture contains solvent, initiator, and monomer.
Suspension polymerization: the reaction mixture contains an aqueous phase, water-insoluble monomer, and initiator soluble in the monomer droplets (both the monomer and the initiator are hydrophobic).
Emulsion polymerization: similar to suspension polymerization except that the initiator is soluble in the aqueous phase rather than in the monomer droplets (the monomer is hydrophobic and the initiator is hydrophilic); an emulsifying agent is also needed.
Other methods of radical polymerization include the following:
Template polymerization: in this process, polymer chains are allowed to grow along template macromolecules for the greater part of their lifetime. A well-chosen template can affect the rate of polymerization as well as the molar mass and microstructure of the daughter polymer. The molar mass of a daughter polymer can be up to 70 times greater than that of polymers produced in the absence of the template, and can be higher than that of the templates themselves. This is because of retardation of termination for template-associated radicals and hopping of a radical to a neighboring template after reaching the end of a template polymer.
Plasma polymerization: the polymerization is initiated with plasma. A variety of organic molecules, including alkenes, alkynes, and alkanes, undergo polymerization to high-molecular-weight products under these conditions. The propagation mechanisms appear to involve both ionic and radical species. Plasma polymerization offers a potentially unique method of forming thin polymer films for uses such as thin-film capacitors, antireflection coatings, and various types of thin membranes.
Sonication: the polymerization is initiated by high-intensity ultrasound. Polymerization to high-molecular-weight polymer is observed, but conversions are low (<15%). The polymerization is self-limiting because of the high viscosity produced even at low conversion. High viscosity hinders cavitation and radical production.
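To make the Mayo equation above concrete, here is a small numeric sketch in Python; every rate constant and concentration below is an invented round number chosen purely for illustration, not data for any real monomer/solvent pair.

```python
# Numeric illustration of the Mayo equation for transfer to solvent:
#   1/xn = 1/xn0 + (ktr/kp) * ([S]/[M])
# All values are invented round numbers for illustration only.

k_tr = 0.5     # chain-transfer rate constant to solvent (assumed)
k_p = 1000.0   # propagation rate constant (assumed)
C_s = k_tr / k_p             # chain-transfer constant, here 5e-4

xn0 = 10_000.0               # chain length without chain transfer (assumed)
S_over_M = 10.0              # solvent-to-monomer concentration ratio (assumed)

xn = 1.0 / (1.0 / xn0 + C_s * S_over_M)
print(round(xn))  # 196 -- transfer to solvent shortens the chains ~50-fold
```

Because the transfer term C_s·[S]/[M] adds directly to 1/xn0, even a small transfer constant dominates the chain length once the solvent-to-monomer ratio is large.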
Reversible deactivation radical polymerization Reversible deactivation radical polymerization (RDRP), also known as living or controlled radical polymerization, relies on completely pure reactions, preventing termination caused by impurities. Because these polymerizations stop only when there is no more monomer, polymerization can continue upon the addition of more monomer. Block copolymers can be made this way. RDRP allows for control of molecular weight and dispersity. However, this is very difficult to achieve in practice, and instead a pseudo-living polymerization occurs with only partial control of molecular weight and dispersity. ATRP and RAFT are the main types of controlled radical polymerization. Atom transfer radical polymerization (ATRP): based on the formation of a carbon-carbon bond by atom transfer radical addition. This method, independently discovered in 1995 by Mitsuo Sawamoto and by Jin-Shan Wang and Krzysztof Matyjaszewski, requires reversible activation of a dormant species (such as an alkyl halide) and a transition metal halide catalyst (to activate the dormant species). Reversible addition-fragmentation chain-transfer polymerization (RAFT): requires a compound that can act as a reversible chain-transfer agent, such as a dithio compound. Stable free radical polymerization (SFRP): used to synthesize linear or branched polymers with narrow molecular weight distributions and reactive end groups on each polymer chain. The process has also been used to create block copolymers with unique properties. Conversions are about 100% using this process, but it requires temperatures of about 135 °C. This process is most commonly used with acrylates, styrenes, and dienes. The reaction scheme in Figure 23 illustrates the SFRP process. Because the chain end is functionalized with the TEMPO molecule (Figure 24), premature termination by coupling is reduced. As with all living polymerizations, the polymer chain grows until all of the monomer is consumed. Kinetics In typical chain growth polymerizations, the reaction rates for initiation, propagation and termination can be described as follows: Ri = 2f·kd[I], Rp = kp[M][M•], and Rt = 2kt[M•]², where f is the efficiency of the initiator and kd, kp, and kt are the rate constants for initiator dissociation, chain propagation and termination, respectively. [I], [M] and [M•] are the concentrations of the initiator, the monomer and the active growing chain. Under the steady-state approximation, the concentration of the active growing chains remains constant, i.e. the rates of initiation and of termination are equal. The concentration of active chains can then be derived and expressed in terms of the other known species in the system: [M•] = (f·kd[I]/kt)^(1/2). In this case, the rate of chain propagation can be further described as a function of the initiator and monomer concentrations: Rp = kp[M](f·kd[I]/kt)^(1/2). The kinetic chain length ν is a measure of the average number of monomer units reacting with an active center during its lifetime and is related to the molecular weight through the mechanism of the termination. Without chain transfer, the kinetic chain length is a function of the propagation rate and the initiation rate only: ν = Rp/Ri = kp[M]/(2(f·kd·kt[I])^(1/2)). Assuming no chain-transfer effect occurs in the reaction, the number average degree of polymerization Pn can be correlated with the kinetic chain length.
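Before relating ν to the degree of polymerization, the steady-state expressions above can be evaluated numerically. This is a sketch of mine, not the article's; the rate constants below are hypothetical but order-of-magnitude typical for vinyl monomers:

import math

def steady_state_rates(f, k_d, k_p, k_t, I, M):
    # Classical free-radical kinetics under the steady-state approximation:
    # setting the initiation rate 2*f*k_d*[I] equal to the termination
    # rate 2*k_t*[M.]^2 gives the radical concentration, from which the
    # propagation rate and kinetic chain length follow.
    radical = math.sqrt(f * k_d * I / k_t)   # [M.] at steady state
    R_p = k_p * M * radical                  # propagation rate
    R_i = 2.0 * f * k_d * I                  # initiation rate
    nu = R_p / R_i                           # kinetic chain length
    return radical, R_p, nu

radical, R_p, nu = steady_state_rates(
    f=0.5, k_d=1e-5, k_p=1e3, k_t=1e7, I=1e-2, M=1.0)
print(radical, R_p, nu)

Note the square-root dependence on initiator concentration implied by these relations: doubling [I] raises the polymerization rate by only a factor of √2, while shortening the kinetic chain length by the same factor.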
In the case of termination by disproportionation, one polymer molecule is produced per kinetic chain: Pn = ν. Termination by combination leads to one polymer molecule per two kinetic chains: Pn = 2ν. Any mixture of both these mechanisms can be described by using the value δ, the contribution of disproportionation to the overall termination process: Pn = 2ν/(1 + δ). If chain transfer is considered, the kinetic chain length is not affected by the transfer process, because the growing free-radical center generated by the initiation step stays alive after any chain-transfer event, although multiple polymer chains are produced. However, the number average degree of polymerization decreases as chain transfer occurs, since the growing chains are terminated by the chain-transfer events. Taking into account chain transfer towards solvent S, initiator I, polymer P, and added chain-transfer agent T, the equation for Pn is modified as follows: 1/Pn = (1 + δ)/(2ν) + CS·[S]/[M] + CI·[I]/[M] + CP·[P]/[M] + CT·[T]/[M]. It is usual to define chain-transfer constants C for the different molecules: CS = ktr,S/kp, CI = ktr,I/kp, CP = ktr,P/kp, CT = ktr,T/kp. Thermodynamics In chain growth polymerization, the position of the equilibrium between polymer and monomers can be determined by the thermodynamics of the polymerization. The Gibbs free energy (ΔGp) of the polymerization is commonly used to quantify the tendency of a polymeric reaction. The polymerization will be favored if ΔGp < 0; if ΔGp > 0, the polymer will undergo depolymerization. According to the thermodynamic equation ΔG = ΔH – TΔS, a negative enthalpy and an increasing entropy will shift the equilibrium towards polymerization. In general, polymerization is an exothermic process, i.e. the enthalpy change is negative, since addition of a monomer to the growing polymer chain involves the conversion of π bonds into σ bonds, or a ring-opening reaction that releases the ring strain in a cyclic monomer. Meanwhile, during polymerization, a large number of small molecules are joined together, losing rotational and translational degrees of freedom. As a result, the entropy decreases in the system, ΔSp < 0 for nearly all polymerization processes. Since depolymerization is almost always entropically favored, ΔHp must then be sufficiently negative to compensate for the unfavorable entropic term. Only then will polymerization be thermodynamically favored by the resulting negative ΔGp. In practice, polymerization is favored at low temperatures, where TΔSp is small, and depolymerization is favored at high temperatures, where TΔSp is large. As the temperature increases, ΔGp becomes less negative. At a certain temperature, the polymerization reaches equilibrium (rate of polymerization = rate of depolymerization); this temperature is called the ceiling temperature (Tc), at which ΔGp = 0. Stereochemistry The stereochemistry of polymerization is concerned with the difference in atom connectivity and spatial orientation in polymers that have the same chemical composition. Hermann Staudinger studied the stereoisomerism in chain polymerization of vinyl monomers in the late 1920s, and it took another two decades for the idea to be fully appreciated that each of the propagation steps in polymer growth could give rise to stereoisomerism. The major milestone in stereochemistry was established by Ziegler and Natta and their coworkers in the 1950s, as they developed metal-based catalysts to synthesize stereoregular polymers.
The stereochemistry of a polymer is of particular interest because the physical behavior of a polymer depends not only on the general chemical composition but also on more subtle differences in microstructure. Atactic polymers consist of a random arrangement of stereochemistry and are amorphous (noncrystalline), soft materials with lower physical strength. The corresponding isotactic (like substituents all on the same side) and syndiotactic (like substituents of alternate repeating units on the same side) polymers are usually obtained as highly crystalline materials. It is easier for the stereoregular polymers to pack into a crystal lattice since they are more ordered, and the resulting crystallinity leads to higher physical strength and increased solvent and chemical resistance, as well as differences in other properties that depend on crystallinity. The prime example of the industrial utility of stereoregular polymers is polypropene. Isotactic polypropene is a high-melting (165 °C), strong, crystalline polymer, which is used as both a plastic and a fiber. Atactic polypropene is an amorphous material with an oily to waxy soft appearance that finds use in asphalt blends and formulations for lubricants, sealants, and adhesives, but the volumes are minuscule compared to that of isotactic polypropene. When a monomer adds to a radical chain end, there are two factors to consider regarding its stereochemistry: 1) the interaction between the terminal chain carbon and the approaching monomer molecule and 2) the configuration of the penultimate repeating unit in the polymer chain. The terminal carbon atom has sp2 hybridization and is planar. Consider the polymerization of the monomer CH2=CXY. There are two ways that a monomer molecule can approach the terminal carbon: the mirror approach (with like substituents on the same side) or the non-mirror approach (like substituents on opposite sides). If free rotation does not occur before the next monomer adds, the mirror approach will always lead to an isotactic polymer and the non-mirror approach will always lead to a syndiotactic polymer (Figure 25). However, if interactions between the substituents of the penultimate repeating unit and the terminal carbon atom are significant, then conformational factors could cause the monomer to add to the polymer in a way that minimizes steric or electrostatic interaction (Figure 26). Reactivity Traditionally, the reactivity of monomers and radicals is assessed by means of copolymerization data. The Q–e scheme, the most widely used tool for the semi-quantitative prediction of monomer reactivity ratios, was first proposed by Alfrey and Price in 1947. The scheme takes into account the intrinsic thermodynamic stability and polar effects in the transition state. A given radical and a monomer are considered to have intrinsic reactivities Pi and Qj, respectively. The polar effect in the transition state, i.e. the supposed permanent electric charge carried by the entity (radical or molecule), is quantified by the factor e, which is a constant for a given monomer and has the same value for the radical derived from that specific monomer.
For addition of monomer 2 to a growing polymer chain whose active end is the radical of monomer 1, the rate constant, k12, is postulated to be related to the four relevant reactivity parameters by k12 = P1Q2·exp(–e1e2). The monomer reactivity ratio for the addition of monomers 1 and 2 to this chain is given by r1 = k11/k12 = (Q1/Q2)·exp(–e1(e1 – e2)). For the copolymerization of a given pair of monomers, the two experimental reactivity ratios r1 and r2 permit the evaluation of (Q1/Q2) and (e1 – e2). Values for each monomer can then be assigned relative to a reference monomer, usually chosen as styrene with the arbitrary values Q = 1.0 and e = –0.8. Applications Free radical polymerization has found applications including the manufacture of polystyrene, thermoplastic block copolymer elastomers, cardiovascular stents, chemical surfactants and lubricants. Block copolymers are used for a wide variety of applications including adhesives, footwear and toys. Academic research Free radical polymerization allows the functionalization of carbon nanotubes. CNTs' intrinsic electronic properties lead them to form large aggregates in solution, precluding useful applications. Adding small chemical groups to the walls of CNTs can eliminate this propensity and tune the response to the surrounding environment. The use of polymers instead of smaller molecules can modify CNT properties (and conversely, nanotubes can modify polymer mechanical and electronic properties). For example, researchers coated carbon nanotubes with polystyrene by first polymerizing polystyrene via chain radical polymerization and subsequently mixing it at 130 °C with carbon nanotubes to generate radicals and graft them onto the walls of carbon nanotubes (Figure 27). Chain growth polymerization ("grafting to") synthesizes a polymer with predetermined properties. Purification of the polymer can be used to obtain a more uniform length distribution before grafting. Conversely, "grafting from", with radical polymerization techniques such as atom transfer radical polymerization (ATRP) or nitroxide-mediated polymerization (NMP), allows rapid growth of high molecular weight polymers. Radical polymerization also aids the synthesis of nanocomposite hydrogels. These gels are made of water-swellable nano-scale clay (especially those classed as smectites) enveloped by a network polymer. Aqueous dispersions of clay are treated with an initiator and a catalyst and the organic monomer, generally an acrylamide. Polymers grow off the initiators that are in turn bound to the clay. Due to recombination and disproportionation reactions, growing polymer chains bind to one another, forming a strong, cross-linked network polymer, with clay particles acting as branching points for multiple polymer chain segments. Free radical polymerization used in this context allows the synthesis of polymers from a wide variety of substrates (the chemistries of suitable clays vary). Termination reactions unique to chain growth polymerization produce a material with flexibility, mechanical strength and biocompatibility.
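To make the Q–e scheme above concrete, here is a small Python sketch of mine (not from the article) that evaluates both reactivity ratios from tabulated Q and e values. The methyl methacrylate parameters used in the example are commonly quoted but should be verified against a data table before quantitative use:

import math

def q_e_reactivity_ratios(Q1, e1, Q2, e2):
    # Alfrey-Price Q-e sketch: r1 = (Q1/Q2)*exp(-e1*(e1 - e2)), and
    # symmetrically for r2. Styrene is the conventional reference,
    # assigned Q = 1.0 and e = -0.8.
    r1 = (Q1 / Q2) * math.exp(-e1 * (e1 - e2))
    r2 = (Q2 / Q1) * math.exp(-e2 * (e2 - e1))
    return r1, r2

# Styrene (monomer 1) with methyl methacrylate (monomer 2, assumed
# Q = 0.78, e = 0.40):
print(q_e_reactivity_ratios(1.0, -0.8, 0.78, 0.40))

With styrene as monomer 1, both ratios come out near 0.5, consistent with the well-known tendency of styrene and methyl methacrylate to alternate during copolymerization.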
Physical sciences
Organic reactions
Chemistry
1948637
https://en.wikipedia.org/wiki/Neuroplasticity
Neuroplasticity
Neuroplasticity, also known as neural plasticity or just plasticity, is the ability of neural networks in the brain to change through growth and reorganization. Neuroplasticity refers to the brain's ability to reorganize and rewire its neural connections, enabling it to adapt and function in ways that differ from its prior state. This process can occur in response to learning new skills, experiencing environmental changes, recovering from injuries, or adapting to sensory or cognitive deficits. Such adaptability highlights the dynamic and ever-evolving nature of the brain, even into adulthood. These changes range from individual neuron pathways making new connections, to systematic adjustments like cortical remapping or neural oscillation. Other forms of neuroplasticity include homologous area adaptation, cross-modal reassignment, map expansion, and compensatory masquerade. Examples of neuroplasticity include circuit and network changes that result from learning a new ability, information acquisition, environmental influences, pregnancy, caloric intake, practice/training, and psychological stress. Neuroplasticity was once thought by neuroscientists to manifest only during childhood, but research in the latter half of the 20th century showed that many aspects of the brain can be altered (or are "plastic") even through adulthood. Furthermore, starting from the primary stimulus-response sequence in simple reflexes, the organism's capacity to correctly detect alterations within itself and its context depends on the concrete nervous system architecture, which evolves in a particular way already during gestation. Adequate nervous system development forms us as human beings with all necessary cognitive functions. The physicochemical properties of the mother-fetus bio-system affect the neuroplasticity of the embryonic nervous system in their ecological context. However, the developing brain exhibits a higher degree of plasticity than the adult brain. Activity-dependent plasticity can have significant implications for healthy development, learning, memory, and recovery from brain damage. History Origin The term plasticity was first applied to behavior in 1890 by William James in The Principles of Psychology, where the term was used to describe "a structure weak enough to yield to an influence, but strong enough not to yield all at once". The first person to use the term neural plasticity appears to have been the Polish neuroscientist Jerzy Konorski. One of the first experiments providing evidence for neuroplasticity was conducted in 1793 by the Italian anatomist Michele Vicenzo Malacarne, who described experiments in which he paired animals, trained one of the pair extensively for years, and then dissected both. Malacarne discovered that the cerebellums of the trained animals were substantially larger than those of the untrained animals. However, while these findings were significant, they were eventually forgotten. In 1890, the idea that the brain and its function are not fixed throughout adulthood was proposed by William James in The Principles of Psychology, though the idea was largely neglected. Up until the 1970s, neuroscientists believed that the brain's structure and function were essentially fixed throughout adulthood. While the brain was commonly understood as a nonrenewable organ in the early 1900s, the pioneering neuroscientist Santiago Ramón y Cajal used the term neuronal plasticity to describe nonpathological changes in the structure of adult brains.
Based on his renowned neuron doctrine, Cajal first described the neuron as the fundamental unit of the nervous system, a description that later served as an essential foundation for the concept of neural plasticity. Many neuroscientists used the term plasticity to explain the regenerative capacity of the peripheral nervous system only. Cajal, however, used the term plasticity to reference his findings of degeneration and regeneration in the adult brain (a part of the central nervous system). This was controversial, with some like Walther Spielmeyer and Max Bielschowsky arguing that the CNS cannot produce new cells. The term has since been broadly applied. Research and discovery In 1923, Karl Lashley conducted experiments on rhesus monkeys that demonstrated changes in neuronal pathways, which he concluded were evidence of plasticity. Despite this, and other research that suggested plasticity, neuroscientists did not widely accept the idea of neuroplasticity. Inspired by work from Nicolas Rashevsky, in 1943 McCulloch and Pitts proposed the artificial neuron, with a learning rule whereby new synapses are produced when neurons fire simultaneously. The idea was later discussed extensively in The Organization of Behavior (Hebb, 1949) and is now known as Hebbian learning. In 1945, Justo Gonzalo concluded from his research on brain dynamics that, contrary to the activity of the projection areas, the "central" cortical mass (more or less equidistant from the visual, tactile and auditory projection areas) would be a "maneuvering mass", rather unspecific or multisensory, with capacity to increase neural excitability and re-organize the activity by means of plasticity properties. He gave, as a first example of adaptation, the ability to see upright with reversing glasses in the Stratton experiment, and especially several first-hand brain injury cases in which he observed dynamic and adaptive properties in their disorders, in particular in the inverted perception disorder [e.g., see pp 260–62 Vol. I (1945), p 696 Vol. II (1950)]. He stated that a sensory signal in a projection area would be only an inverted and constricted outline that would be magnified due to the increase in recruited cerebral mass, and re-inverted due to some effect of brain plasticity, in more central areas, following a spiral growth. Marian Diamond of the University of California, Berkeley, produced the first scientific evidence of anatomical brain plasticity, publishing her research in 1964. Other significant evidence was produced in the 1960s and after, notably from scientists including Paul Bach-y-Rita, Michael Merzenich along with Jon Kaas, as well as several others. In the 1960s, Paul Bach-y-Rita invented a device that was tested on a small number of people, and involved a person sitting in a chair, embedded in which were nubs that were made to vibrate in ways that translated images received in a camera, allowing a form of vision via sensory substitution. Studies in people recovering from stroke also provided support for neuroplasticity, as regions of the brain that remained healthy could sometimes take over, at least in part, functions that had been destroyed; Shepherd Ivory Franz did work in this area. Eleanor Maguire documented changes in hippocampal structure associated with acquiring the knowledge of London's layout in local taxi drivers. A redistribution of grey matter was indicated in London taxi drivers compared to controls.
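Returning to the Hebbian learning rule mentioned above: it can be illustrated in a few lines of code. This is a generic textbook sketch of mine, not from the source; the learning rate and the random firing patterns are arbitrary choices. Weights grow wherever presynaptic and postsynaptic activity coincide:

import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    # One step of the Hebbian rule: "cells that fire together wire
    # together". pre and post are activity vectors; the outer product
    # strengthens exactly the co-active connections.
    return weights + lr * np.outer(post, pre)

rng = np.random.default_rng(0)
w = np.zeros((4, 3))
for _ in range(100):
    pre = (rng.random(3) > 0.5).astype(float)   # presynaptic firing pattern
    post = (rng.random(4) > 0.5).astype(float)  # postsynaptic firing pattern
    w = hebbian_update(w, pre, post)
print(w)  # connections between frequently co-active pairs dominate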
Maguire's work on hippocampal plasticity not only interested scientists but also engaged the public and media worldwide. Michael Merzenich is a neuroscientist who has been one of the pioneers of neuroplasticity for over three decades. He has made some of "the most ambitious claims for the field – that brain exercises may be as useful as drugs to treat diseases as severe as schizophrenia – that plasticity exists from cradle to the grave, and that radical improvements in cognitive functioning – how we learn, think, perceive, and remember are possible even in the elderly." Merzenich's work was influenced by a crucial discovery made by David Hubel and Torsten Wiesel in their work with kittens. The experiment involved sewing one eye shut and recording the cortical brain maps. Hubel and Wiesel saw that the portion of the kitten's brain associated with the shut eye was not idle, as expected. Instead, it processed visual information from the open eye. It was "…as though the brain didn't want to waste any 'cortical real estate' and had found a way to rewire itself." This implied neuroplasticity during the critical period. However, Merzenich argued that neuroplasticity could occur beyond the critical period. His first encounter with adult plasticity came when he was engaged in a postdoctoral study with Clinton Woolsey. The experiment was based on observation of what occurred in the brain when one peripheral nerve was cut and subsequently regenerated. The two scientists micromapped the hand maps of monkey brains before and after cutting a peripheral nerve and sewing the ends together. Afterwards, the hand map in the brain that they expected to be jumbled was nearly normal. This was a substantial breakthrough. Merzenich asserted that, "If the brain map could normalize its structure in response to abnormal input, the prevailing view that we are born with a hardwired system had to be wrong. The brain had to be plastic." Merzenich received the 2016 Kavli Prize in Neuroscience "for the discovery of mechanisms that allow experience and neural activity to remodel brain function." Neurobiology There are different ideas and theories on what biological processes allow for neuroplasticity to occur. The core of this phenomenon is based upon synapses and how connections between them change based on neuron functioning. It is widely agreed that neuroplasticity takes on many forms, as it is the result of a variety of pathways. These pathways, mainly signaling cascades, allow for gene expression alterations that lead to neuronal changes, and thus neuroplasticity. There are a number of other factors that are thought to play a role in the biological processes underlying the changing of neural networks in the brain. Some of these factors include synapse regulation via phosphorylation, the role of inflammation and inflammatory cytokines, proteins such as Bcl-2 proteins and neurotrophins, and energy production via mitochondria. JT Wall and J Xu have traced the mechanisms underlying neuroplasticity. Re-organization is not cortically emergent, but occurs at every level in the processing hierarchy; this produces the map changes observed in the cerebral cortex. Types Christopher Shaw and Jill McEachern (eds), in "Toward a theory of Neuroplasticity", state that there is no all-inclusive theory that overarches different frameworks and systems in the study of neuroplasticity. However, researchers often describe neuroplasticity as "the ability to make adaptive changes related to the structure and function of the nervous system."
Correspondingly, two types of neuroplasticity are often discussed: structural neuroplasticity and functional neuroplasticity. Structural neuroplasticity Structural plasticity is often understood as the brain's ability to change its neuronal connections. New neurons are constantly produced and integrated into the central nervous system throughout the life span based on this type of neuroplasticity. Researchers nowadays use multiple cross-sectional imaging methods (e.g. magnetic resonance imaging (MRI) and computed tomography (CT)) to study the structural alterations of human brains. This type of neuroplasticity often studies the effect of various internal or external stimuli on the brain's anatomical reorganization. Changes in grey-matter proportion or in synaptic strength in the brain are considered examples of structural neuroplasticity. Structural neuroplasticity is currently the more heavily investigated of the two within neuroscience. Functional neuroplasticity Functional plasticity refers to the brain's ability to alter and adapt the functional properties of networks of neurons. It can occur in four known ways: homologous area adaptation, map expansion, cross-modal reassignment, and compensatory masquerade. Homologous area adaptation Homologous area adaptation is the assumption of a particular cognitive process by a homologous region in the opposite hemisphere. For instance, through homologous area adaptation a cognitive task is shifted from a damaged part of the brain to its homologous area on the opposite side of the brain. Homologous area adaptation is a type of functional neuroplasticity that usually occurs in children rather than adults. Map expansion In map expansion, cortical maps related to particular cognitive tasks expand due to frequent exposure to stimuli. Map expansion has been demonstrated experimentally; for example, changes in the functional connectivity of the brain were observed in individuals learning spatial routes. Cross-modal reassignment Cross-modal reassignment involves the reception of novel input signals by a brain region that has been stripped of its default input. Compensatory masquerade Functional plasticity through compensatory masquerade occurs when a different cognitive process is used for an already established cognitive task. Changes in the brain associated with functional neuroplasticity can occur in response to two different types of events: previous activity (activity-dependent plasticity), to acquire memory, or malfunction or damage of neurons (maladaptive plasticity), to compensate for a pathological event. In the latter case the functions from one part of the brain transfer to another part of the brain based on the demand to produce recovery of behavioral or physiological processes. Regarding physiological forms of activity-dependent plasticity, those involving synapses are referred to as synaptic plasticity. The strengthening or weakening of synapses that results in an increase or decrease of the firing rate of the neurons is called long-term potentiation (LTP) or long-term depression (LTD), respectively; both are considered examples of synaptic plasticity associated with memory. The cerebellum is a typical structure with combinations of LTP/LTD and redundancy within the circuitry, allowing plasticity at several sites.
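The bidirectional LTP/LTD behavior just described is often summarized with simple rate-based models. The sketch below uses one generic textbook form (a BCM-style rule, my choice of illustration rather than anything from the article; the threshold, learning rate, and activity values are arbitrary):

def bcm_update(w, pre, post, theta, lr=0.01):
    # BCM-style rule: the synapse is potentiated (LTP) when postsynaptic
    # activity exceeds the threshold theta, and depressed (LTD) below it.
    return w + lr * pre * post * (post - theta)

w = 0.5
for post in [0.2, 0.2, 1.5, 1.5]:  # low-activity then high-activity pairings
    w = bcm_update(w, pre=1.0, post=post, theta=1.0)
    print(round(w, 4))  # the weight first drifts down (LTD), then climbs (LTP)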
More recently it has become clearer that synaptic plasticity can be complemented by another form of activity-dependent plasticity involving the intrinsic excitability of neurons, which is referred to as intrinsic plasticity. This, as opposed to homeostatic plasticity, does not necessarily maintain the overall activity of a neuron within a network but contributes to encoding memories. Also, many studies have indicated functional neuroplasticity at the level of brain networks, where training alters the strength of functional connections, although a recent study argues that these observed changes should not be attributed directly to neuroplasticity, since they may be rooted in the systemic requirement of the brain network for reorganization. Applications and examples The adult brain is not entirely "hard-wired" with fixed neuronal circuits. There are many instances of cortical and subcortical rewiring of neuronal circuits in response to training as well as in response to injury. There is ample evidence for the active, experience-dependent re-organization of the synaptic networks of the brain involving multiple inter-related structures including the cerebral cortex. The specific details of how this process occurs at the molecular and ultrastructural levels are topics of active neuroscience research. The way experience can influence the synaptic organization of the brain is also the basis for a number of theories of brain function including the general theory of mind and neural Darwinism. The concept of neuroplasticity is also central to theories of memory and learning that are associated with experience-driven alteration of synaptic structure and function in studies of classical conditioning in invertebrate animal models such as Aplysia. There is evidence that neurogenesis (birth of brain cells) occurs in the adult rodent brain, and such changes can persist well into old age. The evidence for neurogenesis is mainly restricted to the hippocampus and olfactory bulb, but research has revealed that other parts of the brain, including the cerebellum, may be involved as well. However, the degree of rewiring induced by the integration of new neurons in the established circuits is not known, and such rewiring may well be functionally redundant. Treatment of brain damage A surprising consequence of neuroplasticity is that the brain activity associated with a given function can be transferred to a different location; this can result from normal experience and also occurs in the process of recovery from brain injury. Neuroplasticity is the fundamental issue that supports the scientific basis for treatment of acquired brain injury with goal-directed experiential therapeutic programs in the context of rehabilitation approaches to the functional consequences of the injury. Neuroplasticity is gaining popularity as a theory that, at least in part, explains improvements in functional outcomes with physical therapy post-stroke. Rehabilitation techniques that are supported by evidence suggesting cortical reorganization as the mechanism of change include constraint-induced movement therapy, functional electrical stimulation, treadmill training with body-weight support, and virtual reality therapy. Robot-assisted therapy is an emerging technique, which is also hypothesized to work by way of neuroplasticity, though there is currently insufficient evidence to determine the exact mechanisms of change when using this method.
One group has developed a treatment based on progesterone injections for brain-injured patients. "Administration of progesterone after traumatic brain injury (TBI) and stroke reduces edema, inflammation, and neuronal cell death, and enhances spatial reference memory and sensory-motor recovery." In a clinical trial, a group of severely injured patients had a 60% reduction in mortality after three days of progesterone injections. However, a study published in the New England Journal of Medicine in 2014 detailing the results of a multi-center NIH-funded phase III clinical trial of 882 patients found that treatment of acute traumatic brain injury with the hormone progesterone provides no significant benefit to patients when compared with placebo. Binocular vision For decades, researchers assumed that humans had to acquire binocular vision, in particular stereopsis, in early childhood or they would never gain it. In recent years, however, successful improvements in persons with amblyopia, convergence insufficiency or other stereo vision anomalies have become prime examples of neuroplasticity; binocular vision improvements and stereopsis recovery are now active areas of scientific and clinical research. Phantom limbs In the phenomenon of phantom limb sensation, a person continues to feel pain or sensation within a part of their body that has been amputated. This is strangely common, occurring in 60–80% of amputees. An explanation for this is based on the concept of neuroplasticity, as the cortical maps of the removed limbs are believed to have become engaged with the area around them in the postcentral gyrus. This results in activity within the surrounding area of the cortex being misinterpreted by the area of the cortex formerly responsible for the amputated limb. The relationship between phantom limb sensation and neuroplasticity is a complex one. In the early 1990s V.S. Ramachandran theorized that phantom limbs were the result of cortical remapping. However, in 1995 Herta Flor and her colleagues demonstrated that cortical remapping occurs only in patients who have phantom pain. Her research showed that phantom limb pain (rather than referred sensations) was the perceptual correlate of cortical reorganization. This phenomenon is sometimes referred to as maladaptive plasticity. In 2009, Lorimer Moseley and Peter Brugger carried out an experiment in which they encouraged arm amputee subjects to use visual imagery to contort their phantom limbs into impossible configurations. Four of the seven subjects succeeded in performing impossible movements of the phantom limb. This experiment suggests that the subjects had modified the neural representation of their phantom limbs and generated the motor commands needed to execute impossible movements in the absence of feedback from the body. Chronic pain Individuals who have chronic pain experience prolonged pain at sites that may have been previously injured, yet are otherwise currently healthy. This phenomenon is related to neuroplasticity due to a maladaptive reorganization of the nervous system, both peripherally and centrally. During the period of tissue damage, noxious stimuli and inflammation cause an elevation of nociceptive input from the periphery to the central nervous system. Prolonged nociception from the periphery then elicits a neuroplastic response at the cortical level to change its somatotopic organization for the painful site, inducing central sensitization.
For instance, individuals experiencing complex regional pain syndrome demonstrate a diminished cortical somatotopic representation of the hand contralaterally, as well as a decreased spacing between the cortical representations of the hand and the mouth. Additionally, chronic pain has been reported to significantly reduce the volume of grey matter in the brain globally, and more specifically at the prefrontal cortex and right thalamus. However, following treatment, these abnormalities in cortical reorganization and grey matter volume are resolved, as well as their symptoms. Similar results have been reported for phantom limb pain, chronic low back pain and carpal tunnel syndrome. Meditation A number of studies have linked meditation practice to differences in cortical thickness or density of gray matter. One of the most well-known studies to demonstrate this was led by Sara Lazar, from Harvard University, in 2000. Richard Davidson, a neuroscientist at the University of Wisconsin, has led experiments in collaboration with the Dalai Lama on effects of meditation on the brain. His results suggest that meditation may lead to change in the physical structure of brain regions associated with attention, anxiety, depression, fear, anger, and compassion as well as the ability of the body to heal itself. Artistic engagement and art therapy There is substantial evidence that artistic engagement in a therapeutic environment can create changes in neural network connections as well as increase cognitive flexibility. In one 2013 study, researchers found evidence that long-term, habitual artistic training (e.g. musical instrument practice, purposeful painting, etc.) can "macroscopically imprint a neural network system of spontaneous activity in which the related brain regions become functionally and topologically modularized in both domain-general and domain-specific manners". In simple terms, brains repeatedly exposed to artistic training over long periods develop adaptations to make such activity both easier and more likely to spontaneously occur. Some researchers and academics have suggested that artistic engagement has substantially altered the human brain throughout our evolutionary history. D.W. Zaidel, adjunct professor of behavioral neuroscience and contributor at VAGA, has written that "evolutionary theory links the symbolic nature of art to critical pivotal brain changes in Homo sapiens supporting increased development of language and hierarchical social grouping". Music therapy There is evidence that engaging in music-supported therapy can improve neuroplasticity in patients who are recovering from brain injuries. Music-supported therapy can be used for patients who are undergoing stroke rehabilitation; a one-month study of stroke patients participating in music-supported therapy showed a significant improvement in motor control of their affected hand. Another study examined grey-matter volume in adults developing brain atrophy and cognitive decline, finding that playing a musical instrument, such as the piano, or listening to music can increase grey-matter volume in areas such as the caudate nucleus, Rolandic operculum, and cerebellum. Evidence also suggests that music-supported therapy can improve cognitive performance, well-being, and social behavior in patients who are recovering from damage to the orbitofrontal cortex (OFC) and recovering from mild traumatic brain injury.
Neuroimaging after music-supported therapy revealed functional changes in OFC networks, with improvements observed in both task-based and resting-state fMRI analyses. Fitness and exercise Aerobic exercise increases the production of neurotrophic factors (compounds that promote growth or survival of neurons), such as brain-derived neurotrophic factor (BDNF), insulin-like growth factor 1 (IGF-1), and vascular endothelial growth factor (VEGF). Exercise-induced effects on the hippocampus are associated with measurable improvements in spatial memory. Consistent aerobic exercise over a period of several months induces marked clinically significant improvements in executive function (i.e., the "cognitive control" of behavior) and increased gray matter volume in multiple brain regions, particularly those that give rise to cognitive control. The brain structures that show the greatest improvements in gray matter volume in response to aerobic exercise are the prefrontal cortex and hippocampus; moderate improvements are seen in the anterior cingulate cortex, parietal cortex, cerebellum, caudate nucleus, and nucleus accumbens. Higher physical fitness scores (measured by VO2 max) are associated with better executive function, faster processing speed, and greater volume of the hippocampus, caudate nucleus, and nucleus accumbens. Deafness and loss of hearing Due to hearing loss, the auditory cortex and other association areas of the brain in deaf and/or hard-of-hearing people undergo compensatory plasticity. The auditory cortex, usually reserved for processing auditory information in hearing people, is redirected to serve other functions, especially for vision and somatosensation. Deaf individuals have enhanced peripheral visual attention, better detection of motion change (but not color change) in visual tasks, more effective visual search, and faster response times for visual targets compared to hearing individuals. Altered visual processing in deaf people is often found to be associated with the repurposing of other brain areas including the primary auditory cortex, posterior parietal association cortex (PPAC), and anterior cingulate cortex (ACC). A review by Bavelier et al. (2006) summarizes many aspects of the comparison of visual abilities between deaf and hearing individuals. Brain areas that serve a function in auditory processing are repurposed to process somatosensory information in congenitally deaf people. They have higher sensitivity in detecting frequency changes in vibration above threshold and higher and more widespread activation in the auditory cortex under somatosensory stimulation. However, a speeded response to somatosensory stimuli is not found in deaf adults. Cochlear implant Neuroplasticity is involved in the development of sensory function. The brain is born immature and then adapts to sensory inputs after birth. In the auditory system, congenital hearing loss, a rather frequent inborn condition affecting 1 in 1000 newborns, has been shown to affect auditory development, and implantation of a sensory prosthesis activating the auditory system has prevented the deficits and induced functional maturation of the auditory system. Due to a sensitive period for plasticity, there is also a sensitive period for such intervention within the first 2–4 years of life. Consequently, in prelingually deaf children, early cochlear implantation, as a rule, allows the children to learn their native language and acquire acoustic communication.
Blindness Due to vision loss, the visual cortex in blind people may undergo cross-modal plasticity, and therefore other senses may have enhanced abilities. Or the opposite could occur, with the lack of visual input weakening the development of other sensory systems. One study suggests that the right posterior middle temporal gyrus and superior occipital gyrus reveal more activation in blind than in sighted people during a sound-motion detection task. Several studies support the latter idea and found weakened ability in audio distance evaluation, proprioceptive reproduction, threshold for visual bisection, and judging minimum audible angle. Human echolocation Human echolocation is a learned ability for humans to sense their environment from echoes. This ability is used by some blind people to navigate their environment and sense their surroundings in detail. Studies in 2010 and 2011 using functional magnetic resonance imaging techniques have shown that parts of the brain associated with visual processing are adapted for the new skill of echolocation. Studies with blind patients, for example, suggest that the click-echoes heard by these patients were processed by brain regions devoted to vision rather than audition. Attention deficit hyperactivity disorder Reviews of MRI and electroencephalography (EEG) studies on individuals with ADHD suggest that the long-term treatment of ADHD with stimulants, such as amphetamine or methylphenidate, decreases abnormalities in brain structure and function found in subjects with ADHD and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia, left ventrolateral prefrontal cortex (VLPFC), and superior temporal gyrus. In early child development Neuroplasticity is most active in childhood as a part of normal human development, and can also be seen as an especially important mechanism for children in terms of risk and resiliency. Trauma is considered a major risk, as it negatively affects many areas of the brain and puts a strain on the sympathetic nervous system from constant activation. Trauma thus alters the brain's connections such that children who have experienced trauma may be hypervigilant or overly aroused. However, a child's brain can cope with these adverse effects through the actions of neuroplasticity. Neuroplasticity is shown in four different categories in children, covering a wide variety of neuronal functions. These four types are impaired plasticity, excessive plasticity, adaptive plasticity, and plasticity that makes the brain vulnerable to injury. There are many examples of neuroplasticity in human development. For example, Justine Ker and Stephen Nelson looked at the effects of musical training on neuroplasticity, and found that musical training can contribute to experience-dependent structural plasticity. This is when changes in the brain occur based on experiences that are unique to an individual. Examples of this are learning multiple languages, playing a sport, doing theatre, etc. A study done by Hyde in 2009 showed that changes in the brain of children could be seen in as little as 15 months of musical training. Ker and Nelson suggest this degree of plasticity in the brains of children can "help provide a form of intervention for children... with developmental disorders and neurological diseases." In animals In a single lifespan, individuals of an animal species may encounter various changes in brain morphology.
Many of these differences are caused by the release of hormones in the brain; others are the product of evolutionary factors or developmental stages. Some changes occur seasonally in species to enhance or generate response behaviors. Seasonal brain changes Changing brain behavior and morphology to suit other seasonal behaviors is relatively common in animals. These changes can improve the chances of mating during breeding season. Examples of seasonal brain morphology change can be found within many classes and species. Within the class Aves, black-capped chickadees experience an increase in the volume of their hippocampus and strength of neural connections to the hippocampus during fall months. These morphological changes within the hippocampus, which are related to spatial memory, are not limited to birds, as they can also be observed in rodents and amphibians. In songbirds, many song control nuclei in the brain increase in size during mating season. Among birds, changes in brain morphology to influence song patterns, frequency, and volume are common. Gonadotropin-releasing hormone (GnRH) immunoreactivity, or the reception of the hormone, is lowered in European starlings exposed to longer periods of light during the day. The California sea hare, a gastropod, has more successful inhibition of egg-laying hormones outside of mating season due to increased effectiveness of inhibitors in the brain. Changes to the inhibitory nature of regions of the brain can also be found in humans and other mammals. In the amphibian Bufo japonicus, part of the amygdala is larger before breeding and during hibernation than it is after breeding. Seasonal brain variation occurs within many mammals. Part of the hypothalamus of the common ewe is more receptive to GnRH during breeding season than at other times of the year. Humans experience a change in the "size of the hypothalamic suprachiasmatic nucleus and vasopressin-immunoreactive neurons within it" during the fall, when these parts are larger. In the spring, both reduce in size. Traumatic brain injury research A group of scientists found that if a small stroke (an infarction) is induced by obstruction of blood flow to a portion of a monkey's motor cortex, the body part that formerly responded to stimulation of the damaged area comes to move when areas adjacent to the damaged brain area are stimulated. In one study, intracortical microstimulation (ICMS) mapping techniques were used in nine normal monkeys. Some underwent ischemic-infarction procedures and the others, ICMS procedures. The monkeys with ischemic infarctions retained more finger flexion during food retrieval and after several months this deficit returned to preoperative levels. With respect to the distal forelimb representation, "postinfarction mapping procedures revealed that movement representations underwent reorganization throughout the adjacent, undamaged cortex." Understanding of the interaction between the damaged and undamaged areas provides a basis for better treatment plans in stroke patients. Current research includes the tracking of changes that occur in the motor areas of the cerebral cortex as a result of a stroke. Thus, events that occur in the reorganization process of the brain can be ascertained. Treatment plans that may enhance recovery from strokes, such as physiotherapy, pharmacotherapy, and electrical-stimulation therapy, are also being studied.
Jon Kaas, a professor at Vanderbilt University, has been able to show "how somatosensory area 3b and ventroposterior (VP) nucleus of the thalamus are affected by longstanding unilateral dorsal-column lesions at cervical levels in macaque monkeys." Adult brains have the ability to change as a result of injury, but the extent of the reorganization depends on the extent of the injury. His recent research focuses on the somatosensory system, which underlies the sense of the body and its movements and draws on multiple senses. Usually, damage to the somatosensory cortex results in impairment of body perception. Kaas' research project is focused on how these systems (somatosensory, cognitive, motor systems) respond with plastic changes resulting from injury. One recent study of neuroplasticity involves work done by a team of doctors and researchers at Emory University, specifically Donald Stein and David Wright. This was reported as the first treatment in 40 years to have significant results in treating traumatic brain injuries, while also incurring no known side effects and being cheap to administer. Stein noticed that female mice seemed to recover from brain injuries better than male mice, and that at certain points in the estrus cycle, females recovered even better. This difference may be attributed to different levels of progesterone, with higher levels of progesterone leading to faster recovery from brain injury in mice. However, clinical trials showed that progesterone offers no significant benefit for traumatic brain injury in human patients. Aging Transcriptional profiling of the frontal cortex of persons ranging from 26 to 106 years of age defined a set of genes with reduced expression after age 40, and especially after age 70. Genes that play central roles in synaptic plasticity were the most significantly affected by age, generally showing reduced expression over time. There was also a marked increase in cortical DNA damage, likely oxidative DNA damage, in gene promoters with aging. Reactive oxygen species appear to have a significant role in the regulation of synaptic plasticity and cognitive function. However, age-related increases in reactive oxygen species may also lead to impairments in these functions. Multilingualism There is a beneficial effect of multilingualism on people's behavior and cognition. Numerous studies have shown that people who study more than one language have better cognitive functions and flexibility than people who speak only one language. Bilinguals are found to have longer attention spans, stronger organizational and analytical skills, and a better theory of mind than monolinguals. Researchers have found that the effect of multilingualism on better cognition is due to neuroplasticity. In one prominent study, neurolinguists used a voxel-based morphometry (VBM) method to visualize the structural plasticity of brains in healthy monolinguals and bilinguals. They first investigated the differences in density of grey and white matter between the two groups and found a relationship between brain structure and age of language acquisition. The results showed that grey-matter density in the inferior parietal cortex was significantly greater in multilinguals than in monolinguals. The researchers also found that early bilinguals had a greater density of grey matter relative to late bilinguals in the same region. The inferior parietal cortex is a brain region highly associated with language learning, which corresponds to the VBM result of the study.
Recent studies have also found that learning multiple languages not only re-structures the brain but also boosts the brain's capacity for plasticity. A recent study found that multilingualism affects not only the grey matter but also the white matter of the brain. White matter is made up of myelinated axons and is strongly associated with learning and communication. Neurolinguists used a diffusion tensor imaging (DTI) scanning method to determine the white-matter intensity between monolinguals and bilinguals. Increased myelination in white-matter tracts was found in bilingual individuals who actively used both languages in everyday life. The demand of handling more than one language requires more efficient connectivity within the brain, which results in greater white-matter density for multilinguals. While it is still debated whether these changes in the brain are the result of genetic predisposition or environmental demands, much evidence suggests that the environmental and social experiences of early multilinguals affect the structural and functional reorganization of the brain. Novel treatments of depression Historically, the monoamine imbalance hypothesis of depression played a dominant role in psychiatry and drug development. However, while traditional antidepressants cause a quick increase in noradrenaline, serotonin, or dopamine, there is a significant delay in their clinical effect and often an inadequate treatment response. As neuroscientists pursued this avenue of research, clinical and preclinical data across multiple modalities began to converge on pathways involved in neuroplasticity. They found a strong inverse relationship between the number of synapses and the severity of depression symptoms, and discovered that, in addition to their neurotransmitter effect, traditional antidepressants improved neuroplasticity, but over a significantly protracted time course of weeks or months. The search for faster-acting antidepressants found success in the pursuit of ketamine, a well-known anesthetic agent, which was found to have potent antidepressant effects after a single infusion due to its capacity to rapidly increase the number of dendritic spines and to restore aspects of functional connectivity. Additional neuroplasticity-promoting compounds with therapeutic effects that were both rapid and enduring have been identified among several classes of compounds, including the serotonergic psychedelics, the muscarinic antagonist scopolamine, and other novel compounds. To differentiate between traditional antidepressants focused on monoamine modulation and this new category of fast-acting antidepressants that achieve therapeutic effects through neuroplasticity, the term psychoplastogen was introduced.
Biology and health sciences
Biology basics
Biology
1949009
https://en.wikipedia.org/wiki/Heat%20capacity%20ratio
Heat capacity ratio
In thermal physics and thermodynamics, the heat capacity ratio, also known as the adiabatic index, the ratio of specific heats, or Laplace's coefficient, is the ratio of the heat capacity at constant pressure (CP) to the heat capacity at constant volume (CV). It is sometimes also known as the isentropic expansion factor and is denoted by γ (gamma) for an ideal gas or κ (kappa), the isentropic exponent, for a real gas. The symbol γ is used by aerospace and chemical engineers: γ = CP/CV = cP/cV, where C is the heat capacity and c is either the molar heat capacity (heat capacity per mole) or the specific heat capacity (heat capacity per unit mass) of a gas; the ratio is the same whichever measure is used. The suffixes P and V refer to constant-pressure and constant-volume conditions respectively. The heat capacity ratio is important for its applications in thermodynamical reversible processes, especially involving ideal gases; the speed of sound depends on this factor. Thought experiment To understand this relation, consider the following thought experiment. A closed pneumatic cylinder contains air. The piston is locked. The pressure inside is equal to atmospheric pressure. This cylinder is heated to a certain target temperature. Since the piston cannot move, the volume is constant. The temperature and pressure will rise. When the target temperature is reached, the heating is stopped. The amount of energy added equals CV·ΔT, with ΔT representing the change in temperature. The piston is now freed and moves outwards, stopping as the pressure inside the chamber reaches atmospheric pressure. We assume the expansion occurs without exchange of heat (adiabatic expansion). Doing this work, the air inside the cylinder will cool to below the target temperature. To return to the target temperature (still with a free piston), the air must be heated, but is no longer under constant volume, since the piston is free to move as the gas is reheated. This extra heat amounts to about 40% more than the previous amount added. In this example, the amount of heat added with a locked piston is proportional to CV, whereas the total amount of heat added is proportional to CP. Therefore, the heat capacity ratio in this example is 1.4. Another way of understanding the difference between CP and CV is that CP applies if work is done to the system, which causes a change in volume (such as by moving a piston so as to compress the contents of a cylinder), or if work is done by the system, which changes its temperature (such as heating the gas in a cylinder to cause a piston to move). CV applies only if no work is done, that is, if the volume does not change. Consider the difference between adding heat to the gas with a locked piston and adding heat with a piston free to move, so that pressure remains constant. In the second case, the gas will both heat and expand, causing the piston to do mechanical work on the atmosphere. The heat that is added to the gas goes only partly into heating the gas, while the rest is transformed into the mechanical work performed by the piston. In the first, constant-volume case (locked piston), there is no external motion, and thus no mechanical work is done on the atmosphere; CV is used. In the second case, additional work is done as the volume changes, so the amount of heat required to raise the gas temperature (the specific heat capacity) is higher for this constant-pressure case. Ideal-gas relations For an ideal gas, the molar heat capacity is at most a function of temperature, since the internal energy is solely a function of temperature for a closed system, i.e. U = U(n, T), where n is the amount of substance in moles.
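As a numerical check on the thought experiment above (a sketch of mine; the mole count and temperature step are arbitrary), the ideal-gas heat capacities reproduce the roughly 40% figure and hence γ = 1.4 for a diatomic gas:

R = 8.314  # J/(mol K), gas constant

def heats_for_temperature_rise(n_mol, dT, dof=5):
    # Heat needed to raise an ideal gas by dT at constant volume versus
    # constant pressure. dof = 5 corresponds to a diatomic gas such as
    # the nitrogen and oxygen that dominate air.
    c_v = dof / 2 * R
    c_p = c_v + R          # CP = CV + nR per mole (Mayer's relation)
    return n_mol * c_v * dT, n_mol * c_p * dT

q_v, q_p = heats_for_temperature_rise(1.0, 10.0)
print(q_v, q_p, q_p / q_v)  # ratio = 1.4: the extra ~40% described in the text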
Ideal-gas relations

For an ideal gas, the molar heat capacity is at most a function of temperature, since the internal energy is solely a function of temperature for a closed system, i.e., U = U(n, T), where n is the amount of substance in moles. In thermodynamic terms, this is a consequence of the fact that the internal pressure of an ideal gas vanishes. Mayer's relation allows us to deduce the value of C_V from the more easily measured (and more commonly tabulated) value of C_P:

C_V = C_P − nR.

This relation may be used to show that the heat capacities may be expressed in terms of the heat capacity ratio (γ) and the gas constant (R):

C_P = γnR/(γ − 1) and C_V = nR/(γ − 1).

Relation with degrees of freedom

The classical equipartition theorem predicts that the heat capacity ratio (γ) for an ideal gas can be related to the thermally accessible degrees of freedom (f) of a molecule by

γ = 1 + 2/f.

Thus we observe that for a monatomic gas, with 3 translational degrees of freedom per atom, γ = 5/3 ≈ 1.6667. As an example of this behavior, at 273 K (0 °C) the noble gases He, Ne, and Ar all have nearly the same value of γ, equal to 1.664.

For a diatomic gas, often 5 degrees of freedom are assumed to contribute at room temperature, since each molecule has 3 translational and 2 rotational degrees of freedom; the single vibrational degree of freedom is often not included, since vibrations are often not thermally active except at high temperatures, as predicted by quantum statistical mechanics. Thus we have γ = 7/5 = 1.4. For example, terrestrial air is primarily made up of diatomic gases (around 78% nitrogen, N2, and 21% oxygen, O2), and at standard conditions it can be considered to be an ideal gas. The value of 1.4 is highly consistent with the measured adiabatic indices for dry air within a temperature range of 0–200 °C, exhibiting a deviation of only 0.2%.

For a linear triatomic molecule such as CO2, there are only 5 degrees of freedom (3 translations and 2 rotations), assuming vibrational modes are not excited. However, as mass increases and the frequency of vibrational modes decreases, vibrational degrees of freedom start to enter into the equation at far lower temperatures than is typically the case for diatomic molecules. For example, it requires a far larger temperature to excite the single vibrational mode of H2, for which one quantum of vibration is a fairly large amount of energy, than the bending or stretching vibrations of CO2.

For a non-linear triatomic gas, such as water vapor, which has 3 translational and 3 rotational degrees of freedom, this model predicts γ = 8/6 = 4/3 ≈ 1.33.
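The γ = 1 + 2/f rule is easy to check numerically. A small Python sketch, with the molecule groupings as illustrative examples matching the cases discussed above:

```python
# Equipartition prediction gamma = 1 + 2/f for thermally accessible
# degrees of freedom f (vibrations assumed frozen out).
dof = {
    "monatomic (He, Ne, Ar)": 3,        # translation only
    "diatomic (N2, O2)": 5,             # + 2 rotations
    "linear triatomic (CO2)": 5,        # rotation about the axis inert
    "non-linear triatomic (H2O)": 6,    # + 3 rotations
}

for kind, f in dof.items():
    print(f"{kind}: f = {f}, gamma = {1 + 2 / f:.4f}")
# -> 1.6667, 1.4000, 1.4000, 1.3333, matching the values in the text
```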
Real-gas relations

As noted above, as temperature increases, higher-energy vibrational states become accessible to molecular gases, thus increasing the number of degrees of freedom and lowering γ. Conversely, as the temperature is lowered, rotational degrees of freedom may become unequally partitioned as well. As a result, both C_P and C_V increase with increasing temperature. Despite this, if the density is fairly low and intermolecular forces are negligible, the two heat capacities may still continue to differ from each other by the fixed constant nR, as above, which reflects the relatively constant difference in work done during expansion for constant-pressure versus constant-volume conditions. Thus, the ratio of the two values, γ, decreases with increasing temperature. However, when the gas density is sufficiently high and intermolecular forces are important, thermodynamic expressions may sometimes be used to accurately describe the relationship between the two heat capacities, as explained below. Unfortunately, the situation can become considerably more complex if the temperature is sufficiently high for molecules to dissociate or carry out other chemical reactions, in which case thermodynamic expressions arising from simple equations of state may not be adequate.

Thermodynamic expressions

Values based on approximations (particularly C_P − C_V = nR) are in many cases not sufficiently accurate for practical engineering calculations, such as flow rates through pipes and valves at moderate to high pressures. An experimental value should be used rather than one based on this approximation, where possible. A rigorous value for the ratio C_P/C_V can also be calculated by determining C_V from the residual properties, expressed as

C_P − C_V = −T (∂P/∂T)_V² (∂V/∂P)_T.

Values for C_P are readily available and recorded, but values for C_V need to be determined via relations such as these. See relations between specific heats for the derivation of the thermodynamic relations between the heat capacities. The above definition is the approach used to develop rigorous expressions from equations of state (such as Peng–Robinson), which match experimental values so closely that there is little need to develop a database of ratios or C_V values. Values can also be determined through finite-difference approximation.

Adiabatic process

This ratio gives the important relation for an isentropic (quasistatic, reversible, adiabatic) process of a simple compressible, calorically perfect ideal gas:

PV^γ is constant.

Using the ideal gas law, PV = nRT:

P^(1−γ) T^γ is constant, and
TV^(γ−1) is constant,

where P is the pressure of the gas, V is the volume, and T is the thermodynamic temperature. In gas dynamics we are interested in the local relations between pressure, density and temperature, rather than considering a fixed quantity of gas. By considering the density as the inverse of the volume for a unit mass, we can take ρ = 1/V in these relations. Since for constant entropy S we have P ∝ ρ^γ, or ln P = γ ln ρ + constant, it follows that

γ = (∂ ln P / ∂ ln ρ)_S.

For an imperfect or non-ideal gas, Chandrasekhar defined three different adiabatic indices so that the adiabatic relations can be written in the same form as above; these are used in the theory of stellar structure:

Γ1 = (∂ ln P / ∂ ln ρ)_S,
Γ2/(Γ2 − 1) = (∂ ln P / ∂ ln T)_S,
Γ3 − 1 = (∂ ln T / ∂ ln ρ)_S.

All of these are equal to γ in the case of an ideal gas.
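As a quick numeric illustration of the isentropic relations above, the following Python sketch computes the temperature and pressure rise from an adiabatic compression of dry air (γ = 1.4; the compression ratio is an illustrative value):

```python
# Adiabatic compression of an ideal gas: T*V**(gamma-1) and P*V**gamma
# are constant along the process.
gamma = 1.4      # dry air near room temperature
T1 = 300.0       # initial temperature, K
r = 10.0         # compression ratio V1/V2 (illustrative)

T2 = T1 * r ** (gamma - 1)   # from T1*V1**(gamma-1) = T2*V2**(gamma-1)
p_ratio = r ** gamma         # from P1*V1**gamma = P2*V2**gamma

print(f"T2 = {T2:.0f} K, P2/P1 = {p_ratio:.1f}")   # ~754 K, ~25.1
```

This is why rapid compression heats a gas noticeably, an effect exploited in diesel engines.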
Physical sciences
Thermodynamics
Physics
1950552
https://en.wikipedia.org/wiki/Wood%20frog
Wood frog
Lithobates sylvaticus or Rana sylvatica, commonly known as the wood frog, is a frog species with a broad distribution over North America, extending from the boreal forest of the north to the southern Appalachians, with several notable disjunct populations, including lowland eastern North Carolina. The wood frog has garnered attention from biologists because of its freeze tolerance, relatively great degree of terrestrialism (for a ranid), interesting habitat associations (peat bogs, vernal pools, uplands), and relatively long-range movements. The ecology and conservation of the wood frog have attracted research attention in recent years because wood frogs are often considered "obligate" breeders in ephemeral wetlands (sometimes called "vernal pools"), which are themselves more imperiled than the species that breed in them. The wood frog has been proposed as the official state amphibian of New York.

Description

Wood frogs range from about 51 to 70 mm (2.0 to 2.8 in) in length. Females are larger than males. Adult wood frogs are usually brown, tan, or rust-colored, and usually have a dark eye mask. Individual frogs are capable of varying their color; Conant (1958) depicts one individual that was light brown and dark brown at different times. The underparts of wood frogs are pale with a yellow or green cast; in northern populations, the belly may be faintly mottled. Their body colour may change seasonally; exposure to sunlight causes darkening.

Geographic range

The contiguous wood frog range extends from northern Georgia and northeastern Canada in the east to Alaska and southern British Columbia in the west. Wood frogs range throughout the boreal forests of Canada, and the wood frog is the most widely distributed frog in Alaska. It is also found in the Medicine Bow National Forest.

Habitat

Wood frogs are forest-dwelling organisms that breed primarily in ephemeral, freshwater wetlands: woodland vernal pools. They are nonarboreal and spend most of their time on the forest floor. Long-distance migration plays an important role in their life history. Individual wood frogs range widely (hundreds of metres) among their breeding pools and neighboring freshwater swamps, cool-moist ravines, and/or upland habitats. Genetic neighborhoods of individual pool-breeding populations extend more than a kilometre away from the breeding site. Thus, conservation of this species requires a landscape perspective (multiple habitats at appropriate spatial scales). They also can be camouflaged against their surroundings. A study of wood frog dispersal patterns across five ponds in the Appalachian Mountains reported that adult wood frogs were 100% faithful to the pond of their first breeding, while 18% of juveniles dispersed to breed in other ponds. Adult wood frogs spend summer months in moist woodlands, forested swamps, ravines, or bogs. During the fall, they leave summer habitats and migrate to neighboring uplands to overwinter. Some may remain in moist areas to overwinter. Hibernacula tend to be in the upper organic layers of the soil, under leaf litter. By overwintering in uplands adjacent to breeding pools, adults ensure a short migration to thawed pools in early spring. Wood frogs are mostly diurnal and are rarely seen at night, except perhaps in breeding choruses. They are among the first amphibians to emerge for breeding right when the snow melts, along with spring peepers.

Feeding

Wood frogs eat a variety of small, forest-floor invertebrates, with a diet consisting primarily of insects.
The tadpoles are omnivorous, feeding on plant detritus and algae along with tadpoles of their own and other species. The feeding pattern of the wood frog is similar to that of other ranids. It is triggered by prey movement and consists of a bodily lunge that terminates with the mouth opening and an extension of the tongue onto the prey. The ranid tongue is attached to the floor of the mouth near the tip of the jaw, and when the mouth is closed, the tongue lies flat, extended posteriorly from its point of attachment. In the feeding strike, the tongue is swung forward as though on a hinge, so some portion of the normally dorsal and posterior tongue surface makes contact with the prey. At this point in the feeding strike, the wood frog differs markedly from more aquatic Lithobates species, such as the green frog, leopard frog, and bullfrog. The wood frog contacts the prey with just the tip of its tongue, much like a toad, whereas these other frog species apply a more extensive amount of tongue surface in their feeding strikes, with the result that the prey is usually engulfed by the fleshy tongue and considerable tongue surface contacts the surrounding substrate.

Cold tolerance

Similar to other northern frogs that enter dormancy close to the surface in soil and/or leaf litter, wood frogs can tolerate the freezing of their blood and other tissues. Urea is accumulated in tissues in preparation for overwintering, and liver glycogen is converted in large quantities to glucose in response to internal ice formation. Both urea and glucose act as cryoprotectants to limit the amount of ice that forms and to reduce osmotic shrinkage of cells. Frogs found in southern Canada and the American Midwest can tolerate freezing at body temperatures several degrees below zero Celsius. However, wood frogs in Interior Alaska exhibit even greater tolerance, surviving considerably lower temperatures with much of their body water frozen. When frozen, wood frogs have no detectable vital signs: no heartbeat, breathing, blood circulation, muscle movement, or detectable brain activity. Wood frogs in natural hibernation have remained frozen for 193 ± 11 consecutive days, at subzero average and minimum (October–May) temperatures. The wood frog has evolved various physiological adaptations that allow it to tolerate the freezing of 65–70% of its total body water. When water freezes, ice crystals form in cells and break up their structure, so that when the ice thaws the cells are damaged. Frozen frogs must also endure the interruption of oxygen delivery to their tissues, as well as strong dehydration and shrinkage of their cells as water is drawn out of cells to freeze. The wood frog has evolved traits that prevent its cells from being damaged when frozen and thawed, allowing it to effectively withstand prolonged ischemia/anoxia and extreme cellular dehydration. One crucial mechanism is the accumulation of high amounts of glucose, which acts as a cryoprotectant; frogs can survive many freeze/thaw events during winter if no more than about 65–70% of the total body water freezes. Wood frogs also have a series of seven amino acid substitutions in the ATP binding site of the sarco/endoplasmic reticulum Ca2+-ATPase 1 (SERCA 1) enzyme that allow this pump to function at lower temperatures relative to less cold-tolerant species (e.g. Lithobates clamitans).
Studies on northern subpopulations found that Alaskan wood frogs had larger liver glycogen reserves and greater urea production compared to those in more temperate parts of the range. These conspecifics also showed higher glycogen phosphorylase enzymatic activity, which facilitates their adaptation to freezing. The phenomenon of cold resistance is observed in other anuran species; the Japanese tree frog shows even greater cold tolerance than the wood frog, surviving subzero temperatures for up to 120 days.

Reproduction

L. sylvaticus primarily breeds in ephemeral pools rather than permanent water bodies such as ponds or lakes. This is believed to provide some protection for the adult frogs and their offspring (eggs and tadpoles) from predation by fish and other predators of permanent water bodies. Adult wood frogs typically hibernate within 65 meters of breeding pools. They emerge from hibernation in early spring and migrate to the nearby pools. There, males chorus, emitting duck-like quacking sounds. Wood frogs are considered explosive breeders; many populations conduct all mating in the span of a week. Males actively search for mates by swimming around the pool and calling. Females, on the other hand, stay under the water and rarely surface, most likely to avoid sexual harassment. A male approaches a female and clasps her behind her forearms, hooking his thumbs together in a hold called "amplexus", which is continued until the female deposits the eggs. Females deposit eggs attached to submerged substrate, typically vegetation or downed branches. Most commonly, females deposit eggs adjacent to other egg masses, creating large aggregations of masses. Some advantage is conferred on the pairs first to breed, as clutches closer to the center of the raft absorb heat and develop faster than those on the periphery, and have more protection from predators. If pools dry before tadpoles metamorphose into froglets, the tadpoles die; this is the risk counterbalancing the antipredator protection of ephemeral pools. By breeding in early spring, however, wood frogs increase their offspring's chances of metamorphosing before pools dry. The larvae undergo two stages of development: fertilization to free-living tadpoles, and free-living tadpoles to juvenile frogs. During the first stage, the larvae are adapted for rapid development, and their growth depends on the temperature of the water. Variable larval survival is a major contributor to fluctuations in wood frog population size from year to year. The second stage of development features rapid development and growth, and depends on environmental factors including food availability, temperature, and population density. Some studies suggest that road salts, as used in road de-icing, may have toxic effects on wood frog larvae. A study that exposed wood frog tadpoles to NaCl found that the tadpoles experienced reduced activity and weight, and even displayed physical abnormalities; survivorship was also significantly lower, and time to metamorphosis decreased, with increasing salt concentration. De-icing agents may therefore pose a serious conservation concern for wood frog larvae. Another study found increased tolerance to salt at higher concentrations, though the authors caution against over-extrapolating from short-term, high-concentration studies to longer-term, lower-concentration conditions, as contradictory outcomes occur.
Following metamorphosis, a small percentage (less than 20%) of juveniles disperse, permanently leaving the vicinity of their natal pools. The majority of offspring are philopatric, returning to their natal pool to breed. Most frogs breed only once in their lives, although some breed two or three times, generally with differences according to age. The success of the larvae and tadpoles is important in wood frog populations because it affects the gene flow and genetic variation of the following generations.

Conservation status

Although the wood frog is not endangered or threatened, urbanization is fragmenting populations in many parts of its range. Several studies have shown that, below certain thresholds of forest cover or above certain thresholds of road density, wood frogs and other common amphibians begin to "drop out" of formerly occupied habitats. Another conservation concern is that wood frogs are primarily dependent on smaller, "geographically isolated" wetlands for breeding. At least in the United States, these wetlands are largely unprotected by federal law, leaving it up to the states to tackle the problem of conserving pool-breeding amphibians. The wood frog has a complex lifecycle that depends on multiple habitats: damp lowlands and adjacent woodlands. Its habitat conservation is, therefore, complex, requiring integrated, landscape-scale preservation. Wood frog development in the tadpole stage is known to be negatively affected by road salt contaminating freshwater ecosystems. Tadpoles have also been shown to develop abnormalities due to a combination of warmer conditions and toxic metals from pesticides near their habitats; these abnormalities make them easier prey for dragonfly larvae, often resulting in missing limbs.
Biology and health sciences
Frogs and toads
Animals
1950766
https://en.wikipedia.org/wiki/Graph%20isomorphism%20problem
Graph isomorphism problem
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. The problem is not known to be solvable in polynomial time nor to be NP-complete, and therefore may be in the computational complexity class NP-intermediate. It is known that the graph isomorphism problem is in the low hierarchy of class NP, which implies that it is not NP-complete unless the polynomial time hierarchy collapses to its second level. At the same time, isomorphism for many special classes of graphs can be solved in polynomial time, and in practice graph isomorphism can often be solved efficiently. This problem is a special case of the subgraph isomorphism problem, which asks whether a given graph G contains a subgraph that is isomorphic to another given graph H; this problem is known to be NP-complete. It is also known to be a special case of the non-abelian hidden subgroup problem over the symmetric group. In the area of image recognition it is known as exact graph matching.

State of the art

In November 2015, László Babai announced a quasi-polynomial time algorithm for all graphs, that is, one with running time 2^O((log n)^c) for some fixed c > 0. On January 4, 2017, Babai retracted the quasi-polynomial claim and stated a sub-exponential time bound instead, after Harald Helfgott discovered a flaw in the proof. On January 9, 2017, Babai announced a correction (published in full on January 19) and restored the quasi-polynomial claim, with Helfgott confirming the fix. Helfgott further claims that one can take c = 3, so the running time is 2^O((log n)^3). Prior to this, the best accepted theoretical algorithm was due to Babai and Luks (1983), and was based on earlier work by Luks (1982) combined with a subfactorial algorithm of V. N. Zemlyachenko. The algorithm has run time 2^O(√(n log n)) for graphs with n vertices and relies on the classification of finite simple groups. Without this classification theorem, a slightly weaker bound was obtained first for strongly regular graphs by László Babai (1980), and then extended to general graphs by Babai and Luks (1983). Improvement of the exponent for strongly regular graphs was done by Spielman (1996). For hypergraphs of bounded rank, a subexponential upper bound matching the case of graphs was obtained by Babai and Codenotti (2008). There are several competing practical algorithms for graph isomorphism, such as those due to McKay (1981), Schmidt and Druffel (1976), and Ullmann (1976). While they seem to perform well on random graphs, a major drawback of these algorithms is their exponential time performance in the worst case. The graph isomorphism problem is computationally equivalent to the problem of computing the automorphism group of a graph, and is weaker than the permutation group isomorphism problem and the permutation group intersection problem. For the latter two problems, complexity bounds similar to that for graph isomorphism have been obtained.

Solved special cases

A number of important special cases of the graph isomorphism problem have efficient, polynomial-time solutions:
Trees
Planar graphs (in fact, planar graph isomorphism is in log space, a class contained in P)
Interval graphs
Permutation graphs
Circulant graphs
Bounded-parameter graphs:
Graphs of bounded treewidth
Graphs of bounded genus (planar graphs are graphs of genus 0)
Graphs of bounded degree
Graphs with bounded eigenvalue multiplicity
k-Contractible graphs (a generalization of bounded degree and bounded genus)
Color-preserving isomorphism of colored graphs with bounded color multiplicity (i.e., at most k vertices have the same color for a fixed k); this case is in class NC, which is a subclass of P.
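As a practical illustration (a generic backtracking matcher, not any one of the specific algorithms cited above), the NetworkX library ships a VF2-style isomorphism test; a minimal sketch, assuming NetworkX is installed:

```python
# Practical isomorphism testing with NetworkX's VF2-style matcher.
import networkx as nx

G = nx.cycle_graph(5)                       # 5-cycle on vertices 0..4
H = nx.relabel_nodes(G, {i: chr(97 + i) for i in range(5)})  # renamed copy
print(nx.is_isomorphic(G, H))               # True: same structure

P = nx.path_graph(5)                        # path on 5 vertices
print(nx.is_isomorphic(G, P))               # False: same size, different shape
```

As the text notes, such backtracking matchers run quickly on typical inputs but can take exponential time in the worst case.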
Complexity class GI

Since the graph isomorphism problem is neither known to be NP-complete nor known to be tractable, researchers have sought to gain insight into the problem by defining a new class GI, the set of problems with a polynomial-time Turing reduction to the graph isomorphism problem. If in fact the graph isomorphism problem is solvable in polynomial time, GI would equal P. On the other hand, if the problem is NP-complete, GI would equal NP and all problems in NP would be solvable in quasi-polynomial time. As is common for complexity classes within the polynomial time hierarchy, a problem is called GI-hard if there is a polynomial-time Turing reduction from any problem in GI to that problem, i.e., a polynomial-time solution to a GI-hard problem would yield a polynomial-time solution to the graph isomorphism problem (and so to all problems in GI). A problem X is called complete for GI, or GI-complete, if it is both GI-hard and a polynomial-time solution to the GI problem would yield a polynomial-time solution to X. The graph isomorphism problem is contained in both NP and co-AM. GI is contained in and low for Parity P, as well as contained in the potentially much smaller class SPP. That it lies in Parity P means that the graph isomorphism problem is no harder than determining whether a polynomial-time nondeterministic Turing machine has an even or odd number of accepting paths. GI is also contained in and low for ZPP^NP. This essentially means that an efficient Las Vegas algorithm with access to an NP oracle can solve graph isomorphism so easily that it gains no power from being given the ability to do so in constant time.

GI-complete and GI-hard problems

Isomorphism of other objects

There are a number of classes of mathematical objects for which the problem of isomorphism is a GI-complete problem. A number of them are graphs endowed with additional properties or restrictions:
digraphs
labelled graphs, with the proviso that an isomorphism is not required to preserve the labels, but only the equivalence relation consisting of pairs of vertices with the same label
"polarized graphs" (made of a complete graph Km and an empty graph Kn plus some edges connecting the two; their isomorphism must preserve the partition)
2-colored graphs
explicitly given finite structures
multigraphs
hypergraphs
finite automata
Markov decision processes
commutative class-3 nilpotent semigroups (i.e., xyz = 0 for all elements x, y, z)
finite-rank associative algebras over a fixed algebraically closed field with zero squared radical and commutative factor over the radical
context-free grammars
normal-form games
balanced incomplete block designs
recognizing combinatorial isomorphism of convex polytopes represented by vertex-facet incidences

GI-complete classes of graphs

A class of graphs is called GI-complete if recognition of isomorphism for graphs from this subclass is a GI-complete problem. The following classes are GI-complete:
connected graphs
graphs of diameter 2 and radius 1
directed acyclic graphs
regular graphs
bipartite graphs without non-trivial strongly regular subgraphs
bipartite Eulerian graphs
bipartite regular graphs
line graphs
split graphs
chordal graphs
regular self-complementary graphs
polytopal graphs of general, simple, and simplicial convex polytopes in arbitrary dimensions
Many classes of digraphs are also GI-complete.

Other GI-complete problems

There are other nontrivial GI-complete problems in addition to isomorphism problems.
Finding a graph's automorphism group.
Counting automorphisms of a graph.
The recognition of self-complementarity of a graph or digraph.
A clique problem for a class of so-called M-graphs. It is shown that finding an isomorphism for n-vertex graphs is equivalent to finding an n-clique in an M-graph of size n². This fact is interesting because the problem of finding a clique of order (1 − ε)n in an M-graph of size n² is NP-complete for arbitrarily small positive ε.
The problem of homeomorphism of 2-complexes.
The definability problem for first-order logic. The input of this problem is a relational database instance I and a relation R, and the question to answer is whether there exists a first-order query Q (without constants) such that Q evaluated on I gives R as the answer.

GI-hard problems

The problem of counting the number of isomorphisms between two graphs is polynomial-time equivalent to the problem of telling whether even one exists.
The problem of deciding whether two convex polytopes given by either the V-description or H-description are projectively or affinely isomorphic. The latter means existence of a projective or affine map between the spaces that contain the two polytopes (not necessarily of the same dimension) which induces a bijection between the polytopes.

Program checking

Blum and Kannan have shown a probabilistic checker for programs for graph isomorphism. Suppose P is a claimed polynomial-time procedure that checks whether two graphs are isomorphic, but it is not trusted. To check whether graphs G and H are isomorphic:
Ask P whether G and H are isomorphic.
If the answer is "yes": attempt to construct an isomorphism using P as a subroutine. Mark a vertex u in G and v in H, and modify the graphs to make them distinctive (with a small local change). Ask P whether the modified graphs are isomorphic. If no, change v to a different vertex and continue searching. Either the isomorphism will be found (and can be verified), or P will contradict itself.
If the answer is "no": perform the following 100 times. Choose randomly G or H, and randomly permute its vertices. Ask P whether the permuted graph is isomorphic to G and to H (as in the AM protocol for graph nonisomorphism). If any of the tests fail, judge P to be an invalid program; otherwise, answer "no".
This procedure is polynomial-time and gives the correct answer if P is a correct program for graph isomorphism. If P is not a correct program, but answers correctly on G and H, the checker will either give the correct answer or detect invalid behaviour of P. If P is not a correct program and answers incorrectly on G and H, the checker will detect invalid behaviour of P with high probability, or answer wrong with probability 2^−100. Notably, P is used only as a black box.
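The "no" branch of this checker is easy to sketch in code. Below is a much-simplified Python illustration using NetworkX; P is an untrusted black-box callable, and the function shown probes only part of the full protocol:

```python
# Simplified sketch of the "no" branch of the checker described above.
# P is an untrusted callable P(G, H) -> bool claiming to decide
# graph isomorphism; we probe one necessary consistency condition.
import random
import networkx as nx

def probe_no_answer(P, G, H, trials=100):
    """P has claimed G and H are non-isomorphic; test P's consistency."""
    for _ in range(trials):
        X = random.choice([G, H])
        nodes = list(X.nodes())
        relabel = dict(zip(nodes, random.sample(nodes, len(nodes))))
        Y = nx.relabel_nodes(X, relabel)  # Y is isomorphic to X by construction
        # A correct P must recognize Y as isomorphic to its source X
        # (the full protocol also asks P about Y vs. the *other* graph,
        # as in the AM protocol for graph nonisomorphism).
        if not P(X, Y):
            return "P is an invalid program"
    return "no"
```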
Applications

Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching, i.e., identification of similarities between graphs, is an important tool in these areas. In these areas the graph isomorphism problem is known as exact graph matching. In cheminformatics and in mathematical chemistry, graph isomorphism testing is used to identify a chemical compound within a chemical database. Also, in organic mathematical chemistry graph isomorphism testing is useful for the generation of molecular graphs and for computer-aided synthesis. Chemical database search is an example of graphical data mining, where the graph canonization approach is often used. In particular, a number of identifiers for chemical substances, such as SMILES and InChI, designed to provide a standard and human-readable way to encode molecular information and to facilitate the search for such information in databases and on the web, use a canonization step in their computation, which is essentially the canonization of the graph that represents the molecule. In electronic design automation graph isomorphism is the basis of the Layout Versus Schematic (LVS) circuit design step, which verifies whether the electric circuits represented by a circuit schematic and an integrated circuit layout are the same.
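Canonization reduces repeated isomorphism testing to string comparison: each graph is mapped to a canonical form that is identical for isomorphic graphs, so database lookup becomes an exact-match query. As a rough illustration, the Weisfeiler–Lehman graph hash below behaves like a cheap fingerprint (isomorphic graphs always hash equal, but unlike a true canonical form, equal hashes do not prove isomorphism):

```python
# Canonization-flavored matching via the Weisfeiler-Lehman graph hash.
import networkx as nx

G = nx.cycle_graph(6)
H = nx.relabel_nodes(G, {i: i * 7 % 13 for i in range(6)})  # isomorphic copy

print(nx.weisfeiler_lehman_graph_hash(G) ==
      nx.weisfeiler_lehman_graph_hash(H))   # True: a database "hit"
```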
Mathematics
Graph theory
null
1950953
https://en.wikipedia.org/wiki/Crowbar
Crowbar
A crowbar, also called a wrecking bar, pry bar or prybar, pinch-bar, or occasionally a prise bar or prisebar, colloquially gooseneck or pig bar, or in Australia a jemmy, is a lever consisting of a metal bar with a single curved end and flattened points, used to force two objects apart or to gain mechanical advantage in lifting; often the curved end has a notch for removing nails. The design can be used as any of the three lever classes. The curved end is usually used as a first-class lever, and the flat end as a second-class lever; a short worked example of the resulting mechanical advantage follows the list of types below. Designs made from thick flat steel bar are often referred to as utility bars.

Materials and construction

A common hand tool, the crowbar is typically made of medium-carbon steel, possibly hardened on its ends. Commonly, crowbars are forged from long steel stock, either hexagonal or sometimes cylindrical. Alternative designs may be forged with a rounded I-shaped cross-section shaft. Versions using relatively wide flat steel bar are often referred to as "utility" or "flat bars".

Etymology and usage

The accepted etymology identifies the first component of the word crowbar with the bird name "crow", perhaps due to the crowbar's resemblance to the feet or beak of a crow. The first use of the term is dated back to around 1400. It was also called simply a crow, or iron crow; William Shakespeare used the latter, as in Romeo and Juliet, Act 5, Scene 2: "Get me an iron crow and bring it straight unto my cell." In Daniel Defoe's 1719 novel Robinson Crusoe, the protagonist lacks a pickaxe, so uses a crowbar instead: "As for the pickaxe, I made use of the iron crows, which were proper enough, though heavy."

Types

Types of crowbar include:
Alignment pry bar, also referred to as a sleeve bar
Cat's claw pry bar, more simply known as a cat's paw
Digging pry bar
Flat pry bar
Gooseneck pry bar
Heavy-duty pry bar
Molding pry bar
Rolling head pry bar
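As a rough worked example of the first-class lever arithmetic mentioned above (all numbers hypothetical):

```python
# First-class lever: force is multiplied by the ratio of the lever arms.
effort_arm = 0.90   # m, hand to fulcrum (the bar's bend resting on the work)
load_arm = 0.03     # m, fulcrum to the nail being pulled
push = 150.0        # N, force applied at the handle

advantage = effort_arm / load_arm   # mechanical advantage = 30
print(push * advantage)             # 4500 N delivered to the nail
```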
Technology
Hand tools
null