https://en.wikipedia.org/wiki/Health%20professional
Health professional
A health professional, healthcare professional, or healthcare worker (sometimes abbreviated HCW) is a provider of health care treatment and advice based on formal training and experience. The field includes those who work as a nurse, physician (such as family physician, internist, obstetrician, psychiatrist, radiologist, surgeon etc.), physician assistant, registered dietitian, veterinarian, veterinary technician, optometrist, pharmacist, pharmacy technician, medical assistant, physical therapist, occupational therapist, dentist, midwife, psychologist, audiologist, or healthcare scientist, or who perform services in allied health professions. Experts in public health and community health are also health professionals.

Fields

The healthcare workforce comprises a wide variety of professions and occupations that provide some type of healthcare service, including direct care practitioners such as physicians, nurse practitioners, physician assistants, nurses, respiratory therapists, dentists, pharmacists, speech-language pathologists, physical therapists, occupational therapists, and behavior therapists, as well as allied health professionals such as phlebotomists, medical laboratory scientists, dietitians, and social workers. They often work in hospitals, healthcare centers, and other service delivery points, but also in academic training, research, and administration. Some provide care and treatment services for patients in private homes. Many countries have a large number of community health workers who work outside formal healthcare institutions. Managers of healthcare services, health information technicians, and other assistive personnel and support workers are also considered a vital part of health care teams.

Healthcare practitioners are commonly grouped into health professions. Within each field of expertise, practitioners are often classified according to skill level and skill specialization. "Health professionals" are highly skilled workers in professions that usually require extensive knowledge, including university-level study leading to the award of a first degree or higher qualification. This category includes physicians, physician assistants, registered nurses, veterinarians, veterinary technicians, veterinary assistants, dentists, midwives, radiographers, pharmacists, physiotherapists, optometrists, operating department practitioners, and others. Allied health professionals, also referred to as "health associate professionals" in the International Standard Classification of Occupations, support the implementation of health care, treatment, and referral plans usually established by medical, nursing, respiratory care, and other health professionals, and usually require formal qualifications to practice their profession. In addition, unlicensed assistive personnel assist with providing health care services as permitted. Another way to categorize healthcare practitioners is by the sub-field in which they practice, such as mental health care, pregnancy and childbirth care, surgical care, rehabilitation care, or public health.

Mental health

A mental health professional is a health worker who offers services to improve the mental health of individuals or treat mental illness.
These include psychiatrists, psychiatry physician assistants, clinical, counseling, and school psychologists, occupational therapists, clinical social workers, psychiatric-mental health nurse practitioners, marriage and family therapists, mental health counselors, as well as other health professionals and allied health professions. These health care providers often deal with the same illnesses, disorders, conditions, and issues; however, their scope of practice often differs. The most significant difference across categories of mental health practitioners is education and training.

Workplace abuse has many damaging effects on health care workers, who report diverse negative psychological symptoms ranging from emotional trauma to severe anxiety, and whose mental, physical, and emotional health suffers as a result of mistreatment. Authors writing in a SAGE publication reported that 94% of nurses had experienced at least one PTSD symptom after a traumatic experience; others have experienced nightmares, flashbacks, and short- and long-term emotional reactions. Such abuse has detrimental effects on these workers. Violence causes health care workers to develop a negative attitude toward work tasks and patients, leaving them "feeling pressured to accept the order, dispense a product, or administer a medication". It can range from verbal to sexual to physical harassment, and the abuser may be a patient, a patient's family member, a physician, a supervisor, or another nurse.

Obstetrics

A maternal and newborn health practitioner is a health care expert who deals with the care of women and their children before, during and after pregnancy and childbirth. Such health practitioners include obstetricians, physician assistants, midwives, obstetrical nurses and many others. One of the main differences between these professions is in the training and authority to provide surgical services and other life-saving interventions. In some developing countries, traditional birth attendants, or traditional midwives, are the primary source of pregnancy and childbirth care for many women and families, although they are not certified or licensed. According to research, rates of unhappiness among obstetrician-gynecologists (Ob-Gyns) range somewhere between 40 and 75 percent.

Geriatrics

A geriatric care practitioner plans and coordinates the care of the elderly and/or disabled to promote their health, improve their quality of life, and maintain their independence for as long as possible. They include geriatricians, occupational therapists, physician assistants, adult-gerontology nurse practitioners, clinical nurse specialists, geriatric clinical pharmacists, geriatric nurses, geriatric care managers, geriatric aides, nursing aides, caregivers and others who focus on the health and psychological care needs of older adults.

Surgery

A surgical practitioner is a healthcare professional and expert who specializes in the planning and delivery of a patient's perioperative care, including during the anaesthetic, surgical and recovery stages. They may include general and specialist surgeons, physician assistants, assistant surgeons, surgical assistants, veterinary surgeons, veterinary technicians, anesthesiologists, anesthesiologist assistants, nurse anesthetists, surgical nurses, clinical officers, operating department practitioners, anaesthetic technicians, perioperative nurses, surgical technologists, and others.
Rehabilitation

A rehabilitation care practitioner is a health worker who provides care and treatment which aims to enhance and restore functional ability and quality of life to those with physical impairments or disabilities. These include physiatrists, physician assistants, rehabilitation nurses, clinical nurse specialists, nurse practitioners, physiotherapists, chiropractors, orthotists, prosthetists, occupational therapists, recreational therapists, audiologists, speech and language pathologists, respiratory therapists, rehabilitation counsellors, physical rehabilitation therapists, athletic trainers, physiotherapy technicians, orthotic technicians, prosthetic technicians, personal care assistants, and others.

Optometry

Optometry is a field traditionally associated with the correction of refractive errors using glasses or contact lenses, and with treating eye diseases. Optometrists also provide general eye care, including screening exams for glaucoma and diabetic retinopathy and management of routine eye conditions. Optometrists may undergo further training in order to specialize in various fields, including glaucoma, medical retina, low vision, or paediatrics. In some countries, such as the United Kingdom, the United States, and Canada, optometrists may also undergo further training in order to be able to perform some surgical procedures.

Diagnostics

Medical diagnosis providers are health workers responsible for the process of determining which disease or condition explains a person's symptoms and signs. It is most often referred to as diagnosis, with the medical context being implicit. This usually involves a team of healthcare providers in various diagnostic units, including radiographers, radiologists, sonographers, medical laboratory scientists, pathologists, and related professionals.

Dentistry

A dental care practitioner is a health worker and expert who provides care and treatment to promote and restore oral health. These include dentists and dental surgeons, dental assistants, dental auxiliaries, dental hygienists, dental nurses, dental technicians, dental therapists or oral health therapists, and related professionals.

Podiatry

Care and treatment for the foot, ankle, and lower leg may be delivered by podiatrists, chiropodists, pedorthists, foot health practitioners, podiatric medical assistants, podiatric nurses, and others.

Public health

A public health practitioner focuses on improving health among individuals, families and communities through the prevention and treatment of diseases and injuries, surveillance of cases, and promotion of healthy behaviors. This category includes community and preventive medicine specialists, physician assistants, public health nurses, pharmacists, clinical nurse specialists, dietitians, environmental health officers (public health inspectors), paramedics, epidemiologists, public health dentists, and others.

Alternative medicine

In many societies, practitioners of alternative medicine have contact with a significant number of people, either as integrated within or remaining outside the formal health care system. These include practitioners in acupuncture, Ayurveda, herbalism, homeopathy, naturopathy, Reiki, Shamballa Reiki energy healing, Siddha medicine, traditional Chinese medicine, traditional Korean medicine, Unani, and Yoga. In some countries, such as Canada, chiropractors and osteopaths (not to be confused with doctors of osteopathic medicine in the United States) are considered alternative medicine practitioners.
Occupational hazards

The healthcare workforce faces unique health and safety challenges and is recognized by the National Institute for Occupational Safety and Health (NIOSH) as a priority industry sector in the National Occupational Research Agenda (NORA), which aims to identify and provide intervention strategies for occupational health and safety issues.

Biological hazards

Exposure to respiratory infectious diseases like tuberculosis (caused by Mycobacterium tuberculosis) and influenza is a significant occupational hazard for health care professionals; it can be reduced with the use of respirators. Healthcare workers are also at risk for diseases that are contracted through extended contact with a patient, including scabies, and for contracting blood-borne diseases like hepatitis B, hepatitis C, and HIV/AIDS through needlestick injuries or contact with bodily fluids. The latter risk can be mitigated by vaccination when a vaccine is available, as with hepatitis B. In epidemic situations, such as the 2014–2016 West African Ebola virus epidemic or the 2003 SARS outbreak, healthcare workers are at even greater risk; they were disproportionately affected in both the Ebola and SARS outbreaks.

In general, appropriate personal protective equipment (PPE) is the first-line mode of protection for healthcare workers from infectious diseases. For it to be effective against highly contagious diseases, personal protective equipment must be watertight and prevent the skin and mucous membranes from contacting infectious material. Different levels of personal protective equipment, made to different standards, are used in situations where the risk of infection differs. Practices such as triple gloving and multiple respirators do not provide a higher level of protection and present a burden to the worker, who is additionally at increased risk of exposure when removing the PPE. Compliance with appropriate personal protective equipment rules may be difficult in certain situations, such as tropical environments or low-resource settings. A 2020 Cochrane systematic review found low-quality evidence that using more breathable fabric in PPE, double gloving, and active training reduce the risk of contamination, but also that more randomized controlled trials are needed to establish how best to train healthcare workers in proper PPE use.

Tuberculosis screening, testing, and education

Based on recommendations from the United States Centers for Disease Control and Prevention (CDC) for TB screening and testing, the following best practices should be followed when hiring and employing health care personnel (HCP). When hiring HCP, the applicant should complete: a TB risk assessment; a TB symptom evaluation covering at least the signs and symptoms listed by the CDC; a TB test in accordance with the guidelines for testing for TB infection; and additional evaluation for TB disease as needed (e.g., a chest x-ray for HCP with a positive TB test). The CDC recommends either a blood test, also known as an interferon-gamma release assay (IGRA), or a skin test, also known as a Mantoux tuberculin skin test (TST). A TB blood test for baseline testing does not require two-step testing. If the skin test method is used to test HCP upon hire, then two-step testing should be used; a one-step test is not recommended. The CDC has outlined further specifics on recommended testing for several scenarios.
In summary: if there is a previous documented positive skin test (TST), a further TST is not recommended; if there is a previous documented negative TST within 12 months before employment, or at least two documented negative TSTs ever, a single TST is recommended; in all other scenarios, except for programs using blood tests, the recommended testing is a two-step TST. According to these guidelines, any two negative TST results within 12 months of each other constitute a two-step TST. For annual screening, testing, and education, the only recurring requirement for all HCP is to receive TB education annually; while the CDC offers education materials, there is no well-defined requirement as to what constitutes satisfactory annual education. Annual TB testing is no longer recommended unless there is a known exposure or ongoing transmission at a healthcare facility. If an HCP is considered to be at increased occupational risk for TB, annual screening may be considered. HCP with a documented history of a positive TB test result do not need to be re-tested but should instead complete a TB symptom evaluation; it is assumed that any HCP who has undergone a chest x-ray test has had a previous positive test result. Regarding mental health, workers may see a doctor for evaluation at their own discretion; an evaluation at least once a year is recommended to make sure that there have not been any sudden changes.

Psychosocial hazards

Occupational stress and occupational burnout are highly prevalent among health professionals. Some studies suggest that workplace stress is pervasive in the health care industry because of inadequate staffing levels, long work hours, exposure to infectious diseases and hazardous substances leading to illness or death, and, in some countries, the threat of malpractice litigation. Other stressors include the emotional labor of caring for ill people and high patient loads. The consequences of this stress can include substance abuse, suicide, major depressive disorder, and anxiety, all of which occur at higher rates in health professionals than in the general working population. Elevated levels of stress are also linked to high rates of burnout, absenteeism and diagnostic errors, and reduced rates of patient satisfaction. In Canada, a national report (Canada's Health Care Providers) also indicated higher rates of absenteeism due to illness or disability among health care workers compared to the rest of the working population, although those working in health care reported similar levels of good health and fewer reports of being injured at work. There is some evidence that cognitive-behavioral therapy, relaxation training and therapy (including meditation and massage), and modifying schedules can reduce stress and burnout among multiple sectors of health care providers. Research is ongoing in this area, especially with regard to physicians, whose occupational stress and burnout are less researched than those of other health professions.

Healthcare workers are at higher risk of on-the-job injury due to violence. Drunk, confused, and hostile patients and visitors are a continual threat to providers attempting to treat patients. Frequently, assault and violence in a healthcare setting go unreported and are wrongly assumed to be part of the job. Violent incidents typically occur during one-on-one care; being alone with patients increases healthcare workers' risk of assault. In the United States, healthcare workers experience a disproportionate share of nonfatal workplace violence incidents.
Psychiatric units represent the highest proportion of violent incidents, at 40%; they are followed by geriatric units (20%) and the emergency department (10%). Workplace violence can also cause psychological trauma. Health care professionals are also likely to experience sleep deprivation due to their jobs. Many health care professionals are on a shift work schedule, and therefore experience misalignment of their work schedule and their circadian rhythm. In 2007, 32% of healthcare workers were found to get fewer than 6 hours of sleep a night. Sleep deprivation also predisposes healthcare professionals to make mistakes that may potentially endanger a patient.

COVID pandemic

During the COVID-19 pandemic (2020), the occupational hazards faced by health professionals extended markedly into mental health. Research from this period highlights that COVID-19 contributed greatly to the degradation of mental health in healthcare providers, including, but not limited to, anxiety, depression, burnout, and insomnia. A study by Di Mattei et al. (2020) revealed that 12.63% of COVID nurses and 16.28% of other COVID healthcare workers reported extremely severe anxiety symptoms at the peak of the pandemic. Another study surveyed 1,448 full-time employees in Japan at baseline in March 2020 and again in May 2020; it showed that psychological distress and anxiety had increased more among healthcare workers during the COVID-19 outbreak. Similarly, studies have shown that following the pandemic, at least one in five healthcare professionals reported symptoms of anxiety; specifically, "anxiety was assessed in 12 studies, with a pooled prevalence of 23.2%" following COVID. Applied to the 1,448 participants above, that percentage would correspond to about 335 people.

Abuse by patients

Patients tend to select victims who are more vulnerable; for example, Cho noted that these are often nurses who lack experience or are still adjusting to new roles at work. Vento, Cainelli, and Vallone agree, attributing the danger patients pose to health care workers to insufficient communication between them, long waiting lines, and overcrowding in waiting areas. When patients are intrusive or violent toward staff, the staff begin to question how they should handle caring for the patient. Many such incidents have left health care workers traumatized and full of self-doubt. Goldblatt and co-authors, for example, describe a case in which the husband of a woman giving birth demanded, "Who is in charge around here?" and "Who are these sluts you employ here?" of the very staff caring for his wife and child.

Physical and chemical hazards

Slips, trips, and falls are the second-most common cause of workers' compensation claims in the US and cause 21% of work absences due to injury. These injuries most commonly result in strains and sprains; women, those older than 45, and those who have been working less than a year in a healthcare setting are at the highest risk. An epidemiological study published in 2018 examined the hearing status of noise-exposed workers in the health care and social assistance (HSA) sector, in order to estimate and compare the prevalence of hearing loss by subsector.
Most of the HSA subsector prevalence estimates ranged from 14% to 18%, but the Medical and Diagnostic Laboratories subsector had a 31% prevalence and the Offices of All Other Miscellaneous Health Practitioners had a 24% prevalence. The Child Day Care Services subsector also had a 52% higher risk of hearing loss than the reference industry. Exposure to hazardous drugs, including those for chemotherapy, is another potential occupational risk; these drugs can cause cancer and other health conditions.

Gender factors

Female health care workers may face specific types of workplace-related health conditions and stress. According to the World Health Organization, women predominate in the formal health workforce in many countries and are prone to musculoskeletal injury (caused by physically demanding job tasks such as lifting and moving patients) and burnout. Female health workers are exposed to hazardous drugs and chemicals in the workplace which may cause adverse reproductive outcomes such as spontaneous abortion and congenital malformations. In some contexts, female health workers are also subject to gender-based violence from coworkers and patients.

Workforce shortages

Many jurisdictions report shortfalls in the number of trained health human resources to meet population health needs and/or service delivery targets, especially in medically underserved areas. For example, in the United States, the 2010 federal budget invested $330 million to increase the number of physicians, physician assistants, nurse practitioners, nurses, and dentists practicing in areas of the country experiencing shortages of trained health professionals. The budget expands loan repayment programs for physicians, nurses, and dentists who agree to practice in medically underserved areas. This funding will enhance the capacity of nursing schools to increase the number of nurses. It will also allow states to increase access to oral health care through dental workforce development grants. The budget's new resources will sustain the expansion of the health care workforce funded in the Recovery Act. There were 15.7 million health care professionals in the US as of 2011. In Canada, the 2011 federal budget announced a Canada Student Loan forgiveness program to encourage and support new family physicians, physician assistants, nurse practitioners and nurses to practice in underserved rural or remote communities of the country, including communities that provide health services to First Nations and Inuit populations. In Uganda, the Ministry of Health reports that as many as 50% of staffing positions for health workers in rural and underserved areas remain vacant. As of early 2011, the Ministry was conducting research and costing analyses to determine the most appropriate attraction and retention packages for medical officers, nursing officers, pharmacists, and laboratory technicians in the country's rural areas. At the international level, the World Health Organization estimates a shortage of almost 4.3 million doctors, midwives, nurses, and support workers worldwide to meet target coverage levels of essential primary health care interventions. The shortage is reported to be most severe in 57 of the poorest countries, especially in sub-Saharan Africa. Nurses are the type of health worker most commonly facing shortages around the world, and there are numerous reasons for the global nursing shortage.
These include inadequate pay, burnout, lack of recognition, and the fact that a large percentage of working nurses are over the age of 45 and nearing retirement age. Incentive programs have been put in place to address the deficit of pharmacists and pharmacy students; the reason for the shortage of pharmacy students is unknown, though the difficulty of the program may be a contributing factor. Nursing staff shortages can produce unsafe staffing levels that lead to poor patient care; it is common for a hospital to see five or more incidents per day resulting from nurses who do not receive adequate rest or meal breaks.

Regulation and registration

Practicing without a license that is valid and current is typically illegal. In most jurisdictions, the provision of health care services is regulated by the government. Individuals found to be providing medical, nursing or other professional services without the appropriate certification or license may face sanctions and criminal charges leading to a prison term. The number of professions subject to regulation, the requisites for individuals to receive professional licensure, and the nature of sanctions that can be imposed for failure to comply vary across jurisdictions. In the United States, under Michigan state law, an individual is guilty of a felony if identified as practicing a health profession without a valid personal license or registration. Health professionals can also be imprisoned if found guilty of practicing beyond the limits allowed by their licenses and registration; the state laws define the scope of practice for medicine, nursing, and a number of allied health professions. In Florida, practicing medicine without the appropriate license is a crime classified as a third-degree felony, punishable by up to five years' imprisonment; practicing a health care profession without a license in a way that results in serious bodily injury is classified as a second-degree felony, carrying up to 15 years' imprisonment. In the United Kingdom, healthcare professionals are regulated by the state; the UK Health and Care Professions Council (HCPC) protects the 'title' of each profession it regulates. For example, it is illegal for people to call themselves an Occupational Therapist or Radiographer if they are not on the register held by the HCPC.
https://en.wikipedia.org/wiki/Screw
Screw
A screw is an externally threaded fastener with a helical ridge, capable of being tightened or released by a twisting force (torque) applied to the head. The most common use of screws is to hold objects together, and many forms exist for a variety of materials. A screw may be inserted into a hole in assembled parts or may form its own thread. The difference between a screw and a bolt is that the latter is designed to be tightened or released by torquing a nut. The screw head on one end commonly has a milled slot or other recess that requires a tool to transfer the twisting force. Common tools for driving screws include screwdrivers, wrenches, coins and hex keys. The head is usually larger than the body, which provides a bearing surface and keeps the screw from being driven deeper than its length; an exception is the set screw (also known as a grub screw). The cylindrical portion of the screw from the underside of the head to the tip is called the shank; it may be fully or partially threaded, with the distance between each thread called the pitch. Most screws are tightened by clockwise rotation, which is called a right-hand thread. Screws with a left-hand thread are used in exceptional cases, such as where the screw will be subject to counterclockwise torque, which would tend to loosen a right-hand screw. For this reason, the left-side pedal of a bicycle has a left-hand thread. The screw mechanism is one of the six classical simple machines defined by Renaissance scientists.

History

Fasteners had become widespread involving concepts such as dowels and pins, wedging, mortises and tenons, dovetails, nailing (with or without clenching the nail ends), forge welding, and many kinds of binding with cord made of leather or fiber, using many kinds of knots. The screw was one of the last of the simple machines to be invented. It first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC), and then later in Ancient Egypt and Ancient Greece, where it was described by the Greek mathematician Archytas of Tarentum (428–350 BC). By the 1st century BC, wooden screws were commonly used throughout the Mediterranean world in screw presses for pressing olive oil from olives and for pressing juice from grapes in winemaking. The first documentation of the screwdriver is in the medieval Housebook of Wolfegg Castle, a manuscript written sometime between 1475 and 1490. However, screwdrivers probably did not become widespread until after 1800, once threaded fasteners had become commodified. Metal screws used as fasteners were rare in Europe before the 15th century, if known at all. The metal screw did not become a common fastener until machine tools for mass production developed toward the end of the 18th century. This development blossomed in the 1760s and 1770s along two separate paths that soon converged. The first path was pioneered by brothers Job and William Wyatt of Staffordshire, UK, who patented in 1760 a machine that one might today best call a screw machine of an early and prescient sort. It made use of a leadscrew to guide the cutter to produce the desired pitch, and the slot was cut with a rotary file while the main spindle held still (presaging live tools on lathes 250 years later). Not until 1776 did the Wyatt brothers have a wood-screw factory up and running.
Their enterprise failed, but new owners soon made it prosper, and in the 1780s they were producing 16,000 screws a day with only 30 employees—the kind of industrial productivity and output volume that would later become characteristic of modern industry but which was revolutionary at the time. Meanwhile, the English instrument-maker Jesse Ramsden (1735–1800) was working on the toolmaking and instrument-making end of the screw-cutting problem, and in 1777 he invented the first satisfactory screw-cutting lathe. The British engineer Henry Maudslay (1771–1831) gained fame by popularizing such lathes with his screw-cutting lathes of 1797 and 1800, containing the trifecta of leadscrew, slide rest, and change-gear gear train, all in the right proportions for industrial machining. In a sense he unified the paths of the Wyatts and Ramsden and did for machine screws what had already been done for wood screws, i.e., a significant easing of production spurring commodification. His firm remained a leader in machine tools for decades afterward. A misquoting of James Nasmyth popularized the notion that Maudslay had invented the slide rest, but this was incorrect; nevertheless, his lathes helped to popularize it. These developments of the 1760–1800 era, with the Wyatts and Maudslay as arguably the most important drivers, caused a great increase in the use of threaded fasteners. Standardization of threadforms began almost immediately, but it was not quickly completed; it has been an evolving process ever since. Further improvements to the mass production of screws continued to push unit prices lower and lower for decades to come, throughout the 19th century. Production followed two branches: the mass production of wood screws (metal screws for fixing wood) on specialized, single-purpose, high-volume machine tools; and the low-count, toolroom-style production of machine screws or bolts (V-thread), with easy selection among various pitches (whatever the machinist happened to need on any given day). In 1821 Hardman Philips built the first screw factory in the United States – on Moshannon Creek, near Philipsburg – for the manufacture of blunt metal screws. An expert in screw manufacture, Thomas Lever, was brought over from England to run the factory. The mill used steam and water power, with hardwood charcoal as fuel. The screws were made from wire prepared by "rolling and wire drawing apparatus" from iron manufactured at a nearby forge. The screw mill was not a commercial success; it eventually failed due to competition from the lower-cost, gimlet-pointed screw, and ceased operations in 1836. The American development of the turret lathe (1840s) and of automatic screw machines derived from it (1870s) drastically reduced the unit cost of threaded fasteners by increasingly automating the machine-tool control. This cost reduction spurred ever greater use of screws. Throughout the 19th century, the most commonly used forms of screw head (that is, drive types) were simple internal-wrenching straight slots and external-wrenching squares and hexagons. These were easy to machine and served most applications adequately. Rybczynski describes a flurry of patents for alternative drive types in the 1860s through 1890s, but explains that these were not manufactured owing to the difficulty and expense of doing so at the time.
In 1908, the Canadian P. L. Robertson was the first to make the internal-wrenching square socket drive a practical reality, by developing just the right design (slight taper angles and overall proportions) to allow the head to be stamped easily but successfully, with the metal cold forming as desired rather than being sheared or displaced in unwanted ways. Practical manufacture of the internal-wrenching hexagon drive (hex socket) shortly followed in 1911. In the early 1930s, the American Henry F. Phillips popularized the Phillips-head screw, with a cross-shaped internal drive. Later, improved cross-head screws were developed that are more forgiving of screwdrivers not of exactly the right size: Pozidriv and Supadriv. Phillips screws and screwdrivers are to some extent compatible with those for the newer types, but with the risk of damaging the heads of tightly fastened screws. Threadform standardization further improved in the late 1940s, when the ISO metric screw thread and the Unified Thread Standard were defined. Precision screws, for controlling motion rather than fastening, developed around the turn of the 19th century, and represented one of the central technical advances, along with flat surfaces, that enabled the industrial revolution. They are key components of micrometers and lathes.

Manufacture

There are three steps in manufacturing a screw: heading, thread rolling, and coating. Screws are normally made from wire, which is supplied in large coils, or from round bar stock for larger screws. The wire or rod is then cut to the proper length for the type of screw being made; this workpiece is known as a blank. It is then cold headed, a cold working process. Heading produces the head of the screw. The shape of the die in the machine dictates what features are pressed into the screw head; for example, a flat-head screw uses a flat die. For more complicated shapes, two heading processes are required to get all of the features into the screw head. This production method is used because heading has a very high production rate and produces virtually no waste material. Slotted-head screws require an extra step to cut the slot in the head; this is done on a slotting machine. These machines are essentially stripped-down milling machines designed to process as many blanks as possible. The blanks are then polished again prior to threading. The threads are usually produced via thread rolling; however, some are cut. The workpiece is then tumble finished with wood and leather media for final cleaning and polishing. For most screws, a coating, such as electroplating with zinc (galvanizing) or applying black oxide, is applied to prevent corrosion.

Types of screws

Body

Threaded fasteners have either a tapered shank or a non-tapered shank. Fasteners with tapered shanks are designed either to be driven into a substrate directly or into a pilot hole in a substrate, and most are classed as screws; mating threads are formed in the substrate as these fasteners are driven in. Fasteners with a non-tapered shank are generally designed to mate with a nut or to be driven into a tapped hole, and most would be classed as bolts, although some are thread-forming (e.g., Taptite) and some authorities would treat some as screws when they are used with a female threaded fastener other than a nut. Sheet-metal screws do not have the chip-clearing flute of self-tapping screws; however, some wholesale vendors do not distinguish between the two kinds.
Wood screw

A wood screw is a metal screw used to fix wood, with a sharp point and a tapered thread designed to cut its own thread into the wood. Some screws are driven into intact wood; larger screws are usually driven into a hole narrower than the screw thread, and cut the thread in the wood. Early wood screws were made by hand, with a series of files, chisels, and other cutting tools, and these can be spotted easily by noting the irregular spacing and shape of the threads, as well as file marks remaining on the head of the screw and in the area between threads. Many of these screws had a blunt end, completely lacking the sharp tapered point of nearly all modern wood screws. Some wood screws were made with cutting dies as early as the late 1700s (possibly even before 1678, when the book content was first published in parts). Eventually, lathes were used to manufacture wood screws, with the earliest patent being recorded in 1760 in England. During the 1850s, swaging tools were developed to provide a more uniform and consistent thread; screws made with these tools have rounded valleys with sharp and rough threads. Once screw-turning machines were in common use, most commercially available wood screws were produced with this method. These cut wood screws are almost invariably tapered, and even when the tapered shank is not obvious, they can be discerned because the threads do not extend past the diameter of the shank. Such screws are best installed after drilling a pilot hole with a tapered drill bit. The majority of modern wood screws, except for those made of brass, are formed on thread-rolling machines. These screws have a constant diameter and threads with a larger diameter than the shank, and are stronger because the rolling process does not cut the grain of the metal.

Self-tapping screw

A self-tapping screw is designed to cut its own thread, usually in a fairly soft metal or plastic, in the same way as a wood screw (wood screws are actually self-tapping, but are not referred to as such).

Machine screw

ASME standards specify a variety of machine screws (also known as stove bolts) in a range of diameters. A machine screw or bolt is usually a smaller fastener (less than 1/4 inch in diameter) threaded the entire length of its shank, usually with a recessed drive type (slotted, Phillips, etc.), and usually intended to screw into a pre-formed thread, either a nut or a threaded (tapped) hole, unlike a wood or self-tapping screw. Machine screws are also made with socket heads (see above), often referred to as socket-head machine screws.

Hex cap screw

ASME standard B18.2.1-1996 specifies hex cap screws over a range of diameters. In 1991, responding to an influx of counterfeit fasteners, Congress passed PL 101-592, the "Fastener Quality Act". As a result, the ASME B18 committee re-wrote B18.2.1, renaming the finished hex bolt to "hex cap screw", a term that had existed in common usage long before but was now also codified as an official name in the ASME B18 standard. Lug bolts and head bolts are other terms for fasteners that are designed to be threaded into a tapped hole that is part of the assembly, and so, based on the Machinery's Handbook distinction, they would be screws; here common terms are at variance with the Handbook's distinction.

Lag screw

Lag screws (US) or coach screws (UK, Australia, and New Zealand) (also referred to as lag bolts or coach bolts, although this is a misnomer), or French wood screws (Scandinavia), are large wood screws.
Lag screws are used to lag together lumber framing, to lag machinery feet to wood floors, and for other heavy carpentry applications. The attributive modifier lag came from an early principal use of such fasteners: the fastening of lags such as barrel staves and other similar parts. These fasteners are "screws" according to the Machinery's Handbook criteria, and the obsolescent term "lag bolt" has been replaced by "lag screw" in the Handbook. However, based on tradition, many tradesmen continue to refer to them as "bolts", because, like head bolts, they are large, with hex or square heads that require a wrench, socket, or specialized bit to turn. The head is typically an external hex. Metric hex-headed lag screws are covered by DIN 571; inch square-headed and hex-headed lag screws are covered by ASME B18.2.1. A typical lag screw can range in diameter from 4 to 20 mm or #10 to 1.25 in (4.83 to 31.75 mm), and in length from 16 to 200 mm or longer, with the coarse threads of a wood-screw or sheet-metal-screw threadform (but larger). The material is usually a carbon steel substrate with a coating of zinc galvanization for corrosion resistance; the zinc coating may be bright yellow (electroplated) or dull gray (hot-dip galvanized).

Bone screw

Bone screws have the medical use of securing broken bones in living humans and animals. As with aerospace and nuclear power, medical use involves some of the highest technology for fasteners; excellent performance, longevity, and quality are required, and reflected in prices. Bone screws are often made of relatively non-reactive stainless steel or titanium, and they often have advanced features such as conical threads, multistart threads, cannulation (hollow core), and proprietary screw drive types, some not seen outside of these applications.

Head

There are a variety of screw head shapes. A few varieties of screw are manufactured with a break-away head, which snaps off when adequate torque is applied, to prevent removal after fitting, often to avoid tampering. Common head styles include: the pan head (short for "panel"), a low disc with a rounded, high outer edge and large surface area; the button head (BH), cylindrical with a rounded top; the round head, dome-shaped and often used for decoration; the mushroom or truss head, a lower-profile dome designed to prevent tampering; the countersunk or flat head, which requires countersinking so that it can be driven with the head flush with the surface it is screwed into, the angle of the head being measured as the aperture of the cone; the oval or raised head, a decorative screw head with a countersunk bottom and rounded top, also known as "raised countersunk" or "instrument head" in the UK; the bugle head, similar to countersunk but with a smooth progression from the shank to the angle of the head, similar to the bell of a bugle; the cheese head, cylindrical; and the fillister head, cylindrical but with a slightly convex top surface. A flanged head can be based on any non-countersunk head style, with the addition of an integrated flange at the base of the head that eliminates the need for a flat washer. A hex head is shaped like the head of a hex bolt and is sometimes flanged. Most head types, such as pan and truss, can provide for countersinking on the underside; this is most relevant to flat heads, which can be driven flush with the surface they are screwed into.

Sizes

Metric

The international standards for metric externally threaded fasteners are ISO 898-1 for property classes produced from carbon steels and ISO 3506-1 for property classes produced from corrosion-resistant steels.
Inch

There are many standards governing the material and mechanical properties of imperial-sized externally threaded fasteners. Some of the most common consensus standards for grades produced from carbon steels are ASTM A193, ASTM A307, ASTM A354, ASTM F3125, and SAE J429. Some of the most common consensus standards for grades produced from corrosion-resistant steels are ASTM F593 and ASTM A193.

Tools

The hand tool used to drive in most screws is called a screwdriver. A power tool that does the same job is a power screwdriver; power drills may also be used with screw-driving attachments. Where the holding power of the screwed joint is critical, torque-measuring and torque-limiting screwdrivers are used to ensure that sufficient but not excessive force is developed by the screw. The hand tool for driving hex-head threaded fasteners is a spanner (UK usage) or wrench (US usage), while a nut setter is used with a power screwdriver. Modern screws employ a wide variety of screw drive designs, each requiring a different kind of tool to drive in or extract them. The most common screw drives are the slotted and Phillips in the US; hex, Robertson, and Torx are also common in some applications. Some types of drive are intended for automatic assembly in mass production of such items as automobiles. More exotic screw drive types may be used in situations where tampering is undesirable, such as in electronic appliances that should not be serviced by the home repair person.

Screw threads

There are many systems for specifying the dimensions of screws, but in much of the world the ISO metric screw thread preferred series has displaced the many older systems. Other relatively common systems include the British Standard Whitworth, the BA system (British Association), and the Unified Thread Standard.

ISO metric screw thread

The basic principles of the ISO metric screw thread are defined in international standard ISO 68-1, and preferred combinations of diameter and pitch are listed in ISO 261. The smaller subset of diameter and pitch combinations commonly used in screws, nuts and bolts is given in ISO 262. The most commonly used pitch value for each diameter is the coarse pitch. For some diameters, one or two additional fine pitch variants are also specified, for special applications such as threads in thin-walled pipes. ISO metric screw threads are designated by the letter M followed by the major diameter of the thread in millimetres (e.g. M8). If the thread does not use the normal coarse pitch (e.g. 1.25 mm in the case of M8), then the pitch in millimetres is also appended with a multiplication sign (e.g. "M8×1" if the screw thread has an outer diameter of 8 mm and advances by 1 mm per 360° rotation). The nominal diameter of a metric screw is the outer diameter of the thread. The tapped hole (or nut) into which the screw fits has an internal diameter which is the size of the screw minus the pitch of the thread. Thus, an M6 screw, which has a pitch of 1 mm, is made by threading a 6 mm shank, and the nut or threaded hole is made by tapping threads into a hole of 5 mm diameter (6 mm − 1 mm). Metric hexagon bolts, screws and nuts are specified, for example, in International Standards ISO 4014, ISO 4017, and ISO 4032.
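As a worked illustration of the diameter-minus-pitch rule above, the following Python sketch computes tap-drill sizes for a few common coarse pitches. It is illustrative only: the small pitch table and the function name are ours, not drawn from any standard or library.

    # Tap-drill size for an ISO metric thread: major diameter minus pitch.
    ISO_COARSE_PITCH_MM = {3: 0.5, 4: 0.7, 5: 0.8, 6: 1.0, 8: 1.25, 10: 1.5, 12: 1.75}

    def tap_drill_mm(major_diameter_mm, pitch_mm=None):
        """Return the tap-drill diameter in mm; default to the coarse pitch."""
        if pitch_mm is None:
            pitch_mm = ISO_COARSE_PITCH_MM[major_diameter_mm]
        return major_diameter_mm - pitch_mm

    print(tap_drill_mm(6))       # M6 coarse (pitch 1.0 mm) -> 5.0
    print(tap_drill_mm(8, 1.0))  # M8x1 fine -> 7.0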
These hexagon standards also list, for each thread size, the maximum width across the hexagonal flats (wrench size), together with some non-preferred intermediate sizes. Bear in mind that these are just examples: the width across flats differs for structural bolts and flanged bolts, and also varies by standards organization.

Whitworth

The first person to create a standard (in about 1841) was the English engineer Sir Joseph Whitworth. Whitworth screw sizes are still used, both for repairing old machinery and where a coarser thread than the metric fastener thread is required. Whitworth became British Standard Whitworth, abbreviated to BSW (BS 84:1956), and the British Standard Fine (BSF) thread was introduced in 1908 because the Whitworth thread was too coarse for some applications. The thread angle was 55°, and the depth and pitch varied with the diameter of the thread (i.e., the bigger the bolt, the coarser the thread). Spanners for Whitworth bolts are marked with the size of the bolt, not the distance across the flats of the screw head. The most common use of a Whitworth pitch nowadays is in all UK scaffolding. Whitworth threads also survive in the standard photographic tripod thread, which for small cameras is 1/4" Whitworth (20 tpi) and for medium/large format cameras is 3/8" Whitworth (16 tpi); the same threads are used for microphone stands and their appropriate clips, again in both sizes, along with "thread adapters" to allow the smaller size to attach to items requiring the larger thread. Note that while 1/4" UNC bolts fit 1/4" BSW camera tripod bushes, yield strength is reduced by the different thread angles of 60° and 55° respectively.

British Association screw thread

British Association (BA) screw threads, named after the British Association for the Advancement of Science, were devised in 1884 and standardised in 1903. Screws were described as "2BA", "4BA" etc., the odd numbers being rarely used, except in equipment made prior to the 1970s for telephone exchanges in the UK. This equipment made extensive use of odd-numbered BA screws, in order—it may be suspected—to reduce theft. BA threads are specified by British Standard BS 93:1951 "Specification for British Association (B.A.) screw threads with tolerances for sizes 0 B.A. to 16 B.A." While not related to ISO metric screws, the sizes were actually defined in metric terms, a 0BA thread having a 6 mm diameter and 1 mm pitch. Other threads in the BA series are related to 0BA in a geometric series with the common factors 0.9 and 1.2: the pitch of an nBA thread is 0.9^n mm, and its major diameter is 6 × pitch^1.2 mm. For example, a 4BA thread has pitch 0.9^4 mm (about 0.65 mm) and diameter 6 × (0.9^4)^1.2 mm (about 3.62 mm). Although 0BA has the same diameter and pitch as ISO M6, the threads have different forms and are not compatible. BA threads are still common in some niche applications. Certain types of fine machinery, such as moving-coil meters and clocks, tend to have BA threads wherever they are manufactured. BA sizes were also used extensively in aircraft, especially those manufactured in the United Kingdom. BA sizing is still used in railway signalling, mainly for the termination of electrical equipment and cabling. BA threads are extensively used in model engineering, where the smaller hex head sizes make scale fastenings easier to represent; as a result, many UK model engineering suppliers still carry stocks of BA fasteners, typically up to 8BA and 10BA. 5BA is also commonly used, as it can be threaded onto 1/8 in rod.
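The BA geometric series can be made concrete with a short sketch. The following Python snippet is illustrative only (the function names are ours); note that BS 93 rounds the published values slightly, so the computed figures may differ in the last digit.

    # BA thread series: pitch = 0.9**n mm, major diameter = 6 * pitch**1.2 mm.
    def ba_pitch_mm(n):
        return 0.9 ** n

    def ba_major_diameter_mm(n):
        return 6.0 * ba_pitch_mm(n) ** 1.2

    for n in (0, 2, 4):
        print(n, round(ba_pitch_mm(n), 3), round(ba_major_diameter_mm(n), 2))
    # 0BA: 1.0 mm pitch, 6.0 mm diameter
    # 2BA: 0.81 mm pitch, 4.66 mm diameter
    # 4BA: 0.656 mm pitch, 3.62 mm diameter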
Unified Thread Standard

The Unified Thread Standard (UTS) is most commonly used in the United States, but is also extensively used in Canada and occasionally in other countries. The size of a UTS screw is described using the format X-Y, where X is the nominal size (the hole or slot size in standard manufacturing practice through which the shank of the screw can easily be pushed) and Y is the threads per inch (TPI). For sizes 1/4 inch and larger, the size is given as a fraction; for sizes less than this, an integer is used, ranging from 0 to 16. The integer sizes can be converted to the actual diameter by using the formula 0.060 + (0.013 × number). For example, a #4 screw is 0.060 + (0.013 × 4) = 0.060 + 0.052 = 0.112 inches in diameter. There are also screw sizes smaller than "0" (zero or ought): 00, 000, and 0000, which are usually referred to as two ought, three ought, and four ought. Most eyeglasses have the bows screwed to the frame with 00-72 (pronounced "double ought seventy-two") size screws. To calculate the major diameter of "ought" size screws, count the number of 0's, multiply this number by 0.013, and subtract the result from 0.060. For example, the major diameter of a 000-72 screw thread is 0.060 − (3 × 0.013) = 0.060 − 0.039 = 0.021 inches. For most screw sizes there are multiple TPI available, the most common being designated Unified Coarse Thread (UNC or UN) and Unified Fine Thread (UNF or UF). Note: in countries other than the United States and Canada, the ISO metric screw thread system is primarily used today. Unlike most other countries, the United States and Canada still use the Unified (inch) thread system, though both are moving over to the ISO metric system. It is estimated that approximately 60% of screw threads in use in the United States are still inch-based.

Mechanical classifications

The numbers stamped on the head of a bolt indicate its grade, which reflects the strength of the bolt for a given application. High-strength steel bolts usually have a hexagonal head with an ISO strength rating (called the property class) stamped on the head; the absence of a marking or number indicates a lower-grade bolt with low strength. The property classes most often used are 5.8, 8.8, and 10.9. The number before the point is the ultimate tensile strength in MPa divided by 100. The number after the point is the ratio of yield strength to ultimate tensile strength. For example, a property class 5.8 bolt has a nominal (minimum) ultimate tensile strength of 500 MPa and a tensile yield strength of 0.8 times ultimate tensile strength, or 0.8 × 500 = 400 MPa. Ultimate tensile strength is the tensile stress at which the bolt fails. Tensile yield strength is the stress at which the bolt will yield in tension across the entire section of the bolt and receive a permanent set (an elongation from which it will not recover when the force is removed) of 0.2% offset strain. Proof strength is the usable strength of the fastener. Tension testing of a bolt up to the proof load should not cause permanent set of the bolt; proof strength should be established by testing actual fasteners rather than by calculation. If a bolt is tensioned beyond the proof load, it may behave in a plastic manner due to yielding in the threads, and the tension preload may be lost due to the permanent plastic deformations.
When a fastener is elongated prior to reaching the yield point, it is said to be operating in the elastic region, whereas elongation beyond the yield point is referred to as operating in the plastic region of the bolt material. If a bolt is loaded in tension beyond its proof strength, yielding at the net root section of the bolt will continue until the entire section begins to yield and it has exceeded its yield strength. If tension increases further, the bolt fractures at its ultimate strength. Mild steel bolts have property class 4.6, i.e. 400 MPa ultimate strength and 0.6 × 400 = 240 MPa yield strength. High-strength steel bolts have property class 8.8, i.e. 800 MPa ultimate strength and 0.8 × 800 = 640 MPa yield strength, or above. The same type of screw or bolt can be made in many different grades of material. For critical high-tensile-strength applications, low-grade bolts may fail, resulting in damage or injury. On SAE-standard bolts, a distinctive pattern of marking is impressed on the heads to allow inspection and validation of the strength of the bolt. However, low-cost counterfeit fasteners may be found with actual strength far less than indicated by the markings. Such inferior fasteners are a danger to life and property when used in aircraft, automobiles, heavy trucks, and similar critical applications. The Machinery's Handbook draws a distinction between bolts and screws according to how the fastener is installed and used; this distinction is consistent with ASME B18.2.1 and some dictionary definitions of screw and bolt. Old USS and SAE standards defined cap screws as fasteners with shanks that were threaded to the head, and bolts as fasteners with shanks that were partially unthreaded. The federal government of the United States made an effort to formalize the difference between a bolt and a screw, because different tariffs apply to each.
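Returning to the UTS numbered sizes described earlier, the arithmetic for both the numbered and the "ought" sizes is easy to capture in a few lines. This Python sketch is illustrative only; the function name is ours.

    def uts_number_diameter_in(size):
        """Major diameter in inches for a UTS numbered size such as '4' or '000'."""
        if len(size) > 1 and set(size) == {"0"}:
            # 'ought' sizes (00, 000, 0000): subtract 0.013 in per zero
            return 0.060 - 0.013 * len(size)
        return 0.060 + 0.013 * int(size)

    print(round(uts_number_diameter_in("4"), 3))    # 0.112 (a #4 screw)
    print(round(uts_number_diameter_in("000"), 3))  # 0.021 ('three ought')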
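The property-class markings described above can likewise be decoded mechanically. The following sketch (function name ours, for illustration) returns the nominal ultimate and yield strengths implied by a marking.

    def property_class_strengths_mpa(marking):
        """(ultimate, yield) tensile strengths in MPa from an ISO property class."""
        before, after = marking.split(".")
        ultimate = int(before) * 100                 # e.g. '8.8' -> 800 MPa ultimate
        yield_strength = ultimate * int(after) / 10  # '.8' -> 80% of ultimate
        return ultimate, yield_strength

    print(property_class_strengths_mpa("5.8"))   # (500, 400.0)
    print(property_class_strengths_mpa("10.9"))  # (1000, 900.0)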
https://en.wikipedia.org/wiki/Goods%20station
Goods station
A goods station (also known as a goods yard or goods depot) or freight station is, in the widest sense, a railway station where, either exclusively or predominantly, goods (or freight), such as merchandise, parcels, and manufactured items, are loaded onto or unloaded from ships or road vehicles and/or where goods wagons are transferred to local sidings. A station where goods are not specifically received or dispatched, but simply transferred on their way to their destination between the railway and another means of transport, such as ships or lorries, may be referred to as a transshipment station. This often takes the form of a container terminal and may also be known as a container station. Goods stations were more widespread in the days when the railways were common carriers; they were often converted from former passenger stations whose traffic had moved elsewhere.

First goods station

The world's first dedicated goods terminal was Park Lane Goods Station at the south end of the Liverpool Docks. Built in 1830, the terminal was reached by a tunnel from Edge Hill in the east of the city. The station was part of the Liverpool and Manchester Railway, itself a first as the first inter-city railway.

Location

Goods stations may be located: next to a passenger station (either on the far side of the platforms as seen from the station building or immediately alongside it); separately from the associated passenger station, on one of the railway lines leading from it; or as an independent facility not connected with any particular passenger station. Where individual goods wagons are dispatched to specific goods stations, they are usually delivered to special shunting stations or marshalling yards, where they are sorted and then collected. Sometimes there are combined shunting and goods stations.

Equipment

A goods station is usually equipped with a large number of storage and loading sidings in order to fulfil its task. On the loading sidings there may be fixed facilities, such as cranes or conveyor belts, or temporary equipment, such as wheeled ramps for the loading of sugar beet. Stations whose primary purpose is the handling of containers are also known as container terminals (CT); they are equipped with special cranes and fork-lift vehicles for loading containers from lorries or ships onto the railway vehicles, or vice versa. If only a small section of a station is used for the loading and unloading of goods, it may be referred to as the "loading area" or "loading dock" and has its own access and signposting. Often there are no fixed facilities for loading, and the individual firm has to organise its own loading equipment, such as conveyor belts or lorry cranes. Such loading areas were mainly to be found on branch lines, narrow gauge railways and at smaller stations. Medium-sized and larger goods stations usually have marshalling or shunting sidings to enable trains to be divided among the various local loading and sorting sidings and industrial branches, at the same time performing the function of a small railway hub. In many European countries they are also equipped with a hump yard.

Changing nature of goods stations

As an increasing amount of goods traffic has switched from rail to road, many goods stations and, in consequence, marshalling yards have closed and often eventually been demolished, so that reviving rail services at the same location is no longer possible.
In combined goods and hub stations with a hump yard, the latter was closed if the station lost its role as a railway hub, whilst the local goods function was retained. In addition, in most countries, part-load and parcel goods services have been entirely transferred to the roads, which has led to the closure of goods sheds as well as most of the public loading sidings and ramps used by smaller customers. As a result, most of the remaining goods stations today are used only as container or transshipment stations. European terminology In German-speaking countries, various terms for goods station are used, including Güterbahnhof in Germany (abbreviated Gbf) and Switzerland (GB), and Frachtenbahnhof (Fbf) in Austria; Umschlagbahnhof (Ubf) denotes a transshipment station, and Containerbahnhof or Containerterminal (CT) a container station or terminal. The French terms are gare aux marchandises and gare de fret.
Technology
Concepts of ground transport
null
452162
https://en.wikipedia.org/wiki/Parrotfish
Parrotfish
Parrotfish are a group of fish species traditionally regarded as a family (Scaridae), but now often treated as a subfamily (Scarinae) or tribe (Scarini) of the wrasses (Labridae). With roughly 95 species, the group reaches its greatest species richness in the Indo-Pacific. Parrotfish are found in coral reefs, rocky coasts, and seagrass beds, and can play a significant role in bioerosion. Description Parrotfish are named for their dentition, which is distinct from that of other fish, including other labrids. Their numerous teeth are arranged in a tightly packed mosaic on the external surface of their jaw bones, forming a parrot-like beak with which they rasp algae from coral and other rocky substrates (which contributes to the process of bioerosion). Maximum sizes vary within the group, with the majority of species reaching in length. However, a few species reach lengths in excess of , and the green humphead parrotfish can reach up to . The smallest species is the bluelip parrotfish (Cryptotomus roseus), which has a maximum size of . Mucus Some parrotfish species, including the queen parrotfish (Scarus vetula), secrete a mucus cocoon, particularly at night. Prior to going to sleep, some species extrude mucus from their mouths, forming a protective cocoon that envelops the fish, presumably hiding its scent from potential predators. This mucus envelope may also act as an early warning system, allowing the parrotfish to flee when it detects predators such as moray eels disturbing the membrane. The skin itself is covered in another mucous substance which may have antioxidant properties helpful in repairing bodily damage, or in repelling parasites, in addition to providing protection from UV light. Feeding Most parrotfish species are herbivores, feeding mainly on epilithic algae. A wide range of other small organisms are sometimes eaten, including invertebrates (sessile and benthic species, as well as zooplankton), bacteria and detritus. A few mostly larger species, such as the green humphead parrotfish (Bolbometopon muricatum), feed extensively on living coral (polyps). None of these are exclusive corallivores, but polyps can make up as much as half their diet, or even more in the green humphead parrotfish. Overall, it has been estimated that fewer than one percent of parrotfish bites involve live corals, and all except the green humphead parrotfish prefer algae-covered surfaces over live corals. Nevertheless, when they do eat coral polyps, localized coral death can occur. Their feeding activity is important for the production and distribution of coral sands in the reef biome, and can prevent algal overgrowth of the reef structure. The teeth grow continuously, replacing material worn away by feeding. Whether they feed on coral, rock or seagrasses, the substrate is ground up between the pharyngeal teeth. After they digest the edible portions, they excrete the rock as sand, helping to create small islands and sandy beaches. The humphead parrotfish can produce of sand each year, or, on average (given the many variables, such as size, species, location and depth), almost per parrotfish per day. While feeding, parrotfish must be wary of predation by one of their main predators, the lemon shark. On Caribbean coral reefs, parrotfish are important consumers of sponges. An indirect effect of parrotfish grazing on sponges is the protection of reef-building corals that would otherwise be overgrown by fast-growing sponge species.
Analysis of parrotfish feeding biology describes three functional groups: excavators, scrapers and browsers. Excavators have larger, stronger jaws that can gouge the substrate, leaving visible scars on the surface. Scrapers have less powerful jaws that can, but infrequently do, leave visible scraping scars on the substrate. Some of these may also feed on sand instead of hard surfaces. Browsers mainly feed on seagrasses and their epiphytes. Mature excavating species include Bolbometopon muricatum, Cetoscarus, Chlorurus and Sparisoma viride. These excavating species all feed as scrapers in their early juvenile stages; Hipposcarus and Scarus, which also feed as scrapers in early juvenile stages, retain the scraping feeding mode as adults. Browsing species are found in the genera Calotomus, Cryptotomus, Leptoscarus, Nicholsina and Sparisoma. Feeding modes reflect habitat preferences, with browsers chiefly living in grassy seabeds, and excavators and scrapers on coral reefs. More recently, the microphage feeding hypothesis has challenged the prevailing paradigm of parrotfish as algal consumers. Microscopy and molecular barcoding of coral reef substrate bitten by scraping and excavating parrotfish suggest that coral reef cyanobacteria of the order Nostocales are important in the feeding of these parrotfish. Additional microscopy and molecular barcoding research indicates that some parrotfish may ingest microscopic biota associated with endolithic sponges. Life cycle The development of parrotfishes is complex and accompanied by a series of changes in sex and colour (polychromatism). Most species are sequential hermaphrodites, starting as females (known as the initial phase) and then changing to males (the terminal phase). In many species, for example the stoplight parrotfish (Sparisoma viride), a number of individuals develop directly into males (i.e., they do not start as females). These directly developing males usually most resemble the initial phase, and often display a different mating strategy from the terminal-phase males of the same species. A few species, such as the Mediterranean parrotfish (S. cretense), are secondary gonochorists. This means that some females do not change sex (they remain females throughout their lives), the ones that do change from female to male do so while still immature (reproductively functioning females do not change to males), and there are no males with female-like colours (the initial-phase males of other parrotfish). The marbled parrotfish (Leptoscarus vaigiensis) is the only species of parrotfish known not to change sex. In most species, the initial phase is dull red, brown, or grey, while the terminal phase is vividly green or blue with bright pink, orange or yellow patches. In a smaller number of species the phases are similar, and in the Mediterranean parrotfish the adult female is brightly coloured while the adult male is gray. In most species, juveniles have a different colour pattern from adults. Juveniles of some tropical species can alter their colour temporarily to mimic other species. Where the sexes and ages differ, the remarkably different phases were often first described as separate species. As a consequence, early scientists recognized more than 350 parrotfish species, almost four times the actual number. Most tropical species form large schools when feeding, and these are often grouped by size.
Harems of several females presided over by a single male are the norm in most species, with the males vigorously defending their position from any challenge. As pelagic spawners, parrotfish release many tiny, buoyant eggs into the water, which become part of the plankton. The eggs float freely, settling into the coral until hatching. The sex change in parrotfishes is accompanied by changes in circulating steroids. Females have high levels of estradiol, moderate levels of testosterone and undetectable levels of the major fish androgen 11-ketotestosterone. During the transition from the initial to the terminal coloration phase, concentrations of 11-ketotestosterone rise dramatically and estrogen levels decline. If a female is injected with 11-ketotestosterone, it will cause a precocious change in gonadal, gametic and behavioural sex. Economic importance A commercial fishery exists for some of the larger species, particularly in the Indo-Pacific, but also for a few others, such as the Mediterranean parrotfish. Protecting parrotfishes has been proposed as a way of saving Caribbean coral reefs from being overgrown with seaweed and sponges. Despite their striking colours, their feeding behaviour renders them highly unsuitable for most marine aquaria. One study found that the parrotfish is extremely important for the health of the Great Barrier Reef, being the only one of thousands of reef fish species that regularly performs the task of scraping and cleaning inshore coral reefs. Taxonomy Traditionally, the parrotfishes have been considered a family-level taxon, Scaridae. Although phylogenetic and evolutionary analyses of parrotfishes are ongoing, they are now accepted to be a clade in the tribe Cheilini, and are commonly referred to as scarine labrids (subfamily Scarinae, family Labridae). Some authorities have preferred to maintain the parrotfishes as a family-level taxon, with the result that Labridae is not monophyletic (unless split into several families). The World Register of Marine Species divides the group into two subfamilies as follows: subfamily Scarinae genus Bolbometopon Smith, 1956 (1 species) genus Cetoscarus Smith, 1956 (2 species) genus Chlorurus Swainson, 1839 (18 species) genus Hipposcarus Smith, 1956 (2 species) genus Scarus Forsskål, 1775 (53 species) subfamily Sparisomatinae genus Calotomus Gilbert, 1890 (5 species) genus Cryptotomus Cope, 1870 (1 species) genus Leptoscarus Swainson, 1839 (1 species) genus Nicholsina Fowler, 1915 (3 species) genus Sparisoma Swainson, 1839 (15 species) Some sources retain the Scaridae as a family, placing it alongside the wrasses of the family Labridae and the weed whitings Odacidae in the order Labriformes, part of the Percomorpha; they also do not support the division of the Scaridae into two subfamilies.
Biology and health sciences
Acanthomorpha
Animals
452494
https://en.wikipedia.org/wiki/Wheellock
Wheellock
A wheellock, wheel-lock, or wheel lock is a friction-wheel mechanism which creates a spark that causes a firearm to fire. It was the next major development in firearms technology after the matchlock, and the first self-igniting firearm. It takes its name from the rotating steel wheel that provides ignition. Developed in Europe around 1500, it was used alongside the matchlock, the snaplock, the snaphance, and the flintlock. Design The wheellock works by spinning a spring-loaded steel wheel against a piece of pyrite to generate intense sparks, which ignite gunpowder in a pan; the flash passes through a small touchhole to ignite the main charge in the firearm's barrel. The pyrite is clamped in vise jaws on a spring-loaded arm (or 'dog'), which rests on the pan cover. When the trigger is pulled, the pan cover is opened and the wheel is rotated, with the pyrite pressed into contact. A close modern analogy to the wheellock mechanism is the operation of a lighter, where a toothed steel wheel is spun in contact with a piece of sparking material to ignite the liquid or gaseous fuel. A wheellock firearm had the advantage that it could be instantly readied and fired, even with one hand, in contrast to common matchlock firearms, which required a burning length of slow match to be kept at hand whenever the gun might be needed, and which demanded the operator's full attention and both hands to operate. On the other hand, wheellock mechanisms were complex to make, which made them relatively costly. The "dog" The dog is a spring-loaded arm pivoted on the outside of the lock plate. A sparking material, usually a small piece of iron pyrite, is clamped and held by vise-like jaws at the swinging end of the arm. The dog has two positions to which it can be pivoted by hand: a "safe" position, in which the dog is pushed towards the muzzle of the firearm, and an "operating" position, in which the dog is pulled back towards the operator so that the pyrite in its jaws can engage either the top of the pan cover (see below) or, in the absence of the pan cover, the edge of a steel wheel bearing longitudinal grooves around its circumference. Flint is not suitable as a sparking material in the wheellock because it is too hard and would quickly wear away the wheel's grooves. The wheel The upper segment of the grooved wheel, made of hardened steel, projects through a slot cut to its precise dimensions in the base of the priming pan. The wheel is grooved on its outside circumference with three or more V-shaped grooves with transverse cuts at intervals, providing a friction surface for the iron pyrite. The wheel is fixed to a shaft, one end of which projects outside the lockplate. The outside projection is of square section to permit a spanner (wrench) to be engaged for subsequent tensioning of the lock. The other end of the shaft fits through a hole in the lockplate, and on this end is forged a cam, or eccentric. One end of a short, robust chain (made of three or four flat, parallel links, like a short piece of bicycle chain) is fixed to the cam, while the other end of the chain is held in a groove at the end of the longer branch of a large, heavy V-spring, which is generally retained by a screw and a headed bracket through upstands inside the lockplate. The pan As in all muzzle-loading firearms (prior to the introduction of the percussion cap), the pan transmits the fire to the main charge of gunpowder inside the breech of the barrel via a small hole (or "vent") in the side of the breech that gives onto the pan.
The priming pan of all wheellocks is provided with a sliding cover that has two purposes: the first is to contain the priming powder and afford it some protection from the elements (the second is examined below, under 'Operation'). The pan cover may be slid open and closed by hand, but it is also attached to an arm inside the lock plate, which is acted upon by the eccentric on the shaft of the wheel. The sear or trigger mechanism The trigger engages one arm of a "z"-shaped sear pivoting at its centre between two upstanding brackets riveted or brazed to the inside of the lockplate. The other arm of the sear passes through a hole in the lockplate and engages in a blind hole on the inner side of the wheel, effectively locking it and preventing any rotation. It can do so only because a secondary sear, or wedge, is pressed under the rear arm of the sear (that is, between the lockplate and the sear) when the forward part of the sear engages the recess in the wheel. When the trigger is pulled, the secondary lever is withdrawn from its position, the strong pull of the mainspring pushes the now unsupported main sear back into the lock, and the wheel is free to rotate. The mechanism may seem overbuilt, but it prevents the trigger from working directly against the very powerful mainspring, as is the case with the vertically acting sears in flint and percussion locks, and even in modern firearms that still have cocks (revolvers). Preparing to fire First, the dog is rotated forward to the "safe" position, and the priming pan is pushed open (if it is not already so). After loading a powder charge and ball through the muzzle in the usual way, the operator takes his "spanner", slips it onto the square section of the wheel shaft, and turns it until a click is heard (about one-half to three-quarters of a revolution) and the wheel is felt to lock in place, whereupon the spanner is withdrawn. When the wheel is turned, the mainspring is tensioned via the chain, which is wound partially around the shaft. The click is the sound of one end of the sear engaging in the blind hole on the inside of the wheel, immobilising it. The pan is then primed with powder, and the pan cover is pulled shut. Finally, the dog is pulled back so that the pyrite in its jaws rests on top of the pan cover, under some pressure from the spring at the toe of its arm. Operation On pulling the trigger of a wheellock firearm, the sear effects a slight rotation as described above. The end of the sear arm (which has hitherto locked the wheel and prevented it from turning) is disengaged, leaving the wheel free to turn under the tension of the mainspring. There is a subtlety here of vital importance: the "hole" in the side of the wheel into which the sear engages is not a parallel-sided shaft. If it were, then under the tremendous tension of the mainspring a huge force on the trigger would be required to disengage the sear. Nor is the tip of the sear arm cylindrical, which would have a similar effect. Rather, the "hole" is a depression in the wheel (like a small crater), and the sear has a rounded end: the wheel is locked by lateral force on the shaft of the wheel rather than vertical force on the sear. As soon as the wheel is released by the sear, the longer arm of the mainspring pulls the chain engaged in it.
The other end of the chain being fixed to the cam on the wheel shaft, the shaft rotates at high speed, whilst the rotating cam pushes forward the arm to which the pan cover is attached, causing the pan cover to slide forward towards the muzzle of the piece and the pyrite to fall (under tension of the dog spring) onto the now rotating wheel. That is the second purpose of a sliding pan cover: were the pyrite to engage a stationary wheel, it would almost certainly jam the mechanism, but the built-in delay allows the pyrite to slip off the sliding pan cover onto an already rotating wheel. A more modern development has been the use of a ball bearing between the wheel and the sear; this design allows a smoother and lighter trigger pull, requiring less force to operate. The fast rotation of the wheel against the pyrite produces white-hot sparks that ignite the powder in the pan; the flash is transferred to the main charge in the breech of the barrel via the vent, and the gun discharges. The wheellock took around a minute to load, prepare and fire. Many contemporary illustrations of a wheellock pistol in action show the gun held slightly rotated (at about a 45-degree angle from the horizontal) rather than vertically as with a hand cannon, to ensure that the priming powder in the pan lay against the vent in the barrel and to avoid a 'flash in the pan', or misfire. This was not the case for the flintlock, where the sparks had to fall vertically a certain distance onto the pan. History Though not a firearm, a land mine with a wheellock-like mechanism is described in the Huolongjing, a 14th-century Chinese military manual. When the mine was stepped on, a pin was dislodged, causing a weight to fall and spin a drum attached to two steel wheels; the wheels struck sparks against a flint, igniting the fuse. The invention of the wheellock in Europe can be placed at about 1500. A vocal group of scholars believes Leonardo da Vinci was the inventor. Drawings made by Leonardo of a wheellock mechanism date (depending on the authority) from either the mid-1490s or the first decade of the 16th century. However, a drawing from a book of German inventions (dated 1505) and a 1507 reference to the purchase of a wheellock in Austria may indicate that the inventor was instead an unknown German mechanic. In 1517 and 1518, the first gun control laws banning the wheellock were proclaimed by the Emperor Maximilian I, initially in Austria and later throughout the Holy Roman Empire. Several Italian states followed suit in the 1520s and 1530s, another argument used by the pro-German camp. As Lisa Jardine relates in her account of the assassination of William the Silent of the Netherlands in 1584, the small size, ease of concealment and easy loading of the wheellock, compared to larger and more cumbersome hand-held weapons, meant that it was used for the killing of public figures, such as Francis, Duke of Guise, and William himself. Jardine also argues that a stray wheellock pistol shot may have been responsible for the St. Bartholomew's Day massacre of French Huguenots in 1572. Wheellock pistols were in common use during the Thirty Years' War (1618–1648) on both sides, for cavalry and officers. Around 1650 the flintlock began to replace the wheellock, as it was cheaper and easier to use.
Wheel-lock firearms were never mass-produced for military purposes, but the best-preserved armoury collection, at the Landeszeughaus in Graz, Austria, contains over 3,000 examples, many of which were produced in small batches for military units. Features Among the advantages of the wheellock were better resistance to rain or damp conditions than the matchlock and the absence of a telltale glow or smell from a burning slow match, itself a hazard in proximity to gunpowder. A slow match could be next to impossible to light in rain, but the wheellock allowed sparks to be generated in any weather, and the priming pan was fitted with a cover that was not opened until the instant the gun was fired. This made it feasible for the first time to conceal a firearm under clothing. The high production cost and complexity of the mechanism, however, hindered the wheellock's widespread adoption. A highly skilled gunsmith was required to build the mechanism, and the variety of parts and complex design made it liable to malfunction if not carefully maintained. Early models also had trouble with unreliable springs, but the problem was quickly solved. The wheellock was used along with the matchlock until both were replaced by the simpler and less costly flintlock by the late 17th century. The wheellock mechanism, however, gave faster ignition than the flintlock, because the sparks were produced directly in the pan rather than having to fall a certain distance from the frizzen.
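The loading and firing sequence described above proceeds as a fixed series of steps. The following Python sketch is purely illustrative: the state names and their ordering paraphrase this article's description and are not a historical specification.

```python
# Illustrative sketch only: the wheellock loading and firing sequence
# described above, modeled as a simple ordered state machine. States and
# their order paraphrase the text; they are not a historical spec.

from enum import Enum

class Step(Enum):
    DOG_SAFE = 1   # dog rotated forward to the "safe" position, pan open
    LOADED = 2     # powder charge and ball loaded through the muzzle
    SPANNED = 3    # wheel wound about 1/2 to 3/4 turn; sear clicks into place
    PRIMED = 4     # pan primed with powder, pan cover pulled shut
    READY = 5      # dog pulled back so the pyrite rests on the pan cover
    FIRED = 6      # trigger releases the sear: the wheel spins, the pan
                   # cover slides open, and the pyrite sparks on the wheel

class Wheellock:
    def __init__(self) -> None:
        self.step = None

    def advance(self, step: Step) -> None:
        # Each step must follow the previous one, per the text's description.
        expected = 1 if self.step is None else self.step.value + 1
        if step.value != expected:
            raise ValueError(f"out of order: expected step {expected}, got {step}")
        self.step = step

lock = Wheellock()
for step in Step:          # Enum iteration preserves definition order
    lock.advance(step)
print(lock.step)           # Step.FIRED
```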
Technology
Mechanisms_2
null
452563
https://en.wikipedia.org/wiki/Matchlock
Matchlock
A matchlock or firelock is a historical type of firearm in which the gunpowder is ignited by a burning piece of flammable cord or twine that is brought into contact with the gunpowder by a mechanism the musketeer activates by pulling a lever or trigger with their finger. This firing mechanism was an improvement over the hand cannon, which lacked a trigger and required the musketeer or an assistant to apply a match directly to the gunpowder by hand. The matchlock mechanism allowed the musketeer to apply the match himself without losing his concentration. Description The classic matchlock gun held a burning slow match in a clamp at the end of a small curved lever known as the serpentine. Upon the pull of a lever (or, in later models, a trigger) protruding from the bottom of the gun and connected to the serpentine, the clamp dropped down, lowering the smoldering match into the flash pan and igniting the priming powder. The flash from the primer traveled through the touch hole, igniting the main charge of propellant in the gun barrel. On release of the lever or trigger, the spring-loaded serpentine would move in reverse to clear the pan. For obvious safety reasons, the match would be removed before the gun was reloaded. Both ends of the match were usually kept alight in case one end should be accidentally extinguished. Earlier types had only an S-shaped serpentine pinned to the stock either behind or in front of the flash pan (the so-called "serpentine lock"), one end of which was manipulated to bring the match into the pan. A later addition to the gun was the rifled barrel. This made the gun much more accurate at longer distances, but it had drawbacks, the main one being that it took much longer to reload because the bullet had to be pounded down into the barrel. A type of matchlock called the snap matchlock was developed, in which the serpentine was brought to firing position by a weak spring and activated by pressing a button, pulling a trigger, or even pulling a short string passing into the mechanism. As the impact with the flash pan often extinguished the match, this type was not used by soldiers, but it was often used in fine target weapons, where the precision of the shot was more important than quick repetition. An inherent weakness of the matchlock was the necessity of keeping the match constantly lit. This was chiefly a problem in wet weather, when a damp match cord was difficult to light and to keep burning. Another drawback was the burning match itself. At night, the match would glow in the darkness, possibly revealing the carrier's position. The distinctive smell of burning match cord was also a giveaway of a musketeer's position. It was also quite dangerous when soldiers were carelessly handling large quantities of gunpowder (for example, while refilling their powder horns) with lit matches present. This was one reason why soldiers in charge of transporting and guarding ammunition were amongst the first to be issued self-igniting guns such as the wheellock and snaphance. The matchlock was also uneconomical to keep ready for long periods of time, as keeping both ends of a match lit every night for a year required a mile of match. History The earliest form of matchlock in Europe appeared by 1411, and in the Ottoman Empire by 1425. This early arquebus was a hand cannon with a serpentine lever to hold the match; it did not yet have the matchlock mechanism traditionally associated with the weapon. The exact date at which the matchlock mechanism was added is disputed.
The first references to the use of what may have been matchlock arquebuses (tüfek) by the Janissary corps of the Ottoman Army date them from 1394 to 1465. However, it is unclear whether these were arquebuses or small cannon, even as late as 1444; according to Gábor Ágoston, the fact that they were listed separately from cannon in mid-15th-century inventories suggests they were handheld firearms, though he admits this is disputable. Godfrey Goodwin dates the first use of the matchlock arquebus by the Janissaries to no earlier than 1465. The idea of a serpentine later appeared in an Austrian manuscript dated to the mid-15th century. The first dated illustration of a matchlock mechanism dates to 1475, and by the 16th century matchlocks were universally used. During this time the latest tactic in using the matchlock was to line up and send off a volley of musket balls at the enemy. Such a volley was much more effective than single soldiers trying to hit individual targets. Robert Elgood theorizes that the armies of the Italian states used the arquebus in the 15th century, but this may have been a type of hand cannon rather than a matchlock with a trigger mechanism. He agrees that the matchlock first appeared in Western Europe during the 1470s in Germany. Improved versions of the Ottoman arquebus were transported to India by Babur in 1526. The matchlock is claimed to have been introduced to China by the Portuguese. The Chinese obtained the matchlock arquebus technology from the Portuguese in the 16th century, and matchlock firearms were used by the Chinese into the 19th century. The Chinese used the term "bird-gun" to refer to muskets, and Turkish muskets may have reached China before Portuguese ones. In Japan, the first documented introduction of the matchlock, which became known as the tanegashima, was through the Portuguese in 1543. The tanegashima seems to have been based on snap matchlocks produced in the armory of Goa in Portuguese India, which had been captured by the Portuguese in 1510. While the Japanese were technically able to produce tempered steel (e.g., sword blades), they preferred to use work-hardened brass springs in their matchlocks. The name tanegashima came from the island where a Chinese junk carrying Portuguese adventurers was driven to anchor by a storm. The lord of the Japanese island, Tanegashima Tokitaka (1528–1579), purchased two matchlock guns from the Portuguese and put a swordsmith to work copying the matchlock barrel and firing mechanism. Within a few years, the use of the tanegashima in battle forever changed the way war was fought in Japan. Despite the appearance of more advanced ignition systems, such as the wheellock and the snaphance, the low cost of production, simplicity, and high availability of the matchlock kept it in use in European armies until it left service around 1750, eventually being completely replaced by the flintlock as the foot soldier's main armament. In Japan, matchlocks continued to see military use up to the mid-19th century. In China, matchlock guns were still being used by imperial army soldiers in the middle decades of the 19th century. There is evidence that matchlock rifles may have been in use among some peoples in Christian Abyssinia in the late Middle Ages. Although modern rifles were imported into Ethiopia during the 19th century, contemporary British historians noted that, along with slingshots, matchlock weapons were used by the elderly for self-defense and by the militaries of the Ras.
Under Qing rule, the Hakka on Taiwan owned matchlock muskets, and Han people traded and sold matchlock muskets to the Taiwanese aborigines. During the Sino-French War, the Hakka and the aborigines used their matchlock muskets against the French in the Keelung Campaign and the Battle of Tamsui. The Hakka used their matchlock muskets to resist the Japanese invasion of Taiwan (1895), and Han Taiwanese and aborigines conducted an insurgency against Japanese rule. 20th century use Arabian Bedouin families continued using matchlocks well into the 20th century, and matchlocks were often passed down as family heirlooms. The reliability of the matchlock made it the weapon of choice for Bedouins, who sometimes chose to convert flintlocks into matchlocks. Tibetans used matchlocks from as early as the sixteenth century until very recently. The early 20th-century explorer Sven Hedin encountered Tibetan tribesmen on horseback armed with matchlock rifles along the Tibetan border with Xinjiang. Tibetan nomad fighters used arquebuses for warfare during the annexation of Tibet by the People's Republic of China as late as the second half of the 20th century, and Tibetan nomads reportedly still use matchlock rifles to hunt wolves and other predatory animals. These matchlock arquebuses typically feature a long, sharpened, retractable forked stand. Literary references A Spanish matchlock, purchased in Holland, plays an important role in Walter D. Edmonds' Newbery Award-winning children's novel The Matchlock Gun.
Technology
Mechanisms_2
null
452577
https://en.wikipedia.org/wiki/Free%20body%20diagram
Free body diagram
In physics and engineering, a free body diagram (FBD; also called a force diagram) is a graphical illustration used to visualize the applied forces, moments, and resulting reactions on a free body in a given condition. It depicts a body or connected bodies together with all the applied forces, moments, and reactions that act on them. The body may consist of multiple internal members (such as a truss) or be a compact body (such as a beam). A series of free bodies and other diagrams may be necessary to solve complex problems. Sometimes, in order to calculate the resultant force graphically, the applied forces are arranged as the edges of a polygon of forces, or force polygon (see below). Free body A body is said to be "free" when it is singled out from other bodies for the purposes of dynamic or static analysis. The object does not have to be "free" in the sense of being unforced, and it may or may not be in a state of equilibrium; rather, it is not fixed in place and is thus "free" to move in response to forces and torques it may experience. Figure 1 shows, on the left, green, red, and blue widgets stacked on top of each other, with the red cylinder as the body of interest (it may be necessary to calculate the stress to which it is subjected, for example). On the right, the red cylinder has become the free body. In figure 2, the interest has shifted to just the left half of the red cylinder, which is now the free body on the right. The example illustrates the context sensitivity of the term "free body": a cylinder can be part of a free body, it can be a free body by itself, and, as it is composed of parts, any of those parts may be a free body in itself. Figures 1 and 2 are not yet free body diagrams; in a completed free body diagram, the free body would be shown with the forces acting on it. Purpose Free body diagrams are used to visualize forces and moments applied to a body and to calculate reactions in mechanics problems. These diagrams are frequently used both to determine the loading of individual structural components and to calculate internal forces within a structure. They are used by most engineering disciplines, from biomechanics to structural engineering. In the educational environment, a free body diagram is an important step in understanding certain topics, such as statics, dynamics and other forms of classical mechanics. Features A free body diagram is not a scaled drawing; it is a diagram. The symbols used in a free body diagram depend upon how the body is modeled. Free body diagrams consist of: a simplified version of the body (often a dot or a box); forces shown as straight arrows pointing in the direction they act on the body; moments shown as curves with an arrowhead, or as vectors with two arrowheads, pointing in the direction they act on the body; one or more reference coordinate systems; and, by convention, hash marks through the stems of vectors representing reactions to applied forces. The number of forces and moments shown depends upon the specific problem and the assumptions made; common assumptions are neglecting air resistance and friction and assuming rigid body action. In statics all forces and moments must balance to zero; the physical interpretation is that if they do not, the body is accelerating and the principles of statics do not apply. In dynamics the resultant forces and moments can be non-zero. Free body diagrams need not represent an entire physical body; portions of a body can be selected for analysis.
This technique allows the calculation of internal forces by making them appear external, so that analysis becomes possible. It can be applied multiple times to calculate internal forces at different locations within a physical body. For example, consider a gymnast performing the iron cross: modeling the ropes and the person together allows calculation of the overall forces (body weight, neglecting rope weight, breezes, buoyancy, electrostatics, relativity, rotation of the earth, etc.). Then removing the person and showing only one rope gives the force direction in that rope. Looking only at the person, the forces on the hands can be calculated. Then, looking only at the arm, the forces and moments at the shoulder can be calculated, and so on, until the component to be analyzed can be isolated. Modeling the body A body may be modeled in three ways: as a particle. This model may be used when rotational effects are zero or of no interest, even though the body itself may be extended. The body may be represented by a small symbolic blob, and the diagram reduces to a set of concurrent arrows. A force on a particle is a bound vector. as rigid and extended. Stresses and strains are of no interest, but rotational effects are. A force arrow should lie along the line of force, but where along that line is irrelevant. A force on an extended rigid body is a sliding vector. as non-rigid and extended. The point of application of a force becomes crucial and has to be indicated on the diagram. A force on a non-rigid body is a bound vector. Some use the tail of the arrow to indicate the point of application; others use the tip. What is included An FBD represents the body of interest and the external forces acting on it. The body: this is usually a schematic depending on the body (particle or extended, rigid or non-rigid) and on the questions to be answered. Thus, if rotation of the body and torque are under consideration, an indication of the size and shape of the body is needed. For example, the brake dive of a motorcycle cannot be found from a single point, and a sketch with finite dimensions is required. The external forces: these are indicated by labelled arrows. In a fully solved problem, a force arrow is capable of indicating the direction and the line of action, the magnitude, the point of application, and whether it is a reaction, as opposed to an applied force, if a hash is present through the stem of the arrow. Often a provisional free body diagram is drawn before everything is known; the purpose of the diagram is to help determine the magnitude, direction, and point of application of the external loads. When a force is first drawn, its length may not indicate the magnitude, its line may not correspond to the exact line of action, and even its orientation may not be correct. External forces known to have a negligible effect on the analysis may be omitted after careful consideration (e.g., the buoyancy force of the air in the analysis of a chair, or atmospheric pressure in the analysis of a frying pan). External forces acting on an object may include friction, gravity, normal force, drag, tension, or a human force due to pushing or pulling. When in a non-inertial reference frame (see coordinate system, below), fictitious forces, such as the centrifugal pseudoforce, are appropriate. At least one coordinate system is always included, chosen for convenience; judicious selection of a coordinate system can make defining the vectors simpler when writing the equations of motion or statics. The x direction may be chosen to point down the ramp in an inclined plane problem, for example.
In that case the friction force has only an x component, and the normal force has only a y component. The force of gravity then has components in both the x and y directions: mg sin(θ) in the x direction and mg cos(θ) in the y direction, where θ is the angle between the ramp and the horizontal. Exclusions A free body diagram should not show: bodies other than the free body; constraints (the body is not free from constraints; the constraints have just been replaced by the forces and moments they exert on the body); forces exerted by the free body (a diagram showing the forces exerted both on and by a body is likely to be confusing, since all the forces will cancel out; by Newton's third law, if body A exerts a force on body B then B exerts an equal and opposite force on A, which should not be confused with the equal and opposite forces that are necessary to hold a body in equilibrium); internal forces (for example, if an entire truss is being analyzed, the forces between the individual truss members are not included); or velocity and acceleration vectors. Analysis In an analysis, a free body diagram is used by summing all the forces and moments (often accomplished along or about each of the axes). When the sum of all forces and moments is zero, the body is at rest or moving and/or rotating at constant velocity, by Newton's first law. If the sum is not zero, then the body is accelerating in a direction or about an axis, according to Newton's second law. Forces not aligned to an axis Determining the sum of the forces and moments is straightforward if they are aligned with the coordinate axes, but it is more complex if some are not. It is convenient to use the components of the forces, in which case the symbols ΣFx and ΣFy are used instead of ΣF (the variable M is used for moments). Forces and moments that are at an angle to a coordinate axis can be rewritten as two vectors that are equivalent to the original (or three, for three-dimensional problems), each directed along one of the axes (Fx and Fy). Example: A block on an inclined plane A simple free body diagram, shown above, of a block on a ramp illustrates this. All external supports and structures have been replaced by the forces they generate. These include: mg, the product of the mass of the block and the gravitational acceleration: its weight; N, the normal force of the ramp; and Ff, the friction force of the ramp. The force vectors show the direction and point of application and are labelled with their magnitude. The diagram contains a coordinate system that can be used when describing the vectors. Some care is needed in interpreting the diagram: the normal force has been shown to act at the midpoint of the base, but if the block is in static equilibrium its true location is directly below the centre of mass, where the weight acts, because that is necessary to compensate for the moment of the friction force. Unlike the weight and normal force, which are expected to act at the tip of the arrow, the friction force is a sliding vector, and thus the point of application is not relevant; the friction acts along the whole base. Polygon of forces In the case of two applied forces, their sum (resultant force) can be found graphically using a parallelogram of forces. To graphically determine the resultant force of multiple forces, the acting forces can be arranged as edges of a polygon by attaching the beginning of one force vector to the end of another in an arbitrary order.
The vector value of the resultant force is then determined by the missing edge of the polygon. In the diagram, the forces P1 to P6 are applied to the point O. The polygon is constructed starting with P1 and P2, using the parallelogram of forces (vertex a). The process is repeated (adding P3 yields the vertex b, etc.). The remaining edge of the polygon, O-e, represents the resultant force R. Kinetic diagram In dynamics, a kinetic diagram is a pictorial device used in analyzing mechanics problems when there is determined to be a net force and/or moment acting on a body. Kinetic diagrams are related to, and often used with, free body diagrams, but they depict only the net force and moment rather than all of the forces being considered. Kinetic diagrams are not required to solve dynamics problems; some argue against their use in teaching dynamics, in favor of other methods that they view as simpler. They appear in some dynamics texts but are absent from others.
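Both the inclined-plane example and the polygon of forces reduce to elementary vector arithmetic, which the following Python sketch illustrates. The mass, angle, and force values below are made-up illustrative numbers, not taken from the article.

```python
# A minimal numerical sketch of the block-on-a-ramp example above:
# resolve the weight into components along the chosen axes (x down the
# ramp, y normal to it) and check static equilibrium.

import math

m = 10.0                   # block mass, kg (illustrative value)
g = 9.81                   # gravitational acceleration, m/s^2
theta = math.radians(30)   # ramp angle from the horizontal (illustrative)

# Components of the weight mg in the ramp-aligned coordinate system:
w_x = m * g * math.sin(theta)   # mg sin(theta), down the slope
w_y = m * g * math.cos(theta)   # mg cos(theta), into the slope

N = w_y    # normal force balances the y component
Ff = w_x   # static friction balances the x component (assuming no sliding)

# In statics, the sums of forces must vanish (Newton's first law):
assert math.isclose(w_x - Ff, 0.0) and math.isclose(N - w_y, 0.0)
print(f"N = {N:.1f} N, Ff = {Ff:.1f} N")

# Polygon of forces: the resultant of several coplanar forces is their
# tip-to-tail vector sum; the closing edge of the polygon is the resultant.
forces = [(3.0, 0.0), (0.0, 4.0), (-1.0, -2.0)]   # arbitrary example forces
R = tuple(map(sum, zip(*forces)))
print(f"resultant R = {R}")   # (2.0, 2.0)
```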
Physical sciences
Classical mechanics
Physics
452667
https://en.wikipedia.org/wiki/Wave%20packet
Wave packet
In physics, a wave packet (also known as a wave train or wave group) is a short burst of localized wave action that travels as a unit, outlined by an envelope. A wave packet can be analyzed into, or can be synthesized from, a potentially infinite set of component sinusoidal waves of different wavenumbers, with phases and amplitudes such that they interfere constructively only over a small region of space, and destructively elsewhere. Any signal of a limited width in time or space requires many frequency components around a center frequency within a bandwidth inversely proportional to that width; even a gaussian function is considered a wave packet, because its Fourier transform is a "packet" of waves with frequencies clustered around a central frequency. Each component wave function, and hence the wave packet, is a solution of a wave equation. Depending on the wave equation, the wave packet's profile may remain constant (no dispersion) or it may change (dispersion) while propagating. Historical background Ideas related to wave packets – modulation, carrier waves, phase velocity, and group velocity – date from the mid-1800s. The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was given by Rayleigh in his "Theory of Sound" in 1877. Erwin Schrödinger introduced the idea of wave packets just after publishing his famous wave equation. He solved his wave equation for a quantum harmonic oscillator, introduced the superposition principle, and used it to show that a compact state could persist. While this work did result in the important concept of coherent states, the wave packet concept did not endure. The year after Schrödinger's paper, Werner Heisenberg published his paper on the uncertainty principle, showing in the process that Schrödinger's results applied only to quantum harmonic oscillators, not, for example, to the Coulomb potential characteristic of atoms. The following year, 1927, Charles Galton Darwin explored Schrödinger's equation for an unbound electron in free space, assuming an initial Gaussian wave packet. Darwin showed that at a time later the position of the packet traveling at velocity would be , where is the uncertainty in the initial position. Later in 1927, Paul Ehrenfest showed that the time for a matter wave packet of width and mass to spread by a factor of 2 is . Since is so small, wave packets on the scale of macroscopic objects, with large width and mass, double only on cosmic time scales. Significance in quantum mechanics Quantum mechanics describes the nature of atomic and subatomic systems using Schrödinger's wave equation. The classical limit of quantum mechanics and many formulations of quantum scattering use wave packets formed from various solutions to this equation. Quantum wave packet profiles change while propagating; they show dispersion. Physicists have concluded that "wave packets would not do as representations of subatomic particles". Wave packets and the classical limit Schrödinger developed wave packets in the hope of interpreting quantum wave solutions as locally compact wave groups. Such packets trade off localization in position for spread in momentum. In the coordinate representation of the wave (such as the Cartesian coordinate system), the position of the particle's localized probability is specified by the position of the packet solution.
The narrower the spatial wave packet, and therefore the better localized the position of the wave packet, the larger the spread in the momentum of the wave. This trade-off between spread in position and spread in momentum is a characteristic feature of the Heisenberg uncertainty principle. One kind of optimal tradeoff minimizes the product of position uncertainty and momentum uncertainty . If such a packet is placed at rest, it stays at rest: the average values of its position and momentum match those of a classical particle. However, it spreads out in all directions with a velocity given by the optimal momentum uncertainty . The spreading is so fast that, over the distance of a single circuit around an atom, the wave packet becomes unrecognizable. Wave packets and quantum scattering Particle interactions are called scattering in physics; wave packet mathematics plays an important role in quantum scattering approaches. A monochromatic (single-momentum) source produces convergence difficulties in scattering models. Scattering problems also have classical limits. Whenever the scattering target (for example, an atom) is much smaller than the wave packet, the center of the wave packet follows classical scattering trajectories. In other cases, the wave packet distorts and scatters as it interacts with the target. Basic behaviors Non-dispersive Without dispersion, the wave packet maintains its shape as it propagates. As an example of propagation without dispersion, consider wave solutions to the following wave equation from classical physics , where is the speed of the wave's propagation in a given medium. Using the physics time convention, , the wave equation has plane-wave solutions , where the relation between the angular frequency and the angular wave vector is given by the dispersion relation , such that . This relation must hold for the plane wave to be a solution to the wave equation. As the relation is linear, the wave equation is said to be non-dispersive. To simplify, consider the one-dimensional wave equation with . Then the general solution is , where the first and second terms represent waves propagating in the positive and negative directions, respectively. A wave packet is a localized disturbance that results from the sum of many different wave forms. If the packet is strongly localized, more frequencies are needed to allow constructive superposition in the region of localization and destructive superposition outside that region. From the basic one-dimensional plane-wave solutions, a general form of a wave packet can be expressed as , where the amplitude , containing the coefficients of the wave superposition, follows from taking the inverse Fourier transform of a "sufficiently nice" initial wave evaluated at : , and the factor comes from Fourier transform conventions. For example, choosing we obtain and finally . The non-dispersive propagation of the real or imaginary part of this wave packet is presented in the animation above. Dispersive By contrast, in the case of dispersion, a wave changes shape during propagation. For example, the free Schrödinger equation has plane-wave solutions of the form , where is a constant and the dispersion relation satisfies , with the subscripts denoting unit vector notation. As the dispersion relation is non-linear, the free Schrödinger equation is dispersive. In this case, the wave packet is given by , where once again is simply the Fourier transform of . If (and therefore ) is a Gaussian function, the wave packet is called a Gaussian wave packet.
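The difference between the linear and non-linear dispersion relations just described can be illustrated numerically. In the following Python sketch (assuming units in which c = 1 and ħ = m = 1), the group velocity dω/dk is estimated by a central difference:

```python
# A short numerical illustration (a sketch) of the two dispersion
# relations discussed above: omega = c*k for the classical wave equation
# (non-dispersive) and omega = k**2/2 for the free Schrödinger equation
# (dispersive), in units with c = 1 and hbar = m = 1.

def group_velocity(omega, k, dk=1e-6):
    # v_g = d(omega)/dk, estimated by a central difference
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

def phase_velocity(omega, k):
    # v_p = omega / k
    return omega(k) / k

c = 1.0
wave_eq = lambda k: c * k          # linear relation: non-dispersive
schrodinger = lambda k: k**2 / 2   # quadratic relation: dispersive

for k in (0.5, 1.0, 2.0):
    print(f"k={k}: wave eq v_p={phase_velocity(wave_eq, k):.3f}, "
          f"v_g={group_velocity(wave_eq, k):.3f} | "
          f"Schrödinger v_p={phase_velocity(schrodinger, k):.3f}, "
          f"v_g={group_velocity(schrodinger, k):.3f}")
# For omega = c*k, v_p = v_g = c at every k: all components travel
# together and the packet keeps its shape. For omega = k**2/2, v_g = k
# varies with k, so components separate and the packet spreads.
```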
For example, the solution to the one-dimensional free Schrödinger equation (with , , and ħ set equal to one) satisfying the initial condition , representing a wave packet localized in space at the origin as a Gaussian function, is seen to be . An impression of the dispersive behavior of this wave packet is obtained by looking at the probability density: . It is evident that this dispersive wave packet, while moving with constant group velocity , delocalizes rapidly: it has a width increasing with time as , so that eventually it diffuses into an unlimited region of space. Gaussian wave packets in quantum mechanics The above dispersive Gaussian wave packet, unnormalized and centered at the origin at =0, can now be written in 3D, in standard units: . The Fourier transform is also a Gaussian in terms of the wavenumber, the k-vector, . With and its inverse adhering to the uncertainty relation , can be considered the square of the width of the wave packet, whereas its inverse can be written as . Each separate wave only phase-rotates in time, so that the time-dependent Fourier-transformed solution is . The inverse Fourier transform is still a Gaussian, but now the parameter has become complex, and there is an overall normalization factor. The integral of over all space is invariant, because it is the inner product of with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy eigenstate , the inner product changes in time in a simple way: its phase rotates with a frequency determined by the energy of . When has zero energy, like the infinite-wavelength wave, it does not change at all. For a given , the phase of the wave function varies with position as . It varies quadratically with position, which means that it differs from multiplication by a linear phase factor, as would be the case when imparting a constant momentum to the wave packet. In general, the phase of a Gaussian wave packet has both a linear term and a quadratic term. The coefficient of the quadratic term begins by increasing from towards as the Gaussian wave packet becomes sharper; then, at the moment of maximum sharpness, the phase of the wave function varies linearly with position. Afterwards, the coefficient of the quadratic term increases from towards as the Gaussian wave packet spreads out again. The integral is also invariant, which is a statement of the conservation of probability. Explicitly, , where is the distance from the origin, the speed of the particle is zero, and the width is given by , which is at the (arbitrarily chosen) time , while eventually growing linearly in time as , indicating wave-packet spreading. For example, if an electron wave packet is initially localized in a region of atomic dimensions (i.e., m), then the width of the packet doubles in about s. Clearly, particle wave packets spread out very rapidly indeed (in free space): for instance, after ms, the width will have grown to about a kilometer. This linear growth is a reflection of the (time-invariant) momentum uncertainty: the wave packet is confined to a narrow , and so has a momentum which is uncertain (according to the uncertainty principle) by the amount , a spread in velocity of , and thus a spread in the future position by . The uncertainty relation is then a strict inequality, very far from saturation indeed! The initial uncertainty has now increased by a factor of (for large ).
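The electron example above can be checked numerically with the standard width formula for a free Gaussian wave packet. The following Python sketch assumes one common convention for the initial width a; the resulting doubling time and 1 ms width reproduce the order-of-magnitude figures quoted above.

```python
# A sketch checking the electron example above, using the standard width
# formula for a free Gaussian wave packet (one common convention):
# width(t) = a * sqrt(1 + (hbar*t / (2*m*a**2))**2), with initial width a.

import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
a = 1e-10                 # initial width ~ atomic dimensions, m

def width(t):
    return a * math.sqrt(1 + (hbar * t / (2 * m_e * a**2))**2)

# Doubling time: width(t) = 2a  =>  t = sqrt(3) * 2 * m * a**2 / hbar
t_double = math.sqrt(3) * 2 * m_e * a**2 / hbar
print(f"doubling time   ~ {t_double:.1e} s")      # ~ 3e-16 s
print(f"width after 1 ms ~ {width(1e-3):.1e} m")  # ~ 6e+02 m, roughly a kilometer
```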
The 2D case A Gaussian 2D quantum wave function: , where . The Airy wave train In contrast to the above Gaussian wave packet, which moves at constant group velocity and always disperses, there exists a wave function based on Airy functions that propagates freely without envelope dispersion, maintaining its shape, and accelerates in free space: , where, for simplicity (and nondimensionalization), choosing , , and B an arbitrary constant results in . There is no dissonance with Ehrenfest's theorem in this force-free situation, because the state is both non-normalizable and has an undefined (infinite) for all times. (To the extent that it could be defined, for all times, despite the apparent acceleration of the front.) The Airy wave train is the only dispersionless wave in one-dimensional free space; in higher dimensions, other dispersionless waves are possible. In phase space, this is evident in the pure-state Wigner quasiprobability distribution of this wave train, whose shape in x and p is invariant as time progresses, but whose features accelerate to the right in accelerating parabolas. The Wigner function satisfies the following. The three equalities demonstrate three facts: time evolution is equivalent to a translation in phase space by ; the contour lines of the Wigner function are parabolas of the form ; and time evolution is equivalent to a shearing in phase space along the -direction at speed . Note that the momentum distribution obtained by integrating over all is constant. Since this is the probability density in momentum space, it is evident that the wave function itself is not normalizable. Free propagator The narrow-width limit of the Gaussian wave packet solution discussed above is the free propagator kernel . For other differential equations, this is usually called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of . Returning to one dimension for simplicity, with m and ħ set equal to one, when is the infinitesimal quantity , the Gaussian initial condition, rescaled so that its integral is one, becomes a delta function, , so that its time evolution yields the propagator. Note that a very narrow initial wave packet instantly becomes infinitely wide, but with a phase which is more rapidly oscillatory at large values of x. This might seem strange: the solution goes from being localized at one point to being "everywhere" at all later times, but it is a reflection of the enormous momentum uncertainty of a localized particle, as explained above. Further note that the norm of the wave function is infinite, which is also correct, since the square of a delta function is divergent in the same way. The factor involving is an infinitesimal quantity which is there to make sure that integrals over are well defined. In the limit that , becomes purely oscillatory, and integrals of are not absolutely convergent. In the remainder of this section it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit ε→0 is to be taken only after the final state is calculated. The propagator is the amplitude for reaching point x at time t, when starting at the origin, x=0.
By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only now translated, . In the limit when t is small, the propagator goes to a delta function, but only in the sense of distributions: the integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of equals 1 at all times, since this integral is the inner product of K with the uniform wave function. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit ε→0 is taken at the very end. So the propagation kernel is the (future) time evolution of a delta function, and it is continuous, in a sense: it goes to the initial delta function at small times. If the initial wave function is an infinitely narrow spike at position , it becomes the oscillatory wave . Now, since every function can be written as a weighted sum of such narrow spikes, the time evolution of every initial wave function is determined by this propagation kernel . Thus, this is a formal way to express the fundamental solution, or general solution. The interpretation of this expression is that the amplitude for a particle to be found at point at time is the amplitude that it started at , times the amplitude that it went from to , summed over all the possible starting points. In other words, it is a convolution of the kernel with the arbitrary initial condition . Since the amplitude to travel from to after a time +' can be considered in two steps, the propagator obeys the composition identity , which can be interpreted as follows: the amplitude to travel from to in time +' is the sum of the amplitude to travel from to in time , multiplied by the amplitude to travel from to in time ', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral. Analytic continuation to diffusion The spreading of wave packets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is randomly walking, the probability density function satisfies the diffusion equation , where the factor of 2, which can be removed by rescaling either time or space, is only for convenience. A solution of this equation is the time-varying Gaussian function , which is a form of the heat kernel. Since the integral of ρt is constant while the width becomes narrow at small times, this function approaches a delta function at t=0, again only in the sense of distributions, so that for any test function . The time-varying Gaussian is the propagation kernel for the diffusion equation, and it obeys the convolution identity , which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator , which is the infinitesimal diffusion operator . A matrix has two indices, which in continuous space makes it a function of and '.
In this case, because of translation invariance, the matrix elements depend only on the difference of the positions, and a convenient abuse of notation is to refer to the operator, the matrix elements, and the function of the difference by the same name: H(x, x′) = H(x − x′). Translation invariance means that continuous matrix multiplication, C(x, x″) = ∫ A(x, x′) B(x′, x″) dx′, is essentially convolution, C(x − x″) = ∫ A(x − x′) B(x′ − x″) dx′. The exponential can be defined over a range of values of t which includes complex values, so long as integrals over the propagation kernel stay convergent: K_z(x) = e^{−zH} = (1/√(2πz)) e^{−x²/(2z)}. As long as the real part of z is positive, for large values of x, K_z is exponentially decreasing, and integrals over K_z are indeed absolutely convergent. The limit of this expression for z approaching the pure imaginary axis, z = ε + it, is the Schrödinger propagator encountered above, which illustrates the above time evolution of Gaussians. From the fundamental identity of exponentiation, or path integration, K_z ∗ K_{z′} = K_{z+z′} holds for all complex z values where the integrals are absolutely convergent, so that the operators are well defined. Thus, quantum evolution of a Gaussian, which is the complex diffusion kernel K, ψ₀(x) = K_a(x) = (1/√(2πa)) e^{−x²/(2a)}, amounts to the time-evolved state ψ_t(x) = K_{a+it}(x) = (1/√(2π(a + it))) e^{−x²/(2(a + it))}. This illustrates the above diffusive form of the complex Gaussian solutions.
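The convolution identity for the diffusion kernel lends itself to a quick numerical check. The following sketch (purely illustrative, in Python; the kernel and the factor-of-2 convention are those of this section) convolves two heat kernels on a grid and compares the result with the kernel at the summed time:

import numpy as np

# Heat kernel for d(rho)/dt = (1/2) d^2(rho)/dx^2, as in this section:
# K_t(x) = exp(-x^2/(2t)) / sqrt(2*pi*t)
def heat_kernel(x, t):
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

x = np.linspace(-30.0, 30.0, 6001)   # symmetric grid, odd point count
dx = x[1] - x[0]
t1, t2 = 0.7, 1.3

# Composition identity: K_{t1} convolved with K_{t2} equals K_{t1 + t2}.
convolved = np.convolve(heat_kernel(x, t1), heat_kernel(x, t2), mode="same") * dx
direct = heat_kernel(x, t1 + t2)

# Maximum pointwise discrepancy sits at discretization/roundoff level.
print(np.max(np.abs(convolved - direct)))

The same identity, K_z ∗ K_{z′} = K_{z+z′}, continues to hold for complex z with positive real part, which is the analytic continuation used above.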
Physical sciences
Waves
Physics
452905
https://en.wikipedia.org/wiki/Sniper%20rifle
Sniper rifle
A sniper rifle is a high-precision, long-range rifle. Requirements include high accuracy, reliability, mobility, concealment, and optics for anti-personnel, anti-materiel and surveillance uses by military snipers. The modern sniper rifle is a portable shoulder-fired rifle with either a bolt action or semi-automatic action, fitted with a telescopic sight for extreme accuracy and chambered for a high-ballistic-performance centerfire cartridge. History The Whitworth rifle was arguably the first long-range sniper rifle in the world. Designed in 1854 by Sir Joseph Whitworth, a prominent British engineer, it used barrels with a hexagonal form of polygonal rifling, which meant that the projectile did not have to "bite" into rifling grooves as with conventional rifling. His rifle was far more accurate than the Pattern 1853 Enfield, which had shown weaknesses during the Crimean War. At trials in 1857, which tested the accuracy and range of both weapons, Whitworth's design outperformed the Enfield at a rate of about three to one. The Whitworth rifle was able to hit the target at a range of 2,000 yards (around 1,830 meters), whereas the Enfield could only manage it at a distance of 1,400 yards (around 1,280 meters). During the American Civil War, Confederate sharpshooters equipped with Whitworth rifles were tasked with killing Union field artillery crews, and were responsible for killing Major General John Sedgwick, one of the highest-ranking officers killed during the Civil War, at the Battle of Spotsylvania Court House. During the Crimean War, the first optical sights were designed for fitting onto rifles. Much of this pioneering work was the brainchild of Colonel D. Davidson, using optical sights produced by Chance Brothers of Birmingham. This allowed a marksman to observe and target objects more accurately and at a greater distance than ever before. The telescopic sight, or scope, was originally fixed and could not be adjusted, which therefore limited its range. By the 1870s, the perfection of breech-loading magazine rifles led to sniper rifles having "effective accurate" ranges of up to a mile from their target. 20th century During the Second Boer War, the latest breech-loading rifles with magazines and smokeless powder were used by both sides. The British were equipped with the Lee–Metford rifle, while the Boers had received the latest Mauser Model 1895 rifles from Germany. In the open terrain of South Africa, the marksman was a crucial component in battle. The Lovat Scouts, a British Army unit formed in 1899, were renowned for the expert marksmanship and stalking skills of their personnel. The men wore ghillie suits for camouflage and were expertly skilled in observation. Hesketh Hesketh-Prichard said of them that "keener men never lived". After the Boer War, the Scouts became the first official sniper unit in the British Army. It was not until World War I that sniper rifles began to be used more regularly in battle and certain soldiers were given specialized training to use such a rifle. In Germany, these trained snipers were given rifles with telescopic sights, which illuminated at night in order to improve their accuracy. German gunsmiths fitted the scope above the barrel for optimal accuracy. During World War I, the accuracy of the sniper rifle was greatly improved. By the end of World War II, snipers were reported to provide "reasonable accuracy" out to a certain range, with anything beyond that being unpredictable.
It was during World Wars I and II that the word 'sniper' began to be used commonly, whereas previously those who were armed with sniper rifles were referred to as sharpshooters or marksmen. These marksmen, wielding sniper rifles such as the Karabiner 98k and the Mosin–Nagant Model 1891/30 sniper rifle, had a drastic and demoralizing effect on the battlefield. Soldiers would often remain hidden in foxholes or trenches so as not to expose themselves to the deadly accuracy of a sniper. Some soldiers even began to disregard orders from commanding officers to protect against potential harm, which thus broke down the chain of command on the battlefield. The sniper rifle soon acquired the reputation of being one of the most effective and ruthless weapons of war. Though sniper rifles had proved to be extremely effective in combat, there was still a great reluctance in many militaries to adopt a trained sniper regiment. To effectively use a sniper rifle, a soldier had to go through particularly rigorous training, and most trainees did not make it past the first week. Sniper training was also so expensive to conduct that, even as recently as 1970, the reasoning for having trained snipers as part of an army was deemed questionable. In Britain, sniper rifles were not seen as an integral part of an army until after the Germans boasted of their success with sniper teams during the early months of World War I. British army advisors believed that the telescopic sights attached to sniper rifles were too easily damaged and thus not well suited for military use. However, they soon realized that these telescopic sights could be improved and made sturdy enough to withstand the shock of rifle fire. Sniper rifles have continued to be used consistently throughout the later part of the 20th century in Korea, Vietnam and the Middle East as an integral part of the modern style of guerrilla warfare. 21st century The durability, accuracy and power of sniper rifles circa 2010 are beyond anything in use even ten years prior, and dwarf those of World War II sniper rifles. Modern sniper rifles are very reliable and are able to fire repeatedly without losing accuracy, whereas earlier sniper rifles were more liable to lose accuracy through wear and tear. Sniper rifles continue to be adapted and improved upon, with the effective range of sniper rifles (c. 2001) exceeding anything previously fielded, making them among the most accurate and efficient weapons in use. Classification Modern sniper rifles can be divided into two basic classes: military and law enforcement. Military Sniper rifles manufactured for military service are often designed for very high durability, range, reliability, sturdiness, serviceability, and repairability under adverse environmental and combat conditions, at the sacrifice of a small degree of accuracy. Military snipers and sharpshooters may also be required to carry their rifles and other equipment for long distances, making it important to minimize weight. Military organizations often operate under strict budget constraints, which influences the type and quality of sniper rifles they purchase. Law enforcement Sniper rifles built or modified for use in law enforcement are generally required to have the greatest possible accuracy, but do not need to have as long a range. Law enforcement-specific rifles are usually used in non-combat (often urban) environments, so they do not have the requirement to be as hardy or portable as military versions; they may also be smaller due to the decrease in required range.
Some of the first sniper rifles designed specifically to meet police and other law-enforcement requirements were developed for West German police after the Munich massacre at the 1972 Summer Olympics. Many police services and law enforcement organizations (such as the U.S. Secret Service) now use rifles designed for law enforcement purposes. The Heckler & Koch PSG1 is one rifle specifically designed to meet these criteria and is often referred to as an ideal example of this type of sniper rifle. The FN Special Police Rifle was built for, and is marketed to, law enforcement rather than military agencies. Distinguishing characteristics The features of a sniper rifle can vary widely depending on the specific tasks it is intended to perform. Features that may distinguish a sniper rifle from other weapons are the presence of a telescopic sight, unusually long overall length, a stock designed for firing from a prone position, and the presence of a bipod and other accessories. Telescopic sight Perhaps the single most important characteristic that sets a sniper rifle apart from other military or police small arms is the mounting of a telescopic sight, which is relatively easy to distinguish from the smaller optical aiming devices found on some modern assault rifles and submachine guns (such as reflector sights). The telescopic sights used on sniper rifles differ from other optical sights in that they offer much greater magnification (more than 4× and up to 40×) and have a much larger objective lens (40 to 50 mm in diameter) for a brighter image. Most telescopic sights employed in military or police roles also have special reticles to aid with judgment of distance, which is an important factor in accurate shot placement due to the bullet's trajectory. Action The choice between bolt action and semi-automatic (typically recoil or gas operation for the latter) is usually determined by the specific requirements of the sniper's role in a particular organization, with each design having advantages and disadvantages. For a given cartridge, a bolt-action rifle is cheaper to build and maintain, more reliable, and lighter, due to fewer moving parts in the mechanism. In addition, the absence of uncontrolled automatic cartridge case ejection helps avoid revealing the shooter's position. Semi-automatic weapons can serve both as a battle rifle and a sniper rifle, and allow for a greater rate (and hence volume) of fire. As such rifles may be modified service rifles, an additional benefit can be commonality of operation with the issued infantry rifle. A bolt action is most commonly used in both military and police roles due to its higher accuracy and ease of maintenance. Special forces operators tend to prefer semi-automatic rifles over bolt-action rifles for certain applications, such as detonating unexploded ordnance from a safe distance and penetrating reinforced structures that enemy combatants are using as cover. A designated marksman rifle (DMR) is less specialized than a typical military sniper rifle, often only intended to extend the range of a group of soldiers. Therefore, when a semi-automatic action is used, it is due to an overlap with the roles of standard-issue weapons. There may also be additional logistical advantages if the DMR uses the same ammunition as the more common standard-issue weapons. These rifles enable a higher volume of fire, but sacrifice some long-range accuracy.
They are frequently built from existing selective-fire battle rifles or assault rifles, often simply by adding a telescopic sight and adjustable stock. A police semi-automatic sniper rifle may be used in situations that require a single sniper to engage multiple targets in quick succession; military semi-automatics, such as the M110 SASS, are used in similar "target-rich" environments. Magazine In a military setting, logistical concerns are the primary determinant of the cartridge used, so sniper rifles are usually limited to rifle cartridges commonly used by the military force employing the rifle and to match-grade ammunition. Since large national militaries generally change slowly, military rifle ammunition is frequently battle-tested and well-studied by ammunition and firearms experts. Consequently, police forces tend to follow military practice in choosing a sniper rifle cartridge instead of trying to break new ground with less-perfected (but possibly better) ammunition. Before the introduction of the standard 7.62×51mm NATO (.308 Winchester) cartridge in the 1950s, standard military cartridges were the .30-06 Springfield (7.62×63mm) (United States), .303 British (7.7×56mmR) (United Kingdom), and 7.92×57mm Mauser (Germany). The .30-06 Springfield continued in service with U.S. Marine Corps snipers during the Vietnam War in the 1970s, well after the general adoption of the 7.62×51mm. In both the Western world and within NATO, the 7.62×51mm is currently the primary cartridge of choice for military and police sniper rifles. Worldwide, the trend is similar. The preferred sniper cartridge in Russia is another .30 caliber military cartridge, the 7.62×54mmR, which has slightly superior performance to the 7.62×51mm, although the rimmed design limits reliability compared to the latter cartridge. This cartridge was introduced in 1891, and both Russian sniper rifles of the modern era, the Mosin–Nagant and the SVD, are chambered for it. Certain commercial cartridges designed with only performance in mind, without the logistical constraints of most armies, also gained popularity in the 1990s. These include the 7mm Remington Magnum (7.2×64mm), .300 Winchester Magnum (7.62×67mm), and the .338 Lapua Magnum (8.6×70mm). These cartridges offer better ballistic performance and greater effective range than the 7.62×51mm. Though they are not as powerful as .50 caliber cartridges, rifles chambered for these cartridges are not as heavy as those chambered for .50 caliber ammunition, and are significantly more powerful than rifles chambered for 7.62×51mm. Snipers may also employ anti-materiel rifles in sniping roles against targets such as vehicles, equipment and structures, or for the long-range destruction of explosive devices; these rifles may also be used against personnel. Anti-materiel rifles tend to be semi-automatic and of a larger caliber than other sniper rifles, using cartridges such as the .50 BMG (12.7×99mm), 12.7×108mm, 14.5×114mm, and 20mm. These large cartridges are required to be able to fire projectiles containing payloads such as explosives, armor-piercing cores, incendiaries or combinations of these, such as the Raufoss Mk 211 projectile. Due to the considerable size and weight of anti-materiel rifles, two- or three-man sniper teams become necessary.
Barrel Barrels are normally of precise manufacture and of a heavier cross section than more traditional barrels, in order to reduce the change in impact point between a first shot from a cold barrel and a follow-up shot from a warm barrel. Unlike many battle and assault rifles, the bores are usually not chromed, as an uneven chrome treatment would introduce inaccuracy. When installed, barrels are often free-floating: installed so that the barrel contacts the rest of the rifle only at the receiver. Because a free-floating barrel does not touch the fore-end of the stock, pressure from the sling, bipod, or the sniper's hands cannot interfere with barrel harmonics. The end of the barrel is usually crowned, or machined to form a rebated area around the muzzle proper, to avoid asymmetry or damage, and consequent inaccuracy. External longitudinal fluting, which contributes to heat dissipation by increasing the surface area while simultaneously decreasing the weight of the barrel, is sometimes used on sniper rifle barrels. Sniper rifle barrels may also utilize a threaded muzzle or combination device (muzzle brake or flash suppressor and attachment mount) to allow the fitting of a suppressor. These suppressors often have a means of adjusting the point of impact while fitted. Military sniper rifles tend to have relatively long barrels to allow the cartridge propellant to burn fully, reducing the amount of revealing muzzle flash and increasing muzzle velocity. Police sniper rifles may use shorter barrels to improve handling characteristics. The shorter barrels' muzzle velocity loss is unimportant at closer ranges, where the impact velocity of the bullet is more than sufficient. Stock The most common special feature of a sniper rifle stock is the adjustable cheek piece, where the shooter's cheek meets the rear of the stock. For most rifles equipped with a telescopic sight, this area is raised slightly, because the telescope is positioned higher than iron sights, and it can sometimes be adjusted up or down to suit the individual shooter. To further aid this individual fitting, the stock can sometimes also be adjusted for length, often by varying the number of inserts at the rear of the stock where it meets the shooter's shoulder. If the stock is manufactured from wood, environmental conditions or operational use may warp the wood, causing slight changes in alignment or barrel harmonics over time and altering the point of impact. Stocks manufactured from polymers and metal alloys are less susceptible to point-of-impact shift from environmental conditions. Sniper stocks are typically designed to avoid making contact with the barrel of the weapon, to minimize the effects of environmental variations. Modern sniper rifle stocks tend to be designed around a rigid chassis, to offer user adjustability that allows shooters of various sizes and shapes to tailor the stock to their personal preferences, and to provide modular attachment points that accommodate low-light and daylight aiming optics, laser designators, and other accessories without the need for custom-made mounting interface kits. Accessories An adjustable sling is often fitted on the rifle, used by the sniper to achieve better stability when standing, kneeling, or sitting. The sniper uses the sling to "lock in" by wrapping the non-firing arm into the sling, keeping that arm still. Non-static weapon mounts, such as bipods, monopods, and shooting sticks, are also regularly used to aid and improve stability and reduce operator fatigue.
Shooting bags are also commonly used to help stabilize the rifle or to provide an adjustable base. Capabilities Accuracy A military-issue battle rifle or assault rifle is usually capable of 3–6 minutes of angle (MOA; 0.9–1.7 milliradians) accuracy. A standard-issue military sniper rifle is typically capable of 1–3 MOA (0.3–0.9 mrad) accuracy, with a police sniper rifle capable of 0.25–1.5 MOA (0.1–0.4 mrad) accuracy. For comparison, a competition target or benchrest rifle may be capable of accuracy up to 0.15–0.3 MOA (0.04–0.09 mrad). A 1 MOA (0.28 mrad) average extreme spread (the center-to-center distance between the two most distant bullet holes) for a 5-shot group translates into a 69% probability that the bullet's point of impact will fall within a target circle of the corresponding diameter at that range. This average extreme spread for a 5-shot group and the accompanying hit probability are considered sufficient for effectively hitting a human at an 800-meter distance. In 1982, a U.S. Army draft requirement for a Sniper Weapon System was: "The System will: (6) Have an accuracy of no more than 0.75 MOA (0.2 mrad) for a 5-shot group at 1,500 meters when fired from a supported, non-benchrest position". The M24 Sniper Weapon System adopted in 1988 has a stated maximum effective range of 800 meters and a maximum allowed average mean radius (AMR) of 1.9 inches at 300 yards from a machine rest, which corresponds to a 0.6 MOA (0.17 mrad) extreme spread for a 5-shot group when using 7.62×51mm M118 Special Ball cartridges. A 2008 United States military market survey for a Precision Sniper Rifle (PSR) called for a 1 MOA (0.3 mrad) extreme vertical spread for all shots in a 5-round group fired at targets at 300, 600, 900, 1,200 and 1,500 meters. In 2009, a United States Special Operations Command market survey called for a 1 MOA (0.28 mrad) extreme vertical spread for all shots in a 10-round group fired at targets at 300, 600, 900, 1,200, and 1,500 meters. The 2009 Precision Sniper Rifle requirements state that the PSR, when fired without a suppressor, shall provide a confidence factor of 80% that the weapon and ammunition combination is capable of holding a 1 MOA (0.28 mrad) extreme vertical spread, calculated from 150 ten-round groups that were fired unsuppressed. No individual group was to exceed 1.5 MOA (0.42 mrad) extreme vertical spread. All accuracy was taken at the 1,500-meter point. In 2008, the US military adopted the M110 Semi-Automatic Sniper System, which has a maximum allowed extreme spread of 1.8 MOA (0.5 mrad) for a 5-shot group at 300 feet, using M118LR ammunition or equivalent. In 2010, the maximum bullet dispersion requirement for the M24 .300 Winchester Magnum corresponded to a 1.4 MOA (0.39 mrad) extreme spread for a 5-shot group at 100 meters. In 2011, the US military adopted the .300 Winchester Magnum M2010 Enhanced Sniper Rifle, which had to meet an accuracy requirement of ≤ 1 MOA/0.28 mrad (less than a 2-inch shot group at 200 yards) before being released for fielding. Although accuracy standards for police rifles do not widely exist, rifles are frequently seen with accuracy levels from 0.5 to 1.5 MOA (0.2–0.5 mrad). For typical policing situations, an extreme spread accuracy level no better than 1 MOA (0.3 mrad) is usually all that is required, as police typically employ their rifles at short ranges. At these shorter ranges, a rifle with a relatively low accuracy of only 1 MOA (0.3 mrad) should be able to repeatedly hit a 3 cm (1.2 inch) target; a worked MOA-to-group-size conversion appears at the end of this section.
A 3 cm diameter target is smaller than the brain stem, which is targeted by police snipers for its quick killing effect. Maximum effective range Unlike police sniper rifles, military sniper rifles tend to be employed at the greatest possible distances, so that range advantages, such as the increased difficulty of spotting and engaging the sniper, can be exploited. The most popular military sniper rifles (in terms of numbers in service) are chambered for 7.62 mm (0.30 inch) caliber ammunition, such as 7.62×51mm and 7.62×54mmR. Since sniper rifles of this class must compete with several other types of military weapons with similar range, snipers invariably must employ skilled fieldcraft to conceal their position. The recent trend in specialized military sniper rifles is towards larger calibers that offer relatively favorable hit probabilities at greater range, with anti-personnel cartridges such as .300 Winchester Magnum and .338 Lapua Magnum, and anti-materiel cartridges such as 12.7×99mm, 12.7×108mm, and 14.5×114mm. This allows snipers to take fewer risks and spend less time finding concealment when facing enemies that are not equipped with similar weapons. Maximum-range claims made by military organizations and materiel manufacturers regarding sniper weapon systems are not based on consistent or strictly scientific criteria. The problem is that a bullet interacts with its target (which, for a sniper bullet, can also be a materiel target) only after a relatively long flight path. This implies that variables such as the minimal required hit probability, local atmospheric conditions, the properties and velocity of the employed bullet (or its parts), the properties of the target, and the desired terminal effect are major relevant factors that determine the maximum effective range of the employed system.
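The MOA figures quoted in the accuracy discussion above reduce to elementary trigonometry: one minute of arc is 1/60 of a degree, so the group size it subtends grows linearly with range. A minimal illustrative sketch in Python (the formula is standard; the sample ranges echo figures mentioned in this article):

import math

def group_size_cm(moa: float, range_m: float) -> float:
    # Linear size subtended by an angle of `moa` minutes of arc at `range_m` meters.
    angle_rad = math.radians(moa / 60.0)   # 1 MOA = 1/60 degree
    return 100.0 * range_m * math.tan(angle_rad)   # result in centimeters

# 1 MOA at 100 m is about 2.9 cm, consistent with the 3 cm target
# quoted above for police rifles at short range.
print(round(group_size_cm(1.0, 100.0), 2))   # ~2.91
# The same 1 MOA at 800 m corresponds to a spread of roughly 23 cm.
print(round(group_size_cm(1.0, 800.0), 1))   # ~23.3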
Technology
Firearms
null
452950
https://en.wikipedia.org/wiki/Limit%20cycle
Limit cycle
In mathematics, in the study of dynamical systems with two-dimensional phase space, a limit cycle is a closed trajectory in phase space having the property that at least one other trajectory spirals into it either as time approaches infinity or as time approaches negative infinity. Such behavior is exhibited in some nonlinear systems. Limit cycles have been used to model the behavior of many real-world oscillatory systems. The study of limit cycles was initiated by Henri Poincaré (1854–1912). Definition We consider a two-dimensional dynamical system of the form x′(t) = V(x(t)), where V : R² → R² is a smooth function. A trajectory of this system is some smooth function x(t) with values in R² which satisfies this differential equation. Such a trajectory is called closed (or periodic) if it is not constant but returns to its starting point, i.e. if there exists some t₀ > 0 such that x(t + t₀) = x(t) for all t ∈ R. An orbit is the image of a trajectory, a subset of R². A closed orbit, or cycle, is the image of a closed trajectory. A limit cycle is a cycle which is the limit set of some other trajectory. Properties By the Jordan curve theorem, every closed trajectory divides the plane into two regions, the interior and the exterior of the curve. Given a limit cycle and a trajectory in its interior that approaches the limit cycle for time approaching +∞, there is a neighborhood around the limit cycle such that all trajectories in the interior that start in the neighborhood approach the limit cycle for time approaching +∞. The corresponding statement holds for a trajectory in the interior that approaches the limit cycle for time approaching −∞, and also for trajectories in the exterior approaching the limit cycle. Stable, unstable and semi-stable limit cycles In the case where all the neighboring trajectories approach the limit cycle as time approaches infinity, it is called a stable or attractive limit cycle (ω-limit cycle). If instead all neighboring trajectories approach it as time approaches negative infinity, then it is an unstable limit cycle (α-limit cycle). If there is a neighboring trajectory which spirals into the limit cycle as time approaches infinity, and another one which spirals into it as time approaches negative infinity, then it is a semi-stable limit cycle. There are also limit cycles that are neither stable, unstable nor semi-stable: for instance, a neighboring trajectory may approach the limit cycle from the outside, while the inside of the limit cycle is approached by a family of other cycles (which would not be limit cycles). Stable limit cycles are examples of attractors. They imply self-sustained oscillations: the closed trajectory describes the perfect periodic behavior of the system, and any small perturbation from this closed trajectory causes the system to return to it, making the system stick to the limit cycle. Finding limit cycles Every closed trajectory contains within its interior a stationary point of the system, i.e. a point where V(x) = 0. The Bendixson–Dulac theorem and the Poincaré–Bendixson theorem predict the absence or existence, respectively, of limit cycles of two-dimensional nonlinear dynamical systems. Open problems Finding limit cycles, in general, is a very difficult problem. The number of limit cycles of a polynomial differential equation in the plane is the main object of the second part of Hilbert's sixteenth problem. It is unknown, for instance, whether there is any system in the plane where both components of V are quadratic polynomials of the two variables, such that the system has more than 4 limit cycles.
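A classical concrete example, not drawn from this article but standard in textbooks, makes the definitions above tangible. In polar coordinates, consider the system

\dot{r} = r\,(1 - r^2), \qquad \dot{\theta} = 1 .

The unit circle r = 1 is a closed orbit. Since \dot{r} > 0 for 0 < r < 1 and \dot{r} < 0 for r > 1, every trajectory with r(0) > 0 spirals onto r = 1 as time approaches infinity, so the unit circle is a stable (attractive) limit cycle, and the origin is the stationary point guaranteed to lie in its interior.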
Applications Limit cycles are important in many scientific applications where systems with self-sustained oscillations are modelled. Some examples include: Aerodynamic limit-cycle oscillations The Hodgkin–Huxley model for action potentials in neurons. The Sel'kov model of glycolysis. The daily oscillations in gene expression, hormone levels and body temperature of animals, which are part of the circadian rhythm, although this is contradicted by more recent evidence. The migration of cancer cells in confining micro-environments follows limit cycle oscillations. Some non-linear electrical circuits exhibit limit cycle oscillations, which inspired the original Van der Pol model. The control of respiration and hematopoiesis, as appearing in the Mackey-Glass equations.
Mathematics
Dynamical systems
null
452991
https://en.wikipedia.org/wiki/Plane%20%28tool%29
Plane (tool)
A hand plane is a tool for shaping wood using muscle power to force the cutting blade over the wood surface. Rotary power planers are motorized tools used for the same types of task on a larger scale, but they are unsuitable for fine-scale planing, where a miniature hand plane is used. Generally, all planes are used to flatten, reduce the thickness of, and impart a smooth surface to a rough piece of lumber or timber. Planing is also used to produce horizontal, vertical, or inclined flat surfaces on workpieces usually too large for shaping, where the integrity of the whole requires the same smooth surface. Special types of planes are designed to cut joints or decorative mouldings. A hand plane is generally the combination of a cutting edge, such as a sharpened metal plate, attached to a firm body; when moved over a wood surface, it takes up relatively uniform shavings, because the body rides on the 'high spots' in the wood and holds the cutting edge at a relatively constant angle, rendering the planed surface very smooth. A cutter that extends below the bottom surface, or sole, of the plane slices off shavings of wood. A large, flat sole on a plane guides the cutter to remove only the highest parts of an imperfect surface, until, after several passes, the surface is flat and smooth. When used for flattening, bench planes with longer soles are preferred for boards with longer longitudinal dimensions. A longer sole registers against a greater portion of the board's face or edge surface, which leads to a more consistently flat surface or straighter edge. Conversely, using a smaller plane allows more localized low or high spots to remain. Though most planes are pushed across a piece of wood, holding it with one or both hands, Japanese planes are pulled toward the body, not pushed away. Woodworking machines that perform a similar function to hand planes include the jointer and the thickness planer, also called a thicknesser; the job these specialty power tools do can still be done by hand planes and skilled manual labor, as it was for many centuries. When rough lumber is reduced to dimensional lumber, a large electric motor or internal combustion engine will drive a thickness planer that removes a certain percentage of excess wood to create a uniform, smooth surface on all four sides of the board; in specialty woods, it may also plane the cut edges. History Hand planes are ancient, originating thousands of years ago. Early planes were made from wood with a rectangular slot or mortise cut across the center of the body. The cutting blade or iron was held in place with a wooden wedge. The wedge was tapped into the mortise and adjusted with a small mallet, a piece of scrap wood or with the heel of the user's hand. Planes of this type have been found in excavations of old sites as well as in drawings of woodworking from medieval Europe and Asia. The earliest known examples of the woodworking plane have been found in Pompeii, although other Roman examples have been unearthed in Britain and Germany. The Roman planes resemble modern planes in essential function, most having iron wrapping a wooden core top, bottom, front and rear, and an iron blade secured with a wedge. One example found in Cologne has a body made entirely of bronze without a wooden core. A Roman plane iron used for cutting moldings was found in Newstead, Scotland.
Histories prior to these examples are not clear, but furniture pieces and other woodwork found in Egyptian tombs show surfaces carefully smoothed with some manner of cutting edge or scraping tool. There are suggestions that the earliest planes were simply wooden blocks fastened to the soles of adzes to effect greater control of the cutting action. In the mid-1860s, Leonard Bailey began producing a line of cast iron-bodied hand planes, the patents for which were later purchased by Stanley Rule & Level, now Stanley Black & Decker. The original Bailey designs were further evolved and added to by Justus Traut and others at Stanley Rule & Level. The Bailey and Bedrock designs became the basis for most modern metal hand plane designs manufactured today. The Bailey design is still manufactured by Stanley Black & Decker. In 1918 an air-powered handheld planing tool was developed to reduce shipbuilding labor during World War I. The air-driven cutter spun at 8,000–15,000 rpm and allowed one man to do the planing work of up to fifteen men who used manual tools. Modern hand planes are made from wood, ductile iron, or bronze, which produces a tool that is heavier and, in the case of bronze, will not rust. Parts The standard components of a hand plane include the iron (the cutting blade), a cap iron or chipbreaker, a lever cap or wedge that holds the iron in place, and the body with its sole, mouth, and handles. Types Most planes fall within the categories (by size) of block plane, smoothing plane, and jointing plane. Specialty planes include the shoulder plane, router plane, bullnose plane, and chisel plane, among others. Electrically powered hand planers (loosely referred to as power planes) have joined the hand-held plane family. Bench planes are characterized by having their cutting bevel facing down and attached to a chipbreaker. Most metal bench planes, as well as some larger wooden ones, are designed with a rear handle known as a tote. Block planes are characterized by the absence of a chipbreaker and by a cutting iron bedded with the bevel up. The block plane is a smaller tool that can be held with one hand and that excels at working across the grain on a cut end of a board (end grain). It is also good for general purpose work such as taking down a knot in the wood, smoothing small pieces, and chamfering edges. Different types of bench planes are designed to perform different tasks, the name and size of the plane being defined according to its use. Bailey iron bench planes were designated by number with respect to the length of the plane. This has carried over through the type, regardless of manufacturer. A No. 1 plane is but little more than five inches long. A typical smoothing plane (approx. nine inches) is usually a No. 4, jack planes at about fourteen inches are No. 5, an eighteen-inch fore plane will be a No. 6, and the jointer planes at twenty-two to twenty-four inches in length are No. 7 or 8, respectively. A designation such as No. 4 1/2 indicates a plane of No. 4 length but slightly wider. A designation such as 5 1/2 indicates the length of a No. 5 but slightly wider (actually, the width of a No. 6 or a No. 7), while a designation such as 5 1/4 indicates the length of a No. 5 but slightly narrower (actually, the width of a No. 3). "Bedrock" versions of the above are simply 600 added to the base number (although no "601" was ever produced, such a plane is indeed available from specialist dealers; 602 through 608, including all the fractionals, were made).
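The Bailey numbering described above is effectively a lookup table from plane number to nominal length. A minimal sketch in Python (purely illustrative; the lengths are the nominal figures given in the text, and the No. 1 entry is approximate):

# Nominal Bailey bench-plane lengths in inches, as described above.
BAILEY_LENGTHS_IN = {
    1: 5.5,   # "but little more than five inches" (approximate)
    4: 9,     # typical smoothing plane
    5: 14,    # jack plane
    6: 18,    # fore plane
    7: 22,    # jointer plane
    8: 24,    # jointer plane
}

def bailey_length(number: int) -> float:
    # Bedrock versions simply add 600 to the base number (e.g., a 605 = No. 5).
    if number >= 600:
        number -= 600
    return BAILEY_LENGTHS_IN[number]

print(bailey_length(605))   # 14: a Bedrock 605 shares the jack plane's length

Fractional designations (4 1/2, 5 1/2, 5 1/4) share a base number's length but differ in width, so a fuller model would key on (number, fraction) pairs.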
Order of use A typical order of use in flattening, truing, and smoothing a rough sawn board might be: A scrub plane, which removes large amounts of wood quickly, is typically similar in length to a smoothing plane but narrower, has an iron with a convex cutting edge, and has a wider mouth opening to accommodate the ejection of thicker shavings/chips. A jack plane, somewhat longer, continues the job of roughing out, but with more accuracy and flattening capability than the scrub. A jointer plane (including the smaller fore plane), the longest of the group, is used for jointing and final flattening of boards. A smoothing plane is used to begin preparing the surface for finishing. A polishing plane (kanna) is a traditional Japanese plane designed to take a smaller shaving than a Western smoothing plane to create an extremely smooth surface. Polishing planes are the same length as Western smoothing planes but, unlike Western planes, which are pushed across a board, are pulled with both hands towards the user. Material Planes may also be classified by the material of which they are constructed: A wooden plane is entirely wood except for the blade. The iron is held into the plane with a wooden wedge and is adjusted by striking the plane with a hammer. A transitional plane has a wooden body with a metal casting set in it to hold and adjust the blade. A metal plane is largely constructed of metal, except, perhaps, for the handles. An infill plane has a body of metal filled with a very dense hardwood, on which the blade rests and from which the handles are formed. They are typically of English or Scottish manufacture. They are prized for their ability to smooth difficult-grained woods when set very finely. A side-escapement plane has a tall, narrow, wooden body with an iron held in place by a wedge. These planes are characterized by the method of shaving ejection: instead of being expelled from the center of the plane and exiting from the top, the shaving is ejected through a slit in the side. On some variations, the slit is accompanied by a circular bevel cut in the side of the plane. Special purposes Some special types of planes include: The rabbet plane, also known as a rebate or openside plane, which cuts rabbets (rebates), i.e. shoulders, or steps. The shoulder plane, which is characterized by a cutter that is flush with the edges of the plane, allowing trimming right up to the edge of a workpiece. It is commonly used to clean up dadoes (housings) and tenons for joinery. The fillister plane, similar to a rabbet plane, with a fence that registers on the board's edge to cut rabbets with an accurate width. The moulding plane, which is used to cut mouldings along the edge of a board. The grooving plane, which is used to cut grooves along the edge of a board for joining. Grooves are the same as dadoes/housings but are distinguished by running with the grain. The plow/plough plane, which cuts grooves and dadoes (housings) not in direct contact with the edge of the board. The router plane, which cleans up the bottom of recesses such as shallow mortises, grooves, and dadoes (housings). Router planes come in several sizes and can also be pressed into service to thickness the cheeks of tenons so that they are parallel to the face of the board. The chisel plane, similar to a bullnose plane, but with an exposed blade which allows it to remove wood up to a perpendicular surface, such as from the bottom inside of a box.
The finger plane, which is used for smoothing very small pieces such as toy parts, very thin strips of wood, etc. The very small curved-bottom varieties are known as violin makers' planes and are used in making stringed instruments. The bullnose plane has a very short leading edge, or "toe", to its body, and so can be used in tight spaces; it is most commonly of the shoulder and rabbet variety. Some bullnose planes have a removable toe so that they can pull double duty as a chisel plane. The combination plane, which combines the functions of moulding and rabbet planes, using interchangeable cutters and adjustments. The circular plane, which utilizes an adjustment system to control the flex on a steel sheet sole and create a uniform curve. A concave setting permits great control for planing large curves, like table sides or chair arms, and the convex setting works well for chair arms, legs and backs, and other applications. The compass plane, which has a flexible sole with an adjustable curve and is used to plane concave and convex surfaces. Typically used in wooden boat building. The toothed plane, which is used for smoothing wood with irregular grain, and for preparing stock for traditional hammer veneering applications. The spill plane, which creates long, spiraling wood shavings, or spills, used as tapers. The spar plane, which is used for smoothing round shapes, like boat masts and chair legs. The match plane, which is used for making tongue and groove boards. Hollows and rounds are similar to moulding planes, but lack a specific moulding profile. Instead, they cut either a simple concave or convex shape on the face or edge of a board to create a single element of a complex-profile moulding. They are used in pairs or sets of various sizes to create moulding profile elements such as fillets, coves, bullnoses, thumbnails, ovolos, ogees, etc. When making mouldings, hollows and rounds must be used together to create the several shapes of the profile. However, they may be used as a single plane to create a simple decorative cove or round-over on the edge of a board. Many of these hollows and rounds can be classified in the category of side-escapement planes. Use Planing wood along its side grain should result in thin shavings rising above the surface of the wood as the edge of the plane iron is pushed forward, leaving a smooth surface, but sometimes splintering occurs. This is largely a matter of cutting with the grain or against the grain respectively, referring to the side grain of the piece of wood being worked. The grain direction can be determined by looking at the edge or side of the work piece. Wood fibers can be seen running out to the surface that is being planed. Where the fibers meet the work surface, they look like the point of an arrow indicating the direction. With some very figured and difficult woods, the grain runs in many directions, and therefore working against the grain is inevitable. In this case, a very sharp and finely set blade is required. When planing against the grain, the wood fibers are lifted by the plane iron, resulting in a jagged finish called tearout. Planing across the grain is sometimes called traverse or transverse planing. Planing the end grain of a board involves different techniques and frequently different planes designed for working end grain. Block planes and other bevel-up planes are often effective in planing difficult end grain. These planes are usually designed to use an iron bedded at a low angle, typically about 12 degrees.
Technology
Hand tools
null
453028
https://en.wikipedia.org/wiki/Elaninae
Elaninae
An elanine kite is any of several small, lightly built raptors with long, pointed wings. Some authorities list the group as a formal subfamily, Elaninae. As a subfamily there are six species in three genera, with two of these genera being monotypic. Two other species have at times been included with the group, but genetic research has shown them to belong to different subfamilies. Elanine kites have a near-worldwide distribution, with two endemic species found in the Americas, two in Australia, and one in Africa, while the black-winged kite is found over a vast range from Europe and Africa in the west to Southeast Asia in the east. Species Current Elaninae Genus Elanus Black-winged kite, E. caeruleus – Europe, Africa and southern Asia Black-shouldered kite, E. axillaris – Australia Letter-winged kite, E. scriptus – Australia White-tailed kite, E. leucurus – Americas Genus Gampsonyx Pearl kite, G. swainsonii – Americas Genus Chelictinia Scissor-tailed kite, C. riocourii – Africa Previously in Elaninae Genus Machaerhamphus or Macheiramphus (subfamily Harpiinae) Bat hawk, M. alcinus – Paleotropics (Africa, south Asia through to New Guinea) Genus Elanoides (subfamily Perninae) Swallow-tailed kite, Elanoides forficatus – Americas Description Elanus species are primarily rodent hunters, searching for prey from a perch or often hovering like kestrels. Their tail is unforked. Chelictinia feeds on the wing, taking insects from the air, or small reptiles and insects from tree branches. Its tail is very long and deeply forked, like that of Elanoides, which has similar feeding habits but is larger. Both Elanus and Chelictinia have similarities in markings, with red eyes, a black patch above the eye, yellow legs and cere, and a black beak. Gampsonyx is very small, also feeding on insects, with the size and coloration typical of the Asian falconets. It is black above and white below, often with a tinge of rufous around the legs. Taxonomy and systematics In 1851 British zoologist Edward Blyth described Elaninae, the "smooth clawed kites", as a formal subfamily of Accipitridae. However, they are also grouped in Accipitrinae, the broader subfamily of hawks and eagles described by French ornithologist Louis-Pierre Vieillot in 1816. Nicholas Vigors in 1824 had grouped Elanus and "true Milvus" together into Stirps Milvina, the kites. Earlier, the terms "kite" in English or "iktinos" in Greek referred only to the red or black (milvine) kites. French ornithologists used the term "milan" for both the milvine and elanine kites. Around the same time, in 1823, Louis-Pierre Vieillot had placed the group (in five species) together into his own genus Elanoïdes, rather than Savigny's Elanus. Vigors listed three known species: Elanus melanopterus, E. furcatus, and E. Riocourii. But he noted that the latter two had more forked tails and probably did not have nails that were rounded underneath. The following year he gained access to specimens of the fork-tailed kites and split them from Elanus into a separate genus, Nauclerus. In 1931, Peters used the subfamily Elaninae, listing its members as Elanus, Chelictinia, and Machaeramphus. He placed Elanoïdes in subfamily Perninae, and Gampsonyx with the forest falcons in Polyhieracinae. In the 1950s, several authors found that Gampsonyx was related to Elanus rather than the falcons, based on morphological features and molt schedule. Lerner and Mindell describe the Elaninae as: "Kites noted for having a bony shelf above the eye, Elanus is cosmopolitan, Gampsonyx is restricted to the New World and Chelictinia is found in Africa". This is in contrast to the Perninae, which are: "Kites mainly found in the tropics and specializing on insects and bee or wasp larvae, all lack the bony eye shield found in the Elaninae".
Comparisons of sequences for certain mitochondrial marker genes indicate that some elanine kites split early from the rest of the Accipitridae. Wink and Sauer-Gürth found that Elanus was more distantly related to the rest of the Accipitridae than even the osprey and secretary bird (which are often placed in separate families), but noted that this result was not strongly supported. Lerner and Mindell, however, found that the osprey was more distantly related, with Elanus leucurus basal to the other Accipitridae. Negro and colleagues have discussed convergent traits between kites in the genus Elanus and owls, such as a lower acidity of the stomach and some specialized flight feathers otherwise not found in diurnal raptors. Lerner and Mindell also found that Elanoides forficatus grouped with the Perninae, such as the type species Pernis apivorus and the Australian endemics Lophoictinia and Hamirostra. Chelictinia, Machaerhamphus, and Gampsonyx were not included in these genetic sequencing studies.
Biology and health sciences
Accipitrimorphae
Animals
453262
https://en.wikipedia.org/wiki/Mycobacterium%20leprae
Mycobacterium leprae
Mycobacterium leprae (also known as the leprosy bacillus or Hansen's bacillus) is one of the two species of bacteria that cause Hansen's disease (leprosy), a chronic but curable infectious disease that damages the peripheral nerves and targets the skin, eyes, nose, and muscles. It is an acid-fast, Gram-positive, rod-shaped bacterium and an obligate intracellular parasite, which means that, unlike its relative Mycobacterium tuberculosis, it cannot be grown in cell-free laboratory media. This is likely due to the gene deletion and decay that the genome of the species has experienced via reductive evolution, which has caused the bacterium to depend heavily on its host for nutrients and metabolic intermediates. It has a narrow host range: apart from humans, the only other natural hosts are the nine-banded armadillo and red squirrels. The bacteria infect mainly macrophages and Schwann cells, and are typically found congregated as a palisade. Mycobacterium leprae was initially sensitive to dapsone as a standalone treatment, but since the 1960s it has developed resistance against this antibiotic. Currently, a multidrug treatment (MDT) is recommended by the World Health Organization, including dapsone, rifampicin, and clofazimine. The species was discovered in 1873 by the Norwegian physician Gerhard Armauer Hansen, and was the first bacterium to be identified as a cause of disease in humans. Microbiology Mycobacterium leprae is an intracellular, pleomorphic, non-sporing, non-motile, acid-fast, pathogenic bacterium. It is an aerobic bacillus (rod-shaped bacterium) with parallel sides and round ends, surrounded by the characteristic waxy coating of mycolic acid unique to mycobacteria. It is Gram-positive by Gram staining, but Mycobacterium leprae was traditionally stained with carbol fuchsin in the Ziehl–Neelsen stain. Because the bacilli are less acid-fast than Mycobacterium tuberculosis (MTB), the Fite-Faraco staining method, which has a lower acid concentration, is now used. In size and shape, it closely resembles MTB. The bacteria are found in the granulomatous lesions and are especially numerous in the nodules. These bacteria often occur in large numbers within the lesions of lepromatous leprosy and are usually grouped together as a palisade. By optical microscopy of host cells, Mycobacterium leprae can be found singly or in clumps referred to as "globi"; the bacilli can be straight or slightly curved, with a length ranging from 1–8 μm and a diameter of 0.3 μm. The bacteria grow best at 27 to 30 °C, making the skin, nasal mucosa and peripheral nerves primary targets for infection by Mycobacterium leprae. Host range Mycobacterium leprae has a narrow host range; apart from humans, the only other hosts are nine-banded armadillos and red squirrels, and armadillos have been implicated as a source of zoonotic leprosy in humans. In the laboratory, mice can be infected, and this makes them a useful animal model. Cultivation Mycobacterium leprae has an unusually lengthy doubling time (12 to 14 days, compared with 20 minutes for Escherichia coli) and cannot be cultured in the laboratory. Because the organism is an obligate intracellular parasite, it lacks many genes necessary for independent survival, causing difficulty in culturing the organism. The complex and unique cell wall that makes members of the genus Mycobacterium difficult to destroy is also the reason for its extremely slow replication rate.
Mycobacterium leprae prefers cool temperatures and slightly acidic, microaerophilic conditions, and favors lipids over sugars as an energy source. The growth conditions needed for Mycobacterium leprae are known, but an exact axenic medium to support its growth has yet to be discovered. Since in vitro cultivation is not generally possible, it has instead been grown in mouse foot pads and in armadillos, due to their low core body temperature. Metabolism The reductive evolution experienced by the Mycobacterium leprae genome has impaired its metabolic abilities in comparison to other Mycobacterium species, specifically in its catabolic pathways. Catabolism Mycobacterium leprae's inability to be grown in axenic media indicates its reliance on nutrients and intermediates from its host. Many of the catabolic pathways present in other Mycobacterium species are compromised, due to the absence of enzymes that play key roles in the degradation of nutrients. Mycobacterium leprae has lost the ability to use common carbon sources, such as acetate and galactose, in its central energy metabolism pathways. Additionally, lipid degradation is impaired, with deficits in key lipase enzymes and other proteins involved in lipolysis. Functional carbon catabolic pathways continue to exist in the species, such as the glycolytic pathway, the pentose phosphate pathway, and the TCA cycle. These deficiencies extensively restrict the microbe's growth to a limited number of carbon sources, such as host-derived intermediates. Anabolism Mycobacterium leprae's anabolic pathways have been largely unaffected by its reductive evolution. The species retains its ability to synthesize genetic material, such as purines, pyrimidines, nucleotides, and nucleosides, as well as to synthesize all amino acids except methionine and lysine. Genome The first genome sequence of a strain of Mycobacterium leprae was completed in 2001, revealing 1,604 protein-coding genes and another 1,116 pseudogenes. The genome sequence of a strain originally isolated in Tamil Nadu, India, and designated TN, was completed in 2013. This genome sequence contains 3,268,203 base pairs (bp) and an average G+C content of 57.8%, which is significantly less than M. tuberculosis, which has 4,441,529 bp and 65.6% G+C. Comparing the genome sequence of Mycobacterium leprae with that of MTB reveals an extreme case of reductive evolution. Less than half of the genome contains functional genes. It is estimated that approximately 2,000 genes have been lost from the Mycobacterium leprae genome. Gene deletion and decay appear to have eliminated many important metabolic activities, including siderophore production, part of the oxidative and most of the microaerophilic and anaerobic respiratory chains, and numerous catabolic systems and their regulatory circuits. This reductive evolution is largely linked to the organism's development into an obligate intracellular bacterium. Pseudogenes Many of the genes that were present in the genome of the common ancestor of Mycobacterium leprae and M. tuberculosis have been lost in the Mycobacterium leprae genome. Due to Mycobacterium leprae's reliance on a host organism, many of the species' DNA repair functions have been lost, increasing the occurrence of deletion mutations.
Because the products supplied by these deleted genes are typically present in the host cells infected by Mycobacterium leprae, the impact of the mutations on the microbe is minimal, allowing for survival within the host despite its reduced genome. Consequently, Mycobacterium leprae has undergone a dramatic reduction in genome size with the loss of many genes. Over half of the pathogen's genome is now made up of pseudogenes, the pathogen having undergone what is known as reductive evolution. Among published genomes, Mycobacterium leprae contains the highest number of pseudogenes (>1,000). Many of these pseudogenes arose from insertions of stop codons, which may have been caused by sigma factor dysfunction (sigma factors are proteins needed for the initiation of transcription in bacteria) or by the insertion of transposon-derived repetitive sequences. The expression levels of some Mycobacterium leprae pseudogenes change upon infection of macrophages, which suggests that Mycobacterium leprae pseudogenes are not all "decayed" genes, but that some could also function in infection and intracellular replication. This genome reduction is not complete. Downsizing from a genome of 4.42 Mbp, such as that of M. tuberculosis, to one of 3.27 Mbp would account for the loss of some 1,200 protein-coding sequences. Essential enzymes There are eight essential enzymes for Mycobacterium leprae, one of which is alanine racemase (alr). This enzyme is significant because its product feeds into D-alanine–D-alanine ligase and into alanine/aspartate metabolism. Another essential enzyme is the putative dTDP-4-dehydrorhamnose 3,5-epimerase (rmlC), which plays an important role in both nucleotide sugar metabolism and polyketide sugar unit biosynthesis. Peptidoglycan biosynthesis also requires murG, murF, murE, murY, murC, and murD, the remaining six essential enzymes for Mycobacterium leprae. Distribution The bacterium has a global distribution in humans, but the highest prevalence is in sub-Saharan Africa, Asia and South America. The geographic occurrences of Mycobacterium leprae include: Angola, Brazil, Central African Republic, the Democratic Republic of Congo, Federated States of Micronesia, India, Kiribati, Madagascar, Nepal, the Republic of the Marshall Islands, and the United Republic of Tanzania. Since the introduction of multidrug therapy (MDT) in the 1980s, the prevalence of leprosy cases has declined by 95%. This decline led the World Health Organization (WHO) to declare leprosy eliminated as a public health problem, defined as a prevalence of less than one leprosy patient per 10,000 population. Aside from Mycobacterium leprae transmission from infected humans, environmental sources could also be an important reservoir. Mycobacterium leprae DNA has been detected in soil from the houses of leprosy patients in Bangladesh, from armadillo burrows in Suriname, and from the habitats of lepromatous red squirrels in the British Isles. One study found numerous reports of leprosy cases with a history of contact with armadillos in the United States. A zoonotic transmission pathway from exposure to armadillos has been proposed, with human patients from a previous study in the southeastern United States shown to be infected with the same armadillo-associated Mycobacterium leprae genotype. High rates of Mycobacterium leprae infection were observed in armadillos in the Brazilian state of Pará, and individuals who frequently consumed armadillo meat showed significantly higher titres of the M.
Evolution The closest relative of Mycobacterium leprae is Mycobacterium lepromatosis. The most recent common ancestor of the extant Mycobacterium leprae strains was calculated to have lived 3,607 years ago (95% highest posterior density: 2,204–5,525 years ago). The estimated substitution rate was 7.67 × 10−9 substitutions per site per year, similar to other bacteria. A study of genomes isolated from medieval cases estimated the mutation rate to be 6.13 × 10−9 substitutions per site per year. The authors also showed that the leprosy bacillus in the Americas was brought there from Europe. Another study suggests that Mycobacterium leprae originated in East Africa and spread from there first to Europe and the Middle East, before spreading to West Africa and the Americas within the last 500 years. Almost complete Mycobacterium leprae sequences were obtained, using DNA capture techniques and high-throughput sequencing, from medieval skeletons of different European geographic origins bearing osteological lesions suggestive of leprosy. The ancient sequences were compared with those of modern strains from biopsies of leprosy patients representing diverse genotypes and geographic origins, giving new insights into the evolution of the leprosy bacillus, its phylogeography and course through history, and the disappearance of leprosy from Europe. Verena J. Schuenemann et al. demonstrated a remarkable genomic conservation during the past 1,000 years and a close similarity between modern and ancient strains, suggesting that the sudden decline of leprosy in Europe was not due to a loss of virulence, but to extraneous factors, such as other infectious diseases, changes in host immunity, or improved social conditions.
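To make the quoted substitution rate concrete, the following back-of-the-envelope sketch (an illustration, not a calculation from the source) multiplies the rate by the genome size and the estimated age of the most recent common ancestor.

RATE = 7.67e-9             # substitutions per site per year (from the text)
GENOME_SITES = 3_268_203   # M. leprae genome length in bp (from the text)
YEARS_SINCE_MRCA = 3_607   # estimated age of the most recent common ancestor

subs_per_year = RATE * GENOME_SITES
total_subs = subs_per_year * YEARS_SINCE_MRCA
print(f"~{subs_per_year:.3f} substitutions per genome per year")     # ~0.025
print(f"~{total_subs:.0f} substitutions per lineage since the MRCA")  # ~90

On these numbers, any two modern strains would be expected to differ by at most a few hundred substitutions, consistent with the remarkable genomic conservation described above.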
Pathogenesis The incubation period of Mycobacterium leprae ranges from 9 months to 20 years. The bacterium replicates intracellularly inside histiocytes and nerve cells, and the disease takes two forms. One form is "tuberculoid" leprosy, which induces a cell-mediated response that limits the bacterium's growth and involves few detectable bacilli (paucibacillary). In this form, Mycobacterium leprae multiplies at the site of entry, usually the skin, invading and colonizing Schwann cells. The bacterium then induces infiltration of the skin by T-helper lymphocytes, epithelioid cells, and giant cells, causing infected individuals to exhibit large flattened patches with raised red edges on their skin. These patches have dry, pale, hairless centers, accompanied by a loss of sensation on the skin. The loss of sensation may develop as a result of invasion of the peripheral sensory nerves. The macule at the cutaneous site of entry and the loss of pain sensation are key clinical indications that an individual has the tuberculoid form of leprosy. The second form of leprosy is the "lepromatous" form, in which the microbes proliferate within the macrophages at the site of entry and many bacilli are detectable (multibacillary). They also grow within the epithelial tissues of the face and ear lobes. The suppressor T-cells that are induced are numerous, but the epithelioid and giant cells are rare or absent. With cell-mediated immunity impaired, large numbers of Mycobacterium leprae appear in the macrophages, and the infected patients develop papules at the entry site, marked by a folding of the skin. Gradual destruction of the cutaneous nerves leads to what is referred to as the classic "lion face". Extensive penetration by the bacterium may lead to severe body damage, for example the loss of bones, fingers, and toes. Symptoms of a Mycobacterium leprae infection The symptoms of a Mycobacterium leprae infection, also known as leprosy, are pale skin sores, lumps or bumps that do not go away after several weeks or months, and nerve damage that can impair sensation in the arms and legs and cause muscle weakness. Symptoms usually take 3–5 years after exposure to manifest, though some individuals do not begin to show symptoms until 20 years after exposure. This long incubation period makes it very difficult to determine when and where an individual contracted the infection. In armadillos, Mycobacterium leprae causes a disseminated infection with similar structural and pathological changes in tissues and nerves. In squirrels, according to the Veterinary Pathology Unit of the University of Edinburgh, "The disease is unmistakeable: there is gross swelling and loss of hair around the snout, lips, eyelids, ears, genitalia and sometimes feet and lower limbs. This bare skin has a 'shiny' appearance. The squirrel is usually in generally poor body condition and may have a heavy burden of parasites like fleas, ticks and mites." Treatment The mycolic acids in the bacterium's cell wall afford resistance to many antibiotics and are a major virulence factor. Multidrug therapy (MDT) was recommended by a WHO Expert Committee in 1984 and became the standard leprosy treatment; since 1995 the WHO has supplied MDT free of charge to endemic countries. MDT is used because treating leprosy with a single drug (monotherapy) can result in drug resistance. The drug combination used in MDT depends on the classification of the disease: WHO recommends that patients with multibacillary leprosy use a combination of rifampicin, clofazimine, and dapsone for 12 months, and that patients with paucibacillary leprosy use a combination of rifampicin and dapsone for 6 months. Antibiotics must be taken regularly until treatment is complete, because Mycobacterium leprae can become drug resistant. The effectiveness of treatment can be assessed with an acid-fast stain of Mycobacterium leprae from a skin smear, which allows the number of bacilli still present in the patient to be estimated. Around 250,000 new cases of leprosy are reported annually, indicating that the chain of transmission has not yet been broken, even though MDT has led to a 90% reduction in the prevalence rate of leprosy. Control of the disease therefore remains incomplete, calling for continued research into treatment and control. A preventive measure against Mycobacterium leprae infection is to avoid close contact with untreated infectious people. Blindness, crippling of the hands and feet, and paralysis are all effects of the nerve damage associated with untreated M. leprae infection. Treatment does not reverse nerve damage already done, which is why early treatment is important. The Bacillus Calmette–Guérin vaccine offers a variable amount of protection against leprosy in addition to its main target of tuberculosis.
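The WHO regimens described above amount to a simple lookup from disease classification to drug combination and duration. The sketch below is a hypothetical encoding for illustration only (the dictionary and function names are invented); it is not an official WHO tool and not clinical guidance.

# Hypothetical encoding of the WHO MDT regimens described above.
MDT_REGIMENS = {
    "multibacillary": (["rifampicin", "clofazimine", "dapsone"], 12),
    "paucibacillary": (["rifampicin", "dapsone"], 6),
}

def regimen_for(classification: str) -> str:
    drugs, months = MDT_REGIMENS[classification.lower()]
    return f"{' + '.join(drugs)} for {months} months"

print(regimen_for("multibacillary"))  # rifampicin + clofazimine + dapsone for 12 months
print(regimen_for("paucibacillary"))  # rifampicin + dapsone for 6 months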
Targets of antibiotics Dapsone competitively inhibits the enzyme dihydropteroate synthase (DHPS), decreasing the production of tetrahydrofolate, an essential component of nucleic acid biosynthesis in M. leprae. Rifampin binds the β-subunit of the DNA-dependent RNA polymerase, which blocks mRNA production and results in cell death. Clofazimine's mechanisms against Mycobacterium leprae are not fully understood, but the drug appears to bind at guanine-containing base sequences, which may explain its preference for the G+C-rich genomes of mycobacteria over human DNA. The binding of clofazimine to mycobacterial DNA has proven only weakly bactericidal against Mycobacterium leprae in mice, which is why it is not suitable as a single-drug therapy for leprosy. Of the three main drugs, rifampin is more bactericidal than either dapsone or clofazimine. Potential antibiotic targets It is important to find new targets for antibiotics owing to increasing resistance. Mycobacterium leprae has six enzymes, murC, murD, murE, murF, murG, and murY, that are all essential for peptidoglycan biosynthesis in M. leprae. These enzymes, and peptidoglycan biosynthesis more generally, are potential targets for antibiotics: by targeting these enzymes, which catalyze the addition of short polypeptide chains, synthesis of the bacterial cell wall can be prevented. Antibiotic resistance Resistance to antibiotics is seen in around 10% of new cases of leprosy and in around 15% of relapsed cases. Drug resistance in Mycobacterium leprae is thought to arise from genetic alterations in the antibiotic targets and from a reduction in cell wall permeability. Mycobacterium leprae contains about half as many efflux pumps as M. tuberculosis; the efflux pumps contributing to drug resistance and virulence in M. tuberculosis have been retained through the reductive genome evolution that Mycobacterium leprae underwent. Discovery Mycobacterium leprae was discovered in 1873 by the Norwegian physician Gerhard Armauer Hansen (1841–1912), and was the first bacterium to be identified as a cause of disease in humans. It was confirmed to be a bacterium by Albert Ludwig Sigesmund Neisser, who argued with Hansen over priority for the discovery. Hansen's attempts to infect animals with the bacterium were unsuccessful. When, in 1879, he injected, without consent, tissue from a person with lepromatous leprosy into the eye of 33-year-old Kari Nielsdatter, who had the milder tuberculoid form of the infection, he was dismissed from his post at the Leprosy Hospital in Bergen and banned from practising medicine. The case had little effect on Hansen's professional reputation, and he continued with his research.
https://en.wikipedia.org/wiki/Inca%20road%20system
Inca road system
The Inca road system (also spelled Inka road system and known as Qhapaq Ñan, meaning "royal road" in Quechua) was the most extensive and advanced transportation system in pre-Columbian South America. It was about long. The construction of the roads required a large expenditure of time and effort. The network was composed of formal roads carefully planned, engineered, built, marked, and maintained; paved where necessary, with stairways to gain elevation, and with bridges and accessory constructions such as retaining walls and water drainage systems. It was based on two north–south roads, one along the coast and the second, and most important, inland and up the mountains, both with numerous branches. It can be directly compared with the road network built during the Roman Empire, although the Inca road system was built one thousand years later. The road system allowed for the transfer of information, goods, soldiers, and persons, without the use of wheels, within the Tawantinsuyu or Inca Empire, throughout a territory covering almost and inhabited by about 12 million people. The roads were bordered, at intervals, with buildings that allowed the most effective usage: at short distances there were relay stations for the chasquis, the running messengers; at one-day walking intervals, tambos supported road users and their flocks of llama pack animals. Administrative centers with warehouses, called qullqas, for the redistribution of goods were found along the roads. Towards the boundaries of the Inca Empire and in newly conquered areas, pukaras (fortresses) were found. Part of the road network was built by cultures that preceded the Inca Empire, notably the Wari culture in north-central Peru and the Tiwanaku culture in Bolivia. Different organizations such as UNESCO and IUCN have been working to protect the network in collaboration with the governments and communities of the six countries (Colombia, Ecuador, Peru, Bolivia, Chile, and Argentina) through which the Great Inca Road passes. In modern times some remnants of the roads see heavy use from tourism, such as the Inca Trail to Machu Picchu, which is well known to trekkers. A 2021 study found that the road's effects have lingered for over 500 years, with wages, nutrition, and schooling levels higher in communities living within 20 kilometers of the Inca Road than in similar communities farther away. Extent The Tawantinsuyu, which incorporated the current territory of Peru, continued towards the north through present-day Ecuador, reaching the northernmost limits of the Andean mountain range in the region of Los Pastos in Colombia; to the south, it penetrated down to the lands of Mendoza and Atacama, the southernmost reaches of the Empire, corresponding to present-day Argentine and Chilean territories. On the Chilean side, the road reached the Maipo river. The Inca road system connected the northern territories with the capital city Cusco and the southern territories. About , out of the more than that the Andean mountains span, were covered by it. As indicated by Hyslop, "The main route of the sierra (mountains) that passes through Quito, Tumebamba, Huánuco, Cusco, Chucuito, Paria and Chicona to the Mendoza River, has a length of 5,658 km." (3,516 miles) The exact extent of the road network is not known: travelers and scholars have proposed various lengths, spanning from to to .
Two main routes were defined. The eastern one, inland, runs high in the puna grassland, a large and undulating surface which extends above ; the second, the western route, starts from the region of Tumbes at the current Peru–Ecuador border and follows the coastal plains, except in the coastal deserts, where it hugs the foothills. This western road prefigures the route of the current Pan-American Highway along the South American Pacific coast. Recent investigations carried out under the Proyecto Qhapaq Ñan, sponsored by the Peruvian government and drawing also on previous research and surveys, suggest with a high degree of probability that another branch of the road system existed on the east side of the Andean ridge, connecting the administrative centre of Huánuco Pampa with the Amazonian provinces and having a length of about . More than twenty transversal routes ran over the western mountains, while others traversed the eastern cordillera in the mountains and lowlands, connecting the two main routes with populated areas, administrative centres, agricultural and mining zones, and ceremonial centres and sacred spaces in different parts of the vast Inca territory. Some of these roads reach altitudes of over above sea level. The four routes During the Inca Empire, the roads officially stemmed from Cusco in the four cardinal directions, towards the four suyus (provinces) into which the Tawantinsuyu was divided. Cusco was the center of Peru: the Inca-Spanish chronicler Inca Garcilaso de la Vega states that "Cozco in the language of the Incas means navel, that is, the Earth's navel". The four regions were named Chinchaysuyu towards the north, Collasuyu towards the south, Antisuyu towards the east and the lower valleys of the Amazon region, and Contisuyu towards the west and the lower valleys along the Pacific coast. The route towards the north was the most important in the Inca Empire, as shown by its constructive characteristics, a width ranging between 3 and 16 m, and the size of the archaeological vestiges that mark the way both in its vicinity and in its area of influence. It is not coincidental that this path goes through and organizes the most important administrative centers of the Tawantinsuyu outside Cusco, such as Vilcashuamán, Xauxa, Tarmatambo, Pumpu, Huánuco Pampa, Cajamarca, and Huancabamba, in the current territory of Peru, and Ingapirca, Tomebamba, and Riobamba in Ecuador. This was regarded by the Incas as "the" Qhapaq Ñan, the main or royal road, running from Cusco to Quito. From Quito northwards, the Inca presence is perceived in defensive settlements that mark the advance of the Empire through the Ecuadorian provinces of Carchi and Imbabura and into the current Nariño Department in Colombia, which in the 16th century was in the process of being incorporated into the Inca Empire. The route of Qollasuyu leaves Cusco and points towards the south, splitting into two branches to skirt Lake Titicaca (one along the east coast and one along the west) that join again to cross the territory of the Bolivian Altiplano. From there the roads unfolded towards the southernmost boundaries of the Tawantinsuyu. One branch headed towards the current Mendoza region of Argentina, while the other penetrated the ancient territories of the Diaguita and Atacama people in Chilean lands, who had already developed basic road networks. From there, crossing the driest desert in the world, the Atacama Desert, the Qollasuyu route reached the Maipo river, currently in the Santiago metropolitan region.
Beyond that point, no vestiges of the Inca advance have been found. The Contisuyu roads connected Cusco with the coastal territories in what are now the regions of Arequipa, Moquegua, and Tacna, in the extreme south of Peru. These transversal routes guaranteed the complementarity of natural resources, since they cross very varied ecological zones in the descent from the heights of the cordillera to the coast. The roads of the Antisuyu are the least known, and the fewest vestiges of them have been recorded. They penetrated the territories of the Ceja de Jungla, or Amazonian Andes, leading to the Amazon rainforest, where conditions make the preservation of archaeological evidence more difficult. The true physical extent of the Inca Empire in this region is not very clear. Purposes of the road The Incas used the road system for a variety of purposes, from transportation for people traveling through the Empire to military and religious ends. The road system allowed for the fast movement of people from one part of the Empire to another: both armies and workers used the roads to move, and the tambos to rest and be fed. It also allowed for the fast movement of information, and of valuable small goods, carried by the chasquis. The Incas gave priority to the straightness of the roads, whenever possible, to shorten distances. According to Hyslop, the roads were the basis for the expansion of the Inca Empire: the most important settlements were located on the main roads, following an arrangement prefigured by the existence of older roads. The Incas preferred the Altiplano, or puna areas, for movement, seeking to avoid contact with the populations settled in the valleys while projecting a straight route of rapid communication. Other researchers have pointed out additional factors that conditioned the location of Inca settlements and roads, such as the establishment of control zones in an intermediate location with respect to the populations and productive lands of the valleys, the requirement of specific goods, and storage needs, which were favored in the high plains of the Altiplano, characterized by low temperatures and dry climates. As an example, the administrative center of Huánuco Pampa includes 497 collcas, which totaled as much as and could support a population of between twelve and fifteen thousand people. Cotapachi (nowadays in the Bolivian region of Cochabamba) included a group of 2,400 collcas far away from any significant village. Collcas were long-term storehouses, primarily for grains and maize, which kept for an extremely long time and were therefore ideal stores for the army in the event of conflict. According to Hyslop, the use of the Inca road system was reserved for authorities. He states: «soldiers, porters, and llama caravans were prime users, as were the nobility and other individuals on official duty… Other subjects were allowed to walk along the roads only with permission…» Nevertheless, he recognizes that «there was also an undetermined amount of private traffic … about which little is known». Some local structures (called ranchillos) exist alongside the roads, which may suggest that private trade traffic was also present. After the Spanish conquest of Peru, the use of the Inca roads was mostly discontinued during the colonial period.
The Conquistadors used the Inca roads to approach the capital city of Cusco, but their horses and ox carts were not suited to such roads, and soon most of the network was abandoned. Only about 25 percent of it is still visible today, the rest having been destroyed by wars (of conquest, uprising, independence, or between nations); by the change in the economic model, which involved abandoning large areas of territory; and finally by the construction of modern infrastructure during the nineteenth and twentieth centuries, which superimposed new communication routes on the outline of the pre-Hispanic roads. Transportation As elsewhere in pre-Columbian America, transportation was done on foot; the use of wheels for transportation was not known. The Inca made two main uses of the roads: the chasquis (runners), who relayed information (by means of the quipus) and lightweight valuables throughout the empire, and llama caravans for transporting goods. Llamas were used as pack animals in large flocks; they are lightweight animals that cannot carry heavy loads, but they are incredibly nimble. To transport large quantities of goods across the empire, it was more efficient for the Incas to use herds of llamas attended by two or three herdsmen, who drove the animals carrying their loads up the steep mountain roads, increasing carrying capacity without risking additional lives. Llamas have soft, padded feet, which give them good traction and a negligible impact on the road surface. Llamas of the Q'ara race (the short-haired variety), which are used also in contemporary caravans, can carry about for a distance of per day; when necessary they can carry up to for short trips. They forage on natural vegetation. Trade Roads and bridges were essential to the political cohesion of the Inca state and to the redistribution of goods within it. All resources in the Empire were the property of the ruling elite. Commercial exchanges between producers and buyers were not practiced, as the management of all goods came under the control of the central authority. The redistribution of goods was known as the vertical archipelago; this system formed the basis for trade throughout the Inca Empire. As different sections of the Empire had different resources, the roads were used to distribute goods to the parts of the Empire that needed them. Roads reinforced the strength of the Inca Empire, as they allowed the empire's multitude of resources to be distributed through a set system, ensuring that all parts of the Empire were supplied. Nevertheless, scholars have noted that there was possibly some barter of goods along the roads between caravanners and villagers: a sort of "secondary exchange" and "daily swapping". Military The roads provided easy, reliable, and quick routes for the Empire's administrative and military communications, personnel movement, and logistical support. After conquering a territory, or convincing the local lord to become an ally, the Inca would employ a military-political strategy that included extending the road system into the newly dominated territories. The Qhapaq Ñan thus became a permanent symbol of the ideological presence of Inca dominion in the newly conquered places. The road system facilitated the movement of imperial troops and preparations for new conquests, as well as the quelling of uprisings and rebellions.
It also allowed the surplus goods that the Inca produced and stored annually for redistribution to be shared with the newly incorporated populations. The army moved frequently, mostly in support of military actions but also to support civil works. The forts, or pukaras, were located mainly in the border areas, as spatial markers of the process of advancing and annexing new territories to the Empire. In fact, a greater number of pukaras are found towards the north of the Tawantinsuyu, witnesses to the work of incorporating the northern territories, which were known to be rich in pastures. To the south there are abundant remains around Mendoza in Argentina and along the Maipo river in Chile, where the presence of forts marks the line of the road at the southernmost point of the Empire. Religious The high-altitude shrines were directly related to the cult of nature, and specifically of the mountains, typical of Inca society, which the Incas formalized by building religious structures on mountain peaks. Mountains are the apus, or deities, in the universe of Andean beliefs that are still held today; they have a spiritual connotation linked to the future of nature and human existence. The Incas held many rituals at the mountaintops as part of this belief, including the sacrifice of children, goods, and llamas. However, not all mountains held the same religious connotation, nor were sanctuaries built on all of them. The summits could be reached for worship only by connecting the road system to high-altitude paths leading to the sacred places: ritual roads that culminated at the peaks, the point of contact between the earthly and the sacred. Some of them reached high altitudes above sea level, such as Mount Chañi, which had a road that started at the base and climbed to the summit, at an elevation of . In addition to the high-altitude shrines, there were also many holy shrines or religious sites, called wak'a, that were part of the Zeq'e system along and near the roads, especially around the capital city, Cusco. These shrines were natural or modified features of the landscape, as well as buildings, which the Inca would visit for worship. Some important places of worship were directly connected by the main Inca roads, as in the case of the sanctuary of Pachacamac, through which the coastal road passed, just south of present-day Lima. History Inca Empire era Much of the system was the result of the Incas claiming exclusive right over numerous traditional routes, some of which had been constructed centuries earlier, mostly by the Wari empire in the central highlands of Peru and by the Tiwanaku culture. The latter had developed around Lake Titicaca, in the current territories of Peru and Bolivia, between the 6th and 12th centuries CE, and had set up a complex and advanced civilization. Many new sections of the road were built or substantially upgraded by the Incas: the one through Chile's Atacama desert and the one along the western margin of Lake Titicaca serve as two examples. The reign of the Incas originated during the Late Intermediate period (between 1000 CE and 1450 CE), when this group dominated only the region of Cusco. Inca Pachakutiq began the transformation and expansion of what decades later would become the Tawantinsuyu.
The historical stage of the Empire began around 1438 when, having settled the disputes with local populations around Cusco, the Incas started the conquest of the coastal valleys from Nasca to Pachacamac and of the other regions of Chinchaysuyu. Their strategy involved modifying or constructing a road structure that would ensure the connection of the incorporated territory with Cusco and with other administrative centers, allowing the movement of troops and officials. The Incas' military advance relied mostly on diplomatic deals before the annexation of new regions and the consolidation of dominion, with war considered a last resort. The foundation of cities and administrative centers connected by the road system ensured state control of the newly incorporated ethnic groups. Topa Inca Yupanqui succeeded Pachakutiq and conquered the Chimu, reaching the far northern region of Quito around 1463; later he extended the conquests to the jungle region of Charcas and, in the south, to Chile. Colonial era During the first years of the Colony, the Qhapaq Ñan suffered a period of abandonment and destruction caused by the abrupt decrease in the number of natives due to illness and war, which reduced the population from more than 12 million people to about 1.1 million within 50 years and destroyed the social structure that had provided the labor for road maintenance. The use of the Inca roads became partial and was adapted to the new political and economic aims of the Colony and later of the Viceroyalty, whose economic structure was based on the extraction of minerals and on commercial production. This implied a dramatic change in the use of the territory. The former integration of longitudinal and transversal territories was reduced to a connection of the Andean valleys and the Altiplano with the coast, to allow for the export of products, especially gold and silver, which started flowing to the coast and from there to Spain. A key factor in the dismantling of the network at the subcontinental level was the opening of new routes to connect the emerging production centers (estates and mines) with the coastal ports. In this context, only those routes that served the new needs remained in use; the rest were abandoned, particularly those that connected to the forts built during the advance of the Inca Empire or that linked agricultural areas with the administrative centres. Nevertheless, the ritual roads that gave access to the sanctuaries continued to be used under the religious syncretism that has characterized Andean history since the conquest. Cieza de León, writing in 1553, noted the abandonment of the road, stating that although in many places it was already broken down and undone, it showed what a great thing it had been. The admiration of the chroniclers was not enough to convince the Spanish rulers of the need to maintain and consolidate the road system rather than abandoning and destroying it. The resettlement of the local population into newly built settlements (known as reducciones, a sort of concentration camp) was among the causes of the abandonment of the Inca roads and of the building of new ones to connect the reducciones to the centers of Spanish power. Another important factor was the inadequacy of the roads for the horses and mules introduced by the conquerors, which became the new pack animals, substituting for the lightweight llamas.
Even the new agriculture introduced from Spain, consisting mainly of cereals, changed the appearance of the territory, which was sometimes transformed by cutting and joining several andenes (farming terraces); this in turn reduced the fertile soil through erosion from rain. The pre-Hispanic agricultural technologies were abandoned or displaced towards marginal spaces, relegated by the colonizers. Part of the network continued to be used, as did some of its equipment, such as the tambos, which were transformed into stores and shops in keeping with the tradition of Spain, where peasant produce was taken to such places for sale. The tambos thus entered a new stage as meeting spaces for different ways of life that irremediably ended up integrating new social and territorial structures. Post-colonial and modern times After independence from Spain, the American republics, throughout the 19th century, made no significant changes to the territory. In the case of Peru, the territorial structure established by the Colony was maintained, while the link between the production of the mountains and the coast was consolidated under a logic of extraction and export. The construction of modern roads and railways was adapted to this logic: it gave priority to communication with the coast and was complemented by transversal axes penetrating the inter-Andean valleys to channel production towards the coastal axis and its seaports. At the end of the eighteenth century, large estates were developed to supply raw materials to international markets, together with guano, so the maritime ports of Peru took on special relevance and intense activity, requiring adequate access from the areas of production. Some parts of the Inca roads remained in use in the southern Altiplano, giving access to the main centers of alpaca and vicuña wool production, which was in high demand on international markets. The twentieth-century organization of roads along the Andes gave priority to the Pan-American Highway along the coast, roughly following the traces of the coastal Inca road. This highway was then connected to west–east routes into the valleys, while the north–south Inca road up the mountains was mostly reduced to local pedestrian transit. In 2014 the road system became a UNESCO World Heritage Site. Architecture and engineering of the Inca roads The Incas built their road system by expanding and reinforcing several pre-existing smaller networks of roads, adapting and improving the previous infrastructure, setting up a system of formal roads, and providing a maintenance system that protected the roads and facilitated the movement and exchange of people, goods, and information. The outcome was a great road network of subcontinental dimensions, radiating from Cusco in the four cardinal directions that marked the territorial division of the Tawantinsuyu and allowing the Inca and his officers knowledge of everything that circulated on the roads, however far away. The Incas developed techniques to overcome the difficult territory of the Andes: on steep slopes they built stone steps, while in desert areas near the coast they built low walls to keep the sand from drifting over the road.
Construction and maintenance The manpower required for both construction and maintenance was obtained through the mita: a form of labor tax provided to the state by the conquered peoples, through which the Inca Empire produced the goods and performed the services it required, including the upkeep of roads and their associated infrastructure (bridges, tambos, warehouses, etc.). The labor was organized by officials who were in charge of the development, control, and operation of roads and bridges, as well as communications. The chronicler Felipe Guaman Poma de Ayala noted that these authorities were chosen from among the noble relatives of the Inca resident in Cusco. There were three main officials: the manager of the royal roads, the manager of bridges, and the manager of the chasquis. There were also several amojonadores, or builders of boundary markers. Architectural components Hyslop noted that there was no single road construction standard, because the roads were set in such varied environments and landscapes. Roadway and pavement In the mountains and the high forests, precisely arranged paving stones or cobbles were used for paving, laid with their flat faces upward in an effort to produce a uniform surface. Nevertheless, not all the roads were paved; in the Andean puna and in the coastal deserts the road was usually made of packed earth or sand, or simply by covering grassland with soil or sand. There is also evidence of paving with vegetable fibers, as in the road of Pampa Afuera in Casma (Áncash department, Peru). The width of the roadway varied between , although some roads could be much wider, such as the one leading to Huánuco Pampa. The Cusco-to-Quito portion of the road system, the most heavily trafficked, had a width always exceeding even in agricultural areas where the land had high value. Some portions reached a width of . Near urban and administrative centers there is evidence of two or three roads constructed in parallel. The maximum recorded width on the north coastal road is , while the average width of the south coastal road is . Side walls and stone rows Stones and walls served to mark the width of the road and to signal its course. On the coast and in the mountains, the availability of construction materials such as stone, and mud for preparing adobes, made it possible to build walls on both sides of the road, isolating it from agricultural land so that walkers and caravans traveled without damaging the crops. In the flatlands and in the deserts, these walls most probably prevented sand from covering the road. In the absence of walls, the roads in the more deserted areas also used stone rows and wooden poles driven into the sand as route markers. Stone rows were built with stones of similar sizes and shapes, placed next to each other on one or both edges of the road and arranged in a sort of curb. In some cases it has been observed that the sides of these stones were edged. Furrows Although not strictly a construction element used to delimit the edges of the road, furrows in some cases delimit the road on both sides. Examples of these furrows have been found in the coastal area south of the Chala district in Arequipa. Retaining walls Retaining walls were made with stones, adobes, or mud and were built on the hillsides. These walls held leveling fill to form the platform of the road, or supported soil that could otherwise slide down the slope, as is generally seen on the transversal roads that lead from the mountains to the coast.
Drainage Drainage by ditches or culverts was more frequent in the mountains and jungle owing to the constant rainfall. Along other road sections, the drainage of rainwater was carried out through an articulated system of longitudinal channels and shorter drains transverse to the axis of the road. Retaining walls were used along the mountain slopes and are similar to those used to support the terraces. When crossing wetlands, roads were often supported by buttress walls or built on causeways. Road marks At given distances the direction of the road was marked with stone piles (mojones in Spanish), a sort of milestone, generally placed on both sides of the road. They were columns of well-piled stones topped by a capping stone, often strategically placed on rises so that they could be spotted from long distances. The apachetas (South American cairns) were mounds of stones of different sizes, formed through gradual accumulation by travelers, who deposited stones as offerings to protect their journey from setbacks and allow for its successful conclusion. The apachetas were located at the side of the roads, in transitional places such as passes or at "points of interest" for travellers. This practice was condemned for its pagan character during the Colony and the Viceroyalty, when priests were ordered to dismantle the apachetas and plant crosses instead. Nevertheless, the tradition of making apachetas did not die out, and crosses or altars of different sizes came to be accompanied by mounds of stone. Paintings and mock-ups Some places such as rock shelters or cliffs show rock paintings next to the roads, which can be interpreted as reinforcing the marking of the route. The generally zoomorphic painted representations correspond to stylized camelids, in the typical Inca design and color. Figures carved directly on the stone are also found. Rock arrangements of varying sizes at the roadside, made up of one or more rocks, can represent the shapes of the mountains or important glaciers of the region, as an expression of the sacralization of geography. Causeways In damp areas, embankments were built to produce causeways; in rocky terrain it was necessary to cut the path into the rock or to carry it across an artificial terrace with retaining walls. Some important causeways, such as those on the coast of Lake Titicaca, were built to take into account the periodic variation of the lake level due to alternating rainy and dry seasons; they had stone bridges to allow the free flow of water below them. Stairways In order to overcome the limitations imposed by the roughness of the relief and the adverse environmental conditions, the Inca engineers designed different solutions. On rocky outcrops the road became narrower, adapting to the orography with frequent turns and retaining walls, but on particularly steep slopes flights of stairs or ramps were built or carved into the rock. Bridges There were multiple types of bridges used throughout the road system, and they were sometimes built in pairs. Some bridges were made of parallel logs tied together with ropes and covered with earth and vegetable fibers, supported by stone abutments, while others were built of stone slabs resting on piled stones. One of the difficulties of creating wooden bridges was obtaining logs; sometimes the laborers who were making the bridges had to bring the lumber from very far away. Wooden bridges would be replaced about every eight years. The construction of bridges was accomplished with the help of many workers.
It implied first of all the construction of abutments, normally made of stone, both rough and dressed. The masonry could be extremely well fitted, with no evidence of any mortar being used to keep the stones in place. The Incas, having no iron, worked stone with simple tools such as hammerstones, pounding the rocks so that the contours of the upper rock matched those of the rock below and the seams fit perfectly without mortar. For simple log bridges, the construction was done by placing a series of logs over projecting canes. Stone bridges could span only shorter distances and required shallower rivers: slabs were placed over the abutments and, when necessary, over intermediate stone pillars. A very special stone bridge, recently discovered in Bolivia, consists of a relatively small opening to allow the stream to flow and a quite imposing stone embankment filling the valley sides in order to carry the road on top of it. To cross rivers with flat banks, floating reeds tied together were used, forming a row of totora boats placed side by side and covered with a deck of totora and earth. Inca rope bridges also provided access across narrow valleys. A bridge across the Apurímac River, west of Cusco, spanned a distance of . Rope bridges had to be replaced about every two years: to this end, the communities around the river crossing were commanded into a mita for the construction of the new bridge, while the old bridge was cut loose and allowed to fall into the river. This type of bridge was built with ropes of vegetable fibers, such as ichu (Stipa ichu), a grass typical of the Altiplano, which were tied together to form the cords and ropes constituting the floor cables of the bridge, the two handrails, and the necessary connections between them. Ravines were sometimes crossed by large hanging baskets, or oroyas, which could span distances of over . Tunnel To access the famous Apurímac rope bridge it was necessary for the road to reach the narrowest section of the gorge: to this end, the road was cut along a natural fault into the steep rock of the valley, and a tunnel was carved to facilitate the way. The tunnel had a series of side openings allowing light to come in. There is no evidence of other tunnels along the Inca roads. Equipment Garcilaso de la Vega underlines the infrastructure along the Inca road system: all across the Empire, lodging posts for state officials and chasqui messengers were ubiquitous, well spaced, and well provisioned. Food, clothes, and weapons were also stored there, kept ready for the Inca army marching through the territory. The tambos were the most numerous, and perhaps the most important, buildings in the operation of the road network. They were constructions of varied architecture and size whose function was mainly the lodging of travellers and the storage of products for their supply. For this reason they were located at intervals of a day's journey, although irregularities in their spacing have been identified, probably linked to factors such as the presence of water sources, of agriculturally productive land, or of pre-Inca centers. The tambos were most probably administered by the local populations, since many of them are associated with settlements containing additional constructions for different uses, such as canchas (rectangular enclosures bordered by a wall, probably used as accommodation for walkers), collcas, and kallancas.
The latter were rectangular buildings of considerable size, which the Conquistadors called barns on account of their length. They were used for ceremonies and for accommodating people of diverse kinds: members of the Inca or local elites, mitimaes, or other travelers. Tambos were so frequent that many Andean regional place names include the word tambo. At the roadside, chasquiwasis, the relay stations for the Inca messengers known as chasquis, were frequent. In these places the chasquis waited for the messages they had to carry to other locations. The fast flow of information was important for an Empire in constant expansion. The chasquiwasis were normally quite small, and there is little archaeological evidence of, or research on, them. Inca Trail to Machu Picchu Machu Picchu itself was far off the beaten path and served as a royal estate populated by the ruling Inca and several hundred servants. It required regular infusions of goods and services from Cusco and other parts of the Empire; this is evidenced by the fact that there are no large government storage facilities at the site. A 1997 study concluded that the site's agricultural potential would not have been sufficient to support its residents, even on a seasonal basis.
https://en.wikipedia.org/wiki/Scallion
Scallion
Scallions (also known as green onions and spring onions) are edible vegetables of various species in the genus Allium. Scallions generally have a milder taste than most onions. Their close relatives include garlic, shallots, leeks, chives, and Chinese onions. The leaves are eaten both raw and cooked. Scallions produce hollow, tubular, green leaves that grow directly from the bulb, which does not fully develop. This differs from other Allium species, such as commercially available onions and garlic, whose bulbs do fully develop. With scallions, it is the leaves that are typically chopped into various dishes and used as garnishes. Etymology and naming The names scallion and shallot derive from the Old French eschalotte, by way of eschaloigne, from the Latin Ascalōnia caepa or "Ascalonian onion", a namesake of the ancient Eastern Mediterranean coastal city of Ascalon. Other names used in various parts of the world include spring onion, green onion, table onion, salad onion, onion stick, long onion, baby onion, precious onion, wild onion, yard onion, gibbon, syboe (Scots), and shallot. Varieties Species and cultivars that may be used as scallions include:
- A. cepa, including the cultivars:
  - 'White Lisbon'
  - 'White Lisbon Winter Hardy' – an extra-hardy variety for overwintering
  - Calçot
- A. cepa var. cepa – most of the cultivars grown in the West as scallions belong to this variety; scallions from A. cepa var. cepa (the common onion) are usually from a young plant, harvested before a bulb forms or sometimes soon after slight bulbing has occurred
- A. cepa var. aggregatum (formerly A. ascalonicum) – commonly called shallots or sometimes eschalots
- A. chinense
- A. fistulosum, the Welsh onion – does not form bulbs even when mature, and is grown in the West almost exclusively as a scallion or salad onion
- A. × proliferum – sometimes used as scallions
Germination Scallions generally take 7–14 days to germinate, depending on the variety. Uses Culinary Scallions may be cooked or used raw, often as part of salads or salsas, or as a garnish. Scallion oil is sometimes made from the green leaves, which are chopped, lightly cooked, and emulsified in a vegetable oil. In Catalan cuisine, the calçot is a type of onion traditionally eaten in a calçotada (plural: calçotades), an eponymous gastronomic event traditionally held between the end of winter and early spring, where calçots are grilled, dipped in salvitxada or romesco sauce, and consumed in massive quantities. In Ireland, chopped scallions are added to mashed potatoes, a dish known as champ, or used as an added ingredient in colcannon. In Mexico and the Southwest United States, cebollitas are scallions sprinkled with salt, grilled whole, and eaten with lime juice, cheese, and rice; they are typically served as a traditional accompaniment to asado dishes. At the Passover meal (Seder), Afghan Jews and Persian Jews strike one another with scallions before singing "Dayenu", re-enacting the whipping endured by the Hebrews enslaved by the ancient Egyptians. In Asian cuisine, diced scallions are often used in soup, noodle, and seafood dishes, in sandwiches and curries, and as part of stir-fries. The bottom half-centimetre of the root is commonly removed before use. In China, scallions are commonly used together with ginger and garlic to cook a wide variety of vegetables and meat. This combination is often called the "holy trinity" of Chinese cooking, much like the mirepoix (celery, onions, and carrots) in French cuisine or the holy trinity in Cajun cuisine.
The white part of the scallion is usually fried with other ingredients, while the green part is usually chopped and used to decorate the finished dish. In India, scallions are sometimes eaten raw as an appetizer. In northern India, coriander, mint, and onion chutneys are made using uncooked scallions, which are also used as a vegetable with chapatis and rotis. In southern India, spring onions stir-fried with coconut and shallots (known as Vengaya Thazhai Poriyal in Tamil and Ulli Thandu Upperi in Malayalam) are served as a side dish with rice. In Japan, tree onions (wakegi) are used mostly as a topping for Japanese dishes such as tofu. In Nepal, scallions are used in the fillings of various meat dishes, such as momo and choyla (meat mixed with scallion and spices). In the southern Philippines, scallion is ground in a mortar together with ginger and chili pepper to make a native condiment called wet palapa, which can be used to spice dishes or as a topping for fried or sun-dried food; it can also be used to make the dry version of palapa, in which it is stir-fried with fresh coconut shavings and wet palapa. In Vietnam, the Welsh onion is important for preparing dưa hành (fermented onions), which is served for Tết, the Vietnamese New Year. A kind of sauce, mỡ hành (Welsh onion fried in oil), is used in dishes such as cơm tấm, bánh ít, and cà tím nướng. The Welsh onion is the main ingredient in cháo hành, a rice porridge used to treat the common cold.
https://en.wikipedia.org/wiki/Strangeness
Strangeness
In particle physics, strangeness (symbol S) is a property of particles, expressed as a quantum number, for describing the decay of particles in strong and electromagnetic interactions that occur in a short period of time. The strangeness of a particle is defined as

$S = -(n_\text{s} - n_{\bar{\text{s}}})$

where $n_\text{s}$ represents the number of strange quarks (s) and $n_{\bar{\text{s}}}$ represents the number of strange antiquarks (s̄). Evaluation of strangeness production has become an important tool in the search for, discovery, observation, and interpretation of quark–gluon plasma (QGP). Strangeness is an excited state of matter and its decay is governed by CKM mixing. The terms strange and strangeness predate the discovery of the quark and were adopted after its discovery in order to preserve the continuity of the phrase: the strangeness of particles is −1 and that of antiparticles is +1, per the original definition. For all the quark flavour quantum numbers (strangeness, charm, topness, and bottomness) the convention is that the flavour charge and the electric charge of a quark have the same sign. With this, any flavour carried by a charged meson has the same sign as its charge. Conservation Strangeness was introduced by Murray Gell-Mann, Abraham Pais, Tadao Nakano, and Kazuhiko Nishijima to explain the fact that certain particles, such as the kaons or certain hyperons, were created easily in particle collisions, yet decayed much more slowly than expected for their large masses and large production cross sections. Noting that collisions seemed always to produce pairs of these particles, it was postulated that a new conserved quantity, dubbed "strangeness", was preserved during their creation but not conserved in their decay. In our modern understanding, strangeness is conserved during the strong and the electromagnetic interactions, but not during the weak interactions. Consequently, the lightest particles containing a strange quark cannot decay by the strong interaction and must instead decay via the much slower weak interaction. In most cases these decays change the value of the strangeness by one unit. This does not necessarily hold in second-order weak reactions, however, where there are mixes of $K^0$ and $\bar{K}^0$ mesons. All in all, the amount of strangeness can change in a weak interaction reaction by +1, 0, or −1 (depending on the reaction). For example, a K− meson can interact with a proton in a reaction in which strangeness is conserved; such an interaction proceeds via the strong nuclear force. By contrast, in the decay of the positive kaon, $K^+ \to \pi^+ + \pi^0$, both pions have a strangeness of 0; this violates conservation of strangeness, meaning the decay must proceed via the weak force.
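The bookkeeping behind these rules can be made explicit with a standard textbook example (an illustration chosen here, not necessarily the reaction used in the original article). In the associated production of strange particles via the strong interaction, strangeness is conserved:

$\pi^- + p \to K^0 + \Lambda^0, \qquad S:\; 0 + 0 = (+1) + (-1) = 0$

since the $K^0$ ($d\bar{s}$) carries $S = +1$ and the $\Lambda^0$ ($uds$) carries $S = -1$. The subsequent decay of the lambda hyperon, however, changes strangeness by one unit and can therefore proceed only via the weak interaction:

$\Lambda^0 \to p + \pi^-, \qquad S:\; -1 \to 0 + 0, \quad \Delta S = +1.$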
https://en.wikipedia.org/wiki/HD%20209458%20b
HD 209458 b
HD 209458 b is an exoplanet that orbits the solar analog HD 209458 in the constellation Pegasus, some from the Solar System. The radius of the planet's orbit is , or one-eighth the radius of Mercury's orbit (). This small radius results in a year that is 3.5 Earth-days long and an estimated surface temperature of about . Its mass is 220 times that of Earth (0.69 Jupiter masses) and its volume is some 2.5 times greater than that of Jupiter. The high mass and volume of HD 209458 b indicate that it is a gas giant. HD 209458 b represents a number of milestones in exoplanetary research, having been the first in many categories:
- the first transiting extrasolar planet
- the first planet detected through more than one method
- the first extrasolar planet known to have an atmosphere
- the first extrasolar planet observed to have an evaporating hydrogen atmosphere
- the first extrasolar planet found to have an atmosphere containing the elements oxygen and carbon
- one of the first two extrasolar planets to be directly observed spectroscopically
- the first extrasolar gas giant to have its superstorm measured
- the first planet to have its orbital speed measured, determining its mass directly
Based on the application of newer theoretical models, as of April 2007 it is thought to be the first extrasolar planet found to have water vapor in its atmosphere. In July 2014, NASA announced the finding of very dry atmospheres on HD 209458 b and two other exoplanets (HD 189733 b and WASP-12b) orbiting Sun-like stars. HD 209458 b has been nicknamed "Osiris" after the Egyptian god. This nickname has been acknowledged by the IAU, but it has not yet been approved as an official proper name. Detection and discovery Transits Spectroscopic studies first revealed the presence of a planet around HD 209458 on November 5, 1999. Astronomers had made careful photometric measurements of several stars known to be orbited by planets, in the hope of observing a dip in brightness caused by the transit of the planet across the star's face. This would require the planet's orbit to be inclined such that it would pass between the Earth and the star, and previously no transits had been detected. Soon after the discovery, two separate teams, one led by David Charbonneau and including Timothy Brown and others, and the other led by Gregory W. Henry, were able to detect a transit of the planet across the surface of the star, making it the first known transiting extrasolar planet. On September 9 and 16, 1999, Charbonneau's team measured a 1.7% drop in HD 209458's brightness, which was attributed to the passage of the planet across the star. On November 8, Henry's team observed a partial transit, seeing only the ingress. Initially unsure of their results, the Henry group decided to rush to publication after overhearing rumors that Charbonneau had successfully seen an entire transit in September. Papers from both teams were published simultaneously in the same issue of the Astrophysical Journal. Each transit lasts about three hours, during which the planet covers about 1.5% of the star's face. The star had been observed many times by the Hipparcos satellite, whose data allowed astronomers to calculate the orbital period of HD 209458 b very accurately at 3.524736 days. Spectroscopic Spectroscopic analysis had shown that the planet had a mass about 0.69 times that of Jupiter. The occurrence of transits allowed astronomers to calculate the planet's radius, which had not been possible for any previously known exoplanet; it turned out to have a radius some 35% larger than Jupiter's.
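The numbers above are linked by two simple relations: the transit depth is roughly the square of the planet-to-star radius ratio, and Kepler's third law ties the 3.52-day period to the orbital radius. The sketch below illustrates both; the stellar mass and radius used (about 1.1 solar masses and 1.15 solar radii) are assumed values for a Sun-like star, not figures from this article.

import math

# Assumed Sun-like host-star parameters (illustrative, not from the article)
M_STAR = 1.1 * 1.989e30    # kg
R_STAR = 1.15 * 6.957e8    # m
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2

# A transit depth of ~1.5% (fraction of the star's face covered) implies:
depth = 0.015
radius_ratio = math.sqrt(depth)               # Rp / Rs ~ 0.12
r_planet = radius_ratio * R_STAR
print(f"Planet radius ~ {r_planet / 7.149e7:.2f} Jupiter radii")   # ~1.37

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
P = 3.524736 * 86400                          # orbital period in seconds
a = (G * M_STAR * P**2 / (4 * math.pi**2)) ** (1/3)
print(f"Orbital radius ~ {a / 1.496e11:.3f} AU")                   # ~0.047

Both outputs are consistent with the radius "some 35% larger than Jupiter's" and the one-eighth-of-Mercury's-orbit figure quoted above.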
It had been previously hypothesized that hot Jupiters particularly close to their parent star should exhibit this kind of inflation, due to intense heating of their outer atmosphere. Tidal heating due to the orbit's eccentricity, which may have been higher at formation, may also have played a role over the past billion years. Direct detection On March 22, 2005, NASA released news that infrared light from the planet had been measured by the Spitzer Space Telescope, the first ever direct detection of light from an extrasolar planet. This was done by subtracting the parent star's constant light and noting the difference as the planet transited in front of the star and was eclipsed behind it, providing a measure of the light from the planet itself. New measurements from this observation determined the planet's temperature as at least . The nearly circular orbit of HD 209458 b was also confirmed. Spectral observation On February 21, 2007, NASA and Nature released news that HD 209458 b was one of the first two extrasolar planets to have their spectra directly observed, the other being HD 189733 b. This was long seen as the first mechanism by which extrasolar but non-sentient life forms could be searched for, by way of their influence on a planet's atmosphere. A group of investigators led by Jeremy Richardson of NASA's Goddard Space Flight Center spectrally measured HD 209458 b's atmosphere in the range of 7.5 to 13.2 micrometres. The results defied theoretical expectations in several ways. The spectrum had been predicted to have a peak at 10 micrometres, which would have indicated water vapor in the atmosphere, but such a peak was absent, indicating no detectable water vapor. An unpredicted peak was observed at 9.65 micrometres, which the investigators attributed to clouds of silicate dust, a phenomenon not previously observed. A further unpredicted peak occurred at 7.78 micrometres, for which the investigators had no explanation. A separate team led by Mark Swain of the Jet Propulsion Laboratory reanalyzed the Richardson et al. data and made similar findings, though they had not yet published their results when the Richardson et al. article came out. On 23 June 2010, astronomers announced that they had measured a superstorm (with windspeeds of up to ) for the first time in the atmosphere of HD 209458 b. Very high-precision observations of carbon monoxide gas made with ESO's Very Large Telescope and its powerful CRIRES spectrograph showed that the gas streams at enormous speed from the extremely hot day side to the cooler night side of the planet. The observations also allowed another first: measuring the orbital speed of the exoplanet itself, providing a direct determination of its mass. As of 2021, spectra of the planet's atmosphere taken by different instruments remain highly inconsistent, indicating either a metal-poor atmosphere, temperatures below blackbody equilibrium, or disequilibrium atmospheric chemistry. Rotation In August 2008, measurement of HD 209458 b's Rossiter–McLaughlin effect yielded a spin–orbit angle of −4.4 ± 1.4°. A 2012 study updated the spin–orbit angle to −5°. Physical characteristics Stratosphere and upper clouds The atmosphere reaches a pressure of one bar at an altitude of 1.29 Jupiter radii from the planet's center. Where the pressure is 33 ± 5 millibars, the atmosphere is clear (probably hydrogen) and its Rayleigh scattering is detectable. At that pressure, the temperature is .
Observations by the orbiting Microvariability and Oscillations of STars (MOST) telescope initially limited the planet's albedo (or reflectivity) to below 0.3, making it a surprisingly dark object. (The geometric albedo has since been measured to be 0.038 ± 0.045.) In comparison, Jupiter has a much higher albedo of 0.52. This would suggest that HD 209458 b's upper cloud deck is either made of less reflective material than Jupiter's, or else has no clouds and Rayleigh-scatters incoming radiation like Earth's dark ocean. Models since then have shown that between the top of its atmosphere and the hot, high-pressure gas surrounding the mantle, there exists a stratosphere of cooler gas. This implies an outer shell of dark, opaque, hot clouds, usually thought to consist of vanadium and titanium oxides, though other compounds like tholins cannot yet be ruled out. A 2016 study indicates that the high-altitude cloud cover is patchy, with about 57 percent coverage. The Rayleigh-scattering heated hydrogen rests at the top of the stratosphere; the absorptive portion of the cloud deck floats above it at 25 millibars.

Exosphere

On November 27, 2001, astronomers announced that they had detected sodium in the atmosphere of the planet, using observations with the Hubble Space Telescope. This was the first planetary atmosphere outside the Solar System to be measured. The core of the sodium line spans pressures from 50 millibars down to a microbar. This turns out to be about a third of the amount of sodium at HD 189733 b. As of 2020, however, additional data had not confirmed the presence of sodium in the atmosphere of HD 209458 b.

In 2003–04, astronomers used the Hubble Space Telescope Imaging Spectrograph to discover an enormous ellipsoidal envelope of hydrogen, carbon and oxygen around the planet. The hydrogen exosphere extends to a distance RH = 3.1 RJ, much larger than the planetary radius of 1.32 RJ. At this temperature and distance, the Maxwell–Boltzmann distribution of particle velocities gives rise to a significant "tail" of atoms moving at speeds greater than the escape velocity, and the planet is estimated to be losing a substantial amount of hydrogen every second. Analysis of the starlight passing through the envelope shows that the heavier carbon and oxygen atoms are being blown from the planet by the extreme "hydrodynamic drag" created by its evaporating hydrogen atmosphere. The hydrogen tail streaming from the planet is roughly as long as the planet's diameter. It is thought that this type of atmosphere loss may be common to all planets orbiting Sun-like stars at very close range. HD 209458 b will not evaporate entirely, although it may have lost up to about 7% of its mass over its estimated lifetime of 5 billion years. It is possible that the planet's magnetic field prevents some of this loss, because the exosphere would become ionized by the star, and the magnetic field would retain the ions.
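The "tail" argument can be made quantitative: for a Maxwell–Boltzmann speed distribution, the fraction of atoms faster than the escape velocity depends only on the ratio of escape speed to thermal speed. The sketch below is a toy Jeans-style estimate, not the full hydrodynamic calculation; the exosphere temperature is an assumed illustrative value, while the mass and exosphere radius are taken from the text.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
m_H = 1.67e-27       # hydrogen atom mass, kg
M_JUP = 1.898e27     # Jupiter mass, kg
R_JUP = 7.1492e7     # Jupiter radius, m

M_planet = 0.69 * M_JUP   # planet mass (from the text)
r_exo = 3.1 * R_JUP       # exosphere radius (from the text)
T_exo = 1.0e4             # assumed exosphere temperature, K (illustrative)

v_esc = math.sqrt(2 * G * M_planet / r_exo)   # escape speed at the exosphere
v_p = math.sqrt(2 * k_B * T_exo / m_H)        # most probable thermal speed

# Fraction of a Maxwell-Boltzmann population with speed above v_esc:
x = v_esc / v_p
frac = (2 / math.sqrt(math.pi)) * x * math.exp(-x * x) + math.erfc(x)

print(f"escape speed ~ {v_esc / 1000:.0f} km/s, thermal speed ~ {v_p / 1000:.1f} km/s")
print(f"fraction of H atoms above escape speed ~ {frac:.1e}")
```

Even a small, continuously replenished tail drains mass over billions of years; the actual escape from HD 209458 b is better described as a hydrodynamic outflow, which is far more efficient than this ballistic picture.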
Atmosphere composition

On April 10, 2007, Travis Barman of the Lowell Observatory announced evidence that the atmosphere of HD 209458 b contained water vapor. Using a combination of previously published Hubble Space Telescope measurements and new theoretical models, Barman found strong evidence for water absorption in the planet's atmosphere. His method modeled light passing directly through the atmosphere from the planet's star as the planet passed in front of it. However, this hypothesis was still being investigated for confirmation.

Barman drew on data and measurements taken by Heather Knutson, a student at Harvard University, from the Hubble Space Telescope, and applied new theoretical models to demonstrate the likelihood of water absorption in the atmosphere of the planet. The planet orbits its parent star every three and a half days, and each time it passes in front of its parent star, the atmospheric contents can be analyzed by examining how the atmosphere absorbs light passing from the star directly through the atmosphere in the direction of Earth. According to a summary of the research, atmospheric water absorption in such an exoplanet makes it appear larger across one part of the infrared spectrum compared to wavelengths in the visible spectrum. Barman took Knutson's Hubble data on HD 209458 b, applied it to his theoretical model, and identified what appeared to be water absorption in the planet's atmosphere. On April 24, the astronomer David Charbonneau, who led the team that made the Hubble observations, cautioned that the telescope itself may have introduced variations that caused the theoretical model to suggest the presence of water. He hoped that further observations would clear the matter up in the following months. As of April 2007, further investigation was being conducted.

On October 20, 2009, researchers at JPL announced the discovery of water vapor, carbon dioxide, and methane in the atmosphere. Refined spectra obtained in 2021 instead detected water vapor, carbon monoxide, hydrogen cyanide, methane, ammonia and acetylene, all consistent with an extremely high carbon-to-oxygen molar ratio of 1.0 (the Sun's C/O molar ratio is 0.55). If confirmed, HD 209458 b may be a prime example of a carbon planet.

Magnetic field

In 2014, a magnetic field around HD 209458 b was inferred from the way hydrogen was evaporating from the planet. It was the first (indirect) detection of a magnetic field on an exoplanet. The magnetic field is estimated to be about one tenth as strong as Jupiter's.

Search for radio emissions

Since HD 209458 b orbits less than 0.1 AU from its host star, theorists hypothesized that it might cause stellar flaring synchronized to the orbital period of the exoplanet. A 2011 search for the coronal radio emissions that such magnetic star–planet interactions would produce did not detect any signal. Similarly, no magnetospheric radio emissions were detected from the planet itself.
Physical sciences
Notable exoplanets
Astronomy
454323
https://en.wikipedia.org/wiki/Scintillator
Scintillator
A scintillator is a material that exhibits scintillation, the property of luminescence, when excited by ionizing radiation. Luminescent materials, when struck by an incoming particle, absorb its energy and scintillate (i.e., re-emit the absorbed energy in the form of light). Sometimes the excited state is metastable, so the relaxation back down from the excited state to lower states is delayed (taking anywhere from a few nanoseconds to hours depending on the material). The process then corresponds to one of two phenomena: delayed fluorescence or phosphorescence. The correspondence depends on the type of transition and hence the wavelength of the emitted optical photon.

Principle of operation

A scintillation detector or scintillation counter is obtained when a scintillator is coupled to an electronic light sensor such as a photomultiplier tube (PMT), photodiode, or silicon photomultiplier. PMTs absorb the light emitted by the scintillator and re-emit it in the form of electrons via the photoelectric effect. The subsequent multiplication of those electrons (sometimes called photo-electrons) results in an electrical pulse which can then be analyzed and yield meaningful information about the particle that originally struck the scintillator. Vacuum photodiodes are similar but do not amplify the signal, while silicon photodiodes detect incoming photons by the excitation of charge carriers directly in the silicon. Silicon photomultipliers consist of an array of photodiodes which are reverse-biased with sufficient voltage to operate in avalanche mode, enabling each pixel of the array to be sensitive to single photons.
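The pulse height such a counter records can be estimated by chaining the conversion efficiencies just described: deposited energy to scintillation photons, photons to photoelectrons at the photocathode, and photoelectrons to multiplied charge at the anode. The sketch below uses assumed, typical-order-of-magnitude numbers for the light yield, collection efficiency, quantum efficiency, and PMT gain; none of these are values from this article.

```python
# Rough signal-chain estimate for a scintillation counter.
# All parameter values are assumed, typical-order-of-magnitude numbers.
E_DEP_KEV = 662          # energy deposited by a gamma ray, keV (137Cs line)
LIGHT_YIELD = 38         # scintillation photons per keV (bright-crystal ballpark)
COLLECTION_EFF = 0.7     # fraction of photons reaching the photocathode
QUANTUM_EFF = 0.25       # photocathode quantum efficiency at the emission wavelength
PMT_GAIN = 1e6           # electron multiplication factor of the PMT
E_CHARGE = 1.602e-19     # coulombs per electron

n_photons = E_DEP_KEV * LIGHT_YIELD
n_photoelectrons = n_photons * COLLECTION_EFF * QUANTUM_EFF
anode_charge = n_photoelectrons * PMT_GAIN * E_CHARGE

print(f"{n_photons:,.0f} photons -> {n_photoelectrons:,.0f} photoelectrons")
print(f"anode charge ~ {anode_charge * 1e9:.2f} nC")
```

The anode charge, integrated by the readout electronics, is what becomes the analyzable pulse; every factor in the chain scales it linearly, which is why light yield and quantum efficiency matter so much.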
History

The first device which used a scintillator was built in 1903 by Sir William Crookes and used a ZnS screen. The scintillations produced by the screen were visible to the naked eye if viewed through a microscope in a darkened room; the device was known as a spinthariscope. The technique led to a number of important discoveries but was obviously tedious. Scintillators gained additional attention in 1944, when Curran and Baker replaced the naked-eye measurement with the newly developed PMT. This was the birth of the modern scintillation detector.

Applications for scintillators

Scintillators are used by the American government as Homeland Security radiation detectors. Scintillators can also be used in particle detectors, new energy resource exploration, X-ray security, nuclear cameras, computed tomography and gas exploration. Other applications of scintillators include CT scanners and gamma cameras in medical diagnostics, and screens in older-style CRT computer monitors and television sets. Scintillators have also been proposed as part of theoretical models for the harnessing of gamma-ray energy through the photovoltaic effect, for example in a nuclear battery. The use of a scintillator in conjunction with a photomultiplier tube finds wide use in hand-held survey meters used for detecting and measuring radioactive contamination and monitoring nuclear material. Scintillators generate light in fluorescent tubes, converting the ultraviolet of the discharge into visible light. Scintillation detectors are also used in the petroleum industry as detectors for gamma-ray logs.

Properties of scintillators

There are many desired properties of scintillators, such as high density, fast operation speed, low cost, radiation hardness, production capability, and durability of operational parameters.

High density reduces the material size of showers for high-energy γ-quanta and electrons. The range of Compton-scattered photons for lower-energy γ-rays is also decreased via high-density materials. This results in high segmentation of the detector and leads to better spatial resolution. Usually high-density materials have heavy ions in the lattice (e.g., lead, cadmium), significantly increasing the contribution of the photoelectric effect (~Z⁴). The increased photo-fraction is important for some applications such as positron emission tomography. High stopping power for the electromagnetic component of the ionizing radiation requires a greater photo-fraction; this allows for a compact detector.

High operating speed is needed for good resolution of spectra. The precision of time measurement with a scintillation detector is proportional to √τsc, the square root of the scintillation decay time. Short decay times are important for the measurement of time intervals and for operation in fast coincidence circuits. High density and fast response time can allow the detection of rare events in particle physics.

The particle energy deposited in the material of a scintillator is proportional to the scintillator's response. Charged particles, γ-quanta and ions have different slopes when their response is measured. Thus, scintillators can be used to identify various types of γ-quanta and particles in fluxes of mixed radiation.

Another consideration is the cost of producing scintillators. Most crystal scintillators require high-purity chemicals and sometimes rare-earth metals that are fairly expensive. Not only are the materials an expenditure, but many crystals require expensive furnaces and almost six months of growth and analysis time. Currently, other scintillators are being researched for reduced production cost.

Several other properties are also desirable in a good detector scintillator:
a high light output (i.e., a high efficiency for converting the energy of incident radiation into scintillation photons)
transparency to its own scintillation light (for good light collection)
efficient detection of the radiation being studied
a high stopping power
good linearity over a wide range of energy
a short rise time for fast timing applications (e.g., coincidence measurements)
a short decay time to reduce detector dead-time and accommodate high event rates
emission in a spectral range matching the spectral sensitivity of existing PMTs (although wavelength shifters can sometimes be used)
an index of refraction near that of glass (≈1.5) to allow optimum coupling to the PMT window

Ruggedness and good behavior under high temperature may be desirable where resistance to vibration and high temperature is necessary (e.g., oil exploration). The practical choice of a scintillator material is usually a compromise among those properties to best fit a given application.

Among the properties listed above, the light output is the most important, as it affects both the efficiency and the resolution of the detector (the efficiency is the ratio of detected particles to the total number of particles impinging upon the detector; the energy resolution is the ratio of the full width at half maximum of a given energy peak to the peak position, usually expressed in %). The light output is a strong function of the type of incident particle or photon and of its energy, which therefore strongly influences the type of scintillation material to be used for a particular application.
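Because the emitted photons are counted subject to Poisson statistics, the light output sets a statistical floor on the energy resolution defined above: with N detected photoelectrons, the relative width of a peak cannot be better than roughly 2.355/√N (FWHM). A minimal sketch follows; the 20% overall photon-detection efficiency is an assumed illustrative value, and the light yields are the rough per-keV figures quoted in the surrounding text.

```python
import math

def resolution_fwhm(energy_kev, photons_per_kev, detection_eff=0.2):
    """Statistical limit on energy resolution (FWHM / peak position).

    With n_pe detected photoelectrons, sigma/mean = 1/sqrt(n_pe)
    and FWHM = 2.355 * sigma for a Gaussian peak.
    """
    n_pe = energy_kev * photons_per_kev * detection_eff
    return 2.355 / math.sqrt(n_pe)

# Light yields as quoted in the text (photons/keV); 662 keV gamma line.
for name, y in [("bright crystal", 40), ("plastic", 10), ("BGO", 8)]:
    print(f"{name:14s}: ~{100 * resolution_fwhm(662, y):.1f}% FWHM at 662 keV")
```

Real detectors do somewhat worse than this floor because of scintillator non-proportionality, non-uniform light collection, and PMT gain spread, but the scaling with light output is the dominant effect.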
The presence of quenching effects results in reduced light output (i.e., reduced scintillation efficiency). Quenching refers to all radiationless de-excitation processes, in which the excitation is degraded mainly to heat. The overall signal-production efficiency of the detector, however, also depends on the quantum efficiency of the PMT (typically ~30% at peak), and on the efficiency of light transmission and collection (which depends on the type of reflector material covering the scintillator and light guides, the length/shape of the light guides, any light absorption, etc.). The light output is often quantified as the number of scintillation photons produced per keV of deposited energy. Typical numbers are (when the incident particle is an electron): ≈40 photons/keV for NaI(Tl), ~10 photons/keV for plastic scintillators, and ~8 photons/keV for bismuth germanate (BGO).

Scintillation detectors are generally assumed to be linear. This assumption is based on two requirements: (1) that the light output of the scintillator is proportional to the energy of the incident radiation; (2) that the electrical pulse produced by the photomultiplier tube is proportional to the emitted scintillation light. The linearity assumption is usually a good rough approximation, although deviations can occur (and are especially pronounced for particles heavier than the proton at low energies).

Resistance and good behavior in high-temperature, high-vibration environments are especially important for applications such as oil exploration (wireline logging, measurement while drilling). For most scintillators, light output and scintillation decay time depend on the temperature. This dependence can largely be ignored for room-temperature applications, since it is usually weak. The dependence on temperature is also weaker for organic scintillators than it is for inorganic crystals such as NaI(Tl) or BGO. The strong dependence of decay time on temperature in the BGO scintillator is used for remote monitoring of temperature in vacuum environments. The coupled PMTs also exhibit temperature sensitivity, and can be damaged if subjected to mechanical shock. Hence, high-temperature rugged PMTs should be used for high-temperature, high-vibration applications.

The time evolution of the number of emitted scintillation photons N in a single scintillation event can often be described by a linear superposition of one or two exponential decays. For two decays, we have the form:

N(t) = A·exp(−t/τf) + B·exp(−t/τs)

where τf and τs are the fast (or prompt) and the slow (or delayed) decay constants. Many scintillators are characterized by two time components: one fast (or prompt), the other slow (or delayed). While the fast component usually dominates, the relative amplitudes A and B of the two components depend on the scintillating material. Both of these components can also be a function of the energy loss dE/dx. In cases where this energy-loss dependence is strong, the overall decay time constant varies with the type of incident particle. Such scintillators enable pulse shape discrimination, i.e., particle identification based on the decay characteristics of the PMT electric pulse. For instance, when BaF2 is used, γ rays typically excite the fast component, while α particles excite the slow component: it is thus possible to identify them based on the decay time of the PMT signal.
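The two-component form above is the basis of pulse shape discrimination in practice: integrating the pulse after a gate time and dividing by the total integral gives a tail-to-total ratio that differs between particle species. A minimal sketch, using the BaF2 decay constants quoted in the text but assumed, purely illustrative amplitude mixes:

```python
import math

def tail_to_total(A, tau_f, B, tau_s, t_gate):
    """Charge after t_gate divided by total charge, for
    N(t) = A*exp(-t/tau_f) + B*exp(-t/tau_s) (analytic integrals)."""
    total = A * tau_f + B * tau_s
    tail = (A * tau_f * math.exp(-t_gate / tau_f)
            + B * tau_s * math.exp(-t_gate / tau_s))
    return tail / total

# BaF2-like decay constants from the text (ns); gate placed at 50 ns.
# The amplitude mixes below are assumed for illustration only.
gamma_like = tail_to_total(A=1.0, tau_f=0.7, B=0.005, tau_s=630.0, t_gate=50.0)
alpha_like = tail_to_total(A=0.1, tau_f=0.7, B=0.100, tau_s=630.0, t_gate=50.0)

print(f"tail/total, gamma-like pulse: {gamma_like:.2f}")
print(f"tail/total, alpha-like pulse: {alpha_like:.2f}")
```

A simple threshold on the tail fraction then separates the two populations; real systems tune the gate time and calibrate the cut on known sources.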
Types of scintillators

Organic crystals

Organic scintillators are aromatic hydrocarbon compounds which contain benzene ring structures interlinked in various ways. Their luminescence typically decays within a few nanoseconds. Some organic scintillators are pure crystals. The most common types are anthracene (decay time ≈30 ns), stilbene (4.5 ns decay time), and naphthalene (decay time of a few ns). They are very durable, but their response is anisotropic (which spoils energy resolution when the source is not collimated), and they cannot be easily machined, nor can they be grown in large sizes; hence they are not very often used. Anthracene has the highest light output of all organic scintillators and is therefore chosen as a reference: the light outputs of other scintillators are sometimes expressed as a percentage of anthracene light output.

Organic liquids

These are liquid solutions of one or more organic scintillators in an organic solvent. The typical solutes are fluors such as p-terphenyl, PBD, butyl PBD and PPO, and wavelength shifters such as POPOP. The most widely used solvents are toluene, xylene, benzene, phenylcyclohexane, triethylbenzene, and decalin. Liquid scintillators are easily loaded with other additives such as wavelength shifters to match the spectral sensitivity range of a particular PMT, or 10B to increase the neutron detection efficiency of the scintillation counter itself (since 10B has a high interaction cross section with thermal neutrons). Newer approaches combine several solvents or load different metals to achieve identification of incident particles. For many liquids, dissolved oxygen can act as a quenching agent and lead to reduced light output, hence the necessity to seal the solution in an oxygen-free, airtight enclosure.

Plastic scintillators

The term "plastic scintillator" typically refers to a scintillating material in which the primary fluorescent emitter, called a fluor, is suspended in the base, a solid polymer matrix. While this combination is typically accomplished through the dissolution of the fluor prior to bulk polymerization, the fluor is sometimes associated with the polymer directly, either covalently or through coordination, as is the case with many 6Li plastic scintillators. Polyethylene naphthalate has been found to exhibit scintillation by itself without any additives and is expected to replace existing plastic scintillators due to higher performance and lower price. The advantages of plastic scintillators include fairly high light output and a relatively quick signal, with a decay time of 2–4 nanoseconds, but perhaps the biggest advantage of plastic scintillators is their ability to be shaped, through the use of molds or other means, into almost any desired form, with what is often a high degree of durability. Plastic scintillators are known to show light-output saturation when the energy density is large (Birks' law).

Bases

The most common bases used in plastic scintillators are the aromatic plastics, polymers with aromatic rings as pendant groups along the polymer backbone, amongst which polyvinyltoluene (PVT) and polystyrene (PS) are the most prominent. While the base does fluoresce in the presence of ionizing radiation, its low yield and negligible transparency to its own emission make the use of fluors necessary in the construction of a practical scintillator. Aside from the aromatic plastics, the most common base is polymethylmethacrylate (PMMA), which carries two advantages over many other bases: high ultraviolet and visible light transparency, and good mechanical properties with higher durability with respect to brittleness. The lack of fluorescence associated with PMMA is often compensated through the addition of an aromatic co-solvent, usually naphthalene.
A plastic scintillator based on PMMA in this way boasts transparency to its own radiation, helping to ensure uniform collection of light. Other common bases include polyvinyl xylene (PVX); polymethyl, 2,4-dimethyl, and 2,4,5-trimethyl styrenes; polyvinyl diphenyl; polyvinyl naphthalene; polyvinyl tetrahydronaphthalene; and copolymers of these and other bases.

Fluors

Also known as luminophors, these compounds absorb the scintillation of the base and then emit at longer wavelengths, effectively converting the ultraviolet radiation of the base into the more easily transferred visible light. Further increasing the attenuation length can be accomplished through the addition of a second fluor, referred to as a spectrum shifter or converter, often resulting in the emission of blue or green light. Common fluors include polyphenyl hydrocarbons, oxazole and oxadiazole aryls, especially n-terphenyl (PPP), 2,5-diphenyloxazole (PPO), 1,4-di-(5-phenyl-2-oxazolyl)-benzene (POPOP), 2-phenyl-5-(4-biphenylyl)-1,3,4-oxadiazole (PBD), and 2-(4'-tert-butylphenyl)-5-(4''-biphenylyl)-1,3,4-oxadiazole (B-PBD).

Inorganic crystals

Inorganic scintillators are usually crystals grown in high-temperature furnaces, for example alkali metal halides, often with a small amount of activator impurity. The most widely used is NaI(Tl) (thallium-doped sodium iodide); its scintillation light is blue. Several other inorganic alkali halide crystals, both doped and pure, are also in use, as are non-alkali crystals such as BGO and GAGG:Ce. (For more examples, see also phosphors.) Newly developed products include LaCl3(Ce), lanthanum chloride doped with cerium, as well as a cerium-doped lanthanum bromide, LaBr3(Ce). They are both very hygroscopic (i.e., damaged when exposed to moisture in the air) but offer excellent light output and energy resolution (63 photons/keV γ for LaBr3(Ce) versus 38 photons/keV γ for NaI(Tl)), a fast response (16 ns for LaBr3(Ce) versus 230 ns for NaI(Tl)), excellent linearity, and a very stable light output over a wide range of temperatures. In addition, LaBr3(Ce) offers a higher stopping power for γ rays (density of 5.08 g/cm3 versus 3.67 g/cm3 for NaI(Tl)). LYSO has an even higher density (7.1 g/cm3, comparable to BGO), is non-hygroscopic, and has a higher light output than BGO (32 photons/keV γ), in addition to being rather fast (41 ns decay time versus 300 ns for BGO).

A disadvantage of some inorganic crystals, e.g., NaI, is their hygroscopicity, a property which requires them to be housed in an airtight container to protect them from moisture. Other crystals, such as CsI(Tl) and BaF2, are only slightly hygroscopic and do not usually need protection; CsF and some other crystals are hygroscopic, while others, such as BGO, are not. Inorganic crystals can be cut to small sizes and arranged in an array configuration so as to provide position sensitivity. Such arrays are often used in medical physics or security applications to detect X-rays or γ rays: high-Z, high-density materials (e.g., LYSO, BGO) are typically preferred for this type of application.

Scintillation in inorganic crystals is typically slower than in organic ones, ranging typically from about 1.48 ns for the fastest materials to 9000 ns for CaWO4. Exceptions are the fast component of BaF2 (0.7 ns; the slow component is at 630 ns), as well as the newer products (LaBr3(Ce), 16 ns; LYSO, 41 ns). For imaging applications, one of the advantages of inorganic crystals is their very high light yield; light yields above 100,000 photons/MeV at 662 keV have recently been reported for several materials.
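The compromises described above can be made concrete with the numbers quoted in this section. The sketch below ranks three crystals by a commonly used timing figure of merit, proportional to √(τ/N) (decay time over detected photon count); the proportionality constant is omitted, so the values are only meaningful relative to each other.

```python
import math

# Light yield (photons/keV) and decay time (ns) as quoted in the text.
crystals = {
    "LaBr3(Ce)": {"photons_per_kev": 63, "decay_ns": 16},
    "NaI(Tl)":   {"photons_per_kev": 38, "decay_ns": 230},
    "LYSO":      {"photons_per_kev": 32, "decay_ns": 41},
}

E_KEV = 662  # reference gamma energy (137Cs line)

# Timing figure of merit ~ sqrt(decay_time / photon_count); lower is better.
for name, p in crystals.items():
    n_photons = p["photons_per_kev"] * E_KEV
    fom = math.sqrt(p["decay_ns"] / n_photons)
    print(f"{name:9s}: {n_photons:6,d} photons, timing FoM ~ {fom:.3f} (arb. units)")
```

The ranking reproduces the qualitative statements above: LaBr3(Ce) combines the highest light output with one of the shortest decay times, which is why it is singled out for fast, high-resolution spectroscopy.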
Many semiconductor scintillator phosphors are known, such as ZnS(Ag) (mentioned in the history section), CdS(Ag), ZnO(Zn), ZnO(Ga), CdS(In), ZnSe(O), and ZnTe(O), but none of these are available as single crystals. CdS(Te) and ZnSe(Te) have been commercially available in single-crystal form, but their luminosity is partially quenched at room temperature. GaAs(Si,B) is a recently discovered cryogenic semiconductor scintillator with high light output in the infra-red and apparently no afterglow. In combination with ultra-low-noise cryogenic photodetectors, it is the target in experiments to detect rare, low-energy electronic excitations from interacting dark matter.

Gaseous scintillators

Gaseous scintillators consist of nitrogen and the noble gases helium, argon, krypton, and xenon, with helium and xenon receiving the most attention. The scintillation process is due to the de-excitation of single atoms excited by the passage of an incoming particle. This de-excitation is very rapid (~1 ns), so the detector response is quite fast. Coating the walls of the container with a wavelength shifter is generally necessary, as those gases typically emit in the ultraviolet while PMTs respond better to the visible blue-green region. In nuclear physics, gaseous detectors have been used to detect fission fragments or heavy charged particles.

Glasses

The most common glass scintillators are cerium-activated lithium or boron silicates. Since both lithium and boron have large neutron cross-sections, glass detectors are particularly well suited to the detection of thermal (slow) neutrons. Lithium is more widely used than boron, since it has a greater energy release on capturing a neutron and therefore greater light output. Glass scintillators are, however, sensitive to electrons and γ rays as well (pulse-height discrimination can be used for particle identification). Being very robust, they are also well suited to harsh environmental conditions. Their response time is ≈10 ns; their light output is, however, low, typically ≈30% of that of anthracene.

Solution-based perovskite scintillators

Scintillation properties of organic-inorganic methylammonium (MA) lead halide perovskites under proton irradiation were first reported by Shibuya et al. in 2002, and the first γ-ray pulse-height spectrum, although still with poor energy resolution, was reported by van Eijk et al. in 2008. Birowosuto et al. studied the scintillation properties of 3D and 2D layered perovskites under X-ray excitation. MAPbBr3 emits at 550 nm and MAPbI3 at 750 nm, which is attributed to exciton emission near the band gap of the compounds. In this first generation of Pb-halide perovskites, the emission is strongly quenched at room temperature and less than 1,000 ph/MeV survive. At 10 K, however, intense emission is observed, with reported yields up to 200,000 ph/MeV. The quenching is attributed to the small electron-hole binding energy in the exciton, which decreases from Cl to Br to I. Interestingly, one may replace the organic MA group with Cs+ to obtain fully inorganic CsPbX3 halide perovskites. Depending on the Cl, Br, and I content, the triplet X-ray-excited exciton emission can be tuned from 430 nm to 700 nm. One may also dilute Cs with Rb to obtain similar tuning. These very recent developments demonstrate that both organic-inorganic and all-inorganic Pb-halide perovskites have various interesting scintillation properties.
However, recent two-dimensional perovskite single crystals, with light yields between 10,000 and 40,000 ph/MeV and decay times below 10 ns at room temperature, may prove more favorable: they can have a much larger Stokes shift, up to 200 nm, than CsPbBr3 quantum-dot scintillators, which is essential for preventing self-reabsorption of the scintillation light. More recently, a new material class called 0D organic metal halide hybrids (OMHHs), an extension of the perovskite materials, was first reported by Professor Biwu Ma's research group. This class of materials exhibits strong exciton binding of hundreds of meV, resulting in a photoluminescence quantum efficiency of almost unity. Their large Stokes shift and reabsorption-free properties make them desirable. Their potential applications as scintillators have been reported by the same group and others. In 2020, (C38H34P2)MnBr4 was reported to have a light yield of up to 80,000 photons/MeV despite its low Z compared to traditional all-inorganic scintillators. Impressive light yields from other 0D OMHHs have also been reported. There is great potential to realize a new generation of scintillators from this material class; however, they are limited by their relatively long response times, in the microsecond range, which is an area of intense research.

Physics of scintillation

Organic scintillators

Transitions made by the free valence electrons of the molecules are responsible for the production of scintillation light in organic crystals. These electrons are associated with the whole molecule rather than with any particular atom, and occupy the so-called π-molecular orbitals. The ground state S0 is a singlet state, above which are the excited singlet states (S*, S**), the lowest triplet state (T0), and its excited levels (T*, T**). A fine structure corresponding to molecular vibrational modes is associated with each of these electron levels. The energy spacing between electron levels is ≈1 eV; the spacing between the vibrational levels is about 1/10 of that for electron levels. An incoming particle can excite either an electron level or a vibrational level. The singlet excitations immediately decay (< 10 ps) to the S* state without the emission of radiation (internal degradation). The S* state then decays to the ground state S0 (typically to one of the vibrational levels above S0) by emitting a scintillation photon. This is the prompt component, or fluorescence. The transparency of the scintillator to the emitted photon is due to the fact that the energy of the photon is less than that required for an S0 → S* transition (the transition is usually to a vibrational level above S0).

When one of the triplet states gets excited, it immediately decays to the T0 state with no emission of radiation (internal degradation). Since the T0 → S0 transition is very improbable, the T0 state instead decays by interacting with another T0 molecule:

T0 + T0 → S* + S0

leaving one of the molecules in the S* state, which then decays to S0 with the release of a scintillation photon. Since the T0–T0 interaction takes time, the scintillation light is delayed: this is the slow or delayed component (corresponding to delayed fluorescence). Sometimes a direct T0 → S0 transition occurs (also delayed), corresponding to the phenomenon of phosphorescence. Note that the observational difference between delayed fluorescence and phosphorescence is the difference in the wavelengths of the emitted optical photon in an S* → S0 transition versus a T0 → S0 transition.
Organic scintillators can be dissolved in an organic solvent to form either a liquid or plastic scintillator. The scintillation process is the same as described for organic crystals (above); what differs is the mechanism of energy absorption: energy is first absorbed by the solvent, then passed onto the scintillation solute (the details of the transfer are not clearly understood).

Inorganic scintillators

The scintillation process in inorganic materials is due to the electronic band structure found in crystals and is not molecular in nature, as is the case with organic scintillators. An incoming particle can excite an electron from the valence band to either the conduction band or the exciton band (located just below the conduction band and separated from the valence band by an energy gap). This leaves an associated hole behind in the valence band. Impurities create electronic levels in the forbidden gap. The excitons are loosely bound electron–hole pairs which wander through the crystal lattice until they are captured as a whole by impurity centers. The latter then rapidly de-excite by emitting scintillation light (the fast component). The activator impurities are typically chosen so that the emitted light is in the visible range or near-UV, where photomultipliers are effective. The holes associated with electrons in the conduction band are independent of the latter. Those holes and electrons are captured successively by impurity centers, exciting certain metastable states not accessible to the excitons. The delayed de-excitation of those metastable impurity states again results in scintillation light (the slow component).

BGO (bismuth germanium oxide) is a pure inorganic scintillator without any activator impurity. There, the scintillation process is due to an optical transition of the Bi3+ ion, a major constituent of the crystal. In tungstate scintillators, the emission is due to the radiative decay of self-trapped excitons.

The scintillation process in GaAs doped with silicon and boron impurities is different from that in conventional scintillators in that the silicon n-type doping provides a built-in population of delocalized electrons at the bottom of the conduction band. Some of the boron impurity atoms reside on arsenic sites and serve as acceptors. A scintillation photon is produced whenever an acceptor atom such as boron captures an ionization hole from the valence band and that hole recombines radiatively with one of the delocalized electrons. Unlike in many other semiconductors, the delocalized electrons provided by the silicon are not "frozen out" at cryogenic temperatures. Above the Mott transition concentration of free carriers, the "metallic" state is maintained at cryogenic temperatures because mutual repulsion drives any additional electrons into the next higher available energy level, which is in the conduction band. The spectrum of photons from this process is centered at 930 nm (1.33 eV), and there are three other emission bands centered at 860, 1070, and 1335 nm from other minor processes. Each of these emission bands has a different luminosity and decay time. The high scintillation luminosity is surprising because (1) with a refractive index of about 3.5, escape is inhibited by total internal reflection, and (2) experiments at 90 K report narrow-beam infrared absorption coefficients of several per cm.
Recent Monte Carlo and Feynman path integral calculations have shown that the high luminosity could be explained if most of the narrow-beam absorption is actually a novel optical scattering from the conduction electrons, with a cross section of about 5 × 10⁻¹⁸ cm², that allows scintillation photons to escape total internal reflection. This cross section is about 10⁷ times larger than that of Thomson scattering, but comparable to the optical cross section of the conduction electrons in a metal mirror.

Gases

In gases, the scintillation process is due to the de-excitation of single atoms excited by the passage of an incoming particle (a very rapid process: ≈1 ns).

Response to various radiations

Heavy ions

Scintillation counters are usually not ideal for the detection of heavy ions, for three reasons: the very high ionizing power of heavy ions induces quenching effects which result in a reduced light output (e.g., for equal energies, a proton will produce 1/4 to 1/2 the light of an electron, while alphas will produce only about 1/10 the light); the high stopping power of the particles also results in a reduction of the fast component relative to the slow component, increasing detector dead-time; and strong non-linearities are observed in the detector response, especially at lower energies. The reduction in light output is stronger for organics than for inorganic crystals. Therefore, where needed, inorganic crystals such as ZnS(Ag) (typically used in thin sheets as α-particle monitors) should be preferred to organic materials. Typical applications are α-survey instruments, dosimetry instruments, and heavy-ion dE/dx detectors. Gaseous scintillators have also been used in nuclear physics experiments.

Electrons

The detection efficiency for electrons is essentially 100% for most scintillators. But because electrons can undergo large-angle scattering (sometimes backscattering), they can exit the detector without depositing their full energy in it. The backscattering is a rapidly increasing function of the atomic number Z of the scintillator material. Organic scintillators, having a lower Z than inorganic crystals, are therefore best suited for the detection of low-energy (< 10 MeV) beta particles. The situation is different for high-energy electrons: since they mostly lose their energy by bremsstrahlung at the higher energies, a higher-Z material is better suited for the detection of the bremsstrahlung photon and the production of the electromagnetic shower which it can induce.

Gamma rays

High-Z materials, e.g., inorganic crystals, are best suited for the detection of gamma rays. The three basic ways that a gamma ray interacts with matter are the photoelectric effect, Compton scattering, and pair production. The photon is completely absorbed in the photoelectric effect and in pair production, while only partial energy is deposited in any given Compton scattering. The cross section for the photoelectric process is proportional to Z⁵, that for pair production to Z², whereas Compton scattering goes roughly as Z. A high-Z material therefore favors the former two processes, enabling the detection of the full energy of the gamma ray. If the gamma rays are at higher energies (>5 MeV), pair production dominates.

Neutrons

Since the neutron is not charged, it does not interact via the Coulomb force and therefore does not ionize the scintillation material. It must first transfer some or all of its energy via the strong force to a charged atomic nucleus. The positively charged nucleus then produces ionization.
Fast neutrons (generally >0.5 MeV) primarily rely on the recoil proton in (n,p) reactions; materials rich in hydrogen, e.g., plastic scintillators, are therefore best suited for their detection. Slow neutrons rely on nuclear reactions such as the (n,γ) or (n,α) reactions to produce ionization. Their mean free path is therefore quite large unless the scintillator material contains nuclides having a high cross section for these nuclear reactions, such as 6Li or 10B. Materials such as LiI(Eu) or glass silicates are therefore particularly well suited for the detection of slow (thermal) neutrons.

List of inorganic scintillators

The following is a list of commonly used inorganic crystals:

BaF2 or barium fluoride: contains a very fast and a slow component. The fast scintillation light is emitted in the UV band (220 nm) and has a 0.7 ns decay time (the smallest decay time of any scintillator), while the slow scintillation light is emitted at longer wavelengths (310 nm) and has a 630 ns decay time. It is used for fast timing applications, as well as applications for which pulse shape discrimination is needed. The light yield of BaF2 is about 12 photons/keV. BaF2 is not hygroscopic.

BGO or bismuth germanate: bismuth germanate has a higher stopping power but a lower optical yield than NaI(Tl). It is often used in coincidence detectors for detecting back-to-back gamma rays emitted upon positron annihilation in positron emission tomography machines.

CdWO4 or cadmium tungstate: a high-density, high-atomic-number scintillator with a very long decay time (14 μs) and relatively high light output (about 1/3 of that of NaI(Tl)). CdWO4 is routinely used for X-ray detection (CT scanners). Having very little 228Th and 226Ra contamination, it is also suitable for low-activity counting applications.

CaF2(Eu) or calcium fluoride doped with europium: the material is not hygroscopic, has a 940 ns decay time, and is relatively low-Z. The latter property makes it ideal for detection of low-energy β particles because of low backscattering, but not very suitable for γ detection. Thin layers of CaF2(Eu) have also been used with a thicker slab of NaI(Tl) to make phoswiches capable of discriminating between α, β, and γ particles.

CaWO4 or calcium tungstate: exhibits a long decay time of 9 μs and short-wavelength emission with a maximum at 420 nm, matching the sensitivity curve of bialkali PMTs. Its energy resolution (6.6% for 137Cs) is comparable to that of other common inorganic scintillators.

CsI: undoped cesium iodide emits predominantly at 315 nm, is only slightly hygroscopic, and has a very short decay time (16 ns), making it suitable for fast timing applications. The light output is quite low at room temperature; however, it increases significantly with cooling.

CsI(Na) or cesium iodide doped with sodium: the crystal is less bright than CsI(Tl), but comparable in light output to NaI(Tl). The wavelength of maximum emission is at 420 nm, well matched to the photocathode sensitivity of bialkali PMTs. It has a slightly shorter decay time than CsI(Tl) (630 ns versus 1000 ns for CsI(Tl)). CsI(Na) is hygroscopic and needs an airtight enclosure for protection against moisture.

CsI(Tl) or cesium iodide doped with thallium: these crystals are among the brightest scintillators. The maximum wavelength of light emission is in the green region, at 550 nm. CsI(Tl) is only slightly hygroscopic and does not usually require an airtight enclosure.

GaAs or gallium arsenide (suitably doped with silicon and boron impurities) is a cryogenic n-type semiconductor scintillator with a low cryogenic bandgap (1.52 eV) and high light output (100 photons/keV) in the infra-red (930 nm).
The absence of thermally stimulated luminescence is evidence for the absence of afterglow, which makes it attractive for detecting rare, low-energy electronic excitations from interacting dark matter. Large (5 kg) high-quality crystals are commercially grown for electronic applications.

Gd2O2S or gadolinium oxysulfide has a high stopping power due to its relatively high density (7.32 g/cm3) and the high atomic number of gadolinium. The light output is also good, making it useful as a scintillator for X-ray imaging applications.

LaBr3(Ce) (or lanthanum bromide doped with cerium): a better (novel) alternative to NaI(Tl); denser, more efficient, and much faster (with a decay time of about ~20 ns), offering superior energy resolution due to its very high light output. Moreover, the light output is very stable and quite high over a very wide range of temperatures, making it particularly attractive for high-temperature applications. Depending on the application, the intrinsic activity of 138La can be a disadvantage. LaBr3(Ce) is very hygroscopic.

LaCl3(Ce) (or lanthanum chloride doped with cerium): very fast, with high light output. LaCl3(Ce) is a cheaper alternative to LaBr3(Ce). It is also quite hygroscopic.

PbWO4 or lead tungstate: due to its high Z, PbWO4 is suitable for applications where a high stopping power is required (e.g., γ-ray detection).

LuI3 or lutetium iodide.

LSO or lutetium oxyorthosilicate (Lu2SiO5): used in positron emission tomography because it exhibits properties similar to bismuth germanate (BGO), but with a higher light yield. Its only disadvantage is the intrinsic background from the beta decay of natural 176Lu.

LYSO: comparable in density to BGO, but much faster and with much higher light output; excellent for medical imaging applications. LYSO is non-hygroscopic.

NaI(Tl) or sodium iodide doped with thallium: NaI(Tl) is by far the most widely used scintillator material. It is available in single-crystal form or the more rugged polycrystalline form (used in high-vibration environments, e.g., wireline logging in the oil industry). Other applications include nuclear medicine, basic research, environmental monitoring, and aerial surveys. NaI(Tl) is very hygroscopic and needs to be housed in an airtight enclosure.

YAG(Ce) or yttrium aluminum garnet: YAG(Ce) is non-hygroscopic. The wavelength of maximum emission is at 550 nm, well matched to red-resistive PMTs or photodiodes. It is relatively fast (70 ns decay time). Its light output is about 1/3 of that of NaI(Tl). The material exhibits some properties that make it particularly attractive for electron microscopy applications (e.g., high electron conversion efficiency, good resolution, mechanical ruggedness and long lifetime).

ZnS(Ag) or zinc sulfide: ZnS(Ag) is one of the older inorganic scintillators (the first experiment making use of a scintillator, by Sir William Crookes in 1903, involved a ZnS screen). It is only available as a polycrystalline powder, however. Its use is therefore limited to thin screens used primarily for α-particle detection.

ZnWO4 or zinc tungstate is similar to the CdWO4 scintillator, exhibiting a long decay constant of 25 μs and slightly lower light yield.
Physical sciences
Electromagnetic radiation
Physics
454390
https://en.wikipedia.org/wiki/Menstrual%20cup
Menstrual cup
A menstrual cup is a menstrual hygiene device which is inserted into the vagina during menstruation. Its purpose is to collect menstrual fluid (blood from the uterine lining mixed with other fluids). Menstrual cups are made of elastomers (silicone rubbers, latex rubbers, or thermoplastic rubbers). A properly fitting menstrual cup seals against the vaginal walls, so tilting and inverting the body will not cause it to leak. It is impermeable and collects menstrual fluid, unlike tampons and menstrual pads, which absorb it.

Menstrual cups come in two types. The older type is bell-shaped, often with a stem, and has walls more than 2 mm thick. The second type has a springy rim and, attached to the rim, a bowl with thin, flexible walls. Bell-shaped cups sit over the cervix, like cervical caps, but they are generally larger than cervical caps and cannot be worn during vaginal sex. Ring-shaped cups sit in the same position as a contraceptive diaphragm; they do not block the vagina and can be worn during vaginal sex. Menstrual cups are not meant to prevent pregnancy.

Every 4–12 hours (depending on capacity and the amount of flow), the cup is emptied (usually removed, rinsed, and reinserted). After each period, the cup requires cleaning. One cup may be reusable for up to 10 years, making the long-term cost lower than that of disposable tampons or pads, though the initial cost is higher. As menstrual cups are reusable, they generate less solid waste than tampons and pads, both from the products themselves and from their packaging.

Bell-shaped cups have to fit fairly precisely; it is common for users to get a perfect fit with the second cup they buy, by judging the misfit of the first cup. Ring-shaped cups are one-size-fits-most, but some manufacturers sell multiple sizes. Reported leakage for menstrual cups is similar to, or rarer than, that for tampons and pads. It is possible to urinate, defecate, sleep, swim, do gymnastics, run, ride bicycles or animals, lift weights, and do heavy exercise while wearing a menstrual cup. Incorrect placement or cup size can cause leakage. Most users initially find menstrual cups difficult, uncomfortable, and even painful to insert and remove. This generally gets better within 3–4 months of use; having friends who successfully use menstrual cups helps, but there is a shortage of research on factors that ease the learning curve. Menstrual cups are a safe alternative to other menstrual products; the risk of toxic shock syndrome is similar or lower with menstrual cups than with pads or tampons.

Terminology

The terminology used for menstrual cups is sometimes inconsistent. This article uses "menstrual cup" to mean all types, and for clarity, distinguishes the two main types as "bell-shaped" and "ring-shaped". The thick-walled bell-shaped cups are the older type, and the term "menstrual cup" is sometimes used to refer only to bell-shaped cups. But in modern formal contexts, such as academic research and regulations, "menstrual cup" usually refers to both types. The US Food and Drug Administration holds that "A menstrual cup is a receptacle placed in the vagina to collect menstrual flow." The EU legislated that "The product group 'reusable menstrual cups' shall comprise reusable flexible cups or barriers worn inside the body whose function is to retain and collect menstrual fluid, and which are made of silicone or other elastomers." Ring-shaped cups are also called "menstrual discs" and sometimes "menstrual rings", to distinguish them from bell-shaped cups.
Bell-shaped cups are sometimes called "menstrual bells". Because bell-shaped cups are commonly depicted as being placed in the vaginal canal, well below the cervix, they are also called "vaginal cups", with the ring-shaped cups called "cervical cups". This may not clearly reflect their position in the body. MRI imaging suggests that, contrary to some manufacturers' depictions, the bell-shaped cups called "vaginal cups" are placed over the cervix, in a position similar to a cervical cap (not to be confused with a cervical cup). Ring-shaped cups, called "cervical cups", also cover the cervix, but have one edge next to the cervix and the other located further down the vagina, so that the cup is nearly parallel to the long axis of the vagina.

In the 1800s, menstrual cups were called "catamenial sacks", and were similar to external catamenial sacks of "canoe-like form", which in turn were similar to catamenial sacks that served as waterproof rubber undersheet supports for absorbent pads. These were made from india-rubber or gutta-percha, forms of latex.

Use

Menstrual cups are favoured by backpackers and other travellers, as they are easy to pack and only one is needed. Thorough washing of the cup and hands helps to avoid introducing new bacteria into the vagina, which may heighten the risk of UTIs and other infections. Disposable and reusable pads do not demand the same hand hygiene, though reusable pads also require access to water for washing out pads. If the hands have come into contact with any chemical that directly triggers sensory receptors in the skin, such as menthol or capsaicin, all traces of the chemical should be removed before touching the mucous membranes. A UN specification recommends that cups should not be shared; they should only ever be used by one person.

Insertion

The vagina is narrowest at the entrance and becomes wider and easier to stretch further in. Menstrual cups are folded or compressed to insert them, and then opened out once inside. The innermost portion of the cup typically goes into the vaginal fornix (the groove around the cervix). Menstrual cups cannot pass through the cervix into the uterus. The muscles of the pelvic floor, which surround the vaginal entrance, are relaxed to let the cup pass. Involuntarily tensing the vaginal muscles can make it impossible for anything to enter the vagina without causing pain. Many initially find insertion difficult, uncomfortable, and even painful, but learn to do it within a few cycles. There is little publicly available research on learning to use menstrual cups which compares types of cup or instructions.

A bell-shaped cup is folded or pinched before being inserted into the vagina. There are various folding techniques for insertion; common folds include the "C" fold, the "7" fold, and the punch-down fold. Once inside, the cup will normally unfold automatically and seal against the vaginal wall. In some cases, the user may need to twist the cup or flex the vaginal muscles to ensure the cup is fully open. In practice, the rim of a bell-shaped cup generally sits in the vaginal fornix, the ring-shaped hollow around the cervix. Some fornixes are much deeper than others. Those with deeper fornixes may use insertion techniques such as inserting the cup partway, opening it before the rim passes the cervix, and then pushing it up into place; or they may press the cup to one side and let it open slowly, the rim slipping over the cervix. If correctly sized and inserted, the cup should not leak or cause any discomfort.
The stem should be completely inside the vagina. If it can't be positioned inside, the cup can be removed and the stem trimmed.

Ring-shaped cups (also called menstrual discs or menstrual rings) are inserted differently from bell-shaped cups: by squeezing opposite sides of the rim together until they touch, sliding the inner end of the folded cup to the end of the vaginal canal, and tucking the outer end behind the pubic bone. They can be less bulky than a bell-shaped cup, no bulkier than a tampon. Inserting a ring-shaped cup requires more knowledge of anatomy, to get the cup under and around the cervix, not rucked up in front of it. Ring-shaped cups with non-circular rims are designed to be inserted with the widest, deepest part going in first. If they are inserted the wrong way around, they may leak. If there are stems or other removal aids, they should be on the end inserted last. If lubricant is used for insertion, it should be water-based, as silicone lubricant can damage a silicone cup.

Wear

A bell-shaped cup may protrude far enough to be uncomfortable if it is too long. It may press too firmly against the bladder, causing discomfort, frequent urination, or difficulty urinating, if it is too firm or the wrong shape. A bell-shaped cup may leak if it is not inserted correctly and does not pop open completely and seal against the walls of the vagina. Factors mentioned in association with leakage include menorrhagia, unusual anatomy of the uterus, the need for a larger size of menstrual cup, incorrect placement of the menstrual cup, or the cup having filled to capacity. However, a proper seal may continue to contain fluid in the upper vagina even if the cup is full. While many diagrams show bell-shaped menstrual cups very low in the vagina, with the vagina gaping open, in-vivo imaging shows that the cups sit high, with their rim around the cervix, and the vagina squishes shut below the cup, sealing it inside the body.

If a ring-shaped cup pops out at the outermost edge, either the innermost edge got caught on the near side of the cervix rather than tucked into the fornix behind it, or the cup is too big, or the outermost edge has not been tucked behind the pubic bone firmly enough. In any of these cases it will leak. If it comes loose and starts to slide out when using the toilet, or leaks on exertion (when exercising, coughing, or sneezing), it is too large or too small. Some deliberately choose a ring-shaped cup which will leak when they bear down on it on purpose, but not at any other time.

Emptying

It is possible to deliberately empty a ring-shaped menstrual disc by muscular effort, without removing it (provided it is of a fairly soft material and the right size). This is done in a suitable location, such as when sitting on a toilet. Bell-shaped cups must be removed to empty them. The cup is emptied after 4–12 hours of use (or when it is full). Leaving the cup in for at least 3–4 hours allows the menstrual fluid to provide some lubrication.

If sewers are available, menstrual cups can be emptied into a flush toilet, or a sink, bath, or shower drain, and the drain rinsed with water. They can also be emptied into a pit latrine. When using a urine-diverting dry toilet, menstrual blood can be emptied into the part that receives the feces. If any menstrual blood falls into the funnel for urine, it can be rinsed away with water. In the absence of other facilities, menstrual fluid can be emptied into a cathole.
This is a single-use hole, dug well away from water sources and from frequented areas like trails or campsites, ideally in organic soil, where the waste will break down fast. Water used to rinse the cup can also be disposed of in the cathole, which is then refilled and concealed.

Removal

Many initially find removal difficult, uncomfortable, and even painful, but learn to do it without problems within a few cycles. The muscles of the pelvic floor are kept relaxed to allow the cup to pass out through them. Techniques like squatting, putting a leg up on the toilet seat, spreading the knees, and bearing down on the cup as if giving birth are sometimes used to make removal easier. Because vaginal tenting can make the cup harder to remove, some manufacturers recommend waiting at least an hour after sex before removal. Slow removal and a firm grip avoid dropping the cup; experience, time and privacy also help. Dropping the cup can contaminate it (see cleaning, below). If a cup is removed or emptied over a pit latrine, it may fall in and be unretrievable.

A bell-shaped cup is removed by reaching up to its stem to find the base. Simply pulling on the stem does not break the seal, and yanking on it can cause pain. To release the seal, the base of the cup is pinched, or a finger is placed alongside the cup. The exception is two-part cups with separate stems; those can be pulled out to break the seal. The shape of the (one-part) stem thus has little effect on how easy the cup is to remove, and many people trim the stem right off for comfort. The cup is removed slowly; rocking or wriggling it gently may help. Some fold the cup in a "C" fold before removal, to break the seal and reduce the bulk; folding the cup inside the body is generally more difficult than folding it outside. A cup can be removed over a toilet, bath, or shower to catch spills. Removal becomes less messy with practice, and it is possible to consistently remove a bell-shaped cup without spilling, by keeping it upright. If it is necessary to track the amount of menses produced (e.g., for medical reasons), a bell-shaped cup allows one to do so accurately before emptying.

Ring-shaped menstrual cups are removed by hooking the rim with a finger (from either side), or by pinching it with multiple fingers and pulling. Some ring-shaped cups have a dimple in the bowl, to make it easier to hook the rim from below. Some also have stems, but contrary to bell-shaped cups, these stems attach to the rim of the cup and can be pulled to break the seal. Others have pull loops that fold flat against the bowl, which can also be pulled to remove. Removing ring-shaped cups is typically done over a toilet in case of spilling; the softer bowl squishes flat during removal, making it very difficult not to spill any menstruum. Removal aids like pull loops can make ring-shaped cups easier to remove without spilling, but they still tend to be messier than bell-shaped cups.

Cleaning

There is little published or independent research on how to clean menstrual cups. Manufacturers generally provide cleaning instructions, but they differ widely. As of 2022, manufacturers did not provide any evidence validating or giving a rationale for their various cleaning instructions. A UN specification says that "The cup must be washed frequently in clean, boiling water as per manufacturer's instructions."
In response to the 2022 review of manufacturers' recommendations (next section), which said there was no published evidence on how well cleaning methods work, a single small in-vitro study was done to compare cleaning methods. Cleaning study A single small in-vitro study (using human blood, but incubation outside the body) compared four cleaning treatments:
cold water (the cup rubbed with fingers under running water for 30 seconds)
cold water and liquid soap (liquid soap was used instead of the more common bar soap so that the quantity could be more easily measured)
cold water followed by steeping (putting the cup in a ceramic mug, pouring freshly boiled water over it, and steeping for 5 minutes with the mug covered by a small plate; after five minutes, the water in the mug was still above 75 °C)
cold water and soap followed by steeping
It did not compare boiling to steeping, or steeping after warming the mug. All of the methods decreased the bacterial load of the cups, with steeping having a bigger individual effect than soap; when using all three cleaning methods on cups (the fourth treatment), the authors were unable to culture bacteria from them. Just rinsing and steeping, with no soap, had very similar or identical effects. The authors recommended using as many of the cleaning methods as possible, but using soap only if it can be thoroughly washed off, as soap residue can irritate the vagina. They pointed out the need for in-vivo studies, looking at real health outcomes, and the need for studies on more than the single model of cup they tested. Review of manufacturers' recommendations A 2022 review stated that "Publicly accessible evidence is needed to create consumer confidence in the recommended cleaning practices... nearly all menstrual cup manufacturers fail to provide any publicly available independent evidence that supports their recommended cleaning practices". The review found no standards or guidelines for menstrual cup cleaning practices, and urged independent research to establish a normative standard. The most common recommendations are:
boiling a new cup for about five minutes before using it for the first time
cleaning the cup each time it is removed and emptied, before it is reinserted, by (in order of preference): washing with water and a "mild" soap; rinsing in water; or wiping with a clean, dry wipe such as toilet tissue
boiling or steeping the cup for about five minutes between menstrual cycles
Most manufacturers recommended using water and soap if readily available. Many recommend against scented soaps and soaps made with an excess of oil or fat (in order to create a moisturizing soap), since scents and moisturizers are designed to remain as residues on the hands after washing. Some manufacturers sell and recommend proprietary cleaning products; these are not considered necessary. Containers for steeping and boiling A dedicated menstrual-cup-cleaning pot may be too expensive, and the use of kitchen pots socially unacceptable. Alternatives like used paint cans may contain harmful substances. Food cans are used; these hold their temperature better than an unwarmed ceramic mug for steeping, but there is no data on the safety of tinned or plastic-coated food cans for this use. Mason jars made for home canning are heatproof and designed to be sterilized by boiling; they have been used to steep-sterilize menstrual cups. They have also been used (presumably unsealed) for storage. Mugs have also been used.
USB-powered sterilizers and proprietary menstrual cup cleaning solutions are not accessible to poorer users. Some menstrual cups come with cleaning containers; the cup is intended to be steeped in the container with boiling water for five minutes, or microwaved in the container with water for 3–5 minutes. Containers are made from medical-grade silicone or polypropylene. In practice A South African study found that 93% of users used tap water when cleaning their cups at home, but only 32–44% rinsed their cups with tap water outside the home; when water was not available, many women left their cups in all day. In situations where clean water is hard to get or in short supply, it may be difficult to clean the cup with water. Reusable alternatives, like washing rags, may take more water. A lack of soap also presents a problem in some developing countries. Washing a menstrual cup in a sink at a public toilet can pose problems, as the handwashing sinks are often in a public space rather than in the toilet cubicle. Accessible toilets generally have sinks that can be reached from the toilet, but those facilities may be needed by people with limited mobility. Some users do not empty cups in public toilets; if they only empty the cup twice a day, every 12 hours, they can wait until they return home. Boiling menstrual cups once a month can also be a problem in developing countries, if there is a lack of water, firewood, or other fuel. Stain removal Smooth-surfaced cups are easier to clean; moulded text, ridges, bumps, and holes make it a bit more difficult. Some suggest scrubbing out grooves with a toothbrush, rag, or cloth, and airholes with an interdental brush. Stains on a cup of any color can be removed, or at least lightened, by soaking the cup in diluted hydrogen peroxide, or leaving it out in the sun for a few hours. Some cup makers recommend against the use of hydrogen peroxide. Some menstrual cups are sold colorless and translucent, but several brands also offer colored cups. Translucent cups lose their initial appearance faster than colored ones – they tend to get yellowish stains with use. It can be harder to see whether a dark-coloured cup is clean. The shade of a colored cup may change over time, though stains are often not as obvious on colored cups. Storage Manufacturers typically suggest letting the cup dry out fully and storing it dry in a breathable container, such as the cloth bag usually provided with the cup. Airtight wraps and containers are not recommended, especially if the cup is at all damp. Safety Menstrual cups are a safe option for managing menstruation, with risks comparable to or lower than alternatives (with the possible exception of the risk of intrauterine device (IUD) displacement). They are safe in low-, middle-, and high-income settings. Using a menstrual cup does not harm the vaginal flora. Studies looked at disruptions of the vaginal flora including excessive growth of yeast, excessive growth of harmful bacteria, excessive growth of Staphylococcus aureus, and other microorganisms; subjects using menstrual cups were not more likely to have these common vaginal problems than subjects using other methods (cloth or disposable pads, or tampons); in some studies, they were less likely. Menstrual cups can be used with an IUD, but it is not clear whether using a menstrual cup increases the risk of IUD expulsion. About 6% of all IUD users have an IUD come out unintentionally, most commonly during menstruation.
In three studies of expulsion rates in menstrual cup users, the rates were 3.7%, 17.3%, and 18.6%. Menstrual cup users differ demographically from the general population of IUD users (for instance, they tend to be younger, and youth independently increases the risk of losing an IUD unintentionally). It has been suggested that when removing a menstrual cup, the user might accidentally pull on the IUD string, or that the suction might pull the IUD out. There is no data on what removal techniques, brands, or types of cup might be riskier. Some IUD users have had the strings of their IUD cut quite short as a precaution against accidentally pulling it out while removing a cup. So far there is no data on IUD displacement in people using ring-shaped cups, which do not suction to the cervix in the way bell-shaped cups can. Rare issues The number of menstrual cup users is unknown. This makes it hard to estimate the rate of rarer health problems related to cups. There are few reports, and rare problems are unlikely to turn up in a randomized study. Serious difficulty removing the cup, requiring professional assistance, is rare but not unknown. A 2019 review found two cases with bell-shaped silicone cups, and one case with an elaborate older model of diaphragm-like cup called a Gynaeseal. There were also 46 reports with a single brand of disposable ring-shaped plastic cup (of about 100 million cups sold); most were reported to the manufacturer. A 2019 review found three cases in which a malpositioned menstrual cup pressed on a ureter, blocking the flow of urine from a kidney to the bladder; this caused renal colic (acute pain in the flank and lower back) which went away once the cup was removed. It also found one case of urinary incontinence while using the cup, which cleared up when the cup was removed, and five other urinary complaints. Most menstrual cups are made of silicone, and silicone allergies are rare. In 2010, there was one report to the FDA of someone with a silicone allergy who had to have reconstructive surgery of the vagina after using a silicone menstrual cup. There were two reports to the FDA of allergic reactions to a disposable plastic cup. A 2017 study in Dharpur, Gujarat, using a silicone cup described as ring-shaped and depicted as bell-shaped, collected two reports of rashes and one report of an allergy. The 2019 review also found two reports of irritation to the vagina and cervix, neither of which had clinical consequences, and two of severe pain (one on removing a cup for the first time). There were three reports of a vaginal wound from menstrual cup use, but reviewers were not able to review any associated medical records. One case report noted the development of endometriosis and adenomyosis in one menstrual cup user. Endometriosis affects 10–15% of menstruators. An online survey on the topic, with nine respondents, found three people who had used a menstrual cup and developed endometriosis. The U.S. Food and Drug Administration made a public statement that there was insufficient evidence of risk. Toxic shock syndrome Toxic shock syndrome (TSS) is a potentially fatal bacterial illness. A 2019 review found the risk of toxic shock syndrome with menstrual cup use to be low, with five cases identified via their literature search (one with an IUD, one with an immunodeficiency). Data from the United States showed rates of TSS to be lower in people using menstrual cups versus high-absorbency tampons.
Infection risk is similar or lower with menstrual cups compared to pads or tampons. There is an association between TSS and tampon use, although the exact connection remains unclear. TSS associated with menstrual cup use appears to be very rare, probably because menstrual cups are not absorbent, do not irritate the vaginal mucosal tissue, and so do not measurably change the vaginal flora. The risk of TSS associated with contraceptive cervical caps and contraceptive diaphragms is also very low. Like menstrual cups, these products mostly use medical-grade silicone or latex. A widely reported study showed that in vitro, bacteria associated with toxic shock syndrome (TSS) are capable of growing on menstrual cups, but results from similar studies are conflicting, and results from in-vivo studies do not show cause for concern. Size, shape, and flexibility There are no standards for the measurement or size-labelling of menstrual cups, and each manufacturer uses their own system. Self-measurement of the vagina and third-party measurement tables are often used to get a good fit. Capacity affects how often the cup must be emptied. Some prefer to empty the cup only twice a day, morning and evening, to avoid emptying it in public toilets. Flow rates vary. On average, about 30 mL of menstrual fluid is lost per month; 10 to 35 mL is normal. Menstrual blood loss of more than 80 mL per month is considered heavy menstrual bleeding, and grounds for consulting a doctor. The stated capacity of a menstrual cup is generally measured ex vivo (outside the body). It is the volume of fluid that will fill the cup to just below the airholes, if there are airholes, or just below the rim, if there are none. These volume measurements are generally overestimates of real-life capacity, because the cup may be compressed inside the body, and the cervix will often occupy some of the volume of the cup. Ex-vivo capacities for menstrual cups are in the range of tens of milliliters; for comparison, a normal-size tampon or pad holds about 5 mL when thoroughly soaked. Smooth cups with no sharp edges are recommended by the UN. Moulded text, ridges, bumps, and holes make a cup harder to clean. Bell-shaped cups Bell-shaped menstrual cups all have a wall thickness of about 2 mm. They vary in length, capacity, firmness, and external diameter of the rim. This accommodates variety in anatomy, flow quantity, and personal preferences for firmness. While vaginal tenting causes the cervix to retract during sexual arousal, it is normally located within a few centimeters of the vaginal opening; 45–55 mm is a medium height. Cups are available in lengths from about 30 to 80 mm, with 40–60 mm lengths being common; most menstrual discs are shallower than most bell-shaped cups. Some manufacturers sell several sizes of cup that are all the same length. Cups must be short enough that the cervix does not push the cup into contact with the vulva, where it may be uncomfortable. If the cervix sits particularly low or is tilted, a shorter cup may be needed. A cup which is too short may sit too far up to remove easily. Many bell-shaped cups have stems. The stems can be trimmed to shorten the cup, giving stemmed cups a minimum and maximum length; instructions for trimming are generally included with the cup. Some cups are made in two parts, with a separate stem passing through a hole in the cup; these separate stems, unlike normal one-piece stems, can be pulled to break the seal, and were designed to make removing the cup easier for people with low dexterity.
There also exist cups with valves in the stem, which can be slowly drained without removing the cup. The UN recommends against hollow stems, because solid stems are easier to clean. Ex vivo, small cups hold about 15–25 mL, medium cups 20–30 mL, and large cups 30–40 mL. The maximum capacity for large cups is about 50 mL (ring-shaped cups generally hold a bit more than bell-shaped cups). Excessively high-volume cups can be uncomfortably large, so fit is prioritized. Bell-shaped cups also vary by firmness or flexibility. Some manufacturers make the same cups in a range of firmness levels. A firmer cup pops open more easily after insertion and may hold a more consistent seal against the vaginal wall (preventing leaks), but some people find softer cups more comfortable to insert. The outside diameter of the rim will also affect seal and comfort. Sizing Cervix height is measured by touching the cervix with a fingertip, and using the thumb against the finger to mark the inner edge of the vaginal opening; the distance from the thumbnail to the tip of the finger is the height of the cervix. Cervix height varies slightly over the month, and is usually lowest on the first day of bleeding; the minimum height is used for sizing menstrual cups. The cup length is generally taken to be equivalent to the cervix height, but as the cup rim will generally sit in the fornix, some may comfortably take a cup slightly longer than their cervical height. Fornix depth varies, but is usually between 1 and 5 cm (0.5–2 inches). Manufacturers do not generally print cup dimensions on the box, forcing buyers to guess whether a cup will fit, though there are third-party tables of dimensions online. A regulatory requirement for quantitative measurements, including a Young's modulus measurement of firmness, has been suggested. Research into what measurements would be most useful for selecting a well-sized cup is also needed. Most brands sell a smaller and a larger size, but some sell up to five sizes, and differing firmnesses. Sizes are mostly labelled transparently (e.g. "S", "M", and "L"), but some manufacturers label sizes with ordinal numbers (e.g. "0", "1", and "2"), alphabetic letters (e.g. "A", "B", and "C"), or euphemisms (such as "Petite", "Regular", and "Full fit"). Within one manufacturer's range, volume usually increases with number and position in the alphabet. Mostly, each larger size is slightly larger in all dimensions, but some manufacturers have sizes that differ in only one dimension (length, diameter, or capacity). These sizes are not consistent between manufacturers. Manufacturers typically recommend the smaller size for under-30s who have not given birth vaginally and have a lighter flow, and the larger for everyone else. However, there is no medical evidence for sizing based on age or parity. Ring-shaped cups or discs Ring-shaped cups (also called menstrual discs or rings) are often approximately hemispherical in shape, like a diaphragm, with a flexible ring-shaped rim and a soft, collapsible center. They collect menstrual fluid like menstrual cups, but sit in the vaginal fornix and stay in place by hooking behind the pubic bone. Menstrual discs come in both disposable and reusable varieties. Ring-shaped cups are sized differently than bell-shaped cups. Fit is much less individual; the flexible bowl makes depth unimportant, and any ring-shaped cup between 60 and 70 mm in diameter will fit most people adequately.
Sizing is measured in the same way as for contraceptive diaphragms, which fit in the same position. A study of circular-rim diaphragms failed to find any proxy factor (like parity or weight) which would allow prediction of the size of diaphragm someone needed; it was necessary to take a measurement. As with contraceptive diaphragms, some "one-size-fits-all" cups have slightly oval or pear-shaped rims, and some have rims that arch (as seen from the side), increasing the range of sizes that fit. A contraceptive diaphragm using these techniques was found to fit 98% of volunteers in a multicenter study (everyone with a size of 65–80 mm). A disc which is too big or too small will leak. Ring-shaped cups come in diameters from 53 mm to 80 mm. They have some advantages over bell-shaped cups, including that they have a higher ex-vivo capacity (40–80 mL), enable bloodless period sex, and are more comfortable for some users. Disadvantages include messier removal and more difficulty learning to insert them than for bell-shaped cups. Some ring-shaped cups have removal aids. These may be stringlike stems, notches (dents in the outside of the bowl), or pull loops (like the ring-pull tab of a drinks can, or like a strap running parallel to the rim), and some have hybrid looped notches (with a strap across the rim of the notch). Notched or looped discs may rotate in the body so that the grip is out of reach, so some cups have three notches or loops spaced around the circumference of the cup. Notches reduce cup volume. Removal aids like pull loops make ring-shaped cups easier to remove without spilling, but they may chafe, and can be harder to clean. Some menstrual rings have ribbed membranes. It is difficult to mould thin membranes; the silicone or plastic has to flow into a very narrow part of the mould, and solidify only once it has filled the area. While it is possible to mould silicone membranes as thin as a quarter of a millimeter, it requires care. Adding ribs (linear thicker areas) to the membrane makes it easier to mould. It also stiffens the membrane. Stiffer membranes may be more noticeable during sex, and smoother, softer ones less noticeable. It is anecdotally claimed that the increase in surface area from the ridges allows ridged cups to hold more blood, and that they may reduce effective natural vaginal lubrication when worn during sex. Texture may also be added to the outside of the membrane for grip, although it is usually the rim that is gripped when removing the cup. Rings with a slimmer rim can be easier to slide around the cervix. A thin spot in the rim can let the rim fold more tightly for insertion and removal. Some ring-shaped cups also have concentric grooves on the outside of the rim; these can be harder to clean than an ungrooved rim. Grooves add stiffness while using less material; see I-beam. Firmer ring-shaped cups can be easier to get into place, but softer rings fold more easily and tightly, and may be more comfortable to insert and remove. Firmer discs are therefore often preferred by new users. Unlike in bell-shaped cups, firmness does not affect the seal of a properly-fitting ring-shaped cup. Sizing Size can be measured in the same way as for contraceptive diaphragms: the fore and middle fingers are inserted until the tip of the middle finger is in the posterior fornix (the hollow on the spinewards side of the cervix), and the thumb is used against the forefinger to mark where the bony pubic arch touches the index finger.
The diagonal distance between the tip of the middle finger and the thumbnail is then measured. This is the diameter of the circular rim needed. At this depth the side walls of the vagina are quite stretchy, so no side-to-side measurement is needed. Sizing rings can also be used. Disposable menstrual discs are also similar in size to many reusable ones, and can be used to check whether a roughly 70 mm diameter fits. Many brands have a one-size-fits-most approach. Some sell two or three sizes, based on qualitative cervix height (low or high) rather than age or previous births. While North American manufacturers do not generally give dimensions, third-party tables of disc diameters are available online. European manufacturers generally do give the metric dimensions of their products online. For circular rims, the outside rim diameter should match the diaphragm size. For oval and slightly egg-shaped rims, the sizing should be similar, but taking an average of the two rim dimensions. For complex three-dimensional rims, the manufacturer should indicate the size range the cup will fit. Materials and color Cups are made from rubbers (elastomers). Most are made from silicone rubber; some are made from latex or thermoplastic rubber. Some contain other ingredients, such as colourants or cheap bulking fillers. Cups made from good-quality materials last longer. A UN specification says that cups must be made of medical-grade silicone. There are multiple medical grades. Plastics can also be medical-grade. Some jurisdictions require the use of medical-grade materials, but others do not. Where permitted, cups may be made of cheaper food-grade materials. The same make and model of cup may be made of different materials in different legal jurisdictions. In many jurisdictions, menstrual products need not list ingredients. Some places, including some US states, have enacted laws requiring food-style ingredient lists, with the percentage of each ingredient. These laws include menstrual cups, and have been supported by some cup manufacturers. Base materials Latex Early cups were made from latex manufactured from plant sap (usually gutta-percha or indiarubber). Latex is biodegradable. Latex allergy is common; around 4% of the general population worldwide has it, and repeated exposure makes a person more likely to develop it. Biologically-sourced latex may be brown or amber-coloured (see natural rubber). Latex can harden over time. Silicone rubber Most brands use a silicone rubber (also called a silicone elastomer) as the material for their menstrual cups. Silicone is durable and hypoallergenic. Menstrual cups made from silicone are reusable for up to 10 years. The majority of menstrual cups on the market are reusable, rather than disposable. Most brands, and all reputable ones, state that they use a medical-grade silicone. A UN specification requires medical-grade silicone. In most regulatory systems, there are multiple subgrades of medical-grade silicone. For instance, in the US, class V and class VI are medical grades. Class VI is subgrouped into non-implantable (or medical-healthcare grade), short-term-implantable, and long-term-implantable (with 30 days or more being long-term). Menstrual cups are commonly made of non-implantable medical-grade silicone. There are no specific grades of silicone rubber for long-term mucous membrane contact. There are also non-regulatory distinctions. Most cups are made from liquid silicone rubber (LSR), but some seem to be made from high-consistency rubber (HCR).
While LSR is indeed liquid, HCR is a putty-like material, which makes for differences in the manufacturing process. The former generally uses platinum catalysts to initiate curing (setting); the latter mostly uses peroxide catalysts. While silicone rubbers, as polymers, are inert and hypoallergenic, the corresponding monomers are not. Silicone menstrual cups must therefore be fully cured before use. Heat accelerates curing. Silicone rubbers also come in a range of Shore A hardnesses; a Shore hardness of 10 is gumlike, softer than some sponges and foams (chewing gum is about 20), while 80 is harder than the heel of a shoe. The firmness of a cup is affected by the firmness of the material, but also by its shape and dimensions. Plastic Plastics are also used for menstrual cups. The plastics used are generally thermoplastics (plastics that soften when heated, and can therefore be heat-moulded). They are also generally rubbery or elastomeric. The thermoplastic elastomers used in plastic cups are often unspecified, but are often of some medical grade. The same brand and model of product may be made with different grades of plastic in different jurisdictions, with medical-grade plastic in jurisdictions that require it, and food-grade plastic in those that do not. This may be reflected in the price. Colourants and other additives The silicone or thermoplastics from which most brands of cups are produced are naturally colorless and translucent. Several brands offer colored cups as well as, or instead of, the colorless ones. A UN specification says that cups must be made of medical-grade silicone, and may include additives like "elastomer, dye or colorant", but no more than 0.5%. It also requires that the additives are non-toxic, non-carcinogenic, non-mutagenic, and do not cause skin irritation or skin sensitization. Manufacturers generally do not specify what colourants they use, even on request. In jurisdictions where cups are classed as medical devices, the colourants generally also have to be medical-grade, and fuse permanently to the raw material so that they cannot leach out. In jurisdictions where menstrual cups are classed as consumer devices, colourants need not be medical-grade. Some manufacturers use colourants certified for use in plastics intended to come in contact with food, for use in toys, and for use in consumer electronics. In some cases, a broader range of colours is available in jurisdictions where menstrual cups are not classed as medical devices and food-grade dyes can be used. The same brand and model of product may be made with different grades of colourant in different jurisdictions. Because silicone rubbers are relatively expensive, some less scrupulous manufacturers mix cheaper fillers into their silicone. These fillers are typically not tested for safety. Such cups may whiten when stretched even a small amount, though not all filled cups do. Manufacturing Material must be treated carefully during manufacture to avoid contaminating it. ISO certification is used for silicone manufacturing processes, and is required for regulatory approval in some jurisdictions, like Canada. Injection molding of liquid silicone rubber is used to make most cups. The setup costs (moulds etc.) are significant, so new designs are generally made in runs of about 4,000 or more; production runs of existing designs are generally of about 500 or more at a time. Larger production runs make for cheaper cups, because there are fixed set-up costs: it takes money to design a cup, set up a production run, and clean up afterwards.
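As a rough illustration of that amortization (a minimal sketch; the dollar figures and run sizes are invented for the example, not taken from any manufacturer):

```python
# Hypothetical illustration of amortizing fixed set-up costs over a production
# run; all figures are invented for the example.
def unit_cost(setup_cost: float, variable_cost: float, run_size: int) -> float:
    """Per-cup cost: per-cup material/labour plus a share of the fixed set-up cost."""
    return variable_cost + setup_cost / run_size

# With a $20,000 mould and $2 of material per cup:
for run in (500, 4000, 10000):
    print(f"{run:>6} units: ${unit_cost(20000.0, 2.0, run):.2f} per cup")
# Prints $42.00 per cup at 500 units, $7.00 at 4,000, and $4.00 at 10,000.
```

The fixed cost dominates at small run sizes, which is why existing designs are batched rather than made to order.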
Experience curve effects also reduce costs as more cups are made, including for repeated production runs of the same or similar cups. For example, for an extremely complex overmoulding of silicone on a shaped nylon spring, used for a contraceptive diaphragm, costs on a production run of 500 in 2010 were US$20 per item, while by 2013, batches of 10,000 and improvements in the manufacturing process had brought the cost down to $5 per item. By 2016, further improvements in manufacturing techniques had reduced the reject rate, making the product cheaper. See the costs section, below, for non-manufacturing costs. A well-made cup will not show flash or conspicuous mould lines. Gate marks from the sprues of the mould may be visible on some cups. Regulation Regulation varies by jurisdiction. Some international standards are used in multiple jurisdictions, especially those from the International Organization for Standardization (ISO). Menstrual cup manufacturers seek and advertise ISO 13485 certification. ISO 13485 (quality management for the design and manufacturing of medical devices) and ISO 10993 (biocompatibility of medical devices) are both used for menstrual cups. ISO 14024 is an ecolabel standard which may be used by any manufacturer achieving certification to it; it too is used for some menstrual cups. While ISO standards are non-binding, specific ISO certifications may be legally required by some jurisdictions. Many manufacturers comply with regulations in multiple jurisdictions. They may vary their products in order to do so, using cheaper materials and methods in jurisdictions where they are allowed. Australia Australia changed its regulatory environment in 2018, exempting menstrual cups from the obligation to register on the Australian Register of Therapeutic Goods (ARTG). Australia requires certain package labelling. Canada Canada regulates menstrual cups (like tampons and other insertables) as Class II medical devices. In Canada this means that they must be licensed by Health Canada before being advertised, imported, or sold. There are standards for materials and manufacturing facilities; getting accreditation and meeting the requirements can take years. There is also separate, strong regulation of sustainability claims. This regulation raises costs for Canadian manufacturers; large manufacturers have made statements approving of the regulatory environment, though they complain about online competition from laxer jurisdictions. Menstrual cups that meet the regulatory requirements to be sold in the United States may not meet the requirements in Canada. This means that some menstrual cups manufactured in Canada are sold in the United States, but not in Canada. EU The EU does not regulate menstrual cups as medical devices, but categorizes them as "general products", under the General Product Safety Directive (now replaced by the General Product Safety Regulation). This means that manufacturers, by selling them, guarantee that they are safe, but do not face more oversight than manufacturers of other consumer products. Some menstrual cups carry the EU Ecolabel, which requires minimum standards for packaging, pollution, emission reduction, and toxic substances in the finished product. The EU has the power to order the removal of unsafe products, including from online shops. Manufacturers inside and outside the EU may voluntarily use the CE mark on packaging to assert that a product meets EU regulations. Some EU manufacturers have voluntarily obtained ISO certifications.
South Korea Menstrual cups are categorized as "quasi-drugs" in South Korea. On 7 December 2017, the Ministry of Food and Drug Safety approved the first menstrual cup for sale in South Korea, after a process involving the submission of data from a three-cycle clinical trial on effectiveness, and screening for ten highly hazardous volatile organic compounds. US The US regulates menstrual cups as Class II medical devices, but this does not mean the same thing as in Canada. The manufacturers of the silicone, the manufacturer that shapes it into cups, and the vendor must all be registered with the FDA (using a 510(k) premarket notification and clearance) for a product to be sold legally in the United States. They must submit the required paperwork detailing their manufacturing process and similarity to existing products, and provide contact information. The US regulates the end products, but not the materials. Menstrual cups, unlike tampons, do not require premarket review. Some cups claim to be "FDA approved". The Food and Drug Administration does not approve Class II medical devices, only Class III medical devices. Menstrual cups are categorized as Class II, not Class III, so they cannot be "approved", only "cleared", and these claims are inaccurate. The FDA requires certain product labelling on (or in) all packaging. Cost The costs for menstrual cups vary widely, from US$0.70 to $47 per cup, with a median cost of $23.35 (based on a 2019 review of 199 brands of menstrual cups available in 99 countries). The regulatory environment can have a strong effect on the price, because compliance may be time-consuming and costly. For manufacturing costs, see the manufacturing section, above. Reusable menstrual products (including reusable menstrual cups) are more economical than disposable pads or tampons. The same 2019 review looked at costs across seven countries and found that, over 10 years, a menstrual cup costs $460.25 less than using 12 disposable pads per period, and $304.25 less than using 12 tampons per period. Despite the long-term cost savings, the upfront cost of a menstrual cup is a barrier for some. Environmental impact Since they are reusable, menstrual cups help to reduce solid waste. Some disposable menstrual pads and plastic tampon applicators can take 25 years to break down in the ocean and can cause a significant environmental impact. Biodegradable sanitary options are also available, and these decompose in a short period of time, but they must be composted, and not disposed of in a landfill. When considering a 10-year time period, waste from consistent use of a menstrual cup is only a small fraction of the waste of pads or tampons. For example, compared with using 12 pads per period, use of a menstrual cup would produce only 0.4% of the plastic waste. Each year, an estimated 20 billion pads and tampons are discarded in North America. They typically end up in landfills or are incinerated, which can harm the environment. Most pads and tampons are made of cotton and plastic. Plastic takes 50 or more years to break down, while cotton starts degrading after about 90 days if composted. Given that the menstrual cup is reusable, its use greatly decreases the amount of waste generated from menstrual cycles, as there is no daily waste and the amount of discarded packaging decreases as well. After their life span is over, silicone cups can be burned or sent to a landfill.
Alternatively, one brand offers a recycling program, and some hospitals are able to recycle medical-grade silicone, including cups. Cups made from TPE can be recycled in areas that accept #7 plastics. Rubber cups are compostable. Menstrual cups may be emptied into a small hole in the soil or into compost piles, since menstrual fluid is a valuable fertilizer for plants and any pathogens of sexually transmitted diseases will quickly be destroyed by soil microbes. The water used to rinse the cups can be disposed of in the same way. This reduces the amount of wastewater that needs to be treated. In developing countries, solid waste management is often lacking. Here, menstrual cups have an advantage over disposable pads or tampons, as they do not contribute to the solid waste issues in the communities or generate embarrassing refuse that others may see. History Menstrual cups may have been inspired by other types of vaginal inserts used throughout history. Vaginal inserts had various purposes, ranging from birth control and enabling abortions to supporting a prolapsed uterus. The first version of what we would now call a menstrual cup was a rubber sack attached to a rubber ring, created by S. L. Hockert in 1867 and patented in the United States. An early version of a bullet-shaped menstrual cup was patented in 1932, by the midwifery group of McGlasson and Perkins. Leona Chalmers patented the first usable commercial cup in 1937. Other menstrual cups were patented in 1935, 1937, and 1950. The Tassaway brand of menstrual cups was introduced in the 1960s, but it was not a commercial success. Early menstrual cups were made of rubber. The first menstrual-cup applicator was mentioned in a 1968 Tassaway patent; there are also 21st-century versions, but they have not been a commercial success. No medical research was conducted to ensure that menstrual cups were safe prior to their introduction on the market. Early research in 1962 evaluated 50 volunteers using a bell-shaped cup. The researchers obtained vaginal smears, Gram stains, and basic aerobic cultures of vaginal secretions. Vaginal speculum examination was performed, and pH was measured. No significant changes were noted. This report was the first containing extensive information on the safety and acceptability of a widely used menstrual cup, including both preclinical and clinical testing and over 10 years of post-marketing surveillance. In 1987, another latex rubber menstrual cup, The Keeper, was manufactured in the United States. This proved to be the first commercially viable menstrual cup, and it is still available today. The first silicone menstrual cup was the UK-manufactured Mooncup in 2001. Most menstrual cups are now manufactured from medical-grade silicone. An early menstrual disc, the Gynaeseal, was developed by Dr John Cattanach in 1989, but never found commercial success. In 1997, the Instead Feminine Protection Cup began to be sold across the United States. Designed by Audrey Contente, the disposable disc was made of Kraton. In 2018, reusable silicone discs were introduced. As of 2021, there were ten brands of discs available for purchase in various markets. Menstrual cups are becoming more popular worldwide, with many different brands, shapes, and sizes on the market. Most are reusable, though there is at least one brand of disposable menstrual cups currently manufactured.
Some non-governmental organizations (NGOs) and companies have promoted menstrual cups to women in developing countries since about 2010, for example in Kenya and South Africa. Menstrual cups are regarded as a low-cost and environmentally friendly alternative to sanitary cloth, expensive disposable pads, or "nothing" – the reality for many women in developing countries. Acceptability studies In a randomized controlled feasibility study in rural western Kenya, adolescent primary school girls were provided with menstrual cups or menstrual pads instead of traditional menstrual care items of cloth or tissue. Girls provided with menstrual cups had a lower prevalence of sexually transmitted infections than control groups. Also, the prevalence of bacterial vaginosis was lower among cup users compared with menstrual pad users or those continuing other usual practice. Society and culture Public funding for menstrual cups The municipality of Alappuzha in Kerala, India, launched a project in 2019 and gave away 5,000 menstrual cups for free to residents. The purpose was to encourage the use of these cups instead of non-biodegradable menstrual pads, to reduce waste production. In 2022, Kumbalangi, a village in Kerala, became India's first sanitary-napkin-free panchayat under a project called "Avalkkayi", which gave away 5,700 menstrual cups for free. In 2022, the Spanish government began distributing free menstrual cups through public institutions (such as schools, prisons, and health facilities). In March 2024, Catalonia, in Spain, started supplying free menstrual cups as part of the "My period, my rules" initiative. The universal public healthcare system supplied one menstrual cup, one pair of period underwear, and two packages of reusable cloth menstrual pads per person, available through local pharmacies. The program covers 2.5 million people and cost the Catalan government €8.5 million (about €3.40 per person). The program was undertaken for equity, poverty reduction, taboo reduction, and environmental benefits. It is expected to reduce waste from single-use menstrual hygiene products, which had been 9,000 tons per year, according to the Catalan government. Developing countries Menstrual cups can be useful as a means of menstrual hygiene management for people in developing countries where access to affordable sanitary products may be limited. A lack of affordable hygiene products means inadequate, unhygienic alternatives are often used, which can present a serious health risk. Menstrual cups offer a long-term solution compared to some other menstrual hygiene products because they do not need to be replaced monthly. Cultural aspects Menstrual hygiene products that need to be inserted into the vagina can be unacceptable for cultural reasons. There are myths that they interfere with female reproductive organs and that they cause females to "lose their virginity". There is no evidence that tampon use commonly causes trauma to the hymen. Hymens vary; rarer physiological variations such as septate, cribriform, or microperforate hymens may interfere with tampon use. Some ring-shaped menstrual cups are no bulkier than a tampon when folded as recommended. Inserting objects (including penises) into the vagina may or may not affect the hymen. Some cultures wrongly think that the state of the hymen can give evidence of virginity, and wrongly believe that inserting anything into the vagina will "break" the hymen. This can discourage youths from using cups.
Despite common cultural beliefs, the state of a hymen cannot be used to prove or disprove virginity. Penile penetration does not lead to predictable changes to female genital organs; after puberty, hymens are highly elastic and can stretch during penetration without trace of injury. Females with a confirmed history of sexual abuse involving genital penetration may have normal hymens. Young females who say they have had consensual sex mostly show no identifiable changes in the hymen. Hymens rarely completely cover the vagina, hymens naturally have irregularities in width, and hymens can heal spontaneously without scarring. Many women do not bleed on having vaginal sex for the first time, hymens may not bleed significantly when torn, and vaginal walls may bleed significantly when torn. There has been one news report of the stem of a bell-shaped cup passing outwards through a small side hole in a septate hymen (a hymen with more than one opening), causing pain on attempted removal. The woman had the problem diagnosed and the cup removed at a hospital. She had previously used the cup without problems for four years. Some examine their hymen with a mirror before using a menstrual cup.
Biology and health sciences
Hygiene products
Health
454403
https://en.wikipedia.org/wiki/Deep%20web
Deep web
The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search-engine programs. This is in contrast to the "surface web", which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with coining the phrase in 2001 as a search-indexing term. Deep web sites can be accessed by a direct URL or IP address, but may require entering a password or other security information to access actual content. Uses of deep web sites include web mail, online banking, cloud storage, restricted-access social-media pages and profiles, and web forums that require registration for viewing content. It also includes paywalled services such as video on demand and some online magazines and newspapers. Terminology The first conflation of the terms "deep web" and "dark web" happened during 2009, when deep web search terminology was discussed together with illegal activities occurring on the Freenet and darknet. Those criminal activities include the commerce of personal passwords, false identity documents, drugs, firearms, and child pornography. Since then, after their use in the media's reporting on the black-market website Silk Road, media outlets have generally used "deep web" synonymously with the dark web or darknet, a comparison that some reject as inaccurate and that has consequently become an ongoing source of confusion. Wired reporters Kim Zetter and Andy Greenberg recommend the terms be used in distinct fashions. While the deep web is a reference to any site that cannot be accessed by a traditional search engine, the dark web is a portion of the deep web that has been hidden intentionally and is inaccessible by standard browsers and methods. Non-indexed content Bergman, in a paper on the deep web published in The Journal of Electronic Publishing, mentioned that Jill Ellsworth used the term Invisible Web in 1994 to refer to websites that were not registered with any search engine. Bergman cited a January 1996 article by Frank Garcia: It would be a site that's possibly reasonably designed, but they didn't bother to register it with any of the search engines. So, no one can find them! You're hidden. I call that the invisible Web. Another early use of the term Invisible Web was by Bruce Mount and Matthew B. Koll of Personal Library Software, in a description of the No. 1 Deep Web program found in a December 1996 press release. The first use of the specific term deep web, now generally accepted, occurred in the aforementioned 2001 Bergman study. Indexing methods Methods that prevent web pages from being indexed by traditional search engines may be categorized as one or more of the following:
Contextual web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).
Dynamic content: dynamic pages, which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.
Limited access content: sites that limit access to their pages in a technical manner (e.g., using the Robots Exclusion Standard or CAPTCHAs, or the no-store directive, which prohibit search engines from browsing them and creating cached copies). Sites may feature an internal search engine for exploring such pages.
Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not recognised by search engines.
Private web: sites that require registration and login (password-protected resources).
Scripted content: pages that are accessible only through links produced by JavaScript, as well as content dynamically downloaded from web servers via Flash or Ajax solutions.
Software: certain content is intentionally hidden from the regular Internet, accessible only with special software, such as Tor, I2P, or other darknet software. For example, Tor allows users to access websites using the .onion server address anonymously, hiding their IP address.
Unlinked content: pages which are not linked to by other pages, which may prevent web crawling programs from accessing the content. This content is referred to as pages without backlinks (also known as inlinks). Also, search engines do not always detect all backlinks from searched web pages.
Web archives: web archival services such as the Wayback Machine enable users to see archived versions of web pages across time, including websites that have become inaccessible and are not indexed by search engines such as Google. The Wayback Machine may be termed a program for viewing the deep web, as archived past versions of websites cannot be reached by a search. Since all websites are updated over time, their archived past versions count as deep web content.
Content types While it is not always possible to directly discover a specific web server's content so that it may be indexed, a site can potentially be accessed indirectly (through computer vulnerabilities). To discover content on the web, search engines use web crawlers that follow hyperlinks through known protocol virtual port numbers. This technique is ideal for discovering content on the surface web but is often ineffective at finding deep web content. For example, these crawlers do not attempt to find dynamic pages that are the result of database queries, due to the indeterminate number of queries that are possible. It has been noted that this can be overcome (partially) by providing links to query results, but this could unintentionally inflate the popularity of a deep web site. DeepPeep, Intute, Deep Web Technologies, Scirus, and Ahmia.fi are a few search engines that have accessed the deep web. Intute ran out of funding and has been a temporary static archive since July 2011. Scirus retired near the end of January 2013. Researchers have been exploring how the deep web can be crawled in an automatic fashion, including content that can be accessed only by special software such as Tor. In 2001, Sriram Raghavan and Hector Garcia-Molina (Stanford Computer Science Department, Stanford University) presented an architectural model for a hidden-Web crawler that used important terms provided by users or collected from the query interfaces to query a Web form and crawl the deep web content. Alexandros Ntoulas, Petros Zerfos, and Junghoo Cho of UCLA created a hidden-Web crawler that automatically generated meaningful queries to issue against search forms, as sketched below. Several form query languages (e.g., DEQUEL) have been proposed that, besides issuing a query, also allow extraction of structured data from result pages. Another effort is DeepPeep, a project of the University of Utah sponsored by the National Science Foundation, which gathered hidden-web sources (web forms) in different domains based on novel focused-crawler techniques. Commercial search engines have begun exploring alternative methods to crawl the deep web.
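A minimal sketch of the form-querying idea behind such hidden-web crawlers is shown below. It is illustrative only: the URL, form field name, and seed keywords are hypothetical placeholders, and real research systems add query selection, politeness controls, and result analysis far beyond this.

```python
# Minimal sketch of a hidden-web (form-based) crawler: submit keyword queries
# to a site's search form and collect the result links that an ordinary
# link-following crawler would never reach. All names here are placeholders.
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href attributes from anchor tags in a result page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def query_form(search_url: str, field: str, keyword: str) -> list:
    """Submit one keyword through a GET search form and return result links."""
    url = search_url + "?" + urllib.parse.urlencode({field: keyword})
    with urllib.request.urlopen(url) as response:
        page = response.read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(page)
    return collector.links

# Seed keywords; a fuller system would extract new candidate keywords from
# the result pages themselves and iterate until few new pages appear.
for word in ["report", "archive", "catalog"]:
    links = query_form("https://example.org/search", "q", word)
    print(word, "->", len(links), "result links")
```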
The Sitemap Protocol (first developed and introduced by Google in 2005) and OAI-PMH are mechanisms that allow search engines and other interested parties to discover deep web resources on particular web servers. Both mechanisms allow web servers to advertise the URLs that are accessible on them, thereby allowing automatic discovery of resources that are not linked directly to the surface web. Google's deep web surfacing system computes submissions for each HTML form and adds the resulting HTML pages into the Google search engine index. The surfaced results account for a thousand queries per second to deep web content. In this system, the pre-computation of submissions is done using three algorithms: selecting input values for text search inputs that accept keywords; identifying inputs that accept only values of a specific type (e.g., date); and selecting a small number of input combinations that generate URLs suitable for inclusion into the Web search index. In 2008, to make it easier for users of Tor hidden services to access and search sites with a hidden .onion suffix, Aaron Swartz designed Tor2web, a proxy application able to provide access by means of common web browsers. Using this application, deep web links appear as a random sequence of letters followed by the .onion top-level domain.
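To make the Sitemap mechanism described above concrete: a site operator can publish a sitemap file listing pages (including form-generated or otherwise unlinked URLs) that crawlers would not discover by following links. A minimal example in the protocol's own XML format follows; the URLs are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each <url> entry advertises one page the server wants crawled. -->
  <url>
    <loc>https://example.org/archive?id=1042</loc>
    <lastmod>2024-03-01</lastmod>
  </url>
  <url>
    <loc>https://example.org/reports/annual</loc>
  </url>
</urlset>
```

The file is typically advertised with a single line in the site's robots.txt, such as "Sitemap: https://example.org/sitemap.xml", so that any crawler fetching robots.txt learns where to find it.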
Technology
Internet
null
454450
https://en.wikipedia.org/wiki/Analytical%20mechanics
Analytical mechanics
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly the accelerations, momenta, and forces of the constituents of the system; it can also be called vectorial mechanics. A scalar is a quantity alone, whereas a vector is represented by a quantity and a direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems. Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in this context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up; thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics. Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory. Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinitely many degrees of freedom. The definitions and equations have a close analogy with those of particle mechanics. Motivation for analytical mechanics The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering.
Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system. Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, and acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle", understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation, and then the problem is reduced to the solving of that equation. When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such a simple system as the rotation of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description. The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather than considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system. Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes the kinematic conditions for granted. Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion. It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (an elementary function) as in the time of Newton, but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions.
If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because, given the initial conditions, t determines the coordinates at time t. This is especially true at present with modern methods of computer modelling, which provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations. Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values, or a collection of such solutions, does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves. Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable, and these include a very wide range of problems indeed. Development of analytical mechanics has two objectives: (i) to increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) to understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed.

Intrinsic motion

Generalized coordinates and constraints

In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or some other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, and these relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted $q_i$ ($i = 1, 2, 3, \ldots$).

Difference between curvilinear and generalized coordinates

Generalized coordinates incorporate constraints on the system. There is one generalized coordinate $q_i$ for each degree of freedom (for convenience labelled by an index $i = 1, 2, \ldots, N$), i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates.
The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3D space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule that the number of degrees of freedom equals the number of coordinates of the unconstrained system minus the number of holonomic constraints; for n particles in 3-dimensional space subject to C such constraints, $N = 3n - C$. For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple, $\mathbf{q} = (q_1, q_2, \ldots, q_N)$, and the time derivative (here denoted by an overdot) of this tuple gives the generalized velocities, $\dot{\mathbf{q}} = (\dot{q}_1, \dot{q}_2, \ldots, \dot{q}_N)$.

D'Alembert's principle of virtual work

D'Alembert's principle states that the infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with the ideal constraints of the system. The idea of a constraint is useful, since it limits what the system can do and can provide steps towards solving for the motion of the system. The equation for D'Alembert's principle is $\delta W = \boldsymbol{\mathcal{Q}} \cdot \delta\mathbf{q} = 0$, where $\boldsymbol{\mathcal{Q}} = (\mathcal{Q}_1, \mathcal{Q}_2, \ldots, \mathcal{Q}_N)$ are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and $\mathbf{q}$ are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics: $\boldsymbol{\mathcal{Q}} = \frac{d}{dt}\left(\frac{\partial T}{\partial \dot{\mathbf{q}}}\right) - \frac{\partial T}{\partial \mathbf{q}}$, where T is the total kinetic energy of the system, and the notation $\partial/\partial\mathbf{q} = (\partial/\partial q_1, \ldots, \partial/\partial q_N)$ is a useful shorthand (see matrix calculus for this notation).

Constraints

If the curvilinear coordinate system is defined by the standard position vector $\mathbf{r}$, and if the position vector can be written in terms of the generalized coordinates and time in the form $\mathbf{r} = \mathbf{r}(\mathbf{q}, t)$, and this relation holds for all times t, then the constraints are called holonomic. Vector $\mathbf{r}$ is explicitly dependent on t in cases where the constraints vary with time, not just because of $\mathbf{q}(t)$. For time-independent situations, the constraints are also called scleronomic; for time-dependent cases they are called rheonomic.

Lagrangian mechanics

The introduction of generalized coordinates and the fundamental Lagrangian function, $L(\mathbf{q}, \dot{\mathbf{q}}, t) = T - V$, where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula, leads to the Euler–Lagrange equations, $\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) = \frac{\partial L}{\partial q_i}$, which are a set of N second-order ordinary differential equations, one for each $q_i(t)$. This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of the kinetic energy is least, assuming the total energy to be fixed and imposing no conditions on the time of transit. The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates, $\mathcal{C} = \{\mathbf{q} \in \mathbb{R}^N\}$, where $\mathbb{R}^N$ is N-dimensional real space (see also set-builder notation). A particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular $\mathbf{q}(t)$ subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time, $\{\mathbf{q}(t) \in \mathbb{R}^N : t \ge 0, t \in \mathbb{R}\}$. The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle.
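To make the Lagrangian procedure concrete, the following sketch uses the sympy symbolic-algebra library to derive the Euler–Lagrange equation for a planar pendulum, whose constraint (a rod of fixed length l) reduces two Cartesian coordinates to a single generalized coordinate θ. This is an illustrative sketch, not drawn from the text above; the symbol names are arbitrary.

    import sympy as sp

    t, m, l, g = sp.symbols('t m l g', positive=True)
    theta = sp.Function('theta')(t)  # single generalized coordinate

    # Kinetic and potential energy after the constraint x**2 + y**2 = l**2
    # has eliminated the Cartesian coordinates.
    T = sp.Rational(1, 2) * m * (l * sp.diff(theta, t))**2
    V = -m * g * l * sp.cos(theta)
    L = T - V

    # Euler-Lagrange equation: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
    theta_dot = sp.diff(theta, t)
    eom = sp.diff(sp.diff(L, theta_dot), t) - sp.diff(L, theta)
    print(sp.simplify(eom))  # m*l**2*theta'' + m*g*l*sin(theta), set to zero

The result is the familiar pendulum equation, obtained without ever writing down the force of constraint in the rod.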
Hamiltonian mechanics

The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities $(\mathbf{q}, \dot{\mathbf{q}})$ with $(\mathbf{q}, \mathbf{p})$: the generalized coordinates and the generalized momenta conjugate to them, $p_i = \frac{\partial L}{\partial \dot{q}_i}$, and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta), $H(\mathbf{q}, \mathbf{p}, t) = \mathbf{p} \cdot \dot{\mathbf{q}} - L(\mathbf{q}, \dot{\mathbf{q}}, t)$, where $\cdot$ denotes the dot product, also leading to Hamilton's equations, $\dot{\mathbf{q}} = +\frac{\partial H}{\partial \mathbf{p}}, \quad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}$, which are now a set of 2N first-order ordinary differential equations, one for each $q_i(t)$ and $p_i(t)$. Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian, $\frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t}$, which is often considered one of Hamilton's equations of motion in addition to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: $\dot{\mathbf{p}} = \boldsymbol{\mathcal{Q}}$. Analogous to the configuration space, the set of all momenta is the generalized momentum space, $\mathcal{M} = \{\mathbf{p} \in \mathbb{R}^N\}$. ("Momentum space" also refers to "k-space", the set of all wave vectors (given by the de Broglie relations), as used in quantum mechanics and the theory of waves.) The set of all positions and momenta forms the phase space, $\mathcal{P} = \mathcal{C} \times \mathcal{M} = \{(\mathbf{q}, \mathbf{p}) \in \mathbb{R}^{2N}\}$, that is, the Cartesian product of the configuration space and the generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve $(\mathbf{q}(t), \mathbf{p}(t))$ subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait, $\{(\mathbf{q}(t), \mathbf{p}(t)) \in \mathbb{R}^{2N} : t \ge 0, t \in \mathbb{R}\}$.

The Poisson bracket

All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). If A(q, p, t) and B(q, p, t) are two scalar-valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta: $\{A, B\} = \sum_{i=1}^{N} \left(\frac{\partial A}{\partial q_i}\frac{\partial B}{\partial p_i} - \frac{\partial A}{\partial p_i}\frac{\partial B}{\partial q_i}\right)$. Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A: $\frac{dA}{dt} = \{A, H\} + \frac{\partial A}{\partial t}$. This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization: $\{A, B\} \to \frac{1}{i\hbar}[\hat{A}, \hat{B}]$.

Properties of the Lagrangian and the Hamiltonian

Following are overlapping properties of the Lagrangian and Hamiltonian functions. All the individual generalized coordinates $q_i(t)$, velocities $\dot{q}_i(t)$ and momenta $p_i(t)$ for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence. The Lagrangian is invariant under addition of the total time derivative of any function of $\mathbf{q}$ and t, that is, $L' = L + \frac{d}{dt}F(\mathbf{q}, t)$, so each Lagrangian L and L′ describes exactly the same motion. In other words, the Lagrangian of a system is not unique. Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of $\mathbf{q}$, $\mathbf{p}$ and t, that is, $K = H + \frac{\partial}{\partial t}G(\mathbf{q}, \mathbf{p}, t)$ (K is a frequently used letter in this case). This property is used in canonical transformations (see below). If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved; this immediately follows from Lagrange's equations: $\frac{\partial L}{\partial q_i} = 0 \;\Rightarrow\; \dot{p}_i = \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} = 0$. Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates.
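The phase-space picture above can also be explored numerically. The sketch below (assuming NumPy and SciPy are available; all names are illustrative) integrates Hamilton's equations for a one-dimensional harmonic oscillator and checks that H is constant along the resulting phase path:

    import numpy as np
    from scipy.integrate import solve_ivp

    # H = p**2/(2*m) + k*q**2/2, so Hamilton's equations read
    # q' = dH/dp = p/m and p' = -dH/dq = -k*q.
    m, k = 1.0, 1.0

    def hamilton(t, y):
        q, p = y
        return [p / m, -k * q]

    sol = solve_ivp(hamilton, (0.0, 10.0), y0=[1.0, 0.0], max_step=0.01)
    q, p = sol.y

    # The phase path (q(t), p(t)) should stay on a curve of constant H:
    H = p**2 / (2 * m) + k * q**2 / 2
    print(H.min(), H.max())  # both close to 0.5, up to integration error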
If the Lagrangian is time-independent, the Hamiltonian is also time-independent (i.e. both are constant in time). If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, $T(\mathbf{q}, \lambda\dot{\mathbf{q}}, t) = \lambda^2 T(\mathbf{q}, \dot{\mathbf{q}}, t)$, where λ is a constant, and the Lagrangian is explicitly time-independent, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: $H = T + V = E$. This is the basis for the Schrödinger equation; inserting quantum operators directly yields it.

Principle of least action

Action is another quantity in analytical mechanics, defined as a functional of the Lagrangian: $\mathcal{S} = \int_{t_1}^{t_2} L(\mathbf{q}, \dot{\mathbf{q}}, t)\, dt$. A general way to find the equations of motion from the action is the principle of least action, $\delta\mathcal{S} = 0$, where the departure $t_1$ and arrival $t_2$ times are fixed. The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space $\mathcal{C}$; in other words, $\mathbf{q}(t)$ tracing out a path in $\mathcal{C}$. The path for which the action is least is the path taken by the system. From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), underlies the path integral formulation of quantum mechanics (see D. McMahon, Quantum Field Theory, McGraw-Hill, 2008), and is used for calculating geodesic motion in general relativity.

Hamilton–Jacobi mechanics

Canonical transformations

The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways, via generating functions of the form $G_1(\mathbf{q}, \mathbf{Q}, t)$, $G_2(\mathbf{q}, \mathbf{P}, t)$, $G_3(\mathbf{p}, \mathbf{Q}, t)$ or $G_4(\mathbf{p}, \mathbf{P}, t)$. With the restriction on P and Q such that the transformed Hamiltonian system is $\dot{\mathbf{Q}} = +\frac{\partial K}{\partial \mathbf{P}}, \quad \dot{\mathbf{P}} = -\frac{\partial K}{\partial \mathbf{Q}}$, the above transformations are called canonical transformations, and each function $G_n$ is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem. The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket be unity, $\{Q_i, P_i\} = 1$, for all i = 1, 2, ..., N. If this does not hold, then the transformation is not canonical.

The Hamilton–Jacobi equation

By setting the canonically transformed Hamiltonian K = 0 and the type-2 generating function equal to Hamilton's principal function (also the action $\mathcal{S}$) plus an arbitrary constant C, $G_2(\mathbf{q}, t) = \mathcal{S}(\mathbf{q}, t) + C$, the generalized momenta become $\mathbf{p} = \frac{\partial \mathcal{S}}{\partial \mathbf{q}}$ and P is constant; then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation: $-\frac{\partial \mathcal{S}}{\partial t} = H\!\left(\mathbf{q}, \frac{\partial \mathcal{S}}{\partial \mathbf{q}}, t\right)$, where H is the Hamiltonian as before. Another related function is Hamilton's characteristic function $W(\mathbf{q})$, used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H. The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields.
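The Poisson-bracket criterion for canonical transformations stated above can be checked symbolically. A minimal sketch with sympy, for one degree of freedom and a standard textbook transformation (the "exchange" transformation Q = p, P = −q):

    import sympy as sp

    q, p = sp.symbols('q p')

    def poisson(A, B):
        # Poisson bracket {A, B} for a single (q, p) pair.
        return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

    print(poisson(p, -q))    # 1, so Q = p, P = -q is canonical
    print(poisson(q**2, p))  # 2*q, not 1, so Q = q**2, P = p is not canonical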
Routhian mechanics

Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used, but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ..., qs with conjugate momenta p = p1, p2, ..., ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ2, ..., ζN−s, the cyclic coordinates can be removed by introducing the Routhian, $R = \sum_{i=1}^{s} p_i \dot{q}_i - L$, which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q and N − s Lagrangian equations in the non-cyclic coordinates ζ. Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of as a Lagrangian with N − s degrees of freedom. The coordinates q do not have to be cyclic; the partition between which coordinates enter the Hamiltonian equations and which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non-cyclic coordinates to the Lagrangian equations of motion.

Appellian mechanics

Appell's equations of motion involve generalized accelerations, the second time derivatives of the generalized coordinates, $\alpha_r = \ddot{q}_r$, as well as the generalized forces mentioned above in D'Alembert's principle. The equations are $\mathcal{Q}_r = \frac{\partial S}{\partial \alpha_r}, \quad S = \frac{1}{2}\sum_{k=1}^{N} m_k \mathbf{a}_k^2$, where $\mathbf{a}_k = \ddot{\mathbf{r}}_k$ is the acceleration of the k-th particle, the second time derivative of its position vector. Each acceleration $\mathbf{a}_k$ is expressed in terms of the generalized accelerations $\alpha_r$; likewise each $\mathbf{r}_k$ is expressed in terms of the generalized coordinates $q_r$.

Classical field theory

Lagrangian field theory

Generalized coordinates apply to discrete particles. For N scalar fields $\varphi_i(\mathbf{r}, t)$, where i = 1, 2, ..., N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves, $\mathcal{L} = \mathcal{L}(\varphi_i, \partial_\mu \varphi_i, \mathbf{r}, t)$, and the Euler–Lagrange equations have an analogue for fields, $\partial_\mu \left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu \varphi_i)}\right) = \frac{\partial \mathcal{L}}{\partial \varphi_i}$, where $\partial_\mu$ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second-order partial differential equations in the fields, which in general will be coupled and nonlinear. This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields. The Lagrangian is the volume integral of the Lagrangian density, $L = \int \mathcal{L}\, d^3\mathbf{r}$ (see J.A. Wheeler, C. Misner, K.S. Thorne, Gravitation, W.H. Freeman & Co, 1973). Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations, such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation (a standard example is sketched at the end of this section).

Hamiltonian field theory

The corresponding "momentum" field densities conjugate to the N scalar fields $\varphi_i(\mathbf{r}, t)$ are $\pi_i(\mathbf{r}, t) = \frac{\partial \mathcal{L}}{\partial \dot{\varphi}_i}$, where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density is defined by analogy with mechanics: $\mathcal{H} = \sum_{i=1}^{N} \pi_i \dot{\varphi}_i - \mathcal{L}$. The equations of motion are $\dot{\varphi}_i = +\frac{\delta \mathcal{H}}{\delta \pi_i}, \quad \dot{\pi}_i = -\frac{\delta \mathcal{H}}{\delta \varphi_i}$, where the variational derivative $\delta/\delta\varphi_i$ must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first-order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian: $H = \int \mathcal{H}\, d^3\mathbf{r}$.

Symmetry, conservation, and Noether's theorem

Symmetry transformations in classical space and time

Each transformation can be described by an operator (i.e. a function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries: translation in time, $t \to t + \Delta t$; translation in space, $\mathbf{r} \to \mathbf{r} + \Delta\mathbf{r}$; rotation, $\mathbf{r} \to R(\hat{\mathbf{n}}, \theta)\,\mathbf{r}$; and a Galilean boost, $\mathbf{r} \to \mathbf{r} + \mathbf{v}t$; where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂ and angle θ.
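Returning to the choice of Lagrangian density mentioned above, a standard textbook example (not specific to any particular source) is the free real scalar field of mass m, with $\mathcal{L} = \tfrac{1}{2}\,\partial_\mu \varphi\, \partial^\mu \varphi - \tfrac{1}{2} m^2 \varphi^2$ (in units with $\hbar = c = 1$). Substituting this density into the Euler–Lagrange field equation gives $\partial_\mu \partial^\mu \varphi + m^2 \varphi = 0$, the Klein–Gordon equation.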
Noether's theorem

Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law: if the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s, $L[\mathbf{q}(s, t), \dot{\mathbf{q}}(s, t)] = L[\mathbf{q}(t), \dot{\mathbf{q}}(t)]$, then the Lagrangian describes the same motion independently of s, which can be a length, an angle of rotation, or a time, and the momenta conjugate to $\mathbf{q}$ will be conserved.
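For a concrete case of Noether's theorem (a standard example, not specific to the text above): if the Lagrangian is invariant under the spatial translation $q \to q + s$, then $\partial L/\partial q = 0$, and Lagrange's equations give $\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = 0$, so the conjugate momentum $p = \partial L/\partial \dot{q}$ is conserved. In the same way, invariance under rotations yields conservation of angular momentum, and invariance under time translation yields conservation of energy.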
https://en.wikipedia.org/wiki/Windows%209x
Windows 9x
Windows 9x is a generic term referring to a line of discontinued Microsoft Windows operating systems produced from 1995 to 2000, which were based on the Windows 95 kernel and its underlying foundation of MS-DOS, both of which were updated in subsequent versions. The first version in the 9x series was Windows 95, which was succeeded by Windows 98 and then Windows Me, the third and last version of Windows on the 9x line, before the series was superseded by Windows XP. Windows 9x is predominantly known for its use in home desktops. In 1998, Windows made up 82% of operating system market share. The internal release number for versions of Windows 9x is 4.x. The internal versions for Windows 95, 98, and Me are 4.0, 4.1, and 4.9, respectively. Previous MS-DOS-based versions of Windows used version numbers of 3.2 or lower. Windows NT, which was aimed at professional users and business networks, used a similar but separate version number between 3.1 and 4.0. All versions of Windows from Windows XP onwards are based on the Windows NT codebase.

History

Windows prior to 95

The first independent version of Microsoft Windows, version 1.0, released on November 20, 1985, achieved little popularity. Its name was initially "Interface Manager", but Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to consumers. Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS. Consequently, it shared the inherent flaws and problems of MS-DOS. The second installment of Microsoft Windows, version 2.0, was released on December 9, 1987, and used the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasking system like DESQview, which used 286 protected mode. Microsoft Windows scored a significant success with Windows 3.0, released in 1990. In addition to improved capabilities given to native applications, Windows also allowed users to better multitask older MS-DOS-based software compared to Windows/386, thanks to the introduction of virtual memory. Microsoft developed Windows 3.1, which included several minor improvements to Windows 3.0, primarily consisting of bugfixes and multimedia support. It also excluded support for real mode, and ran only on an Intel 80286 or better processor. Windows 3.1 was released on April 6, 1992. In November 1993, Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in early 1992. Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VMS at Digital Equipment Corporation. Microsoft hired him in August 1988 to create a successor to OS/2, but Cutler created a completely new system instead, based on his MICA project at Digital. The first version of Windows NT, Windows NT 3.1, was released on July 27, 1993, and used Windows 3.1's interface. About a year before development of Windows 3.1's successor (Windows 95, code-named Chicago) began, Microsoft announced at its 1991 Professional Developers Conference that it would be developing a successor to Windows NT code-named Cairo, which some viewed as succeeding both Windows NT and Windows 3.1's successor under one unified system.
Microsoft publicly demonstrated Cairo at the 1993 Professional Developers Conference, complete with a demo system running Cairo for all attendees to use. Based on the Windows NT kernel, Cairo was a next-generation operating system that was to bring many new technologies into Windows, including a new user interface with an object-based file system (the new user interface would officially debut with Windows 95 nearly 4 years later, while the object-based file system would later be revisited as WinFS during the development of Windows Vista). According to Microsoft's product plan at the time, Cairo was planned to be released as late as July 1996. However, it became apparent that Cairo was a much more difficult project than Microsoft had anticipated, and the project was cancelled 5 years into development. A subset of features from Cairo was eventually added to Windows NT 4.0, released on August 24, 1996, albeit without the object file system. Windows NT and Windows 9x would not be truly unified until Windows XP nearly 5 years later, when Microsoft merged its consumer and business lines of Windows under a single brand name based on Windows NT.

Windows 95

After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system, code-named Chicago. Chicago was designed to have support for 32-bit preemptive multitasking of the kind available in OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped, and after Cairo itself was cancelled. Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance and development time. Additionally, it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even where these design decisions no longer matched a more modern computing environment. These factors began to impact the operating system's efficiency and stability. Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995. Microsoft went on to release five different versions of Windows 95:

Windows 95 – original release (RTM)
Windows 95 A – included Windows 95 OSR1 slipstreamed into the installation.
Windows 95 B – (OSR2) included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support.
Windows 95 B USB – (OSR2.1) included basic USB support.
Windows 95 C – (OSR2.5) included all the above features, plus IE 4.0. This was the last 95 version produced.

OSR2, OSR2.1, and OSR2.5 ("OSR" being an initialism for "OEM Service Release") were not released to the general public; rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity). The first Microsoft Plus! add-on pack was sold for Windows 95.
Windows 98

On June 25, 1998, Microsoft released Windows 98, code-named "Memphis" during development. It included new hardware drivers and better support for the FAT32 file system, which allows support for disk partitions larger than the 2 GB maximum accepted by Windows 95. The USB support in Windows 98 was more robust than the basic support provided by the OEM editions of Windows 95. It also introduced the controversial integration of the Internet Explorer 4 web browser into the Windows shell and File Explorer (then known as Windows Explorer). On June 10, 1999, Microsoft released Windows 98 Second Edition (also known as Windows 98 SE), an interim release whose notable features were the addition of Internet Connection Sharing and improved WDM audio and modem support. Internet Connection Sharing is a form of network address translation, allowing several machines on a LAN (local area network) to share a single Internet connection. It also included Internet Explorer 5, as opposed to Internet Explorer 4 in the original version. Windows 98 Second Edition also had certain improvements over the original release, and hardware support through device drivers was increased. Many minor problems present in the original release of Windows 98 were also found and fixed. These changes, among others, make it, according to many, the most stable release of the Windows 9x family—to the extent that some commentators used to say that Windows 98's beta version was more stable than Windows 95's final version. As with Windows 95, Windows 98 received a Microsoft Plus! add-on in the form of Plus! 98.

Windows Me

On September 14, 2000, Microsoft introduced Windows Me (Millennium Edition; also known as Windows ME), which upgraded Windows 98 with enhanced multimedia and Internet features. Code-named "Millennium", it was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP (then code-named Whistler). It brought some features from the business-oriented Windows 2000 into the Windows 9x series, and introduced the first version of System Restore, which allowed users to revert their system state to a previous "known-good" point in the case of a system failure. Windows Me also introduced the first release of Windows Movie Maker and included Windows Media Player 7. Internet Explorer 5.5 shipped with Windows Me. Many of the new features of Windows Me were also made available as updates for older Windows versions such as Windows 98 via Windows Update. The role of MS-DOS was also greatly reduced compared to previous versions of Windows, with Windows Me no longer allowing real-mode DOS to be accessed. Windows Me initially gained a positive reception upon its release, but was later heavily criticized by users for its instability and unreliability, due to frequent freezes and crashes. Windows Me has been viewed by many as one of the worst operating systems of all time, both at the time and in retrospect. PC World was highly critical of Windows Me in the months after it was released, its article infamously describing Windows Me as the "Mistake Edition" and placing it 4th in its "Worst Tech Products of All Time" feature in 2006, long after the product was no longer available.
Consequently, many home users who were affected by Windows Me's instabilities (as well as those who viewed Windows Me negatively) stuck with the more reliable Windows 98 Second Edition for the remainder of Windows Me's lifecycle, until the release of Windows XP in 2001. A small number of Windows Me owners moved over to the business-oriented Windows 2000 Professional during the same period. The inability of users to easily boot into real-mode MS-DOS, as in Windows 95 and 98, led users to quickly figure out how to hack their Windows Me installations to restore this missing functionality. Windows Me never received a dedicated Microsoft Plus! add-on as Windows 95 and Windows 98 had.

Decline

The release of Windows 2000 marked a shift in the user experience between the Windows 9x series and the Windows NT series. Windows NT 4.0, while based on the Windows 95 interface, suffered from a lack of support for USB, Plug and Play and DirectX versions after 3.0, preventing its users from playing contemporary games. Windows 2000, on the other hand, while primarily aimed at business and server users, featured an updated user interface and better support for both Plug and Play and USB, as well as built-in support for DirectX 7.0. The release of Windows XP in late 2001 confirmed the change of direction for Microsoft, bringing the consumer and business operating systems together under Windows NT. After the release of Windows XP, Microsoft stopped selling Windows 9x releases to end users (and later to OEMs) in the early 2000s. By March 2004, it was impossible to purchase any version of the Windows 9x series.

End of support

Over time, support for the Windows 9x series ended. Windows 95 lost its mainstream support on December 31, 2000, and extended support for Windows 95 ended on December 31, 2001 (which also ended support for older Windows versions prior to Windows 95 on that same day). Mainstream support for Windows 98 and Windows 98 Second Edition ended on June 30, 2002, and mainstream support for Windows Me ended on December 31, 2003. Microsoft then continued to support the Windows 9x series until July 11, 2006, when extended support ended for Windows 98, Windows 98 Second Edition (SE), and Windows Millennium Edition (Me) – 4 years after extended support for Windows 95 ended on December 31, 2001. Microsoft DirectX, a set of standard gaming APIs, stopped being updated on Windows 95 at version 8.0a. It also stopped being updated on Windows 98 and Me after the release of Windows Vista in 2006, making DirectX 9.0c the last version of DirectX to support these operating systems. Support for Microsoft Internet Explorer on all Windows 9x releases has also ended. Windows 95, Windows 98 and Windows Me all lost security patches for Internet Explorer when the respective operating systems reached their end-of-support dates. Internet Explorer 5.5 with Service Pack 2 is the last version of Internet Explorer compatible with Windows 95, while Internet Explorer 6 with Service Pack 1 is the last version compatible with the later releases of Windows 9x (i.e. 98 and Me). While Internet Explorer 6 for Windows XP did receive security patches up until it lost support, this was not the case for IE6 under Windows 98 and Me. Owing to the age of the 9x line, Internet Explorer 7, the first major update to Internet Explorer 6 in half a decade, was made available only for Windows XP SP2 and Windows Vista.
The Windows Update website continued to be available for Windows 98, Windows 98 SE, and Windows Me after their end-of-support dates; however, during 2011, Microsoft retired the Windows Update v4 website and removed the updates for Windows 98, Windows 98 SE, and Windows Me from its servers. Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows Me (and XP) would end on July 31, 2019 (and for Windows 7 on January 22, 2020).

Current usage

The growing number of updates missed since the end of support for these operating systems has slowly made Windows 9x ever less practical for everyday use. Today, even open-source projects such as Mozilla Firefox will not run on Windows 9x without major rework. RetroZilla is a fork of Gecko 1.8.1 aimed at bringing "improved compatibility on the modern web" to versions of Windows as old as Windows 95 and NT 4.0. The latest version, 2.2, was released in February 2019 and added support for TLS 1.2.

Design

Kernel

Windows 9x is a series of monolithic 16/32-bit operating systems. Like most operating systems, Windows 9x divides memory into kernel space and user space. Although Windows 9x features some memory protection, it does not protect the first megabyte of memory from userland applications, for compatibility reasons. This area of memory contains code critical to the functioning of the operating system, and by writing into this area of memory an application can crash or freeze the operating system. This was a source of instability, as faulty applications could accidentally write into this region, potentially corrupting important operating system memory, which usually resulted in some form of system error and halt.

User mode

The user-mode parts of Windows 9x consist of three subsystems: the Win16 subsystem, the Win32 subsystem and MS-DOS. Windows 9x/Me set aside two 64 KiB blocks of memory for GDI and heap resources. Running multiple applications, applications with numerous GDI elements, or applications over a long span of time could exhaust these memory areas. If free system resources dropped below 10%, Windows would become unstable and would likely crash.

Kernel mode

The kernel-mode parts consist of the Virtual Machine Manager (VMM), the Installable File System Manager (IFSHLP), the Configuration Manager, and, in Windows 98 and later, the WDM Driver Manager (NTKERN). As a 32-bit operating system, Windows 9x provides a 4 GiB virtual address space per process, divided into a lower 2 GiB for applications and an upper 2 GiB for the kernel.

Registry

Like Windows NT, Windows 9x stores user-specific and configuration-specific settings in a large information database called the Windows registry. Hardware-specific settings are also stored in the registry, and many device drivers use the registry to load configuration data. Previous versions of Windows used files such as AUTOEXEC.BAT, CONFIG.SYS, WIN.INI, SYSTEM.INI and other files with an .INI extension to maintain configuration settings. As Windows became more complex and incorporated more features, .INI files became too unwieldy for the limitations of the then-current FAT filesystem. Backwards compatibility with .INI files was maintained until Windows XP succeeded the 9x and NT lines. Although Microsoft discourages using .INI files in favor of registry entries, a large number of applications (particularly 16-bit Windows-based applications) still use .INI files. Windows 9x supports .INI files solely for compatibility with those applications and related tools (such as setup programs).
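To illustrate the flat key–value format that made .INI files unwieldy as Windows grew (the section and key names below are invented for the example), a classic-style .INI file can be parsed with Python's standard configparser module; the registry replaced this flat, untyped format with a hierarchical, typed store:

    import configparser

    # A minimal WIN.INI-style configuration: named sections containing
    # flat key=value pairs, with no hierarchy, data types, or access control.
    SAMPLE = "[Desktop]\nWallpaper=C:\\WINDOWS\\CLOUDS.BMP\nTileWallpaper=1\n"

    config = configparser.ConfigParser()
    config.read_string(SAMPLE)
    print(config["Desktop"]["Wallpaper"])  # C:\WINDOWS\CLOUDS.BMP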
The AUTOEXEC.BAT and CONFIG.SYS files also still exist for compatibility with real-mode system components and to allow users to change certain default system settings, such as the PATH environment variable. The registry consists of two files: User.dat and System.dat. In Windows Me, Classes.dat was added.

Virtual Machine Manager

The Virtual Machine Manager (VMM) is the 32-bit protected-mode kernel at the core of Windows 9x. Its primary responsibility is to create, run, monitor and terminate virtual machines. The VMM provides services that manage memory, processes, interrupts and protection faults. The VMM works with virtual devices (loadable kernel modules, which consist mostly of 32-bit ring 0 or kernel-mode code, but may include other types of code, such as a 16-bit real-mode initialisation segment) to allow those virtual devices to intercept interrupts and faults, in order to control the access that an application has to hardware devices and installed software. Both the VMM and virtual device drivers run in a single, 32-bit, flat-model address space at privilege level 0 (also called ring 0). The VMM provides multi-threaded, preemptive multitasking. It runs multiple applications simultaneously by sharing CPU (central processing unit) time between the threads in which the applications and virtual machines run. The VMM is also responsible for creating MS-DOS environments for system processes and Windows applications that still need to run in MS-DOS mode. It is the replacement for WIN386.EXE in Windows 3.x, and the file vmm32.vxd is a compressed archive containing most of the core VxDs, including VMM.vxd itself and ifsmgr.vxd (which facilitates file system access without the need to call the real-mode file system code of the DOS kernel).

Software support

Unicode

Partial support for Unicode can be installed on Windows 9x through the Microsoft Layer for Unicode.

File systems

Windows 9x does not natively support NTFS or HPFS; however, there are third-party solutions available for Windows 9x that allow read-only access to NTFS volumes. Early versions of Windows 95 did not support FAT32. Like Windows for Workgroups 3.11, Windows 9x provides support for 32-bit file access based on IFSHLP.SYS. Unlike Windows 3.x, Windows 9x has support for the VFAT file system, allowing file names of up to 255 characters instead of 8.3 filenames.

Event logging and tracing

Windows 9x has no support for the event logging and tracing or error reporting that the Windows NT family of operating systems has, although software like Norton CrashGuard can be used to achieve similar capabilities on Windows 9x.

Security

Windows 9x is designed as a single-user system. Thus, the security model is much less effective than the one in Windows NT. One reason for this is the FAT file systems (including FAT12/FAT16/FAT32), which are the only ones that Windows 9x supports officially, though Windows NT also supports FAT12 and FAT16 (but not FAT32, which it would not support until Windows 2000), and Windows 9x can be extended to read and write NTFS volumes using third-party Installable File System drivers. FAT systems have very limited security; every user who has access to a FAT drive also has access to all files on that drive. The FAT file systems provide neither the access control lists nor the file-system-level encryption of NTFS.
Some operating systems that were available at the same time as Windows 9x are either multi-user or have multiple user accounts with different access privileges, which allows important system files (such as the kernel image) to be immutable under most user accounts. In contrast, while Windows 95 and later operating systems offer the option of having profiles for multiple users, they have no concept of access privileges, making them roughly equivalent to a single-user, single-account operating system; this means that all processes can modify all files on the system that are not in use, in addition to being able to modify the boot sector and perform other low-level hard drive modifications. This enables viruses and other clandestinely installed software to integrate themselves with the operating system in a way that is difficult for ordinary users to detect or undo. The profile support in the Windows 9x family is meant for convenience only; unless some registry keys are modified, the system can be accessed by pressing "Cancel" at login, even if all profiles have a password. Windows 95's default login dialog box also allows new user profiles to be created without having to log in first. Users and software can render the operating system unable to function by deleting or overwriting important system files from the hard disk. Users and software are also free to change configuration files in such a way that the operating system is unable to boot or properly function. This phenomenon is not exclusive to Windows 9x; many other operating systems are also susceptible to these vulnerabilities, whether through viruses and malware or with the user's consent. Installation software often replaced and deleted system files without properly checking whether the file was still in use or of a newer version. This created a phenomenon often referred to as DLL hell. Windows Me introduced System File Protection and System Restore to handle common problems caused by this issue.

Network sharing

Windows 9x offers share-level access control security for file and printer sharing, as well as user-level access control if a Windows NT-based operating system is available on the network. In contrast, Windows NT-based operating systems offer only user-level access control, but integrated with the operating system's own user account security mechanism.

Hardware support

Drivers

Device drivers in Windows 9x can be virtual device drivers or (starting with Windows 98) WDM drivers. VxDs usually have the filename extension .vxd or .386, whereas WDM-compatible drivers usually use the extension .sys. The 32-bit VxD message server (msgsrv32) is a program that is able to load virtual device drivers (VxDs) at startup and then handle communication with the drivers. Additionally, the message server performs several background functions, including loading the Windows shell (such as Explorer.exe or Progman.exe). Another type of device driver is the .DRV driver. These drivers use the New Executable format, are loaded in user mode, and are commonly used to control devices such as multimedia devices. To provide access to these devices, a dynamic link library is required (such as MMSYSTEM.DLL). Windows 9x retains backwards compatibility with many drivers made for Windows 3.x and MS-DOS. Using MS-DOS drivers can limit performance and stability, due to their use of conventional memory and their need to run in real mode, which requires the CPU to switch in and out of protected mode. Drivers written for Windows 9x are loaded into the same address space as the kernel.
This means that drivers can, by accident or design, overwrite critical sections of the operating system. Doing this can lead to system crashes, freezes and disk corruption. Faulty operating system drivers were a source of instability for the operating system. Other monolithic and hybrid kernels, like Linux and Windows NT, are also susceptible to malfunctioning drivers impeding the kernel's operation. Often the software developers of drivers and applications had insufficient experience with creating programs for the 'new' system, causing many errors which were generally described as "system errors" by users, even when the error was not caused by parts of Windows or DOS. Microsoft has repeatedly redesigned the Windows driver architecture since the release of Windows 95 as a result.

CPU and bus technologies

Windows 9x has no native support for hyper-threading, Data Execution Prevention, symmetric multiprocessing, the APIC, or multi-core processors. Windows 9x has no native support for SATA host bus adapters (and neither do Windows 2000 and Windows XP, for that matter), or for USB drives (except for Windows Me). There are, however, many SATA-I controllers for which Windows 98/Me drivers exist (Windows 2000 and Windows XP likewise gained SATA support via third-party drivers), and USB mass storage support has been added to Windows 95 OSR2 and Windows 98 through third-party drivers. Hardware driver support for Windows 98/Me began to decline in 2005, most notably for motherboard chipsets and video cards. Early versions of Windows 95 had no support for USB or AGP acceleration (and Windows 95 RTM also lacked infrared support). Windows 95 had preliminary support for ATAPI CD-ROMs, albeit with a buggy ATAPI implementation. Windows 95 prior to OSR2 also had buggy support for processors implementing MMX, as well as for processors based on the P6 microarchitecture.

MS-DOS

Windows 95 was able to reduce the role of MS-DOS in Windows much further than had been done in Windows 3.1x and earlier. According to Microsoft developer Raymond Chen, MS-DOS served two purposes in Windows 95: as the boot loader, and as the 16-bit legacy device driver layer. When Windows 95 started up, MS-DOS loaded, processed CONFIG.SYS, launched COMMAND.COM, ran AUTOEXEC.BAT and finally ran WIN.COM. The WIN.COM program used MS-DOS to load the virtual machine manager, read SYSTEM.INI, load the virtual device drivers, and then turn off any running copies of EMM386 and switch into protected mode. Once in protected mode, the virtual device drivers (VxDs) transferred all state information from MS-DOS to the 32-bit file system manager, and then shut off MS-DOS. These VxDs allow Windows 9x to interact with hardware resources directly, providing low-level functionality such as 32-bit disk access and memory management. All future file system operations would get routed to the 32-bit file system manager. In Windows Me, WIN.COM was no longer executed during the startup process; instead the system went directly to executing VMM32.VXD from IO.SYS. The second role of MS-DOS (as the 16-bit legacy device driver layer) was as a backward compatibility tool for running DOS programs in Windows. Many MS-DOS programs and device drivers interacted with DOS in a low-level way, for example by patching low-level BIOS interrupts such as int 13h, the low-level disk I/O interrupt. When a program issued an int 21h call to access MS-DOS, the call would go first to the 32-bit file system manager, which would attempt to detect this sort of patching.
If it detected that the program had tried to hook into DOS, it would jump back into the 16-bit code to let the hook run. A 16-bit driver called IFSMGR.SYS would previously have been loaded by CONFIG.SYS; its job was to hook MS-DOS first, before the other drivers and programs got a chance, and then jump from 16-bit code back into 32-bit code when the DOS program had finished, to let the 32-bit file system manager continue its work. According to Windows developer Raymond Chen, "MS-DOS was just an extremely elaborate decoy. Any 16-bit drivers and programs would patch or hook what they thought was the real MS-DOS, but which was in reality just a decoy. If the 32-bit file system manager detected that somebody bought the decoy, it told the decoy to quack."

MS-DOS virtualization

Windows 9x can run MS-DOS applications within itself using a method called "virtualization", where an application is run on a virtual DOS machine.

MS-DOS mode

Windows 95 and Windows 98 also offer backwards compatibility for DOS applications in the form of being able to boot into a native "DOS mode" (MS-DOS booted without booting Windows, and without putting the CPU in protected mode). Through Windows 9x's memory managers and other post-DOS improvements, overall system performance and functionality are improved. Some old applications or games may not run properly in the virtual DOS environment within Windows and require real DOS mode. Having a command-line mode outside of the GUI also offers the ability to fix certain system errors without entering the GUI. For example, if a virus is active in GUI mode, it can often be safely removed in DOS mode by deleting its files, which are usually locked while in use in Windows. Similarly, corrupted registry files, system files or boot files can be restored from real-mode DOS. Windows 95 and Windows 98 can be started from DOS mode by typing 'WIN' at the command prompt and then hitting "Enter", akin to earlier versions of Windows such as Windows 3.1.

User interface

Users can control a Windows 9x-based system through a command-line interface (CLI) or a graphical user interface (GUI). The default mode for Windows is usually the graphical user interface, whereas the CLI is available through MS-DOS windows. The GUI provides a means to control the placement and appearance of individual application windows, and interacts with the window system. The GDI, which is a part of the Win32 and Win16 subsystems, is also a module that is loaded in user mode, unlike in Windows NT, where the GDI is loaded in kernel mode. Alpha compositing, and therefore transparency effects such as fade effects in menus, are not supported by the GDI in Windows 9x, unlike in Windows NT releases since Windows 2000. Windows Explorer is the default user interface for the GUI; however, a variety of Windows shell replacements exist. Other GUIs include LiteStep, bbLean and Program Manager.

In popular culture

The sheer popularity of the Windows 9x series led to several web-based projects being created in the 2010s that aimed to replicate the look and feel of Windows 9x (and, in some cases, of an entire operating system) in a single web browser, while also invoking nostalgia. Windows 93 (stylized as "WINDOWS93" in the title) is a web-based parody site created by two French musicians and programmers who go by the names of jankenpopp and Zombectro. Designed to look and feel like an actual operating system, it is also a parody of the Windows 9x series.
It features several web applications which reference various internet memes from the late 1990s up to the early 2000s. EmuOS is another web-based site that aims to replicate the look and feel of Windows 9x as a whole, featuring three themes based on the major Windows 9x releases from Windows 95 up to Windows Me. It was created by Emupedia, a video game preservation and computer history community site, and was designed to play retro games and applications within a web browser. The aforementioned Windows 93 parody site is also featured. Windows 98 has been recreated in web-based format under the name 98.js (also known as Windows 98 Online), featuring web-based versions of several classic Windows applications.
https://en.wikipedia.org/wiki/Branta
Branta
The black geese of the genus Branta are waterfowl belonging to the true geese and swans subfamily Anserinae. They occur in the northern coastal regions of the Palearctic and all over North America, migrating to more southerly coasts in winter, and as resident birds in the Hawaiian Islands. The only population in the Southern Hemisphere is a self-sustaining feral population derived from introduced Canada geese in New Zealand. The black geese derive their vernacular name from the prominent areas of black coloration found in all species. They can be distinguished from all other true geese by their legs and feet, which are black or very dark grey. Furthermore, they have black bills and large areas of black on the head and neck, with white (ochre in one species) markings that can be used to tell apart most species. As with most geese, their undertail and uppertail coverts are white. They are also on average smaller than other geese, though some very large taxa are known, which rival the swan goose and the black-necked swan in size. The Eurasian species of black geese have a more coastal distribution compared to the grey geese (genus Anser) which share the same general area of occurrence, not being found far inland even in winter (except for occasional stray birds or individuals escaped from captivity). This does not hold true for the American and Pacific species, in whose ranges grey geese are, for the most part, absent.

Taxonomy

The genus Branta was introduced by the Austrian naturalist Giovanni Antonio Scopoli in 1769. The name is a Latinised form of the Old Norse Brandgás, meaning "burnt (black) goose". The type species is the brant goose (Branta bernicla). Ottenburghs and colleagues published a study in 2016 that established the phylogenetic relationships between the species.

Species list

The genus contains six living species. Two species have been described from subfossil remains found in the Hawaiian Islands, where they became extinct in prehistoric times:

Nēnē-nui or wood-walking goose, Branta hylobadistes (prehistoric) – similar but hitherto undescribed remains are known from Kauaʻi and Oʻahu
Giant Hawaii goose, Branta rhuax (prehistoric), formerly Geochen rhuax

The relationships of the enigmatic Geochen rhuax, formerly known only from parts of a single bird's skeleton, damaged because the bird apparently died in a lava flow, were long unresolved. After reexamination of the subfossil material and comparisons with other subfossil bones from the island of Hawaii assigned to the genus Branta, it was redescribed as Branta rhuax in 2013. While a presumed relation between B. rhuax and the shelducks, proposed by Lester Short in 1970, has thus been refuted, bones of a shelduck-like bird have been found more recently on Kaua‘i. Whether this latter anatid was indeed a shelduck is presently undetermined. Similarly, two bones found on Oʻahu indicate the erstwhile presence of a gigantic waterfowl on this island. Its relationships relative to this genus and the moa-nalos, enormous goose-like dabbling ducks, are completely undeterminable at present.

Early fossil record

Several fossil species of Branta have been described. Since the true geese are hardly distinguishable by anatomical features, the allocation of these to this genus is somewhat uncertain. A number of supposed prehistoric grey geese have been described from North America, partially from the same sites as species assigned to Branta.
Whether these are correctly assigned – meaning that the genus Anser was once much more widespread than today and that it coexisted with Branta in freshwater habitats, something it does only rarely today – is not clear. Especially in the cases of B. dickeyi and B. howardae, doubts have been expressed about their correct generic assignment.

Branta woolfendeni (Big Sandy Late Miocene of Wickieup, USA)
Branta thessaliensis (Late Miocene of Perivolaki, Greece)
Branta dickeyi (Late Pliocene – Late Pleistocene of W USA)
Branta esmeralda (Esmeralda Early Pliocene)
Branta howardae (Ricardo Early Pliocene)
Branta propinqua (Middle Pleistocene of Fossil Lake, USA)
Branta hypsibata (Pleistocene of Fossil Lake, USA)

The former "Branta" minuscula is now placed with the prehistoric American shelducks, Anabernicula. On the other hand, a goose fossil from the Early-Middle Pleistocene of El Salvador is highly similar to Anser, and given its age and biogeography it is likely to belong to that genus or Branta.
https://en.wikipedia.org/wiki/Dunnart
Dunnart
A dunnart (from Noongar donat) is a narrow-footed marsupial the size of a European mouse, of the genus Sminthopsis. Dunnarts have a largely insectivorous diet.

Taxonomy

The genus name Sminthopsis was published by Oldfield Thomas in 1887, the author noting that the name Podabrus that had previously been used to describe the species was preoccupied as a genus of beetles. The type species is Phascogale crassicaudata, published by John Gould in 1844. There are 19 species, all of them in Australia or New Guinea:

Genus Sminthopsis
S. crassicaudata species-group
Fat-tailed dunnart, Sminthopsis crassicaudata
S. macroura species-group
Kakadu dunnart, Sminthopsis bindi
Carpentarian dunnart, Sminthopsis butleri
Julia Creek dunnart, Sminthopsis douglasi
Stripe-faced dunnart, Sminthopsis macroura
Red-cheeked dunnart, Sminthopsis virginiae
S. granulipes species-group
White-tailed dunnart, Sminthopsis granulipes
S. griseoventer species-group
Grey-bellied dunnart, Sminthopsis griseoventer
S. longicaudata species-group
Long-tailed dunnart, Sminthopsis longicaudata
S. murina species-group
Chestnut dunnart, Sminthopsis archeri
Little long-tailed dunnart, Sminthopsis dolichura
Sooty dunnart, Sminthopsis fuliginosus
Gilbert's dunnart, Sminthopsis gilberti
White-footed dunnart, Sminthopsis leucopus
Slender-tailed dunnart, Sminthopsis murina
S. psammophila species-group
Hairy-footed dunnart, Sminthopsis hirtipes
Ooldea dunnart, Sminthopsis ooldea
Sandhill dunnart, Sminthopsis psammophila
Lesser hairy-footed dunnart, Sminthopsis youngsoni

Additionally, two species are recognized by the American Society of Mammalogists:

Froggatt's dunnart, Sminthopsis froggatti
Stalker's dunnart, Sminthopsis stalkeri

The American Society of Mammalogists also lists S. griseoventer as a synonym of S. fuliginosa, and moved S. longicaudata to the genus Antechinomys.

Description

A male dunnart's Y chromosome is the smallest known mammalian Y chromosome.
https://en.wikipedia.org/wiki/Cirrhitidae
Cirrhitidae
Cirrhitidae, the hawkfishes, are a family of marine ray-finned fishes found in tropical seas and associated with coral reefs.

Taxonomy

The Cirrhitidae were first recognised as a family by the Scots-born Australian naturalist William Sharp Macleay in 1841. It is one of the five constituent families in the superfamily Cirrhitoidea, which is classified in the suborder Percoidei of the order Perciformes. Within the Cirrhitoidea, the Cirrhitidae is probably the most basal family. They have been placed in the order Centrarchiformes by some authorities, as part of the superfamily Cirrhitoidea; however, the fifth edition of Fishes of the World does not recognise the Centrarchiformes. The name of the family is taken from that of the genus Cirrhitus, which is derived from cirrhus, meaning a "lock of hair" or "a barbel", thought to be a reference to the lower, unbranched rays of the pectoral fins, which Bernard Germain de Lacépède termed "barbillons" ("barbels") in his description of the type species of the genus, C. maculatus, and which he thought to be "false" pectoral fins. Another possibility is that the name refers to the cirri extending from the tips of the dorsal-fin spines, although Lacépède did not mention this feature.

Genera

The following 12 genera are classified within the Cirrhitidae, containing a total of 33 species:

Amblycirrhitus Gill, 1862
Cirrhitichthys Bleeker, 1857
Cirrhitops J.L.B. Smith, 1951
Cirrhitus Lacepède, 1803
Cristacirrhitus Randall, 2001
Cyprinocirrhites Tanaka, 1917
Isocirrhitus Randall, 1963
Itycirrhitus Randall, 2001
Neocirrhites Castelnau, 1873
Notocirrhitus Randall, 2001
Oxycirrhites Bleeker, 1857
Paracirrhites Bleeker, 1874

Characteristics

Cirrhitidae hawkfishes are roughly oblong in shape, with a body depth of 21% to 50% of the standard length. They have a fringe of cirri on the rear edge of the forward nostrils. There are two poorly developed spines on the gill cover. The outer row of teeth on the jaws is canine-like, the longest normally being located at the front of the upper jaw and the middle of the lower jaw. Inside this row there is a band of bristle-like teeth, wider at the front. The dorsal fin is continuous, having 10 spines and 11–17 soft rays; it has an incision separating the spiny and soft-rayed parts. The anal fin contains three spines and five to seven, typically six, soft rays. There are 14 pectoral fin rays, with the lowest five to seven rays unbranched and normally thickened, with deep notches in the membranes separating these lower rays. There is a single spine in the pelvic fins as well as five soft rays. The scales are cycloid, and the fishes lack a swimbladder. The colour and pattern vary between species. The maximum length attained is around , although around is more typical. Most species are quite small and colourfully patterned.

Distribution and habitat

Cirrhitidae hawkfishes are found in the tropical western and eastern Atlantic, Indian and Pacific Oceans, mainly in the Indo-West Pacific region. They are benthic fishes found on coral reefs or rocky substrates, mostly inhabiting shallow water.

Biology

Cirrhitidae fishes use their robust lower pectoral-fin rays to wedge into position where they will be subjected to the forces of currents and waves. They are carnivorous fishes, their main prey being benthic crustaceans. One species, Cyprinocirrhitus polyactis, mainly feeds on zooplankton, although it is frequently encountered resting on the substrate.
Hawkfishes frequently sit and wait on the higher parts of their habitat, diving onto prey items seen beneath them in a manner similar to some hawk species, hence the name hawkfish.

Fisheries and utilisation
Hawkfishes are mostly too small to be of interest to fisheries. The three largest species are occasionally taken as food fish. A few of the smaller, more colourful species, particularly Neocirrhites armatus and Oxycirrhites typus, are collected for the aquarium trade.
Biology and health sciences
Acanthomorpha
Animals
454746
https://en.wikipedia.org/wiki/Application%20software
Application software
Application software is any computer program intended for end-user use, as opposed to operating, administering, or programming the computer. An application (app, application program, software application) is any program that can be categorized as application software. Common types of applications include word processors, media players, and accounting software. The term application software refers to all applications collectively and can be used to differentiate them from system and utility software. Applications may be bundled with the computer and its system software or published separately. Applications may be proprietary or open-source.

The short term app (coined in 1981 or earlier) became popular with the 2008 introduction of the iOS App Store, to refer to applications for mobile devices such as smartphones and tablets. Later, with the introduction of the Mac App Store (in 2010) and Windows Store (in 2011), the term was extended in popular use to include desktop applications.

Terminology
The delineation between system software, such as operating systems, and application software is not exact, however, and is occasionally the subject of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separate piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player, or microwave oven. The above definitions may exclude some applications that exist on some computers in large organizations. For an alternative definition of an app, see Application Portfolio Management.

When used as an adjective, application is not restricted to meaning "of or relating to application software". For example, concepts such as application programming interface (API), application server, application virtualization, application lifecycle management, and portable application apply to all computer programs alike, not just application software.

Killer app
Sometimes a new and popular application arises that runs on only one platform, increasing the desirability of that platform. This is called a killer application or killer app, a term coined in the late 1980s. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For the BlackBerry, it was its email software.

Platform-specific naming
Some applications are available for multiple platforms while others work on only one, and are thus called, for example, a geography application for Microsoft Windows, an Android application for education, or a Linux game.

Classification
There are many different and alternative ways to classify application software. From the legal point of view, application software is mainly classified with a black-box approach, in relation to the rights of its end-users or subscribers (with possible intermediate and tiered subscription levels). Software applications are also classified with respect to the programming language in which the source code is written or executed, and with respect to their purpose and outputs.
By property and use rights
Application software is usually distinguished along two main axes: closed-source versus open-source applications, and free versus proprietary applications. Proprietary software is placed under exclusive copyright, and a software license grants limited usage rights. The open–closed principle states that software may be "open only for extension, but not for modification"; such applications can only gain add-ons from third parties. Free and open-source software (FOSS) may be run, distributed, sold, or extended for any purpose and, being open, may be modified or reverse-engineered in the same way. FOSS applications released under a free license may be perpetual and also royalty-free. However, the owner, the holder, or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) is entitled to add exceptions, limitations, time decays, or expiry dates to the license terms of use. Public-domain software is a type of FOSS that is royalty-free and can, openly or reservedly, be run, distributed, modified, reversed, republished, or used in derivative works without copyright attribution and therefore without risk of revocation. It can even be sold, but without transferring the public-domain property to other single subjects. Public-domain software can be released under an (un)licensing legal statement, which enforces those terms and conditions for an indefinite duration (for a lifetime, or forever).

By coding language
Since the development and near-universal adoption of the web, an important distinction has emerged between web applications — written with HTML, JavaScript, and other web-native technologies, and typically requiring one to be online and running a web browser — and the more traditional native applications written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated.

By purpose and output
Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or for a department within an organization. Integrated suites of software try to handle every specific aspect possible of, for example, manufacturing or banking operations, accounting, or customer service.

There are many types of application software:
An application suite consists of multiple applications bundled together. They usually have related functions, features, and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, LibreOffice, and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of an entire organization's processes and data flows, across several departments, often in a large distributed environment.
Examples include enterprise resource planning systems, customer relationship management (CRM) systems, data replication engines, and supply chain management software.
Departmental software is a sub-type of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT helpdesk software.)
Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and systems for managing networks and security.)
Application platform as a service (aPaaS) is a cloud computing service that offers development and deployment environments for application services.
Information worker software lets users create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, analytical, collaborative, and documentation tools. Word processors, spreadsheets, email and blog clients, personal information systems, and individual media editors may aid in multiple information worker tasks.
Content access software is used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include media players, web browsers, and help browsers.)
Educational software is related to content access software, but has the content or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
Simulation software simulates physical or abstract systems for research, training, or entertainment purposes.
Media development software generates print and electronic media for others to consume, most often in a commercial or educational setting. This includes graphic-art software, desktop publishing software, multimedia development software, HTML editors, digital-animation editors, digital audio and video composition software, and many others.
Product engineering software is used in developing hardware and software products. This includes computer-aided design (CAD), computer-aided engineering (CAE), computer language editing and compiling tools, integrated development environments, and application programmer interfaces.
Entertainment software can refer to video games, screen savers, programs to display motion pictures or play recorded music, and other forms of entertainment which can be experienced through the use of a computing device.

By platform
Applications can also be classified by computing platform, such as a desktop application for a particular operating system, a delivery network such as in cloud computing and Web 2.0 applications, or delivery devices such as mobile apps for mobile devices. The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word-processing tasks not used to control hardware via a command-line interface or graphical user interface. This does not include application software bundled within operating systems, such as a software calculator or text editor.
Information worker software: accounting software; data management (contact manager, spreadsheet, database software); documentation (document automation, word processor, desktop publishing software, diagramming software, presentation software); email and blog software; enterprise resource planning; financial software (banking software, clearing systems, financial accounting software); field service management; workforce management software; project management software; calendaring software; employee scheduling software; workflow software; reservation systems
Entertainment software: screen savers; video games (arcade games, console games, mobile games, personal computer games); software art (demo, 64K intro)
Educational software: classroom management; reference software; sales readiness software; survey management; encyclopedia software
Enterprise infrastructure software: artificial intelligence for IT operations (AIOps); business workflow software; database management system (DBMS); digital asset management (DAM) software; document management software; geographic information system (GIS)
Simulation software: computer simulators (scientific simulators, social simulators, battlefield simulators, emergency simulators, vehicle simulators such as flight simulators and driving simulators); simulation games (vehicle simulation games)
Media development software: 3D computer graphics software; animation software; graphic art software (raster graphics editor, vector graphics editor); image organizer; video editing software; audio editing software (digital audio workstation, music sequencer, scorewriter); HTML editor; game development tool
Product engineering software: hardware engineering (computer-aided engineering, computer-aided design (CAD), computer-aided manufacturing (CAM), finite element analysis); software engineering (compiler software, integrated development environment, compiler, linker, debugger, version control, game development tool, license manager)
Technology
Computer software
null
454896
https://en.wikipedia.org/wiki/Grignard%20reaction
Grignard reaction
The Grignard reaction is an organometallic chemical reaction in which, according to the classical definition, alkyl, allyl, vinyl, or aryl magnesium halides (Grignard reagents) are added to the carbonyl group of an aldehyde or ketone under anhydrous conditions. This reaction is important for the formation of carbon–carbon bonds.

History and definitions
Grignard reactions and reagents were discovered by and are named after the French chemist François Auguste Victor Grignard (University of Nancy, France), who described them in 1900. He was awarded the 1912 Nobel Prize in Chemistry for this work. The reaction of an organic halide with magnesium is not a Grignard reaction, but provides a Grignard reagent. Classically, the Grignard reaction refers to the reaction of a ketone or aldehyde with a Grignard reagent to form a tertiary or secondary alcohol, respectively. However, some chemists understand the definition to cover the reactions of Grignard reagents with any electrophile, so there is some dispute about the modern definition of the Grignard reaction. In the Merck Index, published online by the Royal Society of Chemistry, the classical definition is acknowledged, followed by "A more modern interpretation extends the scope of the reaction to include the addition of Grignard reagents to a wide variety of electrophilic substrates." Shown below are some reactions involving Grignard reagents that are not classically understood as Grignard reactions.

Reaction mechanism
Because carbon is more electronegative than magnesium, the carbon attached to magnesium acts as a nucleophile and attacks the electrophilic carbon atom in the polar bond of a carbonyl group. The addition of the Grignard reagent to the carbonyl group typically proceeds through a six-membered ring transition state, as shown below. Based on the detection of radical coupling side products, an alternative single electron transfer (SET) mechanism that involves the initial formation of a ketyl radical intermediate has also been proposed. A recent computational study suggests that the operative mechanism (polar vs. radical) is substrate-dependent, with the reduction potential of the carbonyl compound serving as a key parameter.

Conditions
The Grignard reaction is conducted under anhydrous conditions; otherwise, the reaction will fail because the Grignard reagent will act as a base rather than as a nucleophile, picking up a labile proton instead of attacking the electrophilic site. This results in no formation of the desired product, as the R-group of the Grignard reagent becomes protonated while the MgX portion stabilizes the deprotonated species. To prevent this, Grignard reactions are carried out in dried glassware under an inert atmosphere, excluding water from the reaction flask and ensuring that the desired product is formed. Additionally, if there are acidic protons in the starting material, as shown in the figure on the right, one can protect the acidic site of the reactant by converting it into an ether or a silyl ether, eliminating the labile proton from the solution prior to the Grignard reaction.

Variants
Other variations of the Grignard reagent have been developed to improve the chemoselectivity of the Grignard reaction; these include, but are not limited to, Turbo-Grignards, organocerium reagents, and organocuprate (Gilman) reagents.
Turbo-Grignards
Turbo-Grignards are Grignard reagents modified with lithium chloride. Compared to conventional Grignard reagents, Turbo-Grignards are more chemoselective; esters, amides, and nitriles do not react with the Turbo-Grignard reagent.

Heterometal-modified Grignard reagents
The behavior of Grignard reagents can be usefully modified in the presence of other metals. Copper(I) salts give organocuprates that preferentially effect 1,4-addition. Cerium trichloride allows selective 1,2-addition to the same substrates. Nickel and palladium halides catalyze cross-coupling reactions.
Physical sciences
Organic reactions
Chemistry
455016
https://en.wikipedia.org/wiki/Akosombo%20Dam
Akosombo Dam
The Akosombo Dam, also known as the Volta Dam, is a hydroelectric dam on the Volta River in southeastern Ghana, in the Akosombo gorge, and is part of the Volta River Authority. The construction of the dam flooded part of the Volta River Basin and led to the creation of Lake Volta, the largest man-made lake in the world by surface area. It covers , which is 3.6% of Ghana's land area. With a volume of 148 cubic kilometers, Lake Volta is the world's third-largest man-made lake by volume; the largest is Lake Kariba, which contains 185 cubic kilometers of water.

The primary purpose of the Akosombo Dam was to provide electricity for the aluminium industry. The Akosombo Dam was called "the largest single investment in the economic development plans of Ghana." The dam is also significant for providing the majority of both Togo's and Benin's electricity, although the construction of the Adjarala Dam (on Togo's Mono River) is intended to reduce those countries' reliance on imported electricity. The dam's original electrical output was , which was upgraded to in a retrofit project completed in 2006.

The flooding that created the Lake Volta reservoir displaced many people and had a significant impact on the local environment, including reservoir-induced seismic activity and coastal erosion; the changed hydrology caused microclimatic changes, with less rain and higher temperatures. The soil surrounding the lake is less fertile than the soil now under it, and heavy agricultural use has required fertilizers, which in turn have led to eutrophication. This has caused, among other things, the explosive growth of an invasive weed that renders water navigation and transportation difficult and forms a habitat for the vectors of water-borne illnesses such as bilharzia, river blindness, and malaria. Resettlement of the displaced inhabitants proved complex and in some cases unsuccessful; traditional farming practices disappeared and poverty increased.

Design
The dam was conceived in 1915 by geologist Albert Kitson, but no plans were drawn until the 1940s. The development of the Volta River Basin was proposed in 1949, but because funds were insufficient, the American company Volta Aluminum Company (Valco) lent money to Ghana so that the dam could be constructed. President Kwame Nkrumah adopted the Volta River hydropower project and commissioned Australian architect Kenneth Scott to design a residence for him overlooking the dam.

The dam is long and high, a rock-fill embankment dam with a base width of and a structural volume of . The reservoir created by the dam, Lake Volta, has a capacity of and a surface area of . The lake is long. The maximum lake level is and the minimum is . On the east side of the dam are two adjacent spillways that can discharge about of water. Each spillway contains six -wide and -tall steel floodgates. The dam's power plant contains six Francis turbines. Each turbine is supplied with water via a long, diameter penstock affording a maximum of of hydraulic head.

The final proposal outlined the building of an aluminum smelter at Tema, a dam constructed at Akosombo to power the smelter, and a network of power lines installed through southern Ghana. The aluminum smelter was expected to eventually provide the revenue necessary for establishing local bauxite mining and refining, which would allow aluminum production without importing foreign alumina. Development of the aluminum industry within Ghana was dependent upon the proposed hydroelectric power.
The proposed project's aluminum smelter was overseen by the American company Kaiser Aluminum and is operated by Valco. The smelter received its financial investment from Valco shareholders, with the support of the Export-Import Bank of the United States. However, Valco did not invest without first requiring assurances from Ghana's government, such as company exemptions from taxes on trade and discounted purchases of electricity. The estimated total cost of the project was $258 million.

Construction
In May 1960, the Ghana government called for tenders for construction of the hydroelectric dam. In 1961, an Italian consortium, Impregilo, which had just completed the Kariba Dam, won the contract. In 1961, the Volta River Authority (VRA) was established by Ghana's Parliament through the passage of the Volta River Development Act. The VRA's fundamental operations were structured by six board members and Nkrumah as chairman. The VRA's primary task is to manage the development of the Volta River Basin, which included the construction and supervision of the dam, the power station, and the power transmission network. The VRA is also responsible for the reservoir impounded by the dam, fishing within the lake, lake transportation and communication, and the welfare of those living around the lake.

The dam was built between 1961 and 1965. Its development was undertaken by the Ghanaian government and funded 25% by the International Bank for Reconstruction and Development of the World Bank, the United States, and the United Kingdom. Impregilo carried out the dredging of the river bed and dewatering of the channel, and completed the dam a month earlier than scheduled despite flooding of the Volta River in 1963, which delayed work by over three months. Between 1961 and 1966, 28 workers of Impregilo died during the construction of the dam. Memorials in Akosombo township and St. Barbara Catholic Church have been put up in their honor.

The construction of the Akosombo Dam resulted in the flooding of part of the Volta River Basin and its upstream fields, and in the creation of Lake Volta, which covers 3.6% of Ghana's total land area. Lake Volta was formed between 1962 and 1966 and necessitated the relocation of about 80,000 people, who represented 1% of the population. The people of 700 villages were relocated into 52 resettlement villages two years prior to the dam's completion; the resettlement program was under the direction of the VRA. Two percent of the resettled population were riparian fishers, and most were subsistence farmers. The Eastern Region of Ghana and the populations within its districts were the most subject to the project's effects.

Power generation
The dam provides electricity to Ghana and its neighboring West African countries, including Togo and Benin. Initially, 20% of Akosombo Dam's electrical output (serving 70% of national demand) was provided to Ghanaians, while the remaining 80% was generated for Valco. The Ghana government was compelled, by contract, to pay for over 50% of the cost of Akosombo's construction, but the country was allowed only 20% of the power generated. Some commentators consider this an example of neocolonialism. In recent years, production for the Valco plant has declined, with the vast majority of additional capacity at Akosombo used to service growing domestic demand.
Initially, the dam's generating capacity greatly exceeded actual demand, but demand growth since the dam's inception has led to a doubling of hydropower production. Ghana's industrial and economic expansion triggered a demand for power beyond the Akosombo power plant's capabilities. In 1981, a smaller dam was built at Kpong, downstream from Akosombo, and further upgrades to Akosombo have become necessary for maintaining hydropower output. Increasing demands for power exceed what can be provided by the current infrastructure. Power demands, along with unforeseen environmental trends, have resulted in rolling blackouts and major power outages. An overall trend of lower lake levels has been observed, sometimes below the requirement for operation of the dam. At the beginning of 2007, concerns were expressed over the electricity supply from the dam because of low water levels in the Lake Volta reservoir. During the latter half of 2007, much of this concern was abated when heavy rain fell in the catchment area of the Volta River. In 2010, the highest-ever water level was recorded at the dam. This necessitated the opening of the flood gates at a reservoir elevation of , and for several weeks water was spilled from the lake, causing some flooding downstream.

Impacts
The Akosombo Dam benefited some industrial and economic activities through the addition of lake transportation, increased fishing, new farming activities along the shoreline, and tourism.

Biological habitat
In the time since the construction of the dam, there has been a steady decline in agricultural productivity along the lake and the associated tributaries. The land surrounding Lake Volta is not nearly as fertile as the formerly cultivated land beneath the lake, and heavy agricultural activity has since exhausted the already inadequate soils. Downstream agricultural systems are losing soil fertility without the periodic floods that brought nutrients to the soil before the natural river flow was halted by the dam. The growth of commercially intensive agriculture has produced a rise in fertilizer run-off into the river. This, along with run-off from nearby cattle stocks and sewage pollution, has caused eutrophication of the river waters. The nutrient enrichment, in combination with the low water movement, has allowed the invasion of aquatic weeds (Ceratophyllum). These weeds have become a formidable obstacle to water navigation and transportation.

Human welfare
The presence of Ceratophyllum along the lake and within the tributaries has caused even greater harm to local human health. The weeds provide the necessary habitat for black-fly, mosquitoes, and snails, which are the vectors of water-borne illnesses such as bilharzia, river blindness, and malaria. Since the installation of the dam, these diseases have increased remarkably. In particular, resettlement villages have shown an increase in disease prevalence since the establishment of Lake Volta, and a village's likelihood of infection corresponds to its proximity to the lake. Children and fishermen have been especially hard hit by this rise in disease prevalence. Additionally, the degradation of aquatic habitat has resulted in the decline of shrimp and clam populations. The physical health of local communities has suffered from this loss of shellfish populations, as they provided an essential source of dietary protein.
Likewise, the rural and industrial economies have experienced the financial losses associated with the decimation of river aquaculture. Increased human migration within the area has been driven by poverty and unfavorable resettlement conditions. This migration facilitated the spread of HIV and has since led to its heightened prevalence within Volta Basin communities. The districts of Manya Krobo and Yilo Krobo, which lie within the southwest portion of the Volta Basin, are predominantly indigenous communities that have a disproportionately high prevalence of HIV. The situation underlines the strength of local factors in these districts. Commercial sex work arose in response to the thousands of male workers who were in the area to build the dam. Ten percent of the child-bearing females from these two districts migrated out of their districts during this time. In 1986, "90% of AIDS victims in Ghana were women, and 96% of them had recently lived outside the country".

Socioeconomics
The loss of land experienced by the 80,000 people forcibly relocated meant the loss of their primary economic activities of fishing and agriculture, the loss of their homes, family grave sites, and community stability, and the eventual loss of important social values. The resettlement program demonstrated the social complexities involved in establishing "socially cohesive and integrated" communities. Insufficient planning resulted in the relocation of communities into areas that could not support their former livelihoods and traditions. The loss of the naturally fertile soils beneath Lake Volta essentially led to the loss of traditional farming practices. The poor living conditions within the resettlement villages are demonstrated by population reductions since resettlement; one resettlement village in particular experienced a greater than 50% population reduction in the 23 years following relocation. Increased economic risk and experiences of poverty are associated with the communities most affected by the Volta River's development. The extensive human migration and degradation of natural resources within the Volta Basin area are the products of poverty in conjunction with population pressure.

Physical environment
Reservoir-induced seismicity has been recorded as a result of crustal re-adjustment to the added weight of the water in Lake Volta. Changes to the river's delta have shifted the river's mouth eastward, which has led to continuing coastal erosion. The changes in the river's hydrology have altered the local heat budget, causing microclimatic changes such as decreased rainfall and higher mean monthly temperatures. These larger-scale environmental impacts further compound the disruption of local economic activities and the associated, difficult human welfare conditions. A case study by the International Federation of Surveyors has indicated that the dam has had a significant impact on the shoreline erosion of the barrier separating the Keta Lagoon from the sea. Dr. Isaac Boateng has calculated that the dam reduced fluvial sediment delivery from 71 million cubic metres per year to as little as 7 million cubic metres per year.

Spillage
Until 2023, the last time the Akosombo dam community experienced flooding as a result of controlled spillage of the dam was in 2010.
On 15 September 2023, the Volta River Authority (VRA) initiated a controlled spillage of water from the Akosombo and Kpong dams in the Eastern Region. This controlled spillage led to flooding in communities located along the lower Volta Basin and to power interruptions. Many victims lost their belongings and livelihoods to the floods, which destroyed farmlands, houses, and other property.
Technology
Dams
null
455295
https://en.wikipedia.org/wiki/Sub-orbital%20spaceflight
Sub-orbital spaceflight
A sub-orbital spaceflight is a spaceflight in which the spacecraft reaches outer space, but its trajectory intersects the surface of the gravitating body from which it was launched. Hence, it will not complete one orbital revolution, will not become an artificial satellite, nor will it reach escape velocity. For example, the path of an object launched from Earth that reaches the Kármán line (about 100 km above sea level) and then falls back to Earth is considered a sub-orbital spaceflight.

Some sub-orbital flights have been undertaken to test spacecraft and launch vehicles later intended for orbital spaceflight. Other vehicles are specifically designed only for sub-orbital flight; examples include crewed vehicles, such as the X-15 and SpaceShipTwo, and uncrewed ones, such as ICBMs and sounding rockets. Flights which attain sufficient velocity to go into low Earth orbit, and then de-orbit before completing their first full orbit, are not considered sub-orbital; examples include flights of the Fractional Orbital Bombardment System. A flight that does not reach space is still sometimes called sub-orbital, but cannot officially be classified as a "sub-orbital spaceflight". Usually a rocket is used, but some experimental sub-orbital spaceflights have also been achieved with space guns.

Altitude requirement
By definition, a sub-orbital spaceflight reaches an altitude higher than 100 km above sea level. This altitude, known as the Kármán line, was chosen by the Fédération Aéronautique Internationale because it is roughly the point where a vehicle flying fast enough to support itself with aerodynamic lift from the Earth's atmosphere would be flying faster than orbital speed. The US military and NASA award astronaut wings to those flying above 50 miles (80 km), although the U.S. State Department does not recognize a distinct boundary between atmospheric flight and spaceflight.

Orbit
During freefall the trajectory is part of an elliptic orbit as given by the orbit equation. The perigee distance is less than the radius of the Earth R (including the atmosphere), hence the ellipse intersects the Earth, and hence the spacecraft will fail to complete an orbit. The major axis is vertical, and the semi-major axis a is more than R/2. The specific orbital energy is given by

\epsilon = -\frac{\mu}{2a},

where \mu is the standard gravitational parameter. Almost always a < R, corresponding to an \epsilon lower than the minimum for a full orbit, which is -\frac{\mu}{2R}. Thus the net extra specific energy needed compared to just raising the spacecraft into space is between 0 and \frac{\mu}{2R}.

Speed, range, and altitude
To minimize the required delta-v (an astrodynamical measure which strongly determines the required fuel), the high-altitude part of the flight is made with the rockets off; this is technically called free-fall even for the upward part of the trajectory (compare the Oberth effect). The maximum speed in a flight is attained at the lowest altitude of this free-fall trajectory, both at the start and at the end of it.

If one's goal is simply to "reach space", for example in competing for the Ansari X Prize, horizontal motion is not needed. In this case the lowest required delta-v, to reach 100 km altitude, is about 1.4 km/s. Moving slower, with less free-fall, would require more delta-v. Compare this with orbital spaceflight: a low Earth orbit (LEO), with an altitude of about 300 km, needs a speed around 7.7 km/s, requiring a delta-v of about 9.2 km/s.
(If there were no atmospheric drag, the theoretical minimum delta-v would be 8.1 km/s to put a craft into a 300-kilometre-high orbit starting from a stationary point like the South Pole. The theoretical minimum can be up to 0.46 km/s less if launching eastward from near the equator.)

For sub-orbital spaceflights covering a horizontal distance, the maximum speed and required delta-v are in between those of a vertical flight and a LEO. The maximum speed at the lower ends of the trajectory is now composed of a horizontal and a vertical component. The greater the horizontal distance covered, the greater the horizontal speed will be. (The vertical velocity increases with distance for short distances, but decreases with distance for longer distances.) For the V-2 rocket, which just reached space but had a range of about 330 km, the maximum speed was 1.6 km/s. Scaled Composites SpaceShipTwo, which is under development, will have a similar free-fall orbit, but the announced maximum speed is 1.1 km/s (perhaps because of engine shut-off at a higher altitude).

For larger ranges, due to the elliptic orbit the maximum altitude can be much greater than for a LEO. On a 10,000-kilometre intercontinental flight, such as that of an intercontinental ballistic missile or a possible future commercial spaceflight, the maximum speed is about 7 km/s, and the maximum altitude may be more than 1300 km. Any spaceflight that returns to the surface, including sub-orbital ones, will undergo atmospheric reentry. The speed at the start of the reentry is basically the maximum speed of the flight. The aerodynamic heating caused will vary accordingly: it is much less for a flight with a maximum speed of only 1 km/s than for one with a maximum speed of 7 or 8 km/s.

The minimum delta-v and the corresponding maximum altitude for a given range d can be calculated assuming a spherical Earth of circumference 40,000 km and neglecting the Earth's rotation and atmosphere. Let θ be half the angle that the projectile is to travel around the Earth, so in degrees it is 45° × d / (10,000 km). The minimum-delta-v trajectory corresponds to an ellipse with one focus at the centre of the Earth and the other at the point halfway between the launch point and the destination point (somewhere inside the Earth). (This is the orbit that minimizes the semi-major axis, which is half the sum of the distances from a point on the orbit to the two foci. Minimizing the semi-major axis minimizes the specific orbital energy and thus the delta-v, which is the speed of launch.) Geometrical arguments then lead to the following (with R being the radius of the Earth, about 6370 km):

The altitude of apogee is maximized (at about 1320 km) for a trajectory going one quarter of the way around the Earth (θ = 45°). Longer ranges have lower apogees in the minimal-delta-v solution.

The required launch speed is

\Delta v = \sqrt{2gR}\,\sqrt{\frac{\sin\theta}{1+\sin\theta}}

(where g is the acceleration of gravity at the Earth's surface). The Δv increases with range, leveling off at 7.9 km/s as the range approaches 20,000 km (halfway around the world). The minimum-delta-v trajectory for going halfway around the world corresponds to a circular orbit just above the surface (of course, in reality it would have to be above the atmosphere). See below for the time of flight.

An intercontinental ballistic missile is defined as a missile that can hit a target at least 5500 km away, and according to the above formula this requires an initial speed of 6.1 km/s.
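These figures are easy to check numerically. The following Python sketch (an illustration, not part of the original article; the function names are hypothetical) evaluates the minimum launch speed above and, using the eccentric-anomaly form of the Kepler flight-time expression discussed in the Flight duration section below, the corresponding free-fall time. It reproduces the 6.1 km/s figure for a 5500 km range, about 7 km/s for 10,000 km, and 7.9 km/s approaching halfway around the world, with flight times of roughly 32 and 42 minutes for the latter two ranges.

```python
import math

G_SURFACE = 9.81                      # gravity at the Earth's surface, m/s^2
R_EARTH = 6.370e6                     # Earth's radius, m
MU = G_SURFACE * R_EARTH ** 2         # standard gravitational parameter, m^3/s^2

def min_delta_v(d):
    """Minimum launch speed (m/s) for ballistic range d (m) on a
    non-rotating, airless, spherical Earth."""
    theta = d / (2 * R_EARTH)         # half the central angle, radians
    s = math.sin(theta)
    return math.sqrt(2 * G_SURFACE * R_EARTH * s / (1 + s))

def flight_time(d):
    """Free-fall flight time (s) on the minimum-delta-v ellipse, from the
    swept-area fraction of the full period (Kepler's second and third laws)."""
    theta = d / (2 * R_EARTH)
    s, c = math.sin(theta), math.cos(theta)
    a = R_EARTH * (1 + s) / 2                      # semi-major axis
    e = c / (1 + s)                                # eccentricity
    period = 2 * math.pi * math.sqrt(a ** 3 / MU)  # Kepler's third law
    big_e = math.acos((1 - s) / c)                 # eccentric anomaly from apogee
    return (period / math.pi) * (big_e + e * math.sin(big_e))

# Ranges from the text (just short of 20,000 km to avoid the half-way singularity):
for d_km in (5500, 10_000, 19_999):
    d = d_km * 1000.0
    print(f"{d_km:6d} km: dv = {min_delta_v(d) / 1000:.1f} km/s, "
          f"t = {flight_time(d) / 60:.0f} min")
```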
Increasing the speed to 7.9 km/s, to be able to attain any point on Earth, requires a considerably larger missile, because the amount of fuel needed goes up exponentially with delta-v (see Tsiolkovsky rocket equation). The initial direction of a minimum-delta-v trajectory points halfway between straight up and straight toward the destination point (which is below the horizon). Again, this is the case if the Earth's rotation is ignored; it is not exactly true for a rotating planet unless the launch takes place at a pole.

Flight duration
In a vertical flight to not-too-high altitudes, the free-fall time is, for each of the upward and downward parts, the maximum speed divided by the acceleration of gravity, so with a maximum speed of 1 km/s it is 3 minutes and 20 seconds in total. The duration of the flight phases before and after the free-fall can vary. For an intercontinental flight the boost phase takes 3 to 5 minutes and the free-fall (midcourse phase) about 25 minutes. For an ICBM the atmospheric reentry phase takes about 2 minutes; this will be longer for any soft landing, such as for a possible future commercial flight. Test flight 4 of the SpaceX Starship performed such a flight, with a liftoff from Texas and a simulated soft touchdown in the Indian Ocean 66 minutes after liftoff.

Sub-orbital flights can last from just seconds to days. Pioneer 1 was NASA's first space probe, intended to reach the Moon. A partial failure caused it to instead follow a sub-orbital trajectory, reentering the Earth's atmosphere 43 hours after launch.

To calculate the time of flight for a minimum-delta-v trajectory, note that according to Kepler's third law the period for the entire orbit (if it did not go through the Earth) would be

T = 2\pi\sqrt{\frac{a^3}{\mu}}.

Using Kepler's second law, we multiply this by the portion of the area of the ellipse swept by the line from the centre of the Earth to the projectile; in terms of the eccentric anomaly E = \arccos\frac{1-\sin\theta}{\cos\theta} of the launch point measured from apogee, and the eccentricity e = \frac{\cos\theta}{1+\sin\theta}, the time of flight is

t = \frac{T}{\pi}\left(E + e\sin E\right).

This gives about 32 minutes for going a quarter of the way around the Earth, and 42 minutes for going halfway around. For short distances, this expression is asymptotic to \sqrt{2d/g}. From the form involving the arccosine, the derivative of the time of flight with respect to d (or θ) goes to zero as d approaches 20,000 km (halfway around the world). The derivative of Δv also goes to zero here. So if d = , the length of the minimum-delta-v trajectory will be about , but it will take only a few seconds less time than the trajectory for d = (for which the trajectory is long).

Flight profiles
While there are a great many possible sub-orbital flight profiles, it is expected that some will be more common than others.

Ballistic missiles
The first sub-orbital vehicles to reach space were ballistic missiles. The first ballistic missile to reach space was the German V-2, the work of the scientists at Peenemünde, on October 3, 1942, which reached an altitude of . In the late 1940s the US and the USSR concurrently developed missiles, all of which were based on the V-2, and then much longer-range intercontinental ballistic missiles (ICBMs). There are now many countries that possess ICBMs, and even more with shorter-range intermediate-range ballistic missiles (IRBMs).

Tourist flights
Sub-orbital tourist flights will initially focus on attaining the altitude required to qualify as reaching space. The flight path will be either vertical or very steep, with the spacecraft landing back at its take-off site. The spacecraft will shut off its engines well before reaching maximum altitude, and then coast up to its highest point.
During the few minutes from the point when the engines are shut off to the point where the atmosphere begins to slow the descent, the passengers will experience weightlessness.

Megaroc had been planned for sub-orbital spaceflight by the British Interplanetary Society in the 1940s. In late 1945, a group led by M. K. Tikhonravov and N. G. Chernysheva at the Soviet NII-4 academy (dedicated to rocket artillery science and technology) began work on a stratospheric rocket project, VR-190, aimed at vertical flight by a crew of two pilots to an altitude of 200 km using a captured V-2.

In 2004, a number of companies worked on vehicles in this class as entrants to the Ansari X Prize competition. The Scaled Composites SpaceShipOne was officially declared by Rick Searfoss to have won the competition on October 4, 2004, after completing two flights within a two-week period.

In 2005, Sir Richard Branson of the Virgin Group announced the creation of Virgin Galactic and his plans for a 9-seat-capacity SpaceShipTwo named VSS Enterprise. It has since been completed with eight seats (one pilot, one co-pilot, and six passengers) and has taken part in captive-carry tests with the first mother ship, WhiteKnightTwo, or VMS Eve. It has also completed solitary glides, with the movable tail sections in both fixed and "feathered" configurations. The hybrid rocket motor has been fired multiple times in ground-based test stands, and was fired in a powered flight for the second time on 5 September 2013. Four additional SpaceShipTwos have been ordered and will operate from the new Spaceport America. Commercial flights carrying passengers were expected in 2014, but were cancelled following the disaster during the SS2 PF04 flight. Branson stated, "[w]e are going to learn from what went wrong, discover how we can improve safety and performance and then move forwards together."

Scientific experiments
A major use of sub-orbital vehicles today is as scientific sounding rockets. Scientific sub-orbital flights began in the 1920s when Robert H. Goddard launched the first liquid-fueled rockets; however, they did not reach space altitude. In the late 1940s, captured German V-2 ballistic missiles were converted into V-2 sounding rockets, which helped lay the foundation for modern sounding rockets. Today there are dozens of different sounding rockets on the market, from a variety of suppliers in various countries. Typically, researchers wish to conduct experiments in microgravity or above the atmosphere.

Sub-orbital transportation
Research, such as that done for the X-20 Dyna-Soar project, suggests that a semi-ballistic sub-orbital flight could travel from Europe to North America in less than an hour. However, the size of the rocket, relative to the payload, necessary to achieve this is similar to that of an ICBM. ICBMs have delta-v's somewhat less than orbital, and such flights would therefore be somewhat cheaper than reaching orbit, but the difference is not large. Due to the high cost of spaceflight, sub-orbital flights are likely to be initially limited to high-value, very-high-urgency cargo deliveries such as courier flights, military fast-response operations, or space tourism.

The SpaceLiner is a hypersonic suborbital spaceplane concept that could transport 50 passengers from Australia to Europe in 90 minutes, or 100 passengers from Europe to California in 60 minutes.
The main challenge lies in increasing the reliability of the different components, particularly the engines, in order to make their use for passenger transportation on a daily basis possible. SpaceX is potentially considering using its Starship as a sub-orbital point-to-point transportation system.

Notable uncrewed sub-orbital spaceflights
The first sub-orbital space flight was on 20 June 1944, when MW 18014, a V-2 test rocket, launched from Peenemünde in Germany and reached 176 kilometres in altitude.
Bumper 5, a two-stage rocket launched from the White Sands Proving Grounds; on 24 February 1949 the upper stage reached an altitude of and a speed of .
Albert II, a male rhesus macaque, became the first mammal in space on 14 June 1949, in a sub-orbital flight from Holloman Air Force Base in New Mexico to an altitude of 83 miles (134 km) aboard a U.S. V-2 sounding rocket.
USSR – Energia, 15 May 1987, carrying a Polyus payload which failed to reach orbit.
SpaceX IFT-7, 16 January 2025, a Starship flight test which blew up during ascent, forcing airline flights to alter course to avoid falling debris and setting back Elon Musk's flagship rocket program; there were also numerous reports of damage on the ground. It is, to date, the most massive object launched into a sub-orbital trajectory.

Crewed sub-orbital spaceflights
Above 100 km (62.14 mi) in altitude.

Future of crewed sub-orbital spaceflight
Private companies such as Virgin Galactic, Armadillo Aerospace (reinvented as Exos Aerospace), Airbus, Blue Origin, and Masten Space Systems are taking an interest in sub-orbital spaceflight, due in part to ventures like the Ansari X Prize. NASA and others are experimenting with scramjet-based hypersonic aircraft, which may well be used with flight profiles that qualify as sub-orbital spaceflight. Non-profit entities like ARCASPACE and Copenhagen Suborbitals also attempt rocket-based launches.

Suborbital spaceflight projects
Canadian Arrow
CORONA
DH-1 (rocket)
Interorbital Systems
Lunar Lander Challenge
McDonnell Douglas DC-X
Project Morpheus, a NASA program to continue developing ALHAT and Quad landers
Quad (rocket)
Reusable Vehicle Testing program by JAXA
Rocketplane XP
SpaceX reusable launch system development program
XCOR Lynx
Technology
Basics_6
null
455626
https://en.wikipedia.org/wiki/Ratchet%20%28device%29
Ratchet (device)
A ratchet (occasionally spelled rachet) is a mechanical device that allows continuous linear or rotary motion in only one direction while preventing motion in the opposite direction. Ratchets are widely used in machinery and tools. The word ratchet is also used informally to refer to a ratcheting socket wrench.

Theory of operation
A ratchet consists of a round gear or a linear rack with teeth, and a pivoting, spring-loaded finger called a pawl (or click, in clocks and watches) that engages the teeth. The teeth are uniform but usually asymmetrical, with each tooth having a moderate slope on one edge and a much steeper slope on the other edge. When the teeth are moving in the unrestricted (i.e. forward) direction, the pawl easily slides up and over the gently sloped edges of the teeth, with a spring forcing it (often with an audible 'click') into the depression between the teeth as it passes the tip of each tooth. When the teeth move in the opposite (backward) direction, however, the pawl catches against the steeply sloped edge of the first tooth it encounters, locking against the tooth and preventing any further motion in that direction.

Backlash
Because the ratchet can stop backward motion only at discrete points (i.e., at tooth boundaries), a ratchet does allow a limited amount of backward motion. This backward motion—which is limited to a maximum distance equal to the spacing between the teeth—is called backlash. In cases where backlash must be minimized, a smooth, toothless ratchet with a high-friction surface such as rubber is sometimes used. The pawl bears against the surface at an angle so that any backward motion causes the pawl to jam against the surface and thus prevent any further backward motion. Since the backward travel distance is primarily a function of the compressibility of the high-friction surface, this mechanism can result in significantly reduced backlash.

Uses
Ratchet mechanisms are used in a wide variety of applications, including these:
Cable ties
Capstans
Caulking guns
Clocks
Computer keyboards
Freewheels (overrunning clutches)
Grease guns
Handcuffs
Jacks
Anti-rollback devices used in roller coasters
Looms
Slacklines
Socket wrenches
Tie-down straps
Turnstiles
Typewriters
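The discrete nature of backlash described above can be captured in a toy numerical model. The short Python sketch below (purely illustrative, not from the article; the 2 mm pitch and function names are assumptions) shows that a toothed ratchet holds a rack only at tooth boundaries, so reverse travel of up to one tooth pitch is always possible:

```python
import math

PITCH_MM = 2.0  # assumed distance between adjacent tooth boundaries

def held_position(x_mm: float) -> float:
    """Tooth boundary at or below x where the pawl will catch on reversal."""
    return math.floor(x_mm / PITCH_MM) * PITCH_MM

def backlash(x_mm: float) -> float:
    """Reverse travel possible before the pawl engages (0 <= b < PITCH_MM)."""
    return x_mm - held_position(x_mm)

# Push the rack forward 7.3 mm, then release it:
x = 7.3
print(held_position(x))  # 6.0 -> the rack slips back to this tooth boundary
print(backlash(x))       # 1.3 -> backward motion allowed, always under one pitch
```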
Technology
Mechanisms
null
10445610
https://en.wikipedia.org/wiki/Liming%20%28leather%20processing%29
Liming (leather processing)
Liming is a process used in parchment or leather processing, in which hides are soaked in an alkali solution. It is performed using a drum and paddle or a pit. Its objectives are:
Removal of interfibrillary proteins.
Removal of keratin proteins.
Collagen swelling due to the alkaline pH.
Collagen fibre bundle splitting.
Removal of natural grease and fats.
Liming operations on cattle hides usually last 18 hours and are generally associated with the alkaline phase of beamhouse operations.

Removal of interfibrillary proteins
The interfibrillary proteins are denatured by the presence of alkali (particularly sodium sulfide) and rendered soluble, facilitating their removal from the leather. Removal occurs through the mechanical action of liming or reliming, but most prominently when the pelt is deswelled (during deliming). Failure to remove these proteins results in a hard, tinny leather (due to fibre gluing upon drying) that is brittle and inflexible.

Keratin removal
Keratin, which is present in the hair, scales, and epidermis of the skin, is hydrolyzed in the presence of alkali (at pH values greater than 11.5). The disulfide bridges found in keratin protein are cleaved but can be reformed. Long periods of liming will result in hair removal. The main removal of keratin is performed during the unhairing operation. In traditional processing, liming and unhairing were indivisible and took place at the same time. In modern liming methods, and in particular in the processing of sheepskins, the hair is removed first and the skins are then limed in a liming drum. In hair-save technology, the hides are unhaired first and then limed for a further 12–18 hours.

Alkaline collagen swelling
The presence of calcium hydroxide results in the alkaline swelling of the skin. The result is an influx of water into the hide or skin, and a marked increase in fibre diameter with fibre shortening. The thickness of the skin increases, but the surface area of the pelt decreases. The uptake of water results in a doubling of the hide/skin weight. However, this weight also needs to take into account that proteins (especially the hair) have been removed, and that the fleshing operation is often performed after liming.

Collagen fibre bundle splitting
The action of liming, in particular the swelling of the skin, results in the splitting of the fibre bundle sheath. Because the fibre diameter increases, the bundle sheath cannot contain the thicker fibres, and it bursts open. This allows increased access to the fibres, which permits better tanning, retanning, dyeing and fatliquoring.
Technology
Materials
null
15575410
https://en.wikipedia.org/wiki/Euclidean%20plane
Euclidean plane
In mathematics, a Euclidean plane is a Euclidean space of dimension two, denoted \mathbf{E}^2 or \mathbb{E}^2. It is a geometric space in which two real numbers are required to determine the position of each point. It is an affine space, which includes in particular the concept of parallel lines. It also has metrical properties induced by a distance, which allow circles to be defined and angles to be measured.

A Euclidean plane with a chosen Cartesian coordinate system is called a Cartesian plane. The set \mathbb{R}^2 of the ordered pairs of real numbers (the real coordinate plane), equipped with the dot product, is often called the Euclidean plane or standard Euclidean plane, since every Euclidean plane is isomorphic to it.

History
Books I through IV and VI of Euclid's Elements dealt with two-dimensional geometry, developing such notions as similarity of shapes, the Pythagorean theorem (Proposition 47), equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles are "equal" (have the same area), among many other topics.

Later, the plane was described in a so-called Cartesian coordinate system, a coordinate system that specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length. Each reference line is called a coordinate axis or just an axis of the system, and the point where they meet is its origin, usually the ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin.

The idea of this system was developed in 1637 in writings by Descartes and independently by Pierre de Fermat, although Fermat also worked in three dimensions and did not publish the discovery. Both authors used a single (abscissa) axis in their treatments, with the lengths of ordinates measured along lines not necessarily perpendicular to that axis. The concept of using a pair of fixed axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work.

Later, the plane was thought of as a field, where any two points could be multiplied and, except for 0, divided. This was known as the complex plane. The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. These are named after Jean-Robert Argand (1768–1822), although they were first described by the Danish-Norwegian land surveyor and mathematician Caspar Wessel (1745–1818). Argand diagrams are frequently used to plot the positions of the poles and zeroes of a function in the complex plane.

In geometry

Coordinate systems
In mathematics, analytic geometry (also called Cartesian geometry) describes every point in two-dimensional space by means of two coordinates. Two perpendicular coordinate axes are given which cross each other at the origin. They are usually labeled x and y. Relative to these axes, the position of any point in two-dimensional space is given by an ordered pair of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the other axis.
Another widely used coordinate system is the polar coordinate system, which specifies a point in terms of its distance from the origin and its angle relative to a rightward reference ray. Embedding in three-dimensional space Polytopes In two dimensions, there are infinitely many polytopes: the polygons. The first few regular ones are shown below: Convex The Schläfli symbol represents a regular -gon. Degenerate (spherical) The regular monogon (or henagon) {1} and regular digon {2} can be considered degenerate regular polygons and exist nondegenerately in non-Euclidean spaces like a 2-sphere, 2-torus, or right circular cylinder. Non-convex There exist infinitely many non-convex regular polytopes in two dimensions, whose Schläfli symbols consist of rational numbers {n/m}. They are called star polygons and share the same vertex arrangements of the convex regular polygons. In general, for any natural number n, there are n-pointed non-convex regular polygonal stars with Schläfli symbols {n/m} for all m such that m < n/2 (strictly speaking {n/m} = {n/(n − m)}) and m and n are coprime. Circle The hypersphere in 2 dimensions is a circle, sometimes called a 1-sphere (S1) because it is a one-dimensional manifold. In a Euclidean plane, it has the length 2πr and the area of its interior is where is the radius. Other shapes There are an infinitude of other curved shapes in two dimensions, notably including the conic sections: the ellipse, the parabola, and the hyperbola. In linear algebra Another mathematical way of viewing two-dimensional space is found in linear algebra, where the idea of independence is crucial. The plane has two dimensions because the length of a rectangle is independent of its width. In the technical language of linear algebra, the plane is two-dimensional because every point in the plane can be described by a linear combination of two independent vectors. Dot product, angle, and length The dot product of two vectors and is defined as: A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction the arrow points. The magnitude of a vector A is denoted by . In this viewpoint, the dot product of two Euclidean vectors A and B is defined by where θ is the angle between A and B. The dot product of a vector A by itself is which gives the formula for the Euclidean length of the vector. In calculus Gradient In a rectangular coordinate system, the gradient is given by Line integrals and double integrals For some scalar field f : U ⊆ R2 → R, the line integral along a piecewise smooth curve C ⊂ U is defined as where r: [a, b] → C is an arbitrary bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C and . For a vector field F : U ⊆ R2 → R2, the line integral along a piecewise smooth curve C ⊂ U, in the direction of r, is defined as where · is the dot product and r: [a, b] → C is a bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C. A double integral refers to an integral within a region D in R2 of a function and is usually written as: Fundamental theorem of line integrals The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. Let . Then with p, q the endpoints of the curve γ. Green's theorem Let C be a positively oriented, piecewise smooth, simple closed curve in a plane, and let D be the region bounded by C. 
If L and M are functions of (x, y) defined on an open region containing D and have continuous partial derivatives there, then $\oint_C (L \, dx + M \, dy) = \iint_D \left( \frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} \right) dx \, dy$, where the path of integration along C is counterclockwise. In topology In topology, the plane is characterized as being the unique contractible 2-manifold. Its dimension is characterized by the fact that removing a point from the plane leaves a space that is connected, but not simply connected. In graph theory In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph or planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points.
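As a minimal computational illustration of planarity (a sketch assuming the third-party Python library networkx; the graphs K4 and K5 are the standard textbook examples, not taken from this article):

    # Sketch: testing whether small graphs can be drawn in the plane
    # without edge crossings. Requires networkx (pip install networkx).
    import networkx as nx

    k4 = nx.complete_graph(4)  # K4 is planar
    k5 = nx.complete_graph(5)  # K5 is the classic non-planar graph

    is_planar, embedding = nx.check_planarity(k4)
    print("K4 planar:", is_planar)   # True; 'embedding' encodes a plane drawing

    is_planar, _ = nx.check_planarity(k5)
    print("K5 planar:", is_planar)   # False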
Mathematics
Geometry
null
17297847
https://en.wikipedia.org/wiki/Old%20Tjikko
Old Tjikko
Old Tjikko is an approximately 9,566-year-old Norway spruce located in the Dalarna province of Sweden. Old Tjikko originally gained fame as the "world's oldest tree". Old Tjikko is, however, a clonal tree that has regenerated new trunks, branches and roots over millennia, rather than an individual tree of great age. Old Tjikko is recognized as the oldest living Picea abies and the fourth-oldest known clonal tree. The age of the tree was determined by carbon dating of genetically matched plant material collected from under the tree, as dendrochronology does not work for clonal trees. The trunk itself is estimated to be only a few centuries old, but the plant has survived for much longer due to a process known as layering (when a branch comes in contact with the ground, it sprouts a new root), or vegetative cloning (when the trunk dies but the root system is still alive, it may sprout a new trunk). Discovery and details The root system of Old Tjikko is estimated to be about 9,566 years old, making it the world's oldest known Norway spruce. It stands about 5 metres (16 ft) tall and is located on Fulufjället Mountain in the Dalarna province of Sweden. For millennia, the tree appeared in a stunted shrub formation (also known as a krummholz formation) due to the harsh extremes of the environment in which it lives. During the warming of the 20th century, the tree sprouted into a normal tree formation. The husband and wife who discovered the tree, Leif Kullman (professor of physical geography at Umeå University) and Lisa Öberg (tree scientist with a doctorate in biology and ecology from Mid Sweden University), attributed this growth spurt to global warming and gave the tree its nickname "Old Tjikko" after their late dog. The tree has survived for so long due to vegetative cloning. The visible tree is relatively young, but it is part of an older root system that dates back millennia. The trunk of the tree may die and regrow multiple times, but the tree's root system remains intact and in turn sprouts another trunk. The trunk may live only about six hundred years, and when one trunk dies another eventually grows back in its place. Also, each winter, heavy snow may push the tree's low-lying branches to ground level, where they take root and survive to grow again the next year in a process known as layering. Layering occurs when a tree's branch comes in contact with the earth and new roots sprout from the contact point. Other trees, such as coast redwoods and western red cedars, are known to reproduce by layering. The tree's age was determined by carbon-14 dating of the root system, which found roots dating back to 375, 5,660, 9,000, and 9,550 years ago. Carbon dating is not accurate enough to pin down the exact year the tree sprouted from seed; however, given the estimated age, the tree is thought to have sprouted around 7550 BC. For comparison, the invention of writing (and thus the beginning of recorded history) did not occur until around 4000 BC. Researchers have found a cluster of around twenty spruce trees in the same area, all over eight millennia old. The estimated age of Old Tjikko is close to the maximum possible for this area, as the receding Fenno-Scandian ice sheet of the last ice age only released Fulufjället Mountain around ten millennia ago. Nature conservancy authorities have considered putting a fence around the tree to protect it from possible vandals or trophy hunters.
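As an illustrative sketch of the dating principle (the half-life below is the standard modern value for carbon-14; the sample fraction is hypothetical, not a measurement from Old Tjikko):

    import math

    HALF_LIFE_C14 = 5730.0  # years, modern ("Cambridge") half-life of carbon-14

    def radiocarbon_age(fraction_remaining: float) -> float:
        """Estimate age in years from the fraction of original C-14 remaining."""
        return -HALF_LIFE_C14 * math.log(fraction_remaining) / math.log(2)

    # A hypothetical sample retaining about 31.5% of its original carbon-14
    # yields an age near the oldest roots reported for Old Tjikko:
    print(round(radiocarbon_age(0.315)))  # roughly 9,550 years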
On July 1, 2024, it was reported that the Stockholm-based art studio Goldin+Senneby were building a climate-controlled installation at a new hospital campus in Malmö, Sweden. The installation houses a clone of Old Tjikko and was created using small twigs cut from Old Tjikko's top branches, which were then grafted onto stems of other spruce trees. This process would thus result in saplings with DNA identical to that of Old Tjikko.
Biology and health sciences
Pinaceae
Plants
760922
https://en.wikipedia.org/wiki/Bok%20choy
Bok choy
Bok choy (American English, Canadian English, and Australian English), pak choi (British English, South African English, and Caribbean English) or pok choi is a type of Chinese cabbage (Brassica rapa subsp. chinensis) cultivated as a leaf vegetable to be used as food. Varieties do not form heads; instead, they have green leaf blades with lighter bulbous bottoms, forming a cluster reminiscent of mustard greens. Its flavor is described as being between spinach and water chestnuts but slightly sweeter, with a mildly peppery undertone. The green leaves have a stronger flavor than the white bulb. Chinensis varieties are popular in southern China, East Asia, and Southeast Asia. Being winter-hardy, they are increasingly grown in Northern Europe. Originally classified as Brassica chinensis by Carl Linnaeus, they are now considered a subspecies of Brassica rapa. They are members of the family Brassicaceae. Spelling and naming variations Other than the term "Chinese cabbage", the most widely used name in North America for the chinensis variety is bok choy (Cantonese for "white vegetable") or siu bok choy (Cantonese for "small white vegetable", as opposed to dai bok choy meaning "big white vegetable", referring to the larger Napa cabbage). It is also sometimes spelled as pak choi, bok choi, and pak choy. In the UK, South Africa, and the Caribbean the term pak choi is used. Less commonly, the names Chinese chard, Chinese mustard, celery mustard, and spoon cabbage are also used. There are two main types of bok choy, collectively called xiǎo bái cài ("small white vegetable") in Mandarin. One is white bok choy, with dark green blades and white stalks, which is primarily cultivated in South China; in Cantonese it is simply called baak choi (the same characters, pronounced bái cài by Mandarin speakers, are preferably used for Napa cabbage). The other is green bok choy, with light green stalks, which is more common in East China; the young and tender plants of green bok choy are called baby bok choy, which is less crisp and therefore may become too soft if overcooked. In Australia, the New South Wales Department of Primary Industries has redefined many transcribed names to refer to specific cultivars. It has introduced the word buk choy to refer to white bok choy and redefined pak choy to refer to green bok choy. Uses Cooking Bok choy cooks in 2 to 3 minutes by steaming, stir-frying, or simmering in water (8 minutes if steamed whole). The leaves cook faster than the stem. It is often used in similar ways to other leafy vegetables such as spinach and cabbage. It can also be eaten raw, and is commonly used in salads. Preserving Dried bok choy is saltier and sweeter than fresh. Pickled bok choy remains edible for months. Immature plants have the sweetest, tenderest stems and leaves. Nutritional value The raw vegetable is 95% water, 2% carbohydrates, 1% protein and less than 1% fat. In a reference serving, raw bok choy provides 54 kilojoules (13 food calories) of food energy and is a rich source (20% or more of the Daily Value, DV) of vitamin A (30% DV), vitamin C (54% DV) and vitamin K (44% DV), while providing folate, vitamin B6 and calcium in moderate amounts (10–17% DV). History Bok choy evolved in China, where it has been cultivated since the 5th century CE.
Biology and health sciences
Leafy vegetables
Plants
760951
https://en.wikipedia.org/wiki/Cysticercosis
Cysticercosis
Cysticercosis is a tissue infection caused by the young form of the pork tapeworm. People may have few or no symptoms for years. In some cases, particularly in Asia, solid lumps of between one and two centimeters may develop under the skin. After months or years these lumps can become painful and swollen and then resolve. A specific form called neurocysticercosis, which affects the brain, can cause neurological symptoms. In developing countries this is one of the most common causes of seizures. Cysticercosis is usually acquired by eating food or drinking water contaminated by tapeworm eggs from human feces. Among foods, egg-contaminated vegetables are a major source. The tapeworm eggs are present in the feces of a person infected with the adult worms, a condition known as taeniasis. Taeniasis, in the strict sense, is a different disease and is due to eating cysts in poorly cooked pork. People who live with someone with pork tapeworm have a greater risk of getting cysticercosis. The diagnosis can be made by aspiration of a cyst. Taking pictures of the brain with computed tomography (CT) or magnetic resonance imaging (MRI) is most useful for the diagnosis of disease in the brain. An increased number of a type of white blood cell, called eosinophils, in the cerebrospinal fluid and blood is also an indicator. Infection can be effectively prevented by personal hygiene and sanitation: this includes cooking pork well, proper toilets and sanitary practices, and improved access to clean water. Treating those with taeniasis is important to prevent spread. Treating the disease when it does not involve the nervous system may not be required. Treatment of those with neurocysticercosis may be with the medications praziquantel or albendazole. These may be required for long periods. Steroids to reduce inflammation during treatment, and anti-seizure medications, may also be required. Surgery is sometimes done to remove the cysts. The pork tapeworm is particularly common in Asia, Sub-Saharan Africa, and Latin America. In some areas it is believed that up to 25% of people are affected. In the developed world it is very uncommon. Worldwide in 2015 it caused about 400 deaths. Cysticercosis also affects pigs and cows but rarely causes symptoms, as most are slaughtered before symptoms arise. The disease has occurred in humans throughout history. It is one of the neglected tropical diseases. Signs and symptoms Muscles Cysticerci can develop in any voluntary muscle. Invasion of muscle can cause inflammation of the muscle, with fever, eosinophilia, and increased size, which begins with muscle swelling and later progresses to atrophy and scarring. Usually, it is asymptomatic, since the cysticerci die and become calcified. Nervous system The term neurocysticercosis is generally accepted to refer to cysts in the parenchyma of the brain. It presents with seizures and, less commonly, headaches. Cysticerci in brain parenchyma are usually 5–20 mm in diameter. In subarachnoid space and fissures, lesions may be as large as 6 cm in diameter and lobulated. They may be numerous and life-threatening. Cysts located within the ventricles of the brain can block the outflow of cerebrospinal fluid and present with symptoms of increased intracranial pressure. Racemose neurocysticercosis refers to cysts in the subarachnoid space. These can occasionally grow into large lobulated masses causing pressure on surrounding structures. Spinal cord neurocysticercosis most commonly presents with symptoms such as back pain and radiculopathy.
Eyes In some cases, cysticerci may be found in the eyeball, extraocular muscles, and under the conjunctiva (subconjunctiva). Depending on the location, they may cause visual difficulties that fluctuate with eye position, retinal edema, hemorrhage, decreased vision, or even vision loss. Skin Subcutaneous cysts are firm, mobile nodules, occurring mainly on the trunk and extremities. Subcutaneous nodules are sometimes painful. Cause Human cysticercosis develops after ingestion of the egg form of Taenia solium (often abbreviated as T. solium and also called pork tapeworm), which is transmitted through the oral-fecal route. The eggs enter the intestine, where they hatch into oncosphere larvae. The larvae enter the bloodstream and invade host tissues, where they further develop into larvae called cysticerci. The cysticercus larva completes development in about two months. It is about 0.6 to 1.8 cm long, translucent, ellipsoidal, and shiny white, containing one developing scolex. Diagnosis The traditional method of demonstrating either tapeworm eggs or proglottids in stool samples diagnoses only taeniasis, carriage of the tapeworm stage of the life cycle. Only a small minority of patients with cysticercosis will harbor a tapeworm, rendering stool studies ineffective for diagnosis. Ophthalmic cysticercosis can be diagnosed by visualizing the parasite in the eye by fundoscopy. In cases of human cysticercosis, diagnosis is difficult and requires biopsy of the infected tissue or sophisticated instruments. Taenia solium eggs and proglottids found in feces, ELISA, or polyacrylamide gel electrophoresis diagnose only taeniasis and not cysticercosis. Radiological tests, such as X-ray, CT scans which demonstrate "ring-enhancing brain lesions", and MRIs, can also be used to detect the disease. X-rays are used to identify calcified larvae in the subcutaneous and muscle tissues, and CT scans and MRIs are used to find lesions in the brain. Serological Antibodies to cysticerci can be demonstrated in serum by enzyme-linked immunoelectrotransfer blot (EITB) assay and in CSF by ELISA. An immunoblot assay using lentil-lectin (agglutinin from Lens culinaris) is highly sensitive and specific. However, individuals with intracranial lesions and calcifications may be seronegative. In the CDC's immunoblot assay, cysticercosis-specific antibodies can react with structural glycoprotein antigens from the larval cysts of Taenia solium. However, this is mainly a research tool not widely available in clinical practice and nearly unobtainable in resource-limited settings. Neurocysticercosis The diagnosis of neurocysticercosis is mainly clinical, based on a compatible presentation of symptoms and findings of imaging studies. Imaging Neuroimaging with CT or MRI is the most useful method of diagnosis. A CT scan shows both calcified and uncalcified cysts, as well as distinguishing active and inactive cysts. Cystic lesions can show ring enhancement and focal-enhancing lesions. Some cystic lesions, especially the ones in ventricles and subarachnoid space, may not be visible on a CT scan, since the cyst fluid is isodense with cerebrospinal fluid (CSF). Thus diagnosis of extraparenchymal cysts usually relies on signs like hydrocephalus or enhanced basilar meninges. A CT scan with intraventricular contrast or MRI can be used. MRI is more sensitive in the detection of intraventricular cysts. CSF CSF findings include pleocytosis, elevated protein levels and depressed glucose levels, but these are not always present.
Prevention Cysticercosis is considered a "tools-ready disease" according to WHO. The International Task Force for Disease Eradication reported in 1992 that cysticercosis is potentially eradicable. Eradication is feasible because there are no animal reservoirs besides humans and pigs, and the only source of Taenia solium infection for pigs is humans, the definitive host. Theoretically, breaking the life cycle seems straightforward, with interventions possible at various stages of the cycle. For example, mass chemotherapy of infected individuals, improving sanitation, and educating people are all major ways to interrupt the cycle, in which eggs from human feces are transmitted to other humans and/or pigs. Cooking or freezing pork and inspecting meat are effective means of breaking the life cycle. Managing pigs by treating or vaccinating them is another possible intervention, as is separating pigs from human feces by confining them in enclosed piggeries. In Western European countries after World War II, the pig industry developed rapidly and most pigs were housed; this was the main reason pig cysticercosis was largely eliminated from the region. This, of course, is not a quick answer to the problem in developing countries. Pigs The intervention strategies to eradicate cysticercosis include surveillance of pigs in foci of transmission and mass chemotherapy treatment of humans. In reality, control of T. solium by a single intervention, for instance by treating only the human population, will not work, because existing infected pigs can still carry on the cycle. The proposed strategy for eradication is multilateral intervention, treating both the human and porcine populations. It is feasible because treating pigs with oxfendazole is effective, and once treated, pigs are protected from further infections for at least 3 months. Limitations Even with the concurrent treatment of humans and pigs, complete elimination is hard to achieve. In one study conducted in 12 villages in Peru, both humans and pigs were treated with praziquantel and oxfendazole, with coverage of more than 75% in humans and 90% in pigs. The results showed a decrease in prevalence and incidence in the intervention area; however, the effect did not eliminate T. solium. Possible reasons include incomplete coverage and re-infection. Even though T. solium could be eliminated through mass treatment of the human and porcine populations, the approach is not sustainable. Moreover, tapeworm carriers, both human and porcine, tend to spread the disease from endemic to non-endemic areas, resulting in periodic outbreaks of cysticercosis or outbreaks in new areas. Vaccines Given that pigs are part of the life cycle, vaccinating pigs is another feasible intervention to eliminate cysticercosis. Research studies have focused on vaccines against cestode parasites, since many immune cell types are found to be capable of destroying cysticerci. Many vaccine candidates are extracted from antigens of different cestodes, such as Taenia solium, T. crassiceps, T. saginata, and T. ovis, and target oncospheres and/or cysticerci. In 1983, Molinari et al. reported the first vaccine candidate against porcine cysticercosis, using antigen from Cysticercus cellulosae drawn from naturally infected animals. Recently, vaccines extracted from genetically engineered 45W-4B antigens have been successfully tested in pigs under experimental conditions. This type of vaccine can protect against cysticercosis caused by both the Chinese and Mexican types of T. solium.
However, it has not been tested in endemic field conditions, which is important because realistic conditions in the field differ greatly from experimental conditions, and this can result in a great difference in the chances of infection and immune reaction. Even though vaccines have been successfully generated, the feasibility of their production and use in rural free-ranging pigs remains a challenge. If a vaccine is to be injected, the burden of work and the cost of vaccine administration to pigs will remain high and unrealistic. The incentive for pig owners to use a vaccine will decrease if administration requires injecting every single pig in their livestock. A hypothetical oral vaccine is proposed to be more effective, as it could be delivered easily to the pigs in food. S3PVAC vaccine The vaccine constituted by three synthetically produced peptides (S3Pvac) has proven effective under natural conditions of transmission. The S3Pvac vaccine can so far be considered the best vaccine candidate for use in endemic areas such as Mexico (20). S3Pvac consists of three protective peptides: KETc12, KETc1, and GK1, whose sequences belong to native antigens that are present in the different developmental stages of T. solium and other cestode parasites. Non-infected pigs from rural villages in Mexico were vaccinated with S3Pvac, and the vaccine reduced the number of cysticerci by 98% and the prevalence by 50%. The diagnostic method involved necropsy and tongue inspection of pigs. The natural challenge conditions used in the study proved the efficacy of the S3Pvac vaccine in transmission control of T. solium in Mexico. The S3Pvac vaccine is owned by the National Autonomous University of Mexico, and a method for high-scale production of the vaccine has already been developed. Validation of the vaccine in agreement with the Secretary of Animal Health in Mexico is currently in the process of completion. It is also hoped that the vaccine will be well accepted by pig owners, because they lose income if their pigs are infected with cysticercosis. Vaccination of pigs against cysticercosis, if successful, can potentially have a great impact on transmission control, since there is no chance of re-infection once pigs receive the vaccination. Other Cysticercosis can also be prevented by routine inspection of meat and condemnation of measly meat by the local government, and by avoiding partially cooked meat products. However, in areas where food is scarce, condemning cyst-infected meat may be seen as wasteful, since pork can provide high-quality protein. At times, infected pigs are consumed within the locality or sold at low prices to traffickers, who take the uninspected pigs to urban areas for sale. Management Neurocysticercosis Asymptomatic cysts, such as those discovered incidentally on neuroimaging done for another reason, may never lead to symptomatic disease and in many cases do not require therapy. Calcified cysts have already died and involuted. Seizures can still occur in individuals with only calcified cysts. Neurocysticercosis may present as hydrocephalus and acute-onset seizures, thus the immediate therapy is emergency reduction of intracranial pressure and anticonvulsant medications. Once the seizures have been brought under control, antihelminthic treatments may be undertaken. The decision to treat with antiparasitic therapy is complex and based on the stage and number of cysts present, their location, and the person's specific symptoms.
Adult Taenia solium is easily treated with niclosamide, which is most commonly used for taeniasis. However, cysticercosis is a complex disease and requires careful medication. Praziquantel (PZQ) is most often the drug of choice for neurocysticercosis. Albendazole is also a viable (and potentially superior) drug for the disease, and it has a lower cost and fewer drug interactions than praziquantel. In more complex situations, a combination of praziquantel, albendazole and steroids (such as corticosteroids to reduce the inflammation) is recommended. In the brain, the cysts can usually be found on the surface. Most brain cysts are found by accident, often while searching for other ailments. Surgical removal is the only way to completely remove cysts, even if symptoms have been treated successfully with medications. Antiparasitic treatment should be given in combination with corticosteroids and anticonvulsants to reduce inflammation surrounding the cysts and lower the risk of seizures. When corticosteroids are given in combination with praziquantel, cimetidine is also given, as corticosteroids decrease the action of praziquantel by enhancing its first-pass metabolism. Surgical intervention is much more likely to be needed in intraventricular, racemose, or spinal neurocysticercosis. Treatments include direct excision of ventricular cysts, shunting procedures, and removal of cysts via endoscopy. Eyes In eye disease, surgical removal is necessary for cysts within the eye itself, as treating intraocular lesions with anthelmintics will elicit an inflammatory reaction causing irreversible damage to structural components. Cysts outside the globe can be treated with anthelmintics and steroids. Treatment recommendations for subcutaneous cysticercosis include surgery, praziquantel, and albendazole. Epidemiology Regions Taenia solium is found worldwide, but is more common where pork is part of the diet. Cysticercosis is most prevalent where humans live in close contact with pigs. Therefore, high prevalences are reported in Mexico, Latin America, West Africa, Russia, India, Pakistan, North-East China, and Southeast Asia. In Europe it is most widespread among Slavic peoples. However, reviews of the epidemiological data in Western and Eastern Europe show there are still considerable gaps in our understanding of the disease in these regions as well. Infection estimates In Latin America, an estimated 75 million persons live in endemic areas and 400,000 people have symptomatic disease. Some studies suggest that the prevalence of cysticercosis in Mexico is between 3.1 and 3.9 percent. Other studies have found the seroprevalence in areas of Guatemala, Bolivia, and Peru as high as 20 percent in humans, and 37 percent in pigs. In Ethiopia, Kenya, and the Democratic Republic of the Congo around 10% of the population is infected, and in Madagascar 16%. The distribution of cysticercosis coincides with the distribution of T. solium. Cysticercosis is the most common cause of symptomatic epilepsy worldwide. Prevalence rates in the United States have shown that immigrants from Mexico, Central and South America, and Southeast Asia account for most of the domestic cases of cysticercosis. In 1990 and 1991, four unrelated members of an Orthodox Jewish community in New York City developed recurrent seizures and brain lesions, which were found to have been caused by T. solium. Researchers who interviewed the families suspect the infection was acquired from domestic workers who were carriers of the tapeworm.
Deaths Worldwide as of 2010, it caused about 1,200 deaths, up from 700 in 1990. Estimates from 2010 were that it contributed to at least 50,000 deaths annually. In the US during 1990–2002, 221 cysticercosis deaths were identified. Mortality rates were highest for Latinos and men. The mean age at death was 40.5 years (range 2–88). Most patients, 84.6%, were foreign-born, and 62% had emigrated from Mexico. The 33 US-born persons who died of cysticercosis represented 15% of all cysticercosis-related deaths. The cysticercosis mortality rate was highest in California, which accounted for 60% of all cysticercosis deaths. History The earliest reference to tapeworms was found in the works of ancient Egyptians dating back to almost 2000 BC. The description of measled pork in the History of Animals written by Aristotle (384–322 BC) showed that the infection of pork with tapeworm was known to ancient Greeks at that time. It was also known to Jewish and later to early Muslim physicians, and has been proposed as one of the reasons for pork being forbidden by Jewish and Islamic dietary laws. Recent examination of the evolutionary histories of hosts and parasites, together with DNA evidence, shows that over 10,000 years ago ancestors of modern humans in Africa became exposed to tapeworm when they scavenged for food or preyed on antelopes and bovids, and later passed the infection on to domestic animals such as pigs. Cysticercosis was described by Johannes Udalric Rumler in 1555; however, the connection between tapeworms and cysticercosis had not been recognized at that time. Around 1850, Friedrich Küchenmeister fed pork containing cysticerci of T. solium to humans awaiting execution in a prison, and after they had been executed, he recovered the developing and adult tapeworms in their intestines. By the middle of the 19th century, it was established that cysticercosis was caused by the ingestion of the eggs of T. solium.
Biology and health sciences
Helminthic diseases and infestations
Health
761035
https://en.wikipedia.org/wiki/Gai%20lan
Gai lan
Gai lan, kai-lan, Chinese broccoli, or Chinese kale (Brassica oleracea var. alboglabra) is a leafy vegetable with thick, flat, glossy blue-green leaves with thick stems, and florets similar to (but much smaller than) broccoli. A Brassica oleracea cultivar, gai lan is in the group alboglabra (from Latin albus "white" and glabrus "hairless"). When gone to flower, its white blossoms resemble those of its cousin Matthiola incana, or hoary stock. The flavor is very similar to that of broccoli, but noticeably stronger and slightly more bitter. Cultivation Gai lan is a cool season crop that grows best between . It withstands hotter summer temperatures than other brassicas such as broccoli or cabbage. Gai lan is harvested around 60–70 days after sowing, just before the flowers start to bloom. The stems can become woody and tough when the plant bolts. It is generally harvested for market when 15–20 cm (6–8 in) tall; however, it can also be produced as "baby gai lan". The "baby" version is cultivated by crowding seedlings and fertilizing generously; the plants resemble Brussels sprouts, although with looser folds. Hybrids Broccolini is a hybrid between broccoli and gai lan. Uses Culinary The stems and leaves of gai lan are eaten widely in Chinese cuisine; common preparations include gai lan stir-fried with ginger and garlic, and boiled or steamed and served with oyster sauce. It is also common in Vietnamese, Burmese and Thai cuisine. In Chinese cuisine it is often associated with dim sum restaurants. In Americanized Chinese food (like beef and broccoli), broccoli frequently replaced gai lan when gai lan was not available.
Biology and health sciences
Leafy vegetables
Plants
761556
https://en.wikipedia.org/wiki/South%20American%20fox
South American fox
The South American foxes (Lycalopex), commonly called raposa in Portuguese or zorro in Spanish, are a genus of canids from South America in the subfamily Caninae. Despite their name, they are not true foxes, but a unique canid genus more closely related to wolves and jackals than to true foxes; some of them resemble foxes due to convergent evolution. The South American gray fox, Lycalopex griseus, is the most common species, and is known for its large ears and a highly marketable, russet-fringed pelt. The second-oldest known fossils belonging to the genus were discovered in Chile, and date from 2.0 to 2.5 million years ago, in the mid- to late Pliocene. The Vorohué Formation of Argentina has provided older fossils, dating to the Uquian to Ensenadan (Late Pliocene). Names The common English word "zorro" is a loan word from Spanish, originally meaning "fox". Current usage lists Pseudalopex (literally: "false fox") as synonymous with Lycalopex ("wolf fox"), with the latter taking precedence. In 1895, Allen classified Pseudalopex as a subgenus of Canis, establishing the combination Canis (Pseudalopex), a name still used in the fossil record. Species Species currently included in this genus include: In 1914, Oldfield Thomas established the genus Dusicyon, in which he included these zorros. They were later reclassified to Lycalopex (via Pseudalopex) by Langguth in 1975. Phylogeny The following phylogenetic tree shows the evolutionary relationships between the Lycalopex species, based on molecular analysis of mitochondrial DNA control region sequences. Relationship with humans The zorros are hunted in Argentina for their durable, soft pelts. They are also often labelled 'lamb-killers'. In his diary of his well-known 1952 journey with the young Che Guevara, Alberto Granado mentions talking with seasonal workers employed on vast sheep farms, who told him of a successful campaign by the ranch owners to exterminate the foxes that were preying on lambs. The ranchers offered a reward of one Argentinian peso for the body of a dead male fox and as much as five pesos for a female fox; to impoverished workers in the early 1950s, five pesos was a significant sum. Within a few years, foxes became virtually extinct in a large part of Argentina. The Fuegian dog, also known as the Yaghan dog, was a domesticated form of the culpeo (Lycalopex culpaeus), unlike other domesticated canids, which were dogs and silver foxes. This means different canid species have been domesticated independently by humans multiple times.
Biology and health sciences
Canines
Animals
762047
https://en.wikipedia.org/wiki/Freshwater%20ecosystem
Freshwater ecosystem
Freshwater ecosystems are a subset of Earth's aquatic ecosystems that include the biological communities inhabiting freshwater waterbodies such as lakes, ponds, rivers, streams, springs, bogs, and wetlands. They can be contrasted with marine ecosystems, which have a much higher salinity. Freshwater habitats can be classified by different factors, including temperature, light penetration, nutrients, and vegetation. There are three basic types of freshwater ecosystems: lentic (slow moving water, including pools, ponds, and lakes), lotic (faster moving water, for example creeks and rivers) and wetlands (semi-aquatic areas where the soil is saturated or inundated for at least part of the time). Freshwater ecosystems contain 41% of the world's known fish species. Freshwater ecosystems have undergone substantial transformations over time, which have impacted various characteristics of the ecosystems. Early attempts to understand and monitor freshwater ecosystems were spurred by threats to human health (for example, cholera outbreaks due to sewage contamination). Early monitoring focused on chemical indicators, then bacteria, and finally algae, fungi and protozoa. A newer type of monitoring involves quantifying differing groups of organisms (macroinvertebrates, macrophytes and fish) and measuring the stream conditions associated with them. Threats to freshwater biodiversity include overexploitation, water pollution, flow modification, destruction or degradation of habitat, and invasion by exotic species. Climate change is putting further pressure on these ecosystems, because water temperatures have already increased by about 1 °C and significant declines in ice coverage have caused subsequent ecosystem stresses. Types There are three basic types of freshwater ecosystems: lentic (slow moving water, including pools, ponds, and lakes), lotic (faster moving water, for example streams and rivers) and wetlands (areas where the soil is saturated or inundated for at least part of the time). Limnology (and its branch freshwater biology) is the study of freshwater ecosystems. Lentic ecosystems Lotic ecosystems Wetlands Threats Biodiversity Five broad threats to freshwater biodiversity include overexploitation, water pollution, flow modification, destruction or degradation of habitat, and invasion by exotic species. Recent extinction trends can be attributed largely to sedimentation, stream fragmentation, chemical and organic pollutants, dams, and invasive species. Common chemical stresses on freshwater ecosystem health include acidification, eutrophication and copper and pesticide contamination. Freshwater biodiversity faces many threats. The World Wide Fund for Nature's Living Planet Index noted an 83% decline in the populations of freshwater vertebrates between 1970 and 2014. These declines continue to outpace contemporaneous declines in marine or terrestrial systems.
The causes of these declines are related to: a rapidly changing climate; online wildlife trade and invasive species; infectious disease; toxic algae blooms; hydropower damming and fragmenting of half the world's rivers; emerging contaminants, such as hormones; engineered nanomaterials; microplastic pollution; light and noise interference; saltier coastal freshwaters due to sea level rise; calcium concentrations falling below the needs of some freshwater organisms; and the additive (and possibly synergistic) effects of these threats. Invasive species Invasive plants and animals are a major issue for freshwater ecosystems, in many cases outcompeting native species and altering water conditions. Introduced species are especially devastating to ecosystems that are home to endangered species. An example is the Asian carp competing with the paddlefish in the Mississippi River. Common causes of invasive species in freshwater ecosystems include aquarium releases, introduction for sport fishing, and introduction for use as a food fish. Extinction of freshwater fauna Over 123 freshwater fauna species have gone extinct in North America since 1900. Of North American freshwater species, an estimated 48.5% of mussels, 22.8% of gastropods, 32.7% of crayfishes, 25.9% of amphibians, and 21.2% of fish are either endangered or threatened. Extinction rates of many species may increase severely into the next century because of invasive species, loss of keystone species, and species which are already functionally extinct (e.g., species which are not reproducing). Even using conservative estimates, freshwater fish extinction rates in North America are 877 times higher than background extinction rates (1 in 3,000,000 years). Projected extinction rates for freshwater animals are around five times greater than for land animals, and are comparable to the rates for rainforest communities. Given the dire state of freshwater biodiversity, a team of scientists and practitioners from around the globe recently drafted an Emergency Action Plan to try to restore freshwater biodiversity. Current freshwater biomonitoring techniques focus primarily on community structure, but some programs measure functional indicators like biochemical (or biological) oxygen demand, sediment oxygen demand, and dissolved oxygen. Macroinvertebrate community structure is commonly monitored because of its diverse taxonomy, ease of collection, sensitivity to a range of stressors, and overall value to the ecosystem. Additionally, algal community structure (often using diatoms) is measured in biomonitoring programs. Algae are also taxonomically diverse, easily collected, sensitive to a range of stressors, and overall valuable to the ecosystem. Algae grow very quickly, so their communities may reflect fast changes in environmental conditions. In addition to community structure, responses to freshwater stressors are investigated by experimental studies that measure organism behavioural changes and altered rates of growth, reproduction or mortality. Experimental results on single species under controlled conditions may not always reflect natural conditions and multi-species communities. The use of reference sites is common when defining the idealized "health" of a freshwater ecosystem. Reference sites can be selected spatially by choosing sites with minimal impacts from human disturbance and influence.
However, reference conditions may also be established temporally by using preserved indicators, such as diatom valves, macrophyte pollen, insect chitin and fish scales, to determine conditions prior to large-scale human disturbance. These temporal reference conditions are often easier to reconstruct in standing water than in moving water, because stable sediments can better preserve biological indicator materials. Climate change The effects of climate change greatly complicate and frequently exacerbate the impacts of other stressors that threaten many fish, invertebrates, phytoplankton, and other organisms. Climate change is increasing the average temperature of water bodies and worsening other issues, such as changes in substrate composition, oxygen concentration, and other system changes that have ripple effects on the biology of the system. Water temperatures have already increased by around 1 °C, and significant declines in ice coverage have caused subsequent ecosystem stresses.
Physical sciences
Water: General
Earth science
762691
https://en.wikipedia.org/wiki/Supercritical%20fluid
Supercritical fluid
A supercritical fluid (SCF) is a substance at a temperature and pressure above its critical point, where distinct liquid and gas phases do not exist, but below the pressure required to compress it into a solid. It can effuse through porous solids like a gas, overcoming the mass transfer limitations that slow liquid transport through such materials. SCFs are superior to gases in their ability to dissolve materials like liquids or solids. Near the critical point, small changes in pressure or temperature result in large changes in density, allowing many properties of a supercritical fluid to be "fine-tuned". Supercritical fluids occur in the atmospheres of the gas giants Jupiter and Saturn, the terrestrial planet Venus, and probably in those of the ice giants Uranus and Neptune. Supercritical water is found on Earth, such as the water issuing from black smokers, a type of hydrothermal vent. SCFs are used as a substitute for organic solvents in a range of industrial and laboratory processes, most commonly carbon dioxide for decaffeination and water for steam boilers for power generation. Some substances are soluble in the supercritical state of a solvent (e.g. carbon dioxide) but insoluble in the gaseous or liquid state, or vice versa. This can be used to extract a substance and transport it elsewhere in solution before depositing it in the desired place by allowing or inducing a phase transition in the solvent. Properties Supercritical fluids generally have properties between those of a gas and a liquid. Table 1 shows the critical properties of some substances that are commonly used as supercritical fluids (source: International Association for the Properties of Water and Steam, IAPWS). Table 2 shows density, diffusivity and viscosity for typical liquids, gases and supercritical fluids. Also, there is no surface tension in a supercritical fluid, as there is no liquid/gas phase boundary. By changing the pressure and temperature of the fluid, the properties can be "tuned" to be more liquid-like or more gas-like. One of the most important properties is the solubility of material in the fluid. Solubility in a supercritical fluid tends to increase with the density of the fluid (at constant temperature). Since density increases with pressure, solubility tends to increase with pressure. The relationship with temperature is a little more complicated. At constant density, solubility will increase with temperature. However, close to the critical point, the density can drop sharply with a slight increase in temperature. Therefore, close to the critical temperature, solubility often drops with increasing temperature, then rises again. Mixtures Typically, supercritical fluids are completely miscible with each other, so that a binary mixture forms a single gaseous phase if the critical point of the mixture is exceeded. However, exceptions are known in systems where one component is much more volatile than the other, which in some cases form two immiscible gas phases at high pressure and temperatures above the component critical points. This behavior has been found, for example, in the systems N2-NH3, NH3-CH4, SO2-N2 and n-butane-H2O. The critical point of a binary mixture can be estimated as the mole-fraction-weighted arithmetic mean of the critical temperatures and pressures of the two components, $T_c = \chi_1 T_{c1} + \chi_2 T_{c2}$ and $p_c = \chi_1 p_{c1} + \chi_2 p_{c2}$, where $\chi_i$ denotes the mole fraction of component $i$. For greater accuracy, the critical point can be calculated using equations of state, such as the Peng–Robinson, or group-contribution methods.
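As a minimal sketch of the weighted-mean estimate just given (in Python; the CO2 critical values appear elsewhere in this article, while the ethane values are assumptions included only for illustration):

    # Sketch: mole-fraction-weighted estimate of a binary mixture's
    # critical point. Real calculations would use an equation of state.
    def mixture_critical_point(x1, tc1, pc1, tc2, pc2):
        """Tc = x1*Tc1 + x2*Tc2 and pc = x1*pc1 + x2*pc2, with x2 = 1 - x1."""
        x2 = 1.0 - x1
        return x1 * tc1 + x2 * tc2, x1 * pc1 + x2 * pc2

    # CO2: Tc = 304.1 K, pc = 7.38 MPa; ethane (assumed): Tc = 305.3 K, pc = 4.87 MPa
    tc, pc = mixture_critical_point(0.5, 304.1, 7.38, 305.3, 4.87)
    print(f"Estimated mixture critical point: {tc:.1f} K, {pc:.2f} MPa")
    # -> about 304.7 K and 6.13 MPa for an equimolar mixture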
Other properties, such as density, can also be calculated using equations of state. Phase diagram Figures 1 and 2 show two-dimensional projections of a phase diagram. In the pressure-temperature phase diagram (Fig. 1) the boiling curve separates the gas and liquid regions and ends in the critical point, where the liquid and gas phases disappear to become a single supercritical phase. The appearance of a single phase can also be observed in the density-pressure phase diagram for carbon dioxide (Fig. 2). At well below the critical temperature, e.g., 280 K, as the pressure increases, the gas compresses and eventually (at just over 40 bar) condenses into a much denser liquid, resulting in the discontinuity in the line (vertical dotted line). The system consists of two phases in equilibrium, a dense liquid and a low-density gas. As the critical temperature is approached (300 K), the density of the gas at equilibrium becomes higher, and that of the liquid lower. At the critical point (304.1 K and 7.38 MPa (73.8 bar)), there is no difference in density, and the two phases become one fluid phase. Thus, above the critical temperature a gas cannot be liquefied by pressure. At slightly above the critical temperature (310 K), in the vicinity of the critical pressure, the line is almost vertical. A small increase in pressure causes a large increase in the density of the supercritical phase. Many other physical properties also show large gradients with pressure near the critical point, e.g. viscosity, the relative permittivity and the solvent strength, which are all closely related to the density. At higher temperatures, the fluid starts to behave more like an ideal gas, with a more linear density/pressure relationship, as can be seen in Figure 2. For carbon dioxide at 400 K, the density increases almost linearly with pressure.
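The steep density gradient near the critical point can be explored numerically; the following sketch assumes the third-party CoolProp property library (the exact densities returned depend on its equation of state):

    # Sketch: CO2 density versus pressure slightly above the critical
    # temperature (304.1 K). Requires CoolProp (pip install CoolProp).
    from CoolProp.CoolProp import PropsSI

    T = 310.0  # kelvin, a few degrees above CO2's critical temperature
    for p_bar in (70, 75, 80, 85, 90):
        rho = PropsSI("D", "T", T, "P", p_bar * 1e5, "CO2")  # density in kg/m^3
        print(f"{p_bar} bar -> {rho:7.1f} kg/m^3")
    # Near the critical pressure (~74 bar) a small pressure step produces a
    # large jump in density; repeating the loop at T = 400.0 shows a far
    # more gradual, nearly linear rise.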
Many pressurized gases are actually supercritical fluids. For example, nitrogen has a critical point of 126.2 K (−147 °C) and 3.4 MPa (34 bar). Therefore, nitrogen (or compressed air) in a gas cylinder above this pressure is actually a supercritical fluid. These are more often known as permanent gases. At room temperature, they are well above their critical temperature and therefore behave as a nearly ideal gas, similar to CO2 at 400 K in the example above. However, they cannot be liquefied by mechanical pressure unless cooled below their critical temperature; only gravitational pressure, such as within gas giants, can produce a liquid or solid at such high temperatures. Above the critical temperature, elevated pressures can increase the density enough that the SCF exhibits liquid-like density and behaviour. At very high pressures, an SCF can be compressed into a solid, because the melting curve extends to the right of the critical point in the P/T phase diagram. While the pressure required to compress supercritical CO2 into a solid can be, depending on the temperature, as low as 570 MPa, that required to solidify supercritical water is 14,000 MPa. The Fisher–Widom line, the Widom line, and the Frenkel line are thermodynamic concepts that allow liquid-like and gas-like states within the supercritical fluid to be distinguished. History In 1822, Baron Charles Cagniard de la Tour discovered the critical point of a substance in his famous cannon barrel experiments. Listening to discontinuities in the sound of a rolling flint ball in a sealed cannon filled with fluids at various temperatures, he observed the critical temperature. Above this temperature, the densities of the liquid and gas phases become equal and the distinction between them disappears, resulting in a single supercritical fluid phase. In recent years, a significant effort has been devoted to the investigation of various properties of supercritical fluids. Supercritical fluids have found application in a variety of fields, ranging from the extraction of floral fragrance from flowers to applications in food science such as creating decaffeinated coffee, functional food ingredients, pharmaceuticals, cosmetics, polymers, powders, bio- and functional materials, nano-systems, natural products, biotechnology, fossil and bio-fuels, microelectronics, energy and environment. Much of the excitement and interest of the past decade is due to the enormous progress made in increasing the power of relevant experimental tools. The development of new experimental methods and the improvement of existing ones continues to play an important role in this field, with recent research focusing on dynamic properties of fluids. Natural occurrence Hydrothermal circulation Hydrothermal circulation occurs within the Earth's crust wherever fluid becomes heated and begins to convect. These fluids are thought to reach supercritical conditions under a number of different settings, such as in the formation of porphyry copper deposits or high temperature circulation of seawater in the sea floor. At mid-ocean ridges, this circulation is most evident by the appearance of hydrothermal vents known as "black smokers". These are large (metres high) chimneys of sulfide and sulfate minerals which vent fluids up to 400 °C. The fluids appear like great black billowing clouds of smoke due to the precipitation of dissolved metals in the fluid. It is likely that at that depth many of these vent sites reach supercritical conditions, but most cool sufficiently by the time they reach the sea floor to be subcritical. One particular vent site, Turtle Pits, has displayed a brief period of supercriticality at the vent site. A further site, Beebe, in the Cayman Trough, is thought to display sustained supercriticality at the vent orifice. Planetary atmospheres The atmosphere of Venus is 96.5% carbon dioxide and 3.5% nitrogen. The surface pressure is about 9.3 MPa (93 bar) and the surface temperature about 737 K (464 °C), above the critical points of both major constituents and making the surface atmosphere a supercritical fluid. The interior atmospheres of the Solar System's four giant planets are composed mainly of hydrogen and helium at temperatures well above their critical points. The gaseous outer atmospheres of the gas giants Jupiter and Saturn transition smoothly into the dense liquid interior, while the nature of the transition zones of the ice giants Neptune and Uranus is unknown. Theoretical models of the extrasolar planet Gliese 876 d have posited an ocean of pressurized, supercritical fluid water with a sheet of solid high-pressure water ice at the bottom. Applications Supercritical fluid extraction The advantages of supercritical fluid extraction (compared with liquid extraction) are that it is relatively rapid because of the low viscosities and high diffusivities associated with supercritical fluids. Alternative solvents to supercritical fluids may be poisonous, flammable or an environmental hazard to a much larger extent than water or carbon dioxide are.
The extraction can be selective to some extent by controlling the density of the medium, and the extracted material is easily recovered by simply depressurizing, allowing the supercritical fluid to return to the gas phase and evaporate, leaving little or no solvent residue. Carbon dioxide is the most common supercritical solvent. It is used on a large scale for the decaffeination of green coffee beans, the extraction of hops for beer production, and the production of essential oils and pharmaceutical products from plants. A few laboratory test methods use supercritical fluid extraction in place of traditional solvents. Supercritical fluid decomposition Supercritical water can be used to decompose biomass via supercritical water gasification. This type of biomass gasification can be used to produce hydrocarbon fuels for use in an efficient combustion device or to produce hydrogen for use in a fuel cell. In the latter case, the hydrogen yield can be much higher than the hydrogen content of the biomass, due to steam reforming, in which water is a hydrogen-providing participant in the overall reaction. Dry-cleaning Supercritical carbon dioxide (SCD) can be used instead of PERC (perchloroethylene) or other undesirable solvents for dry-cleaning. Supercritical carbon dioxide sometimes intercalates into buttons, and, when the SCD is depressurized, the buttons pop or break apart. Detergents that are soluble in carbon dioxide improve the solvating power of the solvent. CO2-based dry cleaning equipment uses liquid CO2, not supercritical CO2, to avoid damage to the buttons. Supercritical fluid chromatography Supercritical fluid chromatography (SFC) can be used on an analytical scale, where it combines many of the advantages of high performance liquid chromatography (HPLC) and gas chromatography (GC). It can be used with non-volatile and thermally labile analytes (unlike GC) and can be used with the universal flame ionization detector (unlike HPLC), as well as producing narrower peaks due to rapid diffusion. In practice, the advantages offered by SFC have not been sufficient to displace the widely used HPLC and GC, except in a few cases such as chiral separations and analysis of high-molecular-weight hydrocarbons. For manufacturing, efficient preparative simulated moving bed units are available. The purity of the final products is very high, but the cost makes it suitable only for very high-value materials such as pharmaceuticals. Chemical reactions Changing the conditions of the reaction solvent can allow separation of phases for product removal, or a single phase for reaction. Rapid diffusion accelerates diffusion-controlled reactions. Temperature and pressure can tune the reaction along preferred pathways, e.g., to improve the yield of a particular chiral isomer. There are also significant environmental benefits over conventional organic solvents. Industrial syntheses that are performed at supercritical conditions include those of polyethylene from supercritical ethene, isopropyl alcohol from supercritical propene, 2-butanol from supercritical butene, and ammonia from a supercritical mix of nitrogen and hydrogen. Other reactions were, in the past, performed industrially in supercritical conditions, including the synthesis of methanol and thermal (non-catalytic) oil cracking. Because of the development of effective catalysts, the required temperatures of those two processes have been reduced and are no longer supercritical.
Impregnation and dyeing Impregnation is, in essence, the converse of extraction. A substance is dissolved in the supercritical fluid, the solution is flowed past a solid substrate, and the substance is deposited on or dissolves in the substrate. Dyeing, which is readily carried out on polymer fibres such as polyester using disperse (non-ionic) dyes, is a special case of this. Carbon dioxide also dissolves in many polymers, considerably swelling and plasticising them and further accelerating the diffusion process. Nano and micro particle formation The formation of small particles of a substance with a narrow size distribution is an important process in the pharmaceutical and other industries. Supercritical fluids provide a number of ways of achieving this by rapidly exceeding the saturation point of a solute by dilution, depressurization or a combination of these. These processes occur faster in supercritical fluids than in liquids, promoting nucleation or spinodal decomposition over crystal growth and yielding very small and regularly sized particles. Supercritical fluid processes have recently been shown to produce particles in the range of 5–2,000 nm. Generation of pharmaceutical cocrystals Supercritical fluids act as a new medium for the generation of novel crystalline forms of APIs (active pharmaceutical ingredients) known as pharmaceutical cocrystals. Supercritical fluid technology offers a new platform that allows single-step generation of particles that are difficult or even impossible to obtain by traditional techniques. The generation of pure and dried new cocrystals (crystalline molecular complexes comprising the API and one or more coformers in the crystal lattice) can be achieved through several unique properties of SCFs: the solvent power of supercritical CO2, its anti-solvent effect and its atomization enhancement. Supercritical drying Supercritical drying is a method of removing solvent without surface tension effects. As a liquid dries, the surface tension drags on small structures within a solid, causing distortion and shrinkage. Under supercritical conditions there is no surface tension, and the supercritical fluid can be removed without distortion. Supercritical drying is used in the manufacturing process of aerogels and the drying of delicate materials such as archaeological samples and biological samples for electron microscopy. Supercritical water electrolysis Electrolysis of water in a supercritical state reduces the overpotentials found in other electrolysers, thereby improving the electrical efficiency of the production of oxygen and hydrogen. Increased temperature reduces thermodynamic barriers and increases kinetics. No bubbles of oxygen or hydrogen form on the electrodes, so no insulating layer forms between catalyst and water, reducing the ohmic losses. The gas-like properties provide rapid mass transfer. Supercritical water oxidation Supercritical water oxidation uses supercritical water as a medium in which to oxidize hazardous waste, eliminating the production of toxic combustion products that burning can produce. The waste product to be oxidised is dissolved in the supercritical water along with molecular oxygen (or an oxidising agent that gives up oxygen upon decomposition, e.g. hydrogen peroxide), at which point the oxidation reaction occurs.
Supercritical water hydrolysis Supercritical hydrolysis is a method of converting all biomass polysaccharides, as well as the associated lignin, into low-molecular-weight compounds by contact with water alone under supercritical conditions. The supercritical water acts as a solvent, a supplier of bond-breaking thermal energy, a heat-transfer agent and a source of hydrogen atoms. All polysaccharides are converted into simple sugars in near-quantitative yield in a second or less. The aliphatic inter-ring linkages of lignin are also readily cleaved into free radicals that are stabilized by hydrogen originating from the water. The aromatic rings of the lignin are unaffected under short reaction times, so the lignin-derived products are low-molecular-weight mixed phenols. To take advantage of the very short reaction times needed for cleavage, a continuous reaction system must be devised; this also minimizes the amount of water heated to a supercritical state. Supercritical water gasification Supercritical water gasification is a process of exploiting the beneficial effect of supercritical water to convert aqueous biomass streams into clean water and gases such as H2, CH4, CO2 and CO. Supercritical fluid in power generation The efficiency of a heat engine is ultimately dependent on the temperature difference between heat source and sink (Carnot cycle). To improve the efficiency of power stations the operating temperature must be raised. Using water as the working fluid, this takes it into supercritical conditions. Efficiencies can be raised from about 39% for subcritical operation to about 45% using current technology. Many coal-fired supercritical steam generators are operational all over the world. Supercritical carbon dioxide is also proposed as a working fluid, which would have the advantage of a lower critical pressure than water, but issues with corrosion are not yet fully solved. One proposed application is the Allam cycle. Supercritical water reactors (SCWRs) are proposed advanced nuclear systems that offer similar thermal efficiency gains. Biodiesel production Conversion of vegetable oil to biodiesel is via a transesterification reaction, in which a triglyceride is converted to the methyl esters (of the fatty acids) plus glycerol. This is usually done using methanol and caustic or acid catalysts, but can be achieved using supercritical methanol without a catalyst. The method of using supercritical methanol for biodiesel production was first studied by Saka and his coworkers. This has the advantage of allowing a greater range and water content of feedstocks (in particular, used cooking oil); the product does not need to be washed to remove catalyst; and the process is easier to design as a continuous one. Enhanced oil recovery and carbon capture and storage Supercritical carbon dioxide is used to enhance oil recovery in mature oil fields. At the same time, there is the possibility of using "clean coal technology" to combine enhanced recovery methods with carbon sequestration. The CO2 is separated from other flue gases, compressed to the supercritical state, and injected into geological storage, possibly into existing oil fields to improve yields. At present, only schemes isolating fossil CO2 from natural gas actually use carbon storage (e.g., the Sleipner gas field), but there are many plans for future CCS schemes involving pre- or post-combustion CO2. There is also the possibility of reducing the amount of CO2 in the atmosphere by using biomass to generate power and sequestering the CO2 produced.
Enhanced geothermal system The use of supercritical carbon dioxide, instead of water, has been examined as a geothermal working fluid. Refrigeration Supercritical carbon dioxide is also emerging as a useful high-temperature refrigerant, being used in new, CFC/HFC-free domestic heat pumps that make use of the transcritical cycle. These systems are undergoing continuous development, with supercritical carbon dioxide heat pumps already being successfully marketed in Asia. The EcoCute systems from Japan are some of the first commercially successful high-temperature domestic water heat pumps. Supercritical fluid deposition Supercritical fluids can be used to deposit functional nanostructured films and nanometer-size particles of metals onto surfaces. The high diffusivities and concentrations of precursor in the fluid, as compared to the vacuum systems used in chemical vapour deposition, allow deposition to occur in a surface-reaction-rate-limited regime, providing stable and uniform interfacial growth. This is crucial in developing more powerful electronic components, and metal particles deposited in this way are also powerful catalysts for chemical synthesis and electrochemical reactions. Additionally, due to the high rates of precursor transport in solution, it is possible to coat high-surface-area particles, which under chemical vapour deposition would exhibit depletion near the outlet of the system and would also be likely to show unstable interfacial growth features such as dendrites. The result is very thin and uniform films deposited at rates much faster than atomic layer deposition, the best other tool for particle coating at this size scale. Antimicrobial properties CO2 at high pressures has antimicrobial properties. While its effectiveness has been shown for various applications, the mechanisms of inactivation have not been fully understood, although they have been investigated for more than 60 years.
Physical sciences
States of matter
Physics
763138
https://en.wikipedia.org/wiki/Sonic%20weapon
Sonic weapon
Sonic and ultrasonic weapons (USW) are weapons of various types that use sound to injure or incapacitate an opponent. Some sonic weapons make a focused beam of sound or of ultrasound; others produce an area field of sound. Military and police forces make some limited use of sonic weapons. Use and deployment Extremely high-power sound waves can disrupt or destroy the eardrums of a target and cause severe pain or disorientation. This is usually sufficient to incapacitate a person. Less powerful sound waves can cause humans to experience nausea or discomfort. The possibility of a device that produces a frequency causing vibration of the eyeballs (and therefore distortion of vision) was suggested by paranormal researcher Vic Tandy in the 1990s while attempting to demystify a "haunting" in his laboratory in Coventry. This "spook" was characterised by a feeling of unease and vague glimpses of a grey apparition. Some detective work implicated a newly installed extractor fan that, Tandy found, was generating infrasound of 18.9 Hz, 0.3 Hz, and 9 Hz. A long-range acoustic device (LRAD) produces a 30-degree cone of audible sound at frequencies within the human hearing spectrum (20–20,000 Hz). An LRAD was used by the crew of the cruise ship Seabourn Spirit in 2005 to deter pirates who chased and attacked the ship. More commonly this device and others of similar design have been used to disperse protesters and rioters in crowd control efforts. A similar system is called a "magnetic acoustic device". The Mosquito sonic devices have been used in the United Kingdom to deter teenagers from lingering around shops in target areas. The device works by emitting an ultra-high-frequency blast (around 19–20 kHz) that teenagers and people under approximately 20 are susceptible to and find uncomfortable. Age-related hearing loss apparently prevents the ultra-high-pitched sound from causing a nuisance to those in their late twenties and above, though this is wholly dependent on a young person's past exposure to high sound pressure levels. In 2020 and 2021, Greek authorities used long-range sound cannons to deter migrants on the Turkish border. High-amplitude sound of a specific pattern at a frequency close to the sensitivity peak of human hearing (2–3 kHz) is used as a burglar deterrent. Some police forces have used sound cannons against protesters, for example during the 2009 G20 Pittsburgh summit, the 2014 Ferguson unrest, and the 2016 Dakota Access Pipeline protest in North Dakota, among others. It has been reported that "sonic attacks" may have taken place at the American embassy in Cuba in 2016 and 2017 ("Havana syndrome"), leading to health problems, including hearing loss, in US and Canadian government employees at their embassies in Havana. However, more recent reports hypothesize microwave energy as the cause, or a mass psychogenic condition brought on by extended periods of stress, such as working in an embassy of a nation considered hostile to one's own. Research Studies have found that exposure to high-intensity ultrasound at frequencies from 700 kHz to 3.6 MHz can cause lung and intestinal damage in mice. Heart rate patterns following vibroacoustic stimulation have in some cases shown serious negative consequences such as atrial flutter and bradycardia.
See: Microwave auditory effect Effects other than to the ears The extra-aural (unrelated to hearing) bioeffects on various internal organs and the central nervous system include auditory shifts, vibrotactile sensitivity change, muscle contraction, cardiovascular function change, central nervous system effects, vestibular (inner ear) effects, and chest wall/lung tissue effects. Researchers found that low-frequency sonar exposure could result in significant cavitation, hypothermia, and tissue shearing. No follow-up experiments were recommended. Tests performed on mice show the threshold for both lung and liver damage occurs at about 184 dB. Damage increases rapidly as intensity is increased. The American Institute of Ultrasound in Medicine (AIUM) has stated that there have been no proven biological effects associated with an unfocused sound beam with intensities below 100 mW/cm² SPTA or focused sound beams below an intensity level of 1 mW/cm² SPTA. Noise-induced neurologic disturbances in scuba divers exposed to continuous low-frequency tones for durations longer than 15 minutes have in some cases involved the development of immediate and long-term problems affecting brain tissue. The symptoms resembled those of individuals who had suffered minor head injuries. One theory for a causal mechanism is that the prolonged sound exposure resulted in enough mechanical strain to brain tissue to induce an encephalopathy. Divers and aquatic mammals may also suffer lung and sinus injuries from high-intensity, low-frequency sound. This is due to the ease with which low-frequency sound passes from water into a body, but not into any pockets of gas in the body, which reflect the sound due to mismatched acoustic impedance.
Technology
Less-lethal weapons
null
763490
https://en.wikipedia.org/wiki/Fold%20%28geology%29
Fold (geology)
In structural geology, a fold is a stack of originally planar surfaces, such as sedimentary strata, that are bent or curved ("folded") during permanent deformation. Folds in rocks vary in size from microscopic crinkles to mountain-sized folds. They occur as single isolated folds or in periodic sets (known as fold trains). Synsedimentary folds are those formed during sedimentary deposition. Folds form under varied conditions of stress, pore pressure, and temperature gradient, as evidenced by their presence in soft sediments, the full spectrum of metamorphic rocks, and even as primary flow structures in some igneous rocks. A set of folds distributed on a regional scale constitutes a fold belt, a common feature of orogenic zones. Folds are commonly formed by shortening of existing layers, but may also be formed as a result of displacement on a non-planar fault (fault bend fold), at the tip of a propagating fault (fault propagation fold), by differential compaction, or due to the effects of a high-level igneous intrusion, e.g. above a laccolith. Fold terminology The fold hinge is the line joining points of maximum curvature on a folded surface. This line may be either straight or curved. The term hinge line has also been used for this feature. A fold surface seen perpendicular to its shortening direction can be divided into hinge and limb portions; the limbs are the flanks of the fold, and the limbs converge at the hinge zone. Within the hinge zone lies the hinge point, which is the point of minimum radius of curvature (maximum curvature) of the fold. The crest of the fold represents the highest point of the fold surface, whereas the trough is the lowest point. The inflection point of a fold is the point on a limb at which the concavity reverses; on regular folds, this is the midpoint of the limb. The axial surface is defined as a plane connecting all the hinge lines of stacked folded surfaces. If the axial surface is planar, it is called an axial plane and can be described in terms of strike and dip. Folds can have a fold axis. A fold axis "is the closest approximation to a straight line that when moved parallel to itself, generates the form of the fold" (Ramsay 1967). A fold that can be generated by a fold axis is called a cylindrical fold. This term has been broadened to include near-cylindrical folds. Often, the fold axis is the same as the hinge line. Descriptive features Fold size Minor folds are quite frequently seen in outcrop; major folds seldom are, except in the more arid countries. Minor folds can, however, often provide the key to the major folds they are related to. They reflect the same shape and style as the major folds, indicate the direction in which the closures of the major folds lie, and their cleavage indicates the attitude of the axial planes of the major folds and their direction of overturning. Fold shape A fold can be shaped like a chevron, with planar limbs meeting at an angular axis; as cuspate, with curved limbs; as circular, with a curved axis; or as elliptical, with unequal wavelengths. Fold tightness Fold tightness is defined by the size of the angle between the fold's limbs (as measured tangential to the folded surface at the inflection line of each limb), called the interlimb angle. Gentle folds have an interlimb angle of between 180° and 120°, open folds range from 120° to 70°, close folds from 70° to 30°, and tight folds from 30° to 0°. Isoclines, or isoclinal folds, have an interlimb angle of between 10° and zero, with essentially parallel limbs.
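As a minimal illustrative sketch of the interlimb-angle classes just quoted (not part of any standard geology library; the function name and the handling of boundary values are assumptions, and the overlap in the quoted ranges, tight 30–0° versus isoclinal 10–0°, is resolved here by treating angles of 10° or less as isoclinal):

```python
# Illustrative only: classify fold tightness from the interlimb angle,
# using the degree ranges quoted in the text above.
def classify_fold_tightness(interlimb_angle_deg: float) -> str:
    """Map an interlimb angle (degrees) to a fold tightness class."""
    if not 0 <= interlimb_angle_deg <= 180:
        raise ValueError("interlimb angle must be between 0 and 180 degrees")
    if interlimb_angle_deg > 120:
        return "gentle"      # 180-120 degrees
    if interlimb_angle_deg > 70:
        return "open"        # 120-70 degrees
    if interlimb_angle_deg > 30:
        return "close"       # 70-30 degrees
    if interlimb_angle_deg > 10:
        return "tight"       # 30-10 degrees
    return "isoclinal"       # ~10-0 degrees, essentially parallel limbs

print(classify_fold_tightness(95))  # open
print(classify_fold_tightness(5))   # isoclinal
```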
Fold symmetry Not all folds are equal on both sides of the axis of the fold. Those with limbs of relatively equal length are termed symmetrical, and those with highly unequal limbs are asymmetrical. Asymmetrical folds generally have an axis at an angle to the original unfolded surface they formed on. Facing and vergence Vergence is calculated in a direction perpendicular to the fold axis. Deformation style classes Folds that maintain uniform layer thickness are classed as concentric folds. Those that do not are called similar folds. Similar folds tend to display thinning of the limbs and thickening of the hinge zone. Concentric folds are caused by warping from active buckling of the layers, whereas similar folds usually form by some form of shear flow where the layers are not mechanically active. Ramsay has proposed a classification scheme for folds, often used to describe folds in profile, based upon the curvature of the inner and outer lines of a fold and the behavior of dip isogons, that is, lines connecting points of equal dip on adjacent folded surfaces. Types of fold Linear Anticline: linear, strata normally dip away from the axial center, oldest strata in center irrespective of orientation. Syncline: linear, strata normally dip toward the axial center, youngest strata in center irrespective of orientation. Antiform: linear, strata dip away from the axial center, age unknown or inverted. Synform: linear, strata dip toward the axial center, age unknown or inverted. Monocline: linear, strata dip in one direction between horizontal layers on each side. Recumbent: linear, fold axial plane oriented at a low angle, resulting in overturned strata in one limb of the fold. Other Dome: nonlinear, strata dip away from center in all directions, oldest strata in center. Basin: nonlinear, strata dip toward center in all directions, youngest strata in center. Chevron: angular fold with straight limbs and small hinges. Slump: typically monoclinal, the result of differential compaction or dissolution during sedimentation and lithification. Ptygmatic: folds are chaotic, random and disconnected; typical of sedimentary slump folding, migmatites and decollement detachment zones. Parasitic: short-wavelength folds formed within a larger-wavelength fold structure, normally associated with differences in bed thickness. Disharmonic: folds in adjacent layers with different wavelengths and shapes. (A homocline involves strata dipping in the same direction, though not necessarily any folding.) Causes of folding Folds appear on all scales, in all rock types, at all levels in the crust. They arise from a variety of causes. Layer-parallel shortening When a sequence of layered rocks is shortened parallel to its layering, this deformation may be accommodated in a number of ways: homogeneous shortening, reverse faulting or folding. The response depends on the thickness of the mechanical layering and the contrast in properties between the layers. If the layering does begin to fold, the fold style is also dependent on these properties. Isolated thick competent layers in a less competent matrix control the folding and typically generate classic rounded buckle folds accommodated by deformation in the matrix. In the case of regular alternations of layers of contrasting properties, such as sandstone-shale sequences, kink-bands, box-folds and chevron folds are normally produced.
Fault-related folding Many folds are directly related to faults, associated with their propagation, displacement and the accommodation of strains between neighboring faults. Fault bend folding Fault-bend folds are caused by displacement along a non-planar fault. In non-vertical faults, the hanging-wall deforms to accommodate the mismatch across the fault as displacement progresses. Fault bend folds occur in both extensional and thrust faulting. In extension, listric faults form rollover anticlines in their hanging walls. In thrusting, ramp anticlines form whenever a thrust fault cuts up section from one detachment level to another. Displacement over this higher-angle ramp generates the folding. Fault propagation folding Fault propagation folds or tip-line folds are caused when displacement occurs on an existing fault without further propagation. In both reverse and normal faults this leads to folding of the overlying sequence, often in the form of a monocline. Detachment folding When a thrust fault continues to displace above a planar detachment without further fault propagation, detachment folds may form, typically of box-fold style. These generally occur above a good detachment such as in the Jura Mountains, where the detachment occurs on middle Triassic evaporites. Folding in shear zones Shear zones that approximate to simple shear typically contain minor asymmetric folds, with the direction of overturning consistent with the overall shear sense. Some of these folds have highly curved hinge-lines and are referred to as sheath folds. Folds in shear zones can be inherited, formed due to the orientation of pre-shearing layering or formed due to instability within the shear flow. Folding in sediments Recently deposited sediments are normally mechanically weak and prone to remobilization before they become lithified, leading to folding. To distinguish them from folds of tectonic origin, such structures are called synsedimentary (formed during sedimentation). Slump folding: When slumps form in poorly consolidated sediments, they commonly undergo folding, particularly at their leading edges, during their emplacement. The asymmetry of the slump folds can be used to determine paleoslope directions in sequences of sedimentary rocks. Dewatering: Rapid dewatering of sandy sediments, possibly triggered by seismic activity, can cause convolute bedding. Compaction: Folds can be generated in a younger sequence by differential compaction over older structures such as fault blocks and reefs. Igneous intrusion The emplacement of igneous intrusions tends to deform the surrounding country rock. In the case of high-level intrusions, near the Earth's surface, this deformation is concentrated above the intrusion and often takes the form of folding, as with the upper surface of a laccolith. Flow folding The compliance of rock layers is referred to as competence: a competent layer or bed of rock can withstand an applied load without collapsing and is relatively strong, while an incompetent layer is relatively weak. When rock behaves as a fluid, as in the case of very weak rock such as rock salt, or any rock that is buried deeply enough, it typically shows flow folding (also called passive folding, because little resistance is offered): the strata appear shifted undistorted, assuming any shape impressed upon them by surrounding more rigid rocks. The strata simply serve as markers of the folding. Such folding is also a feature of many igneous intrusions and glacier ice. 
Folding mechanisms Folding of rocks must balance the deformation of layers with the conservation of volume in a rock mass. This occurs by several mechanisms. Flexural slip Flexural slip allows folding by creating layer-parallel slip between the layers of the folded strata, which, together, accommodate the deformation. A good analogy is bending a phone book, where volume preservation is accommodated by slip between the pages of the book. The fold formed by the compression of competent rock beds is called a "flexure fold". Buckling Typically, folding is thought to occur by simple buckling of a planar surface and its confining volume. The volume change is accommodated by layer-parallel shortening, with the volume growing in thickness. Folding under this mechanism is typical of a similar fold style, as thinned limbs are shortened horizontally and thickened hinges do so vertically. Mass displacement If the folding deformation cannot be accommodated by flexural slip or volume-change shortening (buckling), the rocks are generally removed from the path of the stress. This is achieved by pressure dissolution, a form of metamorphic process, in which rocks shorten by dissolving constituents in areas of high strain and redepositing them in areas of lower strain. Folds generated in this way include examples in migmatites and areas with a strong axial planar cleavage. Mechanics of folding Folds in rock are formed in response to the stress field in which the rocks are located and to the rheology, or method of response to stress, of the rock at the time at which the stress is applied. The rheology of the layers being folded determines characteristic features of the folds that are measured in the field. Rocks that deform more easily form many short-wavelength, high-amplitude folds. Rocks that do not deform as easily form long-wavelength, low-amplitude folds (a classical quantitative statement of this relationship is sketched at the end of this article). Economic implications Mining industry Layers of rock that fold into a hinge need to accommodate large deformations in the hinge zone. This results in voids between the layers. These voids, and especially the fact that the water pressure is lower in the voids than outside of them, act as triggers for the deposition of minerals. Over millions of years, this process is capable of gathering large quantities of trace minerals from large expanses of rock and depositing them at very concentrated sites. This may be a mechanism responsible for mineral veins. To summarize, when searching for veins of valuable minerals, it might be wise to look for highly folded rock, and this is the reason why the mining industry is very interested in the theory of geological folding. Oil industry Anticlinal traps are formed by folding of rock. For example, if a porous sandstone unit covered with low-permeability shale is folded into an anticline, it may form a hydrocarbon trap, with oil accumulating in the crest of the fold. Most anticlinal traps are produced as a result of sideways pressure, folding the layers of rock, but can also occur from sediments being compacted.
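As a hedged quantitative footnote to the wavelength statement in the mechanics section above: classical buckle-fold theory (Biot's dominant-wavelength analysis, which this article does not itself cite) predicts that a single competent layer of thickness $h$ and viscosity $\mu_L$, embedded in a matrix of viscosity $\mu_M$, buckles preferentially at the dominant wavelength

$$L_d = 2\pi h \left( \frac{\mu_L}{6\,\mu_M} \right)^{1/3}$$

so thicker or relatively more competent layers buckle at longer wavelengths, consistent with the field observation that easily deformed rocks form short-wavelength, high-amplitude folds.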
Physical sciences
Geology
null
763555
https://en.wikipedia.org/wiki/Cogeneration
Cogeneration
Cogeneration or combined heat and power (CHP) is the use of a heat engine or power station to generate electricity and useful heat at the same time. Cogeneration is a more efficient use of fuel or heat, because otherwise-wasted heat from electricity generation is put to some productive use. Combined heat and power (CHP) plants recover otherwise wasted thermal energy for heating. This is also called combined heat and power district heating. Small CHP plants are an example of decentralized energy. By-product heat at moderate temperatures can also be used in absorption refrigerators for cooling. The supply of high-temperature heat first drives a gas or steam turbine-powered generator. The resulting low-temperature waste heat is then used for water or space heating. At smaller scales (typically below 1 MW), a gas engine or diesel engine may be used. Cogeneration is also common with geothermal power plants, as they often produce relatively low-grade heat; binary cycles may be necessary to reach acceptable thermal efficiency for electricity generation at all. Cogeneration is less commonly employed in nuclear power plants, as NIMBY and safety considerations have often kept them further from population centers than comparable chemical plants, and district heating is less efficient in lower-population-density areas due to transmission losses. Cogeneration was practiced in some of the earliest installations of electrical generation. Before central stations distributed power, industries generating their own power used exhaust steam for process heating. Large office and apartment buildings, hotels, and stores commonly generated their own power and used waste steam for building heat. Due to the high cost of early purchased power, these CHP operations continued for many years after utility electricity became available. Overview Many process industries, such as chemical plants, oil refineries and pulp and paper mills, require large amounts of process heat for such operations as chemical reactors, distillation columns, steam driers and other uses. This heat, which is usually used in the form of steam, can be generated at the typically low pressures used in heating, or can be generated at much higher pressure and passed through a turbine first to generate electricity. In the turbine the steam pressure and temperature are lowered as the internal energy of the steam is converted to work. The lower-pressure steam leaving the turbine can then be used for process heat. Steam turbines at thermal power stations are normally designed to be fed high-pressure steam, which exits the turbine at a condenser operating a few degrees above ambient temperature and at a few millimeters of mercury absolute pressure. (This is called a condensing turbine.) For all practical purposes this steam has negligible useful energy before it is condensed. Steam turbines for cogeneration are designed for extraction of some steam at lower pressures after it has passed through a number of turbine stages, with the un-extracted steam going on through the turbine to a condenser. In this case, the extracted steam causes a mechanical power loss in the downstream stages of the turbine. Or they are designed, with or without extraction, for final exhaust at back pressure (non-condensing). The extracted or exhaust steam is used for process heating. Steam at ordinary process heating conditions still has a considerable amount of enthalpy that could be used for power generation, so cogeneration has an opportunity cost.
A typical power-generation turbine in a paper mill may have several extraction pressures and a final back pressure; in practice these pressures are custom designed for each facility. Conversely, simply generating process steam for industrial purposes, instead of steam at a pressure high enough to generate power at the top end, also has an opportunity cost (see: Steam supply and exhaust conditions). The capital and operating cost of high-pressure boilers, turbines, and generators is substantial. This equipment is normally operated continuously, which usually limits self-generated power to large-scale operations. A combined cycle (in which several thermodynamic cycles produce electricity) may also be used to extract heat, using a heating system as the condenser of the power plant's bottoming cycle. For example, the RU-25 MHD generator in Moscow heated a boiler for a conventional steam power plant, whose condensate was then used for space heat. A more modern system might use a gas turbine powered by natural gas, whose exhaust powers a steam plant, whose condensate provides heat. Cogeneration plants based on a combined cycle power unit can have thermal efficiencies above 80%. The viability of CHP (sometimes termed the utilisation factor), especially in smaller CHP installations, depends on a good baseload of operation, both in terms of an on-site (or near-site) electrical demand and heat demand. In practice, an exact match between the heat and electricity needs rarely exists. A CHP plant can either meet the need for heat (heat-driven operation) or be run as a power plant with some use of its waste heat, the latter being less advantageous in terms of its utilisation factor and thus its overall efficiency. The viability can be greatly increased where opportunities for trigeneration exist. In such cases, the heat from the CHP plant is also used as a primary energy source to deliver cooling by means of an absorption chiller. CHP is most efficient when heat can be used on-site or very close to it. Overall efficiency is reduced when the heat must be transported over longer distances. This requires heavily insulated pipes, which are expensive and inefficient; whereas electricity can be transmitted along a comparatively simple wire, and over much longer distances for the same energy loss. A car engine becomes a CHP plant in winter when the reject heat is useful for warming the interior of the vehicle. The example illustrates the point that deployment of CHP depends on heat uses in the vicinity of the heat engine. Thermally enhanced oil recovery (TEOR) plants often produce a substantial amount of excess electricity. After generating electricity, these plants pump leftover steam into heavy oil wells so that the oil will flow more easily, increasing production. Cogeneration plants are commonly found in district heating systems of cities, central heating systems of larger buildings (e.g. hospitals, hotels, prisons) and are commonly used in industry in thermal production processes for process water, cooling, steam production or CO2 fertilization. Trigeneration or combined cooling, heat and power (CCHP) refers to the simultaneous generation of electricity and useful heating and cooling from the combustion of a fuel or a solar heat collector. The terms cogeneration and trigeneration can also be applied to power systems simultaneously generating electricity, heat, and industrial chemicals (e.g., syngas).
Trigeneration differs from cogeneration in that the waste heat is used for both heating and cooling, typically in an absorption refrigerator. Combined cooling, heat, and power systems can attain higher overall efficiencies than cogeneration or traditional power plants. In the United States, the application of trigeneration in buildings is called building cooling, heating, and power. Heating and cooling output may operate concurrently or alternately depending on need and system construction. Types of plants Topping cycle plants primarily produce electricity from a steam turbine. Partly expanded steam is then condensed in a heating condenser at a temperature level suitable for, e.g., district heating or water desalination. Bottoming cycle plants produce high-temperature heat for industrial processes, then a waste heat recovery boiler feeds an electrical plant. Bottoming cycle plants are only used in industrial processes that require very high temperatures, such as furnaces for glass and metal manufacturing, so they are less common. Large cogeneration systems provide heating water and power for an industrial site or an entire town. Common CHP plant types are: Gas turbine CHP plants using the waste heat in the flue gas of gas turbines. The fuel used is typically natural gas. Gas engine CHP plants use a reciprocating gas engine, which is generally more competitive than a gas turbine up to about 5 MW. The gaseous fuel used is normally natural gas. These plants are generally manufactured as fully packaged units that can be installed within a plantroom or external plant compound with simple connections to the site's gas supply, electrical distribution network and heating systems. Biofuel engine CHP plants use an adapted reciprocating gas engine or diesel engine, depending upon which biofuel is being used, and are otherwise very similar in design to a gas engine CHP plant. The advantage of using a biofuel is one of reduced fossil fuel consumption and thus reduced carbon emissions. These plants are generally manufactured as fully packaged units that can be installed within a plantroom or external plant compound with simple connections to the site's electrical distribution and heating systems. Another variant is the wood gasifier CHP plant, whereby a wood pellet or wood chip biofuel is gasified in a zero-oxygen, high-temperature environment; the resulting gas is then used to power the gas engine. Combined cycle power plants adapted for CHP. Molten-carbonate fuel cells and solid oxide fuel cells have a hot exhaust, very suitable for heating. Steam turbine CHP plants that use the heating system as the steam condenser for the steam turbine. Nuclear power plants, similar to other steam turbine power plants, can be fitted with extractions in the turbines to bleed partially expanded steam to a heating system. With a heating system temperature of 95 °C it is possible to extract about 10 MW of heat for every MW of electricity lost. With a temperature of 130 °C the gain is slightly smaller, about 7 MW for every MWe lost. A Czech research team has proposed a "Teplator" system, in which heat from spent fuel rods is recovered for the purpose of residential heating. Smaller cogeneration units may use a reciprocating engine or Stirling engine. The heat is removed from the exhaust and radiator.
The systems are popular in small sizes because small gas and diesel engines are less expensive than small gas- or oil-fired steam-electric plants. Some cogeneration plants are fired by biomass, or industrial and municipal solid waste (see incineration). Some CHP plants use waste gas as the fuel for electricity and heat generation. Waste gases can be gas from animal waste, landfill gas, gas from coal mines, sewage gas, and combustible industrial waste gas. Some cogeneration plants combine gas and solar photovoltaic generation to further improve technical and environmental performance. Such hybrid systems can be scaled down to the building level and even individual homes. MicroCHP Micro combined heat and power, or "micro cogeneration", is a so-called distributed energy resource (DER). The installation is usually less than 5 kWe in a house or small business. Instead of burning fuel to merely heat space or water, some of the energy is converted to electricity in addition to heat. This electricity can be used within the home or business or, if permitted by the grid management, sold back into the electric power grid. Delta-ee consultants stated in 2013 that, with 64% of global sales, fuel cell micro-combined heat and power passed conventional systems in sales in 2012. In 2012, 20,000 units were sold in Japan within the Ene Farm project, at a price of $22,600 before installation. With a lifetime of around 60,000 hours for PEM fuel cell units, which shut down at night, this equates to an estimated lifetime of between ten and fifteen years. For 2013, a state subsidy for 50,000 units was in place. MicroCHP installations use five different technologies: microturbines, internal combustion engines, Stirling engines, closed-cycle steam engines, and fuel cells. One author indicated in 2008 that MicroCHP based on Stirling engines is the most cost-effective of the so-called microgeneration technologies in abating carbon emissions. A 2013 UK report from Ecuity Consulting stated that MCHP is the most cost-effective method of using gas to generate energy at the domestic level. However, advances in reciprocating engine technology are adding efficiency to CHP plants, particularly in the biogas field. As both MiniCHP and CHP have been shown to reduce emissions, they could play a large role in the field of CO2 reduction from buildings, where more than 14% of emissions can be saved using CHP in buildings. The University of Cambridge reported a cost-effective steam engine MicroCHP prototype in 2017 which has the potential to be commercially competitive in the following decades. Quite recently, fuel cell micro-CHP plants can be found in some private homes; these can operate on hydrogen, or on other fuels such as natural gas or LPG. When running on natural gas, such a plant relies on steam reforming of natural gas to convert the natural gas to hydrogen prior to use in the fuel cell. This hence still emits CO2 (see the reforming reaction), but (temporarily) running on this can be a good solution until hydrogen starts to be distributed through the (natural gas) piping system. Another MicroCHP example is a natural gas or propane fueled electricity-producing condensing furnace. It combines the fuel-saving technique of cogeneration, producing electric power and useful heat from a single source of combustion.
The condensing furnace is a forced-air gas system with a secondary heat exchanger that allows heat to be extracted from combustion products down to the ambient temperature, along with recovering heat from the water vapor. The chimney is replaced by a water drain and a vent to the side of the building. Trigeneration A plant producing electricity, heat and cold is called a trigeneration or polygeneration plant. Cogeneration systems linked to absorption chillers or adsorption chillers use waste heat for refrigeration. Combined heat and power district heating In the United States, Consolidated Edison distributes 66 billion kilograms of steam each year through its seven cogeneration plants to 100,000 buildings in Manhattan, the biggest steam district in the United States. The peak delivery is 10 million pounds per hour (approximately 2.5 GW). Industrial CHP Cogeneration is still common in pulp and paper mills, refineries and chemical plants. In this "industrial cogeneration/CHP", the heat is typically recovered at higher temperatures (above 100 °C) and used for process steam or drying duties. This is more valuable and flexible than low-grade waste heat, but there is a slight loss of power generation. The increased focus on sustainability has made industrial CHP more attractive, as it substantially reduces carbon footprint compared to generating steam or burning fuel on-site and importing electric power from the grid. Smaller industrial co-generation units have an output capacity of 5–25 MW and represent a viable off-grid option for a variety of remote applications to reduce carbon emissions. Utility pressures versus self-generated industrial Industrial cogeneration plants normally operate at much lower boiler pressures than utilities. Among the reasons are: Cogeneration plants face possible contamination of returned condensate. Because boiler feed water from cogeneration plants has much lower return rates than 100% condensing power plants, industries usually have to treat proportionately more boiler make-up water. Boiler feed water must be completely oxygen-free and de-mineralized, and the higher the pressure the more critical the level of purity of the feed water. Utilities typically generate power at larger scale than industry, which helps offset the higher capital costs of high pressure. Utilities are less likely to have sharp load swings than industrial operations, which deal with shutting down or starting up units that may represent a significant percentage of either steam or power demand. Heat recovery steam generators A heat recovery steam generator (HRSG) is a steam boiler that uses hot exhaust gases from the gas turbines or reciprocating engines in a CHP plant to heat up water and generate steam. The steam, in turn, drives a steam turbine or is used in industrial processes that require heat. HRSGs used in the CHP industry are distinguished from conventional steam generators by the following main features: The HRSG is designed based upon the specific features of the gas turbine or reciprocating engine that it will be coupled to. Since the exhaust gas temperature is relatively low, heat transmission is accomplished mainly through convection. The exhaust gas velocity is limited by the need to keep head losses down. Thus, the transmission coefficient is low, which calls for a large heating surface area.
Since the temperature difference between the hot gases and the fluid to be heated (steam or water) is low, and with the heat transmission coefficient being low as well, the evaporator and economizer are designed with plate-fin heat exchangers. Cogeneration using biomass Biomass refers to any plant or animal matter that can be reused as a source of heat or electricity, such as sugarcane, vegetable oils, wood, organic waste and residues from the food or agricultural industries. Brazil is now considered a world reference in terms of energy generation from biomass. A growing sector in the use of biomass for power generation is the sugar and alcohol sector, which mainly uses sugarcane bagasse as fuel for thermal and electric power generation. Power cogeneration in the sugar and alcohol sector In the sugarcane industry, cogeneration is fueled by the bagasse residue of sugar refining, which is burned to produce steam. Some steam can be sent through a turbine that turns a generator, producing electric power. Energy cogeneration in sugarcane industries located in Brazil is a practice that has been growing in recent years. With the adoption of energy cogeneration in the sugar and alcohol sector, the sugarcane industries are able to supply the electric energy demand needed to operate, and to generate a surplus that can be commercialized. Advantages of cogeneration using sugarcane bagasse In comparison with electric power generation from fossil fuel-based thermoelectric plants, such as those burning natural gas, energy generation using sugarcane bagasse has environmental advantages due to the reduction of emissions. In addition to the environmental advantages, cogeneration using sugarcane bagasse presents advantages in terms of efficiency compared to thermoelectric generation, through the final destination of the energy produced. While in thermoelectric generation part of the heat produced is lost, in cogeneration this heat can be used in the production processes, increasing the overall efficiency of the process. Disadvantages of cogeneration using sugarcane bagasse In sugarcane cultivation, potassium sources containing high concentrations of chlorine, such as potassium chloride (KCl), are usually used. Considering that KCl is applied in huge quantities, sugarcane ends up absorbing high concentrations of chlorine. Due to this absorption, when the sugarcane bagasse is burned for power cogeneration, dioxins and methyl chloride end up being emitted. Dioxins are considered highly toxic and carcinogenic. Methyl chloride, when emitted and reaching the stratosphere, is very harmful to the ozone layer, since chlorine combines with ozone molecules in a catalytic reaction that breaks down the ozone. After each reaction, chlorine starts a destructive cycle with another ozone molecule; in this way, a single chlorine atom can destroy thousands of ozone molecules. As these molecules are broken down, they are unable to absorb ultraviolet rays; as a result, UV radiation is more intense at the Earth's surface and global warming worsens. Comparison with a heat pump A heat pump may be compared with a CHP unit as follows.
If, to supply thermal energy, the exhaust steam from the turbo-generator must be taken at a higher temperature than that at which the system would produce the most electricity, the lost electrical generation is as if a heat pump were used to provide the same heat, taking electrical power from a generator running at lower output temperature and higher efficiency. Typically, for every unit of electrical power lost, about 6 units of heat are made available. Thus CHP has an effective coefficient of performance (COP), compared to a heat pump, of about 6. However, for a remotely operated heat pump, losses in the electrical distribution network would need to be considered, of the order of 6%. Because the losses are proportional to the square of the current, losses during peak periods are much higher than this, and widespread (i.e. citywide) application of heat pumps would be likely to overload the distribution and transmission grids unless they were substantially reinforced. It is also possible to run a heat-driven operation combined with a heat pump, where the excess electricity (as heat demand is the defining factor on use) is used to drive a heat pump. As heat demand increases, more electricity is generated to drive the heat pump, with the waste heat also heating the heating fluid. As the efficiency of heat pumps depends on the difference between hot-end and cold-end temperature (efficiency rises as the difference decreases), it may be worthwhile to combine even relatively low-grade waste heat, otherwise unsuitable for home heating, with heat pumps. For example, a large enough reservoir of cooling water can significantly improve the efficiency of heat pumps drawing from it, compared to air-source heat pumps drawing from cold air during a night. In the summer, when there is demand for both air conditioning and warm water, the same water may even serve as both a "dump" for the waste heat rejected by a/c units and as a "source" for heat pumps providing warm water. Those considerations are behind what is sometimes called "cold district heating", using a "heat" source whose temperature is well below those usually employed in district heating. Distributed generation Most industrial countries generate the majority of their electrical power needs in large centralized facilities with capacity for large electrical power output. These plants benefit from economies of scale, but may need to transmit electricity across long distances, causing transmission losses. Cogeneration or trigeneration production is subject to limitations in the local demand and thus may sometimes need to reduce production (e.g., of heat or cooling) to match the demand. An example of cogeneration with trigeneration applications in a major city is the New York City steam system. Thermal efficiency Every heat engine is subject to the theoretical efficiency limits of the Carnot cycle, or of its subsets the Rankine cycle in the case of steam turbine power plants and the Brayton cycle in gas turbine plants with steam turbines. Most of the efficiency loss with steam power generation is associated with the latent heat of vaporization of steam that is not recovered when a turbine exhausts its low-temperature, low-pressure steam to a condenser. (Typical steam to a condenser would be at a few millimeters of mercury absolute pressure and only slightly hotter than the cooling water temperature, depending on the condenser capacity.)
In cogeneration this steam exits the turbine at a higher temperature where it may be used for process heat, building heat or cooling with an absorption chiller. The majority of this heat is from the latent heat of vaporization when the steam condenses. Thermal efficiency in a cogeneration system is defined as η_th = W_out / Q_in, where η_th is the thermal efficiency, W_out is the total work output by all systems, and Q_in is the total heat input into the system. Heat output may also be used for cooling (for example, in summer), thanks to an absorption chiller. If cooling is achieved at the same time, the thermal efficiency of the resulting trigeneration system is defined in the same way, η_th = W_out / Q_in, with the useful cooling output counted in the total output W_out. Typical cogeneration models have losses, as in any system. The energy distribution below is represented as a percentage of total input energy: electricity = 45%; heat + cooling = 40%; heat losses = 13%; electrical line losses = 2% (these figures are combined in the worked sketch at the end of this article). Conventional central coal- or nuclear-powered power stations convert about 33–45% of their input heat to electricity. Brayton cycle power plants operate at up to 60% efficiency. In the case of conventional power plants, approximately 10–15% of this heat is lost up the stack of the boiler. Most of the remaining heat emerges from the turbines as low-grade waste heat with no significant local uses, so it is usually rejected to the environment, typically to cooling water passing through a condenser. Because turbine exhaust is normally just above ambient temperature, some potential power generation is sacrificed in rejecting higher-temperature steam from the turbine for cogeneration purposes. For cogeneration to be practical, power generation and the end use of heat must be in relatively close proximity (typically <2 km). Even though the efficiency of a small distributed electrical generator may be lower than that of a large central power plant, the use of its waste heat for local heating and cooling can result in an overall use of the primary fuel supply as great as 80%. This provides substantial financial and environmental benefits. Costs Typically, for a gas-fired plant the fully installed cost per kW electrical is around £400/kW (US$577), which is comparable with large central power stations. History Cogeneration in Europe The EU has actively incorporated cogeneration into its energy policy via the CHP Directive. In September 2008, at a hearing of the European Parliament's Urban Lodgment Intergroup, Energy Commissioner Andris Piebalgs was quoted as saying, "security of supply really starts with energy efficiency." Energy efficiency and cogeneration are recognized in the opening paragraphs of the European Union's Cogeneration Directive 2004/08/EC. This directive intends to support cogeneration and establish a method for calculating cogeneration abilities per country. The development of cogeneration has been very uneven over the years and has been dominated throughout the last decades by national circumstances. The European Union generates 11% of its electricity using cogeneration. However, there are large differences between Member States, with energy savings varying between 2% and 60%. Europe has the three countries with the world's most intensive cogeneration economies: Denmark, the Netherlands and Finland. Of the 28.46 TWh of electrical power generated by conventional thermal power plants in Finland in 2012, 81.80% was cogeneration. Other European countries are also making great efforts to increase efficiency.
Germany reported that at present over 50% of the country's total electricity demand could be provided through cogeneration. So far, Germany has set the target to double its electricity cogeneration from 12.5% of the country's electricity to 25% of the country's electricity by 2020, and has passed supporting legislation accordingly. The UK is also actively supporting combined heat and power. In light of the UK's goal to achieve a 60% reduction in carbon dioxide emissions by 2050, the government set the target to source at least 15% of its government electricity use from CHP by 2010. Other UK measures to encourage CHP growth are financial incentives, grant support, a greater regulatory framework, and government leadership and partnership. According to IEA 2008 modeling of cogeneration expansion for the G8 countries, the expansion of cogeneration in France, Germany, Italy and the UK alone would effectively double the existing primary fuel savings by 2030. This would increase Europe's savings from today's 155.69 TWh to 465 TWh in 2030. It would also result in a 16% to 29% increase in each country's total cogenerated electricity by 2030. Governments are being assisted in their CHP endeavors by organizations like COGEN Europe, which serves as an information hub for the most recent updates within Europe's energy policy. COGEN is Europe's umbrella organization representing the interests of the cogeneration industry. The European public–private partnership Fuel Cells and Hydrogen Joint Undertaking Seventh Framework Programme project ene.field aimed to deploy by 2017 up to 1,000 residential fuel cell combined heat and power (micro-CHP) installations in 12 states. As of 2012, the first two installations had taken place. Cogeneration in the United Kingdom In the United Kingdom, the Combined Heat and Power Quality Assurance scheme regulates the combined production of heat and power. It was introduced in 1996. It defines, through calculation of inputs and outputs, "Good Quality CHP" in terms of the achievement of primary energy savings against conventional separate generation of heat and electricity. Compliance with Combined Heat and Power Quality Assurance is required for cogeneration installations to be eligible for government subsidies and tax incentives. Cogeneration in the United States Perhaps the first modern use of energy recycling was by Thomas Edison. His 1882 Pearl Street Station, the world's first commercial power plant, was a combined heat and power plant, producing both electricity and thermal energy while using waste heat to warm neighboring buildings. Recycling allowed Edison's plant to achieve approximately 50 percent efficiency. By the early 1900s, regulations emerged to promote rural electrification through the construction of centralized plants managed by regional utilities. These regulations not only promoted electrification throughout the countryside, but they also discouraged decentralized power generation, such as cogeneration. By 1978, Congress recognized that efficiency at central power plants had stagnated and sought to encourage improved efficiency with the Public Utility Regulatory Policies Act (PURPA), which encouraged utilities to buy power from other energy producers. Cogeneration plants proliferated, soon producing about 8% of all energy in the United States. However, the bill left implementation and enforcement up to individual states, resulting in little or nothing being done in many parts of the country.
The United States Department of Energy has an aggressive goal of having CHP constitute 20% of generation capacity by 2030. Eight Clean Energy Application Centers have been established across the nation. Their mission is to develop the required technology application knowledge and educational infrastructure necessary to lead "clean energy" (combined heat and power, waste heat recovery, and district energy) technologies as viable energy options and reduce any perceived risks associated with their implementation. The focus of the Application Centers is to provide an outreach and technology deployment program for end users, policymakers, utilities, and industry stakeholders. High electric rates in New England and the Middle Atlantic make these areas of the United States the most beneficial for cogeneration. Applications in power generation systems Fossil: any of the following conventional power plants may be converted to a combined cooling, heat and power system: coal, microturbine, natural gas, oil and small gas turbine plants. Nuclear: nuclear power. Geothermal: geothermal power / geothermal heating. Radioisotope thermoelectric generators often double as radioisotope heater units, partially offsetting their low (single-digit percent) efficiency in converting thermal to electric energy. Renewable: solar thermal; biomass; hydrogen fuel cells (using green hydrogen); and any type of compressor or turboexpander, such as in compressed air energy storage.
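To make the thermal-efficiency definition and the heat pump comparison above concrete, here is a minimal Python sketch; the input figures are those quoted earlier in this article (45% electricity, 40% heat plus cooling, an effective COP of about 6, and distribution losses of the order of 6%), while the variable names are illustrative assumptions:

```python
# Minimal sketch of the efficiency and COP arithmetic quoted above.
# All figures come from the article text; names are for this sketch only.

Q_in = 100.0           # total heat input, taken as 100 units
W_electricity = 45.0   # electricity output (45% of input, per the text)
Q_heat_cooling = 40.0  # useful heat + cooling output (40% of input)

# eta_th = (total useful output) / (total heat input)
eta_cogen = (W_electricity + Q_heat_cooling) / Q_in
print(f"overall utilisation: {eta_cogen:.0%}")  # 85%, vs ~33-45% electricity-only

# Heat pump comparison: the text states about 6 units of heat are made
# available per unit of electricity forgone, an effective COP of about 6.
effective_cop_chp = 6.0

# A remotely operated heat pump must also cover ~6% distribution losses,
# reducing the heat delivered per unit of electricity generated at the plant:
grid_loss = 0.06
cop_net_of_losses = effective_cop_chp * (1 - grid_loss)
print(f"COP net of grid losses: {cop_net_of_losses:.2f}")  # about 5.6
```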
Technology
Power generation
null
4693960
https://en.wikipedia.org/wiki/Ammonium%20metavanadate
Ammonium metavanadate
Ammonium metavanadate is the inorganic compound with the formula NH4VO3. It is a white salt, although samples are often yellow owing to impurities of V2O5. It is an important intermediate in the purification of vanadium. Synthesis and structure The compound is prepared by the addition of ammonium salts to solutions of vanadate ions, generated by dissolution of V2O5 in basic aqueous solutions, such as hot sodium carbonate. The compound precipitates as a colourless solid. This precipitation step can be slow. The compound adopts a polymeric structure consisting of chains of [VO3]−, formed as corner-sharing VO4 tetrahedra. These chains are interconnected via hydrogen bonds with ammonium ions. Uses Vanadium is often purified from aqueous extracts of slags and ore by selective precipitation of ammonium metavanadate. The material is then roasted to give vanadium pentoxide: 2 NH4VO3 → V2O5 + 2 NH3 + H2O Other Vanadates can behave as structural mimics of phosphates, and in this way they exhibit biological activity. Ammonium metavanadate is used to prepare Mandelin reagent, a qualitative test for alkaloids.
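As an illustrative aside (not from the source), a few lines of Python can confirm that the roasting equation above is balanced; the element-count dictionaries are spelled out by hand for this sketch:

```python
# Illustrative check that 2 NH4VO3 -> V2O5 + 2 NH3 + H2O balances,
# by counting atoms of each element on both sides.
from collections import Counter

NH4VO3 = Counter({"N": 1, "H": 4, "V": 1, "O": 3})
V2O5   = Counter({"V": 2, "O": 5})
NH3    = Counter({"N": 1, "H": 3})
H2O    = Counter({"H": 2, "O": 1})

def scale(formula, n):
    """Multiply every element count by a stoichiometric coefficient."""
    return Counter({el: cnt * n for el, cnt in formula.items()})

reactants = scale(NH4VO3, 2)
products  = scale(V2O5, 1) + scale(NH3, 2) + scale(H2O, 1)
assert reactants == products, "equation does not balance"
print(dict(reactants))  # {'N': 2, 'H': 8, 'V': 2, 'O': 6} on both sides
```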
Physical sciences
Metallic oxyanions
Chemistry
4694080
https://en.wikipedia.org/wiki/Sodium%20metavanadate
Sodium metavanadate
Sodium metavanadate is the inorganic compound with the formula NaVO3. It is a yellow, water-soluble salt. Sodium metavanadate is a common precursor to other vanadates. At low pH it converts to sodium decavanadate. It is also a precursor to exotic metalates such as [γ-PV2W10O40]5−, [α-PVW11O40]4−, and [β-PV2W10O40]5−. Minerals Sodium metavanadate occurs as two minor minerals: metamunirite (anhydrous) and a dihydrate, munirite. Both are very rare; metamunirite is known only from vanadium- and uranium-bearing sandstone formations of the central-western United States, and munirite from Pakistan and South Africa.
Physical sciences
Metallic oxyanions
Chemistry
4699587
https://en.wikipedia.org/wiki/Fish
Fish
A fish (plural: fish or fishes) is an aquatic, anamniotic, gill-bearing vertebrate animal with swimming fins and a hard skull, but lacking limbs with digits. Fish can be grouped into the more basal jawless fish and the more common jawed fish, the latter including all living cartilaginous and bony fish, as well as the extinct placoderms and acanthodians. In a break from the long tradition of grouping all fish into a single class (Pisces), contemporary phylogenetics views fish as a paraphyletic group. Most fish are cold-blooded, their body temperature varying with the surrounding water, though some large active swimmers like the white shark and tuna can hold a higher core temperature. Many fish can communicate acoustically with each other, such as during courtship displays. The study of fish is known as ichthyology. The earliest fish appeared during the Cambrian as small filter feeders; they continued to evolve through the Paleozoic, diversifying into many forms. The earliest fish with dedicated respiratory gills and paired fins, the ostracoderms, had heavy bony plates that served as protective exoskeletons against invertebrate predators. The first fish with jaws, the placoderms, appeared in the Silurian and greatly diversified during the Devonian, the "Age of Fishes". Bony fish, distinguished by the presence of swim bladders and later ossified endoskeletons, emerged as the dominant group of fish after the end-Devonian extinction wiped out the apex predators, the placoderms. Bony fish are further divided into the lobe-finned and ray-finned fish. About 96% of all living fish species today are teleosts, a crown group of ray-finned fish that can protrude their jaws. The tetrapods, a mostly terrestrial clade of vertebrates that have dominated the top trophic levels in both aquatic and terrestrial ecosystems since the Late Paleozoic, evolved from lobe-finned fish during the Carboniferous, developing air-breathing lungs homologous to swim bladders. Despite the cladistic lineage, tetrapods are usually not considered to be fish. Fish have been an important natural resource for humans since prehistoric times, especially as food. Commercial and subsistence fishers harvest fish in wild fisheries or farm them in ponds or in breeding cages in the ocean. Fish are caught for recreation, or raised by fishkeepers as ornaments for private and public exhibition in aquaria and garden ponds. Fish have had a role in human culture through the ages, serving as deities, religious symbols, and as the subjects of art, books and movies. Etymology The word fish is inherited from Proto-Germanic, and is related to the German Fisch, the Latin piscis and the Old Irish īasc, though the exact root is unknown; some authorities reconstruct a Proto-Indo-European root *peysk-, attested only in Italic, Celtic, and Germanic. Evolution Fossil history About 530 million years ago during the Cambrian explosion, fishlike animals with a notochord and eyes at the front of the body, such as Haikouichthys, appear in the fossil record. During the late Cambrian, other jawless forms such as conodonts appear. Jawed vertebrates appear in the Silurian; they include the giant armoured placoderms, such as the Devonian Dunkleosteus. Jawed fish, too, appeared during the Silurian: the cartilaginous Chondrichthyes and the bony Osteichthyes. During the Devonian, fish diversity greatly increased, including among the placoderms, lobe-finned fishes, and early sharks, earning the Devonian the epithet "the age of fishes".
Phylogeny Fishes are a paraphyletic group, since any clade containing all fish, such as the Gnathostomata or (for bony fish) Osteichthyes, also contains the clade of tetrapods (four-limbed vertebrates, mostly terrestrial), which are usually not considered fish. Some tetrapods, such as cetaceans and ichthyosaurs, have secondarily acquired a fish-like body shape through convergent evolution. Fishes of the World comments that "it is increasingly widely accepted that tetrapods, including ourselves, are simply modified bony fishes, and so we are comfortable with using the taxon Osteichthyes as a clade, which now includes all tetrapods". The biodiversity of extant fish is unevenly distributed among the various groups; teleosts, bony fishes able to protrude their jaws, make up 96% of fish species. The cladogram shows the evolutionary relationships of all groups of living fishes (with their respective diversity) and the tetrapods. Extinct groups are marked with a dagger (†); groups of uncertain placement are labelled with a question mark (?) and dashed lines (- - - - -). Taxonomy Fishes (without tetrapods) are a paraphyletic group and for this reason, the class Pisces seen in older reference works is no longer used in formal classifications. Traditional classification divides fish into three extant classes (Agnatha, Chondrichthyes, and Osteichthyes), with extinct forms sometimes classified within those groups, sometimes as their own classes. Fish account for more than half of vertebrate species. As of 2016, there are over 32,000 described species of bony fish, over 1,100 species of cartilaginous fish, and over 100 hagfish and lampreys. A third of these fall within the nine largest families; from largest to smallest, these are Cyprinidae, Gobiidae, Cichlidae, Characidae, Loricariidae, Balitoridae, Serranidae, Labridae, and Scorpaenidae. About 64 families are monotypic, containing only one species. Diversity Fish range in size from the huge whale shark to tiny teleosts less than 10 mm (0.4 in) long, such as the cyprinid Paedocypris progenetica and the stout infantfish. Swimming performance varies from fish such as tuna, salmon, and jacks that can cover 10–20 body-lengths per second to species such as eels and rays that swim no more than 0.5 body-lengths per second. A typical fish is cold-blooded, has a streamlined body for rapid swimming, extracts oxygen from water using gills, has two sets of paired fins, one or two dorsal fins, an anal fin and a tail fin, jaws, skin covered with scales, and lays eggs. Each criterion has exceptions, creating a wide diversity in body shape and way of life. For example, some fast-swimming fish are warm-blooded, while some slow-swimming fish have abandoned streamlining in favour of other body shapes. Ecology Habitats Fish species are roughly divided equally between freshwater and marine (oceanic) ecosystems; there are some 15,200 freshwater species and around 14,800 marine species. Coral reefs in the Indo-Pacific constitute the center of diversity for marine fishes, whereas continental freshwater fishes are most diverse in large river basins of tropical rainforests, especially the Amazon, Congo, and Mekong basins. More than 5,600 fish species inhabit Neotropical freshwaters alone, such that Neotropical fishes represent about 10% of all vertebrate species on the Earth. Fish are abundant in most bodies of water.
They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although none have been found in the deepest 25% of the ocean. The deepest-living fish in the ocean so far found is a cusk-eel, Abyssobrotula galatheae, recorded at the bottom of the Puerto Rico Trench at a depth of about 8,370 m (27,460 ft). In terms of temperature, Jonah's icefish live in the cold waters of the Southern Ocean, including under the Filchner–Ronne Ice Shelf at a latitude of 79°S, while desert pupfish live in desert springs, streams, and marshes, sometimes highly saline, with water temperatures as high as 36 °C. A few fish live mostly on land or lay their eggs on land near water. Mudskippers feed and interact with one another on mudflats and go underwater to hide in their burrows. A single undescribed species of Phreatobius has been called a true "land fish", as this worm-like catfish strictly lives among waterlogged leaf litter. Cavefish of multiple families live in underground lakes, underground rivers or aquifers. Parasites and predators Like other animals, fish suffer from parasitism. Some species use cleaner fish to remove external parasites. The best known of these are the bluestreak cleaner wrasses of coral reefs in the Indian and Pacific oceans. These small fish maintain cleaning stations where other fish congregate and perform specific movements to attract the attention of the cleaners. Cleaning behaviors have been observed in a number of fish groups, including an interesting case between two cichlids of the same genus, Etroplus maculatus, the cleaner, and the much larger E. suratensis. Fish occupy many trophic levels in freshwater and marine food webs. Fish at the higher levels are predatory, and a substantial part of their prey consists of other fish. In addition, mammals such as dolphins and seals feed on fish, alongside birds such as gannets and cormorants. Anatomy and physiology Locomotion The body of a typical fish is adapted for efficient swimming by alternately contracting paired sets of muscles on either side of the backbone. These contractions form S-shaped curves that move down the body. As each curve reaches the tail fin, force is applied to the water, moving the fish forward. The other fins act as control surfaces like an aircraft's flaps, enabling the fish to steer in any direction. Since body tissue is denser than water, fish must compensate for the difference or they will sink. Many bony fish have an internal organ called a swim bladder that allows them to adjust their buoyancy by increasing or decreasing the amount of gas it contains. The scales of fish provide protection from predators at the cost of adding stiffness and weight. Fish scales are often highly reflective; this silvering provides camouflage in the open ocean. Because the water all around is the same colour, reflecting an image of the water offers near-invisibility. Circulation Fish have a closed-loop circulatory system. The heart pumps the blood in a single loop throughout the body; for comparison, the mammal heart has two loops, one for the lungs to pick up oxygen, and one for the body to deliver the oxygen. In fish, the heart pumps blood through the gills. Oxygen-rich blood then flows without further pumping, unlike in mammals, to the body tissues. Finally, oxygen-depleted blood returns to the heart. Respiration Gills Fish exchange gases using gills on either side of the pharynx. Gills consist of comblike structures called filaments.
Each filament contains a capillary network that provides a large surface area for exchanging oxygen and carbon dioxide. Fish exchange gases by pulling oxygen-rich water through their mouths and pumping it over their gills. Capillary blood in the gills flows in the opposite direction to the water, resulting in efficient countercurrent exchange. The gills push the oxygen-poor water out through openings in the sides of the pharynx. Cartilaginous fish have multiple gill openings: sharks usually have five, sometimes six or seven pairs; they often have to swim to oxygenate their gills. Bony fish have a single gill opening on each side, hidden beneath a protective bony cover or operculum. They are able to oxygenate their gills using muscles in the head. Air breathing Some 400 species of fish in 50 families can breathe air, enabling them to live in oxygen-poor water or to emerge on to land. The ability of fish to do this is potentially limited by their single-loop circulation, as oxygenated blood from their air-breathing organ will mix with deoxygenated blood returning to the heart from the rest of the body. Lungfish, bichirs, ropefish, bowfins, snakefish, and the African knifefish have evolved to reduce such mixing, and to reduce oxygen loss from the gills to oxygen-poor water. Bichirs and lungfish have tetrapod-like paired lungs, requiring them to surface to gulp air, and making them obligate air breathers. Many other fish, including inhabitants of rock pools and the intertidal zone, are facultative air breathers, able to breathe air when out of water, as may occur daily at low tide, and to use their gills when in water. Some coastal fish like rockskippers and mudskippers choose to leave the water to feed in habitats temporarily exposed to the air. Some catfish absorb air through their digestive tracts. Digestion The digestive system consists of a tube, the gut, leading from the mouth to the anus. The mouth of most fishes contains teeth to grip prey, bite off or scrape plant material, or crush the food. An esophagus carries food to the stomach where it may be stored and partially digested. A sphincter, the pylorus, releases food to the intestine at intervals. Many fish have finger-shaped pouches, pyloric caeca, around the pylorus, of doubtful function. The pancreas secretes enzymes into the intestine to digest the food; other enzymes are secreted directly by the intestine itself. The liver produces bile which helps to break up fat into an emulsion which can be absorbed in the intestine. Excretion Most fish release their nitrogenous wastes as ammonia. This may be excreted through the gills or filtered by the kidneys. Salt is excreted by the rectal gland. Saltwater fish tend to lose water by osmosis; their kidneys return water to the body, and produce a concentrated urine. The reverse happens in freshwater fish: they tend to gain water osmotically, and produce a dilute urine. Some fish have kidneys able to operate in both freshwater and saltwater. Brain Fish have small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, notably mormyrids and sharks, which have brains about as large for their body weight as birds and marsupials. At the front of the brain are the olfactory lobes, a pair of structures that receive and process signals from the nostrils via the two olfactory nerves. 
Fish that hunt primarily by smell, such as hagfish and sharks, have very large olfactory lobes. Behind these is the telencephalon, which in fish deals mostly with olfaction. Together these structures form the forebrain. Connecting the forebrain to the midbrain is the diencephalon; it works with hormones and homeostasis. The pineal body is just above the diencephalon; it detects light, maintains circadian rhythms, and controls color changes. The midbrain contains the two optic lobes. These are very large in species that hunt by sight, such as rainbow trout and cichlids. The hindbrain controls swimming and balance. The single-lobed cerebellum is the biggest part of the brain; it is small in hagfish and lampreys, but very large in mormyrids, processing their electrical sense. The brain stem or myelencephalon controls some muscles and body organs, and governs respiration and osmoregulation. Sensory systems The lateral line system is a network of sensors in the skin which detects gentle currents and vibrations, and senses the motion of nearby fish, whether predators or prey. This can be considered both a sense of touch and of hearing. Blind cave fish navigate almost entirely through the sensations from their lateral line system. Some fish, such as catfish and sharks, have the ampullae of Lorenzini, electroreceptors that detect weak electric currents on the order of a millivolt. Vision is an important sensory system in fish. Fish eyes are similar to those of terrestrial vertebrates like birds and mammals, but have a more spherical lens. Their retinas generally have both rods and cones (for scotopic and photopic vision); many species have colour vision, often with three types of cone. Teleosts can see polarized light; some, such as cyprinids, have a fourth type of cone that detects ultraviolet. Amongst jawless fish, the lamprey has well-developed eyes, while the hagfish has only primitive eyespots. Hearing too is an important sensory system in fish. Fish sense sound using their lateral lines and otoliths in their ears, inside their heads. Some can detect sound through the swim bladder. Some fish, including salmon, are capable of magnetoreception; when the axis of a magnetic field is changed around a circular tank of young fish, they reorient themselves in line with the field. The mechanism of fish magnetoreception remains unknown; experiments in birds imply a quantum radical pair mechanism. Cognition The cognitive capacities of fish include self-awareness, as seen in mirror tests. Manta rays and wrasses placed in front of a mirror repeatedly check whether their reflection's behavior mimics their body movement. Choerodon wrasses, archerfish, and Atlantic cod can solve problems and invent tools. The monogamous cichlid Amatitlania siquia exhibits pessimistic behavior when prevented from being with its partner. Fish orient themselves using landmarks; they may use mental maps based on multiple landmarks. Fish are able to learn to traverse mazes, showing that they possess spatial memory and visual discrimination. Behavioral research suggests that fish are sentient, capable of experiencing pain. Electrogenesis Electric fish such as elephantfishes, the African knifefish, and electric eels have some of their muscles adapted to generate electric fields. They use the field to locate and identify objects such as prey in the waters around them, which may be turbid or dark. Strongly electric fish like the electric eel can in addition use their electric organs to generate shocks powerful enough to stun their prey.
Endothermy Most fish are exclusively cold-blooded (ectothermic). However, the Scombroidei are warm-blooded (endothermic), including the billfishes and tunas. The opah, a lampriform, uses whole-body endothermy, generating heat with its swimming muscles to warm its body while countercurrent exchange minimizes heat loss. Among the cartilaginous fishes, sharks of the families Lamnidae (such as the great white shark) and Alopiidae (thresher sharks) are endothermic. The degree of endothermy varies from the billfishes, which warm only their eyes and brain, to the bluefin tuna and the porbeagle shark, which maintain body temperatures more than 20 °C (36 °F) above the ambient water. Reproduction and life-cycle The primary reproductive organs are paired testicles and ovaries. Eggs are released from the ovary to the oviducts. Over 97% of fish, including salmon and goldfish, are oviparous, meaning that the eggs are shed into the water and develop outside the mother's body. The eggs are usually fertilized outside the mother's body, with the male and female fish shedding their gametes into the surrounding water. In a few oviparous fish, such as the skates, fertilization is internal: the male uses an intromittent organ to deliver sperm into the genital opening of the female. Marine fish release large numbers of small eggs into the open water column. Newly hatched young of oviparous fish are planktonic larvae. They have a large yolk sac and do not resemble juvenile or adult fish. The larval period in oviparous fish is usually only some weeks, and larvae rapidly grow and change in structure to become juveniles. During this transition, larvae must switch from their yolk sac to feeding on zooplankton prey. Some fish such as surf-perches, splitfins, and lemon sharks are viviparous or live-bearing, meaning that the mother retains the eggs and nourishes the embryos via a structure analogous to the placenta that connects the mother's blood supply with the embryo's. DNA repair Embryos of externally fertilized fish species are directly exposed during their development to environmental conditions that may damage their DNA, such as pollutants, UV light and reactive oxygen species. To deal with such DNA damage, a variety of different DNA repair pathways are employed by fish embryos during their development. In recent years zebrafish have become a useful model for assessing environmental pollutants that might be genotoxic, i.e. cause DNA damage. Defenses against disease Fish have both non-specific and immune defenses against disease. Non-specific defenses include the skin and scales, as well as the mucus layer secreted by the epidermis that traps and inhibits the growth of microorganisms. If pathogens breach these defenses, the innate immune system can mount an inflammatory response that increases blood flow to the infected region and delivers white blood cells that attempt to destroy pathogens, non-specifically. Specific defenses respond to particular antigens, such as proteins on the surfaces of pathogenic bacteria, recognised by the adaptive immune system. Immune systems evolved in deuterostomes as shown in the cladogram. Immune organs vary by type of fish. The jawless fish have lymphoid tissue within the anterior kidney, and granulocytes in the gut. They have their own type of adaptive immune system; it makes use of variable lymphocyte receptors (VLR) to generate immunity to a wide range of antigens. The result is much like that of jawed fishes and tetrapods, but it may have evolved separately.
All jawed fishes have an adaptive immune system with B and T lymphocytes bearing immunoglobulins and T cell receptors respectively. This makes use of Variable–Diversity–Joining rearrangement (V(D)J) to create immunity to a wide range of antigens. This system evolved once and is basal to the jawed vertebrate clade. Cartilaginous fish have three specialized organs that contain immune system cells: the epigonal organs around the gonads, Leydig's organ within the esophagus, and a spiral valve in their intestine, while their thymus and spleen have similar functions to those of the same organs in the immune systems of tetrapods. Teleosts have lymphocytes in the thymus, and other immune cells in the spleen and other organs. Behavior Shoaling and schooling A shoal is a loosely organised group where each fish swims and forages independently but is attracted to other members of the group and adjusts its behaviour, such as swimming speed, so that it remains close to the other members of the group. A school is a much more tightly organised group, synchronising its swimming so that all fish move at the same speed and in the same direction. Schooling is sometimes an antipredator adaptation, offering improved vigilance against predators. It is often more efficient to gather food by working as a group, and individual fish optimise their strategies by choosing to join or leave a shoal. When a predator has been noticed, prey fish respond defensively, resulting in collective shoal behaviours such as synchronised movements. Responses do not consist only of attempting to hide or flee; antipredator tactics include for example scattering and reassembling. Fish also aggregate in shoals to spawn. The capelin migrates annually in large schools between its feeding areas and its spawning grounds. Communication Fish communicate by transmitting acoustic signals (sounds) to each other. This is most often in the context of feeding, aggression or courtship. The sounds emitted vary with the species and stimulus involved. Fish can produce either stridulatory sounds, by moving components of the skeletal system, or non-stridulatory sounds, by manipulating specialized organs such as the swim bladder. Some fish produce sounds by rubbing or grinding their bones together; these sounds are stridulatory. The French grunt, Haemulon flavolineatum, produces a grunting noise by grinding its teeth together, especially when in distress. The grunts are at a frequency of around 700 Hz and last approximately 47 milliseconds. The longsnout seahorse, Hippocampus reidi, produces two categories of sounds, 'clicks' and 'growls', by rubbing its coronet bone across the grooved section of its neurocranium. Clicks are produced during courtship and feeding, with frequencies in the range of 50–800 Hz. The frequencies are at the higher end of the range during spawning, when the female and male fish are less than fifteen centimeters apart. Growls are produced when H. reidi is stressed. The 'growl' sounds consist of a series of sound pulses and are emitted simultaneously with body vibrations. Some fish species create noise by engaging specialized muscles that contract and cause swim bladder vibrations. Oyster toadfish produce loud grunts by contracting sonic muscles along the sides of the swim bladder. Female and male toadfishes emit short-duration grunts, often as a fright response. In addition to short-duration grunts, male toadfishes produce "boat whistle calls".
These calls are longer in duration, lower in frequency, and are primarily used to attract mates. The various sounds have a frequency range of 140 to 260 Hz. The frequencies of the calls depend on the rate at which the sonic muscles contract. The red drum, Sciaenops ocellatus, produces drumming sounds by vibrating its swim bladder. Vibrations are caused by the rapid contraction of sonic muscles that surround the dorsal aspect of the swim bladder. These vibrations result in repeated sounds with frequencies from 100 to over 200 Hz. S. ocellatus produces different calls depending on the stimuli involved, such as courtship or a predator's attack. Females do not produce sounds, and lack sound-producing (sonic) muscles. Conservation The 2024 IUCN Red List names 2,168 fish species that are endangered or critically endangered. Included are species such as the Atlantic cod, Devil's Hole pupfish, coelacanths, and great white sharks. Because fish live underwater they are more difficult to study than terrestrial animals and plants, and information about fish populations is often lacking. However, freshwater fish seem particularly threatened because they often live in relatively small water bodies. For example, the Devil's Hole pupfish occupies only a single pool. Overfishing The Food and Agriculture Organization reports that "in 2017, 34 percent of the fish stocks of the world's marine fisheries were classified as overfished". Overfishing is a major threat to edible fish such as cod and tuna. Overfishing eventually causes fish stocks to collapse, because the survivors cannot produce enough young to replace those removed. Such commercial extinction does not mean that the species is extinct, merely that it can no longer sustain a fishery. In the case of the Pacific sardine fishery off the California coast, the catch steadily declined from a 1937 peak of 800,000 tonnes to an economically unviable 24,000 tonnes in 1968. In the case of the northwest Atlantic cod fishery, overfishing reduced the fish population to 1% of its historical level by 1992. Fisheries scientists and the fishing industry have sharply differing views on the resiliency of fisheries to intensive fishing. In many coastal regions the fishing industry is a major employer, so governments are predisposed to support it. On the other hand, scientists and conservationists push for stringent protection, warning that many stocks could be destroyed within fifty years. Other threats A key stress on both freshwater and marine ecosystems is habitat degradation, including water pollution, the building of dams, removal of water for use by humans, and the introduction of exotic species, including predators. Freshwater fish, especially if endemic to a region (occurring nowhere else), may be threatened with extinction for all these reasons, as is the case for three of Spain's ten endemic freshwater fishes. River dams, especially major schemes like the Kariba Dam (Zambezi river) and the Aswan Dam (River Nile) on rivers with economically important fisheries, have caused large reductions in fish catch. Industrial bottom trawling can damage seabed habitats, as has occurred on the Georges Bank in the North Atlantic. Introduction of aquatic invasive species is widespread. It modifies ecosystems, causing biodiversity loss, and can harm fisheries. Harmful species include fish but are not limited to them; the arrival of a comb jelly in the Black Sea damaged the anchovy fishery there.
The opening of the Suez Canal in 1869 made possible Lessepsian migration, facilitating the arrival of hundreds of Indo-Pacific marine species of fish, algae and invertebrates in the Mediterranean Sea, deeply impacting its overall biodiversity and ecology. The predatory Nile perch was deliberately introduced to Lake Victoria in the 1960s as a commercial and sports fish. The lake had high biodiversity, with some 500 endemic species of cichlid fish. The perch drastically altered the lake's ecology, and simplified the fishery from multi-species to just three: the Nile perch, the silver cyprinid, and another introduced fish, the Nile tilapia. The haplochromine cichlid populations have collapsed. Importance to humans Economic Throughout history, humans have used fish as a food source for dietary protein. Historically and today, most fish harvested for human consumption has come from catching wild fish. However, fish farming, which has been practiced since about 3,500 BCE in ancient China, is becoming increasingly important in many nations. Overall, about one-sixth of the world's protein is estimated to be provided by fish. Fishing is accordingly a large global business which provides income for millions of people. The Environmental Defense Fund has a guide on which fish are safe to eat, given the state of pollution in today's world, and which fish are obtained in a sustainable way. As of 2020, over 65 million tonnes (Mt) of marine fish and 10 Mt of freshwater fish were captured, while some 50 Mt of fish, mainly freshwater, were farmed. Of the marine species captured in 2020, anchoveta represented 4.9 Mt, Alaska pollock 3.5 Mt, skipjack tuna 2.8 Mt, and Atlantic herring and yellowfin tuna 1.6 Mt each; eight more species had catches over 1 Mt. Recreation Fish have been recognized as a source of beauty for almost as long as they have been used for food, appearing in cave art, being raised as ornamental fish in ponds, and displayed in aquariums in homes, offices, or public settings. Recreational fishing is fishing primarily for pleasure or competition; it can be contrasted with commercial fishing, which is fishing for profit, or artisanal fishing, which is fishing primarily for food. The most common form of recreational fishing employs a rod, reel, line, hooks, and a wide range of baits. Recreational fishing is particularly popular in North America and Europe; government agencies often actively manage target fish species. Culture Fish themes have symbolic significance in many religions. In ancient Mesopotamia, fish offerings were made to the gods from the very earliest times. Fish were also a major symbol of Enki, the god of water. Fish frequently appear as filling motifs in cylinder seals from the Old Babylonian (c. 1830 – c. 1531 BC) and Neo-Assyrian (911–609 BC) periods. Starting during the Kassite Period (c. 1600 – c. 1155 BC) and lasting until the early Persian Period (550–330 BC), healers and exorcists dressed in ritual garb resembling the bodies of fish. During the Seleucid Period (312–63 BC), the legendary Babylonian culture hero Oannes was said to have dressed in the skin of a fish. Fish were sacred to the Syrian goddess Atargatis and, during her festivals, only her priests were permitted to eat them. In the Book of Jonah, the central figure, a prophet named Jonah, is swallowed by a giant fish after being thrown overboard by the crew of the ship he is travelling on. Early Christians used the ichthys, a symbol of a fish, to represent Jesus.
Among the deities said to take the form of a fish are Ikatere of the Polynesians, the shark-god Kāmohoaliʻi of Hawaii, and Matsya of the Hindus. The constellation Pisces ("The Fishes") is associated with a legend from Ancient Rome that Venus and her son Cupid were rescued by two fishes. Fish feature prominently in art, in films such as Finding Nemo and books such as The Old Man and the Sea. Large fish, particularly sharks, have frequently been the subject of horror movies and thrillers, notably the novel Jaws, made into a film which in turn has been parodied and imitated many times. Piranhas are shown in a similar light to sharks in films such as Piranha.
Biology and health sciences
Biology
null
4700845
https://en.wikipedia.org/wiki/Entropy%20%28classical%20thermodynamics%29
Entropy (classical thermodynamics)
In classical thermodynamics, entropy (symbol S) is a property of a thermodynamic system that expresses the direction or outcome of spontaneous changes in the system. The term was introduced by Rudolf Clausius in the mid-19th century to explain the relationship of the internal energy that is available or unavailable for transformations in the form of heat and work. Entropy predicts that certain processes are irreversible or impossible, despite not violating the conservation of energy. The definition of entropy is central to the establishment of the second law of thermodynamics, which states that the entropy of isolated systems cannot decrease with time, as they always tend to arrive at a state of thermodynamic equilibrium, where the entropy is highest. Entropy is therefore also considered to be a measure of disorder in the system. Ludwig Boltzmann explained the entropy as a measure of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) which correspond to the macroscopic state (macrostate) of the system. He showed that the thermodynamic entropy is S = k ln Ω, where the factor k has since been known as the Boltzmann constant. Concept Differences in pressure, density, and temperature of a thermodynamic system tend to equalize over time. For example, in a room containing a glass of melting ice, the difference in temperature between the warm room and the cold glass of ice and water is equalized by energy flowing as heat from the room to the cooler ice and water mixture. Over time, the temperature of the glass and its contents and the temperature of the room achieve a balance. The entropy of the room has decreased. However, the entropy of the glass of ice and water has increased more than the entropy of the room has decreased. In an isolated system, such as the room and ice water taken together, the dispersal of energy from warmer to cooler regions always results in a net increase in entropy. Thus, when the system of the room and ice water has reached thermal equilibrium, the entropy change from the initial state is at its maximum. The entropy of the thermodynamic system is a measure of the progress of the equalization. Many irreversible processes result in an increase of entropy. One of them is the mixing of two or more different substances, occasioned by bringing them together by removing a wall that separates them, keeping the temperature and pressure constant. The mixing is accompanied by the entropy of mixing. In the important case of the mixing of ideal gases, the combined system does not change its internal energy by work or heat transfer; the entropy increase is then entirely due to the spreading of the different substances into their new common volume. From a macroscopic perspective, in classical thermodynamics, the entropy is a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. Entropy is a key ingredient of the second law of thermodynamics, which has important consequences e.g. for the performance of heat engines, refrigerators, and heat pumps. Definition According to the Clausius equality, for a closed homogeneous system in which only reversible processes take place, ∮ δQ/T = 0, with T being the uniform temperature of the closed system and δQ the incremental reversible transfer of heat energy into that system. That means the line integral ∫ δQ/T is path-independent.
A state function S, called entropy, may be defined which satisfies dS = δQ/T. Entropy measurement The thermodynamic state of a uniform closed system is determined by its temperature T and pressure P. A change in entropy can be written as dS = (∂S/∂T)_P dT + (∂S/∂P)_T dP. The first contribution depends on the heat capacity at constant pressure C_P through (∂S/∂T)_P = C_P/T. This is the result of the definition of the heat capacity by C_P = (δQ/dT)_P and dS = δQ/T. The second term may be rewritten with one of the Maxwell relations, (∂S/∂P)_T = −(∂V/∂T)_P, and the definition of the volumetric thermal-expansion coefficient, α_V = (1/V)(∂V/∂T)_P, so that (∂S/∂P)_T = −V·α_V. With this expression the entropy S at arbitrary P and T can be related to the entropy at some reference state at P0 and T0 according to S(P,T) = S(P0,T0) + ∫ from T0 to T of (C_P/T′) dT′ − ∫ from P0 to P of V·α_V dP′. In classical thermodynamics, the entropy of the reference state can be put equal to zero at any convenient temperature and pressure. For example, for pure substances, one can take the entropy of the solid at the melting point at 1 bar equal to zero. From a more fundamental point of view, the third law of thermodynamics suggests that there is a preference to take S = 0 at T = 0 (absolute zero) for perfectly ordered materials such as crystals. S(P,T) is determined by following a specific path in the P-T diagram: in the first integral one integrates over T at constant pressure P0, so that dP = 0, and in the second integral one integrates over P at constant temperature T, so that dT = 0. As the entropy is a function of state, the result is independent of the path. The above relation shows that the determination of the entropy requires knowledge of the heat capacity and the equation of state (which is the relation between P, V, and T of the substance involved). Normally these are complicated functions and numerical integration is needed. In simple cases it is possible to get analytical expressions for the entropy. In the case of an ideal gas, the heat capacity is constant and the ideal gas law PV = nRT gives that V·α_V = V/T = nR/P, with n the number of moles and R the molar ideal-gas constant. So, the molar entropy of an ideal gas is given by S(P,T) = S(P0,T0) + C_P ln(T/T0) − R ln(P/P0). In this expression C_P now is the molar heat capacity. The entropy of inhomogeneous systems is the sum of the entropies of the various subsystems. The laws of thermodynamics hold rigorously for inhomogeneous systems even though they may be far from internal equilibrium. The only condition is that the thermodynamic parameters of the composing subsystems are (reasonably) well-defined. Temperature-entropy diagrams Entropy values of important substances may be obtained from reference works or with commercial software in tabular form or as diagrams. One of the most common diagrams is the temperature-entropy diagram (TS-diagram). For example, Fig.2 shows the TS-diagram of nitrogen, depicting the melting curve and saturated liquid and vapor values with isobars and isenthalps. Entropy change in irreversible transformations We now consider inhomogeneous systems in which internal transformations (processes) can take place. If we calculate the entropy S1 before and S2 after such an internal process, the Second Law of Thermodynamics demands that S2 ≥ S1, where the equality sign holds if the process is reversible. The difference Si = S2 − S1 is the entropy production due to the irreversible process. The Second law demands that the entropy of an isolated system cannot decrease. Suppose a system is thermally and mechanically isolated from the environment (isolated system). For example, consider an insulating rigid box divided by a movable partition into two volumes, each filled with gas. If the pressure of one gas is higher, it will expand by moving the partition, thus performing work on the other gas.
Also, if the gases are at different temperatures, heat can flow from one gas to the other provided the partition allows heat conduction. Our above result indicates that the entropy of the system as a whole will increase during these processes. There exists a maximum amount of entropy the system may possess under the circumstances. This entropy corresponds to a state of stable equilibrium, since a transformation to any other equilibrium state would cause the entropy to decrease, which is forbidden. Once the system reaches this maximum-entropy state, no part of the system can perform work on any other part. It is in this sense that entropy is a measure of the energy in a system that cannot be used to do work. An irreversible process degrades the performance of a thermodynamic system designed to do work or produce cooling, and results in entropy production. The entropy generation during a reversible process is zero. Thus entropy production is a measure of the irreversibility and may be used to compare engineering processes and machines. Thermal machines Clausius' identification of S as a significant quantity was motivated by the study of reversible and irreversible thermodynamic transformations. A heat engine is a thermodynamic system that can undergo a sequence of transformations which ultimately return it to its original state. Such a sequence is called a cyclic process, or simply a cycle. During some transformations, the engine may exchange energy with its environment. The net result of a cycle is mechanical work done by the system (which can be positive or negative, the latter meaning that work is done on the engine), and heat transferred from one part of the environment to another. In the steady state, by the conservation of energy, the net energy lost by the environment is equal to the work done by the engine. If every transformation in the cycle is reversible, the cycle is reversible, and it can be run in reverse, so that the heat transfers occur in the opposite directions and the amount of work done switches sign. Heat engines Consider a heat engine working between two temperatures TH and Ta. With Ta we have ambient temperature in mind, but in principle it may also be some other low temperature. The heat engine is in thermal contact with two heat reservoirs which are supposed to have a very large heat capacity so that their temperatures do not change significantly if heat QH is removed from the hot reservoir and Qa is added to the lower reservoir. Under normal operation TH > Ta and QH, Qa, and W are all positive. As our thermodynamical system we take a big system which includes the engine and the two reservoirs. It is indicated in Fig.3 by the dotted rectangle. It is inhomogeneous, closed (no exchange of matter with its surroundings), and adiabatic (no exchange of heat with its surroundings). It is not isolated, since per cycle a certain amount of work W is produced by the system; by the first law of thermodynamics this work is given by W = QH − Qa. We used the fact that the engine itself is periodic, so its internal energy has not changed after one cycle. The same is true for its entropy, so the entropy increase S2 − S1 of our system after one cycle is given by the reduction of entropy of the hot source and the increase of the cold sink: S2 − S1 = −QH/TH + Qa/Ta. The entropy increase of the total system S2 − S1 is equal to the entropy production Si due to irreversible processes in the engine, so Si = −QH/TH + Qa/Ta. The Second law demands that Si ≥ 0.
Eliminating Qa from the two relations gives W = (1 − Ta/TH)·QH − Ta·Si. The first term is the maximum possible work for a heat engine, given by a reversible engine, such as one operating along a Carnot cycle: Wmax = (1 − Ta/TH)·QH. Finally, W = Wmax − Ta·Si. This equation tells us that the production of work is reduced by the generation of entropy. The term Ta·Si gives the lost work, or dissipated energy, by the machine. Correspondingly, the amount of heat discarded to the cold sink is increased by the entropy generation: Qa = (Ta/TH)·QH + Ta·Si. These important relations can also be obtained without the inclusion of the heat reservoirs. See the article on entropy production. Refrigerators The same principle can be applied to a refrigerator working between a low temperature TL and ambient temperature Ta. The schematic drawing is exactly the same as Fig.3 with TH replaced by TL, QH by QL, and the sign of W reversed. In this case the entropy production is Si = Qa/Ta − QL/TL, and the work needed to extract heat QL from the cold source is W = Qa − QL = (Ta/TL − 1)·QL + Ta·Si. The first term is the minimum required work Wmin = (Ta/TL − 1)·QL, which corresponds to a reversible refrigerator, so we have W = Wmin + Ta·Si; i.e., the refrigerator compressor has to perform extra work to compensate for the dissipated energy due to irreversible processes which lead to entropy production.
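The engine relations above are easy to check numerically. A minimal sketch with illustrative values for the temperatures, the heat input, and the entropy production (none of them taken from the text):

```python
# Sketch of the work balance derived above: W = (1 - Ta/TH) * QH - Ta * Si.
# All numbers are illustrative assumptions.

TH, Ta = 600.0, 300.0   # hot-source and ambient temperatures, K
QH = 1000.0             # heat drawn from the hot source per cycle, J
Si = 0.5                # entropy production per cycle, J/K (0 for a reversible engine)

W_max = (1 - Ta / TH) * QH        # reversible (Carnot) work
W = W_max - Ta * Si               # actual work, reduced by dissipation
Qa = Ta / TH * QH + Ta * Si       # heat rejected to the cold sink

print(f"W_max = {W_max:.0f} J, W = {W:.0f} J, lost work = {Ta * Si:.0f} J")
print(f"heat rejected Qa = {Qa:.0f} J")   # energy balance holds: QH = W + Qa
```

Setting Si to zero recovers the Carnot limit; any positive entropy production shows up one-for-one as Ta·Si of lost work and extra rejected heat.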
Physical sciences
Thermodynamics
Physics
4701125
https://en.wikipedia.org/wiki/Entropy%20%28statistical%20thermodynamics%29
Entropy (statistical thermodynamics)
The concept of entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in 1870 by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems. Boltzmann's principle Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena. A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average configuration, which is exhibited as the macrostate of the system, to which each individual microstate contribution is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium. Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to precisely predict. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain. Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number: S = kB ln Ω. The proportionality constant kB is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer. Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate. Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view. Boltzmann's principle is regarded as the foundation of statistical mechanics.
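Boltzmann's principle can be made concrete with a toy counting model: N indistinguishable particles spread over M lattice cells, a crude stand-in for the gas in its container. The sketch below shows that doubling the number of accessible cells raises S = kB ln Ω by roughly N·kB·ln 2 in the dilute limit. The model and its parameters are illustrative assumptions, not a description of any real gas.

```python
# Toy lattice gas: Omega = C(M, N) ways to place N indistinguishable
# particles on M cells (at most one particle per cell). Illustrative only.

import math

kB = 1.380649e-23   # J/K, Boltzmann constant

def S(M, N):
    """Boltzmann entropy S = kB ln(Omega) for N particles on M cells."""
    return kB * math.log(math.comb(M, N))

N = 10
dS = S(2000, N) - S(1000, N)        # entropy gain on doubling the "volume"
print(f"dS            = {dS:.3e} J/K")
print(f"N * kB * ln 2 = {N * kB * math.log(2):.3e} J/K")  # nearly identical in the dilute limit
```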
Gibbs entropy formula The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if E_i is the energy of microstate i, and p_i is the probability that it occurs during the system's fluctuations, then the entropy of the system is S = −kB Σ_i p_i ln p_i. Entropy changes for systems in a canonical state A system with a well-defined temperature, i.e., one in thermal equilibrium with a thermal reservoir, has a probability of being in a microstate i given by the Boltzmann distribution, p_i = exp(−E_i/(kB T))/Z. Changes in the entropy caused by changes in the external constraints are then given by: dS = −kB Σ_i dp_i ln p_i = −kB Σ_i dp_i (−E_i/(kB T) − ln Z) = (1/T) Σ_i E_i dp_i, where we have twice used the conservation of probability, Σ_i dp_i = 0. Now, Σ_i E_i dp_i + Σ_i p_i dE_i = d⟨E⟩ is the expectation value of the change in the total energy of the system. If the changes are sufficiently slow, so that the system remains in the same microscopic state, but the state slowly (and reversibly) changes, then Σ_i p_i dE_i is the expectation value of the work done on the system through this reversible process, δw_rev. But from the first law of thermodynamics, d⟨E⟩ = δw + δq. Therefore, dS = δq_rev/T. In the thermodynamic limit, the fluctuation of the macroscopic quantities from their average values becomes negligible; so this reproduces the definition of entropy from classical thermodynamics, given above. The quantity kB is the Boltzmann constant. The remaining factor of the equation, the entire summation, is dimensionless, since the value p_i is a probability and therefore dimensionless, and ln is the natural logarithm. Hence the SI derived units on both sides of the equation are the same as those of heat capacity: joules per kelvin (J/K). This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) on which the sum is done is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article). Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas. This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum mechanical case.
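As a quick numerical check of the formulas above, the sketch below evaluates the Gibbs entropy of a Boltzmann distribution over three energy levels and compares it with the equivalent canonical form S = kB(ln Z + ⟨E⟩/(kB T)); the level energies and the temperature are illustrative assumptions.

```python
# Gibbs entropy of a canonical (Boltzmann) distribution over a few discrete
# levels, checked against S = kB (ln Z + <E>/(kB T)). Illustrative values.

import math

kB = 1.380649e-23          # J/K
T = 300.0                  # K
E = [0.0, 1e-21, 2e-21]    # microstate energies, J (assumed for illustration)

Z = sum(math.exp(-e / (kB * T)) for e in E)          # canonical partition function
p = [math.exp(-e / (kB * T)) / Z for e in E]         # Boltzmann probabilities

S_gibbs = -kB * sum(pi * math.log(pi) for pi in p)   # S = -kB sum p ln p
E_avg = sum(pi * e for pi, e in zip(p, E))
S_canonical = kB * (math.log(Z) + E_avg / (kB * T))  # equivalent canonical form

print(f"S (Gibbs)     = {S_gibbs:.3e} J/K")
print(f"S (canonical) = {S_canonical:.3e} J/K")      # identical up to rounding
```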
It has been shown that the Gibbs entropy is equal to the classical "heat engine" entropy characterized by dS = δQ/T, and that the generalized Boltzmann distribution is a sufficient and necessary condition for this equivalence. Furthermore, the Gibbs entropy is the only entropy that is equivalent to the classical "heat engine" entropy under a certain set of postulates. Ensembles The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations: S = kB ln Ω_mic, where Ω_mic is the microcanonical partition function; S = kB (ln Z_can + ⟨E⟩/(kB T)), where Z_can is the canonical partition function; and S = kB (ln Ξ_gr + (⟨E⟩ − μ⟨N⟩)/(kB T)), where Ξ_gr is the grand canonical partition function. Order through chaos and the second law of thermodynamics We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are (100 choose 50) ≈ 10^29 possible microstates. Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system. This is an example illustrating the second law of thermodynamics: the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value. Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing. Counting of microstates In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers.
If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is likewise defined only up to a constant.) To avoid coarse graining one can take the entropy as defined by the H-theorem. However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between E and E + δE. In the thermodynamic limit, the specific entropy becomes independent of the choice of δE. An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since ln 1 = 0) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a zero-point entropy of about 3.4 J/(mol·K), because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration). The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero (T = 0 K) is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy: E_v = h·ν·(v + 1/2), where h is the Planck constant, ν is the characteristic frequency of the vibration, and v is the vibrational quantum number. Even when v = 0, E_v does not equal 0; this zero-point energy is in adherence to the Heisenberg uncertainty principle.
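As a worked example of the vibrational-energy formula just given, the sketch below evaluates the v = 0 level. The vibrational wavenumber used for H2 (about 4401 cm^-1) is an approximate literature value, included only for illustration.

```python
# Sketch of the vibrational-energy formula above, E_v = h * nu * (v + 1/2),
# evaluated for v = 0. The H2 wavenumber is an approximate literature value.

h = 6.62607015e-34     # Planck constant, J s
c = 2.99792458e10      # speed of light, cm/s
wavenumber = 4401.0    # cm^-1, characteristic H2 vibration (approximate)

nu = c * wavenumber                    # frequency in Hz
E0 = h * nu * (0 + 0.5)                # zero-point energy at v = 0
print(f"nu  = {nu:.3e} Hz")
print(f"E_0 = {E0:.3e} J (~{E0 / 1.602e-19:.2f} eV)")  # nonzero even at v = 0
```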
Physical sciences
Statistical mechanics
Physics
4701197
https://en.wikipedia.org/wiki/Entropy%20as%20an%20arrow%20of%20time
Entropy as an arrow of time
Entropy is one of the few quantities in the physical sciences that require a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Thus, entropy measurement is a way of distinguishing the past from the future. In thermodynamic systems that are not isolated, local entropy can decrease over time, accompanied by a compensating entropy increase in the surroundings; examples include objects undergoing cooling, living systems, and the formation of typical crystals. Much like temperature, entropy is an abstract concept, yet everyone has an intuitive sense of its effects. For example, it is often very easy to tell the difference between a video being played forwards or backwards. A video may depict a wood fire that melts a nearby ice block; played in reverse, it would show a puddle of water turning a cloud of smoke into unburnt wood and freezing itself in the process. Surprisingly, in either case, the vast majority of the laws of physics are not broken by these processes, with the second law of thermodynamics being one of the only exceptions. When a law of physics applies equally when time is reversed, it is said to show T-symmetry; in this case, entropy is what allows one to decide whether the video described above is playing forwards or in reverse, as intuitively we identify that only when played forwards is the entropy of the scene increasing. Because of the second law of thermodynamics, entropy prevents macroscopic processes from showing T-symmetry. When studied at a microscopic scale, the above judgements cannot be made. Watching a single smoke particle buffeted by air, it would not be clear if a video was playing forwards or in reverse, and, in fact, it would not be possible, as the laws which apply show T-symmetry. As it drifts left or right, qualitatively it looks no different; it is only when the gas is studied at a macroscopic scale that the effects of entropy become noticeable (see Loschmidt's paradox). On average it would be expected that the smoke particles around a struck match would drift away from each other, diffusing throughout the available space. It would be an astronomically improbable event for all the particles to cluster together, yet the movement of any one smoke particle cannot be predicted. By contrast, certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time nor related to the daily experience of time irreversibility. Overview The second law of thermodynamics allows for the entropy to remain the same regardless of the direction of time. If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction only, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion.
The second law of thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 10^23 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely—so unlikely that no macroscopic violation of the Second Law has ever been observed. The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy (per unit volume of space) available to perform work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was that of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again grow hotter, but its entropy would also continue to increase due to the continued growth of perturbations and the eventual black hole formation, until the latter stages of the Big Crunch, when entropy would be higher than now. An example of apparent irreversibility Consider the situation in which a large container is filled with two separated liquids, for example a dye on one side and water on the other. With no barrier between the two liquids, the random jostling of their molecules will result in them becoming more mixed as time passes. However, if the dye and water are mixed then one does not expect them to separate out again when left to themselves. A movie of the mixing would seem realistic when played forwards, but unrealistic when played backwards. If the large container is observed early on in the mixing process, it might be found only partially mixed. It would be reasonable to conclude that, without outside intervention, the liquid reached this state because it was more ordered in the past, when there was greater separation, and will be more disordered, or mixed, in the future. Now imagine that the experiment is repeated, this time with only a few molecules, perhaps ten, in a very small container. One can easily imagine that by watching the random jostling of the molecules it might occur—by chance alone—that the molecules became neatly segregated, with all dye molecules on one side and all water molecules on the other. That this can be expected to occur from time to time can be concluded from the fluctuation theorem; thus it is not impossible for the molecules to segregate themselves. However, for a large number of molecules it is so unlikely that one would have to wait, on average, many times longer than the current age of the universe for it to occur. Thus a movie that showed a large number of molecules segregating themselves as described above would appear unrealistic and one would be inclined to say that the movie was being played in reverse. See Boltzmann's second law as a law of disorder.
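The contrast between a few molecules and a mole of them can be made quantitative. A minimal sketch in Python (illustrative; it treats each molecule as independently equally likely to occupy either half of the container, an idealization of the examples above):

```python
import math

# P(all N molecules in one half) = (1/2)^N, reported as a power of ten.
for n in (10, 100, 6.022e23):
    log10_p = -n * math.log10(2)
    print(f"N = {n:.3g}: P = 10^({log10_p:.3g})")

# N = 10 gives P ~ 1/1024 -- rare but observable in a tiny system;
# N ~ 6e23 gives P ~ 10^(-1.8e23) -- never observed macroscopically.
```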
Mathematics of the arrow The mathematics behind the arrow of time, entropy, and the basis of the second law of thermodynamics derive from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854): Here, as common experience demonstrates, when a hot body T1, such as a furnace, is put into physical contact, such as being connected via a body of fluid (working body), with a cold body T2, such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat Q, and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation. In this arrangement, one can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature T2. Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water. Next, if we make the assignment, as originally done by Clausius, S = Q/T, then the entropy change or "equivalence-value" for this transformation is ΔS = S(final) − S(initial), which equals ΔS = Q/T2 − Q/T1, and by factoring out Q, we have the following form, as was derived by Clausius: ΔS = Q (1/T2 − 1/T1). Thus, for example, if Q was 50 units, T1 was initially 100 degrees, and T2 was 1 degree, then the entropy change for this process would be ΔS = 50 × (1/1 − 1/100) = 49.5. Hence, entropy increased for this process, the process took a certain amount of "time", and one can correlate entropy increase with the passage of time. For this system configuration, the increase is subsequently an "absolute rule". This rule is based on the fact that all natural processes are irreversible by virtue of the fact that molecules of a system, for example two molecules in a tank, not only do external work (such as to push a piston), but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the fact that internal inter-molecular friction exists. Correlations An important difference between the past and the future is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated. For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the second law of thermodynamics: for example, in a finite system interacting with finite heat reservoirs, entropy is equivalent to system–reservoir correlations, and thus both increase together. Take for example (experiment A) a closed box that is, at the beginning, half-filled with ideal gas. As time passes, the gas obviously expands to fill the whole box, so that the final state is a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it does not become only half-full later, except for the very unlikely situation where the gas particles have very special locations and speeds.
But this is precisely because we always assume that the initial conditions in experiment B are such that the particles have random locations and speeds. This is not correct for the final conditions of the system in experiment A, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact render them (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning. In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles' states are uncorrelated. To do this, one must acknowledge the difference between the measured entropy of a system—which depends only on its macrostate (its volume, temperature etc.)—and its information entropy, which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lower the amount of information needed to describe it. Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy is smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations. Now, by Liouville's theorem, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information-theoretic joint entropy) is constant in time. This joint entropy is equal to the marginal entropy (entropy assuming no correlations) plus the entropy of correlation (mutual entropy, or its negative, the mutual information). If we assume no correlations between the particles initially, then this joint entropy is just the marginal entropy, which is just the initial thermodynamic entropy of the system, divided by the Boltzmann constant. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations form with time. In other words, there is a decreasing mutual entropy (or increasing mutual information), and for a time that is not too long, the correlations (mutual information) between particles only increase with time.
Therefore, the thermodynamic entropy, which is proportional to the marginal entropy, must also increase with time (note that "not too long" in this context is relative to the time needed, in a classical version of the system, for it to pass through all its possible microstates—a time that can be roughly estimated as τe^S, where τ is the time between particle collisions and S is the system's entropy. In any practical case this time is huge compared to everything else). Note that the correlation between particles is not a fully objective quantity. One cannot measure the mutual entropy; one can only measure its change, assuming one can measure a microstate. Thermodynamics is restricted to the case where microstates cannot be distinguished, which means that only the marginal entropy, proportional to the thermodynamic entropy, can be measured, and, in a practical sense, always increases. Arrow of time in various phenomena Phenomena that occur differently according to their time direction can ultimately be linked to the second law of thermodynamics: for example, ice cubes melt in hot coffee rather than assembling themselves out of the coffee, and a block sliding on a rough surface slows down rather than speeding up. The idea that we can remember the past and not the future is called the "psychological arrow of time", and it has deep connections with Maxwell's demon and the physics of information; memory is linked to the second law of thermodynamics if one views it as correlation between brain cells (or computer bits) and the outer world: since such correlations increase with time, memory is linked to past events, rather than to future events. Current research Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions. Dynamical systems Some current research in dynamical systems indicates a possible "explanation" for the arrow of time. There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers an ordinary differential equation, where the parameter is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time. While the strong suspicion may be but a fleeting sense of intuition, it cannot be denied that, when there are multiple parameters, the field of partial differential equations comes into play. In such systems there is the Feynman–Kac formula in play, which assures, for specific cases, a one-to-one correspondence between a specific linear stochastic differential equation and a partial differential equation. Therefore, any partial differential equation system is tantamount to a random system of a single parameter, which is not reversible due to the aforementioned correspondence. Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is currently impossible. The concept of "exact" solutions is an anthropic one. Does "exact" mean the same as closed form in terms of already known expressions, or does it mean simply a single finite sequence of strokes of a writing utensil?
There are myriad systems known to humanity that are abstract and have recursive definitions, but for which no non-self-referential notation currently exists. As a result of this complexity, it is natural to look elsewhere for different examples and perspectives. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible. There are other systems that are chaotic, and are also explicitly time-reversible: among these is the baker's map, which is also exactly solvable. An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead by studying the corresponding Frobenius–Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one that describes the forward-time evolution of the system, and one that describes the backwards-time evolution. As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R). Quantum mechanics Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time. Another distinct approach is through the study of quantum chaos, in which attempts are made to quantize systems that are classically chaotic, ergodic or mixing. The results obtained are not dissimilar from those that come from the transfer operator method.
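The non-invertibility of the discrete-time maps mentioned above—one present, several pasts—is easy to demonstrate. A minimal sketch in Python, using the doubling map as an illustrative stand-in (it is not one of the specific systems named in the text):

```python
# The doubling map x -> 2x mod 1 is non-invertible: every state has
# exactly two preimages, so a "present" point has two distinct "pasts".
def forward(x: float) -> float:
    return (2.0 * x) % 1.0

def preimages(x: float) -> tuple[float, float]:
    return (x / 2.0, x / 2.0 + 0.5)  # both map forward onto x

x = 0.3
for past in preimages(x):
    assert abs(forward(past) - x) < 1e-12  # both candidate pasts are valid

print(preimages(x))  # (0.15, 0.65): two different histories for one present
```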
As an example of this quantum-chaos approach, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box, reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must be, of necessity, discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of zero measure. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case. Cosmology Some processes that involve high-energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to the day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, an irreversible process which is considered either real (by the Copenhagen interpretation) or apparent only (by the many-worlds interpretation of quantum mechanics). In either case, the wave function collapse always follows quantum decoherence, a process which is understood to be a result of the second law of thermodynamics. The universe was in a uniform, high-density state at its very early stages, shortly after the Big Bang. The hot gas in the early universe was near thermodynamic equilibrium (see Horizon problem); in systems where gravitation plays a major role, this is a state of low entropy, due to the negative heat capacity of such systems (in contrast to non-gravitational systems, where thermodynamic equilibrium is a state of maximum entropy). Moreover, due to its small volume compared to future epochs, the entropy was even lower, since the expansion of a gas increases its entropy. Thus the early universe can be considered to be highly ordered. Note that the uniformity of this early near-equilibrium state has been explained by the theory of cosmic inflation. According to this theory the universe (or, rather, its accessible part, a radius of 46 billion light years around Earth) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to be such that these fluctuations went through quantum decoherence, so that they became uncorrelated for any practical use. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics; different decoherent states ultimately evolved to different specific arrangements of galaxies and stars.
The universe is apparently an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had the universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and then begin to shrink. Moreover, a closed universe is finite. It is unclear what would happen to the second law of thermodynamics in such a case. One could imagine at least two different scenarios, though in fact only the first one is plausible, as the other requires a highly smooth cosmic evolution, contrary to what is observed: The broad consensus among the scientific community today is that smooth initial conditions lead to a highly non-smooth final state, and that this is in fact the source of the thermodynamic arrow of time. Gravitational systems tend to gravitationally collapse to compact bodies such as black holes (a phenomenon unrelated to wave function collapse), so the universe would end in a Big Crunch that is very different from a Big Bang run in reverse, since the distribution of the matter would be highly non-smooth; as the universe shrinks, such compact bodies merge into larger and larger black holes. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the second scenario described below), and consists mostly of black holes rather than free particles. A highly controversial view is that instead, the arrow of time will reverse. The quantum fluctuations—which in the meantime have evolved into galaxies and stars—will be in superposition in such a way that the whole process described above is reversed—i.e., the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a Big Crunch which is similar to its beginning in the Big Bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy must decrease close to the end of the universe, so that the second law of thermodynamics reverses when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease—because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it is that as distant particles arrive, more and more order is revealed, because these particles are highly correlated with particles that arrived earlier. In this scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe comes to a halt, and will later be reversed. In the first, more widely accepted scenario, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time.
Physical sciences
Statistical mechanics
Physics
1833705
https://en.wikipedia.org/wiki/Tropical%20fish
Tropical fish
Tropical fish are fish found in aquatic tropical environments around the world. Fishkeepers often keep tropical fish in freshwater and saltwater aquariums. The term "tropical fish" does not refer to a taxonomic group, but is rather a general term for fish found in such environments, particularly those kept in aquariums. Aquarium fish Tropical fish is a term commonly used to refer to fish that are kept in heated aquariums. Freshwater tropical fish are more commonly kept than saltwater tropical fish due to the ready availability of fresh water sources, such as tap water, whereas salt water is not commonly available and has to be recreated by adding sea salt to fresh water. Salt water has to be monitored to maintain the correct salinity because of the effects of evaporation, whereas freshwater tropical aquariums can be maintained by simply topping up with fresh water. Tropical fish are popular choices for aquariums due to their often bright coloration, which typically derives from both pigmented cells and iridescent cells. Tropical fish may include wild-caught specimens and individuals born in captivity, including lines selectively bred for special physical features, such as long fins, or for particular colorations, such as albinism. Some fish may be hybrids of more than one species. Freshwater tropical fish Most fish that are sold as tropical fish are freshwater species. Most species available are bred on fish farms in the Far East and Florida, where tropical temperatures make commercial production more viable. Mass production of tropical fish on farms has made many inexpensive fish available to aquarists. Tropical freshwater fish are the most popular group of fish because of their low price and the ease of keeping them in aquaria. Some species are difficult to breed in captivity and so are still sourced from the wild; these species are generally more expensive. Among the bred-in-captivity species, the most expensive freshwater species include arowanas and flowerhorn cichlids. Some male flowerhorns are sterile due to extensive crossbreeding. Saltwater tropical fish Marine fish that are sold as tropical fish are generally sourced from the wild, usually from coral reefs around the world, because only a few species of marine fish have been successfully bred in captivity with any regularity. The price of marine fish, coupled with the difficulty of keeping them alive in aquaria, makes them a less popular choice for aquarists. However, because of the more vivid colours, patterns and behaviour of marine fish compared to freshwater fish, they are still reasonably popular. Advances in filtration technology, an increase in available knowledge on how to maintain marine fish, and the increasing number of aquarium-bred species are producing a gradual rise in their popularity. Coral reef tropical fish Many marine tropical fish, particularly those of interest to fishkeepers, are those that live among or in close relation to coral reefs. Coral reefs form complex ecosystems with tremendous biodiversity. Among ocean inhabitants, tropical fish stand out as particularly colorful. Hundreds of species can exist in a small area of a healthy reef, many of them hidden or well camouflaged. Reef fish have developed many ingenious specialisations adapted to survival on the reefs. Some recreational scuba divers keep lists of fish species they have observed while diving, especially in tropical marine environments.
Coral reefs occupy less than 1% of the surface area of the world's oceans, yet they provide a home for 25% of all marine fish species. Reef habitats are a sharp contrast to the open-water habitats that make up the other 99% of the world's oceans. However, loss and degradation of coral reef habitat, increasing pollution, and overfishing, including the use of destructive fishing practices, are threatening the survival of the coral reefs and the associated reef fish.
Biology and health sciences
Fishes by habitat
Animals
1834637
https://en.wikipedia.org/wiki/Enguri%20Dam
Enguri Dam
The Enguri Dam is a hydroelectric dam on the Enguri River in Tsalenjikha, Georgia. Currently, it is the world's second highest concrete arch dam, with a height of 271.5 metres. It is located north of the town of Jvari. It is part of the Enguri hydroelectric power station (HES), which is partially located in Abkhazia. History Soviet First Secretary Nikita Khrushchev initially proposed a major dam and hydroelectric power scheme on the Bzyb River, as his favourite resort was located near the mouth of that river at Pitsunda. However, his experts informed him that a dam built on the Bzyb River would have caused catastrophic beach erosion at Pitsunda, so in the end the dam was built on the Enguri River instead, where the impact upon the coastline was assessed to be considerably less pronounced. Construction of the Enguri Dam began in 1961. The dam became temporarily operational in 1978 and was completed in 1987. In 1994, the dam was inspected by engineers of Hydro-Québec, who found that it was "in a rare state of dilapidation". In 1999, the European Commission granted €9.4 million to Georgia for urgent repairs at the Enguri HES, including replacing the stoplog at the arch dam on the Georgian side and refurbishing one of the five generators of the power station on the Abkhaz side. In total, €116 million in loans was granted by the EBRD, the European Union, the Japanese Government, KfW and the Government of Georgia. In 2011 the European Investment Bank (EIB) loaned €20 million in order to complete the rehabilitation of the Enguri hydropower plant and to ensure safe water evacuation towards the Black Sea at the Vardnili hydropower cascade. In the early 1980s, a series of radio relays were built to connect the Enguri Dam with the Khudoni Dam, which was under construction. The relays were in remote territory with no access to electricity, and thus were powered by a series of eight radioisotope thermoelectric generators (RTGs). However, the Khudoni Dam's construction was stopped as Georgian independence from the Soviet Union drew near. The stations were abandoned and eventually dismantled, and their RTGs were lost at this time. Two were rediscovered in 1998, causing no injuries. Two more were found in 1999, again without injuries or significant radiation exposure. Two more were rediscovered in 2001, which led to the Lia radiological accident. The other two sources remain unaccounted for. Technical features The Enguri hydroelectric power station (HES) is a cascade of hydroelectric facilities comprising, in addition to the dam–diversion installation of the Enguri HES proper, the near-dam installation of Perepad HES-1 and three similar channel installations, Perepad HES-2, -3 and -4, located on the tailrace emptying into the Black Sea. While the arch dam is located on Georgian-controlled territory in Upper Svanetia, the power station is located in the Gali District of breakaway Abkhazia. Enguri HES has 20 turbines with a nominal capacity of 66 MW each, resulting in a total capacity of 1,320 MW. Its average annual output is 3.8 TWh, which was approximately 46% of the total electricity supply in Georgia as of 2007. According to the 1992 agreement, Abkhazia gets 40% of the output and the rest of Georgia gets 60%; however, in the late 2010s Abkhazian consumption increased significantly, driven in part by bitcoin mining. The facility's arch dam, located at the town of Jvari, was inscribed in the list of cultural heritage of Georgia in 2015.
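A quick consistency check of the capacity and output figures quoted above (a sketch in Python; the derived capacity factor is an inference from those numbers, not a sourced figure):

```python
turbines, mw_each = 20, 66
capacity_mw = turbines * mw_each            # 20 x 66 MW = 1,320 MW, as stated
annual_twh = 3.8                            # average annual output, TWh
max_twh = capacity_mw * 8760 / 1e6          # theoretical output at 100% utilisation

print(f"total capacity: {capacity_mw} MW")
print(f"implied capacity factor: {annual_twh / max_twh:.0%}")  # roughly 33%
```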
Technology
Dams
null
1835568
https://en.wikipedia.org/wiki/Titanium%20nitride
Titanium nitride
Titanium nitride (TiN; sometimes known as tinite) is an extremely hard ceramic material, often used as a physical vapor deposition (PVD) coating on titanium alloys, steel, carbide, and aluminium components to improve the substrate's surface properties. Applied as a thin coating, TiN is used to harden and protect cutting and sliding surfaces, for decorative purposes (for its golden appearance), and as a non-toxic exterior for medical implants. In most applications a coating only a few micrometres thick is applied. Characteristics TiN has a Vickers hardness of 1800–2100, a high modulus of elasticity, a thermal expansion coefficient of about 9.35 × 10⁻⁶ K⁻¹, and a superconducting transition temperature of 5.6 K. TiN oxidizes at 800 °C in a normal atmosphere. It is chemically stable at 20 °C, according to laboratory tests, but can be slowly attacked by concentrated acid solutions at elevated temperatures. TiN has a brown color in bulk, and appears gold when applied as a coating. Depending on the substrate material and surface finish, TiN has a coefficient of friction ranging from 0.4 to 0.9 against another TiN surface (non-lubricated). Typical TiN has a crystal structure of the NaCl type, with a roughly 1:1 stoichiometry; TiNx compounds with x ranging from 0.6 to 1.2 are, however, thermodynamically stable. TiN becomes superconducting at cryogenic temperatures, with a critical temperature of up to 6.0 K for single crystals. Superconductivity in thin-film TiN has been studied extensively, with the superconducting properties varying strongly depending on sample preparation, up to complete suppression of superconductivity at a superconductor–insulator transition. A thin film of TiN was chilled to near absolute zero, converting it into the first known superinsulator, with resistance suddenly increasing by a factor of 100,000. Natural occurrence Osbornite is a very rare natural form of titanium nitride, found almost exclusively in meteorites. Uses A well-known use for TiN coating is for edge retention and corrosion resistance on machine tooling, such as drill bits and milling cutters, often improving their lifetime by a factor of three or more. Because of its metallic gold color, TiN is used to coat costume jewelry and automotive trim for decorative purposes. TiN is also widely used as a top-layer coating, usually over nickel- or chromium-plated substrates, on consumer plumbing fixtures and door hardware. As a coating it is used in aerospace and military applications, and to protect the sliding surfaces of suspension forks of bicycles and motorcycles, as well as the shock shafts of radio-controlled cars. TiN is also used as a protective coating on the moving parts of many rifles and semi-automatic firearms, as it is extremely durable. It is also extremely smooth, which makes removing carbon build-up easy. TiN is non-toxic, meets FDA guidelines, and has seen use in medical devices such as scalpel blades and orthopedic bone-saw blades, where sharpness and edge retention are important. TiN coatings have also been used in implanted prostheses (especially hip replacement implants) and other medical implants. Though less visible, thin films of TiN are also used in microelectronics, where they serve as a conductive connection between the active device and the metal contacts used to operate the circuit, while acting as a diffusion barrier to block the diffusion of the metal into the silicon.
In this context, TiN is classified as a "barrier metal" (electrical resistivity ~25 μΩ·cm), even though it is clearly a ceramic from the perspective of chemistry or mechanical behavior. Recent chip designs at the 45 nm technology node and beyond also make use of TiN as a "metal" for improved transistor performance. In combination with gate dielectrics (e.g. HfSiO4) that have a higher permittivity than standard SiO2, the gate length can be scaled down with low leakage, higher drive current and the same or better threshold voltage. Additionally, TiN thin films are currently under consideration for coating zirconium alloys for accident-tolerant nuclear fuels. It is also used as a coating on some compression driver diaphragms to improve performance. Owing to their high biostability, TiN layers may also be used as electrodes in bioelectronic applications, such as intelligent implants or in-vivo biosensors that have to withstand the severe corrosion caused by body fluids. TiN electrodes have already been applied in the subretinal prosthesis project as well as in biomedical microelectromechanical systems (BioMEMS). Fabrication The most common methods of TiN thin-film creation are physical vapor deposition (PVD, usually sputter deposition, cathodic arc deposition or electron-beam heating) and chemical vapor deposition (CVD). In both methods, pure titanium is sublimed and reacted with nitrogen in a high-energy, vacuum environment. TiN film may also be produced on Ti workpieces by reactive growth (for example, annealing) in a nitrogen atmosphere. PVD is preferred for steel parts because CVD deposition temperatures exceed the austenitizing temperature of steel. TiN layers are also sputtered on a variety of higher-melting-point materials such as stainless steels, titanium and titanium alloys. Its high Young's modulus (values between 450 and 590 GPa have been reported in the literature) means that thick coatings tend to flake away, making them much less durable than thin ones. Titanium nitride coatings can also be deposited by thermal spraying, whereas TiN powders are produced by nitridation of titanium with nitrogen or ammonia at 1200 °C. Bulk ceramic objects can be fabricated by packing powdered metallic titanium into the desired shape, compressing it to the proper density, then igniting it in an atmosphere of pure nitrogen. The heat released by the chemical reaction between the metal and gas is sufficient to sinter the nitride reaction product into a hard, finished item. See powder metallurgy. Other commercial variants There are several commercially used variants of TiN that have been developed since 2010, such as titanium carbon nitride (TiCN), titanium aluminium nitride (TiAlN or AlTiN), and titanium aluminium carbon nitride, which may be used individually or in alternating layers with TiN. These coatings offer similar or superior enhancements in corrosion resistance and hardness, and additional colors ranging from light gray to nearly black to a dark, iridescent bluish-purple, depending on the exact process of application. These coatings are becoming common on sporting goods, particularly knives and handguns, where they are used for both aesthetic and functional reasons. As a constituent in steel Titanium nitride is also produced intentionally within some steels, by judicious addition of titanium to the alloy. TiN forms at very high temperatures because of its very low enthalpy of formation, and even nucleates directly from the melt in secondary steel-making.
It forms discrete, micrometre-sized cubic particles at grain boundaries and triple points, and prevents grain growth by Ostwald ripening up to very high homologous temperatures. Titanium nitride has the lowest solubility product of any metal nitride or carbide in austenite, a useful attribute in microalloyed steel formulas.
Physical sciences
Nitride salts
Chemistry
1222689
https://en.wikipedia.org/wiki/Qualitative%20inorganic%20analysis
Qualitative inorganic analysis
Classical qualitative inorganic analysis is a method of analytical chemistry which seeks to find the elemental composition of inorganic compounds. It is mainly focused on detecting ions in an aqueous solution; therefore, materials in other forms may need to be brought to this state before using standard methods. The solution is then treated with various reagents to test for reactions characteristic of certain ions, which may cause color change, precipitation and other visible changes. Physical appearance of some inorganic compounds Detecting cations According to their properties, cations are usually classified into six groups. Each group has a common reagent which can be used to separate them from the solution. To obtain meaningful results, the separation must be done in the sequence specified below, as some ions of an earlier group may also react with the reagent of a later group, causing ambiguity as to which ions are present. This happens because cationic analysis is based on the solubility products of the ions: as a cation reaches the optimum concentration needed for precipitation, it precipitates, allowing us to detect it. The division and precise details of separating into groups vary slightly from one source to another; given below is one of the commonly used schemes. 1st analytical group of cations The 1st analytical group of cations consists of ions which form insoluble chlorides. As such, the group reagent to separate them is hydrochloric acid, usually used at a concentration of 1–2 M. Concentrated HCl must not be used, because it forms a soluble complex ([PbCl4]2−) with Pb2+; consequently, the Pb2+ ion would go undetected. The most important cations in the 1st group are Ag+, Hg22+ (mercury(I)), and Pb2+. The chlorides of these elements cannot be distinguished from each other by their colour: they are all white solid compounds. PbCl2 is soluble in hot water, and can therefore be differentiated easily. Ammonia is used as a reagent to distinguish between the other two. While AgCl dissolves in ammonia (due to the formation of the complex ion [Ag(NH3)2]+), Hg2Cl2 gives a black precipitate consisting of a mixture of mercuric amidochloride and elemental mercury. Furthermore, AgCl is reduced to silver under light, which gives samples a violet colour. The silver–ammonia complex can react with bismuth ions and iodide to generate an orange or brown Ag2BiI5 precipitate. PbCl2 is far more soluble than the chlorides of the other two ions, especially in hot water. Therefore, HCl in concentrations which completely precipitate Hg22+ and Ag+ may not be sufficient to do the same to Pb2+. Higher concentrations of Cl− cannot be used for the aforementioned reasons. Thus, a filtrate obtained after 1st group analysis contains an appreciable concentration of Pb2+, enough to give the test of the second group, viz. formation of an insoluble sulfide. For this reason, Pb2+ is usually also included in the 2nd analytical group. A signature reaction of lead ions involves the formation of a yellow lead chromate precipitate upon treatment with chromate ions. This precipitate does not dissolve in ammonia (unlike Cu(II) and Ag(I)) or acetic acid (unlike Cu(II) and Hg(II)). This group can be determined by dissolving the salt in water and then adding dilute hydrochloric acid. A white precipitate is formed, to which ammonia is then added.
If the precipitate is insoluble, then Pb2+ is present; if the precipitate is soluble, then Ag+ is present; and if the white precipitate turns black, then Hg22+ is present. Hg22+ ions, after oxidation in the presence of chloride ions to HgCl42−, can form a characteristic orange-red precipitate of Cu2HgI4 upon the addition of Cu2+ and I−. Confirmation test for Pb2+: Pb2+ + 2 KI → PbI2 + 2 K+ Pb2+ + K2CrO4 → PbCrO4 + 2 K+ Confirmation test for Ag+: Ag+ + KI → AgI + K+ 2 Ag+ + K2CrO4 → Ag2CrO4 + 2 K+ Confirmation test for Hg22+: Hg22+ + 2 KI → Hg2I2 + 2 K+ Hg22+ + 2 NaOH → Hg2O + 2 Na+ + H2O 2nd analytical group of cations The 2nd analytical group of cations consists of ions which form acid-insoluble sulfides. Cations in the 2nd group include: Cd2+, Bi3+, Cu2+, As3+, As5+, Sb3+, Sb5+, Sn2+, Sn4+ and Hg2+. Pb2+ is usually also included here in addition to the first group. Although these methods refer to solutions that contain sulfide (S2−), these solutions actually only contain H2S and bisulfide (HS−); sulfide (S2−) does not exist in appreciable concentrations in water. The reagent used can be any substance that gives S2− ions in such solutions; most commonly used are hydrogen sulfide (at 0.2–0.3 M) and thioacetamide (at 0.3–0.6 M). Since the addition of hydrogen sulfide can often prove to be a cumbersome process, sodium sulfide can also serve the purpose. The test with the sulfide ion must be conducted in the presence of dilute HCl. Its purpose is to keep the sulfide ion concentration at a required minimum, so as to allow the precipitation of 2nd group cations alone. If dilute acid is not used, the early precipitation of 4th group cations (if present in solution) may occur, thus leading to misleading results. Acids besides HCl are rarely used: sulfuric acid may lead to the precipitation of the 5th group cations, whereas nitric acid oxidises the sulfide ion in the reagent, forming colloidal sulfur. The precipitates of these cations are almost indistinguishable, except for CdS, which is yellow. All the precipitates, except for HgS, are soluble in dilute nitric acid. HgS is soluble only in aqua regia, which can be used to separate it from the rest. The action of ammonia is also useful in differentiating the cations: CuS dissolves in ammonia forming an intense blue solution, whereas CdS dissolves forming a colourless solution. The sulfides of As3+, As5+, Sb3+, Sb5+, Sn2+ and Sn4+ are soluble in yellow ammonium sulfide, where they form polysulfide complexes. This group is determined by dissolving the salt in water and then adding dilute hydrochloric acid (to make the medium acidic) followed by hydrogen sulfide gas; usually this is done by passing hydrogen sulfide through the filtrate remaining from the detection of 1st group cations. If it forms a reddish-brown or black precipitate, then Bi3+, Cu2+, Hg2+ or Pb2+ is present; if it forms a yellow precipitate, then Cd2+ or Sn4+ is present; if it forms a brown precipitate, then Sn2+ must be present; and if a red-orange precipitate is formed, then Sb3+ is present. Pb2+ + K2CrO4 → PbCrO4 + 2 K+ Confirmation test for copper: 2 Cu2+ + K4[Fe(CN)6] + CH3COOH → Cu2[Fe(CN)6] + 4 K+ Cu2+ + 2 NaOH → Cu(OH)2 + 2 Na+ Cu(OH)2 → CuO + H2O (on heating; endothermic) (Another very sensitive test for copper utilizes the fact that Cu2+ can serve as a catalyst for the oxidation of thiosulfate ions by Fe3+ ions. In the absence of Cu2+, Fe3+ forms the purple complex Fe(S2O3)2− without undergoing redox; if the added sample contains Cu2+, the solution rapidly discolors.)
Confirmation test for bismuth: Bi3+ + 3 KI (in excess) → BiI3 + 3 K+ BiI3 + KI → K[BiI4] Bi3+ + H2O (in excess) → BiO+ + 2 H+ (Bismuth ions can form the bright yellow complex Bi(tu)33+ in the presence of thiourea under acidic conditions, which can be precipitated as the orange-red Bi(tu)3I3·Cu(tu)3I in the presence of Cu2+ and I−, and this can also act as a test for bismuth.) Confirmation test for mercury: Hg2+ + 2 KI (in excess) → HgI2 + 2 K+ HgI2 + 2 KI → K2[HgI4] (red precipitate dissolves) 2 Hg2+ + SnCl2 → 2 Hg + SnCl4 (white precipitate turns gray) (Hg2+ may otherwise be detected via Cu2HgI4 formation; see Hg22+ among the 1st group cations.) 3rd analytical group of cations The 3rd analytical group of cations includes ions which form hydroxides that are insoluble even at low concentrations. Cations in the 3rd group are, among others: Fe2+, Fe3+, Al3+, and Cr3+. The group is determined by making a solution of the salt in water and adding ammonium chloride and ammonium hydroxide. Ammonium chloride is added to ensure a low concentration of hydroxide ions. The formation of a reddish-brown precipitate indicates Fe3+; a gelatinous white precipitate indicates Al3+; and a green precipitate indicates Cr3+ or Fe2+. These last two are distinguished by adding sodium hydroxide in excess to the green precipitate: if the precipitate dissolves, Cr3+ is indicated; otherwise, Fe2+ is present. 4th analytical group of cations The 4th analytical group of cations includes ions that precipitate as sulfides at pH 9. The reagent used is ammonium sulfide or 0.1 M Na2S added to the ammonia/ammonium chloride solution used to detect group 3 cations. It includes: Zn2+, Ni2+, Co2+, and Mn2+. Zinc will form a white precipitate, nickel and cobalt a black precipitate, and manganese a brick/flesh-colored precipitate. Dimethylglyoxime can be used to confirm the presence of nickel, while ammonium thiocyanate in ether will turn blue in the presence of cobalt. This group is sometimes denoted as IIIB, since groups III and IV are tested for at the same time, with the addition of sulfide being the only difference. It includes ions which form sulfides that are insoluble at high concentrations. The reagent used is H2S in the presence of NH4OH. NH4OH is used to increase the concentration of the sulfide ion: hydroxide ions from NH4OH combine with H+ ions from H2S, which shifts the dissociation equilibrium H2S ⇌ 2 H+ + S2− in favor of the ionized form. The group contains Zn2+, Mn2+, Ni2+ and Co2+. 5th analytical group of cations Ions in the 5th analytical group of cations form carbonates that are insoluble in water. The reagent usually used is (NH4)2CO3 (at around 0.2 M), at a neutral or slightly basic pH. All the cations in the previous groups are separated beforehand, since many of them also form insoluble carbonates. The most important ions in the 5th group are Ba2+, Ca2+, and Sr2+. After separation, the easiest way to distinguish between these ions is by testing flame colour: barium gives a yellow-green flame, calcium gives brick red, and strontium, crimson red. 6th analytical group of cations Cations which are left after carefully separating the previous groups are considered to be in the sixth analytical group. The most important ones are Mg2+, Li+, Na+ and K+. All the ions are distinguished by flame color: lithium gives a red flame, sodium gives bright yellow (even in trace amounts), potassium gives violet, and magnesium, colorless (although magnesium metal burns with a bright white flame).
Magnesium can also be distinguished from the other cations in this group by adding sodium hydroxide to drive the pH to 11 or higher, which selectively precipitates Mg(OH)2. Detecting anions 1st analytical group of anions The 1st group of anions consists of CO32−, HCO3−, CH3COO−, S2−, SO32−, S2O32− and NO2−. The reagent for Group 1 anions is dilute hydrochloric acid (HCl) or dilute sulfuric acid (H2SO4). Carbonates give a brisk effervescence with dilute H2SO4 due to the release of CO2, a colorless gas which turns limewater milky due to the formation of CaCO3 (carbonatation). The milkiness disappears on passing an excess of the gas through the lime water, due to the formation of Ca(HCO3)2. Acetates give the vinegar-like smell of CH3COOH when treated with dilute H2SO4 and heated. A blood-red colouration is produced upon addition of yellow FeCl3, due to the formation of iron(III) acetate. Sulfides give the rotten-egg smell of H2S when treated with dilute H2SO4. The presence of sulfide is confirmed by adding lead(II) acetate paper, which turns black due to the formation of PbS. Sulfides also turn solutions of red sodium nitroprusside purple. Sulfites produce SO2 gas, which smells of burning sulfur, when treated with dilute acid. They turn acidified K2Cr2O7 from orange to green. Thiosulfates produce SO2 gas when treated with dilute acid; in addition, they form a cloudy precipitate of sulfur. Nitrites give reddish-brown fumes of NO2 when treated with dilute H2SO4. These fumes cause a solution of potassium iodide (KI) and starch to turn blue. 2nd analytical group of anions The 2nd group of anions consists of Cl−, Br−, I−, NO3− and C2O42−. The group reagent for Group 2 anions is concentrated sulfuric acid (H2SO4). After addition of the acid, chlorides, bromides and iodides will form precipitates with silver nitrate. The precipitates are white, pale yellow, and yellow, respectively. The silver halides formed are completely soluble, partially soluble, or not soluble at all, respectively, in aqueous ammonia solution. Chlorides are confirmed by the chromyl chloride test. When the salt is heated with K2Cr2O7 and concentrated H2SO4, red vapours of chromyl chloride (CrO2Cl2) are produced. Passing this gas through a solution of NaOH produces a yellow solution of Na2CrO4. The acidified solution of Na2CrO4 gives a yellow precipitate with the addition of (CH3COO)2Pb. Bromides and iodides are confirmed by the layer test. A sodium carbonate extract is made from the solution containing bromide or iodide, and CHCl3 or CS2 is added to the solution, which separates into two layers: an orange colour in the organic layer indicates the presence of Br−, and a violet colour indicates the presence of I−. Nitrates give brown fumes with concentrated H2SO4 due to the formation of NO2; this is intensified upon adding copper turnings. The nitrate ion is confirmed by adding an aqueous solution of the salt to FeSO4 and pouring concentrated H2SO4 slowly along the sides of the test tube, which produces a brown ring around the walls of the tube, at the junction of the two liquids, caused by the formation of the nitroso-iron(II) "brown ring" complex, [Fe(H2O)5(NO)]2+. Upon treatment with concentrated sulfuric acid, oxalates yield colourless CO2 and CO gases. These gases burn with a bluish flame and turn lime water milky. Oxalates also decolourise KMnO4 and give a white precipitate with CaCl2. 3rd analytical group of anions The 3rd group of anions consists of SO42−, PO43− and BO33−. They react neither with concentrated nor with dilute H2SO4. Sulfates give a white precipitate of BaSO4 with BaCl2, which is insoluble in any acid or base.
Phosphates give a yellow crystalline precipitate upon the addition of HNO3 and ammonium molybdate and heating of the solution. Borates give a green flame, characteristic of ethyl borate, when ignited with concentrated H2SO4 and ethanol. Modern techniques Qualitative inorganic analysis is now used only as a pedagogical tool. Modern techniques such as atomic absorption spectroscopy and ICP-MS are able to quickly detect the presence and concentrations of elements using a very small amount of sample. Sodium carbonate test The sodium carbonate test (not to be confused with the sodium carbonate extract test) is used to distinguish between some common metal ions, which are precipitated as their respective carbonates. The test can distinguish between copper (Cu), iron (Fe), and calcium (Ca), zinc (Zn) or lead (Pb). Sodium carbonate solution is added to the salt of the metal. A blue precipitate indicates the Cu2+ ion. A dirty green precipitate indicates the Fe2+ ion. A yellow-brown precipitate indicates the Fe3+ ion. A white precipitate indicates the Ca2+, Zn2+, or Pb2+ ion. The compounds formed are, respectively, basic copper carbonate, iron(II) carbonate, iron(III) oxide, calcium carbonate, zinc carbonate, and lead(II) carbonate. This test is used to precipitate the ion present, as almost all carbonates are insoluble. While this test is useful for telling these cations apart, it fails if other ions are present, because most metal carbonates are insoluble and will precipitate. In addition, calcium, zinc, and lead ions all produce white precipitates with carbonate, making it difficult to distinguish between them. Instead of sodium carbonate, sodium hydroxide may be added; this gives nearly the same colours, except that lead and zinc hydroxides are soluble in excess alkali and can hence be distinguished from calcium. See qualitative inorganic analysis for the complete sequence of tests used for qualitative cation analysis.
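The colour-to-ion logic of the sodium carbonate test above lends itself to a simple lookup. A minimal sketch in Python (the table encodes only the colours and ions named in the text; the function itself is purely illustrative):

```python
# Candidate cations for each sodium carbonate precipitate colour.
PRECIPITATE_COLOURS = {
    "blue": ["Cu2+"],
    "dirty green": ["Fe2+"],
    "yellow-brown": ["Fe3+"],
    "white": ["Ca2+", "Zn2+", "Pb2+"],  # ambiguous: needs a follow-up test
}

def candidate_ions(colour: str) -> list[str]:
    """Return the cations consistent with an observed precipitate colour."""
    return PRECIPITATE_COLOURS.get(colour.lower(), [])

print(candidate_ions("white"))
# ['Ca2+', 'Zn2+', 'Pb2+'] -- as noted above, excess NaOH then separates
# Zn2+ and Pb2+ (soluble hydroxides) from Ca2+.
```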
Physical sciences
Basics_2
Chemistry
1223797
https://en.wikipedia.org/wiki/Machairodontinae
Machairodontinae
Machairodontinae is an extinct subfamily of carnivoran mammals of the family Felidae (true cats). They were found in Asia, Africa, North America, South America, and Europe. The earliest species are known from the Middle Miocene, and the last surviving species (belonging to the genera Smilodon and Homotherium) became extinct around the Late Pleistocene–Holocene transition (roughly 13,000–10,000 years ago). The Machairodontinae contain many of the extinct predators commonly known as "saber-toothed cats", including the famed genus Smilodon and others like Megantereon, as well as cats with more modest increases in the size and length of their maxillary canines, like Homotherium. The name means "dagger-tooth", from Greek μάχαιρα (machaira), sword. Sometimes, other carnivorous mammals with elongated teeth are also called saber-toothed cats, although they do not belong to the felids. Besides the machairodonts, other saber-toothed predators also arose in the nimravids, barbourofelids, machaeroidines, hyaenodonts and even in two groups of metatherians (the thylacosmilid sparassodonts and the deltatheroideans). Etymology The name combines Ancient Greek μάχαιρα (máchaira), meaning a sword or dagger, with ὀδόντος (odóntos), meaning "tooth". Evolution Family Felidae The Machairodontinae originated in the middle Miocene of Europe. The early felid Pseudaelurus quadridentatus showed a trend towards elongated upper canines, and is believed to be at the base of machairodontine evolution. The earliest known machairodont genus is the middle Miocene Miomachairodus from Africa and Turkey. Until the late Miocene, machairodontines co-existed at several places together with barbourofelids, archaic large carnivores that also bore long sabre-teeth. Traditionally, three different tribes of machairodontines were recognized: the Smilodontini, with typical dirk-toothed forms such as Megantereon and Smilodon; the Machairodontini or Homotherini, with scimitar-toothed cats such as Machairodus or Homotherium; and the Metailurini, containing genera such as Dinofelis and Metailurus. However, some in the past have regrouped the Metailurini within the other felid subfamily, the Felinae, along with all modern cats. A 2022 phylogenetic study suggests a polyphyletic relationship between Metailurini and Smilodontini, with the genus Paramachairodus being ancestral to both groups. Based on mitochondrial DNA sequences extracted from fossils, machairodonts diverged from living cats around 20 million years ago, with the last surviving machairodont genera Homotherium and Smilodon estimated to have diverged from each other about 18 million years ago. The name "saber-toothed tigers" is misleading. Machairodonts were not in the same subfamily as tigers, there is no evidence that they had tiger-like coat patterns, and this broad group of animals did not all live or hunt in the same manner as the modern tiger. DNA analysis published in 2005 confirmed and clarified cladistic analysis in showing that the Machairodontinae diverged early from the ancestors of modern cats and are not closely related to any living feline species.
Classification Phylogeny The phylogenetic relationships of Machairodontinae have been summarized in cladogram form. Evolutionary history and origin of phenotype Until the discovery in the 1990s of the Late Miocene fossil depository known as Batallones-1, specimens of Smilodontini and Homotheriini ancestors were rare and fragmentary, so the evolutionary history of the saber-toothed phenotype, a phenotype affecting craniomandibular, cervical, and forelimb anatomy, was largely unknown. Prior to the excavation of Batallones-1, the predominating hypothesis was that the highly derived saber-toothed phenotype arose rapidly through pleiotropic evolution. Batallones-1 unearthed new specimens of Promegantereon ogygia, a Smilodontini ancestor, and Machairodus aphanistus, a Homotheriini ancestor, shedding light on this evolutionary history. (Though the Smilodontini ancestor was originally assigned to the genus Paramachairodus, it was later revised to the genus Promegantereon.) The leopard-sized P. ogygia (living 9.0 Ma) inhabited Spain (and perhaps additional territory). The current hypothesis for the evolution of the saber-toothed phenotype, made possible by Batallones-1, is that this phenotype arose gradually over time through mosaic evolution. Although the exact cause is uncertain, current findings support the hypothesis that a need for the rapid killing of prey was the principal pressure driving the development of the phenotype over evolutionary time. As indicated by high instances of broken teeth, the biotic environment of saber-toothed cats was one marked by intense competition. Broken teeth indicate the frequency at which teeth contact bone. Increased tooth-bone contact suggests either increased consumption of carcasses, rapid consumption of prey, or increased aggression over kills – all three of which point to decreased prey availability, heightening competition between predators. Such a competitive environment would favor the faster killing of prey, because if prey is taken away before consumption (such as by out-competing), the energetic cost of capturing that prey is not reimbursed, and, if this occurs often enough in the lifetime of a predator, death by exhaustion or starvation would result. The earliest adaptations improving the speed at which prey was killed are present in the skull and mandible of P. ogygia and of M. aphanistus, and in the cervical vertebrae and forelimb of P. ogygia. They provide further morphological evidence for the importance of speed in the evolution of the saber-toothed phenotype. Machairodont diversity declined during the Pleistocene; by the Late Pleistocene, only two genera of machairodonts remained, Smilodon and the distantly related Homotherium, both largely confined to the Americas. These two genera became extinct around 13,000–10,000 years ago as part of the wave of extinctions of most large animals across the Americas. Fossil remains Skull The most studied section of the machairodont group is the skull, and specifically the teeth. With a large range of genera, good fossil representation, comparable modern relatives, diversity within the group, and a good understanding of the ecosystems inhabited, the machairodont subfamily provides one of the best means of research for the analysis of hypercarnivores, specialization, and the relationships between predator and prey. Machairodonts are divided into two types: dirk-toothed and scimitar-toothed. Dirk-toothed cats had elongated, narrow upper canines and generally had stocky bodies.
Scimitar-toothed cats had broader and shorter upper canines and a typically lithe body form with longer legs. The longer-toothed cats often had a bony flange that extended from their lower mandible. However, one genus, Xenosmilus, known only from two fairly complete fossils, broke this mould, possessing both the stout, heavy limbs associated with dirk-toothed cats and the stout canines of a scimitar-toothed cat. Carnivores reduced the number of their teeth as they specialized in eating meat instead of grinding plant or insect matter. Cats have the fewest teeth of any carnivore group, and machairodonts reduced the number even further. Most machairodonts retain six incisors, two canines, and six premolars in each jaw, with two molars in the upper jaw only. Some genera, such as Smilodon, bear only eight premolars, with one fewer on the mandible, leaving only four large premolars on the mandible along with two stunted canines and six stout incisors. The canines are curved back smoothly, and serrations are present, but are minor and wear away with age, leaving most middle-aged machairodonts (at about four or five years old) with no serrations. Hints in the bones such as these help paleontologists to estimate the age of an individual for population studies of an animal long extinct. Longer canines necessitate a larger gape. A lion with a gape of 95° could not bear canines that are nine inches long, because the gap between the tips of the lower and upper canines would be no larger than an inch or so, not enough to use for killing. Machairodonts, along with the other groups of animals that acquired similar teeth by convergent evolution, needed to change their skulls to accommodate the canines in several ways. The main inhibitors of a large gape for mammals are the temporalis and masseter muscles at the back of the jaw. These muscles have the capacity to be powerful and undergo a great degree of modification for varying bite forces, but are not very elastic due to their thickness, placement, and strength. To open the mouth wider, these species needed to make the muscles smaller and change their shape. The first step in this was to reduce the coronoid process. The masseter, and especially the temporalis, muscles insert on this jutting strip of bone, so reduction of this process meant the reduction of the muscles. Less mass for each muscle allowed greater elasticity and less resistance to a wide gape. Changing the shape of the temporalis muscle in this respect created a greater distance between the origin and insertion, so that the muscle became longer and more compact, which is generally a more suitable format for this type of stretching. This reduction led to a weaker bite. The skulls of machairodonts suggest another change in the shape of the temporalis muscle. The main constraint to opening the jaws is that the temporalis muscle will tear if it is stretched past a critical degree around the glenoid process when the mouth is opened. In modern felids, the occipital bone extends backward, but the temporalis muscles that attach to this surface are strained when opening the jaw wide as the muscle is wrapped around the glenoid process. To reduce the stretch of the temporalis muscle around the immovable process, machairodonts evolved a skull with a more vertical occipital bone. The domestic cat has a gape of 80°, while a lion has a gape of 91°. In Smilodon, the gape is 128°, and the angle between the ramus of the mandible and the occipital bone is 100°.
This angle is the major limiting factor of the gape, and reducing the angle of the occipital bone relative to the palate of the mouth, as seen in Smilodon, allowed the gape to increase further. Had the occipital bone not been stretched towards the palate, and closer to perpendicular, the gape would theoretically have been less, at roughly 113°. The skulls of many sabre-tooth predators, including machairodonts, are tall from top to bottom and short from front to back. The zygomatic arches are compressed, and the portion of the skull bearing facial features, such as the eyes, is higher, while the muzzle is shorter. These changes help to compensate for an increased gape. Machairodonts also had reduced lower canines, maintaining the distance between those in the upper and lower jaws.
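The interplay between gape and usable canine clearance described above can be made concrete with a deliberately simplified two-dimensional model in which the canine bases sit at a fixed distance from the jaw joint, so that opening the jaws separates them along a chord. All numbers below are hypothetical illustrations, not measurements, and real clearance depends on detailed skull geometry.

```python
import math

def canine_clearance(jaw_length_cm: float, gape_deg: float,
                     upper_canine_cm: float, lower_canine_cm: float) -> float:
    """Toy 2-D model: the canine bases sit jaw_length_cm from the jaw joint,
    so a gape of gape_deg separates them by the chord 2*L*sin(gape/2);
    subtracting both canine lengths approximates the tip-to-tip gap."""
    chord = 2 * jaw_length_cm * math.sin(math.radians(gape_deg) / 2)
    return chord - (upper_canine_cm + lower_canine_cm)

# Hypothetical Smilodon-like values: 20 cm joint-to-canine distance,
# 18 cm upper canines, reduced 3 cm lower canines.
print(round(canine_clearance(20, 128, 18, 3), 1))  # ~15.0 cm at a 128 degree gape
print(round(canine_clearance(20, 65, 18, 3), 1))   # ~0.5 cm at a modern-cat gape
```

Under these toy values, the same skull that gains roughly 15 cm of clearance at a 128° gape has essentially none at a modern cat's gape, which is the geometric point made above.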
Post-cranial skeleton The dirk-toothed machairodonts, including Smilodon, Megantereon, and Paramachairodus, are defined by sturdiness and strength, with the most primitive (Paramachairodus) being smaller and more lithe than the more advanced Smilodon; the intermediate Megantereon falls in between. With their short tarsi and metatarsi and heavy bodies, they were not stamina runners. When compared with the modern lion, their ribcages were barrel-like, with narrow anterior ends and expanded posterior ends. Their scapulae were very well developed, especially in Smilodon, to allow a larger surface area of attachment for massive shoulder and triceps muscles. The cervical vertebrae are very sturdy, and the attachments for muscles were powerful and strong. The lumbar section of the vertebral column was shortened. From the most primitive to the most advanced genera, the tails grew shorter and shorter, resulting in the bobcat-like tail of Smilodon. When viewing only postcranial remains, they are more similar in structure to modern bears than to modern cats. The scimitar-toothed machairodonts (Machairodontini, Homotherini and Metailurini) are a much more diverse group. The canines of this larger group are significantly shorter and generally stouter, but still much longer than in any modern cat. Because of the diversity of the genera, it is difficult to illustrate a specific type. The Machairodontini were the first among the machairodontines, and the felids overall, to attain near-lion size, and already showed impressive upper canines early on in their evolution in the Miocene, but apart from these retained a relatively cat-like morphology that was more similar to modern pantherines than to the more derived machairodontines of later periods. Machairodus appears to have been an excellent jumper. The homotherines were overall more specialized, and already the earliest taxa, like Lokotunjailurus, were remarkably long-legged and lean, though as large as a modern lion, a trend that was further magnified in the Pliocene-Pleistocene genus Homotherium, which was once thought to be plantigrade but has been shown to be digitigrade. Homotherium serum, the most derived known species, from the Pleistocene of North America, bore a sloped back that might have made it excellent at running long distances, similar to the living spotted hyena. It also had a well-developed visual cortex, a large nasal cavity that would have allowed for better oxygen intake, and smaller, only partially retractable claws that might have functioned like spikes for a better grip on the ground, all of which seems to point to a highly active lifestyle and cursoriality. Xenosmilus, however, a mid-Pleistocene homotherine from Florida and close relative of Homotherium, broke this trend in that it had both scimitar-like teeth and a bulky, strong build more typical of dirk-tooths. The third scimitar-toothed tribe, the Metailurini, bore overall similarity to modern cats, but were highly diverse in terms of morphology, with species ranging from a small cheetah to a small lion in size. Some had comparatively short, almost conical upper canines similar to those of modern cats, while some species bore strongly pronounced machairodontine features. However, in contrast to homotherines and smilodontines, even the most derived metailurines retained long tails, long hind legs and a long spine. On average, scimitar-toothed cats had more teeth than the average dirk-toothed machairodont, with six premolars on the mandible. When viewing only the postcranial remains of scimitar-toothed machairodonts, many of their forms were comparatively similar to modern pantherines (genera Panthera and Neofelis). Mummified specimen In 2020, the mummified remains of a 3-week-old cub belonging to the species Homotherium latidens were found in Yakutia, Russia. It is believed to be the first find of its kind, with the fur's color and softness surprising scientists. Derived anatomy and diet Bite strength The jaws of machairodonts, especially those of more derived species with longer canines, such as Smilodon and Megantereon, are unusually weak. Digital reconstructions of the skulls of lions and of Smilodon show that the latter would have fared poorly with the stresses of holding onto struggling prey. The main issue was the stresses suffered by the mandible: a strong force threatened to break the jaw as pressure was placed on its weakest points. Smilodon would have had one-third the bite force of a lion, had it used only its jaw muscles. However, the neck muscles that connected to the back of the skull were stronger and depressed the head, forcing the skull down. When the jaw was hyper-extended, the jaw muscles could not contract, but the neck muscles pressed the head down, forcing the canines into whatever resisted them. When the mouth was closed far enough, the jaw muscles could raise the mandible by some margin. Diet On occasion, the bone of a fossilised predator is preserved well enough to retain recognizable proteins that belong to the species it consumed when alive. Stable isotope analysis of these proteins has shown that Smilodon preyed mainly on bison and horses, and occasionally ground sloths and mammoths, while Homotherium often preyed on young mammoths and on other grazers, such as pronghorn antelope and bighorn sheep, when mammoths were not available. Examinations published in 2022 of tooth wear patterns on Smilodon and bite marks on the bones of the peccary Platygonus by Xenosmilus suggest that machairodonts were capable of efficiently stripping and de-fleshing a carcass of meat when feeding. They also show a degree of bone consumption on par with that of modern lions, which themselves can and regularly do eat smaller bones when consuming a meal. The face American paleontologist George Miller proposed a set of previously unconsidered features of the soft tissues of machairodonts, specifically Smilodon. The first change he suggested in the appearance of machairodonts was lower ears, or rather the illusion of lower ears due to the higher sagittal crest.
However, the positioning of the ears is always similar in modern felids, even in individuals that have crests comparable in size to those of sabretooth cats. The positioning of the pinnae, or outer ears, along with fur color, is dependent on the individual doing the reconstruction. Large or small, pointed or rounded, high or low, fossils do not record these characteristics, leaving them open to interpretation. Miller also suggested a pug-like nose. Aside from the pug and similar dogs, no modern carnivore exhibits a pug nose, the trait being an artificial one created through selective breeding; given how rare and unnatural the trait is, the pug-nose suggestion has been generally dismissed. Miller's rationale is based on the retraction of Smilodon's nasal bones. Criticism of Miller's theory compares the nasal bones of lions and tigers. Lions, when compared to tigers, also have strongly retracted nasal bones, but a lion's rhinarium, or external nose, is no more retracted than the tiger's. Thus, the pug nose of Smilodon proposed by Miller has little support in the physical structures of comparable animals. According to Antón, García-Perea and Turner (1998), the nostrils of living felids always extend to a similar position, independently of the length of the nasal bones, which in Smilodon falls within the range observed in modern species. The third idea proposed is the elongation of the lips by 50%. While his other hypotheses have been largely discarded, this last is used significantly in modern depictions. Miller argued that longer lips would allow the greater elasticity needed for biting prey with a wider gape. Although this argument has been disputed within the scientific community, it nevertheless remains popular with artists. Scientific criticism points out that the lips of modern cats, especially the larger species, display considerable elasticity, and that the usual lip length would stretch suitably despite the larger degree of opening; moreover, in living carnivores the lip line is always anterior to the masseter muscle, which in Smilodon was located just behind the carnassials. Regardless, reconstructions of Smilodon, Machairodus, and other species are often shown with long lips, resembling the jowls of large dogs. Studies of Homotherium and Smilodon suggest that scimitar-toothed machairodonts like Homotherium itself possessed upper lips and gum tissue that could effectively hide and protect their upper canines, a trait they shared with modern cat species, while Smilodon had canines that, due to their great length, remained partially exposed and protruded past the lips and chin even while the mouth was closed. Vocalizations Comparisons of the hyoid bones of Smilodon and lions show that the former, and possibly other machairodonts, could potentially have roared like their modern relatives. A 2023 study suggested that while machairodonts had the same number of hyoid bones as "roaring" cats, their shape was closer to that of "purring" cats. Social behavior Smilodon A 2009 study compared the ratios of social and solitary carnivores in reserves in South Africa and Tanzania, and how they responded to recorded sounds of dying prey, with the ratios among fossils of California's La Brea tar pits, a well-known fossil bed from the Pleistocene, to infer whether Smilodon was social or not. At one time, the La Brea tar pits consisted of deep tar in which animals became trapped. As they died, their calls attracted predators, which in turn also became caught.
It is considered the best Pleistocene fossil bed in North America for the number of animals caught and preserved in the tar, and may be similar to the situation created in the study. The assumption was that solitary carnivores would not approach the sources of such sounds, because of the danger of confrontation with other predators. Social carnivores, such as lions, have few other predators to fear, and will readily attend these calls. The study concluded that this latter situation most closely fit the ratio of animals found at the La Brea tar pits, and therefore that Smilodon was most likely social. Homotherium At Friesenhahn Cave, Texas, the remains of almost 400 juvenile mammoths were discovered along with skeletons of Homotherium. Homotherium groups have been suggested to have specialized in hunting young mammoths, and to have dragged the kills into secluded caves to eat out of the open. They also had excellent nocturnal vision, and hunting at night in the arctic regions would probably have been their prime hunting method. The modern lion is capable, in large numbers, of killing weakened adult and healthy subadult elephants, so the similarly sized Homotherium likely could have managed the same feat with juvenile mammoths. This is supported by isotopic analysis. But the idea that a cat, even one of very large size and possibly social, was able to cooperatively 'drag' a mammoth calf any real distance into a cave without damaging its teeth has drawn considerable criticism. Its sloped back and the powerful lumbar section of its vertebral column suggest a bear-like build, so it might have been capable of pulling weights, but broken canines, a fate suffered by Machairodus and Smilodon with some frequency, are not seen in Homotherium. Moreover, the bones of these young mammoths show the distinctive marks of Homotherium incisors, indicating that the cats could efficiently process most of the meat on a carcass and that it was they, and not scavengers, who dragged the carcasses into the caves. Examination of the bones also indicates that the carcasses of these mammoths were dismembered by the cats before being dragged away, indicating that Homotherium would disarticulate a kill to transport it to a safe area and prevent scavengers from claiming a hard-won meal. Evidence also shows the cats were able to effectively strip flesh from bone in a manner that left noticeable score marks. Paleopathology Machairodus is another genus with a few fossils suggesting a social nature: canines in these species are broken more often than in others and show signs of extensive healing afterward. A male Amphimachairodus giganteus from China housed by the Babiarz Institute of Paleontological Studies is an older individual with a broken canine, worn from usage after the break. However, the individual died of a severe nasal infection, a condition that a social predator would have had a better chance of surviving, so the skull can be interpreted in different ways. The adult canine teeth of juvenile Machairodus took an exceptionally long time to erupt and come into use, so until then, a juvenile was completely dependent on the care of its parents. In another example of paleopathology supporting the social hypothesis, a large number of Smilodon fossils from the La Brea tar pits feature hunting injuries. In addition to injuries resulting from strain while hunting, the more severe injuries strongly suggest a social nature.
Animals may have been crippled long after an injury healed, suffering swollen ankles, prominent limps, and limited mobility that persisted for years. One such case is a subadult that survived a shattered pelvis. The specimen would barely have been able to use the damaged limb and would have limped slowly, favoring the other three legs, completely unable to hunt on its own. A solitary predator surviving such a severe injury would have been a very rare occurrence. It is far more likely that such an animal would have been unable to move from a single spot on the ground for several months and might only have survived by being brought food or by dragging itself towards kills made by relatives. Rebuttals to the social hypothesis The question of sociality is still controversial. Strong support for the traditional concept of a solitary Smilodon is found in its brain. Most social predators, including humans, grey wolves, and lions, have brains that are slightly larger than those of their loner relatives. Smilodon had a relatively small brain, suggesting less ability for complex cooperative behaviors, such as hunting in groups. The broken bones still seem to support sociality; however, the best explanation for a solitary animal healing from serious wounds is that cats build up metabolic reserves that can be used in times of need. The cheetah is often viewed as a poor example because it is a specialized species with a more fragile physique than other cats. Larger, more sturdily built cat species, such as lions and leopards, have been observed to recover from severe injuries, such as broken jaws and torn muscles. Functionality of the sabers Stabbing It has been suggested that machairodonts used their saber teeth during hunting by grappling an animal, opening the mouth, and swinging the head down with enough force to puncture the animal's skin and flesh. It was once suggested that the saber teeth were used much like a knife. The canines seemed, initially, to be tools of great power and devastating ability, used for crushing vertebrae or for tearing open armored animals such as glyptodonts. However, teeth are made of unsupported enamel and would have been easily broken against hard material such as bone. It has also been argued that the mandible and an inability to open the mouth very wide would have been an impediment to effective stabbing. For such reasons, this concept has been rejected by the scientific community. Sexual characteristic Long canines could also have been the product of sexual selection, much like the mane of a lion, used for courting, sexual display, and social status. The canines are already well established as relatively fragile, and the jaw muscles as not strong, so any predatory function is uncertain. However, when a trait is adopted to enhance sexual attraction, typically only one sex, usually the male, displays the feature. In all machairodont species, both males and females have these canines, and, with only minor exceptions as in Machairodus, they are shaped similarly. Sexually selected traits typically also come with a size difference between the sexes, but male and female machairodonts appear to have been the same size. Also, this level of sexual selection seems extreme, given that an individual would be left severely impaired in eating and general function. Scavenging One suggestion is that most machairodonts were scavengers. This leaves the canines without function for the most part, and is often coupled with the hypothesis of sexual selection.
Many modern carnivores scavenge to a greater or lesser degree. A strong sense of smell and good hearing could have helped machairodonts find carcasses or steal the kills of other predators, such as dire wolves or short-faced bears, and sprinting would not have been needed, consistent with the stocky conformation of most machairodonts. Many modern cats show this mixture of traits. Lions are able-bodied hunters, but will steal when given the opportunity. Tigers and cougars bury their kills and return later to keep eating, even days later. All cats prefer killing the sick or injured, and there is a fine line between an animal so sick it cannot move and a dead animal. The abundance of Smilodon skeletons in the La Brea tar pits in California supports the hypothesis as well. The animals caught in the pits would have been dying or dead, the kind of meal a true hypercarnivore, such as a modern cheetah, would pass up. This hypothesis is the oldest, but is still considered viable. Opposition to this concept rests on many features of the cats' anatomy. The teeth are purely carnivorous, unable to grind plant material as the omnivorous teeth of dogs and bears do. The carnassials are shaped to efficiently slice flesh, not to crunch bone as they are in the modern spotted hyena. Since both sexes bear these canines and additional modifications to the skull are present, machairodonts were likely active predators that scavenged only opportunistically. The neck-biting hypotheses A more common and widely accepted view of machairodont hunting is the throat-shearing bite. Modern cats use a throat clamp, a bite positioned around the upper section of the throat, to suffocate the prey by compressing the windpipe. Their canines serve to puncture the skin and mostly allow a better grip, and do not do any significant damage to the prey. Machairodonts, by contrast, would have caused serious damage had they used the same technique as their modern relatives. The major drawback to these methods is that the large amount of blood spilled could be smelled by other nearby carnivores, such as other machairodonts or dire wolves. Predators often form competitive relationships in which dominance can shift from one species to the other, as seen in the modern lion and spotted hyena of Africa. In such situations, squabbles are not uncommon. The balance of power and dominance between these apex predators remains a mystery because of the social factor. Strength in numbers can be significant in these struggles. For example, dire wolves are thought to have traveled in small packs, and while individually subordinate, their numbers might have been sufficient to force a machairodont off a kill. However, the cat might in turn have been able to scavenge on kills made by dire wolves. Two solitary machairodonts would quickly develop a pecking order, with one individual dominant. Because of this uncertainty, a large part of the niche of machairodonts is still unknown. The several variations on this hypothesis all require a subdued and still animal. General "bite and retreat" The first hypothesis involving the sensitive neck is that the cat simply restrained the animal and then bit the neck, without much specificity as to location, to cause major blood loss, and then retreated to allow the animal to bleed to death. Stipulations include not biting the back of the neck, where contact with the vertebrae could break the teeth, but a deep bite anywhere else in the neck would prove fatal. This general bite would be used wherever it could be attained, and needs fewer predators.
When compared with the belly-shearing hypothesis, one Megantereon could kill a large deer, and possibly a horse, with little danger of breaking its canines. This is because the bite can be applied while the carnivore keeps its body largely behind the prey, avoiding flailing legs while still pressing down with its body weight to keep the prey still. It would have been a quick bite, suiting the ambush style of stalking and hunting implied by the heavy and strong bodies of most machairodonts. It would also have been possible for a lone machairodont to wound a large prey animal in this manner, then release and follow it until it collapsed from shock. The general bite-and-retreat hypothesis has been criticised because of its bloodiness and because the struggling prey would have attracted any predators and scavengers in the area. The idea that a single animal would wound, release, and follow a prey animal has drawn stronger objections. Cats rarely walk away from prey until they have eaten their fill, and a released animal would have risked being stolen by other predators. Xenosmilus in particular might have used this method, as all the teeth in its mouth were serrated and aligned in a way that formed a consistent cutting surface. "Bite and compress" When the animal is wounded with a bite from a machairodont (the exact placement of the blood vessels being of secondary importance in this hypothesis), the canines would have been inserted behind the windpipe, with the premolars encompassing the windpipe. This variation states that the machairodont compressed the windpipe after delivering the bite, serving to both suffocate and wound the prey animal. Puncturing large blood vessels in the throat and causing massive bleeding would hasten the death of the animal. Modern cats, and presumably the basal genera of all cats, such as Pseudaelurus and Proailurus, use the throat clamp as a common method of dispatching prey. The suffocation would inhibit sound from the panicked prey, a method used by modern cheetahs and leopards. The wound from the canines and the lack of air would then kill the prey animal. This method might, however, inhibit the full effect of the wound created by the canines. Keeping the canines in the wound would stifle the blood flow from the body and could keep the animal alive longer, even if the prey were unable to vocalize. There is no significant advantage to the longer canines in this method of killing when compared to the ancestral cats with their short, conical canines. If anything, the danger of breaking teeth held in the throat of a panicked animal, even one well restrained, outweighs the possible benefits, so this method has often been viewed as improbable. Careful "shearing bite" Another variation suggests the advanced machairodonts were highly specialized, enough to achieve the specific geometry needed to puncture the four major blood vessels in the throat of a prey animal in one bite. This hypothesis involves a careful bite to puncture the blood vessels, similar to, but more precise than, the bite-and-compress hypothesis, whereupon the machairodont would retreat and allow the animal to bleed to death very quickly. Though bloody, this method would take the shortest time to kill the animal of all the hypotheses. Because of the anatomical differences between the species machairodonts may have hunted, the geometry needed to kill a horse, for instance, might not work for a bison. This would require the genus, or even the specific species, to be highly specialized for one type of prey animal.
This might offer an explanation for their extinction, as the movement or extinction of that prey species would lead to the death of its specialist predator. The careful shearing bite seems an extreme and unnecessarily specialized version of the bite-and-retreat throat shear, but the suggestion that machairodont species became more specialized to hunt one type of prey is usually considered acceptable, so long as the misconception that a machairodont hunted 'only' that species is avoided. However, this would not resolve the issue of the messiness and the loud sounds probably associated with this kind of bite. More than one individual would probably have been needed to ensure a completely subdued animal. "Belly shearing" In 1985, American paleontologist William Akersten suggested the shearing bite. This method of killing is similar to the style seen in hyenas and canids today. A group of machairodonts captured and completely subdued a prey item, holding it still while one of the group bit into the abdominal cavity, pulled back, and tore open the body. For this technique to work, a specific sequence of motions would have to be followed. First, the animal must be completely subdued, and the predatory machairodonts must be social, so that several individuals can hold the prey animal down. The individual preparing to deliver the killing bite would open its mouth at maximum gape and, with its mandible, press up on the skin of the belly. As the mandible is shoved upward, the lower canines and incisors press into the skin, creating a slight fold of skin above the lower teeth. Next, the upper canines are pressed into the skin and the muscles of the neck are used to depress the head, so that instead of the jaw being pulled 'up', the skull is pressed 'down'. When the canines pierce the skin, they are lowered until the gape of the mouth is roughly 45°, at which point the mandible is pulled up in addition to the skull still being depressed. The small flanges on the anterior portion of the mandible of most machairodonts would be used to aid the depression of the skull. When the animal's mouth is closed, it holds a thick flap of skin between its jaws, behind its canines, and the animal uses the muscles of its lower back and forequarters to pull back, tearing the flap clear of the body. This large gash, once opened, leaves the intestines uncovered and arteries and veins torn. The bleeding animal would die within minutes, and the shock of repeated bites tearing innards from the body could speed up the process. This method allows social machairodonts to inflict large wounds on prey animals. Massive blood loss would ensue, and though the kill would be bloody, the social group would be able to fend off almost any animal attracted to the area. The bite would not need to be specific, could be repeated to hasten the death of the animal, and is already seen in the killing methods of several extant species, such as the spotted hyena. Canines are less likely to be broken due to the softer nature of the abdomen when compared to the throat, and jerking movements are not as amplified in the abdomen as they are in the neck. The abdominal-tearing hypothesis has generally been regarded as highly plausible. In the La Brea tar pits, occurrences of broken canines in Smilodon are rare, and this less risky method might have contributed to this. However, a shearing bite may have been problematic for machairodonts for several reasons.
Most ungulates are highly sensitive around the belly and hindquarters, and most predators find it much easier to capture and subdue an animal similar to the domestic cow by manipulating the head and forequarters. By lowering the animal to the ground and placing itself between the pairs of legs, a machairodont would have run a great risk of being kicked. The power behind such a kick could easily break teeth, a mandible, or a leg, and cripple or kill the cat. Sociability might have solved this issue by having one individual deliver the killing bite while others held the animal still. Furthermore, the diameter of the abdomen of a large ungulate such as a bison might have been too great, and the skin too taut, for a machairodont to grasp a flap of skin at all, much less tear it away from the body. A third issue with the shearing bite is that the canines would need to tear a large hole in the belly of the animal to be successful, but might instead simply flay the skin and produce two long slits. Such a wound might be painful and bleed, but the animal likely would not bleed to death and could still escape and survive. In 2004, an experiment used a pair of mechanical aluminum jaws, cast from CT scans of a Smilodon fatalis from the La Brea tar pits, to simulate several biting techniques possibly used by Smilodon, including the shearing bite, on a fresh domestic cow carcass. The belly of the cow was found to be too large in diameter for the canines to puncture the skin; they were instead deflected off the body, with the mandible blocking their access. However, the model pulled its jaw upward as modern cats bite, while machairodonts most likely did not, instead pressing their skulls down with the aid of their neck muscles. This flaw in the procedure might nullify the results and leave the belly-shearing hypothesis untouched.
Biology and health sciences
Other carnivora
Animals
1224339
https://en.wikipedia.org/wiki/Sodium%20fluoride
Sodium fluoride
Sodium fluoride (NaF) is an inorganic compound with the formula NaF. It is a colorless or white solid that is readily soluble in water. It is used in trace amounts in the fluoridation of drinking water to prevent tooth decay, and in toothpastes and topical pharmaceuticals for the same purpose. In 2022, it was the 221st most commonly prescribed medication in the United States, with more than 1 million prescriptions. It is also used in metallurgy and in medical imaging. Uses Dental caries Fluoride salts are often added to municipal drinking water (as well as to certain food products in some countries) for the purpose of maintaining dental health. The fluoride enhances the strength of teeth by the formation of fluorapatite, a naturally occurring component of tooth enamel. Although sodium fluoride is used to fluoridate water and is the standard by which other water-fluoridation compounds are gauged, hexafluorosilicic acid (H2SiF6) and its salt sodium hexafluorosilicate (Na2SiF6) are more commonly used additives in the United States. Osteoporosis Fluoride supplementation has been extensively studied for the treatment of postmenopausal osteoporosis. This supplementation does not appear to be effective; even though sodium fluoride increases bone density, it does not decrease the risk of fractures. Medical imaging In medical imaging, fluorine-18-labelled sodium fluoride (USP, sodium fluoride Na18F) is one of the oldest tracers used in positron emission tomography (PET), having been in use since the 1960s. Relative to conventional bone scintigraphy carried out with gamma cameras or SPECT systems, PET offers more sensitivity and spatial resolution. Fluorine-18 has a half-life of 110 min, which requires it to be used promptly once produced; this logistical limitation hampered its adoption in the face of the more convenient technetium-99m-labelled radiopharmaceuticals. However, fluorine-18 is generally considered to be a superior radiopharmaceutical for skeletal imaging. In particular, it has high and rapid bone uptake accompanied by very rapid blood clearance, which results in a high bone-to-background ratio in a short time. Additionally, the annihilation photons produced by the decay of 18F have a high energy of 511 keV compared to the 140 keV photons of 99mTc.
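The logistical constraint imposed by the 110-minute half-life of fluorine-18, noted under Medical imaging above, follows directly from the standard radioactive decay law. The sketch below, with a delivery delay chosen purely for illustration, shows how quickly a dose decays in transit.

```python
F18_HALF_LIFE_MIN = 110  # fluorine-18 half-life stated above

def fraction_remaining(elapsed_min: float) -> float:
    """Fraction of the original 18F activity left after elapsed_min minutes,
    from the decay law N(t) = N0 * 2**(-t / T_half)."""
    return 2 ** (-elapsed_min / F18_HALF_LIFE_MIN)

# A hypothetical four-hour gap between production and injection:
print(round(fraction_remaining(240), 2))  # ~0.22, i.e. ~78% of the activity gone
```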
Chemistry Sodium fluoride has a variety of specialty chemical applications in synthesis and extractive metallurgy. It reacts with electrophilic chlorides including acyl chlorides, sulfur chlorides, and phosphorus chloride. Like other fluorides, sodium fluoride finds use in desilylation in organic synthesis. Sodium fluoride can be used to produce fluorocarbons via the Finkelstein reaction; this process has the advantage of being simple to perform on a small scale but is rarely used on an industrial scale due to the existence of more effective techniques (e.g. electrofluorination, the Fowler process). Biology Sodium fluoride is sometimes added at relatively high concentrations (~20 mM) to protein lysis buffers in order to inhibit endogenous phosphatases and thereby protect phosphorylated protein sites. Sodium pyrophosphate and sodium orthovanadate are also used for this purpose. Insecticide Inorganic fluorides such as fluorosilicates and sodium fluoride complex magnesium ions as magnesium fluorophosphate. They inhibit enzymes such as enolase that require Mg2+ as a prosthetic group. Thus, fluoride poisoning prevents phosphate transfer in oxidative metabolism. Sodium fluoride, patented as an insecticide in 1896, was commonly used through the 1970s on ants and other domestic pests, and as a stomach poison for plant-feeding insects. Its use, along with that of sodium fluorosilicate, declined over the 20th century as the products were banned or restricted due to the possibility of poisoning, intentional or accidental. In 1942, for instance, 47 inmates at the Oregon State Hospital died after consuming scrambled eggs which had been inadvertently prepared with sodium fluoride; while assisting the cooks, another inmate had confused a container of insecticide (used by the hospital to control cockroaches) with powdered milk, which was stored nearby. Other uses Sodium fluoride is used as a cleaning agent (e.g., as a "laundry sour"). Sodium fluoride can be used in a nuclear molten salt reactor. Safety The lethal dose for a 70 kg (154 lb) human is estimated at 5–10 g. Fluorides, particularly aqueous solutions of sodium fluoride, are rapidly and quite extensively absorbed by the human body. Fluorides interfere with electron transport and calcium metabolism. Calcium is essential for maintaining cardiac membrane potentials and for regulating coagulation. High ingestion of fluoride salts or hydrofluoric acid may result in fatal arrhythmias due to profound hypocalcemia. Chronic over-absorption can cause hardening of bones, calcification of ligaments, and buildup on teeth. Fluoride can cause irritation or corrosion to eyes, skin, and nasal membranes. Sodium fluoride is classed as toxic by both inhalation (of dusts or aerosols) and ingestion. In high enough doses, it has been shown to affect the heart and circulatory system. For occupational exposures, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have established occupational exposure limits of 2.5 mg/m3 over an eight-hour time-weighted average. In the higher doses used to treat osteoporosis, plain sodium fluoride can cause pain in the legs and incomplete stress fractures when the doses are too high; it also irritates the stomach, sometimes so severely as to cause peptic ulcer disease. Slow-release and enteric-coated versions of sodium fluoride do not have significant gastric side effects, and have milder and less frequent complications in the bones. In the lower doses used for water fluoridation, the only clear adverse effect is dental fluorosis, which can alter the appearance of children's teeth during tooth development. Chronic ingestion of fluoride at 1 ppm in drinking water can cause mottling of the teeth (fluorosis), and exposure at 1.7 ppm will produce mottling in 30%–50% of patients. Studies have shown that dental fluorosis negatively impacts the self-esteem and self-image of adolescents.
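The 2.5 mg/m3 occupational limit cited above is defined as an eight-hour time-weighted average (TWA), meaning that brief excursions are averaged against the rest of the shift. A minimal sketch of that computation, with invented sample values, might look like this:

```python
NAF_TWA_LIMIT_MG_M3 = 2.5  # the 8-hour TWA limit cited above

def eight_hour_twa(samples: list[tuple[float, float]]) -> float:
    """samples: (concentration in mg/m3, duration in hours) pairs covering
    an 8-hour shift; the TWA is the exposure-weighted mean over 8 hours."""
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical shift: a 2-hour excursion to 4.0 mg/m3, then lower exposure.
shift = [(4.0, 2.0), (1.5, 4.0), (0.5, 2.0)]
twa = eight_hour_twa(shift)
print(round(twa, 2), twa <= NAF_TWA_LIMIT_MG_M3)  # 1.88 True
```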
Chemical structure Sodium fluoride is an inorganic ionic compound, dissolving in water to give separated Na+ and F− ions. Like sodium chloride, it crystallizes in a cubic motif where both Na+ and F− occupy octahedral coordination sites; its lattice spacing, approximately 462 pm, is smaller than that of sodium chloride (564 pm). Occurrence The mineral form of NaF, villiaumite, is moderately rare. It is known from plutonic nepheline syenite rocks. Production NaF is prepared by neutralizing hydrofluoric acid or hexafluorosilicic acid (H2SiF6), both byproducts of the reaction of fluorapatite (Ca5(PO4)3F) from phosphate rock during the production of superphosphate fertilizer. Neutralizing agents include sodium hydroxide and sodium carbonate. Alcohols are sometimes used to precipitate the NaF:
HF + NaOH → NaF + H2O
From solutions containing HF, sodium fluoride precipitates as the bifluoride salt sodium bifluoride (NaHF2); heating the latter releases HF and gives NaF:
HF + NaF ⇌ NaHF2
In a 1986 report, the annual worldwide consumption of NaF was estimated to be several million tonnes.
Physical sciences
Halide salts
Chemistry
1226047
https://en.wikipedia.org/wiki/Dessert%20spoon
Dessert spoon
A dessert spoon is a spoon designed specifically for eating dessert. Similar in size to a soup spoon (intermediate between a teaspoon and a tablespoon) but with an oval rather than round bowl, it typically has a capacity around twice that of a teaspoon. By extension, the term "dessert spoon" is used as a cooking measure of volume, usually 10 millilitres (mL), about 0.34 US fl oz or 0.35 imp fl oz. Dining The use of dessert spoons around the world varies widely; in some areas they are very common, while in other places the use of the dessert spoon is almost unheard of, with diners using forks or teaspoons for their desserts as a default. In most traditional table settings, the dessert spoon is placed above the plate or bowl, separated from the rest of the cutlery, or it may simply be brought in with the dessert. Culinary measure As a unit of culinary measure, in the United States, a level dessert spoon (dsp., dspn. or dstspn.) equals 2 US customary teaspoons, which is 2 US customary fluid drams (1⁄4 of a US customary fluid ounce). In the United Kingdom, a dessert spoon is traditionally 2 British imperial fluid drachms (1⁄4 of a British imperial fluid ounce). 1 UK dessert spoon is equivalent to 1⁄2 UK tablespoon, 2 UK teaspoons, or 4 UK salt spoons. A metric dessert spoon is 10 mL, equivalent to 2 metric teaspoons. Apothecary measure As a unit of Apothecary measure, the dessert-spoon was an unofficial but widely used unit of fluid measure equal to two fluid drams, or 1⁄4 fluid ounce. However, even when approximated, its use was discouraged: "Inasmuch as spoons vary greatly in capacity, and from their form are unfit for use in the dosage of medicine, it is desirable... to be measured with a suitable medicine measure." In the United States and pre-1824 England, the fluid ounce was 1⁄128 of a Queen Anne wine gallon (which was defined as exactly 231 cubic inches), thus making the dessert-spoon approximately 7.4 mL. The post-1824 (British) imperial Apothecaries' dessert-spoon was also 1⁄4 fluid ounce, but the ounce in question was 1⁄160 of an imperial gallon of approximately 277.4 cubic inches, yielding a dessert-spoon of approximately 7.1 mL. In both the British and American variants of the Apothecaries' system, two tea-spoons make a dessert-spoon, while two dessert-spoons make a table-spoon. In pharmaceutical Latin, the Apothecaries' dessert-spoon is known as cochleare medium (abbreviated cochl. med.), as opposed to the tea-spoon (cochleare minus, cochl. min.) and table-spoon (cochleare magnum, cochl. mag.).
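Because the definitions above are all simple multiples of standard units, they are easy to compare side by side. The following sketch tabulates them in millilitres; the constant and key names are invented for illustration.

```python
ML_PER_US_TSP = 4.92892159375          # US customary teaspoon
ML_PER_IMP_FL_DRACHM = 28.4130625 / 8  # 1/8 imperial fluid ounce

DESSERT_SPOON_ML = {
    "metric": 10.0,                             # 2 metric teaspoons of 5 mL
    "us_culinary": 2 * ML_PER_US_TSP,           # 2 US teaspoons, ~9.86 mL
    "uk_apothecary": 2 * ML_PER_IMP_FL_DRACHM,  # 2 fluid drachms, ~7.10 mL
}

for name, ml in DESSERT_SPOON_ML.items():
    print(f"{name}: {ml:.2f} mL")
```

The roughly 30% spread between the metric and apothecary values echoes the quoted warning that spoons are unfit for measuring doses of medicine.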
Physical sciences
Volume
Basics and measurement
1226799
https://en.wikipedia.org/wiki/Vehicle%20armour
Vehicle armour
Military vehicles are commonly armoured (or armored; see spelling differences) to withstand the impact of shrapnel, bullets, shells, rockets, and missiles, protecting the personnel inside from enemy fire. Such vehicles include armoured fighting vehicles like tanks, as well as aircraft and ships. Civilian vehicles may also be armoured. These vehicles include cars used by officials (e.g., presidential limousines), reporters and others in conflict zones or where violent crime is common. Civilian armoured cars are also routinely used by security firms to carry money or valuables to reduce the risk of highway robbery or the hijacking of the cargo. Armour may also be used in vehicles to protect from threats other than a deliberate attack. Some spacecraft are equipped with specialised armour to protect them against impacts from micrometeoroids or fragments of space debris. Modern jet-powered aircraft usually carry a form of armour on their engines: an aramid (Kevlar) composite wrap around the fan casing, or debris-containment walls built into the casings of their gas turbines, to prevent injury or airframe damage should the fan, compressor, or turbine blades break free. The design and purpose of the vehicle determines the amount of armour plating carried, as the plating is often very heavy and excessive amounts of armour restrict mobility. To mitigate this problem, new materials (nanomaterials) and material compositions are being researched, including buckypaper and aluminium foam armour plates. Materials Metals Steel Rolled homogeneous armour is strong, hard, and tough (it does not shatter when struck with a fast, hard blow). Steel with these characteristics is produced by processing cast steel billets of appropriate size and then rolling them into plates of the required thickness. Rolling and forging (hammering the steel when it is red hot) irons out the grain structure in the steel, removing imperfections which would reduce its strength. Rolling also elongates the grain structure in the steel to form long lines, which enable the stress the steel is placed under when loaded to flow throughout the metal, rather than being concentrated in one area. Cast homogeneous armour, or cast steel armour, is produced by directly casting steel into the desired shape. It tends to be softer, as heat treatment is difficult or impossible. Nevertheless, the flexibility in shape has made it popular for the structural hulls of modern tanks. Aluminium Aluminium is used when light weight is a necessity. It is most commonly used on APCs and armoured cars. While certainly not the strongest metal, it is cheap, lightweight, and tough enough to serve as practical armour. Iron Wrought iron was used on ironclad warships. Early European iron armour consisted of 10 to 12.5 cm of wrought iron backed by up to one metre of solid wood. It has since been replaced by steel, which is significantly stronger. Titanium Titanium has almost twice the density of aluminium, but can have a yield strength similar to high strength steels, giving it a high specific strength. It also has a high specific resilience and specific toughness. So, despite being more expensive, it finds application in areas where weight is a concern, such as personal armour and military aviation.
Some notable examples of its use include the USAF A-10 Thunderbolt II and the Soviet/Russian-built Sukhoi Su-25 ground-attack aircraft, which utilise a bathtub-shaped titanium enclosure for the pilot, as well as the Soviet/Russian Mil Mi-24 attack helicopter. Uranium Because of its high density, depleted uranium (DU) can also be used in tank armour, sandwiched between sheets of steel armour plate. For instance, some late-production M1A1HA and M1A2 Abrams tanks built after 1998 have DU reinforcement as part of the armour plating in the front of the hull and the front of the turret, and there is a program to upgrade the rest (see Chobham armour). Plastic Plastic metal was a type of vehicle armour originally developed for merchant ships by the British Admiralty in 1940. The original composition was described as 50% clean granite of half-inch size, 43% limestone mineral, and 7% bitumen. It was typically applied in a layer two inches thick and backed by half an inch of steel. Plastic armour was highly effective at stopping armour-piercing bullets because the hard granite particles would deflect the bullet, which would then lodge between the plastic armour and the steel backing plate. Plastic armour could be applied by pouring it into a cavity formed by the steel backing plate and a temporary wooden form. Some main battle tank armour utilises polymers, for example the polyurethane used in the "BDD" appliqué armour applied to modernized T-62 and T-55 tanks. Glass Bulletproof glass is a colloquial term for glass that is particularly resistant to being penetrated when struck by bullets. The industry generally refers to it as bullet-resistant glass or transparent armour. Bullet-resistant glass is usually constructed using a strong but transparent material such as polycarbonate thermoplastic, or by using layers of laminated glass. The desired result is a material with the appearance and light-transmitting behaviour of standard glass which offers varying degrees of protection from small arms fire. The polycarbonate layer, usually consisting of products such as Armormax, Makroclear, Cyrolon, Lexan or Tuffak, is often sandwiched between layers of regular glass. The use of plastic in the laminate provides resistance to physical impact, such as blows from a hammer or an axe. The plastic, however, provides little in the way of bullet-resistance. The glass, which is much harder than plastic, flattens the bullet and thereby prevents penetration. This type of bullet-resistant glass is usually 70–75 mm (2.8–3.0 in) thick. Bullet-resistant glass constructed of laminated glass layers is built from glass sheets bonded together with polyvinyl butyral, polyurethane or ethylene-vinyl acetate. This type of bullet-resistant glass has been in regular use on combat vehicles since World War II; it is typically about 100–120 mm (3.9–4.7 in) thick and is usually extremely heavy. Newer materials are being developed. One such material, aluminium oxynitride, is much lighter, but at US$10–15 per square inch it is much more costly. Ceramic The precise mechanism by which ceramic defeats HEAT rounds was uncovered in the 1980s. High-speed photography showed that the ceramic material shatters as the HEAT round penetrates, the highly energetic fragments destroying the geometry of the metal jet generated by the hollow charge and greatly diminishing the penetration. Ceramic layers can also be used as part of composite armour solutions. The high hardness of some ceramic materials serves as a disruptor that shatters projectiles and spreads their kinetic energy.
Composite Composite armour is armour consisting of layers of two or more materials with significantly different physical properties; steel and ceramics are the most common types of material in composite armour. Composite armour was initially developed in the 1940s, although it did not enter service until much later, and the early examples are often overlooked in favour of newer armour such as Chobham armour. Composite armour's effectiveness depends on its composition and may be effective against kinetic energy penetrators as well as shaped charge munitions; heavy metals are sometimes included specifically for protection from kinetic energy penetrators. Composite armour used on modern Western and Israeli main battle tanks largely consists of non-explosive reactive armour (NERA) elements, a type of reactive armour. These elements are often a laminate consisting of two hard plates (usually high-hardness steel) with a low-density interlayer material between them. Upon impact, the interlayer swells and moves the plates, disrupting HEAT jets and possibly degrading kinetic energy projectiles. Behind these elements is a backing element designed to stop the degraded jet or projectile, which may be of high-hardness steel, a composite of steel and ceramic, or possibly uranium. Soviet main battle tanks from the T-64 onward utilised composite armour which often consisted of a low-density filler between relatively thick steel plates or castings, for example Combination K. The T-64 turret, for instance, had a layer of ceramic balls and aluminium sandwiched between layers of cast steel armour, whilst some models of the T-72 feature a glass filler called "Kvartz". The tank glacis was often a sandwich of steel and a low-density filler, either textolite (a fibreglass-reinforced polymer) or ceramic plates. Later T-80 and T-72 turrets contained NERA elements, similar to those discussed above. Ships Belt armour is a layer of armour plating outside the hull of warships, typically on battleships, battlecruisers, cruisers and some aircraft carriers. Typically, the belt covers from the deck down to some way below the waterline of the ship. If built within the hull, rather than forming the outer hull, it can be fitted at an inclined angle to improve the protection. When struck by a shell or torpedo, the belt armour is designed to prevent penetration by being too thick for the warhead to defeat, or by being sloped to a degree that deflects the projectile. Often, the main belt armour was supplemented with a torpedo bulkhead spaced several metres behind the main belt, designed to maintain the ship's watertight integrity even if the main belt were penetrated. The air space between the belt and the hull also adds buoyancy. Several wartime vessels had belt armour that was thinner or shallower than was desirable, to speed production and conserve resources. Deck armour on aircraft carriers is usually at the flight deck level, but on some early carriers was at the hangar deck. (See armoured flight deck.) Aircraft Armour plating is not common on aircraft, which generally rely on their speed and maneuverability to avoid attacks from enemy aircraft and ground fire rather than trying to resist impacts. Additionally, any armour capable of stopping large-calibre anti-aircraft fire or missile fragments would result in an unacceptable weight penalty. Consequently, only the vital parts of an aircraft, such as the ejection seat and engines, are usually armoured.
This is one area where titanium is used extensively as armour plating. For example, in the American Fairchild Republic A-10 Thunderbolt II and the Soviet-built Sukhoi Su-25 ground-attack aircraft, as well as the Mil Mi-24 Hind attack helicopter, the pilot sits in a titanium enclosure known as the "bathtub" for its shape. In addition, the windscreens of larger aircraft are generally made of impact-resistant, laminated materials, even on civilian craft, to prevent damage from bird strikes or other debris. Armoured fighting vehicles The most heavily armoured vehicles today are the main battle tanks, which are the spearhead of the ground forces and are designed to withstand anti-tank guided missiles, kinetic energy penetrators, high-explosive anti-tank weapons, NBC threats and, in some tanks, even steep-trajectory shells. The Israeli Merkava tanks were designed so that each tank component functions as additional back-up armour to protect the crew. The outer armour is modular, enabling damaged parts to be replaced quickly. Layout For efficiency, the heaviest armour on an armoured fighting vehicle (AFV) is placed on its front. Tank tactics require the vehicle to face the likely direction of enemy fire as much as possible, even in defence or withdrawal operations. Sloping and curving armour can both increase its protection. Given a fixed thickness of armour plate, a projectile striking at an angle must penetrate more armour than one impacting perpendicularly. An angled surface also increases the chance of deflecting a projectile. This can be seen on v-hull designs, which direct the force of an improvised explosive device or landmine away from the crew compartment, increasing crew survivability. Spall liners Since the Cold War, many AFVs have had spall liners inside the armour, designed to protect crew and equipment inside from fragmentation (spalling) released by the impact of enemy shells, especially high-explosive squash head warheads. Spall liners are made of aramids (Kevlar, Twaron), UHMWPE (Dyneema, Spectra Shield), or similar materials. Appliqué Appliqué armour, or add-on armour, consists of extra plates mounted onto the hull or turret of an AFV. The plates can be made of any material and are designed to be retrofitted to an AFV to withstand weapons that can penetrate the original armour of the vehicle. An advantage of appliqué armour is the possibility of tailoring a vehicle's protection level to a specific threat scenario. Improvised Vehicle armour is sometimes improvised in the midst of an armed conflict by vehicle crews or individual units. In World War II, British, Canadian and Polish tank crews welded spare strips of tank track to the hulls of their Sherman tanks. U.S. tank crews often added sandbags to the hulls and turrets of Sherman tanks, sometimes in an elaborate cage made of girders. Some Sherman tanks were up-armoured in the field with glacis plates and other armour cut from knocked-out tanks to create Improvised Jumbos, named after the heavily armoured M4A3E2 assault tank. In the Vietnam War, U.S. "gun trucks" were armoured with sandbags and locally fabricated steel armour plate. More recently, U.S. troops in Iraq armoured Humvees and various military transport vehicles with scrap materials, which came to be known among the Americans as "hillbilly armour" or "haji armour". A civilian example was the Killdozer incident, in which a modified bulldozer armoured with a steel and concrete composite proved highly resistant to small arms.
Spaced Armour with two or more plates spaced a distance apart, called spaced armour, has been in use since the First World War, where it was used on the Schneider CA1 and Saint-Chamond tanks. Spaced armour can be advantageous in several situations. For example, it can reduce the effectiveness of kinetic energy penetrators, because the interaction with each plate can cause the round to tumble, deflect, deform, or disintegrate. This effect can be enhanced when the armour is sloped. Spaced armour can also offer increased protection against HEAT projectiles. This occurs because the shaped charge warhead can detonate prematurely (at the first surface), so that the metal jet that is produced loses its coherence before reaching the main armour, impacting over a broader area. Sometimes the interior surfaces of these hollow cavities are sloped, presenting angles to the anticipated path of the shaped charge's jet in order to further dissipate its power. Taken to the extreme, relatively thin armour plates, metal mesh, or slatted plates, much lighter than fully protective armour, can be attached as side skirts or turret skirts to provide additional protection against such weapons. This can be seen in middle and late-World War II German tanks, as well as many modern AFVs. Taken as a whole, spaced armour can provide significantly increased protection while saving weight. The analogous Whipple shield uses the principle of spaced armour to protect spacecraft from the impacts of very fast micrometeoroids: the impact with the first wall melts or breaks up the incoming particle, causing the fragments to be spread over a wider area when striking the subsequent walls. Sloped Sloped armour is armour that is mounted at a non-vertical and non-horizontal angle, typically on tanks and other armoured fighting vehicles. For a given plate thickness measured normal to the armour's surface, increasing the armour's slope improves its level of protection by increasing the thickness measured in the horizontal plane (a worked example appears at the end of this section); for a given area density of the armour, the protection can be either increased or reduced by other sloping effects, depending on the armour materials used and the qualities of the projectile hitting it. The increased protection obtained by increasing the slope while keeping the plate thickness constant comes with a proportional increase of area density, and thus mass, and so offers no weight benefit. Therefore, the other possible effects of sloping, such as deflection, deformation and ricochet of a projectile, have been the reasons to apply sloped armour in armoured vehicle design. Another motive is that sloped armour is a more efficient way of covering the necessary equipment, since it encloses less volume with less material. The sharpest angles are usually seen on the frontal glacis plate, both because it is the hull side most likely to be hit and because there is more room to slope in the longitudinal direction of a vehicle. Reactive Explosive reactive armour, initially developed by the German researcher Manfred Held while working in Israel, uses layers of high explosive sandwiched between steel plates. When a shaped-charge warhead hits, the explosive detonates and pushes the steel plates into the warhead, disrupting the flow of the charge's metal jet (usually copper at around 500 degrees Celsius, which is made to flow like a liquid by the extreme pressures involved). Traditional "light" ERA is less effective against kinetic penetrators. "Heavy" reactive armour, however, offers better protection.
The only example currently in widespread service is the Russian Kontakt-5. Explosive reactive armour poses a threat to friendly troops near the vehicle. Non-explosive reactive armour is an advanced spaced armour that uses materials which change their geometry under the stress of impact so as to increase protection. Active protection systems use a sensor to detect an incoming projectile and explosively launch a counter-projectile into its path. Slat Slat armour is designed to protect against anti-tank rocket and missile attacks in which the warhead is a shaped charge. The slats are spaced so that the warhead is either partially deformed before detonating, or has its fuzing mechanism damaged, thereby preventing detonation entirely. As shaped charges rely on a very specific structure to create a jet of hot metal, any disruption to this structure greatly reduces the effectiveness of the warhead. Slat armour can be defeated by tandem-charge designs such as the RPG-27 and RPG-29. Electric armour Electric armour is a recent development by the United Kingdom's Defence Science and Technology Laboratory. A vehicle is fitted with two thin shells, separated by insulating material. The outer shell holds an enormous electric charge, while the inner shell is at ground. If an incoming HEAT jet penetrates the outer shell and forms a bridge between the shells, the electrical energy discharges through the jet, disrupting it. Trials have so far been extremely promising, and it is hoped that improved systems could protect against kinetic energy penetrators. The developers of the Future Rapid Effect System (FRES) series of armoured vehicles are considering this technology.
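As a worked illustration of the line-of-sight effect described under "Sloped" above (the 100 mm plate and 60° angle are chosen purely for illustration):

```latex
% Line-of-sight thickness t_LOS of a plate of normal thickness t,
% sloped at an angle \theta from the vertical:
t_{\mathrm{LOS}} = \frac{t}{\cos\theta}
% Example: a 100 mm plate sloped at 60 degrees presents
t_{\mathrm{LOS}} = \frac{100\,\mathrm{mm}}{\cos 60^{\circ}} = 200\,\mathrm{mm}
```

A horizontally travelling projectile must therefore pass through twice the plate thickness, but, as noted above, covering the same frontal area at this slope also requires proportionally more plate mass, so the geometric gain alone offers no weight benefit.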
Technology
Armour
null
15582749
https://en.wikipedia.org/wiki/Austral%20storm%20petrel
Austral storm petrel
Austral storm petrels, or southern storm petrels, are seabirds in the family Oceanitidae, part of the order Procellariiformes. These smallest of seabirds feed on planktonic crustaceans and small fish picked from the surface, typically while hovering. Their flight is fluttering and sometimes bat-like. Austral storm petrels have a cosmopolitan distribution, being found in all oceans, although only Wilson's storm petrel and the white-faced storm petrel are found in the Northern Hemisphere. They are almost all strictly pelagic, coming to land only when breeding. For most petrel species, little is known of their behaviour and distribution at sea, where they can be hard to find and harder to identify. They are colonial nesters, displaying strong philopatry to their natal colonies and nesting sites. Most species nest in crevices or burrows, and all but one species attend the breeding colonies nocturnally. Pairs form long-term monogamous bonds and share incubation and chick-feeding duties. As in many species of seabirds, nesting is highly protracted, with incubation taking up to 50 days and fledging another 70 days after that. The family contains just ten species, assigned to five genera. Several species are threatened by human activities. The New Zealand storm petrel was presumed extinct until rediscovered in 2003. The principal threats to storm petrels are introduced species, particularly mammals, in their breeding colonies; many storm petrels habitually nest on isolated mammal-free islands and are unable to cope with predators such as rats and feral cats. Taxonomy The family Oceanitidae was introduced in 1881 by the English zoologist William Alexander Forbes. Two subfamilies of storm petrel were traditionally recognized. The Oceanitinae, or austral storm petrels, were mostly found in southern waters (though Wilson's storm petrel regularly migrates into the Northern Hemisphere); the ten species are placed in five genera. The Hydrobatinae, or northern storm petrels, were formerly placed in the two genera Hydrobates and Oceanodroma. They are largely restricted to the Northern Hemisphere, although a few visit or breed a short distance beyond the Equator. Cytochrome b DNA sequence analysis suggested that the traditional family was paraphyletic and more accurately treated as two distinct families. The same study found that the austral storm petrels are basal within the Procellariiformes: the first split was that of the Oceanitidae, with the Hydrobatidae splitting from the rest of the order at a later date. Few fossil species have been found, the earliest being from the Upper Miocene. Morphology and flight Austral storm petrels are the smallest of all the seabirds, ranging from 15 to 26 cm in length. Of the two body shapes found among storm petrels, the austral storm petrels have short wings, square tails, elongated skulls, and long legs. The legs of all storm petrels are proportionally longer than those of other Procellariiformes, but they are very weak and unable to support the bird's weight for more than a few steps. The plumage of the Oceanitidae is dark with white underparts (with the exception of Wilson's storm petrel). Onley and Scofield (2007) state that much published information is incorrect, and that photographs in the major seabird books and websites are frequently ascribed to the wrong species. They also consider that several national bird lists include species which have been incorrectly identified or have been accepted on inadequate evidence.
Storm petrels use a variety of techniques to aid flight. Most species occasionally feed by surface pattering, moving their feet on the water's surface while hovering steadily above it. They remain stationary by hovering with rapid fluttering, or by using the wind to anchor themselves in place. This method of feeding flight is most commonly used by austral storm petrels. The white-faced storm petrel possesses a unique variation on pattering: holding its wings motionless and at an angle into the wind, it pushes itself off the water's surface in a succession of bounding jumps. Storm petrels also use dynamic soaring and slope soaring to travel over the ocean surface, although these methods are used less by this family than by the northern storm petrels. Slope soaring is more straightforward and is favoured by the Oceanitidae: the storm petrel turns into the wind, gaining height, from where it can then glide back down to the sea. Diet The diet of many storm petrel species is poorly known owing to the difficulties of studying them; overall, the family is thought to concentrate on crustaceans. Small fish and molluscs are also taken by many species. Some species are known to be rather more specialised; the grey-backed storm petrel is known to concentrate on the larvae of goose barnacles. Almost all species forage in the pelagic zone, except for Elliot's storm petrels, which are coastal feeders in the Galapagos Islands. Although storm petrels are capable of swimming well and often form rafts on the water's surface, they do not feed on the water. Instead, feeding usually takes place on the wing, with birds hovering above or "walking" on the surface (see morphology) and snatching small morsels. Rarely, prey is obtained by making shallow dives under the surface. Like many types of seabirds, storm petrels associate with other seabird species and with marine mammals to help obtain food. They may benefit from the actions of diving predators such as seals and penguins, which push prey up towards the surface while hunting, allowing the surface-feeding storm petrels to reach them. Distribution and movements The austral storm petrels typically breed in the Southern Hemisphere, in contrast to the northern storm petrels of the Northern Hemisphere. Several species of storm petrels undertake migrations after the breeding season. The most widely travelled migrant is Wilson's storm petrel, which, after breeding in Antarctica and on the subantarctic islands, regularly crosses the equator to the waters of the north Pacific and Atlantic Oceans. Some species, such as the grey-backed storm petrel, are thought to be essentially sedentary and do not undertake any migrations away from their breeding islands. Breeding Storm petrels nest colonially, for the most part on islands, although a few species breed on the mainland, particularly Antarctica. Nesting sites are attended at night to avoid predators. Storm petrels display high levels of philopatry, returning to their natal colonies to breed. In one instance, a band-rumped storm petrel was caught as an adult 2 m from its natal burrow. Storm petrels nest either in burrows dug into soil or sand, or in small crevices in rocks and scree. Competition for nesting sites is intense in colonies where storm petrels compete with other burrowing petrels, with shearwaters having been recorded killing storm petrels to occupy their burrows.
Colonies can be extremely large and dense; 840,000 pairs of white-faced storm petrel nest on South East Island in the Chatham Islands, at densities of between 0.47 and 1.18 burrows/m². Storm petrels are monogamous and form long-term pair bonds that last a number of years. As with the other Procellariiformes, a single egg is laid by a pair in a breeding season; if the egg fails, usually no attempt is made to lay again (although relaying does happen rarely). Both sexes incubate, in shifts of up to six days. The egg hatches after 40 to 50 days; the young is brooded continuously for about another 7 days before being left alone in the nest during the day and fed by regurgitation at night. Meals fed to the chick weigh around 10–20% of the parent's body weight and consist of both prey items and stomach oil. Stomach oil is an energy-rich oil (with a calorific value of around 9.6 kcal/g) created from partly digested prey in a part of the foregut known as the proventriculus. By partly converting prey items into stomach oil, storm petrels can maximise the amount of energy chicks receive during feeding, an advantage for small seabirds that can only make a single visit to the chick during a 24-hour period (at night). The age at which chicks fledge depends on the species, ranging between 50 and 70 days. The time taken to hatch and raise the young is long for the bird's size, but is typical of seabirds, which in general are K-selected, living much longer, delaying breeding for longer, and investing more effort into fewer young. The young leave their burrow at about 62 days. They are independent almost at once and quickly disperse into the ocean. They return to their natal colony after 2 or 3 years, but do not breed until at least 4 years old. Storm petrels have been recorded living as long as 30 years. Threats and conservation Several species of austral storm petrels are threatened by human activities. The New Zealand storm petrel, considered extinct for many years until it was sighted again in 2003, is listed as critically endangered; its population is likely to be very small. Storm petrels face the same threats as other seabirds; in particular, they are threatened by introduced species. Species The family contains ten species.
Biology and health sciences
Procellariiformes
Animals
6152943
https://en.wikipedia.org/wiki/Peteinosaurus
Peteinosaurus
Peteinosaurus (meaning "winged lizard") was a prehistoric genus of pterosaur. It lived in the late Triassic period, in the late Norian age (about 218–215 million years ago), and at a wingspan of around , was one of the smallest and earliest pterosaurs, although other estimates suggest a wingspan of up to . Discovery Three fossils have been found near Cene, Italy. The first fossil, the holotype MCSNB 2886, is fragmentary and disarticulated. The second, the articulated paratype MCSNB 3359, lacks any diagnostic features of Peteinosaurus and thus might represent a different species. This paratype has a long tail (20 cm) made stiffer by long extensions of the vertebrae; this feature is common among pterosaurs of the Triassic. The third example is MCSNB 3496, another fragmentary skeleton. All specimens are subadults, and in none has the skull been preserved. Like most pterosaurs, Peteinosaurus had bones that were strong but very light. Peteinosaurus is trimorphodontic, with three types of conical teeth. An insectivorous lifestyle has been attributed to Peteinosaurus. The fifth toe of Peteinosaurus was long and clawless. Its joint allowed it to flex in a different plane than the other phalanges in order to control the cruropatagium, as seen preserved in the specimen of Sordes pilosus PIN 2585.3. The genus was described by the German paleontologist Rupert Wild in 1978. The type species is Peteinosaurus zambellii. The genus name is derived from the Greek peteinos, "winged", and sauros, "lizard", the latter being used to indicate any saurian. The specific name, zambellii, honours Rocco Zambelli, the curator of the Bergamo natural history museum. Classification Peteinosaurus is one of the oldest-known pterosaurs, and at a mere , had a tiny wingspan when compared to some later genera, such as Pteranodon, whose wingspan exceeded twenty feet. Its wings were also proportionally smaller than those of later pterosaurs, as its wing length was only twice the length of the hindlimb; all other known pterosaurs have wingspans at least three times the length of their hindlimbs. It also had single-cusped teeth that lacked the specialized heterodonty present in the other Italian Triassic pterosaur genus, Eudimorphodon. Together, these features hint that Peteinosaurus belongs to a group that possibly represents the most basal known pterosaurs: the Dimorphodontidae, to which it was assigned in 1988 by Robert L. Carroll. The only other known member of that group is the later genus Dimorphodon, which lent its name to the family including both genera. Later cladistic analyses, however, have not shown a close connection between the two forms. Nevertheless, the possible basal position of Peteinosaurus has been affirmed by Fabio Marco Dalla Vecchia, who suggested that Preondactylus, according to David Unwin the most basal pterosaur, might be a subjective junior synonym of Peteinosaurus. A 2010 cladistic analysis by Brian Andres and colleagues placed Peteinosaurus in Lonchognatha, which includes Eudimorphodon and Austriadactylus as more basal members. Mark Witton, however, considers Peteinosaurus to be a true dimorphodontid. A study published in 2020 found support for a sister-taxon relationship between Peteinosaurus and Macronychoptera, which together form the clade Zambellisauria.
Biology and health sciences
Pterosaurs
Animals
40579
https://en.wikipedia.org/wiki/Gill
Gill
A gill is a respiratory organ that many aquatic organisms use to extract dissolved oxygen from water and to excrete carbon dioxide. The gills of some species, such as hermit crabs, have adapted to allow respiration on land provided they are kept moist. The microscopic structure of a gill presents a large surface area to the external environment. Branchia (plural: branchiae) is the zoologists' name for gills (from Ancient Greek βράγχια). With the exception of some aquatic insects, the filaments and lamellae (folds) contain blood or coelomic fluid, from which gases are exchanged through the thin walls. The blood carries oxygen to other parts of the body. Carbon dioxide passes from the blood through the thin gill tissue into the water. Gills or gill-like organs, located in different parts of the body, are found in various groups of aquatic animals, including mollusks, crustaceans, insects, fish, and amphibians. Semiterrestrial marine animals such as crabs and mudskippers have gill chambers in which they store water, enabling them to use the dissolved oxygen when they are on land. History Galen observed that fish had multitudes of openings (foramina), big enough to admit gases but too fine to give passage to water. Pliny the Elder held that fish respired by their gills, but observed that Aristotle was of another opinion. The word branchia comes from the Greek βράγχια, "gills", the plural of βράγχιον (in the singular meaning a fin). Function Many microscopic aquatic animals, and some larger but inactive ones, can absorb sufficient oxygen through the entire surface of their bodies, and so can respire adequately without gills. However, more complex or more active aquatic organisms usually require a gill or gills. Many invertebrates, and even amphibians, use both the body surface and gills for gaseous exchange. Gills usually consist of thin filaments of tissue, lamellae (plates), branches, or slender, tufted processes that have a highly folded surface to increase surface area. The delicate nature of the gills is possible because the surrounding water provides support. The blood or other body fluid must be in intimate contact with the respiratory surface for ease of diffusion. A high surface area is crucial to the gas exchange of aquatic organisms, as water contains only a small fraction of the dissolved oxygen that air does, and it diffuses more slowly. A cubic meter of air contains about 275 grams of oxygen at STP. Fresh water holds less than 1/25th of the oxygen content of air, the dissolved oxygen content being approximately 8 cm³/L, compared to 210 cm³/L for air. Water is 777 times more dense than air and is 100 times more viscous. Oxygen has a diffusion rate in air 10,000 times greater than in water. The use of sac-like lungs to remove oxygen from water would not be efficient enough to sustain life. Rather than using lungs, "[g]aseous exchange takes place across the surface of highly vascularised gills over which a one-way current of water is kept flowing by a specialised pumping mechanism. The density of the water prevents the gills from collapsing and lying on top of each other; [such collapse] happens when a fish is taken out of water." Usually water is moved across the gills in one direction by the current, by the motion of the animal through the water, by the beating of cilia or other appendages, or by means of a pumping mechanism.
In fish and some molluscs, the efficiency of the gills is greatly enhanced by a countercurrent exchange mechanism, in which the water passes over the gills in the opposite direction to the flow of blood through them. This mechanism is very efficient, and as much as 90% of the dissolved oxygen in the water may be recovered (a numerical sketch of this principle appears at the end of this article). Vertebrates The gills of vertebrates typically develop in the walls of the pharynx, along a series of gill slits opening to the exterior. Most species employ a countercurrent exchange system to enhance the diffusion of substances in and out of the gill, with blood and water flowing in opposite directions. The gills are composed of comb-like filaments, the gill lamellae, which help increase their surface area for oxygen exchange. When a fish breathes, it draws in a mouthful of water at regular intervals. Then it draws the sides of its throat together, forcing the water through the gill openings, so that it passes over the gills to the outside. Fish gill slits may be the evolutionary ancestors of the thymus and parathyroid glands, as well as of many other structures derived from the embryonic branchial pouches. Fish The gills of fish form a number of slits connecting the pharynx to the outside of the animal, on either side of the fish behind the head. Originally there were many slits, but during evolution the number was reduced, and modern fish mostly have five pairs, and never more than eight. Cartilaginous fish Sharks and rays typically have five pairs of gill slits that open directly to the outside of the body, though some more primitive sharks have six pairs, with the broadnose sevengill shark being the only cartilaginous fish to exceed this number. Adjacent slits are separated by a cartilaginous gill arch, from which projects a cartilaginous gill ray. This gill ray supports the sheet-like interbranchial septum, on either side of which lie the individual lamellae of the gills. The base of the arch may also support gill rakers, projections into the pharyngeal cavity that help to prevent large pieces of debris from damaging the delicate gills. A smaller opening, the spiracle, lies at the back of the first gill slit. This bears a small pseudobranch that resembles a gill in structure, but receives only blood already oxygenated by the true gills. The spiracle is thought to be homologous to the ear opening in higher vertebrates. Most sharks rely on ram ventilation, forcing water into the mouth and over the gills by rapidly swimming forward. In slow-moving or bottom-dwelling species, especially among skates and rays, the spiracle may be enlarged, and the fish breathes by sucking water through this opening instead of through the mouth. Chimaeras differ from other cartilaginous fish, having lost both the spiracle and the fifth gill slit. The remaining slits are covered by an operculum, developed from the septum of the gill arch in front of the first gill. Bony fish In bony fish, the gills lie in a branchial chamber covered by a bony operculum. The great majority of bony fish species have five pairs of gills, although a few have lost some over the course of evolution. The operculum can be important in adjusting the pressure of water inside the pharynx to allow proper ventilation of the gills, so that bony fish do not have to rely on ram ventilation (and hence near-constant motion) to breathe. Valves inside the mouth keep the water from escaping.
The gill arches of bony fish typically have no septum, so the gills alone project from the arch, supported by individual gill rays. Some species retain gill rakers. Though all but the most primitive bony fish lack spiracles, the pseudobranch associated with them often remains, being located at the base of the operculum. This is, however, often greatly reduced, consisting of a small mass of cells without any remaining gill-like structure. Marine teleosts also use their gills to excrete osmolytes (e.g. Na⁺, Cl⁻). The gills' large surface area tends to create a problem for fish that seek to regulate the osmolarity of their internal fluids. Seawater contains more osmolytes than the fish's internal fluids, so marine fishes naturally lose water through their gills via osmosis. To regain the water, marine fishes drink large amounts of sea water while simultaneously expending energy to excrete salt through the Na⁺/K⁺-ATPase ionocytes (formerly known as mitochondrion-rich cells and chloride cells). Conversely, fresh water contains fewer osmolytes than the fish's internal fluids. Therefore, freshwater fishes must utilize their gill ionocytes to obtain ions from their environment in order to maintain optimal blood osmolarity. Lampreys and hagfish do not have gill slits as such. Instead, the gills are contained in spherical pouches, with a circular opening to the outside. Like the gill slits of higher fish, each pouch contains two gills. In some cases, the openings may be fused together, effectively forming an operculum. Lampreys have seven pairs of pouches, while hagfishes may have six to fourteen, depending on the species. In the hagfish, the pouches connect with the pharynx internally, and a separate tube with no respiratory tissue (the pharyngocutaneous duct) develops beneath the pharynx proper, expelling ingested debris by closing a valve at its anterior end. Lungfish larvae also have external gills, as does the primitive ray-finned fish Polypterus, though the latter has a structure different from that of amphibians. Amphibians Tadpoles of amphibians have from three to five gill slits that do not contain actual gills. Usually no spiracle or true operculum is present, though many species have operculum-like structures. Instead of internal gills, they develop three feathery external gills that grow from the outer surface of the gill arches. Sometimes, adults retain these, but they usually disappear at metamorphosis. Examples of salamanders that retain their external gills upon reaching adulthood are the olm and the mudpuppy. Still, some extinct tetrapod groups did retain true gills. A study on Archegosaurus demonstrates that it had internal gills like true fish. Invertebrates Crustaceans, molluscs, and some aquatic insects have tufted gills or plate-like structures on the surfaces of their bodies. Gills of various types and designs, simple or more elaborate, have evolved independently in the past, even among the same class of animals. The segments of polychaete worms bear parapodia, many of which carry gills. Sponges lack specialised respiratory structures; the whole of the animal acts as a gill as water is drawn through its spongy structure. Aquatic arthropods usually have gills which are in most cases modified appendages. In some crustaceans, these are exposed directly to the water, while in others they are protected inside a gill chamber. Horseshoe crabs have book gills, which are external flaps, each with many thin leaf-like membranes. Many marine invertebrates such as bivalve molluscs are filter feeders.
A current of water is maintained through the gills for gas exchange, and food particles are filtered out at the same time. These may be trapped in mucus and moved to the mouth by the beating of cilia. Respiration in the echinoderms (such as starfish and sea urchins) is carried out using a very primitive version of gills called papulae. These thin protuberances on the surface of the body contain diverticula of the water vascular system. The gills of aquatic insects are tracheal, but the air tubes are sealed, commonly connected to thin external plates or tufted structures that allow diffusion. The oxygen in these tubes is renewed through the gills. In the larval dragonfly, the wall of the caudal end of the alimentary tract (the rectum) is richly supplied with tracheae as a rectal gill, and water pumped into and out of the rectum provides oxygen to the closed tracheae. Plastrons A plastron is a type of structural adaptation occurring among some aquatic arthropods (primarily insects), a form of inorganic gill that holds a thin film of atmospheric oxygen in an area with small openings called spiracles that connect to the tracheal system. The plastron typically consists of dense patches of hydrophobic setae on the body, which prevent water entry into the spiracles, but may also involve scales or microscopic ridges projecting from the cuticle. The physical properties of the interface between the trapped air film and the surrounding water allow gas exchange through the spiracles, almost as if the insect were in atmospheric air. Carbon dioxide diffuses into the surrounding water due to its high solubility, while oxygen diffuses into the film as the concentration within the film has been reduced by respiration; nitrogen also diffuses out as its tension has been increased. Oxygen diffuses into the air film at a higher rate than nitrogen diffuses out. However, the water surrounding the insect can become oxygen-depleted if there is no water movement, so many such insects in still water actively direct a flow of water over their bodies. The inorganic gill mechanism allows aquatic arthropods with plastrons to remain constantly submerged. Examples include many beetles in the family Elmidae, aquatic weevils, and true bugs in the family Aphelocheiridae, as well as at least one species of ricinuleid arachnid and various mites. A somewhat similar mechanism is used by the diving bell spider, which maintains an underwater bubble that exchanges gas like a plastron. Other diving insects (such as backswimmers and hydrophilid beetles) may carry trapped air bubbles, but deplete the oxygen more quickly and thus need constant replenishment.
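To make the countercurrent principle described under Function above concrete, the following toy steady-state model contrasts countercurrent with concurrent (parallel) flow. It is a sketch only, assuming equal water and blood flow rates, a linear per-segment transfer law, and arbitrary concentration units; the function name and parameters are illustrative, not taken from any real library:

```python
# Toy steady-state model of oxygen transfer across a gill, contrasting
# countercurrent flow with concurrent (parallel) flow. Assumptions, for
# illustration only: equal water and blood flow rates, a linear
# per-segment transfer law, and concentrations in arbitrary units.

def blood_o2_at_exit(countercurrent=True, n=200, k=0.1, iters=1000):
    """Return the O2 level of blood leaving the gill, relative to an
    inlet water concentration of 1.0 (blood enters fully deoxygenated)."""
    blood = [0.0] * n
    for _ in range(iters):  # fixed-point iteration to steady state
        # March the water downstream (segment 0 -> n-1), giving up O2
        # to the blood in each segment.
        water, w = [], 1.0
        for i in range(n):
            w -= k * (w - blood[i])
            water.append(w)
        # March the blood along its own flow direction: opposite to the
        # water if countercurrent, the same direction if concurrent.
        order = range(n - 1, -1, -1) if countercurrent else range(n)
        new_blood, b = [0.0] * n, 0.0
        for i in order:
            b += k * (water[i] - b)
            new_blood[i] = b
        blood = new_blood
    # Blood exits at segment 0 when countercurrent, at n-1 otherwise.
    return blood[0] if countercurrent else blood[-1]

# Countercurrent blood always meets fresher, richer water and leaves
# nearly saturated (compare the ~90% recovery figure quoted above);
# concurrent transfer stalls once the two streams equilibrate, near 50%.
print(f"countercurrent: {blood_o2_at_exit(True):.2f}")
print(f"concurrent:     {blood_o2_at_exit(False):.2f}")
```

The design point the model illustrates is that in a countercurrent arrangement a concentration gradient is maintained along the entire exchange surface, whereas in a parallel arrangement the gradient vanishes once the two streams reach a common equilibrium.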
Biology and health sciences
Respiratory system
null
40583
https://en.wikipedia.org/wiki/Peritoneum
Peritoneum
The peritoneum is the serous membrane forming the lining of the abdominal cavity or coelom in amniotes and some invertebrates, such as annelids. It covers most of the intra-abdominal (or coelomic) organs, and is composed of a layer of mesothelium supported by a thin layer of connective tissue. This peritoneal lining of the cavity supports many of the abdominal organs and serves as a conduit for their blood vessels, lymphatic vessels, and nerves. The abdominal cavity (the space bounded by the vertebrae, abdominal muscles, diaphragm, and pelvic floor) is different from the intraperitoneal space (located within the abdominal cavity but wrapped in peritoneum). The structures within the intraperitoneal space are called "intraperitoneal" (e.g., the stomach and intestines), the structures in the abdominal cavity that are located behind the intraperitoneal space are called "retroperitoneal" (e.g., the kidneys), and those structures below the intraperitoneal space are called "subperitoneal" or "infraperitoneal" (e.g., the bladder). Structure Layers The peritoneum is one continuous sheet, forming two layers and a potential space between them: the peritoneal cavity. The outer layer, the parietal peritoneum, is attached to the abdominal wall and the pelvic walls. The tunica vaginalis, the serous membrane covering the male testis, is derived from the vaginal process, an outpouching of the parietal peritoneum. The inner layer, the visceral peritoneum, is wrapped around the visceral organs, located inside the intraperitoneal space for protection. It is thinner than the parietal peritoneum. The mesentery is a double layer of visceral peritoneum that attaches to the gastrointestinal tract. There are often blood vessels, nerves, and other structures between these layers. The space between these two layers is technically outside of the peritoneal sac, and thus not in the peritoneal cavity. The potential space between these two layers is the peritoneal cavity, filled with a small amount (about 50 mL) of slippery serous fluid that allows the two layers to slide freely over each other. The right paracolic gutter is continuous with the right and left subhepatic spaces. The epiploic foramen allows communication between the greater sac and the lesser sac. The peritoneal space in males is closed, while the peritoneal space in females is continuous with the extraperitoneal pelvis through the openings of the fallopian tubes, the uterus, and the vagina. Subdivisions Peritoneal folds are omenta, mesenteries and ligaments; they connect organs to each other or to the abdominal wall. There are two main regions of the peritoneal cavity, connected by the omental foramen: the greater sac and the lesser sac. The lesser sac is divided into two "omenta": the lesser omentum (or hepatogastric), which is attached to the lesser curvature of the stomach and the liver, and the greater omentum (or gastrocolic), which hangs from the greater curvature of the stomach and loops down in front of the intestines before curving back upwards to attach to the transverse colon. In effect the greater omentum is draped in front of the intestines like an apron and may serve as an insulating or protective layer. The mesentery is the part of the peritoneum through which most abdominal organs are attached to the abdominal wall and supplied with blood and lymph vessels and nerves.
In addition, in the pelvic cavity there are several structures that are usually named not for the peritoneum, but for the areas defined by the peritoneal folds. Classification of abdominal structures The structures in the abdomen are classified as intraperitoneal, mesoperitoneal, retroperitoneal or infraperitoneal, depending on whether they are covered with visceral peritoneum and whether they are attached by mesenteries (mesentery, mesocolon). Structures that are intraperitoneal are generally mobile, while those that are retroperitoneal are relatively fixed in their location. Some structures, such as the kidneys, are "primarily retroperitoneal", while others, such as the majority of the duodenum, are "secondarily retroperitoneal", meaning that the structure developed intraperitoneally but lost its mesentery and thus became retroperitoneal. Development The peritoneum develops ultimately from the mesoderm of the trilaminar embryo. As the mesoderm differentiates, one region known as the lateral plate mesoderm splits to form two layers separated by an intraembryonic coelom. These two layers develop later into the visceral and parietal layers found in all serous cavities, including the peritoneum. As an embryo develops, the various abdominal organs grow into the abdominal cavity from structures in the abdominal wall. In this process they become enveloped in a layer of peritoneum. The growing organs "take their blood vessels with them" from the abdominal wall, and these blood vessels become covered by peritoneum, forming a mesentery. Peritoneal folds develop from the ventral and dorsal mesentery of the embryo. Clinical significance Imaging assessment CT is a fast (about 15 seconds) and efficient way of visualising the peritoneal spaces. Although ultrasound is good at visualising peritoneal collections and ascites without ionising radiation, it does not provide a good overall assessment of all the peritoneal cavities. MRI is also increasingly used to visualise peritoneal diseases, but it requires a long scan time (30 to 45 minutes) and is prone to motion artifacts caused by respiration and peristalsis, as well as to chemical shift artifacts at the bowel-mesentery interface. Those with peritoneal carcinomatosis, acute pancreatitis, or intraabdominal sepsis may not tolerate a prolonged MRI scan. Peritoneal dialysis In one form of dialysis, called peritoneal dialysis, a glucose solution is sent through a tube into the peritoneal cavity. The fluid is left there for a prescribed amount of time to absorb waste products, and then removed through the tube. This works because of the high number of arteries and veins in the peritoneal cavity: through the mechanism of diffusion, waste products are removed from the blood. Peritonitis Peritonitis is the inflammation of the peritoneum. It is most commonly associated with infection from a perforated organ of the abdominal cavity. It can also be provoked by the presence of fluids that produce chemical irritation, such as gastric acid or pancreatic juice. Peritonitis causes fever, tenderness, and pain in the abdominal area, which can be localized or diffuse. The treatment involves rehydration, administration of antibiotics, and surgical correction of the underlying cause. Mortality is higher in the elderly and when the condition has been present for a prolonged time. Primary peritoneal carcinoma Primary peritoneal cancer is a cancer of the cells lining the peritoneum. Etymology "Peritoneum" is derived from Greek via Latin.
In Greek, peri means "around", while teinein means "to stretch"; thus, "peritoneum" means "stretched over".
Biology and health sciences
External anatomy and regions of the body
Biology
40584
https://en.wikipedia.org/wiki/Pistachio
Pistachio
The pistachio (Pistacia vera), a member of the cashew family, is a small to medium-sized tree originating in Persia. The tree produces seeds that are widely consumed as food. In 2022, world production of pistachios was one million tonnes, with the United States, Iran, and Turkey combined accounting for 88% of the total. Description The tree grows up to tall. It has deciduous, pinnate leaves long. The plants are dioecious, with separate male and female trees. The flowers are apetalous and unisexual, and borne in panicles. The fruit is a drupe, containing an elongated seed, which is the edible portion. The seed, commonly thought of as a nut, is a culinary nut, not a botanical nut. The fruit has a hard, cream-colored exterior shell. The seed has a mauve-colored skin and light green flesh, with a distinctive flavor. When the fruit ripens, the shell changes from green to an autumnal yellow/red and abruptly splits partly open. This is known as dehiscence, and happens with an audible pop. The splitting open is a trait that has been selected by humans. Commercial cultivars vary in how consistently they split open. Each mature pistachio tree averages around of seeds, or around 50,000 seeds, every two years. Etymology Pistachio is from late Middle English pistace, from Old French, superseded in the 16th century by forms from Italian pistacchio, via Latin from Greek pistákion, and from Middle Persian pistakē. Distribution and habitat The pistachio is a desert plant and is highly tolerant of saline soil. It has been reported to grow well when irrigated with water having 3,000–4,000 ppm of soluble salts. Pistachio trees are fairly hardy in the right conditions and can survive temperatures ranging between in winter and in summer. They need a sunny position and well-drained soil. Pistachio trees do poorly in conditions of high humidity and are susceptible to root rot in winter if they get too much water and the soil is not sufficiently free-draining. Long, hot summers are required for proper ripening of the fruit. Cultivation The pistachio tree may live up to 300 years. The trees are planted in orchards and take around 7 to 10 years to reach significant production. Production is alternate-bearing or biennial-bearing, meaning the harvest is heavier in alternate years. Peak production is reached at around 20 years. Trees are usually pruned to size to make the harvest easier. One male tree produces enough pollen for 8 to 12 drupe-bearing females. Harvesting in the United States and in Greece is often accomplished using equipment to shake the drupes off the tree. After hulling and drying, pistachios are sorted according to open-mouth and closed-mouth shells, then roasted or processed by special machines to produce pistachio kernels. History The pistachio tree is native to Afghanistan, Iran and Central Asia. Archaeological evidence shows that pistachio seeds were a common food as early as 6750 BCE. The earliest evidence of pistachio consumption goes back to Bronze Age Central Asia and comes from Djarkutan, in modern Uzbekistan. Pistachio trees were introduced from Asia to Europe in the first century AD by the Romans. They are cultivated across Southern Europe and North Africa. Theophrastus described the pistachio as a terebinth-like tree with almond-like nuts from Bactria. It appears in Dioscorides' writings as pistákia (πιστάκια), recognizable as P. vera by its comparison to pine nuts.
Pliny the Elder wrote in his Natural History that pistacia, "well known among us", was one of the trees unique to Syria, and that the seed was introduced into Italy by the Roman proconsul in Syria, Lucius Vitellius the Elder (in office in 35 AD), and into Hispania at the same time by Flaccus Pompeius. The early sixth-century manuscript De observatione ciborum (On the Observance of Foods) by Anthimus implies that pistacia remained well known in Europe in late antiquity. An article on pistachio tree cultivation is included in Ibn al-'Awwam's 12th-century agricultural work, the Book on Agriculture. Archaeologists have found evidence from excavations at Jarmo in northeastern Iraq for the consumption of Atlantic pistachio. The Hanging Gardens of Babylon were said to have contained pistachio trees during the reign of King Marduk-apla-iddina II, about 700 BCE. In the 19th century, the pistachio was cultivated commercially in parts of the English-speaking world, such as Australia, and in the US states of New Mexico and California, where it was introduced in 1854 as a garden tree. In 1904 and 1905, David Fairchild of the United States Department of Agriculture introduced to California hardier cultivars collected from China, but the pistachio was not promoted as a commercial crop until 1929. Walter T. Swingle's pistachios from Syria had already fruited well at Niles, California, by 1917. In 1969 and 1971, changes to the tax code in the United States eliminated tax shelters for almonds and citrus fruits. That encouraged California farmers to plant pistachio trees, which were still eligible for such tax breaks. In 1972, the Shah of Iran began a school breakfast program that included packets of pistachios. This reduced pistachio exports from Iran, raising prices in other countries and creating additional incentives to plant pistachio trees in California. The first commercial pistachio harvest in California took place in 1976. The Shah was forced into exile in January 1979 during the Iranian Revolution; the resulting end to trade between the United States and Iran provided additional incentives for American farmers to plant dramatically more pistachio trees. By 2008, U.S. pistachio production rivaled that of Iran. Drought and unusually cold weather in Iran led to severe declines in production there, while U.S. production was increasing. At that time, pistachios were Iran's second-most important export product, after the oil and gas sector. By 2020, there were 150,000 pistachio farmers in Iran, approximately 70% of whom were small-scale producers using inefficient manual picking and processing techniques. There were 950 far larger U.S. producers, using highly efficient mechanized production techniques. Between them, the U.S. and Iran control 70% of the world export market, with the U.S. in the lead. Worldwide demand exceeds production, so both countries have the ability to sell their production to various export markets. In 2021, Fresno County, California, accounted for about 40% of U.S. pistachio production, with a value of $722 million. Diseases and environment Pistachio trees are vulnerable to numerous diseases and to infestation by insects such as Leptoglossus clypealis in North America. Among these is infection by the fungus Botryosphaeria, which causes panicle and shoot blight (symptoms include death of the flowers and young shoots) and can damage entire pistachio orchards.
In 2004, the rapidly growing pistachio industry in California was threatened by panicle and shoot blight, first discovered in 1984. In 2011, anthracnose fungus caused a sudden 50% loss in the Australian pistachio harvest. Several years of severe drought in Iran around 2008 to 2015 caused significant declines in production. Production In 2022, world production of pistachios was , with the United States, Iran, and Turkey together accounting for 88% of the total. Italy produces a small quantity of pistachios; its Pistacchio di Bronte (pistachios from the town of Bronte) is DOP-protected. Toxicity As with other tree seeds, aflatoxin is found in poorly harvested or processed pistachios. Aflatoxins are potent carcinogenic chemicals produced by molds such as Aspergillus flavus and A. parasiticus. The mold contamination may occur from soil or poor storage, and be spread by pests. High levels of mold growth typically appear as gray to black filament-like growth. Eating mold-infected and aflatoxin-contaminated pistachios is unsafe. Aflatoxin contamination is a frequent risk, particularly in warmer and more humid environments. Food contaminated with aflatoxins has been the cause of frequent outbreaks of acute illness in parts of the world. In some cases, such as in Kenya, this has led to several deaths. Pistachio shells typically split naturally prior to harvest, with a hull covering the intact seeds. The hull protects the kernel from invasion by molds and insects, but this hull protection can be damaged in the orchard by poor orchard management practices or by birds, or after harvest, which makes exposure to contamination much easier. Some pistachios undergo so-called "early split", wherein both the hull and the shell split. Damage or early splits can lead to aflatoxin contamination. In some cases, a harvest may be treated to keep contamination below strict food safety thresholds; in other cases, an entire batch of pistachios must be destroyed because of aflatoxin contamination. Like other members of the family Anacardiaceae (which includes poison ivy, sumac, mango, and cashew), pistachios contain urushiol, an irritant that can cause allergic reactions. Large quantities of pistachios are self-heating in the presence of moisture, owing to their high oil content and naturally occurring lipases, and can spontaneously combust if stored with a combustible fabric such as jute. Uses The kernels are often eaten whole, either fresh or roasted and salted, and are also used in pistachio ice cream, traditional Persian ice cream, kulfi, spumoni, pistachio butter, pistachio paste, and confections such as baklava, pistachio chocolate, pistachio halva, pistachio lokum or biscotti, and cold cuts such as mortadella. Americans make pistachio salad, which includes fresh pistachios or pistachio pudding, whipped cream, and canned fruit. Indian cooking uses pounded pistachios with grilled meats and in pulao rice dishes. The shell of the pistachio is naturally a beige color, but it may be dyed red or green in commercial pistachios. Originally, dye was applied to hide stains on the shells caused when the nuts were picked by hand. In the 21st century, most pistachios are harvested by machine and the shells remain unstained. Nutrition Raw pistachios are 4% water, 45% fat, 28% carbohydrates, and 20% protein.
In a 100-gram reference amount, pistachios provide of food energy and are a rich source (20% or more of the Daily Value, DV) of protein, dietary fiber, several dietary minerals, and the B vitamins thiamin (73% DV) and vitamin B6 (100% DV). Pistachios are a moderate source (10–19% DV) of riboflavin, vitamin B5, folate, vitamin E, and vitamin K. The fat profile of raw pistachios consists mainly of monounsaturated and polyunsaturated fats, with a small amount of saturated fats. Saturated fatty acids include palmitic acid (10% of total) and stearic acid (2%). Oleic acid is the most common monounsaturated fatty acid (52% of total fat), and linoleic acid, a polyunsaturated fatty acid, is 30% of total fat. Relative to other tree nuts, pistachios have a lower amount of fat and food energy, but higher amounts of potassium, vitamin K, γ-tocopherol, and certain phytochemicals such as carotenoids and phytosterols. Research and health effects In July 2003, the United States Food and Drug Administration approved the first qualified health claim specific to consumption of seeds (including pistachios) to lower the risk of heart disease: "Scientific evidence suggests but does not prove that eating per day of most nuts, such as pistachios, as part of a diet low in saturated fat and cholesterol may reduce the risk of heart disease". Although a typical serving of pistachios supplies substantial food energy, their consumption in normal amounts is not associated with weight gain or obesity. One review found that pistachio consumption lowered blood pressure in persons without diabetes mellitus. A 2021 review found that pistachio consumption for three months or less significantly reduced triglyceride levels.
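As a rough cross-check of the macronutrient figures quoted under Nutrition above, the standard Atwater factors (4, 9, and 4 kcal/g for carbohydrate, fat, and protein) can be applied to them. This is an approximation only: it ignores the fibre fraction of the carbohydrate figure, so it somewhat overstates the usable energy per 100 g:

```python
# Rough Atwater estimate of energy from the macronutrient profile quoted
# above (45% fat, 28% carbohydrate, 20% protein per 100 g). This is an
# approximation, not the tabulated value: counting fibre at the full
# carbohydrate factor overstates the usable energy.
per_100_g = {"fat": 45.0, "carbohydrate": 28.0, "protein": 20.0}  # grams
kcal_per_g = {"fat": 9, "carbohydrate": 4, "protein": 4}          # Atwater factors

energy = sum(per_100_g[m] * kcal_per_g[m] for m in per_100_g)
print(f"estimated energy: {energy:.0f} kcal per 100 g")  # ~597 kcal (upper bound)
```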
Biology and health sciences
Nuts
Plants
40614
https://en.wikipedia.org/wiki/Network%20switch
Network switch
A network switch (also called switching hub, bridging hub, Ethernet switch, and, by the IEEE, MAC bridge) is networking hardware that connects devices on a computer network by using packet switching to receive and forward data to the destination device. A network switch is a multiport network bridge that uses MAC addresses to forward data at the data link layer (layer 2) of the OSI model. Some switches can also forward data at the network layer (layer 3) by additionally incorporating routing functionality. Such switches are commonly known as layer-3 switches or multilayer switches. Switches for Ethernet are the most common form of network switch. The first MAC bridge was invented in 1983 by Mark Kempf, an engineer in the Networking Advanced Development group of Digital Equipment Corporation. The first two-port bridge product (LANBridge 100) was introduced by that company shortly afterwards. The company subsequently produced multi-port switches for both Ethernet and FDDI such as GigaSwitch. Digital decided to license its MAC bridge patent on a royalty-free, non-discriminatory basis, which allowed IEEE standardization. This permitted a number of other companies to produce multi-port switches, including Kalpana. Ethernet was initially a shared-access medium, but the introduction of the MAC bridge began its transformation into its most common point-to-point form without a collision domain. Switches also exist for other types of networks including Fibre Channel, Asynchronous Transfer Mode, and InfiniBand. Unlike repeater hubs, which broadcast the same data out of each port and let the devices pick out the data addressed to them, a network switch learns the Ethernet addresses of connected devices and then forwards data only to the port connected to the device to which it is addressed. Overview A switch is a device in a computer network that connects other devices together. Multiple data cables are plugged into a switch to enable communication between different networked devices. Switches manage the flow of data across a network by transmitting a received network packet only to the one or more devices for which the packet is intended. Each networked device connected to a switch can be identified by its network address, allowing the switch to direct the flow of traffic, maximizing the security and efficiency of the network. A switch is more intelligent than an Ethernet hub, which simply retransmits packets out of every port of the hub except the port on which the packet was received; a hub is unable to distinguish different recipients and therefore achieves lower overall network efficiency. An Ethernet switch operates at the data link layer (layer 2) of the OSI model to create a separate collision domain for each switch port. Each device connected to a switch port can transfer data to any of the other ports at any time, and the transmissions will not interfere. Because broadcasts are still being forwarded to all connected devices by the switch, the newly formed network segment continues to be a broadcast domain. Switches may also operate at higher layers of the OSI model, including the network layer and above. A switch that also operates at these higher layers is known as a multilayer switch. Segmentation involves the use of a switch to split a larger collision domain into smaller ones in order to reduce collision probability and to improve overall network throughput. In the extreme case (i.e. micro-segmentation), each device is directly connected to a switch port dedicated to the device.
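The learning-and-forwarding behaviour described above can be sketched in a few lines of Python. This is an illustrative model only (the class name, simplified addresses, and port numbering are invented for this example); real switches implement the same logic in dedicated hardware.

```python
# Minimal sketch of transparent-bridge learning and forwarding.
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # learned mapping: source MAC -> ingress port

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: remember which port this source address was seen on.
        self.mac_table[src_mac] = in_port
        # Forward: send only to the known port; otherwise flood every
        # other port, which is what a repeater hub does for all frames.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame(0, "aa", "bb"))  # destination unknown: flood -> [1, 2, 3]
print(sw.handle_frame(1, "bb", "aa"))  # "aa" was learned on port 0 -> [0]
```

Once both hosts have been seen, traffic between them no longer reaches the other ports, which is the efficiency gain over a hub described in this section.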
In contrast to an Ethernet hub, there is a separate collision domain on each switch port. This allows computers to have dedicated bandwidth on point-to-point connections to the network and also to run in full-duplex mode. Full-duplex mode has only one transmitter and one receiver per collision domain, making collisions impossible. The network switch plays an integral role in most modern Ethernet local area networks (LANs). Mid-to-large-sized LANs contain a number of linked managed switches. Small office/home office (SOHO) applications typically use a single switch, or an all-purpose device such as a residential gateway to access small office/home broadband services such as DSL or cable Internet. In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology. Many switches have pluggable modules, such as Small Form-factor Pluggable (SFP) modules. These modules often contain a transceiver that connects the switch to a physical medium, such as a fiber optic cable. These modules were preceded by Medium Attachment Units connected via Attachment Unit Interfaces to switches and have evolved over time: the first modules were Gigabit interface converters, followed by XENPAK modules, SFP modules, XFP transceivers, SFP+ modules, QSFP, QSFP-DD, and OSFP modules. Pluggable modules are also used for transmitting video in broadcast applications. Role in a network Switches are most commonly used as the network connection point for hosts at the edge of a network. In the hierarchical internetworking model and similar network architectures, switches are also used deeper in the network to provide connections between the switches at the edge. In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, RapidIO, ATM, ITU-T G.hn and 802.11. This connectivity can be at any of the layers mentioned. While the layer-2 functionality is adequate for bandwidth-shifting within one technology, interconnecting technologies such as Ethernet and Token Ring is performed more easily at layer 3 or via routing. Devices that interconnect at layer 3 are traditionally called routers. Where there is a need for a great deal of analysis of network performance and security, switches may be connected between WAN routers as places for analytic modules. Some vendors provide firewall, network intrusion detection, and performance analysis modules that can plug into switch ports. Some of these functions may be on combined modules. Through port mirroring, a switch can create a mirror image of data that can go to an external device, such as intrusion detection systems and packet sniffers. A modern switch may implement power over Ethernet (PoE), which avoids the need for attached devices, such as a VoIP phone or wireless access point, to have a separate power supply. Since switches can have redundant power circuits connected to uninterruptible power supplies, the connected device can continue operating even when regular office power fails. In 1989 and 1990, Kalpana introduced the first multiport Ethernet switch, its seven-port EtherSwitch. Bridging Modern commercial switches primarily use Ethernet interfaces. The core function of an Ethernet switch is to provide multiple ports of layer-2 bridging. Layer-1 functionality is required in all switches in support of the higher layers. Many switches also perform operations at other layers.
A device capable of more than bridging is known as a multilayer switch. A layer 2 network device is a multiport device that uses hardware addresses (MAC addresses) to process and forward data at the data link layer (layer 2). A switch operating as a network bridge may interconnect otherwise separate layer 2 networks. The bridge learns the MAC address of each connected device, storing this data in a table that maps MAC addresses to ports. This table is often implemented using high-speed content-addressable memory (CAM), which is why some vendors refer to the MAC address table as a CAM table. Bridges also buffer an incoming packet and adapt the transmission speed to that of the outgoing port. While there are specialized applications, such as storage area networks, where the input and output interfaces are the same bandwidth, this is not always the case in general LAN applications. In LANs, a switch used for end-user access typically concentrates lower bandwidth and uplinks into a higher bandwidth. Because the Ethernet header at the start of the frame contains all the information required to make a forwarding decision, some high-performance switches can begin forwarding the frame to the destination whilst still receiving the frame payload from the sender. This cut-through switching can significantly reduce latency through the switch. Interconnects between switches may be regulated using the spanning tree protocol (STP) that disables forwarding on links so that the resulting local area network is a tree without switching loops. In contrast to routers, spanning tree bridges must have topologies with only one active path between two points. Shortest path bridging and TRILL (Transparent Interconnection of Lots of Links) are layer 2 alternatives to STP which allow all paths to be active with multiple equal-cost paths. Types Form factors Switches are available in many form factors, including stand-alone desktop units, which are typically intended to be used in a home or office environment outside a wiring closet; rack-mounted switches for use in an equipment rack or an enclosure; DIN rail mounted for use in industrial environments; and small installation switches, mounted into a cable duct, floor box or communications tower, as found, for example, in fiber to the office infrastructures. Rack-mounted switches may be stand-alone units, stackable switches or large chassis units with swappable line cards. Configuration options Unmanaged switches have no configuration interface or options; they are plug and play. They are typically the least expensive switches, and therefore often used in a small office/home office environment. Unmanaged switches can be desktop or rack mounted. Managed switches have one or more methods to modify the operation of the switch. Common management methods include: a command-line interface (CLI) accessed via serial console, telnet or Secure Shell, an embedded Simple Network Management Protocol (SNMP) agent allowing management from a remote console or management station, or a web interface for management from a web browser. Two sub-classes of managed switches are smart and enterprise-managed switches. Smart switches (aka intelligent switches) are managed switches with a limited set of management features. Likewise, web-managed switches are switches that fall into a market niche between unmanaged and managed. For a price much lower than a fully managed switch, they provide a web interface (and usually no CLI access) and allow configuration of basic settings, such as VLANs, port bandwidth, and duplex.
Enterprise managed switches (aka managed switches) have a full set of management features, including CLI, SNMP agent, and web interface. They may have additional features to manipulate configurations, such as the ability to display, modify, back up, and restore configurations. Compared with smart switches, enterprise switches have more features that can be customized or optimized and are generally more expensive than smart switches. Enterprise switches are typically found in networks with a larger number of switches and connections, where centralized management represents a significant saving in administrative time and effort. A stackable switch is a type of enterprise-managed switch. Typical management features include:
- Centralized configuration management and configuration distribution
- Enabling and disabling of ports
- Link bandwidth and duplex settings
- Quality of service configuration and monitoring
- MAC filtering and other access control list features
- Configuration of Spanning Tree Protocol (STP) and Shortest Path Bridging (SPB) features
- Simple Network Management Protocol (SNMP) monitoring of device and link health
- Port mirroring for monitoring traffic and troubleshooting
- Link aggregation configuration to set up multiple ports for the same connection to achieve higher data transfer rates and reliability
- VLAN configuration and port assignments, including IEEE 802.1Q tagging
- NTP (Network Time Protocol) synchronization
- Network access control features such as IEEE 802.1X
- LLDP (Link Layer Discovery Protocol)
- IGMP snooping for control of multicast traffic
Traffic monitoring It is difficult to monitor traffic that is bridged using a switch because only the sending and receiving ports can see the traffic. Methods that are specifically designed to allow a network analyst to monitor traffic include:
- Port mirroring: Because the purpose of a switch is to not forward traffic to network segments where it would be superfluous, a node attached to a switch cannot monitor traffic on other segments. Port mirroring is how this problem is addressed in switched networks: in addition to the usual behavior of forwarding frames only to ports through which they might reach their addressees, the switch forwards frames received through a given monitored port to a designated monitoring port, allowing analysis of traffic that would otherwise not be visible through the switch.
- Switch monitoring (SMON), which is described by RFC 2613 and is a provision for controlling facilities such as port mirroring.
- RMON
- sFlow
These monitoring features are rarely present on consumer-grade switches. Other monitoring methods include connecting a layer-1 hub or network tap between the monitored device and its switch port.
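Port mirroring, as described above, can be modelled with the same kind of sketch. The function and port numbers here are invented for illustration; real switches implement mirroring in hardware with vendor-specific configuration.

```python
# Illustrative model of port mirroring: frames seen on a monitored port
# are copied to a designated monitoring port in addition to their
# normal forwarding.
def forward_with_mirror(in_port, normal_out_ports, monitored_port, monitor_port):
    out_ports = list(normal_out_ports)
    if in_port == monitored_port and monitor_port not in out_ports:
        out_ports.append(monitor_port)
    return out_ports

# A frame arriving on monitored port 3, normally forwarded to port 1, is
# also copied to port 7, where an intrusion detection system or packet
# sniffer could be attached.
print(forward_with_mirror(3, [1], monitored_port=3, monitor_port=7))  # [1, 7]
```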
Technology
Networks
null
40623
https://en.wikipedia.org/wiki/Multimeter
Multimeter
A multimeter (also known as a volt-ohm-milliammeter, volt-ohmmeter or VOM) is a measuring instrument that can measure multiple electrical properties. A typical multimeter can measure voltage, resistance, and current, in which case it can be used as a voltmeter, ohmmeter, and ammeter. Some feature the measurement of additional properties such as temperature and capacitance. Analog multimeters use a microammeter with a moving pointer to display readings. Digital multimeters (DMMs) have numeric displays and, as a result, are more precise than analog multimeters. Meters will typically include probes that temporarily connect the instrument to the device or circuit under test, and offer some intrinsic safety features to protect the operator if the instrument is connected to high voltages that exceed its measurement capabilities. Multimeters vary in size, features, and price. They can be portable handheld devices or highly precise bench instruments. Multimeters are used in diagnostic operations to verify the correct operation of a circuit or to test passive components for values in tolerance with their specifications. History The first attested usage of the word "multimeter" listed by the Oxford English Dictionary is from 1907. Precursors The first moving-pointer current-detecting device was the galvanometer in 1820. These were used to measure resistance and voltage by using a Wheatstone bridge, and comparing the unknown quantity to a reference voltage or resistance. While useful in the lab, the devices were very slow and impractical in the field. These galvanometers were bulky and delicate. The D'Arsonval–Weston meter movement uses a moving coil which carries a pointer and rotates on pivots or a taut band ligament. The coil rotates in a permanent magnetic field and is restrained by fine spiral springs which also serve to carry current into the moving coil. It gives proportional measurement rather than just detection, and deflection is independent of the orientation of the meter. Instead of balancing a bridge, values could be directly read off the instrument's scale, which made measurement quick and easy. The basic moving coil meter is suitable only for direct current measurements, usually in the range of 10 μA to 100 mA. It is easily adapted to read heavier currents by using shunts (resistances in parallel with the basic movement) or to read voltage using series resistances known as multipliers. To read alternating currents or voltages, a rectifier is needed. One of the earliest suitable rectifiers was the copper oxide rectifier developed and manufactured by Union Switch & Signal Company, Swissvale, Pennsylvania, later part of Westinghouse Brake and Signal Company, from 1927. Avometer The invention of the first multimeter is attributed to British Post Office engineer, Donald Macadie, who became dissatisfied with the need to carry many separate instruments required for maintenance of telecommunication circuits. Macadie invented an instrument which could measure amperes (amps), volts and ohms, so the multifunctional meter was then named Avometer. The meter comprised a moving coil meter, voltage and precision resistors, and switches and sockets to select the range. The first Avometer had a sensitivity of 60 Ω/V, three direct current ranges (12 mA, 1.2 A, and 12 A), three direct voltage ranges (12, 120, and 600 V or optionally 1,200 V), and a 10,000 Ω resistance range. An improved version of 1927 increased this to 13 ranges and a 166.6 Ω/V (6 mA) movement.
A "Universal" version having additional alternating current and alternating voltage ranges was offered from 1933, and in 1936 the dual-sensitivity Avometer Model 7 offered 500 and 100 Ω/V. From the mid-1930s until the 1950s, 1,000 Ω/V became a de facto standard of sensitivity for radio work and this figure was often quoted on service sheets. However, some manufacturers such as Simpson, Triplett and Weston, all in the US, produced 20,000 Ω/V VOMs before the Second World War and some of these were exported. After 1945–46, 20,000 Ω/V became the expected standard for electronics, but some makers offered even more sensitive instruments. For industrial and other "heavy-current" use, low-sensitivity multimeters continued to be produced, and these were considered more robust than the more sensitive types. The Automatic Coil Winder and Electrical Equipment Company (ACWEECO), founded in 1923, was set up to manufacture the Avometer and a coil winding machine also designed and patented by MacAdie. Although a shareholder of ACWEECO, Mr MacAdie continued to work for the Post Office until his retirement in 1933. His son, Hugh S. MacAdie, joined ACWEECO in 1927 and became Technical Director. The first AVO was put on sale in 1923, and many of its features remained almost unaltered through to the last Model 8. Pocket watch meters Pocket-watch-style meters were in widespread use in the 1920s. The metal case was typically connected to the negative connection, an arrangement that caused numerous electric shocks. The technical specifications of these devices were often crude; for example, the one illustrated has a sensitivity of just 25 Ω/V, a non-linear scale, and no zero adjustment on either range. Vacuum tube voltmeters Vacuum tube voltmeters or valve voltmeters (VTVM, VVM) were used for voltage measurements in electronic circuits where high input impedance was necessary. The VTVM had a fixed input impedance of typically 1 MΩ or more, usually through use of a cathode follower input circuit, and thus did not significantly load the circuit being tested. VTVMs were used before the introduction of electronic high-impedance analog transistor and field effect transistor voltmeters (FETVOMs). Modern digital meters (DVMs) and some modern analog meters also use electronic input circuitry to achieve high input impedance; their voltage ranges are functionally equivalent to VTVMs. The input impedance of some poorly designed DVMs (especially some early designs) would vary over the course of a sample-and-hold internal measurement cycle, causing disturbances to some sensitive circuits under test. Introduction of digital meters The first digital multimeter was manufactured in 1955 by Non Linear Systems. It is claimed that the first handheld digital multimeter was developed by Frank Bishop of Intron Electronics in 1977, which at the time presented a major breakthrough for servicing and fault finding in the field. Features Any meter will load the circuit under test to some extent. For example, a multimeter using a moving coil movement with a full-scale deflection current of 50 microamperes (μA), the highest sensitivity commonly available, must draw at least 50 μA from the circuit under test for the meter to reach the top end of its scale. This may load a high-impedance circuit so much as to affect the circuit, thereby giving a low reading. The full-scale deflection current may also be expressed in terms of "ohms per volt" (Ω/V). The ohms per volt figure is often called the "sensitivity" of the instrument.
Thus a meter with a 50 μA movement will have a "sensitivity" of 20,000 Ω/V. "Per volt" refers to the fact that the impedance the meter presents to the circuit under test will be 20,000 Ω multiplied by the full-scale voltage to which the meter is set. For example, if the meter is set to a range of 300 V full scale, the meter's impedance will be 6 MΩ. 20,000 Ω/V is the best (highest) sensitivity available for typical analog multimeters that lack internal amplifiers. For meters that do have internal amplifiers (VTVMs, FETVMs, etc.), the input impedance is fixed by the amplifier circuit. Additional scales, such as decibels, and measurement functions, such as capacitance, transistor gain, frequency, duty cycle, display hold, and continuity (which sounds a buzzer when the measured resistance is small), have been included on many multimeters. While multimeters may be supplemented by more specialized equipment in a technician's toolkit, some multimeters include additional functions for specialized applications (temperature with a thermocouple probe, inductance, connectivity to a computer, speaking measured value, etc.). Contemporary multimeters can measure many values. The most common are: Voltage, alternating and direct, in volts. Current, alternating and direct, in amperes. The frequency range for which AC measurements are accurate is important, depends on the circuitry design and construction, and should be specified, so users can evaluate the readings they take. Some meters measure currents as low as milliamperes or even microamperes. All meters have a burden voltage (caused by the combination of the shunt used and the meter's circuit design), and some (even expensive ones) have sufficiently high burden voltages that low current readings are seriously impaired. Meter specifications should include the burden voltage of the meter. Resistance in ohms. Additionally, some multimeters also measure: Capacitance in farads, though the range is usually limited to between a few picofarads and a few hundred or thousand microfarads. Very few general-purpose multimeters can measure other important aspects of capacitor status such as ESR, dissipation factor, or leakage. Conductance in siemens, which is the inverse of the resistance measured. Decibels in circuitry, rarely in sound. Duty cycle as a percentage. Frequency in hertz. Inductance in henries. Like capacitance measurement, this is usually better handled by a purpose-designed inductance/capacitance meter. Temperature in degrees Celsius or Fahrenheit, with an appropriate temperature test probe, often a thermocouple. Digital multimeters may also include circuits for: Continuity testing; a buzzer sounds when a circuit's resistance is low enough (just how low is enough varies from meter to meter), so the test must be treated as inexact. Diodes (measuring forward drop of diode junctions). Transistors (measuring current gain and other parameters in some kinds of transistors). Battery checking for simple 1.5 V and 9 V batteries. This is a current-loaded measurement, which simulates in-use battery loads; normal voltage ranges draw very little current from the battery.
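To make the ohms-per-volt arithmetic above concrete, the short Python sketch below (illustrative only) reproduces the figures used in the text: a 50 μA full-scale movement corresponds to 20,000 Ω/V, and on a 300 V range such a meter presents 6 MΩ to the circuit under test.

```python
def meter_impedance(sensitivity_ohms_per_volt, full_scale_volts):
    # An unamplified meter presents sensitivity x full-scale voltage.
    return sensitivity_ohms_per_volt * full_scale_volts

sensitivity = 1 / 50e-6                   # 50 uA movement -> 20,000 ohm/V
print(sensitivity)                        # 20000.0
print(meter_impedance(sensitivity, 300))  # 6,000,000 ohms on a 300 V range
```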
Various sensors can be attached to (or included in) multimeters to take measurements such as:
- Luminance
- Sound pressure level
- pH
- Relative humidity
- Very small current flow (down to nanoamperes with some adapters)
- Very small resistances (down to microohms with some adapters)
- Large currents: adapters are available which use inductance (AC current only) or Hall effect sensors (both AC and DC current), usually through insulated clamp jaws, to avoid direct contact with high-current-capacity circuits, which can be dangerous to the meter and to the operator
- Very high voltages: adapters are available which form a voltage divider with the meter's internal resistance, allowing measurement into the thousands of volts. However, very high voltages often have surprising behavior, aside from effects on the operator (perhaps fatal); high voltages which actually reach a meter's internal circuitry may damage internal parts, perhaps destroying the meter or permanently ruining its performance.
Designs Analog An un-amplified analog multimeter combines a meter movement, range resistors and switches; VTVMs are amplified analog meters and contain active circuitry. For an analog meter movement, DC voltage is measured with a series resistor connected between the meter movement and the circuit under test. A switch (usually rotary) allows greater resistance to be inserted in series with the meter movement to read higher voltages. The product of the basic full-scale deflection current of the movement, and the sum of the series resistance and the movement's own resistance, gives the full-scale voltage of the range. As an example, a meter movement that required 1 mA for full-scale deflection, with an internal resistance of 500 Ω, would, on a 10 V range of the multimeter, have 9,500 Ω of series resistance. For analog current ranges, matched low-resistance shunts are connected in parallel with the meter movement to divert most of the current around the coil. Again for the case of a hypothetical 1 mA, 500 Ω movement on a 1 A range, the shunt resistance would be just over 0.5 Ω. Moving coil instruments can respond only to the average value of the current through them. To measure alternating current, which changes up and down repeatedly, a rectifier is inserted in the circuit so that each negative half cycle is inverted; the result is a varying and nonzero DC voltage whose maximum value will be half the AC peak-to-peak voltage, assuming a symmetrical waveform. Since the rectified average value and the root mean square (RMS) value of a waveform are only the same for a square wave, simple rectifier-type circuits can only be calibrated for sinusoidal waveforms. Other wave shapes require a different calibration factor to relate RMS and average value. This type of circuit usually has a fairly limited frequency range. Since practical rectifiers have non-zero voltage drop, accuracy and sensitivity are poor at low AC voltage values. To measure resistance, switches arrange for a small battery within the instrument to pass a current through the device under test and the meter coil. Since the current available depends on the state of charge of the battery, which changes over time, a multimeter usually has an adjustment for the ohm scale to zero it. In the usual circuits found in analog multimeters, the meter deflection is inversely proportional to the resistance, so full-scale will be 0 Ω, and higher resistance will correspond to smaller deflections. The ohms scale is compressed, so resolution is better at lower resistance values.
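The series-multiplier and shunt values worked out above can be reproduced with a short calculation. This is a sketch of the standard formulas, using the hypothetical 1 mA, 500 Ω movement from the text.

```python
I_FS = 1e-3   # full-scale deflection current of the movement, amperes
R_M = 500.0   # internal resistance of the movement, ohms

def series_multiplier(v_range):
    # Total loop resistance must limit the current to I_FS at full scale.
    return v_range / I_FS - R_M

def shunt_resistance(i_range):
    # At full scale the movement drops I_FS * R_M volts; the shunt must
    # carry the remainder of the range current at that same voltage.
    return (I_FS * R_M) / (i_range - I_FS)

print(series_multiplier(10.0))  # 9500.0 ohms for the 10 V range
print(shunt_resistance(1.0))    # ~0.5005 ohms ("just over 0.5") for 1 A
```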
Amplified instruments simplify the design of the series and shunt resistor networks. The internal resistance of the coil is decoupled from the selection of the series and shunt range resistors; the series network thus becomes a voltage divider. Where AC measurements are required, the rectifier can be placed after the amplifier stage, improving precision at low ranges. The meter movement in a moving pointer analog multimeter is practically always a moving-coil galvanometer of the d'Arsonval type, using either jeweled pivots or taut bands to support the moving coil. In a basic analog multimeter the current to deflect the coil and pointer is drawn from the circuit being measured; it is usually an advantage to minimize the current drawn from the circuit, which implies delicate mechanisms. The sensitivity of an analog multimeter is given in units of ohms per volt. For example, a very low-cost multimeter with a sensitivity of 1,000 Ω/V would draw 1 mA from a circuit at full-scale deflection. More expensive (and mechanically more delicate) multimeters typically have sensitivities of 20,000 ohms per volt and sometimes higher, with 50,000 ohms per volt (drawing 20 microamperes at full scale) being about the upper limit for a portable, general purpose, non-amplified analog multimeter. To avoid the loading of the measured circuit by the current drawn by the meter movement, some analog multimeters use an amplifier inserted between the measured circuit and the meter movement. While this increases the expense and complexity of the meter, by use of vacuum tubes or field effect transistors the input resistance can be made very high and independent of the current required to operate the meter movement coil. Such amplified multimeters are called VTVMs (vacuum tube voltmeters), TVMs (transistor volt meters), FET-VOMs, and similar names. Analog meters are intuitive where the trend of a measurement is more important than an exact value obtained at a particular moment. A change in angle or in a proportion is easier to interpret than a change in the value of a digital readout. For this reason, some digital multimeters additionally have a bar graph as a second display, typically with a more rapid sampling rate than used for the primary readout. These fast-sampling bar graphs have a response superior to the physical pointer of analog meters, making the older technology obsolete in this respect. With rapidly fluctuating DC, AC or a combination of both, advanced digital meters are able to track and display fluctuations better than analog meters whilst also having the ability to separate and simultaneously display DC and AC components. Because of the absence of amplification, ordinary analog multimeters are typically less susceptible to radio frequency interference, and so continue to have a prominent place in some fields even in a world of more accurate and flexible electronic multimeters. Analog meter movements are inherently more fragile physically and electrically than digital meters. Many analog multimeters feature a range switch position marked "off" to protect the meter movement during transportation; this position places a low resistance across the meter movement, resulting in dynamic braking. Meter movements as separate components may be protected in the same manner by connecting a shorting or jumper wire between the terminals when not in use.
Meters that feature a shunt across the winding, such as ammeters, may not require further resistance to arrest uncontrolled movements of the meter needle because of the low resistance of the shunt. High-quality analog multimeters continue to be made by several manufacturers, including Chauvin Arnoux (France), Gossen Metrawatt (Germany), and Simpson and Triplett (USA). Digital Digital instruments, which necessarily incorporate amplifiers, use the same principles as analog instruments for resistance readings. For resistance measurements, usually a small constant current is passed through the device under test and the digital multimeter reads the resultant voltage drop; this eliminates the scale compression found in analog meters, but requires a source of precise current. An autoranging digital multimeter can automatically adjust the scaling network so the measurement circuits use the full precision of the A/D converter. In a digital multimeter the signal under test is converted to a voltage and an amplifier with electronically controlled gain preconditions the signal. A digital multimeter displays the quantity measured as a number, which eliminates parallax errors. Modern digital multimeters may have an embedded computer, which provides a wealth of convenience features. Measurement enhancements available include: Auto-ranging, which selects the correct range for the quantity under test so that the most significant digits are shown. For example, a four-digit multimeter would automatically select an appropriate range to display 12.34 mV rather than 0.012 V or an overload indication. Auto-ranging meters usually include a facility to hold the meter to a particular range, because a measurement that causes frequent range changes can be distracting to the user. Auto-polarity for direct-current readings, which shows whether the polarity of the applied voltage is positive (agrees with meter lead labels) or negative (opposite polarity to meter leads). Sample and hold, which will latch the most recent reading for examination after the instrument is removed from the circuit under test. Current-limited tests for voltage drop across semiconductor junctions. While not a replacement for a proper transistor tester, and most certainly not for a swept curve tracer type, this facilitates testing diodes and a variety of transistor types. A graphic representation of the quantity under test, as a bar graph. This makes go/no-go testing easy, and also allows spotting of fast-moving trends. A low-bandwidth oscilloscope. Automotive circuit testers, including tests for automotive timing and dwell signals (dwell and engine rpm testing is usually available as an option and is not included in the basic automotive DMMs). Simple data acquisition features to record maximum and minimum readings over a given period, or to take a number of samples at fixed intervals. Integration with tweezers for surface-mount technology. A combined LCR meter for small-size SMD and through-hole components. Modern meters may be interfaced with a personal computer by IrDA links, RS-232 connections, USB, Bluetooth, or an instrument bus such as IEEE-488. The interface allows the computer to record measurements as they are made. Some DMMs can store measurements and upload them to a computer. Components Probes A multimeter can use many different test probes to connect to the circuit or device under test. Crocodile clips, retractable hook clips, and pointed probes are the three most common types.
Tweezer probes are used for closely spaced test points, as for instance surface-mount devices. The connectors are attached to flexible, well insulated leads terminated with connectors appropriate for the meter. Probes are connected to portable meters typically by shrouded or recessed banana jacks, while benchtop meters may use banana jacks or BNC connectors. 2 mm plugs and binding posts have also been used at times, but are less commonly used today. Indeed, safety ratings now require shrouded banana jacks. The banana jacks are typically placed with a standardized center-to-center distance of , to allow standard adapters or devices such as voltage multiplier or thermocouple probes to be plugged in. Clamp meters clamp around a conductor carrying a current to measure without the need to connect the meter in series with the circuit, or make metallic contact at all. Those for AC measurement use the transformer principle; clamp-on meters to measure small currents or direct current require more exotic sensors, such as Hall effect sensors that measure a static magnetic field to determine the current. Power supply Analog meters can measure voltage and current by using power from the test circuit, but require a supplementary internal voltage source for resistance testing, while electronic meters always require an internal power supply to run their internal circuitry. Hand-held meters use batteries, while bench meters usually use mains power; either arrangement allows the meter to test devices. Testing often requires that the component under test be isolated from the circuit in which it is mounted, as otherwise stray or leakage current paths may distort measurements. In some cases, the voltage from the multimeter may turn active devices on, distorting a measurement, or in extreme cases even damage an element in the circuit being investigated. Safety Most multimeters include a fuse, or two fuses, which will sometimes prevent damage to the multimeter from a current overload on the highest current range. (For added safety, test leads with fuses built in are available.) A common error when operating a multimeter is to set the meter to measure resistance or current, and then connect it directly to a low-impedance voltage source. Unfused meters are often quickly destroyed by such errors; fused meters often survive. Fuses used in meters must carry the maximum measuring current of the instrument, but are intended to disconnect if operator error exposes the meter to a low-impedance fault. Meters with inadequate or unsafe fusing were not uncommon; this situation has led to the creation of the IEC 61010 categories to rate the safety and robustness of meters. Digital meters are rated into four categories based on their intended application, as set forth by IEC 61010-1 and echoed by country and regional standards groups such as the CEN EN61010 standard.
- Category I: used where equipment is not directly connected to the mains
- Category II: used on single-phase mains final subcircuits
- Category III: used on permanently installed loads such as distribution panels, motors, and three-phase appliance outlets
- Category IV: used on locations where fault current levels can be very high, such as supply service entrances, main panels, supply meters, and primary over-voltage protection equipment
Each category rating also specifies maximum safe transient voltages for selected measuring ranges in the meter. Category-rated meters also feature protections from over-current faults.
On meters that allow interfacing with computers, optical isolation may be used to protect attached equipment against high voltage in the measured circuit. Good-quality multimeters designed to meet Category II and above standards include high-rupture-capacity (HRC) ceramic fuses typically rated at more than 20 A capacity; these are much less likely to fail explosively than more common glass fuses. They will also include high-energy overvoltage MOV (metal oxide varistor) protection, and circuit over-current protection in the form of a Polyswitch. Meters intended for testing in hazardous locations or for use on blasting circuits may require use of a manufacturer-specified battery to maintain their safety rating. Characteristics Resolution The resolution of a multimeter is the smallest part of the scale which can be shown, which is scale dependent. On some digital multimeters it can be configured, with higher resolution measurements taking longer to complete. For example, a multimeter that has a 1 mV resolution on a 10 V scale can show changes in measurements in 1 mV increments. Absolute accuracy is the error of the measurement compared to a perfect measurement. Relative accuracy is the error of the measurement compared to the device used to calibrate the multimeter. Most multimeter datasheets provide relative accuracy. To compute the absolute accuracy from the relative accuracy of a multimeter, add the absolute accuracy of the device used to calibrate the multimeter to the relative accuracy of the multimeter. The resolution of a multimeter is often specified in the number of decimal digits resolved and displayed. If the most significant digit cannot take all values from 0 to 9, it is generally, and confusingly, termed a fractional digit. For example, a multimeter which can read up to 19999 (plus an embedded decimal point) is said to read 4½ digits. By convention, if the most significant digit can be either 0 or 1, it is termed a half-digit; if it can take higher values without reaching 9 (often 3 or 5), it may be called three-quarters of a digit. A 5½-digit multimeter would display one "half digit" that could only display 0 or 1, followed by five digits taking all values from 0 to 9. Such a meter could show positive or negative values from 0 to 199999. A 3¾-digit meter can display a quantity from 0 to 3999 or 5999, depending on the manufacturer. While a digital display can easily be extended in resolution, the extra digits are of no value if not accompanied by care in the design and calibration of the analog portions of the multimeter. Meaningful (i.e., high-accuracy) measurements require a good understanding of the instrument specifications, good control of the measurement conditions, and traceability of the calibration of the instrument. However, even if its resolution exceeds the accuracy, a meter can be useful for comparing measurements. For example, a meter reading a stable 5½ digits may indicate that one nominally 100 kΩ resistor is about 7 Ω greater than another, although the error of each measurement is 0.2% of reading plus 0.05% of full-scale value. Specifying "display counts" is another way to specify the resolution. Display counts give the largest number, or the largest number plus one (to include the display of all zeros) the multimeter's display can show, ignoring the decimal separator. For example, a 5½-digit multimeter can also be specified as a 199999 display count or 200000 display count multimeter. Often the display count is just called the 'count' in multimeter specifications.
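The relationship between digits, display counts, and resolution described above can be checked with a one-line calculation; the ranges chosen below are arbitrary examples.

```python
def resolution(full_scale_volts, counts):
    # Smallest displayable step on the selected range.
    return full_scale_volts / counts

print(resolution(2.0, 20000))  # 0.0001 V on a 4½-digit (19999-count) meter
print(resolution(4.0, 4000))   # 0.001 V on a 3¾-digit (3999-count) meter
```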
The accuracy of a digital multimeter may be stated in a two-term form, such as "±1% of reading +2 counts", reflecting the different sources of error in the instrument. Analog meters are older designs but, despite being technically surpassed by digital meters with bar graphs, may still be preferred by engineers and troubleshooters. One reason given is that analog meters are more sensitive (or responsive) to changes in the circuit that is being measured. A digital multimeter samples the quantity being measured over time, and then displays it. Analog multimeters continuously read the test value. If there are slight changes in readings, the needle of an analog multimeter will attempt to track them, as opposed to the digital meter having to wait until the next sample, giving delays between each discontinuous reading (plus the digital meter may additionally require settling time to converge on the value). A fluctuating value on a digital display, as opposed to an analog display, is subjectively more difficult to read. This continuous tracking feature becomes important when testing capacitors or coils, for example. A properly functioning capacitor should allow current to flow when voltage is applied; the current then slowly decreases to zero, and this "signature" is easy to see on an analog multimeter but not on a digital multimeter. The behaviour is similar when testing a coil, except the current starts low and increases. Resistance measurements on an analog meter, in particular, can be of low precision due to the typical resistance measurement circuit, which compresses the scale heavily at the higher resistance values. Inexpensive analog meters may have only a single resistance scale, seriously restricting the range of precise measurements. Typically, an analog meter will have a panel adjustment to set the zero-ohms calibration of the meter, to compensate for the varying voltage of the meter battery and the resistance of the meter's test leads. Accuracy Digital multimeters generally take measurements with accuracy superior to their analog counterparts. Standard analog multimeters measure with typically ±3% accuracy, though instruments of higher accuracy are made. Standard portable digital multimeters are specified to have an accuracy of typically ±0.5% on the DC voltage ranges. Mainstream bench-top multimeters are available with specified accuracy of better than ±0.01%. Laboratory grade instruments can have accuracies of a few parts per million. Accuracy figures need to be interpreted with care. The accuracy of an analog instrument usually refers to full-scale deflection; a measurement of 30 V on the 100 V scale of a 3% meter is subject to an error of 3 V, 10% of the reading. Digital meters usually specify accuracy as a percentage of reading plus a percentage of full-scale value, sometimes expressed in counts rather than percentage terms. Quoted accuracy is specified as being that of the lower millivolt (mV) DC range, and is known as the "basic DC volts accuracy" figure. Higher DC voltage ranges, current, resistance, AC and other ranges will usually have a lower accuracy than the basic DC volts figure. AC measurements only meet specified accuracy within a specified range of frequencies. Manufacturers can provide calibration services so that new meters may be purchased with a certificate of calibration indicating the meter has been adjusted to standards traceable to, for example, the US National Institute of Standards and Technology (NIST), or other national standards organization.
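A two-term specification such as "±1% of reading +2 counts" can be evaluated as follows; the reading and range below are invented for illustration.

```python
def worst_case_error(reading, percent_of_reading, counts, count_value):
    # count_value is the value of one least-significant-digit step on
    # the selected range (e.g. 0.001 V on a 1.999 V range).
    return reading * percent_of_reading / 100 + counts * count_value

# 1.500 V reading on a 2 V, 1999-count range with a ±1% + 2 counts spec:
print(worst_case_error(1.500, 1.0, 2, 0.001))  # 0.017 -> ±0.017 V
```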
Test equipment tends to drift out of calibration over time, and the specified accuracy cannot be relied upon indefinitely. For more expensive equipment, manufacturers and third parties provide calibration services so that older equipment may be recalibrated and recertified. The cost of such services is disproportionate for inexpensive equipment; however, extreme accuracy is not required for most routine testing. Multimeters used for critical measurements may be part of a metrology program to assure calibration. A multimeter can be assumed to be "average responding" to AC waveforms unless stated as being a "true RMS" type. An average-responding multimeter will only meet its specified accuracy on AC volts and amps for purely sinusoidal waveforms. A true-RMS-responding multimeter, on the other hand, will meet its specified accuracy on AC volts and current with any waveform type up to a specified crest factor; RMS performance is sometimes claimed for meters which report accurate RMS readings only at certain frequencies (usually low) and with certain waveforms (essentially always sine waves). A meter's AC voltage and current accuracy may have different specifications at different frequencies. Sensitivity and input impedance When used for measuring voltage, the input impedance of the multimeter must be very high compared to the impedance of the circuit being measured; otherwise circuit operation may be affected and the reading will be inaccurate. Meters with electronic amplifiers (all digital multimeters and some analog meters) have a fixed input impedance that is high enough not to disturb most circuits. This is often either one or ten megohms; the standardization of the input resistance allows the use of external high-resistance probes which form a voltage divider with the input resistance to extend voltage range up to tens of thousands of volts. High-end multimeters generally provide an input impedance greater than 10 GΩ for ranges less than or equal to 10 V. Some high-end multimeters provide more than 10 GΩ of input impedance on ranges greater than 10 V. Most analog multimeters of the moving-pointer type are unbuffered, and draw current from the circuit under test to deflect the meter pointer. The impedance of the meter varies depending on the basic sensitivity of the meter movement and the range which is selected. For example, a meter with a typical 20,000 Ω/V sensitivity will have an input resistance of 2 MΩ on the 100 V range (100 V × 20,000 Ω/V = 2,000,000 Ω). On every range, at full-scale voltage of the range, the full current required to deflect the meter movement is taken from the circuit under test. Lower-sensitivity meter movements are acceptable for testing in circuits where source impedances are low compared to the meter impedance, for example, power circuits; these meters are more rugged mechanically. Some measurements in signal circuits require higher sensitivity movements so as not to load the circuit under test with the meter impedance. Sensitivity should not be confused with resolution of a meter, which is defined as the lowest signal change (voltage, current, resistance and so on) that can change the observed reading. For general-purpose digital multimeters, the lowest voltage range is typically several hundred millivolts AC or DC, but the lowest current range may be several hundred microamperes, although instruments with greater current sensitivity are available.
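The loading effect described above is just a voltage divider formed by the source resistance and the meter's input impedance, so the fraction of the true voltage the meter sees is easy to compute. The figures below use the 2 MΩ analog example from the text and an assumed 10 MΩ digital meter.

```python
def indicated_fraction(source_ohms, meter_ohms):
    # The source resistance and meter input impedance form a divider.
    return meter_ohms / (source_ohms + meter_ohms)

# Measuring a point with 1 Mohm source resistance:
print(indicated_fraction(1e6, 10e6))  # ~0.909: a 10 Mohm DMM reads ~9% low
print(indicated_fraction(1e6, 2e6))   # ~0.667: a 20,000 ohm/V analog meter
                                      # on its 100 V range reads ~33% low
```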
Multimeters designed for (mains) "electrical" use instead of general electronics engineering use will typically forgo the microampere current ranges. Measurement of low resistance requires the lead resistance (measured by touching the test probes together) to be subtracted for best accuracy. This can be done with the "delta", "zero", or "null" feature of many digital multimeters. Contact pressure to the device under test and cleanliness of the surfaces can affect measurements of very low resistances. Some meters offer a four-wire (Kelvin) test, in which two probes carry the test current and the other two sense the voltage across the device under test. Because the voltage-sensing input has very high impedance, almost no current flows in the sense leads, so the resistance of the probes does not affect the result, giving very accurate measurements. The upper end of multimeter measurement ranges varies considerably; measurements over perhaps 600 volts, 10 amperes, or 100 megohms may require a specialized test instrument. Burden voltage Every inline series-connected ammeter, including a multimeter in a current range, has a certain resistance. Most multimeters inherently measure voltage, and pass a current to be measured through a shunt resistance, measuring the voltage developed across it. The voltage drop is known as the burden voltage, specified in volts per ampere. The value can change depending on the range the meter selects, since different ranges usually use different shunt resistors. The burden voltage can be significant in very low-voltage circuit areas. To check for its effect on accuracy and on external circuit operation, the meter can be switched to different ranges; the current reading should be the same and circuit operation should not be affected if burden voltage is not a problem. If this voltage is significant it can be reduced (also reducing the inherent accuracy and precision of the measurement) by using a higher current range. Alternating current sensing Since the basic indicator system in either an analog or digital meter responds to DC only, a multimeter includes an AC to DC conversion circuit for making alternating current measurements. Basic meters utilize a rectifier circuit to measure the average or peak absolute value of the voltage, but are calibrated to show the calculated root mean square (RMS) value for a sinusoidal waveform; this will give correct readings for alternating current as used in power distribution. User guides for some such meters give correction factors for some simple non-sinusoidal waveforms, to allow the correct root mean square (RMS) equivalent value to be calculated. More expensive multimeters include an AC to DC converter that measures the true RMS value of the waveform within certain limits; the user manual for the meter may indicate the limits of the crest factor and frequency for which the meter calibration is valid. RMS sensing is necessary for measurements on non-sinusoidal periodic waveforms, such as found in audio signals and variable-frequency drives. Alternatives A quality general-purpose electronics digital multimeter is generally considered adequate for measurements at signal levels greater than 1 mV or 1 μA, or below about 100 MΩ; these values are far from the theoretical limits of sensitivity, and are of considerable interest in some circuit design situations. Other instruments, essentially similar but with higher sensitivity, are used for accurate measurements of very small or very large quantities.
These include nanovoltmeters, electrometers (for very low currents, and voltages with very high source resistance, such as 1 TΩ) and picoammeters. Accessories for more typical multimeters permit some of these measurements, as well. Such measurements are limited by available technology, and ultimately by inherent thermal noise.
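Returning to the average-responding calibration described under "Alternating current sensing" above, the scale factor involved and the resulting error on a non-sinusoidal waveform can be computed directly; the square-wave example is illustrative.

```python
import math

# An average-responding meter reads the rectified average scaled by the
# sine-wave form factor (RMS / rectified average for a sine wave).
SINE_FORM_FACTOR = math.pi / (2 * math.sqrt(2))  # ~1.1107

# For a square wave of 1 V peak, rectified average and true RMS are both
# 1 V, so an average-responding meter over-reads by the form factor.
indicated = SINE_FORM_FACTOR * 1.0
print(f"{indicated:.4f} V indicated, {100 * (indicated - 1):.1f}% high")
```

This is why average-responding meters meet their specified accuracy only for purely sinusoidal waveforms.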
Technology
Electronics: General
null
40630
https://en.wikipedia.org/wiki/Beach
Beach
A beach is a landform alongside a body of water which consists of loose particles. The particles composing a beach are typically made from rock, such as sand, gravel, shingle, pebbles, etc., or biological sources, such as mollusc shells or coralline algae. Sediments settle in different densities and structures, depending on the local wave action and weather, creating different textures, colors and gradients or layers of material. Though some beaches form on inland freshwater locations such as lakes and rivers, most beaches are in coastal areas where wave or current action deposits and reworks sediments. Erosion and changes to beach geology happen through natural processes, like wave action and extreme weather events. Where wind conditions are correct, beaches can be backed by coastal dunes which offer protection and regeneration for the beach. However, these natural forces have become more extreme due to climate change, permanently altering beaches at very rapid rates. Some estimates suggest that as much as 50 percent of the earth's sandy beaches could disappear by 2100 due to climate-change-driven sea level rise. Sandy beaches occupy about one third of global coastlines. These beaches are popular for recreation, playing important economic and cultural roles, often driving local tourism industries. To support these uses, some beaches have human-made infrastructure, such as lifeguard posts, changing rooms, showers, shacks and bars. They may also have hospitality venues (such as resorts, camps, hotels, and restaurants) nearby, or housing for both permanent and seasonal residents. Human forces have significantly changed beaches globally: direct impacts include bad construction practices on dunes and coastlines, while indirect human impacts include water pollution, plastic pollution and coastal erosion from sea level rise and climate change. Some coastal management practices are designed to preserve or restore natural beach processes, while some beaches are actively restored through practices like beach nourishment. Wild beaches, also known as undeveloped or undiscovered beaches, are not developed for tourism or recreation. Preserved beaches are important biomes with important roles in aquatic or marine biodiversity, such as for breeding grounds for sea turtles or nesting areas for seabirds or penguins. Preserved beaches and their associated dunes are important for protection from extreme weather for inland ecosystems and human infrastructure. Location and profile Although the seashore is most commonly associated with the word beach, beaches are also found by lakes and alongside large rivers. Beach may refer to: small systems where rock material moves onshore, offshore, or alongshore by the forces of waves and currents; or geological units of considerable size. The former are described in detail below; the larger geological units are discussed elsewhere under bars. There are several conspicuous parts to a beach that relate to the processes that form and shape it. The part mostly above water (depending upon tide), and more or less actively influenced by the waves at some point in the tide, is termed the beach berm. The berm is the deposit of material comprising the active shoreline. The berm has a crest (top) and a face, the latter being the slope leading down towards the water from the crest. At the very bottom of the face, there may be a trough, and further seaward one or more longshore bars: slightly raised, underwater embankments formed where the waves first start to break.
The sand deposit may extend well inland from the berm crest, where there may be evidence of one or more older crests (the storm beach) resulting from very large storm waves and beyond the influence of the normal waves. At some point the influence of the waves (even storm waves) on the material comprising the beach stops, and if the particles are small enough (sand size or smaller), winds shape the feature. Where wind is the force distributing the grains inland, the deposit behind the beach becomes a dune. These geomorphic features compose what is called the beach profile. The beach profile changes seasonally due to the change in wave energy experienced during summer and winter months. In temperate areas where summer is characterised by calmer seas and longer periods between breaking wave crests, the beach profile is higher in summer. The gentle wave action during this season tends to transport sediment up the beach towards the berm, where it is deposited and remains while the water recedes. Onshore winds carry it further inland, forming and enhancing dunes. Conversely, the beach profile is lower in the storm season (winter in temperate areas) due to the increased wave energy and the shorter periods between breaking wave crests. Higher energy waves breaking in quick succession tend to mobilise sediment from the shallows, keeping it in suspension, where it is prone to be carried along the beach by longshore currents, or carried out to sea to form longshore bars, especially if the longshore current meets an outflow from a river or flooding stream. The removal of sediment from the beach berm and dune thus decreases the beach profile. If storms coincide with unusually high tides, or with a freak wave event such as a tidal surge or tsunami which causes significant coastal flooding, substantial quantities of material may be eroded from the coastal plain or dunes behind the berm by receding water. This flow may alter the shape of the coastline, enlarge the mouths of rivers and create new deltas at the mouths of streams that had not been powerful enough to overcome longshore movement of sediment. The line between beach and dune is difficult to define in the field. Over any significant period of time, sediment is always being exchanged between them. The drift line (the high point of material deposited by waves) is one potential demarcation. This would be the point at which significant wind movement of sand could occur, since the normal waves do not wet the sand beyond this area. However, the drift line is likely to move inland under assault by storm waves. Formation Beaches are the result of wave action: waves or currents move sand or other loose sediments, of which the beach is made, while these particles are held in suspension. Alternatively, sand may be moved by saltation (a bouncing movement of large particles). Beach materials come from erosion of rocks offshore, as well as from headland erosion and slumping, which produces deposits of scree. A coral reef offshore is a significant source of sand particles. Some species of fish that feed on algae attached to coral outcrops and rocks can create substantial quantities of sand particles over their lifetime as they nibble during feeding, digesting the organic matter, and discarding the rock and coral particles which pass through their digestive tracts. The composition of the beach depends upon the nature and quantity of sediments upstream of the beach, and the speed of flow and turbidity of water and wind.
Sediments are moved by moving water and wind according to their particle size and state of compaction. Particles tend to settle and compact in still water. Once compacted, they are more resistant to erosion. Established vegetation (especially species with complex network root systems) will resist erosion by slowing the fluid flow at the surface layer. When affected by moving water or wind, particles that are eroded and held in suspension will increase the erosive power of the fluid that holds them by increasing the average density, viscosity, and volume of the moving fluid. Coastlines facing very energetic wind and wave systems will tend to hold only large rocks, as smaller particles will be held in suspension in the turbid water column and carried to calmer areas by longshore currents and tides. Coastlines that are protected from waves and winds will tend to allow finer sediments such as clay and mud to precipitate, creating mud flats and mangrove forests. The shape of a beach depends on whether the waves are constructive or destructive, and whether the material is sand or shingle. Waves are constructive if the period between their wave crests is long enough for the breaking water to recede and the sediment to settle before the succeeding wave arrives and breaks. Fine sediment transported from lower down the beach profile will compact if the receding water percolates or soaks into the beach. Compacted sediment is more resistant to movement by turbulent water from succeeding waves. Conversely, waves are destructive if the period between the wave crests is short. Sediment that remains in suspension when the following wave crest arrives will not be able to settle and compact and will be more susceptible to erosion by longshore currents and receding tides. The nature of sediments found on a beach tends to indicate the energy of the waves and wind in the locality. Constructive waves move material up the beach while destructive waves move the material down the beach. During seasons when destructive waves are prevalent, the shallows will carry an increased load of sediment and organic matter in suspension. On sandy beaches, the turbulent backwash of destructive waves removes material forming a gently sloping beach. On pebble and shingle beaches the swash is dissipated more quickly because the large particle size allows greater percolation, thereby reducing the power of the backwash, and the beach remains steep. Compacted fine sediments will form a smooth beach surface that resists wind and water erosion. During hot calm seasons, a crust may form on the surface of ocean beaches as the heat of the sun evaporates the water, leaving the salt which crystallises around the sand particles. This crust forms an additional protective layer that resists wind erosion unless disturbed by animals or dissolved by the advancing tide. Cusps and horns form where incoming waves divide, depositing sand as horns and scouring out sand to form cusps. This forms the uneven face on some sand shorelines. White sand beaches look white because the quartz or eroded limestone in the sand reflects or scatters sunlight without significantly absorbing any colors. Sand colors The composition of the sand varies depending on the local minerals and geology. Some of the types of sand found in beaches around the world are: White sand: Mostly made of quartz and limestone, it can also contain other minerals like feldspar and gypsum.
Light-colored sand: This sand gets its color from quartz and iron, and is the most common sand color in Southern Europe and other regions of the Mediterranean Basin, such as Tunisia. Tropical white sand: On tropical islands, the sand is composed of calcium carbonate from the shells and skeletons of marine organisms, like corals and mollusks, as found in Aruba. Pink coral sand: Like the above, it is composed of calcium carbonate and gets its pink hue from fragments of coral, such as in Bermuda and the Bahama Islands. Black sand: Black sand is composed of volcanic rock, like basalt and obsidian, which give it its gray-black color. Hawaii's Punaluu Beach, Madeira's Praia Formosa and Fuerteventura's Ajuy beach are examples of this type of sand. Red sand: This kind of sand is created by the oxidation of iron from volcanic rocks. Santorini's Kokkini Beach or the beaches on Prince Edward Island in Canada are examples of this kind of sand. Orange sand: Orange sand is high in iron. It can also be a combination of orange limestone, crushed shells, and volcanic deposits. Ramla Bay in Gozo, Malta or Porto Ferro in Sardinia are examples of each, respectively. Green sand: In this kind of sand, the mineral olivine has been separated from other volcanic fragments by erosive forces. A famous example is Hawaii's Papakolea Beach, which has sand containing basalt and coral fragments. Olivine beaches have a high potential for carbon sequestration, and artificial greensand beaches are being explored for this process by Project Vesta. Erosion and accretion Natural erosion and accretion Causes Beaches are changed in shape chiefly by the movement of water and wind. Any weather event that is associated with turbid or fast-flowing water or high winds will erode exposed beaches. Longshore currents will tend to replenish beach sediments and repair storm damage. Tidal waterways generally change the shape of their adjacent beaches by small degrees with every tidal cycle. Over time these changes can become substantial, leading to significant changes in the size and location of the beach. Effects on flora Changes in the shape of the beach may undermine the roots of large trees and other flora. Many beach-adapted species (such as coconut palms) have a fine root system and large root ball, which tends to withstand wave and wind action and stabilizes beaches better than trees with a smaller root ball. Effects on adjacent land Erosion of beaches can expose less resilient soils and rocks to wind and wave action, leading to undermining of coastal headlands and eventually resulting in catastrophic collapse of large quantities of overburden into the shallows. This material may be distributed along the beach front, leading to a change in the habitat as sea grasses and corals in the shallows may be buried or deprived of light and nutrients. Human-made erosion and accretion Coastal areas settled by man inevitably become subject to the effects of human-made structures and processes. Over long periods of time, these influences may substantially alter the shape of the coastline, and the character of the beach. Destruction of flora Beachfront flora plays a major role in stabilizing the foredunes and preventing beach head erosion and inland movement of dunes. If flora with network root systems (creepers, grasses, and palms) are able to become established, they provide an effective coastal defense as they trap sand particles and rainwater and enrich the surface layer of the dunes, allowing other plant species to become established.
They also protect the berm from erosion by high winds, freak waves and subsiding floodwaters. Over long periods of time, well-stabilized foreshore areas will tend to accrete, while unstabilized foreshores will tend to erode, leading to substantial changes in the shape of the coastline. These changes usually occur over periods of many years. Freak wave events such as tsunamis, tidal waves, and storm surges may substantially alter the shape, profile and location of a beach within hours. Destruction of flora on the berm by the use of herbicides, excessive pedestrian or vehicle traffic, or disruption to freshwater flows may lead to erosion of the berm and dunes. While the destruction of flora may be a gradual process that is imperceptible to regular beach users, it often becomes immediately apparent after storms associated with high winds and freak wave events that can rapidly move large volumes of exposed and unstable sand, depositing them further inland, or carrying them out into the permanent water, forming offshore bars or lagoons or increasing the area of the beach exposed at low tide. Large and rapid movements of exposed sand can bury and smother flora in adjacent areas, aggravating the loss of habitat for fauna and enlarging the area of instability. If there is an adequate supply of sand, and weather conditions do not allow vegetation to recover and stabilize the sediment, wind-blown sand can continue to advance, engulfing and permanently altering downwind landscapes. Sediment moved by waves or receding floodwaters can be deposited in coastal shallows, engulfing reed beds and changing the character of underwater flora and fauna in the coastal shallows. Burning or clearance of vegetation on the land adjacent to the beach head, for farming and residential development, changes the surface wind patterns and exposes the surface of the beach to wind erosion. Farming and residential development are also commonly associated with changes in local surface water flows. If these flows are concentrated in stormwater drains emptying onto the beach head, they may erode the beach, creating a lagoon or delta. Dense vegetation tends to absorb rainfall, reducing the speed of runoff and releasing it over longer periods of time. Destruction by burning or clearance of the natural vegetation tends to increase the speed and erosive power of runoff from rainfall. This runoff will tend to carry more silt and organic matter from the land onto the beach and into the sea. If the flow is constant, runoff from cleared land arriving at the beach head will tend to deposit this material into the sand, changing its color, odor and fauna. Creation of beach access points The concentration of pedestrian and vehicular traffic accessing the beach for recreational purposes may cause increased erosion at the access points if measures are not taken to stabilize the beach surface above the high-water mark. Recognition of the dangers of loss of beachfront flora has caused many local authorities responsible for managing coastal areas to restrict beach access points by physical structures or legal sanctions, and fence off foredunes in an effort to protect the flora. These measures are often associated with the construction of structures at these access points to allow traffic to pass over or through the dunes without causing further damage. Concentration of runoff Beaches provide a filter for runoff from the coastal plain.
If the runoff is naturally dispersed along the beach, waterborne silt and organic matter will be retained on the land and will feed the flora in the coastal area. Runoff that is dispersed along the beach will tend to percolate through the beach and may emerge from the beach at low tide. The retention of the freshwater may also help to maintain underground water reserves and will resist salt water incursion. If the surface flow of the runoff is diverted and concentrated by drains that create constant flows over the beach above the sea or river level, the beach will be eroded and ultimately form an inlet unless longshore flows deposit sediments to repair the breach. Once eroded, an inlet may allow tidal inflows of salt water to pollute areas inland from the beach and may also affect the quality of underground water supplies and the height of the water table. Deprivation of runoff Some flora naturally occurring on the beach head requires freshwater runoff from the land. Diversion of freshwater runoff into drains may deprive these plants of their water supplies and allow sea water incursion, increasing the saltiness of the groundwater. Species that are not able to survive in salt water may die and be replaced by mangroves or other species adapted to salty environments. Inappropriate beach nourishment Beach nourishment is the importing and deposition of sand or other sediments in an effort to restore a beach that has been damaged by erosion. Beach nourishment often involves excavation of sediments from riverbeds or sand quarries. This excavated sediment may be substantially different in size and appearance to the naturally occurring beach sand. In extreme cases, beach nourishment may involve placement of large pebbles or rocks in an effort to permanently restore a shoreline subject to constant erosion and loss of foreshore. This is often required where the flow of new sediment caused by the longshore current has been disrupted by construction of harbors, breakwaters, causeways or boat ramps, creating new current flows that scour the sand from behind these structures and deprive the beach of restorative sediments. If the causes of the erosion are not addressed, beach nourishment can become a necessary and permanent feature of beach maintenance. During beach nourishment activities, care must be taken to place new sediments so that they compact and stabilize before aggressive wave or wind action can erode them. Material that is concentrated too far down the beach may form a temporary groyne that will encourage scouring behind it. Sediments that are too fine or too light may be eroded before they have compacted or been integrated into the established vegetation. Foreign unwashed sediments may introduce flora or fauna that are not usually found in that locality. Brighton Beach, on the south coast of England, is a shingle beach that has been nourished with very large pebbles in an effort to withstand the erosion of the upper area of the beach. These large pebbles made the beach unwelcoming for pedestrians for a period of time until natural processes integrated the naturally occurring shingle into the pebble base. Use for recreation History Even in Roman times, wealthy people spent their free time on the coast. They also built large villa complexes with bathing facilities (so-called maritime villas) in particularly beautiful locations. Excavations of Roman architecture can still be found today, for example on the Amalfi Coast near Naples and in Barcola in Trieste.
The development of the beach as a popular leisure resort from the mid-19th century was the first manifestation of what is now the global tourist industry. The first seaside resorts were opened in the 18th century for the aristocracy, who began to frequent the seaside as well as the then fashionable spa towns, for recreation and health. One of the earliest such seaside resorts was Scarborough in Yorkshire during the 1720s; it had been a fashionable spa town since a stream of acidic water was discovered running from one of the cliffs to the south of the town in the 17th century. The first rolling bathing machines were introduced by 1735. The opening of the resort in Brighton and its reception of royal patronage from King George IV extended the seaside as a resort for health and pleasure to the much larger London market, and the beach became a centre for upper-class pleasure and frivolity. This trend was praised and artistically elevated by the new romantic ideal of the picturesque landscape; Jane Austen's unfinished novel Sanditon is an example of that. Later, Queen Victoria's long-standing patronage of the Isle of Wight and Ramsgate in Kent ensured that a seaside residence was considered a highly fashionable possession for those wealthy enough to afford more than one home. Seaside resorts for the working class The extension of this form of leisure to the middle and working classes began with the development of the railways in the 1840s, which offered cheap fares to fast-growing resort towns. In particular, the completion of a branch line to the small seaside town of Blackpool from Poulton led to a sustained economic and demographic boom. A sudden influx of visitors, arriving by rail, led entrepreneurs to build accommodation and create new attractions, leading to more visitors and a rapid cycle of growth throughout the 1850s and 1860s. The growth was intensified by the practice among the Lancashire cotton mill owners of closing the factories for a week every year to service and repair machinery. These became known as wakes weeks. Each town's mills would close for a different week, allowing Blackpool to manage a steady and reliable stream of visitors over a prolonged period in the summer. A prominent feature of the resort was the promenade and the pleasure piers, where an eclectic variety of performances vied for the people's attention. In 1863, the North Pier in Blackpool was completed, rapidly becoming a centre of attraction for upper-class visitors. Central Pier was completed in 1868, with a theatre and a large open-air dance floor. Many of the popular beach resorts were equipped with bathing machines, because even the all-covering beachwear of the period was considered immodest. By the end of the century the English coastline had over 100 large resort towns, some with populations exceeding 50,000. Expansion around the world The development of the seaside resort abroad was stimulated by the well-developed English love of the beach. The French Riviera alongside the Mediterranean had already become a popular destination for the British upper class by the end of the 18th century. In 1864, the first railway to Nice was completed, making the Riviera accessible to visitors from all over Europe. By 1874, residents of foreign enclaves in Nice, most of whom were British, numbered 25,000. The coastline became renowned for attracting the royalty of Europe, including Queen Victoria and King Edward VII.
Continental European attitudes towards gambling and nakedness tended to be more lax than in Britain, so British and French entrepreneurs were quick to exploit the possibilities. In 1863, Charles III, Prince of Monaco, and François Blanc, a French businessman, arranged for steamships and carriages to take visitors from Nice to Monaco, where large luxury hotels, gardens and casinos were built. The place was renamed Monte Carlo. Commercial sea bathing spread to the United States and parts of the British Empire by the end of the 19th century. The first public beach in the United States was Revere Beach, which opened in 1896. During that same time, Henry Flagler developed the Florida East Coast Railway, which linked the coastal sea resorts developing at St. Augustine, FL and Miami Beach, FL, to winter travelers from the northern United States and Canada on the East Coast Railway. By the early 20th century surfing was developed in Hawaii and Australia; it spread to southern California by the early 1960s. By the 1970s cheap and affordable air travel led to the growth of a truly global tourism market which benefited areas such as the Mediterranean, Australia, South Africa, and the coastal Sun Belt regions of the United States. Today Beaches can be popular on warm sunny days. In the Victorian era, many popular beach resorts were equipped with bathing machines because even the all-covering beachwear of the period was considered immodest. This social standard still prevails in many Muslim countries. At the other end of the spectrum are topfree beaches and nude beaches where clothing is optional or not allowed. In most countries social norms are significantly different on a beach in hot weather, compared to adjacent areas where similar behavior might not be tolerated and might even be prosecuted. In more than thirty countries across Europe, South Africa, New Zealand, Canada, Costa Rica, South America and the Caribbean, the best recreational beaches are awarded Blue Flag status, based on such criteria as water quality and safety provision. Subsequent loss of this status can have a severe effect on tourism revenues. Beaches are often dumping grounds for waste and litter, necessitating the use of beach cleaners and other cleanup projects. More significantly, in many underdeveloped countries beaches serve as discharge zones for untreated sewage; even in developed countries beaches are occasionally closed because of sanitary sewer overflows. In these cases of marine discharge, waterborne disease from fecal pathogens and contamination of certain marine species are a frequent outcome. Artificial beaches Some beaches are artificial; they are either permanent or temporary (for examples, see Copenhagen, Hong Kong, Manila, Monaco, Nottingham, Paris, Rotterdam, Singapore, Tianjin, and Toronto). The soothing qualities of a beach and the pleasant environment offered to the beachgoer are replicated in artificial beaches, such as "beach style" pools with zero-depth entry and wave pools that recreate the natural waves pounding upon a beach. In a zero-depth entry pool, the bottom surface slopes gradually from above water down to depth. Another approach involves so-called urban beaches, a form of public park becoming common in large cities. Urban beaches attempt to mimic natural beaches with fountains that imitate surf and mask city noises, and in some cases can be used as a play park. Beach nourishment involves pumping sand onto beaches to improve their health.
Beach nourishment is common for major beach cities around the world; however, the beaches that have been nourished can still appear quite natural, and often many visitors are unaware of the works undertaken to support the health of the beach. Such beaches are often not recognized by consumers as artificial. A famous example of beach nourishment came with the replenishment of Waikīkī Beach in Honolulu, Hawaii, where sand from Manhattan Beach, California was transported via ship and barge throughout most of the 20th century in order to combat Waikiki's erosion problems. The Surfrider Foundation has debated the merits of artificial reefs, with members torn between their desire to support natural coastal environments and opportunities to enhance the quality of surfing waves. Similar debates surround beach nourishment and snow cannons in sensitive environments. Restrictions on access Public access to beaches is restricted in some parts of the world. For example, most beaches on the Jersey Shore are restricted to people who can purchase beach tags. Many beaches in Indonesia, both private and public, require admission fees. Some beaches also restrict dogs for some periods of the year. Private beaches Some jurisdictions make all beaches public by law. Some allow private ownership (for example by owners of abutting land or neighborhood associations) to the mean high tide line or mean low tide line. In some jurisdictions, the public has a general easement to use privately owned beach land for certain purposes. Signs are sometimes posted where public access ends. In some places, such as Florida, it is not always clear which parts of a beach are public or private. Public beaches The first public beach in the United States opened on 12 July 1896, in the town of Revere, Massachusetts, with over 45,000 people attending on the opening day. The beach was run by the Metropolitan Parks Commission, and the new beach had a bandstand, public bathhouses, and shade pavilions, and was lined by a broad boulevard that ran along the beach. Public access to beaches is protected by law in the U.S. state of Oregon, thanks to a 1967 state law, the Oregon Beach Bill, which guaranteed public access from the Columbia River to the California state line, "so that the public may have the free and uninterrupted use". Public access to beaches in Hawaii (other than those owned by the U.S. federal government) is also protected by state law. Access design Beach access is an important consideration where substantial numbers of pedestrians or vehicles require access to the beach. Allowing random access across delicate foredunes is seldom considered good practice, as it is likely to lead to destruction of flora and consequent erosion of the foredunes. A well-designed beach access should: provide a durable surface able to withstand the traffic flow; aesthetically complement the surrounding structures and natural landforms; be located in an area that is convenient for users and consistent with safe traffic flows; be scaled to match the traffic flow (i.e. wide and strong enough to safely carry the size and quantity of pedestrians and vehicles intended to use it); be maintained appropriately; and be signed and lit to discourage beach users from creating their own alternative crossings that may be more destructive to the beachhead. Concrete ramp or steps A concrete ramp should follow the natural profile of the beach to prevent it from changing the normal flow of waves, longshore currents, water and wind.
A ramp that is below the beach profile will tend to become buried and cease to provide a good surface for vehicular traffic. A ramp or stair that protrudes above the beach profile will tend to disrupt longshore currents, creating deposits in front of the ramp and scouring behind it. Concrete ramps are the most expensive vehicular beach accesses to construct, requiring the use of quick-drying concrete or a cofferdam to protect them from tidal water during the concrete curing process. Concrete is favored where traffic flows are heavy and access is required by vehicles that are not adapted to soft sand (e.g. road registered passenger vehicles and boat trailers). Concrete stairs are commonly favored on beaches adjacent to population centers where beach users may arrive on the beach in street shoes, or where the foreshore roadway is substantially higher than the beach head and a ramp would be too steep for safe use by pedestrians. A composite stair ramp may incorporate a central or side stair with one or more ramps allowing pedestrians to lead buggies or small boat dollies onto the beach without the aid of a powered vehicle or winch. Concrete ramps and steps should be maintained to prevent a buildup of moss or algae that may make their wet surfaces slippery and dangerous to pedestrians and vehicles. Corduroy (beach ladder) A corduroy road or beach ladder (or board and chain) is an array of planks (usually hardwood or treated timber) laid close together and perpendicular to the direction of traffic flow, and secured at each end by a chain or cable to form a pathway or ramp over the sand dune. Corduroys are cheap and easy to construct and quick to deploy or relocate. They are commonly used for pedestrian access paths and light duty vehicular access ways. They naturally conform to the shape of the underlying beach or dune profile, and adjust well to moderate erosion, especially longshore drift. However, they can cease to be an effective access surface if they become buried or undermined by erosion from surface runoff coming from the beach head. If the corduroy is not wide enough for vehicles using it, the sediment on either side may be displaced, creating a spoon drain that accelerates surface runoff and can quickly lead to serious erosion. Significant erosion of the sediment beside and under the corduroy can render it completely ineffective and make it dangerous to pedestrian users who may fall between the planks. Fabric ramp Fabric ramps are commonly employed by the military for temporary purposes where the underlying sediment is stable and hard enough to support the weight of the traffic. A sheet of porous fabric is laid over the sand to stabilize the surface and prevent vehicles from bogging. Fabric ramps usually cease to be useful after one tidal cycle, as they are easily washed away or buried in sediment. Foliage ramp A foliage ramp is formed by planting resilient, hardy plant species such as grasses over a well-formed sediment ramp. The plants may be supported while they become established by placement of layers of mesh, netting, or coarse organic material such as vines or branches. This type of ramp is ideally suited for intermittent use by vehicles with a low wheel loading such as dune buggies or agricultural vehicles with large tyres. A foliage ramp should require minimal maintenance if initially formed to follow the beach profile, and not overused.
Gravel ramp A gravel ramp is formed by excavating the underlying loose sediment and filling the excavation with layers of gravel of graduated sizes as defined by John Loudon McAdam. The gravel is compacted to form a solid surface according to the needs of the traffic. Gravel ramps are less expensive to construct than concrete ramps and are able to carry heavy road traffic provided the excavation is deep enough to reach solid subsoil. Gravel ramps are subject to erosion by water. If the edges are retained with boards or walls and the profile matches the surrounding beach profile, a gravel ramp may become more stable as finer sediments are deposited by percolating water. Longest beaches Amongst the world's longest beaches are: Eighty Mile Beach in north-west Australia; Praia do Cassino in Brazil; Padre Island beach in the Gulf of Mexico, Texas; Ninety Mile Beach in Victoria, Australia; Cox's Bazar in Bangladesh (unbroken); Naikoon Provincial Park in the north-east of Haida Gwaii, Canada; Playa Novillera beach in Mexico; 90 Mile Beach in New Zealand; Fraser Island beach in Queensland, Australia; Troia-Sines Beach in Portugal; the Jersey Shore (204 km/127 miles); and Long Beach, Washington. Wildlife A beach is an unstable environment that exposes plants and animals to changeable and potentially harsh conditions. Some animals burrow into the sand and feed on material deposited by the waves. Crabs, insects and shorebirds feed on these beach dwellers. The endangered piping plover and some tern species rely on beaches for nesting. Sea turtles also bury their eggs in ocean beaches. Seagrasses and other beach plants grow on undisturbed areas of the beach and dunes. Ocean beaches are habitats with organisms adapted to salt spray, tidal overwash, and shifting sands. Some of these organisms are found only on beaches. Examples of these beach organisms in the southeast US include plants like sea oats, sea rocket, beach elder, beach morning glory (Ipomoea pes-caprae), and beach peanut, and animals such as mole crabs (Hippoidea), coquina clams (Donax), ghost crabs, and white beach tiger beetles.
Physical sciences
Fluvial landforms
null
40650
https://en.wikipedia.org/wiki/Scale%20%28zoology%29
Scale (zoology)
In zoology, a scale is a small rigid plate that grows out of an animal's skin to provide protection. In lepidopterans (butterflies and moths), scales are plates on the surface of the insect wing, and provide coloration. Scales are quite common and have evolved multiple times through convergent evolution, with varying structure and function. Scales are generally classified as part of an organism's integumentary system. There are various types of scales according to the shape and class of an animal. Fish scales Fish scales are dermally derived, specifically in the mesoderm. This fact distinguishes them from reptile scales paleontologically. Genetically, the same genes involved in tooth and hair development in mammals are also involved in scale development. Cosmoid scales True cosmoid scales can only be found on the Sarcopterygians. The inner layer of the scale is made of lamellar bone. On top of this lies a layer of spongy or vascular bone and then a layer of dentine-like material called cosmine. The upper surface is keratin. The coelacanth has modified cosmoid scales that lack cosmine and are thinner than true cosmoid scales. Ganoid scales Ganoid scales can be found on gars (family Lepisosteidae), bichirs, and reedfishes (family Polypteridae). Ganoid scales are similar to cosmoid scales, but a layer of ganoin lies over the cosmine layer and under the enamel. Ganoid scales are diamond-shaped, shiny, and hard. Within the ganoin are guanine compounds, iridescent derivatives of the same guanine found in a DNA molecule. The iridescent property of these chemicals provides the ganoin its shine. Placoid scales Placoid scales are found on cartilaginous fish including sharks and stingrays. These scales, also called denticles, are similar in structure to teeth, and have one median spine and two lateral spines. The ancestors of modern jawed fish, the jawless ostracoderms and later the jawed placoderms, may have had scales with the properties of both placoid and ganoid scales. Leptoid scales Leptoid scales are found on higher-order bony fish. As they grow they add concentric layers. They are arranged so as to overlap in a head-to-tail direction, like roof tiles, allowing a smoother flow of water over the body and therefore reducing drag. They come in two forms: Cycloid scales have a smooth outer edge, and are most common on fish with soft fin rays, such as salmon and carp. Ctenoid scales have a toothed outer edge, and are usually found on fish with spiny fin rays, such as bass and crappie. Reptilian scales Reptile scale types include: cycloid, granular (which appear bumpy), and keeled (which have a center ridge). Scales usually vary in size: the stouter, larger scales cover parts that are often exposed to physical stress (usually the feet, tail and head), while scales are small around the joints for flexibility. Most snakes have extra broad scales on the belly, each scale covering the belly from side to side. The scales of all reptiles have an epidermal component (what one sees on the surface), but many reptiles, such as crocodilians and turtles, have osteoderms underlying the epidermal scale. Such scales are more properly termed scutes. Snakes, tuataras and many lizards lack osteoderms. All reptilian scales have a dermal papilla underlying the epidermal part, and it is there that the osteoderms, if present, would be formed. Many reptiles possess large scales not supported by osteoderms known as feature scales.
The green iguana possesses large feature scales on the ventral sides of its neck, and dorsal spines not supported by osteoderms. Many extinct non-avian dinosaurs such as Carnotaurus and Brachylophosaurus are known to possess feature scales from skin impressions. Avian scales Birds' scales are found mainly on the toes and metatarsus, but may be found further up on the ankle in some birds. The scales and scutes of birds were thought to be homologous to those of reptiles, but are now agreed to have evolved independently, being degenerate feathers. The carcharodontosaurid theropod dinosaur Concavenator is known to have possessed these feather-derived tarsal scutes. Mammalian scales An example of a scaled mammal is the pangolin. Its scales are made of keratin and are used for protection, similar to an armadillo's armor. They evolved convergently, being unrelated to the scales of mammals' distant reptile-like ancestors (since therapsids lost their scales), except that they use a similar gene. On the other hand, the musky rat-kangaroo has scales on its feet and tail. The precise nature of its purported scales has not been studied in detail, but they appear to be structurally different from pangolin scales. Anomalures also have scales on their tail undersides. Foot pad epidermal tissues in most mammal species have been compared to the scales of other vertebrates. They are likely derived from cornification processes or stunted fur, much like avian reticulae are derived from stunted feathers. Arthropod scales Butterflies and moths, the order Lepidoptera (Greek for "scale-winged"), have membranous wings covered in delicate, powdery scales, which are modified setae. Each scale consists of a series of tiny stacked platelets of organic material; butterflies tend to have the scales broad and flattened, while moths tend to have the scales narrower and more hairlike. Scales are usually pigmented, but some types of scales are iridescent, without pigments: because the thickness of the platelets is on the same order as the wavelength of visible light, the platelets lead to structural coloration and iridescence through the physical phenomenon described as thin-film optics (see the sketch at the end of this section). The most common color produced in this fashion is blue, such as in the Morpho butterflies. Some types of spiders also have scales. Spider scales are flattened setae that overlay the surface of the cuticle. They come in a wide variety of shapes, sizes, and colors. At least 13 different spider families are known to possess cuticular scales, although they have only been well described for jumping spiders (Salticidae) and lynx spiders (Oxyopidae). Some crustaceans such as Glyptonotus antarcticus have knobbly scales. Some crayfish have been shown to use antennal scales that are activated in rapid response movements.
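The thin-film mechanism described above can be illustrated with a short calculation. The following Python sketch assumes an idealized single platelet acting as a thin film with a half-wave phase shift at its top surface; the refractive index and thickness are illustrative values for a chitin platelet, not measurements of any particular species.

```python
# A minimal sketch of thin-film structural coloration: constructive
# reflection occurs when 2 * n * d = (m + 1/2) * lambda, assuming a
# half-wave phase shift at the top interface of a single platelet.

N_FILM = 1.56    # assumed refractive index of chitin (illustrative)
D_FILM = 75e-9   # assumed platelet thickness in metres (illustrative)

def reflection_maxima(n: float, d: float, order_max: int = 5) -> list[float]:
    """Wavelengths (m) of constructive reflection for orders m = 0..order_max-1."""
    return [2 * n * d / (m + 0.5) for m in range(order_max)]

visible = [w for w in reflection_maxima(N_FILM, D_FILM) if 380e-9 <= w <= 750e-9]
for w in visible:
    print(f"reflection peak at {w * 1e9:.0f} nm")  # ~468 nm: blue, as in Morpho
```

With these assumed values only the zeroth-order peak falls in the visible range, near 468 nm, which is consistent with the blue hue the article attributes to Morpho butterflies.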
Biology and health sciences
Integumentary system
null
40688
https://en.wikipedia.org/wiki/Baud
Baud
In telecommunications and electronics, baud (symbol: Bd) is a common unit of measurement of symbol rate, which is one of the components that determine the speed of communication over a data channel. It is the unit for symbol rate or modulation rate in symbols per second or pulses per second. It is the number of distinct symbol changes (signalling events) made to the transmission medium per second in a digitally modulated signal or a line code. Baud is related to gross bit rate, which can be expressed in bits per second (bit/s). If there are precisely two symbols in the system (typically 0 and 1), then baud and bits per second are equivalent. Naming The baud unit is named after Émile Baudot, the inventor of the Baudot code for telegraphy, and is represented according to the rules for SI units. That is, the first letter of its symbol is uppercase (Bd), but when the unit is spelled out, it should be written in lowercase (baud) except when it begins a sentence or is capitalized for another reason, such as in title case. It was defined by the CCITT (now the ITU) in November 1926. The earlier standard had been the number of words per minute, which was a less robust measure since word length can vary. Definitions The symbol duration time, also known as the unit interval, can be directly measured as the time between transitions by looking at an eye diagram of the signal on an oscilloscope. The symbol duration time Ts can be calculated as Ts = 1/fs, where fs is the symbol rate. The terms baud rate and bit rate are often confused, which can lead to miscommunication and ambiguity. Example: Communication at the baud rate 1000 Bd means communication by means of sending 1000 symbols per second. In the case of a modem, this corresponds to 1000 tones per second; similarly, in the case of a line code, this corresponds to 1000 pulses per second. The symbol duration time is 1/1000 second (that is, 1 millisecond). The baud is scaled using standard metric prefixes, so that for example 1 kBd (kilobaud) = 1000 Bd 1 MBd (megabaud) = 1000 kBd 1 GBd (gigabaud) = 1000 MBd Relationship to gross bit rate The symbol rate is related to gross bit rate expressed in bit/s. The term baud has sometimes incorrectly been used to mean bit rate, since these rates are the same in old modems as well as in the simplest digital communication links using only one bit per symbol, such that binary digit "0" is represented by one symbol, and binary digit "1" by another symbol. In more advanced modems and data transmission techniques, a symbol may have more than two states, so it may represent more than one bit. A bit (binary digit) always represents one of two states. If N bits are conveyed per symbol, and the gross bit rate is R, inclusive of channel coding overhead, the symbol rate fs can be calculated as fs = R/N. By taking the information per pulse N in bit/pulse to be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley constructed a measure of the gross bit rate R as R = fs N, where N = ⌈log2(M)⌉. Here, ⌈x⌉ denotes the ceiling function of x: for any real number x greater than zero, the ceiling function rounds x up to the nearest natural number (e.g. ⌈2.11⌉ = 3). In that case, M = 2^N different symbols are used. In a modem, these may be time-limited sinewave tones with unique combinations of amplitude, phase and/or frequency. For example, in a 64QAM modem, M = 64, and so the bit rate is N = log2(64) = 6 times the baud rate. In a line code, these may be M different voltage levels. The ratio is not necessarily an integer; in 4B3T coding, the bit rate is 4/3 of the baud rate.
(A typical basic rate interface with a 160 kbit/s raw data rate operates at 120 kBd.) Codes with many symbols, and thus a bit rate higher than the symbol rate, are most useful on channels such as telephone lines with a limited bandwidth but a high signal-to-noise ratio within that bandwidth. In other applications, the bit rate is less than the symbol rate. Eight-to-fourteen modulation as used on audio CDs has a bit rate 8/17 of the baud rate.
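As an illustration of the relationships above, here is a minimal Python sketch of the Hartley relation N = ⌈log2(M)⌉ and the conversion between baud and gross bit rate; the function names and the 2400 Bd line are illustrative assumptions, not a standard API.

```python
import math

def bits_per_symbol(m_symbols: int) -> int:
    """Hartley's N = ceil(log2(M)): bits conveyed by one of M distinct symbols."""
    return math.ceil(math.log2(m_symbols))

def gross_bit_rate(baud: float, m_symbols: int) -> float:
    """Gross bit rate R = fs * N for a symbol rate fs given in baud."""
    return baud * bits_per_symbol(m_symbols)

# A 64QAM modem conveys N = log2(64) = 6 bits per symbol, so a hypothetical
# 2400 Bd line would carry a gross bit rate of 6 times the baud rate.
print(gross_bit_rate(2400, 64))  # 14400 bit/s
print(1 / 1000)                  # 0.001 s: symbol duration at 1000 Bd, as above
```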
Physical sciences
Information
Basics and measurement
40814
https://en.wikipedia.org/wiki/Branch
Branch
A branch, also called a ramus in botany, is a stem that grows off from another stem, or when structures like veins in leaves are divided into smaller veins. History and etymology In Old English, there are numerous words for branch. There are also numerous descriptive formations, including one meaning something that has bled, or 'bloomed', out; one meaning literally 'little bough'; one meaning literally 'on growth'; and one meaning literally 'offspringing'. Numerous other words for twigs and boughs abound, including one which still survives as the -toe in mistletoe. Latin words for branch are ramus and cladus. The latter term is an affix found in other modern words such as cladodont (prehistoric sharks with branched teeth), cladode (flattened leaf-like branches), or cladogram (a branched diagram showing relations among organisms). Woody branches Large branches are known as boughs and small branches are known as twigs. The term twig usually refers to a terminus, while bough refers only to branches coming directly from the trunk. Due to the broad range of tree species, branches and twigs can be found in many different shapes and sizes. While branches can be nearly horizontal, vertical, or diagonal, the majority of trees have upwardly diagonal branches. A number of mathematical properties are associated with tree branchings; they are natural examples of fractal patterns, and, as observed by Leonardo da Vinci, their cross-sectional areas closely follow the da Vinci branching rule (see the sketch below). Specific terms A bough can also be called a limb or arm, and though these are arguably metaphors, both are widely accepted synonyms for bough. A crotch or fork is an area where a trunk splits into two or more boughs. A twig is frequently referred to as a sprig as well, especially when it has been plucked. Other words for twig include branchlet, spray, and surcle, as well as the technical terms surculus and ramulus. Branches found under larger branches can be called underbranches. Some branches from specific trees have their own names, such as osiers and withes or withies, which come from willows. Certain trees naturally collocate with particular words in English: holly and mistletoe, for example, usually employ the phrase "sprig of" (as in, a "sprig of mistletoe"). Similarly, the branch of a cherry tree is generally referred to as a "cherry branch", while other such formations (e.g., "acacia branch" or "orange branch") carry no such alliance. A good example of this versatility is oak, which could be referred to variously as an "oak branch", an "oaken branch", a "branch of oak", or the "branch of an oak tree". Once a branch has been cut or in any other way removed from its source, it is most commonly referred to as a stick, and a stick employed for some purpose (such as walking, spanking, or beating) is often called a rod. Thin, flexible sticks are called switches, wands, shrags, or vimina (singular vimen).
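The da Vinci branching rule mentioned under woody branches can be illustrated numerically: the cross-sectional area of a trunk or bough roughly equals the summed cross-sectional areas of the branches it splits into, i.e. d_parent² ≈ Σ d_child². The following minimal Python sketch uses made-up diameters, chosen so the rule holds exactly, purely for illustration.

```python
import math

def area(diameter: float) -> float:
    """Cross-sectional area of a roughly circular branch."""
    return math.pi * (diameter / 2) ** 2

parent_d = 0.30          # hypothetical bough diameter in metres
child_ds = [0.24, 0.18]  # hypothetical diameters of its two daughter branches

print(area(parent_d))                  # ~0.0707 m^2
print(sum(area(d) for d in child_ds))  # ~0.0707 m^2: 0.24^2 + 0.18^2 = 0.30^2
```

Real trees only approximate this rule, sometimes with an exponent slightly different from 2, so the equality above should be read as an idealization.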
Biology and health sciences
Plant stem
Biology
40858
https://en.wikipedia.org/wiki/Caesium%20standard
Caesium standard
The caesium standard is a primary frequency standard in which the photon absorption by transitions between the two hyperfine ground states of caesium-133 atoms is used to control the output frequency. The first caesium clock was built by Louis Essen in 1955 at the National Physical Laboratory in the UK and promoted worldwide by Gernot M. R. Winkler of the United States Naval Observatory. Caesium atomic clocks are one of the most accurate time and frequency standards, and serve as the primary standard for the definition of the second in the International System of Units (SI), the modern metric system. By definition, radiation produced by the transition between the two hyperfine ground states of caesium-133 (in the absence of external influences such as the Earth's magnetic field) has a frequency, ΔνCs, of exactly 9,192,631,770 Hz. That value was chosen so that the caesium second equaled, to the limit of measuring ability in 1960 when it was adopted, the existing standard ephemeris second based on the Earth's orbit around the Sun. Because no other measurement involving time had been as precise, the effect of the change was less than the experimental uncertainty of all existing measurements. While the second is the only base unit to be explicitly defined in terms of the caesium standard, the majority of SI units have definitions that mention either the second, or other units defined using the second. Consequently, every base unit except the mole and every named derived unit except the coulomb, ohm, siemens, gray, sievert, radian, and steradian have values that are implicitly defined by the properties of the caesium-133 hyperfine transition radiation. And of these, all but the mole, the coulomb, and the dimensionless radian and steradian are implicitly defined by the general properties of electromagnetic radiation. Technical details The official definition of the second was first given by the BIPM at the 13th General Conference on Weights and Measures in 1967 as: "The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom." At its 1997 meeting the BIPM added to the previous definition the following specification: "This definition refers to a caesium atom at rest at a temperature of 0 K." The BIPM restated this definition in its 26th conference (2018): "The second is defined by taking the fixed numerical value of the caesium frequency ∆νCs, the unperturbed ground-state hyperfine transition frequency of the caesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s–1." The meaning of the preceding definition is as follows. The caesium atom has a ground-state electron configuration [Xe] 6s1 and, consequently, atomic term symbol 2S1/2. This means that there is one unpaired electron and the total electron spin of the atom is 1/2. Moreover, the nucleus of caesium-133 has a nuclear spin equal to 7/2. The simultaneous presence of electron spin and nuclear spin leads, by a mechanism called hyperfine interaction, to a (small) splitting of all energy levels into two sub-levels. One of the sub-levels corresponds to the electron and nuclear spin being parallel (i.e., pointing in the same direction), leading to a total spin F equal to 4; the other sub-level corresponds to anti-parallel electron and nuclear spin (i.e., pointing in opposite directions), leading to a total spin F equal to 3.
In the caesium atom it so happens that the sub-level lowest in energy is the one with F = 3, while the F = 4 sub-level lies energetically slightly above. When the atom is irradiated with electromagnetic radiation having an energy corresponding to the energetic difference between the two sub-levels, the radiation is absorbed and the atom is excited, going from the F = 3 sub-level to the F = 4 one. After a small fraction of a second the atom will re-emit the radiation and return to its ground state. From the definition of the second it follows that the radiation in question has a frequency of exactly 9,192,631,770 Hz, corresponding to a wavelength of about 3.26 cm and therefore belonging to the microwave range. Note that a common confusion involves the conversion from angular frequency (ω) to frequency (f), or vice versa. Angular frequencies are conventionally given in s–1 in scientific literature, but here the units implicitly mean radians per second. In contrast, the unit Hz should be interpreted as cycles per second. The conversion formula is ω = 2πf, which implies that 1 Hz corresponds to an angular frequency of approximately 6.28 radians per second (or 6.28 s–1, where radians is omitted for brevity by convention). Parameters and significance in the second and other SI units Suppose the caesium standard radiation has the parameters:
Velocity: c (the speed of light)
Energy/frequency ratio: h (the Planck constant)
Frequency: Δν = 9,192,631,770 Hz
Time period: T = 1/Δν
Wavelength: λ = c/Δν
Photon energy: E = hΔν
Photon mass equivalent: hΔν/c²
Time and frequency The first set of units defined using the caesium standard were those relating to time, with the second being defined in 1967 as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom", meaning that:
1 second, s, = 9,192,631,770 T
1 hertz, Hz, = 1/s = Δν/9,192,631,770
1 becquerel, Bq, = 1 nuclear decay/s
This also linked the definitions of the derived units relating to force and energy (see below), and of the ampere, whose definition at the time made reference to the newton, to the caesium standard. Before 1967 the SI units of time and frequency were defined using the tropical year, and before 1960 by the length of the mean solar day. Length In 1983, the metre was, indirectly, defined in terms of the caesium standard with the formal definition "The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second." This implied:
1 metre, m, = (1/299,792,458) c s = (9,192,631,770/299,792,458) λ ≈ 30.663 λ
1 radian, rad, = 1 m/m = λ/λ = 1 (dimensionless unit of angle)
1 steradian, sr, = 1 m²/m² = λ²/λ² = 1 (dimensionless unit of solid angle)
Between 1960 and 1983, the metre had been defined by the wavelength of a different transition frequency associated with the krypton-86 atom. This had a much higher frequency and shorter wavelength than the caesium standard, falling inside the visible spectrum. The first definition, used between 1889 and 1960, was by the international prototype metre. Mass, energy, and force Following the 2019 revision of the SI, electromagnetic radiation, in general, was explicitly defined to have the exact parameters: c = 299,792,458 m/s and h = 6.62607015 × 10⁻³⁴ J s. The caesium-133 hyperfine transition radiation was explicitly defined to have frequency Δν = 9,192,631,770 Hz, though the above values for c and Δν were already obviously implicit in the definitions of the metre and second.
Together they imply:
λ = c/Δν ≈ 0.0326 m
E = hΔν = 6.62607015 × 10⁻³⁴ J s × 9,192,631,770 Hz ≈ 6.0911 × 10⁻²⁴ J
photon mass equivalent = E/c² ≈ 6.7772 × 10⁻⁴¹ kg
Notably, the wavelength has a fairly human-sized value of about 3.26 centimetres and the photon energy is surprisingly close to the average molecular kinetic energy per degree of freedom per kelvin. From these it follows that:
1 kilogram, kg, ≈ 1.4755 × 10⁴⁰ photon mass equivalents
1 joule, J, ≈ 1.6417 × 10²³ E
1 watt, W, = 1 J/s ≈ 1.786 × 10¹³ E/T
1 newton, N, = 1 J/m ≈ 5.354 × 10²¹ E/λ
1 pascal, Pa, = 1 N/m² ≈ 5.694 × 10¹⁸ E/λ³
1 gray, Gy, = 1 J/kg
1 sievert, Sv, = the ionizing radiation dose equivalent to 1 gray of gamma rays
Prior to the revision, between 1889 and 2019, the family of metric (and later SI) units relating to mass, force, and energy were somewhat notoriously defined by the mass of the International Prototype of the Kilogram (IPK), a specific object stored at the headquarters of the International Bureau of Weights and Measures in Paris, meaning that any change to the mass of that object would have resulted in a change to the size of the kilogram and of the many other units whose value at the time depended on that of the kilogram. Temperature From 1954 to 2019, the SI temperature scales were defined using the triple point of water and absolute zero. The 2019 revision replaced these with an assigned value for the Boltzmann constant, k, of 1.380649 × 10⁻²³ J/K, implying:
1 kelvin, K, corresponds to a mean energy of k/2 ≈ 6.9032 × 10⁻²⁴ J per degree of freedom ≈ 1.1333 E per degree of freedom
Temperature in degrees Celsius, °C, = temperature in kelvins − 273.15
Amount of substance The mole is an extremely large number of "elementary entities" (i.e. atoms, molecules, ions, etc). From 1969 to 2019, this number was 0.012 × the mass ratio between the IPK and a carbon-12 atom. The 2019 revision simplified this by assigning the Avogadro constant the exact value 6.02214076 × 10²³ elementary entities per mole; thus, uniquely among the base units, the mole maintained its independence from the caesium standard:
1 mole, mol, = 6.02214076 × 10²³ elementary entities
1 katal, kat, = 1 mol/s = 6.02214076 × 10²³ elementary entities per second
Electromagnetic units Prior to the revision, the ampere was defined as the current needed to produce a force between 2 parallel wires 1 m apart of 0.2 μN per metre. The 2019 revision replaced this definition by giving the charge on the electron, e, the exact value 1.602176634 × 10⁻¹⁹ coulombs. Somewhat incongruously, the coulomb is still considered a derived unit and the ampere a base unit, rather than vice versa. In any case, this convention entailed the following relationships between the SI electromagnetic units, the elementary charge, and the caesium-133 hyperfine transition radiation:
1 coulomb, C, ≈ 6.2415 × 10¹⁸ e
1 ampere, or amp, A, = 1 C/s ≈ 6.7897 × 10⁸ e/T
1 volt, V, = 1 J/C ≈ 2.6303 × 10⁴ E/e
1 farad, F, = 1 C/V ≈ 2.3730 × 10¹⁴ e²/E
1 ohm, Ω, = 1 V/A ≈ 3.8740 × 10⁻⁵ h/e²
1 siemens, S, = 1/Ω ≈ 2.5813 × 10⁴ e²/h
1 weber, Wb, = 1 V s ≈ 2.4180 × 10¹⁴ h/e
1 tesla, T, = 1 Wb/m² ≈ 2.572 × 10¹¹ E/(e c λ)
1 henry, H, = 1 Ω s ≈ 3.5612 × 10⁵ h T/e²
Optical units From 1967 to 1979 the SI optical units, the lumen, lux, and candela, were defined using the incandescent glow of platinum at its melting point. After 1979, the candela was defined as the luminous intensity of a monochromatic visible light source of frequency 540 THz (about 58,743 times the caesium frequency) and radiant intensity 1/683 watts per steradian. This linked the definition of the candela to the caesium standard and, until 2019, to the IPK. Unlike the units relating to mass, energy, temperature, amount of substance, and electromagnetism, the optical units were not massively redefined in 2019, though they were indirectly affected since their values depend on that of the watt, and hence of the kilogram.
The frequency used to define the optical units has the parameters:
Frequency: 540 THz
Time period: ≈ 1.852 fs
Wavelength: ≈ 0.555 μm
Photon energy: 540 × 10¹² Hz × 6.62607015 × 10⁻³⁴ J s ≈ 3.578 × 10⁻¹⁹ J
Luminous efficacy: KCD = 683 lm/W
Luminous energy per photon ≈ 3.578 × 10⁻¹⁹ J × 683 lm/W ≈ 2.444 × 10⁻¹⁶ lm s
This implies:
1 lumen, lm, ≈ 4.092 × 10¹⁵ photons per second of 540 THz light
1 candela, cd, = 1 lm/sr
1 lux, lx, = 1 lm/m²
Summary The parameters of the caesium 133 hyperfine transition radiation expressed in SI units are:
Frequency = 9,192,631,770 Hz (exact)
Time period = 1/9,192,631,770 s ≈ 1.0878 × 10⁻¹⁰ s
Wavelength ≈ 0.0326121 m
Photon energy ≈ 6.0911 × 10⁻²⁴ J
Photon mass equivalent ≈ 6.7772 × 10⁻⁴¹ kg
If the seven base units of the SI are expressed explicitly in terms of the SI defining constants, they are:
1 second = 9,192,631,770/Δν
1 metre ≈ 30.663 c/Δν
1 kilogram ≈ 1.4755 × 10⁴⁰ hΔν/c²
1 ampere ≈ 6.7897 × 10⁸ eΔν
1 kelvin ≈ 2.2666 hΔν/k
1 mole = 6.02214076 × 10²³ elementary entities
1 candela ≈ 2.6148 × 10¹⁰ (Δν)² h KCD/sr
Ultimately, 6 of the 7 base units notably have values that depend on that of Δν, which appears far more often than any of the other defining constants.
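As an illustration of the derivations above, the following minimal Python sketch recomputes the caesium radiation's wavelength, photon energy, and mass equivalent from the exact SI defining constants, and also evaluates the h/e² combination underlying the ohm; the variable names are illustrative.

```python
# All constants below are the exact values fixed by the SI; the derived
# quantities follow from lambda = c/f, E = h*f, and m = E/c^2.

DELTA_NU_CS = 9_192_631_770    # caesium hyperfine frequency, Hz (exact)
C = 299_792_458                # speed of light, m/s (exact)
H = 6.626_070_15e-34           # Planck constant, J s (exact)
E_CHARGE = 1.602_176_634e-19   # elementary charge, C (exact)

wavelength = C / DELTA_NU_CS       # ~0.0326 m, i.e. about 3.26 cm (microwave)
photon_energy = H * DELTA_NU_CS    # ~6.091e-24 J
mass_equiv = photon_energy / C**2  # ~6.777e-41 kg, via E = m c^2

print(f"wavelength      = {wavelength:.6f} m")
print(f"photon energy   = {photon_energy:.4e} J")
print(f"mass equivalent = {mass_equiv:.4e} kg")
print(f"h/e^2           = {H / E_CHARGE**2:.3f} ohm")  # ~25812.807 ohm
```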
Physical sciences
Measurement systems
Basics and measurement
40875
https://en.wikipedia.org/wiki/Circular%20polarization
Circular polarization
In electrodynamics, circular polarization of an electromagnetic wave is a polarization state in which, at each point, the electromagnetic field of the wave has a constant magnitude and is rotating at a constant rate in a plane perpendicular to the direction of the wave. In electrodynamics, the strength and direction of an electric field is defined by its electric field vector. In the case of a circularly polarized wave, the tip of the electric field vector, at a given point in space, relates to the phase of the light as it travels through time and space. At any instant of time, the electric field vector of the wave indicates a point on a helix oriented along the direction of propagation. A circularly polarized wave can rotate in one of two possible senses: right-handed circular polarization (RHCP), in which the electric field vector rotates in a right-hand sense with respect to the direction of propagation, and left-handed circular polarization (LHCP), in which the vector rotates in a left-hand sense. Circular polarization is a limiting case of elliptical polarization. The other special case is the easier-to-understand linear polarization. All three terms were coined by Augustin-Jean Fresnel, in a memoir read to the French Academy of Sciences on 9 December 1822. Fresnel had first described the case of circular polarization, without yet naming it, in 1821. The phenomenon of polarization arises as a consequence of the fact that light behaves as a two-dimensional transverse wave. Circular polarization occurs when the two orthogonal electric field component vectors are of equal magnitude and are out of phase by exactly 90°, or one-quarter wavelength. Characteristics In a circularly polarized electromagnetic wave, the individual electric field vectors, as well as their combined vector, have a constant magnitude but a changing phase angle. Given that this is a plane wave, each vector represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the optical axis. Specifically, given that this is a circularly polarized plane wave, these vectors indicate that the electric field, from plane to plane, has a constant strength while its direction steadily rotates. Refer to the two images in the plane wave article to better appreciate this dynamic. This light is considered to be right-hand, clockwise circularly polarized if viewed by the receiver. Since this is an electromagnetic wave, each electric field vector has a corresponding, but not illustrated, magnetic field vector that is at a right angle to the electric field vector and proportional in magnitude to it. As a result, the magnetic field vectors would trace out a second helix if displayed. Circular polarization is often encountered in the field of optics and, in this section, the electromagnetic wave will be simply referred to as light. The nature of circular polarization and its relationship to other polarizations is often understood by thinking of the electric field as being divided into two components that are perpendicular to each other. The vertical component and its corresponding plane are illustrated in blue, while the horizontal component and its corresponding plane are illustrated in green. Notice that the rightward (relative to the direction of travel) horizontal component leads the vertical component by one quarter of a wavelength, a 90° phase difference.
It is this quadrature phase relationship that creates the helix and causes the points of maximum magnitude of the vertical component to correspond with the points of zero magnitude of the horizontal component, and vice versa. The result of this alignment is a set of select vectors, corresponding to the helix, which exactly match the maxima of the vertical and horizontal components. To appreciate how this quadrature phase shift corresponds to an electric field that rotates while maintaining a constant magnitude, imagine a dot traveling clockwise in a circle. Consider how the vertical and horizontal displacements of the dot, relative to the center of the circle, vary sinusoidally in time and are out of phase by one quarter of a cycle. The displacements are said to be out of phase by one quarter of a cycle because the horizontal maximum displacement (toward the left) is reached one quarter of a cycle before the vertical maximum displacement is reached. Now referring again to the illustration, imagine the center of the circle just described, traveling along the axis from the front to the back. The circling dot will trace out a helix, with the horizontal displacement (toward our viewing left) leading the vertical displacement. Just as the horizontal and vertical displacements of the rotating dot are out of phase by one quarter of a cycle in time, the magnitudes of the horizontal and vertical components of the electric field are out of phase by one quarter of a wavelength. The next pair of illustrations is that of left-handed, counterclockwise circularly polarized light when viewed by the receiver. Because it is left-handed, the rightward (relative to the direction of travel) horizontal component now lags the vertical component by one quarter of a wavelength, rather than leading it. Reversal of handedness Waveplate To convert circularly polarized light to the other handedness, one can use a half-waveplate. A half-waveplate shifts a given linear component of light one half of a wavelength relative to its orthogonal linear component. Reflection The handedness of polarized light is reversed when it is reflected off a surface at normal incidence. Upon such reflection, the rotation of the plane of polarization of the reflected light is identical to that of the incident field. However, with propagation now in the opposite direction, the same rotation direction that would be described as "right-handed" for the incident beam is "left-handed" for propagation in the reverse direction, and vice versa. Aside from the reversal of handedness, the ellipticity of polarization is also preserved (except in cases of reflection by a birefringent surface). Note that this principle only holds strictly for light reflected at normal incidence. For instance, right circularly polarized light reflected from a dielectric surface at grazing incidence (an angle beyond the Brewster angle) will still emerge as right-handed, but elliptically polarized. Light reflected by a metal at non-normal incidence will generally have its ellipticity changed as well. Such situations may be solved by decomposing the incident circular (or other) polarization into components of linear polarization parallel and perpendicular to the plane of incidence, commonly denoted p and s respectively. The reflected components in the p and s linear polarizations are found by applying the Fresnel coefficients of reflection, which are generally different for those two linear polarizations. 
Only in the special case of normal incidence, where there is no distinction between p and s, are the Fresnel coefficients for the two components identical, leading to the above property. Conversion to linear polarization Circularly polarized light can be converted into linearly polarized light by passing it through a quarter-waveplate. Passing linearly polarized light through a quarter-waveplate with its axes at 45° to its polarization axis will convert it to circular polarization. In fact, this is the most common way of producing circular polarization in practice. Note that passing linearly polarized light through a quarter-waveplate at an angle other than 45° will generally produce elliptical polarization. Handedness conventions Circular polarization may be referred to as right-handed or left-handed, and clockwise or anti-clockwise, depending on the direction in which the electric field vector rotates. Unfortunately, two opposing historical conventions exist. From the point of view of the source Using this convention, polarization is defined from the point of view of the source. When using this convention, left- or right-handedness is determined by pointing one's left or right thumb away from the source, in the direction that the wave is propagating, and matching the curling of one's fingers to the direction of the temporal rotation of the field at a given point in space. When determining if the wave is clockwise or anti-clockwise circularly polarized, one again takes the point of view of the source, and while looking away from the source and in the direction of the wave's propagation, one observes the direction of the field's temporal rotation. Using this convention, the electric field vector of a left-handed circularly polarized wave propagating in the +z direction is as follows: E(z, t) = E0 [x̂ cos(ωt − kz) − ŷ sin(ωt − kz)]. As a specific example, refer to the circularly polarized wave in the first animation. Using this convention, that wave is defined as right-handed because when one points one's right thumb in the direction of the wave's propagation, the fingers of that hand curl in the same direction as the field's temporal rotation. It is considered clockwise circularly polarized because, from the point of view of the source, looking in the direction of the wave's propagation, the field rotates in the clockwise direction. The second animation is that of left-handed or anti-clockwise light, using this same convention. This convention is in conformity with the Institute of Electrical and Electronics Engineers (IEEE) standard and, as a result, it is generally used in the engineering community. Quantum physicists also use this convention of handedness because it is consistent with their convention of handedness for a particle's spin. Radio astronomers also use this convention, in accordance with an International Astronomical Union (IAU) resolution made in 1973. From the point of view of the receiver In this alternative convention, polarization is defined from the point of view of the receiver. Using this convention, left- or right-handedness is determined by pointing one's left or right thumb toward the source, against the direction of propagation, and then matching the curling of one's fingers to the temporal rotation of the field. When using this convention, in contrast to the other convention, the defined handedness of the wave matches the handedness of the screw-like pattern the field traces out in space. 
Specifically, if one freezes a right-handed wave in time, when one curls the fingers of one's right hand around the helix, the thumb will point in the direction in which the helix progresses, given that sense of rotation. Note that, as is the nature of all screws and helices, it does not matter in which direction you point your thumb when determining handedness. When determining if the wave is clockwise or anti-clockwise circularly polarized, one again takes the point of view of the receiver and, while looking toward the source, against the direction of propagation, one observes the direction of the field's temporal rotation. Just as in the other convention, right-handedness corresponds to a clockwise rotation, and left-handedness corresponds to an anti-clockwise rotation. Many optics textbooks use this second convention. It is also used by SPIE as well as the International Union of Pure and Applied Chemistry (IUPAC). Uses of the two conventions As stated earlier, there is significant confusion with regard to these two conventions. As a general rule, the engineering, quantum physics, and radio astronomy communities use the first convention, in which the wave is observed from the point of view of the source. In many physics textbooks dealing with optics, the second convention is used, in which the light is observed from the point of view of the receiver. To avoid confusion, it is good practice to specify "as defined from the point of view of the source" or "as defined from the point of view of the receiver" when discussing polarization matters. The archive of the US Federal Standard 1037C proposes two contradictory conventions of handedness. Note that the IEEE defines RHCP and LHCP opposite to the definitions used by physicists. The IEEE 1979 Antenna Standard shows RHCP at the south pole of the Poincaré sphere. The IEEE defines RHCP using the right hand with the thumb pointing in the direction of transmission and the fingers showing the direction of rotation of the E field with time. The rationale for the opposite conventions used by physicists and engineers is that astronomical observations are always made with the incoming wave traveling toward the observer, whereas most engineers are assumed to be standing behind the transmitter, watching the wave travel away from them. This article does not use the IEEE 1979 Antenna Standard and does not use the +t convention typically used in IEEE work. FM radio FM broadcast radio stations sometimes employ circular polarization to improve signal penetration into buildings and vehicles. It is one example of what the International Telecommunication Union refers to as "mixed polarization", i.e. radio emissions that include both horizontally- and vertically-polarized components. In the United States, Federal Communications Commission regulations state that horizontal polarization is the standard for FM broadcasting, but that "circular or elliptical polarization may be employed if desired". Dichroism Circular dichroism (CD) is the differential absorption of left- and right-handed circularly polarized light. Circular dichroism is the basis of a form of spectroscopy that can be used to determine the optical isomerism and secondary structure of molecules. In general, this phenomenon will be exhibited in absorption bands of any optically active molecule. As a consequence, circular dichroism is exhibited by most biological molecules, because of the dextrorotatory (e.g., some sugars) and levorotatory (e.g., some amino acids) molecules they contain. 
Noteworthy as well is that a secondary structure will also impart a distinct CD to its respective molecules. Therefore, the alpha helix, beta sheet and random coil regions of proteins and the double helix of nucleic acids have CD spectral signatures representative of their structures. Also, under the right conditions, even non-chiral molecules will exhibit magnetic circular dichroism — that is, circular dichroism induced by a magnetic field. Luminescence Circularly polarized luminescence (CPL) can occur when either a luminophore or an ensemble of luminophores is chiral. The extent to which emissions are polarized is quantified in the same way it is for circular dichroism, in terms of the dissymmetry factor, also sometimes referred to as the anisotropy factor. This value is given by: g_em = 2(Φ_L − Φ_R)/(Φ_L + Φ_R), where Φ_L corresponds to the quantum yield of left-handed circularly polarized light, and Φ_R to that of right-handed light. The maximum absolute value of g_em, corresponding to purely left- or right-handed circular polarization, is therefore 2. Meanwhile, the smallest absolute value that g_em can achieve, corresponding to linearly polarized or unpolarized light, is zero. Mathematical description The classical sinusoidal plane wave solution of the electromagnetic wave equation for the electric and magnetic fields is: E(r, t) = |E| Re{Q |ψ⟩ exp[i(kz − ωt)]} and B(r, t) = (1/c) ẑ × E(r, t), where k is the wavenumber; ω is the angular frequency of the wave; Q = (x̂, ŷ) is an orthogonal matrix whose columns span the transverse x-y plane; and c is the speed of light. Here, |E| is the amplitude of the field, and |ψ⟩ = (cos θ exp(iα_x), sin θ exp(iα_y))ᵀ is the normalized Jones vector in the x-y plane. If α_y is rotated by π/2 radians with respect to α_x and the x amplitude equals the y amplitude, the wave is circularly polarized. The Jones vector is then: |ψ⟩ = (1/√2)(1, ±i)ᵀ exp(iα_x), where the plus sign indicates left circular polarization, and the minus sign indicates right circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x-y plane. If basis vectors are defined such that: |R⟩ = (1/√2)(1, −i)ᵀ and: |L⟩ = (1/√2)(1, i)ᵀ, then the polarization state can be written in the "R-L basis" as: |ψ⟩ = ψ_R |R⟩ + ψ_L |L⟩, where: ψ_R = ⟨R|ψ⟩ and: ψ_L = ⟨L|ψ⟩. Antennas A number of different types of antenna elements can be used to produce circularly polarized (or nearly so) radiation; following Balanis, one can use dipole elements: "... two crossed dipoles provide the two orthogonal field components.... If the two dipoles are identical, the field intensity of each along zenith ... would be of the same intensity. Also, if the two dipoles were fed with a 90° time-phase difference (phase quadrature), the polarization along zenith would be circular.... One way to obtain the 90° time-phase difference between the two orthogonal field components, radiated respectively by the two dipoles, is by feeding one of the two dipoles with a transmission line which is 1/4 wavelength longer or shorter than that of the other," p.80; or helical elements: "To achieve circular polarization [in axial or end-fire mode] ... the circumference C of the helix must be ... with C/wavelength = 1 near optimum, and the spacing about S = wavelength/4," p.571; or patch elements: "... circular and elliptical polarizations can be obtained using various feed arrangements or slight modifications made to the elements.... Circular polarization can be obtained if two orthogonal modes are excited with a 90° time-phase difference between them. This can be accomplished by adjusting the physical dimensions of the patch.... For a square patch element, the easiest way to excite ideally circular polarization is to feed the element at two adjacent edges.... 
The quadrature phase difference is obtained by feeding the element with a 90° power divider," p.859. In quantum mechanics In the quantum mechanical view, light is composed of photons. Polarization is a manifestation of the spin angular momentum of light. More specifically, in quantum mechanics, the direction of spin of a photon is tied to the handedness of the circularly polarized light, and the spin of a beam of photons is similar to the spin of a beam of particles, such as electrons. In nature Only a few mechanisms in nature are known to systematically produce circularly polarized light. In 1911, Albert Abraham Michelson discovered that light reflected from the golden scarab beetle Chrysina resplendens is preferentially left-circularly polarized. Since then, circular polarization has been measured in several other scarab beetles such as Chrysina gloriosa, as well as some crustaceans such as the mantis shrimp. In these cases, the underlying mechanism is the molecular-level helicity of the chitinous cuticle. The bioluminescence of the larvae of fireflies is also circularly polarized, as reported in 1980 for the species Photuris lucicrescens and Photuris versicolor. For fireflies, it is more difficult to find a microscopic explanation for the polarization, because the left and right lanterns of the larvae were found to emit polarized light of opposite senses. The authors suggest that the light begins with a linear polarization due to inhomogeneities inside aligned photocytes, and it picks up circular polarization while passing through linearly birefringent tissue. Circular polarization has been detected in light reflected from leaves and photosynthetic microbes. Water-air interfaces provide another source of circular polarization. Sunlight that gets scattered back up towards the surface is linearly polarized. If this light is then totally internally reflected back down, its vertical component undergoes a phase shift. To an underwater observer looking up, the faint light outside Snell's window is therefore (partially) circularly polarized. Weaker sources of circular polarization in nature include multiple scattering by linear polarizers, as in the circular polarization of starlight, and selective absorption by circularly dichroic media. Radio emission from pulsars can be strongly circularly polarized. Two species of mantis shrimp have been reported to be able to detect circularly polarized light.
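Since the two handedness conventions described above differ only in viewpoint, the rotation sense itself can be computed once a viewpoint is fixed. The following is a minimal sketch (Python with NumPy; the choice of propagation along +z and the sign convention of the test are this example's own assumptions, not taken from the text); it samples the transverse field at a fixed point and uses the z-component of E(t) × E(t + dt) to classify the temporal rotation:

# Classify the temporal rotation sense of a transverse field sampled
# at a fixed point, for a wave assumed to propagate along +z.
import numpy as np

def rotation_sense(Ex, Ey):
    """Return +1 if the sampled field rotates from +x toward +y
    (counterclockwise as seen by a receiver the wave approaches),
    or -1 for the opposite sense."""
    cross_z = Ex[:-1] * Ey[1:] - Ey[:-1] * Ex[1:]  # z-component of E(t) x E(t+dt)
    return int(np.sign(cross_z.sum()))

t = np.linspace(0.0, 1.0, 1000)
w = 2 * np.pi

print(rotation_sense(np.cos(w * t), np.sin(w * t)))  # +1: x leads y by a quarter cycle
print(rotation_sense(np.sin(w * t), np.cos(w * t)))  # -1: y leads x by a quarter cycle

Whether +1 is then called "right-handed" or "left-handed" depends on which of the two conventions above is adopted; the computation itself is the same either way.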
Physical sciences
Optics
Physics
40898
https://en.wikipedia.org/wiki/Collision
Collision
In physics, a collision is any event in which two or more bodies exert forces on each other in a relatively short time. Although the most common use of the word collision refers to incidents in which two or more objects collide with great force, the scientific use of the term implies nothing about the magnitude of the force. Types of collisions A collision is a short-duration interaction between two or more bodies that causes a change in the motion of the bodies involved, due to the internal forces acting between them during the interaction. Collisions involve forces (there is a change in velocity). The magnitude of the velocity difference just before impact is called the closing speed. All collisions conserve momentum. What distinguishes different types of collisions is whether they also conserve the kinetic energy of the system. Collisions are of two types: Elastic collision If all of the total kinetic energy is conserved (i.e. no energy is released as sound, heat, etc.), the collision is said to be perfectly elastic. Such a system is an idealization and cannot occur in reality, due to the second law of thermodynamics. Inelastic collision If some or all of the total kinetic energy is lost (dissipated as heat, sound, etc. or absorbed by the objects themselves), the collision is said to be inelastic; the bodies need not come to a stop, but some of their kinetic energy is converted into other forms of energy. An example of this is a baseball bat hitting a baseball: some of the kinetic energy of the bat is transferred to the ball, greatly increasing the ball's velocity, and the sound of the bat hitting the ball represents a loss of energy. A "perfectly inelastic" collision (also called a "perfectly plastic" collision) is a limiting case of inelastic collision in which the two bodies coalesce after impact. An example of such a collision is a car crash, as cars crumple inward when crashing, rather than bouncing off of each other. This is by design, for the safety of the occupants and bystanders should a crash occur: the frame of the car absorbs the energy of the crash instead. The degree to which a collision is elastic or inelastic is quantified by the coefficient of restitution, a value that generally ranges between zero and one. A perfectly elastic collision has a coefficient of restitution of one; a perfectly inelastic collision has a coefficient of restitution of zero. The line of impact is the line that is collinear to the common normal of the surfaces that are closest or in contact during impact. This is the line along which the internal force of collision acts during impact, and Newton's coefficient of restitution is defined only along this line. Collisions in ideal gases approach perfectly elastic collisions, as do scattering interactions of sub-atomic particles which are deflected by the electromagnetic force. Some large-scale interactions like the slingshot type gravitational interactions between satellites and planets are almost perfectly elastic. Examples Billiards Collisions play an important role in cue sports. Because the collisions between billiard balls are nearly elastic, and the balls roll on a surface that produces low rolling friction, their behavior is often used to illustrate Newton's laws of motion. After a zero-friction collision of a moving ball with a stationary one of equal mass, the angle between the directions of the two balls is 90 degrees. 
This is an important fact that professional billiards players take into account, although it assumes the ball is sliding without friction across the table rather than rolling with friction. Consider an elastic collision in two dimensions of any two masses m_a and m_b, with respective initial velocities v_a1 and v_b1 where v_b1 = 0, and final velocities v_a2 and v_b2. Conservation of momentum gives m_a v_a1 = m_a v_a2 + m_b v_b2. Conservation of energy for an elastic collision gives (1/2) m_a |v_a1|² = (1/2) m_a |v_a2|² + (1/2) m_b |v_b2|². Now consider the case m_a = m_b: we obtain v_a1 = v_a2 + v_b2 and |v_a1|² = |v_a2|² + |v_b2|². Taking the dot product of each side of the former equation with itself, |v_a1|² = v_a1 · v_a1 = |v_a2|² + |v_b2|² + 2 v_a2 · v_b2. Comparing this with the latter equation gives v_a2 · v_b2 = 0, so they are perpendicular unless v_a2 is the zero vector (which occurs if and only if the collision is head-on). Perfect inelastic collision In a perfect inelastic collision, i.e., a zero coefficient of restitution, the colliding particles coalesce. Using conservation of momentum, m_a v_a1 + m_b v_b1 = (m_a + m_b) v, the final velocity is given by v = (m_a v_a1 + m_b v_b1) / (m_a + m_b). The reduction of total kinetic energy is equal to the total kinetic energy before the collision in a center of momentum frame with respect to the system of two particles, because in such a frame the kinetic energy after the collision is zero. In this frame most of the kinetic energy before the collision is that of the particle with the smaller mass. In another frame, in addition to the reduction of kinetic energy there may be a transfer of kinetic energy from one particle to the other; the fact that this depends on the frame shows how frame-dependent the division of kinetic energy is. With time reversed we have the situation of two objects pushed away from each other, e.g. shooting a projectile, or a rocket applying thrust (compare the derivation of the Tsiolkovsky rocket equation). Animal locomotion Collisions of an animal's foot or paw with the underlying substrate are generally termed ground reaction forces. These collisions are inelastic, as kinetic energy is not conserved. An important research topic in prosthetics is quantifying the forces generated during the foot-ground collisions associated with both disabled and non-disabled gait. This quantification typically requires subjects to walk across a force platform (sometimes called a "force plate") as well as detailed kinematic and dynamic (sometimes termed kinetic) analysis. Hypervelocity impacts Hypervelocity is very high velocity, approximately 3,000 meters per second (11,000 km/h, 6,700 mph, 10,000 ft/s, or Mach 8.8) or more. In particular, hypervelocity is velocity so high that the strength of materials upon impact is very small compared to inertial stresses. Thus, metals and fluids behave alike under hypervelocity impact. An impact under extreme hypervelocity results in vaporization of the impactor and target. For structural metals, hypervelocity is generally considered to be over 2,500 m/s (5,600 mph, 9,000 km/h, 8,200 ft/s, or Mach 7.3). Meteorite craters are also examples of hypervelocity impacts.
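As a numerical check of the collision relations above, the following sketch (Python with NumPy; the masses, velocities, and line-of-impact angle are arbitrary example values, and the smooth-sphere impulse model is an assumption of this example) verifies momentum and energy conservation, the 90° separation for equal masses, and the perfectly inelastic final velocity:

# Numerical check of the two-dimensional collision results above.
import numpy as np

m = 0.17                                   # kg; two equal-mass billiard balls
va1 = np.array([2.0, 0.0])                 # m/s; the moving ball, target at rest
n = np.array([np.cos(0.3), np.sin(0.3)])   # unit vector along the line of impact

# Smooth-sphere elastic impact, equal masses: the struck ball carries off
# the component of the incoming velocity along the line of impact.
vb2 = np.dot(va1, n) * n
va2 = va1 - vb2

print(np.allclose(m * va1, m * va2 + m * vb2))       # momentum conserved
print(np.isclose(va1 @ va1, va2 @ va2 + vb2 @ vb2))  # kinetic energy conserved
print(np.isclose(va2 @ vb2, 0.0))                    # directions 90 degrees apart

# Perfectly inelastic impact: the bodies coalesce.
v_final = (m * va1 + m * np.array([0.0, 0.0])) / (m + m)
print(v_final)                                       # [1. 0.]: half the incoming velocity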
Physical sciences
Basics_4
Physics
40909
https://en.wikipedia.org/wiki/Booting
Booting
In computing, booting is the process of starting a computer, as initiated via hardware such as a physical button on the computer or by a software command. After it is switched on, a computer's central processing unit (CPU) has no software in its main memory, so some process must load software into memory before it can be executed. This may be done by hardware or firmware in the CPU, or by a separate processor in the computer system. On some systems a power-on reset (POR) does not initiate booting, and the operator must initiate booting after POR completes. IBM uses the term Initial Program Load (IPL) on some product lines. Restarting a computer is also called rebooting, which can be "hard", e.g. after electrical power to the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware such as a button press or by a software command. Booting is complete when the operative runtime system, typically the operating system and some applications, is attained. The process of returning a computer from a state of sleep (suspension) does not involve booting; however, restoring it from a state of hibernation does. Some embedded systems do not require a noticeable boot sequence to begin functioning and, when turned on, may simply run operational programs that are stored in ROM. All computing systems are state machines, and a reboot may be the only method to return to a designated zero-state from an unintended, locked state. In addition to loading an operating system or stand-alone utility, the boot process can also load a storage dump program for diagnosing problems in an operating system. Boot is short for bootstrap or bootstrap load and derives from the phrase to pull oneself up by one's bootstraps. The usage calls attention to the requirement that, if most software is loaded onto a computer by other software already running on the computer, some mechanism must exist to load the initial software onto the computer. Early computers used a variety of ad-hoc methods to get a small program into memory to solve this problem. The invention of read-only memory (ROM) of various types solved this paradox by allowing computers to be shipped with a startup program, stored in the boot ROM of the computer, that could not be erased. Growth in the capacity of ROM has allowed ever more elaborate startup procedures to be implemented. History There are many different methods available to load a short initial program into a computer. These methods range from simple physical input to removable media that can hold more complex programs. Pre integrated-circuit-ROM examples Early computers Early computers in the 1940s and 1950s were one-of-a-kind engineering efforts that could take weeks to program, and program loading was one of many problems that had to be solved. An early computer, ENIAC, had no program stored in memory, but was set up for each problem by a configuration of interconnecting cables. Bootstrapping did not apply to ENIAC, whose hardware configuration was ready for solving problems as soon as power was applied. The EDSAC system, the second stored-program computer to be built, used stepping switches to transfer a fixed program into memory when its start button was pressed. The program stored on this device, which David Wheeler completed in late 1948, loaded further instructions from punched tape and then executed them. 
First commercial computers The first programmable computers for commercial sale, such as the UNIVAC I and the IBM 701, included features to make their operation simpler. They typically included instructions that performed a complete input or output operation. The same hardware logic could be used to load the contents of a punch card (the most typical input medium) or other input media, such as a magnetic drum or magnetic tape, that contained a bootstrap program by pressing a single button. This booting concept went by a variety of names on IBM computers of the 1950s and early 1960s, but IBM used the term "Initial Program Load" with the IBM 7030 Stretch and later used it for their mainframe lines, starting with the System/360 in 1964. The IBM 701 computer (1952–1956) had a "Load" button that initiated reading of the first 36-bit word into main memory from a punched card in a card reader, a magnetic tape in a tape drive, or a magnetic drum unit, depending on the position of the Load Selector switch. The left 18-bit half-word was then executed as an instruction, which usually read additional words into memory. The loaded boot program was then executed, which, in turn, loaded a larger program from that medium into memory without further help from the human operator. The IBM 704, IBM 7090, and IBM 7094 had similar mechanisms, but with different load buttons for different devices. The term "boot" has been used in this sense since at least 1958. Other IBM computers of that era had similar features. For example, the IBM 1401 system (c. 1958) used a card reader to load a program from a punched card. The 80 characters stored in the punched card were read into memory locations 001 to 080, then the computer would branch to memory location 001 to read its first stored instruction. This instruction was always the same: move the information in these first 80 memory locations to an assembly area where the information in punched cards 2, 3, 4, and so on, could be combined to form the stored program. Once this information was moved to the assembly area, the machine would branch to an instruction in location 080 (read a card) and the next card would be read and its information processed. Another example was the IBM 650 (1953), a decimal machine, which had a group of ten 10-position switches on its operator panel which were addressable as a memory word (address 8000) and could be executed as an instruction. Thus setting the switches to 7004000400 and pressing the appropriate button would read the first card in the card reader into memory (op code 70), starting at address 400, and then jump to 400 to begin executing the program on that card. The IBM 7040 and 7044 have a similar mechanism, in which the Load button causes the instruction set up in the entry keys on the front panel to be executed, and the channel that the instruction sets up is given a command to transfer data to memory starting at address 00100; when that transfer finishes, the CPU jumps to address 00101. IBM's competitors also offered single-button program load. The CDC 6600 (c. 1964) had a dead start panel with 144 toggle switches; the dead start switch entered 12 12-bit words from the toggle switches into the memory of peripheral processor (PP) 0 and initiated the load sequence by causing PP 0 to execute the code loaded into memory. PP 0 loaded the necessary code into its own memory and then initialized the other PPs. The GE 645 (c. 
1965) had a "SYSTEM BOOTLOAD" button that, when pressed, caused one of the I/O controllers to load a 64-word program into memory from a diode read-only memory and deliver an interrupt to cause that program to start running. The first model of the PDP-10 had a "READ IN" button that, when pressed, reset the processor and started an I/O operation on a device specified by switches on the control panel, reading in a 36-bit word giving a target address and count for subsequent word reads; when the read completed, the processor started executing the code read in by jumping to the last word read in. A noteworthy variation of this is found on the Burroughs B1700, where there is neither a bootstrap ROM nor a hardwired IPL operation. Instead, after the system is reset, it reads and executes microinstructions sequentially from a cassette tape drive mounted on the front panel; this sets up a boot loader in RAM which is then executed. However, since this makes few assumptions about the system, it can equally well be used to load diagnostic (Maintenance Test Routine) tapes which display an intelligible code on the front panel even in cases of gross CPU failure. IBM System/360 and successors In the IBM System/360 and its successors, including the current z/Architecture machines, the boot process is known as Initial Program Load (IPL). IBM coined this term for the 7030 (Stretch), revived it for the design of the System/360, and continues to use it in those environments today. In the System/360 processors, an IPL is initiated by the computer operator by selecting the three-hexadecimal-digit device address (CUU; C=I/O channel address, UU=control unit and device address) followed by pressing the LOAD button. On the high-end System/360 models, most System/370, and some later systems, the functions of the switches and the LOAD button are simulated using selectable areas on the screen of a graphics console, often an IBM 2250-like device or an IBM 3270-like device. For example, on the System/370 Model 158, the keyboard sequence 0-7-X (zero, seven and X, in that order) results in an IPL from the device address which was keyed into the input area. The Amdahl 470V/6 and related CPUs supported four hexadecimal digits on those CPUs which had the optional second channel unit installed, for a total of 32 channels. Later, IBM would also support more than 16 channels. The IPL function in the System/360 and its successors prior to IBM Z, and its compatibles such as Amdahl's, reads 24 bytes from an operator-specified device into main storage starting at real address zero. The second and third groups of eight bytes are treated as Channel Command Words (CCWs) to continue loading the startup program (the first CCW is always simulated by the CPU and consists of a Read IPL command, X'02', with command chaining and suppression of incorrect-length indication enforced). When the I/O channel commands are complete, the first group of eight bytes is then loaded into the processor's Program Status Word (PSW) and the startup program begins execution at the location designated by that PSW. The IPL device is usually a disk drive, hence the special significance of the read-type command, but exactly the same procedure is also used to IPL from other input-type devices, such as tape drives, or even card readers, in a device-independent manner, allowing, for example, the installation of an operating system on a brand-new computer from an OS initial distribution magnetic tape. 
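To make the 24-byte layout concrete, here is a hypothetical sketch (Python; the file name is a placeholder, and in reality the split is performed by the CPU and channel hardware, not by software) that separates such a record into the initial PSW and the two CCWs:

# Hypothetical sketch: splitting a System/360-style 24-byte IPL record
# into its three 8-byte groups, per the layout described above.
with open("ipl_device.img", "rb") as f:   # placeholder standing in for the IPL device
    record = f.read(24)

assert len(record) == 24, "IPL reads exactly 24 bytes into main storage"

ipl_psw = record[0:8]     # first 8 bytes: loaded into the Program Status Word
ipl_ccw1 = record[8:16]   # second 8 bytes: a Channel Command Word
ipl_ccw2 = record[16:24]  # third 8 bytes: a Channel Command Word

print("PSW :", ipl_psw.hex())
print("CCW1:", ipl_ccw1.hex())
print("CCW2:", ipl_ccw2.hex())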
For disk controllers, the X'02' command also causes the selected device to seek to cylinder 0, head 0, simulating a Seek cylinder and head command, X'07', and to search for record 1, simulating a Search ID Equal command, X'31'; seeks and searches are not simulated by tape and card controllers, as for these device classes a Read IPL command is simply a sequential read command. The disk, tape or card deck must contain a special program to load the actual operating system or standalone utility into main storage, and for this specific purpose "IPL Text" is placed on the disk by the stand-alone DASDI (Direct Access Storage Device Initialization) program or an equivalent program running under an operating system, e.g., ICKDSF, but IPL-able tapes and card decks are usually distributed with this "IPL Text" already present. IBM introduced some evolutionary changes in the IPL process, changing some details for System/370 Extended Architecture (S/370-XA) and later, and adding a new type of IPL for z/Architecture. Minicomputers Minicomputers, starting with the Digital Equipment Corporation (DEC) PDP-5 and PDP-8 (1965), simplified design by using the CPU to assist input and output operations. This saved cost but made booting more complicated than pressing a single button. Minicomputers typically had some way to toggle in short programs by manipulating an array of switches on the front panel. Since the early minicomputers used magnetic-core memory, which did not lose its information when power was off, these bootstrap loaders would remain in place unless they were erased. Erasure sometimes happened accidentally when a program bug caused a loop that overwrote all of memory. Other minicomputers with such a simple form of booting include Hewlett-Packard's HP 2100 series (mid-1960s), the original Data General Nova (1969), and DEC's PDP-4 (1962) and PDP-11 (1970). As the I/O operations needed to cause a read operation on a minicomputer I/O device were typically different for different device controllers, different bootstrap programs were needed for different devices. DEC later added, in 1971, an optional diode matrix read-only memory for the PDP-11 that stored a bootstrap program of up to 32 words (64 bytes). It consisted of a printed circuit card, the M792, that plugged into the Unibus and held a 32 by 16 array of semiconductor diodes. With all 512 diodes in place, the memory contained all "one" bits; the card was programmed by cutting off each diode whose bit was to be "zero". DEC also sold versions of the card, the BM792-Yx series, pre-programmed for many standard input devices by simply omitting the unneeded diodes. Following the older approach, the earlier PDP-1 has a hardware loader, such that an operator need only push the "load" switch to instruct the paper tape reader to load a program directly into core memory. The PDP-7, PDP-9, and PDP-15, successors to the PDP-4, have an added Read-In button to read a program in from paper tape and jump to it. The Data General Supernova used front panel switches to cause the computer to automatically load instructions into memory from a device specified by the front panel's data switches, and then jump to the loaded code. Early minicomputer boot loader examples In a minicomputer with a paper tape reader, the first program to run in the boot process, the boot loader, would read into core memory either the second-stage boot loader (often called a Binary Loader) that could read paper tape with checksum or the operating system from an outside storage medium. 
Pseudocode for the boot loader might be as simple as the following eight instructions:
1. Set the P register to 9
2. Check paper tape reader ready
3. If not ready, jump to 2
4. Read a byte from paper tape reader to accumulator
5. Store accumulator to address in P register
6. If end of tape, jump to 9
7. Increment the P register
8. Jump to 2
A related example is based on a loader for a Nicolet Instrument Corporation minicomputer of the 1970s, using the paper tape reader-punch unit on a Teletype Model 33 ASR teleprinter. The bytes of its second-stage loader are read from paper tape in reverse order:
1. Set the P register to 106
2. Check paper tape reader ready
3. If not ready, jump to 2
4. Read a byte from paper tape reader to accumulator
5. Store accumulator to address in P register
6. Decrement the P register
7. Jump to 2
The length of the second stage loader is such that the final byte overwrites location 7. After the instruction in location 6 executes, location 7 starts the second stage loader executing. The second stage loader then waits for the much longer tape containing the operating system to be placed in the tape reader. The difference between the boot loader and second stage loader is the addition of checking code to trap paper tape read errors, a frequent occurrence with relatively low-cost, "part-time-duty" hardware, such as the Teletype Model 33 ASR. (Friden Flexowriters were far more reliable, but also comparatively costly.) Booting the first microcomputers The earliest microcomputers, such as the Altair 8800 (released first in 1975) and an even earlier, similar machine (based on the Intel 8008 CPU), had no bootstrapping hardware as such. When powered up, the CPU would see memory that contained random data. The front panels of these machines carried toggle switches for entering addresses and data, one switch per bit of the computer memory word and address bus. Simple additions to the hardware permitted one memory location at a time to be loaded from those switches to store bootstrap code. Meanwhile, the CPU was kept from attempting to execute memory content. Once correctly loaded, the CPU was enabled to execute the bootstrapping code. This process, similar to that used for several earlier minicomputers, was tedious and had to be error-free. Integrated circuit read-only memory era The introduction of integrated circuit read-only memory (ROM), with its many variants, including mask-programmed ROMs, programmable ROMs (PROM), erasable programmable ROMs (EPROM), and flash memory, reduced the physical size and cost of ROM. This allowed firmware boot programs to be included as part of the computer. Minicomputers The Data General Nova 1200 (1970) and Nova 800 (1971) had a program load switch that, in combination with options that provided two ROM chips, loaded a program into main memory from those ROM chips and jumped to it. Digital Equipment Corporation introduced the integrated-circuit-ROM-based BM873 (1974), M9301 (1977), M9312 (1978), REV11-A and REV11-C, MRV11-C, and MRV11-D ROM memories, all usable as bootstrap ROMs. The PDP-11/34 (1976), PDP-11/60 (1977), PDP-11/24 (1979), and most later models include boot ROM modules. An Italian telephone switching computer, called "Gruppi Speciali", patented in 1975 by Alberto Ciaramella, a researcher at CSELT, included an (external) ROM. Gruppi Speciali was, starting from 1975, a fully single-button machine booting into the operating system from a ROM memory composed of semiconductors, not of ferrite cores. 
Although the ROM device was not natively embedded in the computer of Gruppi Speciali, due to the design of the machine, it also allowed single-button ROM booting in machines not designed for that (therefore, this "bootstrap device" was architecture-independent), e.g. the PDP-11. Storing the state of the machine after switch-off was also provided, which was another critical feature in the telephone switching context. Some minicomputers and superminicomputers include a separate console processor that bootstraps the main processor. The PDP-11/44 had an Intel 8085 as a console processor; the VAX-11/780, the first member of Digital's VAX line of 32-bit superminicomputers, had an LSI-11-based console processor, and the VAX-11/730 had an 8085-based console processor. These console processors could boot the main processor from various storage devices. Some other superminicomputers, such as the VAX-11/750, implement console functions, including the first stage of booting, in CPU microcode. Microprocessors and microcomputers Typically, a microprocessor will, after a reset or power-on condition, perform a start-up process that usually takes the form of "begin execution of the code that is found starting at a specific address" or "look for a multibyte code at a specific address and jump to the indicated location to begin execution". A system built using that microprocessor will have the permanent ROM occupying these special locations so that the system always begins operating without operator assistance. For example, Intel x86 processors always start by running the instructions beginning at F000:FFF0, while for the MOS 6502 processor, initialization begins by reading a two-byte vector address at $FFFD (MS byte) and $FFFC (LS byte) and jumping to that location to run the bootstrap code. Apple Computer's first computer, the Apple I, introduced in 1976, featured PROM chips that eliminated the need for a front panel for the boot process (as was the case with the Altair 8800) in a commercial computer. According to Apple's ad announcing it, "No More Switches, No More Lights ... the firmware in PROMS enables you to enter, display and debug programs (all in hex) from the keyboard." Due to the expense of read-only memory at the time, the Apple II booted its disk operating systems using a series of very small incremental steps, each passing control onward to the next phase of the gradually more complex boot process. (See Apple DOS: Boot loader.) Because so little of the disk operating system relied on ROM, the hardware was also extremely flexible and supported a wide range of customized disk copy protection mechanisms. (See Software Cracking: History.) Some operating systems, most notably pre-1995 Macintosh systems from Apple, are so closely interwoven with their hardware that it is impossible to natively boot an operating system other than the standard one. This is the opposite extreme of the scenario using switches mentioned above; it is highly inflexible but relatively error-proof and foolproof as long as all hardware is working normally. A common solution in such situations is to design a boot loader that works as a program belonging to the standard OS that hijacks the system and loads the alternative OS. This technique was used by Apple for its A/UX Unix implementation and copied by various freeware operating systems and BeOS Personal Edition 5. Some machines, like the Atari ST microcomputer, were "instant-on", with the operating system executing from a ROM. 
Retrieval of the OS from secondary or tertiary store was thus eliminated as one of the characteristic operations for bootstrapping. To allow system customizations, accessories, and other support software to be loaded automatically, the Atari's floppy drive was read for additional components during the boot process. There was a timeout delay that provided time to manually insert a floppy as the system searched for the extra components. This could be avoided by inserting a blank disk. The Atari ST hardware was also designed so the cartridge slot could provide native program execution for gaming purposes, a holdover from Atari's legacy of making electronic games; by inserting the Spectre GCR cartridge with the Macintosh system ROM in the game slot and turning the Atari on, it could "natively boot" the Macintosh operating system rather than Atari's own TOS. The IBM Personal Computer included ROM-based firmware called the BIOS; one of the functions of that firmware was to perform a power-on self test when the machine was powered up, and then to read software from a boot device and execute it. Firmware compatible with the BIOS on the IBM Personal Computer is used in IBM PC compatible computers. UEFI was developed by Intel, originally for Itanium-based machines, and later also used as an alternative to the BIOS in x86-based machines, including Apple Macs using Intel processors. Unix workstations originally had vendor-specific ROM-based firmware. Sun Microsystems later developed OpenBoot, subsequently known as Open Firmware, which incorporated a Forth interpreter, with much of the firmware being written in Forth. It was standardized by the IEEE as IEEE standard 1275; firmware that implements that standard was used in PowerPC-based Macs and some other PowerPC-based machines, as well as Sun's own SPARC-based computers. The Advanced RISC Computing specification defined another firmware standard, which was implemented on some MIPS-based and Alpha-based machines and the SGI Visual Workstation x86-based workstations. Modern boot loaders When a computer is turned off, its software, including operating systems, application code, and data, remains stored on non-volatile memory. When the computer is powered on, it typically does not have an operating system or its loader in random-access memory (RAM). The computer first executes a relatively small program stored in read-only memory (ROM, and later EEPROM or NOR flash), which supports execute in place, to initialize the CPU and motherboard, to initialize the memory (especially on x86 systems), to initialize and access the storage (usually a block-addressed device, e.g. hard disk drive, NAND flash, solid-state drive) from which the operating system programs and data can be loaded into RAM, and to initialize other I/O devices. The small program that starts this sequence is known as a bootstrap loader, bootstrap or boot loader. Often, multiple-stage boot loaders are used, during which several programs of increasing complexity load one after the other in a process of chain loading. Some earlier computer systems, upon receiving a boot signal from a human operator or a peripheral device, may load a very small number of fixed instructions into memory at a specific location, initialize at least one CPU, and then point the CPU to the instructions and start their execution. These instructions typically start an input operation from some peripheral device (which may be switch-selectable by the operator). 
Other systems may send hardware commands directly to peripheral devices or I/O controllers that cause an extremely simple input operation (such as "read sector zero of the system device into memory starting at location 1000") to be carried out, effectively loading a small number of boot loader instructions into memory; a completion signal from the I/O device may then be used to start execution of the instructions by the CPU. Smaller computers often use less flexible but more automatic boot loader mechanisms to ensure that the computer starts quickly and with a predetermined software configuration. In many desktop computers, for example, the bootstrapping process begins with the CPU executing software contained in ROM (for example, the BIOS of an IBM PC) at a predefined address (some CPUs, including the Intel x86 series, are designed to execute this software after reset without outside help). This software contains rudimentary functionality to search for devices eligible to participate in booting, and to load a small program from a special section (most commonly the boot sector) of the most promising device, typically starting at a fixed entry point such as the start of the sector. Boot loaders may face peculiar constraints, especially in size; for instance, on the IBM PC and compatibles, the boot code must fit in the Master Boot Record (MBR) and the Partition Boot Record (PBR), which in turn are limited to a single sector; on the IBM System/360, the size is limited by the IPL medium, e.g., card size, track size. On systems with those constraints, the first program loaded into RAM may not be sufficiently large to load the operating system and, instead, must load another, larger program. The first program loaded into RAM is called a first-stage boot loader, and the program it loads is called a second-stage boot loader. On many embedded CPUs, the CPU's built-in boot ROM, sometimes called the zero-stage boot loader, can find and load first-stage boot loaders. First-stage boot loaders Examples of first-stage (hardware initialization stage) boot loaders include BIOS, UEFI, coreboot, Libreboot and Das U-Boot. On the IBM PC, the boot loader in the Master Boot Record (MBR) and the Partition Boot Record (PBR) was coded to require at least 32 KB (later expanded to 64 KB) of system memory and to use only instructions supported by the original 8088/8086 processors. Second-stage boot loaders Second-stage (OS initialization stage) boot loaders, such as shim, GNU GRUB, rEFInd, BOOTMGR, Syslinux, and NTLDR, are not themselves operating systems, but are able to load an operating system properly and transfer execution to it; the operating system subsequently initializes itself and may load extra device drivers. The second-stage boot loader does not need drivers for its own operation, but may instead use generic storage access methods provided by system firmware such as the BIOS, UEFI or Open Firmware, though typically with restricted hardware functionality and lower performance. Many boot loaders (like GNU GRUB, rEFInd, Windows's BOOTMGR, Syslinux, and Windows NT/2000/XP's NTLDR) can be configured to give the user multiple booting choices. 
These choices can include different operating systems (for dual or multi-booting from different partitions or drives), different versions of the same operating system (in case a new version has unexpected problems), different operating system loading options (e.g., booting into a rescue or safe mode), and some standalone programs that can function without an operating system, such as memory testers (e.g., memtest86+), a basic shell (as in GNU GRUB), or even games (see List of PC Booter games). Some boot loaders can also load other boot loaders; for example, GRUB loads BOOTMGR instead of loading Windows directly. Usually a default choice is preselected with a time delay during which a user can press a key to change the choice; after this delay, the default choice is automatically run so normal booting can occur without interaction. The boot process can be considered complete when the computer is ready to interact with the user, or the operating system is capable of running system programs or application programs. Embedded and multi-stage boot loaders Many embedded systems must boot immediately. For example, waiting a minute for a digital television or a GPS navigation device to start is generally unacceptable. Therefore, such devices have software systems in ROM or flash memory so the device can begin functioning immediately; little or no loading is necessary, because the loading can be precomputed and stored on the ROM when the device is made. Large and complex systems may have boot procedures that proceed in multiple phases until finally the operating system and other programs are loaded and ready to execute. Because operating systems are designed as if they never start or stop, a boot loader might load the operating system, configure itself as a mere process within that system, and then irrevocably transfer control to the operating system. The boot loader then terminates normally as any other process would. Network booting Most computers are also capable of booting over a computer network. In this scenario, the operating system is stored on the disk of a server, and certain parts of it are transferred to the client using a simple protocol such as the Trivial File Transfer Protocol (TFTP). After these parts have been transferred, the operating system takes over the control of the booting process. As with the second-stage boot loader, network booting begins by using generic network access methods provided by the network interface's boot ROM, which typically contains a Preboot Execution Environment (PXE) image. No drivers are required, but the system functionality is limited until the operating system kernel and drivers are transferred and started. As a result, once the ROM-based booting has completed it is entirely possible to network boot into an operating system that itself does not have the ability to use the network interface. IBM-compatible personal computers (PC) Boot devices The boot device is the storage device from which the operating system is loaded. A modern PC's UEFI or BIOS firmware supports booting from various devices, typically a local solid-state drive or hard disk drive via the GPT or Master Boot Record (MBR) on such a drive or disk, an optical disc drive (using El Torito), a USB mass storage device (USB flash drive, memory card reader, USB hard disk drive, USB optical disc drive, USB solid-state drive, etc.), or a network interface card (using PXE). Older, less common BIOS-bootable devices include floppy disk drives, Zip drives, and LS-120 drives. 
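Returning briefly to the network booting described above: the "simple protocol" nature of TFTP is visible in the shape of its packets. Here is a minimal sketch (Python; the server address and boot file name are placeholders) that sends a read request as defined by RFC 1350 and receives the first data block:

# Minimal TFTP read request (RFC 1350), the sort of simple protocol
# used in the network-boot path described above.
import socket
import struct

def read_request(filename: bytes, mode: bytes = b"octet") -> bytes:
    # RRQ: 2-byte opcode 1, then NUL-terminated filename and transfer mode
    return struct.pack("!H", 1) + filename + b"\x00" + mode + b"\x00"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(read_request(b"bootfile.img"), ("192.0.2.10", 69))  # TFTP uses UDP port 69

data, server = sock.recvfrom(4 + 512)           # DATA packets carry at most 512 payload bytes
opcode, block = struct.unpack("!HH", data[:4])  # opcode 3 = DATA, block numbers start at 1
print(opcode, block, len(data) - 4)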
Typically, the system firmware (UEFI or BIOS) will allow the user to configure a boot order. If the boot order is set to "first, the DVD drive; second, the hard disk drive", then the firmware will try to boot from the DVD drive, and if this fails (e.g. because there is no DVD in the drive), it will try to boot from the local hard disk drive. For example, on a PC with Windows installed on the hard drive, the user could set the boot order to the one given above, and then insert a Linux Live CD in order to try out Linux without having to install an operating system onto the hard drive. This is an example of dual booting, in which the user chooses which operating system to start after the computer has performed its power-on self-test (POST). In this example of dual booting, the user chooses by inserting or removing the DVD from the computer, but it is more common to choose which operating system to boot by selecting from a boot manager menu on the selected device, by using the computer keyboard to select from a BIOS or UEFI Boot Menu, or both; the Boot Menu is typically entered by pressing a key such as F8 or F12 during the POST, while the BIOS Setup is typically entered by pressing a key such as Del or F2 during the POST. Several devices are available that enable the user to quick-boot into what is usually a variant of Linux for various simple tasks such as Internet access; examples are Splashtop and Latitude ON. Boot sequence Upon starting, an IBM-compatible personal computer's x86 CPU executes, in real mode, the instruction located at the reset vector (physical memory address FFFF0h on 16-bit x86 processors and FFFFFFF0h on 32-bit and 64-bit x86 processors), usually pointing to the firmware (UEFI or BIOS) entry point inside the ROM. This memory location typically contains a jump instruction that transfers execution to the location of the firmware (UEFI or BIOS) start-up program. This program runs a power-on self-test (POST) to check and initialize required devices such as main memory (DRAM), the PCI bus and the PCI devices (including running embedded Option ROMs). One of the most involved steps is setting up DRAM over SPD, further complicated by the fact that at this point memory is very limited. After initializing required hardware, the firmware (UEFI or BIOS) goes through a pre-configured list of non-volatile storage devices ("boot device sequence") until it finds one that is bootable. BIOS Once the BIOS has found a bootable device, it loads the boot sector to linear address 7C00h (usually segment:offset 0000h:7C00h, but some BIOSes erroneously use 07C0h:0000h) and transfers execution to the boot code. In the case of a hard disk, this is referred to as the Master Boot Record (MBR). The conventional MBR code checks the MBR's partition table for a partition set as bootable (the one with the active flag set). If an active partition is found, the MBR code loads the boot sector code from that partition, known as the Volume Boot Record (VBR), and executes it. The MBR boot code is often operating-system specific. A bootable MBR device is defined as one that can be read from, and where the last two bytes of the first sector contain the little-endian word AA55h, found as the byte sequence 55h, AAh on disk (also known as the MBR boot signature), or where it is otherwise established that the code inside the sector is executable on x86 PCs. The boot sector code is the first-stage boot loader. 
The boot sector code is the first-stage boot loader. It is located on fixed disks and removable drives, and must fit into the first 446 bytes of the Master Boot Record in order to leave room for the default 64-byte partition table with four partition entries and the two-byte boot signature, which the BIOS requires for a proper boot loader; it may even need to fit into less space when additional features like more than four partition entries (up to 16 with 16 bytes each), a disk signature (6 bytes), a disk timestamp (6 bytes), an Advanced Active Partition (18 bytes) or special multi-boot loaders have to be supported as well in some environments. In floppy and superfloppy Volume Boot Records, up to 59 bytes are occupied by the Extended BIOS Parameter Block on FAT12 and FAT16 volumes since DOS 4.0, whereas the FAT32 EBPB introduced with DOS 7.1 requires as many as 87 bytes, leaving only 423 bytes for the boot loader when assuming a sector size of 512 bytes. Microsoft boot sectors therefore traditionally imposed certain restrictions on the boot process; for example, the boot file had to be located at a fixed position in the root directory of the file system and stored as consecutive sectors, conditions taken care of by the SYS command and slightly relaxed in later versions of DOS. The boot loader was then able to load the first three sectors of the file into memory, which happened to contain another embedded boot loader able to load the remainder of the file into memory. When Microsoft added LBA and FAT32 support, they even switched to a boot loader spanning two physical sectors and using 386 instructions for size reasons. At the same time, other vendors managed to squeeze much more functionality into a single boot sector without relaxing the original constraints of minimal available memory (32 KB) and processor support (8086/8088). For example, DR-DOS boot sectors are able to locate the boot file in the FAT12, FAT16 and FAT32 file system, and load it into memory as a whole via CHS or LBA, even if the file is not stored in a fixed location and in consecutive sectors. The VBR is often OS-specific; however, its main function is to load and execute the operating system boot loader file (such as bootmgr or ntldr), which is the second-stage boot loader, from an active partition. Then the boot loader loads the OS kernel from the storage device. If there is no active partition, or the active partition's boot sector is invalid, the MBR may load a secondary boot loader which will select a partition (often via user input) and load its boot sector, which usually loads the corresponding operating system kernel. In some cases, the MBR may also attempt to load secondary boot loaders before trying to boot the active partition. If all else fails, it should issue an INT 18h BIOS interrupt call (followed by an INT 19h in case INT 18h returns) in order to give back control to the BIOS, which would then attempt to boot from other devices or attempt a remote boot via network.

UEFI

Many modern systems (Intel Macs and newer PCs) use UEFI. Unlike BIOS, UEFI (when not performing legacy boot via the CSM) does not rely on boot sectors: the UEFI firmware loads the boot loader (an EFI application file, on a USB disk or in the EFI System Partition) directly, and the boot loader in turn loads the OS kernel.
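Returning to the legacy MBR layout described above (446 bytes of boot code, four 16-byte partition entries, two signature bytes), the following Python sketch parses the partition table of a raw disk image and reports which entry is marked active. The field offsets follow the classic MBR format; "disk.img" is again a hypothetical file name.

    import struct

    SECTOR = 512
    PT_OFFSET = 446  # the partition table starts after the boot code

    def parse_mbr(image_path: str) -> None:
        with open(image_path, "rb") as f:
            mbr = f.read(SECTOR)
        assert mbr[510:512] == b"\x55\xaa", "missing boot signature"
        for i in range(4):
            entry = mbr[PT_OFFSET + 16 * i : PT_OFFSET + 16 * (i + 1)]
            status = entry[0]  # 80h means active/bootable
            ptype = entry[4]   # partition type ID
            lba_start, sectors = struct.unpack_from("<II", entry, 8)
            print(f"entry {i}: active={status == 0x80} "
                  f"type={ptype:#04x} start_lba={lba_start} size={sectors}")

    parse_mbr("disk.img")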
SoCs, embedded systems, microcontrollers, and FPGAs

Many modern CPUs, SoCs and microcontrollers (for example, TI OMAP), or sometimes even digital signal processors (DSPs), may have a boot ROM integrated directly into their silicon, so such a processor can perform a simple boot sequence on its own and load boot programs (firmware or software) from boot sources such as NAND flash or eMMC. It is difficult to hardwire all the required logic for handling such devices, so an integrated boot ROM is used instead in such scenarios. Also, a boot ROM may be able to load a boot loader or diagnostic program via serial interfaces like UART, SPI, USB and so on. This feature is often used for system recovery purposes, or it can be used for initial non-volatile memory programming when there is no software available in the non-volatile memory yet. Many modern microcontrollers (e.g. the flash memory controller on USB flash drives) have firmware ROM integrated directly into their silicon. Some embedded system designs may also include an intermediary boot sequence step; for example, Das U-Boot may be split into two stages: the platform would load a small SPL (Secondary Program Loader), which is a stripped-down version of U-Boot, and the SPL would do some initial hardware configuration (e.g. DRAM initialization using CPU cache as RAM) and load the larger, fully featured version of U-Boot (see the sketch at the end of this subsection). To reduce cost, some CPUs and SoCs do not use CPU cache as RAM during the boot process; instead they use an integrated boot processor to perform some of the hardware configuration. It is also possible to take control of a system by using a hardware debug interface such as JTAG. Such an interface may be used to write the boot loader program into bootable non-volatile memory (e.g. flash) by instructing the processor core to perform the necessary actions to program non-volatile memory. Alternatively, the debug interface may be used to upload some diagnostic or boot code into RAM, and then to start the processor core and instruct it to execute the uploaded code. This allows, for example, the recovery of embedded systems where no software remains on any supported boot device, and where the processor does not have any integrated boot ROM. JTAG is a standard and popular interface; many CPUs, microcontrollers and other devices are manufactured with JTAG interfaces. Some microcontrollers provide special hardware interfaces which cannot be used to take arbitrary control of a system or directly run code, but instead allow the insertion of boot code into bootable non-volatile memory (like flash memory) via simple protocols. At the manufacturing phase, such interfaces are used to inject boot code (and possibly other code) into non-volatile memory. After system reset, the microcontroller begins to execute code programmed into its non-volatile memory, just as conventional processors use ROMs for booting. Most notably, this technique is used by Atmel AVR microcontrollers, among others. In many cases such interfaces are implemented by hardwired logic. In other cases such interfaces may be implemented by software running from an integrated on-chip boot ROM, using GPIO pins. Most DSPs have a serial-mode boot and a parallel-mode boot, such as the host port interface (HPI boot). In the case of DSPs, there is often a second microprocessor or microcontroller present in the system design, and this is responsible for overall system behavior, interrupt handling, dealing with external events, user interface, etc., while the DSP is dedicated to signal processing tasks only. In such systems, the DSP could be booted by another processor, which is sometimes referred to as the host processor (giving its name to the Host Port). Such a processor is also sometimes referred to as the master, since it usually boots first from its own memories and then controls overall system behavior, including booting the DSP and further controlling the DSP's behavior. The DSP often lacks its own boot memories and relies on the host processor to supply the required code instead. The most notable systems with such a design are cell phones, modems, audio and video players and so on, where a DSP and a CPU/microcontroller co-exist. Many FPGA chips load their configuration from an external serial EEPROM ("configuration ROM") on power-up.
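As referenced above, a two-stage SPL flow can be sketched in a few lines. The Python fragment below is purely illustrative: the image format (an 8-byte header carrying the second stage's size and CRC32), the file name, and the checksum scheme are all assumptions made for the demo, not U-Boot's actual format. A first stage verifies the second stage before handing over control (represented here by returning the payload).

    import struct
    import zlib

    # Hypothetical image layout: 8-byte header (size, CRC32), then payload.
    def load_second_stage(image_path: str) -> bytes:
        with open(image_path, "rb") as f:
            size, crc = struct.unpack("<II", f.read(8))
            payload = f.read(size)
        if zlib.crc32(payload) != crc:
            raise RuntimeError("second stage corrupted; stay in recovery")
        return payload  # a real SPL would jump to this code in RAM

    stage2 = load_second_stage("u-boot.img")  # hypothetical file name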
Security

Various measures have been implemented to enhance the security of the booting process. Some of them are mandatory, while others can be disabled or enabled by the end user. Traditionally, booting did not involve the use of cryptography. Boot security can be bypassed by unlocking the boot loader, which might or might not be approved by the manufacturer. Modern boot loaders make use of concurrency, meaning they can run on multiple processor cores and threads at the same time, which adds extra layers of complexity to secure booting. Matthew Garrett argued that booting security serves a legitimate goal but in doing so chooses defaults that are hostile to users.

Measures

UEFI secure boot
Android Verified boot
Samsung Knox
Measured boot with the Trusted Platform Module, also known as "trusted boot"
Intel BootGuard
Disk encryption
Firmware passwords

Bootloop

When debugging a concurrent and distributed system of systems, a bootloop (also called boot loop or boot-loop) is an erroneous diagnostic condition on computing devices: the device repeatedly fails to complete its boot process and restarts before the boot sequence finishes, which can prevent a user from ever reaching the regular interface.

Detection of an erroneous state

The system might exhibit its erroneous state in, for example, an explicit bootloop or a blue screen of death, before recovery is indicated. Detection of an erroneous state may require a distributed event store and stream-processing platform for real-time operation of a distributed system.

Recovery from an erroneous state

An erroneous state can trigger bootloops; this state can be caused by misconfiguration from previously known-good operations. Recovery attempts from that erroneous state then enter a reboot, in an attempt to return to a known-good state (a minimal counter-based sketch of this idea appears after the examples below). In Windows, for example, the recovery procedure was to reboot three times; these reboots were needed to reach a usable recovery menu.

Recovery policy

Recovery might be specified via Security Assertion Markup Language (SAML), which can also implement single sign-on (SSO) for some applications; in the zero trust security model, identification, authorization, and authentication are separable concerns in an SSO session. When recovery of a site is indicated (e.g. a blue screen of death is displayed on an airport terminal screen), personal site visits might be required to remediate the situation.

Examples

Windows NT 4.0
Windows 2000
Windows Server
Windows 10
The Nexus 5X
Android 10: when setting a specific image as wallpaper, the luminance value exceeded the maximum of 255 due to a rounding error during conversion from sRGB to RGB; this then crashed the SystemUI component on every boot.
Google Nest Hub
LG smartphone bootloop issues

On 19 July 2024, an update of CrowdStrike's Falcon software caused the 2024 CrowdStrike incident, leaving Microsoft Windows systems worldwide stuck in bootloops or recovery mode.
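As referenced under "Recovery from an erroneous state" above, one widely used way to escape a bootloop is a persistent boot counter: increment it early in the boot, clear it once the system reaches a known-good state, and fall back to recovery when it crosses a threshold. The Python sketch below is a schematic illustration of that idea, not any particular firmware's implementation; the file location and the threshold of three are assumptions.

    from pathlib import Path

    COUNTER = Path("boot_attempts")  # hypothetical non-volatile location
    THRESHOLD = 3                    # give up after three failed boots

    def start_boot() -> str:
        attempts = int(COUNTER.read_text()) if COUNTER.exists() else 0
        if attempts >= THRESHOLD:
            return "recovery"  # fall back to a known-good image or menu
        COUNTER.write_text(str(attempts + 1))  # record this attempt early
        return "normal"

    def boot_succeeded() -> None:
        # Called once the OS is up; clears the counter so the next
        # power-on starts from zero again.
        COUNTER.write_text("0")

    mode = start_boot()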
https://en.wikipedia.org/wiki/Crystal%20oscillator
Crystal oscillator
A crystal oscillator is an electronic oscillator circuit that uses a piezoelectric crystal as a frequency-selective element. The oscillator frequency is often used to keep track of time, as in quartz wristwatches, to provide a stable clock signal for digital integrated circuits, and to stabilize frequencies for radio transmitters and receivers. The most common type of piezoelectric resonator used is a quartz crystal, so oscillator circuits incorporating them became known as crystal oscillators. However, other piezoelectric materials, including polycrystalline ceramics, are used in similar circuits. A crystal oscillator relies on the slight change in shape of a quartz crystal under an electric field, a property known as inverse piezoelectricity. A voltage applied to the electrodes on the crystal causes it to change shape; when the voltage is removed, the crystal generates a small voltage as it elastically returns to its original shape. The quartz oscillates at a stable resonant frequency (relative to other low-priced oscillators) with frequency accuracy measured in parts per million (ppm). It behaves like an RLC circuit, but with a much higher Q factor (lower energy loss on each cycle of oscillation and higher frequency selectivity) than can be reliably achieved with discrete capacitors (C) and inductors (L), which suffer from parasitic resistance (R). Once a quartz crystal is adjusted to a particular frequency (which is affected by the mass of electrodes attached to the crystal, the orientation of the crystal, temperature and other factors), it maintains that frequency with high stability. Quartz crystals are manufactured for frequencies from a few tens of kilohertz to hundreds of megahertz. As of 2003, around two billion crystals were manufactured annually. Most are used for consumer devices such as wristwatches, clocks, radios, computers, and cellphones. However, in applications where small size and weight are needed, crystals can be replaced by thin-film bulk acoustic resonators, specifically when ultra-high-frequency (more than roughly 1.5 GHz) resonance is needed. Quartz crystals are also found inside test and measurement equipment, such as counters, signal generators, and oscilloscopes.

Terminology

A crystal oscillator is a type of electronic oscillator circuit that uses a piezoelectric resonator, a crystal, as its frequency-determining element. Crystal is the common term used in electronics for the frequency-determining component, a wafer of quartz crystal or ceramic with electrodes connected to it. A more accurate term for "crystal" is piezoelectric resonator. Crystals are also used in other types of electronic circuits, such as crystal filters. Piezoelectric resonators are sold as separate components for use in crystal oscillator circuits. They are also often incorporated in a single package with the crystal oscillator circuit.

History

Piezoelectricity was discovered by Jacques and Pierre Curie in 1880. Paul Langevin first investigated quartz resonators for use in sonar during World War I. The first crystal-controlled oscillator, using a crystal of Rochelle salt, was built in 1917 and patented in 1918 by Alexander M. Nicholson at Bell Telephone Laboratories, although his priority was disputed by Walter Guyton Cady. Cady built the first quartz crystal oscillator in 1921. Other early innovators in quartz crystal oscillators include G. W. Pierce and Louis Essen. Quartz crystal oscillators were developed for high-stability frequency references during the 1920s and 1930s.
Prior to crystals, radio stations controlled their frequency with tuned circuits, which could easily drift off frequency by 3–4 kHz. Since broadcast stations were assigned frequencies only 10 kHz (Americas) or 9 kHz (elsewhere) apart, interference between adjacent stations due to frequency drift was a common problem. In 1925, Westinghouse installed a crystal oscillator in its flagship station KDKA, and by 1926, quartz crystals were used to control the frequency of many broadcasting stations and were popular with amateur radio operators. In 1928, Warren Marrison of Bell Laboratories developed the first quartz-crystal clock. With accuracies of up to 1 second in 30 years (30 ms/y, or 0.95 ns/s), quartz clocks replaced precision pendulum clocks as the world's most accurate timekeepers until atomic clocks were developed in the 1950s. Using the early work at Bell Laboratories, American Telephone and Telegraph Company (AT&T) eventually established their Frequency Control Products division, later spun off and known today as Vectron International. A number of firms started producing quartz crystals for electronic use during this time. Using what are now considered primitive methods, about 100,000 crystal units were produced in the United States during 1939. Through World War II crystals were made from natural quartz crystal, virtually all from Brazil. Shortages of crystals during the war caused by the demand for accurate frequency control of military and naval radios and radars spurred postwar research into culturing synthetic quartz, and by 1950 a hydrothermal process for growing quartz crystals on a commercial scale was developed at Bell Laboratories. By the 1970s virtually all crystals used in electronics were synthetic. In 1968, Juergen Staudte invented a photolithographic process for manufacturing quartz crystal oscillators while working at North American Aviation (now Rockwell) that allowed them to be made small enough for portable products like watches. Although crystal oscillators still most commonly use quartz crystals, devices using other materials are becoming more common, such as ceramic resonators. Principle A crystal is a solid in which the constituent atoms, molecules, or ions are packed in a regularly ordered, repeating pattern extending in all three spatial dimensions. Almost any object made of an elastic material could be used like a crystal, with appropriate transducers, since all objects have natural resonant frequencies of vibration. For example, steel is very elastic and has a high speed of sound. It was often used in mechanical filters before quartz. The resonant frequency depends on size, shape, elasticity, and the speed of sound in the material. High-frequency crystals are typically cut in the shape of a simple rectangle or circular disk. Low-frequency crystals, such as those used in digital watches, are typically cut in the shape of a tuning fork. For applications not needing very precise timing, a low-cost ceramic resonator is often used in place of a quartz crystal. When a crystal of quartz is properly cut and mounted, it can be made to distort in an electric field by applying a voltage to an electrode near or on the crystal. This property is known as inverse piezoelectricity. When the field is removed, the quartz generates an electric field as it returns to its previous shape, and this can generate a voltage. The result is that a quartz crystal behaves like an RLC circuit, composed of an inductor, capacitor and resistor, with a precise resonant frequency. 
Quartz has the further advantage that its elastic constants and its size change in such a way that the frequency dependence on temperature can be very low. The specific characteristics depend on the mode of vibration and the angle at which the quartz is cut (relative to its crystallographic axes). Therefore, the resonant frequency of the plate, which depends on its size, does not change much. This means that a quartz clock, filter or oscillator remains accurate. For critical applications the quartz oscillator is mounted in a temperature-controlled container, called a crystal oven, and can also be mounted on shock absorbers to prevent perturbation by external mechanical vibrations.

Modeling

Electrical model

A quartz crystal can be modeled as an electrical network with low-impedance (series) and high-impedance (parallel) resonance points spaced closely together. Mathematically, using the Laplace transform, the impedance of this network can be written as:

    Z(s) = \left(\frac{1}{sC_1} + sL_1 + R_1\right) \parallel \frac{1}{sC_0}

or

    Z(s) = \frac{s^2 + \frac{R_1}{L_1}s + \omega_s^2}{sC_0\left(s^2 + \frac{R_1}{L_1}s + \omega_p^2\right)},

where s is the complex frequency (s = j\omega), \omega_s = \frac{1}{\sqrt{L_1 C_1}} is the series resonant angular frequency, and \omega_p = \sqrt{\frac{C_1 + C_0}{L_1 C_1 C_0}} is the parallel resonant angular frequency (here R_1, L_1 and C_1 are the motional-arm components of the crystal's equivalent circuit and C_0 is its shunt capacitance). Adding capacitance across a crystal causes the (parallel) resonant frequency to decrease. Adding inductance across a crystal causes the (parallel) resonant frequency to increase. These effects can be used to adjust the frequency at which a crystal oscillates. Crystal manufacturers normally cut and trim their crystals to have a specified resonant frequency with a known "load" capacitance added to the crystal. For example, a crystal intended for a 6 pF load has its specified parallel resonant frequency when a 6.0 pF capacitor is placed across it. Without the load capacitance, the resonant frequency is higher.

Resonance modes

A quartz crystal provides both series and parallel resonance. The series resonance is a few kilohertz lower than the parallel one. Crystals below 30 MHz are generally operated between series and parallel resonance, which means that the crystal appears as an inductive reactance in operation, this inductance forming a parallel resonant circuit with externally connected parallel capacitance. Any small additional capacitance in parallel with the crystal pulls the frequency lower. Moreover, the effective inductive reactance of the crystal can be reduced by adding a capacitor in series with the crystal. This latter technique can provide a useful method of trimming the oscillatory frequency within a narrow range; in this case inserting a capacitor in series with the crystal raises the frequency of oscillation. For a crystal to operate at its specified frequency, the electronic circuit has to be exactly that specified by the crystal manufacturer. Note that these points imply a subtlety concerning crystal oscillators in this frequency range: the crystal does not usually oscillate at precisely either of its resonant frequencies. Crystals above 30 MHz (up to >200 MHz) are generally operated at series resonance where the impedance appears at its minimum and equal to the series resistance. For these crystals the series resistance is specified (<100 Ω) instead of the parallel capacitance. To reach higher frequencies, a crystal can be made to vibrate at one of its overtone modes, which occur near multiples of the fundamental resonant frequency. Only odd-numbered overtones are used. Such a crystal is referred to as a 3rd, 5th, or even 7th overtone crystal. To accomplish this, the oscillator circuit usually includes additional LC circuits to select the desired overtone.
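For concreteness, the short Python sketch below evaluates the series and parallel resonant frequencies of the electrical model above. The equivalent-circuit values are assumptions chosen only to be plausible for a crystal near 10 MHz; they are not manufacturer data.

    import math

    # Assumed equivalent-circuit values for a ~10 MHz crystal (illustrative).
    L1 = 50.7e-3  # motional inductance, henries
    C1 = 5e-15    # motional capacitance, farads
    C0 = 2e-12    # shunt capacitance, farads

    f_s = 1 / (2 * math.pi * math.sqrt(L1 * C1))  # series resonance
    f_p = 1 / (2 * math.pi * math.sqrt(L1 * C1 * C0 / (C1 + C0)))  # parallel

    print(f"f_s = {f_s / 1e6:.4f} MHz, f_p = {f_p / 1e6:.4f} MHz")
    # f_p comes out roughly 12 kHz above f_s here, matching the text's
    # remark that the two resonances are only a few kilohertz apart.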
Temperature effects

A crystal's frequency characteristic depends on the shape or "cut" of the crystal. A tuning-fork crystal is usually cut such that its frequency dependence on temperature is quadratic with the maximum around 25 °C. This means that a tuning-fork crystal oscillator resonates close to its target frequency at room temperature, but slows when the temperature either increases or decreases from room temperature. A common parabolic coefficient for a 32 kHz tuning-fork crystal is −0.04 ppm/°C², i.e.

    \frac{\Delta f}{f} = -0.04\ \text{ppm}/{}^{\circ}\text{C}^2 \times (T - T_0)^2 .

In a real application, this means that a clock built using a regular 32 kHz tuning-fork crystal keeps good time at room temperature, but loses 2 minutes per year at 10 °C above or below room temperature and loses 8 minutes per year at 20 °C above or below room temperature due to the quartz crystal.

Crystal oscillator circuits

The crystal oscillator circuit sustains oscillation by taking a voltage signal from the quartz resonator, amplifying it, and feeding it back to the resonator. The rate of expansion and contraction of the quartz is the resonant frequency, and is determined by the cut and size of the crystal. When the energy of the generated output frequencies matches the losses in the circuit, an oscillation can be sustained. An oscillator crystal has two electrically conductive plates, with a slice or tuning fork of quartz crystal sandwiched between them. During startup, the controlling circuit places the crystal into an unstable equilibrium, and due to the positive feedback in the system, any tiny fraction of noise is amplified, ramping up the oscillation. The crystal resonator can also be seen as a highly frequency-selective filter in this system: it only passes a very narrow subband of frequencies around the resonant one, attenuating everything else. Eventually, only the resonant frequency is active. As the oscillator amplifies the signals coming out of the crystal, the signals in the crystal's frequency band become stronger, eventually dominating the output of the oscillator. The narrow resonance band of the quartz crystal filters out the unwanted frequencies. The output frequency of a quartz oscillator can be either that of the fundamental resonance or of a multiple of that resonance, called a harmonic frequency. Harmonics are an exact integer multiple of the fundamental frequency. But, like many other mechanical resonators, crystals exhibit several modes of oscillation, usually at approximately odd integer multiples of the fundamental frequency. These are termed "overtone modes", and oscillator circuits can be designed to excite them. The overtone modes are at frequencies which are approximate, but not exact, odd integer multiples of that of the fundamental mode, and overtone frequencies are therefore not exact harmonics of the fundamental. High-frequency crystals are often designed to operate at third, fifth, or seventh overtones. Manufacturers have difficulty producing crystals thin enough to produce fundamental frequencies over 30 MHz. To produce higher frequencies, manufacturers make overtone crystals tuned to put the 3rd, 5th, or 7th overtone at the desired frequency, because they are thicker and therefore easier to manufacture than a fundamental crystal that would produce the same frequency, although exciting the desired overtone frequency requires a slightly more complicated oscillator circuit. A fundamental crystal oscillator circuit is simpler and more efficient and has more pullability than a third-overtone circuit. Depending on the manufacturer, the highest available fundamental frequency may be 25 MHz to 66 MHz.
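Returning to the tuning-fork temperature law above, the yearly losses quoted there follow directly from the parabolic coefficient. The Python lines below redo the arithmetic, taking the 25 °C turnover point given in the text.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    coeff = -0.04e-6  # -0.04 ppm per degC^2, from the text
    T0 = 25.0         # turnover temperature, degC

    def loss_minutes_per_year(T: float) -> float:
        frac = coeff * (T - T0) ** 2           # fractional frequency offset
        return -frac * SECONDS_PER_YEAR / 60   # slow crystal -> clock loses time

    print(loss_minutes_per_year(35))   # ~2.1 min/yr at 10 degC off turnover
    print(loss_minutes_per_year(45))   # ~8.4 min/yr at 20 degC off turnover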
A major reason for the wide use of crystal oscillators is their high Q factor. A typical Q value for a quartz oscillator ranges from 10⁴ to 10⁶, compared to perhaps 10² for an LC oscillator. The maximum Q for a high-stability quartz oscillator can be estimated as Q = 1.6 × 10⁷/f, where f is the resonant frequency in megahertz. One of the most important traits of quartz crystal oscillators is that they can exhibit very low phase noise. In many oscillators, any spectral energy at the resonant frequency is amplified by the oscillator, resulting in a collection of tones at different phases. In a crystal oscillator, the crystal mostly vibrates in one axis, therefore only one phase is dominant. This property of low phase noise makes them particularly useful in telecommunications where stable signals are needed, and in scientific equipment where very precise time references are needed. Environmental changes of temperature, humidity, pressure, and vibration can change the resonant frequency of a quartz crystal, but there are several designs that reduce these environmental effects. These include the TCXO, MCXO, and OCXO, which are defined below. These designs, particularly the OCXO, often produce devices with excellent short-term stability. The limitations in short-term stability are due mainly to noise from electronic components in the oscillator circuits. Long-term stability is limited by aging of the crystal. Due to aging and environmental factors (such as temperature and vibration), it is difficult to keep even the best quartz oscillators within one part in 10¹⁰ of their nominal frequency without constant adjustment. For this reason, atomic oscillators are used for applications requiring better long-term stability and accuracy.

Spurious frequencies

For crystals operated at series resonance or pulled away from the main mode by the inclusion of a series inductor or capacitor, significant (and temperature-dependent) spurious responses may be experienced. Though most spurious modes are typically some tens of kilohertz above the wanted series resonance, their temperature coefficient is different from the main mode, and the spurious response may move through the main mode at certain temperatures. Even if the series resistances at the spurious resonances appear higher than the one at the wanted frequency, a rapid change in the main-mode series resistance can occur at specific temperatures when the two frequencies are coincidental. A consequence of these activity dips is that the oscillator may lock at a spurious frequency at specific temperatures. This is generally minimized by ensuring that the maintaining circuit has insufficient gain to activate unwanted modes. Spurious frequencies are also generated by subjecting the crystal to vibration. This modulates the resonant frequency to a small degree by the frequency of the vibrations. SC-cut (stress-compensated) crystals are designed to minimize the frequency effect of mounting stress and are therefore less sensitive to vibration. Acceleration effects including gravity are also reduced with SC-cut crystals, as is frequency change with time due to long-term mounting stress variation. There are disadvantages with SC-cut shear-mode crystals, such as the need for the maintaining oscillator to discriminate against other closely related unwanted modes and increased frequency change due to temperature when subject to a full ambient range. SC-cut crystals are most advantageous where temperature control at their temperature of zero temperature coefficient (turnover) is possible; under these circumstances, an overall stability performance from premium units can approach the stability of rubidium frequency standards.
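As a quick numeric check of the Q estimate given earlier in this section, and of what a given Q implies for resonance bandwidth (using the usual definition Δf = f/Q, an assumption added here for illustration):

    f_mhz = 10.0
    q_max = 1.6e7 / f_mhz                # upper Q estimate from the text
    bandwidth_hz = (f_mhz * 1e6) / q_max
    print(q_max, bandwidth_hz)           # Q ~ 1.6 million, ~6.3 Hz wide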
Commonly used crystal frequencies

Crystals can be manufactured for oscillation over a wide range of frequencies, from a few kilohertz up to several hundred megahertz. Many applications call for a crystal oscillator frequency conveniently related to some other desired frequency, so hundreds of standard crystal frequencies are made in large quantities and stocked by electronics distributors. For example, 3.579545 MHz crystals, which were made in large quantities for NTSC color television receivers, are now popular for many non-television applications (although most modern television receivers now use other frequency crystals for the color decoder). Using frequency dividers, frequency multipliers and phase-locked loop circuits, it is practical to derive a wide range of frequencies from one reference frequency.

Crystal structures and materials

Quartz

The most common material for oscillator crystals is quartz. At the beginning of the technology, natural quartz crystals were used, but now synthetic crystalline quartz grown by hydrothermal synthesis is predominant due to higher purity, lower cost and more convenient handling. One of the few remaining uses of natural crystals is for pressure transducers in deep wells. During World War II and for some time afterwards, natural quartz was considered a strategic material by the USA. Large crystals were imported from Brazil. Raw "lascas", the source material quartz for hydrothermal synthesis, are imported to the USA or mined locally by Coleman Quartz. The average value of as-grown synthetic quartz in 1994 was

Types

Two types of quartz crystals exist: left-handed and right-handed. The two differ in their optical rotation but they are identical in other physical properties. Both left- and right-handed crystals can be used for oscillators, if the cut angle is correct. In manufacture, right-handed quartz is generally used. The SiO₄ tetrahedrons form parallel helices; the direction of twist of the helix determines the left- or right-hand orientation. The helices are aligned along the c-axis and merged, sharing atoms. The mass of the helices forms a mesh of small and large channels parallel to the c-axis. The large ones are large enough to allow some mobility of smaller ions and molecules through the crystal. Quartz exists in several phases. At 573 °C at 1 atmosphere (and at higher temperatures and higher pressures) α-quartz undergoes quartz inversion, transforming reversibly into β-quartz. The reverse process, however, is not entirely homogeneous and crystal twinning occurs. Care must be taken during manufacturing and processing to avoid phase transformation. Other phases, e.g. the higher-temperature phases tridymite and cristobalite, are not significant for oscillators. All quartz oscillator crystals are of the α-quartz type.

Quality

Infrared spectrophotometry is used as one of the methods for measuring the quality of the grown crystals. The wavenumbers 3585, 3500, and 3410 cm⁻¹ are commonly used. The measured value is based on the absorption bands of the OH radical, from which the infrared Q value is calculated. Electronic-grade crystals, grade C, have a Q of 1.8 million or above; premium-grade B crystals have a Q of 2.2 million, and special premium-grade A crystals have a Q of 3.0 million.
The Q value is calculated only for the Z region; crystals containing other regions can be adversely affected. Another quality indicator is the etch channel density; when the crystal is etched, tubular channels are created along linear defects. For processing involving etching, e.g. the wristwatch tuning-fork crystals, low etch channel density is desirable. The etch channel density for swept quartz is about 10–100 and significantly more for unswept quartz. The presence of etch channels and etch pits degrades the resonator's Q and introduces nonlinearities.

Production

Quartz crystals can be grown for specific purposes. Crystals for the AT cut are the most common in mass production of oscillator materials; the shape and dimensions are optimized for high yield of the required wafers. High-purity quartz crystals are grown with especially low content of aluminium, alkali metal and other impurities and minimal defects; the low amount of alkali metals provides increased resistance to ionizing radiation. Crystals for wrist watches, for cutting the tuning-fork 32768 Hz crystals, are grown with very low etch channel density. Crystals for SAW devices are grown flat, with a large X-size seed and low etch channel density. Special high-Q crystals, for use in highly stable oscillators, are grown at a constant slow speed and have constant low infrared absorption along the entire Z axis. Crystals can be grown as Y-bar, with a seed crystal in bar shape and elongated along the Y axis, or as Z-plate, grown from a plate seed with Y-axis direction length and X-axis width. The region around the seed crystal contains a large number of crystal defects and should not be used for the wafers. Crystals grow anisotropically; the growth along the Z axis is up to 3 times faster than along the X axis. The growth direction and rate also influence the rate of uptake of impurities. Y-bar crystals, or Z-plate crystals with a long Y axis, have four growth regions usually called +X, −X, Z, and S. The distribution of impurities during growth is uneven; different growth areas contain different levels of contaminants. The Z regions are the purest, the small occasionally present S regions are less pure, the +X region is less pure still, and the −X region has the highest level of impurities. The impurities have a negative impact on radiation hardness, susceptibility to twinning, filter loss, and long- and short-term stability of the crystals. Different-cut seeds in different orientations may provide other kinds of growth regions. The growth speed in the −X direction is slowest due to the effect of adsorption of water molecules on the crystal surface; aluminium impurities suppress growth in the two other directions. The content of aluminium is lowest in the Z region, higher in +X, higher still in −X, and highest in S; the size of the S regions also grows with an increased amount of aluminium present. The content of hydrogen is lowest in the Z region, higher in the +X region, higher still in the S region, and highest in −X. Aluminium inclusions transform into color centers with gamma-ray irradiation, causing a darkening of the crystal proportional to the dose and level of impurities; the presence of regions with different darkness reveals the different growth regions. The dominant type of defect of concern in quartz crystals is the substitution of an Al(III) for a Si(IV) atom in the crystal lattice.
The aluminium ion has an associated interstitial charge compensator present nearby, which can be an H⁺ ion (attached to the nearby oxygen and forming a hydroxyl group, called an Al−OH defect), an Li⁺ ion, an Na⁺ ion, a K⁺ ion (less common), or an electron hole trapped in a nearby oxygen atom orbital. The composition of the growth solution, whether it is based on lithium or sodium alkali compounds, determines the charge-compensating ions for the aluminium defects. The ion impurities are of concern as they are not firmly bound and can migrate through the crystal, altering the local lattice elasticity and the resonant frequency of the crystal. Other common impurities of concern include iron(III) (interstitial), fluorine, boron(III), phosphorus(V) (substitution), titanium(IV) (substitution, universally present in magmatic quartz, less common in hydrothermal quartz), and germanium(IV) (substitution). Sodium and iron ions can cause inclusions of acnite and elemeusite crystals. Inclusions of water may be present in fast-grown crystals; interstitial water molecules are abundant near the crystal seed. Another defect of importance is the hydrogen-containing growth defect, when instead of a Si−O−Si structure, a pair of Si−OH HO−Si groups is formed, essentially a hydrolyzed bond. Fast-grown crystals contain more hydrogen defects than slow-grown ones. These growth defects serve as a supply of hydrogen ions for radiation-induced processes and the formation of Al−OH defects. Germanium impurities tend to trap electrons created during irradiation; the alkali metal cations then migrate towards the negatively charged center and form a stabilizing complex. Matrix defects can also be present: oxygen vacancies, silicon vacancies (usually compensated by 4 hydrogens or 3 hydrogens and a hole), peroxy groups, etc. Some of the defects produce localized levels in the forbidden band, serving as charge traps; Al(III) and B(III) typically serve as hole traps while electron vacancies, titanium, germanium, and phosphorus atoms serve as electron traps. The trapped charge carriers can be released by heating; their recombination is the cause of thermoluminescence. The mobility of interstitial ions depends strongly on temperature. Hydrogen ions are mobile down to 10 K, but alkali metal ions become mobile only at temperatures around and above 200 K. The hydroxyl defects can be measured by near-infrared spectroscopy. The trapped holes can be measured by electron spin resonance. The Al−Na⁺ defects show as an acoustic loss peak due to their stress-induced motion; the Al−Li⁺ defects do not form a potential well so they are not detectable this way. Some of the radiation-induced defects during their thermal annealing produce thermoluminescence; defects related to aluminium, titanium, and germanium can be distinguished. Swept crystals are crystals that have undergone a solid-state electrodiffusion purification process. Sweeping involves heating the crystal above 500 °C in a hydrogen-free atmosphere, with a voltage gradient of at least 1 kV/cm, for several hours (usually over 12). The migration of impurities and the gradual replacement of alkali metal ions with hydrogen (when swept in air) or electron holes (when swept in vacuum) causes a weak electric current through the crystal; decay of this current to a constant value signals the end of the process. The crystal is then left to cool, while the electric field is maintained. The impurities are concentrated at the cathode region of the crystal, which is cut off afterwards and discarded.
Swept crystals have increased resistance to radiation, as the dose effects depend on the level of alkali metal impurities; they are suitable for use in devices exposed to ionizing radiation, e.g. for nuclear and space technology. Sweeping under vacuum at higher temperatures and higher field strengths yields yet more radiation-hard crystals. The level and character of impurities can be measured by infrared spectroscopy. Quartz can be swept in both the α and β phases; sweeping in the β phase is faster, but the phase transition may induce twinning. Twinning can be mitigated by subjecting the crystal to compression stress in the X direction, or to an AC or DC electric field along the X axis, while the crystal cools through the phase-transformation temperature region. Sweeping can also be used to introduce one kind of impurity into the crystal. Lithium-, sodium-, and hydrogen-swept crystals are used for, e.g., studying quartz behavior. Very small crystals for high fundamental-mode frequencies can be manufactured by photolithography. Crystals can be adjusted to exact frequencies by laser trimming. A technique used in the world of amateur radio to slightly decrease the crystal frequency is to expose crystals with silver electrodes to iodine vapor, which causes a slight mass increase on the surface by forming a thin layer of silver iodide; such crystals, however, had problematic long-term stability. Another method commonly used is electrochemical increase or decrease of the silver electrode thickness by submerging the resonator in silver nitrate (lapis) dissolved in water, citric acid in water, or salt water, using the resonator as one electrode and a small silver electrode as the other. By choosing the direction of the current one can either increase or decrease the mass of the electrodes. Details were published in "Radio" magazine (3/1978) by UB5LEV. Raising the frequency by scratching off parts of the electrodes is not advised, as this may damage the crystal and lower its Q factor. Capacitor trimmers can also be used for frequency adjustment of the oscillator circuit.

Other materials

Piezoelectric materials other than quartz can also be employed. These include single crystals of lithium tantalate, lithium niobate, lithium borate, berlinite, gallium arsenide, lithium tetraborate, aluminium phosphate, bismuth germanium oxide, polycrystalline zirconium titanate ceramics, high-alumina ceramics, silicon-zinc oxide composite, or dipotassium tartrate. Some materials may be more suitable for specific applications. An oscillator crystal can also be manufactured by depositing the resonator material on the silicon chip surface. Crystals of gallium phosphate, langasite, langanite and langatate are about 10 times more pullable than the corresponding quartz crystals, and are used in some VCXO oscillators.

Stability

The frequency stability is determined by the crystal's Q. It is inversely dependent on the frequency and on a constant that depends on the particular cut. Other factors influencing Q are the overtone used, the temperature, the level of driving of the crystal, the quality of the surface finish, the mechanical stresses imposed on the crystal by bonding and mounting, the geometry of the crystal and the attached electrodes, the material purity and defects in the crystal, the type and pressure of the gas in the enclosure, interfering modes, and the presence and absorbed dose of ionizing and neutron radiation. The stability of AT-cut crystals decreases with increasing frequency.
For more accurate higher frequencies, it is better to use a crystal with a lower fundamental frequency operated at an overtone. A badly designed oscillator circuit may suddenly begin oscillating on an overtone. In 1972, a train in Fremont, California crashed due to a faulty oscillator. An inappropriate value of the tank capacitor caused the crystal in a control board to be overdriven, jumping to an overtone, and causing the train to speed up instead of slowing down.

Temperature

Temperature influences the operating frequency; various forms of compensation are used, from analog compensation (TCXO) and microcontroller compensation (MCXO) to stabilization of the temperature with a crystal oven (OCXO). The crystals possess temperature hysteresis; the frequency at a given temperature achieved by increasing the temperature is not equal to the frequency at the same temperature achieved by decreasing the temperature. The temperature sensitivity depends primarily on the cut; the temperature-compensated cuts are chosen so as to minimize the frequency/temperature dependence. Special cuts can be made with linear temperature characteristics; the LC cut is used in quartz thermometers. Other influencing factors are the overtone used, the mounting and electrodes, impurities in the crystal, mechanical strain, crystal geometry, rate of temperature change, thermal history (due to hysteresis), ionizing radiation, and drive level. Crystals tend to suffer anomalies in their frequency/temperature and resistance/temperature characteristics, known as activity dips. These are small downward frequency or upward resistance excursions localized at certain temperatures, with their temperature position dependent on the value of the load capacitors.

Mechanical stress

Mechanical stresses also influence the frequency. The stresses can be induced by mounting, bonding, and application of the electrodes, by differential thermal expansion of the mounting, electrodes, and the crystal itself, by differential thermal stresses when there is a temperature gradient present, by expansion or shrinkage of the bonding materials during curing, by the air pressure that is transferred to the ambient pressure within the crystal enclosure, by the stresses of the crystal lattice itself (nonuniform growth, impurities, dislocations), by the surface imperfections and damage caused during manufacture, and by the action of gravity on the mass of the crystal; the frequency can therefore be influenced by the position of the crystal. Other dynamic stress-inducing factors are shocks, vibrations, and acoustic noise. Some cuts are less sensitive to stresses; the SC (stress-compensated) cut is an example. Atmospheric pressure changes can also introduce deformations to the housing, influencing the frequency by changing stray capacitances. Atmospheric humidity influences the thermal transfer properties of air, and can change the electrical properties of plastics by diffusion of water molecules into their structure, altering the dielectric constants and electrical conductivity. Other factors influencing the frequency are the power supply voltage, load impedance, magnetic fields, electric fields (in the case of cuts that are sensitive to them, e.g., SC cuts), the presence and absorbed dose of γ-rays and other ionizing radiation, and the age of the crystal.

Aging

Crystals undergo a slow gradual change of frequency with time, known as aging. There are many mechanisms involved. The mounting and contacts may undergo relief of the built-in stresses.
Molecules of contamination, either from the residual atmosphere, outgassed from the crystal, electrodes or packaging materials, or introduced during sealing of the housing, can be adsorbed on the crystal surface, changing its mass; this effect is exploited in quartz crystal microbalances. The composition of the crystal can be gradually altered by outgassing, by diffusion of atoms of impurities, or by migration from the electrodes, or the lattice can be damaged by radiation. Slow chemical reactions may occur on or in the crystal, or on the inner surfaces of the enclosure. Electrode material, e.g. chromium or aluminium, can react with the crystal, creating layers of metal oxide and silicon; these interface layers can undergo changes in time. The pressure in the enclosure can change due to varying atmospheric pressure, temperature, leaks, or outgassing of the materials inside. Factors outside the crystal itself include, e.g., aging of the oscillator circuitry (such as a change of capacitances) and drift of the parameters of the crystal oven. The external atmosphere composition can also influence the aging; hydrogen can diffuse through a nickel housing. Helium can cause similar issues when it diffuses through the glass enclosures of rubidium standards. Gold is a favored electrode material for low-aging resonators; its adhesion to quartz is strong enough to maintain contact even at strong mechanical shocks, but weak enough to not support significant strain gradients (unlike chromium, aluminium, and nickel). Gold also does not form oxides; it adsorbs organic contaminants from the air, but these are easy to remove. However, gold alone can undergo delamination; a layer of chromium is therefore sometimes used for improved binding strength. Silver and aluminium are often used as electrodes; however, both form oxide layers with time that increase the crystal mass and lower the frequency. Silver can be passivated by exposure to iodine vapor, forming a layer of silver iodide. Aluminium oxidizes readily but slowly, until about 5 nm thickness is reached; increased temperature during artificial aging does not significantly increase the oxide-forming speed; a thick oxide layer can be formed during manufacture by anodizing. Exposure of a silver-plated crystal to iodine vapor can also be used in amateur conditions to lower the crystal frequency slightly; the frequency can also be increased by scratching off parts of the electrodes, but that carries a risk of damage to the crystal and loss of Q. A DC voltage bias between the electrodes can accelerate the initial aging, probably by induced diffusion of impurities through the crystal. Placing a capacitor in series with the crystal and a several-megaohm resistor in parallel can minimize such voltages. Aging decreases logarithmically with time, the largest changes occurring shortly after manufacture. Artificially aging a crystal by prolonged storage at 85 to 125 °C can increase its long-term stability.

Mechanical damage

Crystals are sensitive to shock. The mechanical stress causes a short-term change in the oscillator frequency due to the stress-sensitivity of the crystal, and can introduce a permanent change of frequency due to shock-induced changes of mounting and internal stresses (if the elastic limits of the mechanical parts are exceeded), desorption of contamination from the crystal surfaces, or a change in the parameters of the oscillator circuit.
High-magnitude shocks may tear the crystals off their mountings (especially in the case of large low-frequency crystals suspended on thin wires), or cause cracking of the crystal. Crystals free of surface imperfections are highly shock-resistant; chemical polishing can produce crystals able to survive tens of thousands of g. Crystals have no inherent failure mechanisms; some have operated in devices for decades. Failures may, however, be introduced by faults in bonding, leaky enclosures, corrosion, frequency shift by aging, breaking of the crystal by too high a mechanical shock, or radiation-induced damage when non-swept quartz is used. Crystals can also be damaged by overdriving.

Frequency fluctuations

Crystals suffer from minor short-term frequency fluctuations as well. The main causes of such noise include thermal noise (which limits the noise floor), phonon scattering (influenced by lattice defects), adsorption/desorption of molecules on the surface of the crystal, noise of the oscillator circuits, mechanical shocks and vibrations, acceleration and orientation changes, temperature fluctuations, and relief of mechanical stresses. The short-term stability is measured by four main parameters: Allan variance (the most common one specified in oscillator data sheets), phase noise, spectral density of phase deviations, and spectral density of fractional frequency deviations. The effects of acceleration and vibration tend to dominate the other noise sources; surface acoustic wave devices tend to be more sensitive than bulk acoustic wave (BAW) ones, and the stress-compensated cuts are even less sensitive. The relative orientation of the acceleration vector to the crystal dramatically influences the crystal's vibration sensitivity. Mechanical vibration isolation mountings can be used for high-stability crystals. Phase noise plays a significant role in frequency synthesis systems using frequency multiplication; a multiplication of a frequency by N increases the phase noise power by N². A frequency multiplication by 10 times multiplies the magnitude of the phase error by 10 times. This can be disastrous for systems employing PLL or FSK technologies. Magnetic fields have little effect on the crystal itself, as quartz is diamagnetic; eddy currents or AC voltages can, however, be induced into the circuits, and magnetic parts of the mounting and housing may be influenced. After power-up, crystals take several seconds to minutes to "warm up" and stabilize their frequency. The oven-controlled OCXOs usually require 3–10 minutes of heating to reach thermal equilibrium; the oven-less oscillators stabilize in several seconds as the few milliwatts dissipated in the crystal cause a small but noticeable level of internal heating.

Drive level

Crystals have to be driven at the appropriate drive level. Low-frequency crystals, especially flexural-mode ones, may fracture at too high drive levels. The drive level is specified as the amount of power dissipated in the crystal. The appropriate drive levels are about 5 μW for flexural modes up to 100 kHz, 1 μW for fundamental modes at 1–4 MHz, 0.5 μW for fundamental modes at 4–20 MHz, and 0.5 μW for overtone modes at 20–200 MHz. Too low a drive level may cause problems with starting the oscillator. Low drive levels are better for higher stability and lower power consumption of the oscillator. Higher drive levels, in turn, reduce the impact of noise by increasing the signal-to-noise ratio.
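Referring back to the frequency-multiplication rule above, the N² law is easy to check numerically: in decibel terms, multiplying a frequency by N adds 10·log10(N²) = 20·log10(N) dB of phase noise, as this small Python fragment shows.

    import math

    def phase_noise_penalty_db(n: float) -> float:
        # Multiplying frequency by n multiplies phase noise power by n^2,
        # i.e. adds 10*log10(n^2) = 20*log10(n) decibels.
        return 10 * math.log10(n ** 2)

    print(phase_noise_penalty_db(10))  # 20.0 dB for a x10 multiplication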
Crystal cuts

The resonator plate can be cut from the source crystal in many different ways. The orientation of the cut influences the crystal's aging characteristics, frequency stability, thermal characteristics, and other parameters. These cuts operate as bulk acoustic wave (BAW) resonators; for higher frequencies, surface acoustic wave (SAW) devices are employed.

[Figure: several crystal cuts]

The letter ‘T’ in the cut name marks a temperature-compensated cut, a cut oriented in such a way that the temperature coefficients of the lattice are minimal; the FC and SC cuts are also temperature-compensated. The high-frequency cuts are mounted by their edges, usually on springs; the stiffness of the spring has to be optimal: if it is too stiff, mechanical shocks can be transferred to the crystal and cause it to break, while too little stiffness may allow the crystal to collide with the inside of the package when subjected to a mechanical shock, and break. Strip resonators, usually AT cuts, are smaller and therefore less sensitive to mechanical shocks. At the same frequency and overtone, the strip has less pullability, higher resistance, and a higher temperature coefficient. The low-frequency cuts are mounted at the nodes where they are virtually motionless; thin wires are attached at such points on each side between the crystal and the leads. The large mass of the crystal suspended on the thin wires makes the assembly sensitive to mechanical shocks and vibrations. The crystals are usually mounted in hermetically sealed glass or metal cases, filled with a dry and inert atmosphere, usually vacuum, nitrogen, or helium. Plastic housings can be used as well, but those are not hermetic and another secondary sealing has to be built around the crystal. Several resonator configurations are possible, in addition to the classical way of directly attaching leads to the crystal. An example is the BVA resonator (Boîtier à Vieillissement Amélioré, enclosure with improved aging), developed in 1976; the parts that influence the vibrations are machined from a single crystal (which reduces the mounting stress), and the electrodes are deposited not on the resonator itself but on the inner sides of two condenser discs made of adjacent slices of the quartz from the same bar, forming a three-layer sandwich with no stress between the electrodes and the vibrating element. The gap between the electrodes and the resonator acts as two small series capacitors, making the crystal less sensitive to circuit influences. The architecture eliminates the effects of the surface contacts between the electrodes, the constraints in the mounting connections, and the issues related to ion migration from the electrodes into the lattice of the vibrating element. The resulting configuration is rugged, resistant to shock and vibration, resistant to acceleration and ionizing radiation, and has improved aging characteristics. The AT cut is usually used, though SC-cut variants exist as well. BVA resonators are often used in spacecraft applications. In the 1930s to 1950s, it was fairly common for people to adjust the frequency of the crystals by manual grinding. The crystals were ground using a fine abrasive slurry, or even toothpaste, to increase their frequency. A slight decrease by 1–2 kHz when the crystal was overground was possible by marking the crystal face with a pencil lead, at the cost of a lowered Q. The frequency of the crystal is slightly adjustable ("pullable") by modifying the attached capacitances.
A varactor, a diode with capacitance depending on applied voltage, is often used in voltage-controlled crystal oscillators (VCXO). The crystal cuts are usually AT or rarely SC, and operate in fundamental mode; the amount of available frequency deviation is inversely proportional to the square of the overtone number, so a third overtone has only one-ninth of the pullability of the fundamental mode. SC cuts, while more stable, are significantly less pullable.

Circuit notations and abbreviations

On electrical schematic diagrams, crystals are designated with the class letter Y (Y1, Y2, etc.). Oscillators, whether they are crystal oscillators or others, are designated with the class letter G (G1, G2, etc.). Crystals may also be designated on a schematic with X or XTAL (a phonetic abbreviation, comparable to using Xmas for Christmas), or a crystal oscillator with XO.

Crystal oscillator types and their abbreviations:

ATCXO — Analog temperature controlled crystal oscillator
CDXO — Calibrated dual crystal oscillator
DTCXO — Digital temperature compensated crystal oscillator
EMXO — Evacuated miniature crystal oscillator
GPSDO — Global positioning system disciplined oscillator
MCXO — Microcomputer-compensated crystal oscillator
OCVCXO — Oven-controlled voltage-controlled crystal oscillator
OCXO — Oven-controlled crystal oscillator
RbXO — Rubidium crystal oscillator, a crystal oscillator (can be an MCXO) synchronized with a built-in rubidium standard which is run only occasionally to save power
TCVCXO — Temperature-compensated voltage-controlled crystal oscillator
TCXO — Temperature-compensated crystal oscillator
TMXO — Tactical miniature crystal oscillator
TSXO — Temperature-sensing crystal oscillator, an adaptation of the TCXO
VCTCXO — Voltage-controlled temperature-compensated crystal oscillator
VCXO — Voltage-controlled crystal oscillator
Technology
Functional circuits
null
41017
https://en.wikipedia.org/wiki/Demand%20factor
Demand factor
In telecommunications, electronics and the electrical power industry, the term demand factor refers to the fraction of some quantity being used relative to the maximum amount that could be used by the same system. The demand factor is always less than or equal to one. Because the amount of demand varies with time, so does the demand factor; it is often implicitly averaged over time when the time period of demand is understood from context. Electrical engineering In electrical engineering the demand factor is taken as a time-independent quantity, with the numerator being the maximum demand in the specified time period rather than the averaged or instantaneous demand. This is the peak in the load profile divided by the full load of the device. Example: if a residence has equipment which could draw 6,000 W when all of it is drawing a full load, but draws a maximum of 3,000 W in a specified time, then the demand factor = 3,000 W / 6,000 W = 0.5. This quantity is relevant when establishing the load for which a system should be rated. In the above example, it would be unlikely that the system would be rated to 6,000 W, even though there is a slight possibility that this amount of power could be drawn. The demand factor is closely related to the load factor, which is the average load divided by the peak load in a specified time period.
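As a minimal illustration of the definition, the following Python sketch computes the demand factor from a metered load profile; the function and variable names are illustrative, and the numbers mirror the residence example above.

```python
# Sketch of the demand-factor calculation described above: peak demand in a
# period divided by the connected (full) load. The profile values are
# hypothetical samples, chosen to reproduce the 3,000 W / 6,000 W example.

def demand_factor(load_profile_watts: list[float], connected_load_watts: float) -> float:
    """Peak of the load profile divided by the full connected load."""
    return max(load_profile_watts) / connected_load_watts

profile = [800.0, 1500.0, 3000.0, 2200.0]   # metered demand samples, in watts
print(demand_factor(profile, 6000.0))        # -> 0.5
```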
Technology
Concepts
null
41026
https://en.wikipedia.org/wiki/Dielectric
Dielectric
In electromagnetism, a dielectric (or dielectric medium) is an electrical insulator that can be polarised by an applied electric field. When a dielectric material is placed in an electric field, electric charges do not flow through the material as they do in an electrical conductor, because dielectrics have no loosely bound, or free, electrons that may drift through the material; instead, the charges shift, only slightly, from their average equilibrium positions, causing dielectric polarisation. Because of dielectric polarisation, positive charges are displaced in the direction of the field and negative charges shift in the direction opposite to the field. This creates an internal electric field that reduces the overall field within the dielectric itself. If a dielectric is composed of weakly bonded molecules, those molecules not only become polarised, but also reorient so that their symmetry axes align to the field. The study of dielectric properties concerns the storage and dissipation of electric and magnetic energy in materials. Dielectrics are important for explaining various phenomena in electronics, optics, solid-state physics and cell biophysics. Terminology Although the term insulator implies low electrical conduction, dielectric typically means materials with a high polarisability. The latter is expressed by a number called the relative permittivity. Insulator is generally used to indicate electrical obstruction, while dielectric is used to indicate the energy-storing capacity of the material (by means of polarisation). A common example of a dielectric is the electrically insulating material between the metallic plates of a capacitor. The polarisation of the dielectric by the applied electric field increases the capacitor's surface charge for the given electric field strength. The term dielectric was coined by William Whewell (from dia + electric) in response to a request from Michael Faraday. A perfect dielectric is a material with zero electrical conductivity (cf. a perfect conductor, which has infinite electrical conductivity), thus exhibiting only a displacement current; it therefore stores and returns electrical energy as if it were an ideal capacitor. Electric susceptibility The electric susceptibility χ of a dielectric material is a measure of how easily it polarises in response to an electric field. This, in turn, determines the electric permittivity of the material and thus influences many other phenomena in that medium, from the capacitance of capacitors to the speed of light. It is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarisation density P, such that P = ε0 χ E, where ε0 is the electric permittivity of free space. The susceptibility of a medium is related to its relative permittivity εr by χ = εr − 1, so in the case of a classical vacuum χ = 0. The electric displacement D is related to the polarisation density P by D = ε0 E + P. Dispersion and causality In general, a material cannot polarise instantaneously in response to an applied field. The more general formulation as a function of time is P(t) = ε0 ∫ χ(t − t′) E(t′) dt′, with the integral taken over all t′ ≤ t. That is, the polarisation is a convolution of the electric field at previous times with the time-dependent susceptibility χ(Δt). The upper limit of this integral can be extended to infinity as well if one defines χ(Δt) = 0 for Δt < 0. An instantaneous response corresponds to a Dirac delta function susceptibility, χ(Δt) = χ δ(Δt). It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency.
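The causal convolution above can be evaluated numerically. The following Python sketch assumes a single-exponential (Debye-like) susceptibility kernel; the kernel form, parameter values and variable names are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Numerical sketch of the causal polarisation response
#   P(t) = eps0 * integral over t' <= t of chi(t - t') * E(t') dt',
# assuming a single exponential relaxation chi(t) = (chi_s / tau) * exp(-t/tau)
# for t >= 0 and zero for t < 0 (the causality condition in the text).

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
chi_s = 4.0               # static susceptibility (illustrative)
tau = 1.0e-9              # relaxation time, s (illustrative)

dt = 1.0e-11
t = np.arange(0.0, 20e-9, dt)
E = np.where(t < 10e-9, 1.0, 0.0)          # field switched off at t = 10 ns

chi = (chi_s / tau) * np.exp(-t / tau)     # causal susceptibility kernel
P = eps0 * np.convolve(chi, E)[:t.size] * dt

# P rises toward eps0*chi_s*E while the field is on, then decays exponentially.
print(P[999] / eps0)   # close to chi_s just before the field switches off
```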
Due to the convolution theorem, the integral becomes a simple product, P(ω) = ε0 χ(ω) E(ω). The susceptibility (or equivalently the permittivity) is frequency dependent. The change of susceptibility with respect to frequency characterises the dispersion properties of the material. Moreover, the fact that the polarisation can only depend on the electric field at previous times (i.e., χ(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers–Kronig constraints on the real and imaginary parts of the susceptibility χ(ω). Dielectric polarisation Basic atomic model In the classical approach to the dielectric, the material is made up of atoms. Each atom consists of a cloud of negative charge (electrons) bound to and surrounding a positive point charge at its center. In the presence of an electric field, the charge cloud is distorted. This can be reduced to a simple dipole using the superposition principle. A dipole is characterised by its dipole moment, a vector quantity M. It is the relationship between the electric field and the dipole moment that gives rise to the behaviour of the dielectric. (The dipole moment here points in the same direction as the electric field. This is not always the case, and is a major simplification, but it is true for many materials.) When the electric field is removed, the atom returns to its original state. The time required to do so is called the relaxation time; the return is an exponential decay. This is the essence of the model in physics. The behaviour of the dielectric now depends on the situation. The more complicated the situation, the richer the model must be to accurately describe the behaviour. Important questions are:
Is the electric field constant, or does it vary with time? At what rate?
Does the response depend on the direction of the applied field (isotropy of the material)?
Is the response the same everywhere (homogeneity of the material)?
Do any boundaries or interfaces have to be taken into account?
Is the response linear with respect to the field, or are there nonlinearities?
The relationship between the electric field E and the dipole moment M gives rise to the behaviour of the dielectric, which, for a given material, can be characterised by the function F defined by the equation M = F(E). When both the type of electric field and the type of material have been defined, one then chooses the simplest function F that correctly predicts the phenomena of interest. Examples of phenomena that can be so modelled include:
Refractive index
Group velocity dispersion
Birefringence
Self-focusing
Harmonic generation
Dipolar polarisation Dipolar polarisation is a polarisation that is either inherent to polar molecules (orientation polarisation), or can be induced in any molecule in which the asymmetric distortion of the nuclei is possible (distortion polarisation). Orientation polarisation results from a permanent dipole, e.g., one that arises from the 104.45° angle between the asymmetric bonds between the oxygen and hydrogen atoms in a water molecule, which retains polarisation in the absence of an external electric field. The assembly of these dipoles forms a macroscopic polarisation. When an external electric field is applied, the distance between charges within each permanent dipole, which is related to chemical bonding, remains constant in orientation polarisation; however, the direction of the polarisation itself rotates.
This rotation occurs on a timescale that depends on the torque and the surrounding local viscosity of the molecules. Because the rotation is not instantaneous, dipolar polarisations lose their response to electric fields at the highest frequencies. A molecule rotates about 1 radian per picosecond in a fluid, so this loss occurs at about 10¹¹ Hz (in the microwave region). The delay of the response to the change of the electric field causes friction and heat. When an external electric field is applied at infrared frequencies or less, the molecules are bent and stretched by the field and the molecular dipole moment changes. The molecular vibration frequency is roughly the inverse of the time it takes for the molecules to bend, and this distortion polarisation disappears above the infrared. Ionic polarisation Ionic polarisation is polarisation caused by relative displacements between positive and negative ions in ionic crystals (for example, NaCl). If a crystal or molecule consists of atoms of more than one kind, the distribution of charges around an atom in the crystal or molecule leans to positive or negative. As a result, when lattice vibrations or molecular vibrations induce relative displacements of the atoms, the centers of positive and negative charges are also displaced. The locations of these centers are affected by the symmetry of the displacements. When the centers do not correspond, polarisation arises in molecules or crystals. This polarisation is called ionic polarisation. Ionic polarisation causes the ferroelectric effect, as does dipolar polarisation. The ferroelectric transition, which is caused by the lining up of the orientations of permanent dipoles along a particular direction, is called an order-disorder phase transition. The transition caused by ionic polarisations in crystals is called a displacive phase transition. In biological cells Ionic polarisation enables the production of energy-rich compounds in cells (the proton pump in mitochondria) and, at the plasma membrane, the establishment of the resting potential, energetically unfavourable transport of ions, and cell-to-cell communication (the Na⁺/K⁺-ATPase). All cells in animal body tissues are electrically polarised – in other words, they maintain a voltage difference across the cell's plasma membrane, known as the membrane potential. This electrical polarisation results from a complex interplay between ion transporters and ion channels. In neurons, the types of ion channels in the membrane usually vary across different parts of the cell, giving the dendrites, axon, and cell body different electrical properties. As a result, some parts of the membrane of a neuron may be excitable (capable of generating action potentials), whereas others are not. Dielectric dispersion In physics, dielectric dispersion is the dependence of the permittivity of a dielectric material on the frequency of an applied electric field. Because there is a lag between changes in polarisation and changes in the electric field, the permittivity of the dielectric is a complex function of the frequency of the electric field. Dielectric dispersion is very important for the applications of dielectric materials and the analysis of polarisation systems. This is one instance of a general phenomenon known as material dispersion: a frequency-dependent response of a medium for wave propagation.
When the frequency becomes higher:
the dipolar polarisation can no longer follow the oscillations of the electric field in the microwave region around 10¹⁰ Hz;
the ionic polarisation and molecular distortion polarisation can no longer track the electric field past the infrared or far-infrared region around 10¹³ Hz;
the electronic polarisation loses its response in the ultraviolet region around 10¹⁵ Hz.
In the frequency region above the ultraviolet, the permittivity approaches the constant ε0 in every substance, where ε0 is the permittivity of free space. Because permittivity indicates the strength of the relation between an electric field and the polarisation, if a polarisation process loses its response, the permittivity decreases. Dielectric relaxation Dielectric relaxation is the momentary delay (or lag) in the dielectric constant of a material. It is usually caused by the delay in molecular polarisation with respect to a changing electric field in a dielectric medium (e.g., inside capacitors or between two large conducting surfaces). Dielectric relaxation in changing electric fields can be considered analogous to hysteresis in changing magnetic fields (e.g., in inductor or transformer cores). Relaxation in general is a delay or lag in the response of a linear system, and therefore dielectric relaxation is measured relative to the expected linear steady-state (equilibrium) dielectric values. The time lag between the electric field and the polarisation implies an irreversible degradation of Gibbs free energy. In physics, dielectric relaxation refers to the relaxation response of a dielectric medium to an external, oscillating electric field. This relaxation is often described in terms of permittivity as a function of frequency, which can, for ideal systems, be described by the Debye equation. On the other hand, the distortion related to ionic and electronic polarisation shows behaviour of the resonance or oscillator type. The character of the distortion process depends on the structure, composition, and surroundings of the sample. Debye relaxation Debye relaxation is the dielectric relaxation response of an ideal, noninteracting population of dipoles to an alternating external electric field. It is usually expressed in the complex permittivity ε̂ of a medium as a function of the field's angular frequency ω: ε̂(ω) = ε∞ + (εs − ε∞)/(1 + iωτ), where ε∞ is the permittivity at the high-frequency limit, εs is the static, low-frequency permittivity, and τ is the characteristic relaxation time of the medium. Separating the complex dielectric permittivity into its real part ε′ and imaginary part ε″ yields ε′(ω) = ε∞ + (εs − ε∞)/(1 + ω²τ²) and ε″(ω) = (εs − ε∞)ωτ/(1 + ω²τ²). Note that the equation for ε̂(ω) is sometimes written with 1 − iωτ in the denominator, due to an ongoing sign-convention ambiguity whereby many sources represent the time dependence of the complex electric field with e^(−iωt), whereas others use e^(+iωt). In the former convention the real and imaginary parts combine as ε̂ = ε′ + iε″, whereas in the latter convention ε̂ = ε′ − iε″; the equations above use the latter convention. The dielectric loss is also represented by the loss tangent, tan δ = ε″/ε′. This relaxation model was introduced by and named after the physicist Peter Debye (1913). It is characteristic of dynamic polarisation with only one relaxation time.
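The following short Python sketch evaluates the Debye equation in the e^(+iωt) convention quoted above; the roughly water-like parameter values are illustrative, not from the text.

```python
import numpy as np

# Sketch of the Debye equation above, in the e^{+i*omega*t} convention, so that
#   eps(omega) = eps_inf + (eps_s - eps_inf) / (1 + 1j*omega*tau)
# and the loss component is eps'' = -Im(eps). Parameter values are illustrative.

eps_s, eps_inf, tau = 80.1, 5.2, 8.3e-12   # static/high-f permittivity, relaxation time (s)

def debye(omega):
    return eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau)

omega = 2 * np.pi * np.logspace(8, 13, 6)   # 100 MHz .. 10 THz
for w, e in zip(omega, debye(omega)):
    # real part eps', loss part eps'' and the loss tangent tan(delta) = eps''/eps'
    print(f"f = {w/2/np.pi:9.3e} Hz  eps' = {e.real:6.2f}  "
          f"eps'' = {-e.imag:6.2f}  tan d = {-e.imag/e.real:6.3f}")
```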
Variants of the Debye equation:
Cole–Cole equation: used when the dielectric loss peak shows symmetric broadening.
Cole–Davidson equation: used when the dielectric loss peak shows asymmetric broadening.
Havriliak–Negami relaxation: considers both symmetric and asymmetric broadening.
Kohlrausch–Williams–Watts function: the Fourier transform of a stretched exponential function.
Curie–von Schweidler law: describes the response of dielectrics to an applied DC field as a power law, which can be expressed as an integral over weighted exponential functions.
Djordjevic–Sarkar approximation: used when the dielectric loss is approximately constant over a wide range of frequencies.
Paraelectricity Paraelectricity is the nominal behaviour of dielectrics when the dielectric permittivity tensor is proportional to the unit matrix, i.e., an applied electric field causes polarisation and/or alignment of dipoles only parallel to the applied electric field. Contrary to the analogy with a paramagnetic material, no permanent electric dipole needs to exist in a paraelectric material. Removal of the field results in the dipolar polarisation returning to zero. The mechanisms that cause paraelectric behaviour are the distortion of individual ions (displacement of the electron cloud from the nucleus) and the polarisation of molecules or combinations of ions or defects. Paraelectricity can occur in crystal phases where electric dipoles are unaligned and thus have the potential to align in an external electric field and weaken it. Most dielectric materials are paraelectrics. A specific example of a paraelectric material of high dielectric constant is strontium titanate. The LiNbO3 crystal is ferroelectric below 1430 K, and above this temperature it transforms into a disordered paraelectric phase. Similarly, other perovskites also exhibit paraelectricity at high temperatures. Paraelectricity has been explored as a possible refrigeration mechanism: polarising a paraelectric by applying an electric field under adiabatic conditions raises its temperature, while removing the field lowers it. A heat pump that operates by polarising the paraelectric, allowing it to return to ambient temperature (by dissipating the extra heat), bringing it into contact with the object to be cooled, and finally depolarising it, would result in refrigeration. Tunability Tunable dielectrics are insulators whose ability to store electrical charge changes when a voltage is applied. Generally, strontium titanate (SrTiO3) is used for devices operating at low temperatures, while barium strontium titanate substitutes for it in room-temperature devices. Other potential materials include microwave dielectrics and carbon nanotube (CNT) composites. In 2013, multi-sheet layers of strontium titanate interleaved with single layers of strontium oxide produced a dielectric capable of operating at up to 125 GHz. The material was created via molecular beam epitaxy. The two have mismatched crystal spacings, which produces strain within the strontium titanate layer that makes it less stable and tunable. Systems such as barium strontium titanate have a paraelectric–ferroelectric transition just below ambient temperature, providing high tunability. Such films suffer significant losses arising from defects. Applications Capacitors Commercially manufactured capacitors typically use a solid dielectric material with high permittivity as the intervening medium between the stored positive and negative charges. This material is often referred to in technical contexts as the capacitor dielectric.
The most obvious advantage to using such a dielectric material is that it prevents the conducting plates, on which the charges are stored, from coming into direct electrical contact. More significantly, however, a high permittivity allows a greater stored charge at a given voltage. This can be seen by treating the case of a linear dielectric with permittivity ε and thickness d between two conducting plates with uniform charge density σε. In this case the charge density is given by σε = εV/d and the capacitance per unit area by c = σε/V = ε/d. From this it can easily be seen that a larger ε leads to a greater charge stored and thus a greater capacitance. Dielectric materials used for capacitors are also chosen to be resistant to ionisation. This allows the capacitor to operate at higher voltages before the insulating dielectric ionises and begins to allow an undesirable current.
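A minimal Python sketch of the capacitance-per-unit-area relation c = ε/d, scaled by plate area; the geometry and the relative permittivity of 300 are illustrative assumptions.

```python
# Sketch of the parallel-plate relations in the text: with a linear dielectric
# of permittivity eps and thickness d, the capacitance per unit area is eps/d,
# so total capacitance is C = eps_r * eps0 * A / d. Example numbers are illustrative.

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def capacitance(eps_r: float, area_m2: float, d_m: float) -> float:
    """C = eps_r * eps0 * A / d for a parallel-plate capacitor."""
    return eps_r * EPS0 * area_m2 / d_m

c_vacuum = capacitance(1.0, 1e-4, 1e-6)      # 1 cm^2 plates, 1 um gap, no dielectric
c_ceramic = capacitance(300.0, 1e-4, 1e-6)   # same geometry with a high-eps ceramic
print(c_vacuum, c_ceramic, c_ceramic / c_vacuum)   # capacitance scales with eps_r
```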
Dielectric resonator A dielectric resonator oscillator (DRO) is an electronic component that exhibits resonance of the polarisation response for a narrow range of frequencies, generally in the microwave band. It consists of a "puck" of ceramic that has a large dielectric constant and a low dissipation factor. Such resonators are often used to provide a frequency reference in an oscillator circuit. An unshielded dielectric resonator can be used as a dielectric resonator antenna (DRA). BST thin films From 2002 to 2004, the United States Army Research Laboratory (ARL) conducted research on thin-film technology. Barium strontium titanate (BST), a ferroelectric thin film, was studied for the fabrication of radio-frequency and microwave components, such as voltage-controlled oscillators, tunable filters and phase shifters. The research was part of an effort to provide the Army with highly tunable, microwave-compatible materials for broadband electric-field-tunable devices, which operate consistently in extreme temperatures. This work improved the tunability of bulk barium strontium titanate, which is a thin-film enabler for electronics components. In a 2004 research paper, U.S. ARL researchers explored how small concentrations of acceptor dopants can dramatically modify the properties of ferroelectric materials such as BST. Researchers "doped" BST thin films with magnesium, analyzing the "structure, microstructure, surface morphology and film/substrate compositional quality" of the result. The Mg-doped BST films showed "improved dielectric properties, low leakage current, and good tunability", meriting potential for use in microwave tunable devices. Some practical dielectrics Dielectric materials can be solids, liquids, or gases. (A high vacuum can also be a useful, nearly lossless dielectric even though its relative dielectric constant is only unity.) Solid dielectrics are perhaps the most commonly used dielectrics in electrical engineering, and many solids are very good insulators. Some examples include porcelain, glass, and most plastics. Air, nitrogen and sulfur hexafluoride are the three most commonly used gaseous dielectrics. Industrial coatings such as Parylene provide a dielectric barrier between the substrate and its environment. Mineral oil is used extensively inside electrical transformers as a fluid dielectric and to assist in cooling. Dielectric fluids with higher dielectric constants, such as electrical-grade castor oil, are often used in high-voltage capacitors to help prevent corona discharge and increase capacitance. Because dielectrics resist the flow of electricity, the surface of a dielectric may retain stranded excess electrical charges. This may occur accidentally when the dielectric is rubbed (the triboelectric effect). This can be useful, as in a Van de Graaff generator or electrophorus, or it can be potentially destructive, as in the case of electrostatic discharge. Specially processed dielectrics, called electrets (which should not be confused with ferroelectrics), may retain excess internal charge or "frozen-in" polarisation. Electrets have a semi-permanent electric field, and are the electrostatic equivalent of magnets. Electrets have numerous practical applications in the home and industry. Some dielectrics can generate a potential difference when subjected to mechanical stress, or (equivalently) change physical shape if an external voltage is applied across the material. This property is called piezoelectricity. Piezoelectric materials are another class of very useful dielectrics. Some ionic crystals and polymer dielectrics exhibit a spontaneous dipole moment, which can be reversed by an externally applied electric field; this behaviour is called the ferroelectric effect. It is analogous to the way ferromagnetic materials behave within an externally applied magnetic field. Ferroelectric materials often have very high dielectric constants, making them quite useful for capacitors.
Physical sciences
Basics_9
null
41031
https://en.wikipedia.org/wiki/Diffraction%20grating
Diffraction grating
In optics, a diffraction grating is an optical grating with a periodic structure that diffracts light, or another type of electromagnetic radiation, into several beams traveling in different directions (i.e., different diffraction angles). The emerging coloration is a form of structural coloration. The directions, or diffraction angles, of these beams depend on the incident angle of the wave (light) on the diffraction grating, the spacing or periodic distance between adjacent diffracting elements (e.g., parallel slits for a transmission grating) on the grating, and the wavelength of the incident light. The grating acts as a dispersive element. Because of this, diffraction gratings are commonly used in monochromators and spectrometers, but other applications are also possible, such as optical encoders for high-precision motion control and wavefront measurement. For typical applications, a reflective grating has ridges or rulings on its surface, while a transmissive grating has transmissive or hollow slits on its surface. Such a grating modulates the amplitude of an incident wave to create a diffraction pattern. Some gratings modulate the phases of incident waves rather than the amplitude, and such gratings are frequently produced using holography. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating (in a natural form) to be discovered, about a year after Isaac Newton's prism experiments. The first human-made diffraction grating was made around 1785 by Philadelphia inventor David Rittenhouse, who strung hairs between two finely threaded screws. This was similar to notable German physicist Joseph von Fraunhofer's wire diffraction grating in 1821. The principles of diffraction were discovered by Thomas Young and Augustin-Jean Fresnel. Using these principles, Fraunhofer was the first to use a diffraction grating to obtain line spectra and the first to measure the wavelengths of spectral lines with a diffraction grating. In the 1860s, state-of-the-art diffraction gratings with small groove period (d) were manufactured by Friedrich Adolph Nobert (1806–1881) in Greifswald; then the two Americans Lewis Morris Rutherfurd (1816–1892) and William B. Rogers (1804–1882) took over the lead. By the end of the 19th century, the concave gratings of Henry Augustus Rowland (1848–1901) were the best available. A diffraction grating can create "rainbow" colors when it is illuminated by a wide-spectrum (e.g., continuous) light source. Rainbow-like colors from closely spaced narrow tracks on optical data storage disks such as CDs or DVDs are an example of light diffraction caused by diffraction gratings. An ordinary diffraction grating has parallel lines (true of one-dimensional gratings; two- and three-dimensional gratings are also possible, with applications such as wavefront measurement), while a CD has a spiral of finely spaced data tracks. Diffraction colors also appear when one looks at a bright point source through a translucent fine-pitch umbrella fabric covering. Decorative patterned plastic films based on reflective grating patches are inexpensive and commonplace. A similar color separation seen from thin layers of oil (or gasoline, etc.) on water, known as iridescence, is not caused by diffraction from a grating but rather by thin-film interference from the closely stacked transmissive layers.
Theory of operation For a diffraction grating, the relationship between the grating spacing (i.e., the distance between adjacent grating grooves or slits), the angle of incidence of the wave (light) on the grating, and the diffracted wave from the grating is known as the grating equation. Like many other optical formulas, the grating equation can be derived by using the Huygens–Fresnel principle, which states that each point on a wavefront of a propagating wave can be considered to act as a point wave source, and that a wavefront at any subsequent point can be found by adding together the contributions from each of these individual point wave sources on the previous wavefront. Gratings may be of the 'reflective' or 'transmissive' type, analogous to a mirror or lens, respectively. A grating has a 'zero-order mode' (where the integer order of diffraction m is set to zero), in which a ray of light behaves according to the laws of reflection (like a mirror) and refraction (like a lens), respectively. An idealized diffraction grating is made up of a set of slits of spacing d, which must be wider than the wavelength of interest to cause diffraction. Assuming a plane wave of monochromatic light of wavelength λ at normal incidence on the grating (i.e., with the wavefronts of the incident wave parallel to the grating's main plane), each slit in the grating acts as a quasi point wave source from which light propagates in all directions (although this is typically limited to the forward hemisphere from the point source). Strictly, every point on every slit that the incident wave reaches acts as a point wave source for the diffracted wave, and all these contributions determine the detailed distribution of the diffracted light; however, the diffraction angles (at the grating) at which the diffracted wave intensity is highest are determined only by the quasi point sources corresponding to the slits of the grating. After the incident light (wave) interacts with the grating, the resulting diffracted light is the sum of the interfering wave components emanating from each slit in the grating. At any given point in space through which the diffracted light may pass, typically called an observation point, the path length from each slit to that point varies, so the phase of the wave emanating from each slit at that point also varies. As a result, the sum of the diffracted waves at the observation point creates a peak, a valley, or some intensity in between, through constructive and destructive interference. When the difference between the light paths from adjacent slits to the observation point is equal to an odd integer multiple of half the wavelength, lλ/2 with l an odd integer, the waves are out of phase at that point and cancel each other, creating a (locally) minimum light intensity. Similarly, when the path difference is an integer multiple of λ, the waves are in phase and a (locally) maximum intensity occurs. For light at normal incidence to the grating, the intensity maxima occur at diffraction angles θm which satisfy the relationship d sin θm = mλ, where θm is the angle between the diffracted ray and the grating's normal vector, d is the distance from the center of one slit to the center of the adjacent slit, and m is an integer representing the propagation mode of interest, called the diffraction order.
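A small Python sketch of the normal-incidence grating equation d sin θm = mλ, enumerating which diffraction orders propagate; the groove density and wavelength are illustrative values.

```python
import numpy as np

# Sketch: diffraction orders of a grating at normal incidence, from the
# grating equation d*sin(theta_m) = m*lambda quoted above. The 600 g/mm
# groove density and the 532 nm wavelength are illustrative assumptions.

def diffraction_angles_deg(grooves_per_mm: float, wavelength_m: float,
                           max_order: int = 5) -> dict[int, float]:
    d = 1e-3 / grooves_per_mm                 # groove period in metres
    angles = {}
    for m in range(-max_order, max_order + 1):
        s = m * wavelength_m / d
        if abs(s) <= 1.0:                     # an order propagates only if |sin| <= 1
            angles[m] = float(np.degrees(np.arcsin(s)))
    return angles

print(diffraction_angles_deg(600.0, 532e-9))
# orders m = -3..3 propagate; higher orders are evanescent for this d and lambda
```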
When a plane light wave is normally incident on a grating of uniform period d, the diffracted light has maxima at diffraction angles θm given by a special case of the grating equation: d sin θm = mλ. It can be shown that if the plane wave is incident at an angle θi relative to the grating normal, in the plane orthogonal to the grating periodicity, the grating equation becomes d (sin θi + sin θm) = mλ, which describes in-plane diffraction as a special case of the more general scenario of conical, or off-plane, diffraction described by the generalized grating equation d sin γ (sin θi + sin θm) = mλ, where γ is the angle between the direction of the plane wave and the direction of the grating grooves, the direction orthogonal to both the grating periodicity and the grating normal. Various sign conventions for θi, θm and γ are used; any choice is fine as long as it is kept throughout diffraction-related calculations. When solved for the diffracted angle at which the diffracted wave intensity is maximized, the equation becomes θm = arcsin(mλ/(d sin γ) − sin θi). The diffracted light that corresponds to direct transmission for a transmissive diffraction grating, or to specular reflection for a reflective grating, is called the zero order and is denoted m = 0. The other diffracted intensity maxima occur at angles given by non-zero integer diffraction orders m. Note that m can be positive or negative, corresponding to diffracted orders on both sides of the zero-order diffracted beam. Even when the grating equation is derived for a specific grating, such as a blazed grating, it applies to any regular structure of the same spacing, because the phase relationship between light scattered from adjacent diffracting elements of the grating remains the same. The detailed distribution of the diffracted light (e.g., its intensity) depends on the detailed structure of the grating elements as well as on the number of elements in the grating, but it always gives maxima in the directions given by the grating equation. Depending on how a grating modulates the incident light to produce the diffracted light, there are the following grating types:
Transmission amplitude diffraction gratings, which spatially and periodically modulate the intensity of an incident wave that transmits through the grating (the diffracted wave being the consequence of this modulation);
Reflection amplitude diffraction gratings, which spatially and periodically modulate the intensity of an incident wave reflected from the grating;
Transmission phase diffraction gratings, which spatially and periodically modulate the phase of an incident wave passing through the grating;
Reflection phase diffraction gratings, which spatially and periodically modulate the phase of an incident wave reflected from the grating.
An optical axis diffraction grating, in which the optical axis is spatially and periodically modulated, is also considered either a reflection or transmission phase diffraction grating. The grating equation applies to all these gratings, owing to the same phase relationship between the diffracted waves from adjacent diffracting elements, even though the detailed distribution of the diffracted wave property depends on the detailed structure of each grating. Quantum electrodynamics Quantum electrodynamics (QED) offers another derivation of the properties of a diffraction grating in terms of photons as particles (at some level). QED can be described intuitively with the path integral formulation of quantum mechanics.
As such it can model photons as potentially following all paths from a source to a final point, each path with a certain probability amplitude. These probability amplitudes can be represented as a complex number or equivalent vector—or, as Richard Feynman simply calls them in his book on QED, "arrows". For the probability that a certain event will happen, one sums the probability amplitudes for all of the possible ways in which the event can occur, and then takes the square of the length of the result. The probability amplitude for a photon from a monochromatic source to arrive at a certain final point at a given time, in this case, can be modeled as an arrow that spins rapidly until it is evaluated when the photon reaches its final point. For example, for the probability that a photon will reflect off of a mirror and be observed at a given point a given amount of time later, one sets the photon's probability amplitude spinning as it leaves the source, follows it to the mirror, and then to its final point, even for paths that do not involve bouncing off of the mirror at equal angles. One can then evaluate the probability amplitude at the photon's final point; next, one can integrate over all of these arrows (see vector sum), and square the length of the result to obtain the probability that this photon will reflect off of the mirror in the pertinent fashion. The times these paths take are what determines the angle of the probability amplitude arrow, as they can be said to "spin" at a constant rate (which is related to the frequency of the photon). The times of the paths near the classical reflection site of the mirror are nearly the same, so the probability amplitudes point in nearly the same direction—thus, they have a sizable sum. Examining the paths towards the edges of the mirror reveals that the times of nearby paths are quite different from each other, and thus we wind up summing vectors that cancel out quickly. So, there is a higher probability that light will follow a near-classical reflection path than a path further out. However, a diffraction grating can be made out of this mirror, by scraping away areas near the edge of the mirror that usually cancel nearby amplitudes out—but now, since the photons don't reflect from the scraped-off portions, the probability amplitudes that would all point, for instance, at forty-five degrees, can have a sizable sum. Thus, this lets light of the right frequency sum to a larger probability amplitude, and as such possess a larger probability of reaching the appropriate final point. This particular description involves many simplifications: a point source, a "surface" that light can reflect off of (thus neglecting the interactions with electrons) and so forth. The biggest simplification is perhaps in the fact that the "spinning" of the probability amplitude arrows is actually more accurately explained as a "spinning" of the source, as the probability amplitudes of photons do not "spin" while they are in transit. We obtain the same variation in probability amplitudes by letting the time at which the photon left the source be indeterminate—and the time of the path now tells us when the photon would have left the source, and thus what the angle of its "arrow" would be. However, this model and approximation is a reasonable one to illustrate a diffraction grating conceptually. Light of a different frequency may also reflect off of the same diffraction grating, but with a different final point. 
Gratings as dispersive elements The wavelength dependence in the grating equation shows that the grating separates an incident polychromatic beam into its constituent wavelength components at different angles; i.e., it is angularly dispersive. Each wavelength of the input beam spectrum is sent into a different direction, producing a rainbow of colors under white-light illumination. This is visually similar to the operation of a prism, although the mechanism is very different: a prism refracts waves of different wavelengths at different angles due to their different refractive indices, while a grating diffracts different wavelengths at different angles due to interference at each wavelength. The diffracted beams corresponding to consecutive orders may overlap, depending on the spectral content of the incident beam and the grating density. The higher the spectral order, the greater the overlap into the next order. The grating equation shows that the angles of the diffracted orders depend only on the grooves' period, and not on their shape. By controlling the cross-sectional profile of the grooves, it is possible to concentrate most of the diffracted optical energy in a particular order for a given wavelength. A triangular profile is commonly used. This technique is called blazing. The incident angle and wavelength for which the diffraction is most efficient (i.e., for which the ratio of the diffracted optical energy to the incident energy is the highest) are often called the blazing angle and blazing wavelength. The efficiency of a grating may also depend on the polarization of the incident light. Gratings are usually designated by their groove density, the number of grooves per unit length, usually expressed in grooves per millimeter (g/mm), which is equal to the inverse of the groove period. The groove period must be on the order of the wavelength of interest; the spectral range covered by a grating depends on the groove spacing and is the same for ruled and holographic gratings with the same grating constant (meaning groove density or groove period). The maximum wavelength that a grating can diffract is equal to twice the grating period, in which case the incident and diffracted light are at ninety degrees (90°) to the grating normal. To obtain frequency dispersion over a wider frequency range one must use a prism. The optical regime, in which the use of gratings is most common, corresponds to wavelengths between 100 nm and 10 µm. In that case, the groove density can vary from a few tens of grooves per millimeter, as in echelle gratings, to a few thousand grooves per millimeter. When the groove spacing is less than half the wavelength of light, the only present order is the m = 0 order. Gratings with such small periodicity (relative to the incident light wavelength) are called subwavelength gratings and exhibit special optical properties. Made on an isotropic material, subwavelength gratings give rise to form birefringence, in which the material behaves as if it were birefringent.
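Differentiating the normal-incidence grating equation gives the angular dispersion dθm/dλ = m/(d cos θm), which the following Python sketch evaluates; the 1200 g/mm groove density and the chosen wavelengths are illustrative assumptions.

```python
import numpy as np

# Sketch of the angular dispersion that makes a grating a dispersive element:
# differentiating d*sin(theta_m) = m*lambda (normal incidence) gives
#   d(theta_m)/d(lambda) = m / (d * cos(theta_m)).
# The groove density and wavelengths below are illustrative values.

grooves_per_mm = 1200.0
d = 1e-3 / grooves_per_mm    # groove period in metres
m = 1                        # first diffraction order

for lam in (400e-9, 550e-9, 700e-9):
    theta = np.arcsin(m * lam / d)
    dispersion = m / (d * np.cos(theta))   # rad per metre of wavelength
    print(f"{lam*1e9:3.0f} nm: theta = {np.degrees(theta):5.2f} deg, "
          f"dispersion = {dispersion*1e-6:5.3f} mrad/nm")   # rad/m -> mrad/nm
```

The dispersion grows toward longer wavelengths in a given order, because cos θm shrinks as the diffraction angle steepens.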
Fabrication SR (Surface Relief) gratings SR gratings are named for their surface structure of depressions (low relief) and elevations (high relief). Originally, high-resolution gratings were ruled by high-quality ruling engines whose construction was a large undertaking. Henry Joseph Grayson designed a machine to make diffraction gratings, succeeding with one of 120,000 lines to the inch (approx. 4,724 lines per mm) in 1899. Later, photolithographic techniques created gratings via holographic interference patterns. A holographic grating has sinusoidal grooves, the result of a sinusoidal optical interference pattern on the grating material during its fabrication; it may not be as efficient as a ruled grating, but is often preferred in monochromators because it produces less stray light. A copying technique can make high-quality replicas from master gratings of either type, thereby lowering fabrication costs. Semiconductor technology today is also used to etch holographically patterned gratings into robust materials such as fused silica. In this way, low-stray-light holography is combined with the high efficiency of deep, etched transmission gratings, and can be incorporated into high-volume, low-cost semiconductor manufacturing technology. VPH (Volume Phase Holography) gratings Another method for manufacturing diffraction gratings uses a photosensitive gel sandwiched between two substrates. A holographic interference pattern exposes the gel, which is later developed. These gratings, called volume phase holography diffraction gratings (or VPH diffraction gratings), have no physical grooves, but instead a periodic modulation of the refractive index within the gel. This removes much of the surface scattering effects typically seen in other types of gratings. These gratings also tend to have higher efficiencies, and allow for the inclusion of complicated patterns into a single grating. A VPH diffraction grating is typically a transmission grating, through which incident light passes and is diffracted, but a VPH reflection grating can also be made by tilting the direction of the refractive index modulation with respect to the grating surface. In older versions of such gratings, environmental susceptibility was a trade-off, as the gel had to be contained at low temperature and humidity. Typically, the photosensitive substances are sealed between two substrates that make them resistant to humidity and to thermal and mechanical stresses. VPH diffraction gratings are not destroyed by accidental touches and are more scratch resistant than typical relief gratings. Blazed gratings A blazed grating is manufactured with grooves that have a sawtooth-shaped cross section, unlike the symmetrical grooves of other gratings. This allows the grating to achieve maximum diffraction efficiency, but in only one diffraction order, which is dependent on the angle of the sawtooth grooves, known as the blaze angle. Common uses include specific wavelength selection for tunable lasers, among others. Other gratings A new technology for grating insertion into integrated photonic lightwave circuits is digital planar holography (DPH). DPH gratings are generated on a computer and fabricated on one or several interfaces of a planar optical waveguide using standard micro-lithography or nano-imprinting methods, compatible with mass production. Light propagates inside the DPH gratings, confined by the refractive index gradient, which provides a longer interaction path and greater flexibility in light steering. Examples Diffraction gratings are often used in monochromators, spectrometers, lasers, wavelength division multiplexing devices, optical pulse compressing devices, interferometers, and many other optical instruments. Ordinary pressed CD and DVD media are every-day examples of diffraction gratings and can be used to demonstrate the effect by reflecting sunlight off them onto a white wall.
This is a side effect of their manufacture, as one surface of a CD has many small pits in the plastic, arranged in a spiral; that surface has a thin layer of metal applied to make the pits more visible. The structure of a DVD is optically similar, although it may have more than one pitted surface, and all pitted surfaces are inside the disc. Because of its sensitivity to the refractive index of the media, a diffraction grating can be used as a sensor of fluid properties. On a standard pressed vinyl record, viewed from a low angle perpendicular to the grooves, a similar but less defined effect to that in a CD/DVD is seen. This is due to the viewing angle (less than the critical angle of reflection of the black vinyl) and the path of the reflected light being changed by the grooves, leaving a rainbow relief pattern behind. Diffraction gratings are also used to distribute the frontlight of e-readers, such as the Nook Simple Touch with GlowLight, evenly. Gratings from electronic components Some everyday electronic components contain fine and regular patterns, and as a result readily serve as diffraction gratings. For example, CCD sensors from discarded mobile phones and cameras can be removed from the device. With a laser pointer, diffraction can reveal the spatial structure of the CCD sensors. This can be done for the LCD or LED displays of smartphones as well. Because such displays are usually protected just by a transparent casing, experiments can be done without damaging the phones. If accurate measurements are not intended, a spotlight can reveal the diffraction patterns. Natural gratings Striated muscle is the most commonly found natural diffraction grating, and this has helped physiologists determine the structure of such muscle. Aside from this, the chemical structure of crystals can be thought of as a diffraction grating for types of electromagnetic radiation other than visible light; this is the basis for techniques such as X-ray crystallography. Most commonly confused with diffraction gratings are the iridescent colors of peacock feathers, mother-of-pearl, and butterfly wings. Iridescence in birds, fish and insects is often caused by thin-film interference rather than a diffraction grating. Diffraction produces the entire spectrum of colors as the viewing angle changes, whereas thin-film interference usually produces a much narrower range. The surfaces of flowers can also create diffraction, but the cell structures in plants are usually too irregular to produce the fine slit geometry necessary for a diffraction grating. The iridescence signal of flowers is thus only appreciable very locally, and hence not visible to humans and flower-visiting insects. However, natural gratings do occur in some invertebrate animals, such as peacock spiders and the antennae of seed shrimp, and have even been discovered in Burgess Shale fossils. Diffraction grating effects are sometimes seen in meteorology. Diffraction coronas are colorful rings surrounding a source of light, such as the sun. These are usually observed much closer to the light source than halos, and are caused by very fine particles, like water droplets, ice crystals, or smoke particles in a hazy sky. When the particles are all nearly the same size, they diffract the incoming light at very specific angles. The exact angle depends on the size of the particles. Diffraction coronas are commonly observed around light sources, like candle flames or street lights, in the fog.
Cloud iridescence is caused by diffraction, occurring along coronal rings when the particles in the clouds are all uniform in size.
Technology
Optics
null
41092
https://en.wikipedia.org/wiki/Electric%20field
Electric field
An electric field (sometimes called E-field) is a physical field that surrounds electrically charged particles. In classical electromagnetism, the electric field of a single charge (or group of charges) describes their capacity to exert attractive or repulsive forces on another charged object. Charged particles exert attractive forces on each other when their charges are opposite, and repel each other when their charges are the same. Because these forces are exerted mutually, two charges must be present for the forces to take place. These forces are described by Coulomb's law, which says that the greater the magnitude of the charges, the greater the force, and the greater the distance between them, the weaker the force. Informally, the greater the charge of an object, the stronger its electric field. Similarly, an electric field is stronger nearer charged objects and weaker further away. Electric fields originate from electric charges and time-varying electric currents. Electric fields and magnetic fields are both manifestations of the electromagnetic field. Electromagnetism is one of the four fundamental interactions of nature. Electric fields are important in many areas of physics, and are exploited in electrical technology. For example, in atomic physics and chemistry, the interaction in the electric field between the atomic nucleus and electrons is the force that holds these particles together in atoms. Similarly, the interaction in the electric field between atoms is the force responsible for the chemical bonding that results in molecules. The electric field is defined as a vector field that associates to each point in space the force per unit charge exerted on an infinitesimal test charge at rest at that point. The SI unit for the electric field is the volt per meter (V/m), which is equal to the newton per coulomb (N/C). Description The electric field is defined at each point in space as the force that would be experienced by an infinitesimally small stationary test charge at that point, divided by the charge. The electric field is defined in terms of force, and force is a vector (i.e. having both magnitude and direction), so it follows that an electric field may be described by a vector field. The electric field acts between two charges similarly to the way that the gravitational field acts between two masses, as they both obey an inverse-square law with distance. This is the basis for Coulomb's law, which states that, for stationary charges, the electric field varies with the source charge and varies inversely with the square of the distance from the source. This means that if the source charge were doubled, the electric field would double, and if you move twice as far away from the source, the field at that point would be only one-quarter its original strength. The electric field can be visualized with a set of lines whose direction at each point is the same as that of the field, a concept introduced by Michael Faraday, whose term 'lines of force' is still sometimes used. This illustration has the useful property that, when drawn so that each line represents the same amount of flux, the strength of the field is proportional to the density of the lines. Field lines due to stationary charges have several important properties: they always originate from positive charges and terminate at negative charges, they enter all good conductors at right angles, and they never cross or close in on themselves.
The field lines are a representative concept; the field actually permeates all the intervening space between the lines. More or fewer lines may be drawn depending on the precision to which it is desired to represent the field. The study of electric fields created by stationary charges is called electrostatics. Faraday's law describes the relationship between a time-varying magnetic field and the electric field. One way of stating Faraday's law is that the curl of the electric field is equal to the negative time derivative of the magnetic field. In the absence of a time-varying magnetic field, the electric field is therefore called conservative (i.e. curl-free). This implies there are two kinds of electric fields: electrostatic fields and fields arising from time-varying magnetic fields. While the curl-free nature of the static electric field allows for a simpler treatment using electrostatics, time-varying magnetic fields are generally treated as a component of a unified electromagnetic field. The study of magnetic and electric fields that change over time is called electrodynamics. Mathematical formulation Electric fields are caused by electric charges, described by Gauss's law, and by time-varying magnetic fields, described by Faraday's law of induction. Together, these laws are enough to define the behavior of the electric field. However, since the magnetic field is described as a function of the electric field, the equations of both fields are coupled, and together they form Maxwell's equations, which describe both fields as a function of charges and currents. Electrostatics In the special case of a steady state (stationary charges and currents), the Maxwell–Faraday inductive effect disappears. The resulting two equations (Gauss's law and Faraday's law with no induction term, ∇ × E = 0), taken together, are equivalent to Coulomb's law, which states that a particle with electric charge q1 at position x1 exerts a force on a particle with charge q0 at position x0 of F = (q1 q0 / 4πε0) r̂ / |r|², where F is the force on the charged particle q0 caused by the charged particle q1, ε0 is the permittivity of free space, r̂ is the unit vector directed from x1 to x0, and r is the displacement vector from x1 to x0. Note that ε0 must be replaced with ε, the permittivity, when charges are in non-empty media. When the charges q0 and q1 have the same sign, this force is positive, directed away from the other charge, indicating the particles repel each other. When the charges have unlike signs, the force is negative, indicating the particles attract. To make it easy to calculate the Coulomb force on any charge at position x0, this expression can be divided by q0, leaving an expression that depends only on the other charge (the source charge): E1(x0) = F / q0 = (q1 / 4πε0) r̂ / |r|², where E1 is the component of the electric field at x0 due to q1. This is the electric field at point x0 due to the point charge q1; it is a vector-valued function equal to the Coulomb force per unit charge that a positive point charge would experience at the position x0. Since this formula gives the electric field magnitude and direction at any point x0 in space (except at the location of the charge itself, x1, where it becomes infinite), it defines a vector field. From the above formula it can be seen that the electric field due to a point charge is everywhere directed away from the charge if it is positive, and toward the charge if it is negative, and its magnitude decreases with the inverse square of the distance from the charge.
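A minimal Python sketch of the point-charge field reconstructed above; the charge value and the positions are illustrative assumptions.

```python
import numpy as np

# Sketch of the point-charge field from Coulomb's law:
#   E(x) = q / (4*pi*eps0) * r_hat / |r|^2, with r = x - x_source.
# The 1 nC charge and the 1 m observation distance are illustrative.

EPS0 = 8.8541878128e-12
K = 1.0 / (4.0 * np.pi * EPS0)   # Coulomb constant, ~8.99e9 N m^2 / C^2

def e_field(q: float, source: np.ndarray, x: np.ndarray) -> np.ndarray:
    r = x - source
    dist = np.linalg.norm(r)
    return K * q * r / dist**3       # r / |r|^3 == r_hat / |r|^2

E = e_field(1e-9, np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(E)   # ~ [0, 0, 8.99] V/m for 1 nC at 1 m
```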
The Coulomb force on a charge of magnitude q at any point in space is equal to the product of the charge and the electric field at that point: F = qE. The SI unit of the electric field is the newton per coulomb (N/C), or volt per meter (V/m); in terms of the SI base units it is kg⋅m⋅s−3⋅A−1. Superposition principle Due to the linearity of Maxwell's equations, electric fields satisfy the superposition principle, which states that the total electric field, at a point, due to a collection of charges is equal to the vector sum of the electric fields at that point due to the individual charges. This principle is useful in calculating the field created by multiple point charges. If charges q1, q2, ..., qn are stationary in space at points x1, x2, ..., xn, in the absence of currents, the superposition principle says that the resulting field is the sum of the fields generated by each particle as described by Coulomb's law: E(x) = E1(x) + E2(x) + ... + En(x) = (1/4πε0) Σi qi r̂i / |ri|², where r̂i is the unit vector in the direction from point xi to point x, and ri is the displacement vector from point xi to point x. Continuous charge distributions The superposition principle allows for the calculation of the electric field due to a distribution of charge density ρ(x). By considering the charge ρ(x′)dV in each small volume of space dV at point x′ as a point charge, the resulting electric field dE(x) at point x can be calculated as dE(x) = (ρ(x′) dV / 4πε0) r̂′ / |r′|², where r̂′ is the unit vector pointing from x′ to x, and r′ is the displacement vector from x′ to x. The total field is found by summing the contributions from all the increments of volume, i.e., by integrating the charge density over the volume V: E(x) = (1/4πε0) ∫V ρ(x′) (r̂′ / |r′|²) dV. Similar equations follow for a surface charge with surface charge density σ(x′) on surface S, and for line charges with linear charge density λ(x′) on line L. Electric potential If a system is static, such that magnetic fields are not time-varying, then by Faraday's law the electric field is curl-free. In this case, one can define an electric potential, that is, a function Φ such that E = −∇Φ. This is analogous to the gravitational potential. The difference between the electric potential at two points in space is called the potential difference (or voltage) between the two points. In general, however, the electric field cannot be described independently of the magnetic field. Given the magnetic vector potential A, defined so that B = ∇ × A, one can still define an electric potential Φ such that E = −∇Φ − ∂A/∂t, where ∇Φ is the gradient of the electric potential and ∂A/∂t is the partial derivative of A with respect to time. Faraday's law of induction can be recovered by taking the curl of that equation, which justifies, a posteriori, the previous form for E. Continuous vs. discrete charge representation The equations of electromagnetism are best described in a continuous description. However, charges are sometimes best described as discrete points; for example, some models may describe electrons as point sources where the charge density is infinite on an infinitesimal section of space. A charge q located at x0 can be described mathematically as a charge density ρ(x) = q δ(x − x0), where the (three-dimensional) Dirac delta function is used. Conversely, a charge distribution can be approximated by many small point charges.
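The superposition sum lends itself directly to code. This Python sketch adds the Coulomb fields of a +1 nC/−1 nC pair, an illustrative dipole; the names and values are assumptions for the example.

```python
import numpy as np

# Sketch of the superposition principle: the field of several point charges is
# the vector sum of their individual Coulomb fields, E(x) = sum_i K*q_i*r_i/|r_i|^3.

EPS0 = 8.8541878128e-12
K = 1.0 / (4.0 * np.pi * EPS0)

charges = [(+1e-9, np.array([0.0, 0.0, +0.5e-3])),    # +1 nC
           (-1e-9, np.array([0.0, 0.0, -0.5e-3]))]    # -1 nC, 1 mm apart

def total_field(x: np.ndarray) -> np.ndarray:
    E = np.zeros(3)
    for q, pos in charges:
        r = x - pos
        E += K * q * r / np.linalg.norm(r)**3
    return E

print(total_field(np.array([0.0, 0.0, 0.01])))   # on-axis field of the dipole
```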
This suggests similarities between the electric field $\mathbf{E}$ and the gravitational field $\mathbf{g}$, or their associated potentials. Mass is sometimes called "gravitational charge". Electrostatic and gravitational forces both are central, conservative and obey an inverse-square law. Uniform fields A uniform field is one in which the electric field is constant at every point. It can be approximated by placing two conducting plates parallel to each other and maintaining a voltage (potential difference) between them; it is only an approximation because of boundary effects (near the edge of the planes, the electric field is distorted because the plane does not continue). Assuming infinite planes, the magnitude of the electric field $E$ is:

$$E = -\frac{\Delta V}{d}$$

where $\Delta V$ is the potential difference between the plates and $d$ is the distance separating the plates. The negative sign arises as positive charges repel, so a positive charge will experience a force away from the positively charged plate, in the opposite direction to that in which the voltage increases. In micro- and nano-applications, for instance in relation to semiconductors, a typical magnitude of an electric field is in the order of $10^6\,\mathrm{V/m}$, achieved by applying a voltage of the order of 1 volt between conductors spaced 1 μm apart. Electromagnetic fields Electromagnetic fields are electric and magnetic fields, which may change with time, for instance when charges are in motion. Moving charges produce a magnetic field in accordance with Ampère's circuital law (with Maxwell's addition), which, along with Maxwell's other equations, defines the magnetic field, $\mathbf{B}$, in terms of its curl:

$$\nabla\times\mathbf{B} = \mu_0\left(\mathbf{J} + \varepsilon_0 \frac{\partial\mathbf{E}}{\partial t}\right)$$

where $\mathbf{J}$ is the current density, $\mu_0$ is the vacuum permeability, and $\varepsilon_0$ is the vacuum permittivity. Both the electric current density and the partial derivative of the electric field with respect to time contribute to the curl of the magnetic field. In addition, the Maxwell–Faraday equation states

$$\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}$$

These represent two of Maxwell's four equations and they intricately link the electric and magnetic fields together, resulting in the electromagnetic field. The equations represent a set of four coupled multi-dimensional partial differential equations which, when solved for a system, describe the combined behavior of the electromagnetic fields. In general, the force experienced by a test charge in an electromagnetic field is given by the Lorentz force law:

$$\mathbf{F} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B})$$

Energy in the electric field The total energy per unit volume stored by the electromagnetic field is

$$u_{EM} = \frac{\varepsilon}{2}|\mathbf{E}|^2 + \frac{1}{2\mu}|\mathbf{B}|^2$$

where $\varepsilon$ is the permittivity of the medium in which the field exists, $\mu$ its magnetic permeability, and $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic field vectors. As $\mathbf{E}$ and $\mathbf{B}$ fields are coupled, it would be misleading to split this expression into "electric" and "magnetic" contributions. In particular, an electrostatic field in any given frame of reference in general transforms into a field with a magnetic component in a relatively moving frame. Accordingly, decomposing the electromagnetic field into an electric and magnetic component is frame-specific, and similarly for the associated energy. The total energy $U_{EM}$ stored in the electromagnetic field in a given volume $V$ is

$$U_{EM} = \frac{1}{2}\int_V \left(\varepsilon|\mathbf{E}|^2 + \frac{|\mathbf{B}|^2}{\mu}\right) dV$$

Electric displacement field Definitive equation of vector fields In the presence of matter, it is helpful to extend the notion of the electric field into three vector fields:

$$\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P}$$

where $\mathbf{P}$ is the electric polarization – the volume density of electric dipole moments, and $\mathbf{D}$ is the electric displacement field. Since $\mathbf{E}$ and $\mathbf{P}$ are defined separately, this equation can be used to define $\mathbf{D}$.
The physical interpretation of $\mathbf{D}$ is not as clear as $\mathbf{E}$ (effectively the field applied to the material) or $\mathbf{P}$ (induced field due to the dipoles in the material), but still serves as a convenient mathematical simplification, since Maxwell's equations can be simplified in terms of free charges and currents. Constitutive relation The $\mathbf{E}$ and $\mathbf{D}$ fields are related by the permittivity of the material, $\varepsilon$. For linear, homogeneous, isotropic materials $\mathbf{E}$ and $\mathbf{D}$ are proportional and $\varepsilon$ is constant throughout the region; there is no position dependence:

$$\mathbf{D}(\mathbf{r}) = \varepsilon\mathbf{E}(\mathbf{r})$$

For inhomogeneous materials, there is a position dependence throughout the material:

$$\mathbf{D}(\mathbf{r}) = \varepsilon(\mathbf{r})\mathbf{E}(\mathbf{r})$$

For anisotropic materials the $\mathbf{E}$ and $\mathbf{D}$ fields are not parallel, and so $\mathbf{E}$ and $\mathbf{D}$ are related by the permittivity tensor (a 2nd order tensor field), in component form:

$$D_i = \varepsilon_{ij} E_j$$

For non-linear media, $\mathbf{E}$ and $\mathbf{D}$ are not proportional. Materials can have varying extents of linearity, homogeneity and isotropy. Relativistic effects on electric field Point charge in uniform motion The invariance of the form of Maxwell's equations under Lorentz transformation can be used to derive the electric field of a uniformly moving point charge. The charge of a particle is considered frame invariant, as supported by experimental evidence. Alternatively, the electric field of uniformly moving point charges can be derived from the Lorentz transformation of the four-force experienced by test charges in the source's rest frame given by Coulomb's law, assigning the electric field and magnetic field by their definition given by the form of the Lorentz force. However, the following equation is only applicable when no acceleration is involved in the particle's history, where Coulomb's law can be considered or symmetry arguments can be used for solving Maxwell's equations in a simple manner. The electric field of such a uniformly moving point charge is hence given by:

$$\mathbf{E} = \frac{q}{4\pi\varepsilon_0 r^2} \, \frac{1-\beta^2}{(1-\beta^2\sin^2\theta)^{3/2}} \, \hat{\mathbf{r}}$$

where $q$ is the charge of the point source, $\mathbf{r}$ is the position vector from the point source to the point in space, $\beta$ is the ratio of observed speed of the charged particle to the speed of light and $\theta$ is the angle between $\mathbf{r}$ and the observed velocity of the charged particle. The above equation reduces to that given by Coulomb's law for non-relativistic speeds of the point charge. Spherical symmetry is not satisfied due to breaking of symmetry in the problem by specification of direction of velocity for calculation of field. To illustrate this, field lines of moving charges are sometimes represented as unequally spaced radial lines which would appear equally spaced in a co-moving reference frame. Propagation of disturbances in electric fields The special theory of relativity imposes the principle of locality, which requires cause and effect to be time-like separated events where the causal efficacy does not travel faster than the speed of light. Maxwell's laws are found to conform to this view, since the general solutions of fields are given in terms of retarded time, which indicates that electromagnetic disturbances travel at the speed of light. Advanced time, which also provides a solution for Maxwell's laws, is ignored as an unphysical solution. For the motion of a charged particle, considering for example the case of a moving particle with the above described electric field coming to an abrupt stop, the electric fields at points far from it do not immediately revert to that classically given for a stationary charge.
On stopping, the field around the stationary point begins to revert to the expected state, and this effect propagates outwards at the speed of light, while the electric field lines far away from this will continue to point radially towards an assumed moving charge. This virtual particle will never be outside the range of propagation of the disturbance in the electromagnetic field, since charged particles are restricted to have speeds slower than that of light, which makes it impossible to construct a Gaussian surface in this region that violates Gauss's law. Another technical difficulty that supports this is that charged particles travelling faster than or equal to the speed of light no longer have a unique retarded time. Since electric field lines are continuous, an electromagnetic pulse of radiation is generated that connects at the boundary of this disturbance travelling outwards at the speed of light. In general, any accelerating point charge radiates electromagnetic waves; however, non-radiating acceleration is possible in a system of charges. Arbitrarily moving point charge For arbitrarily moving point charges, propagation of potential fields such as Lorenz gauge fields at the speed of light needs to be accounted for by using the Liénard–Wiechert potential. Since the potentials satisfy Maxwell's equations, the fields derived for a point charge also satisfy Maxwell's equations. The electric field is expressed as:

$$\mathbf{E}(\mathbf{r},t) = \frac{q}{4\pi\varepsilon_0} \left[ \frac{\mathbf{n}_s - \boldsymbol{\beta}_s}{\gamma^2 (1-\mathbf{n}_s\cdot\boldsymbol{\beta}_s)^3 \, |\mathbf{r}-\mathbf{r}_s|^2} + \frac{\mathbf{n}_s \times \big((\mathbf{n}_s-\boldsymbol{\beta}_s)\times\dot{\boldsymbol{\beta}}_s\big)}{c\,(1-\mathbf{n}_s\cdot\boldsymbol{\beta}_s)^3 \, |\mathbf{r}-\mathbf{r}_s|} \right]_{t=t_r}$$

where $q$ is the charge of the point source, $t_r$ is the retarded time or the time at which the source's contribution of the electric field originated, $\mathbf{r}_s(t)$ is the position vector of the particle, $\mathbf{n}_s(\mathbf{r},t)$ is a unit vector pointing from the charged particle to the point in space, $\boldsymbol{\beta}_s(t)$ is the velocity of the particle divided by the speed of light, and $\gamma(t)$ is the corresponding Lorentz factor. The retarded time is given as the solution of:

$$t_r = t - \frac{|\mathbf{r}-\mathbf{r}_s(t_r)|}{c}$$

The uniqueness of the solution for $t_r$ for given $t$, $\mathbf{r}$ and $\mathbf{r}_s(t)$ is valid for charged particles moving slower than the speed of light. Electromagnetic radiation of accelerating charges is known to be caused by the acceleration-dependent term in the electric field, from which the relativistic correction for the Larmor formula is obtained. There exists yet another set of solutions for Maxwell's equations of the same form but for advanced time $t_a$ instead of retarded time, given as a solution of:

$$t_a = t + \frac{|\mathbf{r}-\mathbf{r}_s(t_a)|}{c}$$

Since the physical interpretation of this indicates that the electric field at a point is governed by the particle's state at a point of time in the future, it is considered as an unphysical solution and hence neglected. However, there have been theories exploring the advanced time solutions of Maxwell's equations, such as the Feynman–Wheeler absorber theory. The above equation, although consistent with that of uniformly moving point charges as well as its non-relativistic limit, is not corrected for quantum-mechanical effects. Common formulæ The electric field infinitely close to a conducting surface in electrostatic equilibrium having charge density $\sigma$ at that point is $\sigma/\varepsilon_0$, since charges are only formed on the surface and the surface at the infinitesimal scale resembles an infinite 2D plane. In the absence of external fields, spherical conductors exhibit a uniform charge distribution on the surface and hence have the same electric field as that of a uniform spherical surface distribution.
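As a quick numerical check on the uniform-field and energy-density expressions above, the following sketch evaluates the field between plates 1 μm apart held at 1 V, and the vacuum electrostatic energy density that field stores (illustrative values only):

```python
EPSILON_0 = 8.8541878128e-12   # vacuum permittivity, F/m

# Uniform field between parallel plates, ignoring edge effects: |E| = V / d
V = 1.0        # potential difference, volts
d = 1e-6       # plate separation, 1 micrometre
E = V / d
print(f"E = {E:.1e} V/m")      # 1.0e+06 V/m, the magnitude quoted for microdevices

# Electrostatic energy density of that field in vacuum: u = (eps0 / 2) * E^2
u = 0.5 * EPSILON_0 * E**2
print(f"u = {u:.3f} J/m^3")    # ~ 4.427 J/m^3
```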
https://en.wikipedia.org/wiki/Epoch
Epoch
In chronology and periodization, an epoch or reference epoch is an instant in time chosen as the origin of a particular calendar era. The "epoch" serves as a reference point from which time is measured. The moment of epoch is usually decided by congruity, or by following conventions understood from the epoch in question. The epoch moment or date is usually defined from a specific, clear event of change, an epoch event. In a more gradual change, a deciding moment is chosen when the epoch criterion was reached. Calendar eras Pre-modern eras The Yoruba calendar (Kọ́jọ́dá) uses 8042 BC as the epoch, regarded as the year of the creation of Ile-Ife by the god Obatala, also regarded as the creation of the earth. Anno Mundi (years since the creation of the world) is used in the Byzantine calendar (5509 BC). Anno Mundi (years since the creation of the world) as used in the Hebrew calendar (3761 BC). The Mesoamerican Long Count Calendar uses the creation of the fourth world in 3114 BC. Olympiads, the ancient Greek era of four-year periods between Olympic Games, beginning in 776 BC. Ab urbe condita ("from the foundation of the city"), used to some extent by Roman calendars of the Roman imperial period (753 BC). Buddhist calendars tend to use the epoch of 544 BC (date of Buddha's parinirvana). The term Hindu calendar may refer to a number of traditional Indian calendars. A notable example of a Hindu epoch is the Vikram Samvat (58 BC), also used in modern times as the national calendars of Nepal and Bangladesh. The Julian and Gregorian calendars use as epoch the Incarnation of Jesus as calculated in the 6th century by Dionysius Exiguus. (Subsequent research has shown that this moment is about four years after the best estimate for the date of birth of Jesus.) This epoch was applied retrospectively to the Julian calendar, long after its original creation by Julius Caesar. The epoch of the Islamic calendar is the Hijra (AD 622). The year count in this calendar shifts relative to the solar year count, as the calendar is purely lunar: its year consists of 12 lunations and is thus ten or eleven days shorter than a solar year. This calendar denotes "lunar years" as Anno Hegiræ ([since] the year of the Hijra) or AH. This calendar is used in Sunni Islam and related sects. The epoch of the official Iranian calendar is also the Hijra, but it is a solar calendar; each year begins at the Northern spring equinox. This calendar is used in Shia Islam and related sects. Modern eras The Bahá'í calendar is dated from the vernal equinox of the year the Báb proclaimed his religion (AD 1844). Years are grouped in Váḥids of 19 years, and Kull-i-Shay of 361 (19×19) years. In Thailand in 1888 King Chulalongkorn decreed a National Thai Era dating from the founding of Bangkok on April 6, 1782. In 1912, New Year's Day was shifted to April 1. In 1941, Prime Minister Phibunsongkhram decided to count the years since 543 BC. This is the Thai solar calendar using the Thai Buddhist Era. Except for this era, it is the Gregorian calendar. In the French Republican Calendar, a calendar used by the French government for about twelve years from late 1793, the epoch was the beginning of the "Republican Era", September 22, 1792 (the day the French First Republic was proclaimed, one day after the Convention abolished the Ancien Regime). The Indian national calendar, introduced in 1957, follows the Saka era (AD 78). 
The Minguo calendar used by officials of Taiwan and its predecessor dates from January 1, 1912, the first year after the Xinhai Revolution, which overthrew the Qing Empire. North Korea uses a system that starts in 1912 (= Juche 1), the year of the birth of its founder Kim Il-Sung. The Fascist Era dates to Mussolini's March on Rome in 1922, and was in use only in countries under hegemony of the Fascist regime of Benito Mussolini. It has been defunct since the fall of the Italian Social Republic in 1945. In the scientific Before Present system of numbering years for purposes of radiocarbon dating, the reference date is January 1, 1950 (though the specific date January 1 is quite unnecessary, as radiocarbon dating has limited precision). Different branches of Freemasonry have selected different years to date their documents according to a Masonic era, such as the Anno Lucis (A.L.). The Holocene calendar uses 10,000 BC as the epoch, the beginning of the Holocene epoch on the geological time scale. Regnal eras The official Japanese system numbers years from the accession of the current emperor, regarding the calendar year during which the accession occurred as the first year. A similar system existed in China before 1912, being based on the accession year of the emperor (1911 was thus the third year of the Xuantong period). With the establishment of the Republic of China in 1912, the republican era was introduced. It is still very common in Taiwan to date events via the republican era. The People's Republic of China adopted the common era calendar in 1949 (the 38th year of the Chinese Republic). Other applications An epoch in computing is the time at which a system's time representation is zero. For example, Unix time is represented as the number of seconds since 00:00:00 UTC on 1 January 1970, not counting leap seconds. An epoch in astronomy is a reference time used for consistency in calculation of positions and orbits. A common astronomical epoch is J2000, which is noon on January 1, 2000, Terrestrial Time. An epoch in geochronology is a period of time, typically in the order of tens of millions of years. The current epoch is the Holocene.
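The computing sense of an epoch can be made concrete with Python's standard library; the sketch below measures time against the Unix epoch (1970-01-01 00:00:00 UTC). Note that J2000 is defined in Terrestrial Time, which differs from UTC by about a minute, so the comparison in the example is only approximate.

```python
from datetime import datetime, timezone

# The Unix epoch: 1970-01-01 00:00:00 UTC. A Unix timestamp counts
# seconds elapsed since this instant (ignoring leap seconds).
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# A moment near the J2000 astronomical epoch (noon, 1 January 2000):
moment = datetime(2000, 1, 1, 12, 0, tzinfo=timezone.utc)

seconds_since_epoch = (moment - unix_epoch).total_seconds()
print(seconds_since_epoch)    # 946728000.0
print(moment.timestamp())     # same value, computed by the standard library

# Converting a timestamp of zero back to a date recovers the epoch itself:
print(datetime.fromtimestamp(0, tz=timezone.utc))  # 1970-01-01 00:00:00+00:00
```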
https://en.wikipedia.org/wiki/Error
Error
An error (from the Latin errāre, meaning 'to wander') is an inaccurate or incorrect action, thought, or judgement. In statistics, "error" refers to the difference between the value which has been computed and the correct value. An error could result in failure or in a deviation from the intended performance or behavior. Human behavior Some references differentiate between "error" and "mistake". In human behavior the norms or expectations for behavior or its consequences can be derived from the intention of the actor or from the expectations of other individuals or from a social grouping or from social norms. (See deviance.) Gaffes and faux pas can be labels for certain instances of this kind of error. More serious departures from social norms carry labels such as misbehavior and labels from the legal system, such as misdemeanor and crime. Departures from norms connected to religion can have other labels, such as sin. Language An individual language user's deviations from standard language norms in grammar, pronunciation and orthography are sometimes referred to as errors. However, in light of the role of language usage in everyday social class distinctions, many feel that linguistics should restrain itself from such prescriptivist judgments to avoid reinforcing dominant class value claims about what linguistic forms should and should not be used. One may distinguish various kinds of linguistic errors – some, such as aphasia or speech disorders, where the user is unable to say what they intend to, are generally considered errors, while cases where natural, intended speech is non-standard (as in vernacular dialects), are considered legitimate speech in scholarly linguistics, but might be considered errors in prescriptivist contexts.
https://en.wikipedia.org/wiki/Firmware
Firmware
In computing, firmware is software that provides low-level control of computing device hardware. For a relatively simple device, firmware may perform all control, monitoring and data manipulation functionality. For a more complex device, firmware may provide relatively low-level control as well as hardware abstraction services to higher-level software such as an operating system. Firmware is found in a wide range of computing devices including personal computers, smartphones, home appliances, vehicles, computer peripherals and in many of the integrated circuits inside each of these larger systems. Firmware is stored in non-volatile memory, either read-only memory (ROM) or programmable memory such as EPROM, EEPROM, or flash. Changing a device's firmware stored in ROM requires physically replacing the memory chip, although some chips are not designed to be removed after manufacture. Programmable firmware memory can be reprogrammed via a procedure sometimes called flashing. Common reasons for changing firmware include fixing bugs and adding features. History and etymology Ascher Opler used the term firmware in a 1967 Datamation article, as an intermediary term between hardware and software. Opler projected that fourth-generation computer systems would have a writable control store (a small specialized high-speed memory) into which microcode firmware would be loaded. Many software functions would be moved to microcode, and instruction sets could be customized, with different firmware loaded for different instruction sets. As computers began to increase in complexity, it became clear that various programs needed to first be initiated and run to provide a consistent environment necessary for running more complex programs at the user's discretion. This required programming the computer to run those programs automatically. Furthermore, as companies, universities, and marketers wanted to sell computers to laypeople with little technical knowledge, greater automation became necessary to allow a lay-user to easily run programs for practical purposes. This gave rise to a kind of software that a user would not consciously run, and it led to software that a lay user wouldn't even know about. As originally used, firmware contrasted with hardware (the CPU itself) and software (normal instructions executing on a CPU). It was not composed of CPU machine instructions, but of lower-level microcode involved in the implementation of machine instructions. It existed on the boundary between hardware and software; thus the name firmware. Over time, popular usage extended the word firmware to denote any computer program that is tightly linked to hardware, including BIOS on PCs, boot firmware on smartphones, computer peripherals, or the control systems on simple consumer electronic devices such as microwave ovens and remote controls. Applications Computers In some respects, the various firmware components are as important as the operating system in a working computer. However, unlike most modern operating systems, firmware rarely has a well-evolved automatic mechanism of updating itself to fix any functionality issues detected after shipping the unit. A computer's firmware may be manually updated by a user via a small utility program. In contrast, firmware in mass storage devices (hard-disk drives, optical disc drives, flash memory storage such as solid-state drives) is less frequently updated, even when flash memory (rather than ROM or EEPROM) storage is used for the firmware.
Most computer peripherals are themselves special-purpose computers. Devices such as printers, scanners, webcams, and USB flash drives have internally-stored firmware; some devices may also permit field upgrading of their firmware. For modern simpler devices, such as USB keyboards, USB mice and USB sound cards, the trend is to store the firmware in on-chip memory in the device's microcontroller, as opposed to storing it in a separate EEPROM chip. Examples of computer firmware include:
- the BIOS firmware used on PCs
- the (U)EFI-compliant firmware used on Itanium systems, Intel-based Macs, and many newer PCs
- hard disk drive, solid-state drive, optical disc drive and optical disc recorder firmware
- the video BIOS of a graphics card
- Open Firmware, used in SPARC-based computers from Sun Microsystems and Oracle Corporation, PowerPC-based computers from Apple, and computers from Genesi
- ARCS, used in computers from Silicon Graphics
- Kickstart, used in the Amiga line of computers (POST, hardware init + Plug and Play auto-configuration of peripherals, kernel, etc.)
- RTAS (Run-Time Abstraction Services), used in System i and System p computers from IBM
- the Common Firmware Environment (CFE) for Broadcom systems-on-chip (SoCs)
Home and personal-use products Consumer appliances like gaming consoles, digital cameras and portable music players support firmware upgrades. Some companies use firmware updates to add new playable file formats (codecs). Other features that may change with firmware updates include the GUI or even the battery life. Smartphones have a firmware over the air upgrade capability for adding new features and patching security issues. Automobiles Since 1996, most automobiles have employed an on-board computer and various sensors to detect mechanical problems. Modern vehicles also employ computer-controlled anti-lock braking systems (ABS) and computer-operated transmission control units (TCUs). The driver can also get in-dash information while driving in this manner, such as real-time fuel economy and tire pressure readings. Local dealers can update most vehicle firmware. Other examples Other firmware applications include:
- in home and personal-use products: timing and control systems for washing machines; controlling sound and video attributes, as well as the channel list, in modern televisions
- in routers, switches, and firewalls: LibreCMC, a 100% free software router distribution based on the Linux-libre kernel; IPFire, fli4l and OpenWrt, open-source firewall/router distributions based on the Linux kernel; m0n0wall, an embedded firewall distribution of FreeBSD; and proprietary firmware
- in NAS systems: NAS4Free, an open-source NAS operating system based on FreeBSD; Openfiler, an open-source NAS operating system based on the Linux kernel; and proprietary firmware
- field-programmable gate array (FPGA) code, which may be referred to as firmware
Flashing Flashing is a process that involves the overwriting of existing firmware or data, contained in EEPROM or flash memory module present in an electronic device, with new data. This can be done to upgrade a device or to change the provider of a service associated with the function of the device, such as changing from one mobile phone service provider to another or installing a new operating system.
If firmware is upgradable, the upgrade is often performed via a program from the provider, which will often allow the old firmware to be saved before upgrading so it can be reverted to if the process fails, or if the newer version performs worse. Free software replacements for vendor flashing tools have been developed, such as Flashrom. Firmware hacking Sometimes, third parties develop an unofficial new or modified ("aftermarket") version of firmware to provide new features or to unlock hidden functionality; this is referred to as custom firmware. An example is Rockbox as a firmware replacement for portable media players. There are many homebrew projects for various devices, which often unlock general-purpose computing functionality in previously limited devices (e.g., running Doom on iPods). Firmware hacks usually take advantage of the firmware update facility on many devices to install or run themselves. Some, however, must resort to exploits to run, because the manufacturer has attempted to lock the hardware to stop it from running unlicensed code. Most firmware hacks are free software. HDD firmware hacks The Moscow-based Kaspersky Lab discovered that a group of developers it refers to as the Equation Group has developed hard disk drive firmware modifications for various drive models, containing a trojan horse that allows data to be stored on the drive in locations that will not be erased even if the drive is formatted or wiped. Although the Kaspersky Lab report did not explicitly claim that this group is part of the United States National Security Agency (NSA), evidence obtained from the code of various Equation Group software suggests that they are part of the NSA. Researchers from the Kaspersky Lab categorized the undertakings by Equation Group as the most advanced hacking operation ever uncovered, also documenting around 500 infections caused by the Equation Group in at least 42 countries. Security risks Mark Shuttleworth, the founder of the company Canonical, which created the Ubuntu Linux distribution, has described proprietary firmware as a security risk, saying that "firmware on your device is the NSA's best friend" and calling firmware "a trojan horse of monumental proportions". He has asserted that low-quality, closed source firmware is a major threat to system security: "Your biggest mistake is to assume that the NSA is the only institution abusing this position of trust; in fact, it's reasonable to assume that all firmware is a cesspool of insecurity, courtesy of incompetence of the highest degree from manufacturers, and competence of the highest degree from a very wide range of such agencies". As a potential solution to this problem, he has called for declarative firmware, which would describe "hardware linkage and dependencies" and "should not include executable code". In his view, firmware should be open-source so that the code can be checked and verified. Custom firmware hacks have also focused on injecting malware into devices such as smartphones or USB devices. One such smartphone injection was demonstrated on the Symbian OS at MalCon, a hacker convention. A USB device firmware hack called BadUSB was presented at the Black Hat USA 2014 conference, demonstrating how a USB flash drive microcontroller can be reprogrammed to spoof various other device types to take control of a computer, exfiltrate data, or spy on the user.
Other security researchers have worked further on how to exploit the principles behind BadUSB, releasing at the same time the source code of hacking tools that can be used to modify the behavior of different USB devices.
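The backup-then-verify flashing workflow described above can be sketched in Python. This is only an illustration of the flow, not a real vendor tool: `device.read_flash` and `device.write_flash` are hypothetical APIs standing in for a manufacturer's flashing interface, and the expected digest would come from the firmware provider.

```python
import hashlib
from pathlib import Path

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def flash_firmware(device, new_image: bytes, expected_sha256: str) -> None:
    # 1. Back up the current firmware so a failed upgrade can be reverted.
    backup = device.read_flash()                     # hypothetical device API
    Path("firmware_backup.bin").write_bytes(backup)

    # 2. Verify the new image against the provider's published digest
    #    before touching the flash memory at all.
    if sha256(new_image) != expected_sha256:
        raise ValueError("firmware image failed integrity check; aborting")

    # 3. Write, then read back and compare to confirm the flash succeeded.
    device.write_flash(new_image)                    # hypothetical device API
    if device.read_flash() != new_image:
        device.write_flash(backup)                   # revert to the saved image
        raise RuntimeError("post-flash verification failed; restored backup")
```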
https://en.wikipedia.org/wiki/Frequency-shift%20keying
Frequency-shift keying
Frequency-shift keying (FSK) is a frequency modulation scheme in which digital information is encoded on a carrier signal by periodically shifting the frequency of the carrier between several discrete frequencies. The technology is used for communication systems such as telemetry, weather balloon radiosondes, caller ID, garage door openers, and low frequency radio transmission in the VLF and ELF bands. The simplest FSK is binary FSK (BFSK, which is also commonly referred to as 2FSK or 2-FSK), in which the carrier is shifted between two discrete frequencies to transmit binary (0s and 1s) information. Modulating and demodulating Reference implementations of FSK modems exist and are documented in detail. The demodulation of a binary FSK signal can be done using the Goertzel algorithm very efficiently, even on low-power microcontrollers. Variations Multiple frequency-shift keying Continuous-phase frequency-shift keying In principle FSK can be implemented by using completely independent free-running oscillators, and switching between them at the beginning of each symbol period. In general, independent oscillators will not be at the same phase and therefore the same amplitude at the switch-over instant, causing sudden discontinuities in the transmitted signal. In practice, many FSK transmitters use only a single oscillator, and the process of switching to a different frequency at the beginning of each symbol period preserves the phase. The elimination of discontinuities in the phase (and therefore elimination of sudden changes in amplitude) reduces sideband power, reducing interference with neighboring channels. Gaussian frequency-shift keying Rather than directly modulating the frequency with the digital data symbols, "instantaneously" changing the frequency at the beginning of each symbol period, Gaussian frequency-shift keying (GFSK) filters the data pulses with a Gaussian filter to make the transitions smoother. This filter has the advantage of reducing sideband power, reducing interference with neighboring channels, at the cost of increasing intersymbol interference. It is used by Improved Layer 2 Protocol, DECT, Bluetooth, Cypress WirelessUSB, Nordic Semiconductor, Texas Instruments, IEEE 802.15.4, Z-Wave and Wavenis devices. For basic data rate Bluetooth the minimum deviation is 115 kHz. A GFSK modulator differs from a simple frequency-shift keying modulator in that before the baseband waveform (with levels −1 and +1) goes into the FSK modulator, it is passed through a Gaussian filter to make the transitions smoother and so limit spectral width. Gaussian filtering is a standard way to reduce spectral width; it is called pulse shaping in this application. In ordinary non-filtered FSK, at a jump from −1 to +1 or +1 to −1, the modulated waveform changes rapidly, which introduces large out-of-band spectrum. If the pulse is changed going from −1 to +1 as −1, −0.98, −0.93, ..., +0.93, +0.98, +1, and this smoother pulse is used to determine the carrier frequency, the out-of-band spectrum will be reduced. Minimum-shift keying Minimum frequency-shift keying or minimum-shift keying (MSK) is a particular spectrally efficient form of coherent FSK. In MSK, the difference between the higher and lower frequency is identical to half the bit rate. Consequently, the waveforms that represent a 0 and a 1 bit differ by exactly half a carrier period. The maximum frequency deviation is δ = 0.25 fm, where fm is the maximum modulating frequency. As a result, the modulation index m is 0.5.
This is the smallest FSK modulation index that can be chosen such that the waveforms for 0 and 1 are orthogonal. Gaussian minimum-shift keying A variant of MSK called Gaussian minimum-shift keying (GMSK) is used in the GSM mobile phone standard. Audio frequency-shift keying Audio frequency-shift keying (AFSK) is a modulation technique by which digital data is represented by changes in the frequency (pitch) of an audio tone, yielding an encoded signal suitable for transmission via radio or telephone. Normally, the transmitted audio alternates between two tones: one, the "mark", represents a binary one; the other, the "space", represents a binary zero. AFSK differs from regular frequency-shift keying in performing the modulation at baseband frequencies. In radio applications, the AFSK-modulated signal is normally used to modulate an RF carrier (using a conventional technique, such as AM or FM) for transmission. AFSK is generally not used for high-speed data communications, since it is far less efficient in both power and bandwidth than most other modulation modes. In addition to its simplicity, however, AFSK has the advantage that encoded signals will pass through AC-coupled links, including most equipment originally designed to carry music or speech. AFSK is used in the U.S.-based Emergency Alert System to notify stations of the type of emergency, locations affected, and the time of issue without actually hearing the text of the alert. Multilevel frequency-shift keying Phase 1 radios in the Project 25 system use 4-level frequency-shift keying (4FSK). Applications In 1910, Reginald Fessenden invented a two-tone method of transmitting Morse code. Dots and dashes were replaced with different tones of equal length. The intent was to minimize transmission time. Some early Continuous Wave (CW) transmitters employed an arc converter that could not be conveniently keyed. Instead of turning the arc on and off, the key slightly changed the transmitter frequency in a technique known as the compensation-wave method. The compensation-wave was not used at the receiver. Spark transmitters used for this method consumed a lot of bandwidth and caused interference, so it was discouraged by 1921. Most early telephone-line modems used audio frequency-shift keying (AFSK) to send and receive data at rates up to about 1200 bits per second. The Bell 103 and Bell 202 modems used this technique. Even today, North American caller ID uses 1200 baud AFSK in the form of the Bell 202 standard. Some early microcomputers used a specific form of AFSK modulation, the Kansas City standard, to store data on audio cassettes. AFSK is still widely used in amateur radio, as it allows data transmission through unmodified voiceband equipment. AFSK is also used in the United States' Emergency Alert System to transmit warning information. It is used at higher bitrates for Weathercopy used on Weatheradio by NOAA in the U.S. The CHU shortwave radio station in Ottawa, Ontario, Canada broadcasts an exclusive digital time signal encoded using AFSK modulation. Caller ID and remote metering standards Frequency-shift keying (FSK) is commonly used over telephone lines for caller ID (displaying callers' numbers) and remote metering applications. There are several variations of this technology.
European Telecommunications Standards Institute In some countries of Europe, the European Telecommunications Standards Institute (ETSI) standards 200 778-1 and -2 – replacing 300 778-1 & -2 – allow 3 physical transport layers (Telcordia Technologies (formerly Bellcore), British Telecom (BT) and Cable Communications Association (CCA)), combined with 2 data formats Multiple Data Message Format (MDMF) & Single Data Message Format (SDMF), plus the Dual-tone multi-frequency (DTMF) system and a no-ring mode for meter-reading and the like. It's more of a recognition that the different types exist than an attempt to define a single "standard". Telcordia Technologies The Telcordia Technologies (formerly Bellcore) standard is used in the United States, Canada (but see below), Australia, China, Hong Kong and Singapore. It sends the data after the first ring tone and uses the 1200 bits per second Bell 202 tone modulation. The data may be sent in SDMF – which includes the date, time and number – or in MDMF, which adds a NAME field. British Telecom British Telecom (BT) in the United Kingdom developed their own standard, which wakes up the display with a line reversal, then sends the data as CCITT v.23 modem tones in a format similar to MDMF. It is used by BT, wireless networks like the late Ionica, and some cable companies. Details are to be found in BT Supplier Information
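As noted at the start of this article, binary FSK can be demodulated efficiently with the Goertzel algorithm by comparing the energy at the mark and space frequencies and picking the stronger tone. The following Python sketch illustrates the idea under simplifying assumptions: the 1200 Hz/2200 Hz tone pair is borrowed from the Bell 202 standard discussed above, and the one-block-per-symbol framing ignores symbol synchronisation entirely.

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Relative signal power at `freq` in `samples`, via the Goertzel algorithm."""
    k = round(len(samples) * freq / sample_rate)   # nearest DFT bin
    w = 2 * math.pi * k / len(samples)
    coeff = 2 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def demodulate_bfsk(samples, sample_rate, baud, mark=1200.0, space=2200.0):
    """Decide one bit per symbol period: 1 if the mark tone dominates, else 0."""
    n = int(sample_rate / baud)                    # samples per symbol
    bits = []
    for i in range(0, len(samples) - n + 1, n):
        block = samples[i:i + n]
        bits.append(1 if goertzel_power(block, sample_rate, mark)
                         > goertzel_power(block, sample_rate, space) else 0)
    return bits

# Example: one 1200 Hz symbol at 48 kHz and 1200 baud decodes as a single 1 bit.
fs, baud = 48000, 1200
tone = [math.sin(2 * math.pi * 1200 * t / fs) for t in range(fs // baud)]
print(demodulate_bfsk(tone, fs, baud))  # [1]
```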
https://en.wikipedia.org/wiki/Gel
Gel
A gel is a semi-solid that can have properties ranging from soft and weak to hard and tough. Gels are defined as a substantially dilute cross-linked system, which exhibits no flow when in the steady state, although the liquid phase may still diffuse through this system. Gels are mostly liquid by mass, yet they behave like solids because of a three-dimensional cross-linked network within the liquid. It is the cross-linking within the fluid that gives a gel its structure (hardness) and contributes to the adhesive stick (tack). In this way, gels are a dispersion of molecules of a liquid within a solid medium. The word gel was coined by 19th-century Scottish chemist Thomas Graham by clipping from gelatine. The process of forming a gel is called gelation. Composition Gels consist of a solid three-dimensional network that spans the volume of a liquid medium and ensnares it through surface tension effects. This internal network structure may result from physical bonds such as polymer chain entanglements (see polymers) (physical gels) or chemical bonds such as disulfide bonds (see thiomers) (chemical gels), as well as crystallites or other junctions that remain intact within the extending fluid. Virtually any fluid can be used as an extender including water (hydrogels), oil, and air (aerogel). Both by weight and volume, gels are mostly fluid in composition and thus exhibit densities similar to those of their constituent liquids. Edible jelly is a common example of a hydrogel and has approximately the density of water. Polyionic polymers Polyionic polymers are polymers with an ionic functional group. The ionic charges prevent the formation of tightly coiled polymer chains. This allows them to contribute more to viscosity in their stretched state, because the stretched-out polymer takes up more space. This is also the reason gel hardens. See polyelectrolyte for more information. Types Colloidal gels A colloidal gel consists of a percolated network of particles in a fluid medium, providing mechanical properties, in particular the emergence of elastic behaviour. The particles can show attractive interactions through osmotic depletion or through polymeric links. Colloidal gels have three phases in their lifespan: gelation, aging and collapse. The gel is initially formed by the assembly of particles into a space-spanning network, leading to a phase arrest. In the aging phase, the particles slowly rearrange to form thicker strands, increasing the elasticity of the material. Gels can also be collapsed and separated by external fields such as gravity. Colloidal gels show linear response rheology at low amplitudes. These materials have been explored as candidates for a drug release matrix. Hydrogels A hydrogel is a network of polymer chains that are hydrophilic, sometimes found as a colloidal gel in which water is the dispersion medium. A three-dimensional solid results from the hydrophilic polymer chains being held together by cross-links. Because of the inherent cross-links, the structural integrity of the hydrogel network does not dissolve from the high concentration of water. Hydrogels are highly absorbent (they can contain over 90% water) natural or synthetic polymeric networks. Hydrogels also possess a degree of flexibility very similar to natural tissue, due to their significant water content. 
As responsive "smart materials," hydrogels can encapsulate chemical systems which upon stimulation by external factors such as a change of pH may cause specific compounds such as glucose to be liberated to the environment, in most cases by a gel-sol transition to the liquid state. Chemomechanical polymers are mostly also hydrogels, which upon stimulation change their volume and can serve as actuators or sensors. The first appearance of the term 'hydrogel' in the literature was in 1894. Organogels An organogel is a non-crystalline, non-glassy thermoreversible (thermoplastic) solid material composed of a liquid organic phase entrapped in a three-dimensionally cross-linked network. The liquid can be, for example, an organic solvent, mineral oil, or vegetable oil. The solubility and particle dimensions of the structurant are important characteristics for the elastic properties and firmness of the organogel. Often, these systems are based on self-assembly of the structurant molecules. (An example of formation of an undesired thermoreversible network is the occurrence of wax crystallization in petroleum.) Organogels have potential for use in a number of applications, such as in pharmaceuticals, cosmetics, art conservation, and food. Xerogels A xerogel is a solid formed from a gel by drying with unhindered shrinkage. Xerogels usually retain high porosity (15–50%) and enormous surface area (150–900 m²/g), along with very small pore size (1–10 nm). When solvent removal occurs under supercritical conditions, the network does not shrink and a highly porous, low-density material known as an aerogel is produced. Heat treatment of a xerogel at elevated temperature produces viscous sintering (shrinkage of the xerogel due to a small amount of viscous flow) which results in a denser and more robust solid; the density and porosity achieved depend on the sintering conditions. Nanocomposite hydrogels Nanocomposite hydrogels, or hybrid hydrogels, are highly hydrated polymeric networks, either physically or covalently crosslinked with each other and/or with nanoparticles or nanostructures. Nanocomposite hydrogels can mimic native tissue properties, structure and microenvironment due to their hydrated and interconnected porous structure. A wide range of nanoparticles, such as carbon-based, polymeric, ceramic, and metallic nanomaterials can be incorporated within the hydrogel structure to obtain nanocomposites with tailored functionality. Nanocomposite hydrogels can be engineered to possess superior physical, chemical, electrical, thermal, and biological properties. Properties Many gels display thixotropy – they become fluid when agitated, but resolidify when resting. In general, gels are apparently solid, jelly-like materials. They are a type of non-Newtonian fluid. By replacing the liquid with gas it is possible to prepare aerogels, materials with exceptional properties including very low density, high specific surface areas, and excellent thermal insulation properties. Thermodynamics of gel deformation A gel is in essence the mixture of a polymer network and a solvent phase. Upon stretching, the network crosslinks are moved further apart from each other. Due to the polymer strands between crosslinks acting as entropic springs, gels demonstrate elasticity like rubber (which is just a polymer network, without solvent).
This is so because the free energy penalty to stretch an ideal polymer segment of $N$ monomers of size $b$ between crosslinks to an end-to-end distance $R$ is approximately given by

$$\Delta F_{\text{el}} \approx k_B T \, \frac{3R^2}{2Nb^2}$$

This is the origin of both gel and rubber elasticity. But one key difference is that gel contains an additional solvent phase and hence is capable of having significant volume changes under deformation by taking in and out solvent. For example, a gel could swell to several times its initial volume after being immersed in a solvent after equilibrium is reached. This is the phenomenon of gel swelling. On the contrary, if we take the swollen gel out and allow the solvent to evaporate, the gel would shrink to roughly its original size. This gel volume change can alternatively be introduced by applying external forces. If a uniaxial compressive stress is applied to a gel, some solvent contained in the gel would be squeezed out and the gel shrinks in the applied-stress direction. To study the gel mechanical state in equilibrium, a good starting point is to consider a cubic gel of volume $V_0$ that is stretched by factors $\lambda_1$, $\lambda_2$ and $\lambda_3$ in the three orthogonal directions during swelling after being immersed in a solvent phase of initial volume $V_s$. The final deformed volume of gel is then $\lambda_1\lambda_2\lambda_3 V_0$ and the total volume of the system is $V_0 + V_s$, which is assumed constant during the swelling process for simplicity of treatment. The swollen state of the gel is now completely characterized by stretch factors $\lambda_1$, $\lambda_2$ and $\lambda_3$, and hence it is of interest to derive the deformation free energy as a function of them, denoted as $f(\lambda_1,\lambda_2,\lambda_3)$. For analogy to the historical treatment of rubber elasticity and mixing free energy, $f$ is most often defined as the free energy difference after and before the swelling normalized by the initial gel volume $V_0$, that is, a free energy difference density. The form of $f$ naturally assumes two contributions of radically different physical origins, one associated with the elastic deformation of the polymer network, and the other with the mixing of the network with the solvent. Hence, we write

$$f(\lambda_1,\lambda_2,\lambda_3) = f_{\text{net}}(\lambda_1,\lambda_2,\lambda_3) + f_{\text{mix}}(\lambda_1,\lambda_2,\lambda_3)$$

We now consider the two contributions separately. The polymer elastic deformation term is independent of the solvent phase and has the same expression as a rubber, as derived in Kuhn's theory of rubber elasticity:

$$f_{\text{net}} = \frac{G_0}{2}\left(\lambda_1^2 + \lambda_2^2 + \lambda_3^2 - 3\right)$$

where $G_0$ denotes the shear modulus of the initial state. On the other hand, the mixing term is usually treated by the Flory-Huggins free energy of concentrated polymer solutions $f_{FH}(\phi)$, where $\phi$ is polymer volume fraction. Suppose the initial gel has a polymer volume fraction of $\phi_0$; the polymer volume fraction after swelling would be $\phi = \phi_0/(\lambda_1\lambda_2\lambda_3)$, since the number of monomers remains the same while the gel volume has increased by a factor of $\lambda_1\lambda_2\lambda_3$. As the polymer volume fraction decreases from $\phi_0$ to $\phi$, a polymer solution of concentration $\phi_0$ and volume $V_0$ is mixed with a pure solvent of volume $(\lambda_1\lambda_2\lambda_3 - 1)V_0$ to become a solution with polymer concentration $\phi$ and volume $\lambda_1\lambda_2\lambda_3 V_0$. The free energy density change in this mixing step is given as

$$f_{\text{mix}} = \lambda_1\lambda_2\lambda_3\, f_{FH}(\phi) - f_{FH}(\phi_0) - (\lambda_1\lambda_2\lambda_3 - 1)\, f_{FH}(0)$$

where on the right-hand side, the first term is the Flory–Huggins energy density of the final swollen gel, the second is associated with the initial gel and the third is of the pure solvent prior to mixing. Substitution of $\phi = \phi_0/(\lambda_1\lambda_2\lambda_3)$ leads to

$$f_{\text{mix}} = \lambda_1\lambda_2\lambda_3\, f_{FH}\!\left(\frac{\phi_0}{\lambda_1\lambda_2\lambda_3}\right) - f_{FH}(\phi_0) - (\lambda_1\lambda_2\lambda_3 - 1)\, f_{FH}(0)$$

Note that the second term is independent of the stretching factors $\lambda_1$, $\lambda_2$ and $\lambda_3$ and hence can be dropped in subsequent analysis. Now we make use of the Flory-Huggins free energy for a polymer-solvent solution that reads

$$f_{FH}(\phi) = \frac{k_B T}{v_c}\left[\frac{\phi}{N}\ln\phi + (1-\phi)\ln(1-\phi) + \chi\,\phi(1-\phi)\right]$$

where $v_c$ is monomer volume, $N$ is polymer strand length and $\chi$ is the Flory-Huggins energy parameter.
Because in a network the polymer length is effectively infinite, we can take the limit $N \to \infty$ and $f_{FH}$ reduces to

$$f_{FH}(\phi) = \frac{k_B T}{v_c}\left[(1-\phi)\ln(1-\phi) + \chi\,\phi(1-\phi)\right]$$

Substitution of this expression into $f_{\text{mix}}$ and addition of the network contribution leads to

$$f(\lambda_1,\lambda_2,\lambda_3) = \frac{G_0}{2}\left(\lambda_1^2+\lambda_2^2+\lambda_3^2-3\right) + \lambda_1\lambda_2\lambda_3\, f_{FH}\!\left(\frac{\phi_0}{\lambda_1\lambda_2\lambda_3}\right)$$

up to terms independent of the stretch factors. This provides the starting point for examining the swelling equilibrium of a gel network immersed in solvent. It can be shown that gel swelling is the competition between two forces: one is the osmotic pressure of the polymer solution that favors the take-in of solvent and expansion, the other is the restoring force of the polymer network elasticity that favors shrinkage. At equilibrium, the two effects exactly cancel each other in principle, and the associated $\lambda_1$, $\lambda_2$ and $\lambda_3$ define the equilibrium gel volume. In solving the force balance equation, graphical solutions are often preferred. In an alternative, scaling approach, suppose an isotropic gel is stretched by a factor of $\lambda$ in all three directions. Under the affine network approximation, the mean-square end-to-end distance in the gel increases from the initial $\langle R^2 \rangle_0$ to $\lambda^2 \langle R^2 \rangle_0$ and the elastic energy of one strand can be written as

$$F_{\text{strand}} \approx k_B T \, \frac{\lambda^2 \langle R^2 \rangle_0}{Nb^2}$$

where $\langle R^2 \rangle_0$ is the mean-square fluctuation in end-to-end distance of one strand. The modulus of the gel is then this single-strand elastic energy multiplied by the strand number density $\nu$ to give

$$G \approx \nu\, k_B T \, \frac{\lambda^2 \langle R^2 \rangle_0}{Nb^2}$$

This modulus can then be equated to the osmotic pressure (through differentiation of the free energy) to give the same equation as we found above. Modified Donnan equilibrium of polyelectrolyte gels Consider a hydrogel made of polyelectrolytes decorated with weak acid groups that can ionize according to the reaction

$$\mathrm{HA \rightleftharpoons H^+ + A^-}$$

immersed in a salt solution of physiological concentration. The degree of ionization of the polyelectrolytes is then controlled by the pH and, due to the charged nature of $\mathrm{H^+}$ and $\mathrm{A^-}$, by electrostatic interactions with other ions in the system. This is effectively a reacting system governed by acid-base equilibrium modulated by electrostatic effects, and is relevant in drug delivery, sea water desalination and dialysis technologies. Due to the elastic nature of the gel, the dispersion of $\mathrm{A^-}$ in the system is constrained and hence there will be a partitioning of salt ions (such as $\mathrm{Na^+}$ and $\mathrm{Cl^-}$) inside and outside the gel, which is intimately coupled to the polyelectrolyte degree of ionization. This ion partitioning inside and outside the gel is analogous to the partitioning of ions across a semipermeable membrane in classical Donnan theory, but a membrane is not needed here because the gel volume constraint imposed by network elasticity effectively plays its role, preventing the macroions from passing through the fictitious membrane while allowing small ions to pass. The coupling between the ion partitioning and the polyelectrolyte ionization degree is only partially captured by the classical Donnan theory. As a starting point we can neglect the electrostatic interactions among ions. Then at equilibrium, some of the weak acid sites in the gel would dissociate to form $\mathrm{A^-}$, which electrostatically attracts positively charged $\mathrm{H^+}$ and salt cations, leading to a relatively high concentration of $\mathrm{H^+}$ and salt cations inside the gel. But because the concentration of $\mathrm{H^+}$ is locally higher, it suppresses the further ionization of the acid sites. This phenomenon is the prediction of the classical Donnan theory. However, with electrostatic interactions, there are further complications to the picture. Consider the case where two adjacent, initially uncharged acid sites both dissociate to form $\mathrm{A^-}$. Since the two sites are both negatively charged, there will be a charge-charge repulsion along the backbone of the polymer that tends to stretch the chain.
This energy cost is high both elastically and electrostatically and hence suppresses ionization. Even though this ionization suppression is qualitatively similar to that of the Donnan prediction, it is absent without electrostatic consideration and present irrespective of ion partitioning. The combination of both effects as well as gel elasticity determines the volume of the gel at equilibrium. Due to the complexity of the coupled acid-base equilibrium, electrostatics and network elasticity, only recently has such a system been correctly recreated in computer simulations. Animal-produced gels Some species secrete gels that are effective in parasite control. For example, the long-finned pilot whale secretes an enzymatic gel that rests on the outer surface of this animal and helps prevent other organisms from establishing colonies on the surface of these whales' bodies. Hydrogels existing naturally in the body include mucus, the vitreous humor of the eye, cartilage, tendons and blood clots. Their viscoelastic nature results in the soft tissue component of the body, disparate from the mineral-based hard tissue of the skeletal system. Researchers are actively developing synthetically derived tissue replacement technologies derived from hydrogels, for both temporary implants (degradable) and permanent implants (non-degradable). A review article on the subject discusses the use of hydrogels for nucleus pulposus replacement, cartilage replacement, and synthetic tissue models. Applications Many substances can form gels when a suitable thickener or gelling agent is added to their formula. This approach is common in the manufacture of a wide range of products, from foods to paints and adhesives. In fiber optic communications, a soft gel resembling hair gel in viscosity is used to fill the plastic tubes containing the fibers. The main purpose of the gel is to prevent water intrusion if the buffer tube is breached, but the gel also buffers the fibers against mechanical damage when the tube is bent around corners during installation, or flexed. Additionally, the gel acts as a processing aid when the cable is being constructed, keeping the fibers central whilst the tube material is extruded around it.
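To illustrate how the elastic and mixing terms reconstructed in the thermodynamics section compete, here is a toy numerical sketch that minimises the isotropic deformation free energy $f(\lambda)$ over the stretch factor $\lambda$. All parameter values are illustrative, the Flory-Huggins form is the $N \to \infty$ limit derived earlier, and this is in no way a quantitative model of any specific gel.

```python
import math

kT_over_v = 1.0   # k_B T per monomer volume (sets the energy scale); illustrative
G0 = 0.001        # initial shear modulus, same units; illustrative
phi0 = 0.1        # initial polymer volume fraction; illustrative
chi = 0.3         # Flory-Huggins parameter (chi < 0.5: good solvent)

def f_FH(phi):
    # Flory-Huggins free energy density in the N -> infinity (network) limit
    return kT_over_v * ((1 - phi) * math.log(1 - phi) + chi * phi * (1 - phi))

def f_total(lam):
    phi = phi0 / lam**3                    # polymer fraction after isotropic swelling
    f_net = 0.5 * G0 * (3 * lam**2 - 3)    # affine elastic contribution
    f_mix = lam**3 * f_FH(phi) - f_FH(phi0)
    return f_net + f_mix

# Crude scan for the stretch factor minimising the free energy density:
lam_star = min((1 + i * 0.001 for i in range(3000)), key=f_total)
print(f"equilibrium stretch ~ {lam_star:.2f}, swelling ratio ~ {lam_star**3:.1f}")
# For these toy parameters the minimum sits near lambda ~ 1.15 (volume ratio ~ 1.5):
# mixing favours taking in solvent, network elasticity resists, as described above.
```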
https://en.wikipedia.org/wiki/Geostationary%20orbit
Geostationary orbit
A geostationary orbit, also referred to as a geosynchronous equatorial orbit (GEO), is a circular geosynchronous orbit 35,786 km (22,236 mi) in altitude above Earth's equator, 42,164 km (26,199 mi) in radius from Earth's center, and following the direction of Earth's rotation. An object in such an orbit has an orbital period equal to Earth's rotational period, one sidereal day, and so to ground observers it appears motionless, in a fixed position in the sky. The concept of a geostationary orbit was popularised by the science fiction writer Arthur C. Clarke in the 1940s as a way to revolutionise telecommunications, and the first satellite to be placed in this kind of orbit was launched in 1963. Communications satellites are often placed in a geostationary orbit so that Earth-based satellite antennas do not have to rotate to track them but can be pointed permanently at the position in the sky where the satellites are located. Weather satellites are also placed in this orbit for real-time monitoring and data collection, and navigation satellites to provide a known calibration point and enhance GPS accuracy. Geostationary satellites are launched via a temporary orbit, and then placed in a "slot" above a particular point on the Earth's surface. The satellite requires periodic station-keeping to maintain its position. Modern retired geostationary satellites are placed in a higher graveyard orbit to avoid collisions. History In 1929, Herman Potočnik described both geosynchronous orbits in general and the special case of the geostationary Earth orbit in particular as useful orbits for space stations. The first appearance of a geostationary orbit in popular literature was in October 1942, in the first Venus Equilateral story by George O. Smith, but Smith did not go into details. British science fiction author Arthur C. Clarke popularised and expanded the concept in a 1945 paper entitled Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?, published in Wireless World magazine. Clarke acknowledged the connection in his introduction to The Complete Venus Equilateral. The orbit, which Clarke first described as useful for broadcast and relay communications satellites, is sometimes called the Clarke orbit. Similarly, the collection of artificial satellites in this orbit is known as the Clarke Belt. In technical terminology the orbit is referred to as either a geostationary or geosynchronous equatorial orbit, with the terms used somewhat interchangeably. The first geostationary satellite was designed by Harold Rosen while he was working at Hughes Aircraft in 1959. Inspired by Sputnik 1, he wanted to use a geostationary satellite to globalise communications. Telecommunications between the US and Europe was then possible between just 136 people at a time, and reliant on high frequency radios and an undersea cable. Conventional wisdom at the time was that it would require too much rocket power to place a satellite in a geostationary orbit and it would not survive long enough to justify the expense, so early efforts were put towards constellations of satellites in low or medium Earth orbit. The first of these were the passive Echo balloon satellites in 1960, followed by Telstar 1 in 1962. Although these projects had difficulties with signal strength and tracking, issues that could be solved using geostationary orbits, the concept was seen as impractical, so Hughes often withheld funds and support.
By 1961, Rosen and his team had produced a cylindrical prototype, light and small enough to be placed into orbit. It was spin-stabilised, with a dipole antenna producing a pancake-shaped beam. In August 1961, they were contracted to begin building the real satellite. They lost Syncom 1 to electronics failure, but Syncom 2 was successfully placed into a geosynchronous orbit in 1963. Although its inclined orbit still required moving antennas, it was able to relay TV transmissions, and allowed for US President John F. Kennedy in Washington D.C., to phone Nigerian prime minister Abubakar Tafawa Balewa aboard the USNS Kingsport docked in Lagos on August 23, 1963. The first satellite placed in a geostationary orbit was Syncom 3, which was launched by a Delta D rocket in 1964. With its increased bandwidth, this satellite was able to transmit live coverage of the Summer Olympics from Japan to America. Geostationary orbits have been in common use ever since, in particular for satellite television. Today there are hundreds of geostationary satellites providing remote sensing and communications. Although most populated land locations on the planet now have terrestrial communications facilities (microwave, fiber-optic), with telephone access covering 96% of the population and internet access 90%, some rural and remote areas in developed countries are still reliant on satellite communications. Uses Most commercial communications satellites, broadcast satellites and SBAS satellites operate in geostationary orbits. Communications Geostationary communication satellites are useful because they are visible from a large area of the earth's surface, extending 81° away in latitude and 77° in longitude. They appear stationary in the sky, which eliminates the need for ground stations to have movable antennas. This means that Earth-based observers can erect small, cheap and stationary antennas that are always directed at the desired satellite. However, latency becomes significant as it takes about 240 ms for a signal to pass from a ground based transmitter on the equator to the satellite and back again. This delay presents problems for latency-sensitive applications such as voice communication, so geostationary communication satellites are primarily used for unidirectional entertainment and applications where low latency alternatives are not available. Geostationary satellites are directly overhead at the equator and appear lower in the sky to an observer nearer the poles. As the observer's latitude increases, communication becomes more difficult due to factors such as atmospheric refraction, Earth's thermal emission, line-of-sight obstructions, and signal reflections from the ground or nearby structures. At latitudes above about 81°, geostationary satellites are below the horizon and cannot be seen at all. Because of this, some Russian communication satellites have used elliptical Molniya and Tundra orbits, which have excellent visibility at high latitudes. Meteorology A worldwide network of operational geostationary meteorological satellites is used to provide visible and infrared images of Earth's surface and atmosphere for weather observation, oceanography, and atmospheric tracking. As of 2019 there are 19 satellites in either operation or stand-by.
These satellite systems include: the United States' GOES series, operated by NOAA; the Meteosat series, launched by the European Space Agency and operated by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT); the Republic of Korea's COMS-1 and GK-2A multi-mission satellites; the Russian Elektro-L satellites; the Japanese Himawari series; the Chinese Fengyun series; and India's INSAT series. These satellites typically capture images in the visual and infrared spectrum with a spatial resolution between 0.5 and 4 square kilometres. The coverage is typically 70°, and in some cases less. Geostationary satellite imagery has been used for tracking volcanic ash, measuring cloud top temperatures and water vapour, oceanography, measuring land temperature and vegetation coverage, facilitating cyclone path prediction, and providing real-time cloud coverage and other tracking data. Some information has been incorporated into meteorological prediction models, but due to their wide field of view, full-time monitoring and lower resolution, geostationary weather satellite images are primarily used for short-term and real-time forecasting. Navigation Geostationary satellites can be used to augment GNSS systems by relaying clock, ephemeris and ionospheric error corrections (calculated from ground stations of known position) and providing an additional reference signal. This improves position accuracy from approximately 5 m to 1 m or less. Past and current navigation systems that use geostationary satellites include: the Wide Area Augmentation System (WAAS), operated by the United States Federal Aviation Administration (FAA); the European Geostationary Navigation Overlay Service (EGNOS), operated by the ESSP (on behalf of the EU's GSA); the Multi-functional Satellite Augmentation System (MSAS), operated by the Japan Civil Aviation Bureau (JCAB) of Japan's Ministry of Land, Infrastructure and Transport; the GPS Aided Geo Augmented Navigation (GAGAN) system, operated by India; the commercial StarFire navigation system, operated by John Deere and C-Nav Positioning Solutions (Oceaneering); and the commercial Starfix DGPS System and OmniSTAR system, operated by Fugro. Implementation Launch Geostationary satellites are launched to the east into a prograde orbit that matches the rotation rate of the equator. The smallest inclination that a satellite can be launched into is that of the launch site's latitude, so launching the satellite from close to the equator limits the amount of inclination change needed later. Additionally, launching from close to the equator allows the speed of the Earth's rotation to give the satellite a boost. A launch site should have water or deserts to the east, so any failed rockets do not fall on a populated area. Most launch vehicles place geostationary satellites directly into a geostationary transfer orbit (GTO), an elliptical orbit with an apogee at GEO height and a low perigee. On-board satellite propulsion is then used to raise the perigee, circularise the orbit and reach GEO. Orbit allocation Satellites in geostationary orbit must all occupy a single ring above the equator. The requirement to space these satellites apart, to avoid harmful radio-frequency interference during operations, means that there are a limited number of orbital slots available, and thus only a limited number of satellites can be operated in geostationary orbit. This has led to conflict between different countries wishing access to the same orbital slots (countries near the same longitude but differing latitudes) and radio frequencies.
These disputes are addressed through the International Telecommunication Union's allocation mechanism under the Radio Regulations. In the 1976 Bogota Declaration, eight countries located on the Earth's equator claimed sovereignty over the geostationary orbits above their territory, but the claims gained no international recognition. Statite proposal A statite is a hypothetical satellite that uses radiation pressure from the Sun against a solar sail to modify its orbit. It would hold its location over the dark side of the Earth at a latitude of approximately 30 degrees. A statite is stationary relative to the Earth–Sun system rather than relative to the Earth's surface, and could ease congestion in the geostationary ring. Retired satellites Geostationary satellites require some station-keeping to keep their position, and once they run out of thruster fuel they are generally retired. The transponders and other onboard systems often outlive the thruster fuel, and by allowing the satellite to move naturally into an inclined geosynchronous orbit some satellites can remain in use, or else be elevated to a graveyard orbit. This process is becoming increasingly regulated, and satellites must have a 90% chance of moving over 200 km above the geostationary belt at end of life. Space debris Space debris at geostationary orbits typically has a lower collision speed than at low Earth orbit (LEO) since all GEO satellites orbit in the same plane, altitude and speed; however, the presence of satellites in eccentric orbits allows for collisions at up to 4 km/s. Although a collision is comparatively unlikely, GEO satellites have a limited ability to avoid any debris. At geosynchronous altitude, objects less than 10 cm in diameter cannot be seen from the Earth, making it difficult to assess their prevalence. Despite efforts to reduce risk, spacecraft collisions have occurred. The European Space Agency telecom satellite Olympus-1 was struck by a meteoroid on August 11, 1993, and eventually moved to a graveyard orbit, and in 2006 the Russian Express-AM11 communications satellite was struck by an unknown object and rendered inoperable, although its engineers had enough contact time with the satellite to send it into a graveyard orbit. In 2017, both AMC-9 and Telkom-1 broke apart from an unknown cause. Properties A typical geostationary orbit has the following properties: inclination: 0°; period: 1,436 minutes (one sidereal day); eccentricity: 0; argument of perigee: undefined; semi-major axis: 42,164 km. Inclination An inclination of zero ensures that the orbit remains over the equator at all times, making it stationary with respect to latitude from the point of view of a ground observer (and in the Earth-centered Earth-fixed reference frame). Period The orbital period is equal to exactly one sidereal day. This means that the satellite will return to the same point above the Earth's surface every (sidereal) day, regardless of other orbital properties. For a geostationary orbit in particular, it ensures that it holds the same longitude over time. This orbital period, T, is directly related to the semi-major axis of the orbit through the formula T = 2π√(a³/μ), where a is the length of the orbit's semi-major axis and μ is the standard gravitational parameter of the central body. Eccentricity The eccentricity is zero, which produces a circular orbit. This ensures that the satellite does not move closer or further away from the Earth, which would cause it to track backwards and forwards across the sky.
Stability A geostationary orbit can be achieved only at an altitude very close to 35,786 km and directly above the equator. This equates to an orbital speed of 3.07 km/s and an orbital period of 1,436 minutes, one sidereal day. This ensures that the satellite will match the Earth's rotational period and has a stationary footprint on the ground. All geostationary satellites have to be located on this ring. A combination of lunar gravity, solar gravity, and the flattening of the Earth at its poles causes a precession motion of the orbital plane of any geostationary object, with an orbital period of about 53 years and an initial inclination gradient of about 0.85° per year, achieving a maximal inclination of 15° after 26.5 years. To correct for this perturbation, regular orbital station-keeping maneuvers are necessary, amounting to a delta-v of approximately 50 m/s per year. A second effect to be taken into account is the longitudinal drift, caused by the asymmetry of the Earth – the equator is slightly elliptical (equatorial eccentricity). There are two stable equilibrium points, sometimes called "gravitational wells" (at 75.3°E and 108°W), and two corresponding unstable points (at 165.3°E and 14.7°W). Any geostationary object placed between the equilibrium points would (without any action) be slowly accelerated towards the stable equilibrium position, causing a periodic longitude variation. The correction of this effect requires station-keeping maneuvers with a maximal delta-v of about 2 m/s per year, depending on the desired longitude. Solar wind and radiation pressure also exert small forces on satellites: over time, these cause them to slowly drift away from their prescribed orbits. In the absence of servicing missions from the Earth or a renewable propulsion method, the consumption of thruster propellant for station-keeping places a limitation on the lifetime of the satellite. Hall-effect thrusters, which are currently in use, have the potential to prolong the service life of a satellite by providing high-efficiency electric propulsion. Derivation For circular orbits around a body, the centripetal force required to maintain the orbit (Fc) is equal to the gravitational force acting on the satellite (Fg): Fc = Fg. From Isaac Newton's universal law of gravitation, Fg = G·ME·ms/r², where Fg is the gravitational force acting between two objects, ME is the mass of the Earth (5.972 × 10²⁴ kg), ms is the mass of the satellite, r is the distance between the centers of their masses, and G is the gravitational constant (6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻²). The magnitude of the acceleration, a, of a body moving in a circle is given by a = v²/r, where v is the magnitude of the velocity (i.e. the speed) of the satellite. From Newton's second law of motion, the centripetal force Fc is given by Fc = ms·v²/r. As Fc = Fg, ms·v²/r = G·ME·ms/r², so that v² = G·ME/r. Replacing v with the equation for the speed of an object moving around a circle, v = 2πr/T, produces 4π²r²/T² = G·ME/r, where T is the orbital period (i.e. one sidereal day), equal to 86,164 seconds. This gives an equation for r: r³ = G·ME·T²/(4π²). The product G·ME is known with much greater precision than either factor alone; it is known as the geocentric gravitational constant μ = 398,600.4418 km³/s². Hence r = ∛(μT²/4π²). The resulting orbital radius is 42,164 km. Subtracting the Earth's equatorial radius, 6,378 km, gives an altitude of 35,786 km. The orbital speed is calculated by multiplying the angular speed by the orbital radius: v = (2π/T)·r ≈ 3.07 km/s. In other planets By the same method, we can determine the orbital altitude for any similar pair of bodies, including the areostationary orbit of an object in relation to Mars, if it is assumed that it is spherical (which it is not entirely).
The gravitational constant GM (μ) for Mars has the value of 42,828 km³/s², its equatorial radius is 3,396 km, and the known rotational period (T) of the planet is 88,642 seconds (about 24 hours and 37 minutes). Using these values, Mars' stationary orbital altitude is equal to approximately 17,030 km above the surface (an orbital radius of about 20,428 km).
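To make the derivation concrete, here is a minimal Python sketch of the same calculation (the function name is ours; μ, the sidereal rotation periods and the equatorial radii are the standard published values quoted above):

```python
from math import pi

def stationary_orbit_radius(mu, rotation_period):
    """Radius (km) of the circular orbit whose period equals the body's
    rotation period: r = cube root of (mu * T^2 / (4 * pi^2))."""
    return (mu * rotation_period**2 / (4 * pi**2)) ** (1 / 3)

# Earth: geocentric gravitational constant and one sidereal day.
r_geo = stationary_orbit_radius(398_600.4418, 86_164.0905)
print(f"GEO radius:   {r_geo:,.0f} km")                           # ~42,164 km
print(f"GEO altitude: {r_geo - 6_378:,.0f} km")                   # ~35,786 km
print(f"GEO speed:    {2 * pi * r_geo / 86_164.0905:.2f} km/s")   # ~3.07 km/s

# Mars: areostationary orbit, using the values quoted above.
r_areo = stationary_orbit_radius(42_828, 88_642)
print(f"Areostationary altitude: {r_areo - 3_396:,.0f} km")       # ~17,030 km
```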
Ground (electricity)
In electrical engineering, ground or earth may be a reference point in an electrical circuit from which voltages are measured, a common return path for electric current, or a direct physical connection to the Earth. Electrical circuits may be connected to ground for several reasons. Exposed conductive parts of electrical equipment are connected to ground to protect users from electrical shock hazards. If internal insulation fails, dangerous voltages may appear on the exposed conductive parts. Connecting exposed conductive parts to a "ground" wire which provides a low-impedance path for current to flow back to the incoming neutral (which is also connected to ground, close to the point of entry) will allow circuit breakers (or RCDs) to interrupt power supply in the event of a fault. In electric power distribution systems, a protective earth (PE) conductor is an essential part of the safety provided by the earthing system. Connection to ground also limits the build-up of static electricity when handling flammable products or electrostatic-sensitive devices. In some telegraph and power transmission circuits, the ground itself can be used as one conductor of the circuit, saving the cost of installing a separate return conductor (see single-wire earth return and earth-return telegraph). For measurement purposes, the Earth serves as a (reasonably) constant potential reference against which other potentials can be measured. An electrical ground system should have an appropriate current-carrying capability to serve as an adequate zero-voltage reference level. In electronic circuit theory, a "ground" is usually idealized as an infinite source or sink for charge, which can absorb an unlimited amount of current without changing its potential. Where a real ground connection has a significant resistance, the approximation of zero potential is no longer valid. Stray voltages or earth potential rise effects will occur, which may create noise in signals or produce an electric shock hazard if large enough. The use of the term ground (or earth) is so common in electrical and electronics applications that circuits in portable electronic devices, such as cell phones and media players, as well as circuits in vehicles, may be spoken of as having a "ground" or chassis ground connection without any actual connection to the Earth, despite "common" being a more appropriate term for such a connection. That is usually a large conductor attached to one side of the power supply (such as the "ground plane" on a printed circuit board), which serves as the common return path for current from many different components in the circuit. History Long-distance electromagnetic telegraph systems from 1820 onwards used two or more wires to carry the signal and return currents. It was discovered by the German scientist Carl August von Steinheil in 1836–1837 that the ground could be used as the return path to complete the circuit, making the return wire unnecessary. Steinheil was not the first to do this, but he was not aware of earlier experimental work, and he was the first to do it on an in-service telegraph, thus making the principle known to telegraph engineers generally. However, there were problems with this system, exemplified by the transcontinental telegraph line constructed in 1861 by the Western Union Company between St. Joseph, Missouri, and Sacramento, California. During dry weather, the ground connection often developed a high resistance, requiring water to be poured on the ground rod to enable the telegraph to work or phones to ring.
In the late nineteenth century, when telephony began to replace telegraphy, it was found that the currents in the earth induced by power systems, electric railways, other telephone and telegraph circuits, and natural sources including lightning caused unacceptable interference to the audio signals, and the two-wire or "metallic circuit" system was reintroduced around 1883. Building wiring installations Electrical power distribution systems are often connected to earth ground to limit the voltage that can appear on distribution circuits. A distribution system insulated from earth ground may attain a high potential due to transient voltages caused by static electricity or accidental contact with higher potential circuits. An earth ground connection of the system dissipates such potentials and limits the rise in voltage of the grounded system. In a mains electricity (AC power) wiring installation, the term ground conductor typically refers to two different conductors or conductor systems as listed below: Equipment bonding conductors or equipment ground conductors (EGC) provide a low-impedance path between normally non-current-carrying metallic parts of equipment and one of the conductors of that electrical system's source. If any exposed metal part should become energized (fault), such as by a frayed or damaged insulator, it creates a short circuit, causing the overcurrent device (circuit breaker or fuse) to open, clearing (disconnecting) the fault. It is important to note that this action occurs regardless of whether there is a connection to the physical ground (earth); the earth itself has no role in this fault-clearing process, since current must return to its source (see Kirchhoff's circuit laws); however, the sources are very frequently connected to the physical ground (earth). By bonding (interconnecting) all exposed non-current-carrying metal objects together, as well as to other metallic objects such as pipes or structural steel, they should remain near the same voltage potential, thus reducing the chance of a shock. This is especially important in bathrooms, where one may be in contact with several different metallic systems such as supply and drain pipes and appliance frames. When a conductive system is to be electrically connected to the physical ground (earth), one puts the equipment bonding conductor and the grounding electrode conductor at the same potential (for example, with a metal water pipe serving as a grounding electrode). A grounding electrode conductor (GEC) is used to connect the system grounded ("neutral") conductor, or the equipment, to a grounding electrode, or a point on the grounding electrode system. This is called "system grounding", and most electrical systems are required to be grounded. The U.S. NEC and the UK's BS 7671 list systems that are required to be grounded. According to the NEC, the purpose of connecting an electrical system to the physical ground (earth) is to limit the voltage imposed by lightning events and contact with higher voltage lines. In the past, water supply pipes were used as grounding electrodes, but due to the increased use of plastic pipes, which are poor conductors, the use of a specific grounding electrode is often mandated by regulating authorities. The same type of ground applies to radio antennas and to lightning protection systems. Permanently installed electrical equipment, unless specifically exempted, has permanently connected grounding conductors.
Portable electrical devices with metal cases may have them connected to earth ground by a pin on the attachment plug (see AC power plugs and sockets). The size of power grounding conductors is usually regulated by local or national wiring regulations. Bonding Strictly speaking, the terms grounding or earthing are meant to refer to an electrical connection to ground/earth. Bonding is the practice of intentionally electrically connecting metallic items not designed to carry electricity. This brings all the bonded items to the same electrical potential as a protection from electrical shock. The bonded items can then be connected to ground to eliminate foreign voltages. Earthing systems In electricity supply systems, an earthing (grounding) system defines the electrical potential of the conductors relative to that of the Earth's conductive surface. The choice of earthing system has implications for the safety and electromagnetic compatibility of the power supply. Regulations for earthing systems vary considerably between different countries. A functional earth connection serves more than protecting against electrical shock, as such a connection may carry current during the normal operation of a device. Such devices include surge suppression, electromagnetic-compatibility filters, some types of antennas, and various measurement instruments. Generally the protective earth system is also used as a functional earth, though this requires care. Impedance grounding Distribution power systems may be solidly grounded, with one circuit conductor directly connected to an earth grounding electrode system. Alternatively, some amount of electrical impedance may be connected between the distribution system and ground, to limit the current that can flow to earth. The impedance may be a resistor, or an inductor (coil). In a high-impedance grounded system, the fault current is limited to a few amperes (exact values depend on the voltage class of the system); a low-impedance grounded system will permit several hundred amperes to flow on a fault. A large solidly grounded distribution system may have tens of thousands of amperes of ground fault current. In a polyphase AC system, the instantaneous vector sum of the phases is zero. This neutral point is commonly used to refer the phase voltages to earth ground instead of connecting one of the phase conductors to earth. Any Δ-Y (delta-wye) connected transformer may be used for the purpose. A nine winding transformer (a "zig zag" transformer) may be used to balance the phase currents of a delta connected source with an unbalanced load. Low-resistance grounding systems use a neutral grounding resistor (NGR) to limit the fault current to 25 A or greater. Low resistance grounding systems will have a time rating (say, 10 seconds) that indicates how long the resistor can carry the fault current before overheating. A ground fault protection relay must trip the breaker to protect the circuit before overheating of the resistor occurs. High-resistance grounding (HRG) systems use an NGR to limit the fault current to 25 A or less. They have a continuous rating, and are designed to operate with a single-ground fault. This means that the system will not immediately trip on the first ground fault. If a second ground fault occurs, a ground fault protection relay must trip the breaker to protect the circuit. On an HRG system, a sensing resistor is used to continuously monitor system continuity. 
If an open-circuit is detected (e.g., due to a broken weld on the NGR), the monitoring device will sense voltage through the sensing resistor and trip the breaker. Without a sensing resistor, the system could continue to operate without ground protection (since an open circuit condition would mask the ground fault) and transient overvoltages could occur. Ungrounded systems Where the danger of electric shock is high, special ungrounded power systems may be used to minimize possible leakage current to ground. Examples of such installations include patient care areas in hospitals, where medical equipment is directly connected to a patient and must not permit any power-line current to pass into the patient's body. Medical systems include monitoring devices to warn of any increase of leakage current. On wet construction sites or in shipyards, isolation transformers may be provided so that a fault in a power tool or its cable does not expose users to shock hazard. Circuits used to feed sensitive audio/video production equipment or measurement instruments may be fed from an isolated ungrounded technical power system to limit the injection of noise from the power system. Power transmission In single-wire earth return (SWER) AC electrical distribution systems, costs are saved by using just a single high voltage conductor for the power grid, while routing the AC return current through the earth. This system is mostly used in rural areas where large earth currents will not otherwise cause hazards. Some high-voltage direct-current (HVDC) power transmission systems use the ground as a second conductor. This is especially common in schemes with submarine cables, as sea water is a good conductor. Buried grounding electrodes are used to make the connection to the earth. The site of these electrodes must be chosen carefully to prevent electrochemical corrosion on underground structures. A particular concern in design of electrical substations is earth potential rise. When very large fault currents are injected into the earth, the area around the point of injection may rise to a high potential with respect to points distant from it. This is due to the finite conductivity of the layers of soil in the earth of the substation. The gradient of the voltage (the change in voltage across the distance to the injection point) may be so high that two points on the ground may be at significantly different potentials. This gradient creates a hazard to anyone standing on the earth in an area of the electrical substation that is insufficiently insulated from ground. Pipes, rails, or communication wires entering a substation may see different ground potentials inside and outside the substation, creating a dangerous touch voltage for unsuspecting persons who might touch those pipes, rails, or wires. This problem is alleviated by creating a low-impedance equipotential bonding plane installed in accordance with IEEE 80, within the substation. This plane minimizes voltage gradients and ensures that any fault is cleared within three voltage cycles. Electronics In circuit diagrams, distinct schematic symbols are used for signal ground, chassis ground, and earth ground. Signal grounds serve as return paths for signals and power (at extra-low voltages, less than about 50 V) within equipment, and on the signal interconnections between equipment. Many electronic designs feature a single return that acts as a reference for all signals.
Power and signal grounds often get connected, usually through the metal case of the equipment. Designers of printed circuit boards must take care in the layout of electronic systems so that high-power or rapidly switching currents in one part of a system do not inject noise into low-level sensitive parts of a system due to some common impedance in the grounding traces of the layout. Circuit ground versus earth Voltage is defined as the difference of electric potentials between points in an electric field. A voltmeter is used to measure the potential difference between some point and a convenient, but otherwise arbitrary, reference point. This common reference point is denoted "ground" and is designated as having a nominal zero potential. Signals are defined with respect to signal ground, which may be connected to a power ground. A system where the system ground is not connected to another circuit or to earth (in which there may still be AC coupling between those circuits) is often referred to as a floating ground, and may correspond to Class 0 or Class II appliances. Functional grounds Some devices require a connection to the mass of earth to function correctly, as distinct from any purely protective role. Such a connection is known as a functional earth. For example, some long-wavelength antenna structures require a functional earth connection, which generally should not be indiscriminately connected to the supply protective earth, as the introduction of transmitted radio frequencies into the electrical distribution network is both illegal and potentially dangerous. Because of this separation, a purely functional ground should not normally be relied upon to perform a protective function. To avoid accidents, such functional grounds are normally wired in white, cream or pink cable, and not green or green/yellow. Separating low signal ground from a noisy ground In television stations, recording studios, and other installations where signal quality is critical, a special signal ground known as a "technical ground" (or "technical earth", "special earth", and "audio earth") is often installed, to prevent ground loops. This is basically the same thing as an AC power ground, but no general appliance ground wires are allowed any connection to it, as they may carry electrical interference. For example, only audio equipment is connected to the technical ground in a recording studio. In most cases, the studio's metal equipment racks are all joined with heavy copper cables (or flattened copper tubing or busbars) and similar connections are made to the technical ground. Great care is taken that no general chassis-grounded appliances are placed on the racks, as a single AC ground connection to the technical ground will destroy its effectiveness. For particularly demanding applications, the main technical ground may consist of a heavy copper pipe, if necessary fitted by drilling through several concrete floors, such that all technical grounds may be connected by the shortest possible path to a grounding rod in the basement. Radio frequency ground Certain types of radio antennas (or their feedlines) require a connection to ground that functions adequately at radio frequencies. The required caliber of grounding system is called a radio frequency ground.
In general, a radio transmitter, its power source, and its antenna will require three functionally different grounds: A lightning safety ground (perhaps several) that discharges lightning strikes on an outdoor antenna, and separately one that diverts residual strike current from entering the house / radio shack / radio equipment An electrical power safety ground, provided by the ground connection at the electrical outlet A radio frequency ground that establishes a low-resistance return path for the electrical field produced by the antenna during the process of creating radiated waves. Although some of these grounds might be combined, and should be connected at exactly one point, only the last type of ground is covered in this section. Lightning safety grounding (1) is covered in the following section, not here. The electrical safety ground (2) was discussed in previous sections and is unsuitable for radio purposes, although required for the power supply. The radio frequency ground (3) is the topic of this section. Since the radio frequencies of the current in antennas are far higher than the 50 or 60 Hz frequency of the power line, radio grounding systems use different principles than AC power grounding. The "protective earth" (PE) safety ground wires in AC utility building wiring were not designed for, and cannot be used as an adequate substitute for an RF ground. The long utility ground wires have high impedance at certain frequencies. In the case of a transmitter, the RF current flowing through the ground wires can radiate radio frequency interference and induce hazardous voltages on grounded metal parts of other appliances, so separate ground systems are used. Monopole antennas operating at lower frequencies, below 20 MHz, use the surface of the Earth as a part of the antenna, as a conductive plane to reflect the radio waves and provide a return path for electric fields extending from the antenna. The monopoles include the mast radiator used by AM radio stations, and the 'T' and inverted 'L' antenna, and umbrella antenna. The feedline from the transmitter is connected between the antenna and ground, so it requires a grounding (earthing) system under the antenna to make contact with the soil to collect the return current. The ground system also functions as a capacitor plate, to receive the displacement current from the antenna and return it to the ground side of the transmitter's feedline, so it is preferably located directly under the antenna. In receivers and low efficiency / low power transmitters, the ground connection can be as simple as one or several metal rods or stakes driven into the soil, or an electrical connection to a building's metal water piping which extends into the earth. However, in transmitting antennas the ground system carries the full output current of the transmitter, so the resistance of an inadequate ground contact can be a major loss of transmitter power. Medium to high power transmitters usually have an extensive ground system consisting of bare copper cables buried in the earth under the antenna, to lower resistance. Since for the omnidirectional antennas used on these bands the Earth currents travel radially toward the ground point from all directions, the grounding system usually consists of a radial pattern of buried cables extending outward under the antenna in all directions, connected together to the ground side of the transmitter's feedline at a terminal next to the base of the antenna called a radial ground system. 
The transmitter power lost in the ground resistance, and so the efficiency of the antenna, depends on the soil conductivity. This varies widely; marshy ground or ponds, particularly salt water, provide the lowest-resistance ground, while dry rocky or sandy soil has the highest. The power loss per square meter in the ground is proportional to the square of the transmitter current density flowing in the earth. The current density, and power dissipated, increases the closer one gets to the ground terminal at the base of the antenna, so the radial ground system can be thought of as providing a higher-conductivity medium, copper, for the ground current to flow through, in the parts of the ground carrying high current density, to reduce power losses. Design A standard ground system widely used for mast radiator broadcasting antennas operating in the MF and LF bands consists of 120 equally-spaced, buried, radial ground wires extending out one quarter of a wavelength (90 electrical degrees) from the antenna base. AWG 8 to AWG 10 soft-drawn copper wire is typically used, buried 4–10 inches deep. For AM broadcast band antennas this requires a large circular land area surrounding the mast. This is usually planted with grass, which is kept mowed short, as tall grass can increase power loss in certain circumstances. If the land area available is too limited for such long radials, they can in many cases be adequately replaced by a greater number of shorter radials, or a smaller number of longer radials. In transmitting antennas a second cause of power wastage is dielectric power losses of the electric field (displacement current) of the antenna passing through the earth to reach the ground wires. For antennas near a half-wavelength high (180 electrical degrees) the antenna has a voltage maximum (antinode) near its base, which results in strong electric fields in the earth above the ground wires near the mast where the displacement current enters the ground. To reduce this loss these antennas often use a conductive copper ground screen under the antenna connected to the buried ground wires, either lying on the ground or elevated a few feet, to shield the ground from the electric field. In a few cases where rocky or sandy soil has too high a resistance for a buried ground, a counterpoise is used. This is a radial network of wires similar to that in a buried ground system, but lying on the surface or suspended a few feet above the ground. It acts as a capacitor plate, capacitively coupling the feedline to conductive layers of the soil. Electrically short antennas At lower frequencies the resistance of the ground system is a more critical factor because of the small radiation resistance of the antenna. In the LF and VLF bands, construction height limitations require that electrically short antennas be used, shorter than the fundamental resonant length of one quarter of a wavelength. A quarter-wave monopole has a radiation resistance of around 25–36 ohms, but below that height the resistance decreases with the square of the ratio of height to wavelength. The power fed to an antenna is split between the radiation resistance, which represents power emitted as radio waves, the desired function of the antenna, and the ohmic resistance of the ground system, which results in power wasted as heat.
As the wavelength gets longer in relation to antenna height, the radiation resistance of the antenna decreases, so the ground resistance constitutes a larger proportion of the input resistance of the antenna and consumes more of the transmitter power. Antennas in the VLF band often have a resistance of less than 1 ohm, and even with extremely low-resistance ground systems, 50% to 90% of the transmitter power may be wasted in the ground system. Lightning protection systems Lightning protection systems are designed to mitigate the effects of lightning through connection to extensive grounding systems that provide a large surface area connection to earth. The large area is required to dissipate the high current of a lightning strike without damaging the system conductors by excess heat. Since lightning strikes are pulses of energy with very high frequency components, grounding systems for lightning protection tend to use short straight runs of conductors to reduce the self-inductance and skin effect. Ground (earth) mat In an electrical substation a ground (earth) mat is a mesh of conductive material installed at places where a person would stand to operate a switch or other apparatus; it is bonded to the local supporting metal structure and to the handle of the switchgear, so that the operator will not be exposed to a high differential voltage due to a fault in the substation. In the vicinity of electrostatic-sensitive devices, a ground (earth) mat or grounding (earthing) mat is used to ground static electricity generated by people and moving equipment. There are two types used in static control: static dissipative mats and conductive mats. A static dissipative mat that rests on a conductive surface (commonly the case in military facilities) is typically made of three layers (3-ply), with static dissipative vinyl layers surrounding a conductive substrate which is electrically attached to ground (earth). For commercial uses, static dissipative rubber mats are traditionally used that are made of two layers (2-ply), with a tough, solder-resistant, static-dissipative top layer that makes them last longer than the vinyl mats, and a conductive rubber bottom. Conductive mats are made of carbon and used only on floors for the purpose of drawing static electricity to ground as quickly as possible. Normally conductive mats are made with cushioning for standing and are referred to as "anti-fatigue" mats. For a static dissipative mat to be reliably grounded it must be attached to a path to ground. Normally, both the mat and the wrist strap are connected to ground by using a common point ground system (CPGS). In computer repair shops and electronics manufacturing, workers must be grounded before working on devices sensitive to voltages capable of being generated by humans. For that reason static dissipative mats can be and are also used on production assembly floors as "floor runners" along the assembly line to draw off static generated by people walking up and down. Isolation Isolation is a mechanism that defeats grounding. It is frequently used with low-power consumer devices, and when engineers, hobbyists, or repairmen are working on circuits that would normally be operated using the power line voltage. Isolation can be accomplished by simply placing a transformer with a 1:1 turns ratio between the device and the regular power service; the principle applies to any type of transformer using two or more coils electrically insulated from each other.
For an isolated device, touching a single powered conductor does not cause a severe shock, because there is no path back to the other conductor through the ground. However, shocks and electrocution may still occur if both poles of the transformer are contacted by bare skin. Previously it was suggested that repairmen "work with one hand behind their back" to avoid touching two parts of the device under test at the same time, thereby preventing a current from crossing through the chest and interrupting cardiac rhythms or causing cardiac arrest. Generally every AC power line transformer acts as an isolation transformer, and every step up or down has the potential to form an isolated circuit. However, this isolation would prevent failed devices from blowing fuses when shorted to their ground conductor. The isolation that could be created by each transformer is defeated by always having one leg of the transformers grounded, on both sides of the input and output transformer coils. Power lines also typically ground one specific wire at every pole, to ensure current equalization from pole to pole if a short to ground is occurring. In the past, grounded appliances have been designed with internal isolation to a degree that allowed the simple disconnection of ground by cheater plugs without apparent problem (a dangerous practice, since the safety of the resulting floating equipment relies on the insulation in its power transformer). Modern appliances however often include power entry modules which are designed with deliberate capacitive coupling between the AC power lines and chassis, to suppress electromagnetic interference. This results in a significant leakage current from the power lines to ground. If the ground is disconnected by a cheater plug or by accident, the resulting leakage current can cause mild shocks, even without any fault in the equipment. Even small leakage currents are a significant concern in medical settings, as the accidental disconnection of ground can introduce these currents into sensitive parts of the human body. As a result, medical power supplies are designed to have low capacitance. Class II appliances and power supplies (such as cell phone chargers) do not provide any ground connection, and are designed to isolate the output from input. Safety is ensured by double-insulation, so that two failures of insulation are required to cause a shock.
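The magnitude of this filter-capacitor leakage is easy to estimate. Here is a minimal sketch, assuming an illustrative 2.2 nF line-to-chassis capacitor on a 230 V, 50 Hz supply (these component values are our assumption, not from the text):

```python
from math import pi

V_RMS = 230.0   # supply voltage, V
FREQ = 50.0     # supply frequency, Hz
C_Y = 2.2e-9    # assumed line-to-chassis (Y) capacitance, F

# Current through a capacitor driven at mains frequency: I = V * 2*pi*f*C
leakage_amps = V_RMS * 2 * pi * FREQ * C_Y
print(f"Leakage current: {leakage_amps * 1000:.2f} mA")  # ~0.16 mA
```

A fraction of a milliampere per capacitor is harmless as a shock but, summed over many appliances or fed into a patient-connected circuit, illustrates why medical power supplies are designed for low capacitance.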
Hamming code
In computer science and telecommunications, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect one-bit and two-bit errors, or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. Richard W. Hamming invented Hamming codes in 1950 as a way of automatically correcting errors introduced by punched card readers. In his original paper, Hamming elaborated his general idea, but specifically focused on the Hamming(7,4) code which adds three parity bits to four bits of data. In mathematical terms, Hamming codes are a class of binary linear code. For each integer r ≥ 2 there is a code with block length n = 2^r − 1 and message length k = 2^r − r − 1. Hence the rate of Hamming codes is R = k/n = 1 − r/(2^r − 1), which is the highest possible for codes with minimum distance of three (i.e., the minimal number of bit changes needed to go from any code word to any other code word is three) and block length 2^r − 1. The parity-check matrix of a Hamming code is constructed by listing all columns of length r that are non-zero, which means that the dual code of the Hamming code is the shortened Hadamard code, also known as a Simplex code. The parity-check matrix has the property that any two columns are pairwise linearly independent. Due to the limited redundancy that Hamming codes add to the data, they can only detect and correct errors when the error rate is low. This is the case in computer memory (usually RAM), where bit errors are extremely rare and Hamming codes are widely used, and a RAM with this correction system is an ECC RAM (ECC memory). In this context, an extended Hamming code having one extra parity bit is often used. Extended Hamming codes achieve a Hamming distance of four, which allows the decoder to distinguish between when at most one one-bit error occurs and when any two-bit errors occur. In this sense, extended Hamming codes are single-error correcting and double-error detecting, abbreviated as SECDED. History Richard Hamming, the inventor of Hamming codes, worked at Bell Labs in the late 1940s on the Bell Model V computer, an electromechanical relay-based machine with cycle times in seconds. Input was fed in on punched paper tape, seven-eighths of an inch wide, which had up to six holes per row. During weekdays, when errors in the relays were detected, the machine would stop and flash lights so that the operators could correct the problem. During after-hours periods and on weekends, when there were no operators, the machine simply moved on to the next job. Hamming worked on weekends, and grew increasingly frustrated with having to restart his programs from scratch due to detected errors. In a taped interview, Hamming said, "And so I said, 'Damn it, if the machine can detect an error, why can't it locate the position of the error and correct it?'". Over the next few years, he worked on the problem of error-correction, developing an increasingly powerful array of algorithms. In 1950, he published what is now known as Hamming code, which remains in use today in applications such as ECC memory. Codes predating Hamming A number of simple error-detecting codes were used before Hamming codes, but none were as effective as Hamming codes at the same overhead of space.
Parity Parity adds a single bit that indicates whether the number of ones (bit-positions with values of one) in the preceding data was even or odd. If an odd number of bits is changed in transmission, the message will change parity and the error can be detected at this point; however, the bit that changed may have been the parity bit itself. The most common convention is that a parity value of one indicates that there is an odd number of ones in the data, and a parity value of zero indicates that there is an even number of ones. If the number of bits changed is even, the check bit will be valid and the error will not be detected. Moreover, parity does not indicate which bit contained the error, even when it can detect it. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium, a successful transmission could take a long time or may never occur. However, while the quality of parity checking is poor, since it uses only a single bit, this method results in the least overhead. Two-out-of-five code A two-out-of-five code is an encoding scheme which uses five bits consisting of exactly three 0s and two 1s. This provides ten possible combinations, enough to represent the digits 0–9. This scheme can detect all single bit-errors, all odd-numbered bit-errors and some even-numbered bit-errors (for example the flipping of both 1-bits). However it still cannot correct any of these errors. Repetition Another code in use at the time repeated every data bit multiple times in order to ensure that it was sent correctly. For instance, if the data bit to be sent is a 1, an n = 3 repetition code will send 111. If the three bits received are not identical, an error occurred during transmission. If the channel is clean enough, most of the time only one bit will change in each triple. Therefore, 001, 010, and 100 each correspond to a 0 bit, while 110, 101, and 011 correspond to a 1 bit, with the greater quantity of digits that are the same ('0' or a '1') indicating what the data bit should be. A code with this ability to reconstruct the original message in the presence of errors is known as an error-correcting code. This triple repetition code is a Hamming code with m = 2, since there are two parity bits and 2^2 − 2 − 1 = 1 data bit. Such codes cannot correctly repair all errors, however. In our example, if the channel flips two bits and the receiver gets 001, the system will detect the error, but conclude that the original bit is 0, which is incorrect. If we increase the size of the bit string to four, we can detect all two-bit errors but cannot correct them (the quantity of parity bits is even); at five bits, we can both detect and correct all two-bit errors, but not all three-bit errors. Moreover, increasing the size of the parity bit string is inefficient, reducing throughput by three times in our original case, and the efficiency drops drastically as we increase the number of times each bit is duplicated in order to detect and correct more errors. Description If more error-correcting bits are included with a message, and if those bits can be arranged such that different incorrect bits produce different error results, then bad bits could be identified. In a seven-bit message, there are seven possible single bit errors, so three error control bits could potentially specify not only that an error occurred but also which bit caused the error. Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts.
To start with, he developed a nomenclature to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming ASCII words with seven bits, Hamming described this as an (8,7) code, with eight bits in total, of which seven are data. The repetition example would be (3,1), following the same logic. The code rate is the second number divided by the first, for our repetition example, 1/3. Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the Hamming distance, after him). Parity has a distance of 2, so one bit flip can be detected but not corrected, and any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to be flipped in the same triple to obtain another code word with no visible errors. It can correct one-bit errors or it can detect - but not correct - two-bit errors. A (4,1) repetition (each bit is repeated four times) has a distance of 4, so flipping three bits can be detected, but not corrected. When three bits flip in the same group there can be situations where attempting to correct will produce the wrong code word. In general, a code with distance k can detect but not correct k − 1 errors. Hamming was interested in two problems at once: increasing the distance as much as possible, while at the same time increasing the code rate as much as possible. During the 1940s he developed several encoding schemes that were dramatic improvements on existing codes. The key to all of his systems was to have the parity bits overlap, such that they managed to check each other as well as the data. General algorithm The following general algorithm generates a single-error correcting (SEC) code for any number of bits. The main idea is to choose the error-correcting bits such that the index-XOR (the XOR of all the bit positions containing a 1) is 0. We use positions 1, 10, 100, etc. (in binary) as the error-correcting bits, which guarantees it is possible to set the error-correcting bits so that the index-XOR of the whole message is 0. If the receiver receives a string with index-XOR 0, they can conclude there were no corruptions, and otherwise, the index-XOR indicates the index of the corrupted bit. An algorithm can be deduced from the following description: Number the bits starting from 1: bit 1, 2, 3, 4, 5, 6, 7, etc. Write the bit numbers in binary: 1, 10, 11, 100, 101, 110, 111, etc. All bit positions that are powers of two (have a single 1 bit in the binary form of their position) are parity bits: 1, 2, 4, 8, etc. (1, 10, 100, 1000) All other bit positions, with two or more 1 bits in the binary form of their position, are data bits. Each data bit is included in a unique set of 2 or more parity bits, as determined by the binary form of its bit position. Parity bit 1 covers all bit positions which have the least significant bit set: bit 1 (the parity bit itself), 3, 5, 7, 9, etc. Parity bit 2 covers all bit positions which have the second least significant bit set: bits 2–3, 6–7, 10–11, etc. Parity bit 4 covers all bit positions which have the third least significant bit set: bits 4–7, 12–15, 20–23, etc. Parity bit 8 covers all bit positions which have the fourth least significant bit set: bits 8–15, 24–31, 40–47, etc. In general each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.
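A minimal Python sketch of this index-XOR scheme (function names are ours, not from the source); it reproduces the worked example that follows:

```python
def hamming_encode(data_bits):
    """Lay data bits into the non-power-of-two positions (1-indexed), then
    choose each parity bit (positions 1, 2, 4, 8, ...) so that the XOR of
    all positions holding a 1 -- the index-XOR -- is zero."""
    pos, word = 1, {}
    for bit in data_bits:
        while pos & (pos - 1) == 0:   # powers of two are parity slots
            word[pos] = 0
            pos += 1
        word[pos] = bit
        pos += 1
    syndrome = 0
    for p, bit in word.items():
        if bit:
            syndrome ^= p
    i = 0
    while (1 << i) <= max(word):      # bit i of the syndrome fixes parity 2^i
        word[1 << i] = (syndrome >> i) & 1
        i += 1
    return [word[k] for k in sorted(word)]

def hamming_syndrome(code_bits):
    """0 if the word is consistent; otherwise the position of a flipped bit."""
    syndrome = 0
    for p, bit in enumerate(code_bits, start=1):
        if bit:
            syndrome ^= p
    return syndrome

word = hamming_encode([1, 0, 0, 1, 1, 0, 1, 0])   # the byte 10011010
print("".join(map(str, word)))                    # -> 011100101010
word[4] ^= 1                                      # corrupt position 5
print(hamming_syndrome(word))                     # -> 5, the corrupted position
```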
If a byte of data to be encoded is 10011010, then the data word (using _ to represent the parity bits) would be __1_001_1010, and the code word is 011100101010. The choice of the parity, even or odd, is irrelevant but the same choice must be used for both encoding and decoding. This general rule can be shown visually:

Bit position: 1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16  17  18  19  20 ...
Encoded bit:  p1 p2 d1 p4 d2 d3 d4 p8 d5 d6 d7 d8 d9 d10 d11 p16 d12 d13 d14 d15 ...

Parity bit p1 covers positions 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, ...; p2 covers positions 2, 3, 6, 7, 10, 11, 14, 15, 18, 19, ...; p4 covers positions 4–7, 12–15, 20, ...; p8 covers positions 8–15, ...; and p16 covers positions 16–20, .... Shown are only 20 encoded bits (5 parity, 15 data) but the pattern continues indefinitely. The key thing about Hamming codes that can be seen from visual inspection is that any given bit is included in a unique set of parity bits. To check for errors, check all of the parity bits. The pattern of errors, called the error syndrome, identifies the bit in error. If all parity bits are correct, there is no error. Otherwise, the sum of the positions of the erroneous parity bits identifies the erroneous bit. For example, if the parity bits in positions 1, 2 and 8 indicate an error, then bit 1+2+8=11 is in error. If only one parity bit indicates an error, the parity bit itself is in error. With r parity bits, bits from 1 up to 2^r − 1 can be covered. After discounting the parity bits, 2^r − r − 1 bits remain for use as data. As r varies, this yields all the possible Hamming codes. Hamming codes with additional parity (SECDED) Hamming codes have a minimum distance of 3, which means that the decoder can detect and correct a single error, but it cannot distinguish a double bit error of some codeword from a single bit error of a different codeword. Thus, some double-bit errors will be incorrectly decoded as if they were single bit errors and therefore go undetected, unless no correction is attempted. To remedy this shortcoming, Hamming codes can be extended by an extra parity bit. This way, it is possible to increase the minimum distance of the Hamming code to 4, which allows the decoder to distinguish between single bit errors and two-bit errors. Thus the decoder can detect and correct a single error and at the same time detect (but not correct) a double error. If the decoder does not attempt to correct errors, it can reliably detect triple bit errors. If the decoder does correct errors, some triple errors will be mistaken for single errors and "corrected" to the wrong value. Error correction is therefore a trade-off between certainty (the ability to reliably detect triple bit errors) and resiliency (the ability to keep functioning in the face of single bit errors).
This extended Hamming code was popular in computer memory systems, starting with IBM 7030 Stretch in 1961, where it is known as SECDED (or SEC-DED, abbreviated from single error correction, double error detection). Server computers in the 21st century, while typically keeping the SECDED level of protection, no longer use Hamming's method, relying instead on designs with longer codewords (128 to 256 bits of data) and modified balanced parity-check trees. The (72,64) Hamming code is still popular in some hardware designs, including Xilinx FPGA families. [7,4] Hamming code In 1950, Hamming introduced the [7,4] Hamming code. It encodes four data bits into seven bits by adding three parity bits. As explained earlier, it can either detect and correct single-bit errors or it can detect (but not correct) both single and double-bit errors. With the addition of an overall parity bit, it becomes the [8,4] extended Hamming code and can both detect and correct single-bit errors and detect (but not correct) double-bit errors. Construction of G and H The matrix G is called a (canonical) generator matrix of a linear (n,k) code, and H is called a parity-check matrix. This is the construction of G and H in standard (or systematic) form. Regardless of form, G and H for linear block codes must satisfy H·G^T = 0, an all-zeros matrix. Here [7, 4, 3] = [n, k, d] = [2^m − 1, 2^m − 1 − m, 3] with m = 3. The parity-check matrix H of a Hamming code is constructed by listing all columns of length m that are pairwise independent. Thus H is a matrix whose left side consists of the nonzero m-tuples of weight two or more, where the order of those columns does not matter. The right-hand side is just the (n − k) × (n − k) identity matrix. So G can be obtained from H by taking the transpose of the left-hand side of H, with the k × k identity matrix on the left-hand side of G. The code generator matrix G and the parity-check matrix H are:

G =
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1

H =
1 1 0 1 1 0 0
1 0 1 1 0 1 0
0 1 1 1 0 0 1

Finally, these matrices can be mutated into equivalent non-systematic codes by the following operations: column permutations (swapping columns) and elementary row operations (replacing a row with a linear combination of rows). Encoding Example From the above matrix we have 2^k = 2^4 = 16 codewords. Let a be a row vector of binary data bits, a = (a1, a2, a3, a4). The codeword x for any of the 16 possible data vectors is given by the standard matrix product x = aG, where the summing operation is done modulo 2. For example, let a = (1, 0, 1, 1). Using the generator matrix G from above, we have (after applying modulo 2 to the sum) x = aG = (1, 0, 1, 1, 0, 1, 0). [8,4] Hamming code with an additional parity bit The [7,4] Hamming code can easily be extended to an [8,4] code by adding an extra parity bit on top of the (7,4) encoded word (see Hamming(7,4)). This can be summed up with the revised matrices:

G =
1 1 1 0 0 0 0 1
1 0 0 1 1 0 0 1
0 1 0 1 0 1 0 1
1 1 0 1 0 0 1 0

and

H =
1 0 1 0 1 0 1 0
0 1 1 0 0 1 1 0
0 0 0 1 1 1 1 0
1 1 1 1 1 1 1 1

Note that H is not in standard form. To obtain G, elementary row operations can be used to obtain an equivalent matrix to H in systematic form. For example, the first row in that systematic form is the sum of the second and third rows of H in non-systematic form. Using the systematic construction for Hamming codes from above, the matrix A is apparent and the systematic form of G is written as

G =
1 0 0 0 0 1 1 1
0 1 0 0 1 0 1 1
0 0 1 0 1 1 0 1
0 0 0 1 1 1 1 0

The non-systematic form of G can be row reduced (using elementary row operations) to match this matrix. The addition of the fourth row effectively computes the sum of all the codeword bits (data and parity) as the fourth parity bit.
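As a minimal sketch (helper name ours), the matrix product x = aG over GF(2) reduces to XORing together the rows of G selected by the data bits; run against the non-systematic [8,4] generator above, it reproduces the example in the next paragraph:

```python
# Non-systematic [8,4] generator matrix, as reconstructed above.
G = [
    [1, 1, 1, 0, 0, 0, 0, 1],
    [1, 0, 0, 1, 1, 0, 0, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1, 0],
]

def encode(a, G):
    """Codeword x = a G (mod 2): XOR the rows of G where a has a 1."""
    x = [0] * len(G[0])
    for a_i, row in zip(a, G):
        if a_i:
            x = [x_j ^ r_j for x_j, r_j in zip(x, row)]
    return x

print("".join(map(str, encode([1, 0, 1, 1], G))))  # -> 01100110
```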
For example, 1011 is encoded (using the non-systematic form of G at the start of this section) into 01100110, where digits 3, 5, 6 and 7 carry the data 1011, digits 1, 2 and 4 are the parity bits of the [7,4] Hamming code, and the final digit is the parity bit added by the [8,4] code. That final digit makes the parity of the whole codeword even. Finally, it can be shown that the minimum distance has increased from 3, in the [7,4] code, to 4 in the [8,4] code. Therefore, the code can be defined as [8,4] Hamming code. To decode the [8,4] Hamming code, first check the parity bit. If the parity bit indicates an error, single error correction (the [7,4] Hamming code) will indicate the error location, with "no error" indicating the parity bit. If the parity bit is correct, then single error correction will indicate the (bitwise) exclusive-or of two error locations. If the locations are equal ("no error") then a double bit error either has not occurred, or has cancelled itself out. Otherwise, a double bit error has occurred.
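The decoding rules of the previous paragraph translate directly into code. A sketch, assuming the non-systematic bit layout (p1, p2, d1, p4, d2, d3, d4, overall parity) used in the example above:

```python
def secded_decode(bits):
    """Return (status, bits) for an 8-bit extended Hamming word, even parity."""
    b = [0] + list(bits)                        # 1-indexed positions 1..8
    syndrome = sum(p for p in (1, 2, 4)         # [7,4] checks on positions 1..7
                   if sum(b[i] for i in range(1, 8) if i & p) % 2)
    overall_even = sum(b[1:9]) % 2 == 0
    if overall_even and syndrome == 0:
        return "no error", bits
    if not overall_even:                        # a single error: correctable
        b[syndrome if syndrome else 8] ^= 1     # syndrome 0 means the parity bit
        return "corrected single error", tuple(b[1:9])
    return "double error detected", bits        # parity fine, syndrome nonzero

print(secded_decode((0, 1, 1, 0, 0, 1, 1, 0)))  # the codeword for 1011: no error
```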
Mathematics
Information theory
null
41232
https://en.wikipedia.org/wiki/Harmonic
Harmonic
In physics, acoustics, and telecommunications, a harmonic is a sinusoidal wave with a frequency that is a positive integer multiple of the fundamental frequency of a periodic signal. The fundamental frequency is also called the 1st harmonic; the other harmonics are known as higher harmonics. As all harmonics are periodic at the fundamental frequency, the sum of harmonics is also periodic at that frequency. The set of harmonics forms a harmonic series. The term is employed in various disciplines, including music, physics, acoustics, electronic power transmission, radio technology, and other fields. For example, if the fundamental frequency is 50 Hz, a common AC power supply frequency, the frequencies of the first three higher harmonics are 100 Hz (2nd harmonic), 150 Hz (3rd harmonic), 200 Hz (4th harmonic) and any addition of waves with these frequencies is periodic at 50 Hz. In music, harmonics are used on string instruments and wind instruments as a way of producing sound on the instrument, particularly to play higher notes and, with strings, obtain notes that have a unique sound quality or "tone colour". On strings, bowed harmonics have a "glassy", pure tone. On stringed instruments, harmonics are played by touching (but not fully pressing down the string) at an exact point on the string while sounding the string (plucking, bowing, etc.); this allows the harmonic to sound, a pitch which is always higher than the fundamental frequency of the string. Terminology Harmonics may be called "overtones", "partials", or "upper partials", and in some music contexts, the terms "harmonic", "overtone" and "partial" are used fairly interchangeably. But more precisely, the term "harmonic" includes all pitches in a harmonic series (including the fundamental frequency) while the term "overtone" only includes pitches above the fundamental. Characteristics Most acoustic instruments emit complex tones containing many individual partials (component simple tones or sinusoidal waves), but the untrained human ear typically does not perceive those partials as separate phenomena. Rather, a musical note is perceived as one sound, the quality or timbre of that sound being a result of the relative strengths of the individual partials. Many acoustic oscillators, such as the human voice or a bowed violin string, produce complex tones that are more or less periodic, and thus are composed of partials that are nearly matched to the integer multiples of fundamental frequency and therefore resemble the ideal harmonics and are called "harmonic partials" or simply "harmonics" for convenience (although it's not strictly accurate to call a partial a harmonic, the first being actual and the second being theoretical). Oscillators that produce harmonic partials behave somewhat like one-dimensional resonators, and are often long and thin, such as a guitar string or a column of air open at both ends (as with the metallic modern orchestral transverse flute). Wind instruments whose air column is open at only one end, such as trumpets and clarinets, also produce partials resembling harmonics. However, they only produce partials matching the odd harmonics—at least in theory. In practical use, no real acoustic instrument behaves as perfectly as the simplified physical models predict; for example, instruments made of non-linearly elastic wood, instead of metal, or strung with gut instead of brass or steel strings, tend to have not-quite-integer partials.
Partials whose frequencies are not integer multiples of the fundamental are referred to as inharmonic partials. Some acoustic instruments emit a mix of harmonic and inharmonic partials but still produce an effect on the ear of having a definite fundamental pitch, such as pianos, strings plucked pizzicato, vibraphones, marimbas, and certain pure-sounding bells or chimes. Antique singing bowls are known for producing multiple harmonic partials or multiphonics. Other oscillators, such as cymbals, drum heads, and most percussion instruments, naturally produce an abundance of inharmonic partials and do not imply any particular pitch, and therefore cannot be used melodically or harmonically in the same way other instruments can. Building on the work of Sethares (2004), dynamic tonality introduces the notion of pseudo-harmonic partials, in which the frequency of each partial is aligned to match the pitch of a corresponding note in a pseudo-just tuning, thereby maximizing the consonance of that pseudo-harmonic timbre with notes of that pseudo-just tuning. Partials, overtones, and harmonics An overtone is any partial higher than the lowest partial in a compound tone. The relative strengths and frequency relationships of the component partials determine the timbre of an instrument. The similarity between the terms overtone and partial sometimes leads to their being loosely used interchangeably in a musical context, but they are counted differently, leading to some possible confusion. In the special case of instrumental timbres whose component partials closely match a harmonic series (such as with most strings and winds) rather than being inharmonic partials (such as with most pitched percussion instruments), it is also convenient to call the component partials "harmonics", but not strictly correct, because harmonics are numbered the same even when missing, while partials and overtones are only counted when present. This chart demonstrates how the three types of names (partial, overtone, and harmonic) are counted (assuming that the harmonics are present):

{| class="wikitable" style="text-align:center;"
|-
! Frequency !! Partial !! Overtone !! Harmonic
|-
| 1 × f || 1st partial || fundamental (not an overtone) || 1st harmonic
|-
| 2 × f || 2nd partial || 1st overtone || 2nd harmonic
|-
| 3 × f || 3rd partial || 2nd overtone || 3rd harmonic
|-
| 4 × f || 4th partial || 3rd overtone || 4th harmonic
|}

In many musical instruments, it is possible to play the upper harmonics without the fundamental note being present. In a simple case (e.g., recorder) this has the effect of making the note go up in pitch by an octave, but in more complex cases many other pitch variations are obtained. In some cases it also changes the timbre of the note. This is part of the normal method of obtaining higher notes in wind instruments, where it is called overblowing. The extended technique of playing multiphonics also produces harmonics. On string instruments it is possible to produce very pure sounding notes, called harmonics or flageolets by string players, which have an eerie quality, as well as being high in pitch. Harmonics may be used to check, at a unison, the tuning of strings that are not tuned to the unison. For example, lightly fingering the node found halfway down the highest string of a cello produces the same pitch as lightly fingering the node one third of the way down the second highest string. For the human voice see Overtone singing, which uses harmonics. While it is true that electronically produced periodic tones (e.g. square waves or other non-sinusoidal waves) have "harmonics" that are whole number multiples of the fundamental frequency, practical instruments do not all have this characteristic. For example, higher "harmonics" of piano notes are not true harmonics but are "overtones" and can be very sharp, i.e. a higher frequency than given by a pure harmonic series.
This is especially true of instruments other than strings, brass, or woodwinds. Examples of these "other" instruments are xylophones, drums, bells, chimes, etc.; not all of their overtone frequencies make a simple whole number ratio with the fundamental frequency. (The fundamental frequency is the reciprocal of the longest time period of the collection of vibrations in some single periodic phenomenon.) On stringed instruments The following table displays the stop points on a stringed instrument at which gentle touching of a string will force it into a harmonic mode when vibrated. String harmonics (flageolet tones) are described as having a "flutelike, silvery quality" that can be highly effective as a special color or tone color (timbre) when used and heard in orchestration. It is unusual to encounter natural harmonics higher than the fifth partial on any stringed instrument except the double bass, on account of its much longer strings.

{| class="wikitable"
|-
! Harmonic order !! Stop note !! Note sounded (relative to open string) !! Cents above fundamental (octave-reduced)
|-
| 1 || fundamental, perfect unison || P1 || 0.0
|-
| 2 || first perfect octave || P8 || 0.0
|-
| 3 || perfect fifth || P8 + P5 || 702.0
|-
| 4 || doubled perfect octave || 2·P8 || 0.0
|-
| 5 || just major third, major third || 2·P8 + just M3 || 386.3
|-
| 6 || perfect fifth || 2·P8 + P5 || 702.0
|-
| 7 || harmonic seventh, septimal minor seventh ('the lost chord') || 2·P8 + septimal m7 || 968.8
|-
| 8 || third perfect octave || 3·P8 || 0.0
|-
| 9 || Pythagorean major second, harmonic ninth || 3·P8 + Pythagorean M2 || 203.9
|-
| 10 || just major third || 3·P8 + just M3 || 386.3
|-
| 11 || lesser undecimal tritone, undecimal semi-augmented fourth || 3·P8 + undecimal semi-augmented fourth || 551.3
|-
| 12 || perfect fifth || 3·P8 + P5 || 702.0
|-
| 13 || tridecimal neutral sixth || 3·P8 + tridecimal n6 || 840.5
|-
| 14 || harmonic seventh, septimal minor seventh ('the lost chord') || 3·P8 + septimal m7 || 968.8
|-
| 15 || just major seventh || 3·P8 + just M7 || 1088.3
|-
| 16 || fourth perfect octave || 4·P8 || 0.0
|-
| 17 || septidecimal semitone || 4·P8 + septidecimal m2 || 105.0
|-
| 18 || Pythagorean major second || 4·P8 + Pythagorean M2 || 203.9
|-
| 19 || nanodecimal minor third || 4·P8 + nanodecimal m3 || 297.5
|-
| 20 || just major third || 4·P8 + just M3 || 386.3
|}

{|
|+ Notation key
|-
| P || perfect interval
|-
| A || augmented interval (sharpened)
|-
| M || major interval
|-
| m || minor interval (flattened major)
|-
| n || neutral interval (between major and minor)
|}
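The cents column above depends only on the harmonic number: the nth harmonic lies 1200·log2(n) cents above the fundamental, and reducing modulo 1200 discards the octaves. A short Python check (the 50 Hz fundamental is an arbitrary illustrative value):

```python
import math

f0 = 50.0                                   # fundamental frequency, Hz (example)
for n in range(1, 8):
    cents = 1200 * math.log2(n) % 1200      # interval above fundamental, octave-reduced
    print(f"harmonic {n}: {n * f0:6.1f} Hz, {cents:6.1f} cents")
# harmonics 3, 5 and 7 give 702.0, 386.3 and 968.8 cents, matching the table
```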
Artificial harmonics Occasionally a score will call for an artificial harmonic, produced by playing an overtone on an already stopped string. As a performance technique, it is accomplished by using two fingers on the fingerboard, the first to shorten the string to the desired fundamental, with the second touching the node corresponding to the appropriate harmonic. Other information Harmonics may be either used in or considered as the basis of just intonation systems. Composer Arnold Dreyblatt is able to bring out different harmonics on the single string of his modified double bass by slightly altering his unique bowing technique halfway between hitting and bowing the strings. Composer Lawrence Ball uses harmonics to generate music electronically.
Physical sciences
Waves
Physics
2545455
https://en.wikipedia.org/wiki/Systolic%20geometry
Systolic geometry
In mathematics, systolic geometry is the study of systolic invariants of manifolds and polyhedra, as initially conceived by Charles Loewner and developed by Mikhail Gromov, Michael Freedman, Peter Sarnak, Mikhail Katz, Larry Guth, and others, in its arithmetical, ergodic, and topological manifestations.
Mathematics
Other
null
2546065
https://en.wikipedia.org/wiki/Chromalveolata
Chromalveolata
Chromalveolata was a eukaryote supergroup present in a major classification of 2005, then regarded as one of the six major groups within the eukaryotes. It was a refinement of the kingdom Chromista, first proposed by Thomas Cavalier-Smith in 1981. Chromalveolata was proposed to represent the organisms descended from a single secondary endosymbiosis involving a red alga and a bikont. The plastids in these organisms are those that contain chlorophyll c. However, the monophyly of the Chromalveolata has been rejected. Thus, two papers published in 2008 have phylogenetic trees in which the chromalveolates are split up, and recent studies continue to support this view. Groups and classification Historically, many chromalveolates were considered plants, because of their cell walls, photosynthetic ability, and in some cases their morphological resemblance to the land plants (Embryophyta). However, when the five-kingdom system (proposed in 1969) took prevalence over the animal–plant dichotomy, most of what we now call chromalveolates were put into the kingdom Protista, but the water molds and slime nets were put into the kingdom Fungi, while the brown algae stayed in the plant kingdom. These various organisms were later grouped together and given the name Chromalveolata by Cavalier-Smith. He believed them to be a monophyletic group, but this is not the case. In 2005, in a classification reflecting the consensus at the time, the Chromalveolata were regarded as one of the six major clades of eukaryotes. Although not given a formal taxonomic status in this classification, elsewhere the group had been treated as a Kingdom. The Chromalveolata were divided into four major subgroups: Cryptophyta Haptophyta Stramenopiles (or Heterokontophyta) Alveolata Other groups that may be included within, or related to, chromalveolates, are: Centrohelids Katablepharids Telonemia Though several groups, such as the ciliates and the water molds, have lost the ability to photosynthesize, most are autotrophic. All photosynthetic chromalveolates use chlorophylls a and c, and many use accessory pigments. Chromalveolates share similar glyceraldehyde 3-phosphate dehydrogenase proteins. However, as early as 2005, doubts were being expressed as to whether Chromalveolata was monophyletic, and a review in 2006 noted the lack of evidence for several of the supposed six major eukaryote groups, including the Chromalveolata. In 2012, consensus emerged that the group is not monophyletic. The four original subgroups fall into at least two categories: one comprises the Stramenopiles and the Alveolata, to which the Rhizaria are now usually added to form the SAR group; the other comprises the Cryptophyta and the Haptophyta. A 2010 paper splits the Cryptophyta and Haptophyta; the former are a sister group to the SAR group, the latter cluster with the Archaeplastida (plants in the broad sense). The katablepharids are closely related to the cryptophytes and the telonemids and centrohelids may be related to the haptophytes. A variety of names have been used for different combinations of the groups formerly thought to make up the Chromalveolata. Halvaria Analyses in 2007 and 2008 agreed that the Stramenopiles and the Alveolata were related, forming a reduced chromalveolate clade, called Halvaria. SAR group The Rhizaria, which were originally not considered to be chromalveolates, belong with the Stramenopiles and Alveolata in many analyses, forming the SAR group, i.e. Halvaria plus Rhizaria. 
Hacrobia The other two groups originally included in Chromalveolata, the Haptophyta and the Cryptophyta, were related in some analyses, forming a clade which has been called Hacrobia. Alternatively, the Hacrobia appeared to be more closely related to the Archaeplastida (plants in the very broad sense), being a sister group in one analysis, and actually nested inside this group in another. (Earlier, Cavalier-Smith had suggested a clade called Corticata for the grouping of all the chromalveolates and the Archaeplastida.) More recently, as noted above, Hacrobia has been split, with the Haptophyta being sister to the SAR group and the Cryptophyta instead related to the Archaeplastida. Morphology Chromalveolates, unlike other groups with multicellular representatives, do not have very many common morphological characteristics. Each major subgroup has certain unique features, including the alveoli of the Alveolata, the haptonema of the Haptophyta, the ejectisome of the Cryptophyta, and the two different flagella of the Heterokontophyta. However, none of these features are present in all of the groups. The only common chromalveolate features are these: The shared origin of chloroplasts, as mentioned above Presence of cellulose in most cell walls Since this is such a diverse group, it is difficult to summarize shared chromalveolate characteristics. Ecological role Many chromalveolates affect our ecosystem in enormous ways. Some of these organisms can be very harmful. Dinoflagellates produce red tides, which can devastate fish populations and intoxicate oyster harvests. Apicomplexans are some of the most successful specific parasites to animals (including the genus Plasmodium, the malaria parasites). Water molds cause several plant diseases - it was the water mold Phytophthora infestans that caused the Irish potato blight that led to the Great Irish Famine. However, many others are vital members of our ecosystem. Diatoms are one of the major photosynthetic producers, and as such produce much of the oxygen that we breathe, and also take in much of the carbon dioxide from the atmosphere. Brown algae, most specifically kelps, create underwater "forest" habitats for many marine creatures, and provide a large portion of the diet of coastal communities. Chromalveolates also provide many products that we use. The algin in brown algae is used as a food thickener, most famously in ice cream. The siliceous shells of diatoms have many uses, such as in reflective paint, in toothpaste, or as a filter, in what is known as diatomaceous earth. Chromalveolata viruses Like other organisms, chromalveolata have viruses. In the case of Emiliania huxleyi (a common algal bloom chromalveolate), a virus believed to be specific to it causes mass death and the end of the bloom.
Biology and health sciences
Other organisms
null
18490998
https://en.wikipedia.org/wiki/Obstructive%20shock
Obstructive shock
Obstructive shock is one of the four types of shock, caused by a physical obstruction in the flow of blood. Obstruction can occur at the level of the great vessels or the heart itself. Causes include pulmonary embolism, cardiac tamponade, and tension pneumothorax. These are all life-threatening. Symptoms may include shortness of breath, weakness, or altered mental status. Low blood pressure and tachycardia are often seen in shock. Other symptoms depend on the underlying cause. The physiology of obstructive shock is similar to cardiogenic shock. In both types, the heart's output of blood (cardiac output) is decreased. This causes a back-up of blood into the veins entering the right atrium. Jugular venous distension can be observed in the neck. This finding can be seen in obstructive and cardiogenic shock. With the decreased cardiac output, blood flow to vital tissues is decreased. Poor perfusion to organs leads to shock. Due to these similarities, some sources place obstructive shock under the category of cardiogenic shock. However, it is important to distinguish between the two types, because treatment is different. In cardiogenic shock, the problem is in the function of the heart itself. In obstructive shock, the underlying problem is not the pump. Rather, the input into the heart (venous return) is decreased or the pressure against which the heart is pumping (afterload) is higher than normal. Treating the underlying cause can reverse the shock. For example, tension pneumothorax needs rapid needle decompression. This decreases the pressure in the chest. Blood flow to and from the heart is restored, and shock resolves. Signs and symptoms As in all types of shock, low blood pressure is a key finding in patients with obstructive shock. In response to low blood pressure, heart rate increases. Shortness of breath, tachypnea, and hypoxia may be present. Because of poor blood flow to the tissues, patients may have cold extremities. Less blood to the kidneys and brain can cause decreased urine output and altered mental status, respectively. Other signs may be seen depending on the underlying cause. For example, jugular venous distension is a significant finding in evaluating shock. This occurs in cardiogenic and obstructive shock. This is not observed in the other two types of shock, hypovolemic and distributive. Some particular clinical findings are described below. A classic finding of cardiac tamponade is Beck's triad. The triad includes hypotension, jugular vein distension, and muffled heart sounds. Kussmaul's sign and pulsus paradoxus may also be seen. Low-voltage QRS complexes and electrical alternans are signs on EKG. However, EKG may not show these findings and most often shows tachycardia. Tension pneumothorax would have decreased breath sounds on the affected side. Tracheal deviation may also be present, shifted away from the affected side. Thus, a lung exam is important. Other findings may include decreased chest mobility and air underneath the skin (subcutaneous emphysema). Pulmonary embolism similarly presents with shortness of breath and hypoxia. Chest pain worse with inspiration is frequently seen. Chest pain can also be similar to a heart attack. This is due to the right ventricular stress and ischemia that can occur in PE. Other symptoms are syncope and hemoptysis. DVT is a common cause. Thus, symptoms including leg pain, redness, and swelling can be present. The likelihood of pulmonary embolism can be evaluated through various criteria. The Wells score is often calculated.
It gives points based on these symptoms and patient risk factors. Causes Causes include any obstruction of blood flow to and from the heart. There are multiple, including pulmonary embolism, cardiac tamponade, and tension pneumothorax. Other causes include abdominal compartment syndrome, hiatal hernia, severe aortic valve stenosis, and disorders of the aorta. Constrictive pericarditis is a rare cause. Masses can grow to press on major blood vessels causing shock. Tension pneumothorax A pneumothorax occurs when air collects in the pleural space around the lungs. Normally, this space has negative pressure to allow the lung to fill. Pressure increases as more air enters this space. The lung collapses, impairing normal breathing. Surrounding structures may also shift. When severe enough to cause these shifts and hypotension, it is called a tension pneumothorax. This is life-threatening. The increased pressure inside the chest can compress the heart and lead to a collapse of the blood vessels that drain to the heart. The veins supplying the heart are compressed, in turn decreasing venous return. With the heart unable to fill, cardiac output drops. Hypotension and shock ensue. If not rapidly treated, it can lead to cardiac arrest and death. Pulmonary embolism A pulmonary embolism (PE) is an obstruction of the pulmonary arteries. Deaths from PE have been estimated at ~100,000 per year in the United States. However, this may be higher in recent years. Most often, the obstruction is a blood clot that traveled from elsewhere in the body. Most commonly, this is from a deep vein thrombosis (DVT) in the legs or pelvis. Risk factors are conditions that increase the risk of clotting. These include genetic conditions (factor V Leiden) and acquired conditions (cancer). Trauma, surgery, and prolonged bed-rest are common risks. Covid-19 is a recent risk factor. This obstruction increases the pulmonary vascular resistance. If large enough, the clot increases the load on the right side of the heart. The right ventricle must work harder to pump blood to the lungs. With back-up of blood, the right ventricle can begin to dilate. Right heart failure can ensue, leading to shock and death. A PE is considered "massive" when it causes hypotension or shock. A submassive PE causes right heart dysfunction without hypotension. Cardiac tamponade A pericardial effusion is fluid in the pericardial sac. When large enough, the pressure compresses the heart. This causes shock by preventing the heart from filling with blood. This is called cardiac tamponade. The chambers of the heart can collapse from this pressure. The right heart has thinner walls and collapses more easily. With less venous return, cardiac output decreases. The lack of blood flow to vital organs can cause death. Whether an effusion causes tamponade depends on the amount of fluid and how long it took to accumulate. When fluid collects slowly, the pericardium can stretch. Thus, a chronic effusion can be as large as 1 liter. Acute effusions can cause tamponade when small because the tissue does not have time to stretch. Diagnosis Rapid evaluation of shock is essential given its life-threatening nature. Diagnosis requires a thorough history, physical exam, and additional tests. One must also consider the possibility of multiple types of shock being present. For example, a trauma patient may be hypovolemic from blood loss. This patient could also have tension pneumothorax due to trauma to the chest.
Vital signs in obstructive shock may show hypotension, tachycardia, and/or hypoxia. A physical exam should be thorough, including a jugular vein exam, cardiac and lung exams, and assessment of skin tone and temperature. Response to fluids may aid in diagnosis. Labs including a metabolic panel can assess electrolytes and kidney and liver function. Lactic acid rises due to poor tissue perfusion. This may even be an initial sign of shock and rise before blood pressure decreases. Lactic acid should decrease with appropriate treatment of shock. EKG should also be performed. Tachycardia is often present, but other specific findings may be present based on the underlying cause. At the bedside, point-of-care echocardiography should be used. This is non-invasive and can help diagnose the four types of shock. Echocardiography can look for ventricular dysfunction, effusions, or valve dysfunction. Measurement of the vena cava during the breathing cycle can help assess volume status. A point-of-care echocardiogram can also assess for causes of obstructive shock. The vena cava would be dilated due to the obstruction. In pulmonary embolism, the right ventricle will be dilated. Other findings include paradoxical septal motion or clots in the right heart or pulmonary artery. Echocardiography can assess for pericardial effusion. In tamponade, collapse of the right atrium and ventricle would be seen due to pressure in the pericardial sac. A chest X-ray can rapidly identify a pneumothorax, seen as absence of lung markings. Ultrasound can show the lack of lung sliding. However, imaging should not delay treatment. CT angiography is the standard for diagnosis of pulmonary embolism. Clots appear in the vasculature as filling defects. Treatment In any type of shock, rapid treatment is essential. Delays in therapy increase the risk of mortality. Treatment often begins while diagnostic assessment is still occurring. Resuscitation addresses the ABC's - airway, breathing, and circulation. Supplemental oxygen is given, and intubation is performed if indicated. Intravenous fluids can increase blood pressure and maintain blood flow to organs. However, fluids should be given with caution. Too much fluid can cause overload and pulmonary edema. In some cases, fluids may be beneficial. Fluids can improve venous return. For example, tamponade prevents normal cardiac filling due to pressure compressing the heart. In this case, giving fluids can improve right heart filling. However, in other causes of obstructive shock, too much fluid can worsen cardiac output. Thus, fluid therapy should be monitored closely. After these stabilizing measures, further treatment depends on the cause. Treatment of the underlying condition can quickly resolve the shock. For tension pneumothorax, needle decompression should be done immediately. A chest tube is also inserted. Cardiac tamponade is treated through needle or surgical decompression. Needle pericardiocentesis can be done at the bedside. This is often the preferred therapy. A catheter may be placed for continued drainage. If these methods are not effective, surgery may be needed. Pericardial window is a surgery that is particularly useful in cases of cancer. Massive pulmonary embolism requires thrombolysis or embolectomy. Thrombolysis can be systemic via IV alteplase (tPA) or catheter-directed. tPA works to break up the clot. A major risk of tPA is bleeding. Thus, patients must be assessed for their risk of bleeding and contraindications.
Catheter-directed therapy involves giving tPA locally in the pulmonary artery. It can also fragment and remove the clot itself (embolectomy). This local therapy has a lower risk of bleeding. Surgical embolectomy is a more invasive treatment, associated with 10-20% surgical mortality risk.
Biology and health sciences
Cardiovascular disease
Health
10461876
https://en.wikipedia.org/wiki/Net%20%28device%29
Net (device)
A net comprises threads or yarns knotted and twisted into a grid-like structure which blocks the passage of large items, while letting small items and fluids pass. It requires less material than something sheet-like, and provides a degree of transparency, as well as flexibility and lightness. Nets have been constructed by human beings since at least the Mesolithic period for use in capturing or retaining things. Their open structure provides lightness and flexibility that allow them to be carried and manipulated with relative ease, making them valuable for methodical tasks such as hunting, fishing, sleeping, and carrying. History The oldest nets found are from the Mesolithic era, but nets may have existed in the Upper Paleolithic era. Nets are typically made of perishable materials and leave little archeological record. Some nets are preserved in ice or bogs, and there are also clay impressions of nets. Making and repairing nets Originally, all nets were made by hand. Construction begins from a single point for round nets such as purse nets, net bags, or hair nets, but square nets are usually started from a headrope. A line is tied to the headrope at regular intervals, forming a series of loops. This can be done using slipped overhand knots or other knots, such as clove hitches. Subsequent rows are then worked using sheet bends or another knot. Some nets, such as hammocks, may be looped rather than knotted. To avoid hauling a long length of loose twine through each knot, the twine is wound onto a netting shuttle or netting needle. This must be done correctly to prevent it twisting as it is used, but makes net production much faster. A gauge – often a smooth stick – is used to keep the loops the same size and the mesh even. The first and last rows are generally made using a half-size gauge, so that the edges of the net will be smooth. There are also knot-free nets. Some nets are still shaped by their end users, although nets are now often knotted by machine. When a hole is ripped in a net, there are fewer holes in it than before the net was ripped. However, the stress concentration at the edges of the hole often causes it to tear further, making timely repairs important. Mending nets by hand is still an important skill for those who work with them. Materials Nets may be made using almost any sort of fiber. Traditional net materials varied with what was locally available; early European fishing nets were often made of linen, for instance. Longer-lasting synthetics are now fairly universal. Nylon monofilament nets are transparent, and are therefore often used for fishing and trapping. Structural properties Nets, like fabric, stretch less along their constituent strands (the "bars" between knots) than diagonally across the gaps in the mesh. They are, so to speak, made on the bias. The choice of material used also affects the structural properties of the net. Nets are designed and constructed for their specific purpose by modifying the parameters of the weave and the material used. Safety nets, for example, must decelerate the person hitting them gradually, usually by having a concave-upwards stress–strain curve, where the amount of force required to stretch the net increases the further the net is stretched. Uses Transport Examples include cargo nets and net bags. Some vegetables, like onions, are often shipped in nets. Sports Nets are used in sporting goals and in games such as soccer, basketball, bossaball and ice hockey.
A net separates opponents in various net sports such as volleyball, tennis, badminton, and table tennis, where the ball or shuttlecock must go over the net to remain in play. A net also may be used for safety during practice, as in cricket. Capturing animals Nets for capturing animals include fishing nets, butterfly nets, bird netting, and trapping nets such as purse and long nets. Some, like mist nets, rocket nets, and netguns, are designed not to harm the animals caught. Camouflage nets may also be used. Furnishings Hammocks, safety nets, and mosquito nets are net-based. Some furniture includes a net stretched on a frame. Multihull boats may have net trampolines strung between their hulls. Clothing Hair nets, net lace, and net embroidery are sartorial nets. Armed conflict Anti-submarine nets and anti-torpedo nets can be laid by net-laying ships.
Technology
Flexible components
null
10469950
https://en.wikipedia.org/wiki/Red%20mullet
Red mullet
The red mullets or surmullets are two species of goatfish, Mullus barbatus and Mullus surmuletus, found in the Mediterranean Sea, east North Atlantic Ocean, and the Black Sea. Both "red mullet" and "surmullet" can also refer to the Mullidae in general. Classification Though they can easily be distinguished—M. surmuletus has a striped first dorsal fin—their common names overlap in many of the languages of the region. In English, M. surmuletus is sometimes called the striped red mullet. Despite the English name "red mullet", these fishes of the goatfish family Mullidae are not closely related to many other species called "mullet", which are members of the grey mullet family Mugilidae. The word "surmullet" comes from the French, and ultimately probably from a Germanic root "sor", meaning "reddish brown". Cultural impact They are both favored delicacies in the Mediterranean, and in antiquity were "one of the most famous and valued fish". They are very similar, and cooked in the same ways. M. surmuletus is perhaps somewhat more prized. The ancient Romans reared them in ponds where they were attended and caressed by their owners, and taught to come to be fed at the sound of the voice or bell of the keeper. Specimens were sometimes sold for their weight in silver. Pliny cites a case in which a large sum was paid for a single fish, and an extraordinary expenditure of time was lavished upon these slow-learning pets. Juvenal and other satirists descanted upon the height to which the pursuit of this luxury was carried as a type of extravagance. The statesman Titus Annius Milo, exiled to Marseille in 52 B.C., joked that he would have no regrets as long as he could eat the delicious red mullet of Marseille. Claudius Aelianus in his On the Nature of Animals, writes that the species is sacred to the Greek agricultural goddess Demeter. "At Eleusis it [the Red Mullet] is held in honour by the initiated, and of this honour two accounts are given. Some say, it is because it gives birth three times in a year; others, because it eats the Sea-Hare, which is deadly to man." The red mullet was also significant in the cult of the witch goddess Hecate.
Biology and health sciences
Acanthomorpha
Animals
3495453
https://en.wikipedia.org/wiki/Crenulation
Crenulation
In a geological context, crenulation or crenulation cleavage is a fabric formed in metamorphic rocks such as phyllite, schist and some gneiss by two or more stress directions causing the formation of the superimposed foliations. Formation Crenulations form when an early planar fabric is overprinted by a later planar fabric. Crenulations form by recrystallisation of mica minerals during metamorphism. Micaceous minerals form planar surfaces known as foliations perpendicular to the principal stress fields. If a rock is subjected to two separate deformations and the second deformation is at some other angle to the original, growth of new micas on the foliation planes will create a new foliation plane perpendicular to the plane of principal stress. The angular intersection of the two foliations causes a diagnostic texture called a crenulation, which may involve folding of the earlier mica foliations by the later foliation. Recognition Recognising a crenulation in a rock may require inspecting the rock with a hand-lens or petrographic microscope in thin section. Crenulations may be very cryptic, and there may be several recorded within a rock and especially, entrained within porphyroblasts. Crenulations may manifest as kinking of previous foliation, such that the original foliation appears to be lined or inscribed by a later foliation. In more advanced states, the later foliation will tend to form distinct foliation planes cross-cutting the earlier foliation, resulting in breaking, warping, and micro-scale folding of the earlier foliation into the new foliation. When the crenulation foliation begins to dominate it may totally or almost completely wipe out the original foliation. This process occurs at different rates in rocks and beds of different lithology and chemical composition so that it is usually valuable to look at a variety of outcrops to gain a better appreciation of the effect of crenulation or discover the orientation or presence of earlier foliations. Crenulation may also be the incipient foliation plane which precipitates shearing. In this case, it is often likely that the crenulation acts as a shear plane and it may be difficult to reconstruct earlier foliations and rock units across the crenulation foliation. Analysis Crenulations, because they are the result of a second (or more) foliation, preserve important information on not only the stresses which formed the crenulation foliation, but the orientation of previous foliations. Firstly, the crenulation must be analysed to determine the initial foliation, termed S1, and the overprinting subsequent foliation. The intersection of these two planes forms an intersection lineation. This intersection lineation, L1-2, may approximate the plunge of F2 interference folds. The initial impact of a crenulation foliation may be cryptic, microscopic growth of new minerals at an angle to previous foliations. This may occur only in certain compositions of the rocks which favor growth of minerals under the P-T conditions at that time. In more brittle conditions, especially in highly micaceous rocks, a crenulation may appear as "kink bands", where S1 foliations are kinked by the S2 foliation so that the original minerals are broken or deformed. This may not result in new mineral growth. Eventually, the crenulation foliation overprints the S1 foliation. In extreme cases, the S2 foliation will obliterate the previous foliation, especially in wet rocks which have compositions amenable to growth of minerals at that time.
In this case, porphyroblasts may be the only means by which to observe earlier foliations, assuming they have inclusion trails of the S1 foliation.
Physical sciences
Structural geology
Earth science
8111079
https://en.wikipedia.org/wiki/Gravitational%20wave
Gravitational wave
Gravitational waves are transient displacements in a gravitational field, generated by the relative motion of gravitating masses, that radiate outward from their source at the speed of light. They were proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves. In 1916, Albert Einstein demonstrated that gravitational waves result from his general theory of relativity as ripples in spacetime. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, instead asserting that gravity has instantaneous effect everywhere. Gravitational waves therefore stand as an important relativistic phenomenon that is absent from Newtonian physics. In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang. The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell Alan Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery. The first direct observation of gravitational waves was made in September 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves. Introduction In Albert Einstein's general theory of relativity, gravity is treated as a phenomenon resulting from the curvature of spacetime. This curvature is caused by the presence of mass. If the masses move, the curvature of spacetime changes. If the motion is not spherically symmetric, the motion can cause gravitational waves which propagate away at the speed of light. As a gravitational wave passes an observer, that observer will find spacetime distorted by the effects of strain. Distances between objects increase and decrease rhythmically as the wave passes, at a frequency equal to that of the wave. The magnitude of this effect is inversely proportional to the distance (not distance squared) from the source. Inspiraling binary neutron stars are predicted to be a powerful source of gravitational waves as they coalesce, due to the very large acceleration of their masses as they orbit close to one another. However, due to the astronomical distances to these sources, the effects when measured on Earth are predicted to be very small, having strains of less than 1 part in 10^20. Scientists demonstrate the existence of these waves with highly sensitive detectors at multiple observation sites. As of the early 2020s, the LIGO and VIRGO observatories were the most sensitive detectors, operating at resolutions of about one part in 5×10^22. The Japanese detector KAGRA was completed in 2019; its first joint detection with LIGO and VIRGO was reported in 2021. Another European ground-based detector, the Einstein Telescope, is under development.
A space-based observatory, the Laser Interferometer Space Antenna (LISA), is also being developed by the European Space Agency. Gravitational waves do not strongly interact with matter in the way that electromagnetic radiation does. This allows for the observation of events involving exotic objects in the distant universe that cannot be observed with more traditional means such as optical telescopes or radio telescopes; accordingly, gravitational wave astronomy gives new insights into the workings of the universe. In particular, gravitational waves could be of interest to cosmologists as they offer a possible way of observing the very early universe. This is not possible with conventional astronomy, since before recombination the universe was opaque to electromagnetic radiation. Precise measurements of gravitational waves will also allow scientists to test more thoroughly the general theory of relativity. In principle, gravitational waves can exist at any frequency. Very low frequency waves can be detected using pulsar timing arrays. In this technique, the timing of approximately 100 pulsars spread widely across our galaxy is monitored over the course of years. Detectable changes in the arrival time of their signals can result from passing gravitational waves generated by merging supermassive black holes with wavelengths measured in light-years. These timing changes can be used to locate the source of the waves. Using this technique, astronomers have discovered the 'hum' of various SMBH mergers occurring in the universe. Stephen Hawking and Werner Israel list different frequency bands for gravitational waves that could plausibly be detected, ranging from 10^−7 Hz up to 10^11 Hz. Speed of gravity The speed of gravitational waves in the general theory of relativity is equal to the speed of light in vacuum, c. Within the theory of special relativity, the constant c is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity. Thus, the speed of "light" is also the speed of gravitational waves, and, further, the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if any exist, requires an as-yet unavailable theory of quantum gravity). In August 2017, the LIGO and Virgo detectors received gravitational wave signals at nearly the same time as gamma ray satellites and optical telescopes saw signals from a source located about 130 million light years away.
In 1915 Einstein published his general theory of relativity, a complete relativistic theory of gravitation. He conjectured, like Poincaré, that the equation would produce gravitational waves, but, as he mentions in a letter to Schwarzschild in February 1916, these could not be similar to electromagnetic waves. Electromagnetic waves can be produced by dipole motion, requiring both a positive and a negative charge. Gravitation has no equivalent to negative charge. Einstein continued to work through the complexity of the equations of general relativity to find an alternative wave model. The result was published in June 1916, and there he came to the conclusion that the gravitational wave must propagate with the speed of light, and there must, in fact, be three types of gravitational waves dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl. However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they "propagate at the speed of thought". This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagate at the speed of light regardless of coordinate system. In 1936, Einstein and Nathan Rosen submitted a paper to Physical Review in which they claimed gravitational waves could not exist in the full general theory of relativity because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson, who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in Physical Review again. Nonetheless, his assistant Leopold Infeld, who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere. In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor. At the time, Pirani's work was overshadowed by the community's focus on a different question: whether gravitational waves could transmit energy. This matter was settled by a thought experiment proposed by Richard Feynman during the first "GR" conference at Chapel Hill in 1957. In short, his argument, known as the "sticky bead argument", notes that if one takes a rod with beads then the effect of a passing gravitational wave would be to move the beads along the rod; friction would then produce heat, implying that the passing wave had done work. Shortly after, Hermann Bondi published a detailed version of the "sticky bead argument". This later led to a series of articles (1959 to 1989) by Bondi and Pirani that established the existence of plane wave solutions for gravitational waves. Paul Dirac further postulated the existence of gravitational waves, declaring them to have "physical significance" in his 1959 lecture at the Lindau Meetings. Further, it was Dirac who predicted gravitational waves with a well-defined energy density in 1964.
After the Chapel Hill conference, Joseph Weber started designing and building the first gravitational wave detectors now known as Weber bars. In 1969, Weber claimed to have detected the first gravitational waves, and by 1970 he was "detecting" signals regularly from the Galactic Center; however, the frequency of detection soon raised doubts on the validity of his observations as the implied rate of energy loss of the Milky Way would drain our galaxy of energy on a timescale much shorter than its inferred age. These doubts were strengthened when, by the mid-1970s, repeated experiments from other groups building their own Weber bars across the globe failed to find any signals, and by the late 1970s consensus was that Weber's results were spurious. In the same period, the first indirect evidence of gravitational waves was discovered. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar, which earned them the 1993 Nobel Prize in Physics. Pulsar timing observations over the next decade showed a gradual decay of the orbital period of the Hulse–Taylor pulsar that matched the loss of energy and angular momentum in gravitational radiation predicted by general relativity. This indirect detection of gravitational waves motivated further searches, despite Weber's discredited result. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. The idea of using a laser interferometer for this seems to have been floated independently by various people, including M.E. Gertsenshtein and V. I. Pustovoit in 1962, and Vladimir B. Braginskiĭ in 1966. The first prototypes were developed in the 1970s by Robert L. Forward and Rainer Weiss. In the decades that followed, ever more sensitive instruments were constructed, culminating in the construction of GEO600, LIGO, and Virgo. After years of producing null results, improved detectors became operational in 2015. On 11 February 2016, the LIGO-Virgo collaborations announced the first observation of gravitational waves, from a signal (dubbed GW150914) detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The confidence level of this being an observation of gravitational waves was 99.99994%. A year earlier, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background. However, they were later forced to retract this result. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. In 2023, NANOGrav, EPTA, PPTA, and IPTA announced that they found evidence of a universal gravitational wave background. 
The North American Nanohertz Observatory for Gravitational Waves (NANOGrav) states that the waves were created over cosmological time scales by supermassive black holes, identifying the distinctive Hellings–Downs curve in 15 years of radio observations of 25 pulsars. Similar results were published by the European Pulsar Timing Array, which claimed a 3σ significance. They expect that a 5σ significance will be achieved by 2025 by combining the measurements of several collaborations. Effects of passing Gravitational waves are constantly passing Earth; however, even the strongest have a minuscule effect and their sources are generally at a great distance. For example, the waves given off by the cataclysmic final merger of GW150914 reached Earth after travelling over a billion light-years, as a ripple in spacetime that changed the length of a 4 km LIGO arm by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. This tiny effect from even extreme gravitational waves makes them observable on Earth only with the most sophisticated detectors. The effects of a passing gravitational wave, in an extremely exaggerated form, can be visualized by imagining a perfectly flat region of spacetime with a group of motionless test particles lying in a plane, e.g., the surface of a computer screen. As a gravitational wave passes through the particles along a line perpendicular to the plane of the particles, i.e., following the observer's line of vision into the screen, the particles will follow the distortion in spacetime, oscillating in a "cruciform" manner, as shown in the animations. The area enclosed by the test particles does not change and there is no motion along the direction of propagation. The oscillations depicted in the animation are exaggerated for the purpose of discussion; in reality, a gravitational wave has a very small amplitude (as formulated in linearized gravity). However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula. As with other waves, there are a number of characteristics used to describe a gravitational wave: Amplitude: Usually denoted h, this is the size of the wave: the fraction of stretching or squeezing in the animation. The amplitude shown here is roughly h = 0.5 (or 50%). Gravitational waves passing through the Earth are many sextillion times weaker than this, with h ≈ 10^−20. Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes) Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze. Speed: This is the speed at which a point on the wave (for example, a point of maximum stretch or squeeze) travels. For gravitational waves with small amplitudes, this wave speed is equal to the speed of light (c). The speed, wavelength, and frequency of a gravitational wave are related by the equation c = λf, just like the equation for a light wave.
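Both relations in this section are simple enough to check numerically. A trivial Python sketch (the strain value is an assumed, typical order of magnitude, not a quoted measurement):

```python
c = 299_792_458                  # speed of light, m/s

# Wavelength from frequency via c = lambda * f
f = 0.5                          # Hz, as in the animation example discussed next
print(c / f)                     # ~6.0e8 m, about 600 000 km

# Arm-length change produced by a strain h over a 4 km interferometer arm
h = 1e-21                        # assumed typical strain of a strong source
L = 4_000.0                      # arm length, m
print(h * L)                     # 4e-18 m, a few thousandths of a proton diameter
```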
For example, the animations shown here oscillate roughly once every two seconds. This would correspond to a frequency of 0.5 Hz, and a wavelength of about 600,000 km, or 47 times the diameter of the Earth.
In the above example, it is assumed that the wave is linearly polarized with a "plus" polarization, written h+. Polarization of a gravitational wave is just like polarization of a light wave, except that the polarizations of a gravitational wave are 45 degrees apart, as opposed to 90 degrees. In particular, in a "cross"-polarized gravitational wave, h×, the effect on the test particles would be basically the same, but rotated by 45 degrees, as shown in the second animation. Just as with light polarization, the polarizations of gravitational waves may also be expressed in terms of circularly polarized waves. Gravitational waves are polarized because of the nature of their source.
Sources
In general terms, gravitational waves are radiated by large, coherent motions of immense mass, especially in regions where gravity is so strong that Newtonian gravity begins to fail. The effect does not occur in a purely spherically symmetric system. A simple example of this principle is a spinning dumbbell. If the dumbbell spins around its axis of symmetry, it will not radiate gravitational waves; if it tumbles end over end, as in the case of two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, and the faster it tumbles, the greater the gravitational radiation it will give off. In an extreme case, such as when the two weights of the dumbbell are massive stars like neutron stars or black holes orbiting each other quickly, significant amounts of gravitational radiation would be given off.
Some more detailed examples:
Two objects orbiting each other, as a planet would orbit the Sun, will radiate.
A spinning non-axisymmetric planetoid (say, with a large bump or dimple on the equator) will radiate.
A supernova will radiate, except in the unlikely event that the explosion is perfectly symmetric.
An isolated non-spinning solid object moving at a constant velocity will not radiate. This can be regarded as a consequence of the principle of conservation of linear momentum.
A spinning disk will not radiate. This can be regarded as a consequence of the principle of conservation of angular momentum. However, it will show gravitomagnetic effects.
A spherically pulsating spherical star (non-zero monopole moment or mass, but zero quadrupole moment) will not radiate, in agreement with Birkhoff's theorem.
More technically, the second time derivative of the quadrupole moment (or the l-th time derivative of the l-th multipole moment) of an isolated system's stress–energy tensor must be non-zero in order for it to emit gravitational radiation. This is analogous to the changing dipole moment of charge or current that is necessary for the emission of electromagnetic radiation.
Binaries
Gravitational waves carry energy away from their sources and, in the case of orbiting bodies, this is associated with an in-spiral or decrease in orbit. Imagine for example a simple system of two masses, such as the Earth–Sun system, moving slowly compared to the speed of light in circular orbits. Assume that these two masses orbit each other in a circular orbit in the x–y plane. To a good approximation, the masses follow simple Keplerian orbits. However, such an orbit represents a changing quadrupole moment. That is, the system will give off gravitational waves.
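How much power does such a system actually radiate? The standard quadrupole-formula result for two point masses in a circular orbit is P = (32/5)·(G⁴/c⁵)·(m1·m2)²·(m1 + m2)/r⁵. The following Python sketch (nominal constants; a circular-orbit approximation only, not a precision calculation) reproduces the roughly 200-watt figure for the Earth–Sun system quoted below.

```python
# Gravitational-wave luminosity of a circular two-body orbit via the
# quadrupole formula: P = (32/5) * G^4/c^5 * (m1*m2)^2 * (m1+m2) / r^5.
# Sketch with nominal constants.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg
AU = 1.496e11       # mean Earth-Sun distance, m

def gw_power(m1: float, m2: float, r: float) -> float:
    """Radiated gravitational-wave power in watts (circular orbit)."""
    return (32.0 / 5.0) * G**4 / C**5 * (m1 * m2) ** 2 * (m1 + m2) / r**5

print(f"Earth-Sun: {gw_power(M_SUN, M_EARTH, AU):.0f} W")  # ~200 W
```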
In theory, the loss of energy through gravitational radiation could eventually drop the Earth into the Sun. However, the total energy of the Earth orbiting the Sun (kinetic energy + gravitational potential energy) is about 1.14 × 10³⁶ joules, of which only 200 watts (joules per second) is lost through gravitational radiation, leading to a decay in the orbit by about 1 × 10⁻¹⁵ meters per day, or roughly the diameter of a proton. At this rate, it would take the Earth approximately 3 × 10¹³ times more than the current age of the universe to spiral onto the Sun. This estimate overlooks the decrease in r over time, but the radius varies only slowly for most of the time and plunges at later stages, as

r(t) = r0 (1 − t/t_coalesce)^(1/4),

with r0 the initial radius and t_coalesce the total time needed to fully coalesce.

More generally, the rate of orbital decay can be approximated by

dr/dt = −(64/5) (G³/c⁵) m1·m2·(m1 + m2)/r³,

where r is the separation between the bodies, t time, G the gravitational constant, c the speed of light, and m1 and m2 the masses of the bodies. This leads to an expected time to merger of

t = (5/256) (c⁵/G³) r⁴/(m1·m2·(m1 + m2)).

Compact binaries
Compact stars like white dwarfs and neutron stars can be constituents of binaries. For example, a pair of solar-mass neutron stars in a circular orbit at a separation of 1.89 × 10⁸ m (189,000 km) has an orbital period of 1,000 seconds and an expected lifetime of 1.30 × 10¹³ seconds, or about 414,000 years. Such a system could be observed by LISA if it were not too far away. A far greater number of white dwarf binaries exist with orbital periods in this range. White dwarf binaries have masses in the order of the Sun, and diameters in the order of the Earth. They cannot get much closer together than 10,000 km before they will merge and explode in a supernova, which would also end the emission of gravitational waves. Until then, their gravitational radiation would be comparable to that of a neutron star binary.
When the orbit of a neutron star binary has decayed to 1.89 × 10⁶ m (1,890 km), its remaining lifetime is about 130,000 seconds, or 36 hours. The orbital frequency will vary from 1 orbit per second at the start to 918 orbits per second when the orbit has shrunk to 20 km at merger. The majority of gravitational radiation emitted will be at twice the orbital frequency. Just before merger, the inspiral could be observed by LIGO if such a binary were close enough. LIGO has only a few minutes to observe this merger out of a total orbital lifetime that may have been billions of years. In August 2017, LIGO and Virgo observed the first binary neutron star inspiral in GW170817, and 70 observatories collaborated to detect the electromagnetic counterpart, a kilonova in the galaxy NGC 4993, 40 megaparsecs away, emitting a short gamma ray burst (GRB 170817A) seconds after the merger, followed by a longer optical transient (AT 2017gfo) powered by r-process nuclei. Advanced LIGO detectors should be able to detect such events up to 200 megaparsecs away; at this range, around 40 detections per year would be expected.
Black hole binaries
Black hole binaries emit gravitational waves during their in-spiral, merger, and ring-down phases. Hence, in the early 1990s the physics community rallied around a concerted effort to predict the waveforms of gravitational waves from these systems with the Binary Black Hole Grand Challenge Alliance. The largest amplitude of emission occurs during the merger phase, which can be modeled with the techniques of numerical relativity. The first direct detection of gravitational waves, GW150914, came from the merger of two black holes.
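The merger-time expression above is straightforward to evaluate. This Python sketch (again with nominal constants; illustrative only) reproduces the figures quoted for the neutron-star example.

```python
# Time to coalescence of a circular binary under gravitational-wave emission:
#   t = (5/256) * c^5/G^3 * r^4 / (m1 * m2 * (m1 + m2))
# Sketch with nominal constants.

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
YEAR = 3.156e7     # seconds per year

def time_to_merger(m1: float, m2: float, r: float) -> float:
    """Seconds until coalescence from an initial separation r."""
    return (5.0 / 256.0) * C**5 / G**3 * r**4 / (m1 * m2 * (m1 + m2))

# Two solar-mass neutron stars at 1.89e8 m (189,000 km):
t = time_to_merger(M_SUN, M_SUN, 1.89e8)
print(f"{t:.2e} s = {t / YEAR:,.0f} years")  # ~1.3e13 s, roughly 4e5 years

# The same pair at 1.89e6 m (1,890 km); the lifetime scales as r^4:
print(f"{time_to_merger(M_SUN, M_SUN, 1.89e6):,.0f} s")  # ~130,000 s (~36 h)
```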
Supernova
A supernova is a transient astronomical event that occurs during the last stellar evolutionary stages of a massive star's life, whose dramatic and catastrophic destruction is marked by one final titanic explosion. This explosion can happen in one of many ways, but in all of them a significant proportion of the matter in the star is blown away into the surrounding space at extremely high velocities (up to 10% of the speed of light). Unless there is perfect spherical symmetry in these explosions (i.e., unless matter is spewed out evenly in all directions), there will be gravitational radiation from the explosion. This is because gravitational waves are generated by a changing quadrupole moment, which can happen only when there is asymmetrical movement of masses. Since the exact mechanism by which supernovae take place is not fully understood, it is not easy to model the gravitational radiation emitted by them.
Spinning neutron stars
As noted above, a mass distribution will emit gravitational radiation only when there is spherically asymmetric motion among the masses. A spinning neutron star will generally emit no gravitational radiation because neutron stars are highly dense objects with a strong gravitational field that keeps them almost perfectly spherical. In some cases, however, there might be slight deformities on the surface called "mountains", which are bumps extending no more than 10 centimeters (4 inches) above the surface, that make the spinning spherically asymmetric. This gives the star a quadrupole moment that changes with time, and it will emit gravitational waves until the deformities are smoothed out.
Inflation
Many models of the Universe suggest that there was an inflationary epoch in the early history of the Universe when space expanded by a large factor in a very short amount of time. If this expansion was not symmetric in all directions, it may have emitted gravitational radiation detectable today as a gravitational wave background. This background signal is too weak for any currently operational gravitational wave detector to observe, and it is thought it may be decades before such an observation can be made.
Properties and behaviour
Energy, momentum, and angular momentum
Water waves, sound waves, and electromagnetic waves are able to carry energy, momentum, and angular momentum, and by doing so they carry those away from the source. Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other; the angular momentum is radiated away by gravitational waves.
The waves can also carry off linear momentum, a possibility that has some interesting implications for astrophysics. After two supermassive black holes coalesce, emission of linear momentum can produce a "kick" with amplitude as large as 4000 km/s. This is fast enough to eject the coalesced black hole completely from its host galaxy. Even if the kick is too small to eject the black hole completely, it can remove it temporarily from the nucleus of the galaxy, after which it will oscillate about the center, eventually coming to rest. A kicked black hole can also carry a star cluster with it, forming a hyper-compact stellar system. Or it may carry gas, allowing the recoiling black hole to appear temporarily as a "naked quasar". The quasar SDSS J092712.65+294344.0 is thought to contain a recoiling supermassive black hole.
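To see why a 4000 km/s kick suffices for ejection, compare it with the Newtonian escape velocity of a large galaxy. The sketch below uses rough, assumed values for the enclosed mass and radius (about 10¹² solar masses within 50 kpc); they are illustrative order-of-magnitude numbers, not measurements of any particular galaxy.

```python
# Compare a ~4000 km/s gravitational-wave recoil "kick" with the Newtonian
# escape velocity v_esc = sqrt(2*G*M/r) of a massive galaxy.
# Mass and radius below are assumed, order-of-magnitude values.

import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
KPC = 3.086e19    # metres per kiloparsec

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Newtonian escape velocity in m/s from radius r around mass M."""
    return math.sqrt(2.0 * G * mass_kg / radius_m)

v_esc = escape_velocity(1e12 * M_SUN, 50.0 * KPC)
print(f"v_esc ~ {v_esc / 1e3:.0f} km/s")  # a few hundred km/s << 4000 km/s
```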
Redshifting
Like electromagnetic waves, gravitational waves should exhibit shifting of wavelength and frequency due to the relative velocities of the source and observer (the Doppler effect), but also due to distortions of spacetime, such as cosmic expansion. Redshifting of gravitational waves is different from redshifting due to gravity (gravitational redshift).
Quantum gravity, wave-particle aspects, and graviton
In the framework of quantum field theory, the graviton is the name given to a hypothetical elementary particle speculated to be the force carrier that mediates gravity. However, the graviton has not yet been proven to exist, and no scientific model yet exists that successfully reconciles general relativity, which describes gravity, and the Standard Model, which describes all other fundamental forces. Attempts, such as quantum gravity, have been made, but are not yet accepted. If such a particle exists, it is expected to be massless (because the gravitational force appears to have unlimited range) and must be a spin-2 boson. It can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field must couple to (interact with) the stress–energy tensor in the same way that the gravitational field does; therefore, if a massless spin-2 particle were ever discovered, it would likely be the graviton, without further distinction from other massless spin-2 particles. Such a discovery would unite quantum theory with gravity.
Significance for study of the early universe
Due to the weakness of the coupling of gravity to matter, gravitational waves experience very little absorption or scattering, even as they travel over astronomical distances. In particular, gravitational waves are expected to be unaffected by the opacity of the very early universe. In these early phases, space had not yet become "transparent", so observations based upon light, radio waves, and other electromagnetic radiation that far back into time are limited or unavailable. Therefore, gravitational waves are expected in principle to have the potential to provide a wealth of observational data about the very early universe.
Determining direction of travel
The difficulty in directly detecting gravitational waves means it is also difficult for a single detector to identify by itself the direction of a source. Therefore, multiple detectors are used, both to distinguish signals from other "noise" by confirming the signal is not of earthly origin, and also to determine direction by means of triangulation. This technique uses the fact that the waves travel at the speed of light and will reach different detectors at different times depending on their source direction. Although the differences in arrival time may be just a few milliseconds, this is sufficient to identify the direction of the origin of the wave with considerable precision. Only in the case of GW170814 were three detectors operating at the time of the event; therefore, the direction is precisely defined. The detection by all three instruments led to a very accurate estimate of the position of the source, with a 90% credible region of just 60 deg², a factor of 20 more accurate than before.
Gravitational wave astronomy
During the past century, astronomy has been revolutionized by the use of new methods for observing the universe. Astronomical observations were initially made using visible light. Galileo Galilei pioneered the use of telescopes to enhance these observations.
However, visible light is only a small portion of the electromagnetic spectrum, and not all objects in the distant universe shine strongly in this particular band. More information may be found, for example, in radio wavelengths. Using radio telescopes, astronomers have discovered pulsars and quasars, for example. Observations in the microwave band led to the detection of faint imprints of the Big Bang, a discovery Stephen Hawking called the "greatest discovery of the century, if not all time". Similar advances in observations using gamma rays, x-rays, ultraviolet light, and infrared light have also brought new insights to astronomy. As each of these regions of the spectrum has opened, new discoveries have been made that could not have been made otherwise. The astronomy community hopes that the same holds true of gravitational waves.
Gravitational waves have two important and unique properties. First, there is no need for any type of matter to be present nearby in order for the waves to be generated by a binary system of uncharged black holes, which would emit no electromagnetic radiation. Second, gravitational waves can pass through any intervening matter without being scattered significantly. Whereas light from distant stars may be blocked out by interstellar dust, for example, gravitational waves will pass through essentially unimpeded. These two features allow gravitational waves to carry information about astronomical phenomena heretofore never observed by humans.
The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10⁻⁷ to 10⁵ Hz). An astrophysical source at the high-frequency end of the gravitational-wave spectrum (above 10⁵ Hz and probably up to 10¹⁰ Hz) generates relic gravitational waves that are theorized to be faint imprints of the Big Bang, like the cosmic microwave background. At these high frequencies it is potentially possible that the sources may be "man made", that is, gravitational waves generated and detected in the laboratory.
A supermassive black hole, created from the merger of the black holes at the center of two merging galaxies detected by the Hubble Space Telescope, is theorized to have been ejected from the merger center by gravitational waves.
Detection
Indirect detection
Although the waves from the Earth–Sun system are minuscule, astronomers can point to other sources for which the radiation should be substantial. One important example is the Hulse–Taylor binary: a pair of stars, one of which is a pulsar. The characteristics of their orbit can be deduced from the Doppler shifting of radio signals given off by the pulsar. Each of the stars is about 1.4 solar masses, and the size of their orbit is about 1/75 of the Earth–Sun orbit, just a few times larger than the diameter of our own Sun. The combination of greater masses and smaller separation means that the energy given off by the Hulse–Taylor binary will be far greater than the energy given off by the Earth–Sun system: roughly 10²² times as much. The information about the orbit can be used to predict how much energy (and angular momentum) would be radiated in the form of gravitational waves. As the binary system loses energy, the stars gradually draw closer to each other, and the orbital period decreases. The resulting trajectory of each star is an inspiral, a spiral with decreasing radius.
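The ~10²² ratio can be checked to order of magnitude with the same circular-orbit quadrupole formula used earlier. The sketch below neglects the Hulse–Taylor orbit's substantial eccentricity (which raises the true luminosity by roughly another order of magnitude) and uses the approximate masses and separation quoted above, so the result is indicative only.

```python
# Order-of-magnitude comparison of gravitational-wave luminosities:
# Hulse-Taylor binary vs. the Earth-Sun system, using the circular-orbit
# quadrupole formula (the real, eccentric orbit radiates considerably more).

G, C = 6.674e-11, 2.998e8            # SI units
M_SUN, M_EARTH = 1.989e30, 5.972e24  # kg
AU = 1.496e11                        # m

def gw_power(m1, m2, r):
    """Quadrupole-formula GW luminosity (W) for a circular orbit."""
    return (32 / 5) * G**4 / C**5 * (m1 * m2) ** 2 * (m1 + m2) / r**5

p_sun_earth = gw_power(M_SUN, M_EARTH, AU)               # ~200 W
p_hulse_taylor = gw_power(1.4 * M_SUN, 1.4 * M_SUN, AU / 75)

print(f"ratio ~ {p_hulse_taylor / p_sun_earth:.0e}")     # ~1e21-1e22
```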
General relativity precisely describes these trajectories; in particular, the energy radiated in gravitational waves determines the rate of decrease in the period, defined as the time interval between successive periastrons (points of closest approach of the two stars). For the Hulse–Taylor pulsar, the predicted current change in radius is about 3 mm per orbit, and the change in the 7.75 hr orbital period is about 76 microseconds per year. Following a preliminary observation showing an orbital energy loss consistent with gravitational waves, careful timing observations by Taylor and Joel Weisberg dramatically confirmed the predicted period decrease to within 10%. With the improved statistics of more than 30 years of timing data since the pulsar's discovery, the observed change in the orbital period currently matches the general-relativistic prediction for gravitational radiation to within 0.2 percent. In 1993, spurred in part by this indirect detection of gravitational waves, the Nobel Committee awarded the Nobel Prize in Physics to Hulse and Taylor for "the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation." The lifetime of this binary system, from the present to merger, is estimated to be a few hundred million years.
Inspirals are very important sources of gravitational waves. Any time two compact objects (white dwarfs, neutron stars, or black holes) are in close orbits, they send out intense gravitational waves. As they spiral closer to each other, these waves become more intense. At some point they should become so intense that direct detection by their effect on objects on Earth or in space is possible. This direct detection is the goal of several large-scale experiments. The only difficulty is that most systems like the Hulse–Taylor binary are so far away. The amplitude of waves given off by the Hulse–Taylor binary at Earth would be roughly h ≈ 10⁻²⁶. There are some sources, however, that astrophysicists expect to find that produce much greater amplitudes of h ≈ 10⁻²⁰. At least eight other binary pulsars have been discovered.
Difficulties
Gravitational waves are not easily detectable. When they reach the Earth, they have a small amplitude with strain approximately 10⁻²¹, meaning that an extremely sensitive detector is needed, and that other sources of noise can overwhelm the signal. Gravitational waves are expected to have frequencies 10⁻¹⁶ Hz < f < 10⁴ Hz.
Ground-based detectors
Though the Hulse–Taylor observations were very important, they give only indirect evidence for gravitational waves. A more conclusive observation would be a direct measurement of the effect of a passing gravitational wave, which could also provide more information about the system that generated it. Any such direct detection is complicated by the extraordinarily small effect the waves would produce on a detector. The amplitude of a spherical wave will fall off as the inverse of the distance from the source (the 1/R term in the formulas for h above). Thus, even waves from extreme systems like merging binary black holes die out to very small amplitudes by the time they reach the Earth. Astrophysicists expect that some gravitational waves passing the Earth may be as large as h ≈ 10⁻²⁰, but generally no bigger.
Resonant antennas
A simple device theorised to detect the expected wave motion is called a Weber bar: a large, solid bar of metal isolated from outside vibrations. This type of instrument was the first type of gravitational wave detector.
Strains in space due to an incident gravitational wave excite the bar's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. With this instrument, Joseph Weber claimed to have detected daily signals of gravitational waves. His results, however, were contested in 1974 by physicists Richard Garwin and David Douglass. Modern forms of the Weber bar are still operated, cryogenically cooled, with superconducting quantum interference devices to detect vibration. Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves.
MiniGRAIL is a spherical gravitational wave antenna using this principle. It is based at Leiden University, consisting of an exactingly machined 1,150 kg sphere cryogenically cooled to 20 millikelvins. The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers.
There are currently two detectors focused on the higher end of the gravitational wave spectrum: one at the University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Both detectors are expected to be sensitive to small periodic spacetime strains, quoted as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to be sensitive to small periodic spacetime strains, with further improvements in sensitivity anticipated. The Chongqing University detector is planned to detect relic high-frequency gravitational waves with predicted typical frequencies of ≈10¹¹ Hz (100 GHz) and amplitudes of h ≈ 10⁻³⁰ to 10⁻³².
Interferometers
A more sensitive class of detector uses a laser Michelson interferometer to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance, as is the case for Weber bars). After years of development, ground-based interferometers made the first detection of gravitational waves in 2015. Currently, the most sensitive is LIGO: the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana, one at the Hanford site in Richland, Washington, and a third (formerly installed as a second detector at Hanford) that is planned to be moved to India. Each observatory has two light storage arms that are 4 kilometers in length. These are at 90 degree angles to each other, with the light passing through 1 m diameter vacuum tubes running the entire 4 kilometers. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is the motion to which an interferometer is most sensitive.
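The size of the effect an interferometer must resolve follows directly from the definition of strain: a wave of amplitude h changes an arm of length L by roughly ΔL = h·L. A minimal sketch with illustrative numbers:

```python
# Arm-length change dL = h * L produced by a passing wave of strain h.
# Numbers are illustrative (4 km arms, strain 1e-21).

def arm_length_change(h: float, arm_length_m: float) -> float:
    """Change in arm length, in metres, for strain amplitude h."""
    return h * arm_length_m

dl = arm_length_change(1e-21, 4_000.0)
print(f"{dl:.1e} m")  # ~4e-18 m, a small fraction of a proton's width
```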
Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10⁻¹⁸ m. LIGO should be able to detect gravitational waves as small as h ≈ 5 × 10⁻²². Upgrades to LIGO and Virgo should increase the sensitivity still further. Another highly sensitive interferometer, KAGRA, located in the Kamioka Observatory in Japan, has been in operation since February 2020. A key point is that a tenfold increase in sensitivity (radius of 'reach') increases the volume of space accessible to the instrument by one thousand times. This increases the rate at which detectable signals might be seen from one per tens of years of observation, to tens per year.
Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly; one analogy is to rainfall: the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking low-frequency signals. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these 'stationary' (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other 'non-stationary' noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All of these must be taken into account and excluded by analysis before detection may be considered a true gravitational wave event.
Einstein@Home
The simplest gravitational waves are those with constant frequency. The waves given off by a spinning, non-axisymmetric neutron star would be approximately monochromatic: a pure tone in acoustics. Unlike signals from supernovae or binary black holes, these signals evolve little in amplitude or frequency over the period they would be observed by ground-based detectors. However, there would be some change in the measured signal, because of Doppler shifting caused by the motion of the Earth. Despite the signals being simple, detection is extremely computationally expensive, because of the long stretches of data that must be analysed. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
Space-based interferometers
Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being 2.5 million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to heat, shot noise, and artifacts caused by cosmic rays and solar wind.
Using pulsar timing arrays
Pulsars are rapidly rotating stars.
A pulsar emits beams of radio waves that, like lighthouse beams, sweep through the sky as the pulsar rotates. The signal from a pulsar can be detected by radio telescopes as a series of regularly spaced pulses, essentially like the ticks of a clock. GWs affect the time it takes the pulses to travel from the pulsar to a telescope on Earth. A pulsar timing array uses millisecond pulsars to seek out perturbations due to GWs in measurements of the time of arrival of pulses to a telescope, in other words, to look for deviations in the clock ticks. To detect GWs, pulsar timing arrays search for a distinct quadrupolar pattern of correlation and anti-correlation between the time of arrival of pulses from different pulsar pairs as a function of their angular separation in the sky. Although pulsar pulses travel through space for hundreds or thousands of years to reach us, pulsar timing arrays are sensitive to perturbations in their travel time of much less than a millionth of a second.
The most likely source of GWs to which pulsar timing arrays are sensitive are supermassive black hole binaries, which form from the collision of galaxies. In addition to individual binary systems, pulsar timing arrays are sensitive to a stochastic background of GWs made from the sum of GWs from many galaxy mergers. Other potential signal sources include cosmic strings and the primordial background of GWs from cosmic inflation.
Globally there are three active pulsar timing array projects. The North American Nanohertz Observatory for Gravitational Waves uses data collected by the Arecibo Radio Telescope and Green Bank Telescope. The Australian Parkes Pulsar Timing Array uses data from the Parkes radio telescope. The European Pulsar Timing Array uses data from the four largest telescopes in Europe: the Lovell Telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg Telescope and the Nançay Radio Telescope. These three groups also collaborate under the title of the International Pulsar Timing Array project. In June 2023, NANOGrav published its 15-year data release, which contained the first evidence for a stochastic gravitational wave background. In particular, it included the first measurement of the Hellings–Downs curve, the tell-tale sign of the gravitational wave origin of the observed background.
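The quadrupolar correlation pattern mentioned above is the Hellings–Downs curve. For two distinct pulsars separated by an angle θ on the sky, the expected correlation (normalized to 1/2 at small separations) is commonly written Γ(θ) = 1/2 + (3/2)·x·ln x − x/4 with x = (1 − cos θ)/2. A short sketch evaluating it:

```python
# The Hellings-Downs curve: expected correlation between timing residuals of
# a pulsar pair separated by angle theta, for an isotropic stochastic
# gravitational-wave background (distinct-pulsar case).

import math

def hellings_downs(theta_rad: float) -> float:
    """Expected cross-correlation: 1/2 + (3/2)x ln x - x/4, x=(1-cos t)/2."""
    x = (1.0 - math.cos(theta_rad)) / 2.0
    return 0.5 + 1.5 * x * math.log(x) - 0.25 * x

# Correlation starts near +0.5 at small separations, crosses zero near 49
# degrees, dips negative around 82 degrees, and returns to +0.25 at 180.
for deg in (10, 49, 82, 120, 180):
    print(f"{deg:3d} deg: {hellings_downs(math.radians(deg)):+.3f}")
```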
Primordial gravitational wave
Primordial gravitational waves are gravitational waves observed in the cosmic microwave background. They were allegedly detected by the BICEP2 instrument, an announcement made on 17 March 2014, which was withdrawn on 30 January 2015 ("the signal can be entirely attributed to dust in the Milky Way").
LIGO and Virgo observations
On 11 February 2016, the LIGO collaboration announced the first observation of gravitational waves, from a signal detected at 09:50:45 GMT on 14 September 2015, of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength over a period of 0.2 seconds. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The gravitational waves were observed with a statistical significance of more than 5 sigma (in other words, a 99.99997% probability that the signal is genuine rather than a chance fluctuation), the level conventionally regarded as sufficient evidence in experimental physics. Since then LIGO and Virgo have reported more gravitational wave observations from merging black hole binaries.
On 16 October 2017, the LIGO and Virgo collaborations announced the first-ever detection of gravitational waves originating from the coalescence of a binary neutron star system. The observation of the GW170817 transient, which occurred on 17 August 2017, allowed for constraining the masses of the neutron stars involved between 0.86 and 2.26 solar masses. Further analysis allowed a greater restriction of the mass values to the interval 1.17–1.60 solar masses, with the total system mass measured to be 2.73–2.78 solar masses. The inclusion of the Virgo detector in the observation effort allowed for an improvement of the localization of the source by a factor of 10. This in turn facilitated the electromagnetic follow-up of the event. In contrast to the case of binary black hole mergers, binary neutron star mergers were expected to yield an electromagnetic counterpart, that is, a light signal associated with the event. A gamma-ray burst (GRB 170817A) was detected by the Fermi Gamma-ray Space Telescope, occurring 1.7 seconds after the gravitational wave transient. The signal, originating near the galaxy NGC 4993, was associated with the neutron star merger. This was corroborated by the electromagnetic follow-up of the event (AT 2017gfo), involving 70 telescopes and observatories and yielding observations over a large region of the electromagnetic spectrum which further confirmed the neutron star nature of the merged objects and the associated kilonova.
In 2021, the detection of the first two neutron star–black hole binaries by the LIGO and Virgo detectors was published in the Astrophysical Journal Letters, allowing the first bounds to be set on the abundance of such systems. No neutron star–black hole binary had ever been observed using conventional means before the gravitational observation.
Microscopic sources
In 1964, L. Halpern and B. Laurent showed theoretically that gravitational spin-2 electron transitions are possible in atoms. Compared to electric and magnetic transitions, the emission probability is extremely low. Stimulated emission was discussed as a means of increasing the efficiency of the process. Due to the lack of mirrors or resonators for gravitational waves, they determined that a single-pass GASER (a kind of laser emitting gravitational waves) is practically unfeasible.
In 1998, the possibility of a different implementation of the above theoretical analysis was proposed by Giorgio Fontana. The required coherence for a practical GASER could be obtained from Cooper pairs in superconductors, which are characterized by a macroscopic collective wave-function. Cuprate high-temperature superconductors are characterized by the presence of s-wave and d-wave Cooper pairs. Transitions between s-wave and d-wave are gravitational spin-2. Out-of-equilibrium conditions can be induced by injecting s-wave Cooper pairs from a low-temperature superconductor, for instance lead or niobium, which is pure s-wave, by means of a Josephson junction with high critical current.
The amplification mechanism can be described as an effect of superradiance, and 10 cubic centimeters of cuprate high-temperature superconductor seem sufficient for the mechanism to work properly. A detailed description of the approach can be found in "High Temperature Superconductors as Quantum Sources of Gravitational Waves: The HTSC GASER", the third chapter of the book in which the proposal was published.
In fiction
An episode of the 1962 Soviet science-fiction novel Space Apprentice by Arkady and Boris Strugatsky shows an experiment monitoring the propagation of gravitational waves at the expense of annihilating a chunk of asteroid 15 Eunomia the size of Mount Everest.
In Stanislaw Lem's 1986 novel Fiasco, a "gravity gun" or "gracer" (gravity amplification by collimated emission of resonance) is used to reshape a collapsar, so that the protagonists can exploit the extreme relativistic effects and make an interstellar journey.
In Greg Egan's 1997 novel Diaspora, the analysis of a gravitational wave signal from the inspiral of a nearby binary neutron star reveals that its collision and merger is imminent, implying that a large gamma-ray burst is going to impact the Earth.
In Liu Cixin's 2006 Remembrance of Earth's Past series, gravitational waves are used as an interstellar broadcast signal, which serves as a central plot point in the conflict between civilizations within the galaxy.
Physical sciences
Theory of relativity
null
8113846
https://en.wikipedia.org/wiki/Abrasion%20%28geology%29
Abrasion (geology)
Abrasion is a process of weathering that occurs when material being transported wears away at a surface over time, commonly occurring with ice and glaciers. The primary process of abrasion is physical weathering. It is the process of friction caused by scuffing, scratching, wearing down, marring, and rubbing away of materials. The intensity of abrasion depends on the hardness, concentration, velocity and mass of the moving particles. Abrasion generally occurs in four ways: glaciation slowly grinds rocks picked up by ice against rock surfaces; solid objects transported in river channels make abrasive surface contact with the bed and walls; objects transported in breaking waves strike coastlines; and wind drives sand or small stones against surface rocks.
Abrasion is the natural scratching of bedrock by the continuous movement of snow or a glacier downhill. The glacier's movement, driven by gravity and accomplished through friction, vibration, internal deformation of the ice, and sliding over the rocks and sediments at the base, causes the scratching.
Abrasion, under its strictest definition, is commonly confused with attrition, and sometimes with hydraulic action, though the latter less commonly so. Both abrasion and attrition refer to the wearing down of an object. Abrasion occurs as a result of two surfaces rubbing against each other, resulting in the wearing down of one or both of the surfaces. Attrition, however, refers to the breaking off of particles (erosion) which occurs as a result of objects hitting against each other. Abrasion leads to surface-level destruction over a period of time, whereas attrition results in more change at a faster rate. Today, the geomorphology community uses the term "abrasion" in a looser way, often interchangeably with the term "wear".
In channel transport
Abrasion in a stream or river channel occurs when the sediment carried by a river scours the bed and banks, contributing significantly to erosion. In addition to chemical and physical weathering by hydraulic action, freeze-thaw cycles, and other processes, a suite of processes has long been considered to contribute significantly to bedrock channel erosion: plucking, abrasion (due to both bedload and suspended load), solution, and cavitation. In a glacier the principle is similar: the movement of rocks over a surface wears it away by friction, digging a channel that, when the glacier moves away, is called a U-shaped valley.
Bedload transport consists of mostly larger clasts, which cannot be picked up by the velocity of the streamflow, rolling, sliding, and/or saltating (bouncing) downstream along the bed. Suspended load typically refers to smaller particles, such as silt, clay, and finer-grained sands, uplifted by processes of sediment transport. Grains of various sizes and composition are transported differently in terms of the threshold flow velocities required to dislodge and deposit them, as is modeled in the Hjulström curve. These grains polish and scour the bedrock and banks when they make abrasive contact.
In coastal erosion
Coastal abrasion occurs as breaking ocean waves containing sand and larger fragments erode the shoreline or headland. The hydraulic action of waves contributes heavily. This removes material, resulting in undercutting and possible collapse of unsupported overhanging cliffs. This erosion can threaten structures or infrastructure on coastlines, and the impact will very likely increase as global warming drives sea level rise.
Seawalls are sometimes built in defense, but in many locations conventional coastal engineering solutions such as sea walls are increasingly challenged, and their maintenance may become unsustainable due to changes in climate conditions, sea-level rise, land subsidence, and sediment supply.
Abrasion platforms are shore platforms where wave-action abrasion is a prominent process. If such a platform is currently being fashioned, it will be exposed only at low tide, and it may sporadically be hidden by a mantle of beach shingle (the abrading agent). If the platform is permanently exposed above the high-water mark, it is probably a raised beach platform, which is not considered a product of abrasion but may be undercut by abrasion as sea level rises.
From glaciation
Glacial abrasion is the surface wear achieved by individual clasts, or rocks of various sizes, contained within ice or by subglacial sediment as the glacier slides over bedrock. Abrasion can crush smaller grains or particles and remove grains or multigrain fragments, but the removal of larger fragments is classified as plucking (or quarrying), the other major source of glacial erosion. Plucking creates the debris at the base or sides of the glacier that causes abrasion. While plucking has generally been thought of as a greater force of geomorphological change, there is evidence that in softer rocks with wide joint spacing abrasion can be just as efficient. A smooth, polished surface is left behind by glacial abrasion, sometimes with glacial striations, which provide information about the mechanics of abrasion under temperate glaciers.
From wind
Much consideration has been given to the role of wind as an agent of geomorphological change on Earth and other planets (Greeley & Iversen 1987). Aeolian processes involve wind eroding materials, such as exposed rock, and moving particles through the air to contact other materials and deposit them elsewhere. These forces are notably similar to models in fluvial environments. Aeolian processes demonstrate their most notable consequences in arid regions with sparse vegetation and abundant unconsolidated sediments, such as sand. There is now evidence that bedrock canyons, landforms traditionally thought to evolve only from the fluvial forces of flowing water, may indeed be extended by the aeolian forces of wind, perhaps even amplifying bedrock canyon incision rates by an order of magnitude above fluvial abrasion rates. Redistribution of materials by wind occurs at multiple geographic scales and can have important consequences for regional ecology and landscape evolution.
Physical sciences
Geomorphology: General
Earth science
17322701
https://en.wikipedia.org/wiki/Penis
Penis
A penis (plural: penises or penes) is a male sex organ that is used to inseminate female or hermaphrodite animals during copulation. Such organs occur in both vertebrates and invertebrates, including humans, but not in all male animals. The term penis applies to many intromittent organs, but not to all. As an example, the intromittent organ of most Cephalopoda is the hectocotylus, a specialized arm, and male spiders use their pedipalps. Even within the Vertebrata, there are morphological variants with specific terminology, such as hemipenes.
Etymology
The word "penis" is taken from the Latin word for "tail". Some derive that from Indo-European *pesnis, and the Greek word πέος = "penis" from Indo-European *pesos. Prior to the adoption of the Latin word in English, the penis was referred to as a "yard". The Oxford English Dictionary cites an example of the word yard used in this sense from 1379, and notes that in his Physical Dictionary of 1684, Steven Blankaart defined the word penis as "the Yard, made up of two nervous Bodies, the Channel, Nut, Skin, and Fore-skin, etc." According to Wiktionary, this term meant (among other senses) "rod" or "bar".
As with nearly any aspect of the body involved in sexual or excretory functions, the penis is the subject of many slang words and euphemisms, a particularly common and enduring one being "cock". The Latin word "phallus" (from Greek φαλλος) is sometimes used to describe the penis, although "phallus" originally was used to describe representations, pictorial or carved, of the penis.
Evolution and function
The external genital organs appeared in the Devonian, about 410 million years ago, when tetrapods began to abandon the aquatic environment. The absence of a liquid medium into which the gametes could be released was overcome through the transition to internal fertilization. Among amniotes, the development of an erectile penis occurred independently in mammals, squamates (lizards and snakes), testudines (turtles), and archosaurs (crocodiles and birds). Over time, birds have lost this organ, with the exception of Paleognathae and Anseriformes.
The penis is an intromittent organ used to transfer sperm into the female genital tract (i.e., vagina or cloaca) for potential fertilization. The penises of different animal groups are not homologous with each other, but arose several times independently in the course of evolution. An erection is the stiffening and rising of the penis, which occurs during sexual arousal, though it can also happen in non-sexual situations. During ejaculation, a series of muscular contractions delivers semen, containing male gametes known as sperm cells or spermatozoa, from the penis. Ejaculation is usually accompanied by orgasm. The last common ancestor of all living amniotes (mammals, birds and reptiles) likely possessed a penis.
Vertebrates
Birds
Most male birds (e.g., roosters and turkeys) have a cloaca (also present on the female), but not a penis. Among bird species with a penis are paleognaths (tinamous and ratites) and Anatidae (ducks, geese and swans). The magpie goose in the family Anseranatidae also has a penis. A bird penis is different in structure from mammal penises, being an erectile expansion of the cloacal wall (in ducks) and being erected by lymph, not blood.
It is usually partially feathered and in some species features spines and brush-like filaments, and in a flaccid state, curls up inside the cloaca.
Mammals
As with any other bodily attribute, the length and girth of the penis can be highly variable between mammals of different species. In many mammals, the penis is retracted into a prepuce when not erect. Mammals have either musculocavernous penises, which expand while erect, or fibroelastic penises, which become erect by straightening without expanding. Preputial glands are present in some prepuces. In placentals, the urethra, which is connected to the vasa deferentia, travels through and exits the penis; thus both urine and semen are expelled from this organ. The perineum of testicond mammals (mammals without a scrotum) separates the anus and the penis. A bone called the baculum is present in most placentals but absent in humans, cattle and horses.
In mammals, the penis is divided into three parts:
Roots (crura): these begin at the caudal border of the pelvic ischial arch.
Body: the part of the penis extending from the roots.
Glans: the free end of the penis, where the urethra opens in placentals. The penile glans is absent in chimpanzees and bonobos.
The internal structures of the penis consist mainly of cavernous, erectile tissue, which is a collection of blood sinusoids separated by sheets of connective tissue (trabeculae). Canine penises have a structure at the base called the bulbus glandis. During copulation, the male spotted hyena inserts his penis through the female's pseudo-penis instead of directly through the vagina, which is blocked by the false scrotum. The pseudo-penis and pseudo-scrotum, which are actually a masculinized vulva, closely resemble the male hyena's genitalia, but can be distinguished by the female's greater thickness and more rounded glans. Domestic cats have barbed penises, with about 120–150 one-millimetre-long backwards-pointing spines. Marsupials usually have bifurcated penises that are retracted into a preputial sheath in the male's urogenital sinus when not erect. Monotremes and marsupial moles are the only mammals in which the penis is located inside the cloaca.
Reptiles
Male turtles and crocodilians have a penis, while male specimens of the reptile order Squamata, which comprises snakes and lizards, have two paired organs called hemipenes. Tuataras must use their cloacae for reproduction. Due to evolutionary convergence, turtle and mammal penises have a similar structure.
Fish
In some fish, the gonopodium, andropodium, and claspers are intromittent organs (used to introduce sperm into the female) developed from modified fins.
Invertebrates
Harvestmen are the only arachnids in which the males have a penis. In male insects, the structure analogous to a penis is known as an aedeagus. The male copulatory organ of various lower invertebrate animals is often called the cirrus.
In 2010, entomologist Charles Lienhard described a new genus of barkflies called Neotrogla. Species of this genus have sex-reversed genitalia: females have penis-like organs called gynosomes that are inserted into vagina-like openings of males during mating. A similar female structure has also been described in the closely related Afrotrogla. Scientists who study these insects have occasionally called the gynosome a "female penis" and have argued that the definition of the penis as "the male copulatory organ" should be dropped.
Motivations for using the term "female penis" include that such a term "is easier to understand and much more eye-catching" and that the gynosome has "analogous features" with male penises. Meanwhile, critics have argued that it does not fit the intromittent-organ definition of "a structure that enters the female genital tract and deposits sperm".
Heraldry
Pizzles are represented in heraldry, where the adjective pizzled (or vilené) indicates that this part of an animate charge's anatomy is depicted, especially if coloured differently.
Biology and health sciences
Reproductive system
null
17322938
https://en.wikipedia.org/wiki/Eurytherm
Eurytherm
A eurytherm is an organism, often an endotherm, that can function at a wide range of ambient temperatures. To be considered a eurytherm, all stages of an organism's life cycle must be considered, including juvenile and larval stages. These wide ranges of tolerable temperatures are directly derived from the tolerance of a given eurythermal organism's proteins. Extreme examples of eurytherms include tardigrades (Tardigrada), the desert pupfish (Cyprinodon macularius), and green crabs (Carcinus maenas); however, nearly all mammals, including humans, are considered eurytherms. Eurythermy can be an evolutionary advantage: adaptations to cold temperatures, called cold-eurythermy, are seen as essential for the survival of species during ice ages. In addition, the ability to survive in a wide range of temperatures increases a species' ability to inhabit other areas, an advantage for natural selection.
Eurythermy is an aspect of thermoregulation in organisms. It is in contrast with the idea of stenothermic organisms, which can only operate within a relatively narrow range of ambient temperatures. Through a wide variety of thermal coping mechanisms, eurythermic organisms can either provide or expel heat for themselves in order to survive in cold or hot conditions, respectively, or otherwise prepare themselves for extreme temperatures. Certain species of eurytherm have been shown to have unique protein synthesis processes that differentiate them from relatively stenothermic, but otherwise similar, species.
Examples
Tardigrades, known for their ability to survive in nearly any environment, are extreme examples of eurytherms. Certain species of tardigrade, including Mi. tardigradum, are able to withstand and survive temperatures ranging from –273 °C (near absolute zero) to 150 °C in their anhydrobiotic state.
The desert pupfish, a rare bony fish that occupies places like the Colorado River Delta in Baja California, small ponds in Sonora, Mexico, and drainage sites near the Salton Sea in California, can function in waters ranging from 8 °C to 42 °C.
The green crab is a common species of littoral crab with a range that extends from Iceland and central Norway in the north to South Africa and Victoria, Australia in the south, including more temperate regions like Northwest Africa in between. The green crab has been shown to survive in waters at least as cold as 8 °C, and at least as warm as 35 °C.
Boreal deciduous conifers (genus Larix) are the primary plants occupying the boreal forests of Siberia and North America. Although they are conifers, they are deciduous, and therefore lose their needles in autumn. Species like the tamarack (Larix laricina) occupy wide swaths of land ranging from Indiana in the south to well into the Arctic Circle in northern Alaska, Canada, and Siberia. It has been shown that the tamarack can endure temperatures as cold as –85 °C, and at least as warm as 20 °C.
Killer whales (Orcinus orca) are found at nearly every latitude on Earth. They are able to withstand water temperatures ranging from 0 °C to 30–35 °C. Killer whales are deemed a cosmopolitan species, along with the osprey (Pandion haliaetus) and the house sparrow (Passer domesticus).
Advantages over stenotherms
It is thought that adaptation to cold temperatures (cold-eurythermy) in animals, despite the high cost of functional adaptation, has allowed for mobility and agility.
This cold eurythermy is also viewed as a near necessity for surviving the evolutionary crises, including ice ages, that occur with relative frequency over the evolutionary timescale. Due to its ability to provide the excess energy and aerobic scope required for endothermy, eurythermy is considered to be the "missing link" between ectothermy and endothermy.
The green crab's success demonstrates one example of eurythermic advantage. Although invasive species are typically considered to be detrimental to the environments into which they are introduced, and are even considered to be a leading cause of animal extinctions, the ability of an animal to thrive in various environmental conditions is a form of evolutionary fitness, and therefore is typically a characteristic of successful species. A species' relative eurythermality is one of the main factors in its ability to survive in different conditions.
Conversely, the cost of stenothermy can be seen in the failure of many of the world's coral reefs. Most species of coral are considered to be stenothermic. The worldwide increase in oceanic temperatures has caused many coral reefs to begin bleaching and dying because the corals have begun to expel the zooxanthellae algae that live in their tissues and provide them with their food and color. This bleaching has resulted in a 50% mortality rate in observed corals in the waters off Cape York in northeastern Australia, and a 12% bleaching rate in observed reefs throughout the world.
Although regulators, especially endotherms, expend a significantly higher proportion of energy per unit of mass, the advantages of endothermy, particularly endogenous thermogenesis, have proven significant enough for selection.
Thermal coping mechanisms
The ability to maintain homeostasis at varying temperatures is the most important characteristic in defining an endothermic eurytherm, whereas other, thermoconforming eurytherms like tardigrades are simply able to endure significant shifts in their internal body temperature that occur with ambient temperature changes. Eurythermic animals can be either conformers or regulators, meaning that their internal physiology can either vary with the external environment or maintain consistency regardless of the external environment, respectively. It is important to note that endotherms do not rely solely on internal thermogenesis for all parts of homeostasis or comfort; in fact, in many ways they are equally as reliant upon behavior to regulate body temperature as ectotherms are. Reptiles are ectotherms, and therefore rely upon positive thermotaxis, basking (heliothermy), burrowing, and crowding with members of their species in order to regulate their body temperature within a narrow range and even to produce fevers to fight infection. Similarly, humans rely upon clothing, housing, air conditioning, and drinking to achieve the same goals, although human behavior is not indicative of endotherms on the whole.
The sustained supply of oxygen to body tissues determines the body temperature range of an organism. Eurytherms that live in environments with large temperature changes adapt to higher temperatures through a variety of methods. In green crabs, initial warming results in an increase of oxygen consumption and heart rate, accompanied by a decrease in stroke volume and haemolymph oxygen partial pressure. As this warming continues, dissolved oxygen levels decrease below the threshold for full haemocyanin oxygen saturation.
This heating then progressively releases haemocyanin-bound oxygen, saving energy in oxygen transport and resulting in an associated leveling off of metabolic rate. Thermoregulation, the ability to maintain a stable internal body temperature, is key to maintaining homeostasis in humans, perhaps the most recognizable eurytherm. In humans, deep-body temperature is regulated by cutaneous blood flow, which maintains this temperature despite changes in the external environment. Homo sapiens' ability to survive in different ambient temperatures is a key factor in the species' success, and one cited reason why Homo sapiens eventually outcompeted Neanderthals (Homo neanderthalensis). Humans have two major forms of thermogenesis. The first is shivering, in which a warm-blooded creature produces involuntary contractions of skeletal muscle in order to produce heat. In addition, shivering also signals the body to produce irisin, a hormone that has been shown to convert white fat to brown fat, which is used in non-shivering thermogenesis, the second type of human thermogenesis. Non-shivering thermogenesis occurs in the brown fat, which contains the uncoupling protein thermogenin. This protein dissipates the proton gradient generated in oxidative phosphorylation during the synthesis of ATP, uncoupling electron transport in the mitochondrion from the production of chemical energy (ATP). This dissipation of the gradient across the mitochondrial membrane causes energy to be lost as heat. On the other hand, humans have only one method of cooling themselves, biologically speaking: sweat evaporation. Cutaneous eccrine sweat glands produce sweat, which is made up of mostly water with a small amount of ions. Evaporation of this sweat helps to cool the blood beneath the skin, resulting in a cooling of deep-body temperature. While some organisms are eurythermic due to their ability to regulate internal body temperature, like humans, others have wildly different methods of extreme temperature tolerance. Tardigrades are able to enter an anhydrobiotic state, often called a tun, in order to both prevent desiccation and endure extreme temperatures. In this state, tardigrades decrease their bodily water to about 1–3% wt./wt. Although this state allows certain tardigrades to endure temperatures at the extremes of –273 °C and 150 °C, tardigrades in their hydrated state are able to withstand temperatures as low as –196 °C. This displayed extremotolerance has led scientists to speculate that tardigrades could theoretically survive on Mars, where temperatures regularly fluctuate between –123 °C and 25 °C, and possibly even the near absolute zero of interplanetary space. The tardigrade's ability to withstand extremely cold temperatures as a tun is a form of cryptobiosis called cryobiosis. Although the high temperature endurance of tardigrades has been significantly less studied, their cryobiotic response to low temperatures has been well documented. Tardigrades are able to withstand such cold temperatures not by avoiding freezing with antifreeze proteins, as a freeze-avoidance organism would, but rather by tolerating ice formation in the extracellular body water, activated by ice-nucleating proteins. Like other organisms, plants (Plantae) can be either stenothermic or eurythermic. Plants inhabiting boreal and polar climates generally tend to be cold-eurythermic, enduring temperatures as cold as –85 °C and as warm as at least 20 °C, such as the boreal deciduous conifers.
This is in direct contrast to plants that typically inhabit tropical or montane regions, which may tolerate a range of only about 10 °C to 25 °C, such as the banyan tree. Eurythermal protein adaptation The tolerance for extreme body temperatures in a given eurythermic organism is largely due to an increased temperature tolerance of the organism's homologous proteins. In particular, the proteins of a warm-adapted species may be inherently more eurythermal than those of a cold-adapted species, with warm-adapted species' proteins withstanding higher temperatures before beginning to denature, therefore avoiding possible cell death. Eurythermal species have also shown adaptations in protein synthesis rates compared to similar, non-eurythermal species. Rainbow trout (Salmo gairdneri) have shown constant protein synthesis rates over temperatures ranging from 5 °C to 20 °C, after acclimating to any temperature in this range for one month. In contrast, carp (Cyprinus carpio) have shown significantly higher protein synthesis rates after acclimating to higher water temperatures (25 °C) than after acclimating to lower water temperatures (10 °C). Acclimation experiments of this type are common in fish research. A similar example is given by the Senegalese sole (Solea senegalensis), which, when acclimated to temperatures of 26 °C, produced significantly higher amounts of taurine, glutamate, GABA and glycine compared to acclimation to 12 °C. This may mean that these compounds aid in antioxidant defense, osmoregulation, or energy metabolism at those temperatures.
Biology and health sciences
Basics
Biology
10474719
https://en.wikipedia.org/wiki/Robotic%20arm
Robotic arm
A robotic arm is a type of mechanical arm, usually programmable, with similar functions to a human arm; the arm may be the sum total of the mechanism or may be part of a more complex robot. The links of such a manipulator are connected by joints allowing either rotational motion (such as in an articulated robot) or translational (linear) displacement. The links of the manipulator can be considered to form a kinematic chain. The terminus of the kinematic chain of the manipulator is called the end effector and it is analogous to the human hand. However, the term "robotic hand" as a synonym of the robotic arm is often proscribed. Types Cartesian robot / Gantry robot: Used for pick and place work, application of sealant, assembly operations, handling machine tools and arc welding. It is a robot whose arm has three prismatic joints, whose axes are coincident with a Cartesian coordinate system. Collaborative robot / Cobot: Cobot applications contrast with traditional industrial robot applications, in which robots are isolated from human contact. Cobots have a large variety of applications, such as commercial applications, robotic research, dispensing, material handling, assembly, finishing, and quality inspection. Cobot safety may rely on lightweight construction materials, rounded edges, and the inherent limitation of speed and force, or on sensors and software that ensure safe behavior. Cylindrical robot: Used for assembly operations, handling at machine tools, spot welding, and handling at die casting machines. It is a robot whose axes form a cylindrical coordinate system. Spherical robot / Polar robot: Used for handling machine tools, spot welding, die casting, fettling machines, gas welding and arc welding. It is a robot whose axes form a polar coordinate system. SCARA robot: Used for pick and place work, application of sealant, assembly operations and handling machine tools. This robot features two parallel rotary joints to provide compliance in a plane. Articulated robot: Used for assembly operations, die casting, fettling machines, gas welding, arc welding and spray painting. It is a robot whose arm has at least three rotary joints. Parallel robot: One use is a mobile platform handling cockpit flight simulators. It is a robot whose arms have concurrent prismatic or rotary joints. Anthropomorphic robot: It is shaped in a way that resembles a human hand, i.e. with independent fingers and thumbs. Notable robotic arms In space, the Canadarm and its successor Canadarm2 are examples of multi-degree-of-freedom robotic arms. These robotic arms have been used to perform a variety of tasks, such as inspection of the Space Shuttle using a specially deployed boom with cameras and sensors attached at the end effector, as well as satellite deployment and retrieval manoeuvres from the cargo bay of the Space Shuttle. The Curiosity and Perseverance rovers on the planet Mars also use robotic arms. Additionally, Perseverance has a smaller sample caching arm hidden inside its body below the rover in its caching assembly. TAGSAM is a robotic arm for collecting a sample from a small asteroid in space on the spacecraft OSIRIS-REx. The 2018 Mars lander InSight has a robotic arm called the IDA; it has a camera and grappler and is used to move special instruments. Low-cost robotic arms During the 2010s, the availability of low-cost robotic arms increased substantially.
Although such robotic arms are mostly marketed as hobby or educational devices, applications in laboratory automation have been proposed, such as their use as autosamplers. Classification A serial robot arm can be described as a chain of links that are moved by joints, which are actuated by motors. An end effector, also called a robot hand, can be attached to the end of the chain. Like other robotic mechanisms, robot arms are typically classified in terms of the number of degrees of freedom. Usually, the number of degrees of freedom is equal to the number of joints that move the links of the robot arm. At least six degrees of freedom are required to enable the robot hand to reach an arbitrary pose (position and orientation) in three-dimensional space. Additional degrees of freedom allow the configuration of some link on the arm to be changed (e.g., elbow up/down) while keeping the robot hand in the same pose. Inverse kinematics is the mathematical process of calculating the configuration of an arm, typically in terms of joint angles, given a desired pose of the robot hand in three-dimensional space (a minimal worked example appears below). Robotic hands The end effector, or robotic hand, can be designed to perform any desired task, such as welding, gripping, or spinning, depending on the application. For example, robot arms in automotive assembly lines perform a variety of tasks such as welding and parts rotation and placement during assembly. In some circumstances, close emulation of the human hand is desired, as in robots designed to conduct bomb disarmament and disposal.
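To make the inverse-kinematics idea concrete, here is a minimal sketch for the simplest interesting case, a two-link planar arm. This is the standard textbook derivation via the law of cosines, not the control code of any particular robot; the function name and test numbers are illustrative assumptions.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (radians) placing a planar 2-link arm's tip at (x, y).

    Uses the law of cosines; returns the elbow-down solution, or None
    if the target lies outside the reachable annulus.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None                      # target out of reach
    theta2 = math.acos(c2)               # use -acos(c2) for the elbow-up pose
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Forward-kinematics check: the angles should reproduce the target point.
# The target (1.2, 0.5) is reachable with unit links, so the result is not None.
t1, t2 = two_link_ik(1.2, 0.5, 1.0, 1.0)
fx = math.cos(t1) + math.cos(t1 + t2)    # l1 = l2 = 1
fy = math.sin(t1) + math.sin(t1 + t2)
print(f"theta1={t1:.3f}, theta2={t2:.3f} -> tip at ({fx:.3f}, {fy:.3f})")
```

The two solution branches (elbow-up and elbow-down) illustrate the point made above about extra degrees of freedom: even this minimal arm can reach most targets in more than one configuration.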
Technology
Machinery and tools: General
null
12763479
https://en.wikipedia.org/wiki/Scolecophidia
Scolecophidia
The Scolecophidia, commonly known as blind snakes or thread snakes, are an infraorder of snakes. They range in length from . All are fossorial (adapted for burrowing). Five families and 39 genera are recognized. The Scolecophidia infraorder is most likely paraphyletic (with the family Anomalepididae recovered with strong support as the sister clade to the 'typical snakes'). Taxonomy The infraorder name Scolecophidia derives from the two Ancient Greek words σκώληξ (skṓlēx, genitive σκώληκος skṓlēkos), meaning "earthworm", and ὄφις (óphis), meaning "snake". It refers to their shape and fossorial lifestyle. Families Evolution Despite only having fossils as early as the Cretaceous, Scolecophidia itself likely originated in the Middle Jurassic, with Anomalepididae, Leptotyphlopidae, and Typhlopoidea diverging from one another during the Late Jurassic. Within Typhlopoidea, Gerrhopilidae likely diverged from the Xenotyphlopidae-Typhlopidae clade during the Early Cretaceous, and Xenotyphlopidae and Typhlopidae likely diverged from one another during the Late Cretaceous. Scolecophidians are believed to have originated on Gondwana, with anomalepidids and leptotyphlopids evolving in west Gondwana (South America and Africa) and the Typhlopoidea (typhlopids, gerrhopilids, and xenotyphlopids) in east Gondwana, initially on the combined India/Madagascar land mass, during the Mesozoic. Typhlopids, initially isolated on Madagascar, then dispersed to Africa and Eurasia. South American typhlopids appear to have evolved from African typhlopids that rafted across the Atlantic about 60 million years ago; they, in turn, dispersed to the Caribbean about 33 million years ago. Similarly, typhlopids appear to have reached Australia from Southeast Asia or Indonesia about 28 million years ago. Meanwhile, the gerrhopilids, isolated on Insular India, underwent a radiation throughout tropical Asia following the collision of India with Asia, while the xenotyphlopids remained isolated on Madagascar. The Malagasy typhlopoids (Madatyphlops in Typhlopidae and Xenotyphlops in Xenotyphlopidae) are among the only extant terrestrial vertebrates on Madagascar whose isolation occurred due to vicariance from the Cretaceous breakup of Gondwana. The only other terrestrial vertebrate on Madagascar that shares this evolutionary history is the Madagascan big-headed turtle (Erymnochelys madagascariensis); all other Malagasy land vertebrates dispersed from the mainland to an already-isolated Madagascar from the latest Cretaceous to the present. Fossil record The extinct fossil species Boipeba tayasuensis from the Late Cretaceous of Brazil was described in 2020, marking the earliest fossil record of Scolecophidia. It was a sister group to Typhlopoidea and was over 1 meter in length, making it much larger than most modern blind snakes, with only Afrotyphlops schlegelii and Afrotyphlops mucruso rivaling it in size. Prior to this, the earliest scolecophidian fossils were known only from the Paleocene of Morocco and the Eocene of Europe. Possible typhlopid skin has been identified in Dominican amber. Phylogeny This phylogeny combines the ones recovered by Vidal et al. in 2010 and Fachini et al. in 2020. Description The common name of Scolecophidia, blind snakes, is based on their shared characteristic of reduced eyes that are located under their head scales. These head scales are found in all snakes and are referred to as spectacles, but within this infraorder they are opaque, resulting in decreased visual capabilities.
The reduced eyes of the Scolecophidia have been attributed to the evolutionary origins of snakes, which are hypothesized to have arisen from fossorial ancestors that lost genes related to eyesight; vision similar to that of other vertebrates later re-evolved in higher snakes through convergent evolution. Newer research shows that seven of the 12 genes associated with bright-light vision in most snakes and lizards are not present in this infraorder, and that the common ancestor of all snakes had better eyesight. Other shared characteristics include an absent left oviduct in four of the five families; the exception is the Anomalepididae, which have a well-developed yet reduced left oviduct. These snakes range in length from . Their typical body shapes include slender, cylindrical bodies and small, narrow heads. All these families either lack a left lung or have a vestigial one, and all lack cranial infrared receptors. Behavior The main shared characteristic found across all Scolecophidia is a fossorial nature, either living underground or within logs and leaf litter. Reproduction remains understudied; all Scolecophidia examined thus far have been noted to be oviparous, with elongate eggs noted in both leptotyphlopids and typhlopids. Foraging behaviors vary across families, but all feed on invertebrates. Some of their main food sources include ant or termite eggs, which are tracked down by following chemical cues left by these invertebrates to create trails. Tricheilostomata macrolepis has been seen climbing up trees and waving its head from side to side to detect chemical cues in the air and so locate insect nests. In a study on the Leptotyphlopidae, some species were found to specialize in eating only termites or ants; some rely on binge feeding patterns, while others do not. While these snakes are often difficult to locate due to their burrowing habits, they are more often seen above ground after rain, due to the flooding that occurs in their burrows. The ancestral nature of the Scolecophidia has resulted in the use of these organisms as models for evolutionary studies in Serpentes, to better understand the evolution of reproduction, morphology, and feeding habits.
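The divergence order described in the Evolution section can be summarized compactly in Newick notation. The snippet below is our own informal encoding of the topology given in the text (three deep lineages splitting in the Late Jurassic; within Typhlopoidea, Gerrhopilidae diverging in the Early Cretaceous and Xenotyphlopidae/Typhlopidae in the Late Cretaceous); it is an illustration, not a published tree file.

```python
# Informal Newick-style encoding of the branching order described above:
# a Middle Jurassic origin, a Late Jurassic split among the three deep
# lineages, and Cretaceous splits within Typhlopoidea.
newick = ("(Anomalepididae,Leptotyphlopidae,"
          "(Gerrhopilidae,(Xenotyphlopidae,Typhlopidae)));")

def indent_tree(s: str) -> str:
    """Pretty-print a Newick string, one taxon per line, indented by depth."""
    lines, depth, token = [], 0, ""
    for ch in s:
        if ch in "(),;":
            if token:
                lines.append("  " * depth + token)
                token = ""
            depth += 1 if ch == "(" else (-1 if ch == ")" else 0)
        else:
            token += ch
    return "\n".join(lines)

print(indent_tree(newick))
```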
Biology and health sciences
Snakes
Animals
12763945
https://en.wikipedia.org/wiki/Land%20snail
Land snail
A land snail is any of the numerous species of snail that live on land, as opposed to sea snails and freshwater snails. Land snail is the common name for terrestrial gastropod mollusks that have shells (those without shells are known as slugs). However, it is not always easy to say which species are terrestrial, because some are more or less amphibious between land and fresh water, and others are relatively amphibious between land and salt water. Land snails are a polyphyletic group comprising at least ten independent evolutionary transitions to terrestrial life (the last common ancestor of all gastropods was marine). The majority of land snails are pulmonates that have a lung and breathe air. Most of the non-pulmonate land snails belong to lineages in the Caenogastropoda, and tend to have a gill and an operculum. The largest clade of land snails is the Cyclophoroidea, with more than 7,000 species. Many of these operculate land snails live in habitats or microhabitats that are sometimes (or often) damp or wet, such as in moss. Land snails have a strong muscular foot; they use mucus to enable them to crawl over rough surfaces and to keep their soft bodies from drying out. Like other mollusks, land snails have a mantle, and they have one or two pairs of tentacles on their head. Their internal anatomy includes a radula and a primitive brain. In terms of reproduction, many caenogastropod land snails (e.g., diplommatinids) are dioecious, but pulmonate land snails are hermaphrodites (they have a full set of organs of both sexes) and most lay clutches of eggs in the soil. Tiny snails hatch out of the egg with a small shell in place, and the shell grows spirally as the soft parts gradually increase in size. Most land snails have shells that are right-handed in their coiling. A wide range of different vertebrate and invertebrate animals prey on land snails. They are used as food by humans in various cultures worldwide, and are raised on farms in some areas for use as food. Biology Physical characteristics Land snails move by gliding along on their muscular foot, which is lubricated with mucus and covered with epithelial cilia. This motion is powered by succeeding waves of muscular contractions that move down the ventral surface of the foot. This muscular action is clearly visible when a snail is crawling on the glass of a window or aquarium. Snails move at a proverbially low speed (1 mm/s is a typical speed for adult Helix lucorum). Snails secrete mucus externally to keep their soft bodies from drying out. They also secrete mucus from the foot to aid in locomotion by reducing friction and to help reduce the risk of mechanical injury from sharp objects, meaning they can crawl over a sharp edge like a straight razor and not be injured. The mucus that land snails secrete with the foot leaves a slime trail behind them, which is often visible for some hours afterwards as a shiny "path" on the surface over which they have crawled. Snails (like all molluscs) also have a mantle, a specialized layer of tissue which covers all of the internal organs as they are grouped together in the visceral mass. The mantle also extends outward in flaps which reach to the edge of the shell and in some cases can cover the shell, and which are partially retractable. The mantle is attached to the shell; it creates the shell and makes shell growth possible by secretion. Most molluscs, including land snails, have a shell which has been part of their anatomy since the larval stage.
When they are active, organs such as the lung, heart, kidney, and intestines remain inside the shell; only the head and foot emerge. The shell grows with them in size by the process of secreting calcium carbonate along the open edge and on the inner side for extra strength. Although some land snails create shells that are almost entirely formed from the protein conchiolin, most land snails need a good supply of calcium in their diet and environment to produce a strong shell. A lack of calcium, or low pH in their surroundings, can result in thin, cracked, or perforated shells. Usually, a snail can repair damage to its shell over time if its living conditions improve, but severe damage can be fatal. When retracted into their shells, many snails with gills (including some terrestrial species) are able to protect themselves with a door-like anatomical structure called an operculum. Land snails range greatly in size. The largest living species is the giant African snail or Ghana tiger snail (Achatina achatina; family Achatinidae), which can measure up to 30 cm. The largest land snails of non-tropical Eurasia are the endemic Caucasian snails Helix buchi and Helix goderdziana from the south-eastern Black Sea area in Georgia and Turkey; the shell diameter of the latter may exceed 6 cm. At the other end of the size spectrum is Angustopila psammion, a species with a shell diameter of 0.60–0.68 mm. Most land snails bear one or two pairs of tentacles on their heads. In most land snails the eyes are carried on the first (upper) set of tentacles (called ommatophores or, more informally, 'eye stalks'), which are usually roughly 75% of the width of the eyes. The second (lower) set of tentacles act as olfactory organs. Both sets of tentacles are retractable in land snails. Digestion and nervous system A snail breaks up its food using the radula inside its mouth. The radula is a chitinous ribbon-like structure containing rows of microscopic teeth. With this the snail scrapes at food, which is then transferred to the digestive tract. In a very quiet setting, a large land snail can be heard 'crunching' its food: the radula is tearing away at the surface of the food that the snail is eating. The cerebral ganglia of the snail form a primitive brain which is divided into four sections. This structure is very much simpler than the brains of mammals, reptiles and birds, but nonetheless, snails are capable of associative learning. Respiration Since snails in the genus Helix are terrestrial rather than freshwater or marine, they have developed a simple lung for respiration. (Most other snails and gastropods have gills instead.) Oxygen is carried by the blood pigment hemocyanin. Both oxygen and carbon dioxide diffuse in and out of the blood through the capillaries. A muscular valve regulates the process of opening and closing the entrance of the lung. When the valve opens, air can either enter or leave the lung. The valve plays an important role in reducing water loss and preventing drowning. Shell growth As the snail grows, so does its calcium carbonate shell. The shell grows additively, by the addition of new calcium carbonate, which is secreted by glands located in the snail's mantle. The new material is added to the edge of the shell aperture (the opening of the shell). Therefore, the centre of the shell's spiral was made when the snail was younger, and the outer part when the snail was older. When the snail reaches full adult size, it may build a thickened lip around the shell aperture.
At this point the snail stops growing, and begins reproducing. A snail's shell forms a logarithmic spiral (a brief mathematical note on this spiral appears at the end of this article). Most snail shells are right-handed or dextral in coiling, meaning that if the shell is held with the apex (the tip, or the juvenile whorls) pointing towards the observer, the spiral proceeds in a clockwise direction from the apex to the opening. Hibernation and estivation Some snails hibernate during the winter (typically October through April in the Northern Hemisphere). They may also estivate in the summer in drought conditions. To stay moist during hibernation, a snail seals its shell opening with a dry layer of mucus called an epiphragm. Reproduction The great majority of land snails are hermaphrodites with a full set of reproductive organs of both sexes, able to produce both spermatozoa and ova. A few groups of land snails, such as the Pomatiidae, which are distantly related to periwinkles, have separate sexes: male and female. The age of sexual maturity varies depending on the species of snail, ranging from as little as 6 weeks to 5 years. Adverse environmental conditions may delay sexual maturity in some snail species. Most pulmonate air-breathing land snails perform courtship behaviors before mating. The courtship may last anywhere between two and twelve hours. In a number of different families of land snails and slugs, one or more love darts are fired into the body of the partner prior to mating. Pulmonate land snails are prolific breeders and inseminate each other in pairs to internally fertilize their ova via a reproductive opening on one side of the body, near the front, through which the outer reproductive organs are extruded so that sperm can be exchanged. Fertilization then occurs and the eggs develop. Each brood may consist of up to 100 eggs. Garden snails bury their eggs in shallow topsoil primarily while the weather is warm and damp, usually 5 to 10 cm down, digging with their foot. Egg sizes differ between species, from a 3 mm diameter in the grove snail to a 6 mm diameter in the giant African land snail. After 2 to 4 weeks of favorable weather, these eggs hatch and the young emerge. Snails may lay eggs as often as once a month. There have been hybridizations of snail species; although these do not occur commonly in the wild, in captivity they can be coaxed into doing so. Parthenogenesis has been reported in only one species of slug, but many species can self-fertilise. Cylindrus obtusus is a prominent endemic snail species of the Eastern Alps. There is strong evidence for selfing (self-fertilization) in the easternmost populations, as indicated by microsatellite data. Compared to western populations, in the eastern populations the mucous gland structures employed in sexual reproduction are highly variable and deformed, suggesting that these structures have reduced function in selfing organisms. Lifespan Most species of land snail are annual; others are known to live 2 or 3 years, but some of the larger species may live over 10 years in the wild. For instance, 10-year-old individuals of the Roman snail Helix pomatia are probably not uncommon in natural populations. Populations of some threatened species may be dependent on a pool of such long-lived adults. In captivity, the lifespan of snails can be much longer than in the wild, for instance up to 25 years in H. pomatia. Diet In the wild, snails eat a variety of different foods.
Terrestrial snails are usually herbivorous; however, some species are predatory carnivores or omnivores, including the genus Powelliphanta, which includes the largest carnivorous snails in the world, native to New Zealand. The diet of most land snails can include leaves, stems, soft bark, fruit, vegetables, fungi and algae. They may have a specialized crop of symbiotic bacteria that aid in digestion, especially with the breakdown of the polysaccharide cellulose into simple sugars. Some species can cause damage to agricultural crops and garden plants, and are therefore often regarded as pests. Predators Many predators, both specialist and generalist, feed on snails. Some animals, such as the song thrush, break the shell of the snail by hammering it against a hard object, such as a stone, to expose its edible insides. Other predators, such as some species of frogs, circumvent the need to break snail shells by simply swallowing the snail whole, shell and all. Some carnivorous species of snails, such as the decollate snail and the rosy wolf snail, also prey on other land snails. Such carnivorous snails are commercially grown and sold to combat pest snail species. Many of these also escape into the wild, where they prey on indigenous snails, such as the Cuban land snails of the genus Polymita and the indigenous snails of Hawaii. In an attempt to protect themselves against predators, land snails retract their soft parts into their shell when they are resting; some bury themselves. Land snails have many natural predators, including members of all the land vertebrate groups, three examples being thrushes, hedgehogs and Pareas snakes. Invertebrate predators include decollate snails, ground beetles, leeches, certain land flatworms such as Platydemus manokwari, and even the predatory caterpillar Hyposmocoma molluscivora. The marsh snail Succinea putris can be parasitized by a microscopic flatworm of the species Leucochloridium paradoxum, which reproduces within the snail's body. The flatworms invade the snail's eye stalks, causing them to become enlarged. Birds are attracted to and consume these eye stalks, ingesting the flatworms in the process and becoming the final hosts of the flatworm. Human activity poses great dangers to snails in the wild. Pollution and habitat destruction have caused the extinction of a considerable number of snail species in recent years. Ecology Snails easily suffer moisture loss. They are most active at night and after rainfall. During unfavourable conditions, a snail remains inside its shell, usually under rocks or other hiding places, to avoid being discovered by predators. In dry climates, snails naturally congregate near water sources, including artificial sources such as the wastewater outlets of air conditioners. Human food Land snails have been eaten for thousands of years, going back at least as far as the Pleistocene. Archaeological evidence of snail consumption is especially abundant in Capsian sites in North Africa, but is also found throughout the Mediterranean region in archaeological sites dating between 12,000 and 6,000 years ago. Snail eggs, sold as snail caviar, are a specialty food that is growing in popularity in European cuisine. Snails contain many nutrients. They are rich in calcium and also contain vitamins B1 and E. They contain various essential amino acids, and are low in calories and fat.
However, wild-caught land snails that are prepared for the table but are not thoroughly cooked can harbor a parasite (Angiostrongylus cantonensis) that can cause a rare kind of meningitis. The process of snail farming is called heliciculture. The establishment of snail farms outside of Europe has introduced several species to North America, South America, and Africa, where some escapees have established themselves as invasive species. Africa In parts of West Africa, specifically Ghana, snails are served as a delicacy. Achatina achatina, Ghana tiger snails, are also known as some of the largest snails in the world. The snail, called "igbin" in the Yoruba language, is a delicacy widely eaten in Nigeria, especially among the Yorubas and Igbos. In the Igbo language, snails are called "Ejuna" or "Eju". In Cameroon, snails, usually called 'nyamangoro' and 'slow boys', are a delicacy, especially to natives of the South West region of Cameroon. The snails are either eaten cooked and spiced or with a favourite dish called 'eru'. In northern Morocco, small snails are eaten as snacks in spicy soup. The recipe is identical to that prepared in Andalusia (southern Spain), showing the close cultural relationship between the two cuisines. Europe Snails are eaten in several European countries, as they were in the past in the Roman Empire. Mainly three species, all from the family Helicidae, are ordinarily eaten: Helix pomatia, or edible snail, generally prepared in its shell, with parsley butter (size: 40 to 55 mm for an adult weight of 25 to 45 g; typically found in Burgundy, France; known as l'Escargot de Bourgogne). Helix lucorum, found throughout the Eastern Mediterranean region, commonly eaten in Greece and in some rural communities (ethnic Greeks and Georgian Catholics) in Georgia. Cornu aspersum (synonym Helix aspersa), better known as the European brown snail, cooked in many different ways according to different local traditions (size: 28 to 35 mm for an adult weight of 7 to 15 g; typically found in the Mediterranean countries of Europe and North Africa and on the French Atlantic coast; Helix aspersa aspersa is known as le Petit-gris). Cornu aspersum maxima (size 40 to 45 mm for an average weight of 20 to 30 g; typically found in North Africa). Snails are a delicacy in French cuisine, where they are called escargots. 191 farms produced escargots in France as of 2014. On an English-language menu, escargot is generally reserved for snails prepared with traditional French recipes (served in the shell with a garlic and parsley butter). Before being prepared to eat, the snails should fast for three days, with only water available. After three days of fasting, the snails should be fed flour and offered water for at least a week. This process is thought to cleanse the snails. Snails are also popular in Portuguese cuisine, where they are called caracóis, and served in cheap snack houses and taverns, usually stewed (with different mixtures of white wine, garlic, piri piri, oregano, coriander or parsley, and sometimes chouriço). Bigger varieties, called caracoletas (especially Cornu aspersum), are generally grilled and served with a butter sauce, but other dishes also exist, such as feijoada de caracóis. Overall, Portugal consumes about 4,000 tonnes of snails each year. Traditional Spanish cuisine also uses snails ("caracoles" in Spanish; "caragols" or "cargols" in Catalan), consuming several species such as Cornu aspersum, Otala lactea, Otala punctata and Theba pisana.
Snails are very popular in Andalusia, Valencia and Catalonia. There are even snail celebrations, such as the "L'Aplec del Caragol", which takes place in Lleida each May and draws more than 200,000 visitors from abroad. Small to medium-sized varieties are usually cooked in one of several spicy sauces or even in soups, and eaten as an appetizer. The bigger ones may be reserved for more elaborate dishes, such as "arroz con conejo y caracoles" (a paella-style rice with snails and rabbit meat, from the inner regions of south-eastern Spain), "cabrillas" (snails in spicy tomato sauce, typical of western Andalusia), and the Catalan caragols a la llauna (grilled inside their own shells and then eaten after dipping them in garlic mayonnaise) and a la gormanda (boiled in tomato and onion sauce). In Greece, snails are popular on the island of Crete, but are also eaten in many parts of the country and can even be found in supermarkets, sometimes placed alive near partly refrigerated vegetables. In this regard, snails are one of the few live organisms sold at supermarkets as food. They are eaten either boiled with vinegar added, or sometimes cooked alive in a casserole with tomato, potatoes and squashes. Limpets and sea snails also find their way to the Greek table around the country. Another snail dish is Kohli Bourbouristi (κοχλιοί μπου(ρ)μπουριστοί), a traditional Cretan preparation, which consists of fried snails in olive oil with salt, vinegar and rosemary. Snails often feature on Cyprus taverna menus, in the meze section, under the name karaoloi (καράολοι). In Sicily, snails (or babbaluci as they are commonly called in Sicilian) are a popular dish. They are usually boiled with salt first, then served with tomato sauce or bare with oil, garlic and parsley. Snails are similarly appreciated in other Italian regions, such as Piedmont, where in Cherasco there is the Italian National Institute of Heliculture. Snails (or bebbux as they are called in Maltese) are a dish on the Mediterranean island of Malta, generally prepared and served in the Sicilian manner. In southwestern Germany there is a regional specialty of soup with snails and herbs, called "Black Forest snail chowder" (Badener Schneckensuepple). Heliciculture is the farming of snails. Some species, such as the Roman snail, are protected in the wild in several European countries and must not be collected, but the Roman snail and the garden snail (Cornu aspersum) are cultivated on snail farms. Although there is not usually considered to be a tradition of snail eating in Great Britain, common garden snails (Cornu aspersum) were eaten in the Southwick area of Sunderland in North East England. They were collected from quarries and along the stone walls of railway embankments during the winter, when the snails were hibernating and had voided the contents of their guts. Gibson writes that this tradition was introduced in the 19th century by French immigrant glass workers. "Snail suppers" were a feature of local pubs, and Southwick working men were collecting and eating snails as late as the 1970s, though the tradition may now have died out. Oceania In New Caledonia, Placostylus fibratus (French: bulime) is considered a highly prized delicacy and is locally farmed to ensure supplies. It is often served in restaurants prepared in the French style with garlic butter. Prevention Metaldehyde and iron phosphate can be used to exterminate snails.
Copper is also used as a barrier material, since contact with it produces a reaction, often likened to an electric shock, that makes it difficult for snails to move across it.
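A brief mathematical aside on the shell geometry mentioned under "Shell growth" above (standard geometry, not a claim from the article itself): the logarithmic spiral has the polar form

```latex
r(\theta) = a\,e^{b\theta},
\qquad
r(\theta + \Delta) = e^{b\Delta}\, r(\theta)
```

The second identity is the self-similarity property: rotating the spiral is equivalent to rescaling it. This is why a snail can enlarge its shell purely by adding material at the aperture, as described above, while the overall shape stays the same; the constants a and b set the initial size and how tightly the shell coils.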
Biology and health sciences
Gastropods
Animals
6159360
https://en.wikipedia.org/wiki/Juniperus%20deppeana
Juniperus deppeana
Juniperus deppeana (alligator juniper or checkerbark juniper) is a small to medium-sized tree reaching in height. It is native to central and northern Mexico and the southwestern United States. Description The tree reaches , rarely , in height. The bark is usually very distinctive, unlike that of other junipers: hard, dark gray-brown, and cracked into small square plates superficially resembling alligator skin; it is, however, sometimes like that of other junipers, with stringy vertical fissuring. The shoots are in diameter. On juvenile specimens, the leaves are needle-like and long. The leaves are arranged in opposite decussate pairs or whorls of three; in adulthood they are scale-like, long (up to 5 mm) and 1–1.5 mm broad. The cones are berrylike, wide, green when young and maturing to orange-brown with a whitish waxy bloom. These contain 2–6 seeds, which mature in about 18 months. The male cones are long, and shed their pollen in spring. The species is largely dioecious, producing cones of only one sex on each tree, but occasional trees are monoecious. Taxonomy There are five varieties, not accepted as distinct by all authorities: Juniperus deppeana var. deppeana. Throughout the range of the species. Foliage dull gray-green with a transparent or yellowish resin spot on each leaf; cones diameter. Juniperus deppeana var. pachyphlaea (syn. J. pachyphlaea). Arizona, New Mexico, northernmost Mexico. Foliage strongly glaucous with a white resin spot on each leaf; cones 7–12 mm diameter. Juniperus deppeana var. robusta (syn. J. deppeana var. patoniana). Northwestern Mexico. Cones larger, diameter. Juniperus deppeana var. sperryi. Western Texas, very rare. Bark furrowed, not square-cracked, branchlets pendulous; possibly a hybrid with J. flaccida. Juniperus deppeana var. zacatecensis. Zacatecas. Cones large, 10–15 mm diameter. Etymology Native American names include táscate and tláscal. Distribution and habitat It is native to central and northern Mexico (from Oaxaca northward) and the southwestern United States (Arizona, New Mexico, western Texas). It grows at moderate altitudes of on dry soils. Ecology The berrylike cones are eaten by birds and mammals. Uses Berries from alligator juniper growing in the Davis Mountains of West Texas are used to flavor gin, including one produced by WildGins Co. in Austin, Texas.
Biology and health sciences
Cupressaceae
Plants
6160807
https://en.wikipedia.org/wiki/Electrosynthesis
Electrosynthesis
In electrochemistry, electrosynthesis is the synthesis of chemical compounds in an electrochemical cell. Compared to ordinary redox reactions, electrosynthesis sometimes offers improved selectivity and yields. Electrosynthesis is actively studied as a science and also has industrial applications. Electrooxidation has potential for wastewater treatment as well. Experimental setup The basic setup in electrosynthesis is an electrolytic cell, a potentiostat and two electrodes. Typical solvent and electrolyte combinations minimize electrical resistance. Protic conditions often use alcohol-water or dioxane-water solvent mixtures with an electrolyte such as a soluble salt, acid or base. Aprotic conditions often use an organic solvent such as acetonitrile or dichloromethane with electrolytes such as lithium perchlorate or tetrabutylammonium salts. The choice of electrodes with respect to their composition and surface area can be decisive. For example, in aqueous conditions the competing reactions in the cell are the formation of oxygen at the anode and hydrogen at the cathode. In this case, a graphite anode and a lead cathode could be used effectively because of their high overpotentials for oxygen and hydrogen formation, respectively. Many other materials can be used as electrodes, including platinum, magnesium, mercury (as a liquid pool in the reactor), stainless steel, and reticulated vitreous carbon. Some reactions use a sacrificial electrode, such as zinc or lead, that is consumed during the reaction. Cell designs can be of the undivided or divided type. In divided cells the cathode and anode chambers are separated with a semiporous membrane. Common membrane materials include sintered glass, porous porcelain, polytetrafluoroethene and polypropylene. The purpose of the divided cell is to permit the diffusion of ions while restricting the flow of products and reactants. This separation simplifies workup. An example of a reaction requiring a divided cell is the reduction of nitrobenzene to phenylhydroxylamine, because the latter is susceptible to oxidation at the anode. Reactions Organic oxidations take place at the anode; compounds are reduced at the cathode. Radical intermediates are often invoked. The initial reaction takes place at the surface of the electrode, and the intermediates then diffuse into the solution, where they participate in secondary reactions. The yield of an electrosynthesis is expressed in terms of both the chemical yield and the current efficiency. Current efficiency is the ratio of the charge consumed in forming the products to the total charge passed through the cell. Side reactions decrease the current efficiency. The potential drop between the electrodes determines the rate constant of the reaction. Electrosynthesis is carried out with either constant potential or constant current. The choice between the two involves a trade-off between the simplicity of the experimental conditions and current efficiency. Constant potential uses current more efficiently, because the current in the cell decreases with time due to the depletion of the substrate around the working electrode (stirring is usually necessary to decrease the diffusion layer around the electrode). Under constant current conditions, by contrast, the potential across the cell increases as the substrate's concentration decreases, in order to maintain the fixed reaction rate; this consumes current in side reactions that occur outside the target voltage.
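As a worked illustration of the current-efficiency definition above (a hypothetical example with made-up numbers, not data from the text): by Faraday's law, forming n moles of product in a process that transfers z electrons per molecule consumes a charge Q = z·n·F, where F ≈ 96485 C/mol; dividing that by the total charge passed gives the current efficiency.

```python
FARADAY = 96485.0  # charge of one mole of electrons, in coulombs

def current_efficiency(moles_product, electrons_per_molecule, total_charge):
    """Fraction of the total charge passed that went into forming product."""
    product_charge = moles_product * electrons_per_molecule * FARADAY
    return product_charge / total_charge

# Hypothetical run: a 2-electron reduction at a constant 1.5 A for 2 hours
# (Q = I * t = 10800 C) that yields 0.045 mol of product.
q_total = 1.5 * 2 * 3600
print(f"current efficiency: {current_efficiency(0.045, 2, q_total):.1%}")
# -> about 80%; the remaining charge was consumed by side reactions
```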
Anodic oxidations A well-known electrosynthesis is the Kolbe electrolysis, in which two carboxylic acids decarboxylate and the remaining structures bond together: 2 RCOO− → R–R + 2 CO2 + 2 e−. A variation, called the non-Kolbe reaction, occurs when a heteroatom (nitrogen or oxygen) is present at the α-position; the intermediate oxonium ion is trapped by a nucleophile, usually solvent. Anodic electrosynthesis can also oxidize primary aliphatic amines to nitriles. Amides can be oxidized to N-acyliminium ions, which can be captured by various nucleophiles; this reaction type is called the Shono oxidation. An example is the α-methoxylation of N-carbomethoxypyrrolidine. Oxidation of a carbanion can lead to a coupling reaction, for instance in the electrosynthesis of the tetramethyl ester of ethanetetracarboxylic acid from the corresponding malonate ester. α-Amino acids form nitriles and carbon dioxide via oxidative decarboxylation at AgO anodes (the latter is formed in situ by oxidation of Ag2O). Selective electrochemical oxidations have been developed in recent decades for the preparation of nitriles from amines. Propiolic acid is prepared commercially by oxidizing propargyl alcohol at a lead electrode. Cathodic reductions In the Markó–Lam deoxygenation, an alcohol can be almost instantaneously deoxygenated by electroreducing its toluate ester. In concept, adiponitrile is prepared by dimerizing acrylonitrile; in practice, the cathodic hydrodimerization of activated olefins is applied industrially in the synthesis of adiponitrile from two equivalents of acrylonitrile: 2 CH2=CHCN + 2 H2O + 2 e− → NC(CH2)4CN + 2 OH−. The cathodic reduction of arene compounds to the 1,4-dihydro derivatives is similar to a Birch reduction; examples from industry are the reduction of phthalic acid and the reduction of 2-methoxynaphthalene. The Tafel rearrangement, named for Julius Tafel, was at one time an important method for the synthesis of certain hydrocarbons from alkylated ethyl acetoacetate, a reaction accompanied by rearrangement of the alkyl group. The cathodic reduction of a nitrile to a primary amine is carried out in a divided cell; an example is the cathodic reduction of benzyl cyanide to phenethylamine. Cathodic reduction of a nitroalkene can give the oxime in good yield. At higher negative reduction potentials, the nitroalkene can be reduced further, giving the primary amine but with lower yield. Azobenzene is prepared by industrial electrosynthesis from nitrobenzene. The electrochemical carboxylation of para-isobutylbenzyl chloride to ibuprofen is promoted under supercritical carbon dioxide. Cathodic reduction of a carboxylic acid (oxalic acid) to an aldehyde (glyoxylic acid, shown as the rare aldehyde form) is also carried out in a divided cell. Phenylpropanoic acid was originally prepared by the electrolytic reduction of cinnamic acid. Electrocatalysis by a copper complex helps reduce carbon dioxide to oxalic acid, using carbon dioxide as a feedstock. It has been reported that formate can be formed by the electrochemical reduction of CO2 (in the form of bicarbonate) at a lead cathode at pH 8.6: HCO3− + H2O + 2 e− → HCOO− + 2 OH−, or CO2 + H2O + 2 e− → HCOO− + OH−. If the feed is CO2 and oxygen is evolved at the anode, the total reaction is: CO2 + OH− → HCOO− + ½ O2. Redox reactions Cathodic reduction of carbon dioxide and anodic oxidation of acetonitrile afford cyanoacetic acid. An electrosynthesis employing alternating current prepares phenol at both the cathode and the anode.
Electrofluorination In organofluorine chemistry, many perfluorinated compounds are prepared by electrochemical synthesis, which is conducted in liquid HF at voltages near 5–6 V using Ni anodes. The method was invented in the 1930s. Amines, alcohols, carboxylic acids, and sulfonic acids are converted to perfluorinated derivatives using this technology. A solution or suspension of the hydrocarbon in hydrogen fluoride is electrolyzed at 5–6 V to produce high yields of the perfluorinated product.
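Schematically, each C–H bond is replaced by a C–F bond, with hydrogen evolved at the cathode. A commonly cited idealized overall equation, here for the perfluorination of octanesulfonyl fluoride (our illustrative choice of substrate), is:

```latex
\mathrm{C_8H_{17}SO_2F} + 17\,\mathrm{HF} \longrightarrow \mathrm{C_8F_{17}SO_2F} + 17\,\mathrm{H_2}
```

In practice the process is less clean than this stoichiometry suggests, with partially fluorinated and fragmented by-products also formed.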
Physical sciences
Synthetic strategies
Chemistry