Exocrine gland
https://en.wikipedia.org/wiki/Exocrine%20gland
Exocrine glands are glands that secrete substances onto an epithelial surface by way of a duct. Examples include the sweat, salivary, mammary, ceruminous, lacrimal, sebaceous, prostate and mucous glands. Exocrine glands are one of two types of glands in the human body, the other being endocrine glands, which secrete their products directly into the bloodstream. The liver and pancreas are both exocrine and endocrine glands; they are exocrine glands because they secrete products (bile and pancreatic juice) into the gastrointestinal tract through a series of ducts, and endocrine because they secrete other substances directly into the bloodstream. Exocrine sweat glands are part of the integumentary system; they have eccrine and apocrine types.

Classification

Structure
Exocrine glands contain a glandular portion and a duct portion, the structures of which can be used to classify the gland. The duct portion may be branched (called compound) or unbranched (called simple). The glandular portion may be tubular or acinar, or may be a mix of the two (called tubuloacinar). If the glandular portion branches, the gland is called a branched gland.

Method of secretion
Depending on how their products are secreted, exocrine glands are categorized as merocrine, apocrine, or holocrine.
Merocrine – the cells of the gland secrete their substances by exocytosis into a duct; examples include pancreatic acinar cells, eccrine sweat glands, salivary glands, goblet cells, intestinal glands and tear glands.
Apocrine – the apical portion of the cell, which contains the secretion, buds off; examples include the sweat glands of the armpits, pubic region, skin around the anus, lips and nipples, and the mammary glands.
Holocrine – the entire cell disintegrates to excrete its substance; examples include the sebaceous glands of the skin and nose, the meibomian glands and the glands of Zeis.

Product secreted
Serous cells secrete proteins, often enzymes. Examples include gastric chief cells and Paneth cells.
Mucous cells secrete mucus. Examples include Brunner's glands, esophageal glands and pyloric glands.
Seromucous (mixed) glands secrete both protein and mucus. Examples include the salivary glands: the parotid gland (25% of saliva secretion) is predominantly serous, the sublingual gland (5%) is a mainly mucous gland, and the submandibular gland (70%) is a mixed, mainly serous gland.
Sebaceous glands secrete sebum, a lipid product. These glands are also known as oil glands; examples include Fordyce spots and meibomian glands.

See also
List of glands of the human body
List of specialized glands within the human integumentary system
Prosthesis
https://en.wikipedia.org/wiki/Prosthesis
In medicine, a prosthesis (plural: prostheses), or a prosthetic implant, is an artificial device that replaces a missing body part, which may be lost through physical trauma, disease, or a condition present at birth (congenital disorder). Prostheses may restore the normal functions of the missing body part, or may perform a cosmetic function. A person who has undergone an amputation is sometimes referred to as an amputee, though this term may be offensive. Rehabilitation for someone with an amputation is primarily coordinated by a physiatrist as part of an inter-disciplinary team consisting of physiatrists, prosthetists, nurses, physical therapists, and occupational therapists. Prostheses can be created by hand or with computer-aided design (CAD), a software interface that helps creators design and analyze the creation with computer-generated 2-D and 3-D graphics as well as analysis and optimization tools.

Types

A person's prosthesis should be designed and assembled according to the person's appearance and functional needs. For instance, a person may need a transradial prosthesis, but must choose between an aesthetic functional device, a myoelectric device, a body-powered device, or an activity-specific device. The person's future goals and economic capabilities may help them choose between one or more devices.

Craniofacial prostheses include intra-oral and extra-oral prostheses. Extra-oral prostheses are further divided into hemifacial, auricular (ear), nasal, orbital and ocular. Intra-oral prostheses include dental prostheses, such as dentures, obturators, and dental implants. Prostheses of the neck include larynx substitutes and trachea and upper esophageal replacements. Somato prostheses of the torso include breast prostheses, which may be either single or bilateral, full breast devices or nipple prostheses. Penile prostheses are used to treat erectile dysfunction, correct penile deformity, perform phalloplasty procedures in cisgender men, and build a new penis in female-to-male gender reassignment surgeries.

Limb prostheses

Limb prostheses include both upper- and lower-extremity prostheses. Upper-extremity prostheses are used at varying levels of amputation: forequarter, shoulder disarticulation, transhumeral, elbow disarticulation, transradial, wrist disarticulation, full hand, partial hand, finger, and partial finger. A transradial prosthesis is an artificial limb that replaces an arm missing below the elbow.

Upper limb prostheses can be categorized into three main categories: passive devices, body-powered devices, and externally powered (myoelectric) devices. Passive devices can either be passive hands, mainly used for cosmetic purposes, or passive tools, mainly used for specific activities (e.g. leisure or vocational). An extensive overview and classification of passive devices can be found in a literature review by Maat et al. A passive device can be static, meaning the device has no movable parts, or adjustable, meaning its configuration can be adjusted (e.g. adjustable hand opening). Despite the absence of active grasping, passive devices are very useful in bimanual tasks that require fixation or support of an object, or for gesticulation in social interaction. According to scientific data, a third of upper limb amputees worldwide use a passive prosthetic hand. Body-powered or cable-operated limbs work by attaching a harness and cable around the opposite shoulder of the damaged arm.
A recent body-powered approach has explored using the wearer's breathing to power and control the prosthetic hand, eliminating the actuation cable and harness. The third category of available prosthetic devices comprises myoelectric arms. This class of devices is distinguished from the previous ones by the inclusion of a battery system, which provides energy for both actuation and sensing components. While actuation predominantly relies on motor or pneumatic systems, a variety of solutions have been explored for capturing muscle activity, including techniques such as electromyography, sonomyography, myokinetic sensing, and others. These methods work by detecting the minute electrical currents generated by contracted muscles during upper arm movement, typically using electrodes or other suitable tools. The acquired signals are then converted into gripping patterns or postures that the artificial hand executes. In the prosthetics industry, a trans-radial prosthetic arm is often referred to as a "BE" or below-elbow prosthesis.

Lower-extremity prostheses provide replacements at varying levels of amputation. These include hip disarticulation, transfemoral prosthesis, knee disarticulation, transtibial prosthesis, Syme's amputation, foot, partial foot, and toe. The two main subcategories of lower-extremity prosthetic devices are trans-tibial (any amputation transecting the tibia bone or a congenital anomaly resulting in a tibial deficiency) and trans-femoral (any amputation transecting the femur bone or a congenital anomaly resulting in a femoral deficiency).

A transfemoral prosthesis is an artificial limb that replaces a leg missing above the knee. Transfemoral amputees can have a very difficult time regaining normal movement. In general, a transfemoral amputee must use approximately 80% more energy to walk than a person with two whole legs, because of the complexities of movement associated with the knee. In newer and more improved designs, hydraulics, carbon fiber, mechanical linkages, motors, computer microprocessors, and innovative combinations of these technologies are employed to give more control to the user. In the prosthetics industry, a trans-femoral prosthetic leg is often referred to as an "AK" or above-the-knee prosthesis.

A transtibial prosthesis is an artificial limb that replaces a leg missing below the knee. A transtibial amputee is usually able to regain normal movement more readily than someone with a transfemoral amputation, due in large part to retaining the knee, which allows for easier movement. Lower extremity prosthetics describes artificially replaced limbs located at the hip level or lower. In the prosthetics industry, a trans-tibial prosthetic leg is often referred to as a "BK" or below-the-knee prosthesis.

Prostheses are manufactured and fitted by clinical prosthetists. Prosthetists are healthcare professionals responsible for making, fitting, and adjusting prostheses and, for lower limb prostheses, for assessing both gait and prosthetic alignment. Once a prosthesis has been fitted and adjusted by a prosthetist, a rehabilitation physiotherapist (called a physical therapist in America) will help teach the new prosthetic user to walk with a leg prosthesis. To do so, the physical therapist may provide verbal instructions and may also help guide the person using touch or tactile cues. This may be done in a clinic or at home.
There is some research suggesting that such training in the home may be more successful if the treatment includes the use of a treadmill. Using a treadmill, along with the physical therapy treatment, helps the person to experience many of the challenges of walking with a prosthesis. In the United Kingdom, 75% of lower limb amputations are performed because of inadequate circulation (dysvascularity). This condition is often associated with many other medical conditions (co-morbidities), including diabetes and heart disease, that may make it a challenge to recover and use a prosthetic limb to regain mobility and independence. For people who have inadequate circulation and have lost a lower limb, there is insufficient evidence, due to a lack of research, to inform them regarding their choice of prosthetic rehabilitation approaches.

Lower extremity prostheses are often categorized by the level of amputation or after the name of a surgeon:
Transfemoral (above-knee)
Transtibial (below-knee)
Ankle disarticulation (more commonly known as Syme's amputation)
Knee disarticulation (also see knee replacement)
Hip disarticulation (also see hip replacement)
Hemipelvectomy
Partial foot amputations (Pirogoff, talo-navicular and calcaneo-cuboid (Chopart), tarso-metatarsal (Lisfranc), trans-metatarsal, metatarsal-phalangeal, ray amputations, toe amputations)
Van Nes rotationplasty

Prosthetic raw materials

Prosthetics are made lightweight for the amputee's convenience. Some of these materials include:
Plastics: polyethylene, polypropylene, acrylics, polyurethane
Wood (early prosthetics)
Rubber (early prosthetics)
Lightweight metals: aluminum
Composites: carbon fiber reinforced polymers

Wheeled prostheses have also been used extensively in the rehabilitation of injured domestic animals, including dogs, cats, pigs, rabbits, and turtles.

History

Prosthetics originate from the ancient Near East circa 3000 BCE, with the earliest evidence of prosthetics appearing in ancient Egypt and Iran. The earliest recorded mention of eye prosthetics is from the Egyptian story of the Eye of Horus, dated circa 3000 BC, which involves the left eye of Horus being plucked out and then restored by Thoth. Circa 3000-2800 BC, the earliest archaeological evidence of prosthetics is found in ancient Iran, where an eye prosthetic was found buried with a woman in Shahr-i Shōkhta. It was likely made of bitumen paste covered with a thin layer of gold. The Egyptians were also early pioneers of foot prosthetics, as shown by the wooden toe found on a body from the New Kingdom circa 1000 BC. Another early textual mention is found in South Asia circa 1200 BC, involving the warrior queen Vishpala in the Rigveda. Roman bronze crowns have also been found, but their use could have been more aesthetic than medical. An early mention of a prosthetic comes from the Greek historian Herodotus, who tells the story of Hegesistratus, a Greek diviner who cut off his own foot to escape his Spartan captors and replaced it with a wooden one.

Wood and metal prosthetics

Pliny the Elder also recorded the tale of a Roman general, Marcus Sergius, whose right hand was cut off while campaigning and who had an iron hand made to hold his shield so that he could return to battle. A famous and quite refined historical prosthetic arm was that of Götz von Berlichingen, made at the beginning of the 16th century. The first confirmed use of a prosthetic device, however, is from 950 to 710 BC.
In 2000, research pathologists discovered a mummy from this period buried in the Egyptian necropolis near ancient Thebes that possessed an artificial big toe. This toe, consisting of wood and leather, exhibited evidence of use. When it was reproduced by bio-mechanical engineers in 2011, researchers discovered that this ancient prosthetic enabled its wearer to walk both barefoot and in Egyptian-style sandals. Previously, the earliest discovered prosthetic was an artificial leg from Capua.

Around the same time as Götz von Berlichingen, François de la Noue is also reported to have had an iron hand, as is, in the 17th century, René-Robert Cavalier de la Salle. Henri de Tonti had a prosthetic hook for a hand. During the Middle Ages, prosthetics remained quite basic in form. Debilitated knights would be fitted with prosthetics so they could hold up a shield, grasp a lance or a sword, or stabilize a mounted warrior. Only the wealthy could afford anything that would assist in daily life.

One notable prosthesis was that belonging to an Italian man, who scientists estimate replaced his amputated right hand with a knife. Scientists investigating the skeleton, which was found in a Longobard cemetery in Povegliano Veronese, estimated that the man had lived sometime between the 6th and 8th centuries AD. Materials found near the man's body suggest that the knife prosthesis was attached with a leather strap, which he repeatedly tightened with his teeth. During the Renaissance, prosthetics developed with the use of iron, steel, copper, and wood. Functional prosthetics began to make an appearance in the 1500s.

Technology progress before the 20th century

An Italian surgeon recorded the existence of an amputee who had an arm that allowed him to remove his hat, open his purse, and sign his name. Improvement in amputation surgery and prosthetic design came at the hands of Ambroise Paré. Among his inventions was an above-knee device that was a kneeling peg leg and foot prosthesis with a fixed position, adjustable harness, and knee lock control. The functionality of his advancements showed how future prosthetics could develop.

Other major improvements before the modern era:
Pieter Verduyn – first non-locking below-knee (BK) prosthesis.
James Potts – prosthesis made of a wooden shank and socket, a steel knee joint and an articulated foot controlled by catgut tendons from the knee to the ankle. It came to be known as the "Anglesey Leg" or "Selpho Leg".
Sir James Syme – a new method of ankle amputation that did not involve amputating at the thigh.
Benjamin Palmer – improved upon the Selpho leg, adding an anterior spring and concealed tendons to simulate natural-looking movement.
Dubois Parmlee – created a prosthesis with a suction socket, polycentric knee, and multi-articulated foot.
Marcel Desoutter & Charles Desoutter – first aluminium prosthesis.
Henry Heather Bigg and his son Henry Robert Heather Bigg – won the Queen's command to provide "surgical appliances" to wounded soldiers after the Crimean War. They developed arms that allowed a double arm amputee to crochet, and a hand that felt natural to others, based on ivory, felt and leather.

At the end of World War II, the NAS (National Academy of Sciences) began to advocate better research and development of prosthetics. Through government funding, a research and development program was developed within the Army, Navy, Air Force, and the Veterans Administration.

Lower extremity modern history

After the Second World War, a team at the University of California, Berkeley including James Foort and C.W.
Radcliff helped to develop the quadrilateral socket by developing a jig fitting system for amputations above the knee. Socket technology for lower extremity limbs saw a further revolution during the 1980s when John Sabolich C.P.O. invented the Contoured Adducted Trochanteric-Controlled Alignment Method (CATCAM) socket, later to evolve into the Sabolich Socket. He followed the direction of Ivan Long and Ossur Christensen as they developed alternatives to the quadrilateral socket, which in turn followed the open-ended plug socket, created from wood. The advancement was due to the difference in the socket-to-patient contact model. Prior to this, sockets were made in a square shape with no specialized containment for muscular tissue. New designs thus help to lock in the bony anatomy, locking it into place and distributing the weight evenly over the existing limb as well as the musculature of the patient. Ischial containment is well known and used today by many prosthetists to help in patient care. Variations of the ischial containment socket thus exist, and each socket is tailored to the specific needs of the patient. Others who contributed to socket development and changes over the years include Tim Staats, Chris Hoyt, and Frank Gottschalk. Gottschalk disputed the efficacy of the CAT-CAM socket, insisting that the surgical procedure done by the amputation surgeon was most important in preparing the amputee for good use of a prosthesis of any type of socket design.

The first microprocessor-controlled prosthetic knees became available in the early 1990s. The Intelligent Prosthesis was the first commercially available microprocessor-controlled prosthetic knee. It was released by Chas. A. Blatchford & Sons, Ltd., of Great Britain, in 1993 and made walking with the prosthesis feel and look more natural. An improved version was released in 1995 under the name Intelligent Prosthesis Plus. Blatchford released another prosthesis, the Adaptive Prosthesis, in 1998. The Adaptive Prosthesis used hydraulic controls, pneumatic controls, and a microprocessor to provide the amputee with a gait that was more responsive to changes in walking speed. Cost analysis reveals that a sophisticated above-knee prosthesis will cost about $1 million over 45 years, given only annual cost-of-living adjustments. In 2019, a project under AT2030 was launched in which bespoke sockets are made using a thermoplastic rather than through a plaster cast. This is faster to do and significantly less expensive. The sockets are called Amparo Confidence sockets.

Upper extremity modern history

In 2005, DARPA started the Revolutionizing Prosthetics program. According to DARPA, the goal of the $100 million program was to "develop an advanced electromechanical prosthetic upper limb with near-natural control that would dramatically enhance independence and quality of life for amputees." In 2014, the LUKE Arm developed by Dean Kamen and his team at DEKA Research and Development Corp. became the first prosthetic arm approved by the FDA that "translates signals from a person's muscles to perform complex tasks," according to the FDA. Johns Hopkins University and the U.S. Department of Veterans Affairs also participated in the program.

Design trends moving forward

Prosthetic design has evolved in many steps, and several trends are moving it forward. Many design trends point to lighter, more durable, and more flexible materials such as carbon fiber, silicone, and advanced polymers.
These not only make the prosthetic limb lighter and more durable but also allow it to mimic the look and feel of natural skin, providing users with a more comfortable and natural experience. This new technology helps prosthetic users blend in with people with natural limbs and reduces the stigma faced by people who wear prosthetics. Another trend points towards using bionics and myoelectric components in prosthetic design. These limbs use sensors to detect electrical signals from the user's residual muscles. The signals are then converted into motions, allowing users to control their prosthetic limbs using their own muscle contractions. This has greatly improved the range and fluidity of movements available to amputees, making tasks like grasping objects or walking naturally much more feasible. Integration with AI is also at the forefront of prosthetic design. AI-enabled prosthetic limbs can learn and adapt to the user's habits and preferences over time, ensuring optimal functionality. By analyzing the user's gait, grip, and other movements, these smart limbs can make real-time adjustments, providing smoother and more natural motions.

Patient procedure

A prosthesis is a functional replacement for an amputated or congenitally malformed or missing limb. Prosthetists are responsible for the prescription, design, and management of a prosthetic device. In most cases, the prosthetist begins by taking a plaster cast of the patient's affected limb. Lightweight, high-strength thermoplastics are custom-formed to this model of the patient. Cutting-edge materials such as carbon fiber, titanium and Kevlar provide strength and durability while making the new prosthesis lighter. More sophisticated prostheses are equipped with advanced electronics, providing additional stability and control.

Current technology and manufacturing

Over the years, there have been advancements in artificial limbs. New plastics and other materials, such as carbon fiber, have allowed artificial limbs to be stronger and lighter, limiting the amount of extra energy necessary to operate the limb. This is especially important for trans-femoral amputees. Additional materials have allowed artificial limbs to look much more realistic, which is important to trans-radial and transhumeral amputees because they are more likely to have the artificial limb exposed. In addition to new materials, the use of electronics has become very common in artificial limbs. Myoelectric limbs, which are controlled by converting muscle movements to electrical signals, have become much more common than cable-operated limbs. Myoelectric signals are picked up by electrodes and integrated, and once they exceed a certain threshold, the prosthetic limb control signal is triggered; this is why all myoelectric controls inherently lag. Conversely, cable control is immediate and physical, and through that offers a certain degree of direct force feedback that myoelectric control does not. Computers are also used extensively in the manufacturing of limbs. Computer-aided design and computer-aided manufacturing are often used to assist in the design and manufacture of artificial limbs.

Most modern artificial limbs are attached to the residual limb (stump) of the amputee by belts and cuffs or by suction. The residual limb either directly fits into a socket on the prosthetic or, more commonly today, a liner is used that is then fixed to the socket either by vacuum (suction sockets) or a pin lock.
Liners are soft and can therefore create a far better suction fit than hard sockets. Silicone liners can be obtained in standard sizes, mostly with a circular (round) cross-section, but custom liners can be made for any other residual limb shape. The socket is custom-made to fit the residual limb and to distribute the forces of the artificial limb across the area of the residual limb (rather than just one small spot), which helps reduce wear on the residual limb.

Production of the prosthetic socket

The production of a prosthetic socket begins with capturing the geometry of the residual limb, a process called shape capture. The goal of this process is to create an accurate representation of the residual limb, which is critical to achieving a good socket fit. The custom socket is created by taking a plaster cast of the residual limb or, more commonly today, of the liner worn over the residual limb, and then making a mold from the plaster cast. The commonly used compound is plaster of Paris. In recent years, various digital shape capture systems have been developed whose output can be fed directly to a computer, allowing for a more sophisticated design. In general, the shape capture process begins with the digital acquisition of three-dimensional (3D) geometric data from the amputee's residual limb. Data are acquired with a probe, a laser scanner, a structured light scanner, or a photographic 3D scanning system.

After shape capture, the second phase of socket production is rectification, the process of modifying the model of the residual limb by adding volume over bony prominences and potential pressure points and removing volume from load-bearing areas. This can be done manually, by adding or removing plaster on the positive model, or virtually, by manipulating the computerized model in software. Lastly, fabrication of the prosthetic socket begins once the model has been rectified and finalized. The prosthetist wraps the positive model with a semi-molten plastic sheet or with carbon fiber coated with epoxy resin to construct the prosthetic socket. A computerized model can instead be 3D printed using a variety of materials with different flexibility and mechanical strength.

An optimal fit between the residual limb and the socket is critical to the function and usage of the entire prosthesis. If the fit between the residual limb and the socket attachment is too loose, this will reduce the area of contact between the residual limb and the socket or liner and increase the number of pockets between the residual limb skin and the socket or liner. Pressure is then higher, which can be painful. Air pockets can allow sweat to accumulate, which can soften the skin. Ultimately, this is a frequent cause of itchy skin rashes. Over time, this can lead to breakdown of the skin. On the other hand, a very tight fit may excessively increase the interface pressures, which may also lead to skin breakdown after prolonged use.
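A minimal sketch of the rectification step described above, in which a digital limb model is built up over bony prominences and reduced over load-tolerant regions. The radial-offset representation of a cross-section, the region indices, and the offset values are simplifying assumptions for illustration, not the interface of any actual CAD package.

    def rectify_cross_section(radii_mm, relief_regions, loading_regions,
                              relief_mm=2.0, reduction_mm=1.5):
        """Return a modified copy of one cross-section of the limb model.

        radii_mm        -- radial distances (mm) from the section centroid
        relief_regions  -- index ranges (start, end) over bony prominences: add volume
        loading_regions -- index ranges over pressure-tolerant tissue: remove volume
        """
        out = list(radii_mm)
        for start, end in relief_regions:
            for i in range(start, end):
                out[i % len(out)] += relief_mm            # build up: off-load the bone
        for start, end in loading_regions:
            for i in range(start, end):
                out[i % len(out)] = max(0.0, out[i % len(out)] - reduction_mm)  # press in
        return out

    if __name__ == "__main__":
        # Hypothetical circular section: 36 samples at a 45 mm radius.
        section = [45.0] * 36
        rectified = rectify_cross_section(
            section,
            relief_regions=[(0, 4)],     # e.g. over a bony crest (assumed indices)
            loading_regions=[(16, 24)],  # e.g. a load-tolerant soft-tissue area (assumed)
        )
        print(rectified[:6], rectified[16:20])

In a real workflow the same add/remove logic would be applied across many stacked cross-sections of the scanned model before the socket is fabricated.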
Artificial limbs are typically manufactured using the following steps:
Measurement of the residual limb
Measurement of the body to determine the size required for the artificial limb
Fitting of a silicone liner
Creation of a model of the liner worn over the residual limb
Formation of a thermoplastic sheet around the model, which is then used to test the fit of the prosthetic
Formation of the permanent socket
Formation of the plastic parts of the artificial limb, using methods such as vacuum forming and injection molding
Creation of the metal parts of the artificial limb using die casting
Assembly of the entire limb

Body-powered arms

Current technology allows body-powered arms to weigh around one-half to one-third of what a myoelectric arm does.

Sockets

Current body-powered arms contain sockets that are built from hard epoxy or carbon fiber. These sockets, or "interfaces", can be made more comfortable by lining them with a softer, compressible foam material that provides padding for the bony prominences. A self-suspending or supra-condylar socket design is useful for those with short to mid-range below-elbow absence. Longer limbs may require the use of a locking roll-on type inner liner or more complex harnessing to help augment suspension.

Wrists

Wrist units are either screw-on connectors featuring the UNF 1/2-20 thread (USA) or quick-release connectors, of which there are different models.

Voluntary opening and voluntary closing

Two types of body-powered systems exist: voluntary opening ("pull to open") and voluntary closing ("pull to close"). Virtually all "split hook" prostheses operate with a voluntary opening type system. More modern "prehensors", called GRIPS, use voluntary closing systems. The differences are significant. Users of voluntary opening systems rely on elastic bands or springs for gripping force, while users of voluntary closing systems rely on their own body power and energy to create gripping force. Voluntary closing users can generate prehension forces equivalent to the normal hand, up to or exceeding one hundred pounds. Voluntary closing GRIPS require constant tension to grip, like a human hand, and in that property they come closer to matching human hand performance. Voluntary opening split hook users are limited to the forces their rubber bands or springs can generate, usually below 20 pounds.

Feedback

An additional difference exists in the biofeedback created, which allows the user to "feel" what is being held. Once engaged, voluntary opening systems provide the holding force, so that they operate like a passive vice at the end of the arm. No gripping feedback is provided once the hook has closed around the object being held. Voluntary closing systems provide directly proportional control and biofeedback, so that the user can feel how much force they are applying.

In 1997, the Colombian Prof. Álvaro Ríos Poveda, a researcher in bionics in Latin America, developed an upper limb and hand prosthesis with sensory feedback. This technology allows amputee patients to handle prosthetic hand systems in a more natural way. A recent study showed that by stimulating the median and ulnar nerves, according to the information provided by the artificial sensors from a hand prosthesis, physiologically appropriate (near-natural) sensory information could be provided to an amputee. This feedback enabled the participant to effectively modulate the grasping force of the prosthesis with no visual or auditory feedback.
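The contrast between the two body-powered schemes described above, voluntary opening (grip force capped by the springs) and voluntary closing (grip force roughly proportional to cable tension, which the user also feels as feedback), can be illustrated with a toy model. The spring limit and transmission ratio below are illustrative assumptions based on the ranges quoted above, not measurements of any specific device.

    def voluntary_opening_grip(cable_tension_lbf, spring_force_lbf=18.0):
        """Grip force of a voluntary-opening hook: the elastic bands or springs set
        the force; pulling the cable only opens the hook, so grip is capped."""
        return spring_force_lbf  # independent of how hard the user pulls

    def voluntary_closing_grip(cable_tension_lbf, transmission_ratio=1.2):
        """Grip force of a voluntary-closing device: roughly proportional to the
        tension the user puts on the cable, which is also felt as feedback."""
        return transmission_ratio * cable_tension_lbf

    if __name__ == "__main__":
        for pull in (10.0, 40.0, 80.0):
            print(f"pull {pull:5.1f} lbf -> VO {voluntary_opening_grip(pull):5.1f} lbf, "
                  f"VC {voluntary_closing_grip(pull):5.1f} lbf")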
In February 2013, researchers from École Polytechnique Fédérale de Lausanne in Switzerland and the Scuola Superiore Sant'Anna in Italy implanted electrodes into an amputee's arm, which gave the patient sensory feedback and allowed for real-time control of the prosthetic. With wires linked to nerves in his upper arm, the Danish patient was able to handle objects and instantly receive a sense of touch through the special artificial hand created by Silvestro Micera and researchers in both Switzerland and Italy. In July 2019, this technology was expanded on even further by researchers from the University of Utah, led by Jacob George. The group implanted electrodes into the patient's arm to map out several sensory percepts. They then stimulated each electrode to work out how each sensory percept was triggered, and proceeded to map the sensory information onto the prosthetic. This allowed the researchers to get a good approximation of the kind of information that the patient would receive from their natural hand. Unfortunately, the arm is too expensive for the average user to acquire; however, George noted that insurance companies could cover the cost of the prosthetic.

Terminal devices

Terminal devices comprise a range of hooks, prehensors, hands and other devices.

Hooks

Voluntary opening split hook systems are simple, convenient, light, robust, versatile and relatively affordable. A hook does not match a normal human hand for appearance or overall versatility, but its material tolerances can exceed and surpass the normal human hand for mechanical stress (one can even use a hook to slice open boxes or as a hammer, whereas the same is not possible with a normal hand), for thermal stability (one can use a hook to grip items from boiling water, to turn meat on a grill, or to hold a match until it has burned down completely) and for chemical hazards (a metal hook withstands acids or lye, and does not react to solvents as a prosthetic glove or human skin does).

Hands

Prosthetic hands are available in both voluntary opening and voluntary closing versions and, because of their more complex mechanics and cosmetic glove covering, require a relatively large activation force which, depending on the type of harness used, may be uncomfortable. A recent study by the Delft University of Technology, The Netherlands, showed that the development of mechanical prosthetic hands has been neglected during the past decades. The study showed that the pinch force level of most current mechanical hands is too low for practical use. The best tested hand was a prosthetic hand developed around 1945. In 2017, however, research on bionic hands was started by Laura Hruby of the Medical University of Vienna. A few open-hardware 3D-printable bionic hands have also become available. Some companies are also producing robotic hands with an integrated forearm, for fitting onto a patient's upper arm, and in 2020, at the Italian Institute of Technology (IIT), another robotic hand with integrated forearm (Soft Hand Pro) was developed.

Commercial providers and materials

Hosmer and Otto Bock are major commercial hook providers. Mechanical hands are sold by Hosmer and Otto Bock as well; the Becker Hand is still manufactured by the Becker family. Prosthetic hands may be fitted with standard stock or custom-made cosmetic-looking silicone gloves, but regular work gloves may be worn as well.
Other terminal devices include the V2P Prehensor, a versatile robust gripper that allows customers to modify aspects of it; Texas Assist Devices (with a whole assortment of tools); and TRS, which offers a range of terminal devices for sports. Cable harnesses can be built using aircraft steel cables, ball hinges, and self-lubricating cable sheaths. Some prosthetics have been designed specifically for use in salt water.

Lower-extremity prosthetics

Lower-extremity prosthetics describes artificially replaced limbs located at the hip level or lower. Concerning all ages, Ephraim et al. (2003) found a worldwide estimate of all-cause lower-extremity amputations of 2.0–5.9 per 10,000 inhabitants. For birth prevalence rates of congenital limb deficiency, they found an estimate of between 3.5 and 7.1 cases per 10,000 births. The two main subcategories of lower extremity prosthetic devices are trans-tibial (any amputation transecting the tibia bone or a congenital anomaly resulting in a tibial deficiency) and trans-femoral (any amputation transecting the femur bone or a congenital anomaly resulting in a femoral deficiency). In the prosthetic industry, a trans-tibial prosthetic leg is often referred to as a "BK" or below-the-knee prosthesis, while a trans-femoral prosthetic leg is often referred to as an "AK" or above-the-knee prosthesis.

Other, less prevalent lower extremity cases include the following:
Hip disarticulations – this usually refers to when an amputee or congenitally challenged patient has either an amputation or an anomaly at, or in close proximity to, the hip joint. See hip replacement.
Knee disarticulations – this usually refers to an amputation through the knee, disarticulating the femur from the tibia. See knee replacement.
Symes – an ankle disarticulation while preserving the heel pad.

Socket

The socket serves as an interface between the residuum and the prosthesis, ideally allowing comfortable weight-bearing, movement control and proprioception. Socket problems, such as discomfort and skin breakdown, are rated among the most important issues faced by lower-limb amputees.

Shank and connectors

This part creates distance and support between the knee joint and the foot (in the case of an upper-leg prosthesis) or between the socket and the foot. The type of connectors used between the shank and the knee/foot determines whether the prosthesis is modular or not. Modular means that the angle and the displacement of the foot with respect to the socket can be changed after fitting. In developing countries, prostheses are mostly non-modular, in order to reduce cost. For children, modularity of angle and height is important because of their average growth of 1.9 cm annually.

Foot

Providing contact with the ground, the foot provides shock absorption and stability during stance. Additionally, it influences gait biomechanics through its shape and stiffness, because the trajectory of the center of pressure (COP) and the angle of the ground reaction forces are determined by the shape and stiffness of the foot and need to match the subject's build in order to produce a normal gait pattern. Andrysek (2010) found 16 different types of feet, with greatly varying results concerning durability and biomechanics. The main problem found in current feet is durability, with endurance ranging from 16 to 32 months. These results are for adults and will probably be worse for children due to higher activity levels and scale effects.
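The center-of-pressure trajectory and ground-reaction-force angle mentioned above are normally derived from force-plate measurements during gait analysis. The brief sketch below shows one common form of that calculation; sign conventions and the assumption that moments are reported at the plate surface vary between measurement systems, so the values here are illustrative.

    import math

    def cop_and_grf_angle(fz_n, fx_n, my_nm):
        """Sagittal-plane estimate from one force-plate sample.

        fz_n  -- vertical ground reaction force (N)
        fx_n  -- anterior-posterior shear force (N)
        my_nm -- moment about the mediolateral axis at the plate origin (N*m)
        Returns (COP position along the plate in metres, force angle from vertical in degrees).
        """
        cop_x = -my_nm / fz_n                       # assumes the origin lies in the plate surface
        angle = math.degrees(math.atan2(fx_n, fz_n))
        return cop_x, angle

    if __name__ == "__main__":
        # Hypothetical mid-stance sample.
        print(cop_and_grf_angle(fz_n=800.0, fx_n=60.0, my_nm=-40.0))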
Evidence comparing different types of feet and ankle prosthetic devices is not strong enough to determine whether one ankle/foot mechanism is superior to another. When deciding on a device, the cost of the device, a person's functional need, and the availability of a particular device should be considered.

Knee joint

In the case of a trans-femoral (above-knee) amputation, there is also a need for a complex connector providing articulation, allowing flexion during swing phase but not during stance. As its purpose is to replace the knee, the prosthetic knee joint is the most critical component of the prosthesis for trans-femoral amputees. The function of a good prosthetic knee joint is to mimic the function of the normal knee, providing structural support and stability during stance phase while being able to flex in a controllable manner during swing phase. This allows users to have a smooth and energy-efficient gait and minimizes the impact of amputation. The prosthetic knee is connected to the prosthetic foot by the shank, which is usually made of an aluminum or graphite tube.

One of the most important aspects of a prosthetic knee joint is its stance-phase control mechanism. The function of stance-phase control is to prevent the leg from buckling when the limb is loaded during weight acceptance. This ensures the stability of the knee in order to support the single-limb-support task of stance phase and provides a smooth transition to swing phase. Stance-phase control can be achieved in several ways, including mechanical locks, relative alignment of prosthetic components, weight-activated friction control, and polycentric mechanisms.

Microprocessor control

To mimic the knee's functionality during gait, microprocessor-controlled knee joints have been developed that control the flexion of the knee. Some examples are Otto Bock's C-leg, introduced in 1997, Ossur's Rheo Knee, released in 2005, the Power Knee by Ossur, introduced in 2006, the Plié Knee from Freedom Innovations and DAW Industries' Self Learning Knee (SLK). The idea was originally developed by Kelly James, a Canadian engineer, at the University of Alberta. A microprocessor is used to interpret and analyze signals from knee-angle sensors and moment sensors. The microprocessor receives signals from its sensors to determine the type of motion being employed by the amputee. Most microprocessor-controlled knee joints are powered by a battery housed inside the prosthesis. The sensory signals computed by the microprocessor are used to control the resistance generated by hydraulic cylinders in the knee joint. Small valves control the amount of hydraulic fluid that can pass into and out of the cylinder, thus regulating the extension and compression of a piston connected to the upper section of the knee.

The main advantage of a microprocessor-controlled prosthesis is a closer approximation of an amputee's natural gait. Some allow amputees to walk at near-normal walking speed or even to run. Variations in speed are also possible and are taken into account by sensors and communicated to the microprocessor, which adjusts to these changes accordingly. It also enables amputees to walk downstairs with a step-over-step approach, rather than the one-step-at-a-time approach used with mechanical knees. There is some research suggesting that people with microprocessor-controlled prostheses report greater satisfaction and improvement in functionality, residual limb health, and safety.
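The sense-and-adjust cycle just described, reading the knee-angle and moment sensors, inferring the gait phase, and setting the hydraulic resistance, can be sketched as follows. The phase rules, thresholds, and valve mapping are illustrative assumptions, not the logic of any commercial knee.

    def classify_phase(knee_angle_deg, knee_moment_nm):
        """Very coarse stance/swing classification from the two sensors."""
        if knee_moment_nm > 10.0 and knee_angle_deg < 25.0:
            return "stance"        # limb is loaded: resist buckling
        return "swing"             # limb is unloaded: allow freer flexion

    def valve_opening(phase):
        """Map the phase to a hydraulic valve opening in [0, 1]
        (0 = nearly closed, high resistance; 1 = open, low resistance)."""
        return 0.1 if phase == "stance" else 0.8

    def control_step(knee_angle_deg, knee_moment_nm):
        phase = classify_phase(knee_angle_deg, knee_moment_nm)
        return phase, valve_opening(phase)

    if __name__ == "__main__":
        for angle_deg, moment_nm in [(5.0, 45.0), (60.0, 2.0)]:
            print((angle_deg, moment_nm), "->", control_step(angle_deg, moment_nm))

A real controller runs this loop many times per second and adapts its thresholds to the measured walking speed, which is what produces the responsive gait described above.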
People may be able to perform everyday activities at greater speeds, even while multitasking, and reduce their risk of falls. However, microprocessor-controlled knees have some significant drawbacks that impair their use. They can be susceptible to water damage, so great care must be taken to ensure that the prosthesis remains dry.

Myoelectric

A myoelectric prosthesis uses as information the electrical tension generated every time a muscle contracts. This tension can be captured from voluntarily contracted muscles by electrodes applied to the skin to control the movements of the prosthesis, such as elbow flexion/extension, wrist supination/pronation (rotation) or opening/closing of the fingers. A prosthesis of this type uses the residual neuromuscular system of the human body to control the functions of an electric-powered prosthetic hand, wrist, elbow or foot. This is different from an electric switch prosthesis, which requires straps and/or cables actuated by body movements to actuate or operate switches that control the movements of the prosthesis. There is no clear evidence concluding that myoelectric upper extremity prostheses function better than body-powered prostheses. Advantages of a myoelectric upper extremity prosthesis include the potential for improvement in cosmetic appeal (this type of prosthesis may have a more natural look), possible benefits for light everyday activities, and possible benefits for people experiencing phantom limb pain. When compared to a body-powered prosthesis, a myoelectric prosthesis may not be as durable, may have a longer training time, may require more adjustments, may need more maintenance, and does not provide feedback to the user.

Prof. Álvaro Ríos Poveda has been working for several years on a non-invasive and affordable solution to this feedback problem. He considers that: "Prosthetic limbs that can be controlled with thought hold great promise for the amputee, but without sensorial feedback from the signals returning to the brain, it can be difficult to achieve the level of control necessary to perform precise movements. When connecting the sense of touch from a mechanical hand directly to the brain, prosthetics can restore the function of the amputated limb in an almost natural-feeling way." He presented the first myoelectric prosthetic hand with sensory feedback at the XVIII World Congress on Medical Physics and Biomedical Engineering, 1997, held in Nice, France.

The USSR was the first to develop a myoelectric arm, in 1958, and the first myoelectric arm became commercial in 1964, developed by the Central Prosthetic Research Institute of the USSR and distributed by the Hanger Limb Factory of the UK. Myoelectric prostheses are expensive, require regular maintenance, and are sensitive to sweat and moisture, which affect sensor performance.

Robotic prostheses

Robots can be used to generate objective measures of a patient's impairment and therapy outcome, assist in diagnosis, customize therapies based on the patient's motor abilities, assure compliance with treatment regimens and maintain patients' records. Many studies have shown a significant improvement in upper limb motor function after stroke when robotics are used for upper limb rehabilitation. In order for a robotic prosthetic limb to work, it must have several components to integrate it into the body's function: Biosensors detect signals from the user's nervous or muscular systems.
The biosensor relays this information to a controller located inside the device and processes feedback from the limb and actuator (e.g., position or force), sending it to the controller. Examples include surface electrodes that detect electrical activity on the skin, needle electrodes implanted in muscle, and solid-state electrode arrays with nerves growing through them. One type of these biosensors is employed in myoelectric prostheses. A device known as the controller is connected to the user's nerve and muscular systems and to the device itself. It sends intention commands from the user to the actuators of the device and interprets feedback from the mechanical sensors and biosensors for the user. The controller is also responsible for monitoring and controlling the movements of the device. An actuator mimics the actions of a muscle in producing force and movement. Examples include a motor that aids or replaces original muscle tissue.

Targeted muscle reinnervation (TMR) is a technique in which motor nerves, which previously controlled muscles on an amputated limb, are surgically rerouted such that they reinnervate a small region of a large, intact muscle, such as the pectoralis major. As a result, when a patient thinks about moving the thumb of their missing hand, a small area of muscle on their chest will contract instead. By placing sensors over the reinnervated muscle, these contractions can be made to control the movement of an appropriate part of the robotic prosthesis. A variant of this technique is called targeted sensory reinnervation (TSR). This procedure is similar to TMR, except that sensory nerves are surgically rerouted to skin on the chest, rather than motor nerves rerouted to muscle. Recently, robotic limbs have improved in their ability to take signals from the human brain and translate those signals into motion in the artificial limb. DARPA, the Pentagon's research division, is working to make even more advancements in this area. Its goal is to create an artificial limb that ties directly into the nervous system.

Robotic arms

Advancements in the processors used in myoelectric arms have allowed developers to make gains in fine-tuned control of the prosthetic. The Boston Digital Arm is a recent artificial limb that has taken advantage of these more advanced processors. The arm allows movement in five axes and can be programmed for a more customized feel. Recently, the I-LIMB Hand, invented in Edinburgh, Scotland, by David Gow, has become the first commercially available hand prosthesis with five individually powered digits. The hand also possesses a manually rotatable thumb, which is operated passively by the user and allows the hand to grip in precision, power, and key grip modes. Another neural prosthetic is the Johns Hopkins University Applied Physics Laboratory Proto 1. Besides the Proto 1, the university also finished the Proto 2 in 2010. Early in 2013, Max Ortiz Catalan and Rickard Brånemark of the Chalmers University of Technology and Sahlgrenska University Hospital in Sweden succeeded in making the first robotic arm that is mind-controlled and can be permanently attached to the body (using osseointegration).

An approach that is very useful is arm rotation, which is common for unilateral amputees (an amputation affecting only one side of the body) and also essential for bilateral amputees (people who are missing, or have had amputated, both arms or legs) to carry out activities of daily living.
This involves inserting a small permanent magnet into the distal end of the residual bone of subjects with upper limb amputations. When the subject rotates the residual arm, the magnet rotates with the residual bone, causing a change in magnetic field distribution. EEG (electroencephalogram) signals, detected using small flat metal discs attached to the scalp and essentially decoding the brain activity used for physical movement, are used to control the robotic limbs. This allows the user to control the part directly.

Robotic transtibial prostheses

Research on robotic legs has made some advancement over time, allowing exact movement and control. Researchers at the Rehabilitation Institute of Chicago announced in September 2013 that they had developed a robotic leg that translates neural impulses from the user's thigh muscles into movement, the first prosthetic leg to do so. It is currently in testing. Hugh Herr, head of the biomechatronics group at MIT's Media Lab, developed a robotic transtibial leg (PowerFoot BiOM). The Icelandic company Össur has also created a robotic transtibial leg with a motorized ankle that moves through algorithms and sensors that automatically adjust the angle of the foot during different points in its wearer's stride. There are also brain-controlled bionic legs that allow an individual to move their limbs with a wireless transmitter.

Prosthesis design

The main goal of a robotic prosthesis is to provide active actuation during gait to improve the biomechanics of gait, including, among other things, stability, symmetry, and energy expenditure for amputees. There are several powered prosthetic legs currently on the market, including fully powered legs, in which actuators directly drive the joints, and semi-active legs, which use small amounts of energy and a small actuator to change the mechanical properties of the leg but do not inject net positive energy into gait. Specific examples include the emPOWER from BionX, the Proprio Foot from Ossur, and the Elan Foot from Endolite. Various research groups have also experimented with robotic legs over the last decade. Central issues being researched include designing the behavior of the device during stance and swing phases, recognizing the current ambulation task, and various mechanical design problems such as robustness, weight, battery life/efficiency, and noise level. In addition, scientists from Stanford University and Seoul National University have developed an artificial nerve system that will help prosthetic limbs feel. This synthetic nerve system enables prosthetic limbs to sense braille, feel the sense of touch and respond to the environment.

Use of recycled materials

Prosthetics are being made from recycled plastic bottles and lids around the world.

Direct bone attachment and osseointegration

Most prostheses are attached to the exterior of the body in a non-permanent way. The stump-and-socket method can cause significant pain for the person, which is why direct bone attachment has been explored extensively. Osseointegration is a method of attaching the artificial limb to the body by means of a prosthetic implant. This method is also sometimes referred to as exoprosthesis (attaching an artificial limb to the bone) or endo-exoprosthesis. Endoprostheses are prosthetic joint implants which remain wholly inside the body, such as knee and hip replacement implants. The method works by inserting a titanium bolt into the bone at the end of the stump.
After several months, the bone attaches itself to the titanium bolt and an abutment is attached to the bolt. The abutment extends out of the stump, and the (removable) artificial limb is then attached to the abutment. Some of the benefits of this method include the following:
Better muscle control of the prosthetic.
The ability to wear the prosthetic for an extended period of time; with the stump-and-socket method this is not possible.
The ability for transfemoral amputees to drive a car.
The main disadvantage of this method is that amputees with direct bone attachment cannot subject the limb to large impacts, such as those experienced during jogging, because of the potential for the bone to break.

Cosmesis

Cosmetic prostheses have long been used to disguise injuries and disfigurements. With advances in modern technology, cosmesis, the creation of lifelike limbs made from silicone or PVC, has become possible. Such prosthetics, including artificial hands, can now be designed to simulate the appearance of real hands, complete with freckles, veins, hair, fingerprints and even tattoos. Custom-made cosmeses are generally more expensive (costing thousands of U.S. dollars, depending on the level of detail), while standard cosmeses come premade in a variety of sizes, although they are often not as realistic as their custom-made counterparts. Another option is the custom-made silicone cover, which can be made to match a person's skin tone but not details such as freckles or wrinkles. Cosmeses are attached to the body in any number of ways, using an adhesive, suction, form-fitting, stretchable skin, or a skin sleeve.

Cognition

Unlike neuromotor prostheses, neurocognitive prostheses would sense or modulate neural function in order to physically reconstitute or augment cognitive processes such as executive function, attention, language, and memory. No neurocognitive prostheses are currently available, but the development of implantable neurocognitive brain-computer interfaces has been proposed to help treat conditions such as stroke, traumatic brain injury, cerebral palsy, autism, and Alzheimer's disease. The recent field of Assistive Technology for Cognition concerns the development of technologies to augment human cognition. Scheduling devices such as Neuropage remind users with memory impairments when to perform certain activities, such as visiting the doctor. Micro-prompting devices such as PEAT, AbleLink and Guide have been used to aid users with memory and executive function problems in performing activities of daily living.

Prosthetic enhancement

In addition to the standard artificial limb for everyday use, many amputees or congenital patients have special limbs and devices to aid in participation in sports and recreational activities. Within science fiction and, more recently, within the scientific community, there has been consideration given to using advanced prostheses to replace healthy body parts with artificial mechanisms and systems to improve function. The morality and desirability of such technologies are being debated by transhumanists, other ethicists, and others in general. Body parts such as legs, arms, hands, feet, and others can be replaced. The first experiment with a healthy individual appears to have been that by the British scientist Kevin Warwick. In 2002, an implant was interfaced directly into Warwick's nervous system. The electrode array, which contained around a hundred electrodes, was placed in the median nerve.
The signals produced were detailed enough that a robot arm was able to mimic the actions of Warwick's own arm and provide a form of touch feedback, again via the implant. The DEKA company of Dean Kamen developed the "Luke arm", an advanced nerve-controlled prosthetic. Clinical trials began in 2008, with FDA approval in 2014 and commercial manufacturing by the Universal Instruments Corporation expected in 2017. The price offered at retail by Mobius Bionics is expected to be around $100,000. Further research in April 2019 has produced improvements in the function and comfort of 3D-printed personalized wearable systems. Instead of manual integration after printing, integrating electronic sensors at the intersection between the prosthetic and the wearer's tissue can gather information such as pressure across the wearer's tissue, which can help improve further iterations of these types of prosthetic.

Oscar Pistorius

In early 2008, Oscar Pistorius, the "Blade Runner" of South Africa, was briefly ruled ineligible to compete in the 2008 Summer Olympics because his transtibial prosthesis limbs were said to give him an unfair advantage over runners with natural ankles. One researcher found that his limbs used twenty-five percent less energy than those of a non-disabled runner moving at the same speed. This ruling was overturned on appeal, with the appellate court stating that the overall set of advantages and disadvantages of Pistorius' limbs had not been considered. Pistorius did not qualify for the South African team for the Olympics, but went on to sweep the 2008 Summer Paralympics, and was ruled eligible to qualify for any future Olympics. He qualified for the 2011 World Championship in South Korea and reached the semi-final, where he finished last on time; he was 14th in the first round, and his personal best at 400 m would have given him 5th place in the final. At the 2012 Summer Olympics in London, Pistorius became the first amputee runner to compete at an Olympic Games. He ran in the 400 metres semi-finals and the 4 × 400 metres relay final. He also competed in 5 events at the 2012 Summer Paralympics in London.

Design considerations

There are multiple factors to consider when designing a transtibial prosthesis. Manufacturers must make choices about their priorities regarding these factors.

Performance

Nonetheless, there are certain elements of socket and foot mechanics that are invaluable for the athlete, and these are the focus of today's high-tech prosthetics companies:
Fit – athletic/active amputees, or those with bony residua, may require a carefully detailed socket fit; less-active patients may be comfortable with a "total contact" fit and gel liner
Energy storage and return – storage of energy acquired through ground contact and utilization of that stored energy for propulsion
Energy absorption – minimizing the effect of high impact on the musculoskeletal system
Ground compliance – stability independent of terrain type and angle
Rotation – ease of changing direction
Weight – maximizing comfort, balance and speed
Suspension – how the socket will join and fit to the limb

Other

The buyer is also concerned with numerous other factors:
Cosmetics
Cost
Ease of use
Size availability

Design for Prosthetics

A key feature of prosthetics and prosthetic design is the idea of "designing for disabilities." This might sound like a good idea in which people with disabilities can participate in equitable design, but this is unfortunately not the case.
The idea of designing for disabilities is problematic first because of the underlying message it carries: it tells amputees that there is a right and a wrong way to move and walk, and that if amputees adapt to their surrounding environment by their own means, then that is the wrong way. In addition, many of the people designing for disabilities are not themselves disabled. "Design for disability" of this kind takes disability as its object, with non-disabled designers feeling that they have adequately learned about the problem from their own simulation of the experience. Such simulation is misleading and does a disservice to disabled people, so the design that flows from it is highly problematic. Engaging in disability design should ideally involve team members who have the relevant disability and who are part of the communities that matter to the research. Otherwise, people who do not know the day-to-day personal experience of disability end up designing devices that fail to meet, or even hinder, the needs of people with actual disabilities. Cost and source freedom High-cost In the USA a typical prosthetic limb costs anywhere between $15,000 and $90,000, depending on the type of limb desired by the patient. With medical insurance, a patient will typically pay 10%–50% of the total cost of a prosthetic limb, while the insurance company covers the rest; the percentage the patient pays varies with the type of insurance plan as well as the limb requested. In the United Kingdom, much of Europe, Australia and New Zealand the entire cost of prosthetic limbs is met by state funding or statutory insurance. For example, in Australia prostheses are fully funded by state schemes in the case of amputation due to disease, and by workers' compensation or traffic injury insurance in the case of most traumatic amputations. The National Disability Insurance Scheme, which is being rolled out nationally between 2017 and 2020, also pays for prostheses. Transradial (below-the-elbow) and transtibial (below-the-knee) prostheses typically cost between US$6,000 and $8,000, while transfemoral (above-the-knee) and transhumeral (above-the-elbow) prosthetics cost approximately twice as much, with a range of $10,000 to $15,000, and can sometimes reach $35,000. The cost of an artificial limb recurs, because a limb typically needs to be replaced every 3–4 years due to the wear and tear of everyday use. In addition, if the socket develops fit issues, it must be replaced within several months of the onset of pain. If height is an issue, components such as pylons can be changed. Not only does the patient need to pay for multiple prosthetic limbs, but they also need to pay for the physical and occupational therapy that comes with adapting to living with an artificial limb. Unlike the recurring cost of the prosthetic limbs, the patient will typically only pay $2,000 to $5,000 for therapy during the first year or two of living as an amputee. Once the patient is strong and comfortable with their new limb, they will not be required to continue therapy. Over a lifetime, it is projected that a typical amputee will go through $1.4 million worth of treatment, including surgeries, prosthetics, and therapies. Low-cost Low-cost above-knee prostheses often provide only basic structural support with limited function.
This function is often achieved with crude, non-articulating, unstable, or manually locking knee joints. A limited number of organizations, such as the International Committee of the Red Cross (ICRC), create devices for developing countries. Their device, manufactured by CR Equipments, is a single-axis, manually operated locking polymer prosthetic knee joint. (Table: list of knee joint technologies based on the literature review.) A plan for a low-cost artificial leg, designed by Sébastien Dubois, was featured at the 2007 International Design Exhibition and award show in Copenhagen, Denmark, where it won the Index: Award. The design would make it possible to produce an energy-return prosthetic leg for US$8.00, composed primarily of fiberglass. Prior to the 1980s, foot prostheses merely restored basic walking capabilities. These early devices can be characterized as a simple artificial attachment connecting the residual limb to the ground. The introduction of the Seattle Foot (Seattle Limb Systems) in 1981 revolutionized the field, bringing the concept of an Energy Storing Prosthetic Foot (ESPF) to the fore. Other companies soon followed suit, and before long there were multiple models of energy-storing prostheses on the market. Each model utilized some variation of a compressible heel: the heel is compressed during initial ground contact, storing energy which is then returned during the latter phase of ground contact to help propel the body forward. Since then, the foot prosthetics industry has been dominated by steady, small improvements in performance, comfort, and marketability. With 3D printers, it is possible to manufacture a single product without metal molds, so costs can be drastically reduced. The Jaipur foot, an artificial limb from Jaipur, India, costs about US$40. Open-source robotic prosthesis There is currently an open-design prosthetics forum known as the Open Prosthetics Project. The group employs collaborators and volunteers to advance prosthetics technology while attempting to lower the cost of these necessary devices. Open Bionics is a company that is developing open-source robotic prosthetic hands. It uses 3D printing to manufacture the devices, with the aim of lowering cost, and low-cost 3D scanners to fit them to the residual limb of a specific patient. This use of 3D printing also allows for more personalized designs, such as the "Hero Arm", which can incorporate the user's favourite colours, textures, and even aesthetics modelled on superheroes or characters from Star Wars. A review study on a wide range of printed prosthetic hands found that 3D printing technology holds promise for individualised prosthesis design and is cheaper than commercial prostheses available on the market, but is more expensive than mass-production processes such as injection molding. The same study also found that evidence on the functionality, durability and user acceptance of 3D-printed hand prostheses is still lacking. Low-cost prosthetics for children In the USA, an estimated 32,500 children (<21 years) have had a major paediatric amputation, with 5,525 new cases each year, of which 3,315 are congenital. Carr et al. (1998) investigated amputations caused by landmines among children (<14 years) in Afghanistan, Bosnia and Herzegovina, Cambodia and Mozambique, showing estimates of 4.7, 0.19, 1.11 and 0.67 per 1,000 children, respectively.
Mohan (1986) reported a total of 424,000 amputees in India (23,500 annually), of whom 10.3% had an onset of disability below the age of 14, amounting to about 43,700 limb-deficient children in India alone. Few low-cost solutions have been created specially for children. Examples of low-cost prosthetic devices include: Pole and crutch This hand-held pole with a leather support band or platform for the limb is one of the simplest and cheapest solutions found. It serves well as a short-term solution, but is prone to rapid contracture formation if the limb is not stretched daily through a series of range-of-motion (RoM) sets. Bamboo, PVC or plaster limbs This similarly simple solution comprises a plaster socket with a bamboo or PVC pipe at the bottom, optionally attached to a prosthetic foot. This solution prevents contractures because the knee is moved through its full RoM. The David Werner Collection, an online database for the assistance of disabled village children, provides manuals for the production of these solutions. Adjustable bicycle limb This solution is built using a bicycle seat post turned upside down as a foot, providing flexibility and length adjustability. It is a very cheap solution, using locally available materials. Sathi Limb An endoskeletal modular lower limb from India, which uses thermoplastic parts. Its main advantages are its low weight and adaptability. Monolimb Monolimbs are non-modular prostheses and thus require a more experienced prosthetist for correct fitting, because alignment can barely be changed after production. However, their durability is on average better than that of low-cost modular solutions. Cultural and social theory perspectives A number of theorists have explored the meaning and implications of prosthetic extension of the body. Elizabeth Grosz writes, "Creatures use tools, ornaments, and appliances to augment their bodily capacities. Are their bodies lacking something, which they need to replace with artificial or substitute organs?...Or conversely, should prostheses be understood, in terms of aesthetic reorganization and proliferation, as the consequence of an inventiveness that functions beyond and perhaps in defiance of pragmatic need?" Elaine Scarry argues that every artifact recreates and extends the body: chairs supplement the skeleton, tools append the hands, clothing augments the skin. In Scarry's thinking, "furniture and houses are neither more nor less interior to the human body than the food it absorbs, nor are they fundamentally different from such sophisticated prosthetics as artificial lungs, eyes and kidneys. The consumption of manufactured things turns the body inside out, opening it up to and as the culture of objects." Mark Wigley, a professor of architecture, continues this line of thinking about how architecture supplements our natural capabilities, and argues that "a blurring of identity is produced by all prostheses." Some of this work relies on Freud's earlier characterization of man's relation to objects as one of extension. Negative social implications Prosthetics play a vital role in how a person perceives themselves and how other people perceive them. The ability to conceal the use of a prosthesis enabled study participants to ward off social stigmatization, which in turn enabled their social integration and reduced the emotional problems surrounding their disability. People who lose a limb first have to deal with the emotional consequences of losing that limb.
Regardless of the reason for amputation, whether due to traumatic causes or as a consequence of illness, emotional shock exists. Its intensity may be smaller or larger depending on a variety of factors such as patient age, medical culture, and medical cause. In the research, participants' reports of their amputation were loaded with drama: the first emotional response to amputation was one of despair, a severe sense of self-collapse, something almost unbearable. Emotional factors are just one part of the social implications. Many people who lose a limb experience considerable anxiety surrounding prosthetics and their limbs. After surgery, for an extended period of time, patients interviewed in a study indexed by the National Library of Medicine noticed the appearance and growth of anxiety, and many negative thoughts invaded their minds. Projections about the future were grim, marked by sadness, helplessness, and even despair. Existential uncertainty, lack of control, and further anticipated losses in one's life due to amputation were the primary causes of anxiety and, consequently, of rumination and insomnia. Losing a limb and adapting to a prosthesis can also bring other responses, including anger and regret. The amputation of a limb is associated not only with physical loss and change in body image but also with an abrupt severing of one's sense of continuity. For participants whose amputation was the result of physical trauma, the event is often experienced as a transgression and can lead to frustration and anger. Ethical concerns There are also many ethical concerns about how prosthetics are developed and produced. A wide range of ethical issues arise in connection with experiments on, and clinical use of, sensory prostheses: animal experimentation; informed consent, for instance in patients with locked-in syndrome that may be alleviated with a sensory prosthesis; and unrealistic expectations among research subjects testing new devices. How a prosthesis comes to exist, and how its usability is tested, are major concerns in the medical world. Although the announcement of a new prosthetic design brings many benefits, the process by which the device reached that point leads some to question the ethics of prosthetics. Debates There is also debate within the amputee community about whether to wear prosthetics at all, sparked by the question of whether prosthetics help in day-to-day living or make it harder. Many people have adapted to the loss of a limb by their own means and do not need a prosthesis in their life. Not all amputees will wear a prosthesis. In a 2011 national survey of Australian amputees, Limbs 4 Life found that 7 percent of amputees do not wear a prosthesis, and in another Australian hospital study this number was closer to 20 percent. Many people report being uncomfortable in prostheses and not wanting to wear them, even reporting that wearing a prosthesis is more cumbersome than not having one at all. These debates are natural within the amputee community and shed light on the issues its members face.
Notable users of prosthetic devices Henry William Paget, 1st Marquess of Anglesey (1768–1854), whose leg was amputated at the Battle of Waterloo Marie Moentmann (1900–74), child survivor of industrial accident Terry Fox (1958–81), Canadian athlete, humanitarian, and cancer research activist Oscar Pistorius (born 1986), South African former professional sprinter Harold Russell (1914–2002), WWII veteran, Academy Award-winning actor See also Artificial heart Bionics Capua Leg Cybernetics Cyborg Robotic arm Transhumanism Whole brain emulation References Citations Sources 'Biomechanics of running: from faulty movement patterns come injury.' Sports Injury Bulletin. Edelstein, J. E. Prosthetic feet. State of the Art. Physical Therapy 68(12) Dec 1988: 1874–1881. Gailey, Robert. The Biomechanics of Amputee Running. October 2002. External links Afghan amputees tell their stories at Texas gathering, Fayetteville Observer Can modern prosthetics actually help reclaim the sense of touch?, PBS Newshour A hand for Rick, Fayetteville Observer What is prosthesis, prosthetic limb and its various component I have one of the most advanced prosthetic arms in the world – and I hate it by Britt H. Young A systematic review of randomised controlled trials assessing effectiveness of prosthetic and orthotic interventions Biological engineering Biomedical engineering Egyptian inventions Iranian inventions Robotics
Prosthesis
[ "Engineering", "Biology" ]
14,630
[ "Biological engineering", "Biomedical engineering", "Automation", "Robotics", "Medical technology" ]
72,827
https://en.wikipedia.org/wiki/Cauchy%27s%20integral%20formula
In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function. Cauchy's formula shows that, in complex analysis, "differentiation is equivalent to integration": complex differentiation, like integration, behaves well under uniform limits – a result that does not hold in real analysis. Theorem Let be an open subset of the complex plane , and suppose the closed disk defined as is completely contained in . Let be a holomorphic function, and let be the circle, oriented counterclockwise, forming the boundary of . Then for every in the interior of , The proof of this statement uses the Cauchy integral theorem and like that theorem, it only requires to be complex differentiable. Since can be expanded as a power series in the variable it follows that holomorphic functions are analytic, i.e. they can be expanded as convergent power series. In particular is actually infinitely differentiable, with This formula is sometimes referred to as Cauchy's differentiation formula. The theorem stated above can be generalized. The circle can be replaced by any closed rectifiable curve in which has winding number one about . Moreover, as for the Cauchy integral theorem, it is sufficient to require that be holomorphic in the open region enclosed by the path and continuous on its closure. Note that not every continuous function on the boundary can be used to produce a function inside the boundary that fits the given boundary function. For instance, if we put the function , defined for , into the Cauchy integral formula, we get zero for all points inside the circle. In fact, giving just the real part on the boundary of a holomorphic function is enough to determine the function up to an imaginary constant — there is only one imaginary part on the boundary that corresponds to the given real part, up to addition of a constant. We can use a combination of a Möbius transformation and the Stieltjes inversion formula to construct the holomorphic function from the real part on the boundary. For example, the function has real part . On the unit circle this can be written . Using the Möbius transformation and the Stieltjes formula we construct the function inside the circle. The term makes no contribution, and we find the function . This has the correct real part on the boundary, and also gives us the corresponding imaginary part, but off by a constant, namely . Proof sketch By using the Cauchy integral theorem, one can show that the integral over (or the closed rectifiable curve) is equal to the same integral taken over an arbitrarily small circle around . Since is continuous, we can choose a circle small enough on which is arbitrarily close to . On the other hand, the integral over any circle centered at . This can be calculated directly via a parametrization (integration by substitution) where and is the radius of the circle. Letting gives the desired estimate Example Let and let be the contour described by (the circle of radius 2). To find the integral of around the contour , we need to know the singularities of . Observe that we can rewrite as follows: where and . Thus, has poles at and . The moduli of these points are less than 2 and thus lie inside the contour. 
This integral can be split into two smaller integrals by Cauchy–Goursat theorem; that is, we can express the integral around the contour as the sum of the integral around and where the contour is a small circle around each pole. Call these contours around and around . Now, each of these smaller integrals can be evaluated by the Cauchy integral formula, but they first must be rewritten to apply the theorem. For the integral around , define as . This is analytic (since the contour does not contain the other singularity). We can simplify to be: and now Since the Cauchy integral formula says that: we can evaluate the integral as follows: Doing likewise for the other contour: we evaluate The integral around the original contour then is the sum of these two integrals: An elementary trick using partial fraction decomposition: Consequences The integral formula has broad applications. First, it implies that a function which is holomorphic in an open set is in fact infinitely differentiable there. Furthermore, it is an analytic function, meaning that it can be represented as a power series. The proof of this uses the dominated convergence theorem and the geometric series applied to The formula is also used to prove the residue theorem, which is a result for meromorphic functions, and a related result, the argument principle. It is known from Morera's theorem that the uniform limit of holomorphic functions is holomorphic. This can also be deduced from Cauchy's integral formula: indeed the formula also holds in the limit and the integrand, and hence the integral, can be expanded as a power series. In addition the Cauchy formulas for the higher order derivatives show that all these derivatives also converge uniformly. The analog of the Cauchy integral formula in real analysis is the Poisson integral formula for harmonic functions; many of the results for holomorphic functions carry over to this setting. No such results, however, are valid for more general classes of differentiable or real analytic functions. For instance, the existence of the first derivative of a real function need not imply the existence of higher order derivatives, nor in particular the analyticity of the function. Likewise, the uniform limit of a sequence of (real) differentiable functions may fail to be differentiable, or may be differentiable but with a derivative which is not the limit of the derivatives of the members of the sequence. Another consequence is that if is holomorphic in and then the coefficients satisfy Cauchy's estimate From Cauchy's estimate, one can easily deduce that every bounded entire function must be constant (which is Liouville's theorem). The formula can also be used to derive Gauss's Mean-Value Theorem, which states In other words, the average value of over the circle centered at with radius is . This can be calculated directly via a parametrization of the circle. Generalizations Smooth functions A version of Cauchy's integral formula is the Cauchy–Pompeiu formula, and holds for smooth functions as well, as it is based on Stokes' theorem. Let be a disc in and suppose that is a complex-valued function on the closure of . Then One may use this representation formula to solve the inhomogeneous Cauchy–Riemann equations in . Indeed, if is a function in , then a particular solution of the equation is a holomorphic function outside the support of . 
Moreover, if in an open set , for some (where ), then is also in and satisfies the equation The first conclusion is, succinctly, that the convolution of a compactly supported measure with the Cauchy kernel is a holomorphic function off the support of . Here denotes the principal value. The second conclusion asserts that the Cauchy kernel is a fundamental solution of the Cauchy–Riemann equations. Note that for smooth complex-valued functions of compact support on the generalized Cauchy integral formula simplifies to and is a restatement of the fact that, considered as a distribution, is a fundamental solution of the Cauchy–Riemann operator . The generalized Cauchy integral formula can be deduced for any bounded open region with boundary from this result and the formula for the distributional derivative of the characteristic function of : where the distribution on the right hand side denotes contour integration along . Now we can deduce the generalized Cauchy integral formula: Several variables In several complex variables, the Cauchy integral formula can be generalized to polydiscs. Let be the polydisc given as the Cartesian product of open discs : Suppose that is a holomorphic function in continuous on the closure of . Then where . In real algebras The Cauchy integral formula is generalizable to real vector spaces of two or more dimensions. The insight into this property comes from geometric algebra, where objects beyond scalars and vectors (such as planar bivectors and volumetric trivectors) are considered, and a proper generalization of Stokes' theorem. Geometric calculus defines a derivative operator under its geometric product — that is, for a -vector field , the derivative generally contains terms of grade and . For example, a vector field () generally has in its derivative a scalar part, the divergence (), and a bivector part, the curl (). This particular derivative operator has a Green's function: where is the surface area of a unit -ball in the space (that is, , the circumference of a circle with radius 1, and , the surface area of a sphere with radius 1). By definition of a Green's function, It is this useful property that can be used, in conjunction with the generalized Stokes theorem: where, for an -dimensional vector space, is an -vector and is an -vector. The function can, in principle, be composed of any combination of multivectors. The proof of Cauchy's integral theorem for higher dimensional spaces relies on the using the generalized Stokes theorem on the quantity and use of the product rule: When , is called a monogenic function, the generalization of holomorphic functions to higher-dimensional spaces — indeed, it can be shown that the Cauchy–Riemann condition is just the two-dimensional expression of the monogenic condition. When that condition is met, the second term in the right-hand integral vanishes, leaving only where is that algebra's unit -vector, the pseudoscalar. The result is Thus, as in the two-dimensional (complex analysis) case, the value of an analytic (monogenic) function at a point can be found by an integral over the surface surrounding the point, and this is valid not only for scalar functions but vector and general multivector functions as well. 
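The basic formula also lends itself to a quick numerical sanity check: for a holomorphic f and a point a inside a positively oriented circle, the integral of f(z)/(z − a) around the circle, divided by 2πi, should reproduce f(a). The sketch below is an illustrative addition, not part of the article; the choice of f, the contour radius, and the evaluation point a are arbitrary example inputs.

```python
# Illustrative numerical check of Cauchy's integral formula (not from the article);
# the function f, contour radius, and evaluation point a are arbitrary choices.
import numpy as np

def cauchy_integral(f, a, center=0.0, radius=2.0, n=2000):
    """Approximate (1/(2*pi*i)) * integral of f(z)/(z - a) dz over the
    counterclockwise circle |z - center| = radius (trapezoidal rule in the angle)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)   # points on the contour
    dz_dt = 1j * radius * np.exp(1j * t)   # derivative of the parametrization
    integrand = f(z) / (z - a) * dz_dt
    # mean over t times 2*pi gives the contour integral; then divide by 2*pi*i
    return integrand.mean() / 1j

a = 0.5 + 0.3j                       # any point inside the contour |z| = 2
print(cauchy_integral(np.exp, a))    # numerically ~ exp(a)
print(np.exp(a))                     # exact value, agreeing to many digits
```

Because the integrand is smooth and periodic in the angle, the trapezoidal rule converges very rapidly, so even a modest number of sample points reproduces f(a) to near machine precision.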
See also Cauchy–Riemann equations Methods of contour integration Nachbin's theorem Morera's theorem Mittag-Leffler's theorem Green's function generalizes this idea to the non-linear setup Schwarz integral formula Parseval–Gutzmer formula Bochner–Martinelli formula Helffer–Sjöstrand formula Notes References External links Augustin-Louis Cauchy Theorems in complex analysis
Cauchy's integral formula
[ "Mathematics" ]
2,239
[ "Theorems in mathematical analysis", "Theorems in complex analysis" ]
72,839
https://en.wikipedia.org/wiki/Foucault%20pendulum
The Foucault pendulum or Foucault's pendulum is a simple device named after French physicist Léon Foucault, conceived as an experiment to demonstrate the Earth's rotation. If a long and heavy pendulum suspended from the high roof above a circular area is monitored over an extended period of time, its plane of oscillation appears to change spontaneously as the Earth makes its 24-hourly rotation. The pendulum was introduced in 1851 and was the first experiment to give simple, direct evidence of the Earth's rotation. Foucault followed up in 1852 with a gyroscope experiment to further demonstrate the Earth's rotation. Foucault pendulums today are popular displays in science museums and universities. History Foucault was inspired by observing a thin flexible rod on the axis of a lathe, which vibrated in the same plane despite the rotation of the supporting frame of the lathe. The first public exhibition of a Foucault pendulum took place in February 1851 in the Meridian of the Paris Observatory. A few weeks later, Foucault made his most famous pendulum when he suspended a 28-kilogram brass-coated lead bob on a 67-metre-long wire from the dome of the Panthéon, Paris. Because the latitude of its location was approximately 48°52′ N, the plane of the pendulum's swing made a full circle in approximately 31.8 hours, rotating clockwise approximately 11.3° per hour. The proper period of the pendulum was approximately 16.5 seconds, so with each oscillation the pendulum's swing plane rotates by about 0.05°. Foucault reported observing 2.3 mm of deflection on the edge of a pendulum every oscillation, which is achieved if the pendulum swing angle is 2.1°. Foucault explained his results in an 1851 paper entitled Physical demonstration of the Earth's rotational movement by means of the pendulum, published in the Comptes rendus de l'Académie des Sciences. He wrote that, at the North Pole: ...an oscillatory movement of the pendulum mass follows an arc of a circle whose plane is well known, and to which the inertia of matter ensures an unchanging position in space. If these oscillations continue for a certain time, the movement of the earth, which continues to rotate from west to east, will become sensitive in contrast to the immobility of the oscillation plane whose trace on the ground will seem animated by a movement consistent with the apparent movement of the celestial sphere; and if the oscillations could be perpetuated for twenty-four hours, the trace of their plane would then execute an entire revolution around the vertical projection of the point of suspension. The original bob used in 1851 at the Panthéon was moved in 1855 to the Conservatoire des Arts et Métiers in Paris. A second temporary installation was made for the 50th anniversary in 1902. During museum reconstruction in the 1990s, the original pendulum was temporarily displayed at the Panthéon (1995), but was later returned to the Musée des Arts et Métiers before it reopened in 2000. On April 6, 2010, the cable suspending the bob in the Musée des Arts et Métiers snapped, causing irreparable damage to the pendulum bob and to the marble flooring of the museum. The original, now damaged pendulum bob is displayed in a separate case adjacent to the current pendulum display. An exact copy of the original pendulum has been operating under the dome of the Panthéon, Paris since 1995. Mechanism At either the Geographic North Pole or the Geographic South Pole, the plane of oscillation of a pendulum remains fixed relative to the distant masses of the universe while Earth rotates underneath it, taking one sidereal day to complete a rotation.
So, relative to Earth, the plane of oscillation of a pendulum at the North Pole (viewed from above) undergoes a full clockwise rotation during one day; a pendulum at the South Pole rotates counterclockwise. When a Foucault pendulum is suspended at the equator, the plane of oscillation remains fixed relative to Earth. At other latitudes, the plane of oscillation precesses relative to Earth, but more slowly than at the pole; the angular speed, (measured in clockwise degrees per sidereal day), is proportional to the sine of the latitude, : where latitudes north and south of the equator are defined as positive and negative, respectively. A "pendulum day" is the time needed for the plane of a freely suspended Foucault pendulum to complete an apparent rotation about the local vertical. This is one sidereal day divided by the sine of the latitude. For example, a Foucault pendulum at 30° south latitude, viewed from above by an earthbound observer, rotates counterclockwise 360° in two days. Using enough wire length, the described circle can be wide enough that the tangential displacement along the measuring circle of between two oscillations can be visible by eye, rendering the Foucault pendulum a spectacular experiment: for example, the original Foucault pendulum in Panthéon moves circularly, with a 6-metre pendulum amplitude, by about 5 mm each period. A Foucault pendulum requires care to set up because imprecise construction can cause additional veering which masks the terrestrial effect. Heike Kamerlingh Onnes (Nobel laureate 1913) performed precise experiments and developed a fuller theory of the Foucault pendulum for his doctoral thesis (1879). He observed the pendulum to go over from linear to elliptic oscillation in an hour. By a perturbation analysis, he showed that geometrical imperfection of the system or elasticity of the support wire may cause a beat between two horizontal modes of oscillation. The initial launch of the pendulum is also critical; the traditional way to do this is to use a flame to burn through a thread which temporarily holds the bob in its starting position, thus avoiding unwanted sideways motion (see a detail of the launch at the 50th anniversary in 1902). Notably, veering of a pendulum was observed already in 1661 by Vincenzo Viviani, a disciple of Galileo, but there is no evidence that he connected the effect with the Earth's rotation; rather, he regarded it as a nuisance in his study that should be overcome with suspending the bob on two ropes instead of one. Air resistance damps the oscillation, so some Foucault pendulums in museums incorporate an electromagnetic or other drive to keep the bob swinging; others are restarted regularly, sometimes with a launching ceremony as an added attraction. Besides air resistance (the use of a heavy symmetrical bob is to reduce friction forces, mainly air resistance by a symmetrical and aerodynamic bob) the other main engineering problem in creating a 1-meter Foucault pendulum nowadays is said to be ensuring there is no preferred direction of swing. Related physical systems Many physical systems precess in a similar manner to a Foucault pendulum. As early as 1836, the Scottish mathematician Edward Sang contrived and explained the precession of a spinning top. In 1851, Charles Wheatstone described an apparatus that consists of a vibrating spring that is mounted on top of a disk so that it makes a fixed angle with the disk. The spring is struck so that it oscillates in a plane. 
When the disk is turned, the plane of oscillation changes just like the one of a Foucault pendulum at latitude . Similarly, consider a nonspinning, perfectly balanced bicycle wheel mounted on a disk so that its axis of rotation makes an angle with the disk. When the disk undergoes a full clockwise revolution, the bicycle wheel will not return to its original position, but will have undergone a net rotation of . Foucault-like precession is observed in a virtual system wherein a massless particle is constrained to remain on a rotating plane that is inclined with respect to the axis of rotation. Spin of a relativistic particle moving in a circular orbit precesses similar to the swing plane of Foucault pendulum. The relativistic velocity space in Minkowski spacetime can be treated as a sphere S3 in 4-dimensional Euclidean space with imaginary radius and imaginary timelike coordinate. Parallel transport of polarization vectors along such sphere gives rise to Thomas precession, which is analogous to the rotation of the swing plane of Foucault pendulum due to parallel transport along a sphere S2 in 3-dimensional Euclidean space. In physics, the evolution of such systems is determined by geometric phases. Mathematically they are understood through parallel transport. Absolute reference frame for pendulum The motion of a pendulum, such as the Foucault pendulum, is typically analyzed relative to an Inertial frame of reference, approximated by the "fixed stars." These stars, owing to their immense distance from Earth, exhibit negligible motion relative to one another over short timescales, making them a practical benchmark for physical calculations. While fixed stars are sufficient for physical analyses, the concept of an absolute reference frame introduces philosophical and theoretical considerations. Newtonian absolute space Isaac Newton proposed the existence of "absolute space," a universal, immovable reference frame independent of any material objects. In his Principia Mathematica, Newton described absolute space as the backdrop against which true motion occurs. This concept was criticized by later thinkers, such as Ernst Mach, who argued that motion should only be defined relative to other masses in the universe. Cosmic microwave background (CMB) The CMB, the remnant radiation from the Big Bang, provides a universal reference for cosmological observations. By measuring motion relative to the CMB, scientists can determine the velocity of celestial bodies, including Earth, relative to the universe's early state. This has led some to consider the CMB a modern analogue of an absolute reference frame. Mach's principle and distant masses Ernst Mach proposed that inertia arises from the interaction of an object with the distant masses in the universe. According to this view, the pendulum's frame of reference might be defined by the distribution of all matter in the cosmos, rather than an abstract absolute space. The "distant masses of the universe" play a crucial role in defining the inertial frame, suggesting that the pendulum's apparent motion might be influenced by the collective gravitational effect of these masses. This perspective aligns with Mach’s principle, emphasizing the interconnectedness of local and cosmic phenomena. However, the connection between Mach's principle and Einstein's general relativity remains unresolved. Einstein initially hoped to incorporate Mach's ideas but later acknowledged difficulties in doing so. 
General relativity and spacetime General relativity suggests that spacetime itself can serve as a reference frame. The pendulum’s motion might be understood as relative to the curvature of spacetime, which is influenced by nearby and distant masses. This view aligns with the concept of geodesics in curved spacetime. The Lense-Thirring effect, a prediction of general relativity, implies that massive rotating objects like Earth can slightly "drag" spacetime, which could affect the pendulum’s oscillation. This effect, though theoretically significant, is currently too small to measure with a Foucault pendulum. Equation formulation for the Foucault pendulum To model the Foucault pendulum, we consider a pendulum of length L and mass m, oscillating with small amplitudes. In a reference frame rotating with Earth at angular velocity Ω, the Coriolis force must be included. The equations of motion in the horizontal plane (x, y) are: where: is the natural angular frequency of the pendulum, is the latitude, is the acceleration due to gravity. These coupled differential equations describe the pendulum's motion, incorporating the Coriolis effect due to Earth's rotation. Precession rate calculation The precession rate of the pendulum’s oscillation plane depends on latitude. The angular precession rate is given by: where is Earth's angular rotation rate (approximately radians per second). Examples of precession periods The time for a full rotation of the pendulum’s plane is: Calculations for specific locations: Paris, France (latitude ): New York City, USA (latitude ): These calculations show that the pendulum's precession period varies with latitude, completing a full rotation more quickly at higher latitudes. Installations There are numerous Foucault pendulums at universities, science museums, and the like throughout the world. The United Nations General Assembly Building at the United Nations headquarters in New York City has one. The Oregon Convention Center pendulum is claimed to be the largest, its length approximately , however, there are larger ones listed in the article, such as the one in Gamow Tower at the University of Colorado of . There used to be much longer pendulums, such as the pendulum in Saint Isaac's Cathedral, Saint Petersburg, Russia. The experiment has also been carried out at the South Pole, where it was assumed that the rotation of the Earth would have maximum effect. A pendulum was installed in a six-story staircase of a new station under construction at the Amundsen-Scott South Pole Station. It had a length of and the bob weighed . The location was ideal: no moving air could disturb the pendulum. The researchers confirmed about 24 hours as the rotation period of the plane of oscillation. See also References Further reading External links Wolfe, Joe, "A derivation of the precession of the Foucault pendulum". "The Foucault Pendulum", derivation of the precession in polar coordinates. "The Foucault Pendulum" By Joe Wolfe, with film clip and animations. "Foucault's Pendulum" by Jens-Peer Kuska with Jeff Bryant, Wolfram Demonstrations Project: a computer model of the pendulum allowing manipulation of pendulum frequency, Earth rotation frequency, latitude, and time. "Webcam Kirchhoff-Institut für Physik, Universität Heidelberg". California academy of sciences, CA Foucault pendulum explanation, in friendly format Foucault pendulum model Exposition including a tabletop device that shows the Foucault effect in seconds. Foucault, M. 
L., Physical demonstration of the rotation of the Earth by means of the pendulum, Franklin Institute, 2000, retrieved 2007-10-31. Translation of his paper on Foucault pendulum. Pendolo nel Salone The Foucault Pendulum inside Palazzo della Ragione in Padova, Italy 1851 introductions 1851 in science Pendulums Physics experiments French inventions
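As a rough illustration of the sine-of-latitude law described in the mechanism and precession-rate sections above, the sketch below computes the apparent precession period of the swing plane at a few latitudes. This is an illustrative addition; the latitudes used are approximate values chosen for the example, not figures taken from the article.

```python
# Illustrative sketch of the sine-of-latitude law for Foucault-pendulum precession.
# Latitudes below are approximate example values.
import math

SIDEREAL_DAY_HOURS = 23.934  # one sidereal day, in hours

def precession_period_hours(latitude_deg):
    """Time for the swing plane to complete a full apparent rotation, in hours.
    Diverges at the equator, where the plane does not precess."""
    s = math.sin(math.radians(latitude_deg))
    if s == 0:
        return float("inf")
    return SIDEREAL_DAY_HOURS / abs(s)

for place, lat in [("North Pole", 90.0), ("Paris", 48.85), ("New York City", 40.7)]:
    print(f"{place:>13s}: {precession_period_hours(lat):6.1f} h per full rotation")
# North Pole: ~23.9 h, Paris: ~31.8 h, New York City: ~36.7 h
```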
Foucault pendulum
[ "Physics" ]
2,979
[ "Experimental physics", "Physics experiments" ]
12,202,029
https://en.wikipedia.org/wiki/Gibbons%E2%80%93Hawking%20effect
In the theory of general relativity, the Gibbons–Hawking effect is the statement that a temperature can be associated with each solution of the Einstein field equations that contains a causal horizon. It is named after Gary Gibbons and Stephen Hawking. The term "causal horizon" does not necessarily refer to event horizons only, but could also stand for the horizon of the visible universe, for instance. For example, Schwarzschild spacetime contains an event horizon, and so a temperature can be associated with it. In the case of Schwarzschild spacetime this is the temperature $T$ of a black hole of mass $M$, satisfying $T \propto M^{-1}$ (explicitly, $T = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}}$; see also Hawking radiation). A second example is de Sitter space, which also contains an event horizon. In this case the temperature $T$ is proportional to the Hubble parameter $H$, i.e. $T = \frac{\hbar H}{2 \pi k_{\mathrm{B}}}$. See also Hawking radiation Gibbons–Hawking space References General relativity Stephen Hawking
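As a rough numerical illustration of the two temperatures quoted above, the following sketch evaluates them in SI units. It is an added example; the one-solar-mass black hole and the Hubble-parameter value are example inputs, not figures from the article.

```python
# Illustrative evaluation of the horizon temperatures above, in SI units.
# The one-solar-mass black hole and the Hubble value are example inputs.
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
kB   = 1.380649e-23      # J / K

def schwarzschild_temperature(M_kg):
    """Hawking temperature T = hbar c^3 / (8 pi G M kB) of a Schwarzschild horizon."""
    return hbar * c**3 / (8 * math.pi * G * M_kg * kB)

def de_sitter_temperature(H_per_s):
    """Gibbons-Hawking temperature T = hbar H / (2 pi kB) of a de Sitter horizon."""
    return hbar * H_per_s / (2 * math.pi * kB)

print(schwarzschild_temperature(1.989e30))   # ~6e-8 K for a one-solar-mass black hole
print(de_sitter_temperature(2.3e-18))        # ~3e-30 K for H ~ 70 km/s/Mpc
```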
Gibbons–Hawking effect
[ "Physics" ]
178
[ "General relativity", "Relativity stubs", "Theory of relativity" ]
12,202,917
https://en.wikipedia.org/wiki/Taylor%20dispersion
Taylor dispersion or Taylor diffusion is an apparent or effective diffusion of some scalar field arising on the large scale due to the presence of a strong, confined, zero-mean shear flow on the small scale. Essentially, the shear acts to smear out the concentration distribution in the direction of the flow, enhancing the rate at which it spreads in that direction. The effect is named after the British fluid dynamicist G. I. Taylor, who described the shear-induced dispersion for large Peclet numbers. The analysis was later generalized by Rutherford Aris for arbitrary values of the Peclet number. The dispersion process is sometimes also referred to as the Taylor-Aris dispersion. The canonical example is that of a simple diffusing species in uniform Poiseuille flow through a uniform circular pipe with no-flux boundary conditions. Description We use z as an axial coordinate and r as the radial coordinate, and assume axisymmetry. The pipe has radius a, and the fluid velocity is: The concentration of the diffusing species is denoted c and its diffusivity is D. The concentration is assumed to be governed by the linear advection–diffusion equation: The concentration and velocity are written as the sum of a cross-sectional average (indicated by an overbar) and a deviation (indicated by a prime), thus: Under some assumptions (see below), it is possible to derive an equation just involving the average quantities: Observe how the effective diffusivity multiplying the derivative on the right hand side is greater than the original value of diffusion coefficient, D. The effective diffusivity is often written as: where is the Péclet number, based on the channel radius . The interesting result is that for large values of the Péclet number, the effective diffusivity is inversely proportional to the molecular diffusivity. The effect of Taylor dispersion is therefore more pronounced at higher Péclet numbers. In a frame moving with the mean velocity, i.e., by introducing , the dispersion process becomes a purely diffusion process, with diffusivity given by the effective diffusivity. The assumption is that for given , which is the case if the length scale in the direction is long enough to smooth the gradient in the direction. This can be translated into the requirement that the length scale in the direction satisfies: . Dispersion is also a function of channel geometry. An interesting phenomenon for example is that the dispersion of a flow between two infinite flat plates and a rectangular channel, which is infinitely thin, differs approximately 8.75 times. Here the very small side walls of the rectangular channel have an enormous influence on the dispersion. While the exact formula will not hold in more general circumstances, the mechanism still applies, and the effect is stronger at higher Péclet numbers. Taylor dispersion is of particular relevance for flows in porous media modelled by Darcy's law. Derivation One may derive the Taylor equation using method of averages, first introduced by Aris. The result can also be derived from large-time asymptotics, which is more intuitively clear. In the dimensional coordinate system , consider the fully-developed Poiseuille flow flowing inside a pipe of radius , where is the average velocity of the fluid. A species of concentration with some arbitrary distribution is to be released at somewhere inside the pipe at time . 
As long as this initial distribution is compact, for instance the species/solute is not released everywhere with finite concentration level, the species will be convected along the pipe with the mean velocity . In a frame moving with the mean velocity and scaled with following non-dimensional scales where is the time required for the species to diffuse in the radial direction, is the diffusion coefficient of the species and is the Peclet number, the governing equations are given by Thus in this moving frame, at times (in dimensional variables, ), the species will diffuse radially. It is clear then that when (in dimensional variables, ), diffusion in the radial direction will make the concentration uniform across the pipe, although however the species is still diffusing in the direction. Taylor dispersion quantifies this axial diffusion process for large . Suppose (i.e., times large in comparison with the radial diffusion time ), where is a small number. Then at these times, the concentration would spread to an axial extent . To quantify large-time behavior, the following rescalings can be introduced. The equation then becomes If pipe walls do not absorb or react with the species, then the boundary condition must be satisfied at . Due to symmetry, at . Since , the solution can be expanded in an asymptotic series, Substituting this series into the governing equation and collecting terms of different orders will lead to series of equations. At leading order, the equation obtained is Integrating this equation with boundary conditions defined before, one finds . At this order, is still an unknown function. This fact that is independent of is an expected result since as already said, at times , the radial diffusion will dominate first and make the concentration uniform across the pipe. Terms of order leads to the equation Integrating this equation with respect to using the boundary conditions leads to where is the value of at , an unknown function at this order. Terms of order leads to the equation This equation can also be integrated with respect to , but what is required is the solvability condition of the above equation. The solvability condition is obtained by multiplying the above equation by and integrating the whole equation from to . This is also the same as averaging the above equation over the radial direction. Using the boundary conditions and results obtained in the previous two orders, the solvability condition leads to This is the required diffusion equation. Going back to the laboratory frame and dimensional variables, the equation becomes By the way in which this equation is derived, it can be seen that this is valid for in which changes significantly over a length scale (or more precisely on a scale . At the same time scale , at any small length scale about some location that moves with the mean flow, say , i.e., on the length scale , the concentration is no longer independent of , but is given by Higher order asymptotics Integrating the equations obtained at the second order, we find where is an unknown at this order. Now collecting terms of order , we find The solvability condition of the above equation yields the governing equation for as follows References Other sources Mestel. J. Taylor dispersion — shear augmented diffusion, Lecture Handout for Course M4A33, Imperial College. Fluid mechanics Fluid dynamics
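For a circular pipe, the classical Taylor–Aris result takes the form D_eff = D(1 + Pe²/48) with Pe = Ua/D, where U is the cross-sectionally averaged speed and a the pipe radius. The sketch below is an illustrative addition: the formula is the standard textbook form assumed here, and the numerical values are example inputs, not figures from the article.

```python
# Sketch of the classical Taylor-Aris effective diffusivity for a circular pipe:
# D_eff = D * (1 + Pe^2 / 48), with Pe = U * a / D.  Example numbers are illustrative.
def effective_diffusivity(D, U, a):
    """Effective axial diffusivity for molecular diffusivity D, mean speed U, pipe radius a."""
    Pe = U * a / D
    return D * (1.0 + Pe**2 / 48.0)

# A small solute (D ~ 1e-9 m^2/s) carried at 1 mm/s through a tube of 0.5 mm radius:
D, U, a = 1e-9, 1e-3, 5e-4
print(effective_diffusivity(D, U, a))   # ~5e-6 m^2/s, thousands of times the molecular D
```

At large Péclet number the second term dominates, so the effective diffusivity scales as U²a²/(48D), reproducing the inverse dependence on the molecular diffusivity noted in the description above.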
Taylor dispersion
[ "Chemistry", "Engineering" ]
1,369
[ "Chemical engineering", "Civil engineering", "Piping", "Fluid mechanics", "Fluid dynamics" ]
12,203,588
https://en.wikipedia.org/wiki/Mitotic%20inhibitor
A mitotic inhibitor, microtubule inhibitor, or tubulin inhibitor, is a drug that inhibits mitosis, or cell division, and is used in treating cancer, gout, and nail fungus. These drugs disrupt microtubules, which are structures that pull the chromosomes apart when a cell divides. Mitotic inhibitors are used in cancer treatment, because cancer cells are able to grow through continuous division that eventually spread through the body (metastasize). Thus, cancer cells are more sensitive to inhibition of mitosis than normal cells. Mitotic inhibitors are also used in cytogenetics (the study of chromosomes), where they stop cell division at a stage where chromosomes can be easily examined. Mitotic inhibitors are derived from natural substances such as plant alkaloids, and prevent cells from undergoing mitosis by disrupting microtubule polymerization, thus preventing cancerous growth. Microtubules are long, ropelike proteins, long polymers made of smaller units (monomers) of the protein tubulin, that extend through the cell and move cellular components around. Microtubules are created during normal cell functions by assembling (polymerizing) tubulin components, and are disassembled when they are no longer needed. One of the important functions of microtubules is to move and separate chromosomes and other components of the cell for cell division (mitosis). Mitotic inhibitors interfere with the assembly and disassembly of tubulin into microtubule polymers. This interrupts cell division, usually during the mitosis (M) phase of the cell cycle when two sets of fully formed chromosomes are supposed to separate into daughter cells. Tubulin binding molecules have generated significant interest after the introduction of the taxanes into clinical oncology and the general use of the vinca alkaloids. Examples of mitotic inhibitors frequently used in the treatment of cancer include paclitaxel, docetaxel, vinblastine, vincristine, and vinorelbine. Colchicine and griseofulvin are mitotic inhibitors used in the treatment of gout and nail fungus, respectively. Microtubules Microtubules are the key components of the cytoskeleton of eukaryotic cells and have an important role in various cellular functions such as intracellular migration and transport, cell shape maintenance, polarity, cell signaling and mitosis. They play a critical role in cell division by their involvement in the movement and attachment of chromosomes during various stages of mitosis. Therefore, microtubule dynamics are an important target for the developing anti-cancer drugs. Structure Microtubules are composed of two globular protein subunits, α- and β-tubulin. These two subunits combine to form an α,β-heterodimer which then assembles in a filamentous tube-shaped structure. The tubulin hetero-dimers arrange themselves in a head to tail manner with the α-subunit of one dimer coming in contact with the β-subunit of the other. This arrangement results in the formation of long protein fibres called protofilaments. These protofilaments form the backbone of the hollow, cylindrical microtubule, which is about 25 nanometers in diameter and varies from 200 nanometers to 25 micrometers in length. About 12–13 protofilaments arrange themselves in parallel to form a C-shaped protein sheet, which then curls around to give a pipe-like structure called the microtubule. The head to tail arrangement of the hetero dimers gives polarity to the resulting microtubule, which has an α-subunit at one end and a β-subunit at the other end. 
The α-tubulin end is designated the minus (−) end, while the β-tubulin end is the plus (+) end. The microtubule grows from discrete assembly sites in the cell called microtubule organizing centers (MTOCs), which are networks of microtubule-associated proteins (MAPs). Two molecules of energy-rich guanosine triphosphate (GTP) are also important components of the microtubule structure. One molecule of GTP is tightly bound to the α-tubulin and is non-exchangeable, whereas the other GTP molecule is bound to β-tubulin and can be easily exchanged with guanosine diphosphate (GDP). The stability of the microtubule depends on whether the β-end is occupied by GTP or GDP: a microtubule with a GTP molecule at the β-end will be stable and continue to grow, whereas a microtubule with a GDP molecule at the β-end will be unstable and will depolymerise rapidly. Microtubule dynamics Microtubules are not static; they are highly dynamic polymers and exhibit two kinds of dynamic behavior: 'dynamic instability' and 'treadmilling'. Dynamic instability is a process in which the microtubule ends switch between periods of growth and shortening. The two ends are not equal; the α-tubulin-ringed minus (−) end is less dynamic, while the more dynamic β-tubulin-ringed plus (+) end grows and shortens more rapidly. Microtubules undergo long periods of slow lengthening, brief periods of rapid shortening, and also pauses in which there is neither growth nor shortening. Dynamic instability is characterized by four variables: the rate of microtubule growth; the rate of shortening; the frequency of transition from the growth or paused state to shortening (called a 'catastrophe'); and the frequency of transition from shortening to growth or pause (called a 'rescue'). The other dynamic behavior, called treadmilling, is the net growth of the microtubule at one end and the net shortening at the other end. It involves the intrinsic flow of tubulin subunits from the plus end to the minus end. Both dynamic behaviors are important, and a particular microtubule may exhibit primarily dynamic instability, treadmilling, or a mixture of both. Mechanism of action Agents which act as inhibitors of tubulin also act as inhibitors of cell division. A microtubule exists in a continuous dynamic state of growing and shortening by reversible association and dissociation of α/β-tubulin heterodimers at both ends. This dynamic behavior, and the resulting control over the length of the microtubule, is vital to the proper functioning of the mitotic spindle in mitosis, i.e., cell division. Microtubules are involved in different stages of the cell cycle. During the first stage, or prophase, the microtubules required for cell division begin to form and grow towards the newly formed chromosomes, forming a bundle of microtubules called the mitotic spindle. During prometaphase and metaphase this spindle attaches itself to the chromosomes at a particular point called the kinetochore and undergoes several growing and shortening periods in tune with the back-and-forth oscillations of the chromosomes. In anaphase, too, the microtubules attached to the chromosomes maintain a carefully regulated shortening and lengthening process. Thus a drug which can suppress microtubule dynamics can block the cell cycle and result in the death of the cells by apoptosis. Tubulin inhibitors thus act by interfering with the dynamics of the microtubule, i.e., growing (polymerization) and shortening (depolymerization).
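The four parameters above lend themselves to a minimal "two-state" simulation of dynamic instability. The sketch below is an illustrative addition; all rates and frequencies are made-up example values, not measured ones, and the model ignores treadmilling and GTP-cap chemistry.

```python
# Minimal two-state Monte Carlo sketch of dynamic instability, using the four
# parameters described above (growth rate, shortening rate, catastrophe frequency,
# rescue frequency).  All numerical values are illustrative, not measured ones.
import random

def simulate_length(t_end=600.0, dt=0.1,
                    v_grow=0.05, v_shrink=0.3,      # micrometres per second
                    f_cat=0.005, f_res=0.05):       # transitions per second
    length, growing, t = 5.0, True, 0.0
    trace = []
    while t < t_end:
        if growing:
            length += v_grow * dt
            if random.random() < f_cat * dt:        # catastrophe: growth -> shortening
                growing = False
        else:
            length = max(0.0, length - v_shrink * dt)
            if length == 0.0 or random.random() < f_res * dt:   # rescue (or regrowth from zero)
                growing = True
        trace.append((t, length))
        t += dt
    return trace

# Suppressing the dynamics (e.g. lowering both rates and f_cat, as tubulin-binding
# drugs effectively do) freezes the length trace - the behaviour exploited to stall
# the mitotic spindle.
print(simulate_length()[-1])
```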
One class of inhibitors operates by blocking the polymerization of tubulin into microtubules; these are called polymerization inhibitors and include the colchicine analogues and the vinca alkaloids. They decrease the microtubule polymer mass in the cells at high concentration and act as microtubule-destabilizing agents. The other class operates by inhibiting the depolymerization of polymerized tubulin and increases the microtubule polymer mass in the cells. These agents act as microtubule-stabilizing agents and are called depolymerization inhibitors, such as the paclitaxel analogues. These three classes of drugs seem to operate by slightly different mechanisms. Colchicine analogues block cell division by disrupting the microtubule. It has been reported that the β-subunit of tubulin is involved in colchicine binding. Colchicine binds to soluble tubulin to form a tubulin–colchicine (T-C) complex. This complex, along with normal tubulin, then undergoes polymerization into the microtubule; however, the presence of the T-C complex prevents further polymerization of the microtubule. The complex brings about a conformational change which blocks further addition of tubulin dimers and thereby prevents the growth of the microtubule. As the T-C complex slows the addition of new dimers, the microtubule disassembles due to structural imbalance or instability during the metaphase of mitosis. The vinca alkaloids bind to the β-subunit of tubulin dimers at a distinct region called the vinca-binding domain. They bind to tubulin rapidly, and this binding is reversible and independent of temperature (between 0 °C and 37 °C). In contrast to colchicine, vinca alkaloids bind to the microtubule directly; they do not first form a complex with soluble tubulin, nor do they copolymerize to form the microtubule, yet they are capable of bringing about a conformational change in tubulin in connection with tubulin self-association. Vinca alkaloids bind to tubulin with high affinity at the microtubule ends but with low affinity at the tubulin sites along the sides of the microtubule cylinder. The binding of these drugs at the high-affinity sites results in strong kinetic suppression of tubulin exchange even at low drug concentration, while their binding to the low-affinity sites at relatively high drug concentration depolymerizes microtubules. In contrast to colchicine and vinca alkaloids, paclitaxel enhances microtubule polymerization, promoting both the nucleation and elongation phases of the polymerization reaction, and it reduces the critical tubulin subunit concentration (i.e., the soluble tubulin concentration at steady state). Microtubules polymerized in the presence of paclitaxel are extremely stable. The binding mechanism of paclitaxel mimics that of the GTP nucleotide, albeit with some important differences: GTP binds at one end of the tubulin dimer, keeping contact with the next dimer along each protofilament, while paclitaxel binds to one side of β-tubulin, keeping contact with the next protofilament; GTP binds to unassembled tubulin dimers, whereas paclitaxel binding sites are located only in assembled tubulin. The hydrolysis of GTP permits the disassembly and regulation of the microtubule system, whereas the activation of tubulin by paclitaxel results in permanent stabilization of the microtubule. Thus the suppression of microtubule dynamics has been described as the main cause of the inhibition of cell division and of tumor cell death in paclitaxel-treated cells.
Structure activity relationship (SAR) Colchicine is one of the oldest known antimitotic drugs and in the past years much research has been done in order to isolate or develop compounds having similar structure but high activity and less toxicity. This resulted in the discovery of a number of colchicine analogues. The structure of colchicine is made up of three rings, a trimethoxy benzene ring (ring A), a methoxy tropone ring (ring C) and a seven-membered ring (ring B) with an acetamido group located at its C-7 position. The trimethoxy phenyl group of colchicine not only helps in stabilizing the tubulin-colchicine complex but is also important for antitubulin activity in conjunction with the ring C. The 3-methoxy group increased the binding ability whereas the 1-methoxy group helped in attaining the correct conformation of the molecule. The stability of the tropone ring and the position of the methoxy and carbonyl group are crucial for the binding ability of the compound. The 10-methoxy group can be replaced with halogen, alkyl, alkoxy or amino groups without affecting tubulin binding affinity, while bulky substituents reduce the activity. Ring B when expanded showed reduced activity, however the ring and its C-7 side chain is thought to affect the conformation of the colchicine analogues rather than their tubulin binding ability. Substitution at C-5 resulted in loss of activity whereas attachment of annulated heterocyclic ring systems to ring B resulted in highly potent compound. Paclitaxel has achieved great success as an anti-cancer drug, yet there has been continuous effort to improve its efficacy and develop analogues which are more active and have greater bioavailability and specificity. The importance of C-13 substituted phenylisoserine side chain to bioactivity of paclitaxel has been known for a long time. Several replacements at the C-3' substitution have been tested. Replacement of the C-3' phenyl group with alkyl or alkyneyl groups greatly enhanced the activity, and with CF3 group at that position in combination with modification of the 10-Ac with other acyl groups increased the activity several times. Another modification of C-3' with cyclopropane and epoxide moieties were also found to be potent. Most of the analogues without ring A were found to be much less active than paclitaxel itself. The analogues with amide side chain at C-13 are less active than their ester counterpart. Also deoxygenation at position 1 showed reduced activity. Preparation of 10-α-spiro epoxide and its 7-MOM ether gave compounds having comparable cytotoxicity and tubulin assembly activity as that of paclitaxel. Substitution with C-6-α-OH and C-6-β-OH gave analogues which were equipotent to paclitaxel in tubulin assembly assay. Finally the oxetane ring is found to play an important role during interaction with tubulin. Vinblastine is a highly potent drug which also has serious side effects especially on the neurological system. Therefore, new synthetic analogues were developed with the goal of obtaining more efficient and less toxic drugs. The stereochemical configurations at C-20', C-16' and C-14' in the velbanamine portion are critical and inversion leads to loss of activity. The C-16' carboxymethyl group is important for activity since decarboxylated dimer is inactive. Structural variation at C-15'- C-20' in the velbanamine ring is well tolerated. The upper skeletal modification of vinblastine gave vinorelbine which shows comparable activity as that of vinblastine. 
Another analogue prepared was the difluoro derivative of vinorelbine which showed improved in vivo antitumor activity. It was discovered that fluorination at C-19' position of vinorelbine dramatically increased the in vivo activity. Most of the SAR studies involve the vindoline portion of bis-indole alkaloids because modification at C-16 and C-17 offers good opportunities for developing new analogues. The replacement of the ester group with an amide group at the C-16 resulted in the development of vindesine. Similarly replacement of the acetyl group at C-16 with L-trp-OC2H5, d-Ala(P)-(OC2H5)2, L-Ala(P)-(OC2H5)2 and I-Vla(P)-(OC2H5)2 gave rise to new analogues having anti- tubulin activity. Also it was found that the vindoline's indole methyl group is a useful position to functionalize potentially and develop new, potent vinblastine derivatives. A new series of semi-synthetic C-16 -spiro-oxazolidine-1,3-diones prepared from 17-deacetyl vinblastine showed good anti-tubulin activity and lower cytotoxicity. Vinglycinate a glycinate prodrug derived from the C-17-OH group of vinblastine showed similar antitumor activity and toxicity as that of vinblastine. Use in cytogenetics Cytogenetics, the study of chromosomal material by analysis of G-Banded chromosomes, uses mitotic inhibitors extensively. In order to prepare a slide for cytogenetic study, a mitotic inhibitor is added to the cells being studied. This stops the cells during mitosis, while the chromosomes are still visible. Once the cells are centrifuged and placed in a hypotonic solution, they swell, spreading the chromosomes. After preparation, the chromosomes of the cells can be viewed under a microscope to have the banding patterns of the chromosomes examined. This experiment is crucial to many forms of cancer research. Tubulin binding drugs Tubulin binding molecules differ from the other anticancer drugs in their mode of action because they target the mitotic spindle and not the DNA. Tubulin binding drugs have been classified on the basis of their mode of action and binding site as: I. Tubulin depolymerization inhibitors a) Paclitaxel site ligands, includes the paclitaxel, epothilone, docetaxel, discodermolide etc. II. Tubulin polymerization inhibitors a) Colchicine binding site, includes the colchicine, combrestatin, 2-methoxyestradiol, methoxy benzenesulfonamides (E7010) etc. b) Vinca alkaloids binding site, includes vinblastine, vincristine, vinorelbine, vinflunine, dolastatins, halichondrins, hemiasterlins, cryptophysin 52, etc. Specific agents Taxanes Taxanes are complex terpenes produced by the plants of the genus Taxus (yews). Originally derived from the Pacific yew tree, they are now synthesized artificially. Their principal mechanism is the disruption of the cell's microtubule function by stabilizing microtubule formation. Microtubules are essential to mitotic reproduction, so through the inactivation of the microtubule function of a cell, taxanes inhibit cell division. Paclitaxel—used to treat lung cancer, ovarian cancer, breast cancer, and advanced forms of Kaposi's sarcoma. Docetaxel—used to treat breast, ovarian, and non-small cell lung cancer. Vinca alkaloids Vinca alkaloids are amines produced by the hallucinogenic plant Catharanthus roseus (Madagascar Periwinkle). Vinca alkaloids inhibit microtubule polymerization. Vinblastine—used to treat leukaemia, Hodgkin's lymphoma, non-small cell lung cancer, breast cancer and testicular cancer. It is also a component in a large number of chemotherapy regimens. 
Vinblastine and vincristine were isolated from the Madagascar periwinkle Catharanthus roseus, traditionally used to treat diabetes. In fact it has been used for centuries throughout the world to treat all kinds of ailments from wasp stings in India, to eye infections in the Caribbean. In the 1950s researchers began to analyse the plant and discovered that it contained over 70 alkaloids. Some were found to lower blood sugar levels and others to act as hemostatics. The most interesting thing was that vinblastine and vincristine, were found to lower the number of white cells in blood. A high number of white cells in the blood indicates leukemia, so a new anti-cancer drug had been discovered. These two alkaloids bind to tubulin to prevent the cell from making the spindles that it needs to be able to divide. This is different from the action of taxol, which interferes with cell division by keeping the spindles from being broken down. Vinblastine is mainly useful for treating Hodgkin's lymphoma, advanced testicular cancer and advanced breast cancer. Vincristine is mainly used to treat acute leukemia and other lymphomas. Vincristine—used to treat lymphoma, breast cancer, lung cancer, and acute lymphoblastic leukemia. Vindesine—used to treat leukaemia, lymphoma, melanoma, breast cancer, and lung cancer. Vinorelbine—used to treat breast cancer and non-small-cell lung cancer. It was developed under the direction of the French pharmacist Pierre Poiter, who, in 1989 obtained an initial license under the brand name Navelbine. Vinorelbine is also known as vinorelbine tartrate. The drug is a semi-synthetic analogue of another cancer-fighting drug, vinblastine. Vinorelbine is included in the class of pharmaceuticals known as vinca alkaloids, and many of its characteristics mimic the chemistry and biological mechanisms of the cytotoxic drugs vincristine and vinblastine. Vinorelbine showed promising activity against breast cancer and is in clinical trial for the treatment of other types of tumors. Vinflunine is a novel fluorinated vinca alkaloid currently in Phase II clinical trials, which in preclinical studies exhibited superior antitumor activity to vinorelbine and vinblastine. Vinflunine block mitosis at the metaphase/anaphase transition, leading to apoptosis. Vinflunine is a chemotherapy drug used to treat advanced transitional cell bladder and urothelial tract cancer. It is also called Javlor. It is licensed for people who have already had cisplatin or carboplatin chemotherapy. Colchicine Colchicine is an alkaloid derived from the autumn crocus (Colchicum autumnale). It inhibits mitosis by inhibiting microtubule polymerization. While colchicine is not used to treat cancer in humans, it is commonly used to treat acute attacks of gout. Colchicine is an anti-inflammatory drug that has been in continuous use for more than 3000 years. Colchicine is an oral drug, known to be used for treating acute gout and preventing acute attacks of familial Mediterranean fever (FMF). However, the use of colchicine is limited by its high toxicity in other therapies. Colchicine is known to inhibit cell division and proliferation. Early study demonstrated that colchicine disrupts the mitotic spindle. Dissolution of microtubules subsequently was shown to be responsible for the effect of colchicine on the mitotic spindle and cellular proliferation. 
Podophyllotoxin Podophyllotoxin, derived from the may apple plant, is used to treat viral skin infections, and synthetic analogues of the molecule are used to treat certain types of cancer. Griseofulvin Griseofulvin, derived from a species of Penicillium, is a mitotic inhibitor that is used as an antifungal drug. It inhibits the assembly of fungal microtubules. Others Glaziovianin A is typically isolated from the leaves of the Brazilian tree Ateleia glazioviana Baill. Cryptophycin 52 was isolated from the blue–green alga Nostoc sp. GSV 224. The cryptophycins are a family of related depsipeptides showing highly potent cytotoxic activity. Cryptophycin 52 was originally developed as a fungicide, but was too toxic for clinical use. Later research focused on cryptophycin as a microtubule poison that prevents the formation of the mitotic spindle. Cryptophycin 52 shows highly potent antimitotic activity, suppressing spindle microtubule dynamics. Interest in this drug was further aroused by the discovery that cryptophycin shows reduced susceptibility to the multidrug resistance pump, and shows no reduction of activity in a number of drug-resistant cell lines. Halichondrin B was first isolated from Halichondria okadai, and later from the unrelated sponges Axinella carteri and Phakellia carteri. Halichondrin B is a complex polyether macrolide, accessible by total synthesis, that arrests cell growth at subnanomolar concentrations. Halichondrin B is a noncompetitive inhibitor of the binding of both vincristine and vinblastine to tubulin, suggesting the drugs bind to the vinca binding site, or a site nearby. The isolation of halichondrin B from two unrelated genera of sponge has led to speculation that halichondrin B may in reality be a microbial rather than a sponge metabolite, because sponges support a wide range of microbes. If this is the case, fermentation technologies could provide a useful supply of halichondrin B. Dolastatins were isolated from the sea hare Dolabella auricularia, a small sea mollusc, and are thought to be the source of the poison used to murder the son of Emperor Claudius of Rome in 55 A.D. Dolastatins 10 and 15 are novel pentapeptides and exhibit powerful antimitotic properties. They are cytotoxic in a number of cell lines at subnanomolar concentrations. The peptides dolastatin 10 and 15 noncompetitively inhibit the binding of vincristine to tubulin. Dolastatin 10 is 9 times more potent than dolastatin 15, and both are more potent than vinblastine. The dolastatins also enhance and stabilize the binding of colchicine to tubulin. Hemiasterlins were isolated from the marine sponge Cymbastela sp. The hemiasterlins are a family of potent cytotoxic peptides. Hemiasterlin A and hemiasterlin B show potent activity against the P388 cell line and inhibit cell division by binding to the vinca alkaloid site on tubulin. Hemiasterlin A and B exhibit stronger antiproliferative activities than both the vinca alkaloids and paclitaxel. Combretastatin is isolated from the South African willow, Combretum caffrum. Combretastatin is one of the simpler compounds to show antimitotic effects through interaction with the colchicine binding site of tubulin, and is also one of the most potent inhibitors of colchicine binding. Combretastatin is not recognized by the multiple drug resistance (MDR) pump, a cellular pump which rapidly ejects foreign molecules from the cell. Combretastatin is also reported to be able to inhibit angiogenesis, a process essential for tumor growth. 
Except those factors, one of the disadvantage of combretastatin is the low water solubility. E7010 is the most active of sulfonamide antimitotic agent, which has been shown to inhibit microtubule formation by binding at the site of colchicines. It is quite soluble in water as an acid salt. Methoxybenzene-sulfonamide showed good results against a wide range of tumor cells including vinca alkaloid resistant solid tumors. Results from animals studies indicated activity against colorectal, breast and lung cancer tissues. 2-Methoxyestradiol is a natural metabolite of the mammalian hormone oestradiol and is formed by oxidation in the liver. 2-methoxyestradiol is cytotoxic to several tumor cell lines, binds to the colchicine site of tubulin, inducing the formation of abnormal microtubules. 2-Methoxyestradiol exhibits potent apoptotic activity against rapidly growing tumor cells. It also has antiangiogenic activity through a direct apoptotic effect on endothelial cells. Docetaxel, is a semi-synthetic analogue of paclitaxel, with a trade name Taxotere. Docetaxel has the minimal structure modifications at C13 side chain and C10 substitution showed more water solubility and more potency than paclitaxel. Clinical trials have shown that patients who develop hypersensitivity to paclitaxel may receive docetaxel without an allergic response. Paclitaxel was isolated from the bark of the Pacific yew tree Taxus brevifolia Nutt. (Taxaceae). Later it was also isolated from hazelnut trees (leaves, twigs, and nuts) and the fungi living on these trees but the concentration is only around 10% of the concentration in yew trees. Paclitaxel is also known as Taxol and Onxol to be an anti-cancer drug. The drug is the first line treatment for ovarian, breast, lung, and colon cancer and the second line treatment for AIDS-related Kaposi's sarcoma. (Kaposi sarcoma is a cancer of the skin and mucous membranes that is commonly found in patients with acquired immunodeficiency syndrome, AIDS). It is so effective that some oncologists refer to the period before 1994 as the "pre-taxol" era for treating breast cancer. Epothilones are derived from a fermenting soil bacteria, Sorangium cellulosum and it was found to be too toxic for use as an antifungal. Epothilones are microtubule stabilizing agents with a mechanism of action similar to taxanes, including suppression of microtubule dynamics, stabilization of microtubules, promotion of tubulin polymerization, and increased polymer mass at high concentrations. They induce mitotic arrest in the G2-M phase of the cell cycle, resulting in apoptosis. Epothilone A and epothilone B exhibit both antifungal and cytotoxic properties. These epothilones are competitive inhibitors of the binding of paclitaxel to tubulin, exhibiting activity at similar concentrations. This finding leads to assume that the epothilones and paclitaxel adopt similar conformations in vivo. However, the epothilones are around 30 times more water-soluble than paclitaxel and more available, being easily obtained by fermentation of the parent myxobacterium and could be prepared by total synthesis. The epothilones also shows not to be recognized by multidrug resistant mechanisms, therefore it has much higher potency than paclitaxel in multidrug resistant cell lines. Discodermolide was initially found to have immunosuppressive and antifungal activities. 
Discodermolide is a polyhydroxylated alkatetraene lactone marine natural product, isolated from the Bahamian deep-sea sponge Discodermia dissoluta; it inhibits cell mitosis, induces the formation of stable tubulin polymer in vitro, and is considered to be more effective than paclitaxel, with an EC50 value of 3.0 μM versus 23 μM. The drug, a macrolide (polyhydroxylated lactone), is a member of a structurally diverse class of compounds called polyketides with a notable chemical mechanism of action. It stabilizes the microtubules of target cells, essentially arresting them at a specific stage in the cell cycle and halting cell division. It is a promising marine-derived candidate for treating certain cancers. Limitations Side effects chemotherapy-induced peripheral neuropathy, a progressive, enduring, often irreversible tingling numbness, intense pain, and hypersensitivity to cold, beginning in the hands and feet and sometimes involving the arms and legs. stomatitis (ulceration of the lips, tongue, oral cavity) nausea, vomiting, diarrhea, constipation, paralytic ileus, urinary retention bone marrow suppression hypersensitivity reactions – flushing, localized skin reactions, rash (with or without pruritus), chest tightness, back pain, dyspnea, drug fever, or chills musculoskeletal effects – arthralgia and/or myalgia severe weakness hypotension alopecia neurotoxicity Human factors Limitations in anticancer therapy arise mainly for two reasons: because of the patient's organism, or because of specific genetic alterations in the tumor cells. On the patient's side, therapy is limited by poor absorption of a drug, which can lead to a low concentration of the active agent in the blood and delivery of only a small amount to the tumor. A low serum level of a drug can also be caused by rapid metabolism and excretion associated with affinity for intestinal and/or liver cytochrome P450. Another reason is the instability and degradation of drugs in the gastro-intestinal environment. A serious problem is also variability between patients, which causes different bioavailability after administration of an equal dose of a drug and different tolerance to the effects of chemotherapy agents. The latter problem is particularly important in the treatment of elderly people, whose bodies are weaker and who need lower doses, often below the therapeutic level. Another problem with anticancer agents is their limited aqueous solubility, which substantially reduces absorption of a drug. Problems with delivery of drugs to the tumor also occur when the active agent has a high molecular weight, which limits tissue penetration, or when the tumor has a large volume that prevents penetration. Drug resistance Multidrug resistance is the most important limitation in anticancer therapy. It can develop against many chemically distinct compounds. Several mechanisms are known by which the resistance develops. The most common is the production of so-called "efflux pumps". The pumps remove drugs from tumor cells, which leads to a low drug concentration at the target, below the therapeutic level. Efflux is caused by P-glycoprotein, also called the multidrug transporter. This protein is a product of the multidrug resistance gene MDR1 and a member of the family of ATP-dependent transporters (the ATP-binding cassette). P-glycoprotein occurs widely across organisms, serves to protect the body from xenobiotics, and is involved in moving nutrients and other biologically important compounds within one cell or between cells. 
P-glycoprotein detects substrates as they enter the plasma membrane and binds them, which causes activation of one of the ATP-binding domains. The next step is hydrolysis of ATP, which leads to a change in the shape of P-gp and opens a channel through which the drug is pumped out of the cell. Hydrolysis of a second molecule of ATP results in closing of the channel, and the cycle is repeated. P-glycoprotein has affinity for hydrophobic drugs that carry a positive charge or are electrically neutral, and it is often over-expressed in many human cancers. Some tumors, e.g. lung cancer, do not over-express this transporter but are still able to develop resistance. It was discovered that another transporter, MRP1, also works as an efflux pump, but in this case the substrates are negatively charged natural compounds or drugs modified by glutathione conjugation, glycosylation, sulfation and glucuronylation. Drugs can enter a cell in a few ways. The major routes are diffusion across the plasma membrane, entry through a receptor or transporter, or the endocytosis process. Cancers can develop resistance through mutations in their cells which result in alterations in the cell surface or in impaired endocytosis. A mutation can eliminate or change the transporters or receptors which allow drugs to enter the tumor cell. Another cause of drug resistance is a mutation in β-tubulin which causes alterations in binding sites, so that a given drug can no longer bind to its target. Tumors also shift expression toward tubulin isoforms which are not targets for antimitotic drugs, e.g. by overexpressing βIII-tubulin. In addition, tumor cells express other kinds of proteins and change microtubule dynamics to counteract the effect of anticancer drugs. Drug resistance can also develop due to interruptions in therapy. Others Marginal clinical efficacy – compounds often show activity in vitro but have no antitumor activity in the clinic. Poor water solubility of drugs, which therefore need to be dissolved in polyoxyethylated castor oil or polysorbate; these solvents can cause hypersensitivity reactions. It has been suggested these solvents can also reduce delivery of the drugs to target cells. Bioavailability Dose limit – higher doses cause high toxicity, and long-term use leads to cumulative neurotoxicity and hematopoietic toxicity. Neuropathy, a significant side effect, can develop at any time in therapy and may require an interruption of treatment. After symptoms have resolved, therapy can be started again, but the break allows the tumor to develop resistance. Poor penetration through the blood–brain barrier. Discovery and development The first known compound which binds to tubulin was colchicine; it was isolated from the autumn crocus, Colchicum autumnale, but it has not been used for cancer treatment. The first anticancer drugs of this class approved for clinical use were the Vinca alkaloids, vinblastine and vincristine, in the 1960s. They were isolated from leaf extracts of the Catharanthus roseus (Vinca rosea) plant at the University of Western Ontario in 1958. The first drug belonging to the taxanes, paclitaxel, was discovered in extracts from the bark of the yew tree, Taxus brevifolia, in 1967 by Monroe Wall and Mansukh Wani, but its tubulin inhibition activity was not known until 1979. Yew trees are a poor source of active agents, which limited the development of taxanes for over 20 years until a synthetic route was discovered. In December 1992 paclitaxel was approved for use in chemotherapy. 
Future drug development Because of numerous adverse effects and limitations in use, new drugs with better properties are needed; improvements in antitumor activity, toxicity profile, drug formulation and pharmacology are especially desired. Several approaches have been suggested for developing novel therapeutic agents with better properties. Discovery of agents which are not substrates for efflux pumps, or modification of drugs toward lower affinity for the transporting proteins. Discovery of P-glycoprotein inhibitors with higher affinity for the transporter than the drugs is another approach. To improve oral bioavailability, co-administration of P-gp and cytochrome inhibitors with anticancer drugs has been suggested. Development of inhibitors that have their binding site on α-tubulin. This part of the tubulin dimer remains unused because all currently used drugs bind to β-tubulin; research in this field could open new opportunities in treatment and provide a new class of inhibitors. One possible target for anticancer drugs is the tumor vasculature. The advantage in this case is the relatively easy access of therapeutic agents to the target. It is known that some compounds can inhibit the formation of new blood vessels (inhibit the process of angiogenesis) or shut down existing ones. Tumor cells die very quickly after their oxygen supply is cut off, which makes these agents especially interesting. What is more, the agents appear to act only on tumor vasculature and not to interact with normal tissues. The mechanism is not known, but it has been suggested that the reason lies in differences between the young tissue of the tumor vasculature and the mature tissue of normal vasculature. Antivascular agents are similar to colchicine and bind to the colchicine binding site on β-tubulin, so the development of novel agents acting at the colchicine binding site (which is not used by any currently approved drug) seems to be a promising approach. Therapy with a combination of two or more drugs which have different binding sites and/or different mechanisms of action but non-overlapping adverse effects. This would allow drugs to be used at low concentrations, reducing the strength of the side effects associated with high doses of anticancer agents. Better efficiency might also result from maintaining low concentrations of drugs for a long period instead of drastic changes in the amount of administered drug. Liposomes and polymer-bound drugs comprise promising improvements in delivery systems. Liposomes allow delivery of considerable amounts of drug to the tumor without toxic effects in normal tissues and release drugs slowly, prolonging the pharmaceutical action. Drugs bound to polymers have similar properties. In addition, the use of water-soluble polymers allows poorly soluble anticancer agents to become soluble. The nature of the polymer–drug linkage can be designed to be stable in normal tissues and to break down in the tumor environment, which is more acidic. This approach allows release of the active agent exactly at the target. Discovery of new compounds, active against drug-resistant cancers, with mechanisms different from those of the drugs already known. Elucidation of all resistance mechanisms and design of drugs which avoid them. See also Medicinal molds Tubulin Microtubule Cancer Chemotherapy Drug design Vinblastine Vincristine Vinorelbine Vinflunine Cryptophycin Halichondrin B Colchicine Combretastatins 2-Methoxyestradiol Docetaxel Paclitaxel Epothilones Discodermolide eribulin, a newer agent References Mitosis
Mitotic inhibitor
[ "Biology" ]
8,617
[ "Mitosis", "Harmful chemical substances", "Cellular processes", "Mitotic inhibitors" ]
12,206,759
https://en.wikipedia.org/wiki/HESX1
Homeobox expressed in ES cells 1, also known as homeobox protein ANF, is a homeobox protein that in humans is encoded by the HESX1 gene. Expression of HEX1 and HESX1 marks the anterior visceral endoderm of the embryo. The AVE is an extra-embryonic tissue, key to the establishment of the anterior-posterior body axis. Clinical significance Mutations in the HESX1 gene are associated with some cases of septo-optic dysplasia or Pickardt-Fahlbusch syndrome. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Anophthalmia / Microphthalmia Overview Transcription factors
HESX1
[ "Chemistry", "Biology" ]
153
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
12,207,199
https://en.wikipedia.org/wiki/Either%E2%80%93or%20topology
In mathematics, the either–or topology is a topological structure defined on the closed interval [−1, 1] by declaring a set open if it either does not contain {0} or does contain (−1, 1). References General topology
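In symbols, the open sets described above are
\[
\tau \;=\; \{\, S \subseteq [-1,1] \;:\; 0 \notin S \ \text{ or } \ (-1,1) \subseteq S \,\}.
\]
Both defining conditions are preserved under arbitrary unions and finite intersections, and the empty set (which omits 0) and \([-1,1]\) (which contains \((-1,1)\)) both qualify, so this collection is indeed a topology.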
Either–or topology
[ "Mathematics" ]
52
[ "General topology", "Topology", "Topology stubs" ]
12,207,392
https://en.wikipedia.org/wiki/Compact%20complement%20topology
In mathematics, the compact complement topology is a topology defined on the set of real numbers, defined by declaring a subset open if and only if it is either empty or its complement is compact in the standard Euclidean topology on . References Topology
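Restated in symbols, the collection of open sets is
\[
\tau \;=\; \{\varnothing\} \cup \{\, U \subseteq \mathbb{R} \;:\; \mathbb{R}\setminus U \ \text{is compact in the Euclidean topology} \,\}.
\]
For example, \(\mathbb{R}\setminus[0,1]\) is open because its complement \([0,1]\) is compact, whereas \(\mathbb{R}\setminus(0,1)\) is not open, because \((0,1)\) is not compact.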
Compact complement topology
[ "Physics", "Mathematics" ]
48
[ "Topology stubs", "Topology", "Space", "Geometry", "Spacetime" ]
12,212,114
https://en.wikipedia.org/wiki/On-board%20scale
On-board scales are mobile weighing systems that have been integrated into a vehicle, such as a flatbed truck or semi-trailer. In the United States, such scales are used primarily as a self-check for weight compliance. Thus the operator can use the scale to determine the weight of the vehicle as it is loaded. This enables the operator to avoid penalties by complying with state weight laws, while still transporting the maximum allowable weight. Weight laws are based on safety considerations; in the United States, the Federal Highway Administration requires each state to certify its enforcement of weight laws. In addition, some states allow on-board scales approved under the National Type Evaluation Program (NTEP) to be considered legal for trade. Benefits The convenience of being able to weigh at the loading site is a key factor in the acceptance of on-board scales. Other factors include: avoiding overweight penalties with consequent reduction of driver anxiety and thus greater driver retention; the ability to load knowledgably to the maximum permissable weight; eliminating costs associated with using an in-ground scale, including lost hours of service, scale fees, extra fuel costs to visit the scale, and driver wages; if the scale is equipped with a printer, the ability to provide a weight receipt ("weight ticket") to the customer; for scales connected to an on-truck computer network such as SAE J1939, the possibility of more efficient drivetrain performance based on current truck weight; for scales connected directly or via other on-vehicle devices to a wide area network, enhanced corporate ability to manage a fleet based on vehicle weights. For refuse vehicles and fleets, the additional benefits of on-board scales include the following: avoiding vehicle damage arising from overloading; ability to audit pickup routes and set rates appropriately, by identifying customers who are overloading bins; establishing more efficient routes based on customer bin weight data. On-board scales sometimes appear in non-commercial applications. In one such, the Federal Motor Carrier Safety Administration used an on-board scale in 2009 on a technology demonstration vehicle. The scale was one among several systems intended to provide information for researchers developing tools to determine the safety fitness of a vehicle. Considering an unrelated possible non-commercial application, in 2015 the Federal Highway Administration wrote: "Recording and collecting data from on-board load cells can provide a metric for [weight law] compliance." History On-board scales have been used on vocational trucks at least since 1985. Among the first industries to use these scales were logging operations, in which the difficulty of determining the weight of newly cut logs, with their varying density and moisture content, was problematic. Avoiding overweight tickets by weighing when loading the logs was the incentive for using these scales. As more states began more rigorously enforcing weight limits in the early 1990's, other vocational trucking industries, such as waste hauling and aggregate hauling, began to install on-board scales. In 1987, "On-Board Load Cell" received a US Patent. This system was based on the application of a strain gauge to a sensor mounted to a vehicle's frame. The measured strain is described as "being representative of the weight of the vehicle load." Two years later, in 1989, "A Vehicle Mounted Load Indicator System" received a US Patent. 
This system was based on the air pressure in a truck's air suspension. It relied on calibration and claimed an accurate reading of the weight of the carried load, transmitted to a readout. On-board scales using the technology described in this patent were first sold in 1991. Already by 1995, the Society of Automotive Engineers was publishing a "History of On-Board Electronic Truck Scales and Future Design Trends". This review's abstract notes that newer on-board scale systems included calibration data in the load sensors, which would function as part of an on-truck computer network. Thus, a calibrated load sensor on a trailer or semi-trailer could be attached to any tractor that could receive the trailer sensor's weight transmission over the network. Acceptance of on-board scales increased to the point that in 2008, for instance, all thirteen comments from poultry growers and agricultural associations, concerning a proposed U. S. Department of Agriculture rule, requested that the Department "not permit the delivery of... feed for more than one grower on a single truck unless the truck has an on-board scale and weighing system, specifically when feed is taken from one farm directly to another." [emphasis added.] Types of on-board weighing systems Load-cell scales Load-cell scales are based on electronic load cell transducers, and can be mechanical or strain-gauge. There is a wide variety of scale types that can be built with load cell technology. For example, in vehicles with spring suspension, payload scales commonly use load cells. As with other electronic scales, the weight may be transmitted to an operator readout. It may be further transmitted via a wide area network to a company office or corporate headquarters. Electronic scales with PSI sensors Electronic scales with PSI sensors measure air pressure in a vehicle's air suspension. The scale relays this data to a receiver hardwired into the cab, or wirelessly to a handheld unit such as a smart phone, either of which will interpret the data and display axle weight(s) and/or gross vehicle weight. Data may be further transmitted via a wide area network to a company office or corporate headquarters. Waste bin loader scales These are scales that determine the weight of the contents of a waste bin as it is being loaded onto a waste hauler truck. Their sensors, customized by each scale manufacturer, are generally based on strain gauges. They may use temperature sensors to allow for correct results with varying tempeatures. As with other electronic scales, the bin weight may be transmitted to an operator readout, or via a wide area network to a company office or corporate headquarters. Air-suspension load scales These non-electric gauges are analog (dial-face), and include versions that can be calibrated for accuracy. Suitable for air-ride applications, they show on-the-ground weight in pounds (LBS) or kilograms (KG) instead of standard PSI. Air-suspension PSI gauges Air-suspension PSI gauges are used on commercial trucks and semi-trailers where accurate weights are not as critical. These are not scales as such, but may be usable for estimating weight. Commercial distribution On-board scale manufacturers are located on most continents. Channels of distribution for these scales include Original equipment manufacturer (OEM) sales through truck or trailer companies as either a standard part of the vehicle or an option. 
Truck dealerships or service centers may provide the scales as an aftermarket option, including to truck fleets as well as individual truck owners. See also Truck scale Notes Vocational trucks are designed for a specific task, such as collecting refuse, mixing and pouring concrete, firefighting and the like. Each is custom-built on a truck chassis and may be light-, medium-, or heavy-duty. References Weighing instruments Trucks
On-board scale
[ "Physics", "Technology", "Engineering" ]
1,447
[ "Weighing instruments", "Mass", "Matter", "Measuring instruments" ]
4,351,011
https://en.wikipedia.org/wiki/C-slowing
C-slow retiming is a technique used in conjunction with retiming to improve throughput of a digital circuit. Each register in a circuit is replaced by a set of C registers (in series). This creates a circuit with C independent threads, as if the new circuit contained C copies of the original circuit. A single computation of the original circuit takes C times as many clock cycles to compute in the new circuit. C-slowing by itself increases latency, but throughput remains the same. Increasing the number of registers allows optimization of the circuit through retiming to reduce the clock period of the circuit. In the best case, the clock period can be reduced by a factor of C. Reducing the clock period of the circuit reduces latency and increases throughput. Thus, for computations that can be multi-threaded, combining C-slowing with retiming can increase the throughput of the circuit, with little, or in the best case, no increase in latency. Since registers are relatively plentiful in FPGAs, this technique is typically applied to circuits implemented with FPGAs. See also Pipelining Barrel processor Resources Intel® Hyperflex™ Architecture High-Performance Design Handbook § PipeRoute: A Pipelining-Aware Router for Reconfigurable Architectures Simple Symmetric Multithreading in Xilinx FPGAs Post Placement C-Slow Retiming for Xilinx Virtex (.ppt) Post Placement C-Slow Retiming for Xilinx Virtex (.pdf) Exploration of RaPiD-style Pipelined FPGA Interconnects Time and Area Efficient Pattern Matching on FPGAs Gate arrays
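As a purely illustrative software analogy (a behavioural model with hypothetical inputs, not an FPGA implementation), the Python sketch below shows a one-register accumulator and its C-slowed counterpart: the single register becomes C registers, so C independent input streams are interleaved and each stream sees the original circuit's behaviour.

def accumulator(stream):
    """Reference circuit: running sum of the inputs seen so far (one register)."""
    acc, outputs = 0, []
    for x in stream:
        acc += x
        outputs.append(acc)
    return outputs

def c_slowed_accumulator(interleaved_stream, c):
    """C-slowed circuit: the single register is replaced by c registers,
    giving c independent accumulator threads that share the same logic."""
    regs = [0] * c                  # one register per thread
    outputs = []
    for t, x in enumerate(interleaved_stream):
        thread = t % c              # which thread owns this clock cycle
        regs[thread] += x
        outputs.append(regs[thread])
    return outputs

if __name__ == "__main__":
    a = [1, 2, 3, 4]
    b = [10, 20, 30, 40]
    # Interleave two input streams for a 2-slowed circuit.
    interleaved = [v for pair in zip(a, b) for v in pair]
    out = c_slowed_accumulator(interleaved, c=2)
    # De-interleave: each thread reproduces the original single-threaded result.
    assert out[0::2] == accumulator(a)
    assert out[1::2] == accumulator(b)
    print("thread A:", out[0::2], "thread B:", out[1::2])

De-interleaving the output recovers each thread's original result, which illustrates why the C-slowed circuit behaves like C copies of the original, each advancing at 1/C of the clock rate.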
C-slowing
[ "Technology", "Engineering" ]
345
[ "Computer engineering", "Gate arrays", "Computer science stubs", "Computer science", "Computing stubs" ]
4,355,120
https://en.wikipedia.org/wiki/Fredholm%20theory
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm. Overview The following sections provide a casual sketch of the place of Fredholm theory in the broader context of operator theory and functional analysis. The outline presented here is broad, whereas the difficulty of formalizing this sketch is, of course, in the details. Fredholm equation of the first kind Much of Fredholm theory concerns itself with the following integral equation for f when g and K are given: This equation arises naturally in many problems in physics and mathematics, as the inverse of a differential equation. That is, one is asked to solve the differential equation where the function is given and is unknown. Here, stands for a linear differential operator. For example, one might take to be an elliptic operator, such as in which case the equation to be solved becomes the Poisson equation. A general method of solving such equations is by means of Green's functions, namely, rather than a direct attack, one first finds the function such that for a given pair , where is the Dirac delta function. The desired solution to the above differential equation is then written as an integral in the form of a Fredholm integral equation, The function is variously known as a Green's function, or the kernel of an integral. It is sometimes called the nucleus of the integral, whence the term nuclear operator arises. In the general theory, and may be points on any manifold; the real number line or -dimensional Euclidean space in the simplest cases. The general theory also often requires that the functions belong to some given function space: often, the space of square-integrable functions is studied, and Sobolev spaces appear often. The actual function space used is often determined by the solutions of the eigenvalue problem of the differential operator; that is, by the solutions to where the are the eigenvalues, and the are the eigenvectors. The set of eigenvectors span a Banach space, and, when there is a natural inner product, then the eigenvectors span a Hilbert space, at which point the Riesz representation theorem is applied. Examples of such spaces are the orthogonal polynomials that occur as the solutions to a class of second-order ordinary differential equations. Given a Hilbert space as above, the kernel may be written in the form In this form, the object is often called the Fredholm operator or the Fredholm kernel. That this is the same kernel as before follows from the completeness of the basis of the Hilbert space, namely, that one has Since the are generally increasing, the resulting eigenvalues of the operator are thus seen to be decreasing towards zero. Inhomogeneous equations The inhomogeneous Fredholm integral equation may be written formally as which has the formal solution A solution of this form is referred to as the resolvent formalism, where the resolvent is defined as the operator Given the collection of eigenvectors and eigenvalues of K, the resolvent may be given a concrete form as with the solution being A necessary and sufficient condition for such a solution to exist is one of Fredholm's theorems. 
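For concreteness, the objects referred to above can be written in one common convention (the placement of the parameter \(\lambda\) and the integration limits vary between texts) as
\[
g(x) = \int_a^b K(x,y)\, f(y)\, \mathrm{d}y \qquad \text{(equation of the first kind)},
\]
\[
Lf = g, \qquad L_x\, G(x,y) = \delta(x-y), \qquad f(x) = \int G(x,y)\, g(y)\, \mathrm{d}y ,
\]
\[
K(x,y) = \sum_n \frac{\psi_n(x)\,\psi_n^*(y)}{\omega_n}, \qquad L\psi_n = \omega_n \psi_n ,
\]
\[
f(x) = g(x) + \lambda \int K(x,y)\, f(y)\, \mathrm{d}y \quad \text{(second kind)}, \qquad
f = (1-\lambda K)^{-1} g, \qquad R(\lambda) = (1-\lambda K)^{-1}.
\]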
The resolvent is commonly expanded in powers of , in which case it is known as the Liouville-Neumann series. In this case, the integral equation is written as and the resolvent is written in the alternate form as Fredholm determinant The Fredholm determinant is commonly defined as where and and so on. The corresponding zeta function is The zeta function can be thought of as the determinant of the resolvent. The zeta function plays an important role in studying dynamical systems. Note that this is the same general type of zeta function as the Riemann zeta function; however, in this case, the corresponding kernel is not known. The existence of such a kernel is known as the Hilbert–Pólya conjecture. Main results The classical results of the theory are Fredholm's theorems, one of which is the Fredholm alternative. One of the important results from the general theory is that the kernel is a compact operator when the space of functions are equicontinuous. A related celebrated result is the Atiyah–Singer index theorem, pertaining to index (dim ker – dim coker) of elliptic operators on compact manifolds. History Fredholm's 1903 paper in Acta Mathematica is considered to be one of the major landmarks in the establishment of operator theory. David Hilbert developed the abstraction of Hilbert space in association with research on integral equations prompted by Fredholm's (amongst other things). See also Green's functions Spectral theory Fredholm alternative References Mathematical physics Spectral theory
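In the same convention, the expansions mentioned above take the forms
\[
(1-\lambda K)^{-1} = \sum_{n=0}^{\infty} \lambda^n K^n = 1 + \lambda K + \lambda^2 K^2 + \cdots \qquad \text{(Liouville–Neumann series)},
\]
\[
\det(1-\lambda K) = \exp\!\Big(-\sum_{n=1}^{\infty} \frac{\lambda^n}{n}\,\operatorname{tr} K^n\Big)
= \sum_{n=0}^{\infty} \frac{(-\lambda)^n}{n!} \int\!\cdots\!\int \det\big[K(x_i,x_j)\big]_{i,j=1}^{n}\, \mathrm{d}x_1\cdots \mathrm{d}x_n ,
\]
with the zeta function identifiable, up to convention, with \(1/\det(1-\lambda K)\).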
Fredholm theory
[ "Physics", "Mathematics" ]
1,019
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
7,542,417
https://en.wikipedia.org/wiki/Modal%20analysis
Modal analysis is the study of the dynamic properties of systems in the frequency domain. It consists of mechanically exciting a studied component in such a way to target the modeshapes of the structure, and recording the vibration data with a network of sensors. Examples would include measuring the vibration of a car's body when it is attached to a shaker, or the noise pattern in a room when excited by a loudspeaker. Modern day experimental modal analysis systems are composed of 1) sensors such as transducers (typically accelerometers, load cells), or non contact via a Laser vibrometer, or stereophotogrammetric cameras 2) data acquisition system and an analog-to-digital converter front end (to digitize analog instrumentation signals) and 3) host PC (personal computer) to view the data and analyze it. Classically this was done with a SIMO (single-input, multiple-output) approach, that is, one excitation point, and then the response is measured at many other points. In the past a hammer survey, using a fixed accelerometer and a roving hammer as excitation, gave a MISO (multiple-input, single-output) analysis, which is mathematically identical to SIMO, due to the principle of reciprocity. In recent years MIMO (multi-input, multiple-output) have become more practical, where partial coherence analysis identifies which part of the response comes from which excitation source. Using multiple shakers leads to a uniform distribution of the energy over the entire structure and a better coherence in the measurement. A single shaker may not effectively excite all the modes of a structure. Typical excitation signals can be classed as impulse, broadband, swept sine, chirp, and possibly others. Each has its own advantages and disadvantages. The analysis of the signals typically relies on Fourier analysis. The resulting transfer function will show one or more resonances, whose characteristic mass, frequency and damping ratio can be estimated from the measurements. The animated display of the mode shape is very useful to NVH (noise, vibration, and harshness) engineers. The results can also be used to correlate with finite element analysis normal mode solutions. Structures In structural engineering, modal analysis uses the overall mass and stiffness of a structure to find the various periods at which it will naturally resonate. These periods of vibration are very important to note in earthquake engineering, as it is imperative that a building's natural frequency does not match the frequency of expected earthquakes in the region in which the building is to be constructed. If a structure's natural frequency matches an earthquake's frequency, the structure may continue to resonate and experience structural damage. Modal analysis is also important in structures such as bridges where the engineer should attempt to keep the natural frequencies away from the frequencies of people walking on the bridge. This may not be possible and for this reasons when groups of people are to walk along a bridge, for example a group of soldiers, the recommendation is that they break their step to avoid possibly significant excitation frequencies. Other natural excitation frequencies may exist and may excite a bridge's natural modes. 
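For a lumped-mass idealization of the kind mentioned above, the natural periods follow from the generalized eigenvalue problem \(K\varphi = \omega^2 M\varphi\). The Python sketch below uses arbitrary illustrative storey mass and stiffness values (not data from any real structure) for a three-storey shear building.

import numpy as np
from scipy.linalg import eigh

# Illustrative 3-storey shear building (hypothetical values).
m = 1.0e4      # storey mass, kg
k = 2.0e6      # storey lateral stiffness, N/m

M = np.diag([m, m, m])
K = np.array([[ 2*k, -k,   0 ],
              [-k,   2*k, -k ],
              [ 0,  -k,    k ]])

# Generalized eigenvalue problem  K phi = omega^2 M phi
eigvals, modes = eigh(K, M)          # eigvals are omega^2, in ascending order
omegas = np.sqrt(eigvals)            # natural circular frequencies, rad/s
freqs = omegas / (2 * np.pi)         # natural frequencies, Hz
periods = 1.0 / freqs                # natural periods, s

for i, (f, T) in enumerate(zip(freqs, periods), start=1):
    print(f"mode {i}: f = {f:.2f} Hz, T = {T:.2f} s, shape = {modes[:, i-1].round(3)}")

Each column of modes is a mode shape; the response to any loading can then be written as a superposition of these shapes, as discussed in the superposition section below.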
Engineers tend to learn from such examples (at least in the short term) and more modern suspension bridges take account of the potential influence of wind through the shape of the deck, which might be designed in aerodynamic terms to pull the deck down against the support of the structure rather than allow it to lift. Other aerodynamic loading issues are dealt with by minimizing the area of the structure projected to the oncoming wind and to reduce wind generated oscillations of, for example, the hangers in suspension bridges. Although modal analysis is usually carried out by computers, it is possible to hand-calculate the period of vibration of any high-rise building through idealization as a fixed-ended cantilever with lumped masses. Electrodynamics The basic idea of a modal analysis in electrodynamics is the same as in mechanics. The application is to determine which electromagnetic wave modes can stand or propagate within conducting enclosures such as waveguides or resonators. Superposition of modes Once a set of modes has been calculated for a system, the response to any kind of excitation can be calculated as a superposition of modes. This means that the response is the sum of the different mode shapes each one vibrating at its frequency. The weighting coefficients of this sum depend on the initial conditions and on the input signal. Reciprocity If the response is measured at point B in direction x (for example), for an excitation at point A in direction y, then the transfer function (crudely Bx/Ay in the frequency domain) is identical to that which is obtained when the response at Ay is measured when excited at Bx. That is Bx/Ay=Ay/Bx. Again this assumes (and is a good test for) linearity. (Furthermore, this assumes restricted types of damping and restricted types of active feedback.) Identification methods Identification methods are the mathematical backbone of modal analysis. They allow, through linear algebra, specifically through least square methods to fit large amounts of data to find the modal constants (modal mass, modal stiffness modal damping) of the system. The methods are divided on the basis of the kind of system they aim to study in SDOF (single degree of freedom) methods and MDOF (multiple degree of freedom systems) methods and on the basis of the domain in which the data fitting takes place in time domain methods and frequency domain methods. See also Frequency analysis Modal analysis using FEM Modeshape Eigenanalysis Structural dynamics Vibration Modal testing Seismic performance analysis References D. J. Ewins: Modal Testing: Theory, Practice and Application Jimin He, Zhi-Fang Fu (2001). Modal Analysis, Butterworth-Heinemann. . External links Ewins - Modal Testing theory and practice Free Excel sheets to estimate modal parameters Modal Space In Our Own Little World - a tutorial by Peter Avitabile Mechanical engineering Earthquake engineering
Modal analysis
[ "Physics", "Engineering" ]
1,281
[ "Structural engineering", "Applied and interdisciplinary physics", "Civil engineering", "Mechanical engineering", "Earthquake engineering" ]
7,552,188
https://en.wikipedia.org/wiki/Smiles%20rearrangement
In organic chemistry, the Smiles rearrangement is an organic reaction and a rearrangement reaction named after British chemist Samuel Smiles. It is an intramolecular, nucleophilic aromatic substitution of the type: where X in the arene compound can be a sulfone, a sulfide, an ether or any substituent capable of dislodging from the arene carrying a negative charge. The terminal functional group in the chain end Y is able to act as a strong nucleophile for instance an alcohol, amine or thiol. As in other nucleophilic aromatic substitutions the arene requires activation by an electron-withdrawing group preferably in the aromatic ortho position. In one modification called the Truce–Smiles rearrangement the incoming nucleophile is sufficiently strong that the arene does not require this additional activation, for example when the nucleophile is an organolithium. This reaction is exemplified by the conversion of an aryl sulfone into a sulfinic acid by action of n-butyllithium: This particular reaction requires the interaction of the alkyllithium group ortho to the sulfone group akin a directed ortho metalation. A conceptually related reaction is the Chapman rearrangement. A radical version of Smiles rearrangement is reported by Stephenson in 2015. The Hayashi rearrangement can be considered as the cationic counterpart of Smiles rearrangement. External links Article in Organic Syntheses: Org. Synth. 2007, 84, pp. 325–333. References Rearrangement reactions Name reactions
Smiles rearrangement
[ "Chemistry" ]
340
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
3,197,890
https://en.wikipedia.org/wiki/Overlapping%20interval%20topology
In mathematics, the overlapping interval topology is a topology which is used to illustrate various topological principles. Definition Given the closed interval of the real number line, the open sets of the topology are generated from the half-open intervals with and with . The topology therefore consists of intervals of the form , , and with , together with itself and the empty set. Properties Any two distinct points in are topologically distinguishable under the overlapping interval topology as one can always find an open set containing one but not the other point. However, every non-empty open set contains the point 0 which can therefore not be separated from any other point in , making with the overlapping interval topology an example of a T0 space that is not a T1 space. The overlapping interval topology is second countable, with a countable basis being given by the intervals , and with and r and s rational. See also List of topologies Particular point topology, a topology where sets are considered open if they are empty or contain a particular, arbitrarily chosen, point of the topological space References (See example 53) Topological spaces
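Written out, the topology generated by these half-open intervals is
\[
\tau = \{\varnothing,\ [-1,1]\} \cup \{\,[-1,b) : b > 0\,\} \cup \{\,(a,1] : a < 0\,\} \cup \{\,(a,b) : a < 0 < b\,\},
\]
since any basic intersection \([-1,b)\cap(a,1] = (a,b)\) contains 0, and unions of sets of these forms are again of one of these forms.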
Overlapping interval topology
[ "Mathematics" ]
222
[ "Topological spaces", "Mathematical structures", "Topology", "Space (mathematics)" ]
3,199,351
https://en.wikipedia.org/wiki/Adiabatic%20flame%20temperature
In the study of combustion, the adiabatic flame temperature is the temperature reached by a flame under ideal conditions. It is an upper bound of the temperature that is reached in actual processes. There are two types of adiabatic flame temperature: constant volume and constant pressure, depending on how the process is completed. The constant volume adiabatic flame temperature is the temperature that results from a complete combustion process that occurs without any work, heat transfer or changes in kinetic or potential energy. Its temperature is higher than in the constant pressure process because no energy is utilized to change the volume of the system (i.e., generate work). Common flames In daily life, the vast majority of flames one encounters are those caused by rapid oxidation of hydrocarbons in materials such as wood, wax, fat, plastics, propane, and gasoline. The constant-pressure adiabatic flame temperature of such substances in air is in a relatively narrow range around . This is mostly because the heat of combustion of these compounds is roughly proportional to the amount of oxygen consumed, which proportionally increases the amount of air that has to be heated, so the effect of a larger heat of combustion on the flame temperature is offset. Incomplete reaction at higher temperature further curtails the effect of a larger heat of combustion. Because most combustion processes that happen naturally occur in the open air, there is nothing that confines the gas to a particular volume like the cylinder in an engine. As a result, these substances will burn at a constant pressure, which allows the gas to expand during the process. Common flame temperatures Assuming initial atmospheric conditions (1bar and 20 °C), the following table lists the flame temperature for various fuels under constant pressure conditions. The temperatures mentioned here are for a stoichiometric fuel-oxidizer mixture (i.e. equivalence ratio φ = 1). Note that these are theoretical, not actual, flame temperatures produced by a flame that loses no heat. The closest will be the hottest part of a flame, where the combustion reaction is most efficient. This also assumes complete combustion (e.g. perfectly balanced, non-smoky, usually bluish flame). Several values in the table significantly disagree with the literature or predictions by online calculators. Thermodynamics From the first law of thermodynamics for a closed reacting system we have where, and are the heat and work transferred from the system to the surroundings during the process, respectively, and and are the internal energy of the reactants and products, respectively. In the constant volume adiabatic flame temperature case, the volume of the system is held constant and hence there is no work occurring: There is also no heat transfer because the process is defined to be adiabatic: . As a result, the internal energy of the products is equal to the internal energy of the reactants: . Because this is a closed system, the mass of the products and reactants is constant and the first law can be written on a mass basis, . In the case of the constant pressure adiabatic flame temperature, the pressure of the system is held constant, which results in the following equation for the work: Again there is no heat transfer occurring because the process is defined to be adiabatic: . From the first law, we find that, Recalling the definition of enthalpy we obtain . 
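With \(Q\) taken as heat added to the system and \(W\) as work done by the system (a sign convention assumed here for definiteness), the relations invoked above read
\[
U_{\text{prod}} - U_{\text{react}} = Q - W .
\]
At constant volume (\(W = 0,\ Q = 0\)):
\[
U_{\text{prod}} = U_{\text{react}}, \qquad u_{\text{prod}} = u_{\text{react}} \ \text{(per unit mass)}.
\]
At constant pressure (\(W = P\,(V_{\text{prod}} - V_{\text{react}}),\ Q = 0\)):
\[
U_{\text{prod}} + P V_{\text{prod}} = U_{\text{react}} + P V_{\text{react}}
\;\Longrightarrow\;
H_{\text{prod}} = H_{\text{react}}, \qquad h_{\text{prod}} = h_{\text{react}} .
\]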
Because this is a closed system, the mass of the products and reactants is the same and the first law can be written on a mass basis: . We see that the adiabatic flame temperature of the constant pressure process is lower than that of the constant volume process. This is because some of the energy released during combustion goes, as work, into changing the volume of the control system. If we make the assumption that combustion goes to completion (i.e. forming only and ), we can calculate the adiabatic flame temperature by hand either at stoichiometric conditions or lean of stoichiometry (excess air). This is because there are enough variables and molar equations to balance the left and right hand sides, Rich of stoichiometry there are not enough variables because combustion cannot go to completion with at least and needed for the molar balance (these are the most common products of incomplete combustion), However, if we include the water gas shift reaction, and use the equilibrium constant for this reaction, we will have enough variables to complete the calculation. Different fuels with different levels of energy and molar constituents will have different adiabatic flame temperatures. We can see by the following figure why nitromethane (CH3NO2) is often used as a power boost for cars. Since each molecule of nitromethane contains an oxidant with relatively high-energy bonds between nitrogen and oxygen, it can burn much hotter than hydrocarbons or oxygen-containing methanol. This is analogous to adding pure oxygen, which also raises the adiabatic flame temperature. This in turn allows it to build up more pressure during a constant volume process. The higher the pressure, the more force upon the piston creating more work and more power in the engine. It stays relatively hot rich of stoichiometry because it contains its own oxidant. However, continual running of an engine on nitromethane will eventually melt the piston and/or cylinder because of this higher temperature. In real world applications, complete combustion does not typically occur. Chemistry dictates that dissociation and kinetics will change the composition of the products. There are a number of programs available that can calculate the adiabatic flame temperature taking into account dissociation through equilibrium constants (Stanjan, NASA CEA, AFTP). The following figure illustrates that the effects of dissociation tend to lower the adiabatic flame temperature. This result can be explained through Le Chatelier's principle. See also Flame speed References External links General information Computation of adiabatic flame temperature Adiabatic flame temperature Tables adiabatic flame temperature of hydrogen, methane, propane and octane with oxygen or air as oxidizers Temperature of a blue flame and common materials Calculators Online adiabatic flame temperature calculator using Cantera Adiabatic flame temperature program Gaseq, program for performing chemical equilibrium calculations. Flame Temperature Calculator - Constant pressure bipropellant adiabatic combustion Adiabatic Flame Temperature calculator Combustion Temperature Threshold temperatures
Adiabatic flame temperature
[ "Physics", "Chemistry" ]
1,310
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Phase transitions", "SI base quantities", "Intensive quantities", "Threshold temperatures", "Combustion", "Thermodynamics", "Wikipedia categories named after physical quantiti...
3,199,737
https://en.wikipedia.org/wiki/Co-receptor
A co-receptor is a cell surface receptor that binds a signalling molecule in addition to a primary receptor in order to facilitate ligand recognition and initiate biological processes, such as entry of a pathogen into a host cell. Properties The term co-receptor is prominent in literature regarding signal transduction, the process by which external stimuli regulate internal cellular functioning. The key to optimal cellular functioning is maintained by possessing specific machinery that can carry out tasks efficiently and effectively. Specifically, the process through which intermolecular reactions forward and amplify extracellular signals across the cell surface has developed to occur by two mechanisms. First, cell surface receptors can directly transduce signals by possessing both serine and threonine or simply serine in the cytoplasmic domain. They can also transmit signals through adaptor molecules through their cytoplasmic domain which bind to signalling motifs. Secondly, certain surface receptors lacking a cytoplasmic domain can transduce signals through ligand binding. Once the surface receptor binds the ligand it forms a complex with a corresponding surface receptor to regulate signalling. These categories of cell surface receptors are prominently referred to as co-receptors. Co-receptors are also referred to as accessory receptors, especially in the fields of biomedical research and immunology. Co-receptors are proteins that maintain a three-dimensional structure. The large extracellular domains make up approximately 76–100% of the receptor. The motifs that make up the large extracellular domains participate in ligand binding and complex formation. The motifs can include glycosaminoglycans, EGF repeats, cysteine residues or ZP-1 domains. The variety of motifs leads to co-receptors being able to interact with two to nine different ligands, which themselves can also interact with a number of different co-receptors. Most co-receptors lack a cytoplasmic domain and tend to be GPI-anchored, though a few receptors have been identified which contain short cytoplasmic domains that lack intrinsic kinase activity. Localization and function Depending on the type of ligand a co-receptor binds, its location and function can vary. Various ligands include interleukins, neurotrophic factors, fibroblast growth factors, transforming growth factors, vascular endothelial growth factors and epidermal growth factors. Co-receptors prominent in embryonic tissue have an essential role in morphogen gradient formation or tissue differentiation. Co-receptors localized in endothelial cells function to enhance cell proliferation and cell migration. With such variety in regards to location, co-receptors can participate in many different cellular activities. Co-receptors have been identified as participants in cell signalling cascades, embryonic development, cell adhesion regulation, gradient formation, tissue proliferation and migration. Some classical examples CD family The CD family of co-receptors are a well-studied group of extracellular receptors found in immunological cells. The CD receptor family typically act as co-receptors, illustrated by the classic example of CD4 acting as a co-receptor to the T cell receptor (TCR) to bind major histocompatibility complex II (MHC-II). This binding is particularly well-studied in T-cells where it serves to activate T-cells that are in their resting (or dormant) phase and to cause active cycling T-cells to undergo programmed cell death. Boehme et al. 
demonstrated this interesting dual outcome by blocking the binding of CD4 to MHC-II which prevented the programmed cell death reaction that active T-cells typically display. The CD4 receptor is composed of four concatamerized Ig-like domains and is anchored to the cell membrane by a single transmembrane domain. CD family receptors are typically monomers or dimers, though they are all primarily extracellular proteins. The CD4 receptor in particular interacts with murine MHC-II following the "ball-on-stick" model, where the Phe-43 ball fits into the conserved hydrophobic α2 and β2 domain residues. During binding with MHC-II, CD4 maintains independent structure and does not form any bonds with the TCR receptor. The members of the CD family of co-receptors have a wide range of function. As well as being involved in forming a complex with MHC-II with TCR to control T-cell fate, the CD4 receptor is infamously the primary receptor that HIV envelope glycoprotein GP120 binds to. In comparison, CD28 acts as a ‘co-coreceptor’ (costimulatory receptor) for the MHC-II binding with TCR and CD4. CD28 increases the IL-2 secretion from the T-cells if it is involved in the initial activation; however, CD28 blockage has no effect on programmed cell death after the T-cell has been activated. CCR family of receptors The CCR family of receptors are a group of g-protein coupled receptors (GPCRs) that normally operate as chemokine receptors. They are primarily found on immunological cells, especially T-cells. CCR receptors are also expressed on neuronal cells, such as dendrites and microglia. Perhaps the most famous and well-studied of the CCR family is CCR5 (and its near-homologue CXCR4) which acts as the primary co-receptor for HIV viral infection. The HIV envelope glycoprotein GP120 binds to CD4 as its primary receptor, CCR5 then forms a complex with CD4 and HIV, allowing viral entry into the cell. CCR5 is not the only member of the CCR family that allows for HIV infection. Due to the commonality of structures found throughout the family, CCR2b, CCR3, and CCR8 can be utilized by some HIV strains as co-receptors to facilitate infection. CXCR4 is very similar to CCR5 in structure. While only some HIV strains can utilize CCR2b, CCR3 and CCR8, all HIV strains can infect through CCR5 and CXCR4. CCR5 is known to have an affinity for macrophage inflammatory protein (MIP) and is thought to play a role in inflammatory immunological responses. The primary role of this receptor is less understood than its role in HIV infection, as inflammation responses remain a poorly understood facet of the immune system. CCR5's affinity for MIP makes it of great interest for practical applications such as tissue engineering, where attempts are being made to control host inflammatory and immunological responses at a cellular signalling level. The affinity for MIP has been utilized in-vitro to prevent HIV infection through ligand competition; however, these entry-inhibitors have failed in-vivo due to the highly adaptive nature of HIV and toxicity concerns. Clinical significance Because of their importance in cell signaling and regulation, co-receptors have been implicated in a number of diseases and disorders. Co-receptor knockout mice are often unable to develop and such knockouts generally result in embryonic or perinatal lethality. 
In immunology in particular, the term "co-receptor" often describes a secondary receptor used by a pathogen to gain access to the cell, or a receptor that works alongside T cell receptors such as CD4, CD8, or CD28 to bind antigens or regulate T cell activity in some way. Inherited co-receptor autosomal disorders Many co-receptor-related disorders occur due to mutations in the receptor's coding gene. LRP5 (low-density lipoprotein receptor-related protein 5) acts as a co-receptor for the Wnt-family of glycoproteins which regulate bone mass. Malfunctions in this co-receptor lead to lower bone density and strength which contribute to osteoporosis. Loss of function mutations in LRP5 have been implicated in Osteoporosis-pseudoglioma syndrome, Familial exudative vitreoretinopathy, and a specific missense mutation in the first β-propeller region of LRP5 can lead to abnormally high bone density or osteopetrosis. Mutations in LRP1 have also been found in cases of Familial Alzheimer's disease Loss of function mutations in the Cryptic co-receptor can lead to random organ positioning due to developmental left-right orientation defects. Gigantism is believed to be caused, in some cases, by a loss of function of the Glypican 3 co-receptor. Cancer Carcinoembryonic antigen cell adhesion molecule-1 (Caecam1) is an immunoglobulin-like co-receptor that aids in cell adhesion in epithelial, endothelial and hematopoietic cells, and plays a vital role during vascularization and angiogenesis by binding vascular endothelial growth factor (VEGF). Angiogenesis is important in embryonic development but it is also a fundamental process of tumor growth. Deletion of the gene in Caecam1-/- mice results in a reduction of the abnormal vascularization seen in cancer and lowered nitric oxide production, suggesting a therapeutic possibility through targeting of this gene. The neuropilin co-receptor family mediates binding of VEGF in conjunction with the VEGFR1/VEGFR2 and Plexin signaling receptors, and therefore also plays a role in tumor vascular development. CD109 acts as a negative regulator of the tumor growth factor β (TGF-β) receptor. Upon binding TGF-β, the receptor is internalized via endocytosis through CD109's action which lowers signal transmission into the cell. In this case, the co-receptor is functioning in a critical regulatory manner to reduce signals that instruct the cell to grow and migrate – the hallmarks of cancer. In conjunction, the LRP co-receptor family also mediates binding of TGF-β with a variety of membrane receptors. Interleukins 1, 2, and 5 all rely on interleukin co-receptors to bind to the primary interleukin receptors. Syndecans 1 and 4 have been implicated in a variety of cancer types including cervical, breast, lung, and colon cancer, and abnormal expression levels have been associated with poorer prognosis. HIV In order to infect a cell, the envelope glycoprotein GP120 of the HIV virus interacts with CD4 (acting as the primary receptor) and a co-receptor: either CCR5 or CXCR4. This binding results in membrane fusion and the subsequent intracellular signaling that facilitates viral invasion. In approximately half of all HIV cases, the viruses using the CCR5 co-receptor seem to favor immediate infection and transmission while those using the CXCR4 receptor do not present until later in the immunologically suppressed stage of the disease. The virus will often switch from using CCR5 to CXCR4 during the course of the infection, which serves as an indicator for the progression of the disease. 
Recent evidence suggests that some forms of HIV also use the large integrin a4b7 receptor to facilitate increased binding efficiency in mucosal tissues. Hepatitis C The Hepatitis C virus requires the CD81 co-receptor for infection. Studies suggest that the tight junction protein Claudin-1 (CLDN1) may also play a part in HCV entry. Claudin family abnormalities are also common in hepatocellular carcinoma, which can result from HPV infection. Blockade as a treatment for autoimmunity It is possible to perform a CD4 co-receptor blockade, using antibodies, in order to lower T cell activation and counteract autoimmune disorders. This blockade appears to elicit a "dominant" effect, that is to say, once blocked, the T cells do not regain their ability to become active. This effect then spreads to native T cells which then switch to a CD4+CD25+GITR+FoxP3+ T regulatory phenotype. Current areas of research Currently, the two most prominent areas of co-receptor research are investigations regarding HIV and cancer. HIV research is highly focused on the adaption of HIV strains to a variety of host co-receptors. Cancer research is mostly focused on enhancing the immune response to tumor cells, while some research also involves investigating the receptors expressed by the cancerous cells themselves. HIV Most HIV-based co-receptor research focuses on the CCR5 co-receptor. The majority of HIV strains use the CCR5 receptor. HIV-2 strains can also use the CXCR4 receptor though the CCR5 receptor is the more predominantly targeted of the two. Both the CCR5 and the CXCR4 co-receptors are seven-trans-membrane (7TM) G protein-coupled receptors. Different strains of HIV work on different co-receptors, although the virus can switch to utilizing other co-receptors. For example, R5X4 receptors can become the dominant HIV co-receptor target in main strains. HIV-1 and HIV-2 can both use the CCR8 co-receptor. The crossover of co-receptor targets for different strains and the ability for the strains to switch from their dominant co-receptor can impede clinical treatment of HIV. Treatments such as WR321 mAb can inhibit some strains of CCR5 HIV-1, preventing cell infection. The mAb causes the release of HIV-1-inhibitory b-chemokines, preventing other cells from becoming infected. Cancer Cancer-based research into co-receptors includes the investigation of growth factor activated co-receptors, such as Transforming Growth Factor (TGF-β) co-receptors. Expression of the co-receptor endoglin, which is expressed on the surface of tumor cells, is correlated with cell plasticity and the development of tumors. Another co-receptor of TGF-β is CD8. Although the exact mechanism is still unknown, CD8 co-receptors have been shown to enhance T-cell activation and TGF-β-mediated immune suppression. TGF-β has been shown to influence the plasticity of cells through integrin and focal adhesion kinase. The co-receptors of tumor cells and their interaction with T-cells provide important considerations for tumor immunotherapy. Recent research into co-receptors for p75, such as the sortilin co-receptor, has implicated sortilin in connection to neurotrophins, a type of nerve growth factor. The p75 receptor and co-receptors have been found to influence the aggressiveness of tumors, specifically via the ability of neurotrophins to rescue cells from certain forms of cell death. Sortilin, the p75 co-receptor, has been found in natural killer cells, but with only low levels of neurotrophin receptor. 
The sortilin co-receptor is believed to work with a neurotrophin homologue that can also cause neurotrophin to alter the immune response. See also Signal transduction References Signal transduction Transmembrane receptors
Co-receptor
[ "Chemistry", "Biology" ]
3,110
[ "Transmembrane receptors", "Neurochemistry", "Biochemistry", "Signal transduction" ]
3,200,021
https://en.wikipedia.org/wiki/Synthetic%20molecular%20motor
Synthetic molecular motors are molecular machines capable of continuous directional rotation under an energy input. Although the term "molecular motor" has traditionally referred to a naturally occurring protein that induces motion (via protein dynamics), some groups also use the term when referring to non-biological, non-peptide synthetic motors. Many chemists are pursuing the synthesis of such molecular motors. The basic requirements for a synthetic motor are repetitive 360° motion, the consumption of energy and unidirectional rotation. The first two efforts in this direction, the chemically driven motor by Dr. T. Ross Kelly of Boston College with co-workers and the light-driven motor by Ben Feringa and co-workers, were published in 1999 in the same issue of Nature. As of 2020, the smallest atomically precise molecular machine has a rotor that consists of four atoms. Chemically driven rotary molecular motors An example of a prototype for a synthetic chemically driven rotary molecular motor was reported by Kelly and co-workers in 1999. Their system is made up from a three-bladed triptycene rotor and a helicene, and is capable of performing a unidirectional 120° rotation. This rotation takes place in five steps. The amine group present on the triptycene moiety is converted to an isocyanate group by condensation with phosgene (a). Thermal or spontaneous rotation around the central bond then brings the isocyanate group in proximity of the hydroxyl group located on the helicene moiety (b), thereby allowing these two groups to react with each other (c). This reaction irreversibly traps the system as a strained cyclic urethane that is higher in energy and thus energetically closer to the rotational energy barrier than the original state. Further rotation of the triptycene moiety therefore requires only a relatively small amount of thermal activation in order to overcome this barrier, thereby releasing the strain (d). Finally, cleavage of the urethane group restores the amine and alcohol functionalities of the molecule (e). The result of this sequence of events is a unidirectional 120° rotation of the triptycene moiety with respect to the helicene moiety. Additional forward or backward rotation of the triptycene rotor is inhibited by the helicene moiety, which serves a function similar to that of the pawl of a ratchet. The unidirectionality of the system is a result from both the asymmetric skew of the helicene moiety as well as the strain of the cyclic urethane which is formed in c. This strain can be only be lowered by the clockwise rotation of the triptycene rotor in d, as both counterclockwise rotation as well as the inverse process of d are energetically unfavorable. In this respect the preference for the rotation direction is determined by both the positions of the functional groups and the shape of the helicene and is thus built into the design of the molecule instead of dictated by external factors. The motor by Kelly and co-workers is an elegant example of how chemical energy can be used to induce controlled, unidirectional rotational motion, a process which resembles the consumption of ATP in organisms in order to fuel numerous processes. However, it does suffer from a serious drawback: the sequence of events that leads to 120° rotation is not repeatable. Kelly and co-workers have therefore searched for ways to extend the system so that this sequence can be carried out repeatedly. 
Unfortunately, their attempts to accomplish this objective have not been successful and currently the project has been abandoned. In 2016 David Leigh's group invented the first autonomous chemically-fuelled synthetic molecular motor. Some other examples of synthetic chemically driven rotary molecular motors that all operate by sequential addition of reagents have been reported, including the use of the stereoselective ring opening of a racemic biaryl lactone by the use of chiral reagents, which results in a directed 90° rotation of one aryl with respect to the other aryl. Branchaud and co-workers have reported that this approach, followed by an additional ring closing step, can be used to accomplish a non-repeatable 180° rotation. Feringa and co-workers used this approach in their design of a molecule that can repeatably perform 360° rotation. The full rotation of this molecular motor takes place in four stages. In stages A and C rotation of the aryl moiety is restricted, although helix inversion is possible. In stages B and D the aryl can rotate with respect to the naphthalene with steric interactions preventing the aryl from passing the naphthalene. The rotary cycle consists of four chemically induced steps which realize the conversion of one stage into the next. Steps 1 and 3 are asymmetric ring opening reactions which make use of a chiral reagent in order to control the direction of the rotation of the aryl. Steps 2 and 4 consist of the deprotection of the phenol, followed by regioselective ring formation. Light-driven rotary molecular motors In 1999 the laboratory of Prof. Dr. Ben L. Feringa at the University of Groningen, The Netherlands, reported the creation of a unidirectional molecular rotor. Their 360° molecular motor system consists of a bis-helicene connected by an alkene double bond displaying axial chirality and having two stereocenters. One cycle of unidirectional rotation takes 4 reaction steps. The first step is a low temperature endothermic photoisomerization of the trans (P,P) isomer 1 to the cis (M,M) 2 where P stands for the right-handed helix and M for the left-handed helix. In this process, the two axial methyl groups are converted into two less sterically favorable equatorial methyl groups. By increasing the temperature to 20 °C these methyl groups convert back exothermally to the (P,P) cis axial groups (3) in a helix inversion. Because the axial isomer is more stable than the equatorial isomer, reverse rotation is blocked. A second photoisomerization converts (P,P) cis 3 into (M,M) trans 4, again with accompanying formation of sterically unfavorable equatorial methyl groups. A thermal isomerization process at 60 °C closes the 360° cycle back to the axial positions. A major hurdle to overcome is the long reaction time for complete rotation in these systems, which does not compare to rotation speeds displayed by motor proteins in biological systems. In the fastest system to date, with a fluorene lower half, the half-life of the thermal helix inversion is 0.005 seconds. This compound is synthesized using the Barton-Kellogg reaction. In this molecule the slowest step in its rotation, the thermally induced helix-inversion, is believed to proceed much more quickly because the larger tert-butyl group makes the unstable isomer even less stable than when the methyl group is used. This is because the unstable isomer is more destabilized than the transition state that leads to helix-inversion. 
The different behaviour of the two molecules is illustrated by the fact that the half-life time for the compound with a methyl group instead of a tert-butyl group is 3.2 minutes. The Feringa principle has been incorporated into a prototype nanocar. The car synthesized has a helicene-derived engine with an oligo (phenylene ethynylene) chassis and four carborane wheels and is expected to be able to move on a solid surface with scanning tunneling microscopy monitoring, although so far this has not been observed. The motor does not perform with fullerene wheels because they quench the photochemistry of the motor moiety. Feringa motors have also been shown to remain operable when chemically attached to solid surfaces. The ability of certain Feringa systems to act as an asymmetric catalyst has also been demonstrated. In 2016, Feringa was awarded a Nobel prize for his work on molecular motors. Experimental demonstration of a single-molecule electric motor A single-molecule electrically operated motor made from a single molecule of n-butyl methyl sulfide (C5H12S) has been reported. The molecule is adsorbed onto a copper (111) single-crystal piece by chemisorption. See also Molecular machine Molecular motors Molecular propeller Nanomotor References Nanotechnology Molecular machines
Synthetic molecular motor
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,754
[ "Machines", "Materials science", "Molecular machines", "Physical systems", "Nanotechnology" ]
3,200,841
https://en.wikipedia.org/wiki/Fresnel%20%28unit%29
A fresnel is a unit of frequency equal to 10¹² s⁻¹. It was occasionally used in the field of spectroscopy, but its use has been superseded by the terahertz (with the identical value 10¹² hertz). It is named for Augustin-Jean Fresnel, the physicist whose expertise in optics led to the creation of Fresnel lenses. References Units of frequency Obsolete units of measurement Non-SI metric units Spectroscopy
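A minimal sketch of the equivalence stated above (1 fresnel = 10¹² s⁻¹ = 1 THz); the 30-fresnel example value is arbitrary and chosen only to show the conversion.

# Convert a frequency given in fresnels to hertz and to a vacuum wavelength.
C = 2.99792458e8          # m/s, speed of light
FRESNEL_IN_HZ = 1.0e12    # one fresnel expressed in hertz

def fresnels_to_hertz(f_fresnel):
    return f_fresnel * FRESNEL_IN_HZ

f = fresnels_to_hertz(30)   # a 30-fresnel (30 THz) spectroscopic frequency, purely illustrative
print(f"{f:.3e} Hz, vacuum wavelength {C / f * 1e6:.1f} micrometres")
# Prints 3.000e+13 Hz and a wavelength of about 10 micrometres (mid-infrared).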
Fresnel (unit)
[ "Physics", "Chemistry", "Astronomy", "Mathematics" ]
91
[ "Spectroscopy stubs", "Obsolete units of measurement", "Molecular physics", "Spectrum (physical sciences)", "Physical quantities", "Time", "Time stubs", "Instrumental analysis", "Quantity", "Non-SI metric units", "Astronomy stubs", "Units of frequency", "Spacetime", "Molecular physics stub...
3,201,172
https://en.wikipedia.org/wiki/High%20Power%20Electric%20Propulsion
High Power Electric Propulsion (HiPEP) is a variation of ion thruster for use in nuclear electric propulsion applications. It was ground-tested in 2003 by NASA and was intended for use on the Jupiter Icy Moons Orbiter, which was canceled in 2005.
Theory The HiPEP thruster differs from earlier ion thrusters because the xenon ions are produced using a combination of microwave and magnetic fields. The ionization is achieved through a process called electron cyclotron resonance (ECR). In ECR, the small number of free electrons present in the neutral gas gyrate around the static magnetic field lines. The injected microwaves' frequency is set to match this gyrofrequency and a resonance is established. Energy is transferred from the right-hand polarized portion of the microwave to the electrons. This energy is then transferred to the bulk gas/plasma via the rare, yet important, collisions between electrons and neutrals. During these collisions, electrons can be knocked free from the neutrals, forming ion-electron pairs. The process is a highly efficient means of creating a plasma in low-density gases. Previously, the required electrons were provided by a hollow cathode.
Specifications The thruster itself is in the 20–50 kW class, with a specific impulse of 6,000–9,000 seconds, and a propellant throughput capability exceeding 100 kg/kW. The goal of the project, as of June 2003, was to achieve a technology readiness level of 4–5 within 2 years. The pre-prototype HiPEP produced 670 millinewtons (mN) of thrust at a power level of 39.3 kW using 7.0 mg/s of fuel, giving a specific impulse of 9620 s. Downrated to 24.4 kW, the HiPEP used 5.6 mg/s of fuel, giving a specific impulse of 8270 s and 460 mN of thrust.
Project and development history Phase 1 of HiPEP development concluded in early 2003: conceptual design of the thruster was completed, and individual component testing concluded. A full-scale laboratory thruster was constructed for Phase 2 of the HiPEP's development. However, with the cancellation of the Jupiter Icy Moons Orbiter mission in 2005, HiPEP's development also came to a halt. Before cancellation, HiPEP completed a 2,000-hour wear test. See also Exploration of Jupiter List of spacecraft with electric propulsion Solar electric propulsion References External links NASA GRC Media Packet on HiPEP. Ion engines
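The quoted operating points can be cross-checked with the standard electric-propulsion relations between thrust, mass flow rate and specific impulse. The sketch below is illustrative only; the relations are textbook formulas rather than anything given in this article, and the "efficiency" reported is simply the jet power divided by the quoted input power.

# Cross-check of the HiPEP performance figures quoted above:
#   thrust      F   = mdot * g0 * Isp
#   jet power   Pj  = F**2 / (2 * mdot)
#   efficiency  eta = Pj / P_in
G0 = 9.80665  # m/s^2, standard gravity used in the definition of specific impulse

def check_point(mdot_mg_s, isp_s, p_in_kw):
    mdot = mdot_mg_s * 1e-6            # mg/s -> kg/s
    thrust = mdot * G0 * isp_s         # N
    p_jet = thrust**2 / (2.0 * mdot)   # W
    eta = p_jet / (p_in_kw * 1e3)
    return thrust * 1e3, eta           # thrust in mN, efficiency as a fraction

# Full-power and downrated operating points quoted in the article:
for label, point in {"39.3 kW": (7.0, 9620, 39.3), "24.4 kW": (5.6, 8270, 24.4)}.items():
    thrust_mn, eta = check_point(*point)
    print(f"{label}: thrust ~ {thrust_mn:.0f} mN, implied thrust efficiency ~ {eta:.0%}")
# The computed thrusts (~660 mN and ~454 mN) agree with the quoted 670 mN and 460 mN to
# within a few percent, and the implied efficiencies come out of order 80 percent.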
High Power Electric Propulsion
[ "Physics", "Chemistry" ]
503
[ "Ions", "Ion engines", "Matter" ]
3,201,543
https://en.wikipedia.org/wiki/Electron%20scattering
Electron scattering occurs when electrons are displaced from their original trajectory. This is due to the electrostatic forces within matter interaction or, if an external magnetic field is present, the electron may be deflected by the Lorentz force. This scattering typically happens with solids such as metals, semiconductors and insulators; and is a limiting factor in integrated circuits and transistors. Electron scattering has many applications ranging from the use of swift electron in electron microscopes to very high energies for hadronic systems, that allows the measurement of the distribution of charges for nucleons and nuclear structure. The scattering of electrons has allowed us to understand that protons and neutrons are made up of the smaller elementary subatomic particles called quarks. Electrons may be scattered through a solid in several ways: Not at all: no electron scattering occurs at all and the beam passes straight through. Single scattering: when an electron is scattered just once. Plural scattering: when electron(s) scatter several times. Multiple scattering: when electron(s) scatter many times over. The likelihood of an electron scattering and the degree of the scattering is a probability function of the specimen thickness and the mean free path. History The principle of the electron was first theorised in the period of 1838–1851 by a natural philosopher by the name of Richard Laming who speculated the existence of sub-atomic, unit charged particles; he also pictured the atom as being an 'electrosphere' of concentric shells of electrical particles surrounding a material core. It is generally accepted that J. J. Thomson first discovered the electron in 1897, although other notable members in the development in charged particle theory are George Johnstone Stoney (who coined the term "electron"), Emil Wiechert (who was first to publish his independent discovery of the electron), Walter Kaufmann, Pieter Zeeman and Hendrik Lorentz. Compton scattering was first observed at Washington University in St. Louis in 1923 by Arthur Compton who earned the 1927 Nobel Prize in Physics for the discovery; his graduate student Y. H. Woo who further verified the results is also of mention. Compton scattering is usually cited in reference to the interaction involving the electrons of an atom, however nuclear Compton scattering does exist. The first electron diffraction experiment was conducted in 1927 by Clinton Davisson and Lester Germer using what would come to be a prototype for modern LEED system. The experiment was able to demonstrate the wave-like properties of electrons, thus confirming the de Broglie hypothesis that matter particles have a wave-like nature. However, after this the interest in LEED diminished in favour of high-energy electron diffraction until the early 1960s when an interest in LEED was revived; of notable mention during this period is H. E. Farnsworth who continued to develop LEED techniques. High energy electron-electron colliding beam history begins in 1956 when K. O'Neill of Princeton University became interested in high energy collisions, and introduced the idea of accelerator(s) injecting into storage ring(s). While the idea of beam-beam collisions had been around since approximately the 1920s, it was not until 1953 that a German patent for colliding beam apparatus was obtained by Rolf Widerøe. Phenomena Electrons can be scattered by other charged particles through the electrostatic Coulomb forces. 
Furthermore, if a magnetic field is present, a traveling electron will be deflected by the Lorentz force. An extremely accurate description of all electron scattering, including quantum and relativistic aspects, is given by the theory of quantum electrodynamics. Lorentz force The Lorentz force, named after Dutch physicist Hendrik Lorentz, for a charged particle q is given (in SI units) by the equation: where qE describes the electric force due to a present electric field, E, acting on q. And qv × B describes the magnetic force due to a present magnetic field, B, acting on q when q is moving with velocity v. This can also be written as: where is the electric potential, and A is the magnetic vector potential. It was Oliver Heaviside who is attributed in 1885 and 1889 to first deriving the correct expression for the Lorentz force of qv × B. Hendrik Lorentz derived and refined the concept in 1892 and gave it his name, incorporating forces due to electric fields. Rewriting this as the equation of motion for a free particle of charge q mass m,this becomes: or in the relativistic case using Lorentz contraction where γ is: this equation of motion was first verified in 1897 in J. J. Thomson's experiment investigating cathode rays which confirmed, through bending of the rays in a magnetic field, that these rays were a stream of charged particles now known as electrons. Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called Laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction), and the force on a particle which might be traveling near the speed of light (relativistic form of the Lorentz force). Electrostatic Coulomb force Electrostatic Coulomb force also known as Coulomb interaction and electrostatic force, named for Charles-Augustin de Coulomb who published the result in 1785, describes the attraction or repulsion of particles due to their electric charge. Coulomb's law states that: The magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The magnitude of the electrostatic force is proportional to the scalar multiple of the charge magnitudes, and inversely proportional to the square of the distance (i.e. inverse-square law), and is given by: or in vector notation: where q1, q2 are two point charges; being the unit vector direction of the distance r between charges and ε0 is the permittivity of free space, given in SI units by: The directions of the forces exerted by the two charges on one another are always along the straight line joining them (the shortest distance), and are vector forces of infinite range, and obey Newton's third law, being of equal magnitude and opposite direction. Further, when both charges q1 and q2 have the same sign (either both positive or both negative) the forces between them are repulsive, if they are of opposite sign then the forces are attractive. These forces obey an important property called the principle of superposition of forces which states that if a third charge were introduced then the total force acting on that charge is the vector sum of the forces that would be exerted by the other charges individually, this holds for any number of charges. 
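As a rough numerical companion to the inverse-square law and the superposition principle just described, the following Python sketch evaluates the Coulomb force on one charge from several others and adds the vectors. The charges, positions and helper names are invented for illustration, and the vacuum permittivity is used, consistent with the vacuum form of the law discussed here.

# Numerical illustration of Coulomb's law and superposition (2-D, SI units).
import math

EPS0 = 8.8541878128e-12        # F/m, permittivity of free space
K = 1.0 / (4.0 * math.pi * EPS0)

def coulomb_force(q1, q2, r1, r2):
    """Force on charge q1 at position r1 due to charge q2 at position r2, in newtons."""
    dx, dy = r1[0] - r2[0], r1[1] - r2[1]
    dist = math.hypot(dx, dy)
    mag = K * q1 * q2 / dist**2                 # inverse-square law (signed)
    return (mag * dx / dist, mag * dy / dist)   # directed along the line joining the charges

# An electron at the origin with two protons nearby; the total force is the vector sum.
e = 1.602176634e-19
electron, protons = (0.0, 0.0), [(1e-9, 0.0), (0.0, 2e-9)]
fx = fy = 0.0
for p in protons:
    f = coulomb_force(-e, +e, electron, p)
    fx, fy = fx + f[0], fy + f[1]
print(f"Net force on the electron: ({fx:.3e}, {fy:.3e}) N")   # attractive, toward both protons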
However, Coulomb's law has been stated for charges in a vacuum, if the space between point charges contains matter then the permittivity of the matter between the charges must be accounted for as follows: where εr is the relative permittivity of the space the force acts through, and is dimensionless. Collisions If two particles interact with one another in a scattering process there are two results possible after the interaction: Elastic Elastic scattering is when the collisions between target and incident particles have total conservation of kinetic energy. This implies that there is no breaking up of the particles or energy loss through vibrations, that is to say that the internal states of each of the particles remains unchanged. Due to the fact that there is no breaking present, elastic collisions can be modeled as occurring between point-like particles, a principle that is very useful for an elementary particle such as the electron. Inelastic Inelastic scattering is when the collisions do not conserve kinetic energy, and as such the internal states of one or both of the particles has changed. This is due to energy being converted into vibrations which can be interpreted as heat, waves (sound), or vibrations between constituent particles of either collision party. Particles may also split apart, further energy can be converted into breaking the chemical bonds between components. Furthermore, momentum is conserved in both elastic and inelastic scattering. Other results than scattering are reactions, in which the structure of the interacting particles is changed producing two or more generally complex particles, and the creation of new particles that are not constituent elementary particles of the interacting particles. Other types of scattering Electron–molecule scattering Electron scattering by isolated atoms and molecules occurs in the gas phase. It plays a key role in plasma physics and chemistry and it's important for such applications as semiconductor physics. Electron-molecule/atom scattering is normally treated by means of quantum mechanics. The leading approach to compute the cross sections is using R-matrix method. Compton scattering Compton scattering, so named for Arthur Compton who first observed the effect in 1922 and which earned him the 1927 Nobel Prize in Physics; is the inelastic scattering of a high-energy photon by a free charged particle. This was demonstrated in 1923 by firing radiation of a given wavelength (X-rays in the given case) through a foil (carbon target), which was scattered in a manner inconsistent with classical radiation theory. Compton published a paper in the Physical Review explaining the phenomenon: A quantum theory of the scattering of X-rays by light elements. The Compton effect can be understood as high-energy photons scattering in-elastically off individual electrons, when the incoming photon gives part of its energy to the electron, then the scattered photon has lower energy and lower frequency and longer wavelength according to the Planck relation: which gives the energy E of the photon in terms of frequency f or ν, and the Planck constant h ( = ). The wavelength change in such scattering depends only upon the angle of scattering for a given target particle. This was an important discovery during the 1920s when the particle (photon) nature of light suggested by the photoelectric effect was still being debated, the Compton experiment gave clear and independent evidence of particle-like behavior. 
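To put scales on the quantities just introduced, the short sketch below applies the Planck relation to an X-ray photon of the kind used in Compton's experiment and evaluates the electron's Compton wavelength h/(m_e c), which sets the size of the wavelength shift described in the following passage. The 0.0709 nm wavelength is an assumed illustrative value, not a figure taken from this article.

# Photon energy from E = h*f, and the Compton wavelength of the electron.
H = 6.62607015e-34        # J s, Planck constant
C = 2.99792458e8          # m/s, speed of light
M_E = 9.1093837015e-31    # kg, electron rest mass
EV = 1.602176634e-19      # J per electronvolt

wavelength = 0.0709e-9                 # m, incident X-ray wavelength (assumed example)
frequency = C / wavelength             # Hz
energy_ev = H * frequency / EV         # photon energy via the Planck relation

compton_wavelength = H / (M_E * C)     # h / (m_e c), scale of the Compton shift
print(f"Photon energy: {energy_ev / 1e3:.1f} keV")                       # about 17.5 keV
print(f"Compton wavelength of the electron: {compton_wavelength * 1e12:.2f} pm")  # about 2.43 pm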
The formula describing the Compton shift in the wavelength due to scattering is given by: where λf is the final wavelength of the photon after scattering, λi is the initial wavelength of the photon before scattering, h is the Planck constant, me is the rest mass of the electron, c is the speed of light and θ is the scattering angle of the photon. The coefficient of (1 − cos θ) is known as the Compton wavelength, but is in fact a proportionality constant for the wavelength shift. The collision causes the photon wavelength to increase by somewhere between 0 (for a scattering angle of 0°) and twice the Compton wavelength (for a scattering angle of 180°). Thomson scattering is the classical elastic quantitative interpretation of the scattering process, and this can be seen to happen with lower, mid-energy, photons. The classical theory of an electromagnetic wave scattered by charged particles, cannot explain low intensity shifts in wavelength. Inverse Compton scattering takes place when the electron is moving, and has sufficient kinetic energy compared to the photon. In this case net energy may be transferred from the electron to the photon. The inverse Compton effect is seen in astrophysics when a low energy photon (e.g. of the cosmic microwave background) bounces off a high energy (relativistic) electron. Such electrons are produced in supernovae and active galactic nuclei. Møller scattering Mott scattering Bhabha scattering Bremsstrahlung scattering Deep inelastic scattering Synchrotron emission If a charged particle such as an electron is accelerated – this can be acceleration in a straight line or motion in a curved path – electromagnetic radiation is emitted by the particle. Within electron storage rings and circular particle accelerators known as synchrotrons, electrons are bent in a circular path and emit X-rays typically. This radially emitted () electromagnetic radiation when charged particles are accelerated is called synchrotron radiation. It is produced in synchrotrons using bending magnets, undulators and/or wigglers. The first observation came at the General Electric Research Laboratory in Schenectady, New York, on April 24, 1947, in the synchrotron built by a team of Herb Pollack to test the idea of phase-stability principle for RF accelerators. When the technician was asked to look around the shielding with a large mirror to check for sparking in the tube, he saw a bright arc of light coming from the electron beam. Robert Langmuir is credited as recognizing it as synchrotron radiation or, as he called it, "Schwinger radiation" after Julian Schwinger. Classically, the radiated power P from an accelerated electron is: this comes from the Larmor formula; where ε0 is the vacuum permittivity, e is elementary charge, c is the speed of light, and a is the acceleration. Within a circular orbit such as a storage ring, the non-relativistic case is simply the centripetal acceleration. However within a storage ring the acceleration is highly relativistic, and can be obtained as follows: , where v is the circular velocity, r is the radius of the circular accelerator, m is the rest mass of the charged particle, p is the momentum, τ is the Proper time (t/γ), and γ is the Lorentz factor. 
Radiated power then becomes: For highly relativistic particles, such that velocity becomes nearly constant, the factor γ4 becomes the dominant variable in determining loss rate, which means that the loss scales as the fourth power of the particle energy γmc2; and the inverse dependence of synchrotron radiation loss on radius argues for building the accelerator as large as possible. Facilities SLAC Stanford Linear Accelerator Center is located near Stanford University, California. Construction began on the linear accelerator in 1962 and was completed in 1967, and in 1968 the first experimental evidence of quarks was discovered resulting in the 1990 Nobel Prize in Physics, shared by SLAC's Richard Taylor and Jerome I. Friedman and Henry Kendall of MIT. The accelerator came with a 20 GeV capacity for the electron acceleration, and while similar to Rutherford's scattering experiment, that experiment operated with alpha particles at only 7 MeV. In the SLAC case the incident particle was an electron and the target a proton, and due to the short wavelength of the electron (due to its high energy and momentum) it was able to probe into the proton. The Stanford Positron Electron Asymmetric Ring (SPEAR) addition to the SLAC made further such discoveries possible, leading to the discovery in 1974 of the J/psi particle, which consists of a paired charm quark and anti-charm quark, and another Nobel Prize in Physics in 1976. This was followed up with Martin Perl's announcement of the discovery of the tau lepton, for which he shared the 1995 Nobel Prize in Physics. The SLAC aims to be a premier accelerator laboratory, to pursue strategic programs in particle physics, particle astrophysics and cosmology, as well as the applications in discovering new drugs for healing, new materials for electronics and new ways to produce clean energy and clean up the environment. Under the directorship of Chi-Chang Kao the SLAC's fifth director (as of November 2012), a noted X-ray scientist who came to SLAC in 2010 to serve as associate laboratory director for the Stanford Synchrotron Radiation Lightsource. BaBar SSRL – Stanford Synchrotron Radiation Lightsource Other scientific programs run at SLAC include: Advanced Accelerator Research ATLAS/Large Hadron Collider Elementary Particle Theory EXO – Enriched Xenon Observatory FACET – Facility for Advanced Accelerator Experimental Tests Fermi Gamma-ray Space Telescope Geant4 KIPAC – Kavli Institute for Particle Astrophysics and Cosmology LCLS – Linac Coherent Light Source LSST – Large Synoptic Survey Telescope NLCTA – Next Linear Collider Test Accelerator Stanford PULSE Institute SIMES – Stanford Institute for Materials and Energy Sciences SUNCAT Center for Interface Science and Catalysis Super CDMS – Super Cryogenic Dark Matter Search RIKEN RI Beam Factory RIKEN was founded in 1917 as a private research foundation in Tokyo, and is Japan's largest comprehensive research institution. Having grown rapidly in size and scope, it is today renowned for high-quality research in a diverse range of scientific disciplines, and encompasses a network of world-class research centers and institutes across Japan. The RIKEN RI Beam Factory, otherwise known as the RIKEN Nishina Centre (for Accelerator-Based Science), is a cyclotron-based research facility which began operating in 2007; 70 years after the first in Japanese cyclotron, from Dr. Yoshio Nishina whose name is given to the facility. As of 2006, the facility has a world-class heavy-ion accelerator complex. 
This consists of a K540-MeV ring cyclotron (RRC) and two different injectors: a variable-frequency heavy-ion linac (RILAC) and a K70-MeV AVF cyclotron (AVF). It has a projectile-fragment separator (RIPS) which provides RI (Radioactive Isotope) beams of less than 60 amu, the world's most intense light-atomic-mass RI beams. Overseen by the Nishina Centre, the RI Beam Factory is utilized by users worldwide promoting research in nuclear, particle and hadron physics. This promotion of accelerator applications research is an important mission of the Nishina Centre, and implements the use of both domestic and oversea accelerator facilities. SCRIT The SCRIT (Self-Confining Radioactive isotope Ion Target) facility, is currently under construction at the RIKEN RI beam factory (RIBF) in Japan. The project aims to investigate short-lived nuclei through the use of an elastic electron scattering test of charge density distribution, with initial testing done with stable nuclei. With the first electron scattering off unstable Sn isotopes to take place in 2014. The investigation of short-lived radioactive nuclei (RI) by means of electron scattering has never been performed because of an inability to make these nuclei a target, now with the advent of a novel self-confining RI technique at the world's first facility dedicated to the study of the structure of short-lived nuclei by electron scattering this research becomes possible. The principle of the technique is based around the ion trapping phenomenon which is observed at electron storage ring facilities, which has an adverse effect on the performance of electron storage rings. The novel idea to be employed at SCRIT is to use the ion trapping to allow short-lived RI's to be made a target, as trapped ions on the electron beam, for the scattering experiments. This idea was first given a proof-of-principle study using the electron storage ring of Kyoto University, KSR; this was done using a stable nucleus of 133Cs as a target in an experiment of 120MeV electron beam energy, 75mA typical stored beam current and a 100 seconds beam lifetime. The results of this study were favorable with elastically scattered electrons from the trapped Cs being clearly visible. See also Zeeman effect Particle physics Low-energy electron diffraction Quantum electrodynamics R-matrix Notes References External links Physics Out Loud: Electron Scattering (video) Brightstorm: Compton Scattering (video) Electron Scattering
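As a numerical aside to the synchrotron emission section above, the sketch below evaluates how much energy a relativistic electron radiates per turn in a circular machine, showing the γ⁴ growth and the inverse dependence on bending radius. The 100 m radius and the beam energies are invented for illustration; the per-turn loss expression used, U0 = e²γ⁴/(3ε0ρ), is the standard relativistic result for circular motion and is not quoted in this article.

# Synchrotron energy loss per turn for a relativistic electron in a circular machine.
E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
M_E_C2 = 0.51099895e6        # eV, electron rest energy

def energy_loss_per_turn(beam_energy_gev, bend_radius_m):
    gamma = beam_energy_gev * 1e9 / M_E_C2
    u0_joule = E_CHARGE**2 * gamma**4 / (3.0 * EPS0 * bend_radius_m)
    return u0_joule / E_CHARGE   # convert back to eV

for e_gev in (1.0, 3.0, 6.0):
    u0 = energy_loss_per_turn(e_gev, 100.0)   # assumed 100 m bending radius
    print(f"{e_gev:.0f} GeV electron, 100 m radius: ~{u0 / 1e3:.0f} keV lost per turn")
# Doubling the beam energy multiplies the loss by 16, which is why high-energy electron
# rings are built with the largest practical radius, as noted above.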
Electron scattering
[ "Physics", "Chemistry", "Materials_science" ]
3,962
[ "Electron", "Molecular physics", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
3,201,650
https://en.wikipedia.org/wiki/Magnetic%20Reynolds%20number
In magnetohydrodynamics, the magnetic Reynolds number (Rm) is a dimensionless quantity that estimates the relative effects of advection or induction of a magnetic field by the motion of a conducting medium compared to magnetic diffusion. It is the magnetic analogue of the Reynolds number in fluid mechanics and is typically defined by Rm = U L / η, where U is a typical velocity of the flow, L is a typical length scale of the flow, and η is the magnetic diffusivity. The mechanism by which the motion of a conducting fluid generates a magnetic field is the subject of dynamo theory. When the magnetic Reynolds number is very large, however, diffusion and the dynamo are less of a concern, and in this case focus instead often rests on the influence of the magnetic field on the flow.
Derivation In the theory of magnetohydrodynamics, the magnetic Reynolds number can be derived from the induction equation ∂B/∂t = ∇ × (v × B) + η∇²B, where B is the magnetic field, v is the fluid velocity, and η is the magnetic diffusivity. The first term on the right hand side accounts for effects from magnetic induction in the plasma and the second term accounts for effects from magnetic diffusion. The relative importance of these two terms can be found by taking their ratio, the magnetic Reynolds number Rm. If it is assumed that both terms share the scale length L such that ∇ ∼ 1/L and the scale velocity U such that v ∼ U, the induction term can be written as |∇ × (v × B)| ∼ U B / L and the diffusion term as |η∇²B| ∼ η B / L². The ratio of the two terms is therefore Rm = U L / η.
General characteristics for large and small Rm For Rm ≪ 1, advection is relatively unimportant, and so the magnetic field will tend to relax towards a purely diffusive state, determined by the boundary conditions rather than the flow. For Rm ≫ 1, diffusion is relatively unimportant on the length scale L. Flux lines of the magnetic field are then advected with the fluid flow, until such time as gradients are concentrated into regions of short enough length scale that diffusion can balance advection.
Range of values The Sun has a large Rm, of order 10⁶. Dissipative effects are generally small, and there is no difficulty in maintaining a magnetic field against diffusion. For the Earth, Rm is estimated to be of order 10³. Dissipation is more significant, but a magnetic field is supported by motion in the liquid iron outer core. There are other bodies in the solar system that have working dynamos, e.g. Jupiter, Saturn, and Mercury, and others that do not, e.g. Mars, Venus and the Moon. The human length scale is very small, so that typically Rm ≪ 1. The generation of magnetic field by the motion of a conducting fluid has been achieved in only a handful of large experiments using mercury or liquid sodium.
Bounds In situations where permanent magnetisation is not possible, e.g. above the Curie temperature, Rm must be large enough for a magnetic field to be maintained, such that induction outweighs diffusion. It is not the absolute magnitude of velocity that is important for induction, but rather the relative differences and shearing in the flow, which stretch and fold magnetic field lines. A more appropriate form for the magnetic Reynolds number in this case is therefore Rm = S L² / η, where S is a measure of strain. One of the most well known results is due to Backus, which states that generation of a magnetic field by flow in a sphere requires the magnetic Reynolds number built from the maximum strain rate, a² S_max / η, to exceed a critical value, where a is the radius of the sphere and S_max is the maximum strain rate. This bound has since been improved by approximately 25% by Proctor. Many studies of the generation of magnetic field by a flow consider the computationally-convenient periodic cube.
In this case the minimum is expressed in terms of the root-mean-square strain over a scaled domain; if shearing over small length scales in the cube is ruled out, the corresponding minimum is instead set by the root-mean-square strain rate.
Relationship to Reynolds number and Péclet number The magnetic Reynolds number has a similar form to both the Péclet number and the Reynolds number. All three can be regarded as giving the ratio of advective to diffusive effects for a particular physical field and have the form of the product of a velocity and a length divided by a diffusivity. While the magnetic Reynolds number is related to the magnetic field in a magnetohydrodynamic flow, the Reynolds number is related to the fluid velocity itself and the Péclet number is related to heat. The dimensionless groups arise in the non-dimensionalization of the respective governing equations: the induction equation, the Navier–Stokes equations, and the heat equation.
Relationship to eddy current braking The dimensionless magnetic Reynolds number, Rm, is also used in cases where there is no physical fluid involved: Rm = μσ × (characteristic length) × (characteristic velocity), where μ is the magnetic permeability and σ is the electrical conductivity. When Rm is small the skin effect is negligible and the eddy current braking torque follows the theoretical curve of an induction motor. When Rm is large the skin effect dominates and the braking torque decreases much more slowly with increasing speed than predicted by the induction motor model.
See also Lundquist number Magnetohydrodynamics Alfvén Mach number Reynolds number Péclet number References Further reading Moffatt, H. Keith, 2000, "Reflections on Magnetohydrodynamics". In: Perspectives in Fluid Dynamics (Ed. G.K. Batchelor, H.K. Moffatt & M.G. Worster), Cambridge University Press, pp. 347–391. P. A. Davidson, 2001, An Introduction to Magnetohydrodynamics, Cambridge University Press. Dimensionless numbers of fluid mechanics Fluid dynamics Magnetohydrodynamics
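To make the "range of values" discussion above concrete, the following Python sketch evaluates Rm = U L / η with η = 1/(μ0 σ) for a few regimes. All conductivities, speeds and length scales below are rough, order-of-magnitude assumptions chosen for illustration; they are not figures taken from this article.

# Order-of-magnitude magnetic Reynolds numbers for a few illustrative settings.
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability, H/m

def magnetic_reynolds(speed, length, conductivity):
    eta = 1.0 / (MU0 * conductivity)   # magnetic diffusivity, m^2/s
    return speed * length / eta

cases = {
    # name:               (U in m/s, L in m, sigma in S/m) -- all assumed values
    "Earth outer core":   (5e-4,     2e6,    1e6),
    "liquid-sodium rig":  (10.0,     1.0,    1e7),
    "stirred salt water": (1.0,      0.1,    5.0),
}
for name, (u, l, sigma) in cases.items():
    print(f"{name:>20s}: Rm ~ {magnetic_reynolds(u, l, sigma):.1e}")
# The core comes out near 1e3 and the bench-top salt water near 1e-6, consistent with the
# statement that laboratory dynamos require large volumes of liquid metal moving quickly.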
Magnetic Reynolds number
[ "Chemistry", "Engineering" ]
1,135
[ "Piping", "Magnetohydrodynamics", "Chemical engineering", "Fluid dynamics" ]
3,201,787
https://en.wikipedia.org/wiki/Saddle-node%20bifurcation
In the mathematical area of bifurcation theory, a saddle-node bifurcation, tangential bifurcation or fold bifurcation is a local bifurcation in which two fixed points (or equilibria) of a dynamical system collide and annihilate each other. The term 'saddle-node bifurcation' is most often used in reference to continuous dynamical systems. In discrete dynamical systems, the same bifurcation is often instead called a fold bifurcation. Another name is blue sky bifurcation, in reference to the sudden creation of two fixed points. If the phase space is one-dimensional, one of the equilibrium points is unstable (the saddle), while the other is stable (the node). Saddle-node bifurcations may be associated with hysteresis loops and catastrophes.
Normal form A typical example of a differential equation with a saddle-node bifurcation is dx/dt = r + x². Here x is the state variable and r is the bifurcation parameter. If r < 0 there are two equilibrium points, a stable equilibrium point at x = −√(−r) and an unstable one at x = +√(−r). At r = 0 (the bifurcation point) there is exactly one equilibrium point, x = 0. At this point the fixed point is no longer hyperbolic. In this case the fixed point is called a saddle-node fixed point. If r > 0 there are no equilibrium points. In fact, this is a normal form of a saddle-node bifurcation. A scalar differential equation dx/dt = f(r, x) which has a fixed point at x = 0 for r = 0 with ∂f/∂x(0, 0) = 0 is locally topologically equivalent to dx/dt = r ± x², provided it satisfies ∂²f/∂x²(0, 0) ≠ 0 and ∂f/∂r(0, 0) ≠ 0. The first condition is the nondegeneracy condition and the second condition is the transversality condition.
Example in two dimensions An example of a saddle-node bifurcation in two dimensions occurs in the two-dimensional dynamical system dx/dt = α − x², dy/dt = −y. As can be seen by the animation obtained by plotting phase portraits by varying the parameter α: when α is negative, there are no equilibrium points; when α = 0, there is a saddle-node point; and when α is positive, there are two equilibrium points, that is, one saddle point and one node (either an attractor or a repellor). Other examples are in modelling biological switches. Recently, it was shown that under certain conditions, the Einstein field equations of General Relativity have the same form as a fold bifurcation. A non-autonomous version of the saddle-node bifurcation (i.e. the parameter is time-dependent) has also been studied. See also Pitchfork bifurcation Transcritical bifurcation Hopf bifurcation Saddle point Notes References Bifurcation theory Articles containing video clips
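A small numerical companion to the normal form dx/dt = r + x² discussed above: for a few values of the parameter r it reports the equilibria and their stability from the sign of the derivative of the right-hand side. The sweep values are arbitrary and the function names are illustrative only.

# Equilibria of the saddle-node normal form f(x) = r + x**2 and their stability.
import math

def equilibria(r):
    """Fixed points of dx/dt = r + x**2 with a stability label for each."""
    if r > 0:
        return []   # no real roots: the pair of fixed points has annihilated
    if r == 0:      # exact zero occurs only for the illustrative sweep below
        return [(0.0, "non-hyperbolic saddle-node point")]
    root = math.sqrt(-r)
    # f'(x) = 2x, so x = -root is stable (f' < 0) and x = +root is unstable (f' > 0)
    return [(-root, "stable node"), (root, "unstable (saddle)")]

for r in (-1.0, -0.25, 0.0, 0.5):
    points = equilibria(r)
    if not points:
        print(f"r = {r:+.2f}: no equilibria (the pair has annihilated)")
    else:
        desc = "; ".join(f"x* = {x:+.2f} ({kind})" for x, kind in points)
        print(f"r = {r:+.2f}: {desc}")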
Saddle-node bifurcation
[ "Mathematics" ]
530
[ "Bifurcation theory", "Dynamical systems" ]
3,201,899
https://en.wikipedia.org/wiki/Enceinte
Enceinte (from Latin incinctus, "girdled, surrounded") is a French term that refers to the "main defensive enclosure of a fortification". For a castle, this is the main defensive line of wall towers and curtain walls enclosing the position. For a settlement, it would refer to the main town wall with its associated gatehouses, towers, and walls. According to the 1911 Encyclopædia Britannica, the term was strictly applied to the continuous line of bastions and curtain walls forming "the body of the place", this last expression being often used as synonymous with enceinte. However, the outworks or defensive walls close to the enceinte were not considered as forming part of it. In early 20th-century fortification, the enceinte was usually simply the innermost continuous line of fortifications. In architecture, generally, an enceinte is the close or precinct of a cathedral, abbey, castle, etc. This definition of the term differs from the more common use of enceinte as a French adjective, which means "pregnant".
Features The enceinte may be laid out as a freestanding structure or combined with buildings adjoining the outer walls. The enceinte not only provided passive protection for the areas behind it, but was usually an important component of the defence with its wall walks (often surmounted by battlements), embrasures and covered firing positions. The outline of the enceinte, with its fortified towers and domestic buildings, shaped the silhouette of a castle. The ground plan of an enceinte is affected by the terrain. The enceintes of hill castles often have an irregular polygonal shape dictated by the topography, whilst lowland castles more frequently have a regular rectangular shape, as exemplified by quadrangular castles. From the 12th century onwards, an additional enclosure called a Zwinger was often built in front of the enceinte of many European castles. This afforded an additional layer of defense, as it formed a killing ground in front of the main defensive wall. Sometimes, depending on the size and type of the surrounding fortifications, several wall systems were built (e.g. as Zwingers) that could also be used to keep dogs, wild boar or bears, or even cattle in times of need. During the Baroque era it was not uncommon for these enclosures to be turned into pleasure gardens, as for example in the Zwinger at Dresden. Notes References Attribution: Castle architecture
Enceinte
[ "Engineering" ]
517
[ "Architecture stubs", "Architecture" ]
3,201,966
https://en.wikipedia.org/wiki/Spin%20density%20wave
Spin-density wave (SDW) and charge-density wave (CDW) are names for two similar low-energy ordered states of solids. Both these states occur at low temperature in anisotropic, low-dimensional materials or in metals that have high densities of states at the Fermi level $E_F$. Other low-temperature ground states that occur in such materials are superconductivity, ferromagnetism and antiferromagnetism. The transition to the ordered states is driven by the condensation energy, which is approximately $g(E_F)\,\Delta^2/2$, where $\Delta$ is the magnitude of the energy gap opened by the transition. Fundamentally SDWs and CDWs involve the development of a superstructure in the form of a periodic modulation in the density of the electronic spins and charges with a characteristic spatial frequency that does not transform according to the symmetry group that describes the ionic positions. The new periodicity associated with CDWs can easily be observed using scanning tunneling microscopy or electron diffraction while the more elusive SDWs are typically observed via neutron diffraction or susceptibility measurements. If the new periodicity is a rational fraction or multiple of the lattice constant, the density wave is said to be commensurate; otherwise the density wave is termed incommensurate. Some solids with a high $g(E_F)$ form density waves while others choose a superconducting or magnetic ground state at low temperatures, because of the existence of nesting vectors in the materials' Fermi surfaces. The concept of a nesting vector is illustrated in the Figure for the famous case of chromium, which transitions from a paramagnetic to SDW state at a Néel temperature of 311 K. Cr is a body-centered cubic metal whose Fermi surface features many parallel boundaries between electron pockets centered at $\Gamma$ and hole pockets at H. These large parallel regions can be spanned by the nesting wavevector $Q$ shown in red. The real-space periodicity of the resulting spin-density wave is given by $2\pi/Q$. The formation of an SDW with a corresponding spatial frequency causes the opening of an energy gap that lowers the system's energy. The existence of the SDW in Cr was first posited in 1960 by Albert Overhauser of Purdue. The theory of CDWs was first put forth by Rudolf Peierls of Oxford University, who was trying to explain superconductivity. Many low-dimensional solids have anisotropic Fermi surfaces that have prominent nesting vectors. Well-known examples include layered materials like NbSe3, TaSe2 and K0.3MoO3 (a blue bronze) and quasi-1D organic conductors like TMTSF or TTF-TCNQ. CDWs are also common at the surface of solids where they are more commonly called surface reconstructions or even dimerization. Surfaces so often support CDWs because they can be described by two-dimensional Fermi surfaces like those of layered materials. Chains of Au and In on semiconducting substrates have been shown to exhibit CDWs. More recently, monatomic chains of Co on a metallic substrate were experimentally shown to exhibit a CDW instability, which was attributed to ferromagnetic correlations. The most intriguing properties of density waves are their dynamics. Under an appropriate electric field or magnetic field, a density wave will "slide" in the direction indicated by the field due to the electrostatic or magnetostatic force. Typically the sliding will not begin until a "depinning" threshold field is exceeded where the wave can escape from a potential well caused by a defect. The hysteretic motion of density waves is therefore not unlike that of dislocations or magnetic domains. 
The current-voltage curve of a CDW solid therefore shows a very high electrical resistance up to the depinning voltage, above which it shows a nearly ohmic behavior. Under the depinning voltage (which depends on the purity of the material), the crystal is an insulator. See also Peierls transition Superstructure (condensed matter) References General References A pedagogical article about the topic: "Charge and Spin Density Waves," Stuart Brown and George Gruner, Scientific American 270, 50 (1994). Authoritative work on Cr: About Fermi surfaces and nesting: Electronic Structure and the Properties of Solids, Walter A. Harrison, . Observation of CDW by ARPES: Peierls instability. An extensive review of experiments as of 2013 by Pierre Monceau. Condensed matter physics Electric and magnetic fields in matter
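As an illustrative aside (not part of the original article), the relation between a nesting wavevector and the real-space period of a density wave, and the commensurate/incommensurate distinction, can be made concrete in a few lines of Python. The lattice constant and the value of Q below are assumed, round numbers loosely inspired by chromium and are for illustration only.

```python
# Illustrative sketch: real-space period 2*pi/Q of a density wave and a crude
# commensurability test (is period/a close to a small rational number?).
from fractions import Fraction
import math

def density_wave_period(Q):
    """Real-space period of a density wave with wavevector magnitude Q (1/angstrom)."""
    return 2 * math.pi / Q

def commensurability(period, a, max_denominator=20, tol=1e-3):
    """Approximate period/a by a small rational and report whether it is commensurate."""
    ratio = period / a
    frac = Fraction(ratio).limit_denominator(max_denominator)
    return ratio, frac, abs(ratio - float(frac)) < tol

a = 2.88                        # assumed bcc lattice constant in angstrom (illustrative)
Q = 0.95 * (2 * math.pi / a)    # a slightly incommensurate nesting vector, as in Cr

period = density_wave_period(Q)
ratio, frac, ok = commensurability(period, a)
print(f"period = {period:.3f} A, period/a = {ratio:.4f} ~ {frac} -> "
      f"{'commensurate' if ok else 'incommensurate'}")
```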
Spin density wave
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
915
[ "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Matter" ]
16,592,105
https://en.wikipedia.org/wiki/Kith%20%28Poul%20Anderson%29
The Kith are a starfaring culture featured in a number of science fiction stories by American writer Poul Anderson. They are: "Ghetto" (1954) "The Horn of Time the Hunter" (also known as "Homo Aquaticus", 1963) The novel Starfarers (1998) - John W. Campbell Memorial Award nominee, 1999 The Kith develop out of early interstellar explorers in the 21st and 22nd centuries. Because of the effects of time dilation associated with travel at near-light speeds, the Kith maintain separate settlements ("Kithtowns") in which care is taken to keep their language and culture consistent over the course of millennia. As Kith usually marry among themselves, they seek to avoid in-breeding by a strict exogamy; Kith must find their mates in a ship other than their own, marriage between crew members of the same ship being considered a kind of incest. Inevitably, Kith come to regard planet-bound cultures with aloof detachment, as an individual Kith may witness in his or her lifetime the passage of hundreds of years, the rise and fall of empires which can only seem ephemeral. To the ground-dwellers such attitudes come to seem superior and arrogant, and the Kith's apparent near-immortality arouses envy. Although the Kith are instrumental in maintaining the network of trade that makes human interstellar civilization possible, over time they become the object of derision, suspicion and ultimately persecution. As set forth in Starfarers and "Ghetto", the Kithtowns ultimately become ghettos, and pogroms are launched against the Kith. "The Horn of Time the Hunter" suggests that the Kith are ultimately forced to flee human space altogether, and chronicles the return of one group of Kith to human space after hundreds of thousands of years' relativistic travel to the Galactic core. References External links Works by Poul Anderson Fictional species and races Special relativity
Kith (Poul Anderson)
[ "Physics" ]
402
[ "Special relativity", "Theory of relativity" ]
16,597,340
https://en.wikipedia.org/wiki/Large%20Underground%20Xenon%20experiment
The Large Underground Xenon experiment (LUX) aimed to directly detect weakly interacting massive particle (WIMP) dark matter interactions with ordinary matter on Earth. Despite the wealth of (gravitational) evidence supporting the existence of non-baryonic dark matter in the Universe, dark matter particles in our galaxy have never been directly detected in an experiment. LUX utilized a 370 kg liquid xenon detection mass in a time-projection chamber (TPC) to identify individual particle interactions, searching for faint dark matter interactions with unprecedented sensitivity. The LUX experiment, which cost approximately $10 million to build, was located underground at the Sanford Underground Laboratory (SURF, formerly the Deep Underground Science and Engineering Laboratory, or DUSEL) in the Homestake Mine (South Dakota) in Lead, South Dakota. The detector was located in the Davis campus, former site of the Nobel Prize-winning Homestake neutrino experiment led by Raymond Davis. It was operated underground to reduce the background noise signal caused by high-energy cosmic rays at the Earth's surface. The detector was decommissioned in 2016 and is now on display at the Sanford Lab Homestake Visitor Center. Detector principle The detector was isolated from background particles by a surrounding water tank and the earth above. This shielding reduced cosmic rays and radiation interacting with the xenon. Interactions in liquid xenon generate 175 nm ultraviolet photons and electrons. These photons were immediately detected by two arrays of 61 photomultiplier tubes at the top and bottom of the detector. These prompt photons were the S1 signal. Electrons generated by the particle interactions drifted upwards towards the xenon gas by an electric field. The electrons were pulled in the gas at the surface by a stronger electric field, and produced electroluminescence photons detected as the S2 signal. The S1 and subsequent S2 signal constituted a particle interaction in the liquid xenon. The detector was a time-projection chamber (TPC), using the time between S1 and S2 signals to find the interaction depth since electrons move at constant velocity in liquid xenon (around 1–2 km/s, depending on the electric field). The x-y coordinate of the event was inferred from electroluminescence photons at the top array by statistical methods (Monte Carlo and maximum likelihood estimation) to a resolution under 1 cm. Finding dark matter WIMPs would be expected to interact exclusively with the liquid xenon nuclei, resulting in nuclear recoils that would appear very similar to neutron collisions. In order to single out WIMP interactions, neutron events must be minimized, through shielding and ultra-quiet building materials. In order to discern WIMPs from neutrons, the number of single interactions must be compared to multiple events. Since WIMPs are expected to be so weakly interacting, most would pass through the detector unnoticed. Any WIMPs that interact will have negligible chance of repeated interaction. Neutrons, on the other hand, have a reasonably large chance of multiple collisions within the target volume, the frequency of which can be accurately predicted. Using this knowledge, if the ratio of single interactions to multiple interactions exceeds a certain value, the detection of dark matter may be reliably inferred. Collaboration The LUX collaboration was composed of over 100 scientists and engineers across 27 institutions in the US and Europe. 
LUX was composed of the majority of the US groups that collaborated in the XENON10 experiment, most of the groups in the ZEPLIN III experiment, the majority of the US component of the ZEPLIN II experiment, and groups involved in low-background rare event searches such as Super Kamiokande, SNO, IceCube, Kamland, EXO and Double Chooz. The LUX experiment's co-spokesmen were Richard Gaitskell from Brown University (who acted as co-spokesman from 2007 on) and Daniel McKinsey from University of California, Berkeley (who acted as co-spokesman from 2012 on). Tom Shutt from Case Western Reserve University was LUX co-spokesman between 2007 and 2012. Status Detector assembly began in late 2009. The LUX detector was commissioned overground at SURF for a six-month run. The assembled detector was transported underground from the surface laboratory in a two-day operation in the summer of 2012 and began data taking April 2013, presenting initial results Fall 2013. It was decommissioned in 2016. The next-generation follow-up experiment, the 7-ton LUX-ZEPLIN has been approved, expected to begin in 2020. Results Initial unblinded data taken April to August 2013 were announced on October 30, 2013. In an 85 live-day run with 118 kg fiducial volume, LUX obtained 160 events passing the data analysis selection criteria, all consistent with electron recoil backgrounds. A profile likelihood statistical approach shows this result is consistent with the background-only hypothesis (no WIMP interactions) with a p-value of 0.35. This was the most sensitive dark matter direct detection result in the world, and ruled out low-mass WIMP signal hints such as from CoGeNT and CDMS-II. These results struck out some of the theories about WIMPs, allowing researchers to focus on fewer leads. In the final run from October 2014 to May 2016, at four times its original design sensitivity with 368 kg of liquid xenon, LUX saw no signs of dark matter candidate—WIMPs. According to Ethan Siegel, the results from LUX and XENON1T have provided evidence against the supersymmetric "WIMP Miracle" strong enough to motivate theorists towards alternate models of dark matter. References External links LUX Dark Matter webpage Brown University article Experiments for dark matter search
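The depth reconstruction described in the detector-principle section above (drift time between S1 and S2 multiplied by a constant drift velocity) is simple enough to sketch in a few lines of Python. This is an illustrative example, not LUX analysis code; the drift speed is an assumed value inside the 1–2 km/s range quoted in the text.

```python
# Illustrative sketch: interaction depth in a liquid-xenon TPC from the time
# between the prompt S1 light and the delayed S2 electroluminescence signal,
# assuming a constant electron drift speed.
DRIFT_SPEED_MM_PER_US = 1.5   # assumed drift velocity ~1.5 mm/us (i.e. 1.5 km/s)

def interaction_depth_mm(t_s1_us, t_s2_us, drift_speed=DRIFT_SPEED_MM_PER_US):
    """Depth below the liquid surface, in mm, from the S1->S2 time difference (us)."""
    drift_time = t_s2_us - t_s1_us
    if drift_time < 0:
        raise ValueError("S2 must arrive after S1")
    return drift_time * drift_speed

# Example: an S2 arriving 120 us after S1 corresponds to a depth of 180 mm.
print(interaction_depth_mm(t_s1_us=0.0, t_s2_us=120.0))
```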
Large Underground Xenon experiment
[ "Physics" ]
1,158
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
9,783,078
https://en.wikipedia.org/wiki/Proteins%40home
proteins@home was a volunteer computing project that used the BOINC architecture. The project was run by the Department of Biology at . The project began on December 28, 2006 and ended in June 2008. Purpose proteins@home was a large-scale non-profit protein structure prediction project utilizing volunteer computing to perform intensive computations in a small amount of time. From their website: The amino acid sequence of a protein determines its three-dimensional structure, or 'fold'. Conversely, the three-dimensional structure is compatible with a large, but limited set of amino acid sequences. Enumerating the allowed sequences for a given fold is known as the 'inverse protein folding problem'. We are working to solve this problem for a large number of known protein folds (a representative subset: about 1500 folds). The most expensive step is to build a database of energy functions that describe all these structures. For each structure, we consider all possible sequences of amino acids. Surprisingly, this is computationally tractable, because our energy functions are sums over pairs of interactions. Once this is done, we can explore the space of amino acid sequences in a fast and efficient way, and retain the most favorable sequences. This large-scale mapping of protein sequence space will have applications for predicting protein structure and function, for understanding protein evolution, and for designing new proteins. By joining the project, you will help to build the database of energy functions and advance an important area of science with potential biomedical applications. See also List of volunteer computing projects References External links proteins@home archive Science in society Free science software Protein structure Volunteer computing projects
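As an illustration of the kind of computation the project description above refers to (energy functions that are sums over pairs of positions, evaluated over candidate amino-acid sequences, retaining the most favourable ones), here is a hedged Python sketch. It is not the project's actual code; the pairwise energy table is a random placeholder and the sizes are arbitrary.

```python
# Illustrative sketch: score amino-acid sequences against a fixed fold with an
# energy that is a sum over pairs of positions, then keep the best sequences.
import numpy as np

rng = np.random.default_rng(0)
n_positions, n_amino_acids, n_candidates = 12, 20, 5000

# E[i, j, a, b]: assumed interaction energy of amino acid a at position i with b at j
E = rng.normal(size=(n_positions, n_positions, n_amino_acids, n_amino_acids))
E = (E + E.transpose(1, 0, 3, 2)) / 2          # symmetrize in (i, a) <-> (j, b)

def sequence_energy(seq):
    """Total energy of a sequence: sum of pairwise terms over all pairs i < j."""
    return sum(E[i, j, seq[i], seq[j]]
               for i in range(n_positions) for j in range(i + 1, n_positions))

candidates = rng.integers(0, n_amino_acids, size=(n_candidates, n_positions))
energies = np.array([sequence_energy(s) for s in candidates])
best = candidates[np.argsort(energies)[:5]]     # retain the most favourable sequences
print("lowest energies found:", np.sort(energies)[:5])
```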
Proteins@home
[ "Chemistry", "Technology" ]
325
[ "Computer science stubs", "Computer science", "Structural biology", "Computing stubs", "Protein structure" ]
9,788,270
https://en.wikipedia.org/wiki/Automated%20species%20identification
Automated species identification is a method of making the expertise of taxonomists available to ecologists, parataxonomists and others via digital technology and artificial intelligence. Today, most automated identification systems rely on images depicting the species for the identification. Based on precisely identified images of a species, a classifier is trained. Once exposed to a sufficient amount of training data, this classifier can then identify the trained species on previously unseen images. Introduction The automated identification of biological objects such as insects (individuals) and/or groups (e.g., species, guilds, characters) has been a dream among systematists for centuries. The goal of some of the first multivariate biometric methods was to address the perennial problem of group discrimination and inter-group characterization. Despite much preliminary work in the 1950s and '60s, progress in designing and implementing practical systems for fully automated object biological identification has proven frustratingly slow. As recently as 2004 Dan Janzen updated the dream for a new audience: "The spaceship lands. He steps out. He points it around. It says 'friendly–unfriendly—edible–poisonous—safe–dangerous—living–inanimate'. On the next sweep it says 'Quercus oleoides—Homo sapiens—Spondias mombin—Solanum nigrum—Crotalus durissus—Morpho peleides—serpentine'. This has been in my head since reading science fiction in ninth grade half a century ago." The species identification problem Janzen's preferred solution to this classic problem involved building machines to identify species from their DNA. However, recent developments in computer architectures, as well as innovations in software design, have placed the tools needed to realize Janzen's vision in the hands of the systematics and computer science community not in several years hence, but now; and not just for creating DNA barcodes, but also for identification based on digital images. A survey published in 2004 studies why automated species identification had not become widely employed at this time and whether it would be a realistic option for the future. The authors found that "a small but growing number of studies sought to develop automated species identification systems based on morphological characters". An overview of 20 studies analyzing species' structures, such as cells, pollen, wings, and genitalia, shows identification success rates between 40% and 100% on training sets with 1 to 72 species. However, they also identified four fundamental problems with these systems: (1) training sets—were too small (5-10 specimens per species) and their extension especially for rare species may be difficult, (2) errors in identification—are not sufficiently studied to handle them and to find systematics, (3) scaling—studies consider only small numbers of species (<200 species), and (4) novel species — systems are restricted to the species they have been trained for and will classify any novel observation as one of the known species. A survey published in 2017 systematically compares and discusses progress and findings towards automated plant species identification within the last decade (2005–2015). 120 primary studies have been published in high-quality venues within this time, mainly by authors with computer science background. 
These studies propose a wealth of computer vision approaches, i.e., features reducing the high-dimensionality of the pixel-based image data while preserving the characteristic information as well as classification methods. The vast majority of these studies analyzes leaves for identification, while only 13 studies propose methods for flower-based identification. The reasons being that leaves can easier be collected and imaged and are available for most of the year. Proposed features capture generic object characteristic, i.e., shape, texture, and color as well as leaf-specific characteristics, i.e., venation and margin. The majority of studies still used datasets for evaluation that contained no more than 250 species. However, there is progress in this regard, one study uses a dataset with >2k and another with >20k species. A system developed in 2022 showed that automated identification achieves accuracy that is sufficiently high for being used in an automated insect surveillance system using electronic traps. By training classifiers on a few hundred images it correctly identified fruit-flies, and can be used for continuous monitoring aimed at detecting species invasion or pest outbreak. Several aspects contribute to the success of this system. Primarily, using e-traps provide a standardized setting, which means that even though they are deployed in different countries and regions, the visual variability, in terms of size view angle and illumination are controlled. This suggests that trap-based systems may be easier to develop than free-view systems for automatic pest identification. There is a shortage of specialists who can identify the very biodiversity whose preservation has become a global concern. In commenting on this problem in palaeontology in 1993, Roger Kaesler recognized: "... we are running out of systematic palaeontologists who have anything approaching synoptic knowledge of a major group of organisms ... Palaeontologists of the next century are unlikely to have the luxury of dealing at length with taxonomic problems ... Palaeontology will have to sustain its level of excitement without the aid of systematists, who have contributed so much to its success."This expertise deficiency cuts as deeply into those commercial industries that rely on accurate identifications (e.g., agriculture, biostratigraphy) as it does into a wide range of pure and applied research programmes (e.g., conservation, biological oceanography, climatology, ecology). It is also commonly, though informally, acknowledged that the technical, taxonomic literature of all organismal groups is littered with examples of inconsistent and incorrect identifications. This is due to a variety of factors, including taxonomists being insufficiently trained and skilled in making identifications (e.g., using different rules-of-thumb in recognizing the boundaries between similar groups), insufficiently detailed original group descriptions and/or illustrations, inadequate access to current monographs and well-curated collections and, of course, taxonomists having different opinions regarding group concepts. Peer review only weeds out the most obvious errors of commission or omission in this area, and then only when an author provides adequate representations (e.g., illustrations, recordings, and gene sequences) of the specimens in question. Systematics too has much to gain from the further development and use of automated identification systems. 
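For readers who want to see the basic pattern behind the systems surveyed above, here is a deliberately schematic Python sketch: a generic classifier trained on labelled images. It is not any of the cited systems; real pipelines use richer features (shape, texture, venation) or deep networks, and the arrays below are random placeholders standing in for photographs and species labels.

```python
# Schematic sketch: train a simple classifier on labelled "species photos".
# A plain SVM on flattened pixel vectors stands in for the classifier trained
# on precisely identified images described in the text.  Data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_images, height, width = 200, 32, 32
images = rng.random((n_images, height, width))     # placeholder "photos"
labels = rng.integers(0, 4, size=n_images)         # placeholder species ids

X = images.reshape(n_images, -1)                   # flatten pixels into feature vectors
X_train, X_test, y_train, y_test = train_test_split(X, labels,
                                                    test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)      # the trained classifier
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With random placeholder data the reported accuracy is, of course, only at chance level; on real, precisely identified training images the same pattern yields the species-level accuracies discussed in the surveys above.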
In order to attract both personnel and resources, systematics must transform itself into a "large, coordinated, international scientific enterprise". Many have identified use of the Internet— especially via the World Wide Web — as the medium through which this transformation can be made. While establishment of a virtual, GenBank-like system for accessing morphological data, audio clips, video files and so forth would be a significant step in the right direction, improved access to observational information and/or text-based descriptions alone will not address either the taxonomic impediment or low identification reproducibility issues successfully. Instead, the inevitable subjectivity associated with making critical decisions on the basis of qualitative criteria must be reduced or, at the very least, embedded within a more formally analytic context. Properly designed, flexible, and robust, automated identification systems, organized around distributed computing architectures and referenced to authoritatively identified collections of training set data (e.g., images, and gene sequences) can, in principle, provide all systematists with access to the electronic data archives and the necessary analytic tools to handle routine identifications of common taxa. Properly designed systems can also recognize when their algorithms cannot make a reliable identification and refer that image to a specialist (whose address can be accessed from another database). Such systems can also include elements of artificial intelligence and so improve their performance the more they are used. Once morphological (or molecular) models of a species have been developed and demonstrated to be accurate, these models can be queried to determine which aspects of the observed patterns of variation and variation limits are being used to achieve the identification, thus opening the way for the discovery of new and (potentially) more reliable taxonomic characters. iNaturalist is a global citizen science project and social network of naturalists that incorporates both human and automatic identification of plants, animals, and other living creatures via browser or mobile apps. Naturalis Biodiversity Center in the Netherlands developed several AI species identification models, including but not limited to: A multi-source model trained with expert-validated data and used by several European biodiversity portals for citizen scientist projects in different countries across Europe; A model for analyzing images from insect camera DIOPSIS; 8 AI models for butterflies, cone snails, bird eggs, rays and sharks egg capsules, as well as masks from different cultures that are in the collections of 5 Dutch museums; Sound recognition models. Pl@ntNet is a global citizen science project which provides an app and a website for plant identification through photographs, based on machine-learning Leaf Snap is an iOS app developed by the Smithsonian Institution that uses visual recognition software to identify North American tree species from photographs of leaves. Google Photos can automatically identify various species in photographs. Plant.id is a web application and API made by FlowerChecker company which uses a neural network trained on photos from FlowerChecker mobile app. See also References cited External links Here are some links to the home pages of species identification systems. The SPIDA and DAISY system are essentially generic and capable of classifying any image material presented. 
The ABIS and DrawWing systems are restricted to insects with membranous wings as they operate by matching a specific set of characters based on wing venation. The SPIDA system ABIS DAISY DrawWing LeafSnap Pl@ntNet Insect.id by Kindwise recognizes over 6,000 species including beetles, spiders, centipedes, butterflies, ants, bees and other insect-like animals Mushroom id by Kindwise recognizes over 3,200 species including mushrooms, lichens and slime molds Plant.id by Kindwise recognizes more than 33,000 taxa, including houseplants, garden plants, trees, weeds, fungi, and lichens; it also recognizes common plant diseases Species Automatic identification and data capture Comparative anatomy Bioinformatics Applications of computer vision
Automated species identification
[ "Technology", "Engineering", "Biology" ]
2,108
[ "Biological engineering", "Taxonomy (biology)", "Bioinformatics", "Data", "nan", "Automatic identification and data capture" ]
9,788,885
https://en.wikipedia.org/wiki/Argentinian%20mammarenavirus
Mammarenavirus juninense, better known as the Junin virus or Junín virus (JUNV), is an arenavirus in the Mammarenavirus genus that causes Argentine hemorrhagic fever (AHF). The virus took its original name from the city of Junín, around which the first cases of infection were reported, in 1958. Virology Structure Argentinian mammarenavirus is a negative sense ssRNA enveloped virion with a variable diameter between 50 and 300 nm. The surface of the particle encompasses a layer of T-shaped glycoproteins, each extending up to 10 nm outwards from the envelope, which are important in mediating attachment and entry into host cells. Genome The Argentinian mammarenavirus genome is composed of two single-stranded RNA molecules, each encoding two different genes in an ambisense orientation. The two segments are termed 'short (S)' and 'long (L)' owing to their respective lengths. The short segment (around 3400 nucleotides in length) encodes the nucleocapsid protein and the glycoprotein precursor (GPC). The GPC is subsequently cleaved to form two viral glycoproteins, GP1 and GP2, which ultimately form the T-shaped glycoprotein spike which extends outwards from the viral envelope. . The long segment (around 7200 nucleotides in length) encodes the viral polymerase and a zinc-binding protein. The virus is spread by rodents. Disease and epidemiology A member of the genus Mammarenavirus, Argentinian mammarenavirus characteristically causes Argentine hemorrhagic fever (AHF). AHF leads to severe compromise of the vascular, neurological and immune systems and has a mortality rate between 20 and 30%. Symptoms of the disease are conjunctivitis, purpura, petechiae and occasionally sepsis. The symptoms of the disease can be confusing; the condition can be mistaken for a different one, especially during the first week when it can resemble a flu. Since the discovery of Argentinian mammarenavirus in 1958, the geographical distribution of the pathogen, although still confined to Argentina, has expanded. At the time of discovery, Argentinian mammarenavirus was confined to an area of around 15,000 km2. At the beginning of 2000, the region with reported cases grew to around 150,000 km2. The natural hosts of Argentinian mammarenavirus are rodents, particularly Mus musculus, Calomys spp. and Akodon azarae. Direct rodent-to-human transmission only takes place when a person makes direct contact with the excrement of an infected rodent; this can occur by ingestion of contaminated food or water, inhalation of particles in urine or direct contact of an open wound with rodent feces. Potential therapy A potential novel treatment, the NMT inhibitor, has been shown to completely inhibit JUNV infection in cells based assays. Prevention and control An investigational (in the US) vaccine (Candid1) was developed at the US Army Medical Research Institute for Infectious Disease (USAMRIID) at Ft. Detrick, MD in the 1980s which has shown to be safe, well tolerated and effective in reducing mortality and morbidity due to AHF. The vaccine, which came from an XJ strain of the Argentinian mammarenavirus, was continually passaged a total of 44 times in newborn mouse brains, and a total of 19 times along with cloning in FRhL cells. Over 90% of the volunteers in Phase 1 and 2 clinical trials developed antibodies against the Argentinian mammarenavirus, and 99% developed an adequate immune response specific for Argentinian mammarenavirus. 
Moreover, a large efficacy study among 6,500 people, where 3,255 individuals were randomly selected to take Candid 1 and 3,245 individuals were randomly selected to take a placebo resulted in 23 cases of Junin-like infections, where 22 out of the 23 cases were from the placebo group. This efficacy study resulted in a 95% vaccine efficacy. Currently, the Candid 1 vaccine, otherwise known as the Junin vaccine, is licensed in Argentina by the regulatory agency of Argentina where Argentinian mammarenavirus is endemic to the region. People in laboratories who come in constant contact with Argentinian mammarenavirus are also recommended to take the Junin vaccine to prevent transmission. References Arenaviridae Rodent-carried diseases Vaccine-preventable diseases
Argentinian mammarenavirus
[ "Biology" ]
959
[ "Vaccination", "Vaccine-preventable diseases" ]
9,790,509
https://en.wikipedia.org/wiki/Hot%20air%20solder%20leveling
HASL or HAL (for hot air (solder) leveling) is a type of finish used on printed circuit boards (PCBs). The PCB is typically dipped into a bath of molten solder so that all exposed copper surfaces are covered by solder. Excess solder is removed by passing the PCB between hot air knives. HASL can be applied with or without lead (Pb), but only lead-free HASL is RoHS compliant. Advantages of HASL Excellent wetting during component soldering. Avoids copper corrosion. Disadvantages of HASL Low planarity on vertical levelers may make this surface finish unsuitable for use with fine pitch components. Improved planarity can be achieved using a horizontal leveler. High thermal stress during the process may introduce defects into PCB. See also Electroless Nickel Immersion Gold (ENIG) Immersion Silver (IAg) Organic Solderability Preservative (OSP) Reflow soldering Wave soldering Printed circuit board manufacturing Soldering
Hot air solder leveling
[ "Engineering" ]
208
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
9,790,950
https://en.wikipedia.org/wiki/Regular%20category
In category theory, a regular category is a category with finite limits and coequalizers of all pairs of morphisms called kernel pairs, satisfying certain exactness conditions. In that way, regular categories recapture many properties of abelian categories, like the existence of images, without requiring additivity. At the same time, regular categories provide a foundation for the study of a fragment of first-order logic, known as regular logic. Definition A category C is called regular if it satisfies the following three properties: C is finitely complete. If f : X → Y is a morphism in C, and the pullback X ×_Y X of f along itself is formed, with projections p0, p1 : X ×_Y X → X, then the coequalizer of p0, p1 exists. The pair (p0, p1) is called the kernel pair of f. Being a pullback, the kernel pair is unique up to a unique isomorphism. If f : X → Y is a morphism in C, and g : Z ×_Y X → Z is obtained as the pullback of f along an arbitrary morphism Z → Y, and if f is a regular epimorphism, then g is a regular epimorphism as well. A regular epimorphism is an epimorphism that appears as a coequalizer of some pair of morphisms. Examples of regular categories include: Set, the category of sets and functions between the sets More generally, every elementary topos Grp, the category of groups and group homomorphisms The category of rings and ring homomorphisms More generally, the category of models of any variety Every bounded meet-semilattice, with morphisms given by the order relation Every abelian category The following categories are not regular: Top, the category of topological spaces and continuous functions Cat, the category of small categories and functors Epi-mono factorization In a regular category, the regular epimorphisms and the monomorphisms form a factorization system. Every morphism f:X→Y can be factorized into a regular epimorphism e:X→E followed by a monomorphism m:E→Y, so that f=me. The factorization is unique in the sense that if e':X→E' is another regular epimorphism and m':E'→Y is another monomorphism such that f=m'e', then there exists an isomorphism h:E→E' such that he=e' and m'h=m. The monomorphism m is called the image of f. Exact sequences and regular functors In a regular category, a diagram of the form R ⇉ X → Z is said to be an exact sequence if it is both a coequalizer and a kernel pair. The terminology is a generalization of exact sequences in homological algebra: in an abelian category, a diagram of this form is exact in this sense if and only if the corresponding sequence is a short exact sequence in the usual sense. A functor between regular categories is called regular, if it preserves finite limits and coequalizers of kernel pairs. A functor is regular if and only if it preserves finite limits and exact sequences. For this reason, regular functors are sometimes called exact functors. Functors that preserve finite limits are often said to be left exact. Regular logic and regular categories Regular logic is the fragment of first-order logic that can express statements of the form ∀x (φ(x) → ψ(x)), where φ and ψ are regular formulae, i.e. formulae built up from atomic formulae, the truth constant, binary meets (conjunction) and existential quantification. Such formulae can be interpreted in a regular category, and the interpretation is a model of a sequent φ ⊢ ψ, if the interpretation of φ factors through the interpretation of ψ. This gives for each theory (set of sequents) T and for each regular category C a category Mod(T,C) of models of T in C. This construction gives a functor Mod(T,-):RegCat→Cat from the category RegCat of small regular categories and regular functors to small categories. 
It is an important result that for each theory T there is a regular category R(T), such that for each regular category C there is an equivalence Mod(T,C) ≃ RegCat(R(T),C) which is natural in C. Here, R(T) is called the classifying category of the regular theory T. Up to equivalence any small regular category arises in this way as the classifying category of some regular theory. Exact (effective) categories The theory of equivalence relations is a regular theory. An equivalence relation on an object X of a regular category is a monomorphism into X × X that satisfies the interpretations of the conditions for reflexivity, symmetry and transitivity. Every kernel pair p0, p1 : R → X defines an equivalence relation R → X × X. Conversely, an equivalence relation is said to be effective if it arises as a kernel pair. An equivalence relation is effective if and only if it has a coequalizer and it is the kernel pair of this. A regular category is said to be exact, or exact in the sense of Barr, or effective regular, if every equivalence relation is effective. (Note that the term "exact category" is also used differently, for the exact categories in the sense of Quillen.) Examples of exact categories The category of sets is exact in this sense, and so is any (elementary) topos. Every equivalence relation has a coequalizer, which is found by taking equivalence classes. Every abelian category is exact. Every category that is monadic over the category of sets is exact. The category of Stone spaces is regular, but not exact. See also Allegory (category theory) Topos Exact completion References Categories in category theory
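As an illustrative addition (not part of the original article), the kernel-pair and image constructions discussed above can be spelled out diagrammatically. The following self-contained LaTeX fragment, using only standard amsmath/amssymb commands, displays the pullback square defining the kernel pair of f and the regular epi–mono factorization of f.

```latex
% Illustrative addition: the kernel pair of f as a pullback square, and the
% regular epi-mono factorization of f.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\[
\begin{array}{ccc}
X \times_Y X & \xrightarrow{\;p_1\;} & X \\[2pt]
{\scriptstyle p_0}\big\downarrow & & \big\downarrow{\scriptstyle f} \\[2pt]
X & \xrightarrow{\;f\;} & Y
\end{array}
\qquad
f = \bigl( X \overset{e}{\twoheadrightarrow} E \overset{m}{\hookrightarrow} Y \bigr),
\quad \text{$e$ a regular epimorphism, $m$ a monomorphism.}
\]
\end{document}
```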
Regular category
[ "Mathematics" ]
1,131
[ "Mathematical structures", "Category theory", "Categories in category theory" ]
9,790,951
https://en.wikipedia.org/wiki/Hydraulic%20pump
A hydraulic pump is a mechanical source of power that converts mechanical power into hydraulic energy (hydrostatic energy i.e. flow, pressure). Hydraulic pumps are used in hydraulic drive systems and can be hydrostatic or hydrodynamic. They generate flow with enough power to overcome pressure induced by a load at the pump outlet. When a hydraulic pump operates, it creates a vacuum at the pump inlet, which forces liquid from the reservoir into the inlet line to the pump and by mechanical action delivers this liquid to the pump outlet and forces it into the hydraulic system. Hydrostatic pumps are positive displacement pumps while hydrodynamic pumps can be fixed displacement pumps, in which the displacement (flow through the pump per rotation of the pump) cannot be adjusted, or variable displacement pumps, which have a more complicated construction that allows the displacement to be adjusted. Hydrodynamic pumps are more frequent in day-to-day life. Hydrostatic pumps of various types all work on the principle of Pascal's law. Types of hydraulic pump Gear pumps Gear pumps (with external teeth) (fixed displacement) are simple and economical pumps. The swept volume or displacement of gear pumps for hydraulics will be between about 1 to 200 milliliters. They have the lowest volumetric efficiency ( ) of all three basic pump types (gear, vane and piston pumps) These pumps create pressure through the meshing of the gear teeth, which forces fluid around the gears to pressurize the outlet side. Some gear pumps can be quite noisy, compared to other types, but modern gear pumps are highly reliable and much quieter than older models. This is in part due to designs incorporating split gears, helical gear teeth and higher precision/quality tooth profiles that mesh and unmesh more smoothly, reducing pressure ripple and related detrimental problems. Another positive attribute of the gear pump, is that catastrophic breakdown is a lot less common than in most other types of hydraulic pumps. This is because the gears gradually wear down the housing and/or main bushings, reducing the volumetric efficiency of the pump gradually until it is all but useless. This often happens long before wear and causes the unit to seize or break down. Hydraulic gear pumps are used in various applications where there are different requirements such as lifting, lowering, opening, closing, or rotating, and they are expected to be safe and long-lasting. Rotary vane pumps A rotary vane pump is a positive-displacement pump that consists of vanes mounted to a rotor that rotates inside a cavity. In some cases these vanes can have variable length and/or be tensioned to maintain contact with the walls as the pump rotates. A critical element in vane pump design is how the vanes are pushed into contact with the pump housing, and how the vane tips are machined at this very point. Several type of "lip" designs are used, and the main objective is to provide a tight seal between the inside of the housing and the vane, and at the same time to minimize wear and metal-to-metal contact. Forcing the vane out of the rotating centre and towards the pump housing is accomplished using spring-loaded vanes, or more traditionally, vanes loaded hydrodynamically (via the pressurized system fluid). Screw pumps Screw pumps (fixed displacement) consist of two Archimedes' screws that intermesh and are enclosed within the same chamber. These pumps are used for high flows at relatively low pressure (max ). 
They were used on board ships where a constant pressure hydraulic system extended through the whole ship, especially to control ball valves but also to help drive the steering gear and other systems. The advantage of the screw pumps is the low sound level of these pumps; however, the efficiency is not high. The major problem of screw pumps is that the hydraulic reaction force is transmitted in a direction that's axially opposed to the direction of the flow. There are two ways to overcome this problem: put a thrust bearing beneath each rotor; create a hydraulic balance by directing a hydraulic force to a piston under the rotor. Types of screw pumps: single end double end single rotor multi rotor timed multi rotor untimed. Bent axis pumps Bent axis pumps, axial piston pumps and motors using the bent axis principle, fixed or adjustable displacement, exist in two different basic designs. The Thoma-principle (engineer Hans Thoma, Germany, patent 1935) with max 25 degrees angle and the Wahlmark-principle (Gunnar Axel Wahlmark, patent 1960) with spherical-shaped pistons in one piece with the piston rod, piston rings, and maximum 40 degrees between the driveshaft centerline and pistons (Volvo Hydraulics Co.). These have the best efficiency of all pumps. Although in general, the largest displacements are approximately one litre per revolution, if necessary a two-liter swept volume pump can be built. Often variable-displacement pumps are used so that the oil flow can be adjusted carefully. These pumps can in general work with a working pressure of up to 350–420 bars in continuous work. Inline axial piston pumps By using different compensation techniques, the variable displacement type of these pumps can continuously alter fluid discharge per revolution and system pressure based on load requirements, maximum pressure cut-off settings, horsepower/ratio control, and even fully electro proportional systems, requiring no other input than electrical signals. This makes them potentially hugely power saving compared to other constant flow pumps in systems where prime mover/diesel/electric motor rotational speed is constant and required fluid flow is non-constant. Radial piston pumps A radial piston pump is a form of hydraulic pump. The working pistons extend in a radial direction symmetrically around the drive shaft, in contrast to the axial piston pump. Hydraulic pumps, calculation formulas Flow $Q = n\,V\,\eta_{vol}$ where $Q$, flow (m³/s), $n$, stroke frequency (Hz), $V$, stroked volume (m³), $\eta_{vol}$, volumetric efficiency. Power $P = \frac{n\,V\,\Delta p}{\eta_{mh}}$ where $P$, power (W), $n$, stroke frequency (Hz), $V$, stroked volume (m³), $\Delta p$, pressure difference over pump (Pa), $\eta_{mh}$, mechanical/hydraulic efficiency. Mechanical efficiency $\eta_{mech} = \frac{T_{theoretical}}{T_{actual}} \times 100\%$ where $\eta_{mech}$, mechanical pump efficiency percent, $T_{theoretical}$, theoretical torque to drive, $T_{actual}$, actual torque to drive. Hydraulic efficiency $\eta_{hyd} = \frac{Q_{actual}}{Q_{theoretical}}$ where $\eta_{hyd}$, hydraulic pump efficiency, $Q_{theoretical}$, theoretical flow rate output, $Q_{actual}$, actual flow rate output. References External links External gear pump description Internal gear pump description Pumps Pump, Hydraulic
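To make the calculation formulas above concrete, here is a small Python sketch (an illustrative addition, not from the article). The shaft speed, displacement, pressure difference and efficiencies are made-up example values for an assumed axial piston pump; the two functions mirror the flow and drive-power expressions given above.

```python
# Illustrative sketch: evaluating the pump formulas Q = n*V*eta_vol and
# P = n*V*dp/eta_mh for assumed example values.
def flow_m3_per_s(n_hz, displacement_m3, eta_vol):
    """Delivered flow Q = n * V * eta_vol."""
    return n_hz * displacement_m3 * eta_vol

def drive_power_w(n_hz, displacement_m3, delta_p_pa, eta_mech_hyd):
    """Mechanical input power P = n * V * dp / eta_mh."""
    return n_hz * displacement_m3 * delta_p_pa / eta_mech_hyd

n = 1500 / 60          # shaft speed: 1500 rpm expressed in Hz
V = 45e-6              # displacement: 45 cm^3 per revolution, in m^3
dp = 250e5             # pressure difference: 250 bar, in Pa
eta_vol, eta_mh = 0.95, 0.90

Q = flow_m3_per_s(n, V, eta_vol)
P = drive_power_w(n, V, dp, eta_mh)
print(f"Q = {Q * 60_000:.1f} L/min, drive power = {P / 1000:.1f} kW")
```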
Hydraulic pump
[ "Physics", "Chemistry" ]
1,270
[ "Pumps", "Turbomachinery", "Physical systems", "Hydraulics", "Fluid dynamics" ]
9,791,475
https://en.wikipedia.org/wiki/Ornstein%E2%80%93Zernike%20equation
In statistical mechanics the Ornstein–Zernike (OZ) equation is an integral equation introduced by Leonard Ornstein and Frits Zernike that relates different correlation functions with each other. Together with a closure relation, it is used to compute the structure factor and thermodynamic state functions of amorphous matter like liquids or colloids. Context The OZ equation has practical importance as a foundation for approximations for computing the pair correlation function of molecules or ions in liquids, or of colloidal particles. The pair correlation function is related via Fourier transform to the static structure factor, which can be determined experimentally using X-ray diffraction or neutron diffraction. The OZ equation relates the pair correlation function to the direct correlation function. The direct correlation function is only used in connection with the OZ equation, which can actually be seen as its definition. Besides the OZ equation, other methods for the computation of the pair correlation function include the virial expansion at low densities, and the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) hierarchy. Any of these methods must be combined with a physical approximation: truncation in the case of the virial expansion, a closure relation for OZ or BBGKY. The equation To keep notation simple, we only consider homogeneous fluids. Thus the pair correlation function only depends on distance, and therefore is also called the radial distribution function. It can be written $g(\mathbf{r}_1, \mathbf{r}_2) = g(\mathbf{r}_2 - \mathbf{r}_1) = g(|\mathbf{r}_2 - \mathbf{r}_1|) \equiv g(r_{12}) \equiv g(r)$, where the first equality comes from homogeneity, the second from isotropy, and the equivalences introduce new notation. It is convenient to define the total correlation function as: $h(r_{12}) = g(r_{12}) - 1$, which expresses the influence of molecule 1 on molecule 2 at distance $r_{12}$. The OZ equation splits this influence into two contributions, a direct and indirect one. The direct contribution defines the direct correlation function, $c(r_{12})$. The indirect part is due to the influence of molecule 1 on a third, labeled molecule 3, which in turn affects molecule 2, directly and indirectly. This indirect effect is weighted by the density and averaged over all the possible positions of molecule 3. This splitting is expressed by the Ornstein–Zernike equation $h(r_{12}) = c(r_{12}) + \rho \int \mathrm{d}\mathbf{r}_3\, c(r_{13})\, h(r_{32})$. By eliminating the indirect influence, $c(r)$ is shorter-ranged than $h(r)$ and can be more easily modelled and approximated. The radius of $c(r)$ is determined by the radius of intermolecular forces, whereas the radius of $h(r)$ is of the order of the correlation length. Fourier transform The integral in the OZ equation is a convolution. Therefore, the OZ equation can be resolved by Fourier transform. If we denote the Fourier transforms of $h(r)$ and $c(r)$ by $\hat{h}(k)$ and $\hat{c}(k)$, respectively, and use the convolution theorem, we obtain $\hat{h}(k) = \hat{c}(k) + \rho\,\hat{c}(k)\,\hat{h}(k)$, which yields $\hat{h}(k) = \frac{\hat{c}(k)}{1 - \rho\,\hat{c}(k)}$. Closure relations As both functions, $h(r)$ and $c(r)$, are unknown, one needs an additional equation, known as a closure relation. While the OZ equation is purely formal, the closure must introduce some physically motivated approximation. In the low-density limit, the pair correlation function is given by the Boltzmann factor, $g(r) = \mathrm{e}^{-\beta u(r)}$, with $\beta = 1/k_\mathrm{B}T$ and with the pair potential $u(r)$. Closure relations for higher densities modify this simple relation in different ways. The best known closure approximations are: The Percus–Yevick approximation for particles with impenetrable ("hard") core, the hypernetted-chain approximation, for particles with soft cores and attractive potential tails, the mean spherical approximation, the Rogers-Young approximation. 
The latter two interpolate in different ways between the former two, and thereby achieve a satisfactory description of particles that have a hard core and attractive forces. See also Hypernetted-chain equation – closure relation Percus–Yevick approximation – closure relation References External links Statistical mechanics Integral equations
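As an illustrative aside (not from the article), the Fourier-space solution of the OZ equation can be demonstrated numerically. The sketch below assumes a toy direct correlation function — the low-density hard-sphere form c(r) = −1 for r < σ — purely as a stand-in; any closure-derived c(r) could be substituted. It computes ĉ(k) with a radial Fourier transform, applies ĥ = ĉ/(1 − ρĉ), and forms the structure factor S(k) = 1 + ρĥ(k).

```python
# Illustrative sketch: solving the OZ relation in Fourier space for a toy c(r).
import numpy as np

sigma, rho = 1.0, 0.3                      # hard-sphere diameter and number density (assumed)
r = np.linspace(1e-6, 20.0, 4000)          # radial grid
c_r = np.where(r < sigma, -1.0, 0.0)       # toy direct correlation function

k = np.linspace(0.05, 20.0, 400)
# 3D radial Fourier transform: c_hat(k) = (4*pi/k) * integral of r c(r) sin(kr) dr
c_hat = np.array([4.0 * np.pi / kk * np.trapz(r * c_r * np.sin(kk * r), r) for kk in k])

h_hat = c_hat / (1.0 - rho * c_hat)        # OZ equation in Fourier space
S_k = 1.0 + rho * h_hat                    # static structure factor S(k) = 1 + rho*h_hat(k)

print("maximum of S(k) on this grid:", S_k.max())
```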
Ornstein–Zernike equation
[ "Physics", "Mathematics" ]
722
[ "Statistical mechanics", "Mathematical objects", "Integral equations", "Equations" ]
9,791,594
https://en.wikipedia.org/wiki/Lewis%20Paul
Lewis Paul (died 1759) was the original inventor of roller spinning, the basis of the water frame for spinning cotton in a cotton mill. Life and work Lewis Paul was of Huguenot descent. His father was physician to Lord Shaftesbury. He may have begun work on designing a spinning machine for cotton as early as 1729, but probably did not make practical progress until after 1732 when he met John Wyatt, a carpenter then working in Birmingham for a gun barrel forger. Wyatt had designed a machine, probably for cutting files, in which Paul took an interest. Roller spinning was certainly Paul's idea, and Wyatt built a machine (or model) for him. Paul obtained a patent for this on 24 June 1738. He then set about trying to license his machine, though some licences were granted in satisfaction of debts. In 1741, he set up a machine powered by two asses in the Upper Priory in Birmingham, near his house in Old Square. Mills using the roller spinning patent Edward Cave, a publisher, obtained a licence and set up machines in a warehouse in London. In 1742, he acquired Marvel's Mill on the River Nene at Northampton. He rebuilt the mill to hold four or five water-powered spinning machines, each with 50 spindles. This was thus the first cotton mill. Cave died on 10 January 1754, so that the mill passed to his brother William and his nephew Paul. Samuel Touchet, a London merchant had the mill until 1755, but made no profit. It may then have been let to Lewis Paul, but he died in 1759. The Caves forfeited the lease for non-payment of rent in March 1761 and advertised the mill to let in November 1761. By 1768, the mill had reverted to being a corn mill. Another mill that operated under Paul's patent was at Leominster. This was built in 1744 by John Bourn in partnership with Henry Morris of Lancashire. The mill burnt down in November 1754. Carding machines In 1748, Daniel Bourn and Lewis Paul separately obtained patents for carding machines, which were presumably used in the Leominster and Northampton mills respectively. This carding technology of Lewis Paul and Daniel Bourn seems to be the basis of later carding machines. Achievements The principle of his rolling spinning process was perfected by John Kay and Thomas Highs and promoted by Richard Arkwright. Paul's machine seems only to have been modestly profitable, and it is not clear to what extent his work is reflected in Arkwright's much more successful machine, the water frame, patented in 1769. Like Paul and Bourn, Arkwright subsequently added a carding stage to his machinery, but his use of this as a means of continuing his patent rights beyond the expiry of his original patent failed, because the improvement was not his invention. Bibliography A. P. Wadsworth and J. de L. Mann, The Cotton Industry and Industrial Lancashire (Manchester University Press 1931), 419-448. D. L. Bates, 'Cotton-spinning in Northampton: Edward Cave's Mill' Northamptonshire Past and Present IX(3) (1996), 237-51. W. English, The Textile Industry (Longmans, London 1931), 80-2. R. B. Prosser, ‘Paul, Lewis (d. 1759)’, rev. Gillian Cookson, Oxford Dictionary of National Biography, Oxford University Press, 2004, accessed 25 Feb 2008 References External links English inventors Textile engineering Textile workers Year of birth unknown 1759 deaths Industrial Revolution in England People of the Industrial Revolution Spinning 18th-century British engineers
Lewis Paul
[ "Physics", "Engineering" ]
732
[ "Applied and interdisciplinary physics", "Textile engineering" ]
9,792,297
https://en.wikipedia.org/wiki/Solder%20form
In mathematics, more precisely in differential geometry, a soldering (or sometimes solder form) of a fiber bundle to a smooth manifold is a manner of attaching the fibers to the manifold in such a way that they can be regarded as tangent. Intuitively, soldering expresses in abstract terms the idea that a manifold may have a point of contact with a certain model Klein geometry at each point. In extrinsic differential geometry, the soldering is simply expressed by the tangency of the model space to the manifold. In intrinsic geometry, other techniques are needed to express it. Soldering was introduced in this general form by Charles Ehresmann in 1950. Soldering of a fibre bundle Let M be a smooth manifold, and G a Lie group, and let E be a smooth fibre bundle over M with structure group G. Suppose that G acts transitively on the typical fibre F of E, and that dim F = dim M. A soldering of E to M consists of the following data: A distinguished section o : M → E. A linear isomorphism of vector bundles θ : TM → o*VE from the tangent bundle of M to the pullback of the vertical bundle of E along the distinguished section. In particular, this latter condition can be interpreted as saying that θ determines a linear isomorphism from the tangent space of M at x to the (vertical) tangent space of the fibre at the point determined by the distinguished section. The form θ is called the solder form for the soldering. Special cases By convention, whenever the choice of soldering is unique or canonically determined, the solder form is called the canonical form, or the tautological form. Affine bundles and vector bundles Suppose that E is an affine vector bundle (a vector bundle without a choice of zero section). Then a soldering on E specifies first a distinguished section: that is, a choice of zero section o, so that E may be identified as a vector bundle. The solder form is then a linear isomorphism However, for a vector bundle there is a canonical isomorphism between the vertical space at the origin and the fibre VoE ≈ E. Making this identification, the solder form is specified by a linear isomorphism In other words, a soldering on an affine bundle E is a choice of isomorphism of E with the tangent bundle of M. Often one speaks of a solder form on a vector bundle, where it is understood a priori that the distinguished section of the soldering is the zero section of the bundle. In this case, the structure group of the vector bundle is often implicitly enlarged by the semidirect product of GL(n) with the typical fibre of E (which is a representation of GL(n)). Examples As a special case, for instance, the tangent bundle itself carries a canonical solder form, namely the identity. If M has a Riemannian metric (or pseudo-Riemannian metric), then the covariant metric tensor gives an isomorphism from the tangent bundle to the cotangent bundle, which is a solder form. In Hamiltonian mechanics, the solder form is known as the tautological one-form, or alternately as the Liouville one-form, the Poincaré one-form, the canonical one-form, or the symplectic potential. Consider the Mobius strip as a fiber bundle over the circle. The vertical bundle o*VE is still a Mobius strip, while the tangent bundle TM is the cylinder, so there is no solder form for this. Applications A solder form on a vector bundle allows one to define the torsion and contorsion tensors of a connection. 
Solder forms occur in the sigma model, where they glue together the tangent space of a spacetime manifold to the tangent space of the field manifold. Vierbeins, or tetrads in general relativity, look like solder forms, in that they glue together coordinate charts on the spacetime manifold, to the preferred, usually orthonormal basis on the tangent space, where calculations can be considerably simplified. That is, the coordinate charts are the in the definitions above, and the frame field is the vertical bundle . In the sigma model, the vierbeins are explicitly the solder forms. Principal bundles In the language of principal bundles, a solder form on a smooth principal G-bundle P over a smooth manifold M is a horizontal and G-equivariant differential 1-form on P with values in a linear representation V of G such that the associated bundle map from the tangent bundle TM to the associated bundle P×G V is a bundle isomorphism. (In particular, V and M must have the same dimension.) A motivating example of a solder form is the tautological or fundamental form on the frame bundle of a manifold. The reason for the name is that a solder form solders (or attaches) the abstract principal bundle to the manifold M by identifying an associated bundle with the tangent bundle. Solder forms provide a method for studying G-structures and are important in the theory of Cartan connections. The terminology and approach is particularly popular in the physics literature. Notes References Differential forms Fiber bundles
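As an illustrative addition (not part of the original text), the two standard examples mentioned above — the tautological one-form of Hamiltonian mechanics and the fundamental (solder) form on the frame bundle — can be written out explicitly. The LaTeX fragment below is self-contained and uses only standard amsmath/amssymb notation.

```latex
% Illustrative addition: explicit formulas for two canonical solder forms.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\[
\theta = \sum_i p_i \, dq^i
\qquad \text{(tautological one-form on } T^*M \text{, in canonical coordinates),}
\]
\[
\theta_u(X) = u^{-1}\bigl(d\pi_u(X)\bigr), \qquad u \in \mathrm{F}M,\; X \in T_u \mathrm{F}M,
\]
where $u$ is viewed as a linear isomorphism $u : \mathbb{R}^n \to T_{\pi(u)}M$ and
$\pi : \mathrm{F}M \to M$ is the frame-bundle projection.
\end{document}
```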
Solder form
[ "Engineering" ]
1,079
[ "Tensors", "Differential forms" ]
9,793,263
https://en.wikipedia.org/wiki/Covariance%20function
In probability theory and statistics, the covariance function describes how much two random variables change together (their covariance) with varying spatial or temporal separation. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x, y) gives the covariance of the values of the random field at the two locations x and y: $C(x,y) = \operatorname{cov}\big(Z(x), Z(y)\big) = \mathbb{E}\big[(Z(x) - \mathbb{E}[Z(x)])(Z(y) - \mathbb{E}[Z(y)])\big]$. The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that x and y refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(Z(x1), Y(x2))). Admissibility For locations x1, x2, ..., xN ∈ D the variance of every linear combination $X = \sum_{i=1}^N w_i Z(x_i)$ can be computed as $\operatorname{var}(X) = \sum_{i=1}^N \sum_{j=1}^N w_i w_j C(x_i, x_j)$. A function is a valid covariance function if and only if this variance is non-negative for all possible choices of N and weights w1, ..., wN. A function with this property is called positive semidefinite. Simplifications with stationarity In case of a weakly stationary random field, where $C(x_i, x_j) = C(x_i + h, x_j + h)$ for any lag h, the covariance function can be represented by a one-parameter function $C_s(h) = C(0, h)$ which is called a covariogram and also a covariance function. Implicitly the C(xi, xj) can be computed from Cs(h) by: $C(x_i, x_j) = C_s(x_i - x_j)$. The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem. Parametric families of covariance functions For a given variance $\sigma^2$, a simple stationary parametric covariance function is the "exponential covariance function" $C(d) = \sigma^2 \exp(-d/V)$, where V is a scaling parameter (correlation length), and d = d(x,y) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential" (or "Gaussian") covariance function: $C(d) = \sigma^2 \exp\!\big(-(d/V)^2\big)$ is a stationary covariance function with smooth sample paths. The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions. The Matérn family includes the exponential and squared exponential covariance functions as special cases. See also Autocorrelation function Correlation function Covariance matrix Kriging Positive-definite kernel Random field Stochastic process Variogram References Geostatistics Spatial analysis Covariance and correlation
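The admissibility condition and the exponential family described above are easy to check numerically. The following Python sketch is an illustrative addition (not from the article): it builds the covariance matrix of an exponential covariance function on a set of one-dimensional locations, verifies positive semidefiniteness through the eigenvalues, and evaluates the variance of an arbitrary linear combination; the values of σ², V and the grid are assumed examples.

```python
# Illustrative sketch: exponential covariance matrix and an admissibility check.
import numpy as np

def exponential_cov(x, y, sigma2=1.0, V=2.0):
    """C(x, y) = sigma^2 * exp(-|x - y| / V)."""
    return sigma2 * np.exp(-np.abs(x - y) / V)

locations = np.linspace(0.0, 10.0, 50)
C = exponential_cov(locations[:, None], locations[None, :])   # 50 x 50 covariance matrix

eigenvalues = np.linalg.eigvalsh(C)
print("smallest eigenvalue:", eigenvalues.min())   # >= 0 (up to round-off): valid covariance

# Variance of the linear combination sum_i w_i Z(x_i) is w^T C w, hence non-negative.
w = np.random.default_rng(1).normal(size=locations.size)
print("variance of the linear combination:", w @ C @ w)
```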
Covariance function
[ "Physics" ]
555
[ "Spacetime", "Space", "Spatial analysis" ]
5,793,570
https://en.wikipedia.org/wiki/AP%20Physics%20C%3A%20Electricity%20and%20Magnetism
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to serve as a proxy for a second-semester calculus-based university course in electricity and magnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. History Before 1973, the topics of AP Physics C: Electricity and Magnetism were covered in a singular AP Physics C exam, which included mechanics, electricity, magnetism, optics, fluids, and modern physics. In 1973, this exam was discontinued, and two new exams were created, which each covered Newtonian mechanics and electromagnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. This was changed, so now test-takers have to pay twice to take both parts of the AP Physics C test. Before the 2024–25 school year, the multiple choice and free response section were each allotted 45 minutes, with 35 questions for the former and 3 questions for the latter. This made AP Physics C: Electricity and Magnetism, along with Mechanics, the shortest exams offered by the College Board. Unlike other exams, the AP Physics C exams also had 5 options that test-takers could choose from rather than the typical 4. This was changed in an announcement made by College Board in the February 2024 regarding changes to their AP Physics courses for the 2024–25 school year onward, which explained that the multiple choice sections would have 40 questions and the free response sections would have 4 questions. To compensate, College Board allotted 80 minutes for the multiple choice section and 100 minutes for the free response section, making the exams as long as the ones for AP Physics 1 and AP Physics 2. Curriculum E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers additional topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. Starting in the 2024–25 school year, all units in AP Physics C: Electricity and Magnetism are numbered sequentially after the 7 units in AP Physics C: Mechanics. This starts with Electric Charges, Fields, and Gauss's Law as unit 8 and ends with Electromagnetic Induction as unit 13. Exam The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Science Practices Assessed Multiple Choice and Free Response Sections of the AP Physics C: Electricity and Magnetism exam are also assessed on scientific practices. Below are tables representing the practices assessed and their weighting for both parts of the exam Grade distribution The grade distributions for the Physics C: Electricity and Magnetism scores since 2010 were: See also Physics Glossary of physics References External links College Board Course Description: Physics Advanced Placement Physics education Standardized tests
AP Physics C: Electricity and Magnetism
[ "Physics" ]
690
[ "Applied and interdisciplinary physics", "Physics education" ]
5,793,608
https://en.wikipedia.org/wiki/Railway%20Technical%20Research%20Institute
, or , is the technical research company under the Japan Railways group of companies. Overview RTRI was established in its current form in 1986 just before Japanese National Railways (JNR) was privatised and split into separate JR group companies. It conducts research on everything related to trains, railways and their operation. It is funded by the government and private rail companies. It works both on developing new railway technology, such as magnetic levitation, and on improving the safety and economy of current technology. Its research areas include earthquake detection and alarm systems, obstacle detection on level crossings, improving adhesion between train wheels and tracks, reducing energy usage, noise barriers and preventing vibrations. RTRI is the main developer in the Japanese SCMaglev program. Offices and test facilities Main office 844 Shin-Kokusai Bldg. 3-4-1 Marunouchi, Chiyoda-ku, Tokyo 100-0005, Japan Research facilities Kunitachi Institute - 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo, 185-8540, Japan Wind Tunnel Technical Center, Maibara, Shiga Shiozawa Snow Testing Station, Minami-Uonuma, Niigata Hino Civil Engineering Testing Station, Hino, Tokyo Gatsugi Anti-Salt Testing Station, Sanpoku, Niigata Gauge Change Train The RTRI is developing a variable gauge system, called the "Gauge Change Train", to allow Shinkansen trains to access lines of the original rail network. Publications Japan Railway & Technical Review Quarterly Report of RTRI - Print: Online: See also British Rail Research Division German Centre for Rail Traffic Research Hydrail References External links Organizations established in 1986 1986 establishments in Japan Rail transport organizations based in Japan Organizations based in Tokyo Kokubunji, Tokyo Railway infrastructure companies Engineering research institutes Japan Railway companies Government-owned railway companies
Railway Technical Research Institute
[ "Engineering" ]
385
[ "Engineering research institutes" ]
5,794,146
https://en.wikipedia.org/wiki/Index%20of%20structural%20engineering%20articles
This is an alphabetical list of articles pertaining specifically to structural engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of engineers. A A-frame – Aerodynamics – Aeroelasticity – Air-supported structure – Airframe – Aluminium – Analytical method – Angular frequency – Angular speed – Architecture – Architectural engineering – Arch – Arch bridge B Base isolation – Beam – Beam axle – Bending – Bifurcation theory – Biomechanics – Boat Building – Body-on-frame – Box girder bridge – Box truss – Bridge engineering – Buckling – Building – Building construction – Building engineering C Cable – Cable-stayed bridge – Cantilever – Cantilever bridge – Carbon-fiber-reinforced polymer – Casing – Casting – Catastrophic failure – Center of mass – Chaos theory – Chassis – Chimneys – Coachwork – Coefficient of thermal expansion – Coil spring – Columns – Composite material – Composite structure – Compression – Compressive stress – Concrete – Concrete cover – Construction – Construction engineering – Construction management – Continuum mechanics – Corrosion – Crane – Creep – Crumple zone – Curvature D Dam – Damper – Damping ratio – Dead and live loads – Deflection – Deformation – Direct stiffness method – Dome – Double wishbone suspension – Duhamel's integral – Dynamical system – Dynamics E Earthquake – Earthquake engineering – Earthquake engineering research – Earthquake engineering structures – Earthquake loss – Earthquake performance evaluation – Earthquake simulation – Elasticity theory – Elasticity – Energy principles in structural mechanics – Engineering mechanics – Euler method – Euler–Bernoulli beam equation F Falsework – Fatigue – Fibre reinforced plastic – Finite element analysis – Finite element method – Finite element method in structural mechanics – Fire safety – Fire protection – Fire protection engineering – First moment of area – Flexibility method – Floating raft system – Floor – Fluid mechanics – Footbridges – Force – Formwork – Foundation engineering – Fracture – Fracture mechanics – Frame – Frequency – Fuselage G Girder – Grout H Hoist – Hollow structural section – Hooke's law – Hull – Hurricane-proof building – Hyperboloid structure I Institution of Structural Engineers J Joint K L Lattice tower – Lever – Leaf spring – Limit state design – Linear elasticity – Linear system – Linkage – Live axle – Load – Load factor M MacPherson strut – Masonry – Mast – Material science – Modulus of elasticity – Mohr–Coulomb theory – Monocoque – Moment – Moment distribution – Moment of inertia – Mortar – Moulding N Newton method – Newtonian mechanics – Non-linear system – Numerical analysis – Non-persistent joint O Offshore engineering – Oscillation P Permissible stress design – Pile – Plastic analysis – Plastic bending – plasticity – Poisson's ratio – Portland cement – Portal frame – Precast concrete – Prestressed concrete – Pressure vessel Q R Radius of gyration – Ready-mix concrete – Rebar – Reinforced concrete – Response spectrum – Retaining wall – Rigid frame – Rotation S Second moment of area – Seismic analysis – Seismic loading – Seismic performance – Seismic retrofit – Seismic risk – Shear – Shear flow – Shear modulus – Shear strain – Shear strength – Shear stress – Shear wall – Shipbuilding – Ship Construction – Shock absorbers – Shotcrete – Shrinkage – Simple machine – Skyscraper – Slab – Solid mechanics – Space frame – Statics – Statically determinate – Statically indeterminate – Statistical method – Steel – Stiffness 
– Strand jack – Strength of materials – Stress analysis – Stress–strain curve – Strut – Strut bar – Structural analysis – Structural design – Structural dynamics – Structural failure – Structural health monitoring – Structural load – Structural mechanics – Structural steel – Structural system – Subframe – Superleggera – Suspension (disambiguation page) – Suspension bridge T Tall building – Tensile architecture – Tensile strength – Tensile stress – Tensile structure – Tension – Timber – Timber framing – Thermal conductivity – Thermal shock – Thermodynamics – Thermoplastic – Truss – Truss bridge – Torsion – Torsion beam suspension – Torsion box – Tower – Tubular bridge – Tuned mass damper U Unit dummy force method – Unsprung weight V Vehicle dynamics – Vessel – Very large floating structures – Vibration – Vibration control – Virtual work W Wall – Wear – Wedge – Welding – Wheel and axle X Y Yield strength – Young's modulus Z Structural engineering Structural engineering Structural engineering topics
Index of structural engineering articles
[ "Engineering" ]
905
[ "Structural engineering", "Civil engineering", "Construction" ]
9,082,922
https://en.wikipedia.org/wiki/Dielectric%20elastomers
Dielectric elastomers (DEs) are smart material systems that produce large strains and are promising for Soft robotics, Artificial muscle, etc. They belong to the group of electroactive polymers (EAP). DE actuators (DEA) transform electric energy into mechanical work and vice versa. Thus, they can be used as both actuators, sensors, and energy-harvesting devices. They have high elastic energy density and fast response due to being lightweight, highly stretchable, and operating under the electrostatic principle. They have been investigated since the late 1990s. Many prototype applications exist. Every year, conferences are held in the US and Europe. Working principles A DEA is a compliant capacitor (see image), where a passive elastomer film is sandwiched between two compliant electrodes. When a voltage is applied, the electrostatic pressure arising from the Coulomb forces acts between the electrodes. The electrodes squeeze the elastomer film. The equivalent electromechanical pressure is twice the electrostatic pressure and is given by: where is the vacuum permittivity, is the dielectric constant of the polymer and is the thickness of the elastomer film in the current state (during deformation). Usually, strains of DEA are in the order of 10–35%, maximum values reach 300% (the acrylic elastomer VHB 4910, commercially available from 3M, which also supports a high elastic energy density and a high electrical breakdown strength.) Ionic Replacing the electrodes with soft hydrogels allows ionic transport to replace electron transport. Aqueous ionic hydrogels can deliver potentials of multiple kilovolts, despite the onset of electrolysis at below 1.5 V. The difference between the capacitance of the double layer and the dielectric leads to a potential across the dielectric that can be millions of times greater than that across the double layer. Potentials in the kilovolt range can be realized without electrochemically degrading the hydrogel. Deformations are well controlled, reversible, and capable of high-frequency operation. The resulting devices can be perfectly transparent. High-frequency actuation is possible. Switching speeds are limited only by mechanical inertia. The hydrogel's stiffness can be thousands of times smaller than the dielectric's, allowing actuation without mechanical constraint across a range of nearly 100% at millisecond speeds. They can be biocompatible. Remaining issues include drying of the hydrogels, ionic build-up, hysteresis, and electrical shorting. Early experiments in semiconductor device research relied on ionic conductors to investigate field modulation of contact potentials in silicon and to enable the first solid-state amplifiers. Work since 2000 has established the utility of electrolyte gate electrodes. Ionic gels can also serve as elements of high-performance, stretchable graphene transistors. Materials Films of carbon powder or grease loaded with carbon black were early choices as electrodes for the DEAs. Such materials have poor reliability and are not available with established manufacturing techniques. Improved characteristics can be achieved with liquid metal, sheets of graphene, coatings of carbon nanotubes, surface-implanted layers of metallic nanoclusters and corrugated or patterned metal films. These options offer limited mechanical properties, sheet resistances, switching times and easy integration. Silicones and acrylic elastomers are other alternatives. 
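The equivalent electromechanical pressure referred to in the Working principles section above is commonly written p = ε0·εr·(V/z)², i.e. twice the electrostatic pressure ½ε0·εr·(V/z)². Because the formula itself is not rendered in the text, the symbols here and the material values below are assumptions for illustration only:

# Illustrative sketch of the equivalent electromechanical (Maxwell) pressure
# p = eps0 * eps_r * (V/z)^2 acting on a dielectric elastomer film.
# The material and drive values below are placeholders, not data from the article.
eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_r = 4.7               # assumed relative permittivity of the elastomer
V = 3000.0                # applied voltage, volts
z = 50e-6                 # current film thickness, metres

E = V / z                 # electric field across the film, V/m
p = eps0 * eps_r * E**2   # electromechanical pressure, Pa
print(f"field = {E:.3g} V/m, pressure = {p/1000:.1f} kPa")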
The requirements for an elastomer material are: The material should have low stiffness (especially when large strains are required); The dielectric constant should be high; The electrical breakdown strength should be high. Mechanically prestretching the elastomer film offers the possibility of enhancing the electrical breakdown strength. Further reasons for prestretching include: Film thickness decreases, requiring a lower voltage to obtain the same electrostatic pressure; Avoiding compressive stresses in the film plane directions. The elastomers show a visco-hyperelastic behavior. Models that describe large strains and viscoelasticity are required for the calculation of such actuators. Materials used in research include graphite powder, silicone oil / graphite mixtures, gold electrodes. The electrode should be conductive and compliant. Compliance is important so that the elastomer is not constrained mechanically when elongated. Films of polyacrylamide hydrogels formed with salt water can be laminated onto the dielectric surfaces, replacing electrodes. DEs based on silicone (PDMS) and natural rubber are promising research fields. Properties such as fast response times and efficiency are superior using natural rubber based DEs compared to VHB (acrylic elastomer) based DEs for strains under 15%. Instabilities in Dielectric elastomers Dielectric elastomer actuators are to be designed so as to avoid the phenomenon of dielectric breakdown in their whole course of motion. In addition to the dielectric breakdown, DEAs are susceptible to another failure mode, referred to as the electromechanical instability, which arises due to nonlinear interaction between the electrostatic and the mechanical restoring forces. In several cases, the electromechanical instability precedes the dielectric breakdown. The instability parameters (critical voltage and the corresponding maximum stretch) are dependent on several factors, such as the level of prestretch, temperature, and the deformation dependent permittivity. Additionally, they also depend on the voltage waveform used to drive the actuator. Configurations Configurations include: Framed/In-Plane actuators: A framed or in-plane actuator is an elastomeric film coated/printed with two electrodes. Typically a frame or support structure is mounted around the film. Examples are expanding circles and planars (single and multiple phase.) Cylindrical/Roll actuators: Coated elastomer films are rolled around an axis. By activation, a force and an elongation appear in the axial direction. The actuators can be rolled around a compression spring or without a core. Applications include artificial muscles (prosthetics), mini- and microrobots, and valves. Diaphragm actuators: A diaphragm actuator is made as a planar construction which is then biased in the z-axis to produce out of plane motion. Shell-like actuators: Planar elastomer films are coated at specific locations in the form of electrode segments. With a well-directed activation, the foils assume complex three-dimensional shapes. Examples may be utilized for propelling vehicles through air or water, e.g. for blimps. Stack actuators: Stacking planar actuators can increase deformation. Actuators that shorten under activation are good candidates. Thickness Mode Actuators: The force and stroke moves in the z-direction (out of plane). Thickness mode actuators are a typically a flat film that may stack layers to increase displacement. 
Bending actuators: The in-plane actuation of a dielectric elastomer (DE) based actuator is converted into out-of-plane actuation such as bending or folding using a unimorph configuration, where one or multiple layers of DE sheets are stacked on top of one layer of inactive substrate. Balloon actuators: A planar elastomer is attached to an air chamber and inflated with a constant volume of air; the stiffness of the elastomer can then be varied by applying an electrical load, resulting in voltage-controlled bulging of the elastomeric balloon. Applications Dielectric elastomers offer many potential applications and could replace many electromagnetic actuators, pneumatics and piezo actuators. A list of potential applications includes: References Further reading External links Smart Materials & Structures (EAP/AFC) program at Empa European Scientific Network for Artificial Muscles EuroEAP – International conference on Electromechanically Active Polymer (EAP) transducers & artificial muscles WorldWide Electroactive Polymer Actuators * Webhub: Yoseph Bar-Cohen's link compendium at JPL Danfoss PolyPower The Biomimetics Laboratory at The University of Auckland Dielectric Elastomer Stack Actuators (DESA) at Technische Universität Darmstadt PolyWEC EU Project: New mechanisms and concepts for exploiting electroactive Polymers for Wave Energy Conversion Smart materials Conductive polymers Polymer material properties nl:Smart material
Dielectric elastomers
[ "Chemistry", "Materials_science", "Engineering" ]
1,766
[ "Molecular electronics", "Materials science", "Polymer material properties", "Polymer chemistry", "Smart materials", "Conductive polymers" ]
9,083,935
https://en.wikipedia.org/wiki/Submarine%20earthquake
A submarine, undersea, or underwater earthquake is an earthquake that occurs underwater at the bottom of a body of water, especially an ocean. They are the leading cause of tsunamis. The magnitude can be measured scientifically by the use of the moment magnitude scale and the intensity can be assigned using the Mercalli intensity scale. Understanding plate tectonics helps to explain the cause of submarine earthquakes. The Earth's surface or lithosphere comprises tectonic plates which average approximately in thickness, and are continuously moving very slowly upon a bed of magma in the asthenosphere and inner mantle. The plates converge upon one another, and one subducts below the other, or, where there is only shear stress, move horizontally past each other (see transform plate boundary below). Little movements called fault creep are minor and not measurable. The plates meet with each other, and if rough spots cause the movement to stop at the edges, the motion of the plates continue. When the rough spots can no longer hold, the sudden release of the built-up motion releases, and the sudden movement under the sea floor causes a submarine earthquake. This area of slippage both horizontally and vertically is called the epicenter, and has the highest magnitude, and causes the greatest damage. As with a continental earthquake the severity of the damage is not often caused by the earthquake at the rift zone, but rather by events which are triggered by the earthquake. Where a continental earthquake will cause damage and loss of life on land from fires, damaged structures, and flying objects; a submarine earthquake alters the seabed, resulting in a series of waves, and depending on the length and magnitude of the earthquake, tsunami, which bear down on coastal cities causing property damage and loss of life. Submarine earthquakes can also damage submarine communications cables, leading to widespread disruption of the Internet and international telephone network in those areas. This is particularly common in Asia, where many submarine links cross submarine earthquake zones along Pacific Ring of Fire. Tectonic plate boundaries The different ways in which tectonic plates rub against each other under the ocean or sea floor to create submarine earthquakes. The type of friction created may be due to the characteristic of the geologic fault or the plate boundary as follows. Some of the main areas of large tsunami-producing submarine earthquakes are the Pacific Ring of Fire and the Great Sumatran fault. Convergent plate boundary The older, and denser plate moves below the lighter plate. The further down it moves, the hotter it becomes, until finally melting altogether at the asthenosphere and inner mantle and the crust is actually destroyed. The location where the two oceanic plates actually meet become deeper and deeper creating trenches with each successive action. There is an interplay of various densities of lithosphere rock, asthenosphere magma, cooling ocean water and plate movement for example the Pacific Ring of Fire. Therefore, the site of the sub oceanic trench will be a site of submarine earthquakes; for example the Mariana Trench, Puerto Rico Trench, and the volcanic arc along the Great Sumatran fault. Transform plate boundary A transform-fault boundary, or simply a transform boundary is where two plates will slide past each other, and the irregular pattern of their edges may catch on each other. 
The lithosphere is neither added to from the asthenosphere nor is it destroyed as in convergent plate action. For example, along the San Andreas Fault strike-slip fault zone, the Pacific plate has been moving along at about 5 cm/yr in a northwesterly direction, whereas the North American plate is moving south-easterly. Divergent plate boundary Rising convection currents occur where two plates are moving away from each other. In the gap, thus produced hot magma rises up, meets the cooler sea water, cools, and solidifies, attaching to either or both tectonic plate edges creating an oceanic spreading ridge. When the fissure again appears, again magma will rise up, and form new lithosphere crust. If the weakness between the two plates allows the heat and pressure of the asthenosphere to build over a large amount of time, a large quantity of magma will be released pushing up on the plate edges and the magma will solidify under the newly raised plate edges, see formation of a submarine volcano. If the fissure is able to come apart because of the two plates moving apart, in a sudden movement, an earthquake tremor may be felt for example at the Mid-Atlantic Ridge between North America and Africa. List of major submarine earthquakes The following is a list of some major submarine earthquakes since the 17th century. Storm-caused earthquakes A 2019 study based on new higher-resolution data from the Transportable Array network of USArray found that large ocean storms could create undersea earthquakes when they passed over certain areas of the ocean floor, including Georges Bank near Cape Cod and the Grand Banks of Newfoundland. They have also been observed in the Pacific Northwest. See also Cascadia subduction zone Fracture zone Geology List of plate tectonics topics List of tectonic plate interactions List of tectonic plates Metamorphism Plate tectonics Sedimentary basin Triple junction References Plate tectonics Tsunami Physical oceanography Types of earthquake
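The moment magnitude scale mentioned at the start of the article relates magnitude to the released seismic moment; a small sketch using the standard conversion (the moment value below is an arbitrary illustration, not data from the article):

# Moment magnitude from seismic moment (standard relation, M0 in newton-metres):
#   Mw = (2/3) * (log10(M0) - 9.1)
import math

M0 = 4.0e22                      # seismic moment in N*m (illustrative value)
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
print(f"Mw = {Mw:.1f}")          # about 9.0 for this moment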
Submarine earthquake
[ "Physics" ]
1,062
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
9,084,531
https://en.wikipedia.org/wiki/Sputter%20deposition
Sputter deposition is a physical vapor deposition (PVD) method of thin film deposition by the phenomenon of sputtering. This involves ejecting material from a "target" that is a source onto a "substrate" such as a silicon wafer. Resputtering is re-emission of the deposited material during the deposition process by ion or atom bombardment. Sputtered atoms ejected from the target have a wide energy distribution, typically up to tens of eV (100,000 K). The sputtered ions (typically only a small fraction of the ejected particles are ionized — on the order of 1 percent) can ballistically fly from the target in straight lines and impact energetically on the substrates or vacuum chamber (causing resputtering). Alternatively, at higher gas pressures, the ions collide with the gas atoms that act as a moderator and move diffusively, reaching the substrates or vacuum chamber wall and condensing after undergoing a random walk. The entire range from high-energy ballistic impact to low-energy thermalized motion is accessible by changing the background gas pressure. The sputtering gas is often an inert gas such as argon. For efficient momentum transfer, the atomic weight of the sputtering gas should be close to the atomic weight of the target, so for sputtering light elements neon is preferable, while for heavy elements krypton or xenon are used. Reactive gases can also be used to sputter compounds. The compound can be formed on the target surface, in-flight or on the substrate depending on the process parameters. The availability of many parameters that control sputter deposition make it a complex process, but also allow experts a large degree of control over the growth and microstructure of the film. Uses One of the earliest widespread commercial applications of sputter deposition, which is still one of its most important applications, is in the production of computer hard disks. Sputtering is used extensively in the semiconductor industry to deposit thin films of various materials in integrated circuit processing. Thin antireflection coatings on glass for optical applications are also deposited by sputtering. Because of the low substrate temperatures used, sputtering is an ideal method to deposit contact metals for thin-film transistors. Another familiar application of sputtering is low-emissivity coatings on glass, used in double-pane window assemblies. The coating is a multilayer containing silver and metal oxides such as zinc oxide, tin oxide, or titanium dioxide. A large industry has developed around tool bit coating using sputtered nitrides, such as titanium nitride, creating the familiar gold colored hard coat. Sputtering is also used as the process to deposit the metal (e.g. aluminium) layer during the fabrication of CDs and DVDs. Hard disk surfaces use sputtered CrOx and other sputtered materials. Sputtering is one of the main processes of manufacturing optical waveguides and is another way for making efficient photovoltaic and thin film solar cells. In 2022, researchers at IMEC built up lab superconducting qubits with coherence times exceeding 100 μs and an average single-qubit gate fidelity of 99.94%, using CMOS-compatible fabrication techniques such as sputtering deposition and subtractive etch. Sputter coating Sputter coating in scanning electron microscopy is a sputter deposition process to cover a specimen with a thin layer of conducting material, typically a metal, such as a gold/palladium (Au/Pd) alloy. 
A conductive coating is needed to prevent charging of a specimen with an electron beam in conventional SEM mode (high vacuum, high voltage). While metal coatings are also useful for increasing signal to noise ratio (heavy metals are good secondary electron emitters), they are of inferior quality when X-ray spectroscopy is employed. For this reason when using X-ray spectroscopy a carbon coating is preferred. Comparison with other deposition methods An important advantage of sputter deposition is that even materials with very high melting points are easily sputtered while evaporation of these materials in a resistance evaporator or Knudsen cell is problematic or impossible. Sputter deposited films have a composition close to that of the source material. The difference is due to different elements spreading differently because of their different mass (light elements are deflected more easily by the gas) but this difference is constant. Sputtered films typically have a better adhesion on the substrate than evaporated films. A target contains a large amount of material and is maintenance free making the technique suited for ultrahigh vacuum applications. Sputtering sources contain no hot parts (to avoid heating they are typically water cooled) and are compatible with reactive gases such as oxygen. Sputtering can be performed top-down while evaporation must be performed bottom-up. Advanced processes such as epitaxial growth are possible. Some disadvantages of the sputtering process are that the process is more difficult to combine with a lift-off for structuring the film. This is because the diffuse transport, characteristic of sputtering, makes a full shadow impossible. Thus, one cannot fully restrict where the atoms go, which can lead to contamination problems. Also, active control for layer-by-layer growth is difficult compared to pulsed laser deposition and inert sputtering gases are built into the growing film as impurities. Pulsed laser deposition is a variant of the sputtering deposition technique in which a laser beam is used for sputtering. Role of the sputtered and resputtered ions and the background gas is fully investigated during the pulsed laser deposition process. Types of sputter deposition Sputtering sources often employ magnetrons that utilize strong electric and magnetic fields to confine charged plasma particles close to the surface of the sputter target. In a magnetic field, electrons follow helical paths around magnetic field lines, undergoing more ionizing collisions with gaseous neutrals near the target surface than would otherwise occur. (As the target material is depleted, a "racetrack" erosion profile may appear on the surface of the target.) The sputter gas is typically an inert gas such as argon. The extra argon ions created as a result of these collisions lead to a higher deposition rate. The plasma can also be sustained at a lower pressure this way. The sputtered atoms are neutrally charged and so are unaffected by the magnetic trap. Charge build-up on insulating targets can be avoided with the use of RF sputtering where the sign of the anode-cathode bias is varied at a high rate (commonly 13.56 MHz). RF sputtering works well to produce highly insulating oxide films but with the added expense of RF power supplies and impedance matching networks. Stray magnetic fields leaking from ferromagnetic targets also disturb the sputtering process. Specially designed sputter guns with unusually strong permanent magnets must often be used in compensation. 
Ion-beam sputtering Ion-beam sputtering (IBS) is a method in which the target is external to the ion source. A source can work without any magnetic field like in a hot filament ionization gauge. In a Kaufman source ions are generated by collisions with electrons that are confined by a magnetic field as in a magnetron. They are then accelerated by the electric field emanating from a grid toward a target. As the ions leave the source they are neutralized by electrons from a second external filament. IBS has an advantage in that the energy and flux of ions can be controlled independently. Since the flux that strikes the target is composed of neutral atoms, either insulating or conducting targets can be sputtered. IBS has found application in the manufacture of thin-film heads for disk drives. A pressure gradient between the ion source and the sample chamber is generated by placing the gas inlet at the source and shooting through a tube into the sample chamber. This saves gas and reduces contamination in UHV applications. The principal drawback of IBS is the large amount of maintenance required to keep the ion source operating. Reactive sputtering In reactive sputtering, the sputtered particles from a target material undergo a chemical reaction aiming to deposit a film with different composition on a certain substrate. The chemical reaction that the particles undergo is with a reactive gas introduced into the sputtering chamber such as oxygen or nitrogen, enabling the production of oxide and nitride films, respectively. The introduction of an additional element to the process, i.e. the reactive gas, has a significant influence in the desired depositions, making it more difficult to find ideal working points. Like so, the wide majority of reactive-based sputtering processes are characterized by an hysteresis-like behavior, thus needing proper control of the involved parameters, e.g. the partial pressure of working (or inert) and reactive gases, to undermine it. Berg et al. proposed a significant model, i.e. Berg Model, to estimate the impact upon addition of the reactive gas in sputtering processes. Generally, the influence of the reactive gas' relative pressure and flow were estimated in accordance to the target's erosion and film's deposition rate on the desired substrate. The composition of the film can be controlled by varying the relative pressures of the inert and reactive gases. Film stoichiometry is an important parameter for optimizing functional properties like the stress in SiNx and the index of refraction of SiOx. Ion-assisted deposition In ion-assisted deposition (IAD), the substrate is exposed to a secondary ion beam operating at a lower power than the sputter gun. Usually a Kaufman source, like that used in IBS, supplies the secondary beam. IAD can be used to deposit carbon in diamond-like form on a substrate. Any carbon atoms landing on the substrate which fail to bond properly in the diamond crystal lattice will be knocked off by the secondary beam. NASA used this technique to experiment with depositing diamond films on turbine blades in the 1980s. IAD is used in other important industrial applications such as creating tetrahedral amorphous carbon surface coatings on hard disk platters and hard transition metal nitride coatings on medical implants. High-target-utilization sputtering (HiTUS) Sputtering may also be performed by remote generation of a high density plasma. 
The plasma is generated in a side chamber opening into the main process chamber, containing the target and the substrate to be coated. As the plasma is generated remotely, and not from the target itself (as in conventional magnetron sputtering), the ion current to the target is independent of the voltage applied to the target. High-power impulse magnetron sputtering (HiPIMS) HiPIMS is a method for physical vapor deposition of thin films which is based on magnetron sputter deposition. HiPIMS utilizes extremely high power densities of the order of kW/cm2 in short pulses (impulses) of tens of microseconds at low duty cycle of < 10%. Gas flow sputtering Gas flow sputtering makes use of the hollow cathode effect, the same effect by which hollow cathode lamps operate. In gas flow sputtering a working gas like argon is led through an opening in a metal subjected to a negative electrical potential. Enhanced plasma densities occur in the hollow cathode, if the pressure in the chamber p and a characteristic dimension L of the hollow cathode obey the Paschen's law 0.5 Pa·m < p·L < 5 Pa·m. This causes a high flux of ions on the surrounding surfaces and a large sputter effect. The hollow-cathode based gas flow sputtering may thus be associated with large deposition rates up to values of a few μm/min. Structure and morphology In 1974 J. A. Thornton applied the structure zone model for the description of thin film morphologies to sputter deposition. In a study on metallic layers prepared by DC sputtering, he extended the structure zone concept initially introduced by Movchan and Demchishin for evaporated films. Thornton introduced a further structure zone T, which was observed at low argon pressures and characterized by densely packed fibrous grains. The most important point of this extension was to emphasize the pressure p as a decisive process parameter. In particular, if hyperthermal techniques like sputtering etc. are used for the sublimation of source atoms, the pressure governs via the mean free path the energy distribution with which they impinge on the surface of the growing film. Next to the deposition temperature Td the chamber pressure or mean free path should thus always be specified when considering a deposition process. Since sputter deposition belongs to the group of plasma-assisted processes, next to neutral atoms also charged species (like argon ions) hit the surface of the growing film, and this component may exert a large effect. Denoting the fluxes of the arriving ions and atoms by Ji and Ja, it turned out that the magnitude of the Ji/Ja ratio plays a decisive role on the microstructure and morphology obtained in the film. The effect of ion bombardment may quantitatively be derived from structural parameters like preferred orientation of crystallites or texture and from the state of residual stress. It has been shown recently that textures and residual stresses may arise in gas-flow sputtered Ti layers that compare to those obtained in macroscopic Ti work pieces subjected to a severe plastic deformation by shot peening. See also Coating References Further reading The Foundations of Vacuum Coating Technology by D. Mattox External links Thin Film Evaporation Guide Sputter Animation Magnetron Sputtering Animation Physical vapor deposition techniques Semiconductor device fabrication Plasma processing Thin film deposition
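The influence of chamber pressure through the mean free path, discussed above, can be sketched with the standard kinetic-theory estimate λ = kB·T/(√2·π·d²·p); the argon diameter and temperature below are illustrative assumptions, not values from the article:

# Rough kinetic-theory estimate of the mean free path of a gas atom,
# lambda = k_B * T / (sqrt(2) * pi * d^2 * p), illustrating how chamber
# pressure moves sputter transport between ballistic and diffusive regimes.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # gas temperature, K (assumed)
d = 3.4e-10             # effective atomic diameter of argon, m (assumed)

for p in (0.1, 1.0, 10.0):   # chamber pressures in pascal
    lam = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
    print(f"p = {p:5.1f} Pa  ->  mean free path ~ {lam*1000:.1f} mm")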
Sputter deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
2,819
[ "Microtechnology", "Thin film deposition", "Coatings", "Thin films", "Semiconductor device fabrication", "Planes (geometry)", "Solid state engineering" ]
9,089,819
https://en.wikipedia.org/wiki/Buckley%E2%80%93Leverett%20equation
In fluid dynamics, the Buckley–Leverett equation is a conservation equation used to model two-phase flow in porous media. The Buckley–Leverett equation or the Buckley–Leverett displacement describes an immiscible displacement process, such as the displacement of oil by water, in a one-dimensional or quasi-one-dimensional reservoir. This equation can be derived from the mass conservation equations of two-phase flow, under the assumptions listed below. Equation In a quasi-1D domain, the Buckley–Leverett equation is given by: ∂Sw/∂t + (Q/(φA)) (dfw/dSw) ∂Sw/∂x = 0, where Sw(x, t) is the wetting-phase (water) saturation, Q is the total flow rate, φ is the rock porosity, A is the area of the cross-section in the sample volume, and fw(Sw) is the fractional flow function of the wetting phase. Typically, fw(Sw) is an S-shaped, nonlinear function of the saturation Sw, which characterizes the relative mobilities of the two phases: fw = λw/(λw + λn), where λw = krw/μw and λn = krn/μn denote the wetting and non-wetting phase mobilities. krw and krn denote the relative permeability functions of each phase and μw and μn represent the phase viscosities. Assumptions The Buckley–Leverett equation is derived based on the following assumptions: Flow is linear and horizontal Both wetting and non-wetting phases are incompressible Immiscible phases Negligible capillary pressure effects (this implies that the pressures of the two phases are equal) Negligible gravitational forces General solution The characteristic velocity of the Buckley–Leverett equation is given by: U(Sw) = (Q/(φA)) dfw/dSw. The hyperbolic nature of the equation implies that the solution of the Buckley–Leverett equation has the form Sw(x, t) = Sw(x − U t), where U is the characteristic velocity given above. The non-convexity of the fractional flow function also gives rise to the well known Buckley-Leverett profile, which consists of a shock wave immediately followed by a rarefaction wave. See also Capillary pressure Permeability (fluid) Relative permeability Darcy's law References External links Buckley-Leverett Equation and Uses in Porous Media Conservation equations Equations of fluid dynamics
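A minimal numerical sketch of the fractional flow function and the characteristic velocity described above (the Corey-type relative permeability curves, viscosities, flow rate, porosity and cross-section are illustrative assumptions, not values from the article):

import numpy as np

mu_w, mu_n = 1.0e-3, 5.0e-3                 # assumed wetting/non-wetting viscosities, Pa·s
Q, phi, A = 1.0e-4, 0.2, 1.0                # assumed total rate (m³/s), porosity, cross-section (m²)

def f_w(Sw):
    krw, krn = Sw**2, (1.0 - Sw)**2         # illustrative Corey-type relative permeabilities
    lam_w, lam_n = krw / mu_w, krn / mu_n   # phase mobilities
    return lam_w / (lam_w + lam_n)          # fractional flow of the wetting phase

Sw = np.linspace(0.01, 0.99, 99)
dfdS = np.gradient(f_w(Sw), Sw)             # numerical derivative df_w/dS_w
U = Q / (phi * A) * dfdS                    # characteristic velocity of each saturation value
print("fastest-moving saturation:", Sw[np.argmax(U)])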
Buckley–Leverett equation
[ "Physics", "Chemistry", "Mathematics" ]
418
[ "Fluid dynamics stubs", "Equations of fluid dynamics", "Equations of physics", "Conservation laws", "Mathematical objects", "Equations", "Fluid dynamics", "Conservation equations", "Symmetry", "Physics theorems" ]
11,268,035
https://en.wikipedia.org/wiki/Alloy%20%28specification%20language%29
In computer science and software engineering, Alloy is a declarative specification language for expressing complex structural constraints and behavior in a software system. Alloy provides a simple structural modeling tool based on first-order logic. Alloy is targeted at the creation of micro-models that can then be automatically checked for correctness. Alloy specifications can be checked using the Alloy Analyzer. Although Alloy is designed with automatic analysis in mind, Alloy differs from many specification languages designed for model-checking in that it permits the definition of infinite models. The Alloy Analyzer is designed to perform finite scope checks even on infinite models. The Alloy language and analyzer are developed by a team led by Daniel Jackson at the Massachusetts Institute of Technology in the United States. History and influences The first version of the Alloy language appeared in 1997. It was a rather limited object modeling language. Succeeding iterations of the language "added quantifiers, higher arity relations, polymorphism, subtyping, and signatures". The mathematical underpinnings of the language were heavily influenced by the Z notation, and the syntax of Alloy owes more to languages such as Object Constraint Language. The Alloy Analyzer The Alloy Analyzer was specifically developed to support so-called "lightweight formal methods". As such, it is intended to provide fully automated analysis, in contrast to the interactive theorem proving techniques commonly used with specification languages similar to Alloy. Development of the Analyzer was originally inspired by the automated analysis provided by model checkers. However, model-checking is ill-suited to the kind of models that are typically developed in Alloy, and as a result the core of the Analyzer was eventually implemented as a model-finder built atop a boolean SAT solver. Through version 3.0, the Alloy Analyzer incorporated an integral SAT-based model-finder based on an off-the-shelf SAT-solver. However, as of version 4.0 the Analyzer makes use of the Kodkod model-finder, for which the Analyzer acts as a front-end. Both model-finders essentially translate a model expressed in relational logic into a corresponding boolean logic formula, and then invoke an off-the-shelf SAT-solver on the boolean formula. In the event that the solver finds a solution, the result is translated back into a corresponding binding of constants to variables in the relational logic model. In order to ensure the model-finding problem is decidable, the Alloy Analyzer performs model-finding over restricted scopes consisting of a user-defined finite number of objects. This has the effect of limiting the generality of the results produced by the Analyzer. However, the designers of the Alloy Analyzer justify the decision to work within limited scopes through an appeal to the small scope hypothesis: that a high proportion of bugs can be found by testing a program for all test inputs within some small scope. 
Model structure Alloy models are relational in nature, and are composed of several different kinds of statements: Signatures define the vocabulary of a model by creating new sets. For example, sig Object{} defines a signature Object, while sig List{ head : lone Node } defines a signature List that contains a field head of type Node with multiplicity lone - this establishes the existence of a relation between Lists and Nodes such that every List is associated with no more than one head Node. Facts are constraints that are assumed to always hold. Predicates are parameterized constraints, and can be used to represent operations. Functions are expressions that return results. Assertions are properties that are expected to follow from the facts of a model; the Analyzer checks them by searching for counterexamples within the given scope. Because Alloy is a declarative language, the meaning of a model is unaffected by the order of statements. References External links Alloy website Alloy Github Repository Guide to Alloy Kodkod analysis engine website at MIT An Alloy Metamodel in Ecore Formal methods tools Satisfiability problems Massachusetts Institute of Technology software Computer-related introductions in 1997 Formal specification languages Z notation
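The finite-scope model finding described above can be loosely illustrated, without Alloy or Kodkod themselves, by brute-force enumeration of every structure within a small scope; the relational property checked below is a made-up stand-in for an Alloy assertion, not an example taken from the article:

# Loose illustration of finite-scope checking: enumerate every binary relation
# over a 3-element scope and check the assertion
# "every transitive, irreflexive relation is asymmetric".
from itertools import product

scope = range(3)
pairs = [(a, b) for a in scope for b in scope]

def transitive(R):
    return all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)

def irreflexive(R):
    return all((a, a) not in R for a in scope)

def asymmetric(R):
    return all((b, a) not in R for (a, b) in R)

counterexamples = 0
for bits in product([False, True], repeat=len(pairs)):   # 2^9 candidate relations
    R = {p for p, keep in zip(pairs, bits) if keep}
    if transitive(R) and irreflexive(R) and not asymmetric(R):
        counterexamples += 1

print("counterexamples within scope 3:", counterexamples)   # expect 0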
Alloy (specification language)
[ "Mathematics" ]
796
[ "Automated theorem proving", "Z notation", "Mathematical software", "Computational problems", "Formal methods tools", "Mathematical problems", "Satisfiability problems" ]
11,268,193
https://en.wikipedia.org/wiki/Microoptomechanical%20systems
Microoptomechanical systems (MOMS), also written as micro-optomechanical systems, are a special class of microelectromechanical systems (MEMS) which use optical and mechanical, but not electronic components. See also Microoptoelectromechanical systems (MOEMS) Nanoelectromechanical systems (NEMS) References Microtechnology
Microoptomechanical systems
[ "Materials_science", "Engineering" ]
85
[ "Materials science", "Microtechnology" ]
11,268,558
https://en.wikipedia.org/wiki/Microlithography
Microlithography is a general name for any manufacturing process that can create a minutely patterned thin film of protective materials over a substrate, such as a silicon wafer, in order to protect selected areas of it during subsequent etching, deposition, or implantation operations. The term is normally used for processes that can reliably produce features of microscopic size, such as 10 micrometres or less. The term nanolithography may be used to designate processes that can produce nanoscale features, such as less than 100 nanometres. Microlithography is a microfabrication process that is extensively used in the semiconductor industry and is also used to manufacture microelectromechanical systems. Processes Specific microlithography processes include: Photolithography using light projected on a photosensitive material film (photoresist). Electron beam lithography, using a steerable electron beam. Nanoimprinting Interference lithography Magnetolithography Scanning probe lithography Surface-charge lithography Diffraction lithography These processes differ in speed and cost, as well as in the material they can be applied to and the range of feature sizes they can produce. For instance, while the size of features achievable with photolithography is limited by the wavelength of the light used, the technique is considerably faster and simpler than electron beam lithography, which can achieve much smaller features. Applications The main application for microlithography is fabrication of integrated circuits ("electronic chips"), such as solid-state memories and microprocessors. These processes can also be used to create diffraction gratings, microscope calibration grids, and other flat structures with microscopic details. See also Printed circuit board References Integrated circuits Lithography (microfabrication)
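The wavelength limit on photolithographic feature size mentioned above is commonly summarized by the Rayleigh criterion CD = k1·λ/NA; a small sketch with typical, assumed values (not figures from this article):

# Rayleigh resolution criterion for projection photolithography:
#   CD = k1 * wavelength / NA
# The process factor k1 and numerical aperture NA below are illustrative.
wavelength_nm = 193.0     # ArF excimer laser wavelength
k1 = 0.35                 # assumed aggressive process factor
NA = 1.35                 # assumed numerical aperture (immersion system)

cd_nm = k1 * wavelength_nm / NA
print(f"minimum printable feature size ~ {cd_nm:.0f} nm")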
Microlithography
[ "Materials_science", "Technology", "Engineering" ]
373
[ "Computer engineering", "Microtechnology", "Nanotechnology", "Integrated circuits", "Lithography (microfabrication)" ]
11,270,253
https://en.wikipedia.org/wiki/Solar%20dial
A solar dial is a type of time switch used primarily for controlling lighting. The benefit of a solar dial over a conventional 'on-off' time switch is the ability to 'track' the sunrise and sunset times for a particular latitude (which is specified when the unit is purchased). The solar dial 'adjusts' itself by a fractional amount each day, thereby ensuring that street lighting is switched on and off when required throughout the year. Many dials also have an additional 'part night' facility allowing for a switch-off in the middle of the night, and then back on in the morning if needed. This 'part night' option was widely adopted in the United Kingdom for street lighting in the 1970s and 1980s in order to conserve energy. Some solar dial switches have a clockwork or battery 'reserve' to maintain time accuracy in cases of power outage. If this is lacking, the switch would have to be reset every time the power fails, a labour-intensive task. Frequently, one time switch with a heavier switch rating is used to control a whole series of lighting columns, perhaps one side of a street, and another to control the opposite side. Many columns are however fitted with individual clocks, especially on alleyways, pathways, and areas in which a single column stands alone. Sometimes the time switch is housed in a box fitted to a wall or telegraph pole, and the lanterns are powered/switched by means of an extra (fifth) core on the overhead cables. Obsolescence The solar dial time switch has largely been superseded by photocell control, which is cheaper and requires less maintenance. Solar dials are still used for lighting stairwells and car parks, and in some cases local authorities may request them for street lighting, though this is rare. Solar dials are often found in the rural United Kingdom , but as these fail they are sometimes replaced by a photocell, usually on a new lantern and sometimes with a whole new column. More recently, digital sunrise/sunset tracking time switches have appeared on the market, but these are generally too expensive for large scale use in street lighting and have not been adopted for this purpose, except in a few rare instances. Sangamo still manufacture and sell three models of solar dials from their factory in Port Glasgow. Solar dials continue to be used in retail premises, security systems, lighting and industrial heating systems. Old examples from the 1950s, 1960s and 1970s are collected by enthusiasts. See also Street Light References External links Incatron Time Switch and Lighting Control Archive Clocks Control devices
Solar dial
[ "Physics", "Technology", "Engineering" ]
515
[ "Machines", "Control devices", "Clocks", "Measuring instruments", "Physical systems", "Control engineering" ]
17,785,628
https://en.wikipedia.org/wiki/Light%20cone%20gauge
In theoretical physics, light cone gauge is an approach to removing the ambiguities arising from a gauge symmetry. While the term refers to several situations, a null component of a field A is set to zero (or a simple function of other variables) in all cases. The advantage of light-cone gauge is that fields, e.g. gluons in the QCD case, are transverse. Consequently, all ghosts and other unphysical degrees of freedom are eliminated. The disadvantage is that some symmetries such as Lorentz symmetry become obscured (they become non-manifest, i.e. hard to prove). Gauge theory In gauge theory, light-cone gauge refers to the condition A⁺ = 0, where A⁺ is the null (light-cone) component of the gauge field A. It is a method to get rid of the redundancies implied by Yang–Mills symmetry. String theory In string theory, light-cone gauge fixes the reparameterization invariance on the world sheet by setting X⁺ = c τ, where c is a constant and τ is the worldsheet time. See also light cone coordinates References Theoretical physics
Light cone gauge
[ "Physics" ]
208
[ "Theoretical physics", "Theoretical physics stubs" ]
17,785,651
https://en.wikipedia.org/wiki/Light%20front%20quantization
The light-front quantization of quantum field theories provides a useful alternative to ordinary equal-time quantization. In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions. The quantization is based on the choice of light-front coordinates, where plays the role of time and the corresponding spatial coordinate is . Here, is the ordinary time, is one Cartesian coordinate, and is the speed of light. The other two Cartesian coordinates, and , are untouched and often called transverse or perpendicular, denoted by symbols of the type . The choice of the frame of reference where the time and -axis are defined can be left unspecified in an exactly soluble relativistic theory, but in practical calculations some choices may be more suitable than others. Overview In practice, virtually all measurements are made at fixed light-front time. For example, when an electron scatters on a proton as in the famous SLAC experiments that discovered the quark structure of hadrons, the interaction with the constituents occurs at a single light-front time. When one takes a flash photograph, the recorded image shows the object as the front of the light wave from the flash crosses the object. Thus Dirac used the terminology "light-front" and "front form" in contrast to ordinary instant time and "instant form". Light waves traveling in the negative direction continue to propagate in at a single light-front time . As emphasized by Dirac, Lorentz boosts of states at fixed light-front time are simple kinematic transformations. The description of physical systems in light-front coordinates is unchanged by light-front boosts to frames moving with respect to the one specified initially. This also means that there is a separation of external and internal coordinates (just as in nonrelativistic systems), and the internal wave functions are independent of the external coordinates, if there is no external force or field. In contrast, it is a difficult dynamical problem to calculate the effects of boosts of states defined at a fixed instant time . The description of a bound state in a quantum field theory, such as an atom in quantum electrodynamics (QED) or a hadron in quantum chromodynamics (QCD), generally requires multiple wave functions, because quantum field theories include processes which create and annihilate particles. The state of the system then does not have a definite number of particles, but is instead a quantum-mechanical linear combination of Fock states, each with a definite particle number. Any single measurement of particle number will return a value with a probability determined by the amplitude of the Fock state with that number of particles. These amplitudes are the light-front wave functions. The light-front wave functions are each frame-independent and independent of the total momentum. The wave functions are the solution of a field-theoretic analog of the Schrödinger equation of nonrelativistic quantum mechanics. In the nonrelativistic theory the Hamiltonian operator is just a kinetic piece and a potential piece . The wave function is a function of the coordinate , and is the energy. In light-front quantization, the formulation is usually written in terms of light-front momenta , with a particle index, , , and the particle mass, and light-front energies . They satisfy the mass-shell condition The analog of the nonrelativistic Hamiltonian is the light-front operator , which generates translations in light-front time. 
It is constructed from the Lagrangian for the chosen quantum field theory. The total light-front momentum of the system, , is the sum of the single-particle light-front momenta. The total light-front energy is fixed by the mass-shell condition to be , where is the invariant mass of the system. The Schrödinger-like equation of light-front quantization is then . This provides a foundation for a nonperturbative analysis of quantum field theories that is quite distinct from the lattice approach. Quantization on the light-front provides the rigorous field-theoretical realization of the intuitive ideas of the parton model which is formulated at fixed in the infinite-momentum frame. (see #Infinite momentum frame). The same results are obtained in the front form for any frame; e.g., the structure functions and other probabilistic parton distributions measured in deep inelastic scattering are obtained from the squares of the boost-invariant light-front wave functions, the eigensolution of the light-front Hamiltonian. The Bjorken kinematic variable of deep inelastic scattering becomes identified with the light-front fraction at small . The Balitsky–Fadin–Kuraev–Lipatov (BFKL) Regge behavior of structure functions can be demonstrated from the behavior of light-front wave functions at small . The Dokshitzer–Gribov–Lipatov–Altarelli–Parisi (DGLAP) evolution of structure functions and the Efremov–Radyushkin–Brodsky–Lepage (ERBL) evolution of distribution amplitudes in are properties of the light-front wave functions at high transverse momentum. Computing hadronic matrix elements of currents is particularly simple on the light-front, since they can be obtained rigorously as overlaps of light-front wave functions as in the Drell–Yan–West formula. The gauge-invariant meson and baryon distribution amplitudes which control hard exclusive and direct reactions are the valence light-front wave functions integrated over transverse momentum at fixed . The "ERBL" evolution of distribution amplitudes and the factorization theorems for hard exclusive processes can be derived most easily using light-front methods. Given the frame-independent light-front wave functions, one can compute a large range of hadronic observables including generalized parton distributions, Wigner distributions, etc. For example, the "handbag" contribution to the generalized parton distributions for deeply virtual Compton scattering, which can be computed from the overlap of light-front wave functions, automatically satisfies the known sum rules. The light-front wave functions contain information about novel features of QCD. These include effects suggested from other approaches, such as color transparency, hidden color, intrinsic charm, sea-quark symmetries, dijet diffraction, direct hard processes, and hadronic spin dynamics. One can also prove fundamental theorems for relativistic quantum field theories using the front form, including: (a) the cluster decomposition theorem and (b) the vanishing of the anomalous gravitomagnetic moment for any Fock state of a hadron; one also can show that a nonzero anomalous magnetic moment of a bound state requires nonzero angular momentum of the constituents. The cluster properties of light-front time-ordered perturbation theory, together with conservation, can be used to elegantly derive the Parke–Taylor rules for multi-gluon scattering amplitudes. The counting-rule behavior of structure functions at large and Bloom–Gilman duality have also been derived in light-front QCD (LFQCD). 
The existence of "lensing effects" at leading twist, such as the T-odd "Sivers effect" in spin-dependent semi-inclusive deep-inelastic scattering, was first demonstrated using light-front methods. Light-front quantization is thus the natural framework for the description of the nonperturbative relativistic bound-state structure of hadrons in quantum chromodynamics. The formalism is rigorous, relativistic, and frame-independent. However, there exist subtle problems in LFQCD that require thorough investigation. For example, the complexities of the vacuum in the usual instant-time formulation, such as the Higgs mechanism and condensates in φ^4 theory, have their counterparts in zero modes or, possibly, in additional terms in the LFQCD Hamiltonian that are allowed by power counting. Light-front considerations of the vacuum as well as the problem of achieving full covariance in LFQCD require close attention to the light-front singularities and zero-mode contributions. The truncation of the light-front Fock space calls for the introduction of effective quark and gluon degrees of freedom to overcome truncation effects. Introduction of such effective degrees of freedom is what one desires in seeking the dynamical connection between canonical (or current) quarks and effective (or constituent) quarks that Melosh sought, and Gell-Mann advocated, as a method for truncating QCD. The light-front Hamiltonian formulation thus opens access to QCD at the amplitude level and is poised to become the foundation for a common treatment of spectroscopy and the parton structure of hadrons in a single covariant formalism, providing a unifying connection between low-energy and high-energy experimental data that so far remain largely disconnected. Fundamentals Front-form relativistic quantum mechanics was introduced by Paul Dirac in a 1949 paper published in Reviews of Modern Physics. Light-front quantum field theory is the front-form representation of local relativistic quantum field theory. The relativistic invariance of a quantum theory means that the observables (probabilities, expectation values and ensemble averages) have the same values in all inertial coordinate systems. Since different inertial coordinate systems are related by inhomogeneous Lorentz transformations (Poincaré transformations), this requires that the Poincaré group is a symmetry group of the theory. Wigner and Bargmann showed that this symmetry must be realized by a unitary representation of the connected component of the Poincaré group on the Hilbert space of the quantum theory. The Poincaré symmetry is a dynamical symmetry because Poincaré transformations mix both space and time variables. The dynamical nature of this symmetry is most easily seen by noting that the Hamiltonian appears on the right-hand side of three of the commutators of the Poincaré generators, [K^j, P^k] = i δ^{jk} H, where the P^k are components of the linear momentum and the K^j are components of the rotation-less boost generators. If the Hamiltonian includes interactions, i.e. H = H_0 + V with V ≠ 0, then the commutation relations cannot be satisfied unless at least three of the Poincaré generators also include interactions. Dirac's paper introduced three distinct ways to minimally include interactions in the Poincaré Lie algebra. He referred to the different minimal choices as the "instant-form", "point-form" and "front-form" of the dynamics. Each "form of dynamics" is characterized by a different interaction-free (kinematic) subgroup of the Poincaré group. 
In Dirac's instant-form dynamics the kinematic subgroup is the three-dimensional Euclidean subgroup generated by spatial translations and rotations, in Dirac's point-form dynamics the kinematic subgroup is the Lorentz group and in Dirac's "light-front dynamics" the kinematic subgroup is the group of transformations that leave a three-dimensional hyperplane tangent to the light cone invariant. A light front is a three-dimensional hyperplane defined by the condition x^+ = x^0 + n·x = 0, with n a fixed unit three-vector, where the usual convention is to choose n = (0, 0, 1). Coordinates of points on the light-front hyperplane are (x^-, x_⊥). The Lorentz invariant inner product of two four-vectors, a and b, can be expressed in terms of their light-front components as a·b = (1/2)(a^+ b^- + a^- b^+) - a_⊥·b_⊥. In a front-form relativistic quantum theory the three interacting generators of the Poincaré group are P^-, the generator of translations normal to the light front, and F^1 and F^2, the generators of rotations transverse to the light-front. P^- is called the "light-front" Hamiltonian. The kinematic generators, which generate transformations tangent to the light front, are free of interaction. These include P^+ and P_⊥, which generate translations tangent to the light front, J^3, which generates rotations about the z axis, and the generators K^3, E^1 and E^2 of light-front preserving boosts, which form a closed subalgebra. Light-front quantum theories have the following distinguishing properties: Only three Poincaré generators include interactions. All of Dirac's other forms of the dynamics require four or more interacting generators. The light-front boosts are a three-parameter subgroup of the Lorentz group that leave the light front invariant. The spectrum of the kinematic generator, P^+, is the positive real line. These properties have consequences that are useful in applications. There is no loss of generality in using light-front relativistic quantum theories. For systems of a finite number of degrees of freedom there are explicit S-matrix-preserving unitary transformations that transform theories with light-front kinematic subgroups to equivalent theories with instant-form or point-form kinematic subgroups. One expects that this is true in quantum field theory, although establishing the equivalence requires a nonperturbative definition of the theories in different forms of dynamics. Light-front Commutation Relations Canonical commutation relations at equal time are the centerpiece of the canonical quantization method for quantizing fields. In the standard quantization method (the "Instant Form" in Dirac's classification of relativistic dynamics), the relations are, for example here for a spin-0 field φ and its canonical conjugate π, taken at equal time t, where x and y are the space variables. The equal-time requirement imposes that the separation between the two points is spacelike. The delta-function form of the commutator expresses the fact that when x and y are separated by a spacelike distance, the fields cannot communicate with each other and thus commute, except when their separation vanishes. In the Light-Front form however, fields at equal time are causally linked (i.e., they can communicate) since the Light-Front time is along the light cone. Consequently, the Light-Front canonical commutation relations are different. For instance, the equal-light-front-time commutator involves ε(x^- - y^-), where ε denotes the antisymmetric Heaviside step function (the sign function). On the other hand, the commutation relations for the creation and annihilation operators are similar for both the Instant and Light-Front forms, where k and k′ are the wavevectors of the fields. 
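In one common convention, the relations referred to in this section take the following explicit form. This is a schematic summary for a free spin-0 field; normalization factors and metric conventions vary between authors:

  a \cdot b = \tfrac{1}{2} a^+ b^- + \tfrac{1}{2} a^- b^+ - \vec a_\perp \cdot \vec b_\perp ,

  \text{instant form (equal time } x^0 = y^0\text{):}\qquad [\varphi(x), \partial_0 \varphi(y)] = i\, \delta^3(\vec x - \vec y),

  \text{front form (equal light-front time } x^+ = y^+\text{):}\qquad [\varphi(x), \varphi(y)] = -\tfrac{i}{4}\, \epsilon(x^- - y^-)\, \delta^2(\vec x_\perp - \vec y_\perp),

where \epsilon is the sign (antisymmetric step) function. For the mode operators one finds, in a covariant normalization,

  [a(q), a^\dagger(q')] = (2\pi)^3\, 2 q^+\, \delta(q^+ - q'^+)\, \delta^2(\vec q_\perp - \vec q'_\perp),

with other normalizations omitting the factor 2q^+; in either case the structure is the same in the instant and front forms, as stated above.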
Light-front boosts In general if one multiplies a Lorentz boost on the right by a momentum-dependent rotation, which leaves the rest vector unchanged, the result is a different type of boost. In principle there are as many different kinds of boosts as there are momentum-dependent rotations. The most common choices are rotation-less boosts, helicity boosts, and light-front boosts. The light-front boost () is a Lorentz boost that leaves the light front invariant. The light-front boosts are not only members of the light-front kinematic subgroup, but they also form a closed three-parameter subgroup. This has two consequences. First, because the boosts do not involve interactions, the unitary representations of light-front boosts of an interacting system of particles are tensor products of single-particle representations of light-front boosts. Second, because these boosts form a subgroup, arbitrary sequences of light-front boosts that return to the starting frame do not generate Wigner rotations. The spin of a particle in a relativistic quantum theory is the angular momentum of the particle in its rest frame. Spin observables are defined by boosting the particle's angular momentum tensor to the particle's rest frame where is a Lorentz boost that transforms to . The components of the resulting spin vector, , always satisfy commutation relations, but the individual components will depend on the choice of boost . The light-front components of the spin are obtained by choosing to be the inverse of the light-front preserving boost, (). The light-front components of the spin are the components of the spin measured in the particle's rest frame after transforming the particle to its rest frame with the light-front preserving boost (). The light-front spin is invariant with respect to light-front preserving-boosts because these boosts do not generate Wigner rotations. The component of this spin along the direction is called the light-front helicity. In addition to being invariant, it is also a kinematic observable, i.e. free of interactions. It is called a helicity because the spin quantization axis is determined by the orientation of the light front. It differs from the Jacob–Wick helicity, where the quantization axis is determined by the direction of the momentum. These properties simplify the computation of current matrix elements because (1) initial and final states in different frames are related by kinematic Lorentz transformations, (2) the one-body contributions to the current matrix, which are important for hard scattering, do not mix with the interaction-dependent parts of the current under light front boosts and (3) the light-front helicities remain invariant with respect to the light-front boosts. Thus, light-front helicity is conserved by every interaction at every vertex. Because of these properties, front-form quantum theory is the only form of relativistic dynamics that has true "frame-independent" impulse approximations, in the sense that one-body current operators remain one-body operators in all frames related by light-front boosts and the momentum transferred to the system is identical to the momentum transferred to the constituent particles. Dynamical constraints, which follow from rotational covariance and current covariance, relate matrix elements with different magnetic quantum numbers. This means that consistent impulse approximations can only be applied to linearly independent current matrix elements. 
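To illustrate why the light-front boosts are kinematic and close into a subgroup, their action on the light-front components of a four-momentum can be written out explicitly. The following is a sketch in one common parametrization, with the metric normalized so that p^2 = p^+ p^- - \vec p_\perp^{\,2}:

  \text{longitudinal boost (rapidity } \eta\text{):}\qquad p^+ \to e^{\eta} p^+, \qquad p^- \to e^{-\eta} p^-, \qquad \vec p_\perp \to \vec p_\perp ,

  \text{transverse light-front boost (parameter } \vec v_\perp\text{):}\qquad p^+ \to p^+, \qquad \vec p_\perp \to \vec p_\perp + p^+ \vec v_\perp, \qquad p^- \to p^- + 2 \vec v_\perp \cdot \vec p_\perp + \vec v_\perp^{\,2}\, p^+ .

Both transformations leave p^2 invariant and leave the hyperplane x^+ = 0 invariant, and the composition of any two such transformations is again of the same form, which exhibits the closed three-parameter subgroup and the absence of Wigner rotations discussed above.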
Spectral condition A second unique feature of light-front quantum theory follows because the operator P^+ is non-negative and kinematic. The kinematic feature means that the generator P^+ is the sum of the non-negative single-particle generators, P^+ = Σ_i p_i^+ with p_i^+ ≥ 0. It follows that if P^+ is zero on a state, then each of the individual p_i^+ must also vanish on the state. In perturbative light-front quantum field theory this property leads to a suppression of a large class of diagrams, including all vacuum diagrams, which have zero internal p^+. The condition p^+ = 0 corresponds to infinite momentum. Many of the simplifications of light-front quantum field theory are realized in the infinite momentum limit of ordinary canonical field theory (see #Infinite momentum frame). An important consequence of the spectral condition on P^+ and the subsequent suppression of the vacuum diagrams in perturbative field theory is that the perturbative vacuum is the same as the free-field vacuum. This results in one of the great simplifications of light-front quantum field theory, but it also leads to some puzzles with regard to the formulation of theories with spontaneously broken symmetries. Equivalence of forms of dynamics Sokolov demonstrated that relativistic quantum theories based on different forms of dynamics are related by S-matrix-preserving unitary transformations. The equivalence in field theories is more complicated because the definition of the field theory requires a redefinition of the ill-defined local operator products that appear in the dynamical generators. This is achieved through renormalization. At the perturbative level, the ultraviolet divergences of a canonical field theory are replaced by a mixture of ultraviolet and infrared divergences in light-front field theory. These have to be renormalized in a manner that recovers the full rotational covariance and maintains the S-matrix equivalence. The renormalization of light front field theories is discussed in Light-front computational methods#Renormalization group. Classical vs quantum One of the properties of the classical wave equation is that the light-front is a characteristic surface for the initial value problem. This means the data on the light front is insufficient to generate a unique evolution off of the light front. If one thinks in purely classical terms one might anticipate that this problem could lead to an ill-defined quantum theory upon quantization. In the quantum case the problem is to find a set of ten self-adjoint operators that satisfy the Poincaré Lie algebra. In the absence of interactions, Stone's theorem applied to tensor products of known unitary irreducible representations of the Poincaré group gives a set of self-adjoint light-front generators with all of the required properties. The problem of adding interactions is no different than it is in non-relativistic quantum mechanics, except that the added interactions also need to preserve the commutation relations. There are, however, some related observations. One is that if one takes seriously the classical picture of evolution off of surfaces with different values of x^+, one finds that the surfaces with x^+ ≠ 0 are only invariant under a six-parameter subgroup. This means that if one chooses a quantization surface with a fixed non-zero value of x^+, the resulting quantum theory would require a fourth interacting generator. This does not happen in light-front quantum mechanics; all seven kinematic generators remain kinematic. 
The reason is that the choice of light front is more closely related to the choice of kinematic subgroup, than the choice of an initial value surface. In quantum field theory, the vacuum expectation value of two fields restricted to the light front are not well-defined distributions on test functions restricted to the light front. They only become well defined distributions on functions of four space time variables. Rotational invariance The dynamical nature of rotations in light-front quantum theory means that preserving full rotational invariance is non-trivial. In field theory, Noether's theorem provides explicit expressions for the rotation generators, but truncations to a finite number of degrees of freedom can lead to violations of rotational invariance. The general problem is how to construct dynamical rotation generators that satisfy Poincaré commutation relations with and the rest of the kinematic generators. A related problem is that, given that the choice of orientation of the light front manifestly breaks the rotational symmetry of the theory, how is the rotational symmetry of the theory recovered? Given a dynamical unitary representation of rotations, , the product of a kinematic rotation with the inverse of the corresponding dynamical rotation is a unitary operator that (1) preserves the -matrix and (2) changes the kinematic subgroup to a kinematic subgroup with a rotated light front, . Conversely, if the -matrix is invariant with respect to changing the orientation of the light-front, then the dynamical unitary representation of rotations, , can be constructed using the generalized wave operators for different orientations of the light front and the kinematic representation of rotations Because the dynamical input to the -matrix is , the invariance of the -matrix with respect to changing the orientation of the light front implies the existence of a consistent dynamical rotation generator without the need to explicitly construct that generator. The success or failure of this approach is related to ensuring the correct rotational properties of the asymptotic states used to construct the wave operators, which in turn requires that the subsystem bound states transform irreducibly with respect to . These observations make it clear that the rotational covariance of the theory is encoded in the choice of light-front Hamiltonian. Karmanov introduced a covariant formulation of light-front quantum theory, where the orientation of the light front is treated as a degree of freedom. This formalism can be used to identify observables that do not depend on the orientation, , of the light front (see #Covariant formulation). While the light-front components of the spin are invariant under light-front boosts, they Wigner rotate under rotation-less boosts and ordinary rotations. Under rotations the light-front components of the single-particle spins of different particles experience different Wigner rotations. This means that the light-front spin components cannot be directly coupled using the standard rules of angular momentum addition. Instead, they must first be transformed to the more standard canonical spin components, which have the property that the Wigner rotation of a rotation is the rotation. The spins can then be added using the standard rules of angular momentum addition and the resulting composite canonical spin components can be transformed back to the light-front composite spin components. 
The transformations between the different types of spin components are called Melosh rotations. They are the momentum-dependent rotations constructed by multiplying a light-front boost followed by the inverse of the corresponding rotation-less boost. In order to also add the relative orbital angular momenta, the relative orbital angular momenta of each particle must also be converted to a representation where they Wigner rotate with the spins. While the problem of adding spins and internal orbital angular momenta is more complicated, it is only total angular momentum that requires interactions; the total spin does not necessarily require an interaction dependence. Where the interaction dependence explicitly appears is in the relation between the total spin and the total angular momentum where here and contain interactions. The transverse components of the light-front spin, may or may not have an interaction dependence; however, if one also demands cluster properties, then the transverse components of total spin necessarily have an interaction dependence. The result is that by choosing the light front components of the spin to be kinematic it is possible to realize full rotational invariance at the expense of cluster properties. Alternatively it is easy to realize cluster properties at the expense of full rotational symmetry. For models of a finite number of degrees of freedom there are constructions that realize both full rotational covariance and cluster properties; these realizations all have additional many-body interactions in the generators that are functions of fewer-body interactions. The dynamical nature of the rotation generators means that tensor and spinor operators, whose commutation relations with the rotation generators are linear in the components of these operators, impose dynamical constraints that relate different components of these operators. Nonperturbative dynamics The strategy for performing nonperturbative calculations in light-front field theory is similar to the strategy used in lattice calculations. In both cases a nonperturbative regularization and renormalization are used to try to construct effective theories of a finite number of degrees of freedom that are insensitive to the eliminated degrees of freedom. In both cases the success of the renormalization program requires that the theory has a fixed point of the renormalization group; however, the details of the two approaches differ. The renormalization methods used in light-front field theory are discussed in Light-front computational methods#Renormalization group. In the lattice case the computation of observables in the effective theory involves the evaluation of large-dimensional integrals, while in the case of light-front field theory solutions of the effective theory involve solving large systems of linear equations. In both cases multi-dimensional integrals and linear systems are sufficiently well understood to formally estimate numerical errors. In practice such calculations can only be performed for the simplest systems. Light-front calculations have the special advantage that the calculations are all in Minkowski space and the results are wave functions and scattering amplitudes. Relativistic quantum mechanics While most applications of light-front quantum mechanics are to the light-front formulation of quantum field theory, it is also possible to formulate relativistic quantum mechanics of finite systems of directly interacting particles with a light-front kinematic subgroup. 
Light-front relativistic quantum mechanics is formulated on the direct sum of tensor products of single-particle Hilbert spaces. The kinematic representation of the Poincaré group on this space is the direct sum of tensor products of the single-particle unitary irreducible representations of the Poincaré group. A front-form dynamics on this space is defined by a dynamical representation of the Poincaré group on this space that agrees with the kinematic representation on the kinematic subgroup of the Poincaré group. One of the advantages of light-front quantum mechanics is that it is possible to realize exact rotational covariance for systems of a finite number of degrees of freedom. The way that this is done is to start with the non-interacting generators of the full Poincaré group, which are sums of single-particle generators, construct the kinematic invariant mass operator, the three kinematic generators of translations tangent to the light-front, the three kinematic light-front boost generators and the three components of the light-front spin operator. The remaining generators are well-defined functions of these operators. Interactions that commute with all of these operators except the kinematic mass are added to the kinematic mass operator to construct a dynamical mass operator. Using this dynamical mass operator in place of the kinematic mass in those expressions gives a set of dynamical Poincaré generators with a light-front kinematic subgroup. A complete set of irreducible eigenstates can be found by diagonalizing the interacting mass operator in a basis of simultaneous eigenstates of the light-front components of the kinematic momenta, the kinematic mass, the kinematic spin and the projection of the kinematic spin on the z axis. This is equivalent to solving the center-of-mass Schrödinger equation in non-relativistic quantum mechanics. The resulting mass eigenstates transform irreducibly under the action of the Poincaré group. These irreducible representations define the dynamical representation of the Poincaré group on the Hilbert space. This representation fails to satisfy cluster properties, but this can be restored using a front-form generalization of the recursive construction given by Sokolov. Infinite momentum frame The infinite momentum frame (IMF) was originally introduced to provide a physical interpretation of the Bjorken variable x_Bj = Q^2/(2Mν) measured in deep inelastic lepton-proton scattering in Feynman's parton model. (Here Q^2 is the square of the spacelike momentum transfer imparted by the lepton and ν is the energy transferred in the proton's rest frame.) If one considers a hypothetical Lorentz frame where the observer is moving at infinite momentum, P → ∞, in the negative z direction, then x_Bj can be interpreted as the longitudinal momentum fraction carried by the struck quark (or "parton") in the incoming fast moving proton. The structure function of the proton measured in the experiment is then given by the square of its instant-form wave function boosted to infinite momentum. Formally, there is a simple connection between the Hamiltonian formulation of quantum field theories quantized at fixed time t (the "instant form") where the observer is moving at infinite momentum and light-front Hamiltonian theory quantized at fixed light-front time x^+ (the "front form"). A typical energy denominator in the instant form involves the difference between the initial energy and the sum of energies of the particles in the intermediate state. 
In the IMF, where the observer moves at high momentum P in the negative z direction, the leading terms in P cancel, and the energy denominator becomes proportional to M^2 - Σ_i (m_i^2 + p_⊥i^2)/x_i, where M^2 is the invariant mass squared of the initial state. Thus, by keeping the terms of order 1/P in the instant form, one recovers the energy denominator which appears in light-front Hamiltonian theory. This correspondence has a physical meaning: measurements made by an observer moving at infinite momentum are analogous to making observations approaching the speed of light, thus matching the front form where measurements are made along the front of a light wave. An example of an application to quantum electrodynamics can be found in the work of Brodsky, Roskies and Suaya. The vacuum state in the instant form defined at fixed t is acausal and infinitely complicated. For example, in quantum electrodynamics, bubble graphs of all orders appear in the ground state vacuum; however, as shown by Weinberg, such vacuum graphs are frame-dependent and formally vanish by powers of 1/P as the observer moves at P → ∞. Thus, one can again match the instant form to the front-form formulation where such vacuum loop diagrams do not appear in the QED ground state. This is because the p^+ momentum of each constituent is positive, but the constituent momenta must sum to zero in the vacuum state since the momenta are conserved. However, unlike the instant form, no dynamical boosts are required, and the front form formulation is causal and frame-independent. The infinite momentum frame formalism is useful as an intuitive tool; however, the P → ∞ limit is not a rigorous limit, and the need to boost the instant-form wave function introduces complexities. Covariant formulation In light-front coordinates, x^+ = t + z and x_⊥ = (x, y), the spatial coordinates do not enter symmetrically: the coordinate z is distinguished, whereas x and y do not appear in the definition of the light-front plane at all. This non-covariant definition destroys the spatial symmetry that, in its turn, results in a few difficulties related to the fact that some transformation of the reference frame may change the orientation of the light-front plane. That is, the transformations of the reference frame and variation of orientation of the light-front plane are not decoupled from each other. Since the wave function depends dynamically on the orientation of the plane where it is defined, under these transformations the light-front wave function is transformed by dynamical operators (depending on the interaction). Therefore, in general, one should know the interaction to go from a given reference frame to the new one. The loss of symmetry between the coordinate z and the coordinates x, y also complicates the construction of the states with definite angular momentum, since the latter is just a property of the wave function relative to the rotations, which affect all the coordinates x, y, z. To overcome this inconvenience, an explicitly covariant version of light-front quantization was developed (reviewed by Carbonell et al.), in which the state vector is defined on the light-front plane of general orientation ω·x = 0 (instead of t + z = 0), where x is a four-dimensional vector in the four-dimensional space-time and ω is also a four-dimensional vector with the property ω^2 = 0. In the particular case ω = (1, 0, 0, -1) we come back to the standard construction. In the explicitly covariant formulation the transformation of the reference frame and the change of orientation of the light-front plane are decoupled. 
All the rotations and the Lorentz transformations are purely kinematical (they do not require knowledge of the interaction), whereas the (dynamical) dependence on the orientation of the light-front plane is covariantly parametrized by the wave function dependence on the four-vector ω. Rules of a graph technique were formulated which, for a given Lagrangian, allow one to calculate the perturbative decomposition of the state vector evolving in the light-front time (in contrast to the evolution in ordinary time). For the instant form of dynamics, these rules were first developed by Kadyshevsky. By these rules, the light-front amplitudes are represented as the integrals over the momenta of particles in intermediate states. These integrals are three-dimensional, and all the four-momenta are on the corresponding mass shells, p_i^2 = m_i^2, in contrast to the Feynman rules containing four-dimensional integrals over the off-mass-shell momenta. However, the calculated light-front amplitudes, being on the mass shell, are in general the off-energy-shell amplitudes. This means that the on-mass-shell four-momenta, which these amplitudes depend on, are not conserved in the direction normal to the light front (or, in general, in the direction of ω). The off-energy-shell amplitudes do not coincide with the Feynman amplitudes, and they depend on the orientation of the light-front plane. In the covariant formulation, this dependence is explicit: the amplitudes are functions of ω. This allows one to apply to them in full measure the well known techniques developed for the covariant Feynman amplitudes (constructing the invariant variables, similar to the Mandelstam variables, on which the amplitudes depend; the decompositions, in the case of particles with spins, in invariant amplitudes; extracting electromagnetic form factors; etc.). The irreducible off-energy-shell amplitudes serve as the kernels of equations for the light-front wave functions. The latter ones are found from these equations and used to analyze hadrons and nuclei. For spinless particles, and in the particular case of ω = (1, 0, 0, -1), the amplitudes found by the rules of covariant graph techniques, after replacement of variables, are reduced to the amplitudes given by the Weinberg rules in the infinite momentum frame. The dependence on orientation of the light-front plane manifests itself in the dependence of the off-energy-shell Weinberg amplitudes on the variables taken separately but not in some particular combinations like the Mandelstam variables s and t. On the energy shell, the amplitudes do not depend on the four-vector ω determining the orientation of the corresponding light-front plane. These on-energy-shell amplitudes coincide with the on-mass-shell amplitudes given by the Feynman rules. However, the dependence on ω can survive because of approximations. Angular momentum The covariant formulation is especially useful for constructing the states with definite angular momentum. In this construction, the four-vector ω participates on equal footing with other four-momenta, and, therefore, the main part of this problem is reduced to the well known one. For example, as is well known, the wave function of a non-relativistic system, consisting of two spinless particles with the relative momentum k and with total angular momentum l, is proportional to the spherical function Y_lm(n): ψ = f(k) Y_lm(n), where n = k/|k| and f is a function depending on the modulus k = |k|. The angular momentum operator reads L = -i[k × ∂/∂k]. 
Then the wave function of a relativistic system in the covariant formulation of light-front dynamics obtains a similar form, in terms of two scalar functions which depend, in addition to the modulus k, on the scalar product between the unit vectors characterizing the relative momentum and the orientation of the light front. These variables are invariant not only under rotations of the vectors k and n, but also under rotations and Lorentz transformations of the initial four-vectors. The second contribution means that the operator of the total angular momentum in explicitly covariant light-front dynamics obtains an additional term, associated with the orientation four-vector ω. For non-zero spin particles this operator obtains the contribution of the spin operators. The fact that the transformations changing the orientation of the light-front plane are dynamical (the corresponding generators of the Poincaré group contain interaction) manifests itself in the dependence of the coefficients on the scalar product, which varies when the orientation of the unit vector n changes (for fixed k). This dependence (together with the dependence on k) is found from the dynamical equation for the wave function. A peculiarity of this construction is the fact that there exists an operator which commutes both with the Hamiltonian and with the total angular momentum. The states are then labeled also by the eigenvalue of this operator. For a given angular momentum there are several such states. All of them are degenerate, i.e. they correspond to the same mass (if no approximation is made). However, the wave function should also satisfy the so-called angular condition. After satisfying it, the solution obtains the form of a unique superposition of the states with different eigenvalues of this operator. The extra contribution in the light-front angular momentum operator increases the number of spin components in the light-front wave function. For example, the non-relativistic deuteron wave function is determined by two components (S- and D-waves). In contrast, the relativistic light-front deuteron wave function is determined by six components. These components were calculated in the one-boson exchange model. Goals and prospects The central issue for light-front quantization is the rigorous description of hadrons, nuclei, and systems thereof from first principles in QCD. The main goals of the research using light-front dynamics are: Evaluation of masses and wave functions of hadrons using the light-front Hamiltonian of QCD. The analysis of hadronic and nuclear phenomenology based on fundamental quark and gluon dynamics, taking advantage of the connections between quark-gluon and nuclear many-body methods. Understanding of the properties of QCD at finite temperatures and densities, which is relevant for understanding the early universe as well as compact stellar objects. Developing predictions for tests at the new and upgraded hadron experimental facilities: JLAB, LHC, RHIC, J-PARC, GSI (FAIR). Analyzing the physics of intense laser fields, including a nonperturbative approach to strong-field QED. Providing bottom-up fitness tests for model theories as exemplified in the case of the Standard Model. The nonperturbative analysis of light-front QCD requires the following: Continue testing the light-front Hamiltonian approach in simple theories in order to improve our understanding of its peculiarities and treacherous points vis-à-vis manifestly covariant quantization methods. This will include work on theories such as Yukawa theory and QED and on theories with unbroken supersymmetry, in order to understand the strengths and limitations of different methods. Much progress has already been made along these lines. 
Construct symmetry-preserving regularization and renormalization schemes for light-front QCD, to include the Pauli–Villars-based method of the St. Petersburg group, the Glazek–Wilson similarity renormalization-group procedure for Hamiltonians, Mathiot–Grange test functions, the Karmanov–Mathiot–Smirnov realization of sector-dependent renormalization, and determine how to incorporate symmetry breaking in light-front quantization; this is likely to require an analysis of zero modes and in-hadron condensates. Develop computer codes which implement the regularization and renormalization schemes. Provide a platform-independent, well-documented core of routines that allow investigators to implement different numerical approximations to field-theoretic eigenvalue problems, including the light-front coupled-cluster method. Consider various quadrature schemes and basis sets, including Discretized Light-Cone Quantization (DLCQ), finite elements, function expansions, and the complete orthonormal wave functions obtained from AdS/QCD. This will build on the Lanczos-based MPI code developed for nonrelativistic nuclear physics applications and similar codes for Yukawa theory and lower-dimensional supersymmetric Yang–Mills theories. Address the problem of computing rigorous bounds on truncation errors, particularly for energy scales where QCD is strongly coupled. Understand the role of renormalization group methods, asymptotic freedom and spectral properties of the light-front Hamiltonian in quantifying truncation errors. Solve for hadronic masses and wave functions. Use these wave functions to compute form factors, generalized parton distributions, scattering amplitudes, and decay rates. Compare with perturbation theory, lattice QCD, and model calculations, using insights from AdS/QCD, where possible. Study the transition to nuclear degrees of freedom, beginning with light nuclei. Classify the spectrum with respect to total angular momentum. In equal-time quantization, the three generators of rotations are kinematic, and the analysis of total angular momentum is relatively simple. In light-front quantization, only the generator of rotations around the z-axis is kinematic; the other two, of rotations about the x and y axes, are dynamical. To solve the angular momentum classification problem, the eigenstates and spectra of the sum of squares of these generators must be constructed. This is the price to pay for having more kinematical generators than in equal-time quantization, where all three boosts are dynamical. In light-front quantization, the boost along the z direction is kinematic, and this greatly simplifies the calculation of matrix elements that involve boosts, such as the ones needed to calculate form factors. The relation to covariant Bethe–Salpeter approaches projected on the light-front may help in understanding the angular momentum issue and its relationship to the Fock-space truncation of the light-front Hamiltonian. Model-independent constraints from the general angular condition, which must be satisfied by the light-front helicity amplitudes, should also be explored. The contribution from the zero mode appears necessary for the hadron form factors to satisfy angular momentum conservation, as expressed by the angular condition. The relation to light-front quantum mechanics, where it is possible to exactly realize full rotational covariance and construct explicit representations of the dynamical rotation generators, should also be investigated. Explore the AdS/QCD correspondence and light front holography. 
The approximate duality in the limit of massless quarks motivates few-body analyses of meson and baryon spectra based on a one-dimensional light-front Schrödinger equation in terms of the modified transverse coordinate . Models that extend the approach to massive quarks have been proposed, but a more fundamental understanding within QCD is needed. The nonzero quark masses introduce a non-trivial dependence on the longitudinal momentum, and thereby highlight the need to understand the representation of rotational symmetry within the formalism. Exploring AdS/QCD wave functions as part of a physically motivated Fock-space basis set to diagonalize the LFQCD Hamiltonian should shed light on both issues. The complementary Ehrenfest interpretation can be used to introduce effective degrees of freedom such as diquarks in baryons. Develop numerical methods/computer codes to directly evaluate the partition function (viz. thermodynamic potential) as the basic thermodynamic quantity. Compare to lattice QCD, where applicable, and focus on a finite chemical potential, where reliable lattice QCD results are presently available only at very small (net) quark densities. There is also an opportunity for use of light-front AdS/QCD to explore non-equilibrium phenomena such as transport properties during the very early state of a heavy ion collision. Light-front AdS/QCD opens the possibility to investigate hadron formation in such a non-equilibrated strongly coupled quark-gluon plasma. Develop a light-front approach to the neutrino oscillation experiments possible at Fermilab and elsewhere, with the goal of reducing the energy spread of the neutrino-generating hadronic sources, so that the three-energy-slits interference picture of the oscillation pattern can be resolved and the front form of Hamiltonian dynamics utilized in providing the foundation for qualitatively new (treating the vacuum differently) studies of neutrino mass generation mechanisms. If the renormalization group procedure for effective particles (RGPEP) does allow one to study intrinsic charm, bottom, and glue in a systematically renormalized and convergent light-front Fock-space expansion, one might consider a host of new experimental studies of production processes using the intrinsic components that are not included in the calculations based on gluon and quark splitting functions. See also Light-front computational methods Light-front quantization applications Quantum field theories Quantum chromodynamics Quantum electrodynamics Light-front holography References External links ILCAC, Inc., the International Light-Cone Advisory Committee. Publications on light-front dynamics, maintained by A. Harindranath. Quantum chromodynamics Theoretical physics
Light front quantization
[ "Physics" ]
9,745
[ "Theoretical physics" ]
17,787,148
https://en.wikipedia.org/wiki/Cyber%E2%80%93physical%20system
Cyber-Physical Systems (CPS) are mechanisms controlled and monitored by computer algorithms, tightly integrated with the internet and its users. In cyber-physical systems, physical and software components are deeply intertwined, able to operate on different spatial and temporal scales, exhibit multiple and distinct behavioral modalities, and interact with each other in ways that change with context. CPS involves transdisciplinary approaches, merging theory of cybernetics, mechatronics, design and process science. The process control is often referred to as embedded systems. In embedded systems, the emphasis tends to be more on the computational elements, and less on an intense link between the computational and physical elements. CPS is also similar to the Internet of Things (IoT), sharing the same basic architecture; nevertheless, CPS presents a higher combination and coordination between physical and computational elements. Examples of CPS include smart grid, autonomous automobile systems, medical monitoring, industrial control systems, robotics systems, recycling and automatic pilot avionics. Precursors of cyber-physical systems can be found in areas as diverse as aerospace, automotive, chemical processes, civil infrastructure, energy, healthcare, manufacturing, transportation, entertainment, and consumer appliances. Overview Unlike more traditional embedded systems, a full-fledged CPS is typically designed as a network of interacting elements with physical input and output instead of as standalone devices. The notion is closely tied to concepts of robotics and sensor networks with intelligence mechanisms proper of computational intelligence leading the pathway. Ongoing advances in science and engineering improve the link between computational and physical elements by means of intelligent mechanisms, increasing the adaptability, autonomy, efficiency, functionality, reliability, safety, and usability of cyber-physical systems. This will broaden the potential of cyber-physical systems in several directions, including: intervention (e.g., collision avoidance); precision (e.g., robotic surgery and nano-level manufacturing); operation in dangerous or inaccessible environments (e.g., search and rescue, firefighting, and deep-sea exploration); coordination (e.g., air traffic control, war fighting); efficiency (e.g., zero-net energy buildings); and augmentation of human capabilities (e.g. in healthcare monitoring and delivery). Mobile cyber-physical systems Mobile cyber-physical systems, in which the physical system under study has inherent mobility, are a prominent subcategory of cyber-physical systems. Examples of mobile physical systems include mobile robotics and electronics transported by humans or animals. The rise in popularity of smartphones has increased interest in the area of mobile cyber-physical systems. 
Smartphone platforms make ideal mobile cyber-physical systems for a number of reasons, including: significant computational resources, such as processing capability and local storage; multiple sensory input/output devices, such as touch screens, cameras, GPS chips, speakers, microphones, light sensors, and proximity sensors; multiple communication mechanisms, such as WiFi, 4G, EDGE, and Bluetooth, for interconnecting devices to either the Internet or to other devices; high-level programming languages that enable rapid development of mobile CPS node software, such as Java, C#, or JavaScript; readily available application distribution mechanisms, such as Google Play Store and Apple App Store; and end-user maintenance and upkeep, including frequent re-charging of the battery. For tasks that require more resources than are locally available, one common mechanism for rapid implementation of smartphone-based mobile cyber-physical system nodes utilizes the network connectivity to link the mobile system with either a server or a cloud environment, enabling complex processing tasks that are impossible under local resource constraints; a sketch of this pattern is given after the Examples section below. Examples of mobile cyber-physical systems include applications to track and analyze CO emissions, detect traffic accidents, support insurance telematics, provide situational awareness services to first responders, measure traffic, and monitor cardiac patients. Examples Common applications of CPS typically fall under sensor-based communication-enabled autonomous systems. For example, many wireless sensor networks monitor some aspect of the environment and relay the processed information to a central node. Other types of CPS include smart grid, autonomous automotive systems, medical monitoring, process control systems, distributed robotics, recycling and automatic pilot avionics. A real-world example of such a system is the Distributed Robot Garden at MIT in which a team of robots tend a garden of tomato plants. This system combines distributed sensing (each plant is equipped with a sensor node monitoring its status), navigation, manipulation and wireless networking. A focus on the control system aspects of CPS that pervade critical infrastructure can be found in the efforts of the Idaho National Laboratory and collaborators researching resilient control systems. This effort takes a holistic approach to next generation design, and considers the resilience aspects that are not well quantified, such as cyber security, human interaction and complex interdependencies. Another example is MIT's ongoing CarTel project, where a fleet of taxis works by collecting real-time traffic information in the Boston area. Together with historical data, this information is then used for calculating the fastest routes for a given time of the day. CPS are also used in electric grids to perform advanced control, especially in the smart grid context, to enhance the integration of distributed renewable generation. Special remedial action schemes are needed to limit the current flows in the grid when wind farm generation is too high. Distributed CPS are a key solution for this type of issue. In industry, cyber-physical systems empowered by Cloud technologies have led to novel approaches that paved the path to Industry 4.0, as the European Commission IMC-AESOP project with partners such as Schneider Electric, SAP, Honeywell, and Microsoft demonstrated. 
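As a concrete illustration of the smartphone-based mobile CPS pattern described above (local sensing on the device, offloading of heavy processing to a server or cloud over the network), a minimal sketch in Python might look as follows. This is illustrative only and is not drawn from any of the projects mentioned; the sensor readings and the server URL are hypothetical placeholders standing in for real platform sensor APIs and a real ingestion endpoint.

    import json
    import time
    import urllib.request

    SERVER_URL = "http://example.org/cps/ingest"  # hypothetical cloud endpoint

    def read_sensors():
        # Placeholder for platform sensor APIs (GPS, accelerometer, etc.)
        return {
            "timestamp": time.time(),
            "gps": {"lat": 42.36, "lon": -71.06},   # dummy values
            "accel": [0.01, -0.02, 9.81],
        }

    def offload(sample):
        # Send a sample to the server/cloud side for heavy processing
        data = json.dumps(sample).encode("utf-8")
        req = urllib.request.Request(
            SERVER_URL, data=data, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def main_loop(period_s=1.0):
        while True:
            sample = read_sensors()        # local sensing on the mobile node
            try:
                result = offload(sample)   # e.g., route computation, event detection
                print("server says:", result)
            except OSError:
                pass                       # tolerate intermittent connectivity
            time.sleep(period_s)

    if __name__ == "__main__":
        main_loop()

The essential design choice is that the mobile node stays simple (sense, transmit, tolerate dropped connections), while the computationally expensive analysis, such as traffic-route calculation or event detection, runs on the server side.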
Design A challenge in the development of embedded and cyber-physical systems is the large difference in design practice between the various engineering disciplines involved, such as software and mechanical engineering. Additionally, as of today there is no "language" in terms of design practice that is common to all the involved disciplines in CPS. Today, in a marketplace where rapid innovation is assumed to be essential, engineers from all disciplines need to be able to explore system designs collaboratively, allocating responsibilities to software and physical elements, and analyzing trade-offs between them. Recent advances show that coupling disciplines by using co-simulation will allow disciplines to cooperate without enforcing new tools or design methods. Results from the MODELISAR project show that this approach is viable by proposing a new standard for co-simulation in the form of the Functional Mock-up Interface. Importance The US National Science Foundation (NSF) has identified cyber-physical systems as a key area of research. Starting in late 2006, the NSF and other United States federal agencies sponsored several workshops on cyber-physical systems. See also Digital twin Indoor positioning system Industry 4.0 Intelligent maintenance system Internet of Things Responsive computer-aided design Signal-flow graph References Further reading Edward A. Lee, Cyber-Physical Systems - Are Computing Foundations Adequate? Paulo Tabuada, Cyber-Physical Systems: Position Paper Rajesh Gupta, Programming Models and Methods for Spatio-Temporal Actions and Reasoning in Cyber-Physical Systems Edward A. Lee and Sanjit A. Seshia, Introduction to Embedded Systems - A Cyber-Physical Systems Approach, http://LeeSeshia.org, 2011. Riham AlTawy and Amr M. Youssef, Security Trade-offs in Cyber Physical Systems: A Case Study Survey on Implantable Medical Devices Ibtihaj Ahmad et al., Security Aspects of Cyber Physical Systems Charles R. Robinson et al., (Research Perspective) Bridging the stakeholder communities that produce cyber-physical systems, 2024 External links The CPS Virtual Organization Cyber-Physical Systems Week Conference, illustrates current research in the area Transactions on Cyber-Physical Systems, ACM journal in this area Computer systems Systems theory
Cyber–physical system
[ "Technology", "Engineering" ]
1,583
[ "Computer science", "Computers", "Computer engineering", "Computer systems" ]
17,787,631
https://en.wikipedia.org/wiki/Bi-isotropic%20material
In physics, engineering and materials science, bi-isotropic materials have the special optical property that they can rotate the polarization of light in either refraction or transmission. This does not mean that all materials with a twist effect fall in the bi-isotropic class. The twist effect of the class of bi-isotropic materials is caused by the chirality and non-reciprocity of the structure of the media, in which the electric and magnetic fields of an electromagnetic wave (or simply, light) interact in an unusual way. Definition For most materials, the electric field E and electric displacement field D (as well as the magnetic field B and inductive magnetic field H) are parallel to one another. These simple mediums are called isotropic, and the relationships between the fields can be expressed using constants. For more complex materials, such as crystals and many metamaterials, these fields are not necessarily parallel. When one set of the fields are parallel, and one set are not, the material is called anisotropic. Crystals typically have D fields which are not aligned with the E fields, while the B and H fields remain related by a constant. Materials where either pair of fields is not parallel are called anisotropic. In bi-isotropic media, the electric and magnetic fields are coupled. The constitutive relations are D = εE + ξH and B = ζE + μH, where D, E, B, H, ε and μ correspond to the usual electromagnetic quantities. ξ and ζ are the coupling constants, which are intrinsic constants of each medium. This can be generalized to the case where ε, μ, ξ and ζ are tensors (i.e. they depend on the direction within the material), in which case the medium is referred to as bi-anisotropic. The coupling constants ξ and ζ can be further related to the Tellegen (referred to as reciprocity) parameter χ and the chirality parameter κ by substitution into the constitutive relations. Classification Examples Pasteur media can be made by mixing metal helices of one handedness into a resin. Care must be exercised to secure isotropy: the helices must be randomly oriented so that there is no special direction. The magnetoelectric effect can be understood from the helix as it is exposed to the electromagnetic field. The helix geometry can be considered as an inductor. For such a structure the magnetic component of an EM wave induces a current on the wire and further influences the electric component of the same EM wave. From the constitutive relations, for Pasteur media χ = 0, so the magnetoelectric coupling is purely imaginary (proportional to iκ). Hence, the contribution of the H field to the D field carries a phase factor of i, i.e. the D field is delayed in phase relative to the H field. Tellegen media is the opposite of Pasteur media, which is electromagnetic: the electric component will cause the magnetic component to change. Such a medium is not as straightforward as the concept of handedness. Electric dipoles bonded with magnets belong to this kind of media. When the dipoles align themselves with the electric field component of the EM wave, the magnets will also respond, as they are bound together. The change in direction of the magnets will therefore change the magnetic component of the EM wave, and so on. From the constitutive relations, for Tellegen media κ = 0, so the coupling is real. This implies that the B field responds in phase with the H field. See also Anisotropy Chirality (electromagnetism) Metamaterial Reciprocity (electromagnetism) Maxwell's_equations#Constitutive_relations References Orientation (geometry) Materials science
Bi-isotropic material
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
742
[ "Applied and interdisciplinary physics", "Materials science", "Topology", "Space", "nan", "Geometry", "Spacetime", "Orientation (geometry)" ]
17,790,253
https://en.wikipedia.org/wiki/Cobalt%28II%29%20iodide
Cobalt(II) iodide or cobaltous iodide are the inorganic compounds with the formula CoI2 and the hexahydrate CoI2(H2O)6. These salts are the principal iodides of cobalt. Synthesis Cobalt(II) iodide is prepared by treating cobalt powder with gaseous hydrogen iodide. The hydrated form CoI2·6H2O can be prepared by the reaction of cobalt(II) oxide (or related cobalt compounds) with hydroiodic acid. Cobalt(II) iodide crystallizes in two polymorphs, the α- and β-forms. The α-polymorph consists of black hexagonal crystals, which turn dark green when exposed to air. Under a vacuum at 500 °C, samples of α-CoI2 sublime, yielding the β-polymorph as yellow crystals. β-CoI2 also readily absorbs moisture from the air, converting into the green hydrate. At 400 °C, β-CoI2 reverts to the α-form. Structures The anhydrous salts adopt the cadmium halide structures. The hexaaquo salt consists of separated [Co(H2O)6]2+ and iodide ions, as verified crystallographically. Reactions and applications Anhydrous cobalt(II) iodide is sometimes used to test for the presence of water in various solvents. Cobalt(II) iodide is used as a catalyst, e.g. in carbonylations. It catalyzes the reaction of diketene with Grignard reagents, useful for the synthesis of terpenoids. References Cobalt(II) compounds Iodides Metal halides
Cobalt(II) iodide
[ "Chemistry" ]
364
[ "Inorganic compounds", "Metal halides", "Salts" ]
17,791,786
https://en.wikipedia.org/wiki/Denver%20Convergence%20Vorticity%20Zone
The Denver Convergence Vorticity Zone (DCVZ) is an orographically-induced atmospheric phenomenon characterized by convergent winds in the High Plains just east of the Denver metropolitan area, typically in length and oriented in a north-south direction. This meteorological feature was subject to scientific scrutiny following a large outbreak of Denver-area tornadoes in 1981 and is implicated in the propensity of the area to spawn landspout (misocyclone) and supercell (mesocyclone) tornadoes. The DCVZ is often associated with the Denver Cyclone effect, which some consider as a more fully developed iteration of the DCVZ, although the Denver Cyclone is considered a distinct atmospheric phenomenon by some scientists. Characteristics DCVZ conditions form when a low-level moist, southeasterly flowing air mass meets the Palmer Divide, a ridge that extends east of the Colorado Front Range. If the moist air lifts over the ridge and meets northwesterly winds originating in the Rocky Mountain foothills, winds may converge to create enhanced cyclonic vorticity. A study conducted between 1981 and 1989 demonstrated that the DCVZ formed on one-third of all days during the convective season (May through August). DCVZ conditions are often associated with the Denver Cyclone effect, which is characterized by the formation of a large gyre near the city center. Role in atmospheric convection and tornado formation When a DCVZ and especially Denver Cyclone develop, an otherwise capped atmosphere devoid of deep, moist atmospheric convection (e.g. thunderclouds) may break into cumulonimbus and cumulus congestus clouds. Once initiated these thunderclouds may form very rapidly. Dry microbursts and landspouts may occur in the early stages of development whereas wet microbursts and occasionally mesocyclonic tornadoes during later stages. All of these are recognized as fairly common and as hazards for Denver International Airport (DIA), both the former location at Stapleton and the newer location farther east. Various measures were adopted to identify these hazards and take action to mitigate when present. Many studies document the role of the DCVZ in tornado outbreaks across the Denver area. Using climatic data from the 1980s, one researcher suggested that the presence of a strong June DCVZ is associated with a 70% chance of zone-area tornado formation. See also Climate of Colorado Colorado low Geography of Colorado References External links A Subsynoptic Analysis of the Denver Tornadoes of 3 June 1981 Observations of the DCVZ Using Mobile Mesonet Data (Albert E. Pietrycha and Erik N. Rasmussen) Discussion of Boulder Tornado Touchdown - June 1997 Atmospheric dynamics Regional climate effects Geography of Colorado Geography of Denver Denver metropolitan area
Denver Convergence Vorticity Zone
[ "Chemistry" ]
558
[ "Atmospheric dynamics", "Fluid dynamics" ]
17,795,435
https://en.wikipedia.org/wiki/Abbott-Firestone%20curve
The Abbott-Firestone curve or bearing area curve (BAC) describes the surface texture of an object. The curve can be found from a profile trace by drawing lines parallel to the datum and measuring the fraction of the line which lies within the profile. Mathematically it is the cumulative distribution function of the surface profile's height and can be calculated by integrating the probability density function of the profile heights. The Abbott-Firestone curve was first described by Ernest James Abbott and Floyd Firestone in 1933. It is useful for understanding the properties of sealing and bearing surfaces. It is commonly used in the engineering and manufacturing of piston cylinder bores of internal combustion engines. The shape of the curve is distilled into several of the surface roughness parameters, especially the Rk family of parameters. References Engineering mechanics Tribology
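As a rough illustration of the construction described above, the bearing area curve can be computed from a sampled profile by measuring, at each height level, the fraction of the profile that lies at or above that level. The Python sketch below is an illustrative example with an assumed synthetic profile and an arbitrary number of height levels; it is not a standardized roughness-parameter calculation.

```python
# Minimal sketch: Abbott-Firestone (bearing area) curve from a sampled profile.
import numpy as np

def bearing_area_curve(profile, levels=100):
    """For height levels from the highest peak down to the deepest valley, return
    the fraction of the profile lying at or above each level. This is one minus
    the cumulative distribution of the profile heights."""
    heights = np.linspace(profile.max(), profile.min(), levels)
    bearing_fraction = np.array([(profile >= h).mean() for h in heights])
    return heights, bearing_fraction

# Toy usage: a rough synthetic profile (sum of sinusoids plus noise).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 2000)
profile = 0.5 * np.sin(3.0 * x) + 0.2 * np.sin(17.0 * x) + 0.05 * rng.standard_normal(x.size)
h, frac = bearing_area_curve(profile)
print(frac[0], frac[-1])   # near 0 at the highest peak, 1.0 at the deepest valley
```

Plotting the bearing fraction against height reproduces the familiar S-shaped curve from which the Rk family of parameters is derived.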
Abbott-Firestone curve
[ "Chemistry", "Materials_science", "Engineering" ]
165
[ "Tribology", "Materials science", "Surface science", "Civil engineering", "Mechanical engineering", "Engineering mechanics" ]
17,800,165
https://en.wikipedia.org/wiki/Dendronized%20polymer
In polymer chemistry and materials science, dendronized polymers (British English: dendronised polymers) are linear polymers to every repeat unit of which dendrons are attached. Dendrons are regularly branched, tree-like fragments; for the larger ones, the polymer backbone is wrapped to give sausage-like, cylindrical molecular objects. Figure 1 shows a cartoon representation with the backbone in red and the dendrons, like cake slices, in green. It also provides a concrete chemical structure showing a polymethylmethacrylate (PMMA) backbone, the methyl group of which is replaced by a dendron of the third generation (three consecutive branching points). Figure 1. Cartoon representation (left) and a concrete example of a third generation dendronized polymer (right). The peripheral amine groups are modified by a substituent X, which often is a protection group. Upon deprotection and modification, substantial property changes can be achieved. The subscript n denotes the number of repeat units. Structure and applications Dendronized polymers can contain several thousands of dendrons in one macromolecule and have a stretched-out, anisotropic structure. In this regard they differ from the more or less spherically shaped dendrimers, where a few dendrons are attached to a small, dot-like core, resulting in an isotropic structure. Depending on dendron generation, the polymers differ in thickness, as the atomic force microscopy image shows (Figure 2). Neutral and charged dendronized polymers are highly soluble in organic solvents and in water, respectively. This is due to their low tendency to entangle. Dendronized polymers have been synthesized with, e.g., polymethylmethacrylate, polystyrene, polyacetylene, polyphenylene, polythiophene, polyfluorene, poly(phenylene vinylene), poly(phenylene acetylene), polysiloxane, polyoxanorbornene, and poly(ethylene imine) (PEI) backbones. Molar masses up to 200,000,000 g/mol have been obtained. Dendronized polymers have been investigated for bulk structure control, responsivity to external stimuli, single molecule chemistry, templates for nanoparticle formation, catalysis, electro-optical devices, and bio-related applications. Particularly attractive is the use of water-soluble dendronized polymers for the immobilization of enzymes on solid surfaces (inside glass tubes or microfluidic devices) and for the preparation of dendronized polymer-enzyme conjugates. Synthesis The two main approaches to this class of polymers are the macromonomer route and the attach-to route. In the former, a monomer which already carries the dendron of final size is polymerized. In the latter, the dendrons are constructed generation by generation directly on an already existing polymer. Figure 4 illustrates the difference for a simple case. The macromonomer route results in shorter chains for higher generations, and the attach-to route is prone to lead to structure imperfections because an enormous number of chemical reactions have to be performed for each macromolecule. History The name “dendronized polymer”, which has since become internationally accepted, was coined by Schlüter in 1998. The first report on such a macromolecule, which at that time was called a “Rod-shaped Dendrimer”, goes back to a patent by Tomalia in 1987 and was followed in 1992 by Percec's first mention in the open literature of a polymer with “tapered side chains”. In 1994 the potential of these polymers as cylindrical nanoobjects was recognized. Many groups worldwide contributed to this field.
Overviews of these contributions can be found in review articles. See also dendrimer polymer brush References Polymers Soft matter
Dendronized polymer
[ "Physics", "Chemistry", "Materials_science" ]
801
[ "Polymers", "Soft matter", "Condensed matter physics", "Polymer chemistry" ]
17,801,223
https://en.wikipedia.org/wiki/Methylmalonic%20acid%20semialdehyde
Methylmalonic acid semialdehyde is an intermediate in the metabolism of thymine and valine. See also Methylmalonate-semialdehyde dehydrogenase (acylating) References Aldehydes Carboxylic acids Aldehydic acids
Methylmalonic acid semialdehyde
[ "Chemistry" ]
57
[ "Carboxylic acids", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
17,801,764
https://en.wikipedia.org/wiki/Hot%20working
In metallurgy, hot working refers to processes where metals are plastically deformed above their recrystallization temperature. Being above the recrystallization temperature allows the material to recrystallize during deformation. This is important because recrystallization keeps the materials from strain hardening, which ultimately keeps the yield strength and hardness low and ductility high. This contrasts with cold working. Many kinds of working, including rolling, forging, extrusion, and drawing, can be done with hot metal. Temperature The lower limit of the hot working temperature is determined by the material's recrystallization temperature. As a guideline, the lower limit of the hot working temperature of a material is 60% of its melting temperature (on an absolute temperature scale). The upper limit for hot working is determined by various factors, such as excessive oxidation, grain growth, or an undesirable phase transformation. In practice materials are usually heated to the upper limit first to keep forming forces as low as possible and to maximize the amount of time available to hot work the workpiece. The most important aspect of any hot working process is controlling the temperature of the workpiece. 90% of the energy imparted into the workpiece is converted into heat. Therefore, if the deformation process is quick enough, the temperature of the workpiece should rise; however, this does not usually happen in practice. Most of the heat is lost through the surface of the workpiece into the cooler tooling. This causes temperature gradients in the workpiece, usually due to non-uniform cross-sections where the thinner sections are cooler than the thicker sections. Ultimately, this can lead to cracking in the cooler, less ductile surfaces. One way to minimize the problem is to heat the tooling. The hotter the tooling, the less heat lost to it, but as the tooling temperature rises, the tool life decreases. Therefore, the tooling temperature must be a compromise; commonly, hot working tooling is heated to 500–850 °F (about 260–455 °C). Advantages and disadvantages The advantages are: Decrease in yield strength, therefore it is easier to work and uses less energy or force Increase in ductility Elevated temperatures increase diffusion which can remove or reduce chemical inhomogeneities Pores may reduce in size or close completely during deformation In steel, the weak, ductile, face-centered-cubic austenite microstructure is deformed instead of the strong body-centered-cubic ferrite microstructure found at lower temperatures Usually the initial workpiece that is hot worked was originally cast. The microstructure of cast items does not optimize the engineering properties, from a microstructure standpoint. Hot working improves the engineering properties of the workpiece because it replaces the microstructure with one that has fine spherical shaped grains. These grains increase the strength, ductility, and toughness of the material. The engineering properties can also be improved by reorienting the inclusions (impurities). In the cast state the inclusions are randomly oriented, which, when intersecting the surface, can be a propagation point for cracks. When the material is hot worked the inclusions tend to flow with the contour of the surface, creating stringers. As a whole the stringers create a flow structure, where the properties are anisotropic (different based on direction). With the stringers oriented parallel to the surface, the workpiece is strengthened, especially with respect to fracturing.
The stringers act as "crack-arrestors" because the crack will want to propagate through the stringer and not along it. The disadvantages are: Undesirable reactions between the metal and the surrounding atmosphere (scaling or rapid oxidation of the workpiece) Less precise tolerances due to thermal contraction and warping from uneven cooling Grain structure may vary throughout the metal for various reasons Requires a heating unit of some kind such as a gas or diesel furnace or an induction heater, which can be very expensive Processes Rolling Hot rolling Hot spinning Extrusion Forging Drawing Rotary piercing References Notes Bibliography Metalworking Metallurgical processes
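The guideline quoted in the Temperature section above, that the lower limit of the hot working temperature is roughly 60% of the melting temperature on an absolute scale, can be applied as a quick calculation. The Python sketch below is illustrative only; the melting points are approximate textbook values and are not taken from this article.

```python
# Minimal sketch of the 60%-of-absolute-melting-temperature rule of thumb.
def hot_working_lower_limit_celsius(melting_point_c, fraction=0.6):
    """Convert the melting point to kelvin, take the stated fraction,
    and convert back to degrees Celsius."""
    return fraction * (melting_point_c + 273.15) - 273.15

# Approximate melting points (assumed illustrative values).
for metal, t_melt in [("aluminium", 660.0), ("copper", 1085.0), ("iron", 1538.0)]:
    limit = hot_working_lower_limit_celsius(t_melt)
    print(f"{metal}: hot working above roughly {limit:.0f} °C")
```

The output (roughly 287 °C for aluminium, 542 °C for copper, and 814 °C for iron) shows why hot working temperatures differ so widely between metals.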
Hot working
[ "Chemistry", "Materials_science" ]
845
[ "Metallurgical processes", "Metallurgy" ]
14,919,914
https://en.wikipedia.org/wiki/1-Methylnicotinamide
1-Methylnicotinamide (1-MNA, trigonellamide) is a prototypic organic cation. 1-Methylnicotinamide is the N1-methylated derivative of nicotinamide (niacinamide, vitamin B3). 1-Methylnicotinamide is an endogenous substance that is produced in the liver when nicotinamide is metabolized. It is a typical substance secreted by the kidney. It participates in the nicotinamide salvage pathway within the NAD+ (nicotinamide adenine dinucleotide) metabolic pathway, thereby contributing to optimizing NAD+ levels. Occurrence To date, the highest natural concentration of 1-methylnicotinamide has been found in the alga Undaria pinnatifida (3.2 mg/100 g of dried algae) and green tea leaves (3 mg/100 g of product). Other products with notable 1-MNA content include celery (1.6 mg/100 g of product), Chinese black mushrooms (shiitake, 1.3 mg/100 g), and fermented soybeans (natto, 1.0 mg/100 g). Biosynthesis 1-Methylnicotinamide can be produced in the liver by nicotinamide N-methyltransferase (NNMT). The reaction takes place during the metabolism of NAD+ (nicotinamide adenine dinucleotide). NNMT is also present in brain tissue, adipose tissue, muscle tissue, kidneys, and skin. NNMT (nicotinamide N-methyltransferase) is an enzyme that in humans is encoded by the NNMT gene. NNMT catalyzes the methylation of nicotinamide and similar compounds using the methyl donor S-adenosyl methionine (SAM-e) to produce S-adenosyl-L-homocysteine (SAH) and 1-methylnicotinamide. NNMT is highly expressed in the human liver. Role in the body Scientific research highlights numerous therapeutic and health-promoting properties of 1-MNA, including vascular protective, anticoagulant, anti-atherosclerotic, anti-inflammatory, neuroprotective, and endurance-enhancing effects. Vascular Protective Effects 1-MNA exerts beneficial effects on blood vessels through its action on the vascular endothelium. It improves the bioavailability of nitric oxide (NO), which is crucial for vasodilation, and regulates the activity of endothelial nitric oxide synthase (eNOS), the enzyme responsible for NO synthesis. These effects have been demonstrated in both in vivo and in vitro studies. Oral administration of 1-MNA has been shown to increase the diameter of the brachial artery (as measured by flow-mediated dilation, FMD) and stimulate NO release from human endothelial cells in both healthy individuals and those with hypercholesterolemia. Additionally, in cases of vascular dysfunction (e.g., hypertriglyceridemia or diabetes), 1-MNA restored normal NO-dependent vasodilation. By increasing NO bioavailability, 1-MNA may counteract endothelial dysfunction, support endothelial regeneration, and improve vascular function, particularly in the context of cardiovascular risk. Anticoagulant Effects 1-Methylnicotinamide is an endogenous activator of prostacyclin synthesis and can therefore regulate thrombolytic and inflammatory processes in the cardiovascular system. It inhibits platelet-dependent thrombosis through a mechanism involving cyclooxygenase-2 and prostacyclin (PGI2) and increases nitric oxide bioavailability in the endothelium. Endogenous prostacyclin (PGI2) plays a critical role in preventing platelet aggregation and thrombus formation. A deficiency in PGI2 can lead to increased platelet aggregation and arterial thrombi.
Anti-atherosclerotic and Anti-inflammatory Effects 1-MNA exhibits anti-atherosclerotic and anti-inflammatory properties by improving the prostacyclin- and NO-dependent secretory function of the vascular endothelium, inhibiting platelet activation, reducing inflammation within atherosclerotic plaques, and lowering systemic inflammation and TNF-α levels. The anti-inflammatory effects of 1-MNA are linked to its ability to stimulate endogenous PGI2 secretion and reduce IL-4 and TNF-α levels. These effects are mediated by endothelial mechanisms rather than a direct impact on immune cell function, ensuring that the body’s immune response is not weakened. NAD+ Optimization 1-MNA is an inhibitor of nicotinamide N-methyltransferase (NNMT). By inhibiting NNMT activity, it regulates NAD+ biosynthesis via the nicotinamide salvage pathway, the primary route for NAD+ synthesis in mammals. By participating in this pathway, 1-MNA optimizes NAD+ levels. Impact on SIRT1 Research published in Nature Medicine indicates that 1-MNA enhances SIRT1 expression and stability. SIRT1 is an enzyme associated with longevity. Studies using the nematode Caenorhabditis elegans indicate that 1-MNA supplementation may extend lifespan. These studies also link 1-MNA to SIRT1. Neuroprotective Effects Animal experiments with diabetic rats have shown that 1-methylnicotinamide positively affects degenerative changes in the brain, allowing cognitive performance to be maintained longer. It also prevents depressive behavior with efficacy comparable to the common antidepressant fluoxetine. This effect is attributed to the reduction of neuroinflammation and pro-inflammatory cytokines (IL-6, TNF-α), and to increased expression of BDNF (brain-derived neurotrophic factor), a protein supporting neuron survival and growth. The neuroprotective effects of 1-MNA involve shielding against neurotoxins, amyloid-beta plaques in the brain, neuroinflammatory responses, and neuronal apoptosis. It has been shown to improve memory deficits and cognitive functions, suggesting potential for treating neurodegenerative disorders. Enhancement of Physical Performance 1-MNA acts as a myokine, supporting the utilization of amino acids for gluconeogenesis in the liver and stimulating lipolysis in adipose tissue, thereby providing energy for muscles. Studies indicate that 1-MNA supplementation improves exercise tolerance and reduces fatigue. After one month of supplementation with 58 mg of 1-MNA, post-COVID-19 patients reported improved distances in a 6-minute walk test (6MWT), with 92% of participants experiencing better outcomes compared to controls. Additional studies highlight 1-MNA’s ability to enhance physical performance by stimulating PGI2 release, protecting microcirculation, and ensuring adequate blood flow to muscle tissues. This mechanism may reduce cardiovascular risks associated with physical exertion, particularly in individuals with impaired endothelial response. Commercialization 1-MNA has been approved for use in food products in the form of 1-MNA chloride. The approval process in the European Union was successfully completed by PHARMENA SA. In 2017, the European Food Safety Authority (EFSA) confirmed the safety of 1-MNA chloride in food supplements, leading to its authorization in 2018 under EU Regulation 2018/1123. 1-MNA chloride is currently used in dietary supplements. Other chemical forms of 1-MNA are not currently allowed on the market as food. Safety The safety of 1-MNA chloride has been thoroughly evaluated by EFSA, confirming its safe use.
It must meet quality parameters defined in EU Regulation 2018/1123. References Drugs acting on the genito-urinary system Cations Nicotinamides Pyridinium compounds
1-Methylnicotinamide
[ "Physics", "Chemistry" ]
1,683
[ "Cations", "Ions", "Matter" ]
14,920,392
https://en.wikipedia.org/wiki/Optics%20and%20Spectroscopy
Optics and Spectroscopy is a monthly peer-reviewed scientific journal. It is the English version of a Russian journal of the same name that was established in 1956. The journal's development was aided by Patricia Wakeling through a grant from the National Science Foundation. It covers research on spectroscopy of electromagnetic waves, from radio waves to X-rays, and related topics in optics, including quantum optics. External links Optics journals Spectroscopy journals Science and technology in Russia Science and technology in the Soviet Union Academic journals established in 1956 Monthly journals English-language journals
Optics and Spectroscopy
[ "Physics", "Chemistry", "Astronomy" ]
108
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Spectroscopy journals", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
651,196
https://en.wikipedia.org/wiki/Reproducing%20kernel%20Hilbert%20space
In functional analysis, a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which point evaluation is a continuous linear functional. Specifically, a Hilbert space of functions from a set (to or ) is an RKHS if, for each , there exists a function such that for all , The function is called the reproducing kernel, and it reproduces the value of at via the inner product. An immediate consequence of this property is that convergence in norm implies uniform convergence on any subset of on which is bounded. However, the converse does not necessarily hold. Often the set carries a topology, and depends continuously on , in which case: convergence in norm implies uniform convergence on compact subsets of . It is not entirely straightforward to construct natural examples of a Hilbert space which are not an RKHS in a non-trivial fashion. Some examples, however, have been found. While, formally, L2 spaces are defined as Hilbert spaces of equivalence classes of functions, this definition can trivially be extended to a Hilbert space of functions by choosing a (total) function as a representative for each equivalence class. However, no choice of representatives can make this space an RKHS ( would need to be the non-existent Dirac delta function). However, there are RKHSs in which the norm is an L2-norm, such as the space of band-limited functions (see the example below). An RKHS is associated with a kernel that reproduces every function in the space in the sense that for every in the set on which the functions are defined, "evaluation at " can be performed by taking an inner product with a function determined by the kernel. Such a reproducing kernel exists if and only if every evaluation functional is continuous. The reproducing kernel was first introduced in the 1907 work of Stanisław Zaremba concerning boundary value problems for harmonic and biharmonic functions. James Mercer simultaneously examined functions which satisfy the reproducing property in the theory of integral equations. The idea of the reproducing kernel remained untouched for nearly twenty years until it appeared in the dissertations of Gábor Szegő, Stefan Bergman, and Salomon Bochner. The subject was eventually systematically developed in the early 1950s by Nachman Aronszajn and Stefan Bergman. These spaces have wide applications, including complex analysis, harmonic analysis, and quantum mechanics. Reproducing kernel Hilbert spaces are particularly important in the field of statistical learning theory because of the celebrated representer theorem which states that every function in an RKHS that minimises an empirical risk functional can be written as a linear combination of the kernel function evaluated at the training points. This is a practically useful result as it effectively simplifies the empirical risk minimization problem from an infinite dimensional to a finite dimensional optimization problem. For ease of understanding, we provide the framework for real-valued Hilbert spaces. The theory can be easily extended to spaces of complex-valued functions and hence include the many important examples of reproducing kernel Hilbert spaces that are spaces of analytic functions. Definition Let be an arbitrary set and a Hilbert space of real-valued functions on , equipped with pointwise addition and pointwise scalar multiplication. 
The evaluation functional over the Hilbert space of functions is a linear functional that evaluates each function at a point , We say that H is a reproducing kernel Hilbert space if, for all in , is continuous at every in or, equivalently, if is a bounded operator on , i.e. there exists some such that Although is assumed for all , it might still be the case that . While property () is the weakest condition that ensures both the existence of an inner product and the evaluation of every function in at every point in the domain, it does not lend itself to easy application in practice. A more intuitive definition of the RKHS can be obtained by observing that this property guarantees that the evaluation functional can be represented by taking the inner product of with a function in . This function is the so-called reproducing kernel for the Hilbert space from which the RKHS takes its name. More formally, the Riesz representation theorem implies that for all in there exists a unique element of with the reproducing property, Since is itself a function defined on with values in the field (or in the case of complex Hilbert spaces) and as is in we have that where is the element in associated to . This allows us to define the reproducing kernel of as a function (or in the complex case) by From this definition it is easy to see that (or in the complex case) is both symmetric (resp. conjugate symmetric) and positive definite, i.e. for every The Moore–Aronszajn theorem (see below) is a sort of converse to this: if a function satisfies these conditions then there is a Hilbert space of functions on for which it is a reproducing kernel. Examples The simplest example of a reproducing kernel Hilbert space is the space where is a set and is the counting measure on . For , the reproducing kernel is the indicator function of the one point set . Nontrivial reproducing kernel Hilbert spaces often involve analytic functions, as we now illustrate by example. Consider the Hilbert space of bandlimited continuous functions . Fix some cutoff frequency and define the Hilbert space where is the set of square integrable functions, and is the Fourier transform of . As the inner product, we use Since this is a closed subspace of , it is a Hilbert space. Moreover, the elements of are smooth functions on that tend to zero at infinity, essentially by the Riemann-Lebesgue lemma. In fact, the elements of are the restrictions to of entire holomorphic functions, by the Paley–Wiener theorem. From the Fourier inversion theorem, we have It then follows by the Cauchy–Schwarz inequality and Plancherel's theorem that, for all , This inequality shows that the evaluation functional is bounded, proving that is indeed a RKHS. The kernel function in this case is given by The Fourier transform of defined above is given by which is a consequence of the time-shifting property of the Fourier transform. Consequently, using Plancherel's theorem, we have Thus we obtain the reproducing property of the kernel. in this case is the "bandlimited version" of the Dirac delta function, and that converges to in the weak sense as the cutoff frequency tends to infinity. Moore–Aronszajn theorem We have seen how a reproducing kernel Hilbert space defines a reproducing kernel function that is both symmetric and positive definite. The Moore–Aronszajn theorem goes in the other direction; it states that every symmetric, positive definite kernel defines a unique reproducing kernel Hilbert space. 
The theorem first appeared in Aronszajn's Theory of Reproducing Kernels, although he attributes it to E. H. Moore. Theorem. Suppose K is a symmetric, positive definite kernel on a set X. Then there is a unique Hilbert space of functions on X for which K is a reproducing kernel. Proof. For all x in X, define Kx = K(x, ⋅ ). Let H0 be the linear span of {Kx : x ∈ X}. Define an inner product on H0 by which implies . The symmetry of this inner product follows from the symmetry of K and the non-degeneracy follows from the fact that K is positive definite. Let H be the completion of H0 with respect to this inner product. Then H consists of functions of the form Now we can check the reproducing property (): To prove uniqueness, let G be another Hilbert space of functions for which K is a reproducing kernel. For every x and y in X, () implies that By linearity, on the span of . Then because G is complete and contains H0 and hence contains its completion. Now we need to prove that every element of G is in H. Let be an element of G. Since H is a closed subspace of G, we can write where and . Now if then, since K is a reproducing kernel of G and H: where we have used the fact that belongs to H so that its inner product with in G is zero. This shows that in G and concludes the proof. Integral operators and Mercer's theorem We may characterize a symmetric positive definite kernel via the integral operator using Mercer's theorem and obtain an additional view of the RKHS. Let be a compact space equipped with a strictly positive finite Borel measure and a continuous, symmetric, and positive definite function. Define the integral operator as where is the space of square integrable functions with respect to . Mercer's theorem states that the spectral decomposition of the integral operator of yields a series representation of in terms of the eigenvalues and eigenfunctions of . This then implies that is a reproducing kernel so that the corresponding RKHS can be defined in terms of these eigenvalues and eigenfunctions. We provide the details below. Under these assumptions is a compact, continuous, self-adjoint, and positive operator. The spectral theorem for self-adjoint operators implies that there is an at most countable decreasing sequence such that and , where the form an orthonormal basis of . By the positivity of for all One can also show that maps continuously into the space of continuous functions and therefore we may choose continuous functions as the eigenvectors, that is, for all Then by Mercer's theorem may be written in terms of the eigenvalues and continuous eigenfunctions as for all such that This above series representation is referred to as a Mercer kernel or Mercer representation of . Furthermore, it can be shown that the RKHS of is given by where the inner product of given by This representation of the RKHS has application in probability and statistics, for example to the Karhunen-Loève representation for stochastic processes and kernel PCA. Feature maps A feature map is a map , where is a Hilbert space which we will call the feature space. The first sections presented the connection between bounded/continuous evaluation functions, positive definite functions, and integral operators and in this section we provide another representation of the RKHS in terms of feature maps. Every feature map defines a kernel via Clearly is symmetric and positive definiteness follows from the properties of inner product in . 
Conversely, every positive definite function and corresponding reproducing kernel Hilbert space has infinitely many associated feature maps such that () holds. For example, we can trivially take and for all . Then () is satisfied by the reproducing property. Another classical example of a feature map relates to the previous section regarding integral operators by taking and . This connection between kernels and feature maps provides us with a new way to understand positive definite functions and hence reproducing kernels as inner products in . Moreover, every feature map can naturally define a RKHS by means of the definition of a positive definite function. Lastly, feature maps allow us to construct function spaces that reveal another perspective on the RKHS. Consider the linear space We can define a norm on by It can be shown that is a RKHS with kernel defined by . This representation implies that the elements of the RKHS are inner products of elements in the feature space and can accordingly be seen as hyperplanes. This view of the RKHS is related to the kernel trick in machine learning. Properties Useful properties of RKHSs: Let be a sequence of sets and be a collection of corresponding positive definite functions on It then follows that is a kernel on Let then the restriction of to is also a reproducing kernel. Consider a normalized kernel such that for all . Define a pseudo-metric on X as By the Cauchy–Schwarz inequality, This inequality allows us to view as a measure of similarity between inputs. If are similar then will be closer to 1 while if are dissimilar then will be closer to 0. The closure of the span of coincides with . Common examples Bilinear kernels The RKHS corresponding to this kernel is the dual space, consisting of functions satisfying . Polynomial kernels Radial basis function kernels These are another common class of kernels which satisfy . Some examples include: Gaussian or squared exponential kernel: Laplacian kernel: The squared norm of a function in the RKHS with this kernel is: Bergman kernels We also provide examples of Bergman kernels. Let X be finite and let H consist of all complex-valued functions on X. Then an element of H can be represented as an array of complex numbers. If the usual inner product is used, then Kx is the function whose value is 1 at x and 0 everywhere else, and can be thought of as an identity matrix since In this case, H is isomorphic to . The case of (where denotes the unit disc) is more sophisticated. Here the Bergman space is the space of square-integrable holomorphic functions on . It can be shown that the reproducing kernel for is Lastly, the space of band limited functions in with bandwidth is a RKHS with reproducing kernel Extension to vector-valued functions In this section we extend the definition of the RKHS to spaces of vector-valued functions as this extension is particularly important in multi-task learning and manifold regularization. The main difference is that the reproducing kernel is a symmetric function that is now a positive semi-definite matrix for every in . More formally, we define a vector-valued RKHS (vvRKHS) as a Hilbert space of functions such that for all and and This second property parallels the reproducing property for the scalar-valued case. This definition can also be connected to integral operators, bounded evaluation functions, and feature maps as we saw for the scalar-valued RKHS. 
We can equivalently define the vvRKHS as a vector-valued Hilbert space with a bounded evaluation functional and show that this implies the existence of a unique reproducing kernel by the Riesz Representation theorem. Mercer's theorem can also be extended to address the vector-valued setting and we can therefore obtain a feature map view of the vvRKHS. Lastly, it can also be shown that the closure of the span of coincides with , another property similar to the scalar-valued case. We can gain intuition for the vvRKHS by taking a component-wise perspective on these spaces. In particular, we find that every vvRKHS is isometrically isomorphic to a scalar-valued RKHS on a particular input space. Let . Consider the space and the corresponding reproducing kernel As noted above, the RKHS associated to this reproducing kernel is given by the closure of the span of where for every set of pairs The connection to the scalar-valued RKHS can then be made by the fact that every matrix-valued kernel can be identified with a kernel of the form of () via Moreover, every kernel with the form of () defines a matrix-valued kernel with the above expression. Now letting the map be defined as where is the component of the canonical basis for , one can show that is bijective and an isometry between and . While this view of the vvRKHS can be useful in multi-task learning, this isometry does not reduce the study of the vector-valued case to that of the scalar-valued case. In fact, this isometry procedure can make both the scalar-valued kernel and the input space too difficult to work with in practice as properties of the original kernels are often lost. An important class of matrix-valued reproducing kernels are separable kernels which can be factorized as the product of a scalar valued kernel and a -dimensional symmetric positive semi-definite matrix. In light of our previous discussion these kernels are of the form for all in and in . As the scalar-valued kernel encodes dependencies between the inputs, we can observe that the matrix-valued kernel encodes dependencies among both the inputs and the outputs. We lastly remark that the above theory can be further extended to spaces of functions with values in function spaces but obtaining kernels for these spaces is a more difficult task. Connection between RKHSs and the ReLU function The ReLU function is commonly defined as and is a mainstay in the architecture of neural networks where it is used as an activation function. One can construct a ReLU-like nonlinear function using the theory of reproducing kernel Hilbert spaces. Below, we derive this construction and show how it implies the representation power of neural networks with ReLU activations. We will work with the Hilbert space of absolutely continuous functions with and square integrable (i.e. ) derivative. It has the inner product To construct the reproducing kernel it suffices to consider a dense subspace, so let and . The Fundamental Theorem of Calculus then gives where and i.e. This implies reproduces . Moreover the minimum function on has the following representations with the ReLU function: Using this formulation, we can apply the representer theorem to the RKHS, letting one prove the optimality of using ReLU activations in neural network settings.
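As a concrete illustration of the representer theorem discussed above, the Python sketch below fits a kernel ridge regressor: the minimiser of a regularised empirical risk is a finite linear combination of kernel sections at the training points, so the infinite-dimensional problem reduces to solving an n-by-n linear system. The Gaussian kernel, the regularisation strength, and the toy data are illustrative assumptions, not prescriptions from the article.

```python
# Minimal sketch: kernel ridge regression in an RKHS with a Gaussian (RBF) kernel.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """K(x, y) = exp(-|x - y|^2 / (2 sigma^2)), a symmetric positive definite kernel."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2.0 * sigma ** 2))

def fit_kernel_ridge(x_train, y_train, sigma=1.0, lam=1e-3):
    """Minimise squared loss plus lam * ||f||_H^2. By the representer theorem the
    minimiser is f(x) = sum_i c_i K(x, x_i), so we only solve for c in R^n."""
    K = gaussian_kernel(x_train, x_train, sigma)           # Gram matrix, PSD by construction
    c = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    return c

def predict(c, x_train, x_new, sigma=1.0):
    return gaussian_kernel(x_new, x_train, sigma) @ c

# Toy usage: recover a smooth function from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)
coef = fit_kernel_ridge(x, y, sigma=0.5, lam=1e-2)
print(predict(coef, x, np.array([np.pi / 2])))             # close to sin(pi/2) = 1.0
```

The finite linear system solved here is exactly the finite-dimensional optimization problem that the representer theorem guarantees is equivalent to the original empirical risk minimisation over the RKHS.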
See also Positive definite kernel Mercer's theorem Kernel trick Kernel embedding of distributions Representer theorem Notes References Alvarez, Mauricio, Rosasco, Lorenzo and Lawrence, Neil, “Kernels for Vector-Valued Functions: a Review,” https://arxiv.org/abs/1106.6251, June 2011. Berlinet, Alain and Thomas, Christine. Reproducing kernel Hilbert spaces in Probability and Statistics, Kluwer Academic Publishers, 2004. De Vito, Ernest, Umanita, Veronica, and Villa, Silvia. "An extension of Mercer theorem to vector-valued measurable kernels," , June 2013. Durrett, Greg. 9.520 Course Notes, Massachusetts Institute of Technology, https://www.mit.edu/~9.520/scribe-notes/class03_gdurett.pdf, February 2010. Okutmustur, Baver. “Reproducing Kernel Hilbert Spaces,” M.S. dissertation, Bilkent University, https://users.metu.edu.tr/baver/MS.Thesis.pdf, August 2005. Paulsen, Vern. “An introduction to the theory of reproducing kernel Hilbert spaces,” https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=440218056738e05b5ab43679f932a9f33fccee87. Rosasco, Lorenzo and Poggio, Thomas. "A Regularization Tour of Machine Learning – MIT 9.520 Lecture Notes" Manuscript, Dec. 2014. Wahba, Grace, Spline Models for Observational Data, SIAM, 1990. Hilbert spaces
Reproducing kernel Hilbert space
[ "Physics" ]
3,987
[ "Hilbert spaces", "Quantum mechanics" ]
652,078
https://en.wikipedia.org/wiki/Second%20fundamental%20form
In differential geometry, the second fundamental form (or shape tensor) is a quadratic form on the tangent plane of a smooth surface in the three-dimensional Euclidean space, usually denoted by (read "two"). Together with the first fundamental form, it serves to define extrinsic invariants of the surface, its principal curvatures. More generally, such a quadratic form is defined for a smooth immersed submanifold in a Riemannian manifold. Surface in R3 Motivation The second fundamental form of a parametric surface in was introduced and studied by Gauss. First suppose that the surface is the graph of a twice continuously differentiable function, , and that the plane is tangent to the surface at the origin. Then and its partial derivatives with respect to and vanish at (0,0). Therefore, the Taylor expansion of f at (0,0) starts with quadratic terms: and the second fundamental form at the origin in the coordinates is the quadratic form For a smooth point on , one can choose the coordinate system so that the plane is tangent to at , and define the second fundamental form in the same way. Classical notation The second fundamental form of a general parametric surface is defined as follows. Let be a regular parametrization of a surface in , where is a smooth vector-valued function of two variables. It is common to denote the partial derivatives of with respect to and by and . Regularity of the parametrization means that and are linearly independent for any in the domain of , and hence span the tangent plane to at each point. Equivalently, the cross product is a nonzero vector normal to the surface. The parametrization thus defines a field of unit normal vectors : The second fundamental form is usually written as its matrix in the basis of the tangent plane is The coefficients at a given point in the parametric -plane are given by the projections of the second partial derivatives of at that point onto the normal line to and can be computed with the aid of the dot product as follows: For a signed distance field of Hessian , the second fundamental form coefficients can be computed as follows: Physicist's notation The second fundamental form of a general parametric surface is defined as follows. Let be a regular parametrization of a surface in , where is a smooth vector-valued function of two variables. It is common to denote the partial derivatives of with respect to by , . Regularity of the parametrization means that and are linearly independent for any in the domain of , and hence span the tangent plane to at each point. Equivalently, the cross product is a nonzero vector normal to the surface. The parametrization thus defines a field of unit normal vectors : The second fundamental form is usually written as The equation above uses the Einstein summation convention. The coefficients at a given point in the parametric -plane are given by the projections of the second partial derivatives of at that point onto the normal line to and can be computed in terms of the normal vector as follows: Hypersurface in a Riemannian manifold In Euclidean space, the second fundamental form is given by where is the Gauss map, and the differential of regarded as a vector-valued differential form, and the brackets denote the metric tensor of Euclidean space. 
More generally, on a Riemannian manifold, the second fundamental form is an equivalent way to describe the shape operator (denoted by ) of a hypersurface, where denotes the covariant derivative of the ambient manifold and a field of normal vectors on the hypersurface. (If the affine connection is torsion-free, then the second fundamental form is symmetric.) The sign of the second fundamental form depends on the choice of direction of (which is called a co-orientation of the hypersurface - for surfaces in Euclidean space, this is equivalently given by a choice of orientation of the surface). Generalization to arbitrary codimension The second fundamental form can be generalized to arbitrary codimension. In that case it is a quadratic form on the tangent space with values in the normal bundle and it can be defined by where denotes the orthogonal projection of covariant derivative onto the normal bundle. In Euclidean space, the curvature tensor of a submanifold can be described by the following formula: This is called the Gauss equation, as it may be viewed as a generalization of Gauss's Theorema Egregium. For general Riemannian manifolds one has to add the curvature of ambient space; if is a manifold embedded in a Riemannian manifold then the curvature tensor of with induced metric can be expressed using the second fundamental form and , the curvature tensor of : See also First fundamental form Gaussian curvature Gauss–Codazzi equations Shape operator Third fundamental form Tautological one-form References External links Steven Verpoort (2008) Geometry of the Second Fundamental Form: Curvature Properties and Variational Aspects from Katholieke Universiteit Leuven. Differential geometry Differential geometry of surfaces Riemannian geometry Curvature (mathematics) Tensors
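For a concrete surface, the classical coefficients described above can be obtained symbolically by projecting the second partial derivatives of the parametrization onto the unit normal. The SymPy sketch below is a worked example added for illustration, not part of the original article; the paraboloid graph z = u^2 + v^2 is an assumed choice of surface.

```python
# Minimal sketch: second fundamental form coefficients of a parametric surface in R^3.
import sympy as sp

u, v = sp.symbols("u v", real=True)
r = sp.Matrix([u, v, u**2 + v**2])              # Monge patch of a paraboloid (assumed example)

r_u, r_v = r.diff(u), r.diff(v)
normal = r_u.cross(r_v)
n_hat = normal / sp.sqrt(normal.dot(normal))    # unit normal field

# Classical coefficients: project the second partial derivatives onto the unit normal.
L = sp.simplify(r.diff(u, u).dot(n_hat))
M = sp.simplify(r.diff(u, v).dot(n_hat))
N = sp.simplify(r.diff(v, v).dot(n_hat))
print(L, M, N)   # expect 2/sqrt(4*u**2 + 4*v**2 + 1), 0, 2/sqrt(4*u**2 + 4*v**2 + 1)
```

At the origin, where the tangent plane is horizontal, the coefficients reduce to L = N = 2 and M = 0, matching the quadratic terms of the Taylor expansion of the graph as described in the motivation above.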
Second fundamental form
[ "Physics", "Engineering" ]
1,034
[ "Geometric measurement", "Tensors", "Physical quantities", "Curvature (mathematics)" ]
652,531
https://en.wikipedia.org/wiki/Photovoltaics
Photovoltaics (PV) is the conversion of light into electricity using semiconducting materials that exhibit the photovoltaic effect, a phenomenon studied in physics, photochemistry, and electrochemistry. The photovoltaic effect is commercially used for electricity generation and in photosensors. A photovoltaic system employs solar modules, each comprising a number of solar cells, which generate electrical power. PV installations may be ground-mounted, rooftop-mounted, wall-mounted or floating. The mount may be fixed or use a solar tracker to follow the sun across the sky. Photovoltaic technology helps to mitigate climate change because it emits much less carbon dioxide than fossil fuels. Solar PV has specific advantages as an energy source: once installed, its operation does not generate any pollution or any greenhouse gas emissions; it is scalable with respect to power needs, and silicon is widely available in the Earth's crust, although other materials required in PV system manufacture, such as silver, may constrain further growth in the technology. Other major constraints identified include competition for land use. The use of PV as a main source requires energy storage systems or global distribution by high-voltage direct current power lines, which cause additional costs, and it also has a number of other specific disadvantages, such as variable power generation, which have to be balanced. Production and installation do cause some pollution and greenhouse gas emissions, though only a fraction of the emissions caused by fossil fuels. Photovoltaic systems have long been used in specialized applications as stand-alone installations, and grid-connected PV systems have been in use since the 1990s. Photovoltaic modules were first mass-produced in 2000, when the German government funded a one hundred thousand roof program. Decreasing costs have allowed PV to grow as an energy source. This has been partially driven by massive Chinese government investment in developing solar production capacity since 2000, and by achieving economies of scale. Improvements in manufacturing technology and efficiency have also led to decreasing costs. Net metering and financial incentives, such as preferential feed-in tariffs for solar-generated electricity, have supported solar PV installations in many countries. Panel prices dropped by a factor of 4 between 2004 and 2011. Module prices dropped by about 90% over the 2010s. In 2022, worldwide installed PV capacity increased to more than 1 terawatt (TW), covering nearly two percent of global electricity demand. After hydro and wind power, PV is the third renewable energy source in terms of global capacity. In 2022, the International Energy Agency expected growth of over 1 TW from 2022 to 2027. In some instances, PV has offered the cheapest source of electrical power in regions with a high solar potential, with a bid for pricing as low as 0.015 US$/kWh in Qatar in 2023. In 2023, the International Energy Agency stated in its World Energy Outlook that '[f]or projects with low cost financing that tap high quality resources, solar PV is now the cheapest source of electricity in history'. Etymology The term "photovoltaic" comes from the Greek word meaning "light", and from "volt", the unit of electromotive force, which in turn comes from the last name of the Italian physicist Alessandro Volta, inventor of the battery (electrochemical cell). The term "photovoltaic" has been in use in English since 1849.
History In 1989, the German Research Ministry initiated the first ever program to finance PV roofs (2200 roofs). The program was led by Walter Sandtner in Bonn, Germany. In 1994, Japan followed in its footsteps and conducted a similar program with 539 residential PV systems installed. Since then, many countries have continued to produce and finance PV systems at an exponential pace. Solar cells Photovoltaics are best known as a method for generating electric power by using solar cells to convert energy from the sun into a flow of electrons by the photovoltaic effect. Solar cells produce direct current electricity from sunlight which can be used to power equipment or to recharge batteries. The first practical application of photovoltaics was to power orbiting satellites and other spacecraft, but today the majority of photovoltaic modules are used for grid-connected systems for power generation. In this case an inverter is required to convert the DC to AC. There is also a smaller market for stand-alone systems for remote dwellings, boats, recreational vehicles, electric cars, roadside emergency telephones, remote sensing, and cathodic protection of pipelines. Photovoltaic power generation employs solar modules composed of a number of solar cells containing a semiconductor material. Copper solar cables connect modules (module cable), arrays (array cable), and sub-fields. Because of the growing demand for renewable energy sources, the manufacturing of solar cells and photovoltaic arrays has advanced considerably in recent years. Cells require protection from the environment and are usually packaged tightly in solar modules. Photovoltaic module power is measured under standard test conditions (STC) in "Wp" (watts peak). The actual power output at a particular place may be less than or greater than this rated value, depending on geographical location, time of day, weather conditions, and other factors. Solar photovoltaic array capacity factors are typically under 25% when not coupled with storage, which is lower than many other industrial sources of electricity. Solar cell efficiencies Performance and degradation Module performance is generally rated under standard test conditions (STC): irradiance of 1,000 W/m2, solar spectrum of AM 1.5 and module temperature at 25 °C. The actual voltage and current output of the module changes as lighting, temperature and load conditions change, so there is never one specific voltage at which the module operates. Performance varies depending on geographic location, time of day, the day of the year, amount of solar irradiance, direction and tilt of modules, cloud cover, shading, soiling, state of charge, and temperature. Performance of a module or panel can be measured at different time intervals with a DC clamp meter or shunt and logged, graphed, or charted with a chart recorder or data logger. For optimum performance, a solar panel needs to be made of similar modules oriented in the same direction perpendicular to direct sunlight. Bypass diodes are used to circumvent broken or shaded panels and optimize output. These bypass diodes are usually placed along groups of solar cells to create a continuous flow. Electrical characteristics include nominal power (PMAX, measured in W), open-circuit voltage (VOC), short-circuit current (ISC, measured in amperes), maximum power voltage (VMPP), maximum power current (IMPP), peak power (watt-peak, Wp), and module efficiency (%).
Open-circuit voltage or VOC is the maximum voltage the module can produce when not connected to an electrical circuit or system. VOC can be measured with a voltmeter directly on an illuminated module's terminals or on its disconnected cable. The peak power rating, Wp, is the maximum output under standard test conditions (not the maximum possible output). Typical modules, which could measure approximately , will be rated from as low as 75 W to as high as 600 W, depending on their efficiency. At the time of testing, the test modules are binned according to their test results, and a typical manufacturer might rate their modules in 5 W increments, and either rate them at +/- 3%, +/-5%, +3/-0% or +5/-0%. Influence of temperature The performance of a photovoltaic (PV) module depends on the environmental conditions, mainly on the global incident irradiance G in the plane of the module. However, the temperature T of the p–n junction also influences the main electrical parameters: the short circuit current ISC, the open circuit voltage VOC and the maximum power Pmax. In general, it is known that VOC shows a significant inverse correlation with T, while for ISC this correlation is direct, but weaker, so that this increase does not compensate for the decrease in VOC. As a consequence, Pmax decreases when T increases. This correlation between the power output of a solar cell and the working temperature of its junction depends on the semiconductor material, and is due to the influence of T on the concentration, lifetime, and mobility of the intrinsic carriers, i.e., electrons and holes, inside the photovoltaic cell. Temperature sensitivity is usually described by temperature coefficients, each of which expresses the derivative of the parameter to which it refers with respect to the junction temperature. The values of these parameters, which can be found in any data sheet of the photovoltaic module, are the following: β: VOC variation coefficient with respect to T, given by ∂VOC/∂T. α: Coefficient of variation of ISC with respect to T, given by ∂ISC/∂T. δ: Coefficient of variation of Pmax with respect to T, given by ∂Pmax/∂T. Techniques for estimating these coefficients from experimental data can be found in the literature. Degradation The ability of solar modules to withstand damage by rain, hail, heavy snow load, and cycles of heat and cold varies by manufacturer, although most solar panels on the U.S. market are UL listed, meaning they have gone through testing to withstand hail. Potential-induced degradation (also called PID) is a potential-induced performance degradation in crystalline photovoltaic modules, caused by so-called stray currents. This effect may cause power loss of up to 30%. The largest challenge for photovoltaic technology is the purchase price per watt of electricity produced. Advancements in photovoltaic technologies have brought about the process of "doping" the silicon substrate to lower the activation energy, thereby making the panel more efficient in converting photons to retrievable electrons. Chemicals such as boron (p-type) are applied into the semiconductor crystal in order to create donor and acceptor energy levels substantially closer to the valence and conduction bands. In doing so, the addition of boron impurity allows the activation energy to decrease twenty-fold from 1.12 eV to 0.05 eV. Since the potential difference (EB) is so low, the boron is able to thermally ionize at room temperatures.
This allows for free energy carriers in the conduction and valence bands, thereby allowing greater conversion of photons to electrons. The power output of a photovoltaic (PV) device decreases over time. This decrease is due to its exposure to solar radiation as well as other external conditions. The degradation index, which is defined as the annual percentage of output power loss, is a key factor in determining the long-term production of a photovoltaic plant. To estimate this degradation, the percentage decrease associated with each of the main electrical parameters must be determined. The individual degradation of a photovoltaic module can significantly influence the performance of a complete string. Furthermore, not all modules in the same installation decrease their performance at exactly the same rate. Given a set of modules exposed to long-term outdoor conditions, the individual degradation of the main electrical parameters and the increase in their dispersion must be considered. As each module tends to degrade differently, the behavior of the modules will be increasingly different over time, negatively affecting the overall performance of the plant. There are several studies dealing with the power degradation analysis of modules based on different photovoltaic technologies available in the literature. According to a recent study, the degradation of crystalline silicon modules is very regular, oscillating between 0.8% and 1.0% per year. On the other hand, if we analyze the performance of thin-film photovoltaic modules, an initial period of strong degradation is observed (which can last several months and up to two years), followed by a later stage in which the degradation stabilizes, being then comparable to that of crystalline silicon. Strong seasonal variations are also observed in such thin-film technologies because the influence of the solar spectrum is much greater. For example, for modules of amorphous silicon, micromorphic silicon or cadmium telluride, annual degradation rates in the first years are between 3% and 4%. However, other technologies, such as CIGS, show much lower degradation rates, even in those early years. Manufacturing of PV systems Overall the manufacturing process of creating solar photovoltaics is simple in that it does not require many complex or moving parts. Because of the solid-state nature of PV systems, they often have relatively long lifetimes, anywhere from 10 to 30 years. To increase the electrical output of a PV system, the manufacturer must simply add more photovoltaic components. Because of this, economies of scale are important for manufacturers as costs decrease with increasing output. While there are many types of PV systems known to be effective, crystalline silicon PV accounted for around 90% of the worldwide production of PV in 2013. Manufacturing silicon PV systems has several steps. First, polysilicon is processed from mined quartz until it is very pure (semi-conductor grade). This is melted down, and small amounts of boron, a group III element, are added to make a p-type semiconductor rich in electron holes. Typically using a seed crystal, an ingot of this solution is grown from the liquid polycrystalline material. The ingot may also be cast in a mold. Wafers of this semiconductor material are cut from the bulk material with wire saws, and then go through surface etching before being cleaned.
Next, the wafers are placed into a phosphorus vapor deposition furnace which deposits a very thin layer of phosphorus, a group V element, creating an n-type semiconducting surface. To reduce energy losses, an anti-reflective coating is added to the surface, along with electrical contacts. After the cells are finished, they are connected in an electrical circuit according to the specific application and prepared for shipping and installation. Environmental costs of manufacture Solar photovoltaic power is not entirely "clean energy": production produces greenhouse gas emissions, materials used to build the cells are potentially unsustainable and will run out eventually, the technology uses toxic substances which cause pollution, and there are no viable technologies for recycling solar waste. Data required to investigate their impact are sometimes affected by a rather large amount of uncertainty. The values of human labor and water consumption, for example, are not precisely assessed due to the lack of systematic and accurate analyses in the scientific literature. One difficulty in determining the effects of PV is establishing whether the wastes are released to the air, water, or soil during the manufacturing phase. Life-cycle assessments, which look at a range of environmental effects including global warming potential, pollution, water depletion and others, are unavailable for PV. Instead, studies have tried to estimate the impact and potential impact of various types of PV, but these estimates are usually restricted to simply assessing energy costs of the manufacture and/or transport, because these are new technologies and the total environmental impact of their components and disposal methods is unknown, even for commercially available first generation solar cells, let alone experimental prototypes with no commercial viability. Thus, estimates of the environmental impact of PV have focused on carbon dioxide equivalents per kWh or energy pay-back time (EPBT). The EPBT describes the timespan a PV system needs to operate in order to generate the same amount of energy that was used for its manufacture. Another study includes transport energy costs in the EPBT. The EPBT has also been defined completely differently as "the time needed to compensate for the total renewable- and non-renewable primary energy required during the life cycle of a PV system" in another study, which also included installation costs. This energy amortization, given in years, is also referred to as break-even energy payback time. The lower the EPBT, the lower the environmental cost of solar power. The EPBT depends vastly on the location where the PV system is installed (e.g. the amount of sunlight available and the efficiency of the electrical grid) and on the type of system, namely the system's components. A 2015 review of EPBT estimates of first and second-generation PV suggested that there was greater variation in embedded energy than in efficiency of the cells, implying that it is mainly the embedded energy that needs to fall to achieve a greater reduction in EPBT. In general, the most important component of solar panels, which accounts for much of the energy use and greenhouse gas emissions, is the refining of the polysilicon. How large a share of the EPBT this silicon accounts for depends on the type of system. A fully autarkic system requires additional components ('Balance of System', the power inverters, storage, etc.)
which significantly increase the energy cost of manufacture, but in a simple rooftop system, some 90% of the energy cost is from silicon, with the remainder coming from the inverters and module frame. The EPBT relates closely to the concepts of net energy gain (NEG) and energy returned on energy invested (EROI). They are both used in energy economics and refer to the difference between the energy expended to harvest an energy source and the amount of energy gained from that harvest. The NEG and EROI also take the operating lifetime of a PV system into account and a working life of 25 to 30 years is typically assumed. From these metrics, the Energy payback Time can be derived by calculation. EPBT improvements PV systems using crystalline silicon, by far the majority of the systems in practical use, have such a high EPBT because silicon is produced by the reduction of high-grade quartz sand in electric furnaces. This coke-fired smelting process occurs at high temperatures of more than 1000 °C and is very energy intensive, using about 11 kilowatt-hours (kWh) per produced kilogram of silicon. The energy requirements of this process makes the energy cost per unit of silicon produced relatively inelastic, which means that the production process itself will not become more efficient in the future. Nonetheless, the energy payback time has shortened significantly over the last years, as crystalline silicon cells became ever more efficient in converting sunlight, while the thickness of the wafer material was constantly reduced and therefore required less silicon for its manufacture. Within the last ten years, the amount of silicon used for solar cells declined from 16 to 6 grams per watt-peak. In the same period, the thickness of a c-Si wafer was reduced from 300 μm, or microns, to about 160–190 μm. The sawing techniques that slice crystalline silicon ingots into wafers have also improved by reducing the kerf loss and making it easier to recycle the silicon sawdust. Effects from first generation PV Crystalline silicon modules are the most extensively studied PV type in terms of LCA since they are the most commonly used. Mono-crystalline silicon photovoltaic systems (mono-si) have an average efficiency of 14.0%. The cells tend to follow a structure of front electrode, anti-reflection film, n-layer, p-layer, and back electrode, with the sun hitting the front electrode. EPBT ranges from 1.7 to 2.7 years. The cradle to gate of CO2-eq/kWh ranges from 37.3 to 72.2 grams when installed in Southern Europe. Techniques to produce multi-crystalline silicon (multi-si) photovoltaic cells are simpler and cheaper than mono-si, however tend to make less efficient cells, an average of 13.2%. EPBT ranges from 1.5 to 2.6 years. The cradle to gate of CO2-eq/kWh ranges from 28.5 to 69 grams when installed in Southern Europe. Assuming that the following countries had a high-quality grid infrastructure as in Europe, in 2020 it was calculated it would take 1.28 years in Ottawa, Canada, for a rooftop photovoltaic system to produce the same amount of energy as required to manufacture the silicon in the modules in it (excluding the silver, glass, mounts and other components), 0.97 years in Catania, Italy, and 0.4 years in Jaipur, India. Outside of Europe, where net grid efficiencies are lower, it would take longer. This 'energy payback time' can be seen as the portion of time during the useful lifetime of the module in which the energy production is polluting. 
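To make the relationship between these metrics concrete, the following minimal sketch (in Python) computes EPBT, net energy gain and EROI for a hypothetical 1 kWp rooftop system; the embedded-energy, yield and lifetime figures are illustrative assumptions, not values from any particular study.

embedded_energy_kwh = 2500.0   # assumed energy to manufacture and install 1 kWp
annual_yield_kwh = 1100.0      # assumed first-year output of that 1 kWp system
lifetime_years = 30            # assumed operating lifetime

epbt_years = embedded_energy_kwh / annual_yield_kwh
lifetime_output_kwh = annual_yield_kwh * lifetime_years
net_energy_gain = lifetime_output_kwh - embedded_energy_kwh
eroi = lifetime_output_kwh / embedded_energy_kwh

print(f"EPBT = {epbt_years:.2f} years")
print(f"NEG  = {net_energy_gain:.0f} kWh over {lifetime_years} years")
print(f"EROI = {eroi:.1f}")

With these assumed numbers the payback time is a little over two years, so the system returns its embedded energy more than ten times over its assumed life; with location-specific figures like those quoted above for Catania or Jaipur it would be shorter still.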
At best, this means that a 30-year-old panel has produced clean energy for 97% of its lifetime, or that the silicon in the modules of a solar panel produces 97% less greenhouse gas emissions than a coal-fired plant for the same amount of energy (assuming and ignoring many things). Some studies have looked beyond EPBT and GWP to other environmental effects. In one such study, the conventional energy mix in Greece was compared with multi-si PV, and an overall reduction of 95% in effects was found, including carcinogens, eco-toxicity, acidification, eutrophication, and eleven others. Impact from second generation PV Cadmium telluride (CdTe) is one of the fastest-growing thin-film based solar cells, which are collectively known as second-generation devices. This new thin-film device also shares similar performance restrictions (the Shockley–Queisser efficiency limit) with conventional Si devices, but promises to lower the cost of each device by reducing both material and energy consumption during manufacturing. The global market share of CdTe was 4.7% in 2008. This technology's highest power conversion efficiency is 21%. The cell structure includes a glass substrate (around 2 mm), a transparent conductor layer, a CdS buffer layer (50–150 nm), a CdTe absorber and a metal contact layer. CdTe PV systems require less energy input in their production than other commercial PV systems per unit electricity production. The average CO2-eq/kWh is around 18 grams (cradle to gate). CdTe has the fastest EPBT of all commercial PV technologies, which varies between 0.3 and 1.2 years. Effects from third generation PV Third-generation PVs are designed to combine the advantages of both the first and second generation devices, and they are not subject to the Shockley–Queisser limit, a theoretical limit for first and second generation PV cells. The thickness of a third generation device is less than 1 μm. Three promising new thin-film technologies are copper zinc tin sulfide (Cu2ZnSnS4 or CZTS), zinc phosphide (Zn3P2) and single-walled carbon nanotubes (SWCNT). These thin films are currently only produced in the lab but may be commercialized in the future. The manufacturing processes for CZTS and Zn3P2 are expected to be similar to those of the current thin-film technologies CIGS and CdTe, respectively, while the absorber layer of SWCNT PV is expected to be synthesized with the CoMoCAT method. Contrary to established thin films such as CIGS and CdTe, CZTS, Zn3P2, and SWCNT PVs are made from earth-abundant, nontoxic materials and have the potential to produce more electricity annually than the current worldwide consumption. While CZTS and Zn3P2 offer good promise for these reasons, the specific environmental implications of their commercial production are not yet known. The global warming potentials of CZTS and Zn3P2 were found to be 38 and 30 grams CO2-eq/kWh, while their corresponding EPBTs were found to be 1.85 and 0.78 years, respectively. Overall, CdTe and Zn3P2 have similar environmental effects but can slightly outperform CIGS and CZTS. A study on the environmental impacts of SWCNT PVs by Celik et al., including an existing 1% efficient device and a theoretical 28% efficient device, found that, compared to monocrystalline Si, the environmental impacts from the 1% SWCNT device were ~18 times higher, due mainly to the short lifetime of three years. 
Economics There have been major changes in the underlying costs, industry structure and market prices of solar photovoltaics technology, over the years, and gaining a coherent picture of the shifts occurring across the industry value chain globally is a challenge. This is due to: "the rapidity of cost and price changes, the complexity of the PV supply chain, which involves a large number of manufacturing processes, the balance of system (BOS) and installation costs associated with complete PV systems, the choice of different distribution channels, and differences between regional markets within which PV is being deployed". Further complexities result from the many different policy support initiatives that have been put in place to facilitate photovoltaics commercialisation in various countries. Renewable energy technologies have generally gotten cheaper since their invention. Renewable energy systems have become cheaper to build than fossil fuel power plants across much of the world, thanks to advances in wind and solar energy technology, in particular. Implications for electricity bill management and energy investment There is no silver bullet in electricity or energy demand and bill management, because customers (sites) have different specific situations, e.g. different comfort/convenience needs, different electricity tariffs, or different usage patterns. Electricity tariff may have a few elements, such as daily access and metering charge, energy charge (based on kWh, MWh) or peak demand charge (e.g. a price for the highest 30min energy consumption in a month). PV is a promising option for reducing energy charges when electricity prices are reasonably high and continuously increasing, such as in Australia and Germany. However, for sites with peak demand charge in place, PV may be less attractive if peak demands mostly occur in the late afternoon to early evening, for example in residential communities. Overall, energy investment is largely an economic decision and it is better to make investment decisions based on systematic evaluation of options in operational improvement, energy efficiency, onsite generation and energy storage. Hardware costs In 1977 crystalline silicon solar cell prices were at $76.67/W. Although wholesale module prices remained flat at around $3.50 to $4.00/W in the early 2000s due to high demand in Germany and Spain afforded by generous subsidies and shortage of polysilicon, demand crashed with the abrupt ending of Spanish subsidies after the market crash of 2008, and the price dropped rapidly to $2.00/W. Manufacturers were able to maintain a positive operating margin despite a 50% drop in income due to innovation and reductions in costs. In late 2011, factory-gate prices for crystalline-silicon photovoltaic modules suddenly dropped below the $1.00/W mark, taking many in the industry by surprise, and has caused a number of solar manufacturing companies to go bankrupt throughout the world. The $1.00/W cost is often regarded in the PV industry as marking the achievement of grid parity for PV, but most experts do not believe this price point is sustainable. Technological advancements, manufacturing process improvements, and industry re-structuring, may mean that further price reductions are possible. The average retail price of solar cells as monitored by the Solarbuzz group fell from $3.50/watt to $2.43/watt over the course of 2011. In 2013 wholesale prices had fallen to $0.74/W. 
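As a quick worked check on how steep this decline has been, the two module prices quoted above ($76.67/W in 1977 and $0.74/W in 2013) imply a compound annual rate of price decline of roughly 12% per year:

p_1977, p_2013 = 76.67, 0.74          # $/W figures quoted in the text
years = 2013 - 1977

annual_factor = (p_2013 / p_1977) ** (1 / years)
print(f"average annual price change: {100 * (annual_factor - 1):.1f}%")
# prints roughly -12%, i.e. prices fell by about an eighth per year on average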
This has been cited as evidence supporting 'Swanson's law', an observation similar to the famous Moore's Law, which claims that solar cell prices fall 20% for every doubling of industry capacity. The Fraunhofer Institute defines the 'learning rate' as the drop in prices as the cumulative production doubles, some 25% between 1980 and 2010. Although the prices for modules have dropped quickly, current inverter prices have dropped at a much lower rate, and in 2019 constitute over 61% of the cost per kWp, from a quarter in the early 2000s. Note that the prices mentioned above are for bare modules, another way of looking at module prices is to include installation costs. In the US, according to the Solar Energy Industries Association, the price of installed rooftop PV modules for homeowners fell from $9.00/W in 2006 to $5.46/W in 2011. Including the prices paid by industrial installations, the national installed price drops to $3.45/W. This is markedly higher than elsewhere in the world, in Germany homeowner rooftop installations averaged at $2.24/W. The cost differences are thought to be primarily based on the higher regulatory burden and lack of a national solar policy in the US. By the end of 2012 Chinese manufacturers had production costs of $0.50/W in the cheapest modules. In some markets distributors of these modules can earn a considerable margin, buying at factory-gate price and selling at the highest price the market can support ('value-based pricing'). In California PV reached grid parity in 2011, which is usually defined as PV production costs at or below retail electricity prices (though often still above the power station prices for coal or gas-fired generation without their distribution and other costs). Grid parity had been reached in 19 markets in 2014. By 2024, massive increases of production of solar panels in China had caused module prices to drop to as low as $0.11/W, an over 90 percent reduction from 2011 prices. Levelised cost of electricity The levelised cost of electricity (LCOE) is the cost per kWh based on the costs distributed over the project lifetime, and is thought to be a better metric for calculating viability than price per wattage. LCOEs vary dramatically depending on the location. The LCOE can be considered the minimum price customers will have to pay the utility company in order for it to break even on the investment in a new power station. Grid parity is roughly achieved when the LCOE falls to a similar price as conventional local grid prices, although in actuality the calculations are not directly comparable. Large industrial PV installations had reached grid parity in California in 2011. Grid parity for rooftop systems was still believed to be much farther away at this time. Many LCOE calculations are not thought to be accurate, and a large amount of assumptions are required. Module prices may drop further, and the LCOE for solar may correspondingly drop in the future. Because energy demands rise and fall over the course of the day, and solar power is limited by the fact that the sun sets, solar power companies must also factor in the additional costs of supplying a more stable alternative energy supplies to the grid in order to stabilize the system, or storing the energy. These costs are not factored into LCOE calculations, nor are special subsidies or premiums that may make buying solar power more attractive. The unreliability and temporal variation in generation of solar and wind power is a major problem. 
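A minimal sketch of a levelised-cost calculation is given below: discounted lifetime costs divided by discounted lifetime energy. The capital cost, O&M cost, yield, degradation and discount rate are assumptions chosen only to illustrate the arithmetic, not figures for any real project, and (as noted above) integration, storage and subsidy effects are not included.

capex = 1000.0           # assumed installed cost, $ per kW
opex_per_year = 15.0     # assumed operation and maintenance, $ per kW per year
yield_kwh_year = 1500.0  # assumed first-year output, kWh per kW
degradation = 0.005      # assumed 0.5% output loss per year
discount_rate = 0.05     # assumed discount rate
lifetime = 25            # assumed project life, years

costs = capex
energy = 0.0
for year in range(1, lifetime + 1):
    discount = (1 + discount_rate) ** year
    costs += opex_per_year / discount
    energy += yield_kwh_year * (1 - degradation) ** (year - 1) / discount

print(f"LCOE = {costs / energy:.3f} $/kWh")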
Too much of these volatile power sources can cause instability of the entire grid. As of 2017, power-purchase agreement prices for solar farms below $0.05/kWh were common in the United States, and the lowest bids in some Persian Gulf countries were about $0.03/kWh. The goal of the United States Department of Energy is to achieve a levelised cost of energy for solar PV of $0.03/kWh for utility companies. Subsidies and financing Financial incentives for photovoltaics, such as feed-in tariffs (FITs), were often offered to electricity consumers to install and operate solar-electric generating systems, and in some countries such subsidies are the only way photovoltaics can remain economically profitable. PV FITs were crucial for the early growth of photovoltaics. Germany and Spain were the most important countries offering subsidies for PV, and the policies of these countries drove demand. Some US solar cell manufacturing companies have repeatedly complained that the drop in PV module prices has been achieved due to subsidies by the government of China and the dumping of these products below fair market prices. US manufacturers generally recommend high tariffs on foreign supplies to allow them to remain profitable. In response to these concerns, the Obama administration began to levy tariffs on US consumers of these products in 2012 to raise prices for domestic manufacturers. The USA, however, also subsidizes the industry. Some environmentalists have promoted the idea that government incentives should be used to expand the PV manufacturing industry and so reduce the cost of PV-generated electricity much more rapidly to a level where it is able to compete with fossil fuels in a free market. This is based on the theory that when the manufacturing capacity doubles, economies of scale will cause the prices of the solar products to halve. In many countries access to capital is lacking to develop PV projects. To solve this problem, securitization is sometimes used to accelerate the development of solar photovoltaic projects. Other Photovoltaic power is also generated during a time of day that is close to peak demand (it precedes it) in electricity systems with high use of air conditioning. Since large-scale PV operation requires back-up in the form of spinning reserves, its marginal cost of generation in the middle of the day is typically lowest, but not zero, when PV is generating electricity. For residential properties with private PV facilities networked to the grid, the owner may be able to earn extra money when the time of generation is included, as electricity is worth more during the day than at night. One journalist theorised in 2012 that if the energy bills of Americans were forced upwards by imposing an extra tax of $50/ton on carbon dioxide emissions from coal-fired power, this could have allowed solar PV to appear more cost-competitive to consumers in most locations. Growth Solar photovoltaics formed the largest body of research among the seven sustainable energy types examined in a global bibliometric study, with the annual scientific output growing from 9,094 publications in 2011 to 14,447 publications in 2019. Likewise, the application of solar photovoltaics is growing rapidly, and the worldwide installed capacity reached one terawatt in April 2022. The total power output of the world's PV capacity in a calendar year is now beyond 500 TWh of electricity. This represents 2% of worldwide electricity demand. 
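The two figures above (roughly one terawatt installed, generating more than 500 TWh per year) can be cross-checked with a back-of-the-envelope capacity-factor calculation; the result is an average over day and night, all climates, and systems installed part-way through the year, so it is much lower than the capacity factor of a single well-sited plant.

installed_tw = 1.0          # ~1 TW of installed capacity (April 2022)
annual_output_twh = 500.0   # >500 TWh generated per year
hours_per_year = 8760

capacity_factor = annual_output_twh / (installed_tw * hours_per_year)
print(f"implied global average capacity factor: {capacity_factor:.1%}")
# prints roughly 6%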
More than 100 countries, such as Brazil and India, use solar PV. China is followed by the United States and Japan, while installations in Germany, once the world's largest producer, have been slowing down. Honduras generated the highest percentage of its energy from solar in 2019, 14.8%. As of 2019, Vietnam has the highest installed capacity in Southeast Asia, about 4.5 GW. The annualized installation rate of about 90 W per capita per annum places Vietnam among world leaders. Generous Feed-in tariff (FIT) and government supporting policies such as tax exemptions were the key to enable Vietnam's solar PV boom. Underlying drivers include the government's desire to enhance energy self-sufficiency and the public's demand for local environmental quality. A key barrier is limited transmission grid capacity. China has the world's largest solar power capacity, with 390 GW of installed capacity in 2022 compared with about 200 GW in the European Union, according to International Energy Agency data. Other countries with the world's largest solar power capacities include the United States, Japan and Germany. In 2017, it was thought probable that by 2030 global PV installed capacities could be between 3,000 and 10,000 GW. Greenpeace in 2010 claimed that 1,845 GW of PV systems worldwide could be generating approximately 2,646 TWh/year of electricity by 2030, and by 2050 over 20% of all electricity could be provided by PV. Applications There are many practical applications for the use of solar panels or photovoltaics covering every technological domain under the sun. From the fields of the agricultural industry as a power source for irrigation to its usage in remote health care facilities to refrigerate medical supplies. Other applications include power generation at various scales and attempts to integrate them into homes and public infrastructure. PV modules are used in photovoltaic systems and include a large variety of electrical devices. Photovoltaic systems A photovoltaic system, or solar PV system is a power system designed to supply usable solar power by means of photovoltaics. It consists of an arrangement of several components, including solar panels to absorb and directly convert sunlight into electricity, a solar inverter to change the electric current from DC to AC, as well as mounting, cabling and other electrical accessories. PV systems range from small, roof-top mounted or building-integrated systems with capacities from a few to several tens of kilowatts, to large utility-scale power stations of hundreds of megawatts. Nowadays, most PV systems are grid-connected, while stand-alone systems only account for a small portion of the market. Photo sensors Photosensors are sensors of light or other electromagnetic radiation. A photo detector has a p–n junction that converts light photons into current. The absorbed photons make electron–hole pairs in the depletion region. Photodiodes and photo transistors are a few examples of photo detectors. Solar cells convert some of the light energy absorbed into electrical energy. Experimental technology Crystalline silicon photovoltaics are only one type of PV, and while they represent the majority of solar cells produced currently there are many new and promising technologies that have the potential to be scaled up to meet future energy needs. As of 2018, crystalline silicon cell technology serves as the basis for several PV module types, including monocrystalline, multicrystalline, mono PERC, and bifacial. 
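The photodiode and solar-cell behaviour described above is commonly idealised with the single-diode equation, I = I_L − I_0·(exp(V/(n·V_T)) − 1). The short sketch below scans that curve to locate the maximum power point; the photocurrent, saturation current and ideality factor are illustrative assumptions for a generic silicon cell, not data for any particular product.

import math

V_T = 0.02585    # thermal voltage at ~25 °C, volts
I_L = 8.0        # assumed photocurrent, amperes
I_0 = 1e-10      # assumed diode saturation current, amperes
n = 1.2          # assumed ideality factor

def cell_current(v):
    # Ideal single-diode model; series and shunt resistance are ignored.
    return I_L - I_0 * (math.exp(v / (n * V_T)) - 1.0)

best_v, best_p = max(
    ((v / 1000.0, (v / 1000.0) * cell_current(v / 1000.0)) for v in range(0, 800)),
    key=lambda pair: pair[1],
)
print(f"maximum power point: {best_v:.3f} V, {best_p:.2f} W for this cell")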
Another newer technology, thin-film PV, is manufactured by depositing semiconducting layers of perovskite, a mineral with semiconductor properties, on a substrate in vacuum. The substrate is often glass or stainless steel, and these semiconducting layers are made of many types of materials including cadmium telluride (CdTe), copper indium diselenide (CIS), copper indium gallium diselenide (CIGS), and amorphous silicon (a-Si). After being deposited onto the substrate, the semiconducting layers are separated and connected into an electrical circuit by laser scribing. Perovskite solar cells are very efficient solar energy converters and have excellent optoelectronic properties for photovoltaic purposes, but their upscaling from lab-sized cells to large-area modules is still under research. Thin-film photovoltaic materials may possibly become attractive in the future, because of the reduced materials requirements and cost of manufacturing modules consisting of thin films as compared to silicon-based wafers. In 2019 university labs at Oxford, Stanford and elsewhere reported perovskite solar cells with efficiencies of 20–25%. CIGS Copper indium gallium selenide (CIGS) is a thin-film solar cell based on the copper indium diselenide (CIS) family of chalcopyrite semiconductors. CIS and CIGS are often used interchangeably within the CIS/CIGS community. The cell structure includes soda-lime glass as the substrate, a Mo layer as the back contact, CIS/CIGS as the absorber layer, cadmium sulfide (CdS) or Zn(S,OH)x as the buffer layer, and ZnO:Al as the front contact. CIGS is approximately 1/100 the thickness of conventional silicon solar cell technologies. Materials necessary for assembly are readily available, and are less costly per watt of solar cell. CIGS-based solar devices resist performance degradation over time and are highly stable in the field. Reported global warming potential impacts of CIGS range from 20.5 to 58.8 grams CO2-eq/kWh of electricity generated for different solar irradiation (1,700 to 2,200 kWh/m2/y) and power conversion efficiency (7.8–9.12%). EPBT ranges from 0.2 to 1.4 years, while the harmonized value of EPBT was found to be 1.393 years. Toxicity is an issue within the buffer layer of CIGS modules because it contains cadmium and gallium. CIS modules do not contain any heavy metals. Perovskite solar cells Dye-Sensitized Solar Cells Dye-sensitized solar cells (DSCs) are a novel type of thin-film solar cell. These solar cells operate under ambient light better than other photovoltaic technologies. They work with light being absorbed in a sensitizing dye between two charge transport materials. The dye surrounds TiO2 nanoparticles which are in a sintered network. The TiO2 acts as the conduction band of an n-type semiconductor and as the scaffold for the adsorbed dye molecules, and it transports electrons during excitation. For TiO2 DSC technology, sample preparation at high temperatures is very effective because higher temperatures produce more suitable textural properties. Another example of DSCs is the copper complex with Cu(II/I) as a redox shuttle with TMBY (4,4',6,6'-tetramethyl-2,2'-bipyridine). DSCs show great performance with artificial and indoor light. From a range of 200 lux to 2,000 lux, these cells reach a maximum efficiency of 29.7%. However, there have been issues with DSCs, many of which come from the liquid electrolyte. The solvent is hazardous, and will permeate most plastics. 
Because it is liquid, it is unstable to temperature variation, leading to freezing in cold temperatures and expansion in warm temperatures, causing failure. Another disadvantage is that the solar cell is not ideal for large-scale application because of its low efficiency. Some of the benefits of DSCs are that they can be used in a variety of light levels (including cloudy conditions), they have a low production cost, and they do not degrade under sunlight, giving them a longer lifetime than other types of thin-film solar cells. OPV Other possible future PV technologies include organic, dye-sensitized and quantum-dot photovoltaics. Organic photovoltaics (OPVs) fall into the thin-film category of manufacturing, and typically operate around the 12% efficiency range, which is lower than the 12–21% typically seen with silicon-based PVs. Because organic photovoltaics require very high purity and are relatively reactive, they must be encapsulated, which vastly increases the cost of manufacturing and means that they are not feasible for large scale-up. Dye-sensitized PVs are similar in efficiency to OPVs but are significantly easier to manufacture. However, these dye-sensitized photovoltaics present storage problems because the liquid electrolyte is toxic and can potentially permeate the plastics used in the cell. Quantum dot solar cells are solution-processed, meaning they are potentially scalable, but currently they peak at 12% efficiency. Organic and polymer photovoltaics (OPV) are a relatively new area of research. The traditional OPV cell structure consists of a semi-transparent electrode, an electron blocking layer, a tunnel junction, a hole blocking layer and an electrode, with the sun hitting the transparent electrode. OPV replaces silver with carbon as an electrode material, lowering manufacturing cost and making the cells more environmentally friendly. OPVs are flexible, lightweight, and work well with roll-to-roll manufacturing for mass production. OPV uses "only abundant elements coupled to an extremely low embodied energy through very low processing temperatures using only ambient processing conditions on simple printing equipment enabling energy pay-back times". Current efficiencies range from 1 to 6.5%; however, theoretical analyses show promise beyond 10% efficiency. Many different configurations of OPV exist, using different materials for each layer. OPV technology rivals existing PV technologies in terms of EPBT, even if OPVs currently present a shorter operational lifetime. A 2013 study analyzed 12 different configurations, all with 2% efficiency; the EPBT ranged from 0.29 to 0.52 years for 1 m2 of PV. The average CO2-eq/kWh for OPV is 54.922 grams. Thermophotovoltaics Solar module alignment A number of solar modules may also be mounted vertically above each other in a tower, if the zenith distance of the Sun is greater than zero, and the tower can be turned horizontally as a whole and each module additionally around a horizontal axis. In such a tower the modules can follow the Sun exactly. Such a device may be described as a ladder mounted on a turnable disk. Each step of that ladder is the middle axis of a rectangular solar panel. In case the zenith distance of the Sun reaches zero, the "ladder" may be rotated to the north or the south to avoid a solar module producing a shadow on a lower one. Instead of an exactly vertical tower one can choose a tower with an axis directed to the polar star, meaning that it is parallel to the rotation axis of the Earth. 
In this case the angle between the axis and the Sun is always larger than 66 degrees. During a day it is only necessary to turn the panels around this axis to follow the Sun. Installations may be ground-mounted (and sometimes integrated with farming and grazing) or built into the roof or walls of a building (building-integrated photovoltaics). Where land may be limited, PV can be deployed as floating solar. In 2008 the Far Niente Winery pioneered the world's first "floatovoltaic" system by installing 994 photovoltaic solar panels onto 130 pontoons and floating them on the winery's irrigation pond. A benefit of the set up is that the panels are kept at a lower temperature than they would be on land, leading to a higher efficiency of solar energy conversion. The floating panels also reduce the amount of water lost through evaporation and inhibit the growth of algae. Concentrator photovoltaics is a technology that contrary to conventional flat-plate PV systems uses lenses and curved mirrors to focus sunlight onto small, but highly efficient, multi-junction solar cells. These systems sometimes use solar trackers and a cooling system to increase their efficiency. Efficiency In 2019, the world record for solar cell efficiency at 47.1% was achieved by using multi-junction concentrator solar cells, developed at National Renewable Energy Laboratory, Colorado, US. The highest efficiencies achieved without concentration include a material by Sharp Corporation at 35.8% using a proprietary triple-junction manufacturing technology in 2009, and Boeing Spectrolab (40.7% also using a triple-layer design). There is an ongoing effort to increase the conversion efficiency of PV cells and modules, primarily for competitive advantage. In order to increase the efficiency of solar cells, it is important to choose a semiconductor material with an appropriate band gap that matches the solar spectrum. This will enhance the electrical and optical properties. Improving the method of charge collection is also useful for increasing the efficiency. There are several groups of materials that are being developed. Ultrahigh-efficiency devices (η>30%) are made by using GaAs and GaInP2 semiconductors with multijunction tandem cells. High-quality, single-crystal silicon materials are used to achieve high-efficiency, low cost cells (η>20%). Recent developments in organic photovoltaic cells (OPVs) have made significant advancements in power conversion efficiency from 3% to over 15% since their introduction in the 1980s. To date, the highest reported power conversion efficiency ranges 6.7–8.94% for small molecule, 8.4–10.6% for polymer OPVs, and 7–21% for perovskite OPVs. OPVs are expected to play a major role in the PV market. Recent improvements have increased the efficiency and lowered cost, while remaining environmentally-benign and renewable. Several companies have begun embedding power optimizers into PV modules called smart modules. These modules perform maximum power point tracking (MPPT) for each module individually, measure performance data for monitoring, and provide additional safety features. Such modules can also compensate for shading effects, wherein a shadow falling across a section of a module causes the electrical output of one or more strings of cells in the module to decrease. One of the major causes for the decreased performance of cells is overheating. The efficiency of a solar cell declines by about 0.5% for every 1 degree Celsius increase in temperature. 
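A short sketch of this temperature derating, using the roughly 0.5% per degree Celsius figure just quoted (the linear model and the example temperatures are an illustration, not a datasheet value for any specific module):

temp_coefficient = -0.005   # fractional efficiency change per °C, from the figure above
reference_temp_c = 25.0     # standard test condition cell temperature

def relative_efficiency(cell_temp_c):
    # Simple linear derating model around the reference temperature.
    return 1.0 + temp_coefficient * (cell_temp_c - reference_temp_c)

for t in (25, 45, 65, 125):
    print(f"{t:>3} °C -> {relative_efficiency(t):.0%} of rated efficiency")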
This means that a 100 degree increase in surface temperature could decrease the efficiency of a solar cell by about half. Self-cooling solar cells are one solution to this problem. Rather than using energy to cool the surface, pyramid and cone shapes can be formed from silica, and attached to the surface of a solar panel. Doing so allows visible light to reach the solar cells, but reflects infrared rays (which carry heat). Advantages Pollution and energy in production The 122 PW of sunlight reaching the Earth's surface is plentiful—almost 10,000 times more than the 13 TW equivalent of average power consumed in 2005 by humans. This abundance leads to the suggestion that it will not be long before solar energy will become the world's primary energy source. Additionally, solar radiation has the highest power density (global mean of 170 W/m2) among renewable energies. Solar power is pollution-free during use, which enables it to cut down on pollution when it is substituted for other energy sources. For example, MIT estimated that 52,000 people per year die prematurely in the U.S. from coal-fired power plant pollution and all but one of these deaths could be prevented from using PV to replace coal. Production end-wastes and emissions are manageable using existing pollution controls. End-of-use recycling technologies are under development and policies are being produced that encourage recycling from producers. Solar panels are usually guaranteed for 25 years (but inverters tend to fail sooner), with little maintenance or intervention after their initial set-up, so after the initial capital cost of building any solar power plant, operating costs are extremely low compared to existing power technologies. Rooftop solar can be used locally, thus reducing transmission/distribution losses. Solar cell research investment Compared to fossil and nuclear energy sources, very little research money has been invested in the development of solar cells, so there is considerable room for improvement. Nevertheless, experimental high efficiency solar cells already have efficiencies of over 40% in case of concentrating photovoltaic cells and efficiencies are rapidly rising while mass-production costs are rapidly falling. Housing subsidies In some states of the United States, much of the investment in a home-mounted system may be lost if the homeowner moves and the buyer puts less value on the system than the seller. The city of Berkeley developed an innovative financing method to remove this limitation, by adding a tax assessment that is transferred with the home to pay for the solar panels. Now known as PACE, Property Assessed Clean Energy, 30 U.S. states have duplicated this solution. Disadvantages Impact on electricity network For behind-the-meter rooftop photovoltaic systems, the energy flow becomes two-way. When there is more local generation than consumption, electricity is exported to the grid, allowing for net metering. However, electricity networks traditionally are not designed to deal with two-way energy transfer, which may introduce technical issues. An over-voltage issue may come out as the electricity flows from these PV households back to the network. There are solutions to manage the over-voltage issue, such as regulating PV inverter power factor, new voltage and energy control equipment at electricity distributor level, re-conductor the electricity wires, demand side management, etc. There are often limitations and costs related to these solutions. 
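One of the inverter-based remedies mentioned above can be illustrated with a simple "volt-watt" droop rule, in which an inverter linearly curtails its export once the local grid voltage rises past a threshold. The threshold and slope below are illustrative assumptions only, not values taken from any particular grid code or standard, and real schemes often combine this with reactive-power (power-factor) control.

V_START = 250.0   # assumed voltage (V) at which curtailment begins
V_FULL = 260.0    # assumed voltage at which export is fully curtailed

def allowed_export_fraction(grid_voltage):
    # Linear droop between V_START and V_FULL.
    if grid_voltage <= V_START:
        return 1.0
    if grid_voltage >= V_FULL:
        return 0.0
    return (V_FULL - grid_voltage) / (V_FULL - V_START)

for v in (240, 252, 256, 262):
    print(f"{v} V -> export limited to {allowed_export_fraction(v):.0%} of available PV power")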
High generation during the middle of the day reduces the net generation demand, but higher peak net demand as the sun goes down can require rapid ramping of utility generating stations, producing a load profile called the duck curve. See also Agrivoltaic American Solar Energy Society Anomalous photovoltaic effect Cost of electricity by source Energy demand management List of photovoltaics companies Photoelectrochemical cell Renewable energy commercialization Solar cell fabric Solar module quality assurance Solar photovoltaic monitoring Solar power by country Solar thermal energy Theory of solar cell References Further reading Quantum chemistry Electrochemistry Energy conversion Optoelectronics
Photovoltaics
[ "Physics", "Chemistry" ]
10,792
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Electrochemistry", " molecular", "Atomic", " and optical physics" ]
652,816
https://en.wikipedia.org/wiki/Mixing%20%28process%20engineering%29
In industrial process engineering, mixing is a unit operation that involves manipulation of a heterogeneous physical system with the intent to make it more homogeneous. Familiar examples include pumping of the water in a swimming pool to homogenize the water temperature, and the stirring of pancake batter to eliminate lumps (deagglomeration). Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components or phases. Modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. A biofuel fermenter may require the mixing of microbes, gases and liquid medium for optimal yield; organic nitration requires concentrated (liquid) nitric and sulfuric acids to be mixed with a hydrophobic organic phase; production of pharmaceutical tablets requires blending of solid powders. The opposite of mixing is segregation. A classical example of segregation is the brazil nut effect. The mathematics of mixing is highly abstract, and is a part of ergodic theory, itself a part of chaos theory. Mixing classification The type of operation and equipment used during mixing depends on the state of materials being mixed (liquid, semi-solid, or solid) and the miscibility of the materials being processed. In this context, the act of mixing may be synonymous with stirring-, or kneading-processes. Liquid–liquid mixing Mixing of liquids occurs frequently in process engineering. The nature of liquids to blend determines the equipment used. Single-phase blending tends to involve low-shear, high-flow mixers to cause liquid engulfment, while multi-phase mixing generally requires the use of high-shear, low-flow mixers to create droplets of one liquid in laminar, turbulent or transitional flow regimes, depending on the Reynolds number of the flow. Turbulent or transitional mixing is frequently conducted with turbines or impellers; laminar mixing is conducted with helical ribbon or anchor mixers. Single-phase blending Mixing of liquids that are miscible or at least soluble in each other occurs frequently in engineering (and in everyday life). An everyday example would be the addition of milk or cream to tea or coffee. Since both liquids are water-based, they dissolve easily in one another. The momentum of the liquid being added is sometimes enough to cause enough turbulence to mix the two, since the viscosity of both liquids is relatively low. If necessary, a spoon or paddle could be used to complete the mixing process. Blending in a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Gas–gas mixing Solid–solid mixing Dry blenders are a type of industrial mixer which are typically used to blend multiple dry components until they are homogeneous. Often minor liquid additions are made to the dry blend to modify the product formulation. Blending times using dry ingredients are often short (15–30 minutes) but are somewhat dependent upon the varying percentages of each component, and the difference in the bulk densities of each. Ribbon, paddle, tumble and vertical blenders are available. Many products including pharmaceuticals, foods, chemicals, fertilizers, plastics, pigments, and cosmetics are manufactured in these designs. Dry blenders range in capacity from half-cubic-foot laboratory models to 500-cubic-foot production units. 
A wide variety of horsepower-and-speed combinations and optional features such as sanitary finishes, vacuum construction, special valves and cover openings are offered by most manufacturers. Blending powders is one of the oldest unit-operations in the solids handling industries. For many decades powder blending has been used just to homogenize bulk materials. Many different machines have been designed to handle materials with various bulk solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. Nowadays the same mixing technologies are used for many more applications: to improve product quality, to coat particles, to fuse materials, to wet, to disperse in liquid, to agglomerate, to alter functional material properties, etc. This wide range of applications of mixing equipment requires a high level of knowledge, long time experience and extended test facilities to come to the optimal selection of equipment and processes. Solid-solid mixing can be performed either in batch mixers, which is the simpler form of mixing, or in certain cases in continuous dry-mix, more complex but which provide interesting advantages in terms of segregation, capacity and validation. One example of a solid–solid mixing process is mulling foundry molding sand, where sand, bentonite clay, fine coal dust and water are mixed to a plastic, moldable and reusable mass, applied for molding and pouring molten metal to obtain sand castings that are metallic parts for automobile, machine building, construction or other industries. Mixing mechanisms In powder two different dimensions in the mixing process can be determined: convective mixing and intensive mixing. In the case of convective mixing material in the mixer is transported from one location to another. This type of mixing leads to a less ordered state inside the mixer, the components that must be mixed are distributed over the other components. With progressing time the mixture becomes more randomly ordered. After a certain mixing time the ultimate random state is reached. Usually this type of mixing is applied for free-flowing and coarse materials. Possible threats during macro mixing is the de-mixing of the components, since differences in size, shape or density of the different particles can lead to segregation. When materials are cohesive, which is the case with e.g. fine particles and also with wet material, convective mixing is no longer sufficient to obtain a randomly ordered mixture. The relative strong inter-particle forces form lumps, which are not broken up by the mild transportation forces in the convective mixer. To decrease the lump size additional forces are necessary; i.e. more energy intensive mixing is required. These additional forces can either be impact forces or shear forces. Liquid–solid mixing Liquid–solid mixing is typically done to suspend coarse free-flowing solids, or to break up lumps of fine agglomerated solids. An example of the former is the mixing granulated sugar into water; an example of the latter is the mixing of flour or powdered milk into water. In the first case, the particles can be lifted into suspension (and separated from one another) by bulk motion of the fluid; in the second, the mixer itself (or the high shear field near it) must destabilize the lumps and cause them to disintegrate. 
One example of a solid–liquid mixing process in industry is concrete mixing, where cement, sand, small stones or gravel and water are commingled into a homogeneous self-hardening mass, used in the construction industry. Solid suspension Suspension of solids into a liquid is done to improve the rate of mass transfer between the solid and the liquid. Examples include dissolving a solid reactant into a solvent, or suspending catalyst particles in liquid to improve the flow of reactants and products to and from the particles. The associated eddy diffusion increases the rate of mass transfer within the bulk of the fluid, and the convection of material away from the particles decreases the size of the boundary layer, where most of the resistance to mass transfer occurs. Axial-flow impellers are preferred for solid suspension because solid suspension needs momentum rather than shear, although radial-flow impellers can be used in a tank with baffles, which converts some of the rotational motion into vertical motion. When the solid is denser than the liquid (and therefore collects at the bottom of the tank), the impeller is rotated so that the fluid is pushed downwards; when the solid is less dense than the liquid (and therefore floats on top), the impeller is rotated so that the fluid is pushed upwards (though this is relatively rare). The equipment preferred for solid suspension produces large volumetric flows but not necessarily high shear; high flow-number turbine impellers, such as hydrofoils, are typically used. Multiple turbines mounted on the same shaft can reduce power draw. The degree of homogeneity of a solid–liquid suspension can be described by the RSD (relative standard deviation of the solid volume fraction field in the mixing tank). A perfect suspension would have an RSD of 0%, but in practice an RSD less than or equal to 20% can be sufficient for the suspension to be considered homogeneous, although this is case-dependent. The RSD can be obtained by experimental measurements or by calculations. Measurements can be performed at full scale, but this is generally impractical, so it is common to perform measurements at small scale and use a "scale-up" criterion to extrapolate the RSD from small to full scale. Calculations can be performed using computational fluid dynamics software or by using correlations built on theoretical developments, experimental measurements and/or computational fluid dynamics data. Computational fluid dynamics calculations are quite accurate and can accommodate virtually any tank and agitator design, but they require expertise and long computation times. Correlations are easy to use but are less accurate and do not cover all possible designs. The most popular correlation is the ‘just suspended speed’ correlation published by Zwietering (1958). It is an easy-to-use correlation, but it is not meant for homogeneous suspension. It only provides a crude estimate of the stirring speed for ‘bad’ quality suspensions (partial suspensions) where no particle remains at the bottom for more than 1 or 2 seconds. An equivalent correlation is the correlation from Mersmann (1998). For ‘good’ quality suspensions, some examples of useful correlations can be found in the publications of Barresi (1987), Magelli (1991), Cekinski (2010) or Macqueron (2017). Machine learning can also be used to build models substantially more accurate than "classical" correlations. 
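For orientation, a sketch of the Zwietering just-suspended-speed correlation mentioned above is given below, in its commonly cited form N_js = S · ν^0.1 · d_p^0.2 · (g·Δρ/ρ_l)^0.45 · X^0.13 / D^0.85. The geometry constant S and all of the physical-property values used here are illustrative assumptions; in practice S depends on the impeller type, the impeller-to-tank diameter ratio and the clearance, and is taken from tabulated data.

def zwietering_njs(S, nu, d_p, rho_l, rho_s, X, D, g=9.81):
    """Just-suspended impeller speed N_js in revolutions per second.

    S     -- dimensionless geometry constant (tabulated per impeller/clearance)
    nu    -- kinematic viscosity of the liquid, m^2/s
    d_p   -- particle diameter, m
    rho_l -- liquid density, kg/m^3
    rho_s -- solid density, kg/m^3
    X     -- solids loading, kg solid per 100 kg liquid
    D     -- impeller diameter, m
    """
    return (S * nu**0.1 * d_p**0.2
            * (g * (rho_s - rho_l) / rho_l)**0.45
            * X**0.13 / D**0.85)

# Illustrative numbers only: sand-like particles in water, 0.5 m impeller.
n_js = zwietering_njs(S=5.0, nu=1e-6, d_p=200e-6,
                      rho_l=1000.0, rho_s=2500.0, X=10.0, D=0.5)
print(f"N_js ≈ {n_js:.2f} rev/s ({n_js * 60:.0f} RPM)")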
Solid deagglomeration Very fine powders, such as titanium dioxide pigments, and materials that have been spray dried may agglomerate or form lumps during transportation and storage. Starchy materials or those that form gels when exposed to solvent can form lumps that are wetted on the outside but dry on the inside. These types of materials are not easily mixed into liquid with the types of mixers preferred for solid suspension because the agglomerate particles must be subjected to intense shear to be broken up. In some ways, deagglomeration of solids is similar to the blending of immiscible liquids, except for the fact that coalescence is usually not a problem. An everyday example of this type of mixing is the production of milkshakes from liquid milk and solid ice cream. Liquid–gas mixing Liquids and gases are typically mixed to allow mass transfer to occur. For instance, in the case of air stripping, gas is used to remove volatiles from a liquid. Typically, a packed column is used for this purpose, with the packing acting as a motionless mixer and the air pump providing the driving force. When a tank and impeller are used, the objective is typically to ensure that the gas bubbles remain in contact with the liquid for as long as possible. This is especially important if the gas is expensive, such as pure oxygen, or diffuses slowly into the liquid. Mixing in a tank is also useful when a (relatively) slow chemical reaction is occurring in the liquid phase, and so the concentration difference in the thin layer near the bubble is close to that of the bulk. This reduces the driving force for mass transfer. If there is a (relatively) fast chemical reaction in the liquid phase, it is sometimes advantageous to disperse but not recirculate the gas bubbles, ensuring that they are in plug flow and can transfer mass more efficiently. Rushton turbines have been traditionally used to disperse gases into liquids, but newer options, such as the Smith turbine and Bakker turbine are becoming more prevalent. One of the issues is that as the gas flow increases, more and more of the gas accumulates in the low pressure zones behind the impeller blades, which reduces the power drawn by the mixer (and therefore its effectiveness). Newer designs, such as the GDX impeller, have nearly eliminated this problem. Gas–solid mixing Gas–solid mixing may be conducted to transport powders or small particulate solids from one place to another, or to mix gaseous reactants with solid catalyst particles. In either case, the turbulent eddies of the gas must provide enough force to suspend the solid particles, which otherwise sink under the force of gravity. The size and shape of the particles is an important consideration, since different particles have different drag coefficients, and particles made of different materials have different densities. A common unit operation the process industry uses to separate gases and solids is the cyclone, which slows the gas and causes the particles to settle out. Multiphase mixing Multiphase mixing occurs when solids, liquids and gases are combined in one step. This may occur as part of a catalytic chemical process, in which liquid and gaseous reagents must be combined with a solid catalyst (such as hydrogenation); or in fermentation, where solid microbes and the gases they require must be well-distributed in a liquid medium. The type of mixer used depends upon the properties of the phases. 
In some cases, the mixing power is provided by the gas itself as it moves up through the liquid, entraining liquid with the bubble plume. This draws liquid upwards inside the plume, and causes liquid to fall outside the plume. If the viscosity of the liquid is too high to allow for this (or if the solid particles are too heavy), an impeller may be needed to keep the solid particles suspended. Basic nomenclature For liquid mixing, the nomenclature is rather standardized: Impeller Diameter, "D", is measured for industrial mixers as the maximum diameter swept around the axis of rotation. Rotational Speed, "N", is usually measured in revolutions per minute (RPM) or revolutions per second (RPS). This variable refers to the rotational speed of the impeller, as this number can differ along points of the drive train. Tank Diameter, "T", is the inside diameter of a cylindrical vessel. Most mixing vessels receiving industrial mixers will be cylindrical. Power, "P", is the energy input into the system, usually by an electric motor or a pneumatic motor. Impeller Pumping Capacity, "Q", is the resulting fluid motion from impeller rotation. Constitutive equations Many of the equations used for determining the output of mixers are empirically derived, or contain empirically derived constants. Since mixers operate in the turbulent regime, many of the equations are approximations that are considered acceptable for most engineering purposes. When a mixing impeller rotates in the fluid, it generates a combination of flow and shear. The impeller-generated flow can be calculated with the following equation: Q = Fl N D^3, where Fl is the dimensionless flow number of the impeller. Flow numbers for impellers have been published in the North American Mixing Forum sponsored Handbook of Industrial Mixing. The power required to rotate an impeller can be calculated using the following equations: P = Np ρ N^3 D^5 (turbulent regime) and P = Kp μ N^2 D^3 (laminar regime), where Np is the (dimensionless) power number, which is a function of impeller geometry; ρ is the density of the fluid; N is the rotational speed, typically rotations per second; D is the diameter of the impeller; Kp is the laminar power constant; and μ is the viscosity of the fluid. Note that the mixer power is strongly dependent upon the rotational speed and impeller diameter, and linearly dependent upon either the density or the viscosity of the fluid, depending on which flow regime is present. In the transitional regime, flow near the impeller is turbulent and so the turbulent power equation is used. The time required to blend a fluid to within 5% of the final concentration, t95, can be calculated with empirical correlations; in the turbulent regime a widely used correlation is t95 = 5.20 (T/D)^2 / (Np^(1/3) N), while in the transitional and laminar regimes the blend time additionally depends on the impeller Reynolds number, Re = ρ N D^2 / μ, and increases steeply as the viscosity rises, with the specific correlations depending on the impeller and vessel geometry. The transitional/turbulent boundary is commonly taken to occur at an impeller Reynolds number of roughly 10,000, and the laminar/transitional boundary at roughly Re = 10. Laboratory mixing At a laboratory scale, mixing is achieved by magnetic stirrers or by simple hand-shaking. Sometimes mixing in laboratory vessels is more thorough and occurs faster than is possible industrially. Magnetic stir bars are radial-flow mixers that induce solid body rotation in the fluid being mixed. This is acceptable on a small scale, since the vessels are small and mixing therefore occurs rapidly (short blend time). A variety of stir bar configurations exist, but because of the small size and (typically) low viscosity of the fluid, it is possible to use one configuration for nearly all mixing tasks. The cylindrical stir bar can be used for suspension of solids, as seen in iodometry, deagglomeration (useful for preparation of microbiology growth medium from powders), and liquid–liquid blending. 
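Returning to the constitutive equations above, the short sketch below evaluates the impeller Reynolds number, the power draw and the turbulent blend-time correlation for a water-like fluid stirred by a generic turbine; the power number, laminar constant, geometry and fluid properties are illustrative assumptions rather than data for a specific impeller.

rho = 1000.0   # fluid density, kg/m^3 (water-like)
mu = 0.001     # fluid viscosity, Pa·s
N = 2.0        # rotational speed, rev/s
D = 0.5        # impeller diameter, m
T = 1.5        # tank diameter, m
Np = 5.0       # assumed turbulent power number for a generic turbine
Kp = 70.0      # assumed laminar power constant

Re = rho * N * D**2 / mu
if Re > 10_000:
    P = Np * rho * N**3 * D**5        # turbulent power draw
else:
    P = Kp * mu * N**2 * D**3         # laminar power draw (rough treatment)

t95 = 5.20 * (T / D)**2 / (Np**(1 / 3) * N)   # turbulent blend-time correlation

print(f"Re = {Re:.0f}, P = {P:.0f} W, t95 ≈ {t95:.1f} s")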
Another peculiarity of laboratory mixing is that the mixer rests on the bottom of the vessel instead of being suspended near the center. Furthermore, the vessels used for laboratory mixing are typically more widely varied than those used for industrial mixing; for instance, Erlenmeyer flasks, or Florence flasks may be used in addition to the more cylindrical beaker. Mixing in microfluidics When scaled down to the microscale, fluid mixing behaves radically different. This is typically at sizes from a couple (2 or 3) millimeters down to the nanometer range. At this size range normal advection does not happen unless it is forced by a hydraulic pressure gradient. Diffusion is the dominant mechanism whereby two different fluids come together. Diffusion is a relatively slow process. Hence a number of researchers had to devise ways to get the two fluids to mix. This involved Y junctions, T junctions, three-way intersections and designs where the interfacial area between the two fluids is maximized. Beyond just interfacing the two liquids people also made twisting channels to force the two fluids to mix. These included multilayered devices where the fluids would corkscrew, looped devices where the fluids would flow around obstructions and wavy devices where the channel would constrict and flare out. Additionally channels with features on the walls like notches or groves were tried. One way to know if mixing is happening due to advection or diffusion is by finding the Peclet number. It is the ratio of advection to diffusion. At high Peclet numbers (> 1), advection dominates. At low Peclet numbers (< 1), diffusion dominates. Peclet number = (flow velocity × mixing path) / diffusion coefficient Industrial mixing equipment At an industrial scale, efficient mixing can be difficult to achieve. A great deal of engineering effort goes into designing and improving mixing processes. Mixing at industrial scale is done in batches (dynamic mixing), inline or with help of static mixers. Moving mixers are powered with electric motors that operate at standard speeds of 1800 or 1500 RPM, which is typically much faster than necessary. Gearboxes are used to reduce speed and increase torque. Some applications require the use of multi-shaft mixers, in which a combination of mixer types are used to completely blend the product. In addition to performing typical batch mixing operations, some mixing can be done continuously. Using a machine like the Continuous Processor, one or more dry ingredients and one or more liquid ingredients can be accurately and consistently metered into the machine and see a continuous, homogeneous mixture come out the discharge of the machine. Many industries have converted to continuous mixing for many reasons. Some of those are ease of cleaning, lower energy consumption, smaller footprint, versatility, control, and many others. Continuous mixers, such as the twin-screw Continuous Processor, also have the ability to handle very high viscosities. Turbines A selection of turbine geometries and power numbers are shown below. Different types of impellers are used for different tasks; for instance, Rushton turbines are useful for dispersing gases into liquids, but are not very helpful for dispersing settled solids into liquid. Newer turbines have largely supplanted the Rushton turbine for gas–liquid mixing, such as the Smith turbine and Bakker turbine. 
The power number is an empirical measure of the amount of torque needed to drive different impellers in the same fluid at constant power per unit volume; impellers with higher power numbers require more torque but operate at lower speed than impellers with lower power numbers, which operate at lower torque but higher speeds. Planetary mixer A planetary mixer is a device used to mix round products including adhesives, pharmaceuticals, foods (including dough), chemicals, solid rocket propellants, electronics, plastics and pigments. Planetary mixers are ideal for mixing and kneading viscous pastes (up to 6 million centipoise) under atmospheric or vacuum conditions. Capacities range from through . Many options including jacketing for heating or cooling, vacuum or pressure, vari speed drives, etc. are available. Planetary blades each rotate on their own axes, and at the same time on a common axis, thereby providing complete mixing in a very short timeframe. Large industrial scale planetary mixers are used in the production of solid rocket fuel for long-range ballistic missiles. They are used to blend and homgenize the components of solid rocket propellant, ensuring a consistent and stable mixture of fuel & oxidizer. ResonantAcoustic mixer ResonantAcoustic mixing (RAM) is able to mix, coat, mill, and sieve materials without impellers or blades touching the materials, yet typically 10X-100X faster than alternative technologies by generating a high level of energy (up to 100 g) through seeking and operating at the resonant condition of the mechanical system - at all times. ResonantAcoustic mixers from lab scale to industrial production to continuous mixing are used for energetic materials like explosives, propellants, and pyrotechnic compositions, as well as pharmaceuticals, powder metallurgy, 3D printing, rechargeable battery materials, and battery recycling. Close-clearance mixers There are two main types of close-clearance mixers: anchors and helical ribbons. Anchor mixers induce solid-body rotation and do not promote vertical mixing, but helical ribbons do. Close clearance mixers are used in the laminar regime, because the viscosity of the fluid overwhelms the inertial forces of the flow and prevents the fluid leaving the impeller from entraining the fluid next to it. Helical ribbon mixers are typically rotated to push material at the wall downwards, which helps circulate the fluid and refresh the surface at the wall. High shear dispersers High shear dispersers create intense shear near the impeller but relatively little flow in the bulk of the vessel. Such devices typically resemble circular saw blades and are rotated at high speed. Because of their shape, they have a relatively low drag coefficient and therefore require comparatively little torque to spin at high speed. High shear dispersers are used for forming emulsions (or suspensions) of immiscible liquids and solid deagglomeration. Static mixers Static mixers are used when a mixing tank would be too large, too slow, or too expensive to use in a given process. Liquid whistles Liquid whistles are a kind of static mixer which pass fluid at high pressure through an orifice and subsequently over a blade. This subjects the fluid to high turbulent stresses and may result in mixing, emulsification, deagglomeration and disinfection. Other Ribbon Blender Ribbon blenders are very common in process industries for performing dry-mixing operations. The mixing is performed thanks to 2 helix (ribbon) welded on the shafts. 
The two helices move the product in opposite directions, thus achieving the mixing (see picture of ribbon blender). V Blender Twin-Screw Continuous Blender Continuous Processor Cone Screw Blender Screw Blender Double Cone Blender Double Planetary High Viscosity Mixer Counter-rotating Double & Triple Shaft Vacuum Mixer High Shear Rotor Stator Impinging mixer Dispersion Mixers Paddle Jet Mixer Mobile Mixers Drum Blenders Intermix mixer Horizontal Mixer Hot/Cold mixing combination Vertical mixer Turbomixer Banbury mixer The Banbury mixer is a brand of internal batch mixer, named for inventor Fernley H. Banbury. The "Banbury" trademark is owned by Farrel Corporation. Internal batch mixers such as the Banbury mixer are used for mixing or compounding rubber and plastics. The original design dates back to 1916. The mixer consists of two rotating spiral-shaped blades encased in segments of cylindrical housings. These intersect so as to leave a ridge between the blades. The blades may be cored for circulation of heating or cooling media. Its invention resulted in major labor and capital savings in the tire industry, doing away with the initial step of roller-milling rubber. It is also used for compounding reinforcing fillers into a resin system. See also Mixing paddle Dry blending References Further reading Dry Blender Selection Criteria Technical Paper External links Wiki on equipment for mixing bulk solids and powders Visualizations of fluid dynamics in mixing processes A textbook chapter on mixing in the food industry Information on Solids mixing - powderprocess.net Unit operations Industrial machinery Plastics industry Rotating machines
Mixing (process engineering)
[ "Physics", "Chemistry", "Technology", "Engineering" ]
5,211
[ "Machines", "Unit operations", "Physical systems", "Rotating machines", "Chemical process engineering", "Industrial machinery" ]
653,060
https://en.wikipedia.org/wiki/Methyl%20ethyl%20ketone%20peroxide
Methyl ethyl ketone peroxide (MEKP) is an organic peroxide with the formula [(CH3)(C2H5)C(O2H)]2O2. MEKP is a colorless oily liquid. It is widely used in vulcanization (crosslinking) of polymers. It is derived from the reaction of methyl ethyl ketone and hydrogen peroxide under acidic conditions. Several products result from this reaction, including a cyclic dimer. The linear dimer, the topic of this article, is the most prevalent, and this is the form that is typically quoted in the commercially available material. Solutions of 30 to 40% MEKP are used in industry and by hobbyists as a catalyst to initiate the crosslinking of unsaturated polyester resins used in fiberglass and casting. For this application, MEKP is often dissolved in a phlegmatizer such as dimethyl phthalate or cyclohexane peroxide to reduce its sensitivity to shock. Benzoyl peroxide can be used for the same purpose. Safety Whereas acetone peroxide is a white powder at STP, MEKP is slightly less sensitive to shock and temperature, and more stable in storage. MEKP is a severe skin irritant and can cause progressive corrosive damage or blindness. The volatile decomposition products of MEKP can contribute to the formation of vapor-phase explosions. Ensuring safe storage is important, and the maximum storage temperature should be limited to below 30 °C. Notes External links CDC - NIOSH Pocket Guide to Chemical Hazards The Register: Mass murder in the skies: was the plot feasible? New York Times: Details Emerge in British Terror Case The Free Information Society: HMTD Synthesis How MEKP cures Unsaturated Polyester Resin (video animation) Liquid explosives Ketals Organic peroxides Radical initiators Organic peroxide explosives
Methyl ethyl ketone peroxide
[ "Chemistry", "Materials_science" ]
397
[ "Ketals", "Radical initiators", "Functional groups", "Organic compounds", "Polymer chemistry", "Reagents for organic chemistry", "Explosive chemicals", "Organic peroxide explosives", "Organic peroxides" ]
653,183
https://en.wikipedia.org/wiki/Delocalized%20electron
In chemistry, delocalized electrons are electrons in a molecule, ion or solid metal that are not associated with a single atom or a covalent bond. The term delocalization is general and can have slightly different meanings in different fields: In organic chemistry, it refers to resonance in conjugated systems and aromatic compounds. In solid-state physics, it refers to free electrons that facilitate electrical conduction. In quantum chemistry, it refers to molecular orbital electrons that have extended over several adjacent atoms. Resonance In the simple aromatic ring of benzene, the delocalization of six π electrons over the C6 ring is often graphically indicated by a circle. The fact that the six C-C bonds are equidistant is one indication that the electrons are delocalized; if the structure were to have isolated double bonds alternating with discrete single bonds, the bond would likewise have alternating longer and shorter lengths. In valence bond theory, delocalization in benzene is represented by resonance structures. Electrical conduction Delocalized electrons also exist in the structure of solid metals. Metallic structure consists of aligned positive ions (cations) in a "sea" of delocalized electrons. This means that the electrons are free to move throughout the structure, and gives rise to properties such as conductivity. In diamond all four outer electrons of each carbon atom are 'localized' between the atoms in covalent bonding. The movement of electrons is restricted and diamond does not conduct an electric current. In graphite, each carbon atom uses only 3 of its 4 outer energy level electrons in covalently bonding to three other carbon atoms in a plane. Each carbon atom contributes one electron to a delocalized system of electrons that is also a part of the chemical bonding. The delocalized electrons are free to move throughout the plane. For this reason, graphite conducts electricity along the planes of carbon atoms, but does not conduct in a direction at right angles to the plane. Molecular orbitals Standard ab initio quantum chemistry methods lead to delocalized orbitals that, in general, extend over an entire molecule and have the symmetry of the molecule. Localized orbitals may then be found as linear combinations of the delocalized orbitals, given by an appropriate unitary transformation. In the methane molecule, ab initio calculations show bonding character in four molecular orbitals, sharing the electrons uniformly among all five atoms. There are two orbital levels, a bonding molecular orbital formed from the 2s orbital on carbon and triply degenerate bonding molecular orbitals from each of the 2p orbitals on carbon. The localized sp3 orbitals corresponding to each individual bond in valence bond theory can be obtained from a linear combination of the four molecular orbitals. See also Aromatic ring current Electride Solvated electron References Chemical bonding Electron states
Delocalized electron
[ "Physics", "Chemistry", "Materials_science" ]
573
[ "Electron", "Condensed matter physics", "nan", "Chemical bonding", "Electron states" ]
653,273
https://en.wikipedia.org/wiki/Electrostatic%20levitation
Electrostatic levitation is the process of using an electric field to levitate a charged object and counteract the effects of gravity. It was used, for instance, in Robert Millikan's oil drop experiment and is used to suspend the gyroscopes in Gravity Probe B during launch. Due to Earnshaw's theorem, no static arrangement of classical electrostatic fields can be used to stably levitate a point charge. There is an equilibrium point where the two fields cancel, but it is an unstable equilibrium. By using feedback techniques it is possible to adjust the charges to achieve a quasi-static levitation. Earnshaw's theorem The idea of particle instability in an electrostatic field originated with Samuel Earnshaw in 1839 and was formalized by James Clerk Maxwell in 1874, who gave it the title "Earnshaw's theorem" and proved it with the Laplace equation. Earnshaw's theorem explains why a system of electrons is not stable and was invoked by Niels Bohr in his atom model of 1913 when criticizing J. J. Thomson's atom. Earnshaw's theorem holds that a charged particle suspended in an electrostatic field is unstable, because the forces of attraction and repulsion both vary according to the inverse square law and remain in balance wherever the particle moves. Since the forces remain in balance, there is no imbalance to provide a restoring force, and the particle remains unstable and can freely move without restriction. Levitation The first electrostatic levitator was invented by Dr. Won-Kyu Rhim at NASA's Jet Propulsion Laboratory in 1993. A charged sample 2 mm in diameter can be levitated in a vacuum chamber between two vertically positioned electrodes with an electrostatic field in between. The field is controlled through a feedback system to keep the levitated sample at a predetermined position. Several copies of this system have been made at JAXA and NASA, and the original system has been transferred to the California Institute of Technology with an upgraded tetrahedral four-beam laser heating system. On the Moon, the photoelectric effect and electrons in the solar wind charge fine layers of Moon dust on the surface, forming an atmosphere of dust that floats in "fountains" over the surface of the Moon. See also Magnetic levitation Optical levitation Acoustic levitation Aerodynamic levitation Biefeld-Brown effect Ionocraft (Lifter) Van de Graaff generator References External links JLN Labs: Levitators Electrostatic levitator — Marshall Space Flight Center Electrostatic levitation raises dust particles off the surface of the moon Hybrid electric/acoustic levitation Electrostatic levitation and transportation of glass or silicon plates Electrostatic levitation of various materials including silicon, cobalt palladium, aluminium and other compounds Electrostatics Levitation
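Because Earnshaw's theorem rules out stable levitation in a purely static field, practical levitators close a feedback loop around the measured sample position, as described above. The sketch below is a toy one-dimensional simulation of such a loop; the mass, charge, electrode gap, and gains are invented illustrative values, not parameters of any real instrument, which would also need optical position sensing and high-voltage amplifiers.

```python
# Toy 1-D sketch of feedback-stabilised electrostatic levitation.
# All numbers are assumed illustrative values (not from a real levitator).
m, q, g = 1e-6, 1e-9, 9.81        # mass (kg), charge (C), gravity (m/s^2)
d = 0.01                          # electrode spacing (m), parallel-plate model
kp, kd = 5e7, 2e4                 # feedback gains: V per m and V per (m/s)
V_bias = m * g * d / q            # voltage that nominally balances gravity

z, v = 1e-4, 0.0                  # initial sag below the set point (m) and its rate
dt, t_end = 1e-4, 0.2
for _ in range(int(t_end / dt)):
    V = V_bias + kp * z + kd * v  # sag more -> push harder (PD control)
    a = g - q * V / (m * d)       # net downward acceleration; field force acts upward
    v += a * dt
    z += v * dt
print(f"residual sag after {t_end} s: {z:.2e} m")  # decays to essentially zero
```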
Electrostatic levitation
[ "Physics" ]
569
[ "Physical phenomena", "Motion (physics)", "Levitation" ]
653,348
https://en.wikipedia.org/wiki/Human%20musculoskeletal%20system
The human musculoskeletal system (also known as the human locomotor system, and previously the activity system) is an organ system that gives humans the ability to move using their muscular and skeletal systems. The musculoskeletal system provides form, support, stability, and movement to the body. The human musculoskeletal system is made up of the bones of the skeleton, muscles, cartilage, tendons, ligaments, joints, and other connective tissue that supports and binds tissues and organs together. The musculoskeletal system's primary functions include supporting the body, allowing motion, and protecting vital organs. The skeletal portion of the system serves as the main storage system for calcium and phosphorus and contains critical components of the hematopoietic system. This system describes how bones are connected to other bones and muscle fibers via connective tissue such as tendons and ligaments. The bones provide stability to the body. Muscles keep bones in place and also play a role in the movement of bones. To allow motion, different bones are connected by joints. Cartilage prevents the bone ends from rubbing directly onto each other. Muscles contract to move the bone attached at the joint. There are, however, diseases and disorders that may adversely affect the function and overall effectiveness of the system. These diseases can be difficult to diagnose due to the close relation of the musculoskeletal system to other internal systems. The musculoskeletal system refers to the system having its muscles attached to an internal skeletal system and is necessary for humans to move to a more favorable position. Complex issues and injuries involving the musculoskeletal system are usually handled by a physiatrist (specialist in physical medicine and rehabilitation) or an orthopaedic surgeon. Subsystems Skeletal The skeletal system serves many important functions; it provides the shape and form for the body, support and protection, allows bodily movement, produces blood for the body, and stores minerals. The number of bones in the human skeletal system is a controversial topic. Humans are born with over 300 bones; however, many bones fuse together between birth and maturity. As a result, an average adult skeleton consists of 206 bones. The number of bones varies according to the method used to derive the count. While some consider certain structures to be a single bone with multiple parts, others may see it as a single part with multiple bones. There are five general classifications of bones. These are long bones, short bones, flat bones, irregular bones, and sesamoid bones. The human skeleton is composed of both fused and individual bones supported by ligaments, tendons, muscles and cartilage. It is a complex structure with two distinct divisions; the axial skeleton, which includes the vertebral column, and the appendicular skeleton. Function The skeletal system serves as a framework for tissues and organs to attach themselves to. This system acts as a protective structure for vital organs. Major examples of this are the brain being protected by the skull and the lungs being protected by the rib cage. Located in long bones are two distinctions of bone marrow (yellow and red). The yellow marrow has fatty connective tissue and is found in the marrow cavity. During starvation, the body uses the fat in yellow marrow for energy. 
The red marrow of some bones is an important site for blood cell production, approximately 2.6 million red blood cells per second in order to replace existing cells that have been destroyed by the liver. Here all erythrocytes, platelets, and most leukocytes form in adults. From the red marrow, erythrocytes, platelets, and leukocytes migrate to the blood to do their special tasks. Another function of bones is the storage of certain minerals. Calcium and phosphorus are among the main minerals being stored. The importance of this storage "device" helps to regulate mineral balance in the bloodstream. When the fluctuation of minerals is high, these minerals are stored in the bone; when it is low they will be withdrawn from the bone. Muscular There are three types of muscles—cardiac, skeletal, and smooth. Smooth muscles are used to control the flow of substances within the lumens of hollow organs, and are not consciously controlled. Skeletal and cardiac muscles have striations that are visible under a microscope due to the components within their cells. Only skeletal and smooth muscles are part of the musculoskeletal system and only the muscles can move the body. Cardiac muscles are found in the heart and are used only to circulate blood; like the smooth muscles, these muscles are not under conscious control. Skeletal muscles are attached to bones and arranged in opposing groups around joints. Muscles are innervated, whereby nervous signals are communicated by nerves, which conduct electrical currents from the central nervous system and cause the muscles to contract. Contraction initiation In mammals, when a muscle contracts, a series of reactions occur. Muscle contraction is stimulated by the motor neuron sending a message to the muscles from the somatic nervous system. Depolarization of the motor neuron results in neurotransmitters being released from the nerve terminal. The space between the nerve terminal and the muscle cell is called the neuromuscular junction. These neurotransmitters diffuse across the synapse and bind to specific receptor sites on the cell membrane of the muscle fiber. When enough receptors are stimulated, an action potential is generated and the permeability of the sarcolemma is altered. This process is known as initiation. Tendons A tendon is a tough, flexible band of fibrous connective tissue that connects muscles to bones. The extra-cellular connective tissue between muscle fibers binds to tendons at the distal and proximal ends, and the tendon binds to the periosteum of individual bones at the muscle's origin and insertion. As muscles contract, tendons transmit the forces to the relatively rigid bones, pulling on them and causing movement. Tendons can stretch substantially, allowing them to function as springs during locomotion, thereby saving energy. Joints, ligaments and bursae The Joints are structures that connect individual bones and may allow bones to move against each other to cause movement. There are three divisions of joints, diarthroses which allow extensive mobility between two or more articular heads; amphiarthrosis, which is a joint that allows some movement, and false joints or synarthroses, joints that are immovable, that allow little or no movement and are predominantly fibrous. Synovial joints, joints that are not directly joined, are lubricated by a solution called synovial fluid that is produced by the synovial membranes. 
This fluid lowers the friction between the articular surfaces and is kept within an articular capsule, binding the joint with its taut tissue. Ligaments A ligament is a small band of dense, white, fibrous elastic tissue. Ligaments connect the ends of bones together in order to form a joint. Most ligaments limit dislocation, or prevent certain movements that may cause breaks. Since they are only elastic they increasingly lengthen when under pressure. When this occurs the ligament may be susceptible to break resulting in an unstable joint. Ligaments may also restrict some actions: movements such as hyper extension and hyper flexion are restricted by ligaments to an extent. Also ligaments prevent certain directional movement. Bursae A bursa is a small fluid-filled sac made of white fibrous tissue and lined with synovial membrane. Bursa may also be formed by a synovial membrane that extends outside of the joint capsule. It provides a cushion between bones and tendons or muscles around a joint; bursa are filled with synovial fluid and are found around almost every major joint of the body. Clinical significance Because many other body systems, including the vascular, nervous, and integumentary systems, are interrelated, disorders of one of these systems may also affect the musculoskeletal system and complicate the diagnosis of the disorder's origin. Diseases of the musculoskeletal system mostly encompass functional disorders or motion discrepancies; the level of impairment depends specifically on the problem and its severity. In a study of hospitalizations in the United States, the most common inpatient OR procedures in 2012 involved the musculoskeletal system: knee arthroplasty, laminectomy, hip replacement, and spinal fusion. Articular (of or pertaining to the joints) disorders are the most common. However, also among the diagnoses are: primary muscular diseases, neurologic (related to the medical science that deals with the nervous system and disorders affecting it) deficits, toxins, endocrine abnormalities, metabolic disorders, infectious diseases, blood and vascular disorders, and nutritional imbalances. Disorders of muscles from another body system can bring about irregularities such as: impairment of ocular motion and control, respiratory dysfunction, and bladder malfunction. Complete paralysis, paresis, or ataxia may be caused by primary muscular dysfunctions of infectious or toxic origin; however, the primary disorder is usually related to the nervous system, with the muscular system acting as the effector organ, an organ capable of responding to a stimulus, especially a nerve impulse. One understated disorder that begins during pregnancy is pelvic girdle pain. It is complex, multi-factorial, and likely to be also represented by a series of sub-groups driven by pain varying from peripheral or central nervous system, altered laxity/stiffness of muscles, laxity to injury of tendinous/ligamentous structures to maladaptive body mechanics. See also Skeletal muscles of the human body Skeletal muscle Muscular system References Dance science
Human musculoskeletal system
[ "Biology" ]
2,008
[ "Organ systems", "Musculoskeletal system" ]
653,780
https://en.wikipedia.org/wiki/Normal%20number%20%28computing%29
In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating point number that can be represented without leading zeros in its significand.

The magnitude of the smallest normal number in a format is given by b^Emin, where b is the base (radix) of the format (commonly 2 or 10, for binary and decimal number systems) and Emin, the smallest allowed exponent, depends on the size and layout of the format. Similarly, the magnitude of the largest normal number in a format is given by (b − b^(1−p)) × b^Emax, where p is the precision of the format in digits and Emax is related to Emin as Emax = 1 − Emin.

In the IEEE 754 binary and decimal formats, b, p, Emin, and Emax have the following values:

binary16: b = 2, p = 11, Emin = −14, Emax = 15
binary32: b = 2, p = 24, Emin = −126, Emax = 127
binary64: b = 2, p = 53, Emin = −1022, Emax = 1023
binary128: b = 2, p = 113, Emin = −16382, Emax = 16383
decimal32: b = 10, p = 7, Emin = −95, Emax = 96
decimal64: b = 10, p = 16, Emin = −383, Emax = 384
decimal128: b = 10, p = 34, Emin = −6143, Emax = 6144

For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96. Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers). Zero is considered neither normal nor subnormal. See also Normalized number Half-precision floating-point format Single-precision floating-point format Double-precision floating-point format References Computer arithmetic
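A quick way to check these formulas is against IEEE 754 binary64, the format behind Python's built-in float on typical platforms; the sketch below reproduces the smallest and largest normal magnitudes from b, p, Emin and Emax and compares them with the values reported by the standard library.

```python
# Verify the normal-number formulas for IEEE 754 binary64 (b=2, p=53,
# Emin=-1022, Emax=1023) against Python's float metadata.
import sys

b, p, e_min, e_max = 2, 53, -1022, 1023
smallest_normal = float(b) ** e_min                             # b**Emin
largest_normal = (b - float(b) ** (1 - p)) * float(b) ** e_max  # (b - b**(1-p)) * b**Emax

print(smallest_normal == sys.float_info.min)  # True
print(largest_normal == sys.float_info.max)   # True
```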
Normal number (computing)
[ "Mathematics" ]
264
[ "Computer arithmetic", "Arithmetic" ]
654,098
https://en.wikipedia.org/wiki/Symmetric%20algebra
In mathematics, the symmetric algebra S(V) (also denoted Sym(V)) on a vector space V over a field K is a commutative algebra over K that contains V, and is, in some sense, minimal for this property. Here, "minimal" means that S(V) satisfies the following universal property: for every linear map f from V to a commutative algebra A, there is a unique algebra homomorphism g : S(V) → A such that f = g ∘ i, where i is the inclusion map of V in S(V). If B is a basis of V, the symmetric algebra S(V) can be identified, through a canonical isomorphism, to the polynomial ring K[B], where the elements of B are considered as indeterminates. Therefore, the symmetric algebra over V can be viewed as a "coordinate free" polynomial ring over V. The symmetric algebra S(V) can be built as the quotient of the tensor algebra T(V) by the two-sided ideal generated by the elements of the form x ⊗ y − y ⊗ x. All these definitions and properties extend naturally to the case where V is a module (not necessarily a free one) over a commutative ring. Construction From tensor algebra It is possible to use the tensor algebra T(V) to describe the symmetric algebra S(V). In fact, S(V) can be defined as the quotient algebra of T(V) by the two-sided ideal generated by the commutators v ⊗ w − w ⊗ v. It is straightforward to verify that the resulting algebra satisfies the universal property stated in the introduction. Because of the universal property of the tensor algebra, a linear map f from V to a commutative algebra A extends to an algebra homomorphism T(V) → A, which factors through S(V) because A is commutative. The extension of f to an algebra homomorphism g : S(V) → A is unique because V generates S(V) as a K-algebra. This results also directly from a general result of category theory, which asserts that the composition of two left adjoint functors is also a left adjoint functor. Here, the forgetful functor from commutative algebras to vector spaces or modules (forgetting the multiplication) is the composition of the forgetful functors from commutative algebras to associative algebras (forgetting commutativity), and from associative algebras to vectors or modules (forgetting the multiplication). As the tensor algebra and the quotient by commutators are left adjoint to these forgetful functors, their composition is left adjoint to the forgetful functor from commutative algebras to vectors or modules, and this proves the desired universal property. From polynomial ring The symmetric algebra can also be built from polynomial rings. If V is a K-vector space or a free K-module, with a basis B, let K[B] be the polynomial ring that has the elements of B as indeterminates. The homogeneous polynomials of degree one form a vector space or a free module that can be identified with V. It is straightforward to verify that this makes K[B] a solution to the universal problem stated in the introduction. This implies that K[B] and S(V) are canonically isomorphic, and can therefore be identified. This results also immediately from general considerations of category theory, since free modules and polynomial rings are free objects of their respective categories. If M is a module that is not free, it can be written M = L/R, where L is a free module and R is a submodule of L. In this case, one has S(M) = S(L)/⟨R⟩, where ⟨R⟩ is the ideal generated by R. (Here, equals signs mean equality up to a canonical isomorphism.) Again this can be proved by showing that one has a solution of the universal property, and this can be done either by a straightforward but boring computation, or by using category theory, and more specifically, the fact that a quotient is the solution of the universal problem for morphisms that map to zero a given subset.
(Depending on the case, the kernel is a normal subgroup, a submodule or an ideal, and the usual definition of quotients can be viewed as a proof of the existence of a solution of the universal problem.) Grading The symmetric algebra is a graded algebra. That is, it is a direct sum where called the th symmetric power of , is the vector subspace or submodule generated by the products of elements of . (The second symmetric power is sometimes called the symmetric square of ). This can be proved by various means. One follows from the tensor-algebra construction: since the tensor algebra is graded, and the symmetric algebra is its quotient by a homogeneous ideal: the ideal generated by all where and are in , that is, homogeneous of degree one. In the case of a vector space or a free module, the gradation is the gradation of the polynomials by the total degree. A non-free module can be written as , where is a free module of base ; its symmetric algebra is the quotient of the (graded) symmetric algebra of (a polynomial ring) by the homogeneous ideal generated by the elements of , which are homogeneous of degree one. One can also define as the solution of the universal problem for -linear symmetric functions from into a vector space or a module, and then verify that the direct sum of all satisfies the universal problem for the symmetric algebra. Relationship with symmetric tensors As the symmetric algebra of a vector space is a quotient of the tensor algebra, an element of the symmetric algebra is not a tensor, and, in particular, is not a symmetric tensor. However, symmetric tensors are strongly related to the symmetric algebra. A symmetric tensor of degree is an element of that is invariant under the action of the symmetric group More precisely, given the transformation defines a linear endomorphism of . A symmetric tensor is a tensor that is invariant under all these endomorphisms. The symmetric tensors of degree form a vector subspace (or module) . The symmetric tensors are the elements of the direct sum which is a graded vector space (or a graded module). It is not an algebra, as the tensor product of two symmetric tensors is not symmetric in general. Let be the restriction to of the canonical surjection If is invertible in the ground field (or ring), then is an isomorphism. This is always the case with a ground field of characteristic zero. The inverse isomorphism is the linear map defined (on products of vectors) by the symmetrization The map is not injective if the characteristic is less than +1; for example is zero in characteristic two. Over a ring of characteristic zero, can be non surjective; for example, over the integers, if and are two linearly independent elements of that are not in , then since In summary, over a field of characteristic zero, the symmetric tensors and the symmetric algebra form two isomorphic graded vector spaces. They can thus be identified as far as only the vector space structure is concerned, but they cannot be identified as soon as products are involved. Moreover, this isomorphism does not extend to the cases of fields of positive characteristic and rings that do not contain the rational numbers. Categorical properties Given a module over a commutative ring , the symmetric algebra can be defined by the following universal property: For every -linear map from to a commutative -algebra , there is a unique -algebra homomorphism such that where is the inclusion of in . 
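Stated with explicit notation — writing S(M) for the symmetric algebra of an R-module M and i : M → S(M) for the canonical inclusion, the standard symbols being supplied here for readability — the universal property above reads:

```latex
% Universal property of the symmetric algebra S(M) of an R-module M,
% with i : M -> S(M) the canonical inclusion. (Uses amsmath's \text.)
\[
\forall\, f\colon M \to A \ (R\text{-linear},\ A \text{ a commutative } R\text{-algebra}),
\quad \exists!\, g\colon S(M) \to A \ (\text{an } R\text{-algebra homomorphism})
\ \text{ such that } \ f = g \circ i .
\]
```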
As for every universal property, as soon as a solution exists, this defines uniquely the symmetric algebra, up to a canonical isomorphism. It follows that all properties of the symmetric algebra can be deduced from the universal property. This section is devoted to the main properties that belong to category theory. The symmetric algebra is a functor from the category of -modules to the category of -commutative algebra, since the universal property implies that every module homomorphism can be uniquely extended to an algebra homomorphism The universal property can be reformulated by saying that the symmetric algebra is a left adjoint to the forgetful functor that sends a commutative algebra to its underlying module. Symmetric algebra of an affine space One can analogously construct the symmetric algebra on an affine space. The key difference is that the symmetric algebra of an affine space is not a graded algebra, but a filtered algebra: one can determine the degree of a polynomial on an affine space, but not its homogeneous parts. For instance, given a linear polynomial on a vector space, one can determine its constant part by evaluating at 0. On an affine space, there is no distinguished point, so one cannot do this (choosing a point turns an affine space into a vector space). Analogy with exterior algebra The Sk are functors comparable to the exterior powers; here, though, the dimension grows with k; it is given by where n is the dimension of V. This binomial coefficient is the number of n-variable monomials of degree k. In fact, the symmetric algebra and the exterior algebra appear as the isotypical components of the trivial and sign representation of the action of acting on the tensor product (for example over the complex field) As a Hopf algebra The symmetric algebra can be given the structure of a Hopf algebra. See Tensor algebra for details. As a universal enveloping algebra The symmetric algebra S(V) is the universal enveloping algebra of an abelian Lie algebra, i.e. one in which the Lie bracket is identically 0. See also exterior algebra, the alternating algebra analog graded-symmetric algebra, a common generalization of a symmetric algebra and an exterior algebra Weyl algebra, a quantum deformation of the symmetric algebra by a symplectic form Clifford algebra, a quantum deformation of the exterior algebra by a quadratic form References Algebras Multilinear algebra Polynomials Ring theory
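For reference, the dimension count mentioned in the comparison with exterior powers above is the standard count of degree-k monomials in n variables (stars and bars); with n = dim V:

```latex
% Dimensions of the k-th symmetric and exterior powers of an
% n-dimensional vector space V. (Uses amsmath's \binom.)
\[
\dim S^k(V) = \binom{n+k-1}{k},
\qquad
\dim \Lambda^k(V) = \binom{n}{k}.
\]
```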
Symmetric algebra
[ "Mathematics" ]
1,936
[ "Mathematical structures", "Polynomials", "Algebras", "Ring theory", "Fields of abstract algebra", "Algebraic structures", "Algebra" ]
654,161
https://en.wikipedia.org/wiki/Directional%20sound
Directional Sound refers to the notion of using various devices to create fields of sound which spread less than most (small) traditional loudspeakers. Several techniques are available to accomplish this, and each has its benefits and drawbacks. Ultimately, choosing a directional sound device depends greatly on the environment in which it is deployed as well as the content that will be reproduced. Keeping these factors in mind will yield the best results through any evaluation of directional sound technologies. Systems which guide evacuees during an emergency by the emission of pink noise to the exits are often also called "directional sound" systems. Basic theory In all wave-producing sources, the directivity of any source, at maximum, corresponds to the size of the source compared to the wavelengths it is generating: The larger the source is compared to the wavelength of the sound waves, the more directional beam results . The specific transduction method has no impact on the directivity of the resulting sound field; the analysis relies only on the aperture function of the source, per the Huygens–Fresnel principle. The ultrasonic devices achieve high directivity by modulating audible sound onto high frequency ultrasound. The higher frequency sound waves have a shorter wavelength and thus don't spread out as rapidly. For this reason, the resulting directivity of these devices is far higher than physically possible with any loudspeaker system. However, they are reported to have limited low-frequency reproduction abilities. See sound from ultrasound for more information. Speaker arrays While a large loudspeaker is naturally more directional because of its large size, a source with equivalent directivity can be made by utilizing an array of traditional small loudspeakers, all driven together in-phase. Acoustically equal to a large speaker, this creates a larger source size compared to wavelength, and the resulting sound field is narrowed compared to a single small speaker. Large speaker arrays have been used in hundreds of arena sound systems to mitigate noise that would ordinarily travel to adjoining neighborhoods, along with limited applications in other applications where some degree of directivity is helpful, such as museums or similar display applications that can tolerate large speaker dimensions. Traditional speaker arrays can be fabricated in any shape or size, but a reduced physical dimension (relative to wavelength) will inherently sacrifice directivity in that dimension. The larger the speaker array, the more directional, and the smaller the size of the speaker array, the less directional it is. This is fundamental physics, and cannot be bypassed, even by using phased arrays or other signal processing methods. This is because the directivity pattern of any wave source is the Fourier Transform of the source function. Phased array design is, however, sometimes useful for beamsteering, or for sidelobe mitigation, but making these compromises necessarily reduces directivity. Acoustically, speaker arrays are essentially the same as sound domes, which have also been available for decades; the size of the dome opening mimics the acoustic properties of a large speaker of the same diameter (or, equivalently, a large speaker array of the same diameter). Domes, however, tend to weigh much less than the weight of comparable speaker arrays (15 lbs vs. 37 lbs, per the manufacturer's websites), and are far less expensive. 
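To make the size-versus-wavelength relationship above concrete, the following sketch evaluates the classic far-field pattern of an idealized uniform line source, a continuous stand-in for a straight speaker array; the source lengths and frequency are arbitrary example values.

```python
# Far-field amplitude pattern of an idealized uniform line source of length L:
#   D(theta) = sinc(L * sin(theta) / wavelength)   (normalized sinc)
# Source lengths and frequency below are arbitrary example values.
import numpy as np

c = 343.0                      # speed of sound in air, m/s
f = 2000.0                     # frequency, Hz
wavelength = c / f             # ~0.17 m

theta = np.radians(np.linspace(-90.0, 90.0, 3601))
for L in (0.05, 0.5, 2.0):     # small speaker vs. metre-scale arrays
    pattern = np.abs(np.sinc(L * np.sin(theta) / wavelength))
    main_lobe = np.degrees(theta[pattern >= 0.5])      # half-amplitude (-6 dB) region
    width = main_lobe.max() - main_lobe.min()
    print(f"L = {L:4.2f} m  ->  -6 dB beamwidth ~ {width:5.1f} degrees")
```

The larger the source relative to the wavelength, the narrower the printed beamwidth, which is the same conclusion the aperture argument above reaches without any computation.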
Other types of large speaker panels, such as electrostatic loudspeakers, tend to be more directional than small speakers, for the same reasons as above; they are somewhat more directional only because they tend to be physically larger than most common loudspeakers. Correspondingly, an electrostatic loudspeaker the size of a small traditional speaker would be non-directional. The directivity for various source sizes and shapes is given in. The directivity is shown to be a function only of the source size and shape, not of the specific type of transducer used. See also Loudspeaker Parabolic loudspeaker Sonic weaponry Sound from ultrasound References Sound Acoustics Ultrasound
Directional sound
[ "Physics" ]
799
[ "Classical mechanics", "Acoustics" ]
654,168
https://en.wikipedia.org/wiki/Receptor%20antagonist
A receptor antagonist is a type of receptor ligand or drug that blocks or dampens a biological response by binding to and blocking a receptor rather than activating it like an agonist. Antagonist drugs interfere in the natural operation of receptor proteins. They are sometimes called blockers; examples include alpha blockers, beta blockers, and calcium channel blockers. In pharmacology, antagonists have affinity but no efficacy for their cognate receptors, and binding will disrupt the interaction and inhibit the function of an agonist or inverse agonist at receptors. Antagonists mediate their effects by binding to the active site or to the allosteric site on a receptor, or they may interact at unique binding sites not normally involved in the biological regulation of the receptor's activity. Antagonist activity may be reversible or irreversible depending on the longevity of the antagonist–receptor complex, which, in turn, depends on the nature of antagonist–receptor binding. The majority of drug antagonists achieve their potency by competing with endogenous ligands or substrates at structurally defined binding sites on receptors. Etymology The English word antagonist in pharmaceutical terms comes from the Greek ἀνταγωνιστής – antagonistēs, "opponent, competitor, villain, enemy, rival", which is derived from anti- ("against") and agonizesthai ("to contend for a prize"). Receptors Biochemical receptors are large protein molecules that can be activated by the binding of a ligand such as a hormone or a drug. Receptors can be membrane-bound, as cell surface receptors, or inside the cell as intracellular receptors, such as nuclear receptors including those of the mitochondrion. Binding occurs as a result of non-covalent interactions between the receptor and its ligand, at locations called the binding site on the receptor. A receptor may contain one or more binding sites for different ligands. Binding to the active site on the receptor regulates receptor activation directly. The activity of receptors can also be regulated by the binding of a ligand to other sites on the receptor, as in allosteric binding sites. Antagonists mediate their effects through receptor interactions by preventing agonist-induced responses. This may be accomplished by binding to the active site or the allosteric site. In addition, antagonists may interact at unique binding sites not normally involved in the biological regulation of the receptor's activity to exert their effects. The term antagonist was originally coined to describe different profiles of drug effects. The biochemical definition of a receptor antagonist was introduced by Ariens and Stephenson in the 1950s. The currently accepted definition of receptor antagonist is based on the receptor occupancy model. It narrows the definition of antagonism to consider only those compounds with opposing activities at a single receptor. Agonists were thought to turn "on" a single cellular response by binding to the receptor, thus initiating a biochemical mechanism for change within a cell. Antagonists were thought to turn "off" that response by 'blocking' the receptor from the agonist. This definition also remains in use for physiological antagonists, substances that have opposing physiological actions, but act at different receptors.
For example, histamine lowers arterial pressure through vasodilation at the histamine H1 receptor, while adrenaline raises arterial pressure through vasoconstriction mediated by alpha-adrenergic receptor activation. Our understanding of the mechanism of drug-induced receptor activation and receptor theory and the biochemical definition of a receptor antagonist continues to evolve. The two-state model of receptor activation has given way to multistate models with intermediate conformational states. The discovery of functional selectivity and that ligand-specific receptor conformations occur and can affect interaction of receptors with different second messenger systems may mean that drugs can be designed to activate some of the downstream functions of a receptor but not others. This means efficacy may actually depend on where that receptor is expressed, altering the view that efficacy at a receptor is receptor-independent property of a drug. Pharmacodynamics Efficacy and potency By definition, antagonists display no efficacy to activate the receptors they bind. Antagonists do not maintain the ability to activate a receptor. Once bound, however, antagonists inhibit the function of agonists, inverse agonists, and partial agonists. In functional antagonist assays, a dose-response curve measures the effect of the ability of a range of concentrations of antagonists to reverse the activity of an agonist. The potency of an antagonist is usually defined by its half maximal inhibitory concentration (i.e., IC50 value). This can be calculated for a given antagonist by determining the concentration of antagonist needed to elicit half inhibition of the maximum biological response of an agonist. Elucidating an IC50 value is useful for comparing the potency of drugs with similar efficacies, however the dose-response curves produced by both drug antagonists must be similar. The lower the IC50 the greater the potency of the antagonist, and the lower the concentration of drug that is required to inhibit the maximum biological response. Lower concentrations of drugs may be associated with fewer side-effects. Affinity The affinity of an antagonist for its binding site (Ki), i.e. its ability to bind to a receptor, will determine the duration of inhibition of agonist activity. The affinity of an antagonist can be determined experimentally using Schild regression or for competitive antagonists in radioligand binding studies using the Cheng-Prusoff equation. Schild regression can be used to determine the nature of antagonism as beginning either competitive or non-competitive and Ki determination is independent of the affinity, efficacy or concentration of the agonist used. However, it is important that equilibrium has been reached. The effects of receptor desensitization on reaching equilibrium must also be taken into account. The affinity constant of antagonists exhibiting two or more effects, such as in competitive neuromuscular-blocking agents that also block ion channels as well as antagonising agonist binding, cannot be analyzed using Schild regression. Schild regression involves comparing the change in the dose ratio, the ratio of the EC50 of an agonist alone compared to the EC50 in the presence of a competitive antagonist as determined on a dose response curve. Altering the amount of antagonist used in the assay can alter the dose ratio. In Schild regression, a plot is made of the log (dose ratio-1) versus the log concentration of antagonist for a range of antagonist concentrations. 
The affinity or Ki is where the line cuts the x-axis on the regression plot. Whereas, with Schild regression, antagonist concentration is varied in experiments used to derive Ki values from the Cheng-Prusoff equation, agonist concentrations are varied. Affinity for competitive agonists and antagonists is related by the Cheng-Prusoff factor used to calculate the Ki (affinity constant for an antagonist) from the shift in IC50 that occurs during competitive inhibition. The Cheng-Prusoff factor takes into account the effect of altering agonist concentration and agonist affinity for the receptor on inhibition produced by competitive antagonists. Types Competitive Competitive antagonists bind to receptors at the same binding site (active site) as the endogenous ligand or agonist, but without activating the receptor. Agonists and antagonists "compete" for the same binding site on the receptor. Once bound, an antagonist will block agonist binding. Sufficient concentrations of an antagonist will displace the agonist from the binding sites, resulting in a lower frequency of receptor activation. The level of activity of the receptor will be determined by the relative affinity of each molecule for the site and their relative concentrations. High concentrations of a competitive agonist will increase the proportion of receptors that the agonist occupies, higher concentrations of the antagonist will be required to obtain the same degree of binding site occupancy. In functional assays using competitive antagonists, a parallel rightward shift of agonist dose–response curves with no alteration of the maximal response is observed. Competitive antagonists are used to prevent the activity of drugs, and to reverse the effects of drugs that have already been consumed. Naloxone (also known as Narcan) is used to reverse opioid overdose caused by drugs such as heroin or morphine. Similarly, Ro15-4513 is an antidote to alcohol and flumazenil is an antidote to benzodiazepines. Competitive antagonists are sub-classified as reversible (surmountable) or irreversible (insurmountable) competitive antagonists, depending on how they interact with their receptor protein targets. Reversible antagonists, which bind via noncovalent intermolecular forces, will eventually dissociate from the receptor, freeing the receptor to be bound again. Irreversible antagonists bind via covalent intermolecular forces. Because there is not enough free energy to break covalent bonds in the local environment, the bond is essentially "permanent", meaning the receptor-antagonist complex will never dissociate. The receptor will thereby remain permanently antagonized until it is ubiquitinated and thus destroyed. Non-competitive A non-competitive antagonist is a type of insurmountable antagonist that may act in one of two ways: by binding to an allosteric site of the receptor, or by irreversibly binding to the active site of the receptor. The former meaning has been standardised by the IUPHAR, and is equivalent to the antagonist being called an allosteric antagonist. While the mechanism of antagonism is different in both of these phenomena, they are both called "non-competitive" because the end-results of each are functionally very similar. Unlike competitive antagonists, which affect the amount of agonist necessary to achieve a maximal response but do not affect the magnitude of that maximal response, non-competitive antagonists reduce the magnitude of the maximum response that can be attained by any amount of agonist. 
This property earns them the name "non-competitive" because their effects cannot be negated, no matter how much agonist is present. In functional assays of non-competitive antagonists, depression of the maximal response of agonist dose-response curves, and in some cases, rightward shifts, is produced. The rightward shift will occur as a result of a receptor reserve (also known as spare receptors) and inhibition of the agonist response will only occur when this reserve is depleted. An antagonist that binds to the active site of a receptor is said to be "non-competitive" if the bond between the active site and the antagonist is irreversible or nearly so. This usage of the term "non-competitive" may not be ideal, however, since the term "irreversible competitive antagonism" may also be used to describe the same phenomenon without the potential for confusion with the second meaning of "non-competitive antagonism" discussed below. The second form of "non-competitive antagonists" act at an allosteric site. These antagonists bind to a distinctly separate binding site from the agonist, exerting their action to that receptor via the other binding site. They do not compete with agonists for binding at the active site. The bound antagonists may prevent conformational changes in the receptor required for receptor activation after the agonist binds. Cyclothiazide has been shown to act as a reversible non-competitive antagonist of mGluR1 receptor. Another example of a non-competitive is phenoxybenzamine which binds irreversibly (with covalent bonds) to alpha-adrenergic receptors, which in turn reduces the fraction of available receptors and reduces the maximal effect that can be produced by the agonist. Uncompetitive Uncompetitive antagonists differ from non-competitive antagonists in that they require receptor activation by an agonist before they can bind to a separate allosteric binding site. This type of antagonism produces a kinetic profile in which "the same amount of antagonist blocks higher concentrations of agonist better than lower concentrations of agonist". Memantine, used in the treatment of Alzheimer's disease, is an uncompetitive antagonist of the NMDA receptor. Silent antagonists Silent antagonists are competitive receptor antagonists that have zero intrinsic activity for activating a receptor. They are true antagonists, so to speak. The term was created to distinguish fully inactive antagonists from weak partial agonists or inverse agonists. Partial agonists Partial agonists are defined as drugs that, at a given receptor, might differ in the amplitude of the functional response that they elicit after maximal receptor occupancy. Although they are agonists, partial agonists can act as a competitive antagonist in the presence of a full agonist, as it competes with the full agonist for receptor occupancy, thereby producing a net decrease in the receptor activation as compared to that observed with the full agonist alone. Clinically, their usefulness is derived from their ability to enhance deficient systems while simultaneously blocking excessive activity. Exposing a receptor to a high level of a partial agonist will ensure that it has a constant, weak level of activity, whether its normal agonist is present at high or low levels. In addition, it has been suggested that partial agonism prevents the adaptive regulatory mechanisms that frequently develop after repeated exposure to potent full agonists or antagonists. E.g. 
Buprenorphine, a partial agonist of the μ-opioid receptor, binds with weak morphine-like activity and is used clinically as an analgesic in pain management and as an alternative to methadone in the treatment of opioid dependence. Inverse agonists An inverse agonist can have effects similar to those of an antagonist, but causes a distinct set of downstream biological responses. Constitutively active receptors that exhibit intrinsic or basal activity can have inverse agonists, which not only block the effects of binding agonists like a classical antagonist but also inhibit the basal activity of the receptor. Many drugs previously classified as antagonists are now beginning to be reclassified as inverse agonists because of the discovery of constitutive active receptors. Antihistamines, originally classified as antagonists of histamine H1 receptors have been reclassified as inverse agonists. Reversibility Many antagonists are reversible antagonists that, like most agonists, will bind and unbind a receptor at rates determined by receptor-ligand kinetics. Irreversible antagonists covalently bind to the receptor target and, in general, cannot be removed; inactivating the receptor for the duration of the antagonist effects is determined by the rate of receptor turnover, the rate of synthesis of new receptors. Phenoxybenzamine is an example of an irreversible alpha blocker—it permanently binds to α adrenergic receptors, preventing adrenaline and noradrenaline from binding. Inactivation of receptors normally results in a depression of the maximal response of agonist dose-response curves and a right shift in the curve occurs where there is a receptor reserve similar to non-competitive antagonists. A washout step in the assay will usually distinguish between non-competitive and irreversible antagonist drugs, as effects of non-competitive antagonists are reversible and activity of agonist will be restored. Irreversible competitive antagonists also involve competition between the agonist and antagonist of the receptor, but the rate of covalent bonding differs and depends on affinity and reactivity of the antagonist. For some antagonists, there may be a distinct period during which they behave competitively (regardless of basal efficacy), and freely associate to and dissociate from the receptor, determined by receptor-ligand kinetics. But, once irreversible bonding has taken place, the receptor is deactivated and degraded. As for non-competitive antagonists and irreversible antagonists in functional assays with irreversible competitive antagonist drugs, there may be a shift in the log concentration–effect curve to the right, but, in general, both a decrease in slope and a reduced maximum are obtained. See also Enzyme inhibitor Growth factor receptor inhibitor Selective receptor modulator References External links Signal transduction Pharmacodynamics
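As a numerical illustration of the Schild and Cheng–Prusoff relations discussed in the pharmacodynamics section above, the sketch below converts a measured IC50 into a Ki (using the radioligand-binding form of the Cheng–Prusoff equation) and reads an antagonist affinity off an idealized Schild plot; all concentrations are made-up example values, not data from any assay.

```python
# Illustrative pharmacodynamic calculations (example numbers only).
import numpy as np

# Cheng-Prusoff, radioligand-binding form: Ki = IC50 / (1 + [L]/Kd)
ic50, ligand_conc, kd = 250e-9, 10e-9, 5e-9   # molar, hypothetical assay values
ki = ic50 / (1 + ligand_conc / kd)
print(f"Ki ~ {ki * 1e9:.0f} nM")              # ~83 nM

# Idealized Schild analysis for a simple competitive antagonist:
# dose ratio DR = 1 + [B]/KB, so log(DR - 1) vs log[B] has slope 1
# and crosses zero at log KB (the pA2 point).
kb_true = 50e-9
antagonist = np.array([0.1e-6, 0.3e-6, 1e-6, 3e-6])   # molar
dose_ratio = 1 + antagonist / kb_true                  # simulated, noise-free
slope, intercept = np.polyfit(np.log10(antagonist), np.log10(dose_ratio - 1), 1)
kb_est = 10 ** (-intercept / slope)                    # x-intercept of the Schild plot
print(f"Schild slope ~ {slope:.2f}, estimated KB ~ {kb_est * 1e9:.0f} nM")
```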
Receptor antagonist
[ "Chemistry", "Biology" ]
3,390
[ "Pharmacology", "Pharmacodynamics", "Signal transduction", "Receptor antagonists", "Biochemistry", "Neurochemistry" ]
2,322,579
https://en.wikipedia.org/wiki/Coronal%20plane
The coronal plane (also known as the frontal plane) is an anatomical plane that divides the body into dorsal and ventral sections. It is perpendicular to the sagittal and transverse planes. Details The coronal plane is an example of a longitudinal plane. For a human, the mid-coronal plane would transect a standing body into two halves (front and back, or anterior and posterior) in an imaginary line that cuts through both shoulders. The description of the coronal plane applies to most animals as well as humans even though humans walk upright and the various planes are usually shown in the vertical orientation. The sternal plane (planum sternale) is a coronal plane which transects the front of the sternum. Etymology The term is derived from Latin corona ('garland, crown'), from Ancient Greek κορώνη (korōnē, 'garland, wreath'). The coronal plane is so called because it lies in the same direction as the coronal suture. Additional images See also Anatomical terms of location Sagittal plane Transverse plane References External links Anatomical planes
Coronal plane
[ "Mathematics" ]
226
[ "Planes (geometry)", "Anatomical planes" ]
2,322,835
https://en.wikipedia.org/wiki/Metallography
Metallography is the study of the physical structure and components of metals, by using microscopy. Ceramic and polymeric materials may also be prepared using metallographic techniques, hence the terms ceramography, plastography and, collectively, materialography. Preparing metallographic specimens The surface of a metallographic specimen is prepared by various methods of grinding, polishing, and etching. After preparation, it is often analyzed using optical or electron microscopy. Using only metallographic techniques, a skilled technician can identify alloys and predict material properties. Mechanical preparation is the most common preparation method. Successively finer abrasive particles are used to remove material from the sample surface until the desired surface quality is achieved. Many different machines are available for doing this grinding and polishing, which are able to meet different demands for quality, capacity, and reproducibility. A systematic preparation method is the easiest way to achieve the true structure. Sample preparation must therefore pursue rules which are suitable for most materials. Different materials with similar properties (hardness and ductility) will respond alike and thus require the same consumables during preparation. Metallographic specimens are typically "mounted" using a hot compression thermosetting resin. In the past, phenolic thermosetting resins have been used, but modern epoxy is becoming more popular because reduced shrinkage during curing results in a better mount with superior edge retention. A typical mounting cycle will compress the specimen and mounting media to and heat to a temperature of . When specimens are very sensitive to temperature, "cold mounts" may be made with a two-part epoxy resin. Mounting a specimen provides a safe, standardized, and ergonomic way by which to hold a sample during the grinding and polishing operations. After mounting, the specimen is wet ground to reveal the surface of the metal. The specimen is successively ground with finer and finer abrasive media. Silicon carbide abrasive paper was the first method of grinding and is still used today. Many metallographers, however, prefer to use a diamond grit suspension which is dosed onto a reusable fabric pad throughout the polishing process. Diamond grit in suspension might start at 9 micrometres and finish at one micrometre. Generally, polishing with diamond suspension gives finer results than using silicon carbide papers (SiC papers), especially with revealing porosity, which silicon carbide paper sometimes "smear" over. After grinding the specimen, polishing is performed. Typically, a specimen is polished with a slurry of alumina, silica, or diamond on a napless cloth to produce a scratch-free mirror finish, free from smear, drag, or pull-outs and with minimal deformation remaining from the preparation process. After polishing, certain microstructural constituents can be seen with the microscope, e.g., inclusions and nitrides. If the crystal structure is non-cubic (e.g., a metal with a hexagonal-closed packed crystal structure, such as Ti or Zr) the microstructure can be revealed without etching using crossed polarized light (light microscopy). Otherwise, the microstructural constituents of the specimen are revealed by using a suitable chemical or electrolytic etchant. Non-destructive surface analysis techniques can involve applying a thin film or varnish that can be peeled off after drying and examined under a microscope. 
The technique was developed by Pierre Armand Jacquet and others in 1957. Analysis techniques Many different microscopy techniques are used in metallographic analysis. Prepared specimens should be examined with the unaided eye after etching to detect any visible areas that have responded to the etchant differently from the norm as a guide to where microscopical examination should be employed. Light optical microscopy (LOM) examination should always be performed prior to any electron metallographic (EM) technique, as these are more time-consuming to perform and the instruments are much more expensive. Further, certain features can be best observed with the LOM, e.g., the natural color of a constituent can be seen with the LOM but not with EM systems. Also, image contrast of microstructures at relatively low magnifications, e.g., <500X, is far better with the LOM than with the scanning electron microscope (SEM), while transmission electron microscopes (TEM) generally cannot be utilized at magnifications below about 2000 to 3000X. LOM examination is fast and can cover a large area. Thus, the analysis can determine if the more expensive, more time-consuming examination techniques using the SEM or the TEM are required and where on the specimen the work should be concentrated. Design, resolution, and image contrast Light microscopes are designed for placement of the specimen's polished surface on the specimen stage either upright or inverted. Each type has advantages and disadvantages. Most LOM work is done at magnifications between 50 and 1000X. However, with a good microscope, it is possible to perform examination at higher magnifications, e.g., 2000X, and even higher, as long as diffraction fringes are not present to distort the image. However, the resolution limit of the LOM will not be better than about 0.2 to 0.3 micrometers. Special methods are used at magnifications below 50X, which can be very helpful when examining the microstructure of cast specimens where greater spatial coverage in the field of view may be required to observe features such as dendrites. Besides considering the resolution of the optics, one must also maximize visibility by maximizing image contrast. A microscope with excellent resolution may not be able to image a structure, that is there is no visibility, if image contrast is poor. Image contrast depends upon the quality of the optics, coatings on the lenses, and reduction of flare and glare; but, it also requires proper specimen preparation and good etching techniques. So, obtaining good images requires maximum resolution and image contrast. Bright- and dark-field microscopy Most LOM observations are conducted using bright-field (BF) illumination, where the image of any flat feature perpendicular to the incident light path is bright, or appears to be white. But, other illumination methods can be used and, in some cases, may provide superior images with greater detail. Dark-field microscopy (DF), is an alternative method of observation that provides high-contrast images and actually greater resolution than bright-field. In dark-field illumination, the light from features perpendicular to the optical axis is blocked and appears dark while the light from features inclined to the surface, which look dark in BF, appear bright, or "self-luminous" in DF. Grain boundaries, for example, are more vivid in DF than BF. 
Polarized light microscopy Polarized light (PL) is very useful when studying the structure of metals with non-cubic crystal structures (mainly metals with hexagonal close-packed (hcp) crystal structures). If the specimen is prepared with minimal damage to the surface, the structure can be seen vividly in cross-polarized light (the optic axis of the polarizer and analyzer are 90 degrees to each other, i.e., crossed). In some cases, an hcp metal can be chemically etched and then examined more effectively with PL. Tint etched surfaces, where a thin film (such as a sulfide, molybdate, chromate or elemental selenium film) is grown epitaxially on the surface to a depth where interference effects are created when examined with BF producing color images, can be improved with PL. If it is difficult to get a good interference film with good coloration, the colors can be improved by examination in PL using a sensitive tint (ST) filter. Differential interference contrast microscopy Another useful imaging mode is differential interference contrast (DIC), which is usually obtained with a system designed by the Polish physicist Georges Nomarski. This system gives the best detail. DIC converts minor height differences on the plane-of-polish, invisible in BF, into visible detail. The detail in some cases can be quite striking and very useful. If an ST filter is used along with a Wollaston prism, color is introduced. The colors are controlled by the adjustment of the Wollaston prism, and have no specific physical meaning, per se. But, visibility may be better. Oblique illumination DIC has largely replaced the older oblique illumination (OI) technique, which was available on reflected light microscopes prior to about 1975. In OI, the vertical illuminator is offset from perpendicular, producing shading effects that reveal height differences. This procedure reduces resolution and yields uneven illumination across the field of view. Nevertheless, OI was useful when people needed to know if a second phase particle was standing above or was recessed below the plane-of-polish, and is still available on a few microscopes. OI can be created on any microscope by placing a piece of paper under one corner of the mount so that the plane-of-polish is no longer perpendicular to the optical axis. SRAS microscopy Spatially resolve acoustic spectroscopy (SRAS) is an optical technique that uses optically generated high frequency surface acoustic waves to probe the direction elastic parameters of the surface and, as such, it can vividly reveal the surface microstructure of metals. It can also image the crystallographic orientation and determine the single crystal elasticity matrix of the material. Scanning electron and transmission electron microscopes If a specimen must be observed at higher magnification, it can be examined with a scanning electron microscope (SEM), or a transmission electron microscope (TEM). When equipped with an energy dispersive spectrometer (EDS), the chemical composition of the microstructural features can be determined. The ability to detect low-atomic number elements, such as carbon, oxygen, and nitrogen, depends upon the nature of the detector used. But, quantification of these elements by EDS is difficult and their minimum detectable limits are higher than when a wavelength-dispersive spectrometer (WDS) is used. But quantification of composition by EDS has improved greatly over time. 
The WDS system has historically had better sensitivity (ability to detect low amounts of an element) and ability to detect low-atomic weight elements, as well as better quantification of compositions, compared to EDS, but it was slower to use. Again, in recent years, the speed required to perform WDS analysis has improved substantially. Historically, EDS was used with the SEM while WDS was used with the electron microprobe analyzer (EMPA). Today, EDS and WDS is used with both the SEM and the EMPA. However, a dedicated EMPA is not as common as an SEM. X-ray diffraction techniques Characterization of microstructures has also been performed using x-ray diffraction (XRD) techniques for many years. XRD can be used to determine the percentages of various phases present in a specimen if they have different crystal structures. For example, the amount of retained austenite in a hardened steel is best measured using XRD (ASTM E 975). If a particular phase can be chemically extracted from a bulk specimen, it can be identified using XRD based on the crystal structure and lattice dimensions. This work can be complemented by EDS and/or WDS analysis where the chemical composition is quantified. But EDS and WDS are difficult to apply to particles less than 2-3 micrometers in diameter. For smaller particles, diffraction techniques can be performed using the TEM for identification and EDS can be performed on small particles if they are extracted from the matrix using replication methods to avoid detection of the matrix along with the precipitate. Quantitative metallography A number of techniques exist to quantitatively analyze metallographic specimens. These techniques are valuable in the research and production of all metals and alloys and non-metallic or composite materials. Microstructural quantification is performed on a prepared, two-dimensional plane through the three-dimensional part or component. Measurements may involve simple metrology techniques, e.g., the measurement of the thickness of a surface coating, or the apparent diameter of a discrete second-phase particle, (for example, spheroidal graphite in ductile iron). Measurement may also require application of stereology to assess matrix and second-phase structures. Stereology is the field of taking 0-, 1- or 2-dimensional measurements on the two-dimensional sectioning plane and estimating the amount, size, shape or distribution of the microstructure in three dimensions. These measurements may be made using manual procedures with the aid of templates overlaying the microstructure, or with automated image analyzers. In all cases, adequate sampling must be made to obtain a proper statistical basis for the measurement. Efforts to eliminate bias are required. Some of the most basic measurements include determination of the volume fraction of a phase or constituent, measurement of the grain size in polycrystalline metals and alloys, measurement of the size and size distribution of particles, assessment of the shape of particles, and spacing between particles. Standards organizations, including ASTM International's Committee E-4 on Metallography and some other national and international organizations, have developed standard test methods describing how to characterize microstructures quantitatively. 
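One of the simplest of these measurements, a systematic point count used to estimate the volume fraction of a phase, can be sketched in a few lines. The snippet below is illustrative only: it uses synthetic random data and hypothetical function names, whereas a real measurement would follow the relevant standard procedure on actual micrographs.

```python
# A small sketch of a systematic point count: a regular grid of points is laid
# over the micrograph and the fraction of points falling on the phase of
# interest estimates its volume fraction. Data and names are illustrative only.

import numpy as np

def point_count_volume_fraction(phase_mask, grid_spacing):
    """phase_mask: 2-D boolean array, True where the pixel belongs to the phase.
    grid_spacing: spacing (in pixels) of the superimposed point grid."""
    grid_points = phase_mask[::grid_spacing, ::grid_spacing]
    return grid_points.mean()      # fraction of grid points landing on the phase

# Synthetic "micrograph": a random second phase occupying ~20% of the section.
rng = np.random.default_rng(0)
mask = rng.random((1000, 1000)) < 0.20
print(point_count_volume_fraction(mask, grid_spacing=25))   # ~0.20, within sampling error
```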
For example, the amount of a phase or constituent, that is, its volume fraction, is defined in ASTM E 562; manual grain size measurements are described in ASTM E 112 (equiaxed grain structures with a single size distribution) and E 1182 (specimens with a bi-modal grain size distribution); while ASTM E 1382 describes how any grain size type or condition can be measured using image analysis methods. Characterization of nonmetallic inclusions using standard charts is described in ASTM E 45 (historically, E 45 covered only manual chart methods and an image analysis method for making such chart measurements was described in ASTM E 1122. The image analysis methods are currently being incorporated into E 45). A stereological method for characterizing discrete second-phase particles, such as nonmetallic inclusions, carbides, graphite, etc., is presented in ASTM E 1245. See also Henry Clifton Sorby Holger F. Struer References "Metallographic and Materialographic Specimen Preparation, Light Microscopy, Image Analysis and Hardness Testing", Kay Geels in collaboration with Struers A/S, ASTM International 2006. Metallography and Microstructures, Vol. 9, ASM Handbook, ASM International, Materials Park, OH, 2005. Metallography: Principles and Practice, G. F. Vander Voort, ASM International, Materials Park, OH, 1999. Vol. 03.01 of the ASTM Standards covers standards devoted to metallography (and mechanical property testing) G. Petzow, Metallographic Etching, 2nd Ed., ASM International, 1999. Metalog Guide, L. Bjerregaard, K. Geels, B. Ottesen, M. Rückert, Struers A/S, Copenhagen, Denmark, 2000. External links HKDH Bhadeshia An Introduction to Sample Preparation for Metallography, Cambridge University. Video on metallography Metallography Part I - Macroscopic Techniques, Karlsruhe University of Applied Sciences. Video on metallography Metallography Part II - Microscopic Techniques, Karlsruhe University of Applied Sciences. Materials testing Metallurgy
Metallography
[ "Chemistry", "Materials_science", "Engineering" ]
3,250
[ "Metallurgy", "Materials testing", "Materials science", "nan" ]
2,323,901
https://en.wikipedia.org/wiki/Underground%20living
Underground living refers to living below the ground's surface, whether in natural or manmade caves or structures (earth shelters). Underground dwellings are an alternative to above-ground dwellings for some home seekers, including those who are looking to minimize impact on the environment. Factories and office buildings can benefit from underground facilities for many of the same reasons as underground dwellings such as noise abatement, energy use, and security. Some advantages of underground houses include resistance to severe weather, quiet living space, an unobtrusive presence in the surrounding landscape, and a nearly constant interior temperature due to the natural insulating properties of the surrounding earth. One appeal is the energy efficiency and environmental friendliness of underground dwellings. However, underground living does have certain disadvantages, such as the potential for flooding, which in some cases may require special pumping systems to be installed. It is the preferred mode of housing to communities in such extreme environments as Italy's Sassi di Matera, Australia's Coober Pedy, Berber caves as those in Matmâta, Tunisia, and even Amundsen–Scott South Pole Station. Often, underground living structures are not entirely underground; typically, they can be exposed on one side when built into a hill. This exposure can significantly improve interior lighting, although at the expense of greater exposure to the elements. History There is only written documentation of Scythian and German subterranean dwellings. Remnants have been found in Switzerland, Mecklenburg and southern Bavaria, "They had a round shape with a kettle-like widening at the bottom, from eleven to fifteen metres in diameter, and from two to four metres in depth". In the final stage of World War II, the Nazis relocated entire armaments factories underground, as the Allies' air supremacy made surface structures vulnerable to daylight strategic bombing raids. Construction methods In parts of rural Australia, subterranean houses are built in a manner similar to prairie dog holes. There is a "chimney" placed higher than ground-level and a lower, ground-level, entrance. This orientation causes a continuous breeze throughout the house, reducing or eliminating the need for air conditioning. Sustainable Development of Urban Underground Space (UUS) As a step towards achieving the United Nations' SDGs (in particular Goal 11: Make cities and human settlements inclusive, safe, resilient and sustainable), urban cities in developed economies of the world are increasingly looking "downwards" rather than expanding limited land resources at the surface. Helsinki, Singapore, Hong Kong, Minneapolis, Tokyo, Shanghai, Montreal etc. are some of the benchmark cities in this regard. Underground space as a valuable land resource can be integrated into a general urban resources management scheme and development policy, by rationalizing resource supply according to economic demand, and by coordinating stakeholders from the public administration, private administration, private developers and users. The consideration of the other dimension (underground) in city planning holds a promising future for sustainable underground living, where it can contribute to making cities more liveable, resilient and inclusive. Historically planning of subsurface facilities has been subject to an ad-hoc development approach by separate sectors and disciplines. 
Successful integration of Urban Underground Space into city planning however requires a synergy of several disciplines and stakeholders to achieve rational use of space resources. Structures There are various ways to develop structures for underground living. Caves (Natural) have been used for millennia as shelter. Caves (Constructed)/Dugouts are a common structure for underground living. Although the tunnelling techniques required to make them have been well developed by the mining industry, they can be considerably more costly and dangerous to make than some of the alternatives. On the plus side, they can be quite deep. Some examples would be the Sassi di Matera in Italy, declared by UNESCO a World Heritage Site, and the town of Coober Pedy in Australia, built underground to avoid the blistering heat of the Outback. One of the traditional house types in China is the Yaodong, a cave house. Also, see the Nok and Mamproug Cave Dwellings in Togo, Africa. Earth berm structures are essentially traditional homes that have then been buried, typically leaving at least one wall exposed for lighting and ventilation. However, because they are to be buried, the structures must be made of materials capable of surviving the increased weight and moisture of being underground. Rammed earth structures are not truly underground, in the sense of being below grade or buried beneath a berm. Instead, they are structures made of tightly packed earth, similar to concrete but without the binding properties of cement. These structures share many properties with traditional adobe construction. Culvert structures are a very simple approach. Large precast concrete pipes and boxes a few metres across are assembled into the desired arrangement of rooms and hallways onsite, either atop the existing ground or below grade in excavated trenches, then buried. This approach can also be referred to as Cut and Cover. Urban underground living is so common that few even think of it as underground. Many shopping malls are partially or totally underground, in the sense that they are below grade. Though not as exotic as the other underground structures, those working in such urban underground structures are in fact living underground. Shaft structures. For example, Taisei Corporation proposed to build Alice City in Tokyo Japan. The project would incorporate a very wide and deep shaft, within which would be built levels for habitation, all looking in toward a hollow core topped with a huge skylight. Tunnels, including storm drains, are used by homeless people as shelter in large cities. In fiction Underground living has been a feature of fiction, such as the hobbit holes of the Shire as described in the stories of J. R. R. Tolkien and The Underground City by Jules Verne. Some films are almost entirely set underground, such as THX 1138. The Fallout series also has underground shelters called Vaults. The majority of the early short science-fiction story "The Machine Stops" by British author E.M. Forster is set in an imagined underground city. See also Parent categories: : underground structures , umbrella article for underground dwellings and facilities Types of underground living spaces and people, and related topics: Notes References Jochelson, Waldemar. (1906). "Past and Present Subterranean Dwellings of the Tribes of North Eastern Asia and North Western America." In Congrès International des Américanistes, XVe session tenue à Québec en 1906, Vol. 2. Quebec: International Congress of Americanists, 1906, pp. 115–128. 
Reprinted Nendeln, Liechtenstein: Kraus Reprint, 1968. External links Living Semi-subterranean structures Underground construction
Underground living
[ "Engineering" ]
1,341
[ "Underground construction", "Civil engineering", "Construction" ]
2,324,711
https://en.wikipedia.org/wiki/Rogers%E2%80%93Ramanujan%20identities
In mathematics, the Rogers–Ramanujan identities are two identities related to basic hypergeometric series and integer partitions. The identities were first discovered and proved by Leonard James Rogers in 1894, and were subsequently rediscovered (without a proof) by Srinivasa Ramanujan some time before 1913. Ramanujan had no proof, but rediscovered Rogers's paper in 1917, and they then published a joint new proof. Issai Schur independently rediscovered and proved the identities. Definition The Rogers–Ramanujan identities are G(q) = Σ_{n≥0} q^(n²)/(q;q)_n = 1/((q;q⁵)_∞ (q⁴;q⁵)_∞) and H(q) = Σ_{n≥0} q^(n²+n)/(q;q)_n = 1/((q²;q⁵)_∞ (q³;q⁵)_∞). Here, (a;q)_n denotes the q-Pochhammer symbol. Combinatorial interpretation Consider the following: q^(n²)/(q;q)_n is the generating function for partitions with exactly n parts such that adjacent parts have difference at least 2; 1/((q;q⁵)_∞ (q⁴;q⁵)_∞) is the generating function for partitions such that each part is congruent to either 1 or 4 modulo 5; q^(n²+n)/(q;q)_n is the generating function for partitions with exactly n parts such that adjacent parts have difference at least 2 and such that the smallest part is at least 2; 1/((q²;q⁵)_∞ (q³;q⁵)_∞) is the generating function for partitions such that each part is congruent to either 2 or 3 modulo 5. The Rogers–Ramanujan identities can now be interpreted in the following way. Let n be a non-negative integer. The number of partitions of n such that the adjacent parts differ by at least 2 is the same as the number of partitions of n such that each part is congruent to either 1 or 4 modulo 5. The number of partitions of n such that the adjacent parts differ by at least 2 and such that the smallest part is at least 2 is the same as the number of partitions of n such that each part is congruent to either 2 or 3 modulo 5. Alternatively: the number of partitions of n such that, with k parts, the smallest part is at least k is the same as the number of partitions of n such that each part is congruent to either 1 or 4 modulo 5; and the number of partitions of n such that, with k parts, the smallest part is at least k + 1 is the same as the number of partitions of n such that each part is congruent to either 2 or 3 modulo 5. Application to partitions Since the terms occurring in the identities are generating functions of certain partitions, the identities make statements about partitions (decompositions) of natural numbers. The number sequences given by the coefficients of the Maclaurin series of the Rogers–Ramanujan functions G and H are special partition number sequences of level 5: the number sequence (OEIS code: A003114) gives the number of ways the natural number n can be decomposed into summands of the form 5a + 1 or 5a + 4 with a ∈ {0, 1, 2, ...}. Thus A003114(n) is the number of partitions of an integer n in which adjacent parts differ by at least 2, which equals the number of partitions in which each part is congruent to 1 or 4 mod 5. The number sequence (OEIS code: A003106) analogously gives the number of ways the natural number n can be decomposed into summands of the form 5a + 2 or 5a + 3 with a ∈ {0, 1, 2, ...}. Thus A003106(n) is the number of partitions of an integer n in which adjacent parts differ by at least 2 and in which the smallest part is greater than or equal to 2, which equals the number of partitions whose parts are congruent to 2 or 3 mod 5. (These counts for small n can be checked directly; a short computational sketch is given below, after the continued-fraction definitions.) Rogers–Ramanujan continued fractions R and S Definition of the continued fractions The continued fraction R(q) = q^(1/5) / (1 + q/(1 + q²/(1 + q³/(1 + ...)))) is called the Rogers–Ramanujan continued fraction, and a companion continued fraction S(q) is called the alternating Rogers–Ramanujan continued fraction. 
{| class="wikitable" !Standardized continued fraction !Alternating continued fraction |- | | |} The factor creates a quotient of module functions and it also makes these shown continued fractions modular: This definition applies for the continued fraction mentioned: This is the definition of the Ramanujan theta function: With this function, the continued fraction R can be created this way: . The connection between the continued fraction and the Rogers–Ramanujan functions was already found by Rogers in 1894 (and later independently by Ramanujan). The continued fraction can also be expressed by the Dedekind eta function: The alternating continued fraction has the following identities to the remaining Rogers–Ramanujan functions and to the Ramanujan theta function described above: Identities with Jacobi theta functions The following definitions are valid for the Jacobi "Theta-Nullwert" functions: And the following product definitions are identical to the total definitions mentioned: These three so-called theta zero value functions are linked to each other using the Jacobian identity: The mathematicians Edmund Taylor Whittaker and George Neville Watson discovered these definitional identities. The Rogers–Ramanujan continued fraction functions and have these relationships to the theta Nullwert functions: The element of the fifth root can also be removed from the elliptic nome of the theta functions and transferred to the external tangent function. In this way, a formula can be created that only requires one of the three main theta functions: Modular modified functions of G and H Definition of the modular form of G and H An elliptic function is a modular function if this function in dependence on the elliptic nome as an internal variable function results in a function, which also results as an algebraic combination of Legendre's elliptic modulus and its complete elliptic integrals of the first kind in the K and K' form. The Legendre's elliptic modulus is the numerical eccentricity of the corresponding ellipse. If you set (where the imaginary part of is positive), following two functions are modular functions! If q = e2πiτ, then q−1/60G(q) and q11/60H(q) are modular functions of τ. For the Rogers–Ramanujan continued fraction R(q) this formula is valid based on the described modular modifications of G and H: Special values These functions have the following values for the reciprocal of Gelfond's constant and for the square of this reciprocal: The Rogers–Ramanujan continued fraction takes the following ordinate values for these abscissa values: {| class="wikitable" | |- | |} Dedekind eta function identities Derivation by the geometric mean Given are the mentioned definitions of and in this already mentioned way: The Dedekind eta function identities for the functions G and H result by combining only the following two equation chains: The quotient is the Rogers Ramanujan continued fraction accurately: But the product leads to a simplified combination of Pochhammer operators: The geometric mean of these two equation chains directly lead to following expressions in dependence of the Dedekind eta function in their Weber form: In this way the modulated functions and are represented directly using only the continued fraction R and the Dedekind eta function quotient! 
With the Pochhammer products alone, the following identity then applies to the non-modulated functions G and H: Pentagonal number theorem For the Dedekind eta function according to Weber's definition these formulas apply: The fourth formula describes the pentagonal number theorem because of the exponents! These basic definitions apply to the pentagonal numbers and the card house numbers: The fifth formula contains the Regular Partition Numbers as coefficients. The Regular Partition Number Sequence itself indicates the number of ways in which a positive integer number can be split into positive integer summands. For the numbers to , the associated partition numbers with all associated number partitions are listed in the following table: Further Dedekind eta identities The following further simplification for the modulated functions and can be undertaken. This connection applies especially to the Dedekind eta function from the fifth power of the elliptic nome: These two identities with respect to the Rogers–Ramanujan continued fraction were given for the modulated functions and : The combination of the last three formulas mentioned results in the following pair of formulas: {| class="wikitable" | |- | |} Reduced Weber modular function The Weber modular functions in their reduced form are an efficient way of computing the values of the Rogers–Ramanujan functions: First of all we introduce the reduced Weber modular functions in that pattern: This function fulfills following equation of sixth degree: {| class="wikitable" | |} Therefore this function is an algebraic function indeed. But along with the Abel–Ruffini theorem this function in relation to the eccentricity can not be represented by elementary expressions. However there are many values that in fact can be expressed elementarily. Four examples shall be given for this: First example: {| class="wikitable" | |- | |} Second example: {| class="wikitable" | |- | |} Third example: {| class="wikitable" | |- | |} Fourth example: {| class="wikitable" | |- | |} For that function, a further expression is valid: Exact eccentricity identity for the functions G and H In this way the accurate eccentricity dependent formulas for the functions G and H can be generated: Following Dedekind eta function quotient has this eccentricity dependency: This is the eccentricity dependent formula for the continued fraction R: The last three now mentioned formulas will be inserted into the final formulas mentioned in the section above: {| class="wikitable" | |- | |} On the left side of the balances the functions and in relation to the elliptic nome function are written down directly. And on the right side an algebraic combination of the eccentricity is formulated. Therefore these functions and are modular functions indeed! Application to quintic equations Discovery of the corresponding modulus by Charles Hermite The general case of quintic equations in the Bring–Jerrard form has a non-elementary solution based on the Abel–Ruffini theorem and will now be explained using the elliptic nome of the corresponding modulus, described by the lemniscate elliptic functions in a simplified way. The real solution for all real values can be determined as follows: Alternatively, the same solution can be presented in this way: The mathematician Charles Hermite determined the value of the elliptic modulus k in relation to the coefficient of the absolute term of the Bring–Jerrard form. 
In his essay "Sur la résolution de l'Équation du cinquiéme degré Comptes rendus" he described the calculation method for the elliptic modulus in terms of the absolute term. The Italian version of his essay "Sulla risoluzione delle equazioni del quinto grado" contains exactly on page 258 the upper Bring–Jerrard equation formula, which can be solved directly with the functions based on the corresponding elliptic modulus. This corresponding elliptic modulus can be worked out by using the square of the Hyperbolic lemniscate cotangent. For the derivation of this, please see the Wikipedia article lemniscate elliptic functions! The elliptic nome of this corresponding modulus is represented here with the letter Q: The abbreviation ctlh expresses the Hyperbolic Lemniscate Cotangent and the abbreviation aclh represents the Hyperbolic Lemniscate Areacosine! Calculation examples Two examples of this solution algorithm are now mentioned: First calculation example: {|class = "wikitable" | Quintic Bring–Jerrard equation: Solution formula: Decimal places of the nome: Decimal places of the solution: |} Second calculation example: {|class = "wikitable" | Quintic Bring–Jerrard equation: Solution: Decimal places of the nome: Decimal places of the solution: |} Applications in Physics The Rogers–Ramanujan identities appeared in Baxter's solution of the hard hexagon model in statistical mechanics. The demodularized standard form of the Ramanujan's continued fraction unanchored from the modular form is as follows:: Relations to affine Lie algebras and vertex operator algebras James Lepowsky and Robert Lee Wilson were the first to prove Rogers–Ramanujan identities using completely representation-theoretic techniques. They proved these identities using level 3 modules for the affine Lie algebra . In the course of this proof they invented and used what they called -algebras. Lepowsky and Wilson's approach is universal, in that it is able to treat all affine Lie algebras at all levels. It can be used to find (and prove) new partition identities. First such example is that of Capparelli's identities discovered by Stefano Capparelli using level 3 modules for the affine Lie algebra . See also Rogers polynomials Continuous q-Hermite polynomials References W.N. Bailey, Generalized Hypergeometric Series, (1935) Cambridge Tracts in Mathematics and Mathematical Physics, No. 32, Cambridge University Press, Cambridge. George Gasper and Mizan Rahman, Basic Hypergeometric Series, 2nd Edition, (2004), Encyclopedia of Mathematics and Its Applications, 96, Cambridge University Press, Cambridge. . Bruce C. Berndt, Heng Huat Chan, Sen-Shan Huang, Soon-Yi Kang, Jaebum Sohn, Seung Hwan Son, The Rogers–Ramanujan Continued Fraction, J. Comput. Appl. Math. 105 (1999), pp. 9–24. Cilanne Boulet, Igor Pak, A Combinatorial Proof of the Rogers–Ramanujan and Schur Identities, Journal of Combinatorial Theory, Ser. A, vol. 113 (2006), 1019–1030. James Lepowsky and Robert L. Wilson, Construction of the affine Lie algebra , Comm. Math. Phys. 62 (1978) 43-53. James Lepowsky and Robert L. Wilson, A new family of algebras underlying the Rogers–Ramanujan identities, Proc. Natl. Acad. Sci. USA 78 (1981), 7254-7258. James Lepowsky and Robert L. Wilson, The structure of standard modules, I: Universal algebras and the Rogers–Ramanujan identities, Invent. Math. 77 (1984), 199-290. James Lepowsky and Robert L. Wilson, The structure of standard modules, II: The case , principal gradation, Invent. Math. 79 (1985), 417-442. 
Stefano Capparelli, Vertex operator relations for affine algebras and combinatorial identities'', Thesis (Ph.D.)–Rutgers The State University of New Jersey - New Brunswick. 1988. 107 pp. External links Hypergeometric functions Integer partitions Mathematical identities Q-analogs Modular forms Srinivasa Ramanujan
Rogers–Ramanujan identities
[ "Mathematics" ]
3,047
[ "Integer partitions", "Number theory", "Combinatorics", "Mathematical problems", "Modular forms", "Mathematical identities", "Mathematical theorems", "Q-analogs", "Algebra" ]
2,325,044
https://en.wikipedia.org/wiki/Impact%20%28mechanics%29
In mechanics, an impact is when two bodies collide. During this collision, both bodies decelerate. The deceleration causes a high force or shock, applied over a short time period. A high force, over a short duration, usually causes more damage to both bodies than a lower force applied over a proportionally longer duration. At normal speeds, during a perfectly inelastic collision, an object struck by a projectile will deform, and this deformation will absorb most or all of the force of the collision. Viewed from a conservation of energy perspective, the kinetic energy of the projectile is changed into heat and sound energy, as a result of the deformations and vibrations induced in the struck object. However, these deformations and vibrations cannot occur instantaneously. A high-velocity collision (an impact) does not provide sufficient time for these deformations and vibrations to occur. Thus, the struck material behaves as if it were more brittle than it would otherwise be, and the majority of the applied force goes into fracturing the material. Or, another way to look at it is that materials actually are more brittle on short time scales than on long time scales: this is related to time-temperature superposition. Impact resistance decreases with an increase in the modulus of elasticity, which means that stiffer materials will have less impact resistance. Resilient materials will have better impact resistance. Different materials can behave in quite different ways in impact when compared with static loading conditions. Ductile materials like steel tend to become more brittle at high loading rates, and spalling may occur on the reverse side to the impact if penetration doesn't occur. The way in which the kinetic energy is distributed through the section is also important in determining its response. Projectiles apply a Hertzian contact stress at the point of impact to a solid body, with compression stresses under the point, but with bending loads a short distance away. Since most materials are weaker in tension than compression, this is the zone where cracks tend to form and grow. Applications A nail is pounded with a series of impacts, each by a single hammer blow. These high velocity impacts overcome the static friction between the nail and the substrate. A pile driver achieves the same end, although on a much larger scale, the method being commonly used during civil construction projects to make building and bridge foundations. An impact wrench is a device designed to impart torque impacts to bolts to tighten or loosen them. At normal speeds, the forces applied to the bolt would be dispersed, via friction, to the mating threads. However, at impact speeds, the forces act on the bolt to move it before they can be dispersed. In ballistics, bullets utilize impact forces to puncture surfaces that could otherwise resist substantial forces. A rubber sheet, for example, behaves more like glass at typical bullet speeds. That is, it fractures, and does not stretch or vibrate. The field of applications of impact theory ranges from the optimization of material processing, impact testing, dynamics of granular media to medical applications related to the biomechanics of the human body, especially the hip- and knee-joints. Also, it has vast applications in the automotive and military industries. Impacts causing damage Road traffic accidents usually involve impact loading, such as when a car hits a traffic bollard, water hydrant or tree, the damage being localized to the impact zone. 
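The magnitude of the loads involved can be illustrated with the work–energy relation: the kinetic energy ½mv² must be absorbed over the stopping (crush) distance, so the average force is roughly that energy divided by the distance. The sketch below is a back-of-the-envelope illustration; all numbers and names are assumed, not taken from the article.

```python
# Average impact force implied by dissipating the kinetic energy 1/2*m*v^2
# over a given crush (stopping) distance. Illustrative figures only.

def average_impact_force(mass_kg, speed_m_s, crush_distance_m):
    kinetic_energy = 0.5 * mass_kg * speed_m_s ** 2        # J
    return kinetic_energy / crush_distance_m               # work-energy theorem, N

# A 1500 kg car at 50 km/h (~13.9 m/s) stopped over 0.5 m of crumple zone:
print(average_impact_force(1500, 50 / 3.6, 0.5))    # ~2.9e5 N

# The same car at 100 km/h: four times the energy, hence four times the average force:
print(average_impact_force(1500, 100 / 3.6, 0.5))   # ~1.2e6 N
```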
When vehicles collide, the damage increases with the relative velocity of the vehicles; the damage increases as the square of the velocity, since it is the impact kinetic energy (½mv²) which is the variable of importance. Much design effort is made to improve the impact resistance of cars so as to minimize user injury. This can be achieved in several ways: by enclosing the driver and passengers in a safety cell, for example. The cell is reinforced so it will survive in high speed crashes, and so protect the users. Parts of the body shell outside the cell are designed to crumple progressively, absorbing most of the kinetic energy which must be dissipated by the impact. Various impact tests are used to assess the effects of high loading, both on products and standard slabs of material. The Charpy test and Izod test are two examples of standardized methods which are used widely for testing materials. Ball or projectile drop tests are used for assessing product impacts. The Columbia disaster was caused by impact damage when a chunk of polyurethane foam struck the carbon fibre composite wing of the Space Shuttle. Although tests had been conducted before the disaster, the test chunks were much smaller than the chunk that fell away from the external fuel tank and hit the exposed wing. When fragile items are shipped, impacts and drops can cause product damage. Protective packaging and cushioning help reduce the peak acceleration by extending the duration of the shock or impact. See also Charpy impact test Coefficient of restitution Compression (physical) Cushioning Fall factor Impact driver Impact sensor Impact wrench Impulse (physics) Izod impact strength test Jerk (physics) Road traffic accident Shock Shock data logger Tension (physics) Write-off References Sources Goldsmith, W. (1960). Impact: The Theory and Physical Behaviour of Colliding Solids, Dover Publications. Poursartip, A. (1993). Instrumented Impact Testing at High Velocities, Journal of Composites Technology and Research, 15(1). Toropov, AI. (1998). Dynamic Calibration of Impact Test Instruments, Journal of Testing and Evaluation, 24(4). Fracture mechanics Mechanical failure modes Collision
Impact (mechanics)
[ "Physics", "Materials_science", "Technology", "Engineering" ]
1,127
[ "Structural engineering", "Mechanical failure modes", "Fracture mechanics", "Technological failures", "Materials science", "Mechanics", "Materials degradation", "Mechanical failure", "Collision" ]
2,327,063
https://en.wikipedia.org/wiki/PAMELA%20detector
PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) was a cosmic ray research module attached to an Earth orbiting satellite. PAMELA was launched on 15 June 2006 and was the first satellite-based experiment dedicated to the detection of cosmic rays, with a particular focus on their antimatter component, in the form of positrons and antiprotons. Other objectives included long-term monitoring of the solar modulation of cosmic rays, measurements of energetic particles from the Sun, high-energy particles in Earth's magnetosphere and Jovian electrons. It was also hoped that it may detect evidence of dark matter annihilation. PAMELA operations were terminated in 2016, as were the operations of the host-satellite Resurs-DK1. The experiment was a recognized CERN experiment (RE2B). Development and launch PAMELA was the largest device up to the time built by the Wizard collaboration, which includes Russia, Italy, Germany and Sweden and has been involved in many satellite and balloon-based cosmic ray experiments such as Fermi-GLAST. The 470 kg, US$32 million (EU€24.8 million, UK£16.8 million) instrument was originally projected to have a three-year mission. However, this durable module remained operational and made significant scientific contributions until 2016. PAMELA is mounted on the upward-facing side of the Resurs-DK1 Russian satellite. It was launched by a Soyuz rocket from Baikonur Cosmodrome on 15 June 2006. PAMELA has been put in a polar elliptical orbit at an altitude between 350 and 610 km, with an inclination of 70°. Design The apparatus is 1.3 m high, has a total mass of 470 kg and a power consumption of 335 W. The instrument is built around a permanent magnet spectrometer with a silicon microstrip tracker that provides rigidity and dE/dx information. At its bottom is a silicon-tungsten imaging calorimeter, a neutron detector and a shower tail scintillator to perform lepton/hadron discrimination. A Time of Flight (ToF), made of three layers of plastic scintillators, is used to measure the velocity and charge of the particle. An anticounter system made of scintillators surrounding the apparatus is used to reject false triggers and albedo particles during off-line analysis. Results Preliminary data (released August 2008, ICHEP Philadelphia) indicate an excess of positrons in the range 10–60 GeV. This is thought to be a possible sign of dark matter annihilation: hypothetical WIMPs colliding with and annihilating each other to form gamma rays, matter and antimatter particles. Another explanation considered for the indication mentioned above is the production of electron-positron pairs on pulsars with subsequent acceleration in the vicinity of the pulsar. The first two years of data were released in October 2008 in three publications. The positron excess was confirmed and found to persist up to 90 GeV. Surprisingly, no excess of antiprotons was found. This is inconsistent with predictions from most models of dark matter sources, in which the positron and antiproton excesses are correlated. A paper, published on 15 July 2011, confirmed earlier speculation that the Van Allen belt could confine a significant flux of antiprotons produced by the interaction of the Earth's upper atmosphere with cosmic rays. The energy of the antiprotons has been measured in the range of 60–750 MeV. Cosmic rays collide with atoms in the upper atmosphere creating antineutrons, which in turn decay to produce the antiprotons. 
They were discovered in a part of the Van Allen belt closest to Earth. When an antiproton interacts with a normal particle, both are annihilated. Data from PAMELA indicated that these annihilation events occurred a thousand times more often than would be expected in the absence of antimatter. The data that contained evidence of antimatter were gathered between July 2006 and December 2008. Boron and carbon flux measurements were published in July 2014, important to explaining trends in cosmic ray positron fraction. The summary document of the operations of PAMELA was published in 2017. Sources of error Between 1 and 100 GeV, PAMELA is exposed to one hundred times as many electrons as antiprotons. At 1 GeV there are one thousand times as many protons as positrons and at 100 GeV ten thousand times as many. Therefore, to correctly determine the antimatter abundances, it is critical that PAMELA is able to reject the matter background. The PAMELA collaboration claimed in "The electron hadron separation performance of the PAMELA electromagnetic calorimeter" that less than one proton in 100,000 is able to pass the calorimeter selection and be misidentified as a positron when the energy is less than 200 GeV. The ratio of matter to antimatter in cosmic rays of energy less than 10 GeV that reach PAMELA from outside the Solar System depends on solar activity and in particular on the point in the 11 year solar cycle. The PAMELA team has invoked this effect to explain the discrepancy between their low energy results and those obtained by CAPRICE, HEAT and AMS-01, which were collected during that half of the cycle when the solar magnetic field had the opposite polarity. These results are consistent with the series of positron / electron measurements obtained by AESOP, which has spanned coverage over both polarities. Also the PAMELA experiment has contradicted an earlier claim by the HEAT experiment of anomalous positrons in the 6 GeV to 10 GeV range. See also AMS-02 is a high energy physics experiment mounted to the exterior of the International Space Station featuring advanced particle identification and large acceptance of 0.3m2sr. AMS-02 has been in operation since May 2011. More than 100 billion charged cosmic ray events were recorded by AMS so far. References External links PAMELA experiment's old homepage PAMELA experiment's homepage Cosmic-ray experiments Experiments for dark matter search Space science experiments Piggyback mission CERN experiments
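As an order-of-magnitude illustration of why this rejection power matters, the two figures quoted above can be combined directly; the arithmetic below is the editor's illustration, not a statement by the collaboration.

```python
# Upper bound on proton contamination of the positron sample near 100 GeV,
# using the figures quoted in the text above. Illustrative arithmetic only.

proton_to_positron_ratio = 1e4    # protons per positron, quoted near 100 GeV
misidentification_rate = 1e-5     # fraction of protons passing the calorimeter selection

proton_contamination = proton_to_positron_ratio * misidentification_rate
print(proton_contamination)       # ~0.1, i.e. at most ~10% of candidate positrons
```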
PAMELA detector
[ "Physics" ]
1,262
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
2,327,478
https://en.wikipedia.org/wiki/Mulliken%20population%20analysis
Mulliken charges arise from the Mulliken population analysis and provide a means of estimating partial atomic charges from calculations carried out by the methods of computational chemistry, particularly those based on the linear combination of atomic orbitals molecular orbital method, and are routinely used as variables in linear regression (QSAR) procedures. The method was developed by Robert S. Mulliken, after whom the method is named. If the coefficients of the basis functions in the molecular orbitals are C_μi, for the μ'th basis function in the i'th molecular orbital, the density matrix terms are D_μν = 2 Σ_i C_μi C_νi for a closed-shell system, where each molecular orbital is doubly occupied and the sum runs over the occupied orbitals. The population matrix then has terms P_μν = D_μν S_μν, where S is the overlap matrix of the basis functions. The sum of the terms P_μν over ν is the gross orbital product for orbital μ, GP_μ. The sum of the gross orbital products is N, the total number of electrons. The Mulliken population analysis assigns an electronic charge to a given atom A, known as the gross atom population GA_A, as the sum of GP_μ over all orbitals μ belonging to atom A. The charge, Q_A, is then defined as the difference between the number of electrons on the isolated free atom, which is the atomic number Z_A, and the gross atom population: Q_A = Z_A − GA_A. Mathematical problems Off-diagonal terms One problem with this approach is the equal division of the off-diagonal terms between the two basis functions. This leads to charge separations in molecules that are exaggerated. In a modified Mulliken population analysis, this problem can be reduced by dividing each overlap population between the two corresponding orbital populations in the ratio of the latter. This choice, although still arbitrary, relates the partitioning in some way to the electronegativity difference between the corresponding atoms. Ill definition Another problem is that Mulliken charges are explicitly sensitive to the basis set choice. In principle, a complete basis set for a molecule can be spanned by placing a large set of functions on a single atom. In the Mulliken scheme, all the electrons would then be assigned to this atom. The method thus has no complete basis set limit, as the exact value depends on the way the limit is approached. This also means that the charges are ill defined, as there is no exact answer. As a result, the basis set convergence of the charges does not exist, and different basis set families may yield drastically different results. These problems can be addressed by modern methods for computing net atomic charges, such as density derived electrostatic and chemical (DDEC) analysis, electrostatic potential analysis, and natural population analysis. See also Partial charge, for other methods used to estimate atomic charges in molecules. Chirgwin-Coulson weights, orbital overlap used to compute Mulliken populations References Quantum chemistry
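The bookkeeping described above is straightforward to implement. The following NumPy sketch is illustrative code (not from the article): the function name is the editor's, and the two-basis-function system at the end is a fictitious toy example used only to exercise the formulas.

```python
# Mulliken population analysis for a closed-shell system, given occupied-MO
# coefficients, an overlap matrix, a basis-to-atom map, and atomic numbers.

import numpy as np

def mulliken_charges(C_occ, S, basis_to_atom, atomic_numbers):
    """C_occ: (n_basis, n_occ) coefficients of the doubly occupied MOs.
    S: (n_basis, n_basis) overlap matrix.
    basis_to_atom: for each basis function, the index of the atom it sits on.
    atomic_numbers: nuclear charges Z_A."""
    D = 2.0 * C_occ @ C_occ.T              # closed-shell density matrix D_uv
    P = D * S                              # element-wise population matrix P_uv
    gross_orbital = P.sum(axis=1)          # gross orbital product GP_u
    gross_atom = np.zeros(len(atomic_numbers))
    for mu, atom in enumerate(basis_to_atom):
        gross_atom[atom] += gross_orbital[mu]
    return np.array(atomic_numbers) - gross_atom   # Q_A = Z_A - GA_A

# Toy example: a fictitious two-atom, two-basis-function, two-electron system.
S = np.array([[1.0, 0.25], [0.25, 1.0]])
c = np.array([[0.9], [0.3]])
c = c / np.sqrt((c.T @ S @ c)[0, 0])       # normalise so that c^T S c = 1
print(mulliken_charges(c, S, basis_to_atom=[0, 1], atomic_numbers=[1, 1]))
# -> approximately [-0.70  0.70]; the charges sum to zero for this neutral toy system
```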
Mulliken population analysis
[ "Physics", "Chemistry" ]
552
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
2,328,120
https://en.wikipedia.org/wiki/Proton-exchange%20membrane
A proton-exchange membrane, or polymer-electrolyte membrane (PEM), is a semipermeable membrane generally made from ionomers and designed to conduct protons while acting as an electronic insulator and reactant barrier, e.g. to oxygen and hydrogen gas. This is their essential function when incorporated into a membrane electrode assembly (MEA) of a proton-exchange membrane fuel cell or of a proton-exchange membrane electrolyser: separation of reactants and transport of protons while blocking a direct electronic pathway through the membrane. PEMs can be made from either pure polymer membranes or from composite membranes, where other materials are embedded in a polymer matrix. One of the most common and commercially available PEM materials is the fluoropolymer (PFSA) Nafion, a DuPont product. While Nafion is an ionomer with a perfluorinated backbone like Teflon, there are many other structural motifs used to make ionomers for proton-exchange membranes. Many use polyaromatic polymers, while others use partially fluorinated polymers. Proton-exchange membranes are primarily characterized by proton conductivity (σ), methanol permeability (P), and thermal stability. PEM fuel cells use a solid polymer membrane (a thin plastic film) which is permeable to protons when it is saturated with water, but it does not conduct electrons. History Early proton-exchange membrane technology was developed in the early 1960s by Leonard Niedrach and Thomas Grubb, chemists working for the General Electric Company. Significant government resources were devoted to the study and development of these membranes for use in NASA's Project Gemini spaceflight program. A number of technical problems led NASA to forego the use of proton-exchange membrane fuel cells in favor of batteries as a lower capacity but more reliable alternative for Gemini missions 1–4. An improved generation of General Electric's PEM fuel cell was used in all subsequent Gemini missions, but was abandoned for the subsequent Apollo missions. The fluorinated ionomer Nafion, which is today the most widely utilized proton-exchange membrane material, was developed by DuPont plastics chemist Walther Grot. Grot also demonstrated its usefulness as an electrochemical separator membrane. In 2014, Andre Geim of the University of Manchester published initial results on atom thick monolayers of graphene and boron nitride which allowed only protons to pass through the material, making them a potential replacement for fluorinated ionomers as a PEM material. Fuel cell PEMFCs have some advantages over other types of fuel cells such as solid oxide fuel cells (SOFC). PEMFCs operate at a lower temperature, are lighter and more compact, which makes them ideal for applications such as cars. However, some disadvantages are: the ~80 °C operating temperature is too low for cogeneration like in SOFCs, and that the electrolyte for PEMFCs must be water-saturated. However, some fuel-cell cars, including the Toyota Mirai, operate without humidifiers, relying on rapid water generation and the high rate of back-diffusion through thin membranes to maintain the hydration of the membrane, as well as the ionomer in the catalyst layers. High-temperature PEMFCs operate between 100 °C and 200 °C, potentially offering benefits in electrode kinetics and heat management, and better tolerance to fuel impurities, particularly CO in reformate. These improvements potentially could lead to higher overall system efficiencies. 
However, these gains have yet to be realized, as the gold-standard perfluorinated sulfonic acid (PFSA) membranes lose function rapidly at 100 °C and above if hydration drops below ~100%, and begin to creep in this temperature range, resulting in localized thinning and overall lower system lifetimes. As a result, new anhydrous proton conductors, such as protic organic ionic plastic crystals (POIPCs) and protic ionic liquids, are actively studied for the development of suitable PEMs. The fuel for the PEMFC is hydrogen, and the charge carrier is the hydrogen ion (proton). At the anode, the hydrogen molecule is split into hydrogen ions (protons) and electrons. The hydrogen ions permeate across the electrolyte to the cathode, while the electrons flow through an external circuit and produce electric power. Oxygen, usually in the form of air, is supplied to the cathode and combines with the electrons and the hydrogen ions to produce water. The reactions at the electrodes are as follows: Anode reaction: 2H2 → 4H+ + 4e− Cathode reaction: O2 + 4H+ + 4e− → 2H2O Overall cell reaction: 2H2 + O2 → 2H2O + heat + electrical energy The theoretical exothermic potential is +1.23 V overall. Applications The primary application of proton-exchange membranes is in PEM fuel cells. These fuel cells have a wide variety of commercial and military applications including in the aerospace, automotive, and energy industries. Early PEM fuel cell applications were focused within the aerospace industry. The then-higher capacity of fuel cells compared to batteries made them ideal as NASA's Project Gemini began to target longer duration space missions than had previously been attempted. , the automotive industry as well as personal and public power generation are the largest markets for proton-exchange membrane fuel cells. PEM fuel cells are popular in automotive applications due to their relatively low operating temperature and their ability to start up quickly even in below-freezing conditions. As of March 2019 there were 6,558 fuel cell vehicles on the road in the United States, with the Toyota Mirai being the most popular model. PEM fuel cells have seen successful implementation in other forms of heavy machinery as well, with Ballard Power Systems supplying forklifts based on the technology. The primary challenge facing automotive PEM technology is the safe and efficient storage of hydrogen, currently an area of high research activity. Polymer electrolyte membrane electrolysis is a technique by which proton-exchange membranes are used to decompose water into hydrogen and oxygen gas. The proton-exchange membrane allows for the separation of produced hydrogen from oxygen, allowing either product to be exploited as needed. This process has been used variously to generate hydrogen fuel and oxygen for life-support systems in vessels such as US and Royal Navy submarines. A recent example is the construction of a 20 MW Air Liquide PEM electrolyzer plant in Québec. Similar PEM-based devices are available for the industrial production of ozone. See also Alkali anion exchange membrane Artificial membrane Dry electrolyte Dynamic mechanical analysis Electrolysis of water Electroosmotic pump Gas diffusion electrode Isotope electrochemistry Membrane electrode assembly Proton exchange membrane electrolysis Roll-to-roll References External links Dry solid polymer electrolyte battery EC-supported STREP program on high pressure PEM water electrolysis Fuel cells Electrochemistry Polymers Hydrogen technologies Membrane technology Proton
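The ~1.23 V theoretical potential quoted in the fuel cell section above follows from the standard Gibbs energy of the overall reaction via E = −ΔG/(nF). The short worked check below is the editor's illustration; the thermodynamic value is taken from standard tables rather than from this article.

```python
# Theoretical reversible cell voltage of the hydrogen/oxygen reaction, E = -dG/(nF).

F = 96485.0          # Faraday constant, C/mol
n = 2                # electrons transferred per H2 molecule
dG = -237.13e3       # standard Gibbs energy of formation of liquid water, J/mol

E_cell = -dG / (n * F)
print(f"Theoretical reversible cell voltage: {E_cell:.3f} V")   # ~1.229 V
```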
Proton-exchange membrane
[ "Chemistry", "Materials_science" ]
1,449
[ "Separation processes", "Electrochemistry", "Membrane technology", "Polymer chemistry", "Polymers" ]
12,213,637
https://en.wikipedia.org/wiki/QM/MM
The hybrid QM/MM (quantum mechanics/molecular mechanics) approach is a molecular simulation method that combines the strengths of ab initio QM calculations (accuracy) and MM (speed) approaches, thus allowing for the study of chemical processes in solution and in proteins. The QM/MM approach was introduced in the 1976 paper of Warshel and Levitt. They, along with Martin Karplus, won the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems". Efficiency An important advantage of QM/MM methods is their efficiency. The cost of doing classical molecular mechanics (MM) simulations in the most straightforward case scales as O(N²), where N is the number of atoms in the system. This is mainly due to the electrostatic interaction term (every particle interacts with every other particle). However, the use of a cutoff radius, periodic pair-list updates and, more recently, variations of the particle mesh Ewald (PME) method have reduced this to between O(N) and O(N²). In other words, if a system with twice as many atoms is simulated then it would take between two and four times as much computing power. On the other hand, the simplest ab initio calculations formally scale as O(N³) or worse (restricted Hartree–Fock calculations have been suggested to scale ~O(N^2.7)). Here in the ab initio calculations, N stands for the number of basis functions rather than the number of atoms. Each atom contributes at least as many basis functions as it has electrons. To overcome the limitation, a small part of the system that is of major interest is treated quantum-mechanically (for instance, the active site of an enzyme) and the remaining system is treated classically. Calculating the energy of the combined system The energy of the combined system may be calculated in two different ways. The simplest is referred to as the 'subtractive scheme', which was proposed by Maseras and Morokuma in 1995. In the subtractive scheme the energy of the entire system is calculated using a molecular mechanics force field, then the energy of the QM system is added (calculated using a QM method), and finally the MM energy of the QM system is subtracted: E(QM/MM) = E_MM(entire system) + E_QM(QM region) − E_MM(QM region). Here E_MM(QM region) refers to the energy of the QM region as calculated using molecular mechanics. In this scheme, the interaction between the two regions is only considered at the MM level of theory. In practice, a more widely used approach is the more accurate, additive method, in which the total energy consists of three terms: the MM energy of the MM region, the QM energy of the QM region, and an explicit QM–MM coupling term, E(QM/MM) = E_MM(MM region) + E_QM(QM region) + E_QM–MM. In the expanded expression for the coupling term, one index labels the nuclei in the QM region whereas the other labels the MM nuclei. The first two terms represent the interaction between the total charge density (due to electrons and cores) in the QM region and the classical charges of the MM region. The third term accounts for dispersion interactions across the QM/MM boundary. Any covalent bond-stretching potentials that cross the boundary are accounted for by the fourth term. The final two terms account for the energy across the boundary that arises from bending covalent bonds and torsional potentials. At least one of the atoms in each such angle or torsion will be a QM atom, with the others being MM atoms. Reducing the computational cost of calculating QM-MM interactions Evaluating the charge-charge term in the QM/MM interaction expression given previously can be very computationally expensive (consider the number of evaluations required for a system with 10⁶ grid points for the electron density of the QM system and 10⁴ MM atoms). 
A method by which this issue can be mitigated is to construct three concentric spheres around the QM region and evaluate which one of these spheres the MM atoms lie within. If the MM atoms reside within the innermost sphere their interactions with the QM system are treated as per the equation for . The MM charges that lie within the second sphere (but not the first) interact with the QM region by giving the QM nuclei constructed charges. These charges are determined by the RESP approach in an attempt to mimic electron density. Using this approach the changing charges on the QM nuclei during the course of a simulation are accounted for. In the third outermost region the classical charges interact with the multipole moments of the quantum charge distribution. By calculating charge-charge interactions by using successively more approximate methods it is possible to obtain a very significant reduction in computational cost whilst not suffering a significant loss in accuracy. The electrostatic QM-MM interaction Electrostatic interactions between the QM and MM region may be considered at different levels of sophistication. These methods can be classified as either mechanical embedding, electrostatic embedding or polarized embedding. Mechanical embedding Mechanical embedding treats the electrostatic interactions at the MM level, though simpler than the other methods, certain issues may occur, in part due to the extra difficulty in assigning appropriate MM properties such as atom centered point charges to the QM region. The QM region being simulated is the site of the reaction thus it is likely that during the course of the reaction the charge distribution will change resulting in a high level of error if a single set of MM electrostatic parameters is used to describe it. Another problem is the fact that mechanical embedding will not consider the effects of electrostatic interactions with the MM system on the electronic structure of the QM system. Electronic embedding Electrostatic embedding does not require the MM electrostatic parameters for the QM. This is due to it considering the effects of the electrostatic interactions by including certain one electron terms in the QM regions Hamiltonian. This means that polarization of the QM system by the electrostatic interactions with the MM system will now be accounted for. Though an improvement on the mechanical embedding scheme it comes at the cost of increased complexity hence requiring more computational effort. Another issue is it neglects the effects of the QM system on the MM system whereas in reality both systems would polarize each other until an equilibrium is met. In order to construct the required one electron terms for the MM region it is possible to utilize the partial charges described by the MM calculation. This is the most popular method for constructing the QM Hamiltonian however it may not be suitable for all systems. Polarized embedding Whereas electrostatic embedding accounts for the polarisation of the QM system by the MM system, neglecting the polarization of the MM system by the QM system, polarized embedding accounts for both the polarization of the MM system by the QM. These models allow for flexible MM charges and fall into two categories. In the first category, the MM region is polarized by the QM electric field but then does not act back on the QM system. In the second category are fully self-consistent formulations which allow for mutual polarization between the QM and the MM systems. 
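Only the geometric bookkeeping of the three-sphere treatment described earlier in this section is easy to sketch; the RESP charge fit and the multipole evaluation themselves are not shown, and the cutoff radii and coordinates below are invented for illustration.

import numpy as np

def assign_mm_shells(qm_xyz, mm_xyz, r_inner, r_middle):
    # Distance from each MM atom to its nearest QM atom decides its treatment:
    # shell 0 -> explicit charge-charge terms against the QM density,
    # shell 1 -> interaction with fitted (RESP-like) charges on the QM nuclei,
    # shell 2 -> interaction with multipole moments of the QM charge distribution.
    qm = np.asarray(qm_xyz, float)
    mm = np.asarray(mm_xyz, float)
    d = np.linalg.norm(mm[:, None, :] - qm[None, :, :], axis=-1).min(axis=1)
    return np.where(d < r_inner, 0, np.where(d < r_middle, 1, 2))

# 2 QM atoms, 4 MM atoms, radii in angstroms (values chosen only for the example):
print(assign_mm_shells([[0, 0, 0], [1.5, 0, 0]],
                       [[2, 0, 0], [8, 0, 0], [12, 0, 0], [0.5, 3, 0]],
                       r_inner=5.0, r_middle=10.0))    # [0 1 2 0]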
Polarized embedding schemes have scarcely been applied to bio-molecular simulations and have essentially been restricted to explicit solvation models where the solute will be treated as a QM system and the solvent a polarizable force field. Problems involved with QM/MM Even though QM/MM methods are often very efficient, they are still rather tricky to handle. A researcher has to limit the regions (atomic sites) which are simulated by QM, however methods have been developed that allow particles to move between the QM and MM region. Moving the limitation borders can both affect the results and the time computing the results. The way the QM and MM systems are coupled can differ substantially depending on the arrangement of particles in the system and their deviations from equilibrium positions in time. Usually, limits are set at carbon-carbon bonds and avoided in regions that are associated with charged groups, since such an electronically variant limit can influence the quality of the model. Covalent bonds across the QM-MM boundary Directly connected atoms, where one is described by QM and the other by MM are referred to as Junction atoms. Having the boundary between the QM region and MM region pass through a covalent bond may prove problematic however this is sometimes unavoidable. When it does occur it is important that the bond of the QM atom be capped in order to prevent the appearance of bond cleavage in the QM system. Boundary schemes In systems where the QM/MM boundary cuts a bond three issues must be dealt with. First, the dangling bond of the QM system must be capped, this is because it is undesirable to truncate the QM system (treating the bond as if it has been cleaved will yield very unrealistic calculations). The second issue relates to polarisation, more specifically for electrostatic or polarized embedding it is important to ensure that the proximity of the MM charges near the boundary does not cause over-polarisation of the QM density. The final issue is the bonding MM terms must be carefully selected in order to prevent double counting of interactions when looking at bonds across the boundary. Overall the goal is to obtain a good description of QM-MM interactions at the boundary between the QM and the MM system and there are three schemes by which this can be achieved. Link atom schemes Link atom schemes introduce an additional atomic centre (usually a hydrogen atom). This atom is not part of the real system. It is covalently bonded to the atom being described by quantum mechanics which serves to saturate its valency (by replacing the bond that has been broken). Boundary atom schemes In boundary atom schemes, the MM atom which is bonded across the boundary to a QM atom is replaced with a special boundary atom which appears in both the QM and the MM calculation. In the MM calculation, it simply behaves as an MM atom but in the QM system it mimics the electronic character of the MM atom bounded across the boundary to the QM atom. Localized-orbital schemes These schemes place hybrid orbitals at the boundary and keep some of them frozen. These orbitals cap the QM region and replace the cut bond. BuRNN BuRNN (Buffer Region Neural Network) approach was developed as an alternative to QM/MM methods. Its focus is to reduce artifacts that are created in between QM and MM region by introducing buffer region between them. Buffer region experiences full electronic polarization by the QM region and together with QM region is described by NN (neural network) trained on QM calculations. 
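The link-atom capping described under the boundary schemes above amounts, geometrically, to placing a hydrogen along the cut bond. A minimal sketch follows; the 1.09 Å default is chosen only as a typical C-H distance, and no attempt is made to show how the link atom's forces are projected back onto the real atoms.

import numpy as np

def place_link_atom(qm_boundary_xyz, mm_boundary_xyz, d_link=1.09):
    # Put the capping hydrogen on the line from the boundary QM atom toward the
    # boundary MM atom, at a fixed bond length d_link (angstroms). The right
    # value depends on the atoms actually cut; 1.09 A suits a C-H cap.
    q = np.asarray(qm_boundary_xyz, float)
    m = np.asarray(mm_boundary_xyz, float)
    direction = (m - q) / np.linalg.norm(m - q)
    return q + d_link * direction

# Capping a cut C-C bond of ~1.53 A with a hydrogen link atom:
print(place_link_atom([0.0, 0.0, 0.0], [1.53, 0.0, 0.0]))   # [1.09 0.   0.  ]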
Replacing explicit QM calculations with the NN also speeds up the overall BuRNN simulation. BuRNN was introduced in the 2022 paper of Lier, Poliak, Marquetand, Westermayr, and Oostenbrink. See also ONIOM: "our own n-layered integrated molecular orbital and molecular mechanics" List of quantum chemistry and solid state physics software List of software for molecular mechanics modeling References Computational chemistry Molecular dynamics
QM/MM
[ "Physics", "Chemistry" ]
2,234
[ "Molecular physics", "Computational physics", "Molecular dynamics", "Computational chemistry", "Theoretical chemistry" ]
12,217,323
https://en.wikipedia.org/wiki/Bounded%20deformation
In mathematics, a function of bounded deformation is a function whose distributional derivatives are not quite well-behaved enough to qualify as functions of bounded variation, although the symmetric part of the derivative matrix does meet that condition. Thought of as deformations of elasto-plastic bodies, functions of bounded deformation play a major role in the mathematical study of materials, e.g. the Francfort-Marigo model of brittle crack evolution. More precisely, given an open subset Ω of Rn, a function u : Ω → Rn is said to be of bounded deformation if the symmetrized gradient ε(u) of u, defined in the sense of distributions as the symmetric part of the Jacobian, ε(u) = (Du + DuT)/2 (DuT being the transpose of Du), is a bounded, symmetric n × n matrix-valued Radon measure. The collection of all functions of bounded deformation is denoted BD(Ω; Rn), or simply BD, introduced essentially by P.-M. Suquet in 1978. BD is a strictly larger space than the space BV of functions of bounded variation. One can show that if u is of bounded deformation then the measure ε(u) can be decomposed into three parts: one absolutely continuous with respect to Lebesgue measure, denoted e(u) dx; a jump part, supported on a rectifiable (n − 1)-dimensional set Ju of points where u has two different approximate limits u+ and u−, together with a normal vector νu; and a "Cantor part", which vanishes on Borel sets of finite Hn−1-measure (where Hk denotes k-dimensional Hausdorff measure). A function u is said to be of special bounded deformation if the Cantor part of ε(u) vanishes, so that the measure can be written as ε(u) = e(u) dx + (u+ − u−) ⊙ νu Hn−1|Ju, where Hn−1|Ju denotes the restriction of the Hausdorff measure Hn−1 to the jump set Ju and ⊙ denotes the symmetrized dyadic product: a ⊙ b = (a ⊗ b + b ⊗ a)/2. The collection of all functions of special bounded deformation is denoted SBD(Ω; Rn), or simply SBD. References Functional analysis Materials science Solid mechanics
Bounded deformation
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
410
[ "Solid mechanics", "Functions and mappings", "Applied and interdisciplinary physics", "Functional analysis", "Mathematical objects", "Materials science", "Mechanics", "Mathematical relations", "nan" ]
9,794,905
https://en.wikipedia.org/wiki/High-resolution%20manometry
High-resolution manometry (HRM) is a gastrointestinal motility diagnostic system that measures intraluminal pressure activity in the gastrointestinal tract using a series of closely spaced pressure sensors. For a manometry system to be classified as "high-resolution" as opposed to "conventional", the pressure sensors need to be spaced at most 1 cm apart. Two dominant pressure transduction technologies are used: solid state pressure sensors and water perfused pressure sensors. Each pressure transduction technology has its own inherent advantages and disadvantages. HRM systems also require advanced computer hardware and software to store and analyze the manometry data. See also Functional Lumen Imaging Probe Anorectal manometry References External links HRM systems (from EBNeuro S.p.A.) HRM systems (from Sierra) Medical tests Medical physics
High-resolution manometry
[ "Physics" ]
179
[ "Applied and interdisciplinary physics", "Medical physics" ]
9,795,740
https://en.wikipedia.org/wiki/Reaction%20engine
A reaction engine is an engine or motor that produces thrust by expelling reaction mass (reaction propulsion), in accordance with Newton's third law of motion. This law of motion is commonly paraphrased as: "For every action force there is an equal, but opposite, reaction force." Examples include jet engines, rocket engines, pump-jets, and more uncommon variations such as Hall effect thrusters, ion drives, mass drivers, and nuclear pulse propulsion. Discovery The discovery of the reaction engine has been attributed to the Romanian inventor Alexandru Ciurcu and to the French journalist . Energy use Propulsive efficiency For all reaction engines that carry on-board propellant (such as rocket engines and electric propulsion drives) some energy must go into accelerating the reaction mass. Every engine wastes some energy, but even assuming 100% efficiency, the engine needs energy amounting to (where M is the mass of propellent expended and is the exhaust velocity), which is simply the energy to accelerate the exhaust. Comparing the rocket equation (which shows how much energy ends up in the final vehicle) and the above equation (which shows the total energy required) shows that even with 100% engine efficiency, certainly not all energy supplied ends up in the vehicle – some of it, indeed usually most of it, ends up as kinetic energy of the exhaust. If the specific impulse () is fixed, for a mission delta-v, there is a particular that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about ⅔ of the mission delta-v (see the energy computed from the rocket equation). Drives with a specific impulse that is both high and fixed such as Ion thrusters have exhaust velocities that can be enormously higher than this ideal, and thus end up powersource limited and give very low thrust. Where the vehicle performance is power limited, e.g. if solar power or nuclear power is used, then in the case of a large the maximum acceleration is inversely proportional to it. Hence the time to reach a required delta-v is proportional to . Thus the latter should not be too large. On the other hand, if the exhaust velocity can be made to vary so that at each instant it is equal and opposite to the vehicle velocity then the absolute minimum energy usage is achieved. When this is achieved, the exhaust stops in space and has no kinetic energy; and the propulsive efficiency is 100% all the energy ends up in the vehicle (in principle such a drive would be 100% efficient, in practice there would be thermal losses from within the drive system and residual heat in the exhaust). However, in most cases this uses an impractical quantity of propellant, but is a useful theoretical consideration. Some drives (such as VASIMR or electrodeless plasma thruster) actually can significantly vary their exhaust velocity. This can help reduce propellant usage and improve acceleration at different stages of the flight. However the best energetic performance and acceleration is still obtained when the exhaust velocity is close to the vehicle speed. Proposed ion and plasma drives usually have exhaust velocities enormously higher than that ideal (in the case of VASIMR the lowest quoted speed is around 15 km/s compared to a mission delta-v from high Earth orbit to Mars of about 4 km/s). For a mission, for example, when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome by using fuel. 
It is typical to combine the effects of these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3–10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer. Cycle efficiency All reaction engines lose some energy, mostly as heat. Different reaction engines have different efficiencies and losses. For example, rocket engines can be up to 60–70% energy efficient in terms of accelerating the propellant. The rest is lost as heat and thermal radiation, primarily in the exhaust. Oberth effect Reaction engines are more energy efficient when they emit their reaction mass when the vehicle is travelling at high speed. This is because the useful mechanical energy generated is simply force times distance, and when a thrust force is generated while the vehicle moves, then: where F is the force and d is the distance moved. Dividing by length of time of motion we get: Hence: where P is the useful power and v is the speed. Hence, v should be as high as possible, and a stationary engine does no useful work. Delta-v and propellant Exhausting the entire usable propellant of a spacecraft through the engines in a straight line in free space would produce a net velocity change to the vehicle; this number is termed delta-v (). If the exhaust velocity is constant then the total of a vehicle can be calculated using the rocket equation, where M is the mass of propellant, P is the mass of the payload (including the rocket structure), and is the velocity of the rocket exhaust. This is known as the Tsiolkovsky rocket equation: For historical reasons, as discussed above, is sometimes written as where is the specific impulse of the rocket, measured in seconds, and is the gravitational acceleration at sea level. For a high delta-v mission, the majority of the spacecraft's mass needs to be reaction mass. Because a rocket must carry all of its reaction mass, most of the initially-expended reaction mass goes towards accelerating reaction mass rather than payload. If the rocket has a payload of mass P, the spacecraft needs to change its velocity by , and the rocket engine has exhaust velocity ve, then the reaction mass M which is needed can be calculated using the rocket equation and the formula for : For much smaller than ve, this equation is roughly linear, and little reaction mass is needed. If is comparable to ve, then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass. For a mission, for example, when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome by using fuel. It is typical to combine the effects of these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3–10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer. Some effects such as Oberth effect can only be significantly utilised by high thrust engines such as rockets; i.e., engines that can produce a high g-force (thrust per unit mass, equal to delta-v per unit time). Energy In the ideal case is useful payload and is reaction mass (this corresponds to empty tanks having no mass, etc.). 
The energy required can simply be computed as This corresponds to the kinetic energy the expelled reaction mass would have at a speed equal to the exhaust speed. If the reaction mass had to be accelerated from zero speed to the exhaust speed, all energy produced would go into the reaction mass and nothing would be left for kinetic energy gain by the rocket and payload. However, if the rocket already moves and accelerates (the reaction mass is expelled in the direction opposite to the direction in which the rocket moves) less kinetic energy is added to the reaction mass. To see this, if, for example, =10 km/s and the speed of the rocket is 3 km/s, then the speed of a small amount of expended reaction mass changes from 3 km/s forwards to 7 km/s rearwards. Thus, although the energy required is 50 MJ per kg reaction mass, only 20 MJ is used for the increase in speed of the reaction mass. The remaining 30 MJ is the increase of the kinetic energy of the rocket and payload. In general: Thus the specific energy gain of the rocket in any small time interval is the energy gain of the rocket including the remaining fuel, divided by its mass, where the energy gain is equal to the energy produced by the fuel minus the energy gain of the reaction mass. The larger the speed of the rocket, the smaller the energy gain of the reaction mass; if the rocket speed is more than half of the exhaust speed the reaction mass even loses energy on being expelled, to the benefit of the energy gain of the rocket; the larger the speed of the rocket, the larger the energy loss of the reaction mass. We have where is the specific energy of the rocket (potential plus kinetic energy) and is a separate variable, not just the change in . In the case of using the rocket for deceleration; i.e., expelling reaction mass in the direction of the velocity, should be taken negative. The formula is for the ideal case again, with no energy lost on heat, etc. The latter causes a reduction of thrust, so it is a disadvantage even when the objective is to lose energy (deceleration). If the energy is produced by the mass itself, as in a chemical rocket, the fuel value has to be , where for the fuel value also the mass of the oxidizer has to be taken into account. A typical value is = 4.5 km/s, corresponding to a fuel value of 10.1MJ/kg. The actual fuel value is higher, but much of the energy is lost as waste heat in the exhaust that the nozzle was unable to extract. The required energy is Conclusions: for we have for a given , the minimum energy is needed if , requiring an energy of . In the case of acceleration in a fixed direction, and starting from zero speed, and in the absence of other forces, this is 54.4% more than just the final kinetic energy of the payload. In this optimal case the initial mass is 4.92 times the final mass. These results apply for a fixed exhaust speed. Due to the Oberth effect and starting from a nonzero speed, the required potential energy needed from the propellant may be less than the increase in energy in the vehicle and payload. This can be the case when the reaction mass has a lower speed after being expelled than before – rockets are able to liberate some or all of the initial kinetic energy of the propellant. Also, for a given objective such as moving from one orbit to another, the required may depend greatly on the rate at which the engine can produce and maneuvers may even be impossible if that rate is too low. For example, a launch to Low Earth orbit (LEO) normally requires a of ca. 
9.5 km/s (mostly for the speed to be acquired), but if the engine could produce at a rate of only slightly more than g, it would be a slow launch requiring altogether a very large (think of hovering without making any progress in speed or altitude, it would cost a of 9.8 m/s each second). If the possible rate is only or less, the maneuver can not be carried out at all with this engine. The power is given by where is the thrust and the acceleration due to it. Thus the theoretically possible thrust per unit power is 2 divided by the specific impulse in m/s. The thrust efficiency is the actual thrust as percentage of this. If, e.g., solar power is used, this restricts ; in the case of a large the possible acceleration is inversely proportional to it, hence the time to reach a required delta-v is proportional to ; with 100% efficiency: for we have Examples: power, 1000W; mass, 100 kg; = 5 km/s, = 16 km/s, takes 1.5 months. power, 1000W; mass, 100 kg; = 5 km/s, = 50 km/s, takes 5 months. Thus should not be too large. Power to thrust ratio The power to thrust ratio is simply: Thus for any vehicle power P, the thrust that may be provided is: Example Suppose a 10,000 kg space probe will be sent to Mars. The required from LEO is approximately 3000 m/s, using a Hohmann transfer orbit. For the sake of argument, assume the following thrusters are options to be used: Observe that the more fuel-efficient engines can use far less fuel; their mass is almost negligible (relative to the mass of the payload and the engine itself) for some of the engines. However, these require a large total amount of energy. For Earth launch, engines require a thrust to weight ratio of more than one. To do this with the ion or more theoretical electrical drives, the engine would have to be supplied with one to several gigawatts of power, equivalent to a major metropolitan generating station. From the table it can be seen that this is clearly impractical with current power sources. Alternative approaches include some forms of laser propulsion, where the reaction mass does not provide the energy required to accelerate it, with the energy instead being provided from an external laser or other beam-powered propulsion system. Small models of some of these concepts have flown, although the engineering problems are complex and the ground-based power systems are not a solved problem. Instead, a much smaller, less powerful generator may be included which will take much longer to generate the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, and would be insufficient for launching from Earth. However, over long periods in orbit where there is no friction, the velocity will be finally achieved. For example, it took the SMART-1 more than a year to reach the Moon, whereas with a chemical rocket it takes a few days. Because the ion drive needs much less fuel, the total launched mass is usually lower, which typically results in a lower overall cost, but the journey takes longer. Mission planning therefore frequently involves adjusting and choosing the propulsion system so as to minimise the total cost of the project, and can involve trading off launch costs and mission duration against payload fraction. 
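The relations used in this article (the rocket equation, energy counted as exhaust kinetic energy, and power-limited thrust) can be checked numerically. The sketch below uses illustrative numbers throughout, and its last two lines reproduce the 1.5-month and 5-month examples above under the reading that those examples assume a 5 km/s delta-v.

import math

def propellant_mass(payload, dv, ve):
    # Tsiolkovsky rocket equation solved for reaction mass: M = P * (exp(dv/ve) - 1)
    return payload * math.expm1(dv / ve)

def energy_per_payload(dv, ve):
    # Ideal-case energy per kg of payload: the kinetic energy given to the exhaust
    return 0.5 * propellant_mass(1.0, dv, ve) * ve ** 2

def time_for_dv(power, mass, dv, ve):
    # Power-limited thruster at 100% efficiency: thrust F = 2*P/ve, so
    # t = dv * m * ve / (2*P); vehicle mass treated as constant (reasonable when ve >> dv)
    return dv * mass * ve / (2.0 * power)

dv = 4000.0                                     # example mission delta-v, m/s
print(propellant_mass(10_000.0, dv, 4500.0))    # ~14.3 t of propellant at ve = 4.5 km/s
print(propellant_mass(10_000.0, dv, 30_000.0))  # ~1.4 t with a 30 km/s ion exhaust

# Total energy is minimised when ve is roughly 2/3 of the mission delta-v:
best_ve = min((energy_per_payload(dv, v), v) for v in range(500, 20000, 10))[1]
print(best_ve / dv)                             # ~0.63

month = 30 * 24 * 3600.0
print(time_for_dv(1000.0, 100.0, 5000.0, 16_000.0) / month)  # ~1.5 months (text example)
print(time_for_dv(1000.0, 100.0, 5000.0, 50_000.0) / month)  # ~4.8 months (text example)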
Types of reaction engines Rocket-like Rocket engine Ion thruster Airbreathing Turbojet Turbofan Pulsejet Ramjet Scramjet Liquid Pump-jet Rotary Aeolipile Solid exhaust Mass driver See also Internal combustion engine Jet force Jet propulsion Thruster (disambiguation) Notes References External links Popular Science May 1945 Engine technology Aerospace engineering French inventions Romanian inventions
Reaction engine
[ "Technology", "Engineering" ]
2,978
[ "Engine technology", "Engines", "Aerospace engineering" ]
9,797,479
https://en.wikipedia.org/wiki/Rotor%20%28electric%29
The rotor is a moving component of an electromagnetic system in the electric motor, electric generator, or alternator. Its rotation is due to the interaction between the windings and magnetic fields which produces a torque around the rotor's axis. Early development An early example of electromagnetic rotation was the first rotary machine built by Ányos Jedlik with electromagnets and a commutator, in 1826-27. Other pioneers in the field of electricity include Hippolyte Pixii who built an alternating current generator in 1832, and William Ritchie's construction of an electromagnetic generator with four rotor coils, a commutator and brushes, also in 1832. Development quickly included more useful applications such as Moritz Hermann Jacobi's motor that could lift 10 to 12 pounds with a speed of one foot per second, about 15 watts of mechanical power in 1834. In 1835, Francis Watkins describes an electrical "toy" he created; he is generally regarded as one of the first to understand the interchangeability of motor and generator. Type and construction of rotors Induction (asynchronous) motors, generators and alternators (synchronous) have an electromagnetic system consisting of a stator and rotor. There are two designs for the rotor in an induction motor: squirrel cage and wound. In generators and alternators, the rotor designs are salient pole or cylindrical. Squirrel-cage rotor The squirrel-cage rotor consists of laminated steel in the core with evenly spaced bars of copper or aluminum placed axially around the periphery, permanently shorted at the ends by the end rings. This simple and rugged construction makes it the favorite for most applications. The assembly has a twist: the bars are slanted, or skewed, to reduce magnetic hum and slot harmonics and to reduce the tendency of locking. Housed in the stator, the rotor and stator teeth can lock when they are in equal number and the magnets position themselves equally apart, opposing rotation in both directions. Bearings at each end mount the rotor in its housing, with one end of the shaft protruding to allow the attachment of the load. In some motors, there is an extension at the non-driving end for speed sensors or other electronic controls. The generated torque forces motion through the rotor to the load. Wound rotor The wound rotor is a cylindrical core made of steel lamination with slots to hold the wires for its 3-phase windings which are evenly spaced at 120 electrical degrees apart and connected in a 'Y' configuration. The rotor winding terminals are brought out and attached to the three slips rings with brushes, on the shaft of the rotor. Brushes on the slip rings allow for external three-phase resistors to be connected in series to the rotor windings for providing speed control. The external resistances become a part of the rotor circuit to produce a large torque when starting the motor. As the motor speeds up, the resistances can be reduced to zero. Salient pole rotor A salient pole rotor is built upon a stack of "star shaped" steel laminations, typically with 2 or 3 or 4 or 6, maybe even 18 or more "radial prongs" sticking out from the middle, each of which is wound with copper wire to form a discrete outward facing electromagnet pole. The inward facing ends of each prong are magnetically grounded into the common central body of the rotor. The poles are supplied by direct current or magnetized by permanent magnets. The armature with a three-phase winding is on the stator where voltage is induced. 
Direct current (DC), from an external exciter or from a diode bridge mounted on the rotor shaft, produces a magnetic field and energizes the rotating field windings and alternating current energizes the armature windings simultaneously. A salient pole ends in a pole shoe, a high-permeability part with an outer surface shaped as a segment of a cylinder to homogenize the distribution of the magnetic flux to the stator. Non-salient rotor The cylindrical shaped rotor is made of a solid steel shaft with slots running along the outside length of the cylinder for holding the field windings of the rotor which are laminated copper bars inserted into the slots and is secured by wedges. The slots are insulated from the windings and are held at the end of the rotor by slip rings. An external direct current (DC) source is connected to the concentrically mounted slip rings with brushes running along the rings. The brushes make electrical contact with the rotating slip rings. DC current is also supplied through brushless excitation from a rectifier mounted on the machine shaft that converts alternating current to direct current. Operating principle In a three-phase induction machine, alternating current supplied to the stator windings energizes it to create a rotating magnetic flux. The flux generates a magnetic field in the air gap between the stator and the rotor and induces a voltage which produces current through the rotor bars. The rotor circuit is shorted and current flows in the rotor conductors. The action of the rotating flux and the current produces a force that generates a torque to start the motor. An alternator rotor is made up of a wire coil enveloped around an iron core. The magnetic component of the rotor is made from steel laminations to aid stamping conductor slots to specific shapes and sizes. As currents travel through the wire coil a magnetic field is created around the core, which is referred to as field current. The field current strength controls the power level of the magnetic field. Direct current (DC) drives the field current in one direction, and is delivered to the wire coil by a set of brushes and slip rings. Like any magnet, the magnetic field produced has a north and a south pole. The normal clockwise direction of the motor that the rotor is powering can be manipulated by using the magnets and magnetic fields installed in the design of the rotor, allowing the motor to run in reverse or counterclockwise. Characteristics of rotors Squirrel-cage rotor This rotor rotates at a speed less than the stator rotating magnetic field or synchronous speed. Rotor slip provides necessary induction of rotor currents for motor torque, which is in proportion to slip. When rotor speed increases, the slip decreases. Increasing the slip increases induced motor current, which in turn increases rotor current, resulting in a higher torque for increase load demands. Wound rotor This rotor operates at constant speed and has lower starting current External resistance added to rotor circuit, increases starting torque Motor running efficiency improves as external resistance is reduced when motor speed up. 
Higher torque and speed control Salient pole rotor This rotor operates at a speed below 1500 rpm (revolutions per minute) and 40% of its rated torque without excitation It has a large diameter and short axial length Air gap is non-uniform Rotor has low mechanical strength Cylindrical rotor The rotor operates at speeds between 1500 and 3600 rpm It has strong mechanical strength Air gap is uniform Its diameter is small and has a large axial length and requires a higher torque than salient pole rotor Rotor equations Rotor bar voltage The rotating magnetic field induces a voltage in the rotor bars as it passes over them. The induced voltage in a rotor bar is e = B·l·(vs − v), where: e = induced voltage B = magnetic field l = conductor length vs = synchronous speed of the field v = conductor speed Torque in rotor A torque is produced by the force arising from the interaction of the magnetic field and the rotor bar current: F = i·l·B and τ = F·r, where: F = force on the bar τ = torque r = radius of rotor rings i = current in the rotor bar Induction motor slip A stator magnetic field rotates at synchronous speed ns = 120·f/P, where: f = supply frequency P = number of poles If n = rotor speed, the slip, s, for an induction motor is expressed as: s = (ns − n)/ns Mechanical speed of rotor, in terms of slip and synchronous speed: n = (1 − s)·ns Relative speed of slip: ns − n = s·ns Frequency of induced voltages and currents: fr = s·f See also Armature (electrical engineering) - any "rotor" that carries some form of alternating current Balancing machine Commutator (electric) Electric motor Field coil Rotordynamics Stator References Electric motors
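As a worked illustration of the slip relations above (the machine ratings are invented for the example):

def synchronous_speed_rpm(f_hz, poles):
    # ns = 120 * f / P
    return 120.0 * f_hz / poles

def slip(ns_rpm, n_rpm):
    # per-unit slip between the rotating stator field and the rotor
    return (ns_rpm - n_rpm) / ns_rpm

# Example: a 4-pole machine on a 60 Hz supply with the rotor turning at 1746 rpm
ns = synchronous_speed_rpm(60.0, 4)      # 1800 rpm
s = slip(ns, 1746.0)                     # 0.03
print(ns, s, (1 - s) * ns, s * 60.0)     # rotor currents alternate at s*f = 1.8 Hz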
Rotor (electric)
[ "Technology", "Engineering" ]
1,648
[ "Electrical engineering", "Engines", "Electric motors" ]
9,798,607
https://en.wikipedia.org/wiki/Sporogenesis
Sporogenesis is the production of spores in biology. The term is also used to refer to the process of reproduction via spores. Reproductive spores were found to be formed in eukaryotic organisms, such as plants, algae and fungi, during their normal reproductive life cycle. Dormant spores are formed, for example by certain fungi and algae, primarily in response to unfavorable growing conditions. Most eukaryotic spores are haploid and form through cell division, though some types are diploids or dikaryons and form through cell fusion. Reproduction via spores Reproductive spores are generally the result of cell division, most commonly meiosis (e.g. in plant sporophytes). Sporic meiosis is needed to complete the sexual life cycle of the organisms using it. In some cases, sporogenesis occurs via mitosis (e.g. in some fungi and algae). Mitotic sporogenesis is a form of asexual reproduction. Examples are the conidial fungi Aspergillus and Penicillium, for which mitospore formation appears to be the primary mode of reproduction. Other fungi, such as ascomycetes, utilize both mitotic and meiotic spores. The red alga Polysiphonia alternates between mitotic and meiotic sporogenesis and both processes are required to complete its complex reproductive life cycle. In the case of dormant spores in eukaryotes, sporogenesis often occurs as a result of fertilization or karyogamy forming a diploid spore equivalent to a zygote. Therefore, zygospores are the result of sexual reproduction. Reproduction via spores involves the spreading of the spores by water or air. Algae and some fungi (chytrids) often use motile zoospores that can swim to new locations before developing into sessile organisms. Airborne spores are obvious in fungi, for example when they are released from puffballs. Other fungi have more active spore dispersal mechanisms. For example, the fungus Pilobolus can shoot its sporangia towards light. Plant spores designed for dispersal are also referred to as diaspores. Plant spores are most obvious in the reproduction of ferns and mosses. However, they also exist in flowering plants where they develop hidden inside the flower. For example, the pollen grains of flowering plants develop out of microspores produced in the anthers. Reproductive spores grow into multicellular haploid individuals or sporelings. In heterosporous organisms, two types of spores exist: microspores give rise to males and megaspores to females. In homosporous organisms, all spores look alike and grow into individuals carrying reproductive parts of both genders. Formation of reproductive spores Sporogenesis occurs in reproductive structures termed sporangia. The process involves sporogenous cells (sporocytes, also called spore mother cells) undergoing cell division to give rise to spores. In meiotic sporogenesis, a diploid spore mother cell within the sporangium undergoes meiosis, producing a tetrad of haploid spores. In organisms that are heterosporous, two types of spores occur: Microsporangia produce male microspores, and megasporangia produce female megaspores. In megasporogenesis, often three of the four spores degenerate after meiosis, whereas in microsporogenesis all four microspores survive. In gymnosperms, such as conifers, microspores are produced through meiosis from microsporocytes in microstrobili or male cones. In flowering plants, microspores are produced in the anthers of flowers. Each anther contains four pollen sacs, which contain the microsporocytes. 
After meiosis, each microspore undergoes mitotic cell division, giving rise to multicellular pollen grains (six nuclei in gymnosperms, three nuclei in flowering plants). Megasporogenesis occurs in megastrobili in conifers (for example a pine cone) and inside the ovule in the flowers of flowering plants. A megasporocyte inside a megasporangium or ovule undergoes meiosis, producing four megaspores. Only one is a functional megaspore whereas the others stay dysfunctional or degenerate. The megaspore undergoes several mitotic divisions to develop into a female gametophyte (for example the seven-cell/eight-nuclei embryo sac in flowering plants). Mitospore formation Some fungi and algae produce mitospores through mitotic cell division within a sporangium. In fungi, such mitospores are referred to as conidia. Formation of dormant spores Some algae, and fungi form resting spores made to survive unfavorable conditions. Typically, changes in the environment from favorable to unfavorable growing conditions will trigger a switch from asexual reproduction to sexual reproduction in these organisms. The resulting spores are protected through the formation of a thick cell wall and can withstand harsh conditions such as drought or extreme temperatures. Examples are chlamydospores, teliospores, zygospores, and myxospores. Similar survival structures produced in some bacteria are known as endospores. Chlamydospore and teliospore formation Chlamydospores are generally multicellular, asexual structures. Teliospores are a form of chlamydospore produced through the fusion of cells or hyphae where the nuclei of the fused cells stay separate. These nuclei undergo karyogamy and meiosis upon germination of the spore. Zygospore, oospore and auxospore formation Zygospores are formed in certain fungi (zygomycota, for example Rhizopus) and some algae (for example Chlamydomonas). The zygospore forms through the isogamic fusion of two cells (motile single cells in Chlamydomonas) or sexual conjugation between two hyphae (in zygomycota). Plasmogamy is followed by karyogamy, therefore zygospores are diploid (zygotes). They will undergo zygotic meiosis upon germinating. In oomycetes, the zygote forms through the fertilization of an egg cell with a sperm nucleus and enters a resting stage as a diploid, thick-walled oospore. The germinating oospore undergoes mitosis and gives rise to diploid hyphae which reproduce asexually via mitotic zoospores as long as conditions are favorable. In diatoms, fertilization gives rise to a zygote termed auxospore. Besides sexual reproduction and as a resting stage, the function of an auxospore is the restoration of the original cell size, as diatoms get progressively smaller during mitotic cell division. Auxospores divide by mitosis. Endospore formation The term sporogenesis can also refer to endospore formation in bacteria, which allows the cells to survive unfavorable conditions. Endospores are not reproductive structures and their formation does not require cell fusion or division. Instead, they form through the production of an encapsulating spore coat within the spore-forming cell. Parts of the spore There are many parts of the spore 'plant'. The structure enclosing a group of spores is called a sporangium. Bibliography S.S. Mader (2007): Biology, 9th edition, McGraw Hill Companies, New York, P.H. Raven, R.F. Evert, S.E. Eichhorn (2005): Biology of Plants, 7th Edition, W.H. Freeman and Company Publishers, New York, Reproduction Reproductive system Plant reproduction
Sporogenesis
[ "Biology" ]
1,634
[ "Behavior", "Reproductive system", "Plant reproduction", "Plants", "Reproduction", "Sex", "Biological interactions", "Organ systems" ]
9,801,523
https://en.wikipedia.org/wiki/Repressilator
The repressilator is a genetic regulatory network consisting of at least one feedback loop with at least three genes, each expressing a protein that represses the next gene in the loop. In biological research, repressilators have been used to build cellular models and understand cell function. There are both artificial and naturally-occurring repressilators. Recently, the naturally-occurring repressilator clock gene circuit in Arabidopsis thaliana (A. thaliana) and mammalian systems have been studied. Artificial Repressilators Artificial repressilators were first engineered by Michael Elowitz and Stanislas Leibler in 2000, complementing other research projects studying simple systems of cell components and function. In order to understand and model the design and cellular mechanisms that confers a cell’s function, Elowitz and Leibler created an artificial network consisting of a loop with three transcriptional repressors. This network was designed from scratch to exhibit a stable oscillation that acts like an electrical oscillator system with fixed time periods. The network was implemented in Escherichia coli (E. coli) via recombinant DNA transfer. It was then verified that the engineered colonies did indeed exhibit the desired oscillatory behavior. The repressilator consists of three genes connected in a feedback loop, such that each gene represses the next gene in the loop and is repressed by the previous gene. In the synthetic insertion into E. Coli, green fluorescent protein (GFP) was used as a reporter so that the behavior of the network could be observed using fluorescence microscopy. The design of the repressilator was guided by biological and circuit principles with discrete and stochastic models of analysis. Six differential equations were used to model the kinetics of the repressilator system based on protein and mRNA concentrations, as well as appropriate parameter and Hill coefficient values. In the study, Elowitz and Leibler generated figures showing oscillations of repressor proteins, using integration and typical parameter values as well as a stochastic version of the repressilator model using similar parameters. These models were analyzed to determine the values of various rates that would yield a sustained oscillation. It was found that these oscillations were favored by promoters coupled to efficient ribosome binding sites, cooperative transcriptional repressors, and comparable protein and mRNA decay rates. This analysis motivated two design features which were engineered into the genes. First, promoter regions were replaced with a more efficient hybrid promoter which combined the E. coli phage lambda PL (λ PL) promoter with lac repressor (Lacl) and Tet repressor (TetR) operator sequences. Second, to reduce the disparity between the lifetimes of the repressor proteins and the mRNAs, a carboxy terminal tag based on the ssrA-RNA sequence was added at the 3' end of each repressor gene. This tag is recognized by proteases which target the protein for degradation. The design was implemented using a low-copy plasmid encoding the repressilator and a higher-copy reporter, which were used to transform a culture of E. coli. Naturally Occurring Repressilators Plants Circadian circuits in plants feature a transcriptional regulatory feedback loop called the repressilator. In the core oscillator loop (outlined in gray) in A. thaliana, light is first sensed by two cryptochromes and five phytochromes. 
Two transcription factors, Circadian Clock Associated 1 (CCA1) and Late Elongated Hypocotyl (LHY), repress genes associated with evening expression like Timing of CAB expression 1 (TOC1) and activate genes associated with morning expression by binding to their promoters. TOC1, an evening gene, positively regulates CCA1 and LHY via an unknown mechanism. Evening-phased transcription factor CCA1 Hiking Expedition (CHE) and histone demethylase jumonji C domain-containing 5 (JMJD5) directly repress CCA1. Other components have been found to be expressed throughout the day and either directly or indirectly inhibit or activate a consequent element in the circadian circuit, thereby creating a complex, robust and flexible network of feedback loops. Morning-Phase Expression The morning-phase expression loop refers to the genes and proteins that regulate rhythms during the day in A. thaliana. The two main genes are LHY and CCA1, which encode LHY and CCA1 transcription factors. These proteins form heterodimers that enter the nucleus and bind to the TOC1 gene promoter, repressing the production of TOC1 protein. When TOC1 protein is expressed, it serves to regulate LHY and CCA1 by inhibition of their transcription. This was later supported in 2012 by Dr. Alexandra Pokhilo, who used computational analyses to show that TOC1 served this role as an inhibitor of LHY and CCA1 expression. The morning loop serves to inhibit hypocotyl elongation, in contrast with the evening-phase loop which promotes hypocotyl elongation. The morning phase loop has shown to be incapable of supporting circadian oscillation when evening-phase expression genes have been mutated, suggesting the interdependency of each component in this naturally-occurring repressilator. Evening-Phase Expression Early Flowering 3 (ELF3), Early Flowering 4 (ELF4) and Phytoclock1 (LUX) are the key elements in evening-phased clock gene expression in A. thaliana. They form the evening complex, in which LUX binds to the promoters of Phytochrome Interacting Factor 4 (PIF4) and Phytochrome Interacting Factor 5 (PIF5) and inhibits them. As a result, hypocotyl elongation is repressed in the early-evening. When the inhibition is alleviated late at night, the hypocotyl elongates. Photoperiod flowering is controlled by output gene Gigantea (GI). GI is activated at night and activates the expression of Constans (CO), which activates the expression of Flowering Locus T (FT). FT then causes flowering in long-days. Mammals Mammals evolved an endogenous timing mechanism to coordinate both physiology and behavior to the 24 hour period. In 2016, researchers identified a sequence of three subsequent inhibitions within this mechanism that they identified as a repressilator, which is now believed to serve as a major core element of this circadian network. The necessity of this system was established through a series of gene knockouts amongst cryptochrome (Cry), period (Per), and Rev-erb -- core mammalian clock genes whose knockouts lead to arrhythmicity. The model that these researchers generated includes Bmal1 as a driver of E-box mediated transcription, Per2 and Cry1 as early and late E-box repressors, respectively, as well as the D-box regulator Dbp and the nuclear receptor Rev-erb-α. The sequential inhibitions by Rev-erb, Per and Cry1 can generate sustained oscillations, and by clamping all other components except for this repressilator oscillations persisted with similar amplitudes and periods. 
All oscillating networks seem to involve any combination of these three core genes, as demonstrated in various schematics released by researchers. Recent Work The repressilator model has been used to model and study other biological pathways and systems. Since, extensive work into the repressilator’s modeling capacities has been performed. In 2003, the repressilator’s representation and validation of biological models, being a model with many variables, was performed using the Simpathica system, which verified that the model does indeed oscillate with all of its complexities. As stated in Elowitz and Leibler’s original work, the ultimate goal for repressilator research is to build an artificial circadian clock that mirrors its natural, endogenous counterpart. This would involve developing an artificial clock with reduced noise and temperature compensation in order to better understand circadian rhythms that can be found in every domain of life. Disruption of circadian rhythms may lead to loss of rhythmicity in metabolic and transcriptional processes, and even quicken the onset of certain neurodegenerative diseases such as Alzheimer's disease. In 2017, oscillators that generated circadian rhythms and were not influenced much by temperature were created in a laboratory. Pathologically, the repressilator model can be used to model cell growth and abnormalities that may arise, such as those present in cancer cells. In doing so, new treatments may be developed based on circadian activity of cancerous cells. Additionally, in 2016, a research team improved upon the previous design of the repressilator. Following noise (signal processing) analysis, the authors moved the GFP reporter construct onto the repressilator plasmid and removed the ssrA degradation tags from each repressor protein. This extended the period and improved the regularity of the oscillations of the repressilator. In 2019, a study furthered Elowitz and Leibler's model by improving the repressilator system by achieving a model with a unique steady state and new rate function. This experiment expanded the current knowledge of repression and gene regulation. Significance Synthetic Biology Artificial repressilators were discovered by implanting a synthetic inhibition loop into E. coli.  This represented the first implementation of synthetic oscillations into an organism. Further implications of this include the possibility of rescuing mutated components of oscillations synthetically in model organisms. The artificial repressilator is a milestone of synthetic biology which shows that genetic regulatory networks can be designed and implemented to perform novel functions. However, it was found that the cells' oscillations drifted out of phase after a period of time and the artificial repressilator's activity was influenced by cell growth. The initial experiment therefore gave new appreciation to the circadian clock found in many organisms, as endogenous repressilators are significantly more robust than implanted artificial repressilators. New investigations at the RIKEN Quantitative Biology Center have found that chemical modifications to a single protein molecule could form a temperature independent, self-sustainable oscillator . Artificial repressilators could potentially aid research and treatments in fields ranging from circadian biology to endocrinology. They are increasingly able to demonstrate the synchronization inherent to natural biological systems and the factors that affect them. 
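The three-gene loop discussed throughout this article is usually written, in dimensionless form, as a pair of equations per gene (one for the mRNA, one for the repressor protein). The sketch below integrates that form; the parameter values and initial conditions are illustrative choices, not the ones fitted in the original study, and NumPy and SciPy are assumed to be available.

import numpy as np
from scipy.integrate import solve_ivp

def repressilator(t, y, alpha=216.0, alpha0=0.2, beta=5.0, n=2.0):
    # Dimensionless three-gene loop: each protein represses the next gene.
    m, p = y[:3], y[3:]
    rep = p[[2, 0, 1]]                        # protein 3 represses gene 1, 1 -> 2, 2 -> 3
    dm = -m + alpha / (1.0 + rep ** n) + alpha0
    dp = -beta * (p - m)
    return np.concatenate([dm, dp])

y0 = [1.0, 0.0, 0.0, 2.0, 1.0, 3.0]           # slightly asymmetric starting state
sol = solve_ivp(repressilator, (0.0, 60.0), y0, max_step=0.05)
late = sol.y[3][sol.t > 30.0]                 # first repressor protein, second half of the run
print(late.min(), late.max())                 # sustained swings between low and high levels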
Circadian Biology A better understanding of the naturally-occurring repressilator in model organisms with endogenous, circadian timings, like A. thaliana, has applications in agriculture, especially in regards to plant rearing and livestock management. References External links Direct link to the repressilator model and a Description in BioModels Database A simulation of the repressilator in R: https://gist.github.com/AndreyAkinshin/37f3e68a1576f9ea1e5c01f2fd64fe5e An online simulation of the repressilator: https://www.yschaerli.com/repressilator.html A diagram of the system of feedback loops in A. thaliana Direct link to information about the CCA1 gene and the role it plays in A. thaliana Synthetic biology
Repressilator
[ "Engineering", "Biology" ]
2,393
[ "Synthetic biology", "Biological engineering", "Molecular genetics", "Bioinformatics" ]
9,801,876
https://en.wikipedia.org/wiki/Simultaneous%20nitrification%E2%80%93denitrification
Simultaneous nitrification–denitrification (SNdN) is a wastewater treatment process. Microbial simultaneous nitrification-denitrification is the conversion of the ammonium ion to nitrogen gas in a single bioreactor. The process is dependent on floc characteristics, reaction kinetics, mass loading of readily biodegradable chemical oxygen demand {rbCOD}, and the dissolved oxygen {DO} concentration. Microbiology The oxidation of the ammonium to nitrogen gas has been achieved with attached growth and suspended growth wastewater treatment processes. The most common bacteria responsible for the two step conversion are the autotrophic organisms, Nitrosomonas and Nitrobacter, and many different heterotrophs. The former obtain energy from the oxidation of ammonia, obtain carbon from CO2, and use oxygen as the electron acceptor. They are termed autotrophic because of their carbon source and termed aerobes because of their aerobic environment. The heterotrophic organisms are responsible for denitrification or the reduction of nitrate, NO3−, to nitrogen gas, N2. They use carbon from complex organic compounds, prefer low to zero dissolved oxygen, and use nitrate as the electron acceptor. Systems Design The most common design uses two different basins: one catering to the autotrophic bacteria and the second to the heterotrophic bacteria. However, SNdN accommodates to both in one basin with strict control of DO. This has been done in two common approaches. One is to develop an oxygen gradient by adding oxygen in one location in the basin. Near the O2 injection point, a high DO concentration is maintained allowing for nitrification and oxidation of other organic compounds. Oxygen is the electron acceptor and is depleted. The DO level in localized environments decreases with increasing distance from the injection point. In these low DO locations, the heterotrophic bacteria complete the nitrogen removal. The Orbal process is a technology in practice today using this method. The other method is to produce an oxygen gradient within the bio floc. The DO concentration remains high in the outside rings of the floc where nitrification occurs but low in the inner rings of the floc where denitrification occurs. This method is dependent on the floc size and characteristics; however controlling flocs is not well understood and is an active field of study Typically, SNdN has slower ammonia and nitrate utilization rates as compared to separate basin designs because only a fraction of the total biomass is participating in either the nitrification or the denitrification steps. The SNdN limitation due to partial active biomass has led to research in novel bacteria and system designs. Huang achieved significant ammonia removal in an attached growth process with ciliated columns packed with granular sulfur where the denitrifying bacteria used the sulfur as the electron donor and nitrate as the electron acceptor. Another well established pathway is via autotrophic denitrifying bacteria in the process termed the Anammox process. It is typically used for high ammonia strength wastewater. Notes References Nitrogen cycle Water treatment
Simultaneous nitrification–denitrification
[ "Chemistry", "Engineering", "Environmental_science" ]
629
[ "Water treatment", "Water pollution", "Nitrogen cycle", "Environmental engineering", "Water technology", "Metabolism" ]
9,804,678
https://en.wikipedia.org/wiki/Polyether%20block%20amide
Polyether block amide or PEBA is a thermoplastic elastomer (TPE). It is known under the tradenames PEBAX® (Arkema) and VESTAMID® E (Evonik Industries). It is a block copolymer obtained by polycondensation of a carboxylic acid-terminated polyamide (PA6, PA11, PA12) with an alcohol-terminated polyether (polytetramethylene glycol (PTMG) or polyethylene glycol (PEG)). The general chemical structure is: HO - (CO - PA - CO - O - PE - O)n - H PEBA is a high performance thermoplastic elastomer. It is used to replace common elastomers (thermoplastic polyurethanes, polyester elastomers, and silicones) for these characteristics: a lower density than other TPEs, superior mechanical and dynamic properties (flexibility, impact resistance, energy return, fatigue resistance) that are retained at low temperature (below -40 °C), and good resistance against a wide range of chemicals. It is sensitive to UV degradation, however. Applications PEBA is found in the sports equipment market: for damping system components and midsoles of high-end shoes (running, track & field, football, baseball, basketball, trekking, etc.) where it is appreciated for its low density, damping properties, energy return and flexibility. PEBA is also appreciated by winter sports participants as it enables design of the lightest alpine and Nordic ski boots while providing some resistance to extreme environments (low temperatures, UV exposure, moisture). It is used in various other sports applications such as racquet grommets and golf balls. PEBA is used in medical products such as catheters for its flexibility, its good mechanical properties at low and high temperatures, and its softness. It is also widely used in the manufacture of electric and electronic goods such as cables and wire coatings, electronic device casings, components, etc. PEBA can be used to make textiles as well as breathable film, fresh-feeling fibres or non-woven fabrics. Some hydrophilic grades of PEBA are also used for their antistatic and antidust properties. Since no chemical additives are required to achieve these properties, products can be recycled at end of life. Physical properties References Elastomers Copolymers Polymers Thermoplastic elastomers
Polyether block amide
[ "Chemistry", "Materials_science" ]
511
[ "Polymers", "Synthetic materials", "Polymer chemistry", "Elastomers" ]
5,799,154
https://en.wikipedia.org/wiki/Structural%20mechanics
Structural mechanics or mechanics of structures is the computation of deformations, deflections, and internal forces or stresses (stress equivalents) within structures, either for design or for performance evaluation of existing structures. It is one subset of structural analysis. Structural mechanics analysis needs input data such as structural loads, the structure's geometric representation and support conditions, and the materials' properties. Output quantities may include support reactions, stresses and displacements. Advanced structural mechanics may include the effects of stability and non-linear behaviors. Mechanics of structures is a field of study within applied mechanics that investigates the behavior of structures under mechanical loads, such as bending of a beam, buckling of a column, torsion of a shaft, deflection of a thin shell, and vibration of a bridge. There are three main approaches to the analysis: energy methods; the flexibility method or the direct stiffness method, which later developed into the finite element method; and the plastic analysis approach. Energy method Energy principles in structural mechanics Flexibility method Flexibility method Stiffness methods Direct stiffness method Finite element method in structural mechanics Plastic analysis approach Plastic analysis Major topics Beam theory Buckling Earthquake engineering Finite element method in structural mechanics Plates and shells Torsion Trusses Stiffening Structural dynamics Structural instability References Building engineering Structural engineering Solid mechanics Mechanics Earthquake engineering
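To make the stiffness approach concrete, here is a minimal sketch (not part of the article; the element properties and load are illustrative assumptions) of the direct stiffness method applied to two axial bar elements in series: the global stiffness matrix is assembled from identical element matrices, the support condition is imposed, and the nodal displacements and support reaction are solved for.

import numpy as np

# Two axial bar elements in series (nodes 0-1-2); node 0 is fixed and an
# axial load acts at node 2. All property values are illustrative.
E = 200e9        # Young's modulus [Pa]
A = 1e-4         # cross-sectional area [m^2]
L = 1.0          # element length [m]
k = E * A / L    # axial stiffness of one element [N/m]

# Element stiffness matrix of a 1-D bar element
ke = k * np.array([[1.0, -1.0],
                   [-1.0, 1.0]])

# Assemble the 3x3 global stiffness matrix from the element connectivity
K = np.zeros((3, 3))
for i, j in [(0, 1), (1, 2)]:
    K[np.ix_([i, j], [i, j])] += ke

F = np.array([0.0, 0.0, 10e3])   # 10 kN applied at node 2

# Impose the support condition u0 = 0 by solving only for the free DOFs
free = [1, 2]
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

reactions = K @ u - F            # the support reaction appears at node 0
print("displacements [m]:", u)   # expect u1 = F*L/(E*A), u2 = 2*F*L/(E*A)
print("reactions [N]:", reactions)

The same assemble-and-solve pattern, with larger element matrices, is what finite element structural analysis codes carry out.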
Structural mechanics
[ "Physics", "Engineering" ]
257
[ "Structural engineering", "Solid mechanics", "Building engineering", "Construction", "Civil engineering", "Mechanics", "Mechanical engineering", "Earthquake engineering", "Architecture" ]
5,799,888
https://en.wikipedia.org/wiki/Accessible%20surface%20area
The accessible surface area (ASA) or solvent-accessible surface area (SASA) is the surface area of a biomolecule that is accessible to a solvent. Measurement of ASA is usually described in units of square angstroms (a standard unit of measurement in molecular biology). ASA was first described by Lee & Richards in 1971 and is sometimes called the Lee-Richards molecular surface. ASA is typically calculated using the 'rolling ball' algorithm developed by Shrake & Rupley in 1973. This algorithm uses a sphere (of solvent) of a particular radius to 'probe' the surface of the molecule. Methods of calculating ASA Shrake–Rupley algorithm The Shrake–Rupley algorithm is a numerical method that draws a mesh of points equidistant from each atom of the molecule and uses the number of these points that are solvent accessible to determine the surface area. The points are drawn at a water molecule's estimated radius beyond the van der Waals radius, which is effectively similar to 'rolling a ball' along the surface. All points are checked against the surface of neighboring atoms to determine whether they are buried or accessible. The number of points accessible is multiplied by the portion of surface area each point represents to calculate the ASA. The choice of the 'probe radius' does have an effect on the observed surface area, as using a smaller probe radius detects more surface details and therefore reports a larger surface. A typical value is 1.4 Å, which approximates the radius of a water molecule. Another factor that affects the results is the definition of the VDW radii of the atoms in the molecule under study. For example, the molecule may often lack hydrogen atoms, which are implicit in the structure. The hydrogen atoms may be implicitly included in the atomic radii of the 'heavy' atoms, with a measure called the 'group radii'. In addition, the number of points created on the van der Waals surface of each atom determines another aspect of discretization, where more points provide an increased level of detail. LCPO method The LCPO method uses a linear approximation of the two-body problem for a quicker analytical calculation of ASA. The approximations used in LCPO result in an error in the range of 1-3 Ų. Power Diagram method Recently, a method was presented that calculates ASA quickly and analytically using a power diagram. Applications Accessible surface area is often used when calculating the transfer free energy required to move a biomolecule from an aqueous solvent to a non-polar solvent, such as a lipid environment. The LCPO method is also used when calculating implicit solvent effects in the molecular dynamics software package AMBER. It has recently been suggested that predicted accessible surface area can be used to improve prediction of protein secondary structure. Relation to solvent-excluded surface The ASA is closely related to the concept of the solvent-excluded surface (also known as the Connolly molecular surface area or simply the Connolly surface), which is imagined as a cavity in bulk solvent. It is also calculated in practice via a rolling-ball algorithm developed by Frederic Richards and implemented three-dimensionally by Michael Connolly in 1983 and Tim Richmond in 1984. Connolly spent several more years perfecting the method.
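The rolling-ball procedure described above translates directly into a short numerical routine. The following is a minimal sketch (not from the article; the atom coordinates, radii and number of test points are illustrative assumptions): points are scattered quasi-uniformly on each atom's solvent-expanded sphere, and only the points not buried inside a neighbouring atom's expanded sphere contribute to the area.

import numpy as np

def shrake_rupley_asa(coords, radii, probe=1.4, n_points=960):
    """Approximate solvent-accessible surface area per atom, in square angstroms."""
    coords = np.asarray(coords, float)
    radii = np.asarray(radii, float) + probe          # solvent-expanded radii

    # Quasi-uniform points on the unit sphere (golden-spiral construction)
    i = np.arange(n_points) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / n_points)
    theta = np.pi * (1.0 + 5 ** 0.5) * i
    sphere = np.stack([np.sin(phi) * np.cos(theta),
                       np.sin(phi) * np.sin(theta),
                       np.cos(phi)], axis=1)

    asa = np.zeros(len(coords))
    for a, (c, r) in enumerate(zip(coords, radii)):
        pts = c + r * sphere                          # test points on atom a
        buried = np.zeros(n_points, dtype=bool)
        for b, (cb, rb) in enumerate(zip(coords, radii)):
            if b != a:
                buried |= np.sum((pts - cb) ** 2, axis=1) < rb ** 2
        # each surviving point represents an equal patch of the expanded sphere
        asa[a] = (~buried).sum() / n_points * 4.0 * np.pi * r ** 2
    return asa

# Toy example: two overlapping carbon-like atoms 1.5 angstroms apart
print(shrake_rupley_asa([[0, 0, 0], [1.5, 0, 0]], [1.7, 1.7]).sum())

Production implementations (such as those listed under external links) add neighbour lists and curated radius tables, but the geometric idea is the same.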
See also Implicit solvation Van der Waals surface VADAR tool for analyzing peptide and protein structures Relative accessible surface area Notes References External links Network Science, Part 5: Solvent-Accessible Surfaces AREAIMOL is a command line tool in the CCP4 Program Suite for calculating ASA. NACCESS solvent accessible area calculations. FreeSASA Open source command line tool, C library and Python module for calculating ASA. Surface Racer Oleg Tsodikov's Surface Racer program. Solvent accessible and molecular surface area and average curvature calculation. Free for academic use. ASA.py — a Python-based implementation of the Shrake-Rupley algorithm. Michel Sanner's Molecular Surface – the fastest program to calculate the excluded surface. pov4grasp render molecular surfaces. Molecular Surface Package — Michael Connolly's program. Volume Voxelator — A web-based tool to generate excluded surfaces. ASV freeware Analytical calculation of the volume and surface of the union of n spheres (Monte-Carlo calculation also provided). Vorlume Computing Surface Area and Volume of a Family of 3D Balls. GetArea Calculate solvent accessible surface area of proteins online. Molecular modelling Computational chemistry Molecular dynamics Protein structure
Accessible surface area
[ "Physics", "Chemistry" ]
912
[ "Molecular physics", "Computational physics", "Molecular dynamics", "Computational chemistry", "Theoretical chemistry", "Molecular modelling", "Structural biology", "Protein structure" ]
7,562,718
https://en.wikipedia.org/wiki/Mean%20sojourn%20time
The mean sojourn time (or sometimes mean waiting time) for an object in a dynamical system is the amount of time an object is expected to spend in a system before leaving the system permanently. This concept is widely used in various fields, including physics, chemistry, and stochastic processes, to study the behavior of systems over time. Concept Imagine someone is standing in line to buy a ticket at the counter. After a minute, by observing the number of customers behind them, this person can estimate the rate at which customers are entering the system (in this case, the waiting line) per unit time (one minute). By dividing the number of customers ahead by this "flow" of customers, one can estimate how much longer the wait will be to reach the counter. Formally, consider the waiting line as a system S into which there is a flow of particles (customers) and where the process of "buying a ticket" means that the particle leaves the system. This waiting time is commonly referred to as the transit time. Applying Little's theorem, the expected steady-state number of particles in S equals the flow of particles into S times the mean transit time. Similar theorems have been discovered in other fields, and in physiology it was earlier known as one of the Stewart-Hamilton equations (which is used to estimate the blood volume of organs). Generalizations Consider a system S in the form of a closed domain of finite volume in Euclidean space. Further, consider the situation where there is a stream of "equivalent" particles into S (number of particles per time unit) where each particle retains its identity while being in S and eventually – after a finite time – leaves the system irreversibly (i.e., for these particles the system is "open"). The figure above depicts the hypothetical motion history of a single such particle, which thus moves in and out of subsystem s three times, each of which results in a transit time, namely the time spent in the subsystem between entrance and exit. The sum of these transit times is the sojourn time of s for that particular particle. If the motions of the particles are looked upon as realizations of one and the same stochastic process, it is meaningful to speak of the mean value of this sojourn time. That is, the mean sojourn time of a subsystem is the total time a particle is expected to spend in the subsystem s before leaving S for good. To see a practical significance of this quantity, we must understand that, as a law of physics, if the stream of particles into S is constant and all other relevant factors are kept constant, S will eventually reach steady state (i.e., the number and distribution of particles is constant everywhere in S). It can then be demonstrated that the steady-state number of particles in the subsystem s equals the stream of particles into the system S times the mean sojourn time of the subsystem. This is thus a more general form of what above was referred to as Little's theorem, and it might be called the mass-time equivalence: (expected steady-state amount in s) = (stream into S) × (mean sojourn time of s) This has also been called the occupancy principle (where the mean sojourn time is then referred to as occupancy). This mass-time equivalence has been applied in medicine for the study of metabolism of individual organs.
This is a generalization of what in queuing theory is sometimes referred to as Little's theorem, which applies only to the whole system S (not to an arbitrary subsystem as in the mass-time equivalence); the mean sojourn time in Little's theorem can be interpreted as a mean transit time. As is likely evident from the discussion of the figure above, there is a fundamental difference between the meanings of the two quantities sojourn time and transit time: the generality of the mass-time equivalence is very much due to the special meaning of the notion of sojourn time. Only when the whole system is considered (as in Little's law) is it true that the sojourn time always equals the transit time. Examples of applications: 1) Queuing theory: in queuing systems, it corresponds to the average time a customer or job spends in the system or a specific queue. 2) Physics: used to describe trapping times in potential wells or energy barriers in molecular dynamics. 3) Markov chains: describes the time a system spends in a transient state before transitioning. See also Ergodic theory Queuing theory Mean free path First Passage Time References Bergner, DMP – A kinetics of macroscopic particles in open heterogeneous systems Statistical mechanics
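As a numerical illustration of the mass-time equivalence (a sketch with made-up arrival and service rates, not taken from the article), the following single-server queue simulation checks that the time-average number of customers in the system equals the arrival rate multiplied by the mean sojourn time:

import random

# Little's law check on a single-server (M/M/1) queue: the time-average
# number in the system, L, should equal lam (arrival rate) * W (mean
# sojourn time). The rates below are illustrative.
random.seed(1)
lam, mu, n = 0.5, 1.0, 200_000      # arrival rate, service rate, customers

t = 0.0                 # arrival clock
server_free_at = 0.0    # time at which the server next becomes free
total_sojourn = 0.0
last_departure = 0.0

for _ in range(n):
    t += random.expovariate(lam)             # next arrival (Poisson process)
    start = max(t, server_free_at)           # FIFO service start
    server_free_at = start + random.expovariate(mu)
    total_sojourn += server_free_at - t      # sojourn = waiting + service time
    last_departure = server_free_at

T = max(last_departure, t)                   # observation window
W = total_sojourn / n                        # mean sojourn time
L = total_sojourn / T                        # time-average number in system
print(f"L = {L:.3f}, lam*W = {lam * W:.3f}")  # both close to 1 for these rates

For these illustrative rates the analytic single-server results are W = 1/(mu - lam) = 2 and L = 1, so the printed pair also agrees with the textbook value.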
Mean sojourn time
[ "Physics" ]
981
[ "Statistical mechanics" ]
7,564,980
https://en.wikipedia.org/wiki/Gaugino%20condensation
In quantum field theory, gaugino condensation is the nonzero vacuum expectation value in some models of a bilinear expression constructed, in theories with supersymmetry, from the superpartner of a gauge boson, called the gaugino. The gaugino, the bosonic gauge field and the D-term are all components of a supersymmetric vector superfield in the Wess–Zumino gauge. The condensate takes the form $\langle \lambda^{a\alpha} \lambda^{a}_{\alpha} \rangle \sim \Lambda^3 \neq 0$, where $\lambda$ represents the gaugino field (a spinor) and $\Lambda$ is an energy scale; $a$ represents Lie algebra indices and $\alpha$ represents van der Waerden (two-component spinor) indices. The mechanism is somewhat analogous to chiral symmetry breaking and is an example of a fermionic condensate. In the superfield notation, $W_\alpha$ is the gauge field strength and is a chiral superfield. $W_\alpha W^\alpha$ is also a chiral superfield, and the gaugino bilinear $\lambda\lambda$ appears as its lowest component, so what acquires a nonzero VEV is not the F-term of this chiral superfield. Because of this, gaugino condensation in and of itself does not lead to supersymmetry breaking. If we also have supersymmetry breaking, it is caused by something other than the gaugino condensate. However, a gaugino condensate definitely breaks U(1)R symmetry, as $\lambda\lambda$ has an R-charge of 2. See also Tachyon condensation References Supersymmetric quantum field theory Gauge theories
Gaugino condensation
[ "Physics" ]
298
[ "Symmetry", "Supersymmetric quantum field theory", "Quantum mechanics", "Supersymmetry", "Quantum physics stubs" ]
1,092,110
https://en.wikipedia.org/wiki/Bulk%20modulus
The bulk modulus ($K$ or $B$ or $k$) of a substance is a measure of the resistance of a substance to bulk compression. It is defined as the ratio of the infinitesimal pressure increase to the resulting relative decrease of the volume. Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear stress, and Young's modulus describes the response to normal (lengthwise stretching) stress. For a fluid, only the bulk modulus is meaningful. For a complex anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke's law. The reciprocal of the bulk modulus at fixed temperature is called the isothermal compressibility. Definition The bulk modulus (which is usually positive) can be formally defined by the equation $K = -V \frac{dP}{dV}$, where $P$ is pressure, $V$ is the initial volume of the substance, and $dP/dV$ denotes the derivative of pressure with respect to volume. Since the volume is inversely proportional to the density, it follows that $K = \rho \frac{dP}{d\rho}$, where $\rho$ is the initial density and $dP/d\rho$ denotes the derivative of pressure with respect to density. The inverse of the bulk modulus gives a substance's compressibility. Generally the bulk modulus is defined at constant temperature as the isothermal bulk modulus, but can also be defined at constant entropy as the adiabatic bulk modulus. Thermodynamic relation Strictly speaking, the bulk modulus is a thermodynamic quantity, and in order to specify a bulk modulus it is necessary to specify how the pressure varies during compression: constant-temperature (isothermal $K_T$), constant-entropy (isentropic $K_S$), and other variations are possible. Such distinctions are especially relevant for gases. For an ideal gas, an isentropic process has $PV^\gamma = \text{constant}$, where $\gamma$ is the heat capacity ratio. Therefore, the isentropic bulk modulus is given by $K_S = \gamma P$. Similarly, an isothermal process of an ideal gas has $PV = \text{constant}$. Therefore, the isothermal bulk modulus is given by $K_T = P$. When the gas is not ideal, these equations give only an approximation of the bulk modulus. In a fluid, the bulk modulus $K$ and the density $\rho$ determine the speed of sound $c$ (pressure waves), according to the Newton-Laplace formula $c = \sqrt{K/\rho}$. In solids, $K_S$ and $K_T$ have very similar values. Solids can also sustain transverse waves: for these materials one additional elastic modulus, for example the shear modulus, is needed to determine wave speeds. Measurement It is possible to measure the bulk modulus using powder diffraction under applied pressure. The bulk modulus of a fluid characterizes its ability to change volume under applied pressure. Selected values A material with a bulk modulus of 35 GPa loses one percent of its volume when subjected to an external pressure of 0.35 GPa (assuming a constant or weakly pressure-dependent bulk modulus). Microscopic origin Interatomic potential and linear elasticity Since linear elasticity is a direct result of interatomic interaction, it is related to the extension/compression of bonds. It can then be derived from the interatomic potential for crystalline materials. First, let us examine the potential energy of two interacting atoms. Starting from very far apart, they will feel an attraction towards each other. As they approach each other, their potential energy will decrease. On the other hand, when two atoms are very close to each other, their total energy will be very high due to repulsive interaction. Together, these potentials guarantee an interatomic distance that achieves a minimal energy state.
This occurs at some distance $r_0$, where the total force is zero: $F = -\frac{dU}{dr} = 0$, where $U$ is the interatomic potential and $r$ is the interatomic distance. This means the atoms are in equilibrium. To extend the two-atom approach to a solid, consider a simple model, say, a 1-D array of one element with interatomic distance $r$ and equilibrium distance $r_0$. Its potential energy as a function of interatomic distance has a form similar to the two-atom case, reaching a minimum at $r_0$. The Taylor expansion of this is $U(r) = U(r_0) + \left.\frac{dU}{dr}\right|_{r_0}(r - r_0) + \frac{1}{2}\left.\frac{d^2U}{dr^2}\right|_{r_0}(r - r_0)^2 + \cdots$. At equilibrium, the first derivative is 0, so the dominant term is the quadratic one. When the displacement is small, the higher-order terms should be omitted. The expression becomes $U(r) \approx U(r_0) + \frac{1}{2}\left.\frac{d^2U}{dr^2}\right|_{r_0}(r - r_0)^2$, which is clearly linear elasticity. Note that the derivation is done considering two neighboring atoms, so the Hooke's-law coefficient is $k = \left.\frac{d^2U}{dr^2}\right|_{r_0}$. This form can easily be extended to the 3-D case, with the volume per atom (Ω) in place of the interatomic distance. See also Elasticity tensor Volumetric strain References Further reading Elasticity (physics) Mechanical quantities
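As a quick numerical check of the relations above (a sketch; the gas and liquid property values are typical textbook figures rather than values from the article), the following evaluates the ideal-gas bulk moduli of air, the Newton-Laplace speed of sound for air and water, and the one-percent volume change quoted in the selected-values example:

import math

P = 101_325.0          # atmospheric pressure [Pa]
gamma = 1.4            # heat capacity ratio of air
K_T = P                # isothermal bulk modulus of an ideal gas
K_S = gamma * P        # isentropic (adiabatic) bulk modulus

rho_air = 1.2          # approximate density of air [kg/m^3]
c_air = math.sqrt(K_S / rho_air)            # Newton-Laplace formula
print(f"air:   K_T = {K_T/1e3:.0f} kPa, K_S = {K_S/1e3:.0f} kPa, c = {c_air:.0f} m/s")

K_water, rho_water = 2.2e9, 1000.0          # typical values for water
print(f"water: c = {math.sqrt(K_water / rho_water):.0f} m/s")

# Volume change for K = 35 GPa under 0.35 GPa of external pressure
print(f"relative volume change = {-0.35e9 / 35e9:.2%}")   # about -1%

The air result (roughly 340 m/s) and the water result (roughly 1480 m/s) are consistent with measured sound speeds, which is the usual sanity check on the adiabatic bulk modulus.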
Bulk modulus
[ "Physics", "Materials_science", "Mathematics" ]
947
[ "Physical phenomena", "Mechanical quantities", "Physical quantities", "Elasticity (physics)", "Deformation (mechanics)", "Quantity", "Mechanics", "Physical properties" ]
1,092,885
https://en.wikipedia.org/wiki/Lattice%20field%20theory
In physics, lattice field theory is the study of lattice models of quantum field theory. This involves studying field theory on a space or spacetime that has been discretised onto a lattice. Details Although most lattice field theories are not exactly solvable, they are immensely appealing due to their feasibility for computer simulation, often using Markov chain Monte Carlo methods. One hopes that, by performing simulations on larger and larger lattices, while making the lattice spacing smaller and smaller, one will be able to recover the behavior of the continuum theory as the continuum limit is approached. Just as in all lattice models, numerical simulation provides access to field configurations that are not accessible to perturbation theory, such as solitons. Similarly, non-trivial vacuum states can be identified and examined. The method is particularly appealing for the quantization of a gauge theory using the Wilson action. Most quantization approaches keep Poincaré invariance manifest but sacrifice manifest gauge symmetry by requiring gauge fixing; it is only after renormalization that gauge invariance can be recovered. Lattice field theory differs from these in that it keeps manifest gauge invariance but sacrifices manifest Poincaré invariance, recovering it only after renormalization. The articles on lattice gauge theory and lattice QCD explore these issues in greater detail. See also Fermion doubling Further reading Creutz, M., Quarks, gluons and lattices, Cambridge University Press, Cambridge, (1985) (renewed version, 2023). DeGrand, T., DeTar, C., Lattice Methods for Quantum Chromodynamics, World Scientific, Singapore, (2006). Gattringer, C., Lang, C. B., Quantum Chromodynamics on the Lattice, Springer, (2010). Knechtli, F., Günther, M., Peardon, M., Lattice Quantum Chromodynamics: Practical Essentials, Springer, (2016). Lin, H., Meyer, H.B., Lattice QCD for Nuclear Physics, Springer, (2014). Makeenko, Y., Methods of contemporary gauge theory, Cambridge University Press, Cambridge, (2002). Montvay, I., Münster, G., Quantum Fields on a Lattice, Cambridge University Press, Cambridge, (1997). Rothe, H., Lattice Gauge Theories, An Introduction, World Scientific, Singapore, (2005). Smit, J., Introduction to Quantum Fields on a Lattice, Cambridge University Press, Cambridge, (2002).
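To illustrate the Markov chain Monte Carlo simulations mentioned above, here is a minimal sketch (a toy, not a production lattice code; the lattice size, couplings and update step are arbitrary illustrative choices) of a Metropolis update for a one-dimensional lattice scalar field with a discretised phi^4 action:

import random, math

# Metropolis Monte Carlo for a 1-D lattice phi^4 theory with periodic
# boundary conditions. Action:
#   S = sum_x [ (phi(x+1)-phi(x))^2 / 2 + m2/2 * phi(x)^2 + lam/24 * phi(x)^4 ]
N, m2, lam = 64, 0.5, 1.0          # sites, mass^2, coupling (illustrative)
phi = [0.0] * N
random.seed(0)

def local_action(x, value):
    """Terms of the action that depend on the field at site x."""
    left, right = phi[(x - 1) % N], phi[(x + 1) % N]
    kinetic = 0.5 * ((value - left) ** 2 + (right - value) ** 2)
    return kinetic + 0.5 * m2 * value ** 2 + lam / 24.0 * value ** 4

def sweep(step=1.0):
    for x in range(N):
        new = phi[x] + random.uniform(-step, step)
        dS = local_action(x, new) - local_action(x, phi[x])
        if dS < 0 or random.random() < math.exp(-dS):   # Metropolis acceptance
            phi[x] = new

for _ in range(500):               # thermalisation sweeps
    sweep()
avg_phi2, n_meas = 0.0, 2000
for _ in range(n_meas):            # measurement sweeps
    sweep()
    avg_phi2 += sum(v * v for v in phi) / N
print("<phi^2> per site ~", avg_phi2 / n_meas)

A real lattice gauge theory calculation replaces the site variables with group-valued link variables and this toy action with the Wilson action, but the accept/reject structure of the update is the same.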
Lattice field theory
[ "Physics" ]
536
[ "Statistical mechanics stubs", "Theoretical physics", "Quantum physics stubs", "Quantum mechanics", "Computational physics", "Theoretical physics stubs", "Statistical mechanics", "Computational physics stubs" ]
1,093,675
https://en.wikipedia.org/wiki/Wannier%20function
The Wannier functions are a complete set of orthogonal functions used in solid-state physics. They were introduced by Gregory Wannier in 1937. Wannier functions are the localized molecular orbitals of crystalline systems. The Wannier functions for different lattice sites in a crystal are orthogonal, allowing a convenient basis for the expansion of electron states in certain regimes. Wannier functions have found widespread use, for example, in the analysis of binding forces acting on electrons. Definition Although, like localized molecular orbitals, Wannier functions can be chosen in many different ways, the original, simplest, and most common definition in solid-state physics is as follows. Choose a single band in a perfect crystal, and denote its Bloch states by $\psi_{\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} u_{\mathbf{k}}(\mathbf{r})$, where $u_{\mathbf{k}}(\mathbf{r})$ has the same periodicity as the crystal. Then the Wannier functions are defined by $\phi_{\mathbf{R}}(\mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{\mathbf{k}} e^{-i\mathbf{k}\cdot\mathbf{R}} \psi_{\mathbf{k}}(\mathbf{r})$, where $\mathbf{R}$ is any lattice vector (i.e., there is one Wannier function for each Bravais lattice vector); $N$ is the number of primitive cells in the crystal; and the sum on $\mathbf{k}$ includes all the values of $\mathbf{k}$ in the Brillouin zone (or any other primitive cell of the reciprocal lattice) that are consistent with periodic boundary conditions on the crystal. This includes $N$ different values of $\mathbf{k}$, spread out uniformly through the Brillouin zone. Since $N$ is usually very large, the sum can be written as an integral according to the replacement rule $\frac{1}{N} \sum_{\mathbf{k}} \longrightarrow \frac{1}{\Omega} \int_{\text{BZ}} d^3\mathbf{k}$, where "BZ" denotes the Brillouin zone, which has volume $\Omega$. Properties On the basis of this definition, the following properties can be proven to hold: For any lattice vector $\mathbf{R}'$, $\phi_{\mathbf{R}}(\mathbf{r}) = \phi_{\mathbf{R}+\mathbf{R}'}(\mathbf{r}+\mathbf{R}')$. In other words, a Wannier function only depends on the quantity $\mathbf{r} - \mathbf{R}$. As a result, these functions are often written in the alternative notation $\phi(\mathbf{r} - \mathbf{R})$. The Bloch functions can be written in terms of Wannier functions as follows: $\psi_{\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{R}} \phi(\mathbf{r} - \mathbf{R})$, where the sum is over each lattice vector $\mathbf{R}$ in the crystal. The set of Wannier functions $\phi_{\mathbf{R}}$ is an orthonormal basis for the band in question. Wannier functions have been extended to nearly periodic potentials as well. Localization The Bloch states $\psi_{\mathbf{k}}(\mathbf{r})$ are defined as the eigenfunctions of a particular Hamiltonian, and are therefore defined only up to an overall phase. By applying a phase transformation $e^{i\theta(\mathbf{k})}$ to the functions $\psi_{\mathbf{k}}(\mathbf{r})$, for any (real) function $\theta(\mathbf{k})$, one arrives at an equally valid choice. While the change has no consequences for the properties of the Bloch states, the corresponding Wannier functions are significantly changed by this transformation. One therefore uses the freedom to choose the phases of the Bloch states in order to give the most convenient set of Wannier functions. In practice, this is usually the maximally-localized set, in which the Wannier function $\phi_{\mathbf{R}}$ is localized around the point $\mathbf{R}$ and rapidly goes to zero away from $\mathbf{R}$. For the one-dimensional case, it has been proved by Kohn that there is always a unique choice that gives these properties (subject to certain symmetries). This consequently applies to any separable potential in higher dimensions; the general conditions are not established, and are the subject of ongoing research. A Pipek-Mezey style localization scheme has also been recently proposed for obtaining Wannier functions. Contrary to the maximally localized Wannier functions (which are an application of the Foster-Boys scheme to crystalline systems), the Pipek-Mezey Wannier functions do not mix σ and π orbitals. Rigorous results The existence of exponentially localized Wannier functions in insulators was proved mathematically in 2006.
Modern theory of polarization Wannier functions have recently found application in describing the polarization in crystals, for example, ferroelectrics. The modern theory of polarization was pioneered by Raffaele Resta and David Vanderbilt. See, for example, Berghold and Nakhmanson, and a PowerPoint introduction by Vanderbilt. The polarization per unit cell in a solid can be defined as the dipole moment of the Wannier charge density, $\mathbf{p}_c = -e \sum_n \int d^3 r \, \mathbf{r} \, |W_n(\mathbf{r})|^2$, where the summation is over the occupied bands, and $W_n$ is the Wannier function localized in the cell for band $n$. The change in polarization during a continuous physical process is the time derivative of the polarization and also can be formulated in terms of the Berry phase of the occupied Bloch states. Wannier interpolation Wannier functions are often used to interpolate bandstructures calculated ab initio on a coarse grid of k-points to any arbitrary k-point. This is particularly useful for the evaluation of Brillouin-zone integrals on dense grids and searches for Weyl points, and also for taking derivatives in k-space. This approach is similar in spirit to the tight binding approximation, but in contrast allows for an exact description of bands in a certain energy range. Wannier interpolation schemes have been derived for spectral properties, anomalous Hall conductivity, orbital magnetization, thermoelectric and electronic transport properties, gyrotropic effects, shift current, spin Hall conductivity and other effects. See also Orbital magnetization References Further reading External links Wannier90 computer code that calculates maximally localized Wannier functions Wannier Transport code that calculates maximally localized Wannier functions fit for Quantum Transport applications WannierTools: An open-source software package for novel topological materials WannierBerri - a python code for Wannier interpolation and tight-binding calculations See also Bloch's theorem Hannay angle Geometric phase Condensed matter physics
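As an illustration of the definition above, the following sketch (a toy model with arbitrary parameters, not taken from the article) numerically builds the lowest-band Bloch states of a one-dimensional cosine potential in a plane-wave basis, fixes their phases so that each is real and positive at a lattice site, and sums them into a Wannier function localized on that site:

import numpy as np

# Toy 1-D crystal with V(x) = -V0*cos(2*pi*x/a); units with hbar = m_e = 1.
a, V0, N = 1.0, 4.0, 32               # lattice constant, potential depth, cells
G = 2 * np.pi / a                     # reciprocal lattice vector
ms = np.arange(-3, 4)                 # plane waves e^{i(k+mG)x}, m = -3..3
ks = (np.arange(N) - N // 2) * G / N  # uniform k grid over the Brillouin zone

x = np.linspace(-N * a / 2, N * a / 2, 4000, endpoint=False)
i0 = len(x) // 2                      # index of x = 0
w = np.zeros_like(x, dtype=complex)   # Wannier function for R = 0

for k in ks:
    # Plane-wave Hamiltonian: kinetic terms on the diagonal, -V0/2 coupling
    # between plane waves that differ by one reciprocal lattice vector.
    H = np.diag(0.5 * (k + ms * G) ** 2)
    for i in range(len(ms) - 1):
        H[i, i + 1] = H[i + 1, i] = -V0 / 2
    E, C = np.linalg.eigh(H)
    c = C[:, 0]                                        # lowest band
    psi = sum(cm * np.exp(1j * (k + m * G) * x) for cm, m in zip(c, ms))
    psi *= np.exp(-1j * np.angle(psi[i0]))             # gauge: psi_k(0) real > 0
    w += psi / np.sqrt(N)

dx = x[1] - x[0]
total = np.sum(np.abs(w) ** 2) * dx
near = np.sum(np.abs(w[np.abs(x) < a]) ** 2) * dx
print(f"fraction of |w|^2 within one lattice constant of x = 0: {near / total:.3f}")

With this simple phase convention most of the weight already sits within one cell; maximally localized Wannier codes such as Wannier90 optimize the phases (and band mixing) to minimize the spread systematically.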
Wannier function
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,133
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
1,093,768
https://en.wikipedia.org/wiki/Electron-beam%20lithography
Electron-beam lithography (often abbreviated as e-beam lithography or EBL) is the practice of scanning a focused beam of electrons to draw custom shapes on a surface covered with an electron-sensitive film called a resist (exposing). The electron beam changes the solubility of the resist, enabling selective removal of either the exposed or non-exposed regions of the resist by immersing it in a solvent (developing). The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. The primary advantage of electron-beam lithography is that it can draw custom patterns (direct-write) with sub-10 nm resolution. This form of maskless lithography has high resolution but low throughput, limiting its usage to photomask fabrication, low-volume production of semiconductor devices, and research and development. Systems Electron-beam lithography systems used in commercial applications are dedicated e-beam writing systems that are very expensive (> US$1M). For research applications, it is very common to convert an electron microscope into an electron beam lithography system using relatively low cost accessories (< US$100K). Such converted systems have produced linewidths of ~20 nm since at least 1990, while current dedicated systems have produced linewidths on the order of 10 nm or smaller. Electron-beam lithography systems can be classified according to both beam shape and beam deflection strategy. Older systems used Gaussian-shaped beams scanned in a raster fashion. Newer systems use shaped beams that can be deflected to various positions in the writing field (also known as vector scan). Electron sources Lower-resolution systems can use thermionic sources (cathodes), which are usually formed from lanthanum hexaboride. However, systems with higher-resolution requirements need to use field electron emission sources, such as heated W/ZrO2, for lower energy spread and enhanced brightness. Thermal field emission sources are preferred over cold emission sources, in spite of the former's slightly larger beam size, because they offer better stability over typical writing times of several hours. Lenses Both electrostatic and magnetic lenses may be used. However, electrostatic lenses have more aberrations and so are not used for fine focusing. There is currently no mechanism to make achromatic electron beam lenses, so extremely narrow dispersions of the electron beam energy are needed for the finest focusing. Stage, stitching and alignment Typically, for very small beam deflections, electrostatic deflection "lenses" are used; larger beam deflections require electromagnetic scanning. Because of the inaccuracy and because of the finite number of steps in the exposure grid, the writing field is of the order of 100 micrometre – 1 mm. Larger patterns require stage moves. An accurate stage is critical for stitching (tiling writing fields exactly against each other) and pattern overlay (aligning a pattern to a previously made one). Electron beam write time The minimum time to expose a given area for a given dose is given by the formula $T = \frac{D \cdot A}{I}$, where $T$ is the time to expose the object (which can be divided into exposure time/step size), $I$ is the beam current, $D$ is the dose and $A$ is the area exposed. For example, assuming an exposure area of 1 cm², a dose of 10⁻³ coulombs/cm², and a beam current of 10⁻⁹ amperes, the resulting minimum write time would be 10⁶ seconds (about 12 days).
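As a quick check of the throughput estimate above (a sketch that simply re-evaluates the figures quoted in the text, plus an electron count per pixel relevant to the shot-noise discussion that follows):

# Minimum e-beam write time T = D*A/I, using the figures quoted above.
D = 1e-3        # dose [C/cm^2]
I = 1e-9        # beam current [A]
A = 1.0         # exposed area [cm^2]

T = D * A / I
print(f"write time: {T:.1e} s (~{T / 86400:.0f} days)")            # ~1e6 s, ~12 days

# Scaling the same dose and current to a 300 mm wafer (~700 cm^2)
print(f"300 mm wafer: ~{D * 700 / I / (86400 * 365):.0f} years")   # ~22 years

# Electrons landing in a 20 nm x 20 nm pixel at a mask-like dose of 10 uC/cm^2;
# counts this small are what make shot noise significant.
e = 1.602e-19                 # elementary charge [C]
pixel_area = (20e-7) ** 2     # 20 nm expressed in cm
print(f"electrons per 20 nm pixel at 10 uC/cm^2: {10e-6 * pixel_area / e:.0f}")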
This minimum write time does not include time for the stage to move back and forth, time for the beam to be blanked (blocked from the wafer during deflection), or time for other possible beam corrections and adjustments in the middle of writing. To cover the 700 cm² surface area of a 300 mm silicon wafer, the minimum write time would extend to 7×10⁸ seconds, about 22 years. This is a factor of about 10 million times slower than current optical lithography tools. It is clear that throughput is a serious limitation for electron beam lithography, especially when writing dense patterns over a large area. E-beam lithography is not suitable for high-volume manufacturing because of its limited throughput. The smaller field of electron beam writing makes for very slow pattern generation compared with photolithography (the current standard) because more exposure fields must be scanned to form the final pattern area (on the order of mm² or less for electron beam vs. ≥40 mm² for an optical mask projection scanner). The stage moves in between field scans. The electron beam field is small enough that a rastering or serpentine stage motion is needed to pattern a 26 mm × 33 mm area, for example, whereas in a photolithography scanner only a one-dimensional motion of a 26 mm × 2 mm slit field would be required. Currently an optical maskless lithography tool is much faster than an electron beam tool used at the same resolution for photomask patterning. Shot noise As feature sizes shrink, the number of incident electrons at fixed dose also shrinks. As soon as the number drops to ~10,000, shot noise effects become predominant, leading to substantial natural dose variation within a large feature population. With each successive process node, as the feature area is halved, the minimum dose must double to maintain the same noise level. Consequently, the tool throughput would be halved with each successive process node. Note: 1 ppm of the population is about 5 standard deviations away from the mean dose. Ref.: SPIE Proc. 8683-36 (2013) Shot noise is a significant consideration even for mask fabrication. For example, a commercial mask e-beam resist like FEP-171 would use doses less than 10 μC/cm², but this leads to noticeable shot noise for a target critical dimension (CD) even on the order of ~200 nm on the mask. CD variation can be on the order of 15–20% for sub-20 nm features. Defects in electron-beam lithography Despite the high resolution of electron-beam lithography, the generation of defects during electron-beam lithography is often not considered by users. Defects may be classified into two categories: data-related defects, and physical defects. Data-related defects may be classified further into two sub-categories. Blanking or deflection errors occur when the electron beam is not deflected properly when it is supposed to be, while shaping errors occur in variable-shaped beam systems when the wrong shape is projected onto the sample. These errors can originate either from the electron optical control hardware or from the input data that was taped out. As might be expected, larger data files are more susceptible to data-related defects. Physical defects are more varied, and can include sample charging (either negative or positive), backscattering calculation errors, dose errors, fogging (long-range reflection of backscattered electrons), outgassing, contamination, beam drift and particles. Since the write time for electron beam lithography can easily exceed a day, "randomly occurring" defects are more likely to occur.
Here again, larger data files can present more opportunities for defects. Photomask defects largely originate during the electron beam lithography used for pattern definition. Electron energy deposition in matter The primary electrons in the incident beam lose energy upon entering a material through inelastic scattering or collisions with other electrons. In such a collision the momentum transfer from the incident electron to an atomic electron can be expressed as $\Delta p = 2e^2/(bv)$, where $b$ is the distance of closest approach between the electrons, and $v$ is the incident electron velocity. The energy transferred by the collision is given by $T = (\Delta p)^2/(2m) = e^4/(E b^2)$, where $m$ is the electron mass and $E$ is the incident electron energy, given by $E = \tfrac{1}{2}mv^2$. By integrating over all values of $T$ between the lowest binding energy, $E_0$, and the incident energy, one obtains the result that the total cross section for collision is inversely proportional to the incident energy $E$, and proportional to $1/E_0 - 1/E$. Generally, $E \gg E_0$, so the result is essentially inversely proportional to the binding energy. By using the same integration approach, but over the range $2E_0$ to $E$, one obtains by comparing cross-sections that half of the inelastic collisions of the incident electrons produce electrons with kinetic energy greater than $E_0$. These secondary electrons are capable of breaking bonds (with binding energy $E_0$) at some distance away from the original collision. Additionally, they can generate additional, lower-energy electrons, resulting in an electron cascade. Hence, it is important to recognize the significant contribution of secondary electrons to the spread of the energy deposition. In general, for a molecule AB: e− + AB → AB− → A + B− This reaction, also known as "electron attachment" or "dissociative electron attachment", is most likely to occur after the electron has essentially slowed to a halt, since it is easiest to capture at that point. The cross-section for electron attachment is inversely proportional to electron energy at high energies, but approaches a maximum limiting value at zero energy. On the other hand, it is already known that the mean free path at the lowest energies (few to several eV or less, where dissociative attachment is significant) is well over 10 nm, thus limiting the ability to consistently achieve resolution at this scale. Resolution capability With today's electron optics, electron beam widths can routinely go down to a few nanometers. This is limited mainly by aberrations and space charge. However, the feature resolution limit is determined not by the beam size but by forward scattering (or effective beam broadening) in the resist, while the pitch resolution limit is determined by secondary electron travel in the resist. This point was driven home by a 2007 demonstration of double patterning using electron beam lithography in the fabrication of 15 nm half-pitch zone plates. Although a 15 nm feature was resolved, a 30 nm pitch was still difficult to do, due to secondary electrons scattering from the adjacent feature. The use of double patterning allowed the spacing between features to be wide enough for the secondary electron scattering to be significantly reduced. The forward scattering can be decreased by using higher-energy electrons or thinner resist, but the generation of secondary electrons is inevitable. It is now recognized that for insulating materials like PMMA, low-energy electrons can travel quite a long distance (several nm is possible).
This is due to the fact that below the ionization potential the energy loss mechanism is mainly through phonons and polarons. Although the latter is basically an ionic lattice effect, polaron hopping can extend as far as 20 nm. The travel distance of secondary electrons is not a fundamentally derived physical value, but a statistical parameter often determined from many experiments or Monte Carlo simulations down to < 1 eV. This is necessary since the energy distribution of secondary electrons peaks well below 10 eV. Hence, the resolution limit is not usually cited as a well-fixed number as with an optical diffraction-limited system. Repeatability and control at the practical resolution limit often require considerations not related to image formation, e.g., resist development and intermolecular forces. A study by the College of Nanoscale Science and Engineering (CNSE) presented at the 2013 EUVL Workshop indicated that, as a measure of electron blur, 50–100 eV electrons easily penetrated beyond 10 nm of resist thickness in PMMA or a commercial resist. Furthermore, dielectric breakdown discharge is possible. More recent studies have indicated that 20 nm resist thickness could be penetrated by low-energy electrons (of sufficient dose), and that sub-20 nm half-pitch electron-beam lithography already required double patterning. As of 2022, a state-of-the-art electron multi-beam writer achieves about a 20 nm resolution. Scattering In addition to producing secondary electrons, primary electrons from the incident beam with sufficient energy to penetrate the resist can be multiply scattered over large distances from underlying films and/or the substrate. This leads to exposure of areas at a significant distance from the desired exposure location. For thicker resists, as the primary electrons move forward, they have an increasing opportunity to scatter laterally from the beam-defined location. This scattering is called forward scattering. Sometimes the primary electrons are scattered at angles exceeding 90 degrees, i.e., they no longer advance further into the resist. These electrons are called backscattered electrons and have the same effect as long-range flare in optical projection systems. A large enough dose of backscattered electrons can lead to complete exposure of resist over an area much larger than defined by the beam spot. Proximity effect The smallest features produced by electron-beam lithography have generally been isolated features, as nested features exacerbate the proximity effect, whereby electrons from exposure of an adjacent region spill over into the exposure of the currently written feature, effectively enlarging its image and reducing its contrast, i.e., the difference between maximum and minimum intensity. Hence, nested feature resolution is harder to control. For most resists, it is difficult to go below 25 nm lines and spaces, and a limit of 20 nm lines and spaces has been found. In actuality, though, the range of secondary electron scattering is quite large, sometimes exceeding 100 nm, and it becomes very significant below 30 nm. The proximity effect is also manifest by secondary electrons leaving the top surface of the resist and then returning some tens of nanometers away. Proximity effects (due to electron scattering) can be addressed by solving the inverse problem and calculating the exposure function E(x,y) that leads to a dose distribution as close as possible to the desired dose D(x,y) when convolved with the scattering distribution point spread function PSF(x,y).
However, it must be remembered that an error in the applied dose (e.g., from shot noise) would cause the proximity effect correction to fail. Charging Since electrons are charged particles, they tend to charge the substrate negatively unless they can quickly gain access to a path to ground. For a high-energy beam incident on a silicon wafer, virtually all the electrons stop in the wafer, where they can follow a path to ground. However, for a quartz substrate such as a photomask, the embedded electrons will take a much longer time to move to ground. Often the negative charge acquired by a substrate can be compensated or even exceeded by a positive charge on the surface due to secondary electron emission into the vacuum. The presence of a thin conducting layer above or below the resist is generally of limited use for high-energy (50 keV or more) electron beams, since most electrons pass through the layer into the substrate. The charge dissipation layer is generally useful only around or below 10 keV, since the resist is thinner and most of the electrons either stop in the resist or close to the conducting layer. However, such layers are of limited use due to their high sheet resistance, which can lead to ineffective grounding. The range of low-energy secondary electrons (the largest component of the free-electron population in the resist-substrate system) which can contribute to charging is not a fixed number but can vary from 0 to as high as 50 nm (see the section New frontiers below). Hence, resist-substrate charging is not repeatable and is difficult to compensate consistently. Negative charging deflects the electron beam away from the charged area, while positive charging deflects the electron beam toward the charged area. Electron-beam resist performance Due to the scission efficiency generally being an order of magnitude higher than the crosslinking efficiency, most polymers used for positive-tone electron-beam lithography will also crosslink (and therefore become negative tone) at doses an order of magnitude higher than the doses used to cause scission in the polymer for positive-tone exposure. In the case of PMMA, for electron exposures up to roughly 1000 μC/cm², the gradation curve corresponds to that of a "normal" positive process. Above 2000 μC/cm², the recombinant crosslinking process prevails, and at about 7000 μC/cm² the layer is completely crosslinked, which makes it more insoluble than the unexposed initial layer. If negative PMMA structures are to be used, a stronger developer than for the positive process is required. Such large dose increases may be required to avoid shot noise effects. A study performed at the Naval Research Laboratory indicated that low-energy (10–50 eV) electrons were able to damage ~30 nm thick PMMA films. The damage was manifest as a loss of material. For the popular electron-beam resist ZEP-520, a pitch resolution limit of 60 nm (30 nm lines and spaces), independent of thickness and beam energy, was found. A 20 nm resolution had also been demonstrated using a 3 nm 100 keV electron beam and PMMA resist. 20 nm unexposed gaps between exposed lines showed inadvertent exposure by secondary electrons. Hydrogen silsesquioxane (HSQ) is a negative-tone resist that is capable of forming isolated 2-nm-wide lines and 10 nm periodic dot arrays (10 nm pitch) in very thin layers. HSQ itself is similar to porous, hydrogenated SiO2. It may be used to etch silicon but not silicon dioxide or other similar dielectrics.
In 2018, a thiol-ene resist was developed that features native reactive surface groups, which allows the direct functionalization of the resist surface with biomolecules. New frontiers To get around the secondary electron generation, it will be imperative to use low-energy electrons as the primary radiation to expose resist. Ideally, these electrons should have energies on the order of not much more than several eV in order to expose the resist without generating any secondary electrons, since they will not have sufficient excess energy. Such exposure has been demonstrated using a scanning tunneling microscope as the electron beam source. The data suggest that electrons with energies as low as 12 eV can penetrate 50 nm thick polymer resist. The drawback to using low energy electrons is that it is hard to prevent spreading of the electron beam in the resist. Low energy electron optical systems are also hard to design for high resolution. Coulomb inter-electron repulsion always becomes more severe for lower electron energy. Another alternative in electron-beam lithography is to use extremely high electron energies (at least 100 keV) to essentially "drill" or sputter the material. This phenomenon has been observed frequently in transmission electron microscopy. However, this is a very inefficient process, due to the inefficient transfer of momentum from the electron beam to the material. As a result, it is a slow process, requiring much longer exposure times than conventional electron beam lithography. Also high energy beams always bring up the concern of substrate damage. Interference lithography using electron beams is another possible path for patterning arrays with nanometer-scale periods. A key advantage of using electrons over photons in interferometry is the much shorter wavelength for the same energy. Despite the various intricacies and subtleties of electron beam lithography at different energies, it remains the most practical way to concentrate the most energy into the smallest area. There has been significant interest in the development of multiple electron beam approaches to lithography in order to increase throughput. This work has been supported by SEMATECH and start-up companies such as Multibeam Corporation, Mapper and IMS. IMS Nanofabrication has commercialized the multibeam-maskwriter and started a rollout in 2016. See also Electron beam technology Ion beam lithography Maskless lithography Photolithography References Lithography (microfabrication) Electron beam
Electron-beam lithography
[ "Chemistry", "Materials_science" ]
4,091
[ "Electron", "Microtechnology", "Electron beam", "Nanotechnology", "Lithography (microfabrication)" ]
1,095,210
https://en.wikipedia.org/wiki/Boron%20carbide
Boron carbide (chemical formula approximately B4C) is an extremely hard boron–carbon ceramic, a covalent material used in tank armor, bulletproof vests, engine sabotage powders, and numerous industrial applications. With a Vickers hardness of >30 GPa, it is one of the hardest known materials, behind cubic boron nitride and diamond. History Boron carbide was discovered in the 19th century as a by-product of reactions involving metal borides, but its chemical formula was unknown. It was not until the 1930s that the chemical composition was estimated as B4C. Controversy remained as to whether or not the material had this exact 4:1 stoichiometry, as, in practice, the material is always slightly carbon-deficient with regard to this formula, and X-ray crystallography shows that its structure is highly complex, with a mixture of C-B-C chains and B12 icosahedra. These features argued against a very simple exact B4C empirical formula. Because of the B12 structural unit, the chemical formula of "ideal" boron carbide is often written not as B4C, but as B12C3, and the carbon deficiency of boron carbide described in terms of a combination of the B12C3 and B12CBC units. Crystal structure Boron carbide has a complex crystal structure typical of icosahedron-based borides. There, B12 icosahedra form a rhombohedral lattice unit (space group: R3̄m (No. 166), lattice constants: a = 0.56 nm and c = 1.212 nm) surrounding a C-B-C chain that resides at the center of the unit cell, and both carbon atoms bridge the neighboring three icosahedra. This structure is layered: the B12 icosahedra and bridging carbons form a network plane that spreads parallel to the c-plane and stacks along the c-axis. The lattice has two basic structure units – the B12 icosahedron and the B6 octahedron. Because of the small size of the B6 octahedra, they cannot interconnect. Instead, they bond to the B12 icosahedra in the neighboring layer, and this decreases bonding strength in the c-plane. Because of the B12 structural unit, the chemical formula of "ideal" boron carbide is often written not as B4C, but as B12C3, and the carbon deficiency of boron carbide described in terms of a combination of the B12C3 and B12C2 units. Some studies indicate the possibility of incorporation of one or more carbon atoms into the boron icosahedra, giving rise to formulas such as (B11C)CBC = B4C at the carbon-heavy end of the stoichiometry, but formulas such as B12(CBB) = B14C at the boron-rich end. "Boron carbide" is thus not a single compound, but a family of compounds of different compositions. A common intermediate, which approximates a commonly found ratio of elements, is B12(CBC) = B6.5C. Quantum mechanical calculations have demonstrated that configurational disorder between boron and carbon atoms on the different positions in the crystal determines several of the material's properties – in particular, the crystal symmetry of the B4C composition and the non-metallic electrical character of the B13C2 composition. Properties Boron carbide is known as a robust material having extremely high hardness (about 9.5 up to 9.75 on the Mohs hardness scale), a high cross section for absorption of neutrons (i.e. good shielding properties against neutrons), and stability to ionizing radiation and most chemicals. Its Vickers hardness (38 GPa), elastic modulus (460 GPa) and fracture toughness (3.5 MPa·m^1/2) approach the corresponding values for diamond (1150 GPa and 5.3 MPa·m^1/2).
Boron carbide is the third-hardest substance known, after diamond and cubic boron nitride, earning it the nickname "black diamond". Semiconductor properties Boron carbide is a semiconductor, with electronic properties dominated by hopping-type transport. The energy band gap depends on composition as well as the degree of order. The band gap is estimated at 2.09 eV, with multiple mid-bandgap states which complicate the photoluminescence spectrum. The material is typically p-type. Preparation Boron carbide was first synthesized by Henri Moissan in 1899, by reduction of boron trioxide with either carbon, or magnesium in the presence of carbon, in an electric arc furnace. In the case of carbon, the reaction occurs at temperatures above the melting point of B4C and is accompanied by the liberation of a large amount of carbon monoxide: 2 B2O3 + 7 C → B4C + 6 CO If magnesium is used, the reaction can be carried out in a graphite crucible, and the magnesium byproducts are removed by treatment with acid. Applications Boron carbide's exceptional hardness makes it useful in the following applications: Padlocks Personal and vehicle ballistic armor plating Grit blasting nozzles High-pressure water jet cutter nozzles Scratch and wear resistant coatings Cutting tools and dies Abrasives Metal matrix composites In brake linings of vehicles Boron carbide's other properties also make it suitable for: Neutron absorber in nuclear reactors (see below) High-energy fuel for solid-fuel ramjets Nuclear applications The ability of boron carbide to absorb neutrons without forming long-lived radionuclides makes it attractive as an absorbent for neutron radiation arising in nuclear power plants and from anti-personnel neutron bombs. Nuclear applications of boron carbide include shielding and control rods. Boron carbide filaments Boron carbide filaments show promise as reinforcement elements in resin and metal composites, owing to their exceptional strength, high elastic modulus, and low density. In addition, boron carbide filaments are not adversely affected by radiation, owing to the material's ability to absorb neutrons. They are also less harmful than filaments made of other materials, such as cadmium. See also List of compounds with carbon number 1 References Bibliography External links National Pollutant Inventory – Boron and compounds NIST Chemistry Database Entry for Boron Carbide Carbides Boron compounds Superhard materials Neutron poisons
Boron carbide
[ "Physics" ]
1,365
[ "Materials", "Superhard materials", "Matter" ]
1,095,275
https://en.wikipedia.org/wiki/Cyanogen%20chloride
Cyanogen chloride is a highly toxic chemical compound with the formula CNCl. This linear, triatomic pseudohalogen is an easily condensed colorless gas. More commonly encountered in the laboratory is the related compound cyanogen bromide, a room-temperature solid that is widely used in biochemical analysis and preparation. Synthesis, basic properties, structure Cyanogen chloride is a molecule with the connectivity Cl−C≡N. Carbon and chlorine are linked by a single bond, and carbon and nitrogen by a triple bond. It is a linear molecule, as are the related cyanogen halides (NCF, NCBr, NCI). Cyanogen chloride is produced by the oxidation of sodium cyanide with chlorine. This reaction proceeds via the intermediate cyanogen ((CN)2). NaCN + Cl2 → ClCN + NaCl The compound trimerizes in the presence of acid to the heterocycle called cyanuric chloride. Cyanogen chloride is slowly hydrolyzed by water at neutral pH to release cyanate and chloride ions: ClCN + H2O → NCO− + Cl− + 2 H+ Applications in synthesis Cyanogen chloride is a precursor to the sulfonyl cyanides and chlorosulfonyl isocyanate, a useful reagent in organic synthesis. Further chlorination gives the isocyanide dichloride. Safety Also known as CK, cyanogen chloride is a highly toxic blood agent, and was once proposed for use in chemical warfare. It causes immediate injury upon contact with the eyes or respiratory organs. Symptoms of exposure may include drowsiness, rhinorrhea (runny nose), sore throat, coughing, confusion, nausea, vomiting, edema, loss of consciousness, convulsions, paralysis, and death. It is especially dangerous because it is capable of penetrating the filters in gas masks, according to United States analysts. CK is unstable, tending to polymerize, sometimes with explosive violence. Chemical weapon Cyanogen chloride is listed in schedule 3 of the Chemical Weapons Convention: all production must be reported to the OPCW. By 1945, the U.S. Army's Chemical Warfare Service developed chemical warfare rockets intended for the new M9 and M9A1 Bazookas. An M26 Gas Rocket was adapted to fire cyanogen chloride-filled warheads for these rocket launchers. As it was capable of penetrating the protective filter barriers in some gas masks, it was seen as an effective agent against Japanese forces (particularly those hiding in caves or bunkers), because their standard-issue gas masks lacked the barriers that would provide protection against cyanogen chloride. The US added the weapon to its arsenal, and considered using it, along with hydrogen cyanide, as part of Operation Downfall, the planned invasion of Japan, but President Harry Truman decided against it, instead using the atomic bombs developed by the secret Manhattan Project. The CK rocket was never deployed or issued to combat personnel. References External links Chlorine compounds Triatomic molecules Cyano compounds Nonmetal halides Blood agents Pseudohalogens
Cyanogen chloride
[ "Physics", "Chemistry" ]
642
[ "Pseudohalogens", "Inorganic compounds", "Chemical weapons", "Molecules", "Triatomic molecules", "Blood agents", "Matter" ]