Femur

The femur (plural: femurs or femora), or thigh bone, is the only bone in the thigh, the region of the lower limb between the hip and the knee. In many four-legged animals the femur is the upper bone of the hindleg.
The top of the femur fits into a socket in the pelvis called the hip joint, and the bottom of the femur connects to the shinbone (tibia) and kneecap (patella) to form the knee. In humans the femur is the largest and thickest bone in the body.
Structure
The femur is the only bone in the upper leg. The two femora converge medially toward the knees, where they articulate with the proximal ends of the tibiae. The angle at which the femora converge is an important factor in determining the femoral-tibial angle. In females, wider pelvic bones cause the femora to converge more than in males.
In the condition genu valgum (knock knee) the femurs converge so much that the knees touch. The opposite condition, genu varum (bow-leggedness), occurs when the femurs diverge. In the general population without these conditions, the femoral-tibial angle is about 175 degrees.
The femur is the largest and thickest bone in the human body. It is considered the strongest bone by some measures, though other studies suggest the temporal bone may be stronger. On average, the length of the femur accounts for 26.74% of a person's height, a ratio found in both men and women and across most ethnic groups with minimal variation. This ratio is useful in anthropology because it provides a reliable estimate of a person's height from an incomplete skeleton.
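This ratio lends itself to a simple calculation. Below is a minimal sketch of the arithmetic, assuming only the 26.74% figure quoted above; the function name is illustrative, and real forensic estimates use population- and sex-specific regression equations.

```python
def estimate_stature_cm(femur_length_cm: float) -> float:
    """Estimate standing height from femur length.

    Assumes the femur accounts for 26.74% of total height, as
    quoted above; actual forensic work uses regression equations
    calibrated to population and sex.
    """
    FEMUR_TO_HEIGHT_RATIO = 0.2674
    return femur_length_cm / FEMUR_TO_HEIGHT_RATIO

# A 48 cm femur suggests a stature of roughly 179.5 cm.
print(round(estimate_stature_cm(48.0), 1))
```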
The femur is classified as a long bone, consisting of diaphysis (shaft or body) and two epiphyses (extremities) that articulate with the hip and knee bones.
Upper part
The upper or proximal extremity (close to the torso) contains the head, neck, the two trochanters and adjacent structures. It is the thinner of the two femoral extremities; the lower extremity is the thicker.
The head of the femur, which articulates with the acetabulum of the pelvic bone, comprises two-thirds of a sphere. It has a small groove, or fovea, connected through the round ligament to the sides of the acetabular notch. The head of the femur is connected to the shaft through the neck, or collum. The neck is 4–5 cm long, and its diameter is smallest front to back and compressed at its middle. The collum forms an angle with the shaft of about 130 degrees. This angle is highly variable: in the infant it is about 150 degrees, and in old age it is reduced to about 120 degrees on average. An abnormal increase in the angle is known as coxa valga, and an abnormal reduction is called coxa vara. Both the head and the neck of the femur are deeply embedded in the hip musculature and cannot be directly palpated. In slim people with the thigh rotated laterally, the head of the femur can be felt as a deep resistance, deep to the femoral artery.
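Since coxa valga and coxa vara are defined by deviations of this angle, the classification reduces to a threshold check. The sketch below assumes cutoffs of 120 and 135 degrees around the normal adult value of about 130 degrees; the exact clinical thresholds are an assumption here and vary between sources.

```python
def classify_neck_shaft_angle(angle_deg: float) -> str:
    """Classify the adult femoral neck-shaft (collum) angle.

    Normal is about 130 degrees in adults; the cutoffs of 120 and
    135 degrees are assumptions and vary between sources.
    """
    if angle_deg < 120:
        return "coxa vara (abnormally decreased angle)"
    if angle_deg > 135:
        return "coxa valga (abnormally increased angle)"
    return "normal"

print(classify_neck_shaft_angle(130))  # typical adult angle -> "normal"
print(classify_neck_shaft_angle(115))  # -> "coxa vara (abnormally decreased angle)"
```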
The transition area between the head and neck is quite rough due to the attachment of muscles and of the hip joint capsule. Here the two trochanters, the greater and the lesser, are found. The greater trochanter is almost box-shaped and is the most lateral prominence of the femur. The highest point of the greater trochanter is located higher than the collum and reaches the midpoint of the hip joint. The greater trochanter can easily be felt. The trochanteric fossa is a deep depression on the medial surface of the greater trochanter, bounded posteriorly by the intertrochanteric crest.
The lesser trochanter is a cone-shaped extension of the lowest part of the femur neck. The two trochanters are joined by the intertrochanteric crest on the back side and by the intertrochanteric line on the front.
A slight ridge is sometimes seen commencing about the middle of the intertrochanteric crest and reaching vertically downward for about 5 cm along the back part of the body: it is called the linea quadrata (or quadrate line).
The quadrate tubercle is located at about the junction of the upper one-third and lower two-thirds of the intertrochanteric crest. The size of the tubercle varies; it is not always located on the intertrochanteric crest, and adjacent areas, such as the posterior surface of the greater trochanter or the neck of the femur, can also form part of it. In a small anatomical study, the epiphyseal line was shown to pass directly through the quadrate tubercle.
Body
The body of the femur (or shaft) is large, thick and almost cylindrical in form. It is a little broader above than at the center, and broadest and somewhat flattened from front to back below. It is slightly arched, so as to be convex in front and concave behind, where it is strengthened by a prominent longitudinal ridge, the linea aspera, which diverges proximally and distally as the medial and lateral ridges. Proximally, the lateral ridge of the linea aspera becomes the gluteal tuberosity, while the medial ridge continues as the pectineal line. Besides the linea aspera, the shaft has two other borders: a lateral and a medial border. These three borders separate the shaft into three surfaces: one anterior, one medial and one lateral. Due to the vast musculature of the thigh, the shaft cannot be palpated.
The third trochanter is a bony projection occasionally present on the proximal femur near the superior border of the gluteal tuberosity. When present, it is oblong, rounded, or conical in shape and sometimes continuous with the gluteal ridge. A structure of minor importance in humans, the third trochanter has a reported incidence varying from 17% to 72% between ethnic groups, and it is frequently reported as more common in females than in males.
Lower part
The lower or distal extremity of the femur is the thicker of the two femoral extremities. It is somewhat cuboid in form, but its transverse diameter is greater than its antero-posterior (front-to-back) diameter. It consists of two oblong eminences known as the condyles.
Anteriorly, the condyles are slightly prominent and are separated by a smooth shallow articular depression called the patellar surface. Posteriorly, they project considerably, and a deep notch, the intercondylar fossa of the femur, is present between them. The lateral condyle is the more prominent and is the broader both in its antero-posterior and transverse diameters. The medial condyle is the longer and, when the femur is held with its body perpendicular, projects to a lower level. When, however, the femur is in its natural oblique position, the lower surfaces of the two condyles lie practically in the same horizontal plane. The condyles are not quite parallel with one another; the long axis of the lateral is almost directly antero-posterior, but that of the medial runs backward and medialward. Their opposed surfaces are small, rough, and concave, and form the walls of the intercondyloid fossa. This fossa is limited above by a ridge, the intercondyloid line, and below by the central part of the posterior margin of the patellar surface. The posterior cruciate ligament of the knee joint is attached to the lower and front part of the medial wall of the fossa and the anterior cruciate ligament to an impression on the upper and back part of its lateral wall.
The articular surface of the lower end of the femur occupies the anterior, inferior, and posterior surfaces of the condyles. Its front part is named the patellar surface and articulates with the patella; it presents a median groove which extends downward to the intercondyloid fossa and two convexities, the lateral of which is broader, more prominent, and extends farther upward than the medial.
Each condyle is surmounted by an elevation, the epicondyle. The medial epicondyle is a large convex eminence to which the tibial collateral ligament of the knee-joint is attached. At its upper part is the adductor tubercle, and behind it is a rough impression which gives origin to the medial head of the gastrocnemius. The lateral epicondyle, which is smaller and less prominent than the medial, gives attachment to the fibular collateral ligament of the knee-joint.
Development
The femur develops from the limb buds as a result of interactions between the ectoderm and the underlying mesoderm; formation occurs roughly around the fourth week of development.
By the sixth week of development, the first hyaline cartilage model of the femur is formed by chondrocytes. Endochondral ossification begins by the end of the embryonic period and primary ossification centers are present in all long bones of the limbs, including the femur, by the 12th week of development. The hindlimb development lags behind forelimb development by 1–2 days.
Function
As the femur is the only bone in the thigh, it serves as an attachment point for all the muscles that exert their force over the hip and knee joints. Some biarticular muscles – which cross two joints, like the gastrocnemius and plantaris muscles – also originate from the femur. In all, 23 individual muscles either originate from or insert onto the femur.
In cross-section, the thigh is divided into three separate fascial compartments, each containing muscles. These compartments use the femur as an axis and are separated by tough connective tissue membranes (septa). Each compartment has its own blood and nerve supply and contains a different group of muscles. The compartments are named the anterior, medial and posterior fascial compartments.
Clinical significance
Fractures
A femoral fracture that involves the femoral head, femoral neck or the shaft of the femur immediately below the lesser trochanter may be classified as a hip fracture, especially when associated with osteoporosis. Femur fractures can be managed in a pre-hospital setting with the use of a traction splint.
Other animals
In primitive tetrapods, the main points of muscle attachment along the femur are the internal trochanter and third trochanter, and a ridge along the ventral surface of the femoral shaft referred to as the adductor crest. The neck of the femur is generally minimal or absent in the most primitive forms, reflecting a simple attachment to the acetabulum. The greater trochanter was present in the extinct archosaurs, as well as in modern birds and mammals, being associated with the loss of the primitive sprawling gait. The lesser trochanter is a unique development of mammals, which lack both the internal and fourth trochanters. The adductor crest is also often absent in mammals or alternatively reduced to a series of creases along the surface of the bone. Structures analogous to the third trochanter are present in mammals, including some primates.
Some species of whales, snakes, and other non-walking vertebrates have vestigial femurs. In some snakes, the protruding end of a pelvic spur, a vestigial pelvis and femur remnant not connected to the rest of the skeleton, plays a role in mating. A similar role in mating is hypothesized for Basilosauridae, an extinct family of whales with well-defined femurs, lower legs and feet. Occasionally, the genes that code for longer extremities cause a modern whale to develop miniature legs (atavism).
One of the earliest known vertebrates to have a femur is Eusthenopteron, a prehistoric lobe-finned fish from the Late Devonian period.
Viral metagenomics
A recent study has revealed that bone is a significantly richer source of persistent DNA viruses than previously thought. In addition to Parvovirus B19 and Hepatitis B virus, ten other viruses were discovered, including several members of the herpesvirus and polyomavirus families, as well as human papillomavirus type 31 and torque teno virus.
Invertebrates
In invertebrate zoology the name femur appears in arthropodology. The usage is not homologous with that of vertebrate anatomy; the term has simply been adopted by analogy and refers, where applicable, to the most proximal of the (usually) two longest jointed segments of the legs of the Arthropoda. The two basal segments preceding the femur are the coxa and trochanter. This convention is not followed in carcinology, but it applies in arachnology and entomology. In myriapodology another segment, the prefemur, connects the trochanter and femur.
Knee

In humans and other primates, the knee joins the thigh with the leg and consists of two joints: one between the femur and tibia (the tibiofemoral joint), and one between the femur and patella (the patellofemoral joint). It is the largest joint in the human body. The knee is a modified hinge joint, which permits flexion and extension as well as slight internal and external rotation. The knee is vulnerable to injury and to the development of osteoarthritis.
It is often termed a compound joint having tibiofemoral and patellofemoral components. (The fibular collateral ligament is often considered with tibiofemoral components.)
Structure
The knee is a modified hinge joint, a type of synovial joint, which is composed of three functional compartments: the patellofemoral articulation, consisting of the patella, or "kneecap", and the patellar groove on the front of the femur through which it slides; and the medial and lateral tibiofemoral articulations linking the femur, or thigh bone, with the tibia, the main bone of the lower leg. The joint is bathed in synovial fluid which is contained inside the synovial membrane called the joint capsule. The posterolateral corner of the knee is an area that has recently been the subject of renewed scrutiny and research.
The knee is the largest joint and one of the most important joints in the body. It plays an essential role in movement related to carrying the body weight in horizontal (running and walking) and vertical (jumping) directions.
At birth the kneecap is formed only from cartilage, which will ossify (change to bone) between the ages of three and five years. Because it is the largest sesamoid bone in the human body, the ossification process takes significantly longer.
Articular bodies
The main articular bodies of the femur are its lateral and medial condyles. These diverge slightly distally and posteriorly, with the lateral condyle being wider in front than at the back while the medial condyle is of more constant width. The radius of the condyles' curvature in the sagittal plane becomes smaller toward the back. This diminishing radius produces a series of involute midpoints (i.e. located on a spiral). The resulting series of transverse axes permit the sliding and rolling motion in the flexing knee while ensuring the collateral ligaments are sufficiently lax to permit the rotation associated with the curvature of the medial condyle about a vertical axis.
The pair of tibial condyles are separated by the intercondylar eminence composed of a lateral and a medial tubercle.
The patella also serves as an articular body, and its posterior surface is referred to as the trochlea of the knee. The patella is inserted into the thin anterior wall of the joint capsule. On its posterior surface is a lateral and a medial articular surface, both of which communicate with the patellar surface, which unites the two femoral condyles on the anterior side of the bone's distal end.
Articular capsule
The articular capsule has a synovial and a fibrous membrane separated by fatty deposits. Anteriorly, the synovial membrane is attached on the margin of the cartilage both on the femur and the tibia, but on the femur, it communicates with the suprapatellar bursa or recess and extends the joint space proximally. The suprapatellar bursa is prevented from being pinched during extension by the articularis genus muscle. Behind, the synovial membrane is attached to the margins of the two femoral condyles, which produces two extensions (the semimembranosus bursa under the medial head of the gastrocnemius and the popliteal bursa under the lateral head of the gastrocnemius) similar to the suprapatellar bursa. Between these two extensions, the synovial membrane passes in front of the two cruciate ligaments at the center of the joint, thus forming a pocket directed inward.
The synovium lines the capsule and its bursae. It also lines the infrapatellar fat pad, the fat pad that lies below the patellar ligament (ligamentum patellae), projecting into the fat pad as two folds.
Nerves
From an anterior perspective, the superolateral quadrant of the knee is innervated by the nerves to the vastus lateralis and vastus intermedius, the sciatic nerve, and by the superior lateral genicular and common fibular nerves; in the inferolateral quadrant, the inferior lateral genicular nerve and recurrent fibular nerves predominate; the superomedial quadrant is innervated by the nerves to the vastus medialis and vastus intermedius, the obturator and sciatic nerves, and by the superior medial genicular nerve; and the inferomedial quadrant has innervation by the inferior medial genicular nerve and the infrapatellar branch of the saphenous nerve.
The articular branches from the obturator and tibial nerves supply the posterior knee capsule, with additional supply from the common fibular nerve and sciatic nerve; the tibial nerve innervates the entire posterior capsule; the posterior division of the obturator nerve and the tibial nerve supply the superomedial aspect of the posterior capsule; the superolateral aspect of the posterior capsule is innervated by the tibial nerve, and by the common fibular and sciatic nerves.
Bursae
Numerous bursae surround the knee joint. The largest communicative bursa is the suprapatellar bursa described above. Four considerably smaller bursae are located on the back of the knee. Two non-communicative bursae are located in front of the patella and below the patellar tendon, and others are sometimes present.
Cartilage
Cartilage is a thin, elastic tissue that protects the bone and makes certain that the joint surfaces can slide easily over each other. Cartilage ensures supple knee movement. There are two types of joint cartilage in the knees: fibrous cartilage (the meniscus) and hyaline cartilage. Fibrous cartilage has tensile strength and can resist pressure. Hyaline cartilage covers the surface along which the joints move. Collagen fibres within the articular cartilage have been described by Benninghoff as arising from the subchondral bone in a radial manner, forming so-called Gothic arches. On the surface of the cartilage, these fibres take a tangential orientation and increase the abrasion resistance. There are no blood vessels inside hyaline cartilage; its nutrition occurs by diffusion. Synovial fluid and the subchondral bone marrow both serve as nutrition sources for the hyaline cartilage, and loss of either source induces degeneration. Cartilage wears over the years and has a very limited capacity for self-restoration: the newly formed tissue generally consists largely of fibrous cartilage of lesser quality than the original hyaline cartilage, so new cracks and tears form in the cartilage over time.
Menisci
The articular disks of the knee-joint are called menisci because they only partly divide the joint space. These two disks, the medial meniscus and the lateral meniscus, consist of connective tissue with extensive collagen fibers containing cartilage-like cells. Strong fibers run along the menisci from one attachment to the other, while weaker radial fibers are interlaced with the former. The menisci are flattened at the center of the knee joint, fused with the synovial membrane laterally, and can move over the tibial surface. The upper and lower surfaces of the menisci are free. Each meniscus has anterior and posterior horns that meet in the intercondylar area of the tibia.
The medial meniscus is larger, less curved, and thinner. Its posterior horn is thicker (14 mm) than its anterior horn (6 mm).
The lateral meniscus is smaller, more curved (nearly circular), and of more uniform thickness (10 mm) than the medial meniscus. The lateral meniscus is less firmly attached to the joint capsule, because its posterolateral surface is grooved by the popliteus tendon, which separates the meniscus from the capsule. The popliteus tendon is not attached to the lateral meniscus.
Ligaments
The ligaments surrounding the knee joint offer stability by limiting movements and, together with the menisci and several bursae, protect the articular capsule.
Intracapsular
The knee is stabilized by a pair of cruciate ligaments, both of which are extrasynovial, intracapsular ligaments. The anterior cruciate ligament (ACL) stretches from the lateral condyle of the femur to the anterior intercondylar area. The ACL prevents the tibia from being pushed too far anteriorly relative to the femur. It is often torn during twisting or bending of the knee. The posterior cruciate ligament (PCL) stretches from the medial condyle of the femur to the posterior intercondylar area. This ligament prevents posterior displacement of the tibia relative to the femur. Injury to this ligament is uncommon but can occur as a direct result of forced trauma to the ligament.
The transverse ligament stretches from the lateral meniscus to the medial meniscus, passing in front of the menisci; it is divided into several strips in 10% of cases. The two menisci are thus attached to each other anteriorly. The posterior meniscofemoral ligament (of Wrisberg) and the anterior meniscofemoral ligament (of Humphrey) stretch from the posterior horn of the lateral meniscus to the medial femoral condyle, passing posterior and anterior to the posterior cruciate ligament, respectively. The meniscotibial (or "coronary") ligaments stretch from the inferior edges of the menisci to the periphery of the tibial plateaus.
Extracapsular
The patellar ligament connects the patella to the tuberosity of the tibia. It is also occasionally called the patellar tendon because there is no definite separation between the quadriceps tendon (which surrounds the patella) and the area connecting the patella to the tibia. This very strong ligament helps give the patella its mechanical leverage and also functions as a cap for the condyles of the femur. Lateral and medial to the patellar ligament, the lateral and medial retinacula connect fibers from the vastus lateralis and vastus medialis muscles to the tibia. Some fibers from the iliotibial tract radiate into the lateral retinaculum, and the medial retinaculum receives some transverse fibers arising on the medial femoral epicondyle.
The medial collateral ligament (MCL a.k.a. "tibial") stretches from the medial epicondyle of the femur to the medial tibial condyle. It is composed of three groups of fibers, one stretching between the two bones, and two fused with the medial meniscus. The MCL is partly covered by the pes anserinus and the tendon of the semimembranosus passes under it. It protects the medial side of the knee from being bent open by a stress applied to the lateral side of the knee (a valgus force).
The lateral collateral ligament (LCL, a.k.a. "fibular") stretches from the lateral epicondyle of the femur to the head of the fibula. It is separate from both the joint capsule and the lateral meniscus. It protects the lateral side from an inside bending force (a varus force). The anterolateral ligament (ALL) is situated in front of the LCL.
Lastly, there are two ligaments on the dorsal side of the knee. The oblique popliteal ligament is a radiation of the tendon of the semimembranosus on the medial side, from where it is directed laterally and proximally. The arcuate popliteal ligament originates on the apex of the head of the fibula, stretches proximally, crosses the tendon of the popliteus muscle, and passes into the capsule.
Muscles
Most of the muscles responsible for movement of the knee joint belong to either the anterior, medial or posterior compartment of the thigh. The extensors generally belong to the anterior compartment and the flexors to the posterior. The two exceptions to this are the gracilis, a flexor, which belongs to the medial compartment, and the sartorius, a flexor, in the anterior compartment. Additionally, some muscles in the lower leg, namely the gastrocnemius, provide weak knee flexion in addition to their primary function of moving the foot.
Blood supply
The femoral artery and the popliteal artery help form the arterial network, or plexus, surrounding the knee joint. There are six main branches: the two superior genicular arteries, the two inferior genicular arteries, the descending genicular artery, and the recurrent branch of the anterior tibial artery.
The medial genicular arteries penetrate the knee joint.
Function
The knee permits flexion and extension about a virtual transverse axis, as well as a slight medial and lateral rotation about the axis of the lower leg in the flexed position. The knee joint is called "mobile" because the femur and lateral meniscus move over the tibia during rotation, while the femur rolls and glides over both menisci during extension-flexion.
The center of the transverse axis of the extension/flexion movements is located where both collateral ligaments and both cruciate ligaments intersect. This center moves upward and backward during flexion, while the distance between the center and the articular surfaces of the femur changes dynamically with the decreasing curvature of the femoral condyles. The total range of motion is dependent on several parameters such as soft-tissue restraints, active insufficiency, and hamstring tightness.
Extended position
With the knee extended, both the lateral and medial collateral ligaments, as well as the anterior part of the anterior cruciate ligament, are taut. During extension, the femoral condyles glide and roll into a position which causes the complete unfolding of the tibial collateral ligament. During the last 10° of extension, an obligatory terminal rotation is triggered in which the knee is rotated medially 5°. The final rotation is produced by a lateral rotation of the tibia in the non-weight-bearing leg, and by a medial rotation of the femur in the weight-bearing leg. This terminal rotation is made possible by the shape of the medial femoral condyle, assisted by contraction of the popliteus muscle and the iliotibial tract and is caused by the stretching of the anterior cruciate ligament. Both cruciate ligaments are slightly unwound and both lateral ligaments become taut.
Flexed position
In the flexed position, the collateral ligaments are relaxed while the cruciate ligaments are taut. Rotation is controlled by the twisted cruciate ligaments; the two ligaments get twisted around each other during medial rotation of the tibia—which reduces the amount of rotation possible—while they become unwound during lateral rotation of the tibia. Because of the oblique position of the cruciate ligaments, at least a part of one of them is always tense and these ligaments control the joint as the collateral ligaments are relaxed. Furthermore, the dorsal fibers of the tibial collateral ligament become tensed during extreme medial rotation and the ligament also reduces the lateral rotation to 45–60°.
Clinical significance
Knee pain is caused by trauma, misalignment, degeneration, and conditions producing arthritis. The most common knee disorder is generally known as patellofemoral syndrome. The majority of minor cases of knee pain can be treated at home with rest and ice, but more serious injuries do require surgical care.
One form of patellofemoral syndrome involves a tissue-related problem that creates pressure and irritation in the knee between the patella and the trochlea (patellar compression syndrome), which causes pain. The second major class of knee disorder involves a tear, slippage, or dislocation that impairs the structural ability of the knee to balance the leg (patellofemoral instability syndrome). Patellofemoral instability syndrome may cause either pain, a sense of poor balance, or both.
Prepatellar bursitis, also known as housemaid's knee, is a painful inflammation of the prepatellar bursa (a frontal knee bursa), often brought about by occupational activity such as roofing.
Age also contributes to disorders of the knee. Particularly in older people, knee pain frequently arises due to osteoarthritis. In addition, weakening of tissues around the knee may contribute to the problem. Patellofemoral instability may relate to hip abnormalities or to tightness of surrounding ligaments.
Cartilage lesions can be caused by:
Accidents (fractures)
Injuries
The removal of a meniscus
Anterior cruciate ligament injury
Posterior cruciate ligament injury
Posterolateral corner injury
Medial knee injuries
Considerable strain on the knee.
Any kind of work during which the knees undergo heavy stress may also be detrimental to cartilage. This is especially the case in professions in which people frequently have to walk, lift, or squat. Other causes of pain may be excessive strain on, and wear of, the knees, in combination with such things as muscle weakness and excess weight.
Common complaints:
A painful, blocked, locked or swollen knee.
Sufferers sometimes feel as if their knees are about to give way, or may feel uncertain about their movement.
Overall fitness and knee injury
Physical fitness is related integrally to the development of knee problems. The same activity, such as climbing stairs, may cause pain from patellofemoral compression for someone who is physically unfit but not for someone else (or even for that person at a different time). Obesity is another major contributor to knee pain: for instance, a 30-year-old woman who has gained substantial weight since age 18 adds a correspondingly greater force across her patellofemoral joint with each step.
Common injuries due to physical activity
In sports that place great pressure on the knees, especially with twisting forces, it is common to tear one or more ligaments or cartilages. Some of the most common knee injuries are those to the medial side: medial knee injuries.
Anterior cruciate ligament injury
The anterior cruciate ligament is the most commonly injured ligament of the knee. The injury is common during sports. Twisting of the knee is a common cause of over-stretching or tearing the ACL. When the ACL is injured a popping sound may be heard, and the leg may suddenly give out. Besides swelling and pain, walking may be painful and the knee will feel unstable. Minor tears of the anterior cruciate ligament may heal over time, but a torn ACL requires surgery. After surgery, recovery is prolonged, and low-impact exercises are recommended to strengthen the joint.
Torn meniscus injury
The menisci act as shock absorbers and separate the two ends of bone in the knee joint. There are two menisci in the knee, the medial (inner) and the lateral (outer). When there is torn cartilage, it means that the meniscus has been injured. Meniscus tears often occur during sports when the knee is twisted. Meniscal injury may be innocuous, and one may be able to walk after a tear, but soon swelling and pain set in. Sometimes the knee will lock while bending. Pain often occurs when one squats. Small meniscus tears are treated conservatively, but most large tears require surgery.
Fractures
Knee fractures are rare but do occur, especially as a result of road accidents. Knee fractures include the patella fracture and a type of avulsion fracture called a Segond fracture. There is usually immediate pain and swelling, and difficulty or inability to stand on the leg. The muscles go into spasm, and even the slightest movements are painful. X-rays can easily confirm the injury, and surgery will depend on the degree of displacement and the type of fracture.
Ruptured tendon
Tendons attach muscle to bone. In the knee, the quadriceps tendon and the patellar tendon can sometimes tear. Injuries to these tendons occur when there is a forceful contraction of the knee. If the tendon is completely torn, bending or extending the leg is impossible. A completely torn tendon requires surgery, but a partially torn tendon can be treated with leg immobilization followed by physical therapy.
Overuse
Overuse injuries of the knee include tendonitis, bursitis, muscle strains, and iliotibial band syndrome. These injuries often develop slowly over weeks or months. Activities that induce pain usually delay healing. Rest, ice and compression do help in most cases. Once the swelling has diminished, heat packs can increase blood supply and promote healing. Most overuse injuries subside with time but can flare up if the activities are quickly resumed. Individuals may reduce the chances of overuse injuries by warming up prior to exercise, by limiting high-impact activities, and by keeping their weight under control.
Varus or valgus deformity
There are two disorders relating to an abnormal angle in the coronal plane at the level of the knee:
Genu valgum is a valgus deformity in which the tibia is turned outward in relation to the femur, resulting in a knock-kneed appearance.
Genu varum is a varus deformity in which the tibia is turned inward in relation to the femur, resulting in a bowlegged deformity.
The degree of varus or valgus deformity can be quantified by the hip-knee-ankle angle, the angle at the knee between the femoral mechanical axis (hip center to knee center) and the tibial mechanical axis (knee center to ankle center). It is normally between 1.0° and 1.5° of varus in adults. Normal ranges are different in children.
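In practice the hip-knee-ankle angle is measured from landmark points on a long-leg radiograph. Below is a minimal sketch of the underlying geometry, assuming 2-D (x, y) coordinates for the hip, knee, and ankle joint centers; the landmark inputs and sign convention are illustrative, not a clinical protocol.

```python
import math

def hka_deviation_deg(hip, knee, ankle):
    """Signed deviation (degrees) between the femoral axis
    (hip -> knee) and the tibial axis (knee -> ankle); 0 means
    a perfectly straight mechanical axis.
    """
    fx, fy = knee[0] - hip[0], knee[1] - hip[1]      # femoral axis vector
    tx, ty = ankle[0] - knee[0], ankle[1] - knee[1]  # tibial axis vector
    cross = fx * ty - fy * tx
    dot = fx * tx + fy * ty
    return math.degrees(math.atan2(cross, dot))

# Collinear landmarks give 0 degrees (a perfectly straight limb).
print(round(hka_deviation_deg((0, 0), (1, 40), (2, 80)), 2))
```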
Chronic pain from osteoarthritis
Knee osteoarthritis is a major cause of pain and disability worldwide, with prevalence estimated at about 4% of the population, particularly among the elderly. Radiofrequency ablation of certain knee nerves is an outpatient procedure to reduce chronic arthritic pain. Using radiofrequency energy delivered via small electrodes positioned at target genicular nerves, the treatment achieves partial sensory denervation of the joint capsule. Despite the extensive innervation of the knee, specifically targeting the superior lateral, superior medial, and inferior medial genicular nerves has proved to be an effective ablation method for reducing chronic knee pain. In clinical research, such treatment has been shown to produce about 50% less knee pain for up to two years after the procedure.
Surgical interventions
Before the advent of arthroscopy and arthroscopic surgery, patients having surgery for a torn ACL required at least nine months of rehabilitation, having initially spent several weeks in a full-length plaster cast. With current techniques, such patients may be walking without crutches in two weeks, and playing some sports in a few months.
In addition to developing new surgical procedures, ongoing research is looking into underlying problems which may increase the likelihood of an athlete suffering a severe knee injury. These findings may lead to effective preventive measures, especially in female athletes, who have been shown to be especially vulnerable to ACL tears from relatively minor trauma.
Articular cartilage repair treatment:
Arthroscopic debridement of the knee (arthroscopic lavage)
Mosaicplasty
Microfracture ("ice-picking")
Autologous chondrocyte implantation
Osteochondral autografts and allografts
Posterolateral corner (PLC) reconstruction
Imaging
MRI
Both the anterior cruciate ligament (ACL) and the posterior cruciate ligament (PCL) are hypointense on both T1- and T2-weighted MRI images. However, some high-signal striations are often seen at the distal part of the ACL, making the ACL appear higher in intensity than the PCL on MRI scans.
Other animals
In humans, the term "knee" refers to the joints between the femur, tibia, and patella, in the leg.
In quadrupeds such as dogs, horses, and mice, the homologous joints between the femur, tibia, and patella, in the hind leg, are known as the stifle joint. Also in quadrupeds, particularly horses, ungulates, and elephants, the layman's term "knee" also commonly refers to the forward-facing joint in the foreleg, the carpus, which is homologous to the human wrist.
In birds, the "knee" is the joint between the femur and tibiotarsus, and also the patella (when present). The layman's term "knee" may also refer to the (lower and often more visible due to not being covered by feathers) joint between the tibiotarsus and tarsometatarsus, which is homologous to the human ankle.
In insects and other animals, the term "knee" is widely used to refer to any hinge joint.
Human papillomavirus infection

Human papillomavirus infection (HPV infection) is caused by a DNA virus from the Papillomaviridae family. Many HPV infections cause no symptoms, and 90% resolve spontaneously within two years. In some cases, an HPV infection persists and results in either warts or precancerous lesions. These lesions, depending on the site affected, increase the risk of cancer of the cervix, vulva, vagina, penis, anus, mouth, tonsils, or throat. Nearly all cervical cancer is due to HPV, and two strains, HPV16 and HPV18, account for 70% of all cases. HPV16 is responsible for almost 90% of HPV-positive oropharyngeal cancers. Between 60% and 90% of the other cancers listed above are also linked to HPV. HPV6 and HPV11 are common causes of genital warts and laryngeal papillomatosis.
An HPV infection is caused by the human papillomavirus, a DNA virus from the papillomavirus family. Over 200 types have been described. An individual can become infected with more than one type of HPV, and the disease is only known to affect humans. More than 40 types may be spread through sexual contact and infect the anus and genitals. Risk factors for persistent infection by sexually transmitted types include early age of first sexual intercourse, multiple sexual partners, smoking, and poor immune function. These types are typically spread by sustained direct skin-to-skin contact, with vaginal and anal sex being the most common methods. HPV infection can also spread from a mother to her baby during pregnancy. There is no evidence that HPV can spread via common items like toilet seats, but the types that cause warts may spread via surfaces such as floors. HPV is not killed by common hand sanitizers and disinfectants, increasing the possibility of the virus being transferred via contaminated objects (fomites).
HPV vaccines can prevent the most common types of infection. To be most effective, inoculation should occur before the onset of sexual activity, and vaccination is therefore recommended between the ages of 9 and 13 years. Cervical cancer screening, such as the Papanicolaou test ("pap smear"), or examination of the cervix after applying acetic acid, can detect both early cancer and abnormal cells that may develop into cancer. Screening allows for early treatment, which results in better outcomes. Screening has reduced both the number of cases and the number of deaths from cervical cancer. Genital warts can be removed by freezing.
Nearly every sexually active individual is infected by HPV at some point in their lives. HPV is the most common sexually transmitted infection (STI), globally. High-risk HPVs cause about 5% of all cancers worldwide and about 37,300 cases of cancer in the United States each year. Cervical cancer is among the most common cancers worldwide, causing an estimated 604,000 new cases and 342,000 deaths in 2020. About 90% of these new cases and deaths of cervical cancer occurred in low- and middle-income countries. Roughly 1% of sexually active adults have genital warts. Cases of skin warts have been described since the time of ancient Greece, but it was not until 1907 that they were determined to be caused by a virus.
HPV types
HPV is a group of more than 200 related viruses, which are designated by a number for each virus type. Some HPV types, such as HPV5, may establish infections that persist for the lifetime of the individual without ever manifesting any clinical symptoms. HPV types 1 and 2 can cause common warts in some infected individuals. HPV types 6 and 11 can cause genital warts and laryngeal papillomatosis.
Many HPV types are carcinogenic. About twelve HPV types (including types 16, 18, 31, and 45) are called "high-risk" types because persistent infection has been linked to cancer of the oropharynx, larynx, vulva, vagina, cervix, penis, and anus. These cancers all involve sexually transmitted infection of HPV to the stratified epithelial tissue. HPV type 16 is the strain most likely to cause cancer and is present in about 47% of all cervical cancers, and in many vaginal and vulvar cancers, penile cancers, anal cancers, and cancers of the head and neck.
Available HPV vaccines protect against either two, four, or nine types of HPV. There are six prophylactic HPV vaccines licensed for use: the bivalent vaccines Cervarix, Cecolin, and Walrinvax; the quadrivalent vaccines Cervavax and Gardasil; and the nonavalent vaccine Gardasil 9. All HPV vaccines protect against at least HPV types 16 and 18, which cause the greatest risk of cervical cancer. The quadrivalent vaccines also protect against HPV types 6 and 11. The nonavalent vaccine Gardasil 9 provides protection against those four types (6, 11, 16, and 18), along with five other high-risk HPV types responsible for 20% of cervical cancers (types 31, 33, 45, 52, and 58).
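The coverage described above amounts to a small lookup table. Below is a minimal sketch encoding only the vaccine-to-type assignments stated in this section; the dictionary keys and function name are illustrative.

```python
# HPV types covered by each licensed vaccine class, as listed above.
VACCINE_COVERAGE = {
    "bivalent (Cervarix, Cecolin, Walrinvax)": {16, 18},
    "quadrivalent (Cervavax, Gardasil)": {6, 11, 16, 18},
    "nonavalent (Gardasil 9)": {6, 11, 16, 18, 31, 33, 45, 52, 58},
}

def vaccines_covering(hpv_type: int) -> list:
    """Return the vaccine classes whose coverage includes the given HPV type."""
    return [name for name, types in VACCINE_COVERAGE.items() if hpv_type in types]

print(vaccines_covering(18))  # covered by all three classes
print(vaccines_covering(31))  # covered only by the nonavalent vaccine
```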
Signs and symptoms
Warts
Skin infection ("cutaneous" infection) with HPV is very widespread.
Skin infections with HPV can cause noncancerous skin growths called warts (verrucae). Warts are caused by the rapid growth of cells on the outer layer of the skin.
While cases of warts have been described since the time of ancient Greece, their viral cause was not known until 1907.
Skin warts are most common in childhood and typically appear and regress spontaneously over weeks to months. Recurring skin warts are common. All HPVs are believed to be capable of establishing long-term "latent" infections in small numbers of stem cells present in the skin. Although these latent infections may never be fully eradicated, immunological control is thought to block the appearance of symptoms such as warts. Immunological control is HPV type-specific, meaning an individual may become resistant to one HPV type while remaining susceptible to other types.
Types of warts include:
Common warts are usually found on the hands and feet, but can also occur in other areas, such as the elbows or knees. Common warts have a characteristic cauliflower-like surface and are typically slightly raised above the surrounding skin. Cutaneous HPV types can cause genital warts but are not associated with the development of cancer.
Plantar warts are found on the soles of the feet; they grow inward, generally causing pain when walking.
Subungual or periungual warts form under the fingernail (subungual), around the fingernail, or on the cuticle (periungual). They are more difficult to treat than warts in other locations.
Flat warts are most commonly found on the arms, face, or forehead. Like common warts, flat warts occur most frequently in children and teens. In people with normal immune function, flat warts are not associated with the development of cancer.
Common, flat, and plantar warts are much less likely to spread from person to person.
Genital warts
HPV infection of the skin in the genital area is the most common sexually transmitted infection worldwide. Such infections are associated with genital or anal warts (medically known as condylomata acuminata or venereal warts), and these warts are the most easily recognized sign of genital HPV infection.
The strains of HPV that can cause genital warts are usually different from those that cause warts on other parts of the body, such as the hands or feet, or even the inner thighs. A wide variety of HPV types can cause genital warts, but types 6 and 11 together account for about 90% of all cases. However, in total more than 40 types of HPV are transmitted through sexual contact and can infect the skin of the anus and genitals. Such infections may cause genital warts, although they may also remain asymptomatic.
The great majority of genital HPV infections never cause any overt symptoms and are cleared by the immune system in a matter of months. Moreover, people may transmit the virus to others even if they do not display overt symptoms of infection. Most people acquire genital HPV infections at some point in their lives, and about 10% of women are currently infected. A large increase in the incidence of genital HPV infection occurs at the age when individuals begin to engage in sexual activity. As with cutaneous HPVs, immunity to genital HPV is believed to be specific to the particular strain of HPV.
Laryngeal papillomatosis
In addition to genital warts, infection by HPV types 6 and 11 can cause a rare condition known as recurrent laryngeal papillomatosis, in which warts form on the larynx or other areas of the respiratory tract.
These warts can recur frequently, may interfere with breathing, and in extremely rare cases can progress to cancer. For these reasons, repeated surgery to remove the warts may be advisable.
Cancer
Case statistics
Cervical cancer is among the most common cancers worldwide, causing an estimated 604,000 new cases and 342,000 deaths in 2020. About 90% of these new cases and deaths of cervical cancer occurred in low- and middle-income countries, where screening tests and treatment of early cervical cell changes are not readily available.
In the United States, about 37,300 cases of cancer due to HPV occur each year.
Cancer development
In some infected individuals, their immune systems may fail to control HPV. Lingering infection with high-risk HPV types, such as types 16, 18, 31, and 45, can favor the development of cancer.
Co-factors such as cigarette smoke can also enhance the risk of HPV-related cancers.
HPV is believed to cause cancer by integrating its genome into nuclear DNA. Some of the early genes expressed by HPV, such as E6 and E7, act as oncogenes that promote tumor growth and malignant transformation. HPV genome integration can also cause carcinogenesis by promoting genomic instability associated with alterations in DNA copy number.
E6 produces a protein (also called E6) that simultaneously binds two host cell proteins, p53 and E6-associated protein (E6-AP). E6-AP is an E3 ubiquitin ligase, an enzyme whose purpose is to tag proteins with a post-translational modification called ubiquitin. By binding both proteins, E6 induces E6-AP to attach a chain of ubiquitin molecules to p53, thereby flagging p53 for proteasomal degradation. Normally, p53 acts to prevent cell growth and promotes cell death in the presence of DNA damage. p53 also upregulates the p21 protein, which blocks the formation of the cyclin D/Cdk4 complex, thereby preventing the phosphorylation of the retinoblastoma protein (RB) and, in turn, halting cell cycle progression by preventing the activation of E2F. In short, p53 is a tumor-suppressor protein that arrests the cell cycle and prevents cell growth and survival when DNA damage occurs. Thus, the degradation of p53, induced by E6, promotes unregulated cell division, cell growth, and cell survival, all characteristics of cancer.
Notably, while the interaction between E6, E6-AP, and p53 was the first to be characterized, multiple other host cell proteins also interact with E6 and assist in the induction of cancer.
Squamous cell carcinoma of the skin
Studies have also shown a link between a wide range of HPV types and squamous cell carcinoma of the skin. In such cases, in vitro studies suggest that the E6 protein of the HPV virus may inhibit apoptosis induced by ultraviolet light.
Cervical cancer
Nearly all cases of cervical cancer are associated with HPV infection, with two types, HPV16 and HPV18, present in 70% of cases. In 2012, twelve HPV types were considered carcinogenic for cervical cancer by the International Agency for Research on Cancer: 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59. One study found that 74% of squamous cell carcinomas and 78% of adenocarcinomas tested positive for HPV types 16 or 18. Persistent HPV infection increases the risk for developing cervical carcinoma. Individuals who have an increased incidence of these types of infection are women with HIV/AIDS, who are at a 22-fold increased risk of cervical cancer.
The carcinogenic HPV types in cervical cancer belong to the alphapapillomavirus genus and can be grouped further into HPV clades. The two major carcinogenic HPV clades, alphapapillomavirus-9 (A9) and alphapapillomavirus-7 (A7), contain HPV16 and HPV18, respectively. These two HPV clades were shown to have different effects on tumour molecular characteristics and patient prognosis, with clade A7 being associated with more aggressive pathways and an inferior prognosis.
In 2020, about 604,000 new cases and 342,000 deaths from cervical cancer occurred worldwide. Around 90% of these occurred in the developing world.
Most HPV infections of the cervix are cleared rapidly by the immune system and do not progress to cervical cancer (see the Clearance subsection in Virology below). Because the process of transforming normal cervical cells into cancerous ones is slow, cancer occurs in people who have been infected with HPV for a long time, usually over a decade or more (persistent infection). Furthermore, both the HPV infection and cervical cancer drive metabolic modifications that may be correlated with the aberrant regulation of enzymes related to metabolic pathways.
Non-European (NE) HPV16 variants are significantly more carcinogenic than European (E) HPV16 variants.
Anal cancer
The risk of anal cancer is 17 to 31 times higher among HIV-positive individuals coinfected with high-risk HPV, and 80 times higher in particular for HIV-positive men who have sex with men.
Anal Pap smear screening for anal cancer might benefit some subpopulations of men or women engaging in anal sex. No consensus exists, however, on whether such screening is beneficial, or on who should get an anal Pap smear.
Penile cancer
HPV is associated with approximately 50% of penile cancers. In the United States, penile cancer accounts for about 0.5% of all cancer cases in men. HPV16 is the most commonly associated type detected. The risk of penile cancer increases 2- to 3-fold for individuals who are infected with HIV as well as HPV.
Head and neck cancers
Oral infection with high-risk carcinogenic HPV types (most commonly HPV 16) is associated with an increasing number of head and neck cancers. This association is independent of tobacco and alcohol use.
The percentage of head and neck cancers attributable to HPV varies widely by location, from 70% in the United States to 4% in Brazil. Engaging in anal or oral sex with an HPV-infected partner may increase the risk of developing these types of cancers.
In the United States, the number of newly diagnosed, HPV-associated head and neck cancers has surpassed that of cervical cancer cases. The rate of such cancers has increased from an estimated 0.8 cases per 100,000 people in 1988 to 4.5 per 100,000 in 2012, and, as of 2021, the rate has continued to increase. Researchers explain these recent data by an increase in oral sex. This type of cancer is more common in men than in women.
The mutational profile of HPV-positive and HPV-negative head and neck cancer has been reported, further demonstrating that they are fundamentally distinct diseases.
Lung cancer
Some evidence links HPV to benign and malignant tumors of the upper respiratory tract. The International Agency for Research on Cancer has found that people with lung cancer were significantly more likely to have several high-risk forms of HPV antibodies compared to those who did not have lung cancer. Researchers looking for HPV among 1,633 lung cancer patients and 2,729 people without the lung disease found that people with lung cancer had more types of HPV than noncancer patients did, and among lung cancer patients, the chances of having eight types of serious HPV were significantly increased. In addition, expression of HPV structural proteins by immunohistochemistry and in vitro studies suggest HPV presence in bronchial cancer and its precursor lesions. Another study detected HPV in the exhaled breath condensate (EBC), bronchial brushings, and neoplastic lung tissue of cases, and found a presence of an HPV infection in 16.4% of the subjects affected by non-small cell lung cancer, but in none of the controls. The reported average frequencies of HPV in lung cancers were 17% and 15% in Europe and the Americas, respectively, and the mean frequency of HPV in Asian lung cancer samples was 35.7%, with considerable heterogeneity between certain countries and regions.
Skin cancer
In very rare cases, HPV may cause epidermodysplasia verruciformis (EV) in individuals with a weakened immune system. The virus, unchecked by the immune system, causes the overproduction of keratin by skin cells, resulting in lesions resembling warts or cutaneous horns which can ultimately transform into skin cancer, but the development is not well understood. The specific types of HPV that are associated with EV are HPV5, HPV8, and HPV14.
Cause
Transmission
Sexually transmitted HPV is divided into two categories: low-risk and high-risk. Low-risk HPVs cause warts on or around the genitals. Types 6 and 11 cause 90% of all genital warts, as well as recurrent respiratory papillomatosis, which causes benign tumors in the air passages. High-risk HPVs cause cancer and consist of about twelve identified types. Types 16 and 18 are responsible for most HPV-caused cancers. These high-risk HPVs cause 5% of the cancers in the world. In the United States, high-risk HPVs cause 3% of all cancer cases in women and 2% in men.
Risk factors for persistent genital HPV infections, which increase the risk of developing cancer, include early age of first sexual intercourse, multiple partners, smoking, and immunosuppression. Genital HPV is spread by sustained direct skin-to-skin contact, with vaginal, anal, and oral sex being the most common methods. Occasionally, it can spread from manual sex or from a mother to her baby during pregnancy. HPV is difficult to remove via standard hospital disinfection techniques and may be transmitted in a healthcare setting on re-usable gynecological equipment, such as vaginal ultrasound transducers. The period of communicability is still unknown, but probably at least as long as visible HPV lesions persist. HPV may still be transmitted even after lesions are treated and no longer visible or present.
Perinatal
Although genital HPV types can be transmitted from mother to child during birth, the appearance of genital HPV-related diseases in newborns is rare. However, the lack of appearance does not rule out asymptomatic latent infection, as the virus has proven to be capable of hiding for decades. Perinatal transmission of HPV types 6 and 11 can result in the development of juvenile-onset recurrent respiratory papillomatosis (JORRP). JORRP is very rare, with rates of about 2 cases per 100,000 children in the United States. Although JORRP rates are substantially higher if a woman presents with genital warts at the time of giving birth, the risk of JORRP in such cases is still less than 1%.
Genital infections
Genital HPV infections are transmitted primarily by contact with the genitals, anus, or mouth of an infected sexual partner.
Of the 120 known human papillomaviruses, 51 species and three subtypes infect the genital mucosa. Fifteen are classified as high-risk types (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, and 82), three as probable high-risk (26, 53, and 66), and twelve as low-risk (6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and 89).
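These groupings can be expressed as a lookup from HPV type to risk class. Below is a minimal sketch built directly from the lists just given; the function name and fallback label are illustrative.

```python
# Risk classes for the genital-mucosa HPV types enumerated above.
HIGH_RISK = {16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, 82}
PROBABLE_HIGH_RISK = {26, 53, 66}
LOW_RISK = {6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, 89}

def risk_class(hpv_type: int) -> str:
    """Return the risk classification for a genital HPV type."""
    if hpv_type in HIGH_RISK:
        return "high-risk"
    if hpv_type in PROBABLE_HIGH_RISK:
        return "probable high-risk"
    if hpv_type in LOW_RISK:
        return "low-risk"
    return "not classified in this list"

print(risk_class(16))  # "high-risk"
print(risk_class(11))  # "low-risk"
```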
Condoms do not completely protect from the virus because the areas around the genitals including the inner thigh area are not covered, thus exposing these areas to the infected person's skin.
Hands
Studies have shown HPV transmission between the hands and genitals of the same person and of sexual partners. Hernandez tested the genitals and dominant hand of each person in 25 heterosexual couples every other month for an average of seven months. She found two couples where the man's genitals infected the woman's hand with high-risk HPV, two where her hand infected his genitals, one where her genitals infected his hand, and two each where he infected his own hand and she infected her own hand. Hands were not the main source of transmission in these 25 couples, but they were significant.
Partridge reports that men's fingertips became positive for high-risk HPV at more than half the rate (26% per two years) of their genitals (48%). Winer reports that 14% of fingertip samples from sexually active women were positive.
Non-sexual hand contact seems to have little or no role in HPV transmission. Winer found all fourteen fingertip samples from virgin women negative at the start of her fingertip study. In a separate report on genital HPV infection, 1% of virgin women (1 of 76) with no sexual contact tested positive for HPV, while 10% of virgin women reporting non-penetrative sexual contact were positive (7 of 72).
Shared objects
Sharing possibly contaminated objects, such as razors, may transmit HPV. Although possible, transmission by routes other than sexual intercourse is less common for female genital HPV infection. Finger–genital contact is a possible route of transmission but unlikely to be a significant source.
Blood
Though it has traditionally been assumed that HPV is not transmissible via blood – as it is thought to infect only cutaneous and mucosal tissues – recent studies have called this notion into question. Historically, HPV DNA has been detected in the blood of cervical cancer patients. In 2005, a group reported that, among frozen blood samples of 57 sexually naive pediatric patients who had vertical or transfusion-acquired HIV infection, 8 (14.0%) also tested positive for HPV-16, suggesting that it may be possible for HPV to be transmitted via blood transfusion. However, as non-sexual transmission of HPV by other means is not uncommon, this could not be definitively proven. In 2009, a group tested Australian Red Cross blood samples from 180 healthy male donors for HPV and found DNA of one or more strains of the virus in 15 (8.3%) of the samples. Detecting HPV DNA in blood, however, is not the same as detecting the virus itself, and whether the virus can or does reside in the blood of infected individuals is still unknown. It therefore remains to be determined whether HPV can be transmitted via blood. This is of concern, as blood donations are not currently screened for HPV, and at least some organizations, such as the American Red Cross and other Red Cross societies, do not presently appear to disallow HPV-positive individuals from donating blood.
Surgery
Hospital transmission of HPV, especially to surgical staff, has been documented. Surgeons, urologists, and anyone else in the operating room are subject to HPV infection by inhalation of aerosolized viral particles during electrocautery or laser ablation of a condyloma (wart). There has been a case report of a laser surgeon who developed extensive laryngeal papillomatosis after performing laser ablation on patients with anogenital condylomata.
Virology
HPV infection is limited to the basal cells of stratified epithelium, the only tissue in which the virus replicates. The virus cannot bind to live tissue; instead, it infects epithelial tissues through micro-abrasions or other epithelial trauma that exposes segments of the basement membrane. The infectious process is slow, taking 12–24 hours for initiation of transcription. It is believed that antibodies play a major neutralizing role while the virions still reside on the basement membrane and cell surfaces.
HPV lesions are thought to arise from the proliferation of infected basal keratinocytes. Infection typically occurs when basal cells in the host are exposed to the infectious virus through a disturbed epithelial barrier as would occur during sexual intercourse or after minor skin abrasions. HPV infections have not been shown to be cytolytic; rather, viral particles are released as a result of degeneration of desquamating cells. HPV can survive for many months and at low temperatures without a host; therefore, an individual with plantar warts can spread the virus by walking barefoot.
HPV is a small double-stranded circular DNA virus with a genome of approximately 8000 base pairs. The HPV life cycle strictly follows the differentiation program of the host keratinocyte. It is thought that the HPV virion infects epithelial tissues through micro-abrasions, whereby the virion associates with putative receptors such as alpha integrins, laminins, and annexin A2 leading to the entry of the virions into basal epithelial cells through clathrin-mediated endocytosis and/or caveolin-mediated endocytosis depending on the type of HPV. At this point, the viral genome is transported to the nucleus by unknown mechanisms and establishes itself at a copy number of 10-200 viral genomes per cell. A sophisticated transcriptional cascade then occurs as the host keratinocyte begins to divide and become increasingly differentiated in the upper layers of the epithelium.
Evolution
The phylogeny of the various strains of HPV generally reflects the migration patterns of Homo sapiens. Studies suggest that HPV evolved along five major branches that reflect the ethnicity of its human hosts and diversified along with the human population.
Researchers initially identified two major variants of HPV16: European (HPV16-E) and non-European (HPV16-NE). More recent analyses based on thousands of HPV16 genomes confirm that two major clades exist, which are further subdivided into four lineages (designated A–D) and then into 16 sublineages (A1–4, B1–4, C1–4 and D1–4). The A1–A3 sublineages constitute the European variant, A4 the Asian variant, B1–B4 the African type I variant, C1–C4 the African type II variant, D1 the North American variant, D2 the Asian American type I variant, and D3 the Asian American type II variant. The lineages and sublineages differ in oncogenic capacity; overall, the non-European lineages are considered to carry a higher risk for cancer. Although HPV16 is a DNA virus, there are signs of recombination among the different lineages: based on an analysis of more than 3600 genomes, between 0.3 and 1.2% of them could be recombinant. Ideally, therefore, genotyping of HPV16 for cancer-risk assessment should be based not on certain genes only, but on all genes of the entire genome.
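For illustration, the sublineage-to-variant assignments just listed can be captured in a small mapping. This is a sketch based solely on the assignments in this paragraph (note that D4 is not assigned a named variant here), not a representation of any published genotyping tool; the function name is illustrative.

    # Sketch: HPV16 sublineage-to-variant assignments, per the paragraph above.
    SUBLINEAGE_VARIANT = {
        "A1": "European", "A2": "European", "A3": "European",
        "A4": "Asian",
        "B1": "African type I", "B2": "African type I",
        "B3": "African type I", "B4": "African type I",
        "C1": "African type II", "C2": "African type II",
        "C3": "African type II", "C4": "African type II",
        "D1": "North American",
        "D2": "Asian American type I",
        "D3": "Asian American type II",
    }

    def variant_of(sublineage: str) -> str:
        # D4 and unknown labels fall through to the default.
        return SUBLINEAGE_VARIANT.get(sublineage, "unassigned in this summary")

    print(variant_of("A4"))  # Asian
    print(variant_of("D4"))  # unassigned in this summary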
A bioinformatics tool named HPV16-Genotyper (i) performs HPV16 lineage genotyping, (ii) detects potential recombination events, and (iii) identifies, within the submitted sequences, mutations/SNPs that have been reported in the literature to increase the risk for cancer.
E6/E7 proteins
The two primary oncoproteins of high-risk HPV types are E6 and E7. The "E" designation indicates that these two proteins are early proteins (expressed early in the HPV life cycle), while the "L" designation indicates late proteins (expressed late in the life cycle). The HPV genome is composed of six early (E1, E2, E4, E5, E6, and E7) open reading frames (ORFs), two late (L1 and L2) ORFs, and a non-coding long control region (LCR). After the host cell is infected, the viral early promoter is activated and a polycistronic primary RNA containing all six early ORFs is transcribed. This polycistronic RNA then undergoes active RNA splicing to generate multiple isoforms of mRNAs. One of the spliced isoform RNAs, E6*I, serves as an E7 mRNA to translate the E7 protein. Viral early transcription is subject to regulation by viral E2, and high E2 levels repress transcription. HPV genomes integrate into the host genome by disruption of the E2 ORF, which removes this E2-mediated repression of E6 and E7. Thus, integration of the viral genome into the host DNA increases E6 and E7 expression, promoting cellular proliferation and the chance of malignancy. The degree to which E6 and E7 are expressed is correlated with the type of cervical lesion that can ultimately develop.
Role in cancer
Sometimes papillomavirus genomes are found integrated into the host genome, and this is especially noticeable with oncogenic HPVs. The E6/E7 proteins inactivate two tumor suppressor proteins: p53 (inactivated by E6) and pRb (inactivated by E7). The viral oncogenes E6 and E7 are thought to modify the cell cycle so as to retain the differentiating host keratinocyte in a state that is favourable to the amplification of viral genome replication and consequent late gene expression. E6, in association with the host E6-associated protein, which has ubiquitin ligase activity, acts to ubiquitinate p53, leading to its proteasomal degradation. E7 (in oncogenic HPVs) acts as the primary transforming protein. E7 competes for retinoblastoma protein (pRb) binding, freeing the transcription factor E2F to transactivate its targets, thus pushing the cell cycle forward. All HPVs can induce transient proliferation, but only strains 16 and 18 can immortalize cell lines in vitro. It has also been shown that HPV 16 and 18 cannot immortalize primary rat cells alone; activation of the ras oncogene is also needed. In the upper layers of the host epithelium, the late genes L1 and L2 are transcribed and translated, serving as structural proteins that encapsidate the amplified viral genomes. Once the genome is encapsidated, the capsid appears to undergo a redox-dependent assembly/maturation event, which is tied to a natural redox gradient spanning both suprabasal and cornified epithelial tissue layers. This assembly/maturation event stabilizes virions and increases their specific infectivity. Virions can then be sloughed off in the dead squames of the host epithelium, and the viral life cycle continues. A 2010 study found that E6 and E7 are involved in beta-catenin nuclear accumulation and activation of Wnt signaling in HPV-induced cancers.
Latency period
Once an HPV virion invades a cell, an active infection occurs, and the virus can be transmitted. Several months to years may elapse before squamous intraepithelial lesions (SIL) develop and can be clinically detected. The time from active infection to clinically detectable disease may make it difficult for epidemiologists to establish which partner was the source of infection.
Clearance
Most HPV infections are cleared by the immune system without medical intervention or consequences, including infections with high-risk types (i.e. the types found in cancers).
Clearing an infection does not always create immunity if there is a new or continuing source of infection. Hernandez's 2005–06 study of 25 couples reported: "A number of instances indicated apparent reinfection [from partner] after viral clearance."
Diagnosis
Over 200 types of HPV have been identified, and they are designated by numbers. They may be divided into "low-risk" and "high-risk" types. Low-risk types cause warts and high-risk types can cause lesions or cancer.
Cervical testing
Guidelines from the American Cancer Society recommend different screening strategies for cervical cancer based on a woman's age, screening history, risk factors, and choice of tests. Because of the link between HPV and cervical cancer, the ACS currently recommends early detection of cervical cancer in average-risk asymptomatic adults primarily with cervical cytology by Pap smear, regardless of HPV vaccination status. Women aged 30–65 should preferably be tested every 5 years with both the HPV test and the Pap test. In other age groups, a Pap test alone can suffice unless the woman has been diagnosed with atypical squamous cells of undetermined significance (ASC-US). Co-testing with a Pap test and HPV test is recommended because it decreases the rate of false negatives. According to the National Cancer Institute, "The most common test detects DNA from several high-risk HPV types, but it cannot identify the types that are present. Another test is specific for DNA from HPV types 16 and 18, the two types that cause most HPV-associated cancers. A third test can detect DNA from several high-risk HPV types and can indicate whether HPV-16 or HPV-18 is present. A fourth test detects RNA from the most common high-risk HPV types. These tests can detect HPV infections before cell abnormalities are evident.
"Theoretically, the HPV DNA and RNA tests could be used to identify HPV infections in cells taken from any part of the body. However, the tests are approved by the FDA for only two indications: for follow-up testing of women who seem to have abnormal Pap test results and for cervical cancer screening in combination with a Pap test among women over age 30."
Mouth testing
Guidelines for oropharyngeal cancer screening by the Preventive Services Task Force and American Dental Association in the U.S. suggest conventional visual examination, but because some parts of the oropharynx are hard to see, this cancer is often only detected in later stages.
The diagnosis of oropharyngeal cancer occurs by biopsy of exfoliated cells or tissues. The National Comprehensive Cancer Network and College of American Pathologists recommend testing for HPV in oropharyngeal cancer. However, while testing is recommended, no specific type of test for detecting HPV in oral tumors is currently recommended by the FDA in the United States. Because HPV type 16 is the most common type found in oropharyngeal cancer, p16 immunohistochemistry is one test option used to determine whether HPV is present, which can help determine the course of treatment, since tumors that are negative for p16 have better outcomes. Another option that has emerged as reliable is HPV DNA in situ hybridization (ISH), which allows for visualization of the virus.
Testing men
There is not a wide range of tests available even though HPV is common; most studies of HPV used tools and custom analyses not available to the general public. Clinicians often depend on vaccination among young people and on high clearance rates (see the Clearance subsection in Virology) to create a low risk of disease and mortality, and treat cancers when they appear. Others believe that reducing HPV infection in more men and women, even when it has no symptoms, is important (herd immunity) to prevent more cancers rather than just treating them. Where tests are used, negative results show safety from transmission, and positive results show where shielding (condoms, gloves) is needed to prevent transmission until the infection clears.
Studies have tested for and found HPV in men, including high-risk types (i.e. the types found in cancers), on fingers, mouth, saliva, anus, urethra, urine, semen, blood, scrotum and penis.
The aforementioned Qiagen/Digene kit was successfully used off-label to test the penis, scrotum, and anus of men in long-term relationships with women who were positive for high-risk HPV. Of these men, 60% were found to carry the virus, primarily on the penis. Similar studies have been conducted on women using cytobrushes – an endocervical brush for sampling the cervix in females – and custom analysis.
In one study researchers sampled subjects' urethra, scrotum, and penis. Samples taken from the urethra added less than 1% to the HPV rate. Studies like this led Giuliano to recommend sampling the glans, shaft, and crease between them, along with the scrotum, since sampling the urethra or anus added very little to the diagnosis. Dunne recommends the glans, shaft, their crease, and the foreskin.
In one study the subjects were asked not to wash their genitals for 12 hours before sampling, including the urethra as well as the scrotum and the penis. Other studies are silent on washing – a particular gap in studies of the hands.
One small study used wet cytobrushes, rather than wetting the skin. It found a higher proportion of men to be HPV-positive when the skin was rubbed with 600-grit emery paper before being swabbed with the brush, rather than swabbed with no preparation. It is unclear whether the emery paper collected the virions or simply loosened them for the swab to collect.
Studies have found self-collection (with emery paper and Dacron swabs) as effective as collection done by a clinician, and sometimes more so, since patients were more willing than a clinician to scrape vigorously. Women had similar success in self-sampling using tampons, swabs, cytobrushes, and lavage.
Several studies used cytobrushes to sample fingertips and under fingernails, without wetting the area or the brush.
Other studies analyzed urine, semen, and blood and found varying amounts of HPV, but no publicly available test for those exists yet.
Other testing
Although it is possible to test for HPV DNA in other kinds of infections, there are no FDA-approved tests for general screening in the United States or tests approved by the Canadian government, since the testing is inconclusive and considered medically unnecessary.
Genital warts are the only visible sign of low-risk genital HPV and can be identified with a visual check. These visible growths, however, are the result of non-carcinogenic HPV types. Five percent acetic acid (vinegar) is used to identify both warts and squamous intraepithelial lesions (SIL) with limited success, by causing abnormal tissue to appear white, but most doctors have found this technique helpful only in moist areas, such as the female genital tract. At this time, HPV tests for males are used only in research.
Research into testing for HPV by antibody presence has been done. The approach looks for an immune response in the blood, which would contain antibodies to HPV if the patient is HPV-positive. The reliability of such tests has not been proven, and no FDA-approved product existed as of August 2018; testing by blood would be a less invasive test for screening purposes.
Prevention
The HPV vaccines can prevent the most common types of infection. To be effective, they must be used before an infection occurs and are therefore recommended between the ages of nine and thirteen. Cervical cancer screening, such as with the Papanicolaou (Pap) test or examination of the cervix after applying acetic acid, can detect early cancer or abnormal cells that may develop into cancer. This allows for early treatment, which results in better outcomes. Screening has reduced both the number of cases of and deaths from cervical cancer in the developed world. Warts can be removed by freezing.
Vaccines
Three vaccines are available to prevent infection by some HPV types: Gardasil, Gardasil 9 and Cervarix; all three protect against initial infection with HPV types 16 and 18, which cause most HPV-associated cancer cases. Gardasil also protects against HPV types 6 and 11, which cause 90% of genital warts. Gardasil is a recombinant quadrivalent vaccine, whereas Cervarix is bivalent, and is prepared from virus-like particles (VLPs) of the L1 capsid protein. Gardasil 9 is nonavalent and has the potential to prevent about 90% of cervical, vulvar, vaginal, and anal cancers. It protects against HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58; the latter five cause up to 20% of cervical cancers and were not previously covered.
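As a small illustration, the coverage described here (and in the FDA approval noted below) can be written as sets of type numbers. This is a sketch for clarity only, using the type lists from this paragraph; it is not a statement of clinical protection.

    # Sketch: HPV type coverage of the three vaccines, per the text above.
    CERVARIX   = {16, 18}
    GARDASIL   = {6, 11, 16, 18}
    GARDASIL_9 = {6, 11, 16, 18, 31, 33, 45, 52, 58}

    # The five additional strains covered by Gardasil 9:
    print(sorted(GARDASIL_9 - GARDASIL))  # [31, 33, 45, 52, 58]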
The vaccines provide little benefit to women already infected with HPV types 16 and 18. For this reason, the vaccine is recommended primarily for women who have not yet been exposed to HPV through sex. The World Health Organization position paper on HPV vaccination outlines appropriate, cost-effective strategies for using HPV vaccines in public sector programs.
There is high-certainty evidence that HPV vaccines protect against precancerous cervical lesions in young women, particularly those vaccinated aged 15 to 26. HPV vaccines do not increase the risk of serious adverse events. Longer follow-up is needed to monitor the impact of HPV vaccines on cervical cancer.
The CDC recommends the vaccines be delivered in two shots at an interval of at least 6 months for those aged 11–12, and three doses for those 13 and older. In most countries, they are funded only for female use, but are approved for male use in many countries, and funded for teenage boys in Australia. The vaccine does not have any therapeutic effect on existing HPV infections or cervical lesions. In 2010, 49% of teenage girls in the US got the HPV vaccine.
Following studies suggesting that the vaccine is more effective in younger girls than in older teenagers, the United Kingdom, Switzerland, Mexico, the Netherlands, and Quebec began offering the vaccine in a two-dose schedule for girls aged under 15 in 2014.
Cervical cancer screening recommendations have not changed for females who receive the HPV vaccine. It remains a recommendation that women continue cervical screening, such as Pap smear testing, even after receiving the vaccine, since it does not prevent all types of cervical cancer.
Both men and women are carriers of HPV. The Gardasil vaccine also protects men against anal cancer, anal warts, and genital warts.
The duration of both vaccines' efficacy has been observed since they were first developed and is expected to be long-lasting.
In December 2014, the FDA approved a nine-valent Gardasil-based vaccine, Gardasil 9, to protect against infection with the four strains of HPV covered by the first generation of Gardasil as well as five other strains responsible for 20% of cervical cancers (HPV-31, HPV-33, HPV-45, HPV-52, and HPV-58).
Condoms
The Centers for Disease Control and Prevention says that male "condom use may reduce the risk for genital human papillomavirus (HPV) infection" but that condoms provide a lesser degree of protection compared with other sexually transmitted infections "because HPV also may be transmitted by exposure to areas (e.g., infected skin or mucosal surfaces) that are not covered or protected by the condom."
Disinfection
The virus is unusually hardy and resistant to most common disinfectants. It is the first virus ever shown to resist inactivation by glutaraldehyde, which is among the most common strong disinfectants used in hospitals. Diluted sodium hypochlorite bleach is effective but cannot be used on some types of re-usable equipment, such as ultrasound transducers. As a result of these difficulties, there is developing concern about the possibility of transmitting the virus on healthcare equipment, particularly reusable gynecological equipment that cannot be autoclaved. For such equipment, some health authorities encourage use of UV disinfection or a non-hypochlorite "oxidizing-based high-level disinfectant [bleach] with label claims for non-enveloped viruses", such as a strong hydrogen peroxide solution or chlorine dioxide wipes. Such disinfection methods are expected to be relatively effective against HPV.
Management
There is currently no specific treatment for HPV infection. However, the viral infection is usually cleared to undetectable levels by the immune system. According to the Centers for Disease Control and Prevention, the body's immune system clears HPV naturally within two years for 90% of cases (see Clearance subsection in Virology for more detail). However, experts do not agree on whether the virus is eliminated or reduced to undetectable levels, and it is difficult to know when it is contagious.
Follow-up care is usually recommended and practiced by many health clinics. Follow-up is sometimes unsuccessful because a portion of those treated do not return for evaluation. In addition to phone calls and mail, text messaging and email can increase the number of people who return for care. As of 2015, the best method of follow-up after treatment of cervical intraepithelial neoplasia was unclear.
Epidemiology
Globally, 12% of women are positive for HPV DNA, with rates varying by age and country. The highest rates of HPV are in younger women, with a rate of 24% in women under 25 years. Rates decline in older age groups in Europe and the Americas, but less so in Africa and Asia. The rates are highest in Sub-Saharan Africa (24%) and Eastern Europe (21%) and lowest in North America (5%) and Western Asia (2%).
The most common types of HPV worldwide are HPV16 (3.2%), HPV18 (1.4%), HPV52 (0.9%), HPV31 (0.8%), and HPV58 (0.7%). High-risk types of HPV are also distributed unevenly, with HPV16 having a rate of around 13% in Africa and 30% in West and Central Asia.
Like many diseases, HPV disproportionately affects low-income and resource-poor countries. The higher rates of HPV in Sub-Saharan Africa, for example, may be related to high exposure to human immunodeficiency virus (HIV) in the region. Other factors that impact the global spread of disease are sexual behaviors including age of sexual debut, number of sexual partners, and ease of access to barrier contraception, all of which vary globally.
United States
HPV is estimated to be the most common sexually transmitted infection in the United States. Most sexually active men and women will probably acquire genital HPV infection at some point in their lives. The American Social Health Association estimates that about 75–80% of sexually active Americans will be infected with HPV at some point in their lifetime. By the age of 50, more than 80% of American women will have contracted at least one strain of genital HPV. It was estimated that, in the year 2000, there were approximately 6.2 million new HPV infections among Americans aged 15–44; of these, an estimated 74% occurred in people between the ages of 15 and 24. Of the STIs studied, genital HPV was the most commonly acquired. In the United States, it is estimated that 10% of the population has an active HPV infection, 4% has an infection that has caused cytological abnormalities, and an additional 1% has an infection causing genital warts.
Estimates of HPV prevalence vary from 14% to more than 90%. One reason for the difference is that some studies report women who currently have a detectable infection, while other studies report women who have ever had a detectable infection. Another cause of discrepancy is the difference in strains that were tested for.
One study found that, during 2003–2004, at any given time, 26.8% of women aged 14 to 59 were infected with at least one type of HPV. This was higher than previous estimates; 15.2% were infected with one or more of the high-risk types that can cause cancer.
The prevalence for high-risk and low-risk types is roughly similar over time.
Human papillomavirus is not included among the diseases that are typically reportable to the CDC as of 2011.
Ireland
On average, 538 cases of HPV-associated cancers were diagnosed per year in Ireland during the period 2010 to 2014. Cervical cancer was the most frequent HPV-associated cancer, with on average 292 cases per year (74% of the female total, and 54% of the overall total of HPV-associated cancers). A study of 996 cervical cytology samples in an opportunistically screened Irish urban female population found an overall HPV prevalence of 19.8%; HPV 16 (at 20%) and HPV 18 (at 12%) were the commonest high-risk types detected. In Europe, types 16 and 18 are responsible for over 70% of cervical cancers. Overall rates of HPV-associated invasive cancers may be increasing: between 1994 and 2014, the rate of HPV-associated invasive cancers in Ireland increased by 2% per year for both sexes.
As HPV is known to be associated with anogenital warts, these are notifiable to the Health Protection Surveillance Centre (HPSC). Genital warts are the second most common STI in Ireland. There were 1,281 cases of anogenital warts notified in 2017, a decrease from the 2016 figure of 1,593. The highest age-specific rate for both males and females was in the 25–29 year old age range; 53% of cases were among males.
Sri Lanka
In Sri Lanka, the prevalence of HPV is 15.5% regardless of cytological abnormalities.
Inner Mongolia
In the Autonomous Region of Inner Mongolia, overall HPV prevalence is 14.5% but shows substantial ethnic disparity: the prevalence in Mongolian women (14.9%) is much higher than that in Han participants (4.3%). Urbanization, the number of sex partners, and Pap test history appear as risk factors for HPV infection in Han, but not in Mongolian, women. The region is thus an important example of how the epidemiology of HPV relates more to cultural and ethnic factors than to geography per se.
History
In 1972, the association of the human papillomaviruses with skin cancer in epidermodysplasia verruciformis was proposed by Stefania Jabłońska in Poland. In 1976 Harald zur Hausen published the hypothesis that human papillomavirus plays an important role in the cause of cervical cancer. In 1978, Jabłońska and Gerard Orth at the Pasteur Institute discovered HPV-5 in skin cancer. In 1983 and 1984 zur Hausen and his collaborators identified HPV16 and HPV18 in cervical cancer.
The HeLa cell line contains extra DNA in its genome that originated from HPV type 18.
Research
The Ludwig-McGill HPV Cohort is one of the world's largest longitudinal studies of the natural history of human papillomavirus (HPV) infection and cervical cancer risk. It was established in 1993 by Ludwig Cancer Research and McGill University in Montreal, Canada.
| Biology and health sciences | Infectious disease | null |
188543 | https://en.wikipedia.org/wiki/Biofuel | Biofuel | Biofuel is a fuel that is produced over a short time span from biomass, rather than by the very slow natural processes involved in the formation of fossil fuels such as oil. Biofuel can be produced from plants or from agricultural, domestic or industrial biowaste. Biofuels are mostly used for transportation, but can also be used for heating and electricity. Biofuels (and bioenergy in general) are regarded as a renewable energy source. The use of biofuel has been subject to criticism regarding the "food vs fuel" debate, varied assessments of their sustainability, and ongoing deforestation and biodiversity loss as a result of biofuel production.
In general, biofuels produce fewer greenhouse gas emissions when burned in an engine and are generally considered carbon-neutral fuels, as the carbon emitted has been captured from the atmosphere by the crops used in production. However, life-cycle assessments of biofuels have shown large emissions associated with the potential land-use change required to produce additional biofuel feedstocks. The outcomes of life-cycle assessments (LCAs) for biofuels are highly situational and depend on many factors, including the type of feedstock, production routes, data variations, and methodological choices. Estimates of the climate impact of biofuels therefore vary widely based on the methodology and the exact situation examined, and the climate change mitigation potential of biofuel varies considerably: in some scenarios emission levels are comparable to fossil fuels, and in other scenarios biofuels result in negative emissions.
Global demand for biofuels is predicted to increase by 56% over 2022–2027. By 2027, worldwide biofuel production is expected to supply 5.4% of the world's fuels for transport, including 1% of aviation fuel. Demand for aviation biofuel is forecast to increase. However, some policies have been criticized for favoring ground transportation over aviation.
The two most common types of biofuel are bioethanol and biodiesel. Brazil is the largest producer of bioethanol, while the EU is the largest producer of biodiesel. The energy content in the global production of bioethanol and biodiesel is 2.2 and 1.8 EJ per year, respectively.
Bioethanol is an alcohol made by fermentation, mostly from carbohydrates produced in sugar or starch crops such as maize, sugarcane, or sweet sorghum. Cellulosic biomass, derived from non-food sources, such as trees and grasses, is also being developed as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form (E100), but it is usually used as a gasoline additive to increase octane ratings and improve vehicle emissions.
Biodiesel is produced from oils or fats using transesterification. It can be used as a fuel for vehicles in its pure form (B100), but it is usually used as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles.
Terminology
The term biofuel is used in different ways. One definition is "Biofuels are biobased products, in solid, liquid, or gaseous forms. They are produced from crops or natural products, such as wood, or agricultural residues, such as molasses and bagasse."
Other publications reserve the term biofuel for liquid or gaseous fuels, used for transportation.
The IPCC Sixth Assessment Report defines biofuel as "A fuel, generally in liquid form, produced from biomass. Biofuels include bioethanol from sugarcane, sugar beet or maize, and biodiesel from canola or soybeans." It goes on to define biomass in this context as "organic material excluding the material that is fossilised or embedded in geological formations". This means that coal and other fossil fuels are not forms of biomass in this context.
Conventional biofuels (first generation)
First-generation biofuels (also denoted as "conventional biofuels") are made from food crops grown on arable land. The crop's sugar, starch, or oil content is converted into biodiesel or ethanol, using transesterification, or yeast fermentation.
Advanced biofuels
To avoid a "food versus fuel" dilemma, second-generation biofuels and third-generation biofuels (also called advanced biofuels or sustainable biofuels or drop-in biofuels) are made from feedstocks which do not directly compete with food or feed crop such as waste products and energy crops. A wide range of renewable residue feedstocks such as those derived from agriculture and forestry activities like rice straw, rice husk, wood chips, and sawdust can be used to produce advanced biofuels through biochemical and thermochemical processes.
The feedstocks used to make these fuels either grow on arable land as byproducts of a main crop or are grown on marginal land. Second-generation feedstocks include straw, bagasse, perennial grasses, jatropha, waste vegetable oil, municipal solid waste and so forth.
Types
Liquid
Ethanol
Biologically produced alcohols, most commonly ethanol, and less commonly propanol and butanol, are produced by the action of microorganisms and enzymes through the fermentation of sugars or starches (easiest to produce) or cellulose (more difficult to produce). The IEA estimates that ethanol production used 20% of sugar supplies and 13% of corn supplies in 2021.
Ethanol fuel is the most common biofuel worldwide, particularly in Brazil. Alcohol fuels are produced by fermentation of sugars derived from wheat, corn, sugar beets, sugar cane, molasses and any sugar or starch from which alcoholic beverages, such as whiskey, can be made (including potato and fruit waste). Production methods used are enzyme digestion (to release sugars from stored starches), fermentation of the sugars, distillation and drying. The distillation process requires significant energy input to generate heat. Heat is sometimes generated from unsustainable natural gas, but cellulosic biomass such as bagasse is the most common fuel in Brazil, while pellets, wood chips and waste heat are more common in Europe. Concerns about corn-to-ethanol and other food-based feedstocks have led to the development of cellulosic ethanol.
Other biofuels
Methanol is currently produced from natural gas, a non-renewable fossil fuel. It is hoped that in the future it can be produced from biomass as biomethanol. This is technically feasible, but production is currently postponed over concerns that it is not yet economically viable. The methanol economy is an alternative to the hydrogen economy, to be contrasted with today's hydrogen production from natural gas.
Butanol (C4H9OH) is formed by ABE fermentation (acetone, butanol, ethanol), and experimental modifications of the process show potentially high net energy gains with biobutanol as the only liquid product. Biobutanol is often claimed to provide a direct replacement for gasoline, because it produces more energy than ethanol and allegedly can be burned "straight" in existing gasoline engines (without modification to the engine or car), is less corrosive and less water-soluble than ethanol, and could be distributed via existing infrastructure. Escherichia coli strains have also been successfully engineered to produce butanol by modifying their amino acid metabolism. One drawback to butanol production in E. coli remains the high cost of nutrient-rich media; however, recent work has demonstrated that E. coli can produce butanol with minimal nutritional supplementation. Biobutanol is sometimes called biogasoline, which is incorrect, as it is chemically different, being an alcohol and not a hydrocarbon like gasoline.
Biodiesel
Biodiesel is the most common biofuel in Europe. It is produced from oils or fats using transesterification and is a liquid similar in composition to fossil/mineral diesel. Chemically, it consists mostly of fatty acid methyl (or ethyl) esters (FAMEs). Feedstocks for biodiesel include animal fats and vegetable oils such as soy, rapeseed, jatropha, mahua, mustard, flax, sunflower, palm oil, hemp, field pennycress, Pongamia pinnata and algae. Pure biodiesel (B100, also known as "neat" biodiesel) can reduce emissions by up to 60% compared to fossil diesel. Researchers at Australia's CSIRO have been studying safflower oil as an engine lubricant, and researchers at Montana State University's Advanced Fuels Center in the US have been studying the oil's performance in a large diesel engine, with results described as a "breakthrough".
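Schematically, the transesterification that yields FAMEs can be written as follows. This is a textbook-level sketch of the overall stoichiometry (with a base catalyst such as the sodium hydroxide mentioned later in this section, and R denoting a fatty-acid chain), not a description of any particular industrial process:

    \text{triglyceride} + 3\,\mathrm{CH_3OH} \;\xrightarrow{\text{catalyst (e.g. NaOH)}}\; 3\,\mathrm{RCOOCH_3}\ \text{(FAMEs)} + \text{glycerol}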
Biodiesel can be used in any diesel engine and modified equipment when mixed with mineral diesel. It can also be used in its pure form (B100) in diesel engines, but some maintenance and performance problems may occur during winter use, since the fuel becomes somewhat more viscous at lower temperatures, depending on the feedstock used.
Electronically controlled 'common rail' and 'Unit Injector' type systems from the late 1990s onwards can only use biodiesel blended with conventional diesel fuel. These engines have finely metered and atomized multiple-stage injection systems that are very sensitive to the viscosity of the fuel. Many current-generation diesel engines are designed to run on B100 without altering the engine itself, although this depends on the fuel rail design. Since biodiesel is an effective solvent and cleans residues deposited by mineral diesel, engine filters may need to be replaced more often, as the biofuel dissolves old deposits in the fuel tank and pipes. It also effectively cleans the engine combustion chamber of carbon deposits, helping to maintain efficiency.
Biodiesel is an oxygenated fuel, meaning it contains less carbon and more hydrogen and oxygen than fossil diesel. This improves the combustion of biodiesel and reduces the particulate emissions from unburnt carbon. However, using pure biodiesel may increase NOx emissions. Biodiesel is also safe to handle and transport because it is non-toxic and biodegradable, and has a high flash point of about 300 °F (148 °C), compared to petroleum diesel fuel, which has a flash point of 125 °F (52 °C).
In many European countries, a 5% biodiesel blend is widely used and is available at thousands of gas stations. In France, biodiesel is incorporated at a rate of 8% in the fuel used by all French diesel vehicles. Avril Group produces, under the brand Diester, a fifth of the 11 million tons of biodiesel consumed annually by the European Union, making it the leading European producer of biodiesel.
Green diesel
Green diesel can be produced from a combination of biochemical and thermochemical processes. Conventional green diesel is produced by hydroprocessing biological oil feedstocks, such as vegetable oils and animal fats. More recently, it has also been produced through a series of thermochemical processes such as pyrolysis and hydroprocessing. In the thermochemical route, syngas produced from gasification, bio-oil produced from pyrolysis, or biocrude produced from hydrothermal liquefaction is upgraded to green diesel using hydroprocessing. Hydroprocessing uses hydrogen to reform a molecular structure; for example, hydrocracking, a widely used hydroprocessing technique in refineries, operates at elevated temperatures and pressure in the presence of a catalyst to break down larger molecules, such as those found in vegetable oils, into the shorter hydrocarbon chains used in diesel engines. Green diesel may also be called renewable diesel, drop-in biodiesel, hydrotreated vegetable oil (HVO fuel) or hydrogen-derived renewable diesel. Unlike biodiesel, green diesel has exactly the same chemical properties as petroleum-based diesel. It does not require new engines, pipelines or infrastructure to distribute and use, but it has not been produced at a cost that is competitive with petroleum. Gasoline versions are also being developed. Green diesel is being developed in Louisiana and Singapore by ConocoPhillips, Neste Oil, Valero, Dynamic Fuels, and Honeywell UOP, as well as by Preem in Gothenburg, Sweden, creating what is known as Evolution Diesel.
Straight vegetable oil
Straight unmodified edible vegetable oil is generally not used as fuel, but lower-quality oil has been used for this purpose. Used vegetable oil is increasingly being processed into biodiesel, or (more rarely) cleaned of water and particulates and then used as a fuel. The IEA estimates that biodiesel production used 17% of global vegetable oil supplies in 2021.
Oils and fats can be transesterified with a short-chain alcohol (usually methanol) in the presence of a catalyst (usually sodium hydroxide, NaOH) to give biodiesel, or hydrogenated to give a diesel substitute. The product of hydrogenation is a straight-chain hydrocarbon with a high cetane number, low in aromatics and sulfur, and containing no oxygen. Hydrogenated oils can be blended with diesel in all proportions. They have several advantages over biodiesel, including good performance at low temperatures, no storage stability problems and no susceptibility to microbial attack.
Biogasoline
Biogasoline can be produced biologically and thermochemically. Using biological methods, a study led by Professor Lee Sang-yup at the Korea Advanced Institute of Science and Technology (KAIST), published in the international science journal Nature, used modified E. coli fed with glucose from plants or other non-food crops to produce biogasoline via the bacteria's enzymes. The enzymes converted the sugar into fatty acids and then turned these into hydrocarbons that were chemically and structurally identical to those found in commercial gasoline fuel. The thermochemical approaches to producing biogasoline are similar to those used to produce biodiesel. Biogasoline may also be called drop-in gasoline or renewable gasoline.
Bioethers
Bioethers (also referred to as fuel ethers or oxygenated fuels) are cost-effective compounds that act as octane rating enhancers. "Bioethers are produced by the reaction of reactive iso-olefins, such as iso-butylene, with bioethanol." Bioethers are created from wheat or sugar beets, and can also be produced from the waste glycerol that results from the production of biodiesel. They also enhance engine performance, while significantly reducing engine wear and toxic exhaust emissions. By greatly reducing ground-level ozone emissions, they contribute to improved air quality.
In transportation fuel there are six ether additives: dimethyl ether (DME), diethyl ether (DEE), methyl tert-butyl ether (MTBE), ethyl tert-butyl ether (ETBE), tert-amyl methyl ether (TAME), and tert-amyl ethyl ether (TAEE).
The European Fuel Oxygenates Association identifies MTBE and ETBE as the most commonly used ethers in fuel to replace lead. Ethers were introduced in Europe in the 1970s to replace that highly toxic compound. Although Europeans still use bioether additives, the U.S. Energy Policy Act of 2005 lifted the requirement for reformulated gasoline to include an oxygenate, leading to less MTBE being added to fuel. Although bioethers are likely to replace petroleum-derived ethers in the UK, it is highly unlikely they will become a fuel in and of themselves, due to their low energy density.
Aviation biofuel
Gaseous
Biogas and biomethane
Biogas is a mixture composed primarily of methane and carbon dioxide, produced by the anaerobic digestion of organic material by micro-organisms. Other trace components of this mixture include water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. It can be produced either from biodegradable waste materials or by the use of energy crops fed into anaerobic digesters to supplement gas yields. The solid byproduct, digestate, can be used as a biofuel or a fertilizer. When carbon dioxide and other impurities are removed from biogas, it is called biomethane. The carbon dioxide can also be combined with hydrogen in methanation to form more methane.
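The methanation step mentioned above is commonly carried out via the Sabatier reaction; as a sketch, the overall stoichiometry is:

    \mathrm{CO_2} + 4\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}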
Biogas can be recovered from mechanical biological treatment waste processing systems. Landfill gas, a less clean form of biogas, is produced in landfills through naturally occurring anaerobic digestion. If it escapes into the atmosphere, it acts as a greenhouse gas.
In Sweden, "waste-to-energy" power plants capture methane biogas from garbage and use it to power transport systems. Farmers can produce biogas from cattle manure via anaerobic digesters.
Syngas
Syngas, a mixture of carbon monoxide, hydrogen and various hydrocarbons, is produced by partial combustion of biomass, that is, combustion with an amount of oxygen insufficient to convert the biomass completely to carbon dioxide and water. Before partial combustion, the biomass is dried and sometimes pyrolysed. Using syngas is more efficient than direct combustion of the original biomass; more of the energy contained in the fuel is extracted.
Syngas may be burned directly in internal combustion engines, turbines or high-temperature fuel cells. The wood gas generator, a wood-fueled gasification reactor, can be connected to an internal combustion engine.
Syngas can be used to produce methanol, dimethyl ether and hydrogen, or converted via the Fischer–Tropsch process to produce a diesel substitute, or a mixture of alcohols that can be blended into gasoline. Gasification normally relies on temperatures greater than 700 °C.
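As an illustration of the Fischer–Tropsch step, one simplified overall stoichiometry for alkane synthesis from syngas is the following, where n is the chain length; this is a textbook schematic, not the full product distribution of a real reactor:

    n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}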
Lower-temperature gasification is desirable when co-producing biochar, but results in syngas polluted with tar.
Solid
The term "biofuels" is also used for solid fuels that are made from biomass, even though this is less common.
Research into other types
Algae-based biofuels
Algae can be produced in ponds or tanks on land, and out at sea. Algal fuels have high yields, a high ignition point, can be grown with minimal impact on fresh water resources, can be produced using saline water and wastewater, and are biodegradable and relatively harmless to the environment if spilled. However, production requires large amounts of energy and fertilizer, the produced fuel degrades faster than other biofuels, and it does not flow well in cold temperatures.
By 2017, due to economic considerations, most efforts to produce fuel from algae had been abandoned or redirected to other applications.
Third- and fourth-generation biofuels also include biofuels that are produced by bioengineered organisms, i.e. algae and cyanobacteria. Algae and cyanobacteria use water, carbon dioxide, and solar energy to produce biofuels. This method of biofuel production is still at the research level. The biofuels secreted by the bioengineered organisms are expected to have higher photon-to-fuel conversion efficiency than older generations of biofuels. One advantage of this class of biofuels is that cultivating the organisms that produce them does not require arable land. A disadvantage is the very high cost of cultivating the biofuel-producing organisms.
Electrofuels and solar fuels
Electrofuels and solar fuels may or may not be biofuels, depending on whether they contain biological elements. Electrofuels are made by storing electrical energy in the chemical bonds of liquids and gases. The primary targets are butanol, biodiesel, and hydrogen, but include other alcohols and carbon-containing gases such as methane and butane. A solar fuel is a synthetic chemical fuel produced from solar energy. Light is converted to chemical energy, typically by reducing protons to hydrogen, or carbon dioxide to organic compounds.
Bio-digesters
A bio-digester is a mechanized toilet that uses decomposition and sedimentation to turn human waste into a renewable fuel called biogas. Biogas can be made from substances such as agricultural waste and sewage. The bio-digester uses a process called anaerobic digestion to produce biogas: organic matter is broken down by microorganisms in the absence of oxygen. The processes involved in anaerobic digestion are hydrolysis, acidogenesis, acetogenesis, and methanogenesis.
Extent of production and use
Global biofuel production was 81 Mtoe in 2017 which represented an annual increase of about 3% compared to 2010. In 2017, the US was the largest biofuel producer in the world producing 37 Mtoe, followed by Brazil and South America at 23 Mtoe and Europe (mainly Germany) at 12 Mtoe.
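For comparison with the exajoule figures quoted earlier, these tonne-of-oil-equivalent figures can be converted directly. A minimal sketch, assuming the standard convention of 41.868 GJ per tonne of oil equivalent; the function name is illustrative:

    # Sketch: convert biofuel production figures from Mtoe to exajoules (EJ).
    GJ_PER_TOE = 41.868  # standard definition of a tonne of oil equivalent

    def mtoe_to_ej(mtoe: float) -> float:
        # 1 Mtoe = 1e6 toe, and 1 EJ = 1e9 GJ
        return mtoe * 1e6 * GJ_PER_TOE / 1e9

    print(round(mtoe_to_ej(81), 2))  # global 2017 production: ~3.39 EJ
    print(round(mtoe_to_ej(37), 2))  # US 2017 production: ~1.55 EJ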
An assessment from 2017 found that: "Biofuels will never be a major transport fuel as there is just not enough land in the world to grow plants to make biofuel for all vehicles. It can however, be part of an energy mix to take us into a future of renewable energy."
In 2021, worldwide biofuel production provided 4.3% of the world's fuels for transport, including a very small amount of aviation biofuel. By 2027, worldwide biofuel production is expected to supply 5.4% of the world's fuels for transport including 1% of aviation fuel.
The US, Europe, Brazil and Indonesia are driving the majority of biofuel consumption growth. This demand for biodiesel, renewable diesel and biojet fuel is projected to increase by 44% (21 billion litres) over 2022–2027.
Issues
Environmental impacts
Estimates about the climate impact from biofuels vary widely based on the methodology and exact situation examined.
In general, biofuels produce fewer greenhouse gas emissions when burned in an engine and are generally considered carbon-neutral fuels, as the carbon they emit has been captured from the atmosphere by the crops used in biofuel production. Their life-cycle greenhouse gas emissions range from as low as −127.1 gCO2eq per MJ, when carbon capture is incorporated into production, to more than 95 gCO2eq per MJ when land-use change is significant. Several factors account for this variation, including the feedstock and its origin, the fuel production technique, system boundary definitions, and energy sources. Many government policies, such as those of the European Union and the UK, require that biofuels achieve at least 65% greenhouse gas emissions savings (or 70% for renewable fuels of non-biological origin) relative to fossil fuels.
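To make the threshold arithmetic concrete, here is a minimal sketch in the gCO2eq/MJ framing used above. The fossil comparator value is an assumption for illustration (94 gCO2eq/MJ is the comparator used in EU renewable energy legislation), and the function name is illustrative:

    # Sketch: fractional life-cycle GHG savings of a biofuel vs. a fossil comparator.
    FOSSIL_COMPARATOR = 94.0  # gCO2eq per MJ; assumed value, see lead-in

    def ghg_savings(biofuel_gco2eq_per_mj: float) -> float:
        """Savings fraction; 0.65 corresponds to the 65% requirement above."""
        return 1 - biofuel_gco2eq_per_mj / FOSSIL_COMPARATOR

    print(ghg_savings(30.0) >= 0.65)  # True: qualifies under a 65% threshold
    print(ghg_savings(95.0) >= 0.65)  # False: worse than the comparator itself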
The growing demand for biofuels has raised concerns about land use and food security. Many biofuel crops are grown on land that could otherwise be used for food production. This shift in land use can lead to several problems:
Competition with Food Crops: The cultivation of biofuels, especially in food-insecure regions, can drive up the cost of food and reduce the amount of land available for growing essential crops. This can exacerbate global food insecurity, especially in developing countries.
Deforestation and Habitat Loss: To meet the increasing demand for biofuels, large areas of forests and natural habitats are being cleared for agriculture. This deforestation leads to the loss of biodiversity, threatens wildlife species, and disrupts ecosystems.
Biodiversity Loss
The expansion of biofuel production, particularly through monoculture farming (growing a single crop on a large scale), poses a significant threat to biodiversity. Large-scale biofuel crop production can lead to:
Habitat Destruction: The conversion of natural ecosystems into agricultural land can result in the loss of habitats for many plant and animal species, leading to decreased biodiversity.
Soil Degradation: Monoculture farming can deplete soil nutrients, reduce soil fertility, and increase the need for chemical inputs like fertilizers and pesticides, which can further harm surrounding ecosystems.
Soil Fertility: Continuous cultivation of biofuel crops without proper crop rotation or sustainable farming practices can lead to soil depletion. Over time, the soil may lose vital nutrients, making it less suitable for farming.
Life-cycle assessments of first-generation biofuels have shown large emissions associated with the potential land-use change required to produce additional biofuel feedstocks. If no land-use change is involved, first-generation biofuels can—on average—have lower emissions than fossil fuels. However, biofuel production can compete with food crop production. Up to 40% of corn produced in the United States is used to make ethanol and worldwide 10% of all grain is turned into biofuel. A 50% reduction in grain used for biofuels in the US and Europe would replace all of Ukraine's grain exports. Several studies have shown that reductions in emissions from biofuels are achieved at the expense of other impacts, such as acidification, eutrophication, water footprint and biodiversity loss.
Second-generation biofuels are thought to increase environmental sustainability since the non-food part of plants is being used to produce second-generation biofuels instead of being disposed of. But the use of second-generation biofuels increases the competition for lignocellulosic biomass, increasing the cost of these biofuels.
In theory, third-generation biofuels, produced from algae, should not harm the environment more than first- or second-generation biofuels, owing to smaller changes in land use and the fact that they do not require pesticides for production. The data, however, show that the environmental costs of producing the infrastructure and energy required for third-generation biofuel production are higher than the benefits provided by the biofuels' use.
The European Commission has officially approved a measure to phase out palm oil-based biofuels by 2030. Unsustainable palm oil agriculture has caused significant environmental and social problems, including deforestation and pollution.
The production of biofuels can be very energy intensive, which, if the energy is generated from non-renewable sources, can heavily offset the benefits gained through biofuel use. One proposed solution is to supply biofuel production facilities with excess nuclear energy to supplement the power provided by fossil fuels. This could provide a low-carbon way to reduce the environmental impacts of biofuel production.
Indirect land use change impacts of biofuels
| Technology | Energy and fuel | null |
188731 | https://en.wikipedia.org/wiki/Decomposition | Decomposition | Decomposition or rot is the process by which dead organic substances are broken down into simpler organic or inorganic matter such as carbon dioxide, water, simple sugars and mineral salts. The process is a part of the nutrient cycle and is essential for recycling the finite matter that occupies physical space in the biosphere. Bodies of living organisms begin to decompose shortly after death. Although no two organisms decompose in the same way, they all undergo the same sequential stages of decomposition. Decomposition can be a gradual process for organisms that have extended periods of dormancy.
One can differentiate abiotic decomposition from biotic decomposition (biodegradation); the former means "the degradation of a substance by chemical or physical processes", e.g., hydrolysis; the latter means "the metabolic breakdown of materials into simpler components by living organisms", typically by microorganisms. Animals, such as earthworms, also help decompose the organic materials on and in soil through their activities. Organisms that do this are known as decomposers or detritivores.
The science which studies decomposition is generally referred to as taphonomy from the Greek word taphos, meaning tomb.
Animal decomposition
Decomposition begins at the moment of death, caused by two factors: autolysis, the breaking down of tissues by the body's own internal chemicals and enzymes, and putrefaction, the breakdown of tissues by bacteria. These processes release compounds such as cadaverine and putrescine, which are the chief source of the unmistakably putrid odor of decaying animal tissue.
Prime decomposers are bacteria or fungi, though larger scavengers also play an important role in decomposition if the body is accessible to insects, mites and other animals. Additionally, soil animals are considered key regulators of decomposition at local scales but their role at larger scales is unresolved. The most important arthropods that are involved in the process include carrion beetles, mites, the flesh-flies (Sarcophagidae) and blow-flies (Calliphoridae), such as the green bottle flies seen in the summer. In North America, the most important non-insect animals that are typically involved in the process include mammal and bird scavengers, such as coyotes, dogs, wolves, foxes, rats, crows and vultures. Some of these scavengers also remove and scatter bones, which they ingest at a later time. Aquatic and marine environments have break-down agents that include bacteria, fish, crustaceans, fly larvae and other carrion scavengers.
Stages of decomposition
Five general stages are typically used to describe the process of decomposition in vertebrate animals: fresh, bloat, active decay, advanced decay, and dry/remains. The general stages of decomposition are coupled with two stages of chemical decomposition: autolysis and putrefaction. These two stages contribute to the chemical process of decomposition, which breaks down the main components of the body. With death the microbiome of the living organism collapses and is followed by the necrobiome that undergoes predictable changes over time.
Fresh
Among those animals that have a heart, the fresh stage begins immediately after the heart stops beating. From the moment of death, the body begins cooling or warming to match the temperature of the ambient environment, during a stage called algor mortis. Shortly after death, within three to six hours, the muscular tissues become rigid and incapable of relaxing, during a stage called rigor mortis. Since blood is no longer being pumped through the body, gravity causes it to drain to the dependent portions of the body, creating an overall bluish-purple discoloration termed livor mortis or, more commonly, lividity. The affected parts vary with the position of the body: if the person died flat on their back, the blood collects in the parts touching the ground; if the person was hanging, it collects in the fingertips, toes and earlobes.
Once the heart stops, the blood can no longer supply oxygen or remove carbon dioxide from the tissues. The resulting decrease in pH and other chemical changes cause cells to lose their structural integrity, bringing about the release of cellular enzymes capable of initiating the breakdown of surrounding cells and tissues. This process is known as autolysis.
Visible changes caused by decomposition are limited during the fresh stage, although autolysis may cause blisters to appear at the surface of the skin.
The small amount of oxygen remaining in the body is quickly depleted by cellular metabolism and aerobic microbes naturally present in respiratory and gastrointestinal tracts, creating an ideal environment for the proliferation of anaerobic organisms. These multiply, consuming the body's carbohydrates, lipids and proteins, to produce a variety of substances including propionic acid, lactic acid, methane, hydrogen sulfide and ammonia. The process of microbial proliferation within a body is referred to as putrefaction and leads to the second stage of decomposition known as bloat.
Blowflies and flesh flies are the first carrion insects to arrive and they seek a suitable oviposition site.
Bloat
The bloat stage provides the first clear visual sign that microbial proliferation is underway. In this stage, anaerobic metabolism takes place, leading to the accumulation of gases, such as hydrogen sulfide, carbon dioxide, methane and nitrogen. The accumulation of gases within the bodily cavity causes the distention of the abdomen and gives a cadaver its overall bloated appearance. The gases produced also cause natural liquids and liquefying tissues to become frothy. As the pressure of the gases within the body increases, fluids are forced to escape from natural orifices, such as the nose, mouth and anus, and enter the surrounding environment. The buildup of pressure combined with the loss of integrity of the skin may also cause the body to rupture.
Intestinal anaerobic bacteria transform haemoglobin into sulfhemoglobin and other colored pigments. The associated gases which accumulate within the body at this time aid in the transport of sulfhemoglobin throughout the body via the circulatory and lymphatic systems, giving the body an overall marbled appearance.
If insects have access, maggots hatch and begin to feed on the body's tissues. Maggot activity, typically confined to natural orifices and masses under the skin, causes the skin to slip and hair to detach from the skin. Maggot feeding and the accumulation of gases within the body eventually lead to post-mortem skin ruptures, which then further allow purging of gases and fluids into the surrounding environment. Ruptures in the skin allow oxygen to re-enter the body and provide more surface area for the development of fly larvae and the activity of aerobic microorganisms. The purging of gases and fluids results in the strong distinctive odors associated with decay.
Active decay
Active decay is characterized by the period of greatest mass loss. This loss occurs as a result of both the voracious feeding of maggots and the purging of decomposition fluids into the surrounding environment. The purged fluids accumulate around the body and create a cadaver decomposition island (CDI). Liquefaction of tissues and disintegration become apparent during this time and strong odors persist. The end of active decay is signaled by the migration of maggots away from the body to pupate.
Advanced decay
Decomposition is largely inhibited during advanced decay due to the loss of readily available cadaveric material. Insect activity is also reduced during this stage. When the carcass is located on soil, the area surrounding it will show evidence of vegetation death. The CDI surrounding the carcass will display an increase in soil carbon and nutrients such as phosphorus, potassium, calcium and magnesium; changes in pH; and a significant increase in soil nitrogen.
Dry/remains
As the ecosystem recovers from the disturbance, the CDI moves into the dry/remains stage, which is characterized by a decrease in the intensity of the disturbance and an increase in the amount of plant growth around the affected area. This is a sign that the nutrients and other ecological resources present in the surrounding soil have not yet returned to their normal levels.
During this stage, it is important to monitor the ecosystem for any signs of continued disturbance or ecological stress. The resurgence of plant growth is a positive sign, but it may take several years for the ecosystem to fully recover and return to its pre-disturbance state. All that remains of the cadaver at this stage is dry skin, cartilage, and bones, which will become dry and bleached if exposed to the elements. If all soft tissue is removed from the cadaver, it is referred to as completely skeletonized, but if only portions of the bones are exposed, it is referred to as partially skeletonized.
Factors affecting decomposition of bodies
Exposure to the elements
A dead body that has been exposed to the open elements, such as water and air, will decompose more quickly and attract much more insect activity than a body that is buried or confined in special protective gear or artifacts. This is due, in part, to the limited number of insects that can penetrate soil and the lower temperatures under the soil.
The rate and manner of decomposition in an animal body are strongly affected by several factors. In roughly descending degrees of importance, they are:
Temperature;
The availability of oxygen;
Prior embalming;
Cause of death;
Burial, depth of burial, and soil type;
Access by scavengers;
Trauma, including wounds and crushing blows;
Humidity, or wetness;
Rainfall;
Body size and weight;
Composition;
Clothing;
The surface on which the body rests;
Foods/objects inside the specimen's digestive tract (bacon compared to lettuce).
The speed at which decomposition occurs varies greatly. Factors such as temperature, humidity, and the season of death all determine how fast a fresh body will skeletonize or mummify. A basic guide for the effect of environment on decomposition is given as Casper's Law (or Ratio): if all other factors are equal, then, when there is free access of air, a body decomposes twice as fast as if immersed in water and eight times faster than if buried in the earth. Ultimately, the rate of bacterial decomposition acting on the tissue will depend upon the temperature of the surroundings. Colder temperatures decrease the rate of decomposition while warmer temperatures increase it. A dry body will not decompose efficiently. Moisture helps the growth of microorganisms that decompose the organic matter, but too much moisture can lead to anaerobic conditions that slow the decomposition process.
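Casper's ratio lends itself to a quick back-of-the-envelope calculation. The sketch below is purely illustrative (not a forensic tool): it assumes the 1:2:8 air/water/earth ratio holds exactly and uses the roughly two-week tropical surface figure mentioned in the next paragraph as a baseline.

```python
# Illustrative use of Casper's ratio (air : water : earth = 1 : 2 : 8),
# valid only under the "all other factors equal" assumption.
CASPER_FACTOR = {"air": 1, "water": 2, "earth": 8}

def scaled_time(time_in_air_weeks: float, environment: str) -> float:
    """Scale a time-to-skeletonization observed in open air to another
    environment using Casper's ratio."""
    return time_in_air_weeks * CASPER_FACTOR[environment]

# With ~2 weeks in open air (tropical surface) as the baseline:
for env in ("air", "water", "earth"):
    print(env, scaled_time(2, env), "weeks")   # 2, 4, 16
```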
The most important variable is the body's accessibility to insects, particularly flies. On the surface in tropical areas, invertebrates alone can easily reduce a fully fleshed corpse to clean bones in under two weeks. The skeleton itself is not permanent; acids in soils can reduce it to unrecognizable components. This is one reason given for the lack of human remains found in the wreckage of the Titanic, even in parts of the ship considered inaccessible to scavengers. Freshly skeletonized bone is often called green bone and has a characteristic greasy feel. Under certain conditions (underwater, but also cool, damp soil), bodies may undergo saponification and develop a waxy substance called adipocere, caused by the action of soil chemicals on the body's proteins and fats. The formation of adipocere slows decomposition by inhibiting the bacteria that cause putrefaction.
In extremely dry or cold conditions, the normal process of decomposition is halted – by either lack of moisture or temperature controls on bacterial and enzymatic action – causing the body to be preserved as a mummy. Frozen mummies commonly restart the decomposition process when thawed (see Ötzi the Iceman), whilst heat-desiccated mummies remain so unless exposed to moisture.
The bodies of newborns who never ingested food are an important exception to the normal process of decomposition. They lack the internal microbial flora that produces much of the decomposition and quite commonly mummify if kept in even moderately dry conditions.
Anaerobic vs aerobic
Aerobic decomposition takes place in the presence of oxygen and is the most common form in nature. Living organisms that use oxygen to survive feed on the body. Anaerobic decomposition takes place in the absence of oxygen, for example when the body is buried in organic material that oxygen cannot penetrate. Putrefaction under these conditions is accompanied by a foul odor caused by hydrogen sulfide and sulfur-containing organic matter.
Artificial preservation
Embalming is the practice of delaying the decomposition of human and animal remains. Embalming slows decomposition somewhat but does not forestall it indefinitely. Embalmers typically pay great attention to parts of the body seen by mourners, such as the face and hands. The chemicals used in embalming repel most insects and slow down bacterial putrefaction by either killing existing bacteria in or on the body themselves or by fixing cellular proteins, which means that they cannot act as a nutrient source for subsequent bacterial infections. In sufficiently dry environments, an embalmed body may end up mummified and it is not uncommon for bodies to remain preserved to a viewable extent after decades. Notable viewable embalmed bodies include those of:
Eva Perón of Argentina, whose body was injected with paraffin, was kept perfectly preserved for many years, and still is as far as is known (her body is no longer on public display).
Vladimir Lenin of the Soviet Union, whose body was kept submerged in a special tank of fluid for decades and is on public display in Lenin's Mausoleum.
Other Communist leaders with pronounced cults of personality such as Mao Zedong, Kim Il Sung, Ho Chi Minh, Kim Jong Il and most recently Hugo Chávez have also had their cadavers preserved in the fashion of Lenin's preservation and are now displayed in their respective mausoleums.
Pope John XXIII, whose preserved body can be viewed in St. Peter's Basilica.
Padre Pio, whose body was injected with formalin before burial in a dry vault from which he was later removed and placed on public display at the San Giovanni Rotondo.
Environmental preservation
A body buried in a sufficiently dry environment may be well preserved for decades. This was observed in the case of murdered civil rights activist Medgar Evers, who was found to be almost perfectly preserved over 30 years after his death, permitting an accurate autopsy when the case of his murder was re-opened in the 1990s.
Bodies submerged in a peat bog may become naturally embalmed, arresting decomposition and resulting in a preserved specimen known as a bog body. The generally cool and anoxic conditions in these environments limit the rate of microbial activity, thus limiting the potential for decomposition. The time for an embalmed body to be reduced to a skeleton varies greatly. Even when a body is decomposed, embalming treatment can still be achieved (the arterial system decays more slowly) but would not restore a natural appearance without extensive reconstruction and cosmetic work, and is largely used to control the foul odors due to decomposition.
An animal can be preserved almost perfectly for millions of years in a resin such as amber.
There are some examples where bodies have been inexplicably preserved (with no human intervention) for decades or centuries and appear almost the same as when they died. In some religious groups, this is known as incorruptibility. It is not known whether or for how long a body can stay free of decay without artificial preservation.
Importance to forensic sciences
Various sciences study the decomposition of bodies under the general rubric of forensic science because the usual motive for such studies is to determine the time and cause of death for legal purposes:
Forensic taphonomy specifically studies the processes of decomposition to apply the biological and chemical principles to forensic cases to determine post-mortem interval (PMI), post-burial interval as well as to locate clandestine graves.
Forensic pathology studies the clues to the cause of death found in the corpse as a medical phenomenon.
Forensic entomology studies the insects and other vermin found in corpses; the sequence in which they appear, the kinds of insects, and where they are found in their life cycle are clues that can shed light on the time of death, the length of a corpse's exposure, and whether the corpse was moved.
Forensic anthropology is the medico-legal branch of physical anthropology that studies skeletons and human remains, usually to seek clues as to the identity, age, sex, height and ethnicity of their former owner.
The University of Tennessee Anthropological Research Facility (better known as the Body Farm) in Knoxville, Tennessee, has several bodies laid out in various situations in a fenced-in plot near the medical center. Scientists at the Body Farm study how the human body decays in various circumstances to gain a better understanding of decomposition.
Plant decomposition
Decomposition of plant matter occurs in many stages. It begins with leaching by water; the most easily lost and soluble carbon compounds are liberated in this process. Another early process is physical breakup or fragmentation of the plant material into smaller pieces, providing greater surface area for colonization and attack by decomposers. In fallen dead parts of plants (plant litter), this process is largely carried out by saprophagous (detritivorous) soil invertebrate fauna, whereas in standing parts of plants, primarily parasitic life-forms such as parasitic plants (e.g. mistletoes), insects (e.g. aphids) and fungi (e.g. polypores) play a major role in breaking down matter, both directly and indirectly via a multitrophic cascading effect.
Following this, the plant detritus (consisting of cellulose, hemicellulose, microbial metabolites, and lignin) undergoes chemical alteration by microbes. Different types of compounds decompose at different rates, depending on their chemical structure. For instance, lignin is a component of wood that is relatively resistant to decomposition and can only be decomposed by certain fungi, such as the white-rot fungi.
Wood decomposition is a complex process involving fungi which transport nutrients to the nutritionally scarce wood from outside environment. Because of this nutritional enrichment, the fauna of saproxylic insects may develop and, in turn, affect dead wood, contributing to decomposition and nutrient cycling in the forest floor. Lignin is one such remaining product of decomposing plants with a very complex chemical structure, causing the rate of microbial breakdown to slow. Warmth increases the speed of plant decay by roughly the same amount, regardless of the composition of the plant.
In most grassland ecosystems, natural damage from fire, detritivores that feed on decaying matter, termites, grazing mammals, and the physical movement of animals through the grass are the primary agents of breakdown and nutrient cycling, while bacteria and fungi play the main roles in further decomposition.
The chemical aspects of plant decomposition always involve the release of carbon dioxide. In fact, decomposition contributes over 90 percent of carbon dioxide released each year.
Food decomposition
The decomposition of food, either plant or animal, called spoilage in this context, is an important field of study within food science. Food decomposition can be slowed down by conservation. The spoilage of meat occurs, if the meat is untreated, in a matter of hours or days and results in the meat becoming unappetizing, poisonous or infectious. Spoilage is caused by the practically unavoidable infection and subsequent decomposition of meat by bacteria and fungi, which are borne by the animal itself, by the people handling the meat, and by their implements. Meat can be kept edible for a much longer time – though not indefinitely – if proper hygiene is observed during production and processing, and if appropriate food safety, food preservation and food storage procedures are applied.
Spoilage of food is attributed to contamination from microorganisms such as bacteria, molds and yeasts, along with natural decay of the food. These decomposition bacteria reproduce rapidly under moist conditions and at their preferred temperatures. When the proper conditions are lacking, the bacteria may form spores that remain dormant until suitable conditions arise for reproduction to continue. Decomposition rates vary with abiotic factors such as moisture level, temperature, and soil type. They also vary with the initial amount of breakdown caused by prior consumers in the food chain, that is, the form the organic matter is in when the detritivore encounters it: original plant or animal, partially eaten, or faecal matter. The more broken down the matter, the faster the final decomposition.
Rate of decomposition
The rate of decomposition is governed by three sets of factors: the physical environment (temperature, moisture and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself.
Decomposition rates are low under very wet or very dry conditions. Decomposition rates are highest in damp, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth. When the rains return and soils become wet, the osmotic gradient between the bacterial cells and the soil water causes the cells to gain water quickly. Under these conditions, many bacterial cells burst, releasing a pulse of nutrients. Decomposition rates also tend to be slower in acidic soils. Soils which are rich in clay minerals tend to have lower decomposition rates, and thus, higher levels of organic matter. The smaller particles of clay result in a larger surface area that can hold water. The higher the water content of a soil, the lower the oxygen content and consequently, the lower the rate of decomposition. Clay minerals also bind particles of organic material to their surface, making them less accessible to microbes. Soil disturbance like tilling increases decomposition by increasing the amount of oxygen in the soil and by exposing new organic matter to soil microbes.
The quality and quantity of the material available to decomposers are other major factors influencing the rate of decomposition. Substances like sugars and amino acids decompose readily and are considered labile. Cellulose and hemicellulose, which are broken down more slowly, are "moderately labile". Compounds which are more resistant to decay, like lignin or cutin, are considered recalcitrant. Litter with a higher proportion of labile compounds decomposes much more rapidly than does litter with a higher proportion of recalcitrant material. Consequently, dead animals decompose more rapidly than dead leaves, which themselves decompose more rapidly than fallen branches. As organic material in the soil ages, its quality decreases. The more labile compounds decompose quickly, leaving an increasing proportion of recalcitrant material called humus. Microbial cell walls also contain recalcitrant materials like chitin, and these also accumulate as the microbes die, further reducing the quality of older soil organic matter.
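One common way to picture the labile/recalcitrant contrast is to model litter as independent pools, each decaying exponentially at its own rate. The sketch below is a toy model with made-up rate constants, offered only to illustrate the qualitative point; the numbers are not from any study cited here.

```python
import math

def remaining_mass(t: float, labile_frac: float,
                   k_labile: float = 1.0, k_recalcitrant: float = 0.05) -> float:
    """Mass fraction remaining after t years for litter split into a
    fast-decaying labile pool and a slow recalcitrant pool.
    Rate constants are illustrative, not measured values."""
    return (labile_frac * math.exp(-k_labile * t)
            + (1.0 - labile_frac) * math.exp(-k_recalcitrant * t))

# Litter richer in labile compounds loses mass much faster:
for frac in (0.8, 0.2):   # e.g. animal tissue vs woody litter (assumed values)
    print(frac, round(remaining_mass(2.0, frac), 3))   # 0.289 vs 0.751
```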
| Biology and health sciences | Biology basics | Biology |
188755 | https://en.wikipedia.org/wiki/Mischmetal | Mischmetal | Mischmetal (from German Mischmetall, "mixed metal") is an alloy of rare-earth elements. It is also called cerium mischmetal, or rare-earth mischmetal. A typical composition includes approximately 55% cerium, 25% lanthanum, and 15–18% neodymium, with traces of other rare earth metals totaling 95% lanthanides, plus 5% iron. Its most common use is in the pyrophoric ferrocerium "flint" ignition device of many lighters and torches. Because an alloy of only rare-earth elements would be too soft to give good sparks, it is blended with iron oxide and magnesium oxide to form a harder material known as ferrocerium. In chemical formulae it is commonly abbreviated as Mm, e.g. MmNi5.
History
Carl Auer von Welsbach was the discoverer of neodymium and praseodymium, and co-discoverer of lutetium. He was also the inventor of the gas mantle (using thorium) and of the rare-earth industry. After extracting thorium from monazite sand, many lanthanides remained, for which there was no commercial use. He sought applications for the rare earths. Among his first discoveries/inventions was mischmetal.
Preparation
Historically, mischmetal was prepared from monazite, an anhydrous phosphate of the light lanthanides and thorium. The ore was cracked by reaction at high temperature with either concentrated sulfuric acid or sodium hydroxide. Thorium was removed by taking advantage of its weaker basicity relative to the trivalent lanthanides, its daughter radium was precipitated out using entrainment in barium sulfate, and the remaining lanthanides were converted to their chlorides. The resulting "rare-earth chloride" (hexahydrate), sometimes known as "lanthanide chloride", was the major commodity chemical of the rare-earth industry. By careful heating, preferably with ammonium chloride or in an atmosphere of hydrogen chloride, the hexahydrate could be dehydrated to provide the anhydrous chloride. Electrolysis of the molten anhydrous chloride (admixed with other anhydrous halide to improve the melt behavior) led to the formation of molten mischmetal, which would then be cast into ingots. Any samarium content of the ore tended not to be reduced to the metal, but accumulated in the molten halide, from which it could later be profitably isolated. Monazite-derived mischmetal typically was about 48% cerium, 25% lanthanum, 17% neodymium, and 5% praseodymium, with the balance being the other lanthanides. When bastnäsite started being processed for rare-earth content in about 1965, it too was converted to a version of rare-earth chloride and on to mischmetal. This version was higher in lanthanum and lower in neodymium.
In recent years, the high demand for neodymium has made it profitable to remove all of the heavier lanthanides and neodymium (and sometimes all of the praseodymium as well) from the natural-abundance lanthanide mixture for separate sale and to include only La–Ce–Pr or La–Ce in the most economical forms of mischmetal. The light lanthanides are so similar in their metallurgical properties that any application for which the original composition would have been suitable would be equally well served by these truncated mixtures. The traditional "rare-earth chloride", as a commodity chemical, was also used to extract the individual rare earths by companies that did not wish to process the ores directly. Mischmetal is typically priced at less than US$10 per kilogram, and the underlying rare-earth chloride mixtures at less than US$5/kg.
Use
Mischmetal is used in the preparation of virtually all rare-earth elements. This is because such elements are nearly identical in most chemical processes, meaning that ordinary extraction processes do not distinguish them. Highly specialized processes, such as those developed by Carl Auer von Welsbach, exploit subtle differences in solubility to separate mischmetal into its constituent elements, with each step producing only an incremental change in composition. Such processes later informed Marie Curie in her search for new elements.
Zinc-aluminium galvanising
Traces of cerium–lanthanum mischmetal are sometimes added to the Galfan galvanising process for steel wire. This is a coating of zinc with 5–10% aluminium, plus traces of mischmetal.
| Physical sciences | Specific alloys | Chemistry |
188774 | https://en.wikipedia.org/wiki/Coffin | Coffin | A coffin is a funerary box used for viewing or keeping a corpse, for either burial or cremation.
Coffins are sometimes referred to as caskets, particularly in American English. Any box in which the dead are buried is a coffin, and while a casket was originally regarded as a box for jewelry, use of the word "casket" in this sense began as a euphemism introduced by the undertaker's trade. A distinction is commonly drawn between "coffins" and "caskets", using "coffin" to refer to a tapered hexagonal or octagonal (also considered to be anthropoidal in shape) box and "casket" to refer to a rectangular box, often with a split lid used for viewing the deceased as seen in the picture. Receptacles for cremated and cremulated human ashes (sometimes called cremains) are called urns.
Etymology
First attested in English in 1380, the word coffin derives from the Old French cofin, from Latin cophinus, which means basket, which is the latinisation of the Greek κόφινος (kophinos), basket. The earliest attested form of the word is the Mycenaean Greek ko-pi-na, written in Linear B syllabic script.
The modern French form, couffin, means cradle.
History
The earliest evidence of wooden coffin remains, dated at 5000 BC, was found in Tomb 4 at Beishouling, Shaanxi. Clear evidence of a rectangular wooden coffin was found in Tomb 152 in an early Banpo site. The Banpo coffin belongs to a four-year-old girl; it measures 1.4 m (4.6 ft) by 0.55 m (1.8 ft) and is 3–9 cm thick. As many as 10 wooden coffins have been found at the Dawenkou culture (4100–2600 BC) site at Chengzi, Shandong. The thickness of the coffin, as determined by the number of timber frames in its composition, also emphasized the level of nobility, as mentioned in the Classic of Rites, Xunzi and Zhuangzi. Examples of this have been found in several Neolithic sites: the double coffin, the earliest of which was found in the Liangzhu culture (3400–2250 BC) site at Puanqiao, Zhejiang, consists of an outer and an inner coffin, while the triple coffin, with its earliest finds from the Longshan culture (3000–2000 BC) sites at Xizhufeng and Yinjiacheng in Shandong, consists of two outer coffins and one inner.
Practices
A coffin may be buried in the ground directly, placed in a burial vault or cremated. Alternatively it may be entombed above ground in a mausoleum, a chapel, a church, or in a loculus within catacombs. Some countries practice one form almost exclusively, whereas in others it may depend on the individual cemetery.
In parts of Sumatra, Indonesia, ancestors are revered and bodies were often kept in coffins kept alongside the longhouses until a ritual burial could be performed. The dead are also disinterred for rituals. Mass burials are also practiced. In northern Sulawesi, some dead were kept in above ground sarcophagi called waruga until the practice was banned by the Dutch in the 19th century.
The handles and other ornaments (such as doves, stipple crosses, crucifix, symbols etc.) that go on the outside of a coffin are called fittings (sometimes called 'coffin furniture' – not to be confused with furniture that is coffin shaped) while organizing the inside of the coffin with fabric of some kind is known as "trimming the coffin".
Cultures that practice burial have widely different styles of coffins. In Judaism, the coffin must be plain, made of wood and contain no metal parts or adornments. These coffins use wooden pegs instead of nails. All Jews are buried in the same plain cloth shroud from shoulder to knees, regardless of status in life, gender or age. In China, coffins made from the scented, decay-resistant wood of cypress, sugi, thuja and incense-cedar are in high demand. Certain Aboriginal Australian groups use intricately decorated tree-bark cylinders sewn with fibre and sealed with adhesive as coffins. The cylinder is packed with dried grasses.
Sometimes coffins are constructed to permanently display the corpse, as in the case of the glass-covered coffin of the Haraldskær Woman on display in the Church of Saint Nicolai in Vejle, Denmark or the glass-coffins of Vladimir Lenin and Mao Zedong, which are in Red Square, Moscow and Tiananmen Square, Beijing, respectively.
When a coffin is used to transport a deceased person, it can also be called a pall, a term that also refers to the cloth used to cover the coffin while those who carry a casket are the pallbearers.
Design
Coffins are traditionally made with six sides plus the top (lid) and bottom, tapered around the shoulders, or rectangular with four sides. Another form of four-sided coffin is trapezoidal (also known as the "wedge" form) and is considered a variant of the six-sided hexagonal kind of coffin. Continental Europe at one time favoured the rectangular coffin or casket, although variations exist in size and shape. The rectangular form, and also the trapezoidal form, is still regularly used in Germany, Austria, Hungary and other parts of Eastern and Central Europe, with the lid sometimes made to slope gently from the head down towards the foot. Coffins in the UK are mainly similar to the hexagonal design, but with one-piece sides, curved at the shoulder instead of having a join. In Medieval Japan, round coffins were used, which resembled barrels in shape and were usually made by coopers. In the case of a death at sea, there have been instances where trunks have been pressed into use as coffins. Coffins usually have handles on the side so they will be easier to carry.
They may incorporate features that claim to protect the body or for public health reasons. For example, some may offer a protective casket that uses a gasket to seal the casket shut after it is closed for the final time. In England, it has long been law that a coffin for interment above ground should be sealed; this was traditionally implemented as a wooden outer coffin around a lead lining, around a third inner shell. After some decades have passed, the lead may ripple and tear. In the United States, numerous cemeteries require a vault of some kind in order to bury the deceased. A burial vault serves as an outer enclosure for buried remains and the coffin serves as an inner enclosure. The primary purpose of the vault is to prevent collapse of the coffin due to the weight of the soil above.
Some manufacturers offer a warranty on the structural integrity of the coffin. However, no coffin, regardless of its construction material (e.g., metal rather than wood), whether or not it is sealed, and whether or not the deceased was embalmed beforehand, will perfectly preserve the body. In some cases, a sealed coffin may actually speed up rather than slow down the process of decomposition. An airtight coffin, for example, fosters decomposition by anaerobic bacteria, which results in a putrefied liquefaction of the body, and all putrefied tissue remains inside the container, only to be exposed in the event of an exhumation. A container that allows air to pass in and out, such as a simple wooden box, allows for clean skeletonization. However the situation will vary according to soil or air conditions, and climate.
Coffins are made of many materials, including steel, various types of wood, and other materials such as fiberglass or recycled kraft paper. There is emerging interest in eco-friendly coffins made of purely natural materials such as bamboo, X-Board, willow or banana leaf. In the latter part of the 19th century and the early part of the 20th century in the United States, glass coffins were widely sold by travelling salesmen, who also would try to sell stock of the companies making the coffins.
Custom coffins are occasionally created and some companies also make set ranges with non-traditional designs. These include printing or painting of peaceful tropical scenes, sea-shells, sunsets, cherubim, and patriotic flags. Some manufacturers have designed them to look like gym carry bags, guitar cases, cigar humidors, and even yellow dumpster bins. Other coffins are left deliberately blank so that friends and family can inscribe final wishes and thoughts upon them to the deceased. In Taiwan, coffins made of crushed oyster shells were used in the 18th and 19th centuries.
In the 1990s, the rock group Kiss released a customized Kiss Kasket, which featured their trademark makeup designs and KISS logo and could also be used as a cooler. Pantera guitarist Dimebag Darrell was buried in one.
Design coffins in Ghana
Design coffins in Ghana, also called Fantasy coffins or figurative coffins, are only made by specialized carpenters in the Greater Accra Region. These colourful objects, which are not only coffins but are considered real works of art, were shown for the first time to a wider Western public in the exhibition Les Magiciens de la terre at the Musée National d'Art Moderne in Paris in 1989. The seven coffins shown in Paris were made by Seth Kane Kwei (1922–1992) and by his former assistant Paa Joe (b. 1947). Since then, coffins by Kane Kwei's successors Paa Joe, Daniel Mensah, Kudjoe Affutu, Eric Adjetey Anang and others have been displayed in many international art museums and galleries around the world.
The design coffins of the Ga have long been celebrated in the Western art world as the invention of a single, autonomous artist, the coffin maker Kane Kwei (1924–1992) of Teshie. But as Regula Tschumi's research shows, this assumption is false (Regula Tschumi, "The Figurative Palanquins of the Ga: History and Significance", African Arts, vol. 46, no. 4, 2013, pp. 60–73). Design coffins existed before Kane Kwei, and other Ga carpenters such as Ataa Oko (1919–2012) from La built their first figurative coffins around 1950. Kane Kwei and Ataa Oko only continued a tradition that already existed in Accra, where the kings used figurative palanquins in the form of their family symbol. Because chiefs who used figurative palanquins had to be buried in a coffin resembling their palanquin, their families used figurative coffins, which were originally simply copies of the design palanquins. Today figurative coffins are no longer reserved for the traditional Ga and their kings; many families who use figurative coffins are in fact Christians. For them, design coffins no longer have a spiritual function; their appeal is aesthetic, aimed at surprising mourners with strikingly innovative forms such as automobiles or aeroplanes, fish or pigs, onions or tomatoes. Thus the figurative coffins, rather than constituting a new art form as was long believed, developed from the figurative palanquins that had long existed.
Cremation
With the resurgence of cremation in the Western world, manufacturers have begun providing options for those who choose cremation. For a direct cremation a cardboard box is sometimes used. Those who wish to have a funeral visitation (sometimes called a viewing) or traditional funeral service will use a coffin of some sort.
Some choose to use a coffin made of wood or other materials like particle board or low-density fibreboard. Others will rent a regular casket for the duration of the services. These caskets have a removable bed and liner which is replaced after each use. There are also rental caskets with an outer shell that looks like a traditional coffin and a cardboard box that fits inside the shell. At the end of the services the inner box is removed and the deceased is cremated inside this box.
Industry
Traditionally, in the Western world, a coffin was made, when required, by the village carpenter, who would frequently manage the whole funeral. The design and workmanship would reflect the skills of that individual carpenter, with the materials and brasses being the materials that were available to the carpenter at the time. In past centuries, if a pauper's funeral was paid for by the parish, the coffin might have been made of the cheapest, thinnest possible pine. At the other extreme, a coffin bought privately by a wealthy individual might have used yew or mahogany with a fine lining, plated fittings and brass decorations, topped with a decorated velvet drape.
In modern times coffins are almost always mass-produced. Some manufacturers do not sell directly to the public, and only work with funeral homes. In that case, the funeral director usually sells the casket to a family for a deceased person as part of the funeral services offered, and the price of the casket is included in the total bill for services rendered.
Some funeral homes have small showrooms to present families with the available caskets that could be used for a deceased family member. In many modern funeral homes the showroom will consist of sample pieces that show only the end pieces of each type of coffin that can be used. They also include samples of the lining and other materials. This allows funeral homes to showcase a larger number of coffin styles without the need for a larger showroom. Other types may be available from a catalogue, including decorative paint effects or printed photographs or patterns.
Under a United States federal regulation, 16 CFR Part 453 (known as the Funeral Rule), if a family provides a casket they purchased elsewhere (for example from a United States retail warehouse store, as illustrated here), the establishment is required to accept the casket and use it in the services. If the casket is delivered direct to the funeral home from the manufacturer or store, they are required to accept delivery of the casket. The funeral home may not add any extra charges or fees to the overall bill if a family decides to purchase a casket elsewhere. If the casket was bought from the funeral home, these regulations require bills to be completely itemized.
| Technology | Containers | null |
188935 | https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20statistics | Bose–Einstein statistics | In quantum statistics, Bose–Einstein statistics (B–E statistics) describes one of two possible ways in which a collection of non-interacting identical particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose.
Bose–Einstein statistics apply only to particles that do not follow the Pauli exclusion principle restrictions. Particles that follow Bose–Einstein statistics are called bosons, which have integer values of spin. In contrast, particles that follow Fermi–Dirac statistics are called fermions and have half-integer spins.
Bose–Einstein distribution
At low temperatures, bosons behave differently from fermions (which obey the Fermi–Dirac statistics) in a way that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to the special state of matter – the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies

$$\frac{N}{V} \ge n_q,$$

where $N$ is the number of particles, $V$ is the volume, and $n_q$ is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping.
Fermi–Dirac statistics applies to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics applies to bosons. As the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit, unless they also have a very high density, as for a white dwarf. Both Fermi–Dirac and Bose–Einstein become Maxwell–Boltzmann statistics at high temperature or at low concentration.
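The text does not spell out a formula for the quantum concentration, but under the standard convention (interparticle spacing equal to the thermal de Broglie wavelength) it is $n_q = (2\pi m k_B T / h^2)^{3/2}$. A minimal numerical sketch under that assumption:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K

def quantum_concentration(m: float, T: float) -> float:
    """n_q = (2*pi*m*kB*T / h**2)**1.5, assuming the standard convention
    tied to the thermal de Broglie wavelength."""
    return (2 * math.pi * m * kB * T / h ** 2) ** 1.5

# Helium-4 at 300 K: n_q (~4e31 m^-3) vastly exceeds the ideal-gas number
# density at 1 atm (~2.4e25 m^-3), so the classical limit applies.
m_He = 6.646e-27  # kg
print(f"n_q = {quantum_concentration(m_He, 300):.2e} m^-3")
```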
Bose–Einstein statistics was introduced for photons in 1924 by Bose and generalized to atoms by Einstein in 1924–25.
The expected number of particles in an energy state $i$ for Bose–Einstein statistics is:

$$\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} - 1}$$

with $\varepsilon_i > \mu$ and where $n_i$ is the occupation number (the number of particles) in state $i$, $g_i$ is the degeneracy of energy level $i$, $\varepsilon_i$ is the energy of the $i$-th state, $\mu$ is the chemical potential (zero for a photon gas), $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature.
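The distribution translates directly into code. This is a minimal sketch; the function and parameter names are illustrative, not from any established library.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def bose_einstein(e: float, mu: float, T: float, g: float = 1.0) -> float:
    """Expected occupation g / (exp((e - mu)/(kB*T)) - 1) of a level with
    energy e (J), degeneracy g, chemical potential mu (J), temperature T (K).
    Requires e > mu; otherwise the denominator is zero or negative."""
    if e <= mu:
        raise ValueError("Bose-Einstein statistics requires e > mu")
    return g / (math.exp((e - mu) / (kB * T)) - 1.0)
```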
The variance of this distribution, $V(n) = k_B T \, \frac{\partial \bar{n}}{\partial \mu} = \bar{n}(1 + \bar{n})$, is calculated directly from the expression above for the average number.
For comparison, the average number of fermions with energy $\varepsilon_i$ given by the Fermi–Dirac particle-energy distribution has a similar form:

$$\bar{n}_i(\varepsilon_i) = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} + 1}.$$
As mentioned above, both the Bose–Einstein distribution and the Fermi–Dirac distribution approach the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions:
In the limit of low particle density, $\bar{n}_i \ll 1$, therefore $e^{(\varepsilon_i - \mu)/k_B T} - 1 \gg 1$ or equivalently $e^{(\varepsilon_i - \mu)/k_B T} \gg 1$. In that case, $\bar{n}_i \approx g_i \, e^{-(\varepsilon_i - \mu)/k_B T}$, which is the result from Maxwell–Boltzmann statistics.
In the limit of high temperature, the particles are distributed over a large range of energy values, therefore the occupancy of each state (especially the high-energy ones with $\varepsilon_i - \mu \gg k_B T$) is again very small, $\bar{n}_i \ll 1$. This again reduces to Maxwell–Boltzmann statistics.
In addition to reducing to the Maxwell–Boltzmann distribution in the limit of high temperature and low density, Bose–Einstein statistics also reduces to the Rayleigh–Jeans law distribution for low-energy states with $\varepsilon_i - \mu \ll k_B T$, namely

$$\bar{n}_i \approx \frac{g_i}{(\varepsilon_i - \mu)/k_B T} = \frac{g_i k_B T}{\varepsilon_i - \mu}.$$
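Both limits are easy to confirm numerically. The check below uses illustrative values in units where $k_B = 1$ and $g_i = 1$:

```python
import math

def n_BE(e: float, mu: float, T: float) -> float:
    """Bose-Einstein occupancy with kB = g = 1."""
    return 1.0 / (math.exp((e - mu) / T) - 1.0)

def n_MB(e: float, mu: float, T: float) -> float:
    """Maxwell-Boltzmann occupancy with kB = g = 1."""
    return math.exp(-(e - mu) / T)

# Low-density limit (e - mu >> kB*T): BE and MB agree closely
print(n_BE(10.0, -1.0, 1.0), n_MB(10.0, -1.0, 1.0))  # both ~1.67e-5

# Low-energy limit (e - mu << kB*T, photon gas with mu = 0): BE ~ kB*T/e
print(n_BE(0.01, 0.0, 1.0), 1.0 / 0.01)              # ~99.5 vs 100
```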
History
Władysław Natanson in 1911 concluded that Planck's law requires indistinguishability of "units of energy", although he did not frame this in terms of Einstein's light quanta.
While presenting a lecture at the University of Dhaka (in what was then British India and is now Bangladesh) on the theory of radiation and the ultraviolet catastrophe, Satyendra Nath Bose intended to show his students that the contemporary theory was inadequate, because it predicted results not in accordance with experimental results. During this lecture, Bose committed an error in applying the theory, which unexpectedly gave a prediction that agreed with the experiment. The error was a simple mistake—similar to arguing that flipping two fair coins will produce two heads one-third of the time—that would appear obviously wrong to anyone with a basic understanding of statistics (remarkably, this error resembled the famous blunder by d'Alembert known from his Croix ou Pile article). However, the results it predicted agreed with experiment, and Bose realized it might not be a mistake after all. For the first time, he took the position that the Maxwell–Boltzmann distribution would not be true for all microscopic particles at all scales. Thus, he studied the probability of finding particles in various states in phase space, where each state is a little patch having phase volume of h3, and the position and momentum of the particles are not kept particularly separate but are considered as one variable.
Bose adapted this lecture into a short article called "Planck's law and the hypothesis of light quanta" and submitted it to the Philosophical Magazine. However, the referee's report was negative, and the paper was rejected. Undaunted, he sent the manuscript to Albert Einstein requesting publication in the Zeitschrift für Physik. Einstein immediately agreed, personally translated the article from English into German (Bose had earlier translated Einstein's article on the general theory of relativity from German to English), and saw to it that it was published. Bose's theory achieved respect when Einstein sent his own paper in support of Bose's to the Zeitschrift für Physik, asking that they be published together. The paper came out in 1924.
The reason Bose produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal quantum numbers (e.g., polarization and momentum vector) as being two distinct identifiable photons. Bose originally had a factor of 2 for the possible spin states, but Einstein changed it to polarization. By analogy, if in an alternate universe coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third, and so is the probability of getting a head and a tail which equals one-half for the conventional (classical, distinguishable) coins. Bose's "error" leads to what is now called Bose–Einstein statistics.
Bose and Einstein extended the idea to atoms and this led to the prediction of the existence of phenomena which became known as Bose–Einstein condensate, a dense collection of bosons (which are particles with integer spin, named after Bose), which was demonstrated to exist by experiment in 1995.
Derivation
Derivation from the microcanonical ensemble
In the microcanonical ensemble, one considers a system with fixed energy, volume, and number of particles. We take a system composed of $N$ identical bosons, $n_i$ of which have energy $\varepsilon_i$ and are distributed over $g_i$ levels or states with the same energy $\varepsilon_i$, i.e. $g_i$ is the degeneracy associated with energy $\varepsilon_i$ of total energy $E = \sum_i n_i \varepsilon_i$. Calculation of the number of arrangements of $n_i$ particles distributed among $g_i$ states is a problem of combinatorics. Since particles are indistinguishable in the quantum mechanical context here, the number of ways for arranging $n_i$ particles in $g_i$ boxes (for the $i$-th energy level) would be

$$w(n_i, g_i) = \frac{(n_i + g_i - 1)!}{n_i!\,(g_i - 1)!} = \binom{n_i + g_i - 1}{n_i},$$

where $\binom{m}{k}$ is the $k$-combination of a set with $m$ elements. The total number of arrangements in an ensemble of bosons is simply the product of the binomial coefficients above over all the energy levels, i.e.

$$W = \prod_i \binom{n_i + g_i - 1}{n_i}.$$
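The counting formula is easy to verify by brute force for small numbers; the example values below are my own (3 bosons in 2 states, then a two-level system).

```python
from math import comb, prod

def arrangements(n_i: int, g_i: int) -> int:
    """Ways to place n_i indistinguishable bosons into g_i states:
    C(n_i + g_i - 1, n_i)."""
    return comb(n_i + g_i - 1, n_i)

# 3 bosons in 2 states: (3,0), (2,1), (1,2), (0,3) -> 4 arrangements
print(arrangements(3, 2))  # 4

# Total W is the product of per-level counts over all energy levels
print(prod(arrangements(n, g) for n, g in [(3, 2), (1, 4)]))  # 4 * 4 = 16
```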
The maximum number of arrangements, determining the corresponding occupation numbers $n_i$, is obtained by maximizing the entropy, or equivalently, setting $\mathrm{d}(\ln W) = 0$ and taking the subsidiary conditions $N = \sum_i n_i$ and $E = \sum_i n_i \varepsilon_i$ into account (as Lagrange multipliers). The result, for $n_i \gg 1$ and $g_i \gg 1$,

$$\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_B T} - 1},$$

is the Bose–Einstein distribution.
Derivation from the grand canonical ensemble
The Bose–Einstein distribution, which applies only to a quantum system of non-interacting bosons, is naturally derived from the grand canonical ensemble without any approximations. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential μ fixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir. That is, the number of particles within the overall system that occupy a given single particle state form a sub-ensemble that is also grand canonical ensemble; hence, it may be analysed through the construction of a grand partition function.
Every single-particle state is of a fixed energy, $\varepsilon$. As the sub-ensemble associated with a single-particle state varies by the number of particles only, it is clear that the total energy of the sub-ensemble is also directly proportional to the number of particles in the single-particle state; where $N$ is the number of particles, the total energy of the sub-ensemble will then be $N\varepsilon$. Beginning with the standard expression for a grand partition function and replacing $E$ with $N\varepsilon$, the grand partition function takes the form

$$\mathcal{Z} = \sum_N \exp\!\big(N(\mu - \varepsilon)/k_B T\big).$$
This formula applies to fermionic systems as well as bosonic systems. Fermi–Dirac statistics arises when considering the effect of the Pauli exclusion principle: whilst the number of fermions occupying the same single-particle state can only be either 1 or 0, the number of bosons occupying a single-particle state may be any integer. Thus, the grand partition function for bosons can be considered a geometric series and may be evaluated as such:

$$\mathcal{Z} = \sum_{N=0}^{\infty} \big(e^{(\mu - \varepsilon)/k_B T}\big)^N = \frac{1}{1 - e^{(\mu - \varepsilon)/k_B T}}.$$
Note that the geometric series is convergent only if $e^{(\mu - \varepsilon)/k_B T} < 1$, including the case where $\varepsilon = 0$. This implies that the chemical potential for the Bose gas must be negative, i.e. $\mu < 0$, whereas the Fermi gas is allowed to take both positive and negative values for the chemical potential.
The average particle number for that single-particle substate is given by

$$\langle N \rangle = k_B T \frac{\partial \ln \mathcal{Z}}{\partial \mu} = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1}.$$
This result applies for each single-particle level and thus forms the Bose–Einstein distribution for the entire state of the system.
The variance in particle number, $\sigma_N^2 = \langle N^2 \rangle - \langle N \rangle^2$, is:

$$\sigma_N^2 = k_B T \frac{\partial \langle N \rangle}{\partial \mu} = \langle N \rangle \big(1 + \langle N \rangle\big).$$
As a result, for highly occupied states the standard deviation of the particle number of an energy level is very large, slightly larger than the particle number itself: $\sigma_N = \sqrt{\langle N \rangle (1 + \langle N \rangle)} \approx \langle N \rangle$. This large uncertainty is due to the fact that the probability distribution for the number of bosons in a given energy level is a geometric distribution; somewhat counterintuitively, the most probable value for $N$ is always 0. (In contrast, classical particles have instead a Poisson distribution in particle number for a given state, with a much smaller uncertainty of $\sigma_{N,\mathrm{classical}} = \sqrt{\langle N \rangle}$, and with the most-probable $N$ value being near $\langle N \rangle$.)
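Since the single-level occupation is geometrically distributed, the mean and variance formulas above can be checked by direct sampling. The sketch below uses inversion sampling with illustrative values (units where $k_B = 1$).

```python
import math
import random

e, mu, T = 1.0, 0.0, 2.0          # illustrative values, kB = 1
x = math.exp((mu - e) / T)        # Boltzmann factor, x < 1 since mu < e

mean = x / (1.0 - x)              # <N>
var = x / (1.0 - x) ** 2          # <N> * (1 + <N>)

# P(N = n) = (1 - x) * x**n is geometric with mode N = 0; sample by
# inversion: N = floor(ln(u) / ln(x)) for u uniform on (0, 1].
samples = [int(math.log(1.0 - random.random()) / math.log(x))
           for _ in range(100_000)]
emp_mean = sum(samples) / len(samples)
emp_var = sum((s - emp_mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(emp_mean, 3))  # ~1.541 vs sample mean
print(round(var, 3), round(emp_var, 3))    # ~3.916 vs sample variance
```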
Derivation in the canonical approach
It is also possible to derive approximate Bose–Einstein statistics in the canonical ensemble. These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles. The reason is that the total number of bosons is fixed in the canonical ensemble. The Bose–Einstein distribution in this case can be derived as in most texts by maximization, but the mathematically best derivation is by the Darwin–Fowler method of mean values as emphasized by Dingle. | Physical sciences | Statistical mechanics | Physics |
189021 | https://en.wikipedia.org/wiki/System%20software | System software | System software is software designed to provide a platform for other software. Examples of system software include operating systems (OS) such as macOS, Linux, Android, and Microsoft Windows.
Application software is software that allows users to do user-oriented tasks such as creating text documents, playing or developing games, creating presentations, listening to music, drawing pictures, or browsing the web. Examples of such software are computational science software, game engines, search engines, industrial automation, and software as a service applications.
In the late 1940s, application software was custom-written by computer users to fit their specific hardware and requirements. System software was usually supplied by the manufacturer of the computer hardware and was intended to be used by most or all users of that system.
Many operating systems come pre-packaged with basic application software. Such software is not considered system software when it can be uninstalled without affecting the functioning of other software. Examples of such software are games and simple editing tools supplied with Microsoft Windows, or software development toolchains supplied with many Linux distributions.
Some of the grayer areas between system and application software are web browsers integrated deeply into the operating system, such as Internet Explorer in some versions of Microsoft Windows, or ChromeOS, where the browser functions as the only user interface and the only way to run programs (and other web browsers cannot be installed in their place).
Operating systems or system control program
The operating system (prominent examples being Microsoft Windows, macOS, Linux, and z/OS), allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It provides a platform (hardware abstraction layer) to run high-level system software and application software.
A kernel is the core part of the operating system that defines an application programming interface for applications programs (including some system software) and an interface to device drivers.
Device drivers and firmware, including computer BIOS or UEFI, provide basic functionality to operate and control the hardware connected to or built into the computer.
A user interface allows the user to interact with a computer. It can either be a command-line interface (CLI) or, since the 1980s, a graphical user interface (GUI). This is the part of the operating system the user directly interacts with; it is considered an application and not system software.
Utility software or system support programs
Some organizations use the term systems programmer to describe a job function that is more accurately termed systems administrator. Software tools these employees use are then called system software. This utility software helps to analyze, configure, optimize and maintain the computer, such as virus protection. The term system software can also include software development tools (like a compiler, linker, or debugger).
| Technology | Computer software | null |
189023 | https://en.wikipedia.org/wiki/Calisthenics | Calisthenics | Calisthenics (American English) or callisthenics (British English) is a form of strength training that utilizes an individual's body weight as resistance to perform multi-joint, compound movements with little or no equipment.
Origin and etymology
The Oxford English Dictionary describes callisthenics as "gymnastic exercises to achieve fitness and grace of movement". The word calisthenics comes from the ancient Greek words κάλλος (kállos), which means "beauty", and σθένος (sthénos), meaning "strength". It is the art of using one's body weight as resistance to develop muscles.
The practice was recorded as being used in ancient Greece, including by the armies of Alexander the Great and the Spartans at the Battle of Thermopylae.
Calisthenics was also recorded to have been used in ancient China. Along with dietary practices, Han dynasty physicians prescribed calisthenics as one of the methods for maintaining one's health.
Common exercises
The more commonly performed calisthenic exercises include:
Push-ups
Performed face down on the floor, palms against the floor under the shoulders, toes curled upwards against the floor. The arms are used to lift the body while maintaining a straight line from head to heel. The arms go from fully extended in the high position to nearly fully flexed in the low position while avoiding resting on the floor. This exercise trains the chest, shoulders, and triceps. An easier version of this exercise consists of placing the hands on a wall and then bending and straightening the arms.
Sit-ups
A person lies on their back with their legs bent. They bend at the waist and move their head and torso towards their legs. They then lower themselves back down to the start position. For people who find it difficult to get down onto the ground, a similar range of motion can be achieved by standing with the legs slightly bent, and then bowing slightly and straightening up again.
Curl-ups
The curl-up is an abdominal exercise that builds and defines the "six-pack" abdominal muscles while tightening the belly.
Squats
Standing with the feet a shoulder-width apart, the subject squats down until their thighs are parallel with the floor; during this action, they move their arms forwards in front of them. They then return to a standing position whilst moving their arms back to their sides. Squats train the quadriceps, hamstrings, calves, gluteal muscles, and core. The height of the squat can be adjusted to be deeper or shallower depending on the fitness level of the individual (i.e. half or quarter squats rather than full squats). Since squats can be performed easily in most environments and with a limited amount of space, they are among the most versatile calisthenic exercises.
Burpees
A full-body calisthenics workout that works the abdominal muscles, chest, arms, legs, and several parts of the back. The subject squats down and quickly moves their arms and legs into a push-up position. Optionally, a push-up is performed before finishing the repetition by tucking the legs in and jumping up.
Chin-ups and pull-ups
Chin-ups and pull-ups are similar exercises but use opposite facing grips.
For a chin-up, the palms of the hands are facing the person as they pull up their body using the chin-up bar. The chin-up involves the biceps muscles more than the pull-up but the lats are still the primary mover.
For a pull-up, the bar is grasped using a shoulder-width grip. The subject lifts their body up, chin level with the bar, keeping their back straight throughout the exercise. The bar remains in front of the subject at all times. The subject then slowly returns to starting position in a slow, controlled manner. This primarily trains the lats, and secondary muscles working are upper back muscles, as well as the forearms and core muscles.
Dips
Done between parallel bars or facing either direction of trapezoid bars found in some gyms. Feet are crossed, with either foot in front and the body is lowered until the elbows are in line with the shoulders. The subject then pushes up until the arms are fully extended, but without locking the elbows. Dips focus primarily on the chest, triceps, and deltoids, especially the anterior portion.
Front lever and back lever
A front lever is performed by executing a lateral pulldown of the bar with straight arms until the body is parallel to the ground, with the front of the body facing upwards. This exercise may be done on rings or a pull-up bar.
A back lever is performed by lowering from an inverted hang from rings or bar, until the gymnast's body is parallel to the ground and facing towards the floor.
Handstand
A handstand is the act of supporting the body in a stable, inverted vertical position by balancing on the hands. In a basic handstand, the body is held straight with arms and legs fully extended, with hands spaced approximately a shoulder-width apart.
Hyperextensions
Performed in a prone position on the ground, the individual raises the legs, arms and upper body off the ground.
Leg raises
Lying on the back with the hands in fists under the buttocks, the feet are moved up and down.
L-sit
The L-sit is an acrobatic body position in which all body weight rests on the hands, with the torso held in a slightly forward-leaning orientation and the legs held horizontally so that each leg forms a nominal right angle with the torso. The right angle gives the body a notable "L" shape, hence the name. Holding the position requires keeping the core tensed while the legs stay horizontal, which demands significant abdominal strength and a high level of hamstring flexibility.
Muscle-ups
An intermediate calisthenics exercise. Performed by a combination routine of a pull-up followed by a dip. May be done on pull-up bars or rings.
Planche
One of the most advanced exercises, which may take years of training to achieve. It is performed by protracting and depressing the scapula while balancing the body on the two arms. The planche requires a high amount of strength (particularly for taller individuals) as well as balance.
Planks
This is the name for holding the 'top' position of a push-up for extended periods of time. The primary muscle involved in this exercise is the rectus abdominis, especially if a posterior pelvic tilt is maintained.
Calf raises
Lunges
Jumping jack
The side-straddle hop is a two-action exercise. From a standing position, the subject first jumps slightly into the air while moving the legs more than a shoulder-width apart, swinging the arms overhead, and clapping the palms together. Secondly, the subject jumps slightly into the air once again while swinging the arms down and to the side, finally returning to a standing position. Both actions must be alternated per repetition.
Training methods
Calisthenics can be used as a means to pursue a number of fitness goals including, but not limited to, hypertrophy (increasing one's muscle mass), strength, and endurance.
The training methods employed are often different, depending on the goal. For instance, when pursuing hypertrophy, one aims to increase the load volume over time; when pursuing strength, the intensity of the exercise is increased over time; and to improve endurance, one can gradually shorten their rest periods.
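As an illustrative sketch of these three progression rules (a hypothetical planner; all numbers are invented for illustration, not training prescriptions):

```python
# Hypothetical progression planner illustrating the three rules above:
# hypertrophy -> grow load volume, strength -> grow intensity,
# endurance -> shorten rest periods. All numbers are invented examples.

def weekly_plan(week, goal):
    """Return (sets, reps, rest_seconds) for a given training week."""
    if goal == "hypertrophy":
        # Volume (sets x reps) rises over time by adding repetitions.
        return 4, 8 + week, 90
    if goal == "strength":
        # Intensity rises over time (e.g. harder exercise variations);
        # sets, reps and rest stay fixed in this simplified sketch.
        return 5, 5, 180
    if goal == "endurance":
        # Rest periods shrink over time, down to a 30-second floor.
        return 3, 15, max(30, 90 - 10 * week)
    raise ValueError(f"unknown goal: {goal}")

for week in range(4):
    sets, reps, rest = weekly_plan(week, "endurance")
    print(f"week {week}: {sets} sets x {reps} reps, rest {rest}s")
```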
Calisthenics can also be used to increase bone density, improve core control, and reduce stiffness, among other benefits.
Co-operative calisthenics
Co-operative calisthenics refers to calisthenic exercises that involve two or more participants helping each other to perform the exercise. Such exercises may also be known as partner exercises, partner-resisted exercises, partner carrying, or bodyweight exercises with a partner. They have been used for centuries as a way of building physical strength, endurance, mobility, and co-ordination. Usually, one person performs the exercise and the other person adds resistance. For example, a person performing squats with someone on their back, or someone holding another person in their arms and walking around. Some exercises also involve the use of equipment. Two people may hold onto different ends of a rope and pull in different directions. One person would deliberately provide a lesser amount of resistance, which adds resistance to the exercise whilst also allowing the other person to move through a full range of motion as their superior level of force application pulls the rope along. A disadvantage of these exercises is that it can be challenging to measure how much resistance is being added by the partner, when considered in comparison to free weights or machines. An advantage such exercise has is that it allows for relatively high levels of resistance to be added with equipment being optional. On this basis, co-operative calisthenics can be just as easily performed on a playing field as in a gym. They are also versatile enough to allow them to be used for training goals other than simple strength. For example, a squat with a partner can be turned into a power-focused exercise by jumping or hopping with the partner instead, or even lifting them up on one knee.
Benefits
A 2017 study, "The effects of a calisthenics training intervention on posture, strength and body composition", found that calisthenics training is an "effective training solution to improve posture, strength and body composition without the use of any major training equipment".
History
Catharine Esther Beecher (1800–1878) was an American educator and author who popularized and shaped a conservative ideological movement to both elevate and entrench women's place in the domestic sphere of American culture. She introduced calisthenics in a course of physical education and promoted it.
Disciples of Friedrich Ludwig Jahn brought their version of gymnastics to the United States, while Beecher and Dio Lewis set up physical education programs for women in the 19th century. Organized systems of calisthenics in America took a back seat to competitive sports after the Battle of the Systems, when the states mandated physical education systems. The Royal Canadian Air Force's calisthenics program published in the 1960s helped to launch modern fitness culture.
Calisthenics is associated with the rapidly growing international sport called street workout. The street workout consists of athletes performing calisthenics routines in timed sessions, in front of a panel of judges. The World Street Workout & Calisthenics Federation (WSWCF), based in Riga, Latvia, orchestrates the annual national championships and hosts the world championships for the sport. The World Calisthenics Organization (WCO), based in Los Angeles, California, promotes a series of competitions known globally as "the Battle of the Bars". The WCO created the first set of rules for formal competitions, including weight classes, a timed round system, original judging criteria and a 10-point must system, giving an increasing number of athletes worldwide an opportunity to compete in these global competitions.
Street workout competitions have also popularized 'freestyle calisthenics', which is a style of calisthenics where the athlete uses their power and momentum to perform dynamic skills and tricks on the bar, often as part of a routine where each trick is linked together in a consistent flow. Freestyle calisthenics requires great skill to control one's momentum and an understanding of the mechanics of the body and the bar.
Calisthenics parks
Some outdoor fitness training areas and outdoor gyms are designed especially for calisthenics training, and most are free for public use. Calisthenics park equipment includes pull-up bars, monkey bars, parallel bars, and box jumps. Freely accessible online maps show the locations and sample photos of calisthenics parks around the world.
| Biology and health sciences | Physical fitness | Health |
189037 | https://en.wikipedia.org/wiki/Exercise | Exercise | Exercise or workout is physical activity that enhances or maintains fitness and overall health. It is performed for various reasons, including weight loss or maintenance, to aid growth and improve strength, to develop muscles and the cardiovascular system, to prevent injuries, to hone athletic skills, to improve health, or simply for enjoyment. Many people choose to exercise outdoors, where they can congregate in groups, socialize, and improve well-being as well as mental health.
In terms of health benefits, 150 minutes of moderate-intensity exercise per week is usually recommended for reducing the risk of health problems. At the same time, even a small amount of exercise is healthier than none. Even 75 minutes of exercise per week (about 11 minutes per day) could reduce the risk of early death, cardiovascular disease, stroke, and cancer.
Classification
Physical exercises are generally grouped into three types, depending on the overall effect they have on the human body:
Aerobic exercise is any physical activity that uses large muscle groups and causes the body to use more oxygen than it would while resting. The goal of aerobic exercise is to increase cardiovascular endurance. Examples of aerobic exercise include running, cycling, swimming, brisk walking, skipping rope, rowing, hiking, dancing, playing tennis, continuous training, and long distance running.
Anaerobic exercise, which includes strength and resistance training, can firm, strengthen, and increase muscle mass, as well as improve bone density, balance, and coordination. Examples of strength exercises are push-ups, pull-ups, lunges, squats, and the bench press. Anaerobic exercise also includes weight training, functional training, eccentric training, interval training, sprinting, and high-intensity interval training, which increase short-term muscle strength.
Flexibility exercises stretch and lengthen muscles. Activities such as stretching help to improve joint flexibility and keep muscles limber. The goal is to improve the range of motion which can reduce the chance of injury.
Physical exercise can also include training that focuses on accuracy, agility, power, and speed.
Types of exercise can also be classified as dynamic or static. Dynamic exercises, such as steady running, tend to produce a lowering of the diastolic blood pressure during exercise, due to the improved blood flow. Conversely, static exercise (such as weight-lifting) can cause the systolic pressure to rise significantly, albeit transiently, during the performance of the exercise.
Health effects
Physical exercise is important for maintaining physical fitness and can contribute to maintaining a healthy weight, regulating the digestive system, building and maintaining healthy bone density, muscle strength, and joint mobility, promoting physiological well-being, reducing surgical risks, and strengthening the immune system. Some studies indicate that exercise may increase life expectancy and the overall quality of life. People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who by comparison are not physically active. Moderate levels of exercise have been correlated with preventing aging by reducing inflammatory potential. The majority of the benefits from exercise are achieved with around 3500 metabolic equivalent (MET) minutes per week, with diminishing returns at higher levels of activity. For example, climbing stairs 10 minutes, vacuuming 15 minutes, gardening 20 minutes, running 20 minutes, and walking or bicycling for transportation 25 minutes on a daily basis would together achieve about 3000 MET minutes a week. A lack of physical activity causes approximately 6% of the burden of disease from coronary heart disease, 7% of type 2 diabetes, 10% of breast cancer, and 10% of colon cancer worldwide. Overall, physical inactivity causes 9% of premature mortality worldwide.
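As a rough worked check of the MET-minute arithmetic above (a sketch only: the MET value assigned to each activity is an illustrative assumption, and actual values vary by source and intensity), the daily routine quoted in the text can be tallied as follows, landing in the same ballpark as the article's figure:

```python
# Tally weekly MET-minutes for the daily routine quoted above.
# MET values here are illustrative assumptions, not authoritative figures.
daily_activities = {
    # activity: (minutes per day, assumed MET value)
    "climbing stairs":   (10, 8.0),
    "vacuuming":         (15, 3.5),
    "gardening":         (20, 4.0),
    "running":           (20, 7.0),
    "walking/bicycling": (25, 4.0),
}

daily = sum(minutes * met for minutes, met in daily_activities.values())
weekly = 7 * daily
print(f"about {weekly:.0f} MET-minutes per week")  # ~3200 with these assumptions
```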
The American-British writer Bill Bryson wrote: "If someone invented a pill that could do for us all that a moderate amount of exercise achieves, it would instantly become the most successful drug in history."
Fitness
Most people can increase fitness by increasing physical activity levels. Increases in muscle size from resistance training are primarily determined by diet and testosterone. This genetic variation in improvement from training is one of the key physiological differences between elite athletes and the larger population. There is evidence that exercising in middle age may lead to better physical ability later in life.
Early motor skills and development is also related to physical activity and performance later in life. Children who are more proficient with motor skills early on are more inclined to be physically active, and thus tend to perform well in sports and have better fitness levels. Early motor proficiency has a positive correlation to childhood physical activity and fitness levels, while less proficiency in motor skills results in a more sedentary lifestyle.
The type and intensity of physical activity performed may have an effect on a person's fitness level. There is some weak evidence that high-intensity interval training may improve a person's VO2 max slightly more than lower intensity endurance training. However, unscientific fitness methods could lead to sports injuries.
Cardiovascular system
The beneficial effect of exercise on the cardiovascular system is well documented. There is a direct correlation between physical inactivity and cardiovascular disease, and physical inactivity is an independent risk factor for the development of coronary artery disease. Low levels of physical exercise increase the risk of cardiovascular diseases mortality.
Children who participate in physical exercise experience greater loss of body fat and increased cardiovascular fitness. Studies have shown that academic stress in youth increases the risk of cardiovascular disease in later years; however, these risks can be greatly decreased with regular physical exercise.
There is a dose-response relationship between the amount of exercise performed, from approximately 700 to 2,000 kcal of energy expenditure per week, and all-cause mortality and cardiovascular disease mortality in middle-aged and elderly men. The greatest potential for reduced mortality is seen in sedentary individuals who become moderately active.
Studies have shown that since heart disease is the leading cause of death in women, regular exercise in aging women leads to healthier cardiovascular profiles.
The most beneficial effects of physical activity on cardiovascular disease mortality can be attained through moderate-intensity activity (40–60% of maximal oxygen uptake, depending on age). After a myocardial infarction, survivors who changed their lifestyle to include regular exercise had higher survival rates. Sedentary people are most at risk for mortality from cardiovascular and all other causes. According to the American Heart Association, exercise reduces the risk of cardiovascular diseases, including heart attack and stroke.
Some have suggested that increases in physical exercise might decrease healthcare costs, increase the rate of job attendance, as well as increase the amount of effort women put into their jobs.
Immune system
Although there have been hundreds of studies on physical exercise and the immune system, there is little direct evidence on its connection to illness. Epidemiological evidence suggests that moderate exercise has a beneficial effect on the human immune system; an effect which is modeled in a J curve. Moderate exercise has been associated with a 29% decreased incidence of upper respiratory tract infections (URTI), but studies of marathon runners found that their prolonged high-intensity exercise was associated with an increased risk of infection occurrence. However, another study did not find the effect. Immune cell functions are impaired following acute sessions of prolonged, high-intensity exercise, and some studies have found that athletes are at a higher risk for infections. Studies have shown that strenuous stress for long durations, such as training for a marathon, can suppress the immune system by decreasing the concentration of lymphocytes. The immune systems of athletes and nonathletes are generally similar. Athletes may have a slightly elevated natural killer cell count and cytolytic action, but these are unlikely to be clinically significant.
Vitamin C supplementation has been associated with a lower incidence of upper respiratory tract infections in marathon runners.
Biomarkers of inflammation such as C-reactive protein, which are associated with chronic diseases, are reduced in active individuals relative to sedentary individuals, and the positive effects of exercise may be due to its anti-inflammatory effects. In individuals with heart disease, exercise interventions lower blood levels of fibrinogen and C-reactive protein, an important cardiovascular risk marker. The depression in the immune system following acute bouts of exercise may be one of the mechanisms for this anti-inflammatory effect.
Cancer
A systematic review evaluated 45 studies that examined the relationship between physical activity and cancer survival rates. According to the review, "[there] was consistent evidence from 27 observational studies that physical activity is associated with reduced all-cause, breast cancer–specific, and colon cancer–specific mortality. There is currently insufficient evidence regarding the association between physical activity and mortality for survivors of other cancers." Evidence suggests that exercise may positively affect the quality of life in cancer survivors, including factors such as anxiety, self-esteem and emotional well-being. For people with cancer undergoing active treatment, exercise may also have positive effects on health-related quality of life, such as fatigue and physical functioning. This is likely to be more pronounced with higher intensity exercise.
Exercise may contribute to a reduction of cancer-related fatigue in survivors of breast cancer. Although there is only limited scientific evidence on the subject, people with cancer cachexia are encouraged to engage in physical exercise. Due to various factors, some individuals with cancer cachexia have a limited capacity for physical exercise. Compliance with prescribed exercise is low in individuals with cachexia and clinical trials of exercise in this population often have high drop-out rates.
There is low-quality evidence for an effect of aerobic physical exercises on anxiety and serious adverse events in adults with hematological malignancies. Aerobic physical exercise may result in little to no difference in the mortality, quality of life, or physical functioning. These exercises may result in a slight reduction in depression and reduction in fatigue.
Neurobiological
Depression
Continuous aerobic exercise can induce a transient state of euphoria, colloquially known as a "runner's high" in distance running or a "rower's high" in crew, through the increased biosynthesis of at least three euphoriant neurochemicals: anandamide (an endocannabinoid), β-endorphin (an endogenous opioid), and phenethylamine (a trace amine and amphetamine analog).
Concussion
Supervised aerobic exercise without a risk of re-injury (falling, getting hit on the head) is prescribed as treatment for acute concussion. Some exercise interventions may also prevent sport-related concussion.
Sleep
Preliminary evidence from a 2012 review indicated that physical training for up to four months may increase sleep quality in adults over 40 years of age. A 2010 review suggested that exercise generally improved sleep for most people, and may help with insomnia, but there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. A 2018 systematic review and meta-analysis suggested that exercise can improve sleep quality in people with insomnia.
Libido
One 2013 study found that exercising improved sexual arousal problems related to antidepressant use.
Respiratory system
People who participate in physical exercise experience increased cardiovascular fitness.
There is some level of concern about additional exposure to air pollution when exercising outdoors, especially near traffic.
Mechanism of effects
Skeletal muscle
Resistance training and subsequent consumption of a protein-rich meal promotes muscle hypertrophy and gains in muscle strength by stimulating myofibrillar muscle protein synthesis (MPS) and inhibiting muscle protein breakdown (MPB). The stimulation of muscle protein synthesis by resistance training occurs via phosphorylation of the mechanistic target of rapamycin (mTOR) and subsequent activation of mTORC1, which leads to protein biosynthesis in cellular ribosomes via phosphorylation of mTORC1's immediate targets (the p70S6 kinase and the translation repressor protein 4EBP1). The suppression of muscle protein breakdown following food consumption occurs primarily via increases in plasma insulin. Similarly, increased muscle protein synthesis (via activation of mTORC1) and suppressed muscle protein breakdown (via insulin-independent mechanisms) has also been shown to occur following ingestion of β-hydroxy β-methylbutyric acid.
Aerobic exercise induces mitochondrial biogenesis and an increased capacity for oxidative phosphorylation in the mitochondria of skeletal muscle, which is one mechanism by which aerobic exercise enhances submaximal endurance performance. These effects occur via an exercise-induced increase in the intracellular AMP:ATP ratio, thereby triggering the activation of AMP-activated protein kinase (AMPK) which subsequently phosphorylates peroxisome proliferator-activated receptor gamma coactivator-1α (PGC-1α), the master regulator of mitochondrial biogenesis.
Other peripheral organs
Developing research has demonstrated that many of the benefits of exercise are mediated through the role of skeletal muscle as an endocrine organ. That is, contracting muscles release multiple substances known as myokines which promote the growth of new tissue, tissue repair, and multiple anti-inflammatory functions, which in turn reduce the risk of developing various inflammatory diseases. Exercise reduces levels of cortisol, which causes many health problems, both physical and mental. Endurance exercise before meals lowers blood glucose more than the same exercise after meals. There is evidence that vigorous exercise (90–95% of VO2 max) induces a greater degree of physiological cardiac hypertrophy than moderate exercise (40 to 70% of VO2 max), but it is unknown whether this has any effects on overall morbidity and/or mortality. Both aerobic and anaerobic exercise work to increase the mechanical efficiency of the heart by increasing cardiac volume (aerobic exercise), or myocardial thickness (strength training). Ventricular hypertrophy, the thickening of the ventricular walls, is generally beneficial and healthy if it occurs in response to exercise.
Central nervous system
The effects of physical exercise on the central nervous system may be mediated in part by specific neurotrophic factor hormones released into the blood by muscles, including BDNF, IGF-1, and VEGF.
Public health measures
Community-wide and school campaigns are often used in an attempt to increase a population's level of physical activity. Studies to determine the effectiveness of these types of programs need to be interpreted cautiously as the results vary. There is some evidence that certain types of exercise programmes for older adults, such as those involving gait, balance, co-ordination and functional tasks, can improve balance. Following progressive resistance training, older adults also respond with improved physical function. Brief interventions promoting physical activity may be cost-effective; however, this evidence is weak and there are variations between studies.
Environmental approaches appear promising: signs that encourage the use of stairs, as well as community campaigns, may increase exercise levels. The city of Bogotá, Colombia, for example, blocks off roads on Sundays and holidays to make it easier for its citizens to get exercise. Such pedestrian zones are part of an effort to combat chronic diseases and to maintain a healthy BMI.
Parents can promote physical activity by modelling healthy levels of physical activity or by encouraging physical activity. According to the Centers for Disease Control and Prevention in the United States, children and adolescents should do 60 minutes or more of physical activity each day. Implementing physical exercise in the school system and ensuring an environment in which children can reduce barriers to maintain a healthy lifestyle is essential.
The European Commission's Directorate-General for Education and Culture (DG EAC) has dedicated programs and funds for Health Enhancing Physical Activity (HEPA) projects within its Horizon 2020 and Erasmus+ program, as research showed that too many Europeans are not physically active enough. Financing is available for increased collaboration between players active in this field across the EU and around the world, the promotion of HEPA in the EU and its partner countries, and the European Sports Week. The DG EAC regularly publishes a Eurobarometer on sport and physical activity.
Exercise trends
Worldwide there has been a large shift toward less physically demanding work. This has been accompanied by increasing use of mechanized transportation, a greater prevalence of labor-saving technology in the home, and fewer active recreational pursuits. Personal lifestyle changes, however, can correct the lack of physical exercise.
Research published in 2015 suggests that incorporating mindfulness into physical exercise interventions increases exercise adherence and self-efficacy, and also has positive effects both psychologically and physiologically.
Social and cultural variation
Exercising looks different in every country, as do the motivations behind exercising. In some countries, people exercise primarily indoors (such as at home or health clubs), while in others, people primarily exercise outdoors. People may exercise for personal enjoyment, health and well-being, social interactions, competition or training, etc. These differences could potentially be attributed to a variety of reasons including geographic location and social tendencies.
In Colombia, for example, citizens value and celebrate the outdoor environments of their country. In many instances, they use outdoor activities as social gatherings to enjoy nature and their communities. In Bogotá, Colombia, a 70-mile stretch of road known as the Ciclovía is shut down each Sunday for bicyclists, runners, rollerbladers, skateboarders and other exercisers to work out and enjoy their surroundings.
Similarly to Colombia, citizens of Cambodia tend to exercise socially outside. In this country, public gyms have become quite popular. People will congregate at these outdoor gyms not only to use the public facilities, but also to organize aerobics and dance sessions, which are open to the public.
Sweden has also begun developing outdoor gyms, called utegym. These gyms are free to the public and are often placed in beautiful, picturesque environments. People will swim in rivers, use boats, and run through forests to stay healthy and enjoy the natural world around them. This works particularly well in Sweden due to its geographical location.
Exercise in some areas of China, particularly among those who are retired, seems to be socially grounded. In the mornings, square dances are held in public parks; these gatherings may include Latin dancing, ballroom dancing, tango, or even the jitterbug. Dancing in public allows people to interact with those with whom they would not normally interact, allowing for both health and social benefits.
These sociocultural variations in physical exercise show how people in different geographic locations and social climates have varying motivations and methods of exercising. Physical exercise can improve health and well-being, as well as enhance community ties and appreciation of natural beauty.
Adherence
Adhering to, or staying consistent with, an exercise program can be challenging for many people. Studies have identified many different factors that influence adherence, including why a person is exercising (e.g., health or social reasons), the types of exercise and how the program is structured, whether professionals are involved in the program, education related to exercise and health, monitoring of progress made in the program, goal setting, and how involved a person is in choosing the exercise program and setting its goals.
Nutrition and recovery
Proper nutrition is as important to health as exercise. When exercising, it becomes even more important to have a good diet to ensure that the body has the correct ratio of macronutrients while providing ample micronutrients, to aid the body with the recovery process following strenuous exercise.
Active recovery is recommended after physical exercise because it removes lactate from the blood more quickly than inactive recovery. Removing lactate from circulation allows for a gradual decline in body temperature, which can also benefit the immune system, as an individual may be vulnerable to minor illnesses if the body temperature drops too abruptly after physical exercise. Exercise physiologists recommend the "4-Rs framework":
Rehydration
Replacing any fluid and electrolyte deficits
Refuel
Consuming carbohydrates to replenish muscle and liver glycogen
Repair
Consuming high-quality protein sources with additional supplementation of creatine monohydrate
Rest
Getting long and high-quality sleep after exercise, additionally improved by consuming casein proteins, antioxidant-rich fruits, and high-glycemic-index meals
Exercise has an effect on appetite, but whether it increases or decreases appetite varies from individual to individual, and is affected by the intensity and duration of the exercise.
Excessive exercise
History
The benefits of exercise have been known since antiquity. Dating back to 65 BCE, it was Marcus Cicero, Roman politician and lawyer, who stated: "It is exercise alone that supports the spirits, and keeps the mind in vigor." Exercise was also seen to be valued later in history during the Early Middle Ages as a means of survival by the Germanic peoples of Northern Europe.
More recently, exercise was regarded as a beneficial force in the 19th century. In 1858, Archibald MacLaren opened a gymnasium at the University of Oxford and instituted a training regimen for Major Frederick Hammersley and 12 non-commissioned officers. This regimen was assimilated into the training of the British Army, which formed the Army Gymnastic Staff in 1860 and made sport an important part of military life. Several mass exercise movements were started in the early twentieth century as well. The first and most significant of these in the UK was the Women's League of Health and Beauty, founded in 1930 by Mary Bagot Stack, that had 166,000 members in 1937.
The link between physical health and exercise (or lack of it) was further established in 1949 and reported in 1953 by a team led by Jerry Morris. Morris noted that men of similar social class and occupation (bus conductors versus bus drivers) had markedly different rates of heart attacks, depending on the level of exercise they got: bus drivers had a sedentary occupation and a higher incidence of heart disease, while bus conductors were forced to move continually and had a lower incidence of heart disease.
Other animals
Animals like chimpanzees, orangutans, gorillas, and bonobos, which are closely related to humans, engage without ill effect in considerably less physical activity than is required for human health, raising the question of how this is biochemically possible.
Studies of animals indicate that physical activity may be more adaptable than changes in food intake to regulate energy balance.
Mice having access to activity wheels engaged in voluntary exercise and increased their propensity to run as adults. Artificial selection of mice exhibited significant heritability in voluntary exercise levels, with "high-runner" breeds having enhanced aerobic capacity, hippocampal neurogenesis, and skeletal muscle morphology.
The effects of exercise training appear to be heterogeneous across non-mammalian species. As examples, exercise training of salmon showed minor improvements of endurance, and a forced swimming regimen of yellowtail amberjack and rainbow trout accelerated their growth rates and altered muscle morphology favorable for sustained swimming. Crocodiles, alligators, and ducks showed elevated aerobic capacity following exercise training. No effect of endurance training was found in most studies of lizards, although one study did report a training effect. In lizards, sprint training had no effect on maximal exercise capacity, and muscular damage from over-training occurred following weeks of forced treadmill exercise.
| Biology and health sciences | Health, fitness, and medicine | null |
189055 | https://en.wikipedia.org/wiki/Agnatha | Agnatha | Agnatha (; ) is a paraphyletic infraphylum of non-gnathostome vertebrates, or jawless fish, in the phylum Chordata, subphylum Vertebrata, consisting of both living (cyclostomes) and extinct (conodonts, anaspids, and ostracoderms, among others). Among recent animals, cyclostomes are sister to all vertebrates with jaws, known as gnathostomes.
Molecular data, both from rRNA and from mtDNA as well as embryological data, strongly supports the hypothesis that both groups of living agnathans, hagfishes and lampreys, are more closely related to each other than to jawed fish, forming the clade Cyclostomi.
The oldest fossil agnathans appeared in the Cambrian. Living jawless fish comprise about 120 species in total. Hagfish are considered members of the subphylum Vertebrata, because they secondarily lost vertebrae; before this event was inferred from molecular and developmental data, the Craniata hypothesis was accepted (and is still sometimes used as a strictly morphological descriptor) to reference hagfish plus vertebrates.
Metabolism
Agnathans are ectothermic, meaning they do not regulate their own body temperature. Agnathan metabolism is slow in cold water, and therefore they do not have to eat very much. They have no distinct stomach, but rather a long gut, more or less homogeneous throughout its length. Lampreys feed on other fish and mammals, injecting anticoagulant fluids into the host to prevent blood clotting, which causes the host to yield more blood. Hagfish are scavengers, eating mostly dead animals. They use a row of sharp teeth to break down the animal. Because agnathan teeth cannot move up and down, their possible food types are limited.
Morphology
In addition to the absence of jaws, modern agnathans are characterised by the absence of paired fins; the presence of a notochord both in larvae and adults; and seven or more paired gill pouches. Lampreys have a light-sensitive pineal eye (homologous to the pineal gland in mammals). All living and most extinct agnathans have no identifiable stomach and no appendages. Fertilization and development are both external, and there is no parental care. Agnathans are ectothermic (cold-blooded), with a cartilaginous skeleton and a heart that contains two chambers.
Body covering
In modern agnathans, the body is covered in skin, with neither dermal nor epidermal scales. The skin of hagfish has copious slime glands, the slime constituting their defense mechanism. The slime can sometimes clog up enemy fishes' gills, causing them to die. In direct contrast, many extinct agnathans sported extensive exoskeletons composed of either massive, heavy dermal armour or small mineralized scales.
Appendages
Almost all agnathans, including all extant agnathans, have no paired appendages, although most do have a dorsal or a caudal fin. Some fossil agnathans, such as osteostracans and pituriaspids, did have paired fins, a trait inherited by their jawed descendants.
Reproduction
Fertilization in lampreys is external; the mode of fertilization in hagfishes is not known. Development in both groups is probably external, and there is no known parental care. Not much is known about the hagfish reproductive process: it is believed that hagfish have only about 30 eggs over a lifetime, and they show very little of the larval stage that characterizes the lamprey. Lampreys are able to reproduce only once. They reproduce in freshwater riverbeds, working in pairs to build a nest and burying their eggs about an inch beneath the sediment. The resulting hatchlings go through four years of larval development before becoming adults.
Evolution
Although a minor element of modern marine fauna, agnathans were prominent among the early fish in the early Paleozoic. Two types of Early Cambrian animal apparently having fins, vertebrate musculature, and gills are known from the early Cambrian Maotianshan shales of China: Haikouichthys and Myllokunmingia. They have been tentatively assigned to Agnatha by Janvier. A third possible agnathan from the same region is Haikouella. A possible agnathan that has not been formally described was reported by Simonetti from the Middle Cambrian Burgess Shale of British Columbia. Conodonts, a class of agnathans which arose in the early Cambrian, remained common enough until their extinction in the Triassic that their teeth (the only parts of them that were usually fossilized) are often used as index fossils from the late Cambrian to the Triassic.
Many Ordovician, Silurian, and Devonian agnathans were armored with heavy, spiky bony plates. The first armored agnathans—the ostracoderms, precursors to the bony fish and hence to the tetrapods (including humans)—are known from the middle Ordovician, and by the Late Silurian the agnathans had reached the high point of their evolution. Most of the ostracoderms, such as thelodonts, osteostracans, and galeaspids, were more closely related to the gnathostomes than to the surviving agnathans, known as cyclostomes. Cyclostomes apparently split from other agnathans before the evolution of dentine and bone, which are present in many fossil agnathans, including conodonts. Agnathans declined in the Devonian and never recovered.
Approximately 500 million years ago, two types of recombinatorial adaptive immune systems (AISs) arose in vertebrates. The jawed vertebrates diversify their repertoire of immunoglobulin domain-based T and B cell antigen receptors mainly through the rearrangement of V(D)J gene segments and somatic hypermutation, but none of the fundamental AIS recognition elements in jawed vertebrates have been found in jawless vertebrates. Instead, the AIS of jawless vertebrates is based on variable lymphocyte receptors (VLRs) that are generated through recombinatorial usage of a large panel of highly diverse leucine-rich-repeat (LRR) sequences. Three VLR genes (VLRA, VLRB, and VLRC) have been identified in lampreys and hagfish, and are expressed on three distinct lymphocytes lineages. VLRA+ cells and VLRC+ cells are T-cell-like and develop in a thymus-like lympho-epithelial structure, termed thymoids. VLRB+ cells are B-cell-like, develop in hematopoietic organs, and differentiate into "VLRB antibody"-secreting plasma cells.
Classification
Phylogeny
Phylogeny based on the work of Mikko Haaramo and Delsuc et al.
While the "Agnatha" Conodonta was indeed jawless, if it would have continued to live, its descendants would still be closer related to e.g. humans than to lampreys, and also contemporary it was closer related to the ancestor of humans. Due to such considerations, Agnatha can not be consolidated into a coherent grouping without either removing any non-Cyclostomata, or by including all Vertebrata thus rendering it into a junior synonym of Vertebrata.
The new phylogeny from Miyashita et al. (2019) is considered compatible with both morphological and molecular evidence.
| Biology and health sciences | Agnatha | null |
189105 | https://en.wikipedia.org/wiki/Roche%20lobe | Roche lobe | In astronomy, the Roche lobe is the region around a star in a binary system within which orbiting material is gravitationally bound to that star. It is an approximately teardrop-shaped region bounded by a critical gravitational equipotential, with the apex of the teardrop pointing towards the other star (the apex is at the Lagrangian point of the system).
The Roche lobe is different from the Roche sphere, which approximates the gravitational sphere of influence of one astronomical body in the face of perturbations from a more massive body around which it orbits. It is also different from the Roche limit, which is the distance at which an object held together only by gravity begins to break up due to tidal forces. The Roche lobe, Roche limit, and Roche sphere are named after the French astronomer Édouard Roche.
Definition
In a binary system with a circular orbit, it is often useful to describe the system in a coordinate system that rotates along with the objects. In this non-inertial frame, one must consider centrifugal force in addition to gravity. The two together can be described by a potential, so that, for example, the stellar surfaces lie along equipotential surfaces.
Close to each star, surfaces of equal gravitational potential are approximately spherical and concentric with the nearer star. Far from the stellar system, the equipotentials are approximately ellipsoidal and elongated parallel to the axis joining the stellar centers. A critical equipotential intersects itself at the Lagrangian point of the system, forming a two-lobed figure-of-eight with one of the two stars at the center of each lobe. This critical equipotential defines the Roche lobes.
Where matter moves relative to the co-rotating frame it will seem to be acted upon by a Coriolis force. This is not derivable from the Roche lobe model as the Coriolis force is a non-conservative force (i.e. not representable by a scalar potential).
Further analysis
In the gravity potential graphics, L1, L2, L3, L4, and L5 are in synchronous rotation with the system. Regions of red, orange, yellow, green, light blue, and blue represent the potential from high to low. Red arrows indicate the rotation of the system and black arrows indicate the relative motions of the debris.
Debris moves faster in the lower potential region and slower in the higher potential region. Consequently, relative motions of the debris in the lower orbit are in the same direction as the system's revolution, while opposite in the higher orbit.
L1 is the gravitational capture equilibrium point. It is a gravity cut-off point of the binary star system, and the minimum potential equilibrium among L1, L2, L3, L4, and L5. It is the easiest way for debris to commute between a Hill sphere (an inner circle of blue and light blue) and communal gravity regions (figure-eights of yellow and green on the inner side).
L2 and L3 are gravitational perturbation equilibrium points. Passing through these two equilibrium points, debris can commute between the external region (figure-eights of yellow and green on the outer side) and the communal gravity region of the binary system.
L4 and L5 are the maximum potential points in the system. They are unstable equilibria. If the mass ratio of the two stars becomes larger, then the orange, yellow, and green regions will become a horseshoe orbit, and the red region will become a tadpole orbit.
Mass transfer
When a star "exceeds its Roche lobe", its surface extends out beyond its Roche lobe and the material which lies outside the Roche lobe can "fall off" into the other object's Roche lobe via the first Lagrangian point. In binary evolution this is referred to as mass transfer via Roche-lobe overflow.
In principle, mass transfer could lead to the total disintegration of the object, since a reduction of the object's mass causes its Roche lobe to shrink. However, there are several reasons why this does not happen in general. First, a reduction of the mass of the donor star may cause the donor star to shrink as well, possibly preventing such an outcome. Second, with the transfer of mass between the two binary components, angular momentum is transferred as well.
While mass transfer from a more massive donor to a less massive accretor generally leads to a shrinking orbit, the reverse causes the orbit to expand (under the assumption of mass and angular-momentum conservation). The expansion of the binary orbit will lead to a less dramatic shrinkage or even expansion of the donor's Roche lobe, often preventing the destruction of the donor.
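A minimal numerical sketch of this behaviour, under the stated assumption of conservative mass transfer (total mass and orbital angular momentum conserved, circular orbit, point masses), for which the orbital separation scales as $a \propto (M_1 M_2)^{-2}$:

```python
# For a circular orbit, J = M1 * M2 * sqrt(G * a / M) with M = M1 + M2.
# With J and M both conserved, a / a0 = (M1_0 * M2_0 / (M1 * M2))**2.

def separation_ratio(m1_0, m2_0, dm):
    """a/a0 after transferring mass dm from star 1 (donor) to star 2."""
    m1, m2 = m1_0 - dm, m2_0 + dm
    return (m1_0 * m2_0 / (m1 * m2)) ** 2

# Transfer from the more massive star shrinks the orbit ...
print(separation_ratio(2.0, 1.0, 0.1))  # ~0.92 (< 1)
# ... while transfer from the less massive star widens it.
print(separation_ratio(1.0, 2.0, 0.1))  # ~1.12 (> 1)
```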
To determine the stability of the mass transfer and hence exact fate of the donor star, one needs to take into account how the radius of the donor star and that of its Roche lobe react to the mass loss from the donor; if the star expands faster than its Roche lobe or shrinks less rapidly than its Roche lobe for a prolonged time, mass transfer will be unstable and the donor star may disintegrate. If the donor star expands less rapidly or shrinks faster than its Roche lobe, mass transfer will generally be stable and may continue for a long time.
Mass transfer due to Roche-lobe overflow is responsible for a number of astronomical phenomena, including Algol systems, recurring novae (binary stars consisting of a red giant and a white dwarf that are sufficiently close that material from the red giant dribbles down onto the white dwarf), X-ray binaries and millisecond pulsars. Such mass transfer by Roche lobe overflow (RLOF) is further broken down into three distinct cases:
Case A
Case A RLOF occurs when the donor star is hydrogen burning. According to Nelson and Eggleton, there are a number of subclasses which are reproduced here:
AD dynamic
when RLOF happens to a star with a deep convection zone. Mass transfer happens rapidly on the dynamical time scale of the star and may end with a complete merger.
AR rapid contact
similar to AD, but as the star onto which matter is rapidly accreting gains mass, it gains enough physical size to reach its own Roche lobe. At such times, the system manifests as a contact binary such as a W Ursae Majoris variable.
AS slow contact
similar to AR, but only a short period of fast mass transfer happens followed by a much longer period of slow mass transfer. Eventually the stars will come into contact, but they have changed substantially by the point this happens. Algol variables are the result of such situations.
AE early overtaking
similar to AS, but the star gaining mass overtakes the star donating mass in evolving past the main sequence. The donor star can shrink enough to stop mass transfer, but mass transfer will eventually start again as stellar evolution continues, leading to the cases that follow.
AL late overtaking
the case when the star that initially was the donor undergoes a supernova after the other star has undergone its own round of RLOF.
AB binary
the case where the stars switch back and forth in the role of RLOF donor at least three times (technically a subclass of the above).
AN no overtaking
the case when the star that initially was the donor undergoes a supernova before the other star reaches a RLOF phase.
AG giant
Mass transfer does not begin until the star reaches the red giant branch but before it has exhausted its hydrogen core (after which the system is described as Case B).
Case B
Case B happens when RLOF starts while the donor is a post-core hydrogen burning/hydrogen shell burning star. This case can be further subdivided into classes Br and Bc according to whether the mass transfer occurs from a star dominated by a radiation zone (Br), which therefore evolves much as in most Case A RLOF, or by a convective zone (Bc), after which a common envelope phase may occur (similar to Case C). An alternative division is into cases Ba, Bb, and Bc, roughly corresponding to RLOF phases that happen during helium fusion, after helium fusion but before carbon fusion, or after carbon fusion in the highly evolved star.
Case C
Case C happens when RLOF starts when the donor is at or beyond the helium shell burning phase. These systems are the rarest observed, but this may be due to selection bias.
Geometry
The precise shape of the Roche lobe depends on the mass ratio $q = M_1/M_2$ and must be evaluated numerically. However, for many purposes it is useful to approximate the Roche lobe as a sphere of the same volume. An approximate formula for the radius of this sphere is

$r_1/A = \max(f_1, f_2)$, where $f_1 = 0.38 + 0.2\log_{10} q$ and $f_2 = 0.46224\,\left(\frac{q}{1+q}\right)^{1/3}$.

Function $f_1$ is greater than $f_2$ for $q > 0.5228$. The length $A$ is the orbital separation of the system and $r_1$ is the radius of the sphere whose volume approximates the Roche lobe of mass $M_1$. This formula is accurate to within about 2%. Another approximate formula was proposed by Eggleton and reads as follows:

$r_1/A = \dfrac{0.49\,q^{2/3}}{0.6\,q^{2/3} + \ln\left(1 + q^{1/3}\right)}$

This formula gives results up to 1% accuracy over the entire range of the mass ratio $q$.
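As a quick numerical cross-check of the two approximations above (a sketch; q is the mass ratio $M_1/M_2$ as defined in the text):

```python
import math

def roche_radius_eggleton(q):
    """Eggleton's approximation for r1/A as a function of q = M1/M2."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

def roche_radius_two_branch(q):
    """The two-branch approximation max(f1, f2) quoted above."""
    f1 = 0.38 + 0.2 * math.log10(q)
    f2 = 0.46224 * (q / (1.0 + q)) ** (1.0 / 3.0)
    return max(f1, f2)

for q in (0.1, 0.5228, 1.0, 5.0):
    print(f"q={q}: Eggleton={roche_radius_eggleton(q):.4f}, "
          f"two-branch={roche_radius_two_branch(q):.4f}")
# At q = 1 both give r1/A close to 0.38; near q = 0.5228 the two
# branches f1 and f2 cross, as noted above.
```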
| Physical sciences | Stellar astronomy | Astronomy |
189151 | https://en.wikipedia.org/wiki/Sillimanite | Sillimanite | Sillimanite or fibrolite is an aluminosilicate mineral with the chemical formula Al2SiO5. Sillimanite is named after the American chemist Benjamin Silliman (1779–1864). It was first described in 1824 for an occurrence in Chester, Connecticut.
Occurrence
Sillimanite or fibrolite is one of three aluminosilicate polymorphs, the other two being andalusite and kyanite. A common variety of sillimanite is known as fibrolite, so named because the mineral appears like a bunch of fibres twisted together when viewed in thin section or even by the naked eye. Both the fibrous and traditional forms of sillimanite are common in metamorphosed sedimentary rocks. It is an index mineral indicating high temperature but variable pressure. Example rocks include gneiss and granulite. It occurs with andalusite, kyanite, potassium feldspar, almandine, cordierite, biotite and quartz in schist, gneiss, hornfels and also rarely in pegmatites. Dumortierite and mullite are similar mineral species found in porcelain.
Sillimanite has been found in Brandywine Springs, New Castle County, Delaware. It was named by the State Legislature in 1977 as the state mineral of Delaware by the suggestion of the Delaware Mineralogical Society.
Uses
Natural sillimanite is used in the manufacture of high-alumina refractories or 55–60% alumina bricks. However, it has mostly been replaced by the other aluminosilicate polymorphs, andalusite and kyanite, for this purpose. In recent years, sillimanite has accounted for just 2% of all aluminosilicate mineral production in the western world.
Gallery
| Physical sciences | Silicate minerals | Earth science |
189274 | https://en.wikipedia.org/wiki/Carpentry | Carpentry | Carpentry is a skilled trade and a craft in which the primary work performed is the cutting, shaping and installation of building materials during the construction of buildings, ships, timber bridges, concrete formwork, etc. Carpenters traditionally worked with natural wood and did rougher work such as framing, but today many other materials are also used and sometimes the finer trades of cabinetmaking and furniture building are considered carpentry. In the United States, 98.5% of carpenters are male, and it was the fourth most male-dominated occupation in the country in 1999. In 2006 in the United States, there were about 1.5 million carpentry positions. Carpenters are usually the first tradesmen on a job and the last to leave. Carpenters normally framed post-and-beam buildings until the end of the 19th century; now this old-fashioned carpentry is called timber framing. Carpenters learn this trade by being employed through an apprenticeship training—normally four years—and qualify by successfully completing that country's competence test in places such as the United Kingdom, the United States, Canada, Switzerland, Australia and South Africa. It is also common that the skill can be learned by gaining work experience other than a formal training program, which may be the case in many places.
Carpentry covers various services, such as furniture design and construction, door and window installation or repair, flooring installation, trim and molding installation, custom woodworking, stair construction, structural framing, wood structure and furniture repair, and restoration.
Etymology
The word "carpenter" is the English rendering of the Old French word carpentier (later, charpentier) which is derived from the Latin carpentarius [artifex], "(maker) of a carriage." The Middle English and Scots word (in the sense of "builder") was wright (from the Old English wryhta, cognate with work), which could be used in compound forms such as wheelwright or boatwright.
In the United Kingdom
In the UK, carpentry is used to describe the skill involved in first fixing of timber items such as construction of roofs, floors and timber framed buildings, i.e. those areas of construction that are normally hidden in a finished building. An easy way to envisage this is that first fix work is all that is done before plastering takes place. The second fix is done after plastering takes place. Second fix work, the installation of items such as skirting boards, architraves, doors, and windows are generally regarded as carpentry, however, the off-site manufacture and pre-finishing of the items is regarded as joinery. Carpentry is also used to construct the formwork into which concrete is poured during the building of structures such as roads and highway overpasses. In the UK, the skill of making timber formwork for poured or in situ concrete is referred to as shuttering.
In the United States
Carpentry in the United States is historically defined similarly to the United Kingdom as the "heavier and stronger" work distinguished from a joiner "...who does lighter and more ornamental work than that of a carpenter..." although the "...work of a carpenter and joiner are often combined." Joiner is less common than the terms finish carpenter or cabinetmaker. The terms housewright and barnwright were used historically and are now occasionally used by carpenters who work using traditional methods and materials. Someone who builds custom concrete formwork is a form carpenter.
History
Along with stone, wood is among the oldest building materials. The ability to shape it into tools, shelter, and weapons improved with technological advances from the Stone Age to the Bronze Age to the Iron Age. Some of the oldest archaeological evidence of carpentry are water well casings. These include an oak and hazel structure dating from 5256 BC, found in Ostrov, Czech Republic, and one built using split oak timbers with mortise and tenon and notched corners excavated in eastern Germany, dating from about 7,000 years ago in the early Neolithic period.
Relatively little history of carpentry was preserved before written language. Knowledge and skills were simply passed down over the generations. Even the advent of cave painting and writing recorded little. The oldest surviving complete architectural text is Vitruvius' ten books collectively titled De architectura, which discuss some carpentry. It was only with the invention of the printing press in the 15th century that this began to change, albeit slowly, with builders finally beginning to regularly publish guides and pattern books in the 18th and 19th centuries.
Some of the oldest surviving wooden buildings in the world are temples in China such as the Nanchan Temple built in 782, Greensted Church in England, parts of which are from the 11th century, and the stave churches in Norway from the 12th and 13th centuries.
Europe
By the 16th century, sawmills were coming into use in Europe. European colonization of the Americas was driven partly by a desire to extract resources from the new continent, including wood for use in ships and buildings in Europe. In the 18th century, part of the Industrial Revolution was the invention of the steam engine and cut nails. These technologies, combined with the invention of the circular saw, led to the development of balloon framing, which marked the beginning of the decline of traditional timber framing.
The 19th century saw the development of electrical engineering and distribution which allowed the development of hand-held power tools, wire nails, and machines to mass-produce screws. In the 20th century, portland cement came into common use and concrete foundations allowed carpenters to do away with heavy timber sills. Also, drywall (plasterboard) came into common use replacing lime plaster on wooden lath. Plywood, engineered lumber, and chemically treated lumber also came into use.
For types of carpentry used in America see American historic carpentry.
Training
Carpentry requires training which involves both acquiring knowledge and physical practice. In formal training a carpenter begins as an apprentice, then becomes a journeyman, and with enough experience and competency can eventually attain the status of a master carpenter. Today pre-apprenticeship training may be gained through non-union vocational programs such as high school shop classes and community colleges.
Informally a laborer may simply work alongside carpenters for years learning skills by observation and peripheral assistance. While such an individual may obtain journeyperson status by paying the union entry fee and obtaining a journeyperson's card (which provides the right to work on a union carpentry crew) the carpenter foreperson will, by necessity, dismiss any worker who presents the card but does not demonstrate the expected skill level.
Carpenters may work for an employer or be self-employed. No matter what kind of training a carpenter has had, some U.S. states require contractors to be licensed which requires passing a written test and having minimum levels of insurance.
Schools and programs
Formal training in the carpentry trade is available through seminars, certificate programs, high-school programs, and online classes in the new construction, restoration, and preservation carpentry fields. Sometimes these programs are called pre-apprenticeship training.
In the modern British construction industry, carpenters are trained through apprenticeship schemes where general certificates of secondary education (GCSE) in Mathematics, English, and Technology help but are not essential. However, this is deemed the preferred route, as young people can earn and gain field experience whilst training towards a nationally recognized qualification.
There are two main divisions of training: construction-carpentry and cabinetmaking. During pre-apprenticeship, trainees in each of these divisions spend 30 hours a week for 12 weeks in classrooms and indoor workshops learning mathematics, trade terminology, and skill in the use of hand and power tools. Construction-carpentry trainees also participate in calisthenics to prepare for the physical aspect of the work.
Upon completion of pre-apprenticeship, trainees who have passed the graded curriculum (taught by highly experienced journeyperson carpenters) are assigned to a local union and to union carpentry crews at work on construction sites or in cabinet shops as First Year Apprentices. Over the next four years, as they progress in status to Second Year, Third Year, and Fourth Year Apprentice, apprentices periodically return to the training facility every three months for a week of more detailed training in specific aspects of the trade.
In the United States, fewer than 5% of carpenters identify as female. A number of schools in the U.S. appeal to non-traditional tradespeople by offering carpentry classes for and taught by women, including Hammerstone: Carpentry for Women in Ithaca, NY, Yestermorrow in Waitsfield, VT and Oregon Tradeswomen in Portland, OR.
Apprenticeships and journeyperson
Tradesmen in countries such as Germany and Australia are required to fulfill formal apprenticeships (usually three to four years) to work as professional carpenters. Upon graduation from the apprenticeship, they are known as journeyperson carpenters.
Up through the 19th and even the early 20th century, the journeyperson traveled to another region of the country to learn the building styles and techniques of that area before (usually) returning home. In modern times, journeypeople are not required to travel, and the term now refers to a level of proficiency and skill. Union carpenters in the United States, that is, members of the United Brotherhood of Carpenters and Joiners of America, are required to pass a skills test to be granted official journeyperson status, but uncertified professional carpenters may also be known as journeypersons based on their skill level, years of experience, or simply because they support themselves in the trade and not due to any certification or formal woodworking education.
Professional status as a journeyperson carpenter in the United States may be obtained in a number of ways. Formal training is acquired in a four-year apprenticeship program administered by the United Brotherhood of Carpenters and Joiners of America, in which journeyperson status is obtained after successful completion of twelve weeks of pre-apprenticeship training, followed by four years of on-the-job field training working alongside journeyperson carpenters. The Timber Framers Guild also has a formal apprenticeship program for traditional timber framing. Training is also available in groups like the Kim Bồng woodworking village in Vietnam where apprentices live and work to learn woodworking and carpentry skills.
In Canada, each province sets its own standards for apprenticeship. The average length of time is four years and includes a minimum number of hours of both on-the-job training and technical instruction at a college or other institution. Depending on the number of hours of instruction an apprentice receives, they can earn a Certificate of Proficiency, making them a journeyperson, or a Certificate of Qualification, which allows them to practice a more limited amount of carpentry. Canadian carpenters also have the option of acquiring an additional Interprovincial Red Seal that allows them to practice anywhere in Canada. The Red Seal requires the completion of an apprenticeship and an additional examination.
Master carpenter
After working as a journeyperson for a while, a carpenter may go on to study or test as a master carpenter. In some countries, such as Germany, Iceland and Japan, this is an arduous and expensive process, requiring extensive knowledge (including economic and legal knowledge) and skill to achieve master certification; these countries generally require master status for anyone employing and teaching apprentices in the craft. In others, like the United States, 'master carpenter' can be a loosely used term to describe any skilled carpenter.
Fully trained carpenters and joiners will often move into related trades such as shop fitting, scaffolding, bench joinery, maintenance and system installation.
Materials
Carpenters traditionally worked with natural wood which has been prepared by splitting (riving), hewing, or sawing with a pit saw or sawmill called lumber (American English) or timber (British English). Today natural and engineered lumber and many other building materials carpenters may use are typically prepared by others and delivered to the job site. In 2013 the carpenters union in America used the term carpenter for a catch-all position. Tasks performed by union carpenters include installing "...flooring, windows, doors, interior trim, cabinetry, solid surface, roofing, framing, siding, flooring, insulation, ...acoustical ceilings, computer-access flooring, metal framing, wall partitions, office furniture systems, and both custom or factory-produced materials, ...trim and molding,... ceiling treatments, ... exposed columns and beams, displays, mantels, staircases...metal studs, metal lath, and drywall..."
Health and safety
United States
Carpentry is often hazardous work. Types of woodworking and carpentry hazards include: machine hazards, flying materials, tool projection, fire and explosion, electrocution, noise, vibration, dust, and chemicals.
In the United States the Occupational Safety and Health Administration (OSHA) tries to prevent illness, injury, and fire through regulations. However, self-employed workers are not covered by the OSHA act. OSHA claims that "Since 1970, workplace fatalities have been reduced by more than 65 percent and occupational injury and illness rates have declined by 67 percent. At the same time, U.S. employment has almost doubled." The leading causes of fatalities, called the "fatal four," are falls, followed by struck by object, electrocution, and caught-in/between. In general construction "employers must provide working conditions that are free of known dangers. Keep floors in work areas in a clean and, so far as possible, dry condition. Select and provide required personal protective equipment at no cost to workers. Train workers about job hazards in a language that they can understand." Examples of fall prevention include railings and toe-boards at any floor opening that cannot be well covered, elevated platforms with guardrails, safety harnesses and lines, safety nets, stair railings, and handrails.
Safety is not just about the workers on the job site. Carpenters' work needs to meet the requirements in the Life Safety Code such as in stair building and building codes to promote long-term quality and safety for the building occupants.
Types
Cabinetmaker is a carpenter who does fine and detailed work, specializing in the making of wooden cabinets, wardrobes, dressers, storage chests, and other furniture designed for storage.
Carpenter and joiner has broad skill sets ranging from joinery, finish carpentry, framing, and formwork.
Conservation carpenter works in architectural conservation, known in the U.S. as "preservation" or "restoration" carpentry; a carpenter who works in historic preservation, maintaining structures as they were built or restoring them to that condition.
Cooper, a barrel maker.
Finish carpenter (North America), also trim carpenter, specializes in installing molding and trim, such as door and window casings, mantels, crown mouldings, baseboards, and other types of ornamental work. Finish carpenters pick up where framing leaves off, including hanging doors and installing cabinets.
Formwork carpenter creates the shuttering and falsework used in concrete construction, and reshores as necessary.
Framer is a carpenter who builds the skeletal structure or wooden framework of buildings, most often in the platform framing method. A framer who specializes in building with timbers and traditional joints rather than studs is known as a timber framer.
Log builder builds structures of stacked horizontal logs with limited joints.
Joiner (a traditional name now rare in North America), is one who does cabinetry, furniture making, fine woodworking, model building, instrument making, parquetry, joinery, or other carpentry where exact joints and minimal margins of error are important.
Luthier is someone who makes or repairs stringed instruments. The word luthier comes from the French word for lute, "luth".
Restoration carpenter (see conservation carpenter)
Set carpenter builds and dismantles temporary scenery and sets in film-making, television, and the theater.
Ship's carpenter specializes in maintenance, repair techniques, and carpentry specific to vessels afloat. Such a carpenter patrols the vessel's carpenter's walk to examine the hull for leaks.
Shipwright builds wooden ships on land.
Other
Japanese carpentry: daiku is the general term for carpenter; a miya-daiku (temple carpenter) performs the work of both architect and builder of shrines and temples, while a sukiya-daiku works on teahouse construction and houses. Sashimono-shi build furniture, and tateguya do interior finishing work.
Green carpentry specializes in the use of environmentally friendly, energy-efficient and sustainable sources of building materials for use in construction projects. Green carpenters also practice building methods that use less material while maintaining structural soundness.
Recycled (reclaimed, repurposed) carpentry is carpentry that uses scrap wood and parts of discarded or broken furniture to build new wood products.
| Technology | Material and chemical | null |
189284 | https://en.wikipedia.org/wiki/Steelmaking | Steelmaking | Steelmaking is the process of producing steel from iron ore and/or scrap. Steel has been made for millennia, and was commercialized on a massive scale in the 1850s and 1860s, using the Bessemer and Siemens-Martin processes.
Two major commercial processes are used. Basic oxygen steelmaking uses liquid pig-iron from a blast furnace and scrap steel as the main feed materials. Electric arc furnace (EAF) steelmaking uses scrap steel or direct reduced iron (DRI). Oxygen steelmaking has become more popular over time.
Steelmaking is one of the most carbon emission-intensive industries; it is responsible for about 10% of greenhouse gas emissions. The industry is seeking significant emission reductions.
Steel
Steel is made from iron and carbon. Cast iron is a hard, brittle material that is difficult to work, whereas steel is malleable, relatively easily formed and versatile. On its own, iron is not strong, but a low concentration of carbon – less than 1 percent, depending on the kind of steel – gives steel strength and other important properties. Impurities such as nitrogen, silicon, phosphorus, sulfur, and excess carbon (the most important impurity) are removed, and alloying elements such as manganese, nickel, chromium, carbon, and vanadium are added to produce different grades of steel.
History
Early history
Early processes evolved during the classical era in China, India, and Rome. The earliest means of producing steel was in a bloomery.
For much of human history, steel was made only in small quantities. Early modern methods of producing steel were often labor-intensive and highly skilled arts. The Bessemer process and subsequent developments allowed steel to become integral to the global economy.
China
A system akin to the Bessemer process originated in the 11th century in East Asia. Hartwell wrote that the Song dynasty (960–1279 CE) innovated a "partial decarbonization" method of repeated forging of cast iron under a cold blast. Needham and Wertime described the method as a predecessor to the Bessemer process. This process was first described by the government official Shen Kuo (1031–1095) in 1075, when he visited Cizhou. Hartwell stated that the earliest center where this was practiced was perhaps the great iron-production district along the Henan–Hebei border during the 11th century.
Europe
In the 15th century, the finery process, which shares the air-blowing principle with the Bessemer process, was developed in Europe.
High-quality steel was also made by the reverse process of adding carbon to carbon-free wrought iron, usually imported from Sweden. The manufacturing process, called the cementation process, consisted of heating bars of wrought iron together with charcoal for periods of up to a week in a long stone box. This produced blister steel. The blister steel was put in a crucible with wrought iron and melted, producing crucible steel. Up to 3 tons of (then expensive) coke was burnt for each ton of steel produced. When rolled into bars such steel was sold at £50 to £60 (approximately £3,390 to £4,070 in 2008) a long ton. The most difficult and laborious part of the process was the production of wrought iron in finery forges in Sweden.
In 1740, Benjamin Huntsman developed the crucible technique for steel manufacture at his workshop in Handsworth, England. This process greatly improved the quantity and quality of steel production. It added three hours firing time and required large quantities of coke. In making crucible steel, the blister steel bars were broken into pieces and melted in small crucibles, each containing 20 kg or so. This produced higher quality metal, but increased the cost.
The Bessemer process reduced the time needed to make lower-grade steel to about half an hour while requiring only enough coke needed to melt the pig iron. The earliest Bessemer converters produced steel for £7 a long ton, although it initially sold for around £40 a ton.
Japan
The Japanese may have made use of a Bessemer-type process, as observed by 17th century European travellers. Adventurer Johan Albrecht de Mandelslo described the process in a book published in English in 1669. He wrote, "They have, among others, particular invention for the melting of iron, without the using of fire, casting it into a tun done about on the inside without about half a foot of earth, where they keep it with continual blowing, take it out by ladles full, to give it what form they please." Wagner stated that Mandelslo did not visit Japan, so his description of the process is likely derived from other accounts. Wagner stated that the Japanese process may have been similar to the Bessemer process, but cautions that alternative explanations are plausible.
By the early 19th century the puddling process was widespread. At the time, process heat was too low to entirely remove slag impurities, but the reverberatory furnace made it possible to heat iron without placing it directly in the fire, offering some protection from impurities in the fuel source. Coal then began to replace charcoal as fuel.
The Bessemer process allowed steel to be produced without fuel, using the iron's impurities to create the necessary heat. This drastically reduced costs, but raw materials with the required characteristics were not always easy to find.
Industrialization
Modern steelmaking began at the end of the 1850s when the Bessemer process became the first successful method of steelmaking in high quantity, followed by the open-hearth furnace.
Processes
Modern steelmaking consists of three steps: primary, secondary, and tertiary.
Primary steelmaking involves smelting iron into steel. Secondary steelmaking involves adding or removing other elements such as alloying agents and dissolved gases. Tertiary steelmaking casts molten metal into sheets, rolls or other forms. Multiple techniques are available for each step.
Primary step
Basic oxygen
Basic oxygen steelmaking (BOS) involves melting carbon-rich pig iron and converting it into steel. Blowing oxygen through molten pig iron oxidizes some of the carbon into CO and CO2, turning the iron into steel. Refractories (materials resistant to decomposition under high temperatures)—calcium oxide and magnesium oxide—line the smelting vessel to withstand the heat, corrosive molten metal, and slag. The chemistry is controlled to remove impurities such as silicon and phosphorus.
The basic oxygen process was developed in 1948 by Robert Durrer, as a refinement of the Bessemer converter that replaced air with (more efficient) pure oxygen. It reduced plant capital costs and smelting time, and increased labor productivity. Between 1920 and 2000, labour requirements decreased by a factor of 1000, to 3 man-hours per thousand tonnes. In 2013, 70% of global steel output came from the basic oxygen furnace. Furnaces can convert up to 350 tons of iron into steel in less than 40 minutes, compared to 10–12 hours in an open hearth furnace.
Electric arc
Electric arc furnaces make steel from scrap or direct reduced iron. A "heat" (batch) of iron is loaded into the furnace, sometimes with a "hot heel" (molten steel from a previous heat). Gas burners may assist with the melt. As in BOS, fluxes are added to protect the vessel lining and help impurity removal. Furnaces typically have a capacity of around 100 tonnes and produce a heat of steel every 40 to 50 minutes. This process allows larger alloy additions than the basic oxygen method.
HIsarna
In HIsarna ironmaking, iron ore is processed almost directly into liquid iron or hot metal. The process is based around a cyclone converter blast furnace, which makes it possible to skip making the BOS-required pig iron pellets. Skipping this preparatory step makes the HIsarna process more energy-efficient and lowers the carbon footprint.
Hydrogen reduction
Direct-reduced iron can be produced from iron ore as it reacts with hydrogen. Renewable hydrogen allows steelmaking without fossil fuels. Direct reduction occurs at temperatures below the melting point of iron. The iron is then infused with carbon (from coal) in an electric arc furnace. Hydrogen electrolysis requires approximately 2600 kWh per ton of steel. Hydrogen production raises costs by an estimated 20–30% over conventional methods.
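As a rough cross-check on these figures, the short Python sketch below (an illustration only; it assumes the idealized overall reaction Fe2O3 + 3 H2 → 2 Fe + 3 H2O with complete yield, not any particular plant's process) estimates the hydrogen mass needed per tonne of iron:

# Idealized hydrogen demand for direct reduction: Fe2O3 + 3 H2 -> 2 Fe + 3 H2O
M_FE = 55.85   # molar mass of iron, g/mol
M_H2 = 2.016   # molar mass of hydrogen gas, g/mol

def hydrogen_per_tonne_iron() -> float:
    """Return kg of H2 needed to reduce enough Fe2O3 to yield one tonne of iron."""
    mol_fe = 1_000_000 / M_FE    # moles of Fe in one tonne (10^6 g)
    mol_h2 = mol_fe * 3 / 2      # stoichiometry: 3 mol H2 per 2 mol Fe
    return mol_h2 * M_H2 / 1000  # grams to kilograms

print(f"{hydrogen_per_tonne_iron():.0f} kg H2 per tonne of iron")  # ~54 kg

At a typical electrolyser consumption of roughly 50 kWh per kg of hydrogen, about 54 kg corresponds to around 2,700 kWh per tonne, consistent with the approximately 2600 kWh figure quoted above.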
Second step
The next step commonly uses ladles. Ladle operations include de-oxidation (or "killing"), vacuum degassing, alloy addition, inclusion removal, inclusion chemistry modification, de-sulphurisation, and homogenisation. It is common to perform ladle operations in gas-stirred ladles with electric arc heating in the furnace lid. Tight control of ladle metallurgy produces high grades of steel with narrow tolerances.
Tertiary step
Carbon dioxide emissions
Steelmaking is estimated to be responsible for around 11% of global CO2 emissions and around 7% of greenhouse gas emissions. Making 1 ton of steel emits about 1.8 tons of CO2. The bulk of these emissions come from the industrial process in which coal provides the carbon that binds with the oxygen from the iron ore in a blast furnace.
Additional emissions result from mining, refining and shipping ore, basic oxygen steelmaking, calcination, and the hot blast. Proposed techniques to reduce emissions in the steel industry include reduction of iron ore using green hydrogen rather than carbon, and carbon capture and storage.
Mining and extraction
Coal and iron ore mining are energy intensive, and damage their surroundings, leaving pollution, biodiversity loss, deforestation, and greenhouse gas emissions behind.
Blast furnace
Blast furnaces remove oxygen and trace elements from iron and add a small amount of carbon by melting the iron ore at high temperatures in the presence of ambient oxygen and coke (a type of coal). The oxygen from the ore is carried away by the carbon from the coke in the form of CO2. The reaction:
Fe2O3(s) + 3 CO(g) → 2 Fe(s) + 3 CO2(g)
The reaction occurs due to the lower (favorable) energy state of CO2 compared to iron oxide, and the high temperatures are needed to achieve the reaction's activation energy. A small amount of carbon bonds with the iron, forming pig iron, which is an intermediary before steel, as its carbon content is too high – around 4%.
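A back-of-the-envelope mass balance on this reaction shows how much CO2 the reduction chemistry alone commits a blast furnace to, before fuel combustion and other sources are counted (a simplified sketch in Python; real furnaces emit more, as the ~1.8 t per ton figure earlier indicates):

# CO2 from the reduction reaction alone: Fe2O3 + 3 CO -> 2 Fe + 3 CO2
M_FE = 55.85   # molar mass of iron, g/mol
M_CO2 = 44.01  # molar mass of carbon dioxide, g/mol

def co2_per_tonne_iron() -> float:
    """Return tonnes of CO2 released per tonne of iron, reduction step only."""
    return (3 / 2) * (M_CO2 / M_FE)  # 3 mol CO2 per 2 mol Fe

print(f"{co2_per_tonne_iron():.2f} t CO2 per t Fe")  # ~1.18 t

The gap between this ~1.2 t and the ~1.8 t quoted earlier is accounted for by heating, calcination, and the other sources discussed below.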
Decarburization
To reduce the carbon content in pig iron and obtain the desired carbon content of steel, the pig iron is re-melted and oxygen is blown through it in basic oxygen steelmaking. In this step, the oxygen binds with the undesired carbon, carrying it away in the form of CO2 gas, an additional emission source. After this step, the carbon content in the pig iron is lowered sufficiently to obtain steel.
Calcination
Further emissions result from the use of limestone, which decomposes at high temperatures in a reaction called calcination, according to:
CaCO3(s) → CaO(s) + CO2(g)
The resulting CO2 is an additional source of emissions. Calcium oxide (CaO, quicklime) can be used as a replacement to reduce emissions. It acts as a chemical flux, removing impurities (such as sulfur or phosphorus (e.g. apatite or fluorapatite)) in the form of slag and lowers emissions according to reactions such as:
SiO2 + CaO → CaSiO3
This use of limestone to provide a flux occurs both in the blast furnace (to obtain pig iron) and in the basic oxygen steel making (to obtain steel).
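The calcination reaction fixes the CO2 burden of flux use; the sketch below (illustrative only, assuming pure CaCO3 and complete decomposition) computes the CO2 released per tonne of limestone:

# CO2 released by calcining limestone: CaCO3 -> CaO + CO2
M_CACO3 = 100.09  # molar mass of calcium carbonate, g/mol
M_CO2 = 44.01     # molar mass of carbon dioxide, g/mol

def co2_per_tonne_limestone() -> float:
    """Return tonnes of CO2 released per tonne of limestone calcined."""
    return M_CO2 / M_CACO3  # 1 mol CO2 per mol CaCO3

print(f"{co2_per_tonne_limestone():.2f} t CO2 per t CaCO3")  # ~0.44 t

This is why charging pre-made quicklime (CaO) instead of limestone moves the calcination emissions out of the furnace, as noted above.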
Hot blast
CO2 emissions also result from the hot blast, which increases blast furnace temperatures. The hot blast pumps hot air into the blast furnace. The hot blast temperature varies depending on the design and condition of the furnace. Oil, tar, natural gas, powdered coal and oxygen can be injected to combine with the coke to release additional energy and increase the percentage of reducing gases present, increasing productivity. Hot blast air is typically heated by burning fossil fuels, an additional emission source.
Strategies for reducing carbon emissions
The steel industry produces 7-8% of anthropogenic emissions and is one of the most energy-intensive industries. Emissions abatement and decarbonization strategies vary by manufacturing process. Options fall into three general categories: using a non-fossil energy source; increasing processing efficiency; and evolving the manufacturing process. They may be used individually or in combination.
"Green steel" describes steelmaking without fossil fuels. Some companies that claim to produce green steel reduce, but do not eliminate, emissions.
Australia
Australia produces nearly 40% of the world's iron ore. The Australian Renewable Energy Agency (ARENA) is funding research projects involving direct reduced ironmaking (DRI) to reduce emissions. Companies such as Rio Tinto, BHP, and BlueScope are developing green steel projects.
Europe
European projects from HYBRIT, LKAB, Voestalpine, and ThyssenKrupp are pursuing strategies to reduce emissions. HYBRIT claims to produce green steel.
Top gas recovery in BF/BOF
Top gas from the blast furnace is normally expelled into the air. This gas contains CO2, H2, and CO. The top gas can be captured, the CO2 removed, and the reducing agents reinjected into the blast furnace. A 2012 study suggested that this process can reduce blast furnace emissions by 75%, while a 2017 study showed that emissions are reduced by 56.5% with carbon capture and storage, and by 26.2% if only the recycling of the reducing agents is used. To keep the captured carbon from entering the atmosphere, a method of storing it or using it would have to be found.
Another way to use the top gas is in a top recovery turbine which generates electricity, which thereby reduces external energy needs if electric arc smelting is used. Carbon could also be captured from coke oven gases. So far, the difficulty of separating the CO2 from other gases and components in the system, and the high cost of the equipment and infrastructure changes needed, have prevented adoption, but the emission reduction potential has been estimated to be up to 65% to 80%.
Hydrogen direct reduction
Hydrogen direct reduction (HDR) using hydrogen produced from emission-free power (green hydrogen) offers emission-free iron-making, because water is the only by-product of the reaction between iron oxide and hydrogen.
As of 2021, ArcelorMittal, Voestalpine, and TATA had committed to using green hydrogen to smelt iron. In 2024 the HYBRIT project in Sweden was using HDR.
For the European Union, it is estimated that the hydrogen demand for HDR would require 180 GW of renewable capacity.
Iron ore electrolysis
Another possible technology under development is iron ore electrolysis, where the reducing agent is electrons. One method is molten oxide electrolysis. The cell consists of an inert anode, a liquid oxide electrolyte (CaO, MgO, etc.), and molten ore. When heated to ~1,600 °C, the ore is reduced to iron and oxygen. As of 2022 Boston Metal was at the semi-industrial stage for this process, with plans to commercialize by 2026.
The Siderwin research project, involving ArcelorMittal, tested a different type of electrolysis, which operates at around 110 °C.
Scrap-use in BF/BOF
Scrap steelmaking uses steel that has either reached its end of life, or is excess metal from the manufacture of steel components. Steel is easy to separate and recycle due to its magnetism. Using scrap avoids the emission of 1.5 tons of CO2 for every ton of scrap used. Steel has one of the highest recycling rates of any material, with around 30% of the world's steel coming from recycled components. However, steel cannot be recycled endlessly, and the recycling processes, using arc furnaces, use electricity.
H2 enrichment in BF/BOF
In a blast furnace, iron oxides are reduced by a combination of CO, H2, and carbon. Only around 10% of the iron oxides are reduced by H2. With H2 enrichment, the proportion of iron oxides reduced by H2 is increased, so that less carbon is consumed and less CO2 is emitted. This process can reduce emissions by an estimated 20%.
Other strategies
The HIsarna ironmaking process is a way of producing iron in a cyclone converter furnace without the pre-processing steps of coking and agglomeration, which reduces CO2 emissions by around 20%.
One speculative idea is a project by SuSteel to develop a hydrogen plasma technology that reduces the ore with hydrogen at high operating temperatures.
Biomass such as charcoal or wood pellets is a potential alternative blast furnace fuel that avoids fossil fuels but still emits carbon. Emissions are reduced by 5% to 28%.
| Technology | Metallurgy | null |
189331 | https://en.wikipedia.org/wiki/Quantum%20indeterminacy | Quantum indeterminacy | Quantum indeterminacy is the apparent necessary incompleteness in the description of a physical system, that has become one of the characteristics of the standard description of quantum physics. Prior to quantum physics, it was thought that a physical system had a determinate state that uniquely determined all the values of its measurable properties, and, conversely, that the values of its measurable properties uniquely determined the state.
Quantum indeterminacy can be quantitatively characterized by a probability distribution on the set of outcomes of measurements of an observable. The distribution is uniquely determined by the system state, and moreover quantum mechanics provides a recipe for calculating this probability distribution.
Indeterminacy in measurement was not an innovation of quantum mechanics, since it had been established early on by experimentalists that errors in measurement may lead to indeterminate outcomes. By the latter half of the 18th century, measurement errors were well understood, and it was known that they could either be reduced by better equipment or accounted for by statistical error models. In quantum mechanics, however, indeterminacy is of a much more fundamental nature, having nothing to do with errors or disturbance.
Measurement
An adequate account of quantum indeterminacy requires a theory of measurement. Many theories have been proposed since the beginning of quantum mechanics and quantum measurement continues to be an active research area in both theoretical and experimental physics. Possibly the first systematic attempt at a mathematical theory was developed by John von Neumann. The kinds of measurements he investigated are now called projective measurements. That theory was based in turn on the theory of projection-valued measures for self-adjoint operators that had been recently developed (by von Neumann and independently by Marshall Stone) and the Hilbert space formulation of quantum mechanics (attributed by von Neumann to Paul Dirac).
In this formulation, the state of a physical system corresponds to a vector of length 1 in a Hilbert space H over the complex numbers. An observable is represented by a self-adjoint (i.e. Hermitian) operator A on H. If H is finite dimensional, by the spectral theorem, A has an orthonormal basis of eigenvectors. If the system is in state ψ, then immediately after measurement the system will occupy a state that is an eigenvector e of A and the observed value λ will be the corresponding eigenvalue in the equation Ae = λe. It is immediate from this that measurement in general will be non-deterministic. Quantum mechanics, moreover, gives a recipe for computing a probability distribution Pr on the possible outcomes given the initial system state is ψ. The probability is Pr(λ) = ⟨ψ, E(λ)ψ⟩,
where E(λ) is the projection onto the space of eigenvectors of A with eigenvalue λ.
Example
In this example, we consider a single spin 1/2 particle (such as an electron) in which we only consider the spin degree of freedom. The corresponding Hilbert space is the two-dimensional complex Hilbert space C2, with each quantum state corresponding to a unit vector in C2 (unique up to phase). In this case, the state space can be geometrically represented as the surface of a sphere.
The Pauli spin matrices σ1 = [0 1; 1 0], σ2 = [0 −i; i 0], σ3 = [1 0; 0 −1] (rows separated by semicolons)
are self-adjoint and correspond to spin-measurements along the 3 coordinate axes.
The Pauli matrices all have the eigenvalues +1, −1.
For σ1, these eigenvalues correspond to the eigenvectors (1/√2)(1, 1) and (1/√2)(1, −1), respectively.
For σ3, they correspond to the eigenvectors (1, 0) and (0, 1), respectively.
Thus in the state ψ = (1/√2)(1, 1),
σ1 has the determinate value +1, while measurement of σ3 can produce either +1, −1 each with probability 1/2. In fact, there is no state in which measurement of both σ1 and σ3 have determinate values.
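These two claims can be checked numerically from the Born rule Pr(λ) = |⟨e, ψ⟩|² stated earlier; the Python sketch below (using NumPy; everything in it restates the formulas above rather than adding new physics) reproduces the determinate σ1 outcome and the 50/50 σ3 statistics:

import numpy as np

# Pauli matrices sigma_1, sigma_3 and the state psi = (1/sqrt(2))(1, 1)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

def outcome_probabilities(obs, state):
    """Born rule: probability of each eigenvalue of obs when measuring state."""
    eigvals, eigvecs = np.linalg.eigh(obs)
    # Pr(lambda_k) = |<e_k, psi>|^2 for each normalized eigenvector e_k
    return {int(round(v)): abs(eigvecs[:, k].conj() @ state) ** 2
            for k, v in enumerate(eigvals)}

print(outcome_probabilities(s1, psi))  # {-1: 0.0, 1: 1.0}   determinate
print(outcome_probabilities(s3, psi))  # {-1: 0.5, 1: 0.5}   indeterminate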
There are various questions that can be asked about the above indeterminacy assertion.
1) Can the apparent indeterminacy be construed as in fact deterministic, but dependent upon quantities not modeled in the current theory, which would therefore be incomplete? More precisely, are there hidden variables that could account for the statistical indeterminacy in a completely classical way?
2) Can the indeterminacy be understood as a disturbance of the system being measured?
Von Neumann formulated the question 1) and provided an argument why the answer had to be no, if one accepted the formalism he was proposing. However, according to Bell, von Neumann's formal proof did not justify his informal conclusion. A definitive but partial negative answer to 1) has been established by experiment: because Bell's inequalities are violated, any such hidden variable(s) cannot be local (see Bell test experiments).
The answer to 2) depends on how disturbance is understood, particularly since measurement entails disturbance (however note that this is the observer effect, which is distinct from the uncertainty principle). Still, in the most natural interpretation the answer is also no. To see this, consider two sequences of measurements: (A) that measures exclusively σ1 and (B) that measures only σ3 of a spin system in the state ψ. The measurement outcomes of (A) are all +1, while the statistical distribution of the measurements (B) is still divided between +1, −1 with equal probability.
Other examples of indeterminacy
Quantum indeterminacy can also be illustrated in terms of a particle with a definitely measured momentum for which there must be a fundamental limit to how precisely its location can be specified. This quantum uncertainty principle can be expressed in terms of other variables, for example, a particle with a definitely measured energy has a fundamental limit to how precisely one can specify how long it will have that energy.
The magnitude involved in quantum uncertainty is on the order of the Planck constant (about 6.626 × 10^−34 J·s).
Indeterminacy and incompleteness
Quantum indeterminacy is the assertion that the state of a system does not determine a unique collection of values for all its measurable properties. Indeed, according to the Kochen–Specker theorem, in the quantum mechanical formalism it is impossible that, for a given quantum state, each one of these measurable properties (observables) has a determinate (sharp) value. The values of an observable will be obtained non-deterministically in accordance with a probability distribution that is uniquely determined by the system state. Note that the state is destroyed by measurement, so when we refer to a collection of values, each measured value in this collection must be obtained using a freshly prepared state.
This indeterminacy might be regarded as a kind of essential incompleteness in our description of a physical system. Notice however, that the indeterminacy as stated above only applies to values of measurements not to the quantum state. For example, in the spin 1/2 example discussed above, the system can be prepared in the state ψ by using measurement of σ1 as a filter that retains only those particles such that σ1 yields +1. By the von Neumann (so-called) postulates, immediately after the measurement the system is assuredly in the state ψ.
However, Albert Einstein believed that the quantum state cannot be a complete description of a physical system and, it is commonly thought, never came to terms with quantum mechanics. In fact, Einstein, Boris Podolsky and Nathan Rosen showed that if quantum mechanics is correct, then the classical view of how the real world works (at least after special relativity) is no longer tenable. This view included the following two ideas:
A measurable property of a physical system whose value can be predicted with certainty is actually an element of (local) reality (this was the terminology used by EPR).
Effects of local actions have a finite propagation speed.
This failure of the classical view was one of the conclusions of the EPR thought experiment in which two remotely located observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It was a conclusion of EPR, using the formal apparatus of quantum theory, that once Alice measured spin in the x direction, Bob's measurement in the x direction was determined with certainty, whereas immediately before Alice's measurement Bob's outcome was only statistically determined. From this it follows that either the value of spin in the x direction is not an element of reality, or that the effect of Alice's measurement has infinite speed of propagation.
Indeterminacy for mixed states
We have described indeterminacy for a quantum system that is in a pure state. Mixed states are a more general kind of state obtained by a statistical mixture of pure states. For mixed states
the "quantum recipe" for determining the probability distribution of a measurement is determined as follows:
Let A be an observable of a quantum mechanical system. A is given by a densely defined self-adjoint operator on H. The spectral measure of A is a projection-valued measure defined by the condition E_A(U) = 1_U(A) for every Borel subset U of R. Given a mixed state S, we introduce the distribution of A under S as follows: D_A(U) = Tr(E_A(U) S).
This is a probability measure defined on the Borel subsets of R that is the probability distribution obtained by measuring A in S.
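A minimal numerical illustration of this recipe (NumPy again; the particular 75/25 mixture is a made-up example, not from the source): for a qubit observable the spectral projections are rank one, and Tr(E S) gives the outcome probabilities directly.

import numpy as np

# Distribution of sigma_3 in a mixed state S: Pr(lambda) = Tr(E(lambda) S)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Statistical mixture: 75% prepared in (1, 0), 25% in (1, 1)/sqrt(2)
up = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
S = 0.75 * np.outer(up, up.conj()) + 0.25 * np.outer(plus, plus.conj())

eigvals, eigvecs = np.linalg.eigh(s3)
for k, lam in enumerate(eigvals):
    e = eigvecs[:, k]
    E = np.outer(e, e.conj())  # projection onto the lambda-eigenspace
    print(int(round(lam)), np.trace(E @ S).real)
# prints: -1 0.125, then 1 0.875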
Logical independence and quantum randomness
Quantum indeterminacy is often understood as information (or lack of it) whose existence we infer, occurring in individual quantum systems, prior to measurement. Quantum randomness is the statistical manifestation of that indeterminacy, witnessable in results of experiments repeated many times. However, the relationship between quantum indeterminacy and randomness is subtle and can be considered differently.
In classical physics, experiments of chance, such as coin-tossing and dice-throwing, are deterministic, in the sense that, perfect knowledge of the initial conditions would render outcomes perfectly predictable. The ‘randomness’ stems from ignorance of physical information in the initial toss or throw. In diametrical contrast, in the case of quantum physics, the theorems of Kochen and Specker, the inequalities of John Bell, and experimental evidence of Alain Aspect, all indicate that quantum randomness does not stem from any such physical information.
In 2008, Tomasz Paterek et al. provided an explanation in terms of mathematical information. They proved that quantum randomness is, exclusively, the output of measurement experiments whose input settings introduce logical independence into quantum systems.
Logical independence is a well-known phenomenon in Mathematical Logic. It refers to the null logical connectivity that exists between mathematical propositions (in the same language) that neither prove nor disprove one another.
In the work of Paterek et al., the researchers demonstrate a link connecting quantum randomness and logical independence in a formal system of Boolean propositions. In experiments measuring photon polarisation, Paterek et al. demonstrate statistics correlating predictable outcomes with logically dependent mathematical propositions, and random outcomes with propositions that are logically independent.
In 2020, Steve Faulkner reported on work following up on the findings of Tomasz Paterek et al.; showing what logical independence in the Paterek Boolean propositions means, in the domain of Matrix Mechanics proper. He showed how indeterminacy's indefiniteness arises in evolved density operators representing mixed states, where measurement processes encounter irreversible 'lost history' and ingression of ambiguity.
| Physical sciences | Quantum mechanics | Physics |
189371 | https://en.wikipedia.org/wiki/Lactobacillus | Lactobacillus | Lactobacillus is a genus of gram-positive, aerotolerant anaerobes or microaerophilic, rod-shaped, non-spore-forming bacteria. Until 2020, the genus Lactobacillus comprised over 260 phylogenetically, ecologically, and metabolically diverse species; a taxonomic revision of the genus assigned lactobacilli to 25 genera (see below).
Lactobacillus species constitute a significant component of the human and animal microbiota at a number of body sites, such as the digestive system and the female genital system. In women of European ancestry, Lactobacillus species are normally a major part of the vaginal microbiota. Lactobacillus forms biofilms in the vaginal and gut microbiota, allowing them to persist during harsh environmental conditions and maintain ample populations. Lactobacillus exhibits a mutualistic relationship with the human body, as it protects the host against potential invasions by pathogens, and in turn, the host provides a source of nutrients. Lactobacilli are among the most common probiotics found in food such as yogurt, and the genus has diverse applications for maintaining human well-being, as it can help treat diarrhea, vaginal infections, and skin disorders such as eczema.
Metabolism
Lactobacilli are homofermentative, i.e. hexoses are metabolised by glycolysis to lactate as major end product, or heterofermentative, i.e. hexoses are metabolised by the phosphoketolase pathway to lactate, CO2 and acetate or ethanol as major end products. Most lactobacilli are aerotolerant and some species respire if heme and menaquinone are present in the growth medium. Aerotolerance of lactobacilli is manganese-dependent and has been explored (and explained) in Lactiplantibacillus plantarum (previously Lactobacillus plantarum). Lactobacilli generally do not require iron for growth.
The Lactobacillaceae are the only family of the lactic acid bacteria (LAB) that includes homofermentative and heterofermentative organisms; in the Lactobacillaceae, homofermentative or heterofermentative metabolism is shared by all strains of a genus. Lactobacillus species are all homofermentative, do not express pyruvate formate lyase, and most species do not ferment pentoses. In L. crispatus, pentose metabolism is strain specific and acquired by lateral gene transfer.
Genomes
The genomes of lactobacilli are highly variable, ranging in size from 1.2 to 4.9 Mb (megabases). Accordingly, the number of protein-coding genes ranges from 1,267 to about 4,758 genes (in Fructilactobacillus sanfranciscensis and Lentilactobacillus parakefiri, respectively). Even within a single species there can be substantial variation. For instance, strains of L. crispatus have genome sizes ranging from 1.83 to 2.7 Mb, or 1,839 to 2,688 open reading frames. Lactobacillus contains a wealth of compound microsatellites in the coding region of the genome, which are imperfect and have variant motifs. Many lactobacilli also contain multiple plasmids. A recent study has revealed that plasmids encode the genes which are required for adaptation of lactobacilli to the given environment.
Species
The genus Lactobacillus comprises the following species:
Lactobacillus acetotolerans Entani et al. 1986
Lactobacillus acidophilus (Moro 1900) Hansen and Mocquot 1970 (Approved Lists 1980)
"Lactobacillus alvi" Kim et al. 2011
Lactobacillus amylolyticus Bohak et al. 1999
Lactobacillus amylovorus Nakamura 1981
Lactobacillus apis Killer et al. 2014
"Lactobacillus backi" Bohak et al. 2006
Lactobacillus bombicola Praet et al. 2015
Lactobacillus colini Zhang et al. 2017
Lactobacillus crispatus (Brygoo and Aladame 1955) Moore and Holdeman 1970 (Approved Lists 1980)
Lactobacillus delbrueckii (Leichmann 1896) Beijerinck 1901 (Approved Lists 1980)
Lactobacillus equicursoris Morita et al. 2010
Lactobacillus fornicalis Dicks et al. 2000
Lactobacillus gallinarum Fujisawa et al. 1992
Lactobacillus gasseri Lauer and Kandler 1980
Lactobacillus gigeriorum Cousin et al. 2012
"Lactobacillus ginsenosidimutans" Jung et al. 2013
Lactobacillus hamsteri Mitsuoka and Fujisawa 1988
Lactobacillus helsingborgensis Olofsson et al. 2014
Lactobacillus helveticus (Orla-Jensen 1919) Bergey et al. 1925 (Approved Lists 1980)
Lactobacillus hominis Cousin et al. 2013
Lactobacillus iners Falsen et al. 1999
Lactobacillus intestinalis (ex Hemme 1974) Fujisawa et al. 1990
Lactobacillus jensenii Gasser et al. 1970 (Approved Lists 1980)
"Lactobacillus jinshani" Yu et al. 2020
Lactobacillus johnsonii Fujisawa et al. 1992
Lactobacillus kalixensis Roos et al. 2005
Lactobacillus kefiranofaciens Fujisawa et al. 1988
Lactobacillus kimbladii Olofsson et al. 2014
Lactobacillus kitasatonis Mukai et al. 2003
Lactobacillus kullabergensis Olofsson et al. 2014
Lactobacillus melliventris Olofsson et al. 2014
Lactobacillus mulieris Rocha et al. 2020
Lactobacillus nasalidis Suzuki-Hashido et al. 2021
Lactobacillus panisapium Wang et al. 2018
Lactobacillus paragasseri Tanizawa et al. 2018
Lactobacillus pasteurii Cousin et al. 2013
Lactobacillus porci Kim et al. 2018
Lactobacillus psittaci Lawson et al. 2001
"Lactobacillus raoultii" Nicaise et al. 2018
Lactobacillus rodentium Killer et al. 2014
Lactobacillus rogosae Holdeman and Moore 1974 (Approved Lists 1980)
Lactobacillus taiwanensis Wang et al. 2009
"Lactobacillus thermophilus" Ayers and Johnson 1924
"Lactobacillus timonensis" Afouda et al. 2017
Lactobacillus ultunensis Roos et al. 2005
Lactobacillus xujianguonis Meng et al. 2020
Taxonomy
The genus Lactobacillus currently contains 44 species which are adapted to vertebrate hosts or to insects. In recent years, other members of the genus Lactobacillus (formerly known as the Leuconostoc branch of Lactobacillus) have been reclassified into the genera Atopobium, Carnobacterium, Weissella, Oenococcus, and Leuconostoc. The Pediococcus species P. dextrinicus has been reclassified as Lapidilactobacillus dextrinicus and most lactobacilli were assigned to Paralactobacillus or one of the 23 novel genera of the Lactobacillaceae. Two websites inform on the assignment of species to the novel genera or species (http://www.lactobacillus.uantwerpen.be/; http://www.lactobacillus.ualberta.ca/).
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature and the phylogeny is based on whole-genome sequences.
Human health
Vaginal tract
Lactobacillus s.s. species are considered "keystone species" in the vaginal flora of reproductive-age women. Most, but not all, healthy women have vaginal floras dominated by one of four species of Lactobacillus: L. iners, L. crispatus, L. gasseri and L. jensenii. Other women have a more diverse mix of anaerobic microorganisms, though are still considered to have a healthy microbiome.
Interactions with pathogens
Lactobacilli produce lactic acid, which contributes to the vaginal acidity, and this lowered pH is generally accepted to be the main mechanism controlling the composition of the vaginal microflora.
Lactobacilli are also proposed to produce hydrogen peroxide, which inhibits the growth and virulence of the fungal pathogen Candida albicans in vitro, though this is argued not to be the main mechanism in vivo.
In vitro studies have also shown that lactobacilli reduce the pathogenicity of C. albicans through the production of organic acids and certain metabolites. Both the presence of metabolites, such as sodium butyrate, and the decrease in environmental pH caused by the organic acids reduce the growth of hyphae in C. albicans, which reduces its pathogenicity. Lactobacilli also reduce the pathogenicity of C. albicans by reducing C. albicans biofilm formation. Biofilm formation is reduced by both the competition from lactobacilli, and the formation of defective biofilms which is linked to the reduced hypha growth mentioned earlier. On the other hand, following antibiotic therapy, certain Candida species can suppress the regrowth of lactobacilli at body sites where they cohabitate, such as in the gastrointestinal tract.
In addition to its effects on C. albicans, Lactobacillus sp. also interact with other pathogens. For example, Limosilactobacillus reuteri (formerly Lactobacillus reuteri) can inhibit the growth of many different bacterial species by using glycerol to produce the antimicrobial substance called reuterin. Another example is Ligilactobacillus salivarius (formerly Lactobacillus salivarius), which interacts with many pathogens through the production of salivaricin B, a bacteriocin.
Probiotics
Because of the interactions with other microbes, fermenting bacteria like lactic acid bacteria (LAB) are now in use as probiotics with many applications.
Lactobacilli administered in combination with other probiotics benefit cases of irritable bowel syndrome (IBS), although the extent of efficacy is still uncertain. The probiotics help treat IBS by returning homeostasis when the gut microbiota experiences unusually high levels of opportunistic bacteria. In addition, lactobacilli can be administered as probiotics during cases of infection by the ulcer-causing bacterium Helicobacter pylori. Helicobacter pylori is linked to cancer, and antibiotic resistance impedes the success of current antibiotic-based eradication treatments. When probiotic lactobacilli are administered along with the treatment as an adjuvant, the treatment's efficacy is substantially increased and side effects may be lessened. In addition, lactobacilli with other probiotic organisms in ripened milk and yogurt aid development of immunity in the intestinal mucosa in humans by raising the number of IgA(+) cells.
Gastroesophageal reflux disease (GERD) is a common condition associated with bile acid-induced oxidative stress and accumulation of reactive oxygen species (ROS) in esophageal tissues that cause inflammation and DNA damage. In an experimental model of GERD, Lactobacillus species (L. acidophilus, L. plantarum and L. fermentum) facilitated the repair of DNA damage caused by bile-induced ROS. For patients with GERD, there is significant interest in the anti-inflammatory effect of Lactobacilli that may help prevent progression to Barrett’s esophagus and esophageal adenocarcinoma.
Given the known microbial associations, lactobacilli are currently available as probiotics to help control urogenital and vaginal infections, such as bacterial vaginosis (BV). Lactobacilli produce bacteriocins to suppress pathogenic growth of certain bacteria, as well as lactic acid, which lowers the vaginal pH to around 4.5 or less, hampering the survival of other bacteria.
In children, lactobacilli such as Lacticaseibacillus rhamnosus (previously L. rhamnosus) are associated with a reduction of atopic eczema, also known as dermatitis, due to anti-inflammatory cytokines secreted by this probiotic bacteria.
Oral health
Some lactobacilli have been associated with cases of dental caries (cavities). Lactic acid can corrode teeth, and the Lactobacillus count in saliva has been used as a "caries test" for many years. Lactobacilli characteristically cause existing carious lesions to progress, especially those in coronal caries. The issue is, however, complex, as recent studies show probiotics can allow beneficial lactobacilli to populate sites on teeth, preventing streptococcal pathogens from taking hold and inducing dental decay. The scientific research of lactobacilli in relation to oral health is a new field and only a few studies and results have been published. Some studies have provided evidence that certain lactobacilli can act as probiotics for oral health. Some species, but not all, show evidence of protecting against dental caries. Due to these studies, such probiotics have been incorporated into chewing gum and lozenges. There is also evidence that certain lactobacilli are beneficial in defending against periodontal diseases such as gingivitis and periodontitis.
Food production
Species of Lactobacillus (and related genera) comprise many food fermenting lactic acid bacteria and are used as starter cultures in industry for controlled fermentation in the production of wine, yogurt, cheese, sauerkraut, pickles, beer, cider, kimchi, cocoa, kefir, and other fermented foods, as well as animal feeds and the bokashi soil amendment. Lactobacillus species are dominant in yogurt, cheese, and sourdough fermentations.
Their importance in fermentation comes from both metabolism of the food itself, as well as the inhibition of growth of other potentially pathogenic microbes. The antibacterial and antifungal activity of lactobacilli relies on production of bacteriocins and low molecular weight compounds that inhibits these microorganisms.
Sourdough bread is made either spontaneously, by taking advantage of the bacteria naturally present in flour, or by using a "starter culture", which is a symbiotic culture of yeast and lactic acid bacteria growing in a water and flour medium. The bacteria metabolize sugars into lactic acid, which lowers the pH of their environment and creates the signature sourness associated with yogurt, sauerkraut, etc.
In many traditional pickling processes, vegetables are submerged in brine, and salt-tolerant lactobacilli feed on natural sugars found in the vegetables. The resulting mix of salt and lactic acid is a hostile environment for other microbes, such as fungi, and the vegetables are thus preserved—remaining edible for long periods.
Lactobacilli, especially pediococci and L. brevis, are some of the most common beer spoilage organisms. They are, however, essential to the production of sour beers such as Belgian lambics and American wild ales, giving the beer a distinct tart flavor.
Scientist Elie Metchnikoff won a Nobel prize in 1908 for his work on LAB, the connection to food, and possible usage as a probiotic.
| Biology and health sciences | Gram-positive bacteria | Plants |
189556 | https://en.wikipedia.org/wiki/Anorthite | Anorthite | Anorthite (an = not, ortho = straight) is the calcium endmember of the plagioclase feldspar mineral series. The chemical formula of pure anorthite is CaAl2Si2O8. Anorthite is found in mafic igneous rocks. Anorthite is rare on the Earth but abundant on the Moon.
Mineralogy
Anorthite is the calcium-rich endmember of the plagioclase solid solution series, the other endmember being albite (NaAlSi3O8). Anorthite also refers to plagioclase compositions with more than 90 molecular percent of the anorthite endmember. The composition of plagioclases is often expressed as a molar percentage of An%, or (for a specific quantity) Ann, where n = Ca/(Ca + Na) × 100. This equation predominantly works in a terrestrial context; exotic locales and in particular Lunar rocks may need to account for other cations, such as Fe2+, to explain differences between optically and structurally derived An% data observed in Lunar anorthites.
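The An% formula translates directly into code; the sketch below is a trivial Python helper with hypothetical molar inputs, included only to make the convention explicit:

def anorthite_percent(ca_moles: float, na_moles: float) -> float:
    """An% = Ca / (Ca + Na) * 100, the molar anorthite fraction of a plagioclase."""
    return ca_moles / (ca_moles + na_moles) * 100

# A plagioclase with 19 parts Ca to 1 part Na is An95, i.e. within the
# "more than 90 molecular percent" range called anorthite above
print(anorthite_percent(19.0, 1.0))  # 95.0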
At standard pressure, pure (An100) anorthite melts at about 1,553 °C.
Occurrence
Anorthite is a rare compositional variety of plagioclase. It occurs in mafic igneous rock. It also occurs in metamorphic rocks of granulite facies, in metamorphosed carbonate rocks, and corundum deposits. Its type localities are Monte Somma and Valle di Fassa, Italy. It was first described in 1823. It is rarer in surficial rocks than it otherwise would be due to its high weathering potential in the Goldich dissolution series.
It also makes up much of the lunar highlands; the Genesis Rock, collected during the 1971 Apollo 15 mission, is made of anorthosite, a rock composed largely of anorthite. Anorthite was discovered in samples from comet Wild 2, and the mineral is an important constituent of Ca-Al-rich inclusions in rare varieties of chondritic meteorites.
| Physical sciences | Silicate minerals | Earth science |
189557 | https://en.wikipedia.org/wiki/Guillemot | Guillemot | Guillemot is the common name for several species of seabird in the Alcidae or auk family, part of the order Charadriiformes. In Europe, the term covers two genera, Uria and Cepphus. In North America the Uria species are called murres and only the Cepphus species are called "guillemots".
The current spelling guillemot is of French origin, first attested by Pierre Belon in 1555, but derived from Old (11th-century) French willelm, and matched by English variants willock (attested 1631), willick, will and wilkie, all from forms of the name William, but ultimately onomatopoeic from the loud, high-pitched "will, willem" begging calls of the newly fledged young of the common guillemot. The American name murre, also known from England (particularly Cornwall) from the 17th century, is, by contrast, onomatopoeic of the growling call of adult common guillemots.
The two living species of Uria, together with the razorbill, little auk, and the extinct great auk, make up the tribe Alcini. They have distinctly white bellies, thicker and longer bills than Cepphus, and form very dense colonies on cliffs during the reproductive season. Guillemot eggs are large (around 11% of female weight), pyriform in shape, and colourful, making them attractive targets for egg collectors.
The three living species of Cepphus form a tribe of their own, Cepphini. They are smaller than the Uria species and have black bellies in breeding plumage, rounder heads and bright red feet.
Systematics
Uria
Common murre or common guillemot, Uria aalge
Thick-billed murre or Brünnich's guillemot, Uria lomvia
Some prehistoric species are also known:
Uria brodkorbi (Monterey or Sisquoc Late Miocene of Lompoc, USA)
Uria affinis (Late Pleistocene of E USA)—possibly a subspecies of U. lomvia
Uria paleohesperis
Uria brodkorbi is the only known occurrence of the Alcini tribe in the temperate to subtropical Pacific, except for the very fringe of the range of U. aalge.
Cepphus
Black guillemot or tystie, Cepphus grylle
Pigeon guillemot, Cepphus columba
Spectacled guillemot, Cepphus carbo
As in other genera of auks, fossils of prehistoric forms of Cepphus have been found:
Cepphus olsoni (San Luis Rey River Late Miocene—Early Pliocene of W USA)
Cepphus cf. columba (Lawrence Canyon Early Pliocene of W USA)
Cepphus cf. grylle (San Diego Late Pliocene, W USA)
The latter two resemble the extant species, but because of the considerable distance in time or space from their current occurrence, they may represent distinct species.
Pyriform egg
Guillemots lay a single pyriform (pear-shaped) egg directly on a cliff edge in dense breeding colonies; they do not build a nest, which allows them to nest close to one another even on uneven cliff edges, the density protecting their eggs and chicks from predatory gulls. While the egg would seem vulnerable to rolling off the edge, this does not usually happen. It has been suggested that the egg might simply spin if disturbed, or roll in an arc that prevents it from falling over the cliff edge, but there is no evidence for either explanation. However, a pyriform egg placed experimentally on a steep slope did not roll, while a less pointed, more ellipsoidal egg did. Ornithologist Tim Birkhead experimented and found that the arc in which a pyriform egg rolls is wider than most cliff ledges, so the arc alone does not protect against falls. He attributed the egg's stability to its long straight edge resting on the ground, creating more friction and making it less likely to move and fall.
Guillemot eggs were collected until the late 1920s on Scotland's St Kilda islands by men scaling the cliffs. The eggs were buried in peat ash to be eaten through the cold northern winters, and were considered similar to duck eggs in taste and nutrition.
Bounciness in chicks
Guillemot chicks hatch on rocky cliffs by the sea. They leave the nest by jumping off the cliffside before their wings are strong enough for flight, parachuting down rather than flying. Their dense, downy feathers and underdeveloped wings cushion the impact, so they bounce slightly on hitting the ground and usually escape serious harm.
| Biology and health sciences | Charadriiformes | Animals |
189734 | https://en.wikipedia.org/wiki/Transfinite%20number | Transfinite number | In mathematics, transfinite numbers or infinite numbers are numbers that are "infinite" in the sense that they are larger than all finite numbers. These include the transfinite cardinals, which are cardinal numbers used to quantify the size of infinite sets, and the transfinite ordinals, which are ordinal numbers used to provide an ordering of infinite sets. The term transfinite was coined in 1895 by Georg Cantor, who wished to avoid some of the implications of the word infinite in connection with these objects, which were, nevertheless, not finite. Few contemporary writers share these qualms; it is now accepted usage to refer to transfinite cardinals and ordinals as infinite numbers. Nevertheless, the term transfinite also remains in use.
Notable work on transfinite numbers was done by Wacław Sierpiński: Leçons sur les nombres transfinis (1928 book) much expanded into Cardinal and Ordinal Numbers (1958, 2nd ed. 1965).
Definition
Any finite natural number can be used in at least two ways: as an ordinal and as a cardinal. Cardinal numbers specify the size of sets (e.g., a bag of five marbles), whereas ordinal numbers specify the order of a member within an ordered set (e.g., "the third man from the left" or "the twenty-seventh day of January"). When extended to transfinite numbers, these two concepts are no longer in one-to-one correspondence. A transfinite cardinal number is used to describe the size of an infinitely large set, while a transfinite ordinal is used to describe the location within an infinitely large set that is ordered. The most notable ordinal and cardinal numbers are, respectively:
ω (omega): the lowest transfinite ordinal number. It is also the order type of the natural numbers under their usual linear ordering.
ℵ₀ (aleph-null): the first transfinite cardinal number. It is also the cardinality of the natural numbers. If the axiom of choice holds, the next higher cardinal number is aleph-one, ℵ₁. If not, there may be other cardinals which are incomparable with aleph-one and larger than aleph-null. Either way, there are no cardinals between aleph-null and aleph-one.
The continuum hypothesis is the proposition that there are no intermediate cardinal numbers between ℵ₀ and the cardinality of the continuum (the cardinality of the set of real numbers): 2^ℵ₀ = ℵ₁, or equivalently that ℵ₁ is the cardinality of the set of real numbers. In Zermelo–Fraenkel set theory, neither the continuum hypothesis nor its negation can be proved.
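In standard LaTeX notation, the hypothesis reads:

    % Continuum hypothesis (CH): no cardinal lies strictly between the
    % cardinality of the naturals and that of the reals; equivalently,
    2^{\aleph_0} = \aleph_1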
Some authors, including P. Suppes and J. Rubin, use the term transfinite cardinal to refer to the cardinality of a Dedekind-infinite set in contexts where this may not be equivalent to "infinite cardinal"; that is, in contexts where the axiom of countable choice is not assumed or is not known to hold. Given this definition, the following are all equivalent:
m is a transfinite cardinal. That is, there is a Dedekind-infinite set A such that the cardinality of A is m.
There is a cardinal n such that ℵ₀ + n = m.
Although transfinite ordinals and cardinals both generalize only the natural numbers, other systems of numbers, including the hyperreal numbers and surreal numbers, provide generalizations of the real numbers.
Examples
In Cantor's theory of ordinal numbers, every integer number must have a successor. The next integer after all the regular ones, that is the first infinite integer, is named ω. In this context, ω + 1 is larger than ω, and ω·2, ω² and ω^ω are larger still. Arithmetic expressions containing ω specify an ordinal number, and can be thought of as the set of all integers up to that number. A given number generally has multiple expressions that represent it; however, there is a unique Cantor normal form that represents it, essentially a finite sequence of digits that give coefficients of descending powers of ω.
Not all infinite integers can be represented by a Cantor normal form, however, and the first one that cannot is given by the limit ω^ω^ω^… and is termed ε₀. ε₀ is the smallest solution to ω^ε = ε, and the following solutions ε₁, …, ε_ω, …, ε_(ε₀), … give larger ordinals still, and can be followed until one reaches the limit ε_ε_ε_…, which is the first solution to ε_α = α. This means that in order to be able to specify all transfinite integers, one must think up an infinite sequence of names: because if one were to specify a single largest integer, one would then always be able to mention its larger successor. But as noted by Cantor, even this only allows one to reach the lowest class of transfinite numbers: those whose set sizes correspond to the cardinal number ℵ₀.
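For reference, the Cantor normal form and the definition of ε₀ used above can be written in standard LaTeX notation:

    % Cantor normal form: every ordinal 0 < \alpha < \varepsilon_0 has a unique
    % representation with descending exponents and positive integer coefficients:
    \alpha = \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k,
    \qquad \beta_1 > \beta_2 > \cdots > \beta_k, \quad c_i \in \mathbb{N}_{>0}

    % \varepsilon_0 is the first fixed point of base-omega exponentiation:
    \varepsilon_0 = \sup \{ \omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \ldots \},
    \qquad \omega^{\varepsilon_0} = \varepsilon_0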
| Mathematics | Basics | null |
189749 | https://en.wikipedia.org/wiki/Automaton | Automaton | An automaton (; : automata or automatons) is a relatively self-operating machine, or control mechanism designed to automatically follow a sequence of operations, or respond to predetermined instructions. Some automata, such as bellstrikers in mechanical clocks, are designed to give the illusion to the casual observer that they are operating under their own power or will, like a mechanical robot. The term has long been commonly associated with automated puppets that resemble moving humans or animals, built to impress and/or to entertain people.
Animatronics are a modern type of automata with electronics, often used for the portrayal of characters or creatures in films and in theme park attractions.
Etymology
The word is the latinization of the Ancient Greek αὐτόματον (automaton), which means "acting of one's own will". It was first used by Homer to describe an automatic door opening, or the automatic movement of wheeled tripods. It is more often used to describe non-electronic moving machines, especially those that have been made to resemble human or animal actions, such as the jacks on old public striking clocks, or the cuckoo and any other animated figures on a cuckoo clock.
History
Ancient
There are many examples of automata in Greek mythology: Hephaestus created automata for his workshop; Talos was an artificial man of bronze; King Alkinous of the Phaiakians employed gold and silver watchdogs. According to Aristotle, Daedalus used quicksilver to make his wooden statue of Aphrodite move, and in other Greek legends he used it to give his moving statues a voice.
The automata in the Hellenistic world were intended as tools, toys, religious spectacles, or prototypes for demonstrating basic scientific principles. Numerous water-powered automata were built by Ktesibios, a Greek inventor and the first head of the Great Library of Alexandria; for example, he "used water to sound a whistle and make a model owl move. He had invented the world's first 'cuckoo clock'". This tradition continued in Alexandria with inventors such as the Greek mathematician Hero of Alexandria (sometimes known as Heron), whose writings on hydraulics, pneumatics, and mechanics described siphons, a fire engine, a water organ, the aeolipile, and a programmable cart. Philo of Byzantium was famous for his inventions.
Complex mechanical devices are known to have existed in Hellenistic Greece, though the only surviving example is the Antikythera mechanism, the earliest known analog computer. The clockwork is thought to have come originally from Rhodes, where there was apparently a tradition of mechanical engineering; the island was renowned for its automata; to quote Pindar's seventh Olympic Ode:
The animated figures stand
Adorning every public street
And seem to breathe in stone, or
move their marble feet.
However, the information gleaned from recent scans of the fragments indicate that it may have come from the colonies of Corinth in Sicily and implies a connection with Archimedes.
According to Jewish legend, King Solomon used his wisdom to design a throne with mechanical animals which hailed him as king when he ascended it; upon sitting down an eagle would place a crown upon his head, and a dove would bring him a Torah scroll. It is also said that when King Solomon stepped upon the throne, a mechanism was set in motion. As soon as he stepped upon the first step, a golden ox and a golden lion each stretched out one foot to support him and help him rise to the next step. On each side, the animals helped the King up until he was comfortably seated upon the throne.
In ancient China, a curious account of automata is found in the Lie Zi text, believed to have originated around 400 BCE and compiled around the fourth century CE. Within it there is a description of a much earlier encounter between King Mu of Zhou (1023–957 BCE) and a mechanical engineer known as Yan Shi, an 'artificer'. The latter proudly presented the king with a very realistic and detailed life-size, human-shaped figure of his mechanical handiwork.
Other notable examples of automata include Archytas' dove, mentioned by Aulus Gellius. Similar Chinese accounts of flying automata are written of the 5th-century BC Mohist philosopher Mozi and his contemporary Lu Ban, who made artificial wooden birds that could successfully fly, according to several early texts.
Medieval
The manufacturing tradition of automata continued in the Greek world well into the Middle Ages. On his visit to Constantinople in 949, ambassador Liutprand of Cremona described automata in the emperor Theophilos' palace, including singing birds and roaring, moving lions.
Similar automata in the throne room (singing birds, roaring and moving lions) were described by Luitprand's contemporary the Byzantine emperor Constantine Porphyrogenitus, in his book De Ceremoniis (Perì tês Basileíou Tákseōs).
In the mid-8th century, the first wind-powered automata were built: "statues that turned with the wind over the domes of the four gates and the palace complex of the Round City of Baghdad". The "public spectacle of wind-powered statues had its private counterpart in the 'Abbasid palaces where automata of various types were predominantly displayed." Also in the 8th century, the Muslim alchemist, Jābir ibn Hayyān (Geber), included recipes for constructing artificial snakes, scorpions, and humans that would be subject to their creator's control in his coded Book of Stones. In 827, Abbasid caliph al-Ma'mun had a silver and golden tree in his palace in Baghdad, which had the features of an automatic machine. There were metal birds that sang automatically on the swinging branches of this tree built by Muslim inventors and engineers. The Abbasid caliph al-Muqtadir also had a silver and golden tree in his palace in Baghdad in 917, with birds on it flapping their wings and singing. In the 9th century, the Banū Mūsā brothers invented a programmable automatic flute player, which they described in their Book of Ingenious Devices.
Al-Jazari described complex programmable humanoid automata amongst other machines he designed and constructed in the Book of Knowledge of Ingenious Mechanical Devices in 1206. One of his automata was a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. Its mechanism included a programmable drum machine with pegs (cams) that bump into little levers operating the percussion. The drummer could be made to play different rhythms and drum patterns by moving the pegs around.
Al-Jazari constructed a hand washing automaton first employing the flush mechanism now used in modern toilets. It features a female automaton standing by a basin filled with water. When the user pulls the lever, the water drains and the automaton refills the basin. His "peacock fountain" was another more sophisticated hand washing device featuring humanoid automata as servants who offer soap and towels. Mark E. Rosheim describes it as follows: "Pulling a plug on the peacock's tail releases water out of the beak; as the dirty water from the basin fills the hollow base a float rises and actuates a linkage which makes a servant figure appear from behind a door under the peacock and offer soap. When more water is used, a second float at a higher level trips and causes the appearance of a second servant figure—with a towel!"
Al-Jazari thus appears to have been the first inventor to display an interest in creating human-like machines for practical purposes such as manipulating the environment for human comfort. Lamia Balafrej has also pointed out the prevalence of the figure of the automated slave in al-Jazari's treatise. Automated slaves were a frequent motif in ancient and medieval literature but it was not so common to find them described in a technical book. Balafrej has also written about automated female slaves, which appeared in timekeepers and as liquid-serving devices in medieval Arabic sources, thus suggesting a link between feminized forms of labor like housekeeping, medieval slavery, and the imaginary of automation.
In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.
Samarangana Sutradhara, a Sanskrit treatise by Bhoja (11th century), includes a chapter about the construction of mechanical contrivances (automata), including mechanical bees and birds, fountains shaped like humans and animals, and male and female dolls that refilled oil lamps, danced, played instruments, and re-enacted scenes from Hindu mythology.
Villard de Honnecourt, in his 1230s sketchbook, depicted an early escapement mechanism in a drawing titled How to make an angel keep pointing his finger toward the Sun, showing an angel that would perpetually turn to face the sun. He also drew an automaton of a bird with jointed wings, a design later implemented in clocks.
At the end of the thirteenth century, Robert II, Count of Artois, built a pleasure garden at his castle at Hesdin that incorporated several automata as entertainment in the walled park. The work was conducted by local workmen and overseen by the Italian knight Renaud Coignet. It included monkey marionettes, a sundial supported by lions and "wild men", mechanized birds, mechanized fountains and a bellows-operated organ. The park was famed for its automata well into the fifteenth century before it was destroyed by English soldiers in the sixteenth century.
The Chinese author Xiao Xun wrote that when the Ming dynasty founder Hongwu (r. 1368–1398) was destroying the palaces of Khanbaliq belonging to the previous Yuan dynasty, there were—among many other mechanical devices—automata found that were in the shape of tigers.
Renaissance and early modern
The Renaissance witnessed a considerable revival of interest in automata. Hero's treatises were edited and translated into Latin and Italian. Hydraulic and pneumatic automata, similar to those described by Hero, were created for garden grottoes.
Giovanni Fontana, a Paduan engineer, in 1420 developed the Bellicorum instrumentorum liber, which includes a puppet of a camelid driven by a clothed primate twice the height of a human being and an automaton of Mary Magdalene. He also created mechanical devils and rocket-propelled animal automata.
While functional, early clocks were also often designed as novelties and spectacles which integrated features of automata. Many big and complex clocks with automated figures were built as public spectacles in European town centres. One of the earliest of these large clocks was the Strasbourg astronomical clock, built in the 14th century, which takes up an entire side of a cathedral wall. It contained an astronomical calendar and automata depicting animals, saints and the life of Christ. The mechanical rooster of the Strasbourg clock was active from 1352 to 1789. The clock still functions to this day, but it has undergone several restorations since its initial construction. The Prague astronomical clock was built in 1410; animated figures were added from the 17th century onwards. Numerous clockwork automata were manufactured in the 16th century, principally by the goldsmiths of the Free Imperial Cities of central Europe. These wondrous devices found a home in the cabinets of curiosities or Wunderkammern of the princely courts of Europe.
In 1454, Duke Philip the Good created an entertainment show, the extravagant Feast of the Pheasant, which was intended to persuade the Duke's peers to join a crusade against the Ottomans but ended up being a grand display of automata, giants, and dwarves.
A banquet in Camilla of Aragon's honor in Italy, 1475, featured a lifelike automated camel. The spectacle was a part of a larger parade which continued over days.
Leonardo da Vinci sketched a complex mechanical knight, which he may have built and exhibited at a celebration hosted by Ludovico Sforza at the court of Milan around 1495. The design of Leonardo's robot was not rediscovered until the 1950s. A functional replica was later built that could move its arms, twist its head, and sit up.
Da Vinci is frequently credited with constructing a mechanical lion, which he presented to King François I in Lyon in 1515. Although no record of the device's original design remains, a recreation of the piece is housed at the Château du Clos Lucé.
The Smithsonian Institution has in its collection a clockwork monk, possibly dating as early as 1560. The monk is driven by a key-wound spring and walks the path of a square, striking his chest with his right arm while raising and lowering a small wooden cross and rosary in his left hand, turning and nodding his head, rolling his eyes, and mouthing silent obsequies. From time to time, he brings the cross to his lips and kisses it. It is believed that the monk was manufactured by Juanelo Turriano, mechanician to the Holy Roman Emperor Charles V.
The first description of a modern cuckoo clock was by the Augsburg nobleman Philipp Hainhofer in 1629. The clock belonged to Prince Elector August von Sachsen. By 1650, the workings of mechanical cuckoos were understood and were widely disseminated in Athanasius Kircher's handbook on music, Musurgia Universalis. In what is the first documented description of how a mechanical cuckoo works, a mechanical organ with several automated figures is described. In 18th-century Germany, clockmakers began making cuckoo clocks for sale. Clock shops selling cuckoo clocks became commonplace in the Black Forest region by the middle of the 18th century.
Japan adopted clockwork automata in the early 17th century as "karakuri" puppets. In 1662, Takeda Omi completed his first butai karakuri and then built several of these large puppets for theatrical exhibitions. Karakuri puppets went through a golden age during the Edo period (1603–1867).
A new attitude towards automata is to be found in René Descartes, who suggested that the bodies of animals are nothing more than complex machines – the bones, muscles and organs could be replaced with cogs, pistons, and cams. Thus mechanism became the standard to which Nature and the organism were compared. France in the 17th century was the birthplace of those ingenious mechanical toys that were to become prototypes for the engines of the Industrial Revolution. Thus, in 1649, when Louis XIV was still a child, François-Joseph de Camus designed for him a miniature coach, complete with horses and footmen, a page, and a lady within the coach; all these figures exhibited perfect movement. According to Labat, General de Gennes constructed in 1688, in addition to machines for gunnery and navigation, a peacock that walked and ate. Athanasius Kircher produced many automata to create Jesuit shows, including a statue which spoke and listened via a speaking tube.
The world's first successfully built biomechanical automaton is considered to be The Flute Player, which could play twelve songs, created by the French engineer Jacques de Vaucanson in 1737. He also constructed The Tambourine Player and the Digesting Duck, a mechanical duck that – apart from quacking and flapping its wings – gave the illusion of eating and defecating, seeming to endorse Cartesian ideas that animals are no more than machines of flesh.
In 1769, a chess-playing machine called the Turk, created by Wolfgang von Kempelen, made the rounds of the courts of Europe purporting to be an automaton. The Turk beat Benjamin Franklin in a game of chess when Franklin was ambassador to France. The Turk was actually operated from inside by a hidden human director, and was not a true automaton.
Other 18th century automaton makers include the prolific Swiss Pierre Jaquet-Droz (see Jaquet-Droz automata) and his son Henri-Louis Jaquet-Droz, and his contemporary Henri Maillardet. Maillardet, a Swiss mechanic, created an automaton capable of drawing four pictures and writing three poems. Maillardet's Automaton is now part of the collections at the Franklin Institute Science Museum in Philadelphia. Belgian-born John Joseph Merlin created the mechanism of the Silver Swan automaton, now at Bowes Museum. A musical elephant made by the French clockmaker Hubert Martinet in 1774 is one of the highlights of Waddesdon Manor. Tipu's Tiger is another late-18th century example of automata, made for Tipu Sultan, featuring a European soldier being mauled by a tiger. Catherine the Great of Russia was gifted a very large and elaborate Peacock Clock created by James Cox in 1781 now on display in the Hermitage Museum in Saint Petersburg.
According to philosopher Michel Foucault, Frederick the Great, king of Prussia from 1740 to 1786, was "obsessed" with automata. According to Manuel de Landa, "he put together his armies as a well-oiled clockwork mechanism whose components were robot-like warriors".
In 1801, Joseph Jacquard built his loom automaton that was controlled autonomously with punched cards.
Automata, particularly watches and clocks, were popular in China during the 18th and 19th centuries, and items were produced for the Chinese market. Strong interest from Chinese collectors in the 21st century brought many interesting items to market, where they have fetched dramatic prices.
Modern
The famous magician Jean-Eugène Robert-Houdin (1805–1871) was known for creating automata for his stage shows. Automata that acted according to a set of preset instructions were popular with magicians during this time.
In 1840, Italian inventor Innocenzo Manzetti constructed a flute-playing automaton, in the shape of a man, life-size, seated on a chair. Hidden inside the chair were levers, connecting rods and compressed air tubes, which made the automaton's lips and fingers move on the flute according to a program recorded on a cylinder similar to those used in player pianos. The automaton was powered by clockwork and could perform 12 different arias. As part of the performance, it would rise from the chair, bow its head, and roll its eyes.
The period between 1860 and 1910 is known as "The Golden Age of Automata". Mechanical coin-operated fortune tellers were introduced to boardwalks in Britain and America. In Paris during this period, many small family based companies of automata makers thrived. From their workshops they exported thousands of clockwork automata and mechanical singing birds around the world. Although now rare and expensive, these French automata attract collectors worldwide. The main French makers were Bontems, Lambert, Phalibois, Renou, Roullet & Decamps, Theroude and Vichy.
Abstract automata theory started in the mid-20th century with finite automata; it is applied in branches of formal and natural science including computer science, physics, and biology, as well as linguistics.
Contemporary automata continue this tradition with an emphasis on art, rather than technological sophistication. Contemporary automata are represented by the works of Cabaret Mechanical Theatre in the United Kingdom, Thomas Kuntz, Arthur Ganson, Joe Jones and Le Défenseur du Temps by French artist Jacques Monestier.
Since 1990 Dutch artist Theo Jansen has been building large automated PVC structures called strandbeest (beach animal) that can walk on wind power or compressed air. Jansen claims that he intends them to automatically evolve and develop artificial intelligence, with herds roaming freely over the beach.
British sculptor Sam Smith (1908–1983) was a well-known maker of automata.
Proposals
In 2016, the NASA Innovative Advanced Concepts program studied a rover, the Automaton Rover for Extreme Environments (AREE), designed to survive for an extended time in Venus' environmental conditions. Unlike other modern automata, AREE is an automaton rather than a robot for practical reasons: Venus's harsh conditions, particularly its surface temperature of about 462 °C (864 °F), make operating electronics there for any significant time impossible. It would be controlled by a mechanical computer and driven by wind power.
Clocks
Automaton clocks are clocks that feature automatons within or around the housing, typically activated at the start of each hour, half hour, or quarter hour. They were largely produced from the 1st century BC through the end of the Victorian era in Europe. Older clocks typically featured religious figures or mythical characters such as Death or Father Time. As time progressed, automaton clocks began to feature influential figures of the day, such as kings, famous composers, or industrialists. Examples of automaton clocks include chariot clocks and cuckoo clocks. The Cuckooland Museum exhibits automaton clocks. While automaton clocks are largely associated with medieval Europe, today they are mostly produced in Japan.
In automata theory, clocks are regarded as timed automata, a type of finite automaton. That automaton clocks are finite means they have a fixed number of states in which they can exist: the number of combinations possible with the hour, minute, and second hands, which is 43,200. The label timed automaton means that the automaton changes state at a set rate – for clocks, one state change every second. A clock automaton takes as input only the time displayed by the previous state, and uses it to produce the next state, a display of the time one second later. It often also uses the previous state to 'decide' whether the next state requires merely moving the hands, or a special function such as a mechanical bird popping out of a house as in cuckoo clocks. This choice is effected through the positions of gears, cams, axles, and other mechanical devices within the automaton.
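A minimal sketch of this view of a clock, with states as (hour, minute, second) triples and one transition per second (the cuckoo rule below is an illustrative output function, not a description of any particular mechanism):

    # States are (h, m, s) triples: 12 * 60 * 60 = 43,200 possible states.
    def tick(state):
        """Transition function: advance the clock by one second."""
        h, m, s = state
        s = (s + 1) % 60
        if s == 0:
            m = (m + 1) % 60
            if m == 0:
                h = h % 12 + 1  # hours run 1..12
        return (h, m, s)

    def output(state):
        """Output rule: a special action on the hour, otherwise just move hands."""
        h, m, s = state
        return "cuckoo" if (m, s) == (0, 0) else "advance hands"

    state = (11, 59, 59)
    state = tick(state)          # -> (12, 0, 0)
    print(state, output(state))  # (12, 0, 0) cuckoo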
In art and popular culture throughout history
One of the oldest stories involving an automaton is "The Sandman", a short story written in 1816 by E. T. A. Hoffmann.
The Clockwork Man (1923) by E. V. Odle features an automaton-like man or cyborg.
The silent science-fiction film Metropolis (1927) features a female automaton.
Elizabeth King, American artist and sculptor, has work that has focused on automata.
The Pretended (2000 novel) by Darryl A. Smith features automata doppelgängers in order to critique race.
The Invention of Hugo Cabret by Brian Selznick (2007 graphic novel) and film of the same name features an automaton.
The Alchemy of Stone by Ekaterina Sedia is a 2008 novel that features an automaton girl who must be wound up with a key to live.
The Automation (2014 novel) and its sequel The Pre-programming (2018) features creatures called "Automatons" created by the Greco-Roman god Vulcan.
Genshin Impact features an enemy group called Automatons, mechanical beings mostly originating from the lost kingdom of Khaenri'ah.
Immortals Fenyx Rising (2020) includes a side plot featuring a fallen Hephaistos and his automatons, complete with an automaton boss fight.
American Gods season 3 (2021) features an automaton at a fair made by an early version of Technical Boy. This differs from the novel the show is based on.
In the Syberia video game series, one of the main characters is an automaton called Oscar. He is very proud of his own design and dislikes being called a "robot".
In the video game Helldivers 2, one of the enemy factions is called the Automatons.
| Technology | Machinery and tools: General | null |
189768 | https://en.wikipedia.org/wiki/Consumer%20electronics | Consumer electronics | Consumer electronics or home electronics are electronic (analog or digital) equipment intended for everyday use, typically in private homes. Consumer electronics include devices used for entertainment, communications and recreation. These products are usually referred to as black goods in American English, due to many products being housed in black or dark casings. This term is used to distinguish them from "white goods", which are meant for housekeeping tasks, such as washing machines and refrigerators. In British English, they are often called brown goods by producers and sellers. In the 2010s, this distinction is absent in large big box consumer electronics stores, which sell entertainment, communication and home office devices, light fixtures and appliances, including the bathroom type.
Radio broadcasting in the early 20th century brought the first major consumer product, the broadcast receiver. Later products included telephones, televisions, and calculators, then audio and video recorders and players, video game consoles, mobile phones, personal computers and MP3 players. In the 2010s, consumer electronics stores often sell GPS, automotive electronics (vehicle audio), video game consoles, electronic musical instruments (e.g., synthesizer keyboards), karaoke machines, digital cameras, and video players (VCRs in the 1980s and 1990s, followed by DVD players and Blu-ray players). Stores also sell smart light fixtures and appliances, digital cameras, camcorders, mobile phones, and smartphones. Some of the newer products sold include virtual reality head-mounted display goggles, smart home devices that connect home devices to the Internet, streaming devices, and wearable technology.
In the 2010s, most consumer electronics have become based on digital technologies. They have essentially merged with the computer industry in what is increasingly referred to as the consumerization of information technology. Some consumer electronics stores have also begun selling office and baby furniture. Consumer electronics stores may be "brick and mortar" physical retail stores, online stores, or combinations of both. Annual consumer electronics sales were expected to reach $2.9 trillion by 2020. The sector is part of the wider electronics industry; in turn, the driving force behind the electronics industry is the semiconductor industry.
History
For its first fifty years, the phonograph turntable did not use electronics; the needle and sound horn were purely mechanical technologies. However, in the 1920s, radio broadcasting became the basis of mass production of radio receivers. The vacuum tubes that had made radios practical were used with record players as well, to amplify the sound so that it could be played through a loudspeaker. Television was soon invented but remained insignificant in the consumer market until the 1950s.
The first working transistor, a point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, which led to significant research in the field of solid-state semiconductors in the early-1950s. The invention and development of the earliest transistors at Bell led to transistor radios. This led to the emergence of the home entertainment consumer electronics industry starting in the 1950s, largely due to the efforts of Tokyo Tsushin Kogyo (now Sony) in successfully commercializing transistor technology for a mass market, with affordable transistor radios and then transistorized television sets.
Integrated circuits (ICs) followed when manufacturers built circuits (usually for military purposes) on a single substrate using electrical connections between circuits within the chip itself. IC technology led to more advanced and cheaper consumer electronics, such as transistorized televisions, pocket calculators, and by the 1980s, affordable video game consoles and personal computers that regular middle-class families could buy.
Starting in the 1980s with the compact disc and the introduction of personal computers, and until the early 2000s, many consumer electronics devices such as televisions and stereo systems were digitized: digital computer technology, and thus digital signals, were integrated into their operation, drastically changing how they worked and improving results such as image quality in televisions. This was made possible by Moore's law.
In 2004, the consumer electronics industry was worth US$240 billion annually worldwide, comprising visual equipment, audio equipment, and games consoles. It was truly global, with Asia Pacific having 35% market share, Europe 31.5%, the US 23%, and the rest of the world the remainder. Major players in this industry are household names like Sony, Samsung, Philips, Sanyo, and Sharp. Samsung Electronics is part of Samsung. In 2003, combined revenues for Samsung Electronics were $55 billion. Samsung Electronics UK is a subsidiary of Samsung Electronics contributing $1.2 billion in revenues. Samsung Electronics has one of the highest R&D expenditures as a proportion of revenues in the industry and spent about $2.9 billion in 2003. Along with its competitors, Samsung Electronics is global and employs 88,000 people in 89 offices in 46 countries. Not including facilities in Korea, it has 24 manufacturing complexes, 40 distribution bases and 15 branches spread over all continents except Antarctica. Countries with manufacturing facilities include the US, Malaysia, China, India, and Hungary.
White Goods
The increase in popularity of such domestic appliances as 'white goods' is a characteristic element of consumption patterns during the golden age of the Western economy. Europe's white goods industry has evolved over the past 40 years, first through changing tariff barriers, and later through technical and demand shifts. Spending on domestic appliances has claimed only a tiny fraction of disposable income, rising from 0.5 percent in the US in 1920 to about 2 percent in 1980. Yet the sequence of electrical and mechanical durables has altered the activities and experiences of households in America and Britain in the twentieth century. With the expansion of cookers, vacuum cleaners, refrigerators, washing machines, radios, televisions, air conditioning, and microwave ovens, households have gained an escalating number of appliances. Despite the ubiquity of these goods, their diffusion is not well understood, and some types of appliances diffuse more quickly than others. In particular, home entertainment appliances such as radio and television have diffused much faster than household and kitchen machines.
Products
Consumer electronics devices include those used for
entertainment (flatscreen TVs, television sets, MP3 players, video recorders, DVD players, radio receivers, etc.)
communications (telephones, mobile phones, email-capable personal computers, desktop computers, laptops, printers, paper shredders, etc.)
recreation (digital cameras, camcorders, video game consoles, ROM cartridges, radio-controlled cars, robot kits, etc.).
Increasingly, consumer electronics products such as the digital distribution of video games have become based on the internet and digital technologies. The consumer electronics industry has largely merged with the software industry in what is increasingly referred to as the consumerization of information technology.
Trends
One overriding characteristic of consumer electronic products is the trend of ever-falling prices. This is driven by gains in manufacturing efficiency and automation, lower labor costs as manufacturing has moved to lower-wage countries, and improvements in semiconductor design. Semiconductor components benefit from Moore's law, an observed principle which states that, for a given price, semiconductor functionality doubles every two years.
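As an illustration, the doubling rule just described can be written out directly (a minimal sketch; the baseline value is a hypothetical unit, not an industry figure):

    # Sketch: Moore's-law growth as stated above -- for a given price,
    # semiconductor functionality doubles every two years.
    def functionality(years: float, baseline: float = 1.0) -> float:
        """Functionality after `years`, relative to a hypothetical baseline."""
        return baseline * 2 ** (years / 2)

    print(functionality(10))  # after 10 years: 32x the baseline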
While consumer electronics continues its trend of convergence, combining elements of many products, consumers face different purchasing decisions. There is an ever-increasing need to keep product information updated and comparable so the consumer can make an informed choice. Style, price, specification, and performance are all relevant. There is a gradual shift towards e-commerce web-storefronts.
Many products include Internet access using technologies such as Wi-Fi, Bluetooth, EDGE, or Ethernet. Products not traditionally associated with computer use (such as TVs or hi-fi equipment) now provide options to connect to the Internet or to a computer using a home network to provide access to digital content. The desire for high-definition (HD) content has led the industry to develop a number of technologies, such as WirelessHD or ITU-T G.hn, which are optimized for distribution of HD content between consumer electronic devices in a home.
Business competition
The consumer electronics industry faces unpredictable consumer tastes on the demand side, supplier-related delays or disruptions on the supply side, and production challenges in between. The high rate of technology evolution or revolution requires large investments without any guarantee of profitable returns. As a result, the big players rely on global markets to achieve economies of scale. Even these companies sometimes have to cooperate with each other, for instance on standards, to reduce the risk of their investments. In supply chain management, there is much discussion of risks related to such aspects of supply chains as short product life cycles, high competition combined with cooperation, and globalization. The consumer electronics industry is the very embodiment of these aspects of supply chain management and related risks. While some of the supply- and demand-related risks are similar to those in industries such as the toy industry, the consumer electronics industry faces additional risks due to its vertically integrated supply chains. There are also numerous contextual risks that cut across the supply chain, especially impacting companies with global supply chains. These include cultural differences in multinational operations, environmental risk, regulatory risk, and exchange rate risk across multiple countries. Whether or not demand is comparable across countries affects the extent of the gains from international integration. In addition, consumer preferences change over time, disturbing existing patterns of behavior. A feature of some industries is that demand for variety increases as the market moves from first-time buying to replacement demand. Lizabeth Cohen's book A Consumers' Republic explores this idea of consumer preferences: "Only if we have large demands can we expect large production".
Industries
The electronics industry, especially consumer electronics, emerged in the 20th century and has become a global industry worth billions of dollars. Contemporary society uses all manner of electronic devices built in automated or semi-automated factories operated by the industry.
Manufacturing
Most consumer electronics are built in China, due to maintenance cost, availability of materials, quality, and speed as opposed to other countries such as the United States. Cities such as Shenzhen have become important production centres for the industry, attracting many consumer electronics companies such as Apple Inc.
Electronic component
An electronic component is any essential discrete device or physical entity in an electronic system used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in a singular form, and are not to be confused with electrical elements, conceptual abstractions representing idealized electronic components.
Software development
Consumer electronics such as personal computers use various types of software. Embedded software is used within some consumer electronics, such as mobile phones. This type of software may be embedded within the hardware of electronic devices. Some consumer electronics include software that is used on a personal computer in conjunction with electronic devices, such as camcorders and digital cameras, and third-party software for such devices also exists.
Standardization
Some consumer electronics adhere to protocols, such as connection protocols "to high speed bi-directional signals". In telecommunications, a communications protocol is a system of digital rules for data exchange within or between computers.
Trade shows
The Consumer Electronics Show (CES) trade show has taken place yearly in Las Vegas, Nevada since its foundation in 1973. The event, which grew from having 100 exhibitors in its inaugural year to more than 4,500 exhibiting companies in its 2020 edition, features the latest in consumer electronics, speeches by industry experts and innovation awards.
The IFA Berlin trade show has taken place in Berlin, Germany since its foundation in 1924. The event features new consumer electronics and speeches by industry pioneers.
IEEE initiatives
Institute of Electrical and Electronics Engineers (IEEE), the world's largest professional society, has many initiatives to advance the state of the art of consumer electronics. IEEE has a dedicated society of thousands of professionals to promote CE, called the Consumer Electronics Society (CESoc). IEEE has multiple periodicals and international conferences to promote CE and encourage collaborative research and development in CE. The flagship conference of CESoc, called IEEE International Conference on Consumer Electronics (ICCE), is in its 35th year.
IEEE Transactions on Consumer Electronics
IEEE Consumer Electronics Magazine
IEEE International Conference on Consumer Electronics (ICCE)
The Institute of Electrical and Electronics Engineers (IEEE) Computer Society has also initiated a conference on next-generation consumer electronics, known as smart electronics. The conference, the IEEE Symposium on Smart Electronics Systems (IEEE-iSES), is in its 9th year.
Retailing
Electronics retailing is a significant part of the retail industry in many countries. In the United States, dedicated consumer electronics stores have mostly given way to big-box stores such as Best Buy, the largest consumer electronics retailer in the country. Smaller dedicated stores include Apple Stores and specialist stores that serve, for example, audiophiles, along with exceptions such as the single-branch B&H Photo store in New York City. Broad-based retailers, such as Walmart and Target, also sell consumer electronics in many of their stores. In April 2014, retail e-commerce sales were the highest in the consumer electronics and computer categories as well. Some consumer electronics retailers offer extended warranties on products with programs such as SquareTrade.
An electronics district is an area of commerce with a high density of retail stores that sell consumer electronics.
Service and repair
Consumer electronics service refers to the maintenance of such products. When consumer electronics malfunction, they can sometimes be repaired.
In 2013 in Pittsburgh, Pennsylvania, the increased popularity of listening to analog audio devices, such as phonographs, as opposed to digital sound sparked a noticeable increase in business for the local electronics repair industry.
Mobile phone industry
A mobile phone, cellular phone, cell phone, cellphone, handphone, or hand phone, sometimes shortened to simply mobile, cell or just phone, is a portable telephone that can make and receive calls over a radio frequency link while the user is moving within a telephone service area. The radio frequency link establishes a connection to the switching systems of a mobile phone operator, which provides access to the public switched telephone network (PSTN). Modern mobile telephone services use a cellular network architecture and, therefore, mobile telephones are called cellular telephones or cell phones in North America. In addition to telephony, digital mobile phones (2G) support a variety of other services, such as text messaging, MMS, email, Internet access, short-range wireless communications (infrared, Bluetooth), business applications, video games and digital photography. Mobile phones offering only those capabilities are known as feature phones; mobile phones which offer greatly advanced computing capabilities are referred to as smartphones.
A smartphone is a portable device that combines mobile telephone and computing functions into one unit. They are distinguished from feature phones by their stronger hardware capabilities and extensive mobile operating systems, which facilitate wider software, internet (including web navigation over mobile broadband), and multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically contain a number of MOSFET integrated circuit (IC) chips, include various sensors that can be leveraged by pre-included and third-party software (such as a magnetometer, proximity sensors, barometer, gyroscope, accelerometer and more), and support wireless communications protocols (such as Bluetooth, Wi-Fi, or satellite navigation).
By country
Environmental impact
In 2017, Greenpeace USA published a study of 17 of the world's leading consumer electronics companies about their energy and resource consumption and the use of chemicals.
Rare metals and rare earth elements
Electronic devices use many rare metals and rare earth elements (40 on average for a smartphone); these materials are extracted and refined using water- and energy-intensive processes. These metals are also used in the renewable energy industry, meaning that consumer electronics directly compete with renewables for raw materials.
Energy consumption
The energy consumption of consumer electronics, and its environmental impact from production processes and device disposal, is increasing steadily. The EIA estimates that electronic devices and gadgets account for about 10%–15% of the energy use in American homes – largely because of their number; the average house has dozens of electronic devices. The energy consumption of consumer electronics rises – in America and Europe – to about 50% of household consumption if the term is redefined to include home appliances such as refrigerators, dryers, clothes washers and dishwashers.
Standby power
Standby power – used by consumer electronics and appliances while they are turned off – accounts for 5–10% of total household energy consumption, costing the average household in the United States about $100 annually. A study by the United States Department of Energy's Berkeley Lab found that videocassette recorders (VCRs) consume more electricity over the course of a year in standby mode than when used to record or play back videos. Similar findings were obtained for satellite boxes, which consume almost the same amount of energy in "on" and "off" modes.
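As a back-of-the-envelope illustration of those figures (a sketch only; the wattage and electricity rate below are assumed example values, not numbers from the cited studies):

    # Sketch: annual cost of standby ("vampire") power draw.
    def annual_standby_cost(watts: float, usd_per_kwh: float) -> float:
        """Cost per year of a constant standby draw at a given electricity rate."""
        kwh_per_year = watts / 1000 * 24 * 365
        return kwh_per_year * usd_per_kwh

    # e.g., 75 W of combined standby draw across a household at $0.15/kWh:
    print(f"${annual_standby_cost(75, 0.15):.0f} per year")  # ~$99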
A 2012 study in the United Kingdom, carried out by the Energy Saving Trust, found that the devices using the most power on standby mode included televisions, satellite boxes, and other video and audio equipment. The study concluded that UK households could save up to £86 per year by switching devices off instead of using standby mode. A report from the International Energy Agency in 2014 found that $80 billion of power is wasted globally per year due to inefficiency of electronic devices. Consumers can reduce unwanted use of standby power by unplugging their devices, using power strips with switches, or by buying devices that are standardized for better energy management, particularly Energy Star-marked products.
Electronic waste
The high number of different metals and their low concentrations in electronics mean that recycling is limited and energy-intensive. Electronic waste describes discarded electrical or electronic devices. Many consumer electronics may contain toxic minerals and elements, and many electronic scrap components, such as CRTs, may contain contaminants such as lead, cadmium, beryllium, mercury, dioxins, or brominated flame retardants. Electronic waste recycling may involve significant risk to workers and communities, and great care must be taken to avoid unsafe exposure in recycling operations and leaking of materials such as heavy metals from landfills and incinerator ashes. However, a large share of the electronic waste produced in developed countries is exported and handled by the informal sector in countries like India, despite the fact that exporting electronic waste to them is illegal. A strong informal sector can be a problem for safe and clean recycling.
Reuse and repair
E-waste policy has gone through various incarnations since the 1970s, with emphases changing as the decades passed. More weight was gradually placed on the need to dispose of e-waste more carefully due to the toxic materials it may contain. There has also been recognition that various valuable metals and plastics from waste electrical equipment can be recycled for other uses. More recently, the desirability of reusing whole appliances has been foregrounded in the 'preparation for reuse' guidelines. The policy focus is slowly moving towards a potential shift in attitudes to reuse and repair.
With turnover of small household appliances high and costs relatively low, many consumers throw unwanted electric goods in the normal dustbin, meaning that items of potentially high reuse or recycling value go to landfills. While larger items such as washing machines are usually collected, it has been estimated that the 160,000 tonnes of EEE in regular waste collections were worth £220 million, and 23% of EEE taken to household waste recycling centres was immediately resaleable – or would be with minor repairs or refurbishment. This indicates a lack of awareness among consumers about where and how to dispose of EEE, and of the potential value of things that go in the bin.
For the reuse and repair of electrical goods to increase substantially in the UK, some barriers must be overcome. These include people's mistrust of used equipment in terms of whether it will be functional and safe, and the stigma some attach to owning second-hand goods. But the benefits of reuse could allow lower-income households access to previously unaffordable technology while helping the environment at the same time.
Health impact
Desktop monitors and laptops pose physical health concerns when bodies are forced into unhealthy, uncomfortable positions in order to see the screen better. This increases neck and back pain and related problems, commonly referred to as repetitive strain injuries. Using electronics before going to bed also makes it difficult to fall asleep, which harms health: sleeping less prevents people from performing to their full potential physically and mentally, and can "increase rates of obesity and diabetes", which are "long-term health consequences". Obesity and diabetes are more commonly seen in students and in youth because they tend to use electronics the most. "People who frequently use their thumbs to type text messages on cell phones can develop a painful affliction called De Quervain syndrome that affects their tendons on their hands. The best-known disease in this category is called carpal tunnel syndrome, which results from pressure on the median nerve in the wrist".
| Technology | Basics_4 | null |
189842 | https://en.wikipedia.org/wiki/High-level%20programming%20language | High-level programming language | A high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate (or even hide entirely) significant areas of computing systems (e.g. memory management), making the process of developing a program simpler and more understandable than when using a lower-level language. The amount of abstraction provided defines how "high-level" a programming language is.
In the 1960s, a high-level programming language using a compiler was commonly called an autocode.
Examples of autocodes are COBOL and Fortran.
The first high-level programming language designed for computers was Plankalkül, created by Konrad Zuse. However, it was not implemented in his time, and his original contributions were largely isolated from other developments due to World War II, aside from the language's influence on the "Superplan" language by Heinz Rutishauser and also to some degree ALGOL. The first significantly widespread high-level language was Fortran, a machine-independent development of IBM's earlier Autocode systems. The ALGOL family, with ALGOL 58 defined in 1958 and ALGOL 60 defined in 1960 by committees of European and American computer scientists, introduced recursion as well as nested functions under lexical scope. ALGOL 60 was also the first language with a clear distinction between value and name-parameters and their corresponding semantics. ALGOL also introduced several structured programming concepts, such as the while-do and if-then-else constructs and its syntax was the first to be described in formal notation – Backus–Naur form (BNF). During roughly the same period, COBOL introduced records (also called structs) and Lisp introduced a fully general lambda abstraction in a programming language for the first time.
Features
"High-level language" refers to the higher level of abstraction from machine language. Rather than dealing with registers, memory addresses, and call stacks, high-level languages deal with variables, arrays, objects, complex arithmetic or Boolean expressions, subroutines and functions, loops, threads, locks, and other abstract computer science concepts, with a focus on usability over optimal program efficiency. Unlike low-level assembly languages, high-level languages have few, if any, language elements that translate directly into a machine's native opcodes. Other features, such as string handling routines, object-oriented language features, and file input/output, may also be present. One thing to note about high-level programming languages is that these languages allow the programmer to be detached and separated from the machine. That is, unlike low-level languages like assembly or machine language, high-level programming can amplify the programmer's instructions and trigger a lot of data movements in the background without their knowledge. The responsibility and power of executing instructions have been handed over to the machine from the programmer.
Abstraction penalty
High-level languages intend to provide features that standardize common tasks, permit rich debugging, and maintain architectural agnosticism, while low-level languages often produce more efficient code through optimization for a specific system architecture. The abstraction penalty is the cost high-level programming techniques pay when they cannot optimize performance or use certain hardware because they do not take advantage of low-level architectural resources. High-level programming exhibits features like more generic data structures and operations, run-time interpretation, and intermediate code files, which often result in execution of far more operations than necessary, higher memory consumption, and larger binary program size. For this reason, code which needs to run particularly quickly and efficiently may require the use of a lower-level language, even if a higher-level language would make the coding easier. In many cases, critical portions of a program otherwise written in a high-level language can be hand-coded in assembly language, leading to a much faster, more efficient, or simply reliably functioning optimised program.
However, with the growing complexity of modern microprocessor architectures, well-designed compilers for high-level languages frequently produce code comparable in efficiency to what most low-level programmers can produce by hand, and the higher abstraction may allow for more powerful techniques providing better overall results than their low-level counterparts in particular settings.
High-level languages are designed independent of a specific computing system architecture. This facilitates executing a program written in such a language on any computing system with a compatible interpreter or runtime. High-level languages can be improved as their designers develop improvements. In other cases, new high-level languages evolve from one or more others with the goal of aggregating the most popular constructs with new or improved features. An example is Scala, which maintains backward compatibility with Java, meaning that programs and libraries written in Java will continue to be usable even if a programming shop switches to Scala; this makes the transition easier and the lifespan of such high-level coding indefinite. In contrast, low-level programs rarely survive beyond the system architecture they were written for without major revision. This is the engineering trade-off for the abstraction penalty.
Relative meaning
Examples of high-level programming languages in active use today include Python, JavaScript, Visual Basic, Delphi, Perl, PHP, ECMAScript, Ruby, C#, Java and many others.
The terms high-level and low-level are inherently relative. Some decades ago, the C language and similar languages were most often considered "high-level", as they supported concepts such as expression evaluation, parameterised recursive functions, and data types and structures, while assembly language was considered "low-level". Today, many programmers might refer to C as low-level, as it lacks a large runtime system (no garbage collection, etc.), basically supports only scalar operations, and provides direct memory addressing; it therefore readily blends with assembly language and the machine level of CPUs and microcontrollers. Also, in the introduction chapter of The C Programming Language (second edition) by Brian Kernighan and Dennis Ritchie, C is described as "not a very high level" language.
Assembly language may itself be regarded as a higher level (but often still one-to-one if used without macros) representation of machine code, as it supports concepts such as constants and (limited) expressions, sometimes even variables, procedures, and data structures. Machine code, in turn, is inherently at a slightly higher level than the microcode or micro-operations used internally in many processors.
Execution modes
There are three general modes of execution for modern high-level languages:
Interpreted When code written in a language is interpreted, its syntax is read and then executed directly, with no compilation stage. A program called an interpreter reads each program statement, following the program flow, then decides what to do, and does it. A hybrid of an interpreter and a compiler will compile the statement into machine code and execute that; the machine code is then discarded, to be interpreted anew if the line is executed again. Interpreters are commonly the simplest implementations of the behavior of a language, compared to the other two variants listed here.
Compiled When code written in a language is compiled, its syntax is transformed into an executable form before running. There are two types of compilation:
Machine code generation Some compilers compile source code directly into machine code. This is the original mode of compilation, and languages that are directly and completely transformed to machine-native code in this way may be called truly compiled languages. See assembly language.
Intermediate representations When code written in a language is compiled to an intermediate representation, that representation can be optimized or saved for later execution without the need to re-read the source file. When the intermediate representation is saved, it may be in a form such as bytecode. The intermediate representation must then be interpreted or further compiled to execute it. Virtual machines that execute bytecode directly or transform it further into machine code have blurred the once clear distinction between intermediate representations and truly compiled languages.
Source-to-source translated or transcompiled Code written in a language may be translated into terms of a lower-level language for which native code compilers are already common. JavaScript and the language C are common targets for such translators. See CoffeeScript, Chicken Scheme, and Eiffel as examples. Specifically, the generated C and C++ code can be seen (as generated from the Eiffel language when using the EiffelStudio IDE) in the EIFGENs directory of any compiled Eiffel project. In Eiffel, the translation process is referred to as transcompiling, and the Eiffel compiler as a transcompiler or source-to-source compiler.
Note that languages are not strictly interpreted languages or compiled languages. Rather, implementations of language behavior use interpreting or compiling. For example, ALGOL 60 and Fortran have both been interpreted (even though they were more typically compiled). Similarly, Java shows the difficulty of trying to apply these labels to languages, rather than to implementations; Java is compiled to bytecode which is then executed by either interpreting (in a Java virtual machine (JVM)) or compiling (typically with a just-in-time compiler such as HotSpot, again in a JVM). Moreover, compiling, transcompiling, and interpreting is not strictly limited to only a description of the compiler artifact (binary executable or IL assembly).
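As an illustration of the interpreted mode described above, the following toy interpreter (a hypothetical sketch, not taken from any real language implementation) reads each one-character "statement" of its program and executes it directly, with no compilation stage:
#include <stdio.h>

/* A toy interpreter: '+' increments an accumulator, '-' decrements it,
   and every other character is ignored. Each statement is read and
   executed directly; nothing is ever translated to machine code. */
static int interpret(const char *program)
{
    int acc = 0;
    for (const char *pc = program; *pc != '\0'; ++pc) { /* pc: program counter */
        switch (*pc) {
        case '+': acc++; break;
        case '-': acc--; break;
        default:  break;
        }
    }
    return acc;
}

int main(void)
{
    printf("%d\n", interpret("+++-+")); /* prints 3 */
    return 0;
}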
High-level language computer architecture
Alternatively, it is possible for a high-level language to be directly implemented by a computer – the computer directly executes the HLL code. This is known as a high-level language computer architecture – the computer architecture itself is designed to be targeted by a specific high-level language. The Burroughs large systems were target machines for ALGOL 60, for example.
| Technology | Programming languages | null |
189845 | https://en.wikipedia.org/wiki/Low-level%20programming%20language | Low-level programming language | A low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture; commands or functions in the language are structurally similar to a processor's instructions. Generally, this refers to either machine code or assembly language. Because of the low (hence the word) abstraction between the language and machine language, low-level languages are sometimes described as being "close to the hardware". Programs written in low-level languages tend to be relatively non-portable, due to being optimized for a certain type of system architecture.
Low-level languages can be converted to machine code without a compiler or interpreter (second-generation programming languages use a simpler translator program called an assembler), and the resulting code runs directly on the processor. A program written in a low-level language can be made to run very quickly, with a small memory footprint. An equivalent program in a high-level language can be less efficient and use more memory. Low-level languages are simple, but considered difficult to use, due to numerous technical details that the programmer must remember. By comparison, a high-level programming language isolates the execution semantics of a computer architecture from the specification of the program, which simplifies development.
Machine code
Machine code is the form in which code that can be directly executed is stored on a computer. It consists of machine language instructions, stored in memory, that perform operations such as moving values in and out of memory locations, arithmetic and Boolean logic, and testing values and, based on the test, either executing the next instruction in memory or executing an instruction at another location.
Machine code is usually stored in memory as binary data. Programmers almost never write programs directly in machine code; instead, they write code in assembly language or higher-level programming languages.
Although few programs are written in machine language, programmers often become adept at reading it through working with core dumps or debugging from the front panel.
Example of a function in hexadecimal representation of x86-64 machine code to calculate the nth Fibonacci number, with each line corresponding to one instruction:
89 f8
85 ff
74 26
83 ff 02
76 1c
89 f9
ba 01 00 00 00
be 01 00 00 00
8d 04 16
83 f9 03
74 0d
89 d6
ff c9
89 c2
eb f0
b8 01 00 00 00
c3
Assembly language
Second-generation languages provide one abstraction level on top of the machine code. In the early days of coding on computers like TX-0 and PDP-1, the first thing MIT hackers did was to write assemblers.
Assembly language has little semantics or formal specification, being only a mapping of human-readable symbols, including symbolic addresses, to opcodes, addresses, numeric constants, strings and so on. Typically, one machine instruction is represented as one line of assembly code, written using symbolic instruction names commonly called mnemonics. Assemblers produce object files that can link with other object files or be loaded on their own.
Most assemblers provide macros to generate common sequences of instructions.
Example: The same Fibonacci number calculator as above, but in x86-64 assembly language using Intel syntax:
fib:
mov rax, rdi ; put the argument into rax
test rdi, rdi ; is it zero?
je .return_from_fib ; yes - return 0, which is already in rax
cmp rdi, 2 ; is it less than or equal to 2?
jbe .return_1_from_fib ; yes (i.e., it's 1 or 2) - return 1
mov rcx, rdi ; no - put it in rcx, for use as a counter
mov rdx, 1 ; the previous number in the sequence, which starts out as 1
mov rsi, 1 ; the number before that, which also starts out as 1
.fib_loop:
lea rax, [rsi + rdx] ; put the sum of the previous two numbers into rax
cmp rcx, 3 ; is the counter 3?
je .return_from_fib ; yes - rax contains the result
mov rsi, rdx ; make the previous number the number before the previous one
dec rcx ; decrement the counter
mov rdx, rax ; make the current number the previous number
jmp .fib_loop ; keep going
.return_1_from_fib:
mov rax, 1 ; set the return value to 1
.return_from_fib:
ret ; return
In this code example, the registers of the x86-64 processor are named and manipulated directly. The function loads its 64-bit argument from rdi in accordance with the System V application binary interface for x86-64 and performs its calculation by manipulating values in the rax, rcx, rdx, and rsi registers until it has finished and returns. Note that in this assembly language, there is no concept of returning a value. The result having been stored in the rax register, again in accordance with the System V application binary interface, the ret instruction simply removes the top 64-bit element on the stack and causes the next instruction to be fetched from that location (that instruction is usually the instruction immediately after the one that called this function), with the result of the function being stored in rax. x86-64 assembly language imposes no standard for passing values to a function or returning values from a function (and in fact, has no concept of a function); those are defined by an application binary interface (ABI), such as the System V ABI for a particular instruction set.
Compare this with the same function in C:
unsigned int fib(unsigned int n)
{
if (!n)
{
return 0;
}
else if (n <= 2)
{
return 1;
}
else
{
unsigned int f_nminus2, f_nminus1, f_n;
for (f_nminus2 = f_nminus1 = 1, f_n = 0; ; --n)
{
f_n = f_nminus2 + f_nminus1;
if (n <= 3)
{
return f_n;
}
f_nminus2 = f_nminus1;
f_nminus1 = f_n;
}
}
}
This code is similar in structure to the assembly language example but there are significant differences in terms of abstraction:
The input (parameter n) is an abstraction that does not specify any storage location on the hardware. In practice, the C compiler follows one of many possible calling conventions to determine a storage location for the input.
The local variables f_nminus2, f_nminus1, and f_n are abstractions that do not specify any specific storage location on the hardware. The C compiler decides how to actually store them for the target architecture.
The return statement specifies the value to return, but does not dictate how it is returned. The C compiler for any specific architecture implements a standard mechanism for returning the value. Compilers for the x86-64 architecture typically (but not always) use the rax register to return a value, as in the assembly language example (the author of the assembly language example has chosen to use the System V application binary interface for x86-64 convention, but assembly language does not require this).
These abstractions make the C code compilable without modification on any architecture for which a C compiler has been written. The x86 assembly language code is specific to the x86-64 architecture and the System V application binary interface for that architecture.
Low-level programming in high-level languages
During the late 1960s and 1970s, high-level languages that included some degree of access to low-level programming functions, such as PL/S, BLISS, BCPL, extended ALGOL and NEWP (for Burroughs large systems/Unisys Clearpath MCP systems), and C, were introduced. One method for this is inline assembly, in which assembly code is embedded in a high-level language that supports this feature. Some of these languages also allow architecture-dependent compiler optimization directives to adjust the way a compiler uses the target processor architecture.
Although a language like C is high-level, it does not fully abstract away the management of memory in the way many other languages do. In a high-level language like Python, the programmer cannot directly access memory due to the abstractions between the interpreter and the machine. C can thus allow more control by exposing memory management through functions such as memory allocate (malloc).
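The following minimal sketch (hypothetical example code, not from any cited source) shows the manual control C exposes: the programmer explicitly requests, uses, and releases a block of memory, steps that a language such as Python performs implicitly:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 5;
    int *a = malloc(n * sizeof *a); /* explicitly request memory */
    if (a == NULL)
        return 1;                   /* the programmer handles failure */
    for (size_t i = 0; i < n; ++i)
        a[i] = (int)(i * i);
    printf("%d\n", a[4]);           /* prints 16 */
    free(a);                        /* explicitly release memory */
    return 0;
}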
Furthermore, as referenced above, the following block of C is adapted from the GNU Compiler Collection (GCC) documentation and shows the inline assembly ability of C. Per the GCC documentation, this is a simple copy-and-add example. It displays the interaction between a generally high-level language like C and its middle/low-level counterpart, assembly. Although this may not make C a natively low-level language, these facilities express the interactions in a more direct way.
#include <stdio.h>

int main(void)
{
    int src = 1;
    int dst;
    /* copy src into dst, then add 1 to dst (GCC extended asm, AT&T syntax) */
    asm ("mov %1, %0\n\t"
         "add $1, %0"
         : "=r" (dst)
         : "r" (src));
    printf("%d\n", dst); /* prints 2 */
    return 0;
}
| Technology | Programming languages | null |
189866 | https://en.wikipedia.org/wiki/Amblypygi | Amblypygi | Amblypygi is an order of arachnids also known as whip spiders or tailless whip scorpions, not to be confused with whip scorpions or vinegaroons that belong to the related order Thelyphonida. The name "amblypygid" means "blunt tail", a reference to a lack of the flagellum that is otherwise seen in whip scorpions. Amblypygids possess no silk glands or venom. They rarely bite if threatened but can grab fingers with their pedipalps, resulting in thornlike puncture injuries.
As of 2023, five families, 17 genera, and around 260 species had been discovered and described. They are found in tropical and subtropical regions worldwide, mainly in warm and humid environments. They like to stay protected and hidden within leaf litter, caves, or underneath bark. Some species are subterranean; all are nocturnal. Fossilized amblypygids have been found dating back to the Carboniferous period, such as Weygoldtina.
Description
Body plan
Being arachnids, Amblypygi possess two body segments: the prosoma and the opisthosoma (often referred to as the cephalothorax and abdomen), four pairs of legs, pedipalps, and chelicerae. Their bodies are broad and highly flattened, with a solid prosoma and a segmented opisthosoma.
Amblypygids vary widely in legspan. Most species have eight eyes: a pair of median eyes at the front of the carapace above the chelicerae and two smaller clusters of three eyes each further back on each side.
The first pair of legs act as sensory organs and are not used for walking. The sensory legs are very thin and elongated, have numerous sensory receptors, and can extend several times the length of the body.
Pedipalps
Amblypygids have raptorial pedipalps modified for grabbing and retaining prey, much like the forelegs of mantises. The pedipalps are generally covered in spines, used for impaling and capturing prey. They are kept folded in front of the prosoma when not in use. Recent work suggests that the pedipalps display sexual dimorphism in their size and shape.
Pedipalp anatomy varies strongly with species, with configurations often conforming to a particular style of prey capture. The pedipalps of some genera such as Euphrynichus are extremely long, and free of spines until near the extreme distal end of the appendage.
Exoskeleton
Whip spiders are covered with a layer of a solidified secretion that forms a super-hydrophobic coating. Studies on the spotted tailless whip scorpion also show their exoskeleton is enriched with several trace elements, including calcium, magnesium, manganese, potassium, sodium, and zinc, which tend to accumulate as the individual gets older. The same trace elements are also present in the exoskeleton of the other members of Tetrapulmonata.
Behavior
Amblypygids have eight legs, but use only six for walking, often in a crab-like, sideways fashion. The front pair of legs are modified for use as antennae-like feelers, with many fine segments giving the appearance of a "whip". When suitable prey is located with the antenniform legs, the amblypygid seizes its victim with the large spines on its grasping pedipalps, impaling and immobilizing the prey. This is typically done while climbing a vertical surface and looking downward at the prey.
Pincer-like chelicerae then grind and chew the prey prior to ingestion. The tailless whip scorpion may go for over a month without eating, often because it is in pre-molt. Due to its lack of venom, the tailless whip scorpion is very nervous in temperament, retreating if it senses any dangerous threat.
Courtship involves the male depositing stalked spermatophores, which have one or more sperm masses at the tip, onto the ground, and using his pedipalps to guide the female over them. She gathers the sperm and lays fertilized eggs into a sac carried under the abdomen, or opisthosoma. When the young hatch, they climb up onto the mother's back; any which fall off before their first molt will not survive.
Some species of amblypygids, particularly Phrynus marginemaculatus and Damon diadema, may be among the few examples of arachnids that exhibit social behavior. Research conducted at Cornell University suggests that mother amblypygids communicate with their young using their antenniform front legs, and the offspring reciprocate both with their mother and siblings. The ultimate function of this social behavior remains unknown. Amblypygids hold territories that they defend from other individuals.
The amblypygid diet mostly consists of arthropod prey, but these opportunistic predators have also been observed feeding on vertebrates. Amblypygids generally do not feed for a period of time before, during, and after molting. Like other arachnids, an amblypygid will molt several times during its life. Molting is done while hanging from the underside of a horizontal surface in order to use gravity to assist in separating the old exoskeleton from the animal.
As pets
Several genera of Amblypygi are sold and kept as pets, including Acanthophrynus, Charinus, Charon, Damon, Euphrynichus, Heterophrynus, Phrynus, Paraphrynus, and Phrynichus. Tailless whip scorpions are kept in tall enclosures with arboreal climbing surfaces to allow for two things: enough vertical space for climbing and molting, and enough space for heat to dissipate in order to keep the enclosure at a suitable temperature. A layer of substrate at the bottom of the enclosure is generally sufficient to allow for burrowing and also serves to retain water in order to keep the humidity above 75%. Tailless whip scorpions live anywhere between 5 and 10 years. Feeding can include small insects such as crickets, mealworms, and roaches.
Genera
The following genera are recognised:
Palaeoamblypygi Weygoldt, 1996
Paracharontidae Weygoldt, 1996
Paracharon Hansen, 1921 (1 species, West Africa)
Jorottui Moreno-González, Gutierrez-Estrada, & Prendini, 2023 (1 species, northern South America)
Weygoldtinidae Dunlop, 2018
†Weygoldtina Dunlop, 2018 (2 species, Upper Carboniferous Europe, North America)
Euamblypygi Weygoldt, 1996
†Paracharonopsis Engel & Grimaldi, 2014 (1 species, Cambay amber, India, Eocene)
Charinidae Weygoldt, 1996
Charinus Simon, 1892 (33 species)
Sarax Simon, 1892 (10 species)
Weygoldtia Miranda, Giupponi, Prendini & Scharff, 2018 (3 species)
Neoamblypygi Weygoldt, 1996
Charontidae Simon, 1892
Catageus Thorell, 1889 (9 species)
Charon Karsch, 1879 (5 species)
Unidistitarsata Engel & Grimaldi, 2014
†Kronocharon Engel & Grimaldi, 2014 (1 species, Burmese amber, Myanmar, Cretaceous)
†Burmacharon? Hu et al. 2020 (1 species, Burmese amber, Myanmar, Cretaceous)
Phrynoidea Blanchard, 1852
Phrynichidae Simon, 1900
Damon C. L. Koch, 1850 (10 species)
Euphrynichus Weygoldt, 1995 (2 species)
Musicodamon Fage, 1939 (1 species)
Phrynichodamon Weygoldt, 1996 (1 species)
Phrynichus Karsch, 1879 (16 species)
Trichodamon Mello-Leitão, 1935 (2 species)
Xerophrynus Weygoldt, 1996 (1 species)
Phrynidae Blanchard, 1852
Acanthophrynus Kraepelin, 1899 (1 species)
†Britopygus Dunlop & Martill, 2002 (1 species; Crato Formation, Brazil, Cretaceous)
Heterophrynus Pocock, 1894 (14 species)
Paraphrynus Moreno, 1940 (18 species)
Phrynus Lamarck, 1801 (28 species, Oligocene - Recent)
Incertae sedis:
†Sorellophrynus Harvey, 2002 (1 species, Upper Carboniferous, North America)
†Thelyphrynus Petrunkevich, 1913 (1 species, Upper Carboniferous, North America)
| Biology and health sciences | Arachnids | Animals |
189879 | https://en.wikipedia.org/wiki/Schizomida | Schizomida | Schizomida, also known as sprickets or short-tailed whip-scorpions, is an order of arachnids, generally less than in length. The order is not yet widely studied. E. O. Wilson has identified schizomids as among the "groups of organisms that desperately need experts to work on them."
Taxonomy
Schizomids are grouped into three families:
Calcitronidae † (fossil; dubious)
Hubbardiidae
Protoschizomidae (2 genera, 15 species)
Agastoschizomus
Protoschizomus
About 300 species of schizomids have been described worldwide, most belonging to the Hubbardiidae family. A systematic review including a full catalogue may be found in Reddell & Cokendolpher (1995). The Schizomida is sister to the order Uropygi, the two clades together forming the Thelyphonida (in the broad sense of the name). Based on molecular clock dates, both orders likely originated in the late Carboniferous somewhere in the tropics of Pangea, and the Schizomida underwent substantial diversification starting in the Cretaceous. The oldest known fossils of the group are from the Mid-Cretaceous Burmese amber of Myanmar, which are assignable to the Hubbardiidae.
Morphology
Schizomids are relatively small, soft-bodied arachnids, somewhat similar in appearance to whip scorpions. The prosoma (cephalothorax) is divided into three regions, each covered by plates, the large protopeltidium and the smaller, paired, mesopeltidia and metapeltidia. The name means "split or cleaved middle", referring to the way the prosoma is divided into two separate plates.
The opisthosoma (abdomen) is a smooth oval of 12 recognizable segments. The first is reduced and forms the pedicel, while the last three are constricted, forming the pygidium. The last segment bears a short whip-like tail or flagellum, consisting of no more than four segments. Females generally have a three- or four-segmented flagellum, while in males it is single-segmented.
Like the related orders Uropygi and Amblypygi, and the more distantly related Solifugae, the schizomids use only six legs for walking, having modified their first two legs to serve as sensory organs. They also have large well-developed pincer-like pedipalps just before the sensory legs. The hind legs are modified for jumping, as part of their escape response when threatened.
Schizomids have no actual eyes, but a few species have vestigial eyespots capable of telling light from dark. They breathe through a single pair of book lungs located on the second abdominal segment; the second pair, found on the third abdominal segment in the other orders of Tetrapulmonata, is lost.
Distribution and habitat
Schizomids are generally tropical and subtropical creatures, and they have a global distribution in these habitats, including in Southeast Asia, India, Australia, several Pacific Islands, Central and South America, and Africa. Additionally, some populations have been found in neighboring temperate regions such as California and Texas. Of the two extant families of sprickets, Hubbardiidae has a global distribution while Protoschizomidae is only found in Mexico and Texas. While schizomids are not native to Europe, they have been introduced to the continent in Britain, France, the Czech Republic, and Poland via soil stock imported for botanical gardens; however, thus far they are still restricted to the artificial greenhouse environments. Despite their global distribution, most schizomid species have very restricted distributions, with many only known from their original locality.
Humidity is vital to determining the habitats in which sprickets can live as they need to avoid desiccation. They typically live in rainforest leaf litter, particularly in the top layer of organic soil, under rocks, in and beneath rotten logs, and even in caves. Although most species are restricted to rainforests, they can also be found in neighboring woody areas. The Australian species Draculoides vinei is believed to have been forced to move into a nearby humid cave system after its original forests dramatically decreased in size. Additionally, some species have been found in insect nests; Afrozomus machadoi lives in termite mounds, while Stenochrus portoricensis has been found in ant colonies. Schizomids are also occasionally found living in the trees; the South American Surazomus arboreus lives in rainforest that is seasonally flooded, forcing the arachnids to move higher into the trees to avoid drowning.
While sprickets are not typically found in colder climates, several Californian Hubbardia species have been found living under snow-covered rocks, and Hubbardia briggsi in particular is often found in snowy habitats during the winter.
Biology
While not much is known about the lifespans of schizomids, they have been found to live for several months in captivity.
Mortality and defense
Not much is known about the natural predators of sprickets. Amblypygids have been observed eating schizomids. Additionally, despite their small size, schizomids have been observed being parasitized by tiny nematodes; the opisthosoma of one Stenochrus goodnightorum was nearly completely filled by a parasitic nematode.
Diet and feeding
Sprickets are active predators, constantly using their antenniform legs to examine the forest soil for potential prey. A wide range of invertebrates are prey items, including isopods, millipedes, cockroaches, worms, springtails, termites, booklice, zorapterans, and even other schizomids. Prey can range in size from 10% of their body size to as much as 100%. Once potential prey is located, the arachnid uses their antenniform legs to determine the size of the creature and note any extremities. Should the schizomid not retreat, it will lunge forward and seize its victim with its palps. The prey is then subdued, and possibly taken to the shelter of a nearby crevice to be eaten. The chelicerae dismember the prey item before the tissues are liquified into chyme and ingested via suction with the mouth.
Schizomids can survive a long time without food; some Hubbardia pentapeltis have been shown to survive five months without food.
| Biology and health sciences | Arachnids | Animals |
189897 | https://en.wikipedia.org/wiki/Programming%20paradigm | Programming paradigm | A programming paradigm is a relatively high-level way to conceptualize and structure the implementation of a computer program. A programming language can be classified as supporting one or more paradigms.
Paradigms are separated along and described by different dimensions of programming. Some paradigms are about implications of the execution model, such as allowing side effects, or whether the sequence of operations is defined by the execution model. Other paradigms are about the way code is organized, such as grouping into units that include both state and behavior. Yet others are about syntax and grammar.
Some common programming paradigms include (shown in hierarchical relationship):
Imperative – code directly controls execution flow and state change, with explicit statements that change a program state
procedural – organized as procedures that call each other
object-oriented – organized as objects that contain both data structure and associated behavior; uses data fields and methods, together with their interactions (objects), to design programs
Class-based – object-oriented programming in which inheritance is achieved by defining classes of objects, versus the objects themselves
Prototype-based – object-oriented programming that avoids classes and implements inheritance via cloning of instances
Declarative – code declares properties of the desired result, but not how to compute it; describes what the computation should perform, without specifying detailed state changes
functional – a desired result is declared as the value of a series of function evaluations; uses evaluation of mathematical functions and avoids state and mutable data
logic – a desired result is declared as the answer to a question about a system of facts and rules; uses explicit mathematical logic for programming
reactive – a desired result is declared with data streams and the propagation of change
Concurrent programming – has language constructs for concurrency, these may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory), or futures
Actor programming – concurrent computation with actors that make local decisions in response to the environment (capable of selfish or competitive behaviour)
Constraint programming – relations between variables are expressed as constraints (or constraint networks), directing allowable solutions (uses constraint satisfaction or simplex algorithm)
Dataflow programming – forced recalculation of formulas when data values change (e.g. spreadsheets)
Distributed programming – has support for multiple autonomous computers that communicate via computer networks
Generic programming – uses algorithms written in terms of to-be-specified-later types that are then instantiated as needed for specific types provided as parameters
Metaprogramming – writing programs that write or manipulate other programs (or themselves) as their data, or that do part of the work at compile time that would otherwise be done at runtime
Template metaprogramming – metaprogramming methods in which a compiler uses templates to generate temporary source code, which is merged by the compiler with the rest of the source code and then compiled
Reflective programming – metaprogramming methods in which a program modifies or extends itself
Pipeline programming – a simple syntax change that adds syntax for nesting function calls to a language originally designed with none
Rule-based programming – a network of rules of thumb that comprise a knowledge base and can be used for expert systems and problem deduction & resolution
Visual programming – manipulating program elements graphically rather than by specifying them textually (e.g. Simulink); also termed diagrammatic programming
Overview
Programming paradigms come from computer science research into existing practices of software development. The findings allow for describing and comparing programming practices and the languages used to code programs. For perspective, other fields of research study software engineering processes and develop various methodologies to describe and compare them.
A programming language can be described in terms of paradigms. Some languages support only one paradigm. For example, Smalltalk supports object-oriented and Haskell supports functional. Most languages support multiple paradigms. For example, a program written in C++, Object Pascal, or PHP can be purely procedural, purely object-oriented, or can contain aspects of both paradigms, or others.
When using a language that supports multiple paradigms, the developer chooses which paradigm elements to use. But, this choice may not involve considering paradigms per se. The developer often uses the features of a language as the language provides them and to the extent that the developer knows them. Categorizing the resulting code by paradigm is often an academic activity done in retrospect.
Languages categorized as imperative paradigm have two main features: they state the order in which operations occur, with constructs that explicitly control that order, and they allow side effects, in which state can be modified at one point in time, within one unit of code, and then later read at a different point in time inside a different unit of code. The communication between the units of code is not explicit.
In contrast, languages in the declarative paradigm do not state the order in which to execute operations. Instead, they supply a number of available operations in the system, along with the conditions under which each is allowed to execute. The implementation of the language's execution model tracks which operations are free to execute and chooses the order independently. More at Comparison of multi-paradigm programming languages.
In object-oriented programming, code is organized into objects that contain state that is owned by and (usually) controlled by the code of the object. Most object-oriented languages are also imperative languages.
In object-oriented programming, programs are treated as a set of interacting objects. In functional programming, programs are treated as a sequence of stateless function evaluations. When programming computers or systems with many processors, in process-oriented programming, programs are treated as sets of concurrent processes that act on logical shared data structures.
Many programming paradigms are as well known for the techniques they forbid as for those they support. For instance, pure functional programming disallows side-effects, while structured programming disallows the goto construct. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to older ones. Yet, avoiding certain techniques can make it easier to understand program behavior, and to prove theorems about program correctness.
Programming paradigms can also be compared with programming models, which allow invoking an execution model by using only an API. Programming models can also be classified into paradigms based on features of the execution model.
For parallel computing, using a programming model instead of a language is common. The reason is that details of the parallel hardware leak into the abstractions used to program the hardware. This causes the programmer to have to map patterns in the algorithm onto patterns in the execution model (which have been inserted due to leakage of hardware into the abstraction). As a consequence, no one parallel programming language maps well to all computation problems. Thus, it is more convenient to use a base sequential language and insert API calls to parallel execution models via a programming model. Such parallel programming models can be classified according to abstractions that reflect the hardware, such as shared memory, distributed memory with message passing, notions of place visible in the code, and so forth. These can be considered flavors of programming paradigm that apply to only parallel languages and programming models.
Criticism
Some programming language researchers criticise the notion of paradigms as a classification of programming languages, e.g. Harper, and Krishnamurthi. They argue that many programming languages cannot be strictly classified into one paradigm, but rather include features from several paradigms. See Comparison of multi-paradigm programming languages.
History
Different approaches to programming have developed over time. Classification of each approach was either described at the time the approach was first developed, or not until some time later, retrospectively. An early approach consciously identified as such is structured programming, advocated since the mid 1960s. The concept of a programming paradigm as such dates at least to 1978, in the Turing Award lecture of Robert W. Floyd, entitled The Paradigms of Programming, which cites the notion of paradigm as used by Thomas Kuhn in his The Structure of Scientific Revolutions (1962). Early programming languages did not have clearly defined programming paradigms, and programs sometimes made extensive use of goto statements, liberal use of which led to spaghetti code that is difficult to understand and maintain. This led to the development of structured programming paradigms that disallowed the use of goto statements, allowing only more structured programming constructs.
Languages and paradigms
Machine code
Machine code is the lowest level of computer programming, as it consists of machine instructions that define behavior at the lowest level of abstraction possible for a computer. As it is the most prescriptive way to code, it is classified as imperative.
It is sometimes called the first-generation programming language.
Assembly
Assembly language introduced mnemonics for machine instructions and memory addresses. Assembly is classified as imperative and is sometimes called the second-generation programming language.
In the 1960s, assembly languages were developed to support library COPY and quite sophisticated conditional macro generation and preprocessing abilities, CALL to subroutine, external variables and common sections (globals), enabling significant code re-use and isolation from hardware specifics via the use of logical operators such as READ/WRITE/GET/PUT. Assembly was, and still is, used for time-critical systems and often in embedded systems as it gives the most control of what the machine does.
Procedural languages
Procedural languages, also called the third-generation programming languages are the first described as high-level languages. They support vocabulary related to the problem being solved. For example,
COmmon Business Oriented Language (COBOL) uses terms like file, move and copy.
FORmula TRANslation (FORTRAN) uses mathematical language terminology; it was developed mainly for scientific and engineering problems.
ALGOrithmic Language (ALGOL) focused on being an appropriate language to define algorithms, while using mathematical language terminology and targeting scientific and engineering problems, just like FORTRAN.
Programming Language One (PL/I) is a hybrid commercial-scientific general-purpose language supporting pointers.
Beginners' All-purpose Symbolic Instruction Code (BASIC) was developed to enable more people to write programs.
C a general-purpose programming language, initially developed by Dennis Ritchie between 1969 and 1973 at AT&T Bell Labs.
These languages are classified as procedural paradigm. They directly control the step-by-step process that a computer program follows. The efficacy and efficiency of such a program are therefore highly dependent on the programmer's skill.
Object-oriented programming
In an attempt to improve on procedural languages, object-oriented programming (OOP) languages were created, such as Simula, Smalltalk, C++, Eiffel, Python, PHP, Java, and C#. In these languages, data and the methods to manipulate the data are kept in the same code unit, called an object. This encapsulation ensures that the only way an object can access data is via the methods of the object that contains the data. Thus, an object's inner workings may be changed without affecting code that uses the object.
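C is not an object-oriented language, but the encapsulation idea can be sketched even there using the opaque-pointer idiom; the names below (Counter, counter_new, and so on) are purely illustrative:
#include <stdio.h>
#include <stdlib.h>

typedef struct Counter Counter; /* callers see only an incomplete type */

struct Counter { int value; };  /* the field is private to this file */

Counter *counter_new(void)
{
    Counter *c = malloc(sizeof *c);
    if (c != NULL)
        c->value = 0;
    return c;
}

/* the only ways to touch the data are these functions */
void counter_increment(Counter *c) { c->value++; }
int counter_get(const Counter *c)  { return c->value; }
void counter_free(Counter *c)      { free(c); }

int main(void)
{
    Counter *c = counter_new();
    if (c == NULL)
        return 1;
    counter_increment(c);
    counter_increment(c);
    printf("%d\n", counter_get(c)); /* prints 2 */
    counter_free(c);
    return 0;
}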
There is controversy raised by Alexander Stepanov, Richard Stallman and other programmers, concerning the efficacy of the OOP paradigm versus the procedural paradigm. The need for every object to have associated methods leads some skeptics to associate OOP with software bloat; an attempt to resolve this dilemma came through polymorphism.
Although most OOP languages are third-generation, it is possible to create an object-oriented assembler language. High Level Assembly (HLA) is an example of this that fully supports advanced data types and object-oriented assembly language programming despite its early origins. Thus, differing programming paradigms can be seen rather like motivational memes of their advocates, rather than necessarily representing progress from one level to the next. Precise comparisons of competing paradigms' efficacy are frequently made more difficult because of new and differing terminology applied to similar entities and processes together with numerous implementation distinctions across languages.
Declarative languages
A declarative programming program describes what the problem is, not how to solve it. The program is structured as a set of properties to find in the expected result, not as a procedure to follow. Given a database or a set of rules, the computer tries to find a solution matching all the desired properties. Archetypes of declarative languages are the fourth-generation language SQL and the families of functional languages and logic programming.
Functional programming is a subset of declarative programming. Programs written using this paradigm use functions, blocks of code intended to behave like mathematical functions. Functional languages discourage changes in the value of variables through assignment, making a great deal of use of recursion instead.
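As a rough sketch in C (a functional language would express this more naturally), the following function computes a triangular number purely by recursion, with no assignment to any variable:
#include <stdio.h>

/* sum of the integers 0..n, computed by recursion rather than by
   repeatedly assigning to a loop counter and an accumulator */
static unsigned long sum_to(unsigned long n)
{
    return n == 0 ? 0 : n + sum_to(n - 1);
}

int main(void)
{
    printf("%lu\n", sum_to(10)); /* prints 55 */
    return 0;
}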
The logic programming paradigm views computation as automated reasoning over a body of knowledge. Facts about the problem domain are expressed as logic formulas, and programs are executed by applying inference rules over them until an answer to the problem is found, or the set of formulas is proved inconsistent.
Other paradigms
Symbolic programming is a paradigm that describes programs able to manipulate formulas and program components as data. Programs can thus effectively modify themselves, and appear to "learn", making them suited for applications such as artificial intelligence, expert systems, natural-language processing and computer games. Languages that support this paradigm include Lisp and Prolog.
Differentiable programming structures programs so that they can be differentiated throughout, usually via automatic differentiation.
Literate programming, as a form of imperative programming, structures programs as a human-centered web, as in a hypertext essay: documentation is integral to the program, and the program is structured following the logic of prose exposition, rather than compiler convenience.
Symbolic programming techniques such as reflective programming (reflection), which allow a program to refer to itself, might also be considered as a programming paradigm. However, this is compatible with the major paradigms and thus is not a real paradigm in its own right.
| Technology | Programming | null |
189902 | https://en.wikipedia.org/wiki/Print%20on%20demand | Print on demand | Print on demand (POD) is a printing technology and business process in which book copies (or other documents, packaging, or materials) are not printed until the company receives an order, allowing prints in single or small quantities. While other industries established the build-to-order business model, POD could only develop after the beginning of digital printing because it was not economical to print single copies using traditional printing technologies such as letterpress and offset printing.
Many traditional small presses have replaced their traditional printing equipment with POD equipment or contracted their printing to POD service providers. Many academic publishers, including university presses, use POD services to maintain large backlists (lists of older publications); some use POD for all of their publications. Larger publishers may use POD in special circumstances, such as reprinting older, out-of-print titles or for test marketing.
Predecessors
Before the introduction of digital printing technology, production of small numbers of publications had many limitations. Large print jobs were not a problem, but during the early 20th century small numbers of printed pages were typically produced using stencils and reproduced on a mimeograph or similar machine. These produced printed pages of inferior quality to a book, cheaply and reasonably fast. By about 1950, electrostatic copiers were available to make paper master plates for offset duplicating machines. From about 1960, plain-paper photocopy machines could make multiple good-quality copies of a monochrome original.
In 1966, Frederik Pohl discussed in Galaxy Science Fiction "a proposal for high-speed facsimile machines which would produce a book to your order, anywhere in the world". As the magazine's editor, he said that "it, or something like it, is surely the shape of the publishing business some time in the future". As technology advanced, it became possible to store text in digital form (paper tape, punched cards readable by digital computer, magnetic mass storage, etc.) and to print it on a teletypewriter, line printer or other computer printer, but the software and hardware to produce original good-quality printed colour text and graphics, and to print small jobs fast and cheaply, was unavailable.
Book publishing
Print on demand with digital technology is a way to print items for a fixed cost per copy, regardless of the size of the order. While the unit price of each physical copy is greater than with offset printing, the average cost is lower for very small print jobs, because setup costs are much greater for offset printing.
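For a rough sense of this trade-off, the following sketch compares the two cost models with purely hypothetical figures (the setup and per-copy prices below are invented for illustration, not real prices):
#include <stdio.h>

int main(void)
{
    double setup = 500.0;     /* hypothetical offset setup cost    */
    double offset_unit = 2.0; /* hypothetical offset cost per copy */
    double pod_unit = 6.0;    /* hypothetical POD cost per copy    */
    /* offset wins once setup + n*offset_unit < n*pod_unit,
       i.e. above n = setup / (pod_unit - offset_unit) copies */
    double breakeven = setup / (pod_unit - offset_unit);
    printf("Break-even at about %.0f copies\n", breakeven); /* 125 */
    return 0;
}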
POD has other business benefits besides lesser costs (for small jobs):
Technical set-up is usually quicker than for offset printing.
Large inventories of a book or print material do not need to be kept in stock, reducing storage, handling costs, and inventory accounting costs.
There is little or no waste from unsold products.
Many publishers use POD for printing needs other than books, such as galley proofs, catalogs and review copies.
These advantages reduce the risks associated with publishing books and prints and can result in increased choice for consumers. However, the reduced risks for the publisher can also mean that quality control is less rigorous than usual.
Other publishing
Digital technology is ideally suited to publish small print jobs of posters (often as a single copy) when they are needed. The introduction of ultraviolet-curable inks and media for large-format inkjet printers has allowed artists, photographers and owners of image collections to take advantage of print on demand.
For example, UK art retailer King and McGaw fulfills many of its art print orders by printing on demand rather than pre-printing and storing prints until they are sold, requiring less space and reducing overheads for the business. This practice was adopted after a fire destroyed £3 million worth of stock and damaged the company's warehouse.
Service providers
The introduction of POD technologies and business models has created a range of new book creation and publishing opportunities. There are three main categories of offerings.
Self-publishing authors
POD creates a new category of publishing (or printing) company that offers services, usually for a fee, directly to authors who wish to self-publish. These services generally include printing and shipping each individual book ordered, handling royalties, and getting listings in online bookstores. The initial investment required for POD services is less than for offset printing. Other services may also be available, including formatting, proofreading, and editing, but such companies typically do not spend money for marketing, unlike conventional publishers. Such companies are suitable for authors prepared to design and promote their work themselves, with minimal assistance and at minimal cost. POD publishing gives authors editorial independence, speed to market, ability to revise content, and greater financial return per copy than royalties paid by conventional publishers.
POD enablement
While amateur/professional writers are targeted as early adopters by some companies, there is an effort to make POD more mass-market. A class of companies have chosen to be "author-agnostic", attempting to serve a broad mass-market of ordinary citizens who may want to express, record and print keepsake copies of memories and personal writing (diaries, travelogues, wedding journals, baby books, family reunion reports etc.). Instead of tailoring themselves to the classic book format (at least 100 pages, mostly text, complex rules for copyright and royalties), these companies strive to make POD more mass-market by creating programs by which a range of different text and picture items can be produced as finished books. The management of copyrights and royalties is often less important for this market, as the books themselves have a small clientele (close family and friends, for instance).
The major photo storage services have included the ability to produce picture books and calendars. However, they emphasize digital photography. Some companies apply this method to a greater volume of creative work (primarily text, as typed in personal weblogs) and include the capability to embed photographs and other media. Others assume the role of an infrastructure service provider, allowing any partner website to use its pre-designed payment and printing functions.
Publisher use
Print-on-demand services that offer printing and distributing services to publishing companies (instead of directly to self-publishing authors) are also growing in popularity within the industry. Many major publishers print on demand as a way to save money on inventory costs. Print on demand also allows texts to be revised and published more quickly.
Maintaining availability
Among traditional publishers, POD services can be used to make sure that books remain available when one print job has sold out, but another has not yet become available. This maintains the availability of older works, the estimated future sales of which may not be great enough to justify a further conventional print job. This can be useful for publishers with large backlists, such that sales for individual works may be few, but cumulative sales may be significant.
Managing uncertainty
Print on demand can be used to reduce risk when dealing with "surge" publications that are expected to have large sales but a brief sales life (such as biographies of minor celebrities, or event tie-ins): these publications represent good profitability but also great risk owing to the danger of inadvertently printing many more copies than are necessary, and the associated costs of maintaining excess inventory or pulping. POD allows a publisher to use cheaper conventional printing to produce enough copies to satisfy a pessimistic forecast of the publication sales, and then rely on POD to make up the difference.
Variable formats
Print on demand also allows books to be printed in a variety of formats. This process, known as accessible publishing, allows books to be printed in a variety of larger type sizes and special formats for those with vision impairment or reading disabilities, as well as personalised typefaces and formats that suit an individual reader's needs.
Economics
Profits from print-on-demand publishing are on a per-sale basis, and royalties vary depending on the method by which the item is sold. Greatest profits are usually generated from sales direct from a print-on-demand service's website or by the author buying copies from the service at a discount, as the publisher, and then selling them personally. Lesser royalties come from traditional bookshops and online retailers, both of which buy at high discount, although some POD companies allow the publisher or author to set their own discount level.
Because the per-unit cost is typically greater with POD than with a print job of thousands of copies, it is common for POD books to be more expensive than similar books made by conventional print jobs.
Book stores order books through a wholesaler or distributor, usually at a discount of as much as 70%. Wholesalers obtain their books in two ways: either as a special order such that the book is ordered direct from a publisher when a book store requests a copy, or as stocked, which they keep in their own warehouse as part of their inventory. Stocked books are usually also available through "sale or return", meaning that the book store can return unsold stock for full credit as much as one year after the initial sale.
POD books are rarely if ever available on such terms because for the publishing provider it is considered too much of a risk. However, wholesalers monitor what works are selling, and if authors promote their work successfully and achieve a reasonable number of orders from book stores or online retailers (who use the same wholesalers as the stores), then there is a reasonable chance of their work becoming available on such terms.
Author's Reversion Rights
In 1999, the Times Literary Supplement carried an article entitled "A Very Short Run", in which author Andrew Malcolm argued that under the rights-reversion clauses of older, pre-PoD contracts, copyrights would legally revert to their authors if their books were printed on demand rather than re-lithographed, and he envisaged a test case being successfully fought on this aspect. This claim was contradicted by an article entitled "Eternal Life?" in the Spring 2000 issue of The Author Magazine (the journal of the UK Society of Authors) by Cambridge University Press's Business Development Director Michael Holdsworth, who argued that printing on demand keeps books "permanently in print", thereby invalidating authors' reversion rights.
| Technology | Printing | null |
189951 | https://en.wikipedia.org/wiki/Force%20carrier | Force carrier | In quantum field theory, a force carrier is a type of particle that gives rise to forces between other particles. They serve as the quanta of a particular kind of physical field. Force carriers are also known as messenger particles, intermediate particles, or exchange particles.
Particle and field viewpoints
Quantum field theories describe nature in terms of fields. Each field has a complementary description as the set of particles of a particular type. A force between two particles can be described either as the action of a force field generated by one particle on the other, or in terms of the exchange of virtual force-carrier particles between them.
The energy of a wave in a field (for example, an electromagnetic wave in the electromagnetic field) is quantized, and the quantum excitations of the field can be interpreted as particles. The Standard Model contains the following force-carrier particles, each of which is an excitation of a particular force field:
Gluons, excitations of the strong gauge field.
Photons, W bosons, and Z bosons, excitations of the electroweak gauge fields.
Higgs bosons, excitations of one component of the Higgs field, which gives mass to fundamental particles.
In addition, composite particles such as mesons, as well as quasiparticles, can be described as excitations of an effective field.
Gravity is not a part of the Standard Model, but it is thought that there may be particles called gravitons which are the excitations of gravitational waves. The status of this particle is still tentative, because the theory is incomplete and because the interactions of single gravitons may be too weak to be detected.
Forces from the particle viewpoint
When one particle scatters off another, altering its trajectory, there are two ways to think about the process. In the field picture, we imagine that the field generated by one particle causes a force on the other. Alternatively, we can imagine one particle emitting a virtual particle which is absorbed by the other. The virtual particle transfers momentum from one particle to the other. This particle viewpoint is especially helpful when there are a large number of complicated quantum corrections to a calculation, since these corrections can be visualized as Feynman diagrams containing additional virtual particles.
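A standard worked illustration of this exchange picture (not spelled out in the article itself) is the Yukawa potential, in which the mass $m$ of the exchanged carrier sets the range of the resulting force:

$$V(r) = -\,\frac{g^2}{4\pi}\,\frac{e^{-mcr/\hbar}}{r}.$$

For a massless carrier such as the photon this reduces to the long-range $1/r$ Coulomb form, while a carrier of mass $m$ gives a force of range roughly $\hbar/(mc)$, which is why the weak interaction, mediated by the heavy W and Z bosons, is so short-ranged.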
Another example involving virtual particles is beta decay, where a virtual W boson is emitted by a nucleon and then decays into an electron or positron (e±) and an (anti)neutrino.
The description of forces in terms of virtual particles is limited by the applicability of the perturbation theory from which it is derived. In certain situations, such as low-energy QCD and the description of bound states, perturbation theory breaks down.
History
The concept of messenger particles dates back to the 18th century when the French physicist Charles Coulomb showed that the electrostatic force between electrically charged objects follows a law similar to Newton's Law of Gravitation. In time, this relationship became known as Coulomb's law. By 1862, Hermann von Helmholtz had described a ray of light as the "quickest of all the messengers". In 1905, Albert Einstein proposed the existence of a light-particle in answer to the question: "what are light quanta?"
In 1923, at Washington University in St. Louis, Arthur Holly Compton demonstrated an effect now known as Compton scattering. This effect is only explainable if light can behave as a stream of particles, and it convinced the physics community of the existence of Einstein's light-particle. Lastly, in 1926, one year before the theory of quantum mechanics was published, Gilbert N. Lewis introduced the term "photon", which soon became the name for Einstein's light particle. From there, the concept of messenger particles developed further, notably to massive force carriers (e.g. for the Yukawa potential).
| Physical sciences | Subatomic particles: General | Physics |
642298 | https://en.wikipedia.org/wiki/Aviation%20fuel | Aviation fuel | Aviation fuels are petroleum-based fuels, or petroleum and synthetic fuel blends, used to power aircraft. They have more stringent requirements than fuels used for ground use, such as heating and road transport, and contain additives to enhance or maintain properties important to fuel performance or handling. They are kerosene-based (JP-8 and Jet A-1) for gas-turbine–powered aircraft. Piston-engined aircraft use leaded gasoline and those with diesel engines may use jet fuel (kerosene). By 2012, all aircraft operated by the U.S. Air Force had been certified to use a 50–50 blend of kerosene and synthetic fuel derived from coal or natural gas as a way of stabilizing the cost of fuel.
Types of aviation fuel
Conventional aviation fuels
Jet fuel
Jet fuel is a clear to straw-colored fuel, based on either an unleaded kerosene (Jet A-1), or a naphtha–kerosene blend (Jet B). Similar to diesel fuel, it can be used in either compression ignition engines or turbine engines.
Jet-A, which powers modern commercial airliners, is a mix of extremely refined kerosene. Kerosene-based fuel has a much higher flash point than gasoline-based fuel, meaning that it requires a significantly higher temperature to ignite. It is a high-quality fuel; if it fails the purity and other quality tests for use on jet aircraft, it is sold to ground-based users with less demanding requirements, such as railroads.
Avgas
Avgas (aviation gasoline), or aviation spirit, is used by small aircraft, light helicopters and vintage piston-engined aircraft. Its formulation is distinct from the conventional gasoline (UK: petrol) used in motor vehicles, which is commonly called mogas or autogas in an aviation context. Although it comes in many different grades, its octane ratings are generally well above those of road motor gasoline.
Emerging aviation fuels
Biofuels
As alternatives to conventional fossil-based aviation fuels, new fuels made via the biomass-to-liquid method (like sustainable aviation fuel) and certain straight vegetable oils can also be used.
Fuels such as sustainable aviation fuel have the advantage that few or no modifications are necessary on the aircraft itself, provided that the fuel characteristics meet specifications for lubricity and density and adequately swell the elastomer seals in current aircraft fuel systems. Sustainable aviation fuel and blends of fossil and sustainably sourced alternative fuels yield lower emissions of particles and greenhouse gases. They are not yet widely used, however, because they still face political, technological, and economic barriers, such as currently costing far more than conventionally produced aviation fuels.
Compressed natural gas and liquefied natural gas
Compressed natural gas (CNG) and liquefied natural gas (LNG) are fuel feedstocks that aircraft may use in the future. Feasibility studies have been carried out on natural gas, including the "SUGAR Freeze" aircraft under NASA's N+4 Advanced Concept Development program (made by Boeing's Subsonic Ultra Green Aircraft Research (SUGAR) team). The Tupolev Tu-155 was an alternative-fuel testbed fuelled with LNG. The low specific energy of natural gas, even in liquid form, compared to conventional fuels gives it a distinct disadvantage for flight applications.
Liquid hydrogen
Hydrogen can be used largely free of carbon emissions, if it is produced with power from renewable energy like wind and solar power.
Some development of technology for hydrogen-powered aircraft started after the millennium and gained traction from about 2020, but as of 2022 it was still far from outright aircraft product development.
Hydrogen fuel cells do not produce CO2 or other emissions besides water. Hydrogen combustion, however, does produce NOx emissions. Cryogenic hydrogen can be used as a liquid at temperatures below 20 K. Gaseous hydrogen requires pressurized tanks at 250–350 bar. With materials available in the 2020s, the mass of tanks strong enough to withstand this kind of high pressure greatly outweighs the hydrogen fuel itself, largely negating the weight-to-energy advantage of hydrogen over hydrocarbon fuels. Hydrogen also has a severe volumetric disadvantage relative to hydrocarbon fuels, but future blended wing body aircraft designs might be able to accommodate this extra volume without greatly expanding the wetted area.
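To illustrate the volumetric disadvantage, the following sketch compares commonly cited round-number heating values and liquid densities (the figures are illustrative assumptions, not taken from this article):

```python
# Commonly cited lower heating values and liquid densities (approximate).
fuels = {
    # name: (MJ per kg, kg per litre)
    "Jet A-1 kerosene": (43.15, 0.804),
    "Liquid hydrogen":  (120.0, 0.071),
}
for name, (mj_per_kg, kg_per_l) in fuels.items():
    mj_per_l = mj_per_kg * kg_per_l   # volumetric energy density
    print(f"{name:<18} {mj_per_kg:6.1f} MJ/kg   {mj_per_l:5.1f} MJ/L")
# Hydrogen carries roughly 3x the energy per kilogram of kerosene,
# but only about a quarter of the energy per litre -- hence the
# interest in airframes with room for bulky tanks.
```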
Even if finally practical, the industry timeline for adopting hydrogen is fairly lengthy. Alternatives to conventional aviation fuel available in the near term include aviation biofuel and synthetically created fuel (aka "e-jet"). These fuels are collectively referred to as "Sustainable Aviation Fuel" (SAF).
Production of aviation fuel
The production of aviation fuel falls into two categories: fuel suitable for turbine engines and fuel suitable for spark-ignition piston engines. There are international specifications for each.
Jet fuel is a gas turbine fuel used in propeller and jet fixed-wing aircraft and helicopters. It has a low viscosity at low temperature, has limited ranges of density and calorific value, burns cleanly, and remains chemically stable when heated to high temperature.
Aviation gasoline, often referred to as avgas or 100-LL (low-lead), is a highly refined form of gasoline for aircraft, with an emphasis on purity, anti-knock characteristics and minimization of spark plug fouling. Avgas must meet performance guidelines for both the rich mixture condition required for take-off power settings and the leaner mixtures used during cruise to reduce fuel consumption.
Avgas is sold in much lower volume than jet fuel, but to many more individual aircraft operators, whereas jet fuel is sold in high volume to large aircraft operators, such as airlines and militaries.
Energy content
The net energy content for aviation fuels depends on their composition. Some typical values are:
BP Avgas 80, 44.65 MJ/kg, density at 15 °C is 690 kg/m3 (30.81 MJ/litre).
Kerosene type BP Jet A-1, 43.15 MJ/kg, density at 15 °C is 804 kg/m3 (34.69 MJ/litre).
Kerosene type BP Jet TS-1 (for lower temperatures), 43.2 MJ/kg, density at 15 °C is 787 kg/m3 (34.00 MJ/litre).
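The volumetric figures follow directly from the mass-based ones: energy per litre is specific energy (MJ/kg) times density (kg/L). A quick arithmetic check using the values in the list above:

```python
# Volumetric energy density = specific energy * density.
for name, mj_per_kg, kg_per_m3 in [
    ("BP Avgas 80", 44.65, 690),
    ("BP Jet A-1",  43.15, 804),
    ("BP Jet TS-1", 43.20, 787),
]:
    print(f"{name}: {mj_per_kg * kg_per_m3 / 1000:.2f} MJ/litre")
# -> 30.81, 34.69, and 34.00 MJ/litre respectively
```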
Density
In performance calculations, airliner manufacturers use a jet fuel density of around 0.8 kg/L.
Specific cases are:
Bombardier Aerospace: The Challenger Multi-role Aircraft is a special mission variant of the Bombardier Challenger 650 business jet platform. Bombardier bases performance on the use of fuel with an average lower heating value of 18,550 BTU/lb (43.147 MJ/kg) and a density of .
Embraer: In its airport planning manual for the E195, Embraer uses an adopted fuel density of .
Chemical composition
Aviation fuels consist of blends of over two thousand chemicals, primarily hydrocarbons (paraffins, olefins, naphthenes, and aromatics), additives such as antioxidants and metal deactivators, biocides, static reducers, icing inhibitors, corrosion inhibitors, and impurities. Principal components include n-heptane and isooctane. Like other fuels, aviation fuel for spark-ignited piston engines is described by its octane rating.
Alcohol, alcohol mixtures, and other alternative fuels may be used experimentally, but alcohol is not permitted in any certified aviation fuel specification. In Brazil, the Embraer Ipanema EMB-202A is a version of the Ipanema agricultural aircraft with a modified Lycoming IO-540-K1J5 engine so as to be able to run on ethanol. Other aircraft engines that were modified to run on 100% ethanol were several other types of Lycoming engines (including the Lycoming 235N2C, and Lycoming IO-320) and certain Rotax engines.
Tax
The Convention on International Civil Aviation (Chicago Convention, 1944, Article 24) exempts air fuels already loaded onto an aircraft on landing (and which remain on the aircraft) from import taxes. Bilateral air services agreements govern the tax exemption of aviation fuels. In the course of an EU initiative, many of these agreements have been modified to allow taxation. A motion for a European Parliament resolution on a European Strategy for Low-emission Mobility has stated that "the possibilities for harmonised international measures for kerosene taxation for aviation" need to be explored.
A worry is that a local aviation fuel tax would cause increased tankering, where airlines carry extra fuel from low tax jurisdictions. This extra weight increases fuel burn, thus a local fuel tax could potentially increase overall fuel consumption. To avoid increased tankering, a worldwide aviation fuel tax has been proposed. Australia and the United States oppose a worldwide aviation fuel tax, but a number of other countries have expressed interest.
During a debate in the UK Parliament, the forgone tax income due to the exemption of tax on aviation fuel was estimated at £10 billion annually.
The planned inclusion of international aviation into the European Union Emission Trading Scheme in 2014 has been called an "illegal tax" by countries including the US and China, which cite the Chicago Convention.
Certification
Fuels have to conform to a specification in order to be approved for use in type certificated aircraft. The American Society for Testing and Materials (ASTM) developed specifications for automobile gasoline as well as aviation gasoline. These specifications are ASTM D910 and ASTM D6227 for aviation gasoline and ASTM D439 or ASTM D4814 (latest revision) for automobile gasoline.
In use
Aviation fuel generally arrives at the airport via pipeline systems, such as the CEPS. It is then pumped over and dispensed from a tanker or bowser. The fuel is then driven up to parked aircraft and helicopters. Some airports have pumps similar to filling stations to which aircraft must taxi. Some airports have permanent piping to parking areas for large aircraft.
Aviation fuel is transferred to an aircraft via one of two methods: overwing or underwing.
Overwing
Overwing fueling is used on smaller planes, helicopters, and all piston-engine aircraft. Overwing fueling is similar to car fueling — one or more fuel ports are opened and fuel is pumped in with a conventional pump.
Underwing
Underwing fueling, also called single-point refueling or pressure refueling where not dependent on gravity, is used on larger aircraft and for jet fuel exclusively.
For pressure refueling, a high-pressure hose is attached and fuel is pumped in at 275 kPa (40 psi) and a maximum of 310 kPa (45 psi) for most commercial aircraft. Pressure for military aircraft, especially fighters, ranges up to 415 kPa (60 psi). Air being displaced in the tanks is usually vented overboard through a single vent on the aircraft. Because there is only one attachment point, fuel distribution between tanks is either automated or controlled from a control panel, either at the fueling point or in the cockpit. An early use of pressure refueling was on the de Havilland Comet and Sud Aviation Caravelle. Larger aircraft allow for two or more attachment points; however, this is still referred to as single-point refueling, as either attachment point can refuel all of the tanks. Multiple attachments allow for a faster flowrate.
Misfueling
Because of the danger of confusing the fuel types, precautions are taken to distinguish between avgas and jet fuel beyond clearly marking all containers, vehicles, and piping. The aperture on fuel tanks of aircraft requiring avgas cannot be greater than 60 millimetres in diameter. Avgas is often dyed and is dispensed from nozzles with a diameter of 40 mm (49 mm in the United States).
Jet fuel is clear to straw-colored and is dispensed from a special nozzle, called a J spout or duckbill, that has a rectangular opening larger than 60 mm diagonally, so as not to fit into avgas ports. However, some jet and other turbine aircraft, such as some models of the Astar helicopter, have a fueling port too small for the J spout, and thus require a smaller nozzle.
Forecasting demand
In recent years, fuel markets have become increasingly volatile. This, along with rapidly changing airline schedules and the desire not to carry excess fuel on board aircraft, has increased the importance of demand forecasting. In March 2022, Austin–Bergstrom International Airport in Austin, Texas, came close to running out of fuel, potentially stranding aircraft. Common forecasting techniques include tracking airline schedules and routes, expected distance flown, ground procedures, fuel efficiency of each aircraft, and the impact of environmental factors like weather and temperature.
Safety precautions
Any fueling operation can be very dangerous, and aviation operations have characteristics which must be accommodated. As an aircraft flies through the air, it can accumulate static electricity. If this is not dissipated before fueling, an electric arc could occur and ignite fuel vapors. To prevent this, aircraft are electrically bonded to the fueling apparatus before fueling begins, and are not disconnected until after fueling is complete. Some regions require the aircraft and/or fuel truck to be grounded too. Pressure fueling systems incorporate a dead man's switch to preclude unmonitored operation.
Aviation fuel can cause severe environmental damage; all fueling vehicles must carry equipment to control fuel spills. Fire extinguishers must be present at any fueling operation. Airport firefighting forces are specially trained and equipped to handle aviation fuel fires and spills. Aviation fuel must be checked daily and before every flight for contaminants such as water or dirt.
Avgas is the only remaining lead-containing transportation fuel. Lead in avgas prevents damaging engine knock, or detonation, that can result in a sudden engine failure.
| Technology | Fuel | null |
642330 | https://en.wikipedia.org/wiki/Newtonian%20dynamics | Newtonian dynamics | In physics, Newtonian dynamics (also known as Newtonian mechanics) is the study of the dynamics of a particle or a small body according to Newton's laws of motion.
Mathematical generalizations
Typically, Newtonian dynamics occurs in a three-dimensional Euclidean space, which is flat. However, in mathematics Newton's laws of motion can be generalized to multidimensional and curved spaces. Often the term Newtonian dynamics is narrowed to Newton's second law $m\,\ddot{\mathbf r} = \mathbf F$.
Newton's second law in a multidimensional space
Consider $N$ particles with masses $m_1,\ldots,m_N$ in the regular three-dimensional Euclidean space. Let $\mathbf r_1,\ldots,\mathbf r_N$ be their radius-vectors in some inertial coordinate system. Then the motion of these particles is governed by Newton's second law applied to each of them:

$$\ddot{\mathbf r}_i = \frac{\mathbf F_i(\mathbf r_1,\ldots,\mathbf r_N,\,\dot{\mathbf r}_1,\ldots,\dot{\mathbf r}_N)}{m_i},\qquad i = 1,\ldots,N. \tag{1}$$

The three-dimensional radius-vectors $\mathbf r_1,\ldots,\mathbf r_N$ can be built into a single $3N$-dimensional radius-vector. Similarly, the three-dimensional velocity vectors can be built into a single $3N$-dimensional velocity vector:

$$\mathbf R = (\mathbf r_1,\ldots,\mathbf r_N),\qquad \mathbf V = (\dot{\mathbf r}_1,\ldots,\dot{\mathbf r}_N). \tag{2}$$

In terms of the multidimensional vectors (2) the equations (1) are written as

$$\dot{\mathbf R} = \mathbf V,\qquad \dot{\mathbf V} = \mathbf F(\mathbf R,\mathbf V), \tag{3}$$

i.e. they take the form of Newton's second law applied to a single particle with the unit mass $m = 1$.

Definition. The equations (3) are called the equations of a Newtonian dynamical system in a flat multidimensional Euclidean space, which is called the configuration space of this system. Its points are marked by the radius-vector $\mathbf R$. The space whose points are marked by the pair of vectors $(\mathbf R,\mathbf V)$ is called the phase space of the dynamical system (3).
Euclidean structure
The configuration space and the phase space of the dynamical system (3) are both Euclidean spaces, i.e. they are equipped with a Euclidean structure. Their Euclidean structure is defined so that the kinetic energy of the single multidimensional particle with the unit mass $m = 1$ is equal to the sum of the kinetic energies of the three-dimensional particles with the masses $m_1,\ldots,m_N$:

$$T = \frac{|\mathbf V|^2}{2} = \sum_{i=1}^{N} \frac{m_i\,|\dot{\mathbf r}_i|^2}{2}. \tag{4}$$
Constraints and internal coordinates
In some cases the motion of the particles with the masses $m_1,\ldots,m_N$ can be constrained. Typical constraints look like scalar equations of the form

$$\varphi_a(\mathbf r_1,\ldots,\mathbf r_N) = 0,\qquad a = 1,\ldots,K. \tag{5}$$

Constraints of the form (5) are called holonomic and scleronomic. In terms of the radius-vector $\mathbf R$ of the Newtonian dynamical system (3) they are written as

$$\varphi_a(\mathbf R) = 0,\qquad a = 1,\ldots,K. \tag{6}$$

Each such constraint reduces by one the number of degrees of freedom of the Newtonian dynamical system (3). Therefore, the constrained system has $n = 3N - K$ degrees of freedom.

Definition. The constraint equations (6) define an $n$-dimensional manifold $M$ within the configuration space of the Newtonian dynamical system (3). This manifold $M$ is called the configuration space of the constrained system. Its tangent bundle $TM$ is called the phase space of the constrained system.

Let $q^1,\ldots,q^n$ be the internal coordinates of a point of $M$. Their usage is typical for Lagrangian mechanics. The radius-vector $\mathbf R$ is expressed as some definite function of $q^1,\ldots,q^n$:

$$\mathbf R = \mathbf R(q^1,\ldots,q^n). \tag{7}$$

The vector-function (7) resolves the constraint equations (6) in the sense that upon substituting (7) into (6) the equations (6) are fulfilled identically in $q^1,\ldots,q^n$.
Internal presentation of the velocity vector
The velocity vector of the constrained Newtonian dynamical system is expressed in terms of the partial derivatives of the vector-function (7):

$$\mathbf V = \sum_{i=1}^{n} \frac{\partial\mathbf R}{\partial q^i}\,\dot q^i. \tag{8}$$

The quantities $\dot q^1,\ldots,\dot q^n$ are called the internal components of the velocity vector. Sometimes they are denoted with the use of a separate symbol

$$w^i = \dot q^i,\qquad i = 1,\ldots,n, \tag{9}$$

and then treated as independent variables. The quantities

$$q^1,\ldots,q^n,\,w^1,\ldots,w^n \tag{10}$$

are used as internal coordinates of a point of the phase space $TM$ of the constrained Newtonian dynamical system.
Embedding and the induced Riemannian metric
Geometrically, the vector-function (7) implements an embedding of the configuration space $M$ of the constrained Newtonian dynamical system into the $3N$-dimensional flat configuration space of the unconstrained Newtonian dynamical system (3). Due to this embedding the Euclidean structure of the ambient space induces a Riemannian metric on the manifold $M$. The components of the metric tensor of this induced metric are given by the formula

$$g_{ij} = \left(\frac{\partial\mathbf R}{\partial q^i},\,\frac{\partial\mathbf R}{\partial q^j}\right), \tag{11}$$

where $(\,\cdot\,,\,\cdot\,)$ is the scalar product associated with the Euclidean structure (4).
Kinetic energy of a constrained Newtonian dynamical system
Since the Euclidean structure of an unconstrained system of $N$ particles is introduced through their kinetic energy, the induced Riemannian structure on the configuration space $M$ of a constrained system preserves this relation to the kinetic energy:

$$T = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} g_{ij}\,w^i\,w^j. \tag{12}$$

The formula (12) is derived by substituting (8) into (4) and taking into account (11).
Constraint forces
For a constrained Newtonian dynamical system the constraints described by the equations (6) are usually implemented by some mechanical framework. This framework produces some auxiliary forces, including the force $\mathbf N$ that maintains the system within its configuration manifold $M$. Such a maintaining force is perpendicular to $M$. It is called the normal force. The force $\mathbf F$ from (3) is subdivided into two components

$$\mathbf F = \mathbf F_\parallel + \mathbf F_\perp. \tag{13}$$

The first component in (13) is tangent to the configuration manifold $M$. The second component is perpendicular to $M$; it coincides with the normal force $\mathbf N$.

Like the velocity vector (8), the tangent force $\mathbf F_\parallel$ has its internal presentation

$$\mathbf F_\parallel = \sum_{i=1}^{n} F^i\,\frac{\partial\mathbf R}{\partial q^i}. \tag{14}$$

The quantities $F^1,\ldots,F^n$ in (14) are called the internal components of the force vector.
Newton's second law in a curved space
The Newtonian dynamical system (3) constrained to the configuration manifold $M$ by the constraint equations (6) is described by the differential equations

$$\ddot q^k + \sum_{i=1}^{n}\sum_{j=1}^{n} \Gamma^k_{ij}\,\dot q^i\,\dot q^j = F^k,\qquad k = 1,\ldots,n, \tag{15}$$

where $\Gamma^k_{ij}$ are the Christoffel symbols of the metric connection produced by the Riemannian metric (11).
Relation to Lagrange equations
Mechanical systems with constraints are usually described by Lagrange equations:

$$\frac{d}{dt}\left(\frac{\partial T}{\partial w^k}\right) - \frac{\partial T}{\partial q^k} = Q_k,\qquad k = 1,\ldots,n, \tag{16}$$

where $T = T(q^1,\ldots,q^n,w^1,\ldots,w^n)$ is the kinetic energy of the constrained dynamical system given by the formula (12). The quantities $Q_1,\ldots,Q_n$ in (16) are the inner covariant components of the tangent force vector $\mathbf F_\parallel$ (see (13) and (14)). They are produced from the inner contravariant components $F^1,\ldots,F^n$ of the vector $\mathbf F_\parallel$ by means of the standard index-lowering procedure using the metric (11):

$$Q_k = \sum_{i=1}^{n} g_{ki}\,F^i,\qquad k = 1,\ldots,n. \tag{17}$$

The equations (16) are equivalent to the equations (15). However, the metric (11) and other geometric features of the configuration manifold $M$ are not explicit in (16). The metric (11) can be recovered from the kinetic energy $T$ by means of the formula

$$g_{ij} = \frac{\partial^2 T}{\partial w^i\,\partial w^j}. \tag{18}$$
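As a concrete sketch of the machinery above (a minimal example, assuming SymPy; the explicit mass-weighting of the metric stands in for the mass-weighted Euclidean structure (4)), consider a planar pendulum: one constraint on two coordinates leaves a single internal coordinate $q = \theta$, and the Lagrange equation (16) reproduces the familiar pendulum equation.

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)          # internal coordinate q = theta

# Embedding R(q) resolving the constraint x^2 + y^2 = l^2, as in (7).
x, y = l * sp.sin(theta), -l * sp.cos(theta)

# Induced metric component g_11 = (dR/dq, dR/dq), mass-weighted, as in (11).
dR = sp.Matrix([sp.diff(x, theta), sp.diff(y, theta)])
g11 = sp.simplify(m * dR.dot(dR))        # -> m*l**2

# Kinetic energy T = (1/2) g_11 w^2 with w = dtheta/dt, as in (12).
w = sp.diff(theta, t)
T = sp.Rational(1, 2) * g11 * w**2

# Covariant force component Q_1 for gravity F = (0, -m*g), as in (17).
Q1 = sp.Matrix([0, -m * g]).dot(dR)      # -> -m*g*l*sin(theta)

# Lagrange equation (16): d/dt(dT/dw) - dT/dtheta = Q_1.
lagrange = sp.diff(sp.diff(T, w), t) - sp.diff(T, theta) - Q1
print(sp.simplify(lagrange))             # m*l**2*theta'' + g*l*m*sin(theta)
```

Setting the printed expression to zero gives $m l^2\ddot\theta + m g l \sin\theta = 0$, the pendulum equation, illustrating how (16) encodes the constrained dynamics without reference to the ambient space.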
| Physical sciences | Classical mechanics | Physics |
642755 | https://en.wikipedia.org/wiki/Reduviidae | Reduviidae | The Reduviidae is a large cosmopolitan family of the suborder Heteroptera of the order Hemiptera (true bugs). Among the Hemiptera and together with the Nabidae almost all species are terrestrial ambush predators; most other predatory Hemiptera are aquatic. The main examples of non-predatory Reduviidae are some blood-sucking ectoparasites in the subfamily Triatominae, with a few species from South America noted for their ability to transmit Chagas disease. Though spectacular exceptions are known, most members of the family are fairly easily recognizable: they have a relatively narrow neck, sturdy build, and formidable curved proboscis (sometimes called a rostrum). Large specimens should be handled with caution, if at all, because they sometimes defend themselves with a very painful stab from the proboscis.
Taxonomy
The family members are almost all predatory, except for a few blood-sucking species, some of which are important as disease vectors. About 7000 species have been described, in more than 20 recognized subfamilies, making it one of the largest families in the Hemiptera.
The name Reduviidae is derived from the type genus, Reduvius. That name, in turn, comes from the Latin reduvia, meaning "hangnail" or "remnant". Possibly this name was inspired by the lateral flanges on the abdomen of many species.
Common genera include:
Lopodytes
Melanolestes
Platymeris
Pselliopus
Psyttala
Rasahus
Reduvius
Rhiginia
Sinea
Zelus
While members of most subfamilies have no common names other than assassin bugs, some subfamilies have their own common names such as:
Ambush bugs – subfamily Phymatinae
Thread-legged bugs – subfamily Emesinae, including the genus Emesaya
Kissing bugs (or cone-headed bugs) – subfamily Triatominae, unusual in that most species are blood-suckers and several are important disease vectors
Wheel bugs – genus Arilus, including the common North American species Arilus cristatus
Grass assassin bugs – genus Lopodytes
Morphology
Adult insects range considerably in size, depending on the species. They most commonly have an elongated head with a distinct narrowed 'neck', long legs, and prominent, segmented, tubular mouthparts, most commonly called the proboscis, though some authors use the term "rostrum". Most species are brightly coloured with hues of brown, black, red, or orange.
The most distinctive feature of the family is that the tip of the proboscis fits into a ridged groove in the prosternum, where it can be used to produce sound by stridulation. Sound is made by rasping the proboscis against ridges in this groove or stridulitrum (stridulatory organ). These sounds are often used to discourage predators. When harassed, many species can deliver a painful stab with the proboscis, injecting venom or digestive juices. The effects can be intensely painful and the injection from some species may be medically significant.
Feeding
Predatory Reduviidae use the long rostrum to inject a lethal saliva that liquefies the insides of the prey, which are then sucked out. The saliva contains enzymes that digest the tissues they swallow. This process is generally referred to as extraoral digestion. The saliva is commonly effective at killing prey substantially larger than the bug itself.
The legs of some Reduviidae have areas covered in tiny hairs that aid in holding onto their prey while they feed. Others, members of the subfamily Phymatinae in particular, have forelegs that resemble those of the praying mantis, and they catch and hold their prey in a similar way to mantises.
As nymphs, some species cover and camouflage themselves effectively with debris or the remains of dead prey insects. The nymphal instars of the species Acanthaspis pedestris present one good example of this behaviour where they occur in Tamil Nadu in India. Another well-known species is Reduvius personatus, known as the masked hunter because of its habit of camouflaging itself with dust. Some species tend to feed on pests such as cockroaches or bedbugs and are accordingly popular in regions where people regard their hunting as beneficial. Reduvius personatus is an example, and some people breed them as pets and for pest control. Some assassin bug subfamilies are adapted to hunting certain types of prey; for example, the Ectrichodiinae eat millipedes, and feather-legged bugs eat ants. A spectacular example of the latter is Ptilocnemus lemur, an Australian species in which the adult attacks and eats ants, but the nymph waits until the ant bites the feathery tufts on its hind legs, upon which it whips around and pierces the ant's head with its proboscis, and proceeds to feed.
Some research on the nature of the venom from certain Reduviidae is under way. The saliva of Rhynocoris marginatus showed some insecticidal activity in vitro, in tests on lepidopteran pests. The effects included reduction of food consumption, assimilation, and use. Its antiaggregation factors also affected the aggregation and mobility of haemocytes.
The saliva of the species Rhynocoris marginatus (Fab.) and Catamirus brevipennis (Servile) have been studied because of their activity against human pathogenic Gram-negative bacteria (including strains of Escherichia coli, Pseudomonas aeruginosa, Proteus vulgaris, and Salmonella typhimurium) and the Gram-positive (Streptococcus pyogenes).
Some species are bloodsuckers rather than predators, and they are accordingly far less welcome to humans. The blood-feeding habit is thought to have evolved from species that lived in the nests of mammalian hosts. Several species are known to live among bat roosts, including Cavernicola pilosa, Triatoma dimidiata and Eratyrus mucronatus. Triatoma species and other members of the subfamily Triatominae, such as Rhodnius species, Panstrongylus megistus, and Paratriatoma hirsuta, are known as kissing bugs, because they tend to bite sleeping humans in the soft tissue around the lips and eyes. A more serious problem than their bites is the fact that several of these haematophagous Central and South American species transmit the potentially fatal trypanosomal Chagas disease, sometimes called American trypanosomiasis. This results in the death of 12,000 people a year.
The Emesinae live among spider webs.
Phylogeny and evolutionary history
Current taxonomy is based on morphological characteristics. The first cladistic analysis based on molecular data (mitochondrial and nuclear ribosomal DNA) was published in 2009 and called into question the monophyly of some current groups, such as the Emesinae. Reduviidae are monophyletic, and the "Phymatine Complex" is consistently recovered as the sister to the higher Reduviidae, which includes 90 percent of the reduviid species diversity. Reduviidae is suggested to have split from other Cimicomorphs during the Jurassic, based on molecular clock. The oldest fossils of the family are from the Late Cretaceous (Cenomanian) aged Burmese amber, represented by nymphs and the genus Paleotriatoma, belonging to the subfamily Triatominae.
Example species
Arilus cristatus
| Biology and health sciences | Hemiptera (true bugs) | Animals |
642791 | https://en.wikipedia.org/wiki/Punxsutawney%20Phil | Punxsutawney Phil | Punxsutawney Phil () is a groundhog residing in Young Township near Punxsutawney, Pennsylvania, United States, who is the central figure in Punxsutawney's annual Groundhog Day celebration.
Folklore
On February 2 each year, Punxsutawney holds a civic festival with music and food. During the ceremony, which begins well before the winter sunrise, Phil emerges from his temporary home on Gobbler's Knob, located in a rural area southeast of the town. According to the tradition, if Phil sees his shadow and returns to his hole, he has predicted six more weeks of winter-like weather. If Phil does not see his shadow, he has predicted an "early spring." Punxsutawney's event is the most famous of many Groundhog Day festivals held in the United States and Canada. The event formally began in 1887, although its roots go back even further.
The event is based upon a communal light-hearted suspension of disbelief which extends to the assertion that the same groundhog has been making predictions since the 19th century.
The event is organized by the "Inner Circle" – recognizable by their top hats and tuxedos – who ostensibly communicate with Phil to receive his prediction.
The vice president of the Inner Circle prepares two scrolls in advance of the actual ceremony, one proclaiming six more weeks of winter and one proclaiming an early spring. At daybreak on February 2, Punxsutawney Phil awakens from his burrow on Gobbler's Knob, is helped to the top of the stump by his handlers, and purportedly explains to the president of the Inner Circle, in a language known as "Groundhogese", whether he has seen his shadow. The president of the Inner Circle, the only person able to understand Groundhogese through his possession of an ancient acacia wood cane, then interprets Phil's message, and directs the vice president to read the proper scroll to the crowd gathered on Gobbler's Knob and the masses of "phaithphil phollowers" tuned in to live broadcasts around the world.
The Inner Circle scripts the Groundhog Day ceremonies in advance, deciding beforehand whether Phil will see his shadow. The Stormfax Almanac has made note of the weather conditions on each Groundhog Day since 1999; the almanac has recorded 12 incidents in a 20-year span in which the Inner Circle said the groundhog saw his shadow while the sky was cloudy or there was rain or snow coming down, and in one case said the groundhog did not see his shadow despite sunshine. Outside of Groundhog Day, Phil resides with a mate, Phyllis, at the Punxsutawney Memorial Library in a climate-controlled environment. In March 2024, the Inner Circle announced that Phil had sired two babies, the first such siring in the history of the event; the birth surprised the Inner Circle, which had assumed that groundhogs do not breed in captivity. As a result of the births, the family will move permanently to Gobbler's Knob. The Inner Circle ruled that the babies would never inherit their father's position.
Punxsutawney Phil canon
The practices and lore of Punxsutawney Phil's predictions are predicated on a light-hearted suspension of disbelief by those involved. According to the lore, there is only one Phil, and all other groundhogs are impostors. It is claimed that this one groundhog has lived to make weather prognostications since 1886, sustained by drinks of "groundhog punch" or "elixir of life" administered at the annual Groundhog Picnic in the fall. The lifespan of a groundhog in the wild is roughly six years.
According to the Groundhog Club, Phil, after the prediction, speaks to the club president in the language of 'Groundhogese', which supposedly only the current president can understand, and then his prediction is translated and revealed to all.
The Groundhog Day celebration is rooted in Germanic tradition that says that if a hibernating animal casts a shadow on February 2, the Christian celebration of Candlemas, winter and cold weather will last another six weeks. If no shadow is seen, legend says, spring will come early. In Germany, the tradition evolved into a myth that if the sun came out on Candlemas, a hedgehog would cast its shadow, predicting snow all the way into May. When German immigrants settled in Pennsylvania, they transferred the tradition onto local fauna, replacing hedgehogs with groundhogs. Several other towns in the region hold similar Groundhog Day events.
Phil first received his name in 1961. The origins of the name are unclear, but speculation suggests that it may have been indirectly named after Prince Philip, Duke of Edinburgh.
Reception
Prior to 1993, the Groundhog Day event in Punxsutawney attracted crowds of approximately 2,000. The popularity of the film Groundhog Day brought significantly more attention to the event, with annual crowds rising to 10,000–20,000. A notable exception was 2021, when the event took place without any crowds due to the COVID-19 pandemic. Since approximately 2018, the event is streamed online each year.
Given the recent increase in crowd sizes, three sitting governors of Pennsylvania have attended the festivities, all since 2000: Ed Rendell in 2003, Tom Corbett in 2012, and Josh Shapiro in 2023 and 2024.
Phil was named the "Official" State Meteorologist by Governor Shapiro during the 2024 ceremony.
People for the Ethical Treatment of Animals object to the event, claiming that Phil is put under stress. They suggest replacing Phil with a robotic groundhog.
In some cases where Phil's prognostications have been incorrect, organizations have jokingly made legal threats against the groundhog. Such tongue-in-cheek actions have been made by a prosecutor in Ohio, the sheriff's office of Monroe County, Pennsylvania, and the Merrimack, New Hampshire Police Department.
In media and popular culture
Phil and the town of Punxsutawney were portrayed in the 1993 film Groundhog Day. The actual town used to portray Punxsutawney in the film is Woodstock, Illinois.
In Groundhog Day, the 2016 Broadway musical adaptation of the film, Phil is ascribed a more mythical role.
In 1995, Phil flew to Chicago for a guest appearance on The Oprah Winfrey Show, which aired on Groundhog Day, February 2, 1995.
A 2002 episode of the children's animated series Stanley, titled "Searching for Spring", featured Punxsutawney Phil.
Phil was the main attraction in "Groundhog Day", the April 10, 2005 episode of the MTV series Viva La Bam. In the episode, street skater Bam Margera holds a downhill race against Mark Costagliola in honor of Punxsutawney Phil at Bear Creek Mountain Resort in Macungie, Pennsylvania.
The Pennsylvania Lottery's mascot is a groundhog named Gus, referred to in commercials as "the second most famous groundhog in Pennsylvania", in deference to Phil. Because the Groundhog Club Inner Circle has trademarked the use of the name "Punxsutawney Phil", no commercial entity may use the name without permission from the Inner Circle, which does not allow commercialization of the name.
Past predictions
Predictive accuracy
The Inner Circle, in keeping with the suspension of disbelief, claims a 100% accuracy rate, and an approximately 80% accuracy rate in recorded predictions. They claim that whenever the prediction is wrong, the person in charge of translating the message must have made a mistake in their interpretation. Impartial estimates place the groundhog's accuracy between 35% and 41%.
| Biology and health sciences | Individual animals | Animals |
642982 | https://en.wikipedia.org/wiki/Trade%20winds | Trade winds | The trade winds or easterlies are permanent east-to-west prevailing winds that flow in the Earth's equatorial region. The trade winds blow mainly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere, strengthening during the winter and when the Arctic oscillation is in its warm phase. Trade winds have been used by captains of sailing ships to cross the world's oceans for centuries. They enabled European colonization of the Americas, and trade routes to become established across the Atlantic Ocean and the Pacific Ocean.
In meteorology, they act as the steering flow for tropical storms that form over the Atlantic, Pacific, and southern Indian oceans and cause rainfall in North America, Southeast Asia, and Madagascar and East Africa. Shallow cumulus clouds are seen within trade wind regimes and are capped from becoming taller by a trade wind inversion, which is caused by descending air aloft from within the subtropical ridge. The weaker the trade winds become, the more rainfall can be expected in the neighboring landmasses.
The trade winds also transport nitrate- and phosphate-rich Saharan dust to all Latin America, the Caribbean Sea, and to parts of southeastern and southwestern North America. Sahara dust is on occasion present in sunsets across Florida. When dust from the Sahara travels over land, rainfall is suppressed and the sky changes from a blue to a white appearance which leads to an increase in red sunsets. Its presence negatively impacts air quality by adding to the count of airborne particulates.
History
The term originally derives from the early-fourteenth-century sense of trade (in late Middle English), which still often meant "path" or "track". The Portuguese recognized the importance of the trade winds (then the volta do mar, meaning in Portuguese "turn of the sea" but also "return from the sea") in navigation in both the north and south Atlantic Ocean as early as the 15th century. From West Africa, the Portuguese had to sail away from continental Africa, that is, to the west and northwest. They could then turn northeast, to the area around the Azores islands, and finally east to mainland Europe. They also learned that to reach South Africa, they needed to go far out into the ocean, head for Brazil, and around 30°S go east again. (This is because following the African coast southbound means sailing upwind in the Southern Hemisphere.) In the Pacific Ocean, the full wind circulation, which included both the trade wind easterlies and higher-latitude westerlies, was unknown to Europeans until Andres de Urdaneta's voyage in 1565.
The captain of a sailing ship seeks a course along which the winds can be expected to blow in the direction of travel. During the Age of Sail, the pattern of prevailing winds made various points of the globe easy or difficult to access, and therefore had a direct effect on European empire-building and thus on modern political geography. For example, Manila galleons could not sail into the wind at all.
By the 18th century, the importance of the trade winds to England's merchant fleet for crossing the Atlantic Ocean had led both the general public and etymologists to identify the name with a later meaning of "trade": "(foreign) commerce". Between 1847 and 1849, Matthew Fontaine Maury collected enough information to create wind and current charts for the world's oceans.
Cause
As part of the Hadley cell, surface air flows toward the equator while the flow aloft is towards the poles. A low-pressure area of calm, light variable winds near the equator is known as the doldrums, near-equatorial trough, intertropical front, or the Intertropical Convergence Zone. When located within a monsoon region, this zone of low pressure and wind convergence is also known as the monsoon trough. Around 30° in both hemispheres, air begins to descend toward the surface in subtropical high-pressure belts known as subtropical ridges. The subsident (sinking) air is relatively dry because as it descends, the temperature increases, but the moisture content remains constant, which lowers the relative humidity of the air mass. This warm, dry air is known as a superior air mass and normally resides above a maritime tropical (warm and moist) air mass. An increase of temperature with height is known as a temperature inversion. When it occurs within a trade wind regime, it is known as a trade wind inversion.
The surface air that flows from these subtropical high-pressure belts toward the Equator is deflected toward the west in both hemispheres by the Coriolis effect. These winds blow predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere. Because winds are named for the direction from which the wind is blowing, these winds are called the northeasterly trade winds in the Northern Hemisphere and the southeasterly trade winds in the Southern Hemisphere. The trade winds of both hemispheres meet at the Doldrums.
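As a small numerical illustration of the deflection just described (a sketch, not part of the article), the Coriolis parameter $f = 2\Omega\sin\varphi$ measures the strength of the deflection at latitude $\varphi$:

```python
import math

OMEGA = 7.2921e-5   # Earth's rotation rate in rad/s

def coriolis_parameter(lat_deg):
    """f = 2 * Omega * sin(latitude), in 1/s."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

for lat in (0, 15, 30):
    print(f"latitude {lat:>2} deg: f = {coriolis_parameter(lat):+.2e} 1/s")
# f vanishes at the equator, where the trades of the two hemispheres
# meet in the doldrums, and grows toward the subtropical ridges.
```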
As they blow across tropical regions, air masses heat up over lower latitudes due to more direct sunlight. Those that develop over land (continental) are drier and hotter than those that develop over oceans (maritime), and travel northward on the western periphery of the subtropical ridge. Maritime tropical air masses are sometimes referred to as trade air masses. All tropical oceans except the northern Indian Ocean have extensive areas of trade winds.
Weather and biodiversity effects
Clouds which form above regions within trade wind regimes are typically composed of cumulus which extend no more than in height, and are capped from being taller by the trade wind inversion. Trade winds originate more from the direction of the poles (northeast in the Northern Hemisphere, southeast in the Southern Hemisphere) during the cold season, and are stronger in the winter than the summer. As an example, the windy season in the Guianas, which lie at low latitudes in South America, occurs between January and April. When the phase of the Arctic oscillation (AO) is warm, trade winds are stronger within the tropics. The cold phase of the AO leads to weaker trade winds. When the trade winds are weaker, more extensive areas of rain fall upon landmasses within the tropics, such as Central America.
During mid-summer in the Northern Hemisphere (July), the westward-moving trade winds south of the northward-moving subtropical ridge expand northwestward from the Caribbean Sea into southeastern North America (Florida and Gulf Coast). When dust from the Sahara moving around the southern periphery of the ridge travels over land, rainfall is suppressed and the sky changes from a blue to a white appearance which leads to an increase in red sunsets. Its presence negatively impacts air quality by adding to the count of airborne particulates. Although the Southeast US has some of the cleanest air in North America, much of the African dust that reaches the United States affects Florida. Since 1970, dust outbreaks have worsened due to periods of drought in Africa. There is a large variability in the dust transport to the Caribbean and Florida from year to year. Dust events have been linked to a decline in the health of coral reefs across the Caribbean and Florida, primarily since the 1970s.
Every year, millions of tons of nutrient-rich Saharan dust cross the Atlantic Ocean, bringing vital phosphorus and other fertilizers to depleted Amazon soils.
| Physical sciences | Winds | null |
643020 | https://en.wikipedia.org/wiki/Aeolian%20processes | Aeolian processes | Aeolian processes, also spelled eolian, pertain to wind activity in the study of geology and weather and specifically to the wind's ability to shape the surface of the Earth (or other planets). Winds may erode, transport, and deposit materials and are effective agents in regions with sparse vegetation, a lack of soil moisture and a large supply of unconsolidated sediments. Although water is a much more powerful eroding force than wind, aeolian processes are important in arid environments such as deserts.
The term is derived from the name of the Greek god Aeolus, the keeper of the winds.
Definition and setting
Aeolian processes are those processes of erosion, transport, and deposition of sediments that are caused by wind at or near the surface of the earth. Sediment deposits produced by the action of wind and the sedimentary structures characteristic of these deposits are also described as aeolian.
Aeolian processes are most important in areas where there is little or no vegetation. However, aeolian deposits are not restricted to arid climates. They are also seen along shorelines; along stream courses in semiarid climates; in areas of ample sand weathered from weakly cemented sandstone outcrops; and in areas of glacial outwash.
Loess, which is silt deposited by wind, is common in humid to subhumid climates. Much of North America and Europe are underlain by sand and loess of Pleistocene age originating from glacial outwash.
The lee (downwind) sides of river valleys in semiarid regions are often blanketed with sand and sand dunes. Examples in North America include the Platte, Arkansas, and Missouri Rivers.
Wind erosion
Wind erodes the Earth's surface by deflation (the removal of loose, fine-grained particles by the turbulent action of the wind) and by abrasion (the wearing down of surfaces by the grinding action and sandblasting by windborne particles). Once entrained in the wind, collisions between particles further break them down, a process called attrition.
Worldwide, erosion by water is more important than erosion by wind, but wind erosion is important in semiarid and arid regions. Wind erosion is increased by some human activities, such as the use of 4x4 vehicles.
Deflation
Deflation is the lifting and removal of loose material from the surface by wind turbulence. It takes place by three mechanisms: traction/surface creep, saltation, and suspension. Traction, or surface creep, is a process of larger grains sliding or rolling across the surface. Saltation refers to particles bouncing across the surface for short distances. Suspended particles are fully entrained in the wind, which carries them for long distances. Saltation likely accounts for 50–70% of deflation, while suspension accounts for 30–40% and surface creep for 5–25%.
Regions which experience intense and sustained erosion are called deflation zones. Most aeolian deflation zones are composed of desert pavement, a sheet-like surface of rock fragments that remains after wind and water have removed the fine particles. The rock mantle in desert pavements protects the underlying material from further deflation. Areas of desert pavement form the regs or stony deserts of the Sahara. These are further divided into rocky areas called hamadas and areas of small rocks and gravel called serirs. Desert pavement is extremely common in desert environments.
Blowouts are hollows formed by wind deflation. Blowouts are generally small, but may be up to several kilometers in diameter. The smallest are mere dimples deep and in diameter. The largest include the blowout hollows of Mongolia, which can be across and deep. Big Hollow in Wyoming, US, extends and is up to deep.
Abrasion
Abrasion (also sometimes called corrasion) is the process of wind-driven grains knocking or wearing material off of landforms. It was once considered a major contributor to desert erosion, but by the mid-20th Century, it had come to be considered much less important. Wind can normally lift sand only a short distance, with most windborne sand remaining within of the surface and practically none normally being carried above . Many desert features once attributed to wind abrasion, including wind caves, mushroom rocks, and the honeycomb weathering called tafoni, are now attributed to differential weathering, rainwash, deflation rather than abrasion, or other processes.
Yardangs are one kind of desert feature that is widely attributed to wind abrasion. These are rock ridges, up to tens of meters high and kilometers long, that have been streamlined by desert winds. Yardangs characteristically show elongated furrows or grooves aligned with the prevailing wind. They form mostly in softer material such as silts.
Abrasion produces polishing and pitting, grooving, shaping, and faceting of exposed surfaces. These are widespread in arid environments but geologically insignificant. Polished or faceted surfaces called ventifacts are rare, requiring abundant sand, powerful winds, and a lack of vegetation for their formation.
In parts of Antarctica, wind-blown snowflakes, which are technically sediments, have also caused abrasion of exposed rocks.
Attrition
Attrition is the wearing down by collisions of particles entrained in a moving fluid. It is effective at rounding sand grains and at giving them a distinctive frosted surface texture.
Collisions between windborne particles are a major source of dust in the size range of 2–5 microns. Most of this is produced by the removal of a weathered clay coating from the grains.
Transport
Wind dominates the transport of sand and finer sediments in arid environments. Wind transport is also important in periglacial areas, on river flood plains, and in coastal areas. Coastal winds transport significant amounts of siliciclastic and carbonate sediments inland, while wind storms and dust storms can carry clay and silt particles great distances. Wind transports much of the sediments deposited in deep ocean basins. In ergs (desert sand seas), wind is very effective at transporting grains of sand size and smaller.
Particles are transported by winds through suspension, saltation (skipping or bouncing) and creeping (rolling or sliding) along the ground. The minimum wind velocity to initiate transport is called the fluid threshold or static threshold and is the wind velocity required to begin dislodging grains from the surface. Once transport is initiated, there is a cascade effect from grains tearing loose other grains, so that transport continues until the wind velocity drops below the dynamic threshold or impact threshold, which is usually less than the fluid threshold. In other words, there is hysteresis in the wind transport system.
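A minimal sketch of the two thresholds (using Bagnold's classic square-root formula; the coefficients are typical literature values, not taken from this article):

```python
import math

def threshold_friction_velocity(d, A, rho_s=2650.0, rho_a=1.22, g=9.81):
    """Bagnold threshold friction velocity (m/s) for grain diameter d (m).

    A is an empirical coefficient: ~0.10 for the fluid (static)
    threshold and ~0.08 for the impact (dynamic) threshold in air.
    """
    return A * math.sqrt((rho_s - rho_a) / rho_a * g * d)

d = 250e-6   # a typical 0.25 mm dune-sand grain
print(f"fluid threshold : {threshold_friction_velocity(d, A=0.10):.2f} m/s")
print(f"impact threshold: {threshold_friction_velocity(d, A=0.08):.2f} m/s")
# The lower impact threshold is the hysteresis described above:
# transport, once started, persists at wind speeds too weak to start it.
```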
Small particles may be held in the atmosphere in suspension. Turbulent air motion supports the weight of suspended particles and allows them to be transported for great distances. Wind is particularly effective at separating sediment grains under 0.05 mm in size from coarser grains as suspended particles.
Saltation is downwind movement of particles in a series of jumps or skips. Saltation is most important for grains of up to 2 mm in size. A saltating grain may hit other grains that jump up to continue the saltation. The grain may also hit larger grains (over 2 mm in size) that are too heavy to hop, but that slowly creep forward as they are pushed by saltating grains. Surface creep accounts for as much as 25 percent of grain movement in a desert.
Vegetation is effective at suppressing aeolian transport. Vegetation cover of as little as 15% is sufficient to eliminate most sand transport. The size of shore dunes is limited mostly by the amount of open space between vegetated areas.
Aeolian transport from deserts plays an important role in ecosystems globally. For example, wind transports minerals from the Sahara to the Amazon basin. Saharan dust is also responsible for forming red clay soils in southern Europe.
Dust storms
Dust storms are wind storms that have entrained enough dust to reduce visibility to less than 1 km. Most occur on the synoptic (regional) scale, due to strong winds along weather fronts, or locally from downbursts from thunderstorms.
Crops, people, and possibly even climates are affected by dust storms. On Earth, dust can cross entire oceans, as occurs with dust from the Sahara that reaches the Amazon Basin. Dust storms on Mars periodically engulf the entire planet. When the Mariner 9 spacecraft entered its orbit around Mars in 1971, a dust storm lasting one month covered the entire planet, thus delaying the task of photo-mapping the planet's surface.
Most of the dust carried by dust storms is in the form of silt-size particles. Deposits of this windblown silt are known as loess. The thickest known deposit of loess, up to , is on the Loess Plateau in China. This very same Asian dust is blown for thousands of miles, forming deep beds in places as far away as Hawaii. The Peoria Loess of North America is up to thick in parts of western Iowa. The soils developed on loess are generally highly productive for agriculture.
Small whirlwinds, called dust devils, are common in arid lands and are thought to be related to very intense local heating of the air that results in instabilities of the air mass. Dust devils may be as much as one kilometer high. Dust devils on Mars have been observed as high as , though this is uncommon.
Deposition
Wind is very effective at separating sand from silt and clay. As a result, there are distinct sandy (erg) and silty (loess) aeolian deposits, with only limited interbedding between the two. Loess deposits are found further from the original source of sediments than ergs. An example of this is the Sand Hills of Nebraska, US. Here vegetation-stabilized sand dunes are found to the west and loess deposits to the east, further from the original sediment source in the Ogallala Formation at the feet of the Rocky Mountains.
Some of the most significant experimental measurements on aeolian landforms were performed by Ralph Alger Bagnold, a British army engineer who worked in Egypt prior to World War II. Bagnold investigated the physics of particles moving through the atmosphere and deposited by wind. He recognized two basic dune types, the crescentic dune, which he called "barchan", and the linear dune, which he called longitudinal or "seif" (Arabic for "sword"). Bagnold developed a classification scheme that included small-scale ripples and sand sheets as well as various types of dunes.
Bagnold's classification is most applicable in areas devoid of vegetation. In 1941, John Tilton Hack added parabolic dunes, which are strongly influenced by vegetation, to the list of dune types. The discovery of dunes on Mars reinvigorated aeolian process research, which increasingly makes use of computer simulation.
Wind-deposited materials hold clues to past as well as to present wind directions and intensities. These features help us understand the present climate and the forces that molded it. For example, vast inactive ergs in much of the modern world attest to late Pleistocene trade wind belts being much expanded during the Last Glacial Maximum. Ice cores show a tenfold increase in non-volcanic dust during glacial maxima. The highest dust peak in the Vostok ice cores dates to 20 to 21 thousand years ago. The abundant dust is attributed to a vigorous low-latitude wind system plus more exposed continental shelf due to low sea levels.
Wind-deposited sand bodies occur as ripples and other small-scale features, sand sheets, and dunes.
Ripples and other small-scale features
Wind blowing on a sand surface ripples the surface into crests and troughs whose long axes are perpendicular to the wind direction. The average length of jumps during saltation corresponds to the wavelength, or distance between adjacent crests, of the ripples. In ripples, the coarsest materials collect at the crests, causing inverse grading. This distinguishes small ripples from dunes, where the coarsest materials are generally in the troughs. It is also a distinguishing feature between water-laid ripples and aeolian ripples.
A sand shadow is an accumulation of sand on the downwind side of an obstruction, such as a boulder or an isolated patch of vegetation. Here the sand builds up to the angle of repose (the maximum stable slope angle), about 34 degrees, then begins sliding down the slip face of the accumulation. A sandfall is a sand shadow formed at a cliff or escarpment.
Closely related to sand shadows are sand drifts. These form downwind of a gap between obstructions, due to the funneling effect of the obstructions on the wind.
Sand sheets
Sand sheets are flat or gently undulating sandy deposits with only small surface ripples. An example is the Selima Sand Sheet in the eastern Sahara Desert, which occupies in southern Egypt and northern Sudan. This consists of a few feet of sand resting on bedrock. Sand sheets are often remarkably flat and are sometimes described as desert peneplains.
Sand sheets are common in desert environments, particularly on the margins of dune fields, although they also occur within ergs. Conditions that favor the formation of sand sheets, instead of dunes, may include surface cementation, a high water table, the effects of vegetation, periodic flooding, or sediments rich in grains too coarse for effective saltation.
Dunes
A dune is an accumulation of sediment blown by the wind into a mound or ridge. Dunes differ from sand shadows and sand drifts in that they are independent of any topographic obstacle. Dunes have gentle upwind slopes on the windward side. The downwind portion of the dune, the lee slope, is commonly a steep avalanche slope referred to as a slipface. Dunes may have more than one slipface. The minimum height of a slipface is about 30 centimeters.
Wind-blown sand moves up the gentle upwind side of the dune by saltation or creep. Sand accumulates at the brink, the top of the slipface. When the buildup of sand at the brink exceeds the angle of repose, a small avalanche of grains slides down the slipface. Grain by grain, the dune moves downwind.
Dunes take three general forms. Linear dunes, also called longitudinal dunes or seifs, are aligned in the direction of the prevailing winds. Transverse dunes, which include crescent dunes (barchans), are aligned perpendicular to the prevailing winds. More complex dunes, such as star dunes, form where the directions of the winds are highly variable. Additional dune types arise from various kinds of topographic forcing, such as from isolated hills or escarpments.
Transverse dunes
Transverse dunes occur in areas dominated by a single direction of the prevailing wind. In areas where sand is not abundant, transverse dunes take the form of barchans or crescent dunes. These are not common, but they are highly recognizable, with a distinctive crescent shape whose tips are directed downwind. The dunes are widely separated by areas of bedrock or reg. Barchans migrate up to per year, with the taller dunes migrating faster. Barchans first form when some minor topographic feature creates a sand patch. This grows into a sand mound, and the converging streamlines of the air flow around the mound build it into the distinctive crescent shape. Growth is ultimately limited by the carrying capacity of the wind: as the wind becomes saturated with sediment, sand is deposited on the slip face of the dune. Because barchans develop in areas of limited sand availability, they are poorly preserved in the geologic record.
Where sand is more abundant, transverse dunes take the form of aklé dunes, such as those of the western Sahara. These form a network of sinuous ridges perpendicular to the wind direction. Aklé dunes are preserved in the geologic record as sandstone with large sets of cross-bedding and many reactivation surfaces.
Draas are very large composite transverse dunes. They can be up to across and high and extend lengthwise for hundreds of kilometers. In form, they resemble a large aklé or barchanoid dune. They form over a prolonged period of time in areas of abundant sand and show a complex internal structure. Careful 3-D mapping is required to determine the morphology of a draa preserved in the geologic record.
Linear dunes
Linear dunes can be traced up to tens of kilometers, with heights sometimes in excess of . They are typically several hundred meters across and are spaced apart. They sometimes coalesce at a Y-junction with the fork directed upwind. They have a sharp sinuous or en echelon crest. They are thought to form from a bimodal seasonal wind pattern, with a weak wind season characterized by wind directed at an acute angle to the prevailing winds of the strong wind season. The strong wind season produces a barchan form and the weak wind season stretches this into the linear form. Another possibility is that these dunes result from secondary flow, though the precise mechanism remains uncertain.
Complex dunes
Complex dunes (star dunes or rhourd dunes) are characterized by having more than two slip faces. They are typically across and high. They consist of a central peak with radiating crests and are thought to form where strong winds can come from any direction. Those in the Gran Desierto de Altar of Mexico are thought to have formed from precursor linear dunes due to a change in the wind pattern about 3000 years ago. Complex dunes show little lateral growth but strong vertical growth and are important sand sinks.
Other dune types
Vegetated parabolic dunes are crescent-shaped, but the ends of the crescent point upwind, not downwind. They form from the interaction of vegetation patches with active sand sources, such as blowouts. The vegetation stabilizes the arms of the dune, and an elongated lake sometimes forms between the arms of the dune.
Clay dunes are uncommon but have been found in Africa, Australia, and along the Gulf Coast of North America. These form on mud flats on the margins of saline bodies of water subject to strong prevailing winds during a dry season. Clay particles are bound into sand-sized pellets by salts and are then deposited in the dunes, where the return of the cool season allows the pellets to absorb moisture and become bound to the dune surface.
Aeolian desert systems
Deserts cover 20 to 25 percent of the modern land surface of the earth, mostly between the latitudes of 10 to 30 degrees north or south. Here the descending part of the tropical atmospheric circulation (the Hadley cell) produces high atmospheric pressure and suppresses precipitation. Large areas of these deserts are floored with windblown sand. Such areas are called ergs when they exceed about in area or dune fields when smaller. Ergs and dune fields make up about 20% of modern deserts, or about 6% of the Earth's total land surface.
The sandy areas of today's world are somewhat anomalous. Deserts, in both the present day and in the geological record, are usually dominated by alluvial fans rather than dune fields. The present relative abundance of sandy areas may reflect reworking of Tertiary sediments following the Last Glacial Maximum. Most modern deserts have experienced extreme Quaternary climate change, and the sediments that are now being churned by wind systems were generated in upland areas during previous pluvial (moist) periods and transported to depositional basins by stream flow. The sediments, already sorted during their initial fluvial transport, were further sorted by wind, which also sculpted the sediments into eolian landforms.
The state of an aeolian system depends mainly on three things: The amount of sediment supply, the availability of sediments, and the transport capacity of the winds. The sediment supply is largely produced in pluvial periods (periods of greater rainfall) and accumulates by runoff as fan deltas or terminal fans in sedimentary basins. Another important source of sediments is the reworking of carbonate sediments on continental shelves exposed during times of lower sea level. Sediment availability depends on the coarseness of the local sediment supply, the degree of exposure of sediment grains, the amount of soil moisture, and the extent of vegetation coverage. The potential transport rate of wind is usually more than the actual transport, because the sediment supply is usually insufficient to saturate the wind. In other words, most aeolian systems are transport-undersaturated (or sediment-undersaturated).
Aeolian desert systems can be divided into wet, dry, or stabilized systems. Dry systems have the water table well below the surface, where it has no stabilizing effect on sediments. Dune shapes determine whether sediment is deposited, simply moves across the surface (a bypass system), or is eroded. Wet systems are characterized by a water table near the depositional surface, which exerts a strong control on deposition, bypass, or erosion. Stabilized systems have significant vegetation, surface cement, or mud drapes that dominate the evolution of the system. The Sahara shows the full range of all three types.
The movement of sediments in aeolian systems can be represented by sand-flow maps. These are based on meteorological observations, bedform orientations, and trends of yardangs. They are analogous to drainage maps, but are not as closely tied to topography, since wind can blow sand significant distances uphill.
The Sahara of North Africa is the largest hot desert in the world. Flowlines can be traced from erg to erg, demonstrating very long transport downwind. Satellite observations show yardangs aligned with the sandflow lines. All flowlines arise in the desert itself and show indications of clockwise circulation, roughly like high-pressure cells. The greatest deflation occurs in dried lake beds where trade winds form a low-level jet between the Tibesti Mountains and the Ennedi Plateau. The flowlines eventually reach the sea, creating a great plume of Saharan dust extending thousands of kilometers into the Atlantic Ocean. This creates a steady rain of silt into the ocean. It is estimated that 260 million tons of sediments are transported through this system each year, but the amount was much greater during the Last Glacial Maximum, based on deep-sea cores. Mineral dust of 0.1–1 microns in size is a good shortwave radiation scatterer and has a cooling effect on climate.
Another example of an aeolian system is the arid interior of Australia. With few topographic barriers to sand movement, an anticlockwise wind system is traced by systems of longitudinal dunes.
The Namib and Oman ergs are fed by coastal sediments. The Namib receives its sediments from the south through narrow deflation corridors from the coast that cross more than of bedrock to the erg. The Oman erg was created by deflation of marine shelf carbonates during the last Pleistocene lowstand of the sea.
The Loess Plateau of China has been a long-term sink for sediments during the Quaternary ice age. It provides a record of glaciation, in the form of glacial loess layers separated by paleosols (fossil soils). The loess layers were deposited by a strong northwest winter monsoon, while the paleosols record the influence of a moist southeast monsoon.
Much of the African savannah is underlain by ergs deposited during the Last Glacial Maximum that are now stabilized by vegetation.
Examples
Major global aeolian systems thought to be linked with weather and climate variation:
An average of 132 million tons of dust is transported from the Sahara (primarily the Sahel and Bodélé Depression) across the Atlantic each year.
Harmattan winter dust storms in West Africa blow dust out over the ocean.
Asian dust originates in the Gobi Desert and reaches Korea, Japan, Taiwan (at times) and even the western US.
The 2018 Indian dust storms transported dust from the Thar Desert towards Delhi, Uttar Pradesh, and the Indo-Gangetic Plain.
Shamal winds in June–July blow dust primarily from north to south across Saudi Arabia, Iran, Iraq, the UAE, and parts of Pakistan.
Haboob dust storms in Sudan, Australia, and Arizona, associated with monsoons.
Khamsin dust from Libya, Egypt, and the Levant in spring, associated with extratropical cyclones.
The Dust Bowl event in the US carried sand eastward; 5,500 tons were deposited in the Chicago area.
Sirocco sandy winds from the Sahara blowing north into southern Europe.
Sand and dust from the Kalahari Desert blowing east across southern Africa toward the Indian Ocean.
On Mars, with its arid conditions, many aeolian processes have been discovered.
In the geologic record
Aeolian processes can be discerned at work in the geologic record as long ago as the Precambrian. Aeolian formations are prominent in the Paleozoic and Mesozoic of the western US. Other examples include the Permian Rotliegendes of northwestern Europe; the Jurassic–Cretaceous Botucatu Formation of the Parana Basin of Brazil; the Permian Lower Bunter Sandstone of Britain; the Permian-Triassic Corrie Sandstone and Hopeman Sandstone of Scotland; and the Proterozoic sandstones of India and northwest Africa.
Perhaps the best examples of aeolian processes in the geologic record are the Jurassic ergs of the western US. These include the Wingate Sandstone, the Navajo Sandstone, and the Page Sandstone. Individual formations are separated by regional unconformities that indicate erg stabilization. The ergs interfingered with adjacent river systems, as with the Wingate Sandstone interfingering with the Moenave Formation and the Navajo Sandstone with the Kayenta Formation.
The Navajo and Nugget Sandstones were part of the largest erg deposit in the geologic record. These formations are up to thick and are exposed over . Their original extent was likely 2.5 times the present outcrop area. Though once thought to possibly be marine in origin, they are now all but universally regarded as aeolian deposits. They are made up mostly of fine- to medium-sized quartz grains that are well-rounded and frosted, both indications of aeolian transport. The Navajo contains huge tabular crossbed sets with sweeping foresets. Individual crossbed sets dip at an angle of more than 20 degrees and are from thick. The formation contains freshwater invertebrate fossils and vertebrate tracks. Slump structures (contorted bedding) are present that resemble those in modern wetted dunes. Successive migrating dunes deposited a vertical stacking of eolian beds between interdune bounding surfaces and regional supersurfaces.
The Permian Rotliegend Group of the North Sea and north Europe contains sediments from adjacent uplands. Erg sand bodies within the group are up to thick. Study of the crossbedding shows that sediments were deposited by a clockwise atmospheric cell. Drill cores show dry and wet interdune surfaces and regional supersurfaces, and provide evidence of five or more cycles of erg expansion and contraction. A global rise in sea level finally drowned the erg and deposited the beds of the Weissliegend.
The Cedar Mesa Sandstone in Utah was contemporary with the Rotliegend. This formation records at least 12 erg sequences bounded by regional deflation supersurfaces. Aeolian landforms preserved in the formation range from damp sandsheet and lake paleosol (fossil soil) beds to thin, chaotically arranged dune sets to equilibrium erg construction, with wide dunes migrating over still larger draas. The draas survived individual climate cycles, and their interdunes were sites of barchan nucleation during arid portions of the climate cycles.
| Physical sciences | Aeolian landforms | null |
643023 | https://en.wikipedia.org/wiki/Rash | Rash | A rash is a change of the skin that affects its color, appearance, or texture.
A rash may be localized to one part of the body, or may affect all of the skin. Rashes may cause the skin to change color, itch, become warm, bumpy, chapped, dry, cracked, or blistered, to swell, and to be painful.
The causes of rashes, and therefore their treatments, vary widely. Diagnosis must take into account such things as the appearance of the rash, other symptoms, what the patient may have been exposed to, occupation, and occurrence in family members. The diagnosis may confirm any number of conditions.
The presence of a rash may aid diagnosis; associated signs and symptoms are diagnostic of certain diseases. For example, the rash in measles is an erythematous, morbilliform, maculopapular rash that begins a few days after the fever starts. It classically starts at the head, and spreads downwards.
Differential diagnosis
Common causes of rashes include:
Food allergy
Medication side effects
Anxiety
Allergies, for example to food, dyes, medicines, insect stings, metals such as zinc or nickel; such rashes are often called hives.
Skin contact with an irritant
Fungal infection, such as ringworm
Balsam of Peru
Skin diseases such as eczema or acne
Exposure to sun (sunburn) or heat
Friction due to chafing of the skin
Irritation such as caused by abrasives impregnated in clothing rubbing the skin. The cloth itself may be abrasive enough for some people
Secondary syphilis
Poor personal hygiene
Uncommon causes:
Autoimmune disorders such as psoriasis
Lead poisoning
Pregnancy
Repeated scratching on a particular spot
Lyme disease
Scarlet fever
COVID-19
Conditions
Diagnostic approach
The causes of a rash are numerous, which may make the evaluation of a rash extremely difficult. An accurate evaluation by a provider may only be made in the context of a thorough history (e.g., medications the patient is taking, the patient's occupation, where the patient has been) and a complete physical examination.
Points typically noted in the examination include:
The appearance: e.g., purpuric (typical of vasculitis and meningococcal disease), fine and like sandpaper (typical of scarlet fever); circular lesions with a central depression are typical of molluscum contagiosum (and in the past, smallpox); plaques with silver scales are typical of psoriasis.
The distribution: e.g., the rash of scarlet fever becomes confluent and forms bright red lines in the skin creases of the neck, armpits and groins (Pastia's lines); the vesicles of chicken pox seem to follow the hollows of the body (they are more prominent along the depression of the spine on the back and in the hollows of both shoulder blades); very few rashes affect the palms of the hands and soles of the feet (secondary syphilis, rickettsia or spotted fevers, guttate psoriasis, hand, foot and mouth disease, keratoderma blennorrhagicum);
Symmetry: e.g., herpes zoster usually only affects one side of the body and does not cross the midline.
A patch test may be ordered, for diagnostic purposes.
Treatment
Treatment differs according to which rash a patient has been diagnosed with. Common rashes can be easily remedied using steroid topical creams (such as hydrocortisone) or non-steroidal treatments. Many of the medications are available over the counter in the United States.
A limitation of topical steroid creams such as hydrocortisone is their poor penetration of the skin by absorption; when the drug does not reach the affected tissue it cannot clear the affected area, rendering hydrocortisone largely ineffective in all except the mildest of cases.
| Biology and health sciences | Symptoms and signs | Health |
643440 | https://en.wikipedia.org/wiki/Botfly | Botfly | Botflies, also known as warble flies, heel flies, and gadflies, are flies of the family Oestridae. Their larvae are internal parasites of mammals, some species growing in the host's flesh and others within the gut. Dermatobia hominis is the only species of botfly known to parasitize humans routinely, though other species of flies cause myiasis in humans.
General
A botfly, also written bot fly, bott fly or bot-fly in various combinations, is any fly in the family Oestridae. Their life cycles vary greatly according to species, but the larvae of all species are internal parasites of mammals. Largely according to species, they also are known variously as warble flies, heel flies, and gadflies. The larvae of some species grow in the flesh of their hosts, while others grow within the hosts' alimentary tracts.
The word "bot" in this sense means a maggot. A warble is a skin lump or callus such as might be caused by an ill-fitting harness, or by the presence of a warble fly maggot under the skin. The human botfly, Dermatobia hominis, is the only species of botfly whose larvae ordinarily parasitise humans, though flies in some other families episodically cause human myiasis and are sometimes more harmful.
Family Oestridae
The Oestridae now are generally defined as including the former families Oestridae, Cuterebridae, Gasterophilidae, and Hypodermatidae as subfamilies.
The Oestridae, in turn, are a family within the superfamily Oestroidea, together with the families Calliphoridae, Mesembrinellidae, Mystacinobiidae, Polleniidae, Rhiniidae, Rhinophoridae, Sarcophagidae, Tachinidae, and Ulurumyiidae.
Of families of flies causing myiasis, the Oestridae include the highest proportion of species whose larvae live as obligate parasites within the bodies of mammals. Roughly 150 species are known worldwide. Most other species of flies implicated in myiasis are members of related families, such as blow-flies.
Infestation
Botflies deposit eggs on a host, or sometimes use an intermediate vector such as the common housefly, mosquitoes, and, in the case of D. hominis, a species of tick. After mating, the female botfly captures the phoretic insect by holding onto its wings with her legs. She then attaches 15 to 30 eggs onto the insect's or arachnid's abdomen, where they incubate. The fertilized female repeats this process to distribute the 100 to 400 eggs she produces during her short adult life of only 8–9 days. Larvae from these eggs, stimulated by the warmth and proximity of a large mammal host, drop onto its skin and burrow underneath. Intermediate vectors are often used since many animal hosts recognize the approach of a botfly and flee.
Eggs are deposited on larger animals' skin directly, or the larvae hatch and drop from the eggs attached to the intermediate vector; the body heat of the host animal induces hatching upon contact or immediate proximity. Some forms of botfly also occur in the digestive tract after ingestion by licking.
Myiasis can be caused by larvae burrowing into the skin (or tissue lining) of the host animal. Mature larvae drop from the host and complete the pupal stage in the soil. They do not kill the host animal, and are thus true parasites.
The equine botflies present seasonal difficulties to equestrian caretakers, as they lay eggs on the insides of horses' front legs on the cannon or metacarpal bone (below the knee) and knees, and sometimes on the throat or nose depending on the species. These eggs, which look like small, yellow drops of paint, must be carefully removed during the laying season (late summer and early fall) to prevent infestation in the horse. When a horse rubs its nose on its legs, the eggs are transferred to the mouth and from there to the intestines, where the larvae grow and attach themselves to the stomach lining or the small intestine. The attachment of the larvae to the tissue produces a mild irritation, which results in erosions and ulcerations at the site. Removal of the eggs (which adhere to the host's hair) is difficult since the bone and tendons are directly under the skin on the cannon bones; eggs must be removed with a sharp knife (often a razor blade) or rough sandpaper and caught before they reach the ground. The larvae remain attached and develop for 10–12 months before they are passed out in the feces. Occasionally, horse owners report seeing botfly larvae in horse manure. These larvae are cylindrical and are reddish-orange. In one to two months, adult botflies emerge from the developing larvae and the cycle repeats itself. Botflies can be controlled with several types of dewormers, including dichlorvos, ivermectin, and trichlorfon.
In cattle, the lesions caused by these flies can become infected by Mannheimia granulomatis, a bacterium that causes lechiguana, characterized by rapid-growing, hard lumps beneath the skin of the animal. Without antibiotics, an affected animal will die within 3–11 months.
Philornis botflies often infest nestlings of wild parrots, like scarlet macaws and hyacinth macaws. A method using a reverse syringe design snake bite extractor proved to be suitable for removing larvae from the skin.
Cuterebra fontinella, the mouse botfly, parasitizes small mammals all around North America.
Dermatobia hominis, the human botfly, occasionally uses humans to host its larvae.
As human food
In cold climates supporting reindeer or caribou-reliant populations, large quantities of Hypoderma tarandi (caribou warble fly) maggots are available to human populations during the butchery of animals.
The sixth episode of season one of the television series Beyond Survival, titled "The Inuit – Survivors of the Future", features survival expert Les Stroud and two Inuit guides hunting caribou on the northern coast of Baffin Island near Pond Inlet, Nunavut, Canada. Upon skinning and butchering of one of the animals, numerous larvae (presumably H. tarandi, although not explicitly stated) are apparent on the inside of the caribou pelt. Stroud and his two Inuit guides eat (albeit somewhat reluctantly) one larva each, with Stroud commenting that the larva "tastes like milk" and was historically commonly consumed by the Inuit.
Copious art dating back to the Pleistocene in Europe confirms their consumption in premodern times, as well.
The Babylonian Talmud Hullin 67b discusses whether the warble fly is kosher.
| Biology and health sciences | Flies (Diptera) | null |
643769 | https://en.wikipedia.org/wiki/Quantum%20tunnelling | Quantum tunnelling | In physics, quantum tunnelling, barrier penetration, or simply tunnelling is a quantum mechanical phenomenon in which an object such as an electron or atom passes through a potential energy barrier that, according to classical mechanics, should not be passable due to the object not having sufficient energy to pass or surmount the barrier.
Tunneling is a consequence of the wave nature of matter, where the quantum wave function describes the state of a particle or other physical system, and wave equations such as the Schrödinger equation describe their behavior. The probability of transmission of a wave packet through a barrier decreases exponentially with the barrier height, the barrier width, and the tunneling particle's mass, so tunneling is seen most prominently in low-mass particles such as electrons or protons tunneling through microscopically narrow barriers. Tunneling is readily detectable with barriers of thickness about 1–3 nm or smaller for electrons, and about 0.1 nm or smaller for heavier particles such as protons or hydrogen atoms. Some sources describe the mere penetration of a wave function into the barrier, without transmission on the other side, as a tunneling effect, such as in tunneling into the walls of a finite potential well.
Tunneling plays an essential role in physical phenomena such as nuclear fusion and alpha radioactive decay of atomic nuclei. Tunneling applications include the tunnel diode, quantum computing, flash memory, and the scanning tunneling microscope. Tunneling limits the minimum size of devices used in microelectronics because electrons tunnel readily through insulating layers and transistors that are thinner than about 1 nm.
The effect was predicted in the early 20th century. Its acceptance as a general physical phenomenon came mid-century.
Introduction to the concept
Quantum tunnelling falls under the domain of quantum mechanics. To understand the phenomenon, particles attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill. Quantum mechanics and classical mechanics differ in their treatment of this scenario.
Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier cannot reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down. In quantum mechanics, a particle can, with a small probability, tunnel to the other side, thus crossing the barrier. The reason for this difference comes from treating matter as having properties of waves and particles.
Tunnelling problem
The wave function of a physical system of particles specifies everything that can be known about the system. Therefore, problems in quantum mechanics analyze the system's wave function. Using mathematical formulations, such as the Schrödinger equation, the time evolution of a known wave function can be deduced. The square of the absolute value of this wave function is directly related to the probability distribution of the particle positions, which describes the probability that the particles would be measured at those positions.
When a wave packet impinges on a barrier, most of it is reflected and some is transmitted through the barrier. The wave packet becomes more de-localized: it is now on both sides of the barrier and lower in maximum amplitude, but equal in integrated square-magnitude, meaning that the probability the particle is somewhere remains unity. The wider the barrier and the higher the barrier energy, the lower the probability of tunneling.
Some models of a tunneling barrier, such as the rectangular barriers shown, can be analysed and solved algebraically. Most problems do not have an algebraic solution, so numerical solutions are used. "Semiclassical methods" offer approximate solutions that are easier to compute, such as the WKB approximation.
History
The Schrödinger equation was published in 1926. The first person to apply the Schrödinger equation to a problem that involved tunneling between two classically allowed regions through a potential barrier was Friedrich Hund in a series of articles published in 1927. He studied the solutions of a double-well potential and discussed molecular spectra. Leonid Mandelstam and Mikhail Leontovich discovered tunneling independently and published their results in 1928.
In 1927, Lothar Nordheim, assisted by Ralph Fowler, published a paper that discussed thermionic emission and reflection of electrons from metals. He assumed a surface potential barrier that confines the electrons within the metal and showed that the electrons have a finite probability of tunneling through or reflecting from the surface barrier when their energies are close to the barrier energy. Classically, the electron would either transmit or reflect with 100% certainty, depending on its energy. In 1928 J. Robert Oppenheimer published two papers on field emission, i.e. the emission of electrons induced by strong electric fields. Nordheim and Fowler simplified Oppenheimer's derivation and found values for the emitted currents and work functions that agreed with experiments.
A great success of the tunnelling theory was the mathematical explanation for alpha decay, which was developed in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon. The latter researchers simultaneously solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunneling. All three researchers were familiar with the works on field emission, and Gamow was aware of Mandelstam and Leontovich's findings.
In the early days of quantum theory, the term tunnel effect was not used, and the effect was instead referred to as penetration of, or leaking through, a barrier. The German term wellenmechanische Tunneleffekt was used in 1931 by Walter Schottky. The English term tunnel effect entered the language in 1932 when it was used by Yakov Frenkel in his textbook.
In 1957 Leo Esaki demonstrated tunneling of electrons through a barrier a few nanometers wide in a semiconductor structure and developed a diode based on the tunnel effect. In 1960, following Esaki's work, Ivar Giaever showed experimentally that tunnelling also took place in superconductors. The tunnelling spectrum gave direct evidence of the superconducting energy gap. In 1962, Brian Josephson predicted the tunneling of superconducting Cooper pairs. Esaki, Giaever and Josephson shared the 1973 Nobel Prize in Physics for their works on quantum tunneling in solids.
In 1981, Gerd Binnig and Heinrich Rohrer developed a new type of microscope, called scanning tunneling microscope, which is based on tunnelling and is used for imaging surfaces at the atomic level. Binnig and Rohrer were awarded the Nobel Prize in Physics in 1986 for their discovery.
Applications
Tunnelling is the cause of some important macroscopic physical phenomena.
Solid-state physics
Electronics
Tunnelling is a source of current leakage in very-large-scale integration (VLSI) electronics and results in a substantial power drain and heating effects that plague such devices. It is considered the lower limit on how microelectronic device elements can be made. Tunnelling is a fundamental technique used to program the floating gates of flash memory.
Cold emission
Cold emission of electrons is relevant to semiconductor and superconductor physics. It is similar to thermionic emission, where electrons randomly jump from the surface of a metal to follow a voltage bias because random collisions with other particles occasionally leave them with more energy than the barrier. When the electric field is very large, the barrier becomes thin enough for electrons to tunnel out of the atomic state, leading to a current that varies approximately exponentially with the electric field. These materials are important for flash memory, vacuum tubes, and some electron microscopes.
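The approximately exponential field dependence noted above is conventionally described by a Fowler–Nordheim-type law. As a schematic sketch (the constants a and b depend on the work function and barrier shape and are not given in the text):

$J \approx a\,E^2\,\exp\!\left(-\frac{b}{E}\right),$

where J is the emitted current density and E the applied electric field, so the current rises extremely steeply once the field becomes large enough to thin the barrier.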
Tunnel junction
A simple barrier can be created by separating two conductors with a very thin insulator. These are tunnel junctions, the study of which requires understanding quantum tunnelling. Josephson junctions take advantage of quantum tunnelling and superconductivity to create the Josephson effect. This has applications in precision measurements of voltages and magnetic fields, as well as the multijunction solar cell.
Tunnel diode
Diodes are electrical semiconductor devices that allow electric current flow in one direction more than the other. The device depends on a depletion layer between N-type and P-type semiconductors to serve its purpose. When these are heavily doped the depletion layer can be thin enough for tunnelling. When a small forward bias is applied, the current due to tunnelling is significant. This has a maximum at the point where the voltage bias is such that the energy level of the p and n conduction bands are the same. As the voltage bias is increased, the two conduction bands no longer line up and the diode acts typically.
Because the tunnelling current drops off rapidly, tunnel diodes can be created that have a range of voltages for which current decreases as voltage increases. This peculiar property is used in some applications, such as high speed devices where the characteristic tunnelling probability changes as rapidly as the bias voltage.
The resonant tunnelling diode makes use of quantum tunnelling in a very different manner to achieve a similar result. This diode has a resonant voltage for which a current favors a particular voltage, achieved by placing two thin layers with a high-energy conduction band near each other. This creates a quantum potential well that has a discrete lowest energy level. When this energy level is higher than that of the electrons, no tunnelling occurs and the diode is in reverse bias. Once the two voltage energies align, the electrons flow like an open wire. As the voltage further increases, tunnelling becomes improbable and the diode acts like a normal diode again before a second energy level becomes noticeable.
Tunnel field-effect transistors
A European research project demonstrated field effect transistors in which the gate (channel) is controlled via quantum tunnelling rather than by thermal injection, reducing gate voltage from ≈1 volt to 0.2 volts and reducing power consumption by up to 100×. If these transistors can be scaled up into VLSI chips, they would improve the performance per power of integrated circuits.
Conductivity of crystalline solids
While the Drude-Lorentz model of electrical conductivity makes excellent predictions about the nature of electrons conducting in metals, it can be furthered by using quantum tunnelling to explain the nature of the electron's collisions. When a free electron wave packet encounters a long array of uniformly spaced barriers, the reflected part of the wave packet interferes uniformly with the transmitted one between all barriers so that 100% transmission becomes possible. The theory predicts that if positively charged nuclei form a perfectly rectangular array, electrons will tunnel through the metal as free electrons, leading to extremely high conductance, and that impurities in the metal will disrupt it.
Scanning tunneling microscope
The scanning tunnelling microscope (STM), invented by Gerd Binnig and Heinrich Rohrer, may allow imaging of individual atoms on the surface of a material. It operates by taking advantage of the relationship between quantum tunnelling and distance. When the tip of the STM's needle is brought close to a conduction surface that has a voltage bias, measuring the current of electrons that are tunnelling between the needle and the surface reveals the distance between the needle and the surface. By using piezoelectric rods that change in size when voltage is applied, the height of the tip can be adjusted to keep the tunnelling current constant. The time-varying voltages that are applied to these rods can be recorded and used to image the surface of the conductor. STMs are accurate to 0.001 nm, or about 1% of atomic diameter.
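The constant-current feedback described above can be caricatured in a few lines of Python. This is a toy simulation with assumed parameters (decay constant, gain, setpoint), not an actual STM controller; it only illustrates how an exponentially distance-dependent current lets a simple feedback loop track the surface:

```python
import math

# Assumed toy parameters, for illustration only
KAPPA = 10.0        # inverse decay length of the tunnelling gap, 1/nm
I0 = 1.0            # current prefactor, nA
SETPOINT = 0.1      # target tunnelling current, nA
GAIN = 0.05         # proportional feedback gain, nm per nA

def tunnel_current(gap_nm: float) -> float:
    """Tunnelling current falls off exponentially with the tip-sample gap."""
    return I0 * math.exp(-2.0 * KAPPA * gap_nm)

surface = [0.0, 0.05, 0.12, 0.08, 0.0]   # sample height profile, nm
tip_height = 1.0                          # tip position above reference, nm

for x, z in enumerate(surface):
    for _ in range(300):  # let the feedback settle at each scan point
        error = tunnel_current(tip_height - z) - SETPOINT
        tip_height += GAIN * error   # too much current: retract; too little: approach
    print(f"x={x}: tip at {tip_height:.3f} nm over surface at {z:.2f} nm")
```

The recorded tip heights reproduce the surface profile shifted by a constant gap, which is the signal an STM turns into an image.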
Nuclear physics
Nuclear fusion
Quantum tunnelling is an essential phenomenon for nuclear fusion. The temperature in stellar cores is generally insufficient to allow atomic nuclei to overcome the Coulomb barrier and achieve thermonuclear fusion. Quantum tunnelling increases the probability of penetrating this barrier. Though this probability is still low, the extremely large number of nuclei in the core of a star is sufficient to sustain a steady fusion reaction.
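The steepness of this barrier-penetration probability is captured by the Gamow factor. As a sketch (a standard form, not derived in the text; Z1 and Z2 are the nuclear charges and v the relative velocity of the colliding nuclei):

$P \sim \exp\left(-2\pi\eta\right), \qquad \eta = \frac{Z_1 Z_2 e^2}{4\pi\varepsilon_0\,\hbar v},$

which shows why fusion of light, low-charge nuclei such as hydrogen dominates in stellar cores.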
Radioactive decay
Radioactive decay is the process of emission of particles and energy from the unstable nucleus of an atom to form a stable product. This is done via the tunnelling of a particle out of the nucleus (an electron tunneling into the nucleus is electron capture). This was the first application of quantum tunnelling. Radioactive decay is a relevant issue for astrobiology as this consequence of quantum tunnelling creates a constant energy source over a large time interval for environments outside the circumstellar habitable zone where insolation would not be possible (subsurface oceans) or effective.
Quantum tunnelling may be one of the mechanisms of hypothetical proton decay.
Chemistry
Energetically forbidden reactions
Chemical reactions in the interstellar medium occur at extremely low energies. Probably the most fundamental ion-molecule reaction involves hydrogen ions and hydrogen molecules.
The quantum mechanical tunnelling rate for the same reaction using the hydrogen isotope deuterium, D− + H2 → H− + HD, has been measured experimentally in an ion trap.
The deuterium was placed in an ion trap and cooled. The trap was then filled with hydrogen. At the temperatures used in the experiment, the energy barrier would not allow the reaction to succeed with classical dynamics alone. Quantum tunneling allowed reactions to happen in rare collisions. It was calculated from the experimental data that a reaction occurred in roughly one in every hundred billion collisions.
Kinetic isotope effect
In chemical kinetics, the substitution of a light isotope of an element with a heavier one typically results in a slower reaction rate. This is generally attributed to differences in the zero-point vibrational energies for chemical bonds containing the lighter and heavier isotopes and is generally modeled using transition state theory. However, in certain cases, large isotopic effects are observed that cannot be accounted for by a semi-classical treatment, and quantum tunnelling is required. R. P. Bell developed a modified treatment of Arrhenius kinetics that is commonly used to model this phenomenon.
Astrochemistry in interstellar clouds
By including quantum tunnelling, the astrochemical syntheses of various molecules in interstellar clouds can be explained, such as the synthesis of molecular hydrogen, water (ice) and the prebiotically important formaldehyde. Tunnelling of molecular hydrogen has been observed in the lab.
Quantum biology
Quantum tunnelling is among the central non-trivial quantum effects in quantum biology. Here it is important both as electron tunnelling and proton tunnelling. Electron tunnelling is a key factor in many biochemical redox reactions (photosynthesis, cellular respiration) as well as enzymatic catalysis. Proton tunnelling is a key factor in spontaneous DNA mutation.
Spontaneous mutation occurs when normal DNA replication takes place after a particularly significant proton has tunnelled. A hydrogen bond joins DNA base pairs, and along the hydrogen bond a double-well potential, separated by a potential energy barrier, governs the position of the proton. It is believed that the double-well potential is asymmetric, with one well deeper than the other, such that the proton normally rests in the deeper well. For a mutation to occur, the proton must have tunnelled into the shallower well. The proton's movement from its regular position is called a tautomeric transition. If DNA replication takes place in this state, the base pairing rule for DNA may be jeopardised, causing a mutation. Per-Olov Löwdin was the first to develop this theory of spontaneous mutation within the double helix. Other instances of quantum tunnelling-induced mutations in biology are believed to be a cause of ageing and cancer.
Mathematical discussion
Schrödinger equation
The time-independent Schrödinger equation for one particle in one dimension can be written as
$-\frac{\hbar^2}{2m}\,\frac{d^2}{dx^2}\Psi(x) + V(x)\,\Psi(x) = E\,\Psi(x)$
or
$\frac{d^2}{dx^2}\Psi(x) = \frac{2m}{\hbar^2}\left(V(x) - E\right)\Psi(x) \equiv \frac{2m}{\hbar^2}\,M(x)\,\Psi(x),$
where
$\hbar$ is the reduced Planck constant,
m is the particle mass,
x represents distance measured in the direction of motion of the particle,
Ψ is the Schrödinger wave function,
V is the potential energy of the particle (measured relative to any convenient reference level),
E is the energy of the particle that is associated with motion in the x-axis (measured relative to V),
M(x) is a quantity defined by V(x) − E, which has no accepted name in physics.
The solutions of the Schrödinger equation take different forms for different values of x, depending on whether M(x) is positive or negative. When M(x) is constant and negative, the Schrödinger equation can be written in the form
$\frac{d^2}{dx^2}\Psi(x) = -k^2\,\Psi(x), \qquad k^2 = -\frac{2m}{\hbar^2}M.$
The solutions of this equation represent travelling waves, with phase-constant +k or −k. Alternatively, if M(x) is constant and positive, then the Schrödinger equation can be written in the form
$\frac{d^2}{dx^2}\Psi(x) = \kappa^2\,\Psi(x), \qquad \kappa^2 = \frac{2m}{\hbar^2}M.$
The solutions of this equation are rising and falling exponentials in the form of evanescent waves. When M(x) varies with position, the same difference in behaviour occurs, depending on whether M(x) is negative or positive. It follows that the sign of M(x) determines the nature of the medium, with negative M(x) corresponding to medium A and positive M(x) corresponding to medium B. It thus follows that evanescent wave coupling can occur if a region of positive M(x) is sandwiched between two regions of negative M(x), hence creating a potential barrier.
The mathematics of dealing with the situation where M(x) varies with x is difficult, except in special cases that usually do not correspond to physical reality. A full mathematical treatment appears in the 1965 monograph by Fröman and Fröman. Their ideas have not been incorporated into physics textbooks, but their corrections have little quantitative effect.
WKB approximation
The wave function is expressed as the exponential of a function:
$\Psi(x) = e^{\Phi(x)},$
where $\Phi(x)$ satisfies
$\Phi''(x) + \left[\Phi'(x)\right]^2 = \frac{2m}{\hbar^2}\left(V(x) - E\right).$
$\Phi'(x)$ is then separated into real and imaginary parts:
$\Phi'(x) = A(x) + i\,B(x),$
where A(x) and B(x) are real-valued functions.
Substituting the second equation into the first and using the fact that the imaginary part needs to be 0 results in:
$A'(x) + A(x)^2 - B(x)^2 = \frac{2m}{\hbar^2}\left(V(x) - E\right).$
To solve this equation using the semiclassical approximation, each function must be expanded as a power series in $\hbar$. From the equations, the power series must start with at least an order of $\hbar^{-1}$ to satisfy the real part of the equation; for a good classical limit starting with the highest power of the Planck constant possible is preferable, which leads to
$A(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty}\hbar^k A_k(x)$
and
$B(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty}\hbar^k B_k(x),$
with the following constraints on the lowest order terms,
$A_0(x)^2 - B_0(x)^2 = 2m\left(V(x) - E\right)$
and
$A_0(x)\,B_0(x) = 0.$
At this point two extreme cases can be considered.
Case 1
If the amplitude varies slowly as compared to the phase, $A_0(x) = 0$ and
$B_0(x) = \pm\sqrt{2m\left(E - V(x)\right)},$
which corresponds to classical motion. Resolving the next order of expansion yields
$\Psi(x) \approx C\,\frac{e^{\frac{i}{\hbar}\int\sqrt{2m\left(E - V(x)\right)}\,dx \,+\, i\theta}}{\sqrt[4]{2m\left(E - V(x)\right)}}.$
Case 2
If the phase varies slowly as compared to the amplitude, $B_0(x) = 0$ and
$A_0(x) = \pm\sqrt{2m\left(V(x) - E\right)},$
which corresponds to tunneling. Resolving the next order of the expansion yields
$\Psi(x) \approx \frac{C_{+}\,e^{+\frac{1}{\hbar}\int\sqrt{2m\left(V(x) - E\right)}\,dx} + C_{-}\,e^{-\frac{1}{\hbar}\int\sqrt{2m\left(V(x) - E\right)}\,dx}}{\sqrt[4]{2m\left(V(x) - E\right)}}.$
In both cases it is apparent from the denominator that both these approximate solutions are bad near the classical turning points, where $E = V(x)$. Away from the potential hill, the particle acts similarly to a free and oscillating wave; beneath the potential hill, the particle undergoes exponential changes in amplitude. By considering the behaviour at these limits and classical turning points a global solution can be made.
To start, a classical turning point $x_1$ is chosen and $\frac{2m}{\hbar^2}\left(V(x) - E\right)$ is expanded in a power series about $x_1$:
$\frac{2m}{\hbar^2}\left(V(x) - E\right) = v_1\,(x - x_1) + v_2\,(x - x_1)^2 + \cdots$
Keeping only the first order term ensures linearity:
$\frac{2m}{\hbar^2}\left(V(x) - E\right) \approx v_1\,(x - x_1).$
Using this approximation, the equation near $x_1$ becomes a differential equation:
$\frac{d^2}{dx^2}\Psi(x) = v_1\,(x - x_1)\,\Psi(x).$
This can be solved using Airy functions as solutions.
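As a quick numerical check of this last claim, the short Python sketch below (using NumPy and SciPy; illustrative only) verifies that the Airy function Ai satisfies the linearized equation y″ = x·y with the turning point placed at the origin:

```python
import numpy as np
from scipy.special import airy

# Sample Ai(x) on a grid around the turning point x = 0
x = np.linspace(-5.0, 5.0, 2001)
ai, aip, bi, bip = airy(x)          # airy returns (Ai, Ai', Bi, Bi')

# Second derivative of Ai by central differences
h = x[1] - x[0]
d2ai = (ai[2:] - 2.0 * ai[1:-1] + ai[:-2]) / h**2

# For the Airy equation y'' = x*y the residual should vanish up to O(h^2)
residual = d2ai - x[1:-1] * ai[1:-1]
print("max |Ai'' - x*Ai| on the grid:", np.abs(residual).max())
```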
Taking these solutions for all classical turning points, a global solution can be formed that links the limiting solutions. Given the two coefficients on one side of a classical turning point, the two coefficients on the other side of a classical turning point can be determined by using this local solution to connect them.
Hence, the Airy function solutions will asymptote into sine, cosine and exponential functions in the proper limits, and matching them fixes the relationships between the coefficients $C$, $\theta$ of the oscillatory solution and $C_{+}$, $C_{-}$ of the exponential solution.
With the coefficients found, the global solution can be found. Therefore, the transmission coefficient for a particle tunneling through a single potential barrier is
$T = e^{-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\left(V(x) - E\right)}\,dx},$
where $x_1$ and $x_2$ are the two classical turning points for the potential barrier. For a rectangular barrier of height $V_0$ and width $L$, this expression simplifies to
$T = e^{-\frac{2L}{\hbar}\sqrt{2m\left(V_0 - E\right)}}.$
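To attach numbers to this expression, the Python sketch below (illustrative parameters: an electron facing a barrier 1 eV above its energy) evaluates the rectangular-barrier estimate and shows why, as noted in the introduction, electron tunnelling is readily detectable only for barriers of roughly nanometre thickness:

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # one electronvolt, J

def rectangular_barrier_T(V0_eV: float, E_eV: float, width_nm: float) -> float:
    """WKB transmission through a rectangular barrier: T = exp(-2*kappa*L)."""
    kappa = math.sqrt(2.0 * M_E * (V0_eV - E_eV) * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2.0 * kappa * width_nm * 1e-9)

for L_nm in (0.5, 1.0, 3.0, 10.0):
    print(f"L = {L_nm:4.1f} nm -> T ~ {rectangular_barrier_T(2.0, 1.0, L_nm):.2e}")
```

With a 1 eV effective barrier, the transmission drops by tens of orders of magnitude between 0.5 nm and 10 nm, consistent with the thickness scales quoted earlier.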
Faster than light
Some physicists have claimed that it is possible for spin-zero particles to travel faster than the speed of light when tunnelling. This appears to violate the principle of causality, since a frame of reference then exists in which the particle arrives before it has left. In 1998, Francis E. Low briefly reviewed the phenomenon of zero-time tunnelling. More recently, experimental tunnelling time data of phonons, photons, and electrons was published by Günter Nimtz. Another experiment, overseen by A. M. Steinberg, seems to indicate that particles could tunnel at apparent speeds faster than light.
Other physicists, such as Herbert Winful, disputed these claims. Winful argued that the wave packet of a tunnelling particle propagates locally, so a particle can't tunnel through the barrier non-locally. Winful also argued that the experiments that are purported to show non-local propagation have been misinterpreted. In particular, the group velocity of a wave packet does not measure its speed, but is related to the amount of time the wave packet is stored in the barrier. Moreover, if quantum tunneling is modeled with the relativistic Dirac equation, well established mathematical theorems imply that the process is completely subluminal.
Dynamical tunneling
The concept of quantum tunneling can be extended to situations where there exists a quantum transport between regions that are classically not connected even if there is no associated potential barrier. This phenomenon is known as dynamical tunnelling.
Tunnelling in phase space
The concept of dynamical tunnelling is particularly suited to address the problem of quantum tunnelling in high dimensions (d>1). In the case of an integrable system, where bounded classical trajectories are confined onto tori in phase space, tunnelling can be understood as the quantum transport between semi-classical states built on two distinct but symmetric tori.
Chaos-assisted tunnelling
In real life, most systems are not integrable and display various degrees of chaos. Classical dynamics is then said to be mixed and the system phase space is typically composed of islands of regular orbits surrounded by a large sea of chaotic orbits. The existence of the chaotic sea, where transport is classically allowed, between the two symmetric tori then assists the quantum tunnelling between them. This phenomenon is referred to as chaos-assisted tunnelling, and is characterized by sharp resonances of the tunnelling rate when varying any system parameter.
Resonance-assisted tunnelling
When $\hbar$ is small compared to the size of the regular islands, the fine structure of the classical phase space plays a key role in tunnelling. In particular the two symmetric tori are coupled "via a succession of classically forbidden transitions across nonlinear resonances" surrounding the two islands.
Related phenomena
Several phenomena have the same behavior as quantum tunnelling. Two examples are evanescent wave coupling (the application of Maxwell's wave-equation to light) and the application of the non-dispersive wave-equation from acoustics applied to "waves on strings".
These effects are modeled similarly to the rectangular potential barrier. In these cases, there is one transmission medium through which the wave propagates that is the same or nearly the same throughout, and a second medium through which the wave travels differently. This can be described as a thin region of medium B between two regions of medium A. The analysis of a rectangular barrier by means of the Schrödinger equation can be adapted to these other effects provided that the wave equation has travelling wave solutions in medium A but real exponential solutions in medium B.
In optics, medium A is a vacuum while medium B is glass. In acoustics, medium A may be a liquid or gas and medium B a solid. For both cases, medium A is a region of space where the particle's total energy is greater than its potential energy and medium B is the potential barrier. These have an incoming wave and resultant waves in both directions. There can be more mediums and barriers, and the barriers need not be discrete. Approximations are useful in this case.
A classical wave-particle association was originally analyzed as analogous to quantum tunneling, but subsequent analysis found a fluid dynamics cause related to the vertical momentum imparted to particles near the barrier.
| Physical sciences | Quantum mechanics | null |
644443 | https://en.wikipedia.org/wiki/AdS/CFT%20correspondence | AdS/CFT correspondence | In theoretical physics, the anti-de Sitter/conformal field theory correspondence (frequently abbreviated as AdS/CFT) is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) that are used in theories of quantum gravity, formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT) that are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles.
The duality represents a major advance in the understanding of string theory and quantum gravity. This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle, an idea in quantum gravity originally proposed by Gerard 't Hooft and promoted by Leonard Susskind.
It also provides a powerful toolkit for studying strongly coupled quantum field theories. Much of the usefulness of the duality results from the fact that it is a strong–weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory.
The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were soon elaborated on in two articles, one by Steven Gubser, Igor Klebanov and Alexander Polyakov, and another by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics.
One of the most prominent examples of the AdS/CFT correspondence has been the AdS5/CFT4 correspondence: a relation between N = 4 supersymmetric Yang–Mills theory in 3+1 dimensions and type IIB superstring theory on AdS5 × S5.
Background
Quantum gravity and strings
Current understanding of gravity is based on Albert Einstein's general theory of relativity. Formulated in 1915, general relativity explains gravity in terms of the geometry of space and time, or spacetime. It is formulated in the language of classical physics that was developed by physicists such as Isaac Newton and James Clerk Maxwell. The other nongravitational forces are explained in the framework of quantum mechanics. Developed in the first half of the twentieth century by a number of different physicists, quantum mechanics provides a radically different way of describing physical phenomena based on probability.
Quantum gravity is the branch of physics that seeks to describe gravity using the principles of quantum mechanics. Currently, a popular approach to quantum gravity is string theory, which models elementary particles not as zero-dimensional points but as one-dimensional objects called strings. In the AdS/CFT correspondence, one typically considers theories of quantum gravity derived from string theory or its modern extension, M-theory.
In everyday life, there are three familiar dimensions of space (up/down, left/right, and forward/backward), and there is one dimension of time. Thus, in the language of modern physics, one says that spacetime is four-dimensional. One peculiar feature of string theory and M-theory is that these theories require extra dimensions of spacetime for their mathematical consistency: in string theory spacetime is ten-dimensional, while in M-theory it is eleven-dimensional. The quantum gravity theories appearing in the AdS/CFT correspondence are typically obtained from string and M-theory by a process known as compactification. This produces a theory in which spacetime has effectively a lower number of dimensions and the extra dimensions are "curled up" into circles.
A standard analogy for compactification is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length, but as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling inside it would move in two dimensions.
Quantum field theory
The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields. Quantum field theories are also used throughout condensed matter physics to model particle-like objects called quasiparticles.
In the AdS/CFT correspondence, one considers, in addition to a theory of quantum gravity, a certain kind of quantum field theory called a conformal field theory. This is a particularly symmetric and mathematically well behaved type of quantum field theory. Such theories are often studied in the context of string theory, where they are associated with the surface swept out by a string propagating through spacetime, and in statistical mechanics, where they model systems at a thermodynamic critical point.
Overview of the correspondence
Geometry of anti-de Sitter space
In the AdS/CFT correspondence, one considers string theory or M-theory on an anti-de Sitter background. This means that the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space.
In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk tessellated by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior.
Now imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction in this picture. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface.
This construction describes a hypothetical universe with only two space and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space.
Idea of AdS/CFT
An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, locally around any point, it looks just like Minkowski space, the model of spacetime used in nongravitational physics.
One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for the AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a conformal field theory. The claim is that this conformal field theory is equivalent to the gravitational theory on the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating calculations in one theory into calculations in the other. Every entity in one theory has a counterpart in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding.
Notice that the boundary of anti-de Sitter space has fewer dimensions than anti-de Sitter space itself. For instance, in the three-dimensional example illustrated above, the boundary is a two-dimensional surface. The AdS/CFT correspondence is often described as a "holographic duality" because this relationship between the two theories is similar to the relationship between a three-dimensional object and its image as a hologram. Although a hologram is two-dimensional, it encodes information about all three dimensions of the object it represents. In the same way, theories that are related by the AdS/CFT correspondence are conjectured to be exactly equivalent, despite living in different numbers of dimensions. The conformal field theory is like a hologram that captures information about the higher-dimensional quantum gravity theory.
Examples of the correspondence
Following Maldacena's insight in 1997, theorists have discovered many different realizations of the AdS/CFT correspondence. These relate various conformal field theories to compactifications of string theory and M-theory in various numbers of dimensions. The theories involved are generally not viable models of the real world, but they have certain features, such as their particle content or high degree of symmetry, which make them useful for solving problems in quantum field theory and quantum gravity.
The most famous example of the AdS/CFT correspondence states that type IIB string theory on the product space AdS5 × S5 is equivalent to N = 4 supersymmetric Yang–Mills theory on the four-dimensional boundary. In this example, the spacetime on which the gravitational theory lives is effectively five-dimensional (hence the notation AdS5), and there are five additional compact dimensions (encoded by the S5 factor). In the real world, spacetime is four-dimensional, at least macroscopically, so this version of the correspondence does not provide a realistic model of gravity. Likewise, the dual theory is not a viable model of any real-world system as it assumes a large amount of supersymmetry. Nevertheless, as explained below, this boundary theory shares some features in common with quantum chromodynamics, the fundamental theory of the strong force. It describes particles similar to the gluons of quantum chromodynamics together with certain fermions. As a result, it has found applications in nuclear physics, particularly in the study of the quark–gluon plasma.
Another realization of the correspondence states that M-theory on AdS7 × S4 is equivalent to the so-called (2,0)-theory in six dimensions. In this example, the spacetime of the gravitational theory is effectively seven-dimensional. The existence of the (2,0)-theory that appears on one side of the duality is predicted by the classification of superconformal field theories. It is still poorly understood because it is a quantum mechanical theory without a classical limit. Despite the inherent difficulty in studying this theory, it is considered to be an interesting object for a variety of reasons, both physical and mathematical.
Yet another realization of the correspondence states that M-theory on AdS4 × S7 is equivalent to the ABJM superconformal field theory in three dimensions. Here the gravitational theory has four noncompact dimensions, so this version of the correspondence provides a somewhat more realistic description of gravity.
Applications to quantum gravity
A non-perturbative formulation of string theory
In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory. Developed by Richard Feynman and others in the first half of the twentieth century, perturbative quantum field theory uses special diagrams called Feynman diagrams to organize computations. One imagines that these diagrams depict the paths of point-like particles and their interactions. Although this formalism is extremely useful for making predictions, these predictions are only possible when the strength of the interactions, the coupling constant, is small enough to reliably describe the theory as being close to a theory without interactions.
The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings. The interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional surface representing the motion of a string. Unlike in quantum field theory, string theory does not yet have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach.
The problem of developing a non-perturbative formulation of string theory was one of the original motivations for studying the AdS/CFT correspondence. As explained above, the correspondence provides several examples of quantum field theories that are equivalent to string theory on anti-de Sitter space. One can alternatively view this correspondence as providing a definition of string theory in the special case where the gravitational field is asymptotically anti-de Sitter (that is, when the gravitational field resembles that of anti-de Sitter space at spatial infinity). Physically interesting quantities in string theory are defined in terms of quantities in the dual quantum field theory.
Black hole information paradox
In 1975, Stephen Hawking published a calculation that suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox.
The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space. These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics. In 2005, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information.
Applications to quantum field theory
Nuclear physics
One physical system that has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies. Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvins, conditions similar to those present at around 10⁻¹¹ seconds after the Big Bang.
The physics of the quark–gluon plasma is governed by quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma. In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark–gluon plasma by describing it in the language of string theory. By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark–gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark–gluon plasma, the shear viscosity η and volume density of entropy s, should be approximately equal to a certain universal constant:

$$\frac{\eta}{s} \approx \frac{\hbar}{4\pi k},$$

where ħ denotes the reduced Planck constant and k is the Boltzmann constant. In addition, the authors conjectured that this universal constant provides a lower bound for η/s in a large class of systems. In an experiment conducted at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory, the value of η/s extracted under one model of the data was close to this universal constant, though not under another model.
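For a rough sense of scale, the conjectured bound can be evaluated numerically. The following Python sketch uses scipy's CODATA constants; the use of scipy here is an illustrative choice, not part of the original calculation:

```python
from math import pi
from scipy.constants import hbar, k  # reduced Planck constant (J s), Boltzmann constant (J/K)

# Conjectured universal lower bound on shear viscosity / entropy density:
eta_over_s_bound = hbar / (4 * pi * k)
print(f"eta/s >= {eta_over_s_bound:.2e} K s")  # roughly 6.1e-13 K s
```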
Another important property of the quark–gluon plasma is that very high energy quarks moving through the plasma are stopped or "quenched" after traveling only a few femtometres. This phenomenon is characterized by a number called the jet quenching parameter, which relates the energy loss of such a quark to the squared distance traveled through the plasma. Calculations based on the AdS/CFT correspondence give an estimated value for this parameter that can be compared with the range measured in experiments.
Condensed matter physics
Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists including Subir Sachdev hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior.
So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on the Planck constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole.
Criticism
With many physicists turning towards string-based methods to solve problems in nuclear and condensed matter physics, some theorists working in these areas have expressed doubts about whether the AdS/CFT correspondence can provide the tools needed to realistically model real-world systems. In a talk at the Quark Matter conference in 2006, the American physicist Larry McLerran pointed out that the super Yang–Mills theory that appears in the AdS/CFT correspondence differs significantly from quantum chromodynamics, making it difficult to apply these methods to nuclear physics.
In a letter to Physics Today, Nobel laureate Philip W. Anderson voiced similar concerns about applications of AdS/CFT to condensed matter physics.
History and development
String theory and nuclear physics
The discovery of the AdS/CFT correspondence in late 1997 was the culmination of a long history of efforts to relate string theory to nuclear physics. In fact, string theory was originally developed during the late 1960s and early 1970s as a theory of hadrons, the subatomic particles like the proton and neutron that are held together by the strong nuclear force. The idea was that each of these particles could be viewed as a different oscillation mode of a string. In the late 1960s, experimentalists had found that hadrons fall into families called Regge trajectories with squared energy proportional to angular momentum, and theorists showed that this relationship emerges naturally from the physics of a rotating relativistic string.
On the other hand, attempts to model hadrons as strings faced serious problems. One problem was that string theory includes a massless spin-2 particle whereas no such particle appears in the physics of hadrons. Such a particle would mediate a force with the properties of gravity. In 1974, Joël Scherk and John Schwarz suggested that string theory was therefore not a theory of nuclear physics as many theorists had thought but instead a theory of quantum gravity. At the same time, it was realized that hadrons are actually made of quarks, and the string theory approach was abandoned in favor of quantum chromodynamics.
In quantum chromodynamics, quarks have a kind of charge that comes in three varieties called colors. In a paper from 1974, Gerard 't Hooft studied the relationship between string theory and nuclear physics from another point of view by considering theories similar to quantum chromodynamics, where the number of colors is some arbitrary number N, rather than three. In this article, 't Hooft considered a certain limit where N tends to infinity and argued that in this limit certain calculations in quantum field theory resemble calculations in string theory.
Black holes and holography
In 1975, Stephen Hawking published a calculation that suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. This work extended previous results of Jacob Bekenstein who had suggested that black holes have a well-defined entropy. At first, Hawking's result appeared to contradict one of the main postulates of quantum mechanics, namely the unitarity of time evolution. Intuitively, the unitarity postulate says that quantum mechanical systems do not destroy information as they evolve from one state to another. For this reason, the apparent contradiction came to be known as the black hole information paradox.
Later, in 1993, Gerard 't Hooft wrote a speculative paper on quantum gravity in which he revisited Hawking's work on black hole thermodynamics, concluding that the total number of degrees of freedom in a region of spacetime surrounding a black hole is proportional to the surface area of the horizon. This idea was promoted by Leonard Susskind and is now known as the holographic principle. The holographic principle and its realization in string theory through the AdS/CFT correspondence have helped elucidate the mysteries of black holes suggested by Hawking's work and are believed to provide a resolution of the black hole information paradox. In 2004, Hawking conceded that black holes do not violate quantum mechanics, and he suggested a concrete mechanism by which they might preserve information.
Maldacena's paper
In late 1997, Juan Maldacena published a landmark paper that initiated the study of AdS/CFT. According to Alexander Markovich Polyakov, "[Maldacena's] work opened the flood gates." The conjecture immediately excited great interest in the string theory community and was considered in a paper by Steven Gubser, Igor Klebanov and Polyakov, and another paper by Edward Witten. These papers made Maldacena's conjecture more precise and showed that the conformal field theory appearing in the correspondence lives on the boundary of anti-de Sitter space.
One special case of Maldacena's proposal says that super Yang–Mills theory, a gauge theory similar in some ways to quantum chromodynamics, is equivalent to string theory in five-dimensional anti-de Sitter space. This result helped clarify the earlier work of 't Hooft on the relationship between string theory and quantum chromodynamics, taking string theory back to its roots as a theory of nuclear physics. Maldacena's results also provided a concrete realization of the holographic principle with important implications for quantum gravity and black hole physics. By the year 2015, Maldacena's paper had become the most highly cited paper in high energy physics with over 10,000 citations. These subsequent articles have provided considerable evidence that the correspondence is correct, although so far it has not been rigorously proved.
Generalizations
Three-dimensional gravity
In order to better understand the quantum aspects of gravity in our four-dimensional universe, some physicists have considered a lower-dimensional mathematical model in which spacetime has only two spatial dimensions and one time dimension. In this setting, the mathematics describing the gravitational field simplifies drastically, and one can study quantum gravity using familiar methods from quantum field theory, eliminating the need for string theory or other more radical approaches to quantum gravity in four dimensions.
Beginning with the work of J. David Brown and Marc Henneaux in 1986, physicists have noticed that quantum gravity in a three-dimensional spacetime is closely related to two-dimensional conformal field theory. In 1995, Henneaux and his coworkers explored this relationship in more detail, suggesting that three-dimensional gravity in anti-de Sitter space is equivalent to the conformal field theory known as Liouville field theory. Another conjecture formulated by Edward Witten states that three-dimensional gravity in anti-de Sitter space is equivalent to a conformal field theory with monster group symmetry. These conjectures provide examples of the AdS/CFT correspondence that do not require the full apparatus of string or M-theory.
dS/CFT correspondence
Unlike our universe, which is now known to be expanding at an accelerating rate, anti-de Sitter space is neither expanding nor contracting. Instead it looks the same at all times. In more technical language, one says that anti-de Sitter space corresponds to a universe with a negative cosmological constant, whereas the real universe has a small positive cosmological constant.
Although the properties of gravity at short distances should be somewhat independent of the value of the cosmological constant, it is desirable to have a version of the AdS/CFT correspondence for positive cosmological constant. In 2001, Andrew Strominger introduced a version of the duality called the dS/CFT correspondence. This duality involves a model of spacetime called de Sitter space with a positive cosmological constant. Such a duality is interesting from the point of view of cosmology since many cosmologists believe that the very early universe was close to being de Sitter space.
Kerr/CFT correspondence
Although the AdS/CFT correspondence is often useful for studying the properties of black holes, most of the black holes considered in the context of AdS/CFT are physically unrealistic. Indeed, as explained above, most versions of the AdS/CFT correspondence involve higher-dimensional models of spacetime with unphysical supersymmetry.
In 2009, Monica Guica, Thomas Hartman, Wei Song, and Andrew Strominger showed that the ideas of AdS/CFT could nevertheless be used to understand certain astrophysical black holes. More precisely, their results apply to black holes that are approximated by extremal Kerr black holes, which have the largest possible angular momentum compatible with a given mass. They showed that such black holes have an equivalent description in terms of conformal field theory. The Kerr/CFT correspondence was later extended to black holes with lower angular momentum.
Higher spin gauge theories
The AdS/CFT correspondence is closely related to another duality conjectured by Igor Klebanov and Alexander Markovich Polyakov in 2002. This duality states that certain "higher spin gauge theories" on anti-de Sitter space are equivalent to conformal field theories with O(N) symmetry. Here the theory in the bulk is a type of gauge theory describing particles of arbitrarily high spin. It is similar to string theory, where the excited modes of vibrating strings correspond to particles with higher spin, and it may help to better understand the string theoretic versions of AdS/CFT and possibly even prove the correspondence. In 2010, Simone Giombi and Xi Yin obtained further evidence for this duality by computing quantities called three-point functions.
| Physical sciences | Quantum mechanics | Physics |
644550 | https://en.wikipedia.org/wiki/Higgs%20mechanism | Higgs mechanism | In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property "mass" for gauge bosons. Without the Higgs mechanism, all bosons (one of the two classes of particles, the other being fermions) would be considered massless, but measurements show that the W+, W−, and Z0 bosons actually have relatively large masses of around 80 GeV/c² (W bosons) and 91 GeV/c² (Z boson). The Higgs field resolves this conundrum. The simplest description of the mechanism adds to the Standard Model a quantum field (the Higgs field), which permeates all of space. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons with which it interacts to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W± and Z weak gauge bosons through electroweak symmetry breaking. The Large Hadron Collider at CERN announced results consistent with the Higgs particle on 14 March 2013, making it extremely likely that the field, or one like it, exists, and explaining how the Higgs mechanism takes place in nature.
The view of the Higgs mechanism as involving spontaneous symmetry breaking of a gauge symmetry is technically incorrect since by Elitzur's theorem gauge symmetries can never be spontaneously broken. Rather, the Fröhlich–Morchio–Strocchi mechanism reformulates the Higgs mechanism in an entirely gauge invariant way, generally leading to the same results.
The mechanism was proposed in 1962 by Philip Warren Anderson, following work in the late 1950s on symmetry breaking in superconductivity and a 1960 paper by Yoichiro Nambu that discussed its application within particle physics.
A theory able to finally explain mass generation without "breaking" gauge theory was published almost simultaneously by three independent groups in 1964: by Robert Brout and François Englert; by Peter Higgs; and by Gerald Guralnik, C. R. Hagen, and Tom Kibble. The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism, the Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, the Anderson–Higgs mechanism, the Anderson–Higgs–Kibble mechanism, the Higgs–Kibble mechanism (by Abdus Salam), and the ABEGHHK'tH mechanism (for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble, and 't Hooft; by Peter Higgs). The Higgs mechanism in electrodynamics was also discovered independently by Eberly and Reiss, in reverse, as the "gauge" Dirac field mass gain due to the artificially displaced electromagnetic field as a Higgs field.
On 8 October 2013, following the discovery at CERN's Large Hadron Collider of a new particle that appeared to be the long-sought Higgs boson predicted by the theory, it was announced that Peter Higgs and François Englert had been awarded the 2013 Nobel Prize in Physics.
Standard Model
The Higgs mechanism was incorporated into modern particle physics by Steven Weinberg and Abdus Salam, and is an essential part of the Standard Model.
In the Standard Model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature, the Higgs field develops a vacuum expectation value; some theories suggest the symmetry is spontaneously broken by tachyon condensation, and the W and Z bosons acquire masses (also called "electroweak symmetry breaking", or EWSB). In the history of the universe, this is believed to have happened about a picosecond after the hot big bang, when the universe was at a temperature of 159.5 ± 1.5 GeV.
Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field, but not in the same way as the gauge bosons.
Structure of the Higgs field
In the standard model, the Higgs field is an SU(2) doublet (i.e. the standard representation with two complex components called isospin), which is a scalar under Lorentz transformations. Its electric charge is zero; its weak isospin is 1/2 and the third component of weak isospin is −1/2; and its weak hypercharge (the charge for the U(1) gauge group defined up to an arbitrary multiplicative constant) is 1. Under U(1) rotations, it is multiplied by a phase, which thus mixes the real and imaginary parts of the complex spinor into each other, combining to the standard two-component complex representation of the group U(2).
The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group U(2). This is often written as SU(2)L × U(1)Y, (which is strictly speaking only the same on the level of infinitesimal symmetries) because the diagonal phase factor also acts on other fields – quarks in particular. Three out of its four components would ordinarily resolve as Goldstone bosons, if they were not coupled to gauge fields.
However, after symmetry breaking, three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons (W+, W−, and Z0), and are only observable as components of these weak bosons, which are made massive by their inclusion; only the single remaining degree of freedom becomes a new scalar particle: the Higgs boson. The components that do not mix with Goldstone bosons form a massless photon.
The photon as the part that remains massless
The gauge group of the electroweak part of the standard model is SU(2)L × U(1)Y. The group SU(2) is the group of all 2-by-2 unitary matrices with unit determinant, i.e., all the orthonormal changes of coordinates in a complex two-dimensional vector space.
Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor $(0, v)$. The generators for rotations about the x, y, and z axes are half the Pauli matrices $\sigma_x$, $\sigma_y$, and $\sigma_z$, so that a rotation of angle $\theta$ about the z-axis takes the vacuum to

$$\left(0, v e^{-i\theta/2}\right).$$

While the $T_x$ and $T_y$ generators mix up the top and bottom components of the spinor, the $T_z$ rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle $\theta/2$. Consequently, under both an SU(2) $T_z$-rotation and a U(1) rotation by an amount $\theta/2$, the vacuum is invariant.
This combination of generators

$$Q = T_3 + \frac{Y}{2}$$

defines the unbroken part of the gauge group, where $Q$ is the electric charge, $T_3$ is the generator of rotations around the 3-axis in the adjoint representation of SU(2) and $Y$ is the weak hypercharge generator of the U(1). This combination of generators (a 3 rotation in the SU(2) and a simultaneous U(1) rotation by half the angle) preserves the vacuum, and defines the unbroken gauge group in the standard model, namely the electric charge group. The part of the gauge field in this direction stays massless, and amounts to the physical photon.
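As a quick consistency check using the conventions above (hypercharge $Y = 1$ for the doublet, third-component isospin $T_3 = \pm\tfrac{1}{2}$ for its two components), the electric charges of the Higgs-doublet components follow from $Q = T_3 + \tfrac{Y}{2}$:

$$Q_{\text{upper}} = \tfrac{1}{2} + \tfrac{1}{2} = 1, \qquad Q_{\text{lower}} = -\tfrac{1}{2} + \tfrac{1}{2} = 0,$$

so the component that acquires the vacuum expectation value is electrically neutral, as required for the photon to remain massless.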
By contrast, the broken trace-orthogonal charge couples to the massive Z0 boson.
Consequences for fermions
In spite of the introduction of spontaneous symmetry breaking, the mass terms preclude chiral gauge invariance. For these fields, the mass terms should always be replaced by a gauge-invariant "Higgs" mechanism. One possibility is some kind of Yukawa coupling (see below) between the fermion field $\psi$ and the Higgs field $\phi$, with unknown couplings $G_\psi$, which after symmetry breaking (more precisely: after expansion of the Lagrange density around a suitable ground state) again results in the original mass terms, which are now, however (i.e., by introduction of the Higgs field) written in a gauge-invariant way. The Lagrange density for the Yukawa interaction of a fermion field $\psi$ and the Higgs field $\phi$ is

$$\mathcal{L}_{\text{Fermion}}(\phi, A, \psi) = \overline{\psi}\gamma^\mu D_\mu \psi + G_\psi \overline{\psi}\phi\psi,$$

where again the gauge field $A$ only enters via the gauge covariant derivative operator $D_\mu$ (i.e., it is only indirectly visible). The quantities $\gamma^\mu$ are the Dirac matrices, and $G_\psi$ is the already-mentioned Yukawa coupling parameter for $\psi$. Now the mass-generation follows the same principle as above, namely from the existence of a finite expectation value $|\langle\phi\rangle|$. Again, this is crucial for the existence of the property mass.
History of research
Background
Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories. However, according to Goldstone's theorem, these bosons should be massless. The only observed particles which could be approximately interpreted as Goldstone bosons were the pions, which Yoichiro Nambu related to chiral symmetry breaking.
A similar problem arises with Yang–Mills theory (also known as non-abelian gauge theory), which predicts massless spin-1 gauge bosons. Massless weakly-interacting gauge bosons lead to long-range forces, which are only observed for electromagnetism and the corresponding massless photon. Gauge theories of the weak force needed a way to describe massive gauge bosons in order to be consistent.
Discovery
That breaking gauge symmetries did not lead to massless particles was observed in 1961 by Julian Schwinger, but he did not demonstrate that massive particles would result. This was done in Philip Warren Anderson's 1962 paper, but only in non-relativistic field theory; it also discussed consequences for particle physics but did not work out an explicit relativistic model. The relativistic model was developed in 1964 by three independent groups:
Robert Brout and François Englert
Peter Higgs
Gerald Guralnik, Carl Richard Hagen, and Tom Kibble.
Slightly later, in 1965, but independently from the other publications the mechanism was also proposed by Alexander Migdal and Alexander Polyakov, at that time Soviet undergraduate students. However, their paper was delayed by the editorial office of JETP, and was published late, in 1966.
The mechanism is closely analogous to phenomena previously discovered by Yoichiro Nambu involving the "vacuum structure" of quantum fields in superconductivity. A similar but distinct effect (involving an affine realization of what is now recognized as the Higgs field), known as the Stueckelberg mechanism, had previously been studied by Ernst Stueckelberg.
These physicists discovered that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry group, the gauge bosons can consistently acquire a nonzero mass. In spite of the large values involved (see below), this permits a gauge theory description of the weak force, which was independently developed by Steven Weinberg and Abdus Salam in 1967. Higgs's original article presenting the model was rejected by Physics Letters. When revising the article before resubmitting it to Physical Review Letters, he added a sentence at the end, mentioning that it implies the existence of one or more new, massive scalar bosons, which do not form complete representations of the symmetry group; these are the Higgs bosons.
The three papers by Brout and Englert; Higgs; and Guralnik, Hagen, and Kibble were each recognized as "milestone letters" by Physical Review Letters in 2008. While each of these seminal papers took similar approaches, the contributions and differences among the 1964 PRL symmetry breaking papers are noteworthy. All six physicists were jointly awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work.
Benjamin W. Lee is often credited with first naming the "Higgs-like" mechanism, although there is debate around when this first occurred. One of the first times the Higgs name appeared in print was in 1972 when Gerardus 't Hooft and Martinus J. G. Veltman referred to it as the "Higgs–Kibble mechanism" in their Nobel-winning paper.
Simple explanation of the theory, from its origins in superconductivity
The proposed Higgs mechanism arose as a result of theories proposed to explain observations in superconductivity. A superconductor does not allow penetration by external magnetic fields (the Meissner effect). This strange observation implies that the electromagnetic field somehow becomes short-ranged during this phenomenon. Successful theories arose to explain this during the 1950s, first phenomenologically (Ginzburg–Landau theory, 1950) and then microscopically (BCS theory, 1957).
In these theories, superconductivity is interpreted as arising from a charged condensate. Initially, the condensate value does not have any preferred direction. This implies it is scalar, but its phase is capable of defining a gauge in gauge-based field theories. To do this, the field must be charged. A charged scalar field must also be complex (or, described another way, it contains at least two components and a symmetry capable of rotating each into the other(s)). In naïve gauge theory, a gauge transformation of a condensate usually rotates the phase. However, in these circumstances, it instead fixes a preferred choice of phase. It turns out that fixing the choice of gauge so that the condensate has the same phase everywhere also causes the electromagnetic field to gain an extra term. This extra term causes the electromagnetic field to become short range.
Goldstone's theorem also plays a role in such theories. The connection, technically, is that when a condensate breaks a symmetry, the state reached by acting with a symmetry generator on the condensate has the same energy as before. This means that some kinds of oscillation will not involve change of energy. Oscillations with unchanged energy imply that excitations (particles) associated with the oscillation are massless.
Once attention was drawn to this theory within particle physics, the parallels were clear. A change of the usually long-range electromagnetic field to become short-ranged, within a gauge invariant theory, was exactly the needed effect sought for the weak force bosons (because a long-range force has massless gauge bosons, and a short-ranged force implies massive gauge bosons, suggesting that a result of this interaction is that the field's gauge bosons acquired mass, or a similar and equivalent effect). The features of a field required to do this were also quite well defined: it would have to be a charged scalar field, with at least two components, and complex in order to support a symmetry able to rotate these into each other.
Examples
The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. In the non-relativistic context this is a superconductor, more formally known as the Landau model of a charged Bose–Einstein condensate. In the relativistic case, the condensate is a scalar field that is relativistically invariant.
Landau model
The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged, or, in field language, when a charged field has a nonzero vacuum expectation value. Interaction with the quantum fluid filling the space prevents certain forces from propagating over long distances (as it does inside a superconductor; e.g., in the Ginzburg–Landau theory).
A superconductor expels all magnetic fields from its interior, a phenomenon known as the Meissner effect. This was mysterious for a long time, because it implies that electromagnetic forces somehow become short-range inside the superconductor. Contrast this with the behavior of an ordinary metal. In a metal, the conductivity shields electric fields by rearranging charges on the surface until the total field cancels in the interior.
But magnetic fields can penetrate to any distance, and if a magnetic monopole (an isolated magnetic pole) is surrounded by a metal the field can escape without collimating into a string. In a superconductor, however, electric charges move with no dissipation, and this allows for permanent surface currents, not just surface charges. When magnetic fields are introduced at the boundary of a superconductor, they produce surface currents which exactly neutralize them.
The Meissner effect arises due to currents in a thin surface layer, whose thickness can be calculated from the simple model of Ginzburg–Landau theory, which treats superconductivity as a charged Bose–Einstein condensate.
Suppose that a superconductor contains bosons with charge $q$. The wavefunction of the bosons can be described by introducing a quantum field, $\psi$, which obeys the Schrödinger equation as a field equation. In units where the reduced Planck constant, $\hbar$, is set to 1:

$$i\frac{\partial \psi}{\partial t} = -\frac{\left(\nabla - iqA\right)^2}{2m}\psi.$$

The operator $\psi(x)$ annihilates a boson at the point $x$, while its adjoint $\psi^\dagger(x)$ creates a new boson at the same point. The wavefunction of the Bose–Einstein condensate is then the expectation value $\Psi$ of $\psi(x)$, which is a classical function that obeys the same equation. The interpretation of the expectation value is that it is the phase that one should give to a newly created boson so that it will coherently superpose with all the other bosons already in the condensate.
When there is a charged condensate, the electromagnetic interactions are screened. To see this, consider the effect of a gauge transformation on the field. A gauge transformation rotates the phase of the condensate by an amount $q\alpha(x)$ which changes from point to point, and shifts the vector potential by a gradient:

$$\psi \to e^{iq\alpha(x)}\psi, \qquad A \to A + \nabla\alpha.$$
When there is no condensate, this transformation only changes the definition of the phase of at every point. But when there is a condensate, the phase of the condensate defines a preferred choice of phase.
The condensate wave function can be written as

$$\psi(x) = \rho(x)\, e^{i\theta(x)},$$

where $\rho$ is a real amplitude which determines the local density of the condensate. If the condensate were neutral, the flow would be along the gradients of $\theta$, the direction in which the phase of the Schrödinger field changes. If the phase $\theta$ changes slowly, the flow is slow and has very little energy. But now $\theta$ can be made equal to zero just by making a gauge transformation to rotate the phase of the field.
The energy of slow changes of phase can be calculated from the Schrödinger kinetic energy,

$$E = \frac{1}{2m}\left|\nabla\psi\right|^2,$$

and taking the density of the condensate to be constant, $\rho \approx \rho_0$,

$$E \approx \frac{\rho_0^2}{2m}\left(\nabla\theta\right)^2.$$

Fixing the choice of gauge so that the condensate has the same phase everywhere, the electromagnetic field energy has an extra term,

$$E = \frac{q^2\rho_0^2}{2m}A^2.$$

When this term is present, electromagnetic interactions become short-ranged. Every field mode, no matter how long the wavelength, oscillates with a nonzero frequency. The lowest frequency can be read off from the energy of a long wavelength mode,

$$E = \frac{\dot{A}^2}{2} + \frac{q^2\rho_0^2}{2m}A^2.$$

This is a harmonic oscillator with frequency

$$\omega = \sqrt{\frac{q^2\rho_0^2}{m}}.$$

The quantity $\rho_0^2$, which equals $|\psi|^2$, is the density of the condensate of superconducting particles.
In an actual superconductor, the charged particles are electrons, which are fermions not bosons. So in order to have superconductivity, the electrons need to somehow bind into Cooper pairs. The charge of the condensate is therefore twice the electron charge . The pairing in a normal superconductor is due to lattice vibrations, and is in fact very weak; this means that the pairs are very loosely bound. The description of a Bose–Einstein condensate of loosely bound pairs is actually more difficult than the description of a condensate of elementary particles, and was only worked out in 1957 by John Bardeen, Leon Cooper, and John Robert Schrieffer in the famous BCS theory.
Abelian Higgs mechanism
Gauge invariance means that certain transformations of the gauge field do not change the energy at all. If an arbitrary gradient is added to $A$, the energy of the field is exactly the same. This makes it difficult to add a mass term, because a mass term tends to push the field toward the value zero. But the zero value of the vector potential is not a gauge invariant idea. What is zero in one gauge is nonzero in another.
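To see the obstruction concretely (a short check, not part of the original passage): under a gauge transformation $A_\mu \to A_\mu + \partial_\mu \alpha$, a naive mass term is not invariant,

$$\frac{1}{2}m^2 A_\mu A^\mu \;\to\; \frac{1}{2}m^2 \left(A_\mu + \partial_\mu\alpha\right)\left(A^\mu + \partial^\mu\alpha\right) \neq \frac{1}{2}m^2 A_\mu A^\mu,$$

so such a term is forbidden as long as the gauge symmetry acts in the usual way.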
So in order to give mass to a gauge theory, the gauge invariance must be broken by a condensate. The condensate will then define a preferred phase, and the phase of the condensate will define the zero value of the field in a gauge-invariant way. The gauge-invariant definition is that a gauge field is zero when the phase change along any path from parallel transport is equal to the phase difference in the condensate wavefunction.
The condensate value is described by a quantum field with an expectation value, just as in the Ginzburg–Landau model.
In order for the phase of the vacuum to define a gauge, the field must have a phase (also referred to as being "charged"). In order for a scalar field to have a phase, it must be complex, or (equivalently) it should contain two fields with a symmetry which rotates them into each other. The vector potential changes the phase of the quanta produced by the field when they move from point to point. In terms of fields, it defines how much to rotate the real and imaginary parts of the fields into each other when comparing field values at nearby points.
The only renormalizable model where a complex scalar field $\phi$ acquires a nonzero value is the 'Mexican-hat' model, where the field energy has a minimum away from zero. The action for this model is

$$S(\phi) = \int \frac{1}{2}\left|\partial\phi\right|^2 - \lambda\left(|\phi|^2 - \Phi^2\right)^2,$$

which results in the Hamiltonian

$$H(\phi) = \int \frac{1}{2}\left|\dot\phi\right|^2 + \frac{1}{2}\left|\nabla\phi\right|^2 + V(|\phi|).$$
The first term is the kinetic energy of the field. The second term is the extra potential energy when the field varies from point to point. The third term is the potential energy when the field has any given magnitude.
This potential energy, the Higgs potential, has a graph which looks like a Mexican hat, which gives the model its name. In particular, the minimum energy value is not at $\phi = 0$ but on the circle of points where the magnitude of the field satisfies $|\phi| = \Phi$.
When the field is not coupled to electromagnetism, the Mexican-hat potential has flat directions. Starting in any one of the circle of vacua and changing the phase of the field from point to point costs very little energy. Mathematically, if

$$\phi(x) = \Phi e^{i\theta(x)}$$

with a constant prefactor $\Phi$, then the action for the field $\theta(x)$, i.e., the "phase" of the Higgs field, has only derivative terms. This is not a surprise: adding a constant to $\theta(x)$ is a symmetry of the original theory, so different values of $\theta(x)$ cannot have different energies. This is an example of configuring the model to conform to Goldstone's theorem: spontaneously broken continuous symmetries (normally) produce massless excitations.
The Abelian Higgs model is the Mexican-hat model coupled to electromagnetism:

$$S(\phi, A) = \int -\frac{1}{4}F^{\mu\nu}F_{\mu\nu} + \left|\left(\partial - iqA\right)\phi\right|^2 - \lambda\left(|\phi|^2 - \Phi^2\right)^2.$$

The classical vacuum is again at the minimum of the potential, where the magnitude of the complex field is equal to $\Phi$. But now the phase of the field is arbitrary, because gauge transformations change it. This means that the field $\theta(x)$ can be set to zero by a gauge transformation, and does not represent any actual degrees of freedom at all.
Furthermore, choosing a gauge where the phase of the vacuum is fixed, the potential energy for fluctuations of the vector field is nonzero. So in the Abelian Higgs model, the gauge field acquires a mass. To calculate the magnitude of the mass, consider a constant value of the vector potential $A$ in the $x$-direction in the gauge where the condensate has constant phase. This is the same as a sinusoidally varying condensate in the gauge where the vector potential is zero. In the gauge where $A$ is zero, the potential energy density in the condensate is the scalar gradient energy:

$$E = \frac{1}{2}\left|\partial\left(\Phi e^{iqAx}\right)\right|^2 = \frac{1}{2}q^2\Phi^2A^2.$$

This energy is the same as a mass term $\frac{1}{2}m^2A^2$ where $m = q\Phi$.
Mathematical details of the abelian Higgs mechanism
Non-Abelian Higgs mechanism
The Non-Abelian Higgs model has the following action:

$$S(\phi, \mathbf{A}) = \int \frac{1}{4}\operatorname{tr}\left(F^{\mu\nu}F_{\mu\nu}\right) + \left|D\phi\right|^2 + V(|\phi|),$$

where now the non-Abelian field $\mathbf{A}$ is contained in the covariant derivative $D$ and in the tensor components $F^{\mu\nu}$ and $F_{\mu\nu}$ (the relation between $\mathbf{A}$ and those components is well-known from the Yang–Mills theory).
It is exactly analogous to the Abelian Higgs model. Now the field is in a representation of the gauge group, and the gauge covariant derivative is defined by the rate of change of the field minus the rate of change from parallel transport using the gauge field A as a connection.
Again, the expectation value of $\phi$ defines a preferred gauge where the vacuum is constant, and fixing this gauge, fluctuations in the gauge field $\mathbf{A}$ come with a nonzero energy cost.
Depending on the representation of the scalar field, not every gauge field acquires a mass. A simple example is in the renormalizable version of an early electroweak model due to Julian Schwinger. In this model, the gauge group is SO(3) (or SU(2); there are no spinor representations in the model), and the gauge invariance is broken down to U(1) or SO(2) at long distances. To make a consistent renormalizable version using the Higgs mechanism, introduce a scalar field $\phi$ which transforms as a vector (a triplet) of SO(3). If this field has a vacuum expectation value, it points in some direction in field space. Without loss of generality, one can choose the z-axis in field space to be the direction that $\phi$ is pointing, and then the vacuum expectation value of $\phi$ is $(0, 0, \tilde{A})$, where $\tilde{A}$ is a constant with dimensions of mass (in units where $c = \hbar = 1$).
Rotations around the z-axis form a U(1) subgroup of SO(3) which preserves the vacuum expectation value of $\phi$, and this is the unbroken gauge group. Rotations around the x and y-axis do not preserve the vacuum, and the components of the SO(3) gauge field which generate these rotations become massive vector mesons. There are two massive W mesons in the Schwinger model, with a mass set by the mass scale $\tilde{A}$, and one massless U(1) gauge boson, similar to the photon.
The Schwinger model predicts magnetic monopoles at the electroweak unification scale, and does not predict the Z boson. It does not break electroweak symmetry properly, as happens in nature. But historically, a model similar to this (but not using the Higgs mechanism) was the first in which the weak force and the electromagnetic force were unified.
Affine Higgs mechanism
Ernst Stueckelberg discovered a version of the Higgs mechanism by analyzing the theory of quantum electrodynamics with a massive photon. Effectively, Stueckelberg's model is a limit of the regular Mexican hat Abelian Higgs model, where the vacuum expectation value $H$ goes to infinity and the charge $e$ of the Higgs field goes to zero in such a way that their product stays fixed. The mass of the Higgs boson is proportional to $H$, so the Higgs boson becomes infinitely massive and decouples, so is not present in the discussion. The vector meson mass, however, is equal to the product $eH$, and stays finite.
The interpretation is that when a U(1) gauge field does not require quantized charges, it is possible to keep only the angular part of the Higgs oscillations, and discard the radial part. The angular part of the Higgs field $\theta$ has the following gauge transformation law:

$$\theta' = \theta + e\alpha, \qquad A' = A + \partial\alpha.$$

The gauge covariant derivative for the angle (which is actually gauge invariant) is:

$$D\theta = \partial\theta - eA.$$
In order to keep $\theta$ fluctuations finite and nonzero in this limit, $\theta$ should be rescaled by $H$, so that its kinetic term in the action stays normalized. The action for the theta field is read off from the Mexican hat action by substituting $\phi = H e^{i\theta/H}$:

$$S = \int \frac{1}{4}F^2 + \frac{1}{2}\left(\partial\theta - HeA\right)^2,$$

since $eH$ is the gauge boson mass. By making a gauge transformation to set $\theta = 0$, the gauge freedom in the action is eliminated, and the action becomes that of a massive vector field:

$$S = \int \frac{1}{4}F^2 + \frac{1}{2}m^2A^2.$$
To have arbitrarily small charges requires that the U(1) is not the circle of unit complex numbers under multiplication, but the real numbers under addition, which is only different in the global topology. Such a U(1) group is non-compact. The field $\theta$ transforms as an affine representation of the gauge group. Among the allowed gauge groups, only non-compact U(1) admits affine representations, and the U(1) of electromagnetism is experimentally known to be compact, since charge quantization holds to extremely high accuracy.
The Higgs condensate in this model has infinitesimal charge, so interactions with the Higgs boson do not violate charge conservation. The theory of quantum electrodynamics with a massive photon is still a renormalizable theory, one in which electric charge is still conserved, but magnetic monopoles are not allowed. For non-Abelian gauge theory, there is no affine limit, and the Higgs oscillations cannot be too much more massive than the vectors.
| Physical sciences | Particle physics: General | Physics |
644662 | https://en.wikipedia.org/wiki/Pixel%20density | Pixel density | Pixels per inch (ppi) and pixels per centimetre (ppcm or pixels/cm) are measurements of the pixel density of an electronic image device, such as a computer monitor or television display, or image digitizing device such as a camera or image scanner. Horizontal and vertical density are usually the same, as most devices have square pixels, but differ on devices that have non-square pixels. Pixel density is not the same as resolution, where the former describes the amount of detail on a physical surface or device, while the latter describes the amount of pixel information regardless of its scale. Considered in another way, a pixel has no inherent size or unit (a pixel is actually a sample), but when it is printed, displayed, or scanned, then the pixel has both a physical size (dimension) and a pixel density (ppi).
Basic principles
Since most digital hardware devices use dots or pixels, the size of the media (in inches) and the number of pixels (or dots) are directly related by the 'pixels per inch'. The following formula gives the number of pixels, horizontally or vertically, given the physical size of a format and the pixels per inch of the output:

$$\text{number of pixels} = \text{size in inches} \times \text{PPI}.$$
Pixels per inch (or pixels per centimetre) describes the detail of an image file when the print size is known. For example, a 100×100 pixel image printed in a 2 inch square has a resolution of 50 pixels per inch. Used this way, the measurement is meaningful when printing an image. In many applications, such as Adobe Photoshop, the program is designed so that one creates new images by specifying the output device and PPI (pixels per inch). Thus the output target is often defined upon creating the image.
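As an illustration of the relationship above, here is a minimal Python sketch; the function names are invented for this example, and the 300 PPI figure is just an illustrative print density:

```python
def pixels_needed(size_inches: float, ppi: float) -> float:
    """Pixels along one dimension = physical size (inches) x pixel density (PPI)."""
    return size_inches * ppi

def print_ppi(pixels: float, size_inches: float) -> float:
    """Inverse relationship: pixel density of a print whose physical size is known."""
    return pixels / size_inches

print(pixels_needed(2, 300))  # 600.0 pixels for a 2-inch print at 300 PPI
print(print_ppi(100, 2))      # 50.0 PPI: the 100x100 pixel image in a 2-inch square
```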
Outputting to a different device
When moving images between devices, such as printing an image that was created on a monitor, it is important to understand the pixel density of both devices. Consider a 23″ HD monitor (20″ wide) that has a known, native resolution of 1920 pixels (horizontal). Let us assume an artist created a new image at this monitor resolution of 1920 pixels, possibly intended for the web without regard to printing. Rewriting the formula above can tell us the pixel density (PPI) of the image on the monitor display:

$$\text{PPI} = \frac{1920\ \text{pixels}}{20\ \text{inches}} = 96.$$
Now, let us imagine the artist wishes to print a larger banner at 48″ horizontally. We know the number of pixels in the image, and the size of the output, from which we can use the same formula again to give the PPI of the printed poster:

$$\text{PPI} = \frac{1920\ \text{pixels}}{48\ \text{inches}} = 40.$$
This shows that the output banner will have only 40 pixels per inch. Since a typical printer device is capable of printing at 300 PPI, the resolution of the original image is well below what would be needed to create a decent quality banner, even if it looked good on a monitor for a website. We would say more directly that a 1920 × 1080 pixel image does not have enough pixels to be printed in a large format.
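The two calculations above reduce to the same one-line formula; a minimal Python sketch (the function name is invented for illustration):

```python
def ppi(pixels: int, inches: float) -> float:
    """Pixel density when a given pixel count spans a given physical length."""
    return pixels / inches

print(ppi(1920, 20))  # 96.0 PPI on the 20-inch-wide monitor
print(ppi(1920, 48))  # 40.0 PPI when stretched across the 48-inch banner
```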
Printing on paper
Printing on paper is accomplished with different technologies. Newspapers and magazines were traditionally printed using a halftone screen, which would print dots at a given frequency, the screen frequency, in lines per inch (LPI) by using a purely analog process in which a photographic print is converted into variable sized dots through interference patterns passing through a screen. Modern inkjet printers can print microscopic dots at any location, and do not require a screen grid, with the metric dots per inch (DPI). These are both different from pixel density or pixels per inch (PPI) because a pixel is a single sample of any color, whereas an inkjet print can only print a dot of a specific color either on or off. Thus a printer translates the pixels into a series of dots using a process called dithering. The dot pitch, the smallest size of each dot, is also determined by the type of paper the image is printed on. An absorbent paper surface, uncoated recycled paper for instance, lets ink droplets spread, and so has a larger dot pitch.
Often one wishes to know the image quality in pixels per inch (PPI) that would be suitable for a given output device. If the choice is too low, then the quality will be below what the device is capable of (loss of quality), and if the choice is too high then pixels will be stored unnecessarily (wasted disk space). The ideal pixel density (PPI) depends on the output format, output device, the intended use and artistic choice. For inkjet printers measured in DPI, it is generally good practice to choose a PPI of half the printer's DPI or less. For example, an image intended for a printer capable of 600 dpi could be created at 300 ppi. When using other technologies such as AM or FM screen printing, there are often published screening charts that indicate the ideal PPI for a printing method.
Using the DPI or LPI of a printer remains useful for determining PPI until one reaches larger formats, such as 36″ or wider, where viewer distance and visual acuity become more important considerations. If a print can be viewed close up, then one may choose a PPI near the printer's limits. However, if a poster, banner or billboard will be viewed from far away, then a much lower PPI may be used.
Computer displays
The PPI/PPCM of a computer display is related to the size of the display in inches/centimetres and the total number of pixels in the horizontal and vertical directions. This measurement is often referred to as dots per inch, though that measurement more accurately refers to the resolution of a computer printer.
For example, a 15-inch (38 cm) display whose dimensions work out to 12 inches (30.48 cm) wide by 9 inches (22.86 cm) high, capable of a maximum 1024×768 (or XGA) pixel resolution, can display around 85 PPI, or 33.46 PPCM, in both the horizontal and vertical directions. This figure is determined by dividing the width (or height) of the display area in pixels by the width (or height) of the display area in inches. It is possible for a display to have different horizontal and vertical PPI measurements (e.g., a typical 4:3 ratio CRT monitor showing a 1280×1024 mode computer display at maximum size, which is a 5:4 ratio, not quite the same as 4:3). The apparent PPI of a monitor depends upon the screen resolution (that is, the number of pixels) and the size of the screen in use; a monitor in 800×600 mode has a lower PPI than does the same monitor in a 1024×768 or 1280×960 mode.
The dot pitch of a computer display determines the absolute limit of possible pixel density.
Typical circa-2000 cathode-ray tube or LCD computer displays range from 67 to 130 PPI, though desktop monitors have exceeded 200 PPI, and certain smartphone manufacturers' flagship mobile device models have been exceeding 500 PPI since 2014.
In January 2008, Kopin Corporation announced a 0.44 inch (1.12 cm) SVGA LCD with a pixel density of 2272 PPI (each pixel only 11.25 μm). In 2011 they followed this up with a 3760-DPI 0.21-inch diagonal VGA colour display. The manufacturer says they designed the LCD to be optically magnified, as in high-resolution eyewear devices.
Holography applications demand even greater pixel density, as higher pixel density produces a larger image size and wider viewing angle. Spatial light modulators can reduce pixel pitch to 2.5 μm, giving a pixel density of 10,160 PPI.
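Pixel pitch and pixel density express the same quantity, related by the 25,400 micrometres in an inch. A small sketch (the helper name pitch_to_ppi is ours); the slight mismatch for the Kopin display comes from rounding in the quoted pitch:

    def pitch_to_ppi(pitch_um):
        """Pixel density from pixel pitch: one inch is 25,400 micrometres."""
        return 25400 / pitch_um

    print(pitch_to_ppi(2.5))    # 10160.0, matching the spatial light modulator figure
    print(pitch_to_ppi(11.25))  # ~2258; the quoted 2272 PPI implies a pitch of ~11.18 um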
Some observations indicate that the unaided human eye generally cannot differentiate detail beyond 300 PPI. However, this figure depends both on the distance between the viewer and the image, and on the viewer's visual acuity. The human eye also responds differently to a bright, evenly lit interactive display than to prints on paper.
High pixel density display technologies would make supersampled antialiasing obsolete, enable true WYSIWYG graphics and, potentially, enable a practical “paperless office” era. For perspective, such a device at a 15 inch (38 cm) screen size would have to display more than four Full HD screens (or WQUXGA resolution).
The PPI pixel density specification of a display is also useful for calibrating a monitor with a printer. Software can use the PPI measurement to display a document at "actual size" on the screen.
Calculation of monitor PPI
PPI can be calculated from the screen's diagonal size in inches and the resolution in pixels (width and height). This can be done in two steps:
Calculate the diagonal resolution in pixels using the Pythagorean theorem:

    d_p = √(w_p² + h_p²)

Calculate the PPI:

    PPI = d_p / d_i

where
w_p is the width resolution in pixels,
h_p is the height resolution in pixels,
d_p is the diagonal size in pixels, and
d_i is the diagonal size in inches (this is the number advertised as the size of the display).
For example:
For a 15.6-inch screen with a 5120×2880 resolution: √(5120² + 2880²) / 15.6 ≈ 376.57 PPI.
For a 50-inch screen with an 8192×4608 resolution: √(8192² + 4608²) / 50 ≈ 188 PPI.
For a 27-inch screen with a 3840×2160 resolution: √(3840² + 2160²) / 27 ≈ 163 PPI.
For a 32-inch screen with a 3840×2160 resolution: √(3840² + 2160²) / 32 ≈ 138 PPI.
For an old-school 10.1-inch netbook screen with a 1024×600 resolution: √(1024² + 600²) / 10.1 ≈ 117.5 PPI.
For a 27-inch screen with a 2560×1440 resolution: √(2560² + 1440²) / 27 ≈ 108.8 PPI.
For a 21.5-inch (546.1 mm) screen with a 1920×1080 resolution: √(1920² + 1080²) / 21.5 ≈ 102.46 PPI.
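The two steps fold into a one-line computation. A minimal Python sketch (the function name monitor_ppi is our own) that reproduces two of the figures above:

    import math

    def monitor_ppi(width_px, height_px, diagonal_in):
        """PPI from the resolution and the advertised diagonal size."""
        diagonal_px = math.hypot(width_px, height_px)  # step 1: Pythagorean theorem
        return diagonal_px / diagonal_in               # step 2: divide by diagonal inches

    print(round(monitor_ppi(5120, 2880, 15.6), 2))  # 376.57
    print(round(monitor_ppi(1920, 1080, 21.5), 2))  # 102.46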
These calculations may not be very precise. Frequently, screens advertised as an “X inch screen” have real viewable-area dimensions that differ, for example:
Apple Inc. advertised their mid-2011 iMac as a "21.5 inch (viewable) [...] display," but its actual viewable area is 545.22 mm or 21.465 inches. The more precise figure increases the calculated PPI from 102.46 (using 21.5) to 102.63.
The HP LP2065 20 inch (50.8 cm) monitor has an actual viewable area of 20.1 inch (51 cm).
In a more significant case, some monitors such as the Dell UltraSharp UP3216Q (3840×2160 px) are advertised as a 32-inch "class" monitor (137.68 PPI), but the actual viewing area diagonal is 31.5 inches, making the true PPI 139.87.
Calculating PPI of camera view screens
Camera manufacturers often quote view screens in 'number of dots'. This is not the same as the number of pixels, because there are 3 'dots' per pixel – red, green and blue. For example, the Canon 50D is quoted as having 920,000 dots. This translates to 307,200 pixels (×3 = 921,600 dots). Thus the screen is 640×480 pixels.
This must be taken into account when working out the PPI. 'Dots' and 'pixels' are often confused in reviews and specifications of digital cameras.
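A short sketch of the dot-to-pixel conversion just described; the helper and its default 4:3 aspect ratio are our own illustrative assumptions:

    def dots_to_pixels(dots, aspect=(4, 3)):
        """Three sub-pixel 'dots' (red, green, blue) make up one pixel; from the
        pixel count, infer a plausible resolution for the given aspect ratio."""
        pixels = dots // 3
        w, h = aspect
        width = round((pixels * w / h) ** 0.5)
        height = round(width * h / w)
        return pixels, width, height

    print(dots_to_pixels(921_600))  # (307200, 640, 480), as in the Canon 50D example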
Scanners and cameras
"PPI" or "pixel density" may also describe image scanner resolution. In this context, PPI is synonymous with samples per inch. In digital photography, pixel density is the number of pixels divided by the area of the sensor. A typical DSLR, circa 2013, has 1–6.2 MP/cm2; a typical compact has 20–70 MP/cm2.
For example, the Sony Alpha SLT-A58 has 20.1 megapixels on an APS-C sensor, giving 6.2 MP/cm2, whereas a compact camera like the Sony Cyber-shot DSC-HX50V has 20.4 megapixels on a 1/2.3" sensor, giving 70 MP/cm2. The professional camera has a lower pixel density than the compact camera because its far larger sensor allows far larger photodiodes.
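A minimal sketch of the density computation. The 6.17 mm × 4.55 mm figures below are nominal 1/2.3" sensor dimensions, assumed here for illustration; rounding in those dimensions accounts for the small difference from the ~70 MP/cm2 cited above:

    def sensor_density(megapixels, width_mm, height_mm):
        """Pixel density in MP/cm^2 from megapixel count and sensor dimensions."""
        area_cm2 = (width_mm / 10) * (height_mm / 10)
        return megapixels / area_cm2

    print(round(sensor_density(20.4, 6.17, 4.55), 1))  # ~72.7 MP/cm^2 for the compact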
Smartphones
Smartphones use small displays, but modern smartphone displays have high PPI ratings, such as the Samsung Galaxy S7 with a quad HD display at 577 PPI, the Fujitsu F-02G and the LG G6 with quad HD displays at 564 PPI, or the Oppo Find 7 with 534 PPI on a 5.5-inch display (see the Android density classes below). Sony's Xperia XZ Premium has a 4K display with a pixel density of 807 PPI, the highest of any smartphone as of 2017.
Logical DPI values on Android
Android supports the following logical DPI values for controlling how large content is displayed: ldpi (120 dpi), mdpi (160 dpi, the baseline), tvdpi (213 dpi), hdpi (240 dpi), xhdpi (320 dpi), xxhdpi (480 dpi) and xxxhdpi (640 dpi).
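These buckets drive Android's density-independent pixel (dp) unit, which scales against the 160 dpi baseline. A minimal sketch of the conversion:

    def dp_to_px(dp, dpi):
        """Android scales density-independent pixels against the 160 dpi baseline:
        px = dp * (dpi / 160)."""
        return dp * dpi / 160

    print(dp_to_px(48, 160))  # 48.0 physical pixels at mdpi
    print(dp_to_px(48, 480))  # 144.0 physical pixels at xxhdpi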
Metrication
The digital publishing industry primarily uses pixels per inch but sometimes pixels per centimeter is used, or a conversion factor is given.
The PNG image file format only allows the meter as the unit for pixel density.
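A small sketch of the unit conversions involved (the helper names are ours). Note how a 300 PPI density becomes 11811 pixels per metre, the integer a PNG encoder would record in its pHYs chunk:

    CM_PER_INCH = 2.54
    M_PER_INCH = 0.0254

    def ppi_to_ppcm(ppi):
        """Pixels per inch to pixels per centimetre."""
        return ppi / CM_PER_INCH

    def ppi_to_pixels_per_metre(ppi):
        """PNG stores density as an integer in pixels per metre."""
        return round(ppi / M_PER_INCH)

    print(round(ppi_to_ppcm(300), 2))    # 118.11
    print(ppi_to_pixels_per_metre(300))  # 11811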
Image file format support
The following table shows how pixel density is supported by popular image file formats. The cell colors used do not indicate how feature-rich a certain image file format is, but what density support can be expected of it.
Even though image manipulation software can optionally set density for some image file formats, little other software uses density information when displaying images. Web browsers, for example, ignore any density information. As the table shows, support for density information varies enormously between image file formats, so it should be relied upon only with great care in a controlled context.
| Physical sciences | Visual | Basics and measurement |
644913 | https://en.wikipedia.org/wiki/ExpressCard | ExpressCard | ExpressCard, initially called NEWCARD, is an interface to connect peripheral devices to a computer, usually a laptop computer. The ExpressCard technical standard specifies the design of slots built into the computer and of expansion cards to insert in the slots. The cards contain electronic circuits and sometimes connectors for external devices. The ExpressCard standard replaces the PC Card (also known as PCMCIA) standards.
ExpressCards can connect a variety of devices to a computer including mobile broadband modems (sometimes called connect cards), IEEE 1394 (FireWire) connectors, USB connectors, Ethernet network ports, Serial ATA storage devices, solid-state drives, external enclosures for desktop-size PCI Express graphics cards and other peripheral devices, wireless network interface controllers (NIC), TV tuner cards, Common Access Card (CAC) readers, and sound cards.
Standards
Originally developed by the Personal Computer Memory Card International Association (PCMCIA), the ExpressCard standard is maintained by the USB Implementers Forum (USB-IF). The host device supports PCI Express, USB 2.0 (including Hi-Speed), and USB 3.0 (SuperSpeed) (ExpressCard 2.0 only) connectivity through the ExpressCard slot; cards can be designed to use any of these modes. The cards are hot-pluggable. The ExpressCard standard is an open standard by ITU-T definition, and can be obtained from the USB-IF website free of charge.
USB-IF administers the ExpressCard Compliance Program, under which companies earn the right to obtain a license to use the ExpressCard logo on their compliant products.
Form factors
The ExpressCard standard specifies two form factors, ExpressCard/34 (34 mm wide) and ExpressCard/54 (54 mm wide, in an L-shape); the connector is the same on both (34 mm wide). Standard cards are 75 mm long (10.6 mm shorter than CardBus) and 5 mm thick, but may be thicker on sections that extend outside the standard form for antennas, sockets, etc. With its 75 mm standard length, the ExpressCard will protrude 5 mm over the holder's surface (e.g. laptop surface), whereas a variant with 70 mm length remains level with the surface.
Comparison to other standards
The older PC Cards came in 16-bit and the later 32-bit CardBus designs. The major benefit of the ExpressCard over the PC card is more bandwidth, due to the ExpressCard's direct connection to the system bus over a PCI Express ×1 lane and USB 2.0, while CardBus cards only interface with PCI. The ExpressCard has a maximum throughput of 2.5 Gbit/s through PCI Express and 480 Mbit/s through USB 2.0 dedicated for each slot, while all CardBus and PCI devices connected to a computer usually share a total 1.06 Gbit/s bandwidth.
The ExpressCard standard specifies voltages of either 1.5 V or 3.3 V; CardBus slots can use 3.3 V or 5.0 V. The ExpressCard FAQ claims lower cost, better scalability, and better integration with motherboard chipset technology than CardBus. PCMCIA devices can be connected to an ExpressCard slot via an adapter.
When the PC Card was introduced, the only other way to connect peripherals to a laptop computer was via RS-232 and parallel ports of limited performance, so it was widely adopted for many peripherals. More recently, virtually all laptop equipment has 480 Mbit/s Hi-Speed USB 2.0 ports, and most types of peripheral which formerly used a PC Card connection are available for USB or are built-in, making the ExpressCard less necessary than the PC Card was in its day. Many laptop computers do not have an ExpressCard slot.
Availability
An ExpressCard slot was commonly included on high-end laptops from the mid 2000s to the early 2010s:
Hewlett-Packard began shipping systems with ExpressCard in November 2004.
Lenovo integrated the slot into their flagship ThinkPad T43 in May 2005.
Dell also incorporated the slot in its Precision (the 17 in models have ExpressCard slots exclusively, while the 15 in Precisions have both ExpressCard and PCMCIA card slots), Inspiron, Latitude (Latitude D-series have PCMCIA card slots; the D820/D830 have both ExpressCard and PCMCIA card slots; Latitude E-Series 6000 have ExpressCard/54 slots), Studio, Vostro and XPS laptop product lines.
Fujitsu-Siemens began shipping systems with ExpressCard in mid-2005.
Apple Inc. included single ExpressCard/34 slots in every MacBook Pro notebook computer from January 2006 through June 2009. At the June 8, 2009 Apple Worldwide Developers Conference the company announced that the 15-inch and 13-inch MacBook Pro models would replace the ExpressCard slot with a Secure Digital card slot, while retaining the ExpressCard slot on the 17-inch model. In June 2012 Apple discontinued the 17-inch model, and no further MacBooks have offered an ExpressCard slot.
ASUS also replaced the PC Card slot with an ExpressCard slot on many of its new models.
Sony also began shipping systems with ExpressCard in its new VGN-C, VGN-SZ, VGN-NS, VPC and FW laptop product lines.
The Acer Aspire laptop series also had a single ExpressCard/54 slot on most new models.
Panasonic incorporated ExpressCard/54 slots in all the fully rugged and semi-rugged models of its Toughbook brand of laptop computers.
Gateway notebooks (ML3109 and later) also shipped with ExpressCard/54 interfaces.
Because of the lack of backward compatibility, some laptop manufacturers initially released models incorporating both CardBus (PCMCIA, PC Card) and ExpressCard slots. These included certain models of Acer Aspire, Acer Extensa, Toshiba Satellite, Dell Latitude and Precision, MSI S42x and Lenovo ThinkPad Z60m, R52, T60, R61 and T61.
In March 2005, the Personal Computer Memory Card International Association (PCMCIA) showed some of the first ExpressCard products at the CeBIT trade show in Germany. A large number of ExpressCard devices were presented.
In November 2006, Belkin announced that it was launching the first ExpressCard docking station, which uses the PCIe part of an ExpressCard connection to enable 1600×1200 video and the USB part to provide USB, audio and network ports. This points to ExpressCard's ability to support more capable non-OEM docking stations for laptop computers.
In 2007, Sony introduced its Vaio TZ model, which incorporates ExpressCards. Also the Sony Vaio FZ and Vaio Z series have the ExpressCard/34 Slot integrated in them. Sony also uses the ExpressCard/34 form factor for the flash memory modules in its XDCAM EX/SxS based camcorders, making the copying of video data between these cameras and ExpressCard-equipped laptops easier. For this reason, Sony also offers a USB-based SxS reader for desktop computers.
The Toshiba Satellite P and X 200/205 series of laptops and desktop replacements have had an ExpressCard slot since April 2007; the P200 series uses the /54 size rather than /34.
Since PCMCIA disbanded in 2009, laptops from 2010 onward have less commonly included ExpressCard slots, except for some business-oriented models (e.g. some Lenovo models use it to support a smart card reader). For WWAN connectivity cards, either mini-PCIe slots or USB-connected variants have become the preferred connection methods. For external desktop graphics card enclosures and other peripherals that interface with PCI Express, Thunderbolt has supplanted ExpressCard in that role due to its faster speed and ability to use multiple PCIe 2.0 lanes; the first and second Thunderbolt revisions offered 20 Gbit/s of maximum bandwidth with four PCIe 2.0 lanes, while ExpressCard could only muster 5 Gbit/s maximum with one PCIe 2.0 lane.
ExpressCard 2.0
The ExpressCard 2.0 standard was introduced on March 4, 2009, at CeBIT in Hannover. It provides a single PCIe 1.0 2.5 GT/s lane (optionally PCIe 2.0 with 5 GT/s) and a USB 3.0 "SuperSpeed" link with a raw transfer speed of 5 Gbit/s (effective transfer speed up to 400 MB/s). It is forward and backward compatible with earlier ExpressCard modules and slots. USB 3.0 SuperSpeed compatibility is achieved by sharing the pins with the PCIe link. An inserted card signals which mode should be used.
The standard failed to gain widespread use and some Taiwanese manufacturers discontinued it as early as 2011. After the dissolution of the PCMCIA in 2010, the specification, associated documentation and licensing responsibilities were moved to the USB Implementers Forum. The specifications were last revised in 2009, and removed from their website in 2018.
| Technology | Computer hardware | null |
645335 | https://en.wikipedia.org/wiki/Diffusion%20equation | Diffusion equation | The diffusion equation is a parabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). In mathematics, it is related to Markov processes, such as random walks, and applied in many other fields, such as materials science, information theory, and biophysics. The diffusion equation is a special case of the convection–diffusion equation when bulk velocity is zero. It is equivalent to the heat equation under some circumstances.
Statement
The equation is usually written as:

    ∂φ(r, t)/∂t = ∇ · [D(φ, r) ∇φ(r, t)]

where φ(r, t) is the density of the diffusing material at location r and time t, D(φ, r) is the collective diffusion coefficient for density φ at location r, and ∇ represents the vector differential operator del. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear.
The equation above applies when the diffusion coefficient is isotropic; in the case of anisotropic diffusion, D is a symmetric positive definite matrix, and the equation is written (for three-dimensional diffusion) as:

    ∂φ(r, t)/∂t = Σᵢ Σⱼ ∂/∂xᵢ [ Dᵢⱼ(φ, r) ∂φ(r, t)/∂xⱼ ]
The diffusion equation has numerous analytic solutions.
If D is constant, then the equation reduces to the following linear differential equation:

    ∂φ(r, t)/∂t = D ∇²φ(r, t)
which is identical to the heat equation.
Historical origin
The particle diffusion equation was originally derived by Adolf Fick in 1855.
Derivation
The diffusion equation can be trivially derived from the continuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed:

    ∂φ/∂t + ∇ · j = 0
where j is the flux of the diffusing material. The diffusion equation can be obtained easily from this when combined with the phenomenological Fick's first law, which states that the flux of the diffusing material in any part of the system is proportional to the local density gradient:

    j = −D(φ, r) ∇φ(r, t)
If drift must be taken into account, the Fokker–Planck equation provides an appropriate generalization.
Discretization
The diffusion equation is continuous in both space and time. One may discretize space, time, or both, and all of these arise in applications. Discretizing time alone just corresponds to taking time slices of the continuous system, and no new phenomena arise.
In discretizing space alone, the Green's function becomes the discrete Gaussian kernel, rather than the continuous Gaussian kernel. In discretizing both time and space, one obtains the random walk.
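As a concrete illustration of discretizing both space and time, here is a minimal Python sketch of an explicit (FTCS) finite-difference scheme for the one-dimensional equation with constant D; the function name, the fixed-end boundary handling, and the stability check are our own choices:

    import numpy as np

    def diffuse_1d(phi, D, dx, dt, steps):
        """FTCS update: phi[i] += dt * D * (phi[i+1] - 2*phi[i] + phi[i-1]) / dx**2."""
        assert dt <= dx**2 / (2 * D), "time step exceeds the FTCS stability limit"
        phi = phi.astype(float)  # work on a copy
        for _ in range(steps):
            lap = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
            phi[1:-1] += dt * D * lap  # end points held fixed (Dirichlet boundaries)
        return phi

    # An initial spike spreads into a bell-shaped profile, as the equation predicts.
    phi0 = np.zeros(101)
    phi0[50] = 1.0
    print(diffuse_1d(phi0, D=1.0, dx=1.0, dt=0.25, steps=200).max())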
Discretization in image processing
The product rule is used to rewrite the anisotropic tensor diffusion equation in standard discretization schemes, because direct discretization of the diffusion equation with only first-order spatial central differences leads to checkerboard artifacts. The rewritten diffusion equation used in image filtering is:

    ∂φ(r, t)/∂t = ∇·[D(φ, r)] ∇φ(r, t) + tr[ D(φ, r) (∇∇ᵀ φ(r, t)) ]
where "tr" denotes the trace of the 2nd rank tensor, and superscript "T" denotes transpose, in which in image filtering D(ϕ, r) are symmetric matrices constructed from the eigenvectors of the image structure tensors. The spatial derivatives can then be approximated by two first order and a second order central finite differences. The resulting diffusion algorithm can be written as an image convolution with a varying kernel (stencil) of size 3 × 3 in 2D and 3 × 3 × 3 in 3D.
| Physical sciences | Fluid mechanics | Physics |
645586 | https://en.wikipedia.org/wiki/Tweezers | Tweezers | Tweezers are small hand tools used for grasping objects too small to be easily handled with the human fingers. Tweezers are thumb-driven forceps most likely derived from tongs used to grab or hold hot objects since the dawn of recorded history. In a scientific or medical context, they are normally referred to as just "forceps", a name that is used together with other grasping surgical instruments that resemble pliers, pincers and scissors-like clamps.
Tweezers make use of two third-class levers connected at one fixed end (the fulcrum point of each lever), with the pincers at the others. When used, they are commonly held with one hand in a pen grip between the thumb and index finger (sometimes also the middle finger), with the top end resting on the first dorsal interosseous muscle at the webspace between the thumb and index finger. Spring tension holds the grasping ends apart until finger pressure is applied. This provides an extended pinch and allows the user to easily grasp, manipulate and quickly release small or delicate objects with readily variable pressure.
People commonly use tweezers for such tasks as plucking hair from the face or eyebrows, often using the term eyebrow tweezers. Other common uses for tweezers are as a tool to manipulate small objects, including for example small, particularly surface-mount, electronic parts, and small mechanical parts for models and precision mechanisms. Stamp collectors use tweezers (stamp tongs) to handle postage stamps which, while large enough to pick up by hand, could be damaged by handling; the jaws of stamp tongs are smooth. Another example of a specialized use is picking out the flakes of gold in gold panning. Tweezers are also used in kitchens for food presentation to remove bones from fillets of fish in a process known as pin boning, and as tongs to serve pieces of cake to restaurant patrons.
History
Tweezers are known to have been used in predynastic Egypt. There are drawings of Egyptian craftsmen holding hot pots over ovens with a double-bow shaped tool. Asiatic tweezers, consisting of two strips of metal brazed together, were commonly used in Mesopotamia and Ancient India from about 3000 BC, perhaps for purposes such as catching lice. During the Bronze Age, tweezers were manufactured in Kerma.
The word tweezer comes from etwee which describes a small case that people would use to carry small objects (such as toothpicks) with them. Etwee takes its origin from French étui "small case" from the Old French verb estuier, "to hold or keep safe." Over time, the object now known as "tweezers" took on this name because the tool was commonly found in these tiny carrying cases. Eventually, the word "tweeze" was accepted as a verb in the English language.
There is evidence of Roman shipbuilders pulling nails out of construction with plier-type pincers.
Types
Tweezers come in a variety of tip shapes and sizes. Blunt tip tweezers have a rounded end which can be used when a pointed object may get entangled, when manipulating cotton swabs, for example. Flat tip tweezers, pictured at right, have an angled tip which may be used for removing splinters. Some tweezers have a long needle-like tip which may be useful for reaching into small crevices. Triangular tip tweezers have larger, wider tips useful for gripping larger objects. Tweezers with curved tips also exist, sometimes called bent forceps. Microtweezers have an extremely small, pointed tip used for manipulating tiny electronic components and the like.
There are two common forms of construction for tweezers: two fused, angled pieces of metal, or one piece of metal bent in half. The bent tweezer is cheaper to manufacture, but gives weaker grip. The fused tweezer is more expensive, but allows for a stronger grip. The width between the tips of the tweezers when no force is applied also affects how powerful the grip is.
Cross-locking tweezers (aka reverse-action tweezers or self-closing tweezers) work in the opposite way to normal tweezers. Cross-locking tweezers open when squeezed and close when released, gripping the item without any exertion of the user's fingers.
Usage of traditional tweezers
Applications:
typesetting
dealing with stamps (see Philately)
dealing with smaller coins (see Numismatics), to protect the coins these are wrapped at the tips with plastic
electronics
soldering
cosmetics
hair removal (eyebrow tweezers)
nail art (application of gems, stickers etc to fingernails or toenails as part of a manicure or pedicure respectively)
semiconductor technology in the form of wafer tweezers
medicine (Forceps and Tissue Forceps)
household
jewelry making
textile industry as iron nubs
science, laboratory
aquascaping
watchmaking
Other kinds of tweezers
The original tweezers for mechanical gripping have given rise to a number of tools with similar action or purpose but not dependent upon mechanical pressure, including
Optical tweezers use light to manipulate microscopic objects as small as a single atom. The momentum transfer from a focused laser beam is able to trap small particles. In the biological sciences, these instruments have been used to apply forces in the piconewton range and to measure displacements in the nanometre range of objects ranging in size from 10 nm to over 100 μm.
Magnetic tweezers use magnetic forces to manipulate single molecules (such as DNA) via paramagnetic interactions. In practice it is an array of magnetic traps designed for manipulating individual biomolecules and measuring the ultra-small forces that affect their behavior.
Electric tweezers deliver an electrical signal through the tip, intended for depilation by damaging hair roots to prevent new hair from growing from the same root.
Electrostatic tweezers use electrostatic voltage to induce the redistribution of charges in targeted objects, therefore generating Coulomb attraction force between tweezers and manipulated objects. Electrostatic tweezers work in a trapping mode or guiding mode.
Vacuum tweezers use differences in atmospheric pressure to grasp items from 100 micrometres in size up to parts weighing several pounds. Special vacuum tweezer tips are manufactured to handle a wide variety of items such as surface-mount electronics, optics, biological material, stamps and coins. They may be used to handle parts that are so small that conventional mechanical tweezers may cause parts to be damaged or dropped and lost.
Acoustic tweezers use sound to manipulate particles or cells in a fluid. The Gor'kov potential theory is most often used for particles that are small compared with the incident wavelength.
Molecular tweezers are noncyclic host molecules that have two arms capable of binding guest molecules through non-covalent bonding.
Hot, or soldering, tweezers combine the squeezing action of mechanical tweezers with heating, to grip small surface-mount electronic devices while simultaneously heating them, for soldering or desoldering.
Tweezer probes are a pair of electrical test probes fixed to a tweezer mechanism to measure voltages or other electronic circuit parameters between closely spaced pins.
Tweezers integrated with an electronic measuring device allow evaluation of the electrical parameters of small electronic components.
Carbon nano-tweezers have been fabricated by deposition of MWNT bundles on isolated electrodes deposited on tempered glass micropipettes. These nanotube bundles can be mechanically manipulated by electricity and can be used to manipulate and transfer micro- and nano-structures. The nanotube bundles used for tweezers are about 50 nm in diameter and 2 μm in length. Under electric bias, two closely spaced sets of bundles are attracted to each other and can be used as nanoscale tweezers.
Other uses of the same principle are named tweezers; although such terms are not necessarily widely used their meaning is clear to people in the relevant field. E.g., Raman tweezers, which combine Raman spectroscopy with optical tweezers.
| Technology | Surgical instruments | null |
645624 | https://en.wikipedia.org/wiki/Sphagnum | Sphagnum | Sphagnum is a genus of approximately 380 accepted species of mosses, commonly known as sphagnum moss, also bog moss and quacker moss (although that term is also sometimes used for peat). Accumulations of Sphagnum can store water, since both living and dead plants can hold large quantities of water inside their cells; plants may hold 16 to 26 times as much water as their dry weight, depending on the species. The empty cells help retain water in drier conditions.
As Sphagnum moss grows, it can slowly spread into drier conditions, forming larger mires, both raised bogs and blanket bogs. Thus, Sphagnum can influence the composition of such habitats, with some describing Sphagnum as 'habitat manipulators' or 'autogenic ecosystem engineers'. These peat accumulations then provide habitat for a wide array of peatland plants, including sedges and ericaceous shrubs, as well as orchids and carnivorous plants.
Sphagnum and the peat formed from it do not decay readily because of the phenolic compounds embedded in the moss's cell walls. In addition, bogs, like all wetlands, develop anaerobic soil conditions, which produces slower anaerobic decay rather than aerobic microbial action. Peat moss can also acidify its surroundings by taking up cations, such as calcium and magnesium, and releasing hydrogen ions.
Under the right conditions, peat can accumulate to a depth of many meters. Different species of Sphagnum have different tolerance limits for flooding and pH, and any one peatland may have a number of different Sphagnum species.
Description
An individual Sphagnum plant consists of a main stem, with tightly arranged clusters of branch fascicles usually consisting of two or three spreading branches and two to four hanging branches. The top of the plant (capitulum) has compact clusters of young branches that give the plant its characteristic tuft-like appearance. Along the stem are scattered leaves of various shapes, named stem leaves; the shape varies according to species.
Sphagnum has a distinctive cellular structure. The stem portion consists of two important sections: the pith, which is the site of food production and storage, and the cortical layer, which serves to absorb water and protect the pith. Mosses have no vascular system to move water and nutrients around the plant, so tissues are thin and usually one cell thick to allow substances to diffuse easily. Sphagnum mosses have two distinct cell types. There are small, green, living cells with chlorophyll (chlorophyllose cells) that produce food for the plant. Additionally, there are larger hyaline or retort cells that are barrel-shaped and have a pore at one end to allow for water absorption and improved water-holding capacity. These unique cells help Sphagnum to retain water during prolonged UV exposure.
Lifecycle
Sphagnum, like all other land plants, has an alternation of generations; like other bryophytes, the haploid gametophyte generation is dominant and persistent. Unlike other mosses, the long-lived gametophytes do not rely upon rhizoids to assist in water uptake.
Sphagnum species can be unisexual (male or female, dioecious) or bisexual (male and female gametes produced from the same plant; monoecious). In North America, 80% of Sphagnum species are unisexual.
Gametophytes have substantial asexual reproduction by fragmentation, producing much of the living material in sphagnum peatlands.
Swimming sperm fertilize eggs contained in archegonia that remain attached to the female gametophyte. The sporophyte is relatively short-lived, and consists almost entirely of a shiny green, spherical spore capsule that becomes black with spores. Sporophytes are raised on stalks to facilitate spore dispersal, but unlike other mosses, Sphagnum stalks are produced by the maternal gametophyte. Tetrahedral haploid spores are produced in the sporophyte by meiosis, which are then dispersed when the capsule explosively discharges its cap, called an operculum, and shoots the spores some distance. The spores germinate to produce minute protonemae, which start as filaments, can become thalloid, and can produce a few rhizoids. Soon afterwards, the protonema develops buds and these differentiate into its characteristic, erect, leafy, branched gametophyte with chlorophyllose cells and hyaline cells.
Carpets of living Sphagnum may be attacked by various fungi, and one fungus that is also a mushroom, Sphagnurus paluster, produces conspicuous dead patches. When this fungus and other agarics attack the protonema, Sphagnum is induced to produce nonphotosynthetic gemmae that can survive the fungal attack and months later germinate to produce new protonema and leafy gametophytes.
Spore dispersal
As with many other mosses, Sphagnum species disperse spores through the wind. The tops of the spore capsules are only about 1 cm (0.4 in) above ground, where wind is weak. As the spherical spore capsule dries, the operculum is forced off, followed by a cloud of spores. The exact mechanism has traditionally been attributed to a "pop gun" effect using air compressed in the capsule, but alternative mechanisms have recently been proposed. High-speed photography has shown that vortex rings are created during the discharge, which enable the spores to reach a greater height than would be expected by ballistics alone. The acceleration of the spores is about 36,000g. Spores are extremely important in the establishment of new populations in disturbed habitats and on islands.
Human activities like slash-and-burn and cattle grazing are believed to promote the growth and expansion of Sphagnum moss. Oceanic islands such as the Faroe Islands, the Galápagos or the Azores have recorded a significant increase in their Sphagnum populations after human settlement.
Taxonomy
Peat moss can be distinguished from other moss species by its unique branch clusters. The plant and stem color, the shape of the branch and stem leaves, and the shape of the green cells are all characteristics used to identify peat moss to species. Sphagnum taxonomy has been very contentious since the early 1900s; most species require microscopic dissection to be identified. In the field, most Sphagnum species can be identified to one of four major sections of the genus—classification and descriptions follow Andrus 2007 (Flora North America):
Sphagnum sect. Acutifolia plants generally form hummocks above the water line, usually colored orange or red. Examples: Sphagnum fuscum and S. warnstorfii.
Sphagnum sect. Cuspidata plants are usually found in hollows, lawns, or are aquatic, and are green. Examples: Sphagnum cuspidatum and S. flexuosum.
Sphagnum sect. Sphagnum plants have the largest gametophytes among the sections, forming large hummocks; their leaves form cucullate (hood-shaped) apices and are green, except for S. magellanicum. Example: Sphagnum austinii.
Sphagnum sect. Subsecunda plants vary in color from green to yellow and orange (but never red), and are found in hollows, lawns, or are aquatic. Species always with unisexual gametophytes. Examples: Sphagnum lescurii and Sphagnum pylaesii.
The reciprocal monophyly of these sections and two other minor ones (Rigida and Squarrosa) has been clarified using molecular phylogenetics. All but two species normally identified as Sphagnum reside in one clade; two other species have recently been separated into new families within the Sphagnales, reflecting an ancestral relationship with the Tasmanian endemic Ambuchanania and a long phylogenetic distance to the rest of Sphagnum. Within the main clade of Sphagnum, phylogenetic distance is relatively short, and molecular dating methods suggest nearly all current Sphagnum species are descended from a radiation that occurred just 14 million years ago.
Distribution
Sphagnum mosses occur mainly in the Northern Hemisphere in peat bogs, conifer forests, and moist tundra areas. Their northernmost populations lie in the archipelago of Svalbard, Arctic Norway, at 81° N.
In the Southern Hemisphere, the largest peat areas are in southern Chile and Argentina, part of the vast Magellanic moorland (circa 44,000 square km; 17,000 sq. mi.). Peat areas are also found in New Zealand and Tasmania. In the Southern Hemisphere, however, peat landscapes may contain many moss species other than Sphagnum. Sphagnum species are also reported from "dripping rocks" in mountainous, subtropical Brazil.
Conservation
Several of the world's largest wetlands are sphagnum-dominated bogs, including the West Siberian Lowland, the Hudson Bay Lowland and the Mackenzie River Valley. These areas provide habitat for common and rare species. They also store large amounts of carbon, which helps reduce global warming.
According to an article written in 2013, the U.S. got up to 80% of the sphagnum peat moss it uses from Canada. At that time, in Canada, the peat bog mass harvested each year was roughly 1/60th of the peat mass that annually accumulated. About 0.02% of Canadian peat bogs are used for peat moss mining. Some efforts are being made to restore peat bogs after peat mining, and some debate exists as to whether the peat bogs can be restored to their premining condition and how long the process takes. "The North American Wetlands Conservation Council estimates that harvested peatlands can be restored to 'ecologically balanced systems' within five to 20 years after peat harvesting." Some wetlands scientists assert that "a managed bog bears little resemblance to a natural one. Like tree farms, these peatlands tend toward monoculture, lacking the biodiversity of an unharvested bog."
PittMoss, a peat moss alternative made from recycled newspaper, has emerged as a sustainable substitute in growing media. Coir has also been touted as a sustainable alternative to peat moss in growing media. Another peat moss alternative is manufactured in California from sustainably harvested redwood fiber. Semi-open cell polyurethane materials available in flaked and sheet stock are also finding application as sphagnum replacements with typical usage in green wall and roof garden substrates.
Chile
In the 2010s, Sphagnum peat in Chile began to be harvested at a large scale for export to countries such as Japan, South Korea, Taiwan and the United States. Sphagnum's ability to absorb excess water and release it during dry months means that overexploitation may threaten the water supply in the fjords and channels of Chile. Extraction of Sphagnum in Chile has been regulated by law since 2 August 2018. Between 2018 and 2024, Chilean law allowed only the manual extraction of Sphagnum, using pitchforks or similar tools as an aid. In a given designated harvesting area (polygon), at least 30% of Sphagnum coverage had to be left unharvested. Harvested Sphagnum fibers were not allowed to exceed a set length, and the Sphagnum remaining after harvest could not be left shorter than a minimum length above the water table. In the regions of Los Ríos (40°S) and Los Lagos (41–43°S) the same plots could be harvested again after 12 years, while further south in Aysén (44–48°S) and Magallanes (49–56°S) 85 years had to pass before the same area could be harvested again. According to a 2024 law, harvesting of Sphagnum can only be done with land-management plans approved by the Servicio Agrícola y Ganadero. Some environmental organisations expressed regret, as the original law project presented in 2018 sought to establish a definitive ban on the harvest. Along the Rubens River in Magallanes Region there are some historically important harvesting fields of peat in Sphagnum peatlands. Sphagnum peatlands in Chile disturbed by peat extraction have been found to host various invasive plant species, including Rumex acetosella, Carex canescens, Holcus lanatus and Hieracium pilosella. Harvesting of peat in Sphagnum mosses or anywhere else has been forbidden in Chile since April 2024.
Harvesting aside, bogs where Sphagnum grows have also come under threat from the development of wind farms in cool humid areas such as the Cordillera del Piuchén, where the San Pedro Wind Farm was constructed in the 2010s. The construction of each wind turbine usually involves removing vegetation and disturbing the soil, thereby also altering the local hydrology.
Europe
Europe has a long history of the exploitation of peatlands. The Netherlands, for example, once had large areas of peatland, both fen and bog. Between 100 AD and the present, they were drained and converted to agricultural land. The English broadlands have small lakes that originated as peat mines.
More than 90% of the bogs in England have been damaged or destroyed. A handful of bogs has been preserved through government buyouts of peat-mining interests. Over longer time scales, however, some parts of England, Ireland, Scotland, and Wales have seen expansion of bogs, particularly blanket bogs, in response to deforestation and abandonment of agricultural land.
New Zealand
New Zealand has, like other parts of the world, lost large areas of peatland. The latest estimates for wetland loss in New Zealand are 90% over 150 years. In some cases, better care is taken during the harvesting of Sphagnum to ensure enough moss is remaining to allow regrowth. An 8-year cycle is suggested, but some sites require a longer cycle of 11 to 32 years for full recovery of biomass, depending on factors including whether reseeding is done, the light intensity, and the water table. This "farming" is based on a sustainable management program approved by New Zealand's Department of Conservation; it ensures the regeneration of the moss, while protecting the wildlife and the environment. Most harvesting in New Zealand swamps is done only using pitchforks without the use of heavy machinery. During transportation, helicopters are commonly employed to transfer the newly harvested moss from the swamp to the nearest road.
Uses
Decayed, dried sphagnum moss is known as peat or peat moss. This is used as a soil conditioner which increases the soil's capacity to hold water and nutrients by increasing capillary forces and cation exchange capacity – uses that are particularly useful in gardening. This is often desired when dealing with very sandy soil, or plants that need increased or steady moisture content to flourish. A distinction is sometimes made between sphagnum moss, the live moss growing on top of a peat bog, and 'sphagnum peat moss' (North American usage) or 'sphagnum peat' (British usage), the latter being the slowly decaying matter underneath.
Dried sphagnum moss is used in northern Arctic regions as an insulating material.
Anaerobic acidic sphagnum bogs have low rates of decay, and hence preserve plant fragments and pollen to allow reconstruction of past environments. They even preserve human bodies for millennia; examples of these preserved specimens are Tollund Man, Haraldskær Woman, Clonycavan Man and Lindow Man. Such bogs can also preserve human hair and clothing, one of the most noteworthy examples being Egtved Girl, Denmark. Because of the acidity of peat, however, bones are dissolved rather than preserved. These bogs have also been used to preserve food. Up to 2000-year-old containers of butter or lard have been found.
Sphagnum moss has been used for centuries as a dressing for wounds, including through World War I. Botanist John William Hotson's paper, Sphagnum as a surgical dressing, published in Science in 1918, was instrumental in the acceptance of Sphagnum moss use as a medical dressing in place of cotton. Preparations using Sphagnum such as Sphagnol soap have been used for various skin conditions including acne, ringworm, and eczema. The soap was used by the British Red Cross during both World Wars to treat facial wounds and trench sores.
Since it is absorptive and extremely acidic, it inhibits growth of bacteria and fungi, so it is used for shipping seeds and live plants.
Peat moss is used to dispose of the clarified liquid output (effluent) from septic tanks in areas that lack the proper conditions for ordinary disposal means. It is also used as an environmentally friendly alternative to chlorine in swimming pool sanitation. The moss inhibits the growth of microbes and reduces the need for chlorine in swimming pools.
In Finland, peat mosses have been used to make bread during famines.
In China, Japan and Korea, long strand dried sphagnum moss is traditionally used as a potting medium for cultivating Vanda falcata orchids.
| Biology and health sciences | Bryophytes | null |
645676 | https://en.wikipedia.org/wiki/Flow%20network | Flow network | In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or sink, which has only incoming flow. A network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes.
Definition
A network is a directed graph G = (V, E) with a non-negative capacity function c(u, v) for each edge (u, v) ∈ E, and without multiple arcs (i.e. edges with the same source and target nodes). Without loss of generality, we may assume that if (u, v) ∈ E, then (v, u) is also a member of E. Additionally, if (v, u) ∉ E then we may add (v, u) to E and set c(v, u) = 0.
If two nodes in G are distinguished – one as the source s and the other as the sink t – then (G, c, s, t) is called a flow network.
Flows
Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such as what is the maximum number of units that can be transferred from the source node s to the sink node t? The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other.
The excess function x_f : V → ℝ represents the net flow entering a given node u (i.e. the sum of the flows entering u) and is defined by x_f(u) = Σ_{w ∈ V} f(w, u). A node u is said to be active if x_f(u) > 0 (i.e. the node consumes flow), deficient if x_f(u) < 0 (i.e. the node produces flow), or conserving if x_f(u) = 0. In flow networks, the source s is deficient, and the sink t is active.
Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions.
A pseudo-flow is a function f : V × V → ℝ on the edges of the network that satisfies the following two constraints for all nodes u and v:
Skew symmetry constraint: The flow on an arc from u to v is the negation of the flow on the arc from v to u, that is: f(u, v) = −f(v, u). The sign of the flow indicates the flow's direction.
Capacity constraint: An arc's flow cannot exceed its capacity, that is: f(u, v) ≤ c(u, v).
A pre-flow is a pseudo-flow that, for all v ∈ V ∖ {s}, satisfies the additional constraint:
Non-deficient flows: The net flow entering the node v is non-negative, except for the source, which "produces" flow. That is: x_f(v) ≥ 0 for all v ∈ V ∖ {s}.
A feasible flow, or just a flow, is a pseudo-flow that, for all v ∈ V ∖ {s, t}, satisfies the additional constraint:
Flow conservation constraint: The total net flow entering a node v is zero for all nodes in the network except the source s and the sink t, that is: x_f(v) = 0 for all v ∈ V ∖ {s, t}. In other words, for all nodes in the network except the source s and the sink t, the total sum of the incoming flow of a node is equal to its outgoing flow (i.e. Σ_{(u, v) ∈ E} f(u, v) = Σ_{(v, w) ∈ E} f(v, w) for each vertex v ∈ V ∖ {s, t}).
The value of a feasible flow f for a network is the net flow into the sink t of the flow network, that is: |f| = x_f(t). Note that the flow value in a network is also equal to the total outgoing flow of the source s, that is: |f| = −x_f(s). Also, if we define A as a set of nodes in V such that s ∈ A and t ∉ A, the flow value is equal to the total net flow going out of A (i.e. |f| = Σ_{(a, b) ∈ E, a ∈ A, b ∉ A} f(a, b)). The flow value in a network is the total amount of flow from s to t.
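A minimal Python sketch of the excess function and the flow value, representing a skew-symmetric flow as a dictionary keyed by node pairs (this representation is an illustrative assumption):

    def excess(f, u, nodes):
        """x_f(u): sum of flows entering u; skew symmetry means f[(w, u)] = -f[(u, w)]."""
        return sum(f.get((w, u), 0) for w in nodes)

    nodes = {'s', 'a', 't'}
    f = {('s', 'a'): 2, ('a', 's'): -2, ('a', 't'): 2, ('t', 'a'): -2}
    print(excess(f, 'a', nodes))  # 0  -> conserving (flow conservation holds)
    print(excess(f, 't', nodes))  # 2  -> the value |f| of the flow
    print(excess(f, 's', nodes))  # -2 -> deficient, as a source should be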
Concepts useful to flow problems
Flow decomposition
Flow decomposition is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that the flow on each edge equals the sum of the quantities of all paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters.
Adding arcs and flows
We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc:
Given any two nodes u and v, having two arcs from u to v with capacities c_1(u, v) and c_2(u, v) respectively is equivalent to considering only a single arc from u to v with a capacity equal to c_1(u, v) + c_2(u, v).
Given any two nodes u and v, having two arcs from u to v with pseudo-flows f_1(u, v) and f_2(u, v) respectively is equivalent to considering only a single arc from u to v with a pseudo-flow equal to f_1(u, v) + f_2(u, v).
Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. Adding flow to an arc is the same as adding an arc with the capacity of zero.
Residuals
The residual capacity of an arc (u, v) with respect to a pseudo-flow f is denoted c_f(u, v), and it is the difference between the arc's capacity and its flow. That is, c_f(u, v) = c(u, v) − f(u, v). From this we can construct a residual network, denoted G_f(V, E_f), with a capacity function c_f which models the amount of available capacity on the set of arcs in G = (V, E). More specifically, the capacity function c_f(u, v) of each arc (u, v) in the residual network represents the amount of flow which can be transferred from u to v given the current state of the flow within the network.
This concept is used in the Ford–Fulkerson algorithm, which computes the maximum flow in a flow network.
Note that there can be an unsaturated path (a path with available capacity) from u to v in the residual network, even though there is no such path from u to v in the original network. Since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v.
Augmenting paths
An augmenting path is a path (u_1, u_2, ..., u_k) in the residual network, where u_1 = s, u_k = t, and c_f(u_i, u_{i+1}) > 0 for all i. More simply, an augmenting path is an available flow path from the source to the sink. A network is at maximum flow if and only if there is no augmenting path in the residual network G_f.
The bottleneck is the minimum residual capacity of all the edges in a given augmenting path. See example explained in the "Example" section of this article. The flow network is at maximum flow if and only if it has a bottleneck with a value equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0. In other words, if there is a bottleneck value greater than 0, then there is an augmenting path from the source to the sink. However, we know that if there is any augmenting path, then the network is not at maximum flow, which in turn means that, if there is a bottleneck value greater than 0, then the network is not at maximum flow.
The term "augmenting the flow" for an augmenting path means updating the flow of each arc in this augmenting path to equal the capacity of the bottleneck. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck.
Multiple sources and/or sinks
Sometimes, when modeling a network with more than one source, a supersource is introduced to the graph. This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink.
Example
In Figure 1 you see a flow network with source labeled s, sink t, and four additional nodes. The flow and capacity are denoted f/c. Notice how the network upholds the capacity constraint and the flow conservation constraint. The total amount of flow from s to t is 5, which can be easily seen from the fact that the total outgoing flow from s is 5, which is also the incoming flow to t. By the skew symmetry constraint, whenever the flow on an arc is 2, the flow on the reverse arc is −2.
In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges of the residual network where the original capacity is zero in Figure 1. This network is not at maximum flow: there is available capacity along several paths from s to t, which are then the augmenting paths.
The bottleneck of each such augmenting path is the minimum residual capacity among its edges.
Applications
Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet.
Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent to Kirchhoff's current law.
Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time.
Classifying flow problems
The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. Maximum flow problems can be solved in polynomial time with various algorithms (see table). The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another.
In a multi-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through the same transportation network.
In a minimum cost flow problem, each edge has a given cost , and the cost of sending the flow across the edge is . The objective is to send a given amount of flow from the source to the sink, at the lowest possible price.
In a circulation problem, you have a lower bound ℓ(u, v) on the edges, in addition to the upper bound c(u, v). Each edge also has a cost. Often, flow conservation holds for all nodes in a circulation problem, and there is a connection from the sink t back to the source s. In this way, you can dictate the total flow with ℓ(t, s) and c(t, s). The flow circulates through the network, hence the name of the problem.
In a network with gains or generalized network each edge has a gain, a real number (not zero) such that, if the edge has gain g, and an amount x flows into the edge at its tail, then an amount gx flows out at the head.
In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks.
| Mathematics | Graph theory | null |
14878257 | https://en.wikipedia.org/wiki/Ruby%20%28color%29 | Ruby (color) | Ruby is a color that is a representation of the color of the cut and polished ruby gemstone and is a shade of red or pink.
Origins
The first recorded use of ruby as a color name in English was in 1572.
Variations
Rubine red
Displayed at right is the Pantone color rubine red.
Ruber
The color ruber is displayed to the right.
Medium ruby
Medium ruby is the color called ruby in Crayola Gem Tones, a specialty set of crayons introduced by the Crayola company in 1994.
Ruby red
Displayed at right is the color ruby red.
This is one of the colors in the RAL color matching system, a color system widely used in Europe. The RAL color list originated in 1927, and it reached its present form in 1961.
Big dip o'ruby
Displayed at right is the color big dip o'ruby.
Big dip o'ruby is one of the colors in the special set of metallic Crayola crayons called Metallic FX, the colors of which were formulated by Crayola in 2001.
This is supposed to be a metallic color. However, there is no mechanism for displaying metallic colors on a flat computer screen.
Antique ruby
At right is displayed the color antique ruby.
The first recorded use of antique ruby as a color name in English was in 1926.
The color antique ruby is a dark tone of ruby.
Deep ruby
Displayed at right is a deep tone of ruby that is called ruby in the British Standards 381 color list. This color is #542 on the 381 color list. The 381 color list is for colors used in identification, coding, and other special purposes. The British Standard color lists were first formulated in 1930 and reached their present form in 1955.
In nature
The ruby-throated hummingbird (Archilochus colubris) is a small hummingbird. It is the only species of hummingbird that regularly nests east of the Mississippi River in North America.
The ruby seadragon (Phyllopteryx dewysea) is a marine fish in the family Syngnathidae, which also includes seahorses. It inhabits the coast of Western Australia. The species was first described in 2015, making it only the third known species of seadragon, and the first to be discovered in 150 years.
The ruby snapper (Etelis carbunculis) is a species of fish that lives in Australia.
Infrared light in the portion of the spectrum where it is still visible to humans (out to approximately 1050 nanometers) appears ruby red. Starting at about 660 nm in the visible red, a monochromatic source such as an LED or laser begins to look very slightly purplish, gradually becoming more so as the wavelength increases. By about 900 nm, the color is more purple than red, similar to some of the color samples on this page.
| Physical sciences | Colors | Physics |
14878724 | https://en.wikipedia.org/wiki/Oral%20hygiene | Oral hygiene | Oral hygiene is the practice of keeping one's oral cavity clean and free of disease and other problems (e.g. bad breath) by regular brushing of the teeth (dental hygiene) and adopting good hygiene habits. It is important that oral hygiene be carried out on a regular basis to enable prevention of dental disease and bad breath. The most common types of dental disease are tooth decay (cavities, dental caries) and gum diseases, including gingivitis, and periodontitis.
General guidelines for adults suggest brushing at least twice a day with a fluoridated toothpaste: brushing before going to sleep at night and after breakfast in the morning. Cleaning between the teeth is called interdental cleaning and is as important as tooth brushing. This is because a toothbrush cannot reach between the teeth and therefore only removes about 50% of plaque from the surface of the teeth. There are many tools available for interdental cleaning which include floss, tape and interdental brushes; it is up to each individual to choose which tool they prefer to use.
Sometimes white or straight teeth are associated with oral hygiene. However, a hygienic mouth can have stained teeth or crooked teeth. To improve the appearance of their teeth, people may use tooth whitening treatments and orthodontics.
The importance of the role of the oral microbiome in dental health has been increasingly recognized. Data from human oral microbiology research shows that a commensal microflora can switch to an opportunistic pathogenic flora through complex changes in their environment. These changes are driven by the host rather than the bacteria. Archeological evidence of calcified dental plaque shows marked shifts in the oral microbiome towards a disease-associated microbiome, with cariogenic bacteria becoming dominant during the Industrial Revolution. Streptococcus mutans is the most important bacterium in causing caries. Modern oral microbiota are significantly less diverse than those of historic populations. Caries (cavities), for example, have become a major endemic disease, affecting 60–90% of schoolchildren in industrialized countries. In contrast, dental caries and periodontal diseases were rare in pre-Neolithic humans and early hominins.
Tooth cleaning and decay
Tooth decay is the most common global disease. Over 80% of cavities occur inside fissures in teeth, where brushing cannot reach food left trapped after eating and where saliva and fluoride have no access to neutralize acid and remineralize demineralized teeth, unlike the easy-to-clean parts of the tooth, where fewer cavities occur.
Teeth cleaning is the removal of dental plaque and tartar from teeth to prevent cavities, gingivitis, gum disease, and tooth decay. Severe gum disease causes at least one-third of adult tooth loss.
Since before recorded history, a variety of oral hygiene measures have been used for teeth cleaning. This has been verified by various excavations done throughout the world, in which chew sticks, tree twigs, bird feathers, animal bones and porcupine quills have been found. In historic times, different forms of tooth cleaning tools have been used. Indian medicine (Ayurveda) has used the neem tree, or daatun, and its products to create teeth cleaning twigs and similar products; a person chews one end of the neem twig until it somewhat resembles the bristles of a toothbrush, and then uses it to brush the teeth. In the Muslim world, the miswak, or siwak, made from a twig or root, has antiseptic properties and has been widely used since the Islamic Golden Age. Rubbing baking soda or chalk against the teeth was also common; however, this can increase gum and tooth sensitivity.
The Australian Healthcare and Hospital Association's (AHHA) most recent evidence brief suggests that dental check-ups should be conducted once every three years for adults, and once every two years for children. It has been documented that dental professionals frequently advise more frequent visits, but this advice is contradicted by evidence suggesting that check-up frequency should be based on individual risk factors, or on the AHHA's check-up schedule. In the UK, it is common practice to invite people for check-ups every 6 months; however, recent research has shown that this is not necessary for people who have a low risk of oral disease. Professional cleaning includes tooth scaling, tooth polishing, and, if tartar has accumulated, debridement; this is usually followed by a fluoride treatment. However, the American Dental Hygienists' Association (ADHA) stated in 1998 that there is no evidence that scaling and polishing only above the gums provides therapeutic value, and that cleaning should be done under the gums as well. The Cochrane Oral Health Group found only three studies meeting the criteria for inclusion in their study and found little evidence in them to support claims of benefits from supragingival (above the gum) tooth scaling or tooth polishing.
Dental sealants, which are applied by dentists, cover and protect fissures and grooves in the chewing surfaces of back teeth, preventing food from becoming trapped and thereby halt the decay process. An elastomer strip has been shown to force sealant deeper inside opposing chewing surfaces and can also force fluoride toothpaste inside chewing surfaces to aid in remineralising demineralised teeth.
Between cleanings by a dental hygienist, good oral hygiene is essential for preventing tartar build-up which causes the problems mentioned above. This is done through careful, frequent brushing with a toothbrush, combined with the use of dental floss or interdental brushes to prevent accumulation of plaque on the teeth. Powered toothbrushes reduce dental plaque and gingivitis more than manual toothbrushing in both short and long term. Further evidence is needed to determine the clinical importance of these findings.
Patients need to be aware of the importance of brushing and flossing their teeth daily. New parents need to be educated to promote healthy habits in their children.
Sources of problems
Plaque
Dental plaque, also known as dental biofilm, is a sticky, yellow film consisting of a wide range of bacteria that attaches to the tooth surfaces and can be visible around the gum line. It starts to reappear after the tooth surface has been cleaned, which is why regular brushing is encouraged. A high-sugar diet encourages the formation of plaque. Sugar (fermentable carbohydrates) is converted into acid by the plaque. The acid then causes the breakdown of the adjacent tooth, eventually leading to tooth decay.
If plaque is left on a subgingival (under the gum) surface undisturbed, not only is there an increased risk of tooth decay, but it will also go on to irritate the gums and make them appear red and swollen. Some bleeding may be noticed during tooth brushing or flossing. These are the signs of inflammation that indicate poor gum health (gingivitis).
Calculus
Dental calculus is composed of calcium phosphate minerals and live microorganisms, covered by an unmineralized layer. The longer that plaque stays on the tooth surface, the harder and more attached to the tooth it becomes. That is when it is referred to as calculus, and it needs to be removed by a dental professional. If this is not treated, the inflammation will lead to bone loss and will eventually lead to the affected teeth becoming loose.
Preventive care
Tooth brushing
Routine tooth brushing is the principal method of preventing many oral diseases, and perhaps the most important activity an individual can practice to reduce plaque buildup. Controlling plaque reduces the individual's risk of plaque-associated diseases such as gingivitis, periodontitis, and caries – the three most common oral diseases. The average brushing time for individuals is between 30 seconds and just over 60 seconds. Many oral health care professionals agree that tooth brushing should be done for a minimum of two minutes, and be practiced at least twice a day. Brushing for at least two minutes per session is optimal for preventing the most common oral diseases, and removes considerably more plaque than brushing for only 45 seconds.
Toothbrushing can only clean to a depth of about 1.5 mm inside the gingival pockets, but a sustained regime of plaque removal above the gum line can affect the ecology of the microbes below the gums and may reduce the number of pathogens in pockets up to 5 mm in depth.
Toothpaste (dentifrice) with fluoride, or alternatives such as nano-hydroxyapatite, is an important tool to readily use when tooth brushing. The fluoride (or alternative) in the dentifrice is an important protective factor against caries, and an important supplement needed to remineralize already affected enamel. Currently, there is insufficient evidence to evaluate the caries inhibiting characteristics of slow release fluoride glass beads. However, in terms of preventing gum disease, the use of toothpaste does not increase the effectiveness of the activity with respect to the amount of plaque removed.
Population studies have shown that regular tooth brushing is associated with a reduced risk of cardiovascular diseases and a better blood pressure profile.
Manual toothbrush
The modern manual tooth brush is a dental tool which consists of a head of nylon bristles attached to a long handle to help facilitate the manual action of tooth brushing. Furthermore, the handle aids in reaching as far back as teeth erupt in the oral cavity. The tooth brush is arguably a person's best tool for removing dental plaque from teeth, thus capable of preventing all plaque-related diseases if used routinely, correctly and effectively. Oral health professionals recommend the use of a tooth brush with a small head and soft bristles as they are most effective in removing plaque without damaging the gums.
Technique is crucial to the effectiveness of tooth brushing and disease prevention. Back-and-forth brushing is not effective in removing plaque at the gum line. Tooth brushing should employ a systematic approach: angle the bristles at 45 degrees towards the gums and make small circular motions at that angle. This action increases the effectiveness of the technique in removing plaque at the gum line.
Electric toothbrush
Electric toothbrushes are toothbrushes with moving or vibrating bristle heads. The two main types of electric toothbrushes are the sonic type which has a vibrating head, and the oscillating-rotating type in which the bristle head makes constant clockwise and anti-clockwise movements. Electric toothbrushes are more expensive than manual toothbrushes and more damaging to the environment.
Sonic or ultrasonic toothbrushes vibrate at a high frequency with a small amplitude, producing a turbulent fluid activity that aids in plaque removal. The rotating type might reduce plaque and gingivitis compared to manual brushing, though it is currently uncertain whether this is of clinical significance. The movements of the bristles and their vibrations help break up chains of bacteria up to 5 mm below the gum line. The oscillating-rotating electric toothbrush, on the other hand, uses the same mechanical action as produced by manual tooth brushing – removing plaque via mechanical disturbance of the biofilm – but at a higher frequency.
Using electric tooth brushes is less complex in regards to brushing technique, making it a viable option for children, and adults with limited dexterity. The bristle head should be guided from tooth to tooth slowly, following the contour of the gums and crowns of the tooth. The motion of the toothbrush head removes the need to manually oscillate the brush or make circles.
Flossing
Tooth brushing alone will not remove plaque from all surfaces of the tooth as 40% of the surfaces are interdental. One technique that can be used to access these areas is dental floss. When the proper technique is used, flossing can remove plaque and food particles from between the teeth and below the gums. The American Dental Association (ADA) reports that up to 80% of plaque may be removed by this method. The ADA recommends cleaning between the teeth as part of one's daily oral hygiene regime.
Types of floss include:
Unwaxed floss: Unbound nylon filaments that spread across the tooth. Plaque and debris get trapped for easy removal.
Waxed floss: less susceptible to tearing or shredding when used between tight contacts or areas with overhanging restorations.
Polytetrafluoroethylene (Teflon): Slides easily through tight contacts and does not fray.
The type of floss used is a personal preference; however, without proper technique it may not be effective.
The correct technique to ensure maximum plaque removal is as follows:
Floss length: 15–25 cm wrapped around middle fingers.
For upper teeth grasp the floss with thumb and index finger, for lower teeth with both index fingers. Ensure that a length of roughly an inch is left between the fingers.
Ease the floss gently between the teeth using a back and forth motion.
Position the floss in such a way that it becomes securely wrapped around the interdental surface of the tooth in a C shape.
Ensure that the floss is taken below the gum margins using a back and forth up and down motion.
There are a few different options on the market that can make flossing easier if dexterity or coordination is a barrier, or as a preference over normal floss. Floss threaders are ideal for cleaning between orthodontic appliances, and flossettes are ideal for children and those with poor dexterity. Special flossettes are made for those with orthodontics.
Interdental brushes
Interdental brushes come in a range of color-coded sizes. They consist of a handle with a piece of wire covered in tapered bristles, designed to be placed into the interdental space for plaque removal. Studies indicate that interdental brushes are equally or more effective than floss when removing plaque and reducing gum inflammation. They are especially recommended to people with orthodontics, often to use as well as floss.
The steps in using an interdental brush are as follows:
Identify the size required; the largest size that will fit without force is ideal. If necessary, more than one size can be used.
Insert the bristles into the interdental space at a 90-degree angle.
Move the brush back and forth between the teeth.
Rinse under water to remove debris when necessary.
Rinse with warm soapy water once complete and store in a clean dry area.
Replace once bristles are worn.
Tongue scrapers
The tongue harbors numerous bacteria which can cause bad breath. Bad breath, also known as halitosis, is often the result of poor oral hygiene and can also accompany dehydration and other medical conditions; it can affect toddlers as well as adults, and parents who notice persistent bad breath in a young child should raise it with a pediatric dentist. Tongue cleaners are designed to remove the debris built up on the tongue. Using a toothbrush to clean the tongue is another possibility; however, it might be hard to reach the back of the tongue, and the bristles of the toothbrush may be too soft to remove the debris. Steps of using a tongue scraper:
Rinse the tongue scraper in order to clean it and remove any present debris
Start at the back of the tongue and gently scrape forwards. Be sure to clean the sides of the tongue, as well as the middle portion
After the cleaning is completed, rinse the tongue scraper to remove any debris left behind
Rinse the mouth
Oral irrigation
Some dental professionals recommend subgingival irrigation, also known as water flossing, as a way to clean teeth and gums. Oral irrigators may be used instead of or in addition to flossing.
Single-tufted brushes
Single-tufted brushes are a tool used in conjunction with tooth brushing.
The single-tufted brush is designed to reach the 'hard to reach places' within the mouth. This tool is best used behind the lower front teeth, behind the back molars, around crooked teeth and in spaces where teeth have been removed.
The single-tufted brush design has an angled handle, a 4 mm diameter head and rounded bristle tips.
Gum stimulators
Toothbrushes with pointed rubber tips at the ends of the handles have been available for many years, and have more recently been replaced by a standalone tool called a gum stimulator designed to massage the gum line and the bases of the areas between the teeth. Such stimulators help to increase circulation to the gum line and to clear away bacteria which might not be removed by brushing and flossing alone.
Oral swabs
Oral care swabs, commonly known as Toothettes, are small sponges attached to a stick, often used for oral care in hospital or long-term care settings. The sponge is used to moisten and clear the patient's mouth of debris or thickened saliva in situations where conventional toothbrushing is not an option.
Dental probiotics
Some people use dental probiotics in an attempt to manage conditions such as gingivitis and halitosis.
Food and drink
Foods that help muscles and bones also help teeth and gums. Vitamin C is necessary, for example, to prevent scurvy, which manifests as serious gum disease.
Eating a balanced diet and limiting sugar intake can help prevent tooth decay and periodontal disease. The Fédération dentaire internationale (FDI World Dental Federation) has promoted foods such as raw vegetables, plain yogurt, cheese, or fruit as dentally beneficial—this has been echoed by the American Dental Association (ADA).
Beneficial foods
Community water fluoridation is the addition of fluoride to adjust the natural fluoride concentration of a community's water supply to the level recommended for optimal dental health, approximately 1.0 ppm (parts per million). Fluoride is a primary protector against dental cavities. Fluoride makes the surface of teeth more resistant to acids during the process of remineralization. Drinking fluoridated water is recommended by some dental professionals.
Milk, cheese, nuts and chicken are also rich in calcium and phosphate, and may also encourage remineralization.
The body cannot absorb all the required calcium if it lacks vitamin D, so fatty fish (salmon, for instance) as a major source of vitamin D helps an individual's teeth and gums to get more benefits of calcium.
Green and black tea, which are rich in polyphenols, suppress the bacteria that cause plaque; they therefore help sustain oral health and are advisable during or after a meal.
Foods high in fiber (like vegetables) may help to increase the flow of saliva, and a bolus of fiber, like celery string, fresh carrot or broccoli, can force saliva into food trapped inside pits and fissures on chewing surfaces, where over 80% of cavities occur, to dilute carbohydrates like sugar, neutralize acid and remineralize teeth on easy-to-reach surfaces.
Harmful foods
Sugars are commonly associated with dental cavities. Other carbohydrates, especially cooked starches, e.g. crisps/potato chips, may also damage teeth, although to a lesser degree (and indirectly) since starch has to be converted to glucose by salivary amylase (an enzyme in the saliva) first. Sugars in foods that are more 'sticky', such as toffee, are likely to cause more damage to teeth than those in less 'sticky' foods, such as certain forms of chocolate or most fruits.
Sucrose (table sugar) is most commonly associated with cavities. The amount of sugar consumed at any one time is less important than how often food and drinks that contain sugar are consumed. The more frequently sugars are consumed, the greater the time during which the tooth is exposed to low pH levels, at which point demineralisation occurs (below 5.5 for most people). It is important therefore to try to encourage infrequent consumption of food and drinks containing sugar so that teeth have a chance to be repaired by remineralisation and fluoride. Limiting sugar-containing foods and drinks to meal times is one way to reduce the incidence of cavities. Sugars from fruit and fruit juices, e.g., glucose, fructose, and maltose can also cause cavities. Sucrose is used by Streptococcus mutans bacteria to produce biofilm. The sucrose is split by glucansucrase, which allows the bacteria to use the resulting glucose for building glucan polymer film and the resulting fructose as fuel to be converted to lactic acid.
Acids contained in fruit juice, vinegar and soft drinks lower the pH level of the oral cavity, which causes the enamel to demineralize. Drinking drinks such as orange juice or cola throughout the day raises the risk of dental cavities.
Another factor which affects the risk of developing cavities is the stickiness of foods. Some foods or sweets may stick to the teeth and so reduce the pH in the mouth for an extended time, particularly if they are sugary. It is important that teeth be cleaned at least twice a day, preferably with a toothbrush and fluoride toothpaste, to remove any food sticking to the teeth. Regular brushing and the use of dental floss also removes the dental plaque coating the tooth surface.
Chewing gum
Chewing gum assists oral irrigation between and around the teeth, cleaning and removing particles, but for teeth in poor condition it may damage or remove loose fillings as well. Dental chewing gums claim to improve dental health. Sugar-free chewing gum stimulates saliva production, and helps to clean the surface of the teeth.
Ice
Chewing on solid objects such as ice can chip teeth, leading to further tooth fractures. Craving and chewing ice (pagophagia) has been linked to iron-deficiency anemia, as people with anemia may crave items with no nutritional value.
Other
Smoking is one of the leading risk factors associated with periodontal diseases. It is thought that smoking impairs and alters normal immune responses, eliciting destructive processes while inhibiting reparative responses, thereby promoting the incidence and development of periodontal diseases.
Regular vomiting, as seen in bulimia nervosa and morning sickness, also causes significant damage due to acid erosion.
People with intellectual disability have an increased risk of developing oral health problems, such as gum disease or dental decay, compared with the general population. For people with severe disability, understanding the importance of oral hygiene and developing the skills to achieve a high quality of oral care may not be a priority. Therefore, studies have been conducted to assess interventions that improve the knowledge and skills of people with intellectual disabilities and their carers.
Mouthwash
There are three commonly used kinds of mouthwash: saline (salty water), essential oils (Listerine, etc.), and chlorhexidine gluconate.
Saline
Saline (warm salty water) is usually recommended after procedures like dental extractions. In a study completed in 2014, a warm saline mouthrinse was compared with no mouthrinse in preventing alveolar osteitis (dry socket) after extraction. In the group that was instructed to rinse with saline, the prevalence of alveolar osteitis was lower than in the group that did not.
Essential oils (EO) or cetyl pyridinium chloride (CPC)
Essential oils, found in Listerine mouthwash, include eucalyptol, menthol, thymol, and methyl salicylate. CPC-containing mouthwash contains cetyl pyridinium chloride, and is found in brands such as Colgate Plax, Crest Pro Health and Oral B Pro Health Rinse. In a meta-analysis completed in 2016, EO and CPC mouthrinses were compared, and it was found that plaque and gingivitis levels were lower with EO mouthrinse when used as an adjunct to mechanical plaque removal (toothbrushing and interdental cleaning).
Chlorhexidine
Chlorhexidine gluconate is an antiseptic mouthrinse that should only be used for two-week periods at a time, due to the brown staining it causes on the teeth and tongue. Compared to essential oils, it is more efficacious in controlling plaque levels, but has no better effect on gingivitis; it is therefore generally used for post-surgical wound healing or the short-term control of plaque.
Sodium hypochlorite
As mentioned earlier, sodium hypochlorite, a common household bleach, can be used as a 0.2% solution for 30 seconds two or three times a week as a cheap and effective means of combating harmful bacteria. The commercial product is 5% or 6%, so this requires diluting the product by a factor of about 30 (half a tablespoon in a full glass of water). The solution loses activity with time and should be discarded after one day.
Appliances care
Dentures
Dentures must be kept extremely clean. It is recommended that dentures be cleaned mechanically twice a day with a soft-bristled brush and denture cleansing paste. It is not recommended to use toothpaste, as it is too abrasive for acrylic, and will leave plaque retentive scratches in the surface. Dentures should be taken out at night, as leaving them in whilst sleeping has been linked to poor oral health. Leaving a denture in during sleep reduces the protective cleansing and antibacterial properties of saliva against Candida albicans (oral thrush) and denture stomatitis; the inflammation and redness of the oral mucosa underneath the denture. For the elderly, wearing a denture during sleep has been proven to greatly increase the risk of pneumonia. It is now recommended that dentures should be stored in a dry container overnight, as keeping dentures dry for 8 hours significantly reduces the amount of Candida albicans on an acrylic denture. Approximately once a week it is recommended to soak a denture overnight with an alkaline-peroxide denture cleansing tablet, as this has been proved to reduce bacterial mass and pathogenicity.
Retainers
As with dentures, it is recommended to clean retainers properly at least once a day (avoiding toothpaste and using soap) and to soak them overnight with an alkaline-peroxide denture cleansing tablet once a week. Hot temperatures will warp the shape of the retainer; therefore, rinsing under cold water is preferred. Keeping the retainer in a plastic case and rinsing it beforehand is considered to help reduce the number of bacteria being transferred back into the mouth.
Braces
During treatment with braces, it is recommended to use a small or specialized toothbrush with a soft head to access hard-to-reach areas. Brushing after every meal is highly advisable. Using a high-fluoride toothpaste during treatment can be more effective than using a normal toothpaste. Regular flossing is as important as brushing, and helps to remove any plaque build-up, as well as smaller food particles that are stuck in the braces or between the teeth. Floss threaders for braces or interdental brushes are also an option. Furthermore, application of fluoride foam (high fluoride concentrations) by a dentist every 6–8 weeks during treatment could reduce dental decay. However, more research needs to be carried out regarding this.
Education
To become a dental hygienist in the U.S. one must attend a college or university that is approved by the Commission on Dental Accreditation and take the National Board Dental Hygiene Examination. There are several degrees one may receive. An associate degree after attending community college is the most common and only takes two years to obtain. After doing so, one may work in a dental office. There is also the option of receiving a bachelor's degree or master's degree if one plans to work in an educational institute either for teaching or research.
Oral hygiene and systemic diseases
Several recent clinical studies suggest oral disease and inflammation (oral bacteria and oral infections) may be a risk factor for serious systemic diseases, such as:
cardiovascular disease (heart attack and stroke)
bacterial pneumonia: Oral hygiene care for critically ill patients has been reported to reduce the risk of ventilator-associated pneumonia.
low or extremely high birth weight of one's baby
diabetes complications
osteoporosis
Relation to mental health
A strong correlation has been found between mental health disorders and dental fear. People suffering from mental health disorders can develop problems arising from the neglect of daily oral hygiene, such as dry mouth, dental caries, jaw pain, oral cancer, and periodontitis (also called gum disease). In a twenty-five-year study, it was found that people suffering from mental health disorders have a 2.8 times increased chance of losing their teeth.
In a study of an Australian community, semi-structured interviews were conducted with males and females over the age of eighteen. The goal was to see how mental health challenges affect a person's overall health, focusing mainly on oral health. The results showed that not going to the dentist for cleanings and not brushing the teeth at all resulted in signs of tooth decay.
The findings showed that individuals were less likely to go to the dentist regularly because they felt that they would be treated differently, unfairly, or judged. At the National Opinion Research Center at the University of Chicago, a survey of about 150 questions was administered to a group of about 17,000 people. It was found that people struggling with bad oral health came from areas of low income, did not visit the dentist regularly, and struggled with poor mental health.
In a study involving 2,784 psychiatric patients and 31,084 people from the general population, along with 131 nurses, a dental hygienist educated the patients on the importance of oral hygiene, providing a twenty-minute PowerPoint presentation to demonstrate proper cleaning methods. The psychiatric patients recognized that their oral hygiene was lacking, and after the presentation their oral care improved markedly. Shappell and her colleagues reported that individuals with psychiatric disorders often stated that they did nothing for their oral health, and found that these individuals struggle with chronic oral pain, a stressor that decreases serotonin levels and makes their mental health disorders a greater challenge.
Relation to cognitive decline
| Biology and health sciences | Health and fitness | null |
17731272 | https://en.wikipedia.org/wiki/Homosclerophorida | Homosclerophorida | Homosclerophorida is an order of marine sponges. It is the only order in the monotypic class Homoscleromorpha. The order is composed of two families: Plakinidae and Oscarellidae.
Taxonomy
Homoscleromorpha is phylogenetically well separated from Demospongiae. Therefore, it has been recognized as the fourth class of sponges.
It has been suggested that Homoscleromorpha are more closely related to eumetazoans than to the other sponge groups, rendering sponges paraphyletic. This view has not been supported by later work using larger datasets and new techniques for phylogenetic inference, which tend to support sponges as monophyletic, with Homoscleromorpha grouping together with Calcarea.
On the basis of molecular and morphological evidence, the two families Plakinidae and Oscarellidae have been reinstated.
There are 117 species in this group divided into 9 genera.
The spiculate genera in this group are Aspiculophora, Corticium, Placinolopha, Plakina, Plakinasterella, Plakortis and Tetralophophora.
The aspiculate genera are Oscarella and Pseudocorticium.
Description
These sponges are massive or encrusting in form and have a very simple structure with very little variation in spicule form (all spicules tend to be very small). Reproduction is viviparous and the larva is an oval form known as an amphiblastula. This form is usual in calcareous sponges but is less common in other sponges.
Habitat
Homoscleromorpha are exclusively marine sponges that tend to encrust on other surfaces at shallow depths. These sponges typically inhabit shady locations, under overhangs and inside caves. In the Mediterranean Sea, 82% of the species in this taxon can be found in caves, and 41% of them are found nowhere else.
| Biology and health sciences | Porifera | Animals |
1651906 | https://en.wikipedia.org/wiki/Ordinary%20least%20squares | Ordinary least squares | In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model (with fixed level-one effects of a linear function of a set of explanatory variables) by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (values of the variable being observed) in the input dataset and the output of the (linear) function of the independent variable. Some sources consider OLS to be linear regression.
Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface—the smaller the differences, the better the model fits the data. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation.
The OLS estimator is consistent for the level-one fixed effects when the regressors are exogenous and there is no perfect multicollinearity (rank condition), and consistent for the variance estimate of the residuals when the regressors have finite fourth moments; by the Gauss–Markov theorem, it is optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. Under these conditions, the method of OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under the additional assumption that the errors are normally distributed with zero mean, OLS is the maximum likelihood estimator that outperforms any non-linear unbiased estimator.
Linear model
Suppose the data consists of n observations { xi, yi } for i = 1, ..., n. Each observation i includes a scalar response yi and a column vector xi of p parameters (regressors), i.e., xi = (xi1, xi2, ..., xip)T. In a linear regression model, the response variable, yi, is a linear function of the regressors:
yi = β1 xi1 + β2 xi2 + ... + βp xip + εi,
or in vector form,
yi = xiT β + εi,
where xi, as introduced previously, is a column vector of the i-th observation of all the explanatory variables; β is a p×1 vector of unknown parameters; and the scalar εi represents unobserved random variables (errors) of the i-th observation. εi accounts for the influences upon the responses yi from sources other than the explanatory variables xi. This model can also be written in matrix notation as
y = Xβ + ε,
where y and ε are n×1 vectors of the response variables and the errors of the n observations, and X is an n×p matrix of regressors, also sometimes called the design matrix, whose row i is xiT and contains the i-th observations on all the explanatory variables.
Typically, a constant term is included in the set of regressors X, say, by taking xi1 = 1 for all i = 1, ..., n. The coefficient β1 corresponding to this regressor is called the intercept. Without the intercept, the fitted line is forced to cross the origin when xi = 0.
Regressors do not have to be independent for estimation to be consistent; e.g., they may be non-linearly dependent. Short of perfect multicollinearity, parameter estimates may still be consistent; however, as multicollinearity rises the standard error around such estimates increases and reduces the precision of such estimates. When there is perfect multicollinearity, it is no longer possible to obtain unique estimates for the coefficients of the related regressors; estimation for these parameters cannot converge (thus, it cannot be consistent).
As a concrete example where regressors are non-linearly dependent yet estimation may still be consistent, we might suspect the response depends linearly both on a value and its square; in which case we would include one regressor whose value is just the square of another regressor. In that case, the model would be quadratic in the second regressor, but nonetheless is still considered a linear model because the model is still linear in the parameters (β).
Matrix/vector formulation
Consider an overdetermined system
Xi1 β1 + Xi2 β2 + ... + Xip βp = yi,   (i = 1, 2, ..., n),
of n linear equations in p unknown coefficients, β1, β2, ..., βp, with n > p. This can be written in matrix form as
Xβ = y,
where X is the n×p matrix with entries Xij, β = (β1, β2, ..., βp)T, and y = (y1, y2, ..., yn)T.
(Note: for a linear model as above, not all elements in X contain information on the data points. The first column is populated with ones, Xi1 = 1. Only the other columns contain actual data. So here p is equal to the number of regressors plus one.)
Such a system usually has no exact solution, so the goal is instead to find the coefficients β which fit the equations "best", in the sense of solving the quadratic minimization problem
β̂ = argminβ S(β),
where the objective function S is given by
S(β) = Σi=1..n | yi − Σj=1..p Xij βj |2 = || y − Xβ ||2.
A justification for choosing this criterion is given in Properties below. This minimization problem has a unique solution, provided that the p columns of the matrix X are linearly independent, given by solving the so-called normal equations:
(XTX) β̂ = XTy.
The matrix XTX is known as the normal matrix or Gram matrix, and the matrix XTy is known as the moment matrix of regressand by regressors. Finally, β̂ is the coefficient vector of the least-squares hyperplane, expressed as
β̂ = (XTX)−1 XTy,
or
β̂ = β + (XTX)−1 XTε.
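As an informal numerical illustration of the normal equations, the following Python/NumPy sketch fits a model on synthetic data (the data and variable names are invented for the example). In practice the least-squares solution is usually computed via a QR decomposition or a routine such as numpy.linalg.lstsq, which is numerically more stable than forming XTX explicitly:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 100, 3
    # Design matrix with an intercept column of ones, as described above.
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    beta_true = np.array([1.0, 2.0, -0.5])
    y = X @ beta_true + rng.normal(scale=0.1, size=n)

    # Solve the normal equations (X^T X) beta = X^T y.
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

    # The same solution via a numerically stabler least-squares routine.
    beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
    assert np.allclose(beta_hat, beta_lstsq)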
Estimation
Suppose b is a "candidate" value for the parameter vector β. The quantity yi − xiTb, called the residual for the i-th observation, measures the vertical distance between the data point (xi, yi) and the hyperplane y = xTb, and thus assesses the degree of fit between the actual data and the model. The sum of squared residuals (SSR) (also called the error sum of squares (ESS) or residual sum of squares (RSS)) is a measure of the overall model fit:
S(b) = Σi=1..n ( yi − xiTb )2 = (y − Xb)T (y − Xb),
where T denotes the matrix transpose, and the rows of X, denoting the values of all the independent variables associated with a particular value of the dependent variable, are Xi = xiT. The value of b which minimizes this sum is called the OLS estimator for β. The function S(b) is quadratic in b with positive-definite Hessian, and therefore this function possesses a unique global minimum at b = β̂, which can be given by the explicit formula
β̂ = argminb S(b) = (XTX)−1 XTy.
The product N = XT X is a Gram matrix, and its inverse, Q = N−1, is the cofactor matrix of β, closely related to its covariance matrix, Cβ.
The matrix (XT X)−1 XT = Q XT is called the Moore–Penrose pseudoinverse matrix of X. This formulation highlights the point that estimation can be carried out if, and only if, there is no perfect multicollinearity between the explanatory variables (which would cause the Gram matrix to have no inverse).
Prediction
After we have estimated β, the fitted values (or predicted values) from the regression will be
ŷ = X β̂ = Py,
where P = X(XTX)−1XT is the projection matrix onto the space V spanned by the columns of X. This matrix P is also sometimes called the hat matrix because it "puts a hat" onto the variable y. Another matrix, closely related to P, is the annihilator matrix M = In − P; this is a projection matrix onto the space orthogonal to V. Both matrices P and M are symmetric and idempotent (meaning that P2 = P and M2 = M), and relate to the data matrix X via the identities PX = X and MX = 0. Matrix M creates the residuals from the regression:
ε̂ = y − ŷ = y − X β̂ = My = M(Xβ + ε) = Mε.
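Continuing the synthetic-data sketch above (same assumed variables), the projection and annihilator matrices and the identities just stated can be verified numerically:

    # Hat (projection) matrix and annihilator matrix.
    P = X @ np.linalg.solve(X.T @ X, X.T)
    M = np.eye(n) - P
    assert np.allclose(P @ P, P) and np.allclose(M @ M, M)  # idempotent
    assert np.allclose(P @ X, X) and np.allclose(M @ X, 0)  # PX = X, MX = 0
    residuals = M @ y
    assert np.allclose(residuals, y - X @ beta_hat)         # M creates the residuals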
The variances of the predicted values are found in the main diagonal of the variance-covariance matrix of predicted values:
Var(ŷ) = Var(Py) = P Var(y) PT = s2 PPT = s2 P,
where P is the projection matrix and s2 is the sample variance.
The full matrix is very large; its diagonal elements can be calculated individually as:
Var(ŷi) = Xi (XTX)−1 XiT · s2,
where Xi is the i-th row of matrix X.
Sample statistics
Using these residuals we can estimate the sample variance s2 using the reduced chi-squared statistic:
s2 = ε̂Tε̂ / (n − p) = S(β̂) / (n − p).
The denominator, n−p, is the statistical degrees of freedom. The first quantity, s2, is the OLS estimate for σ2, whereas the second, σ̂2 = ε̂Tε̂ / n = S(β̂) / n, is the MLE estimate for σ2. The two estimators are quite similar in large samples; the first estimator is always unbiased, while the second estimator is biased but has a smaller mean squared error. In practice s2 is used more often, since it is more convenient for the hypothesis testing. The square root of s2 is called the regression standard error, standard error of the regression, or standard error of the equation.
It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto X. The coefficient of determination R2 is defined as the ratio of the "explained" variance to the "total" variance of the dependent variable y, in the cases where the regression sum of squares equals the sum of squares of residuals:
R2 = Σ ( ŷi − ȳ )2 / Σ ( yi − ȳ )2 = 1 − S(β̂) / TSS,
where TSS is the total sum of squares for the dependent variable, TSS = yT L y, where L = In − (1/n) J and J is an n×n matrix of ones. (L is a centering matrix which is equivalent to regression on a constant; it simply subtracts the mean from a variable.) In order for R2 to be meaningful, the matrix X of data on regressors must contain a column vector of ones to represent the constant whose coefficient is the regression intercept. In that case, R2 will always be a number between 0 and 1, with values close to 1 indicating a good degree of fit.
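In the same sketch (again using the invented data above), the two variance estimates and the coefficient of determination can be computed directly:

    sse = residuals @ residuals          # sum of squared residuals S(beta_hat)
    s2 = sse / (n - p)                   # unbiased OLS estimate of sigma^2
    sigma2_mle = sse / n                 # biased MLE estimate of sigma^2
    tss = np.sum((y - y.mean()) ** 2)    # total sum of squares
    r2 = 1 - sse / tss                   # coefficient of determination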
Simple linear regression model
If the data matrix X contains only two variables, a constant and a scalar regressor xi, then this is called the "simple regression model". This case is often considered in beginner statistics classes, as it provides much simpler formulas, suitable even for manual calculation. The parameters are commonly denoted as (α, β):
yi = α + β xi + εi.
The least squares estimates in this case are given by the simple formulas
β̂ = Σ ( xi − x̄ )( yi − ȳ ) / Σ ( xi − x̄ )2,   α̂ = ȳ − β̂ x̄.
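A short sketch of these closed-form simple-regression estimates, on hypothetical one-dimensional data (continuing the NumPy setup above):

    x = rng.normal(size=50)
    y1 = 2.0 + 3.0 * x + rng.normal(scale=0.2, size=50)
    slope = np.sum((x - x.mean()) * (y1 - y1.mean())) / np.sum((x - x.mean()) ** 2)
    intercept = y1.mean() - slope * x.mean()  # the fitted line passes through (x-bar, y-bar)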
Alternative derivations
In the previous section the least squares estimator β̂ was obtained as a value that minimizes the sum of squared residuals of the model. However it is also possible to derive the same estimator from other approaches. In all cases the formula for the OLS estimator remains the same: β̂ = (XTX)−1 XTy; the only difference is in how we interpret this result.
Projection
For mathematicians, OLS is an approximate solution to an overdetermined system of linear equations Xβ ≈ y, where β is the unknown. Assuming the system cannot be solved exactly (the number of equations n is much larger than the number of unknowns p), we are looking for a solution that could provide the smallest discrepancy between the right- and left-hand sides. In other words, we are looking for the solution that satisfies
β̂ = argminβ || y − Xβ ||,
where ||·|| is the standard L2 norm in the n-dimensional Euclidean space Rn. The predicted quantity Xβ is just a certain linear combination of the vectors of regressors. Thus, the residual vector y − Xβ will have the smallest length when y is projected orthogonally onto the linear subspace spanned by the columns of X. The OLS estimator β̂ in this case can be interpreted as the coefficients of vector decomposition of ŷ = Py along the basis of X.
In other words, the gradient equations at the minimum can be written as:
(y − X β̂)T X = 0.
A geometrical interpretation of these equations is that the vector of residuals, y − X β̂, is orthogonal to the column space of X, since the dot product (y − X β̂) · Xv is equal to zero for any conformal vector, v. This means that y − X β̂ is the shortest of all possible vectors y − Xβ, that is, the variance of the residuals is the minimum possible. This is illustrated at the right.
Introducing γ̂ and a matrix K with the assumption that the matrix [X K] is non-singular and KTX = 0 (cf. Orthogonal projections), the residual vector should satisfy the following equation:
ε̂ := y − X β̂ = K γ̂.
The equation and solution of linear least squares are thus described as follows:
y = [X K] (β̂; γ̂),   so that   (β̂; γ̂) = [X K]−1 y,
where (β̂; γ̂) denotes the stacked vector of β̂ and γ̂.
Another way of looking at it is to consider the regression line to be a weighted average of the lines passing through the combination of any two points in the dataset. Although this way of calculation is more computationally expensive, it provides a better intuition on OLS.
Maximum likelihood
The OLS estimator is identical to the maximum likelihood estimator (MLE) under the normality assumption for the error terms. This normality assumption has historical importance, as it provided the basis for the early work in linear regression analysis by Yule and Pearson. From the properties of MLE, we can infer that the OLS estimator is asymptotically efficient (in the sense of attaining the Cramér–Rao bound for variance) if the normality assumption is satisfied.
Generalized method of moments
In the iid case the OLS estimator can also be viewed as a GMM estimator arising from the moment conditions
E[ xi ( yi − xiT β ) ] = 0.
These moment conditions state that the regressors should be uncorrelated with the errors. Since xi is a p-vector, the number of moment conditions is equal to the dimension of the parameter vector β, and thus the system is exactly identified. This is the so-called classical GMM case, when the estimator does not depend on the choice of the weighting matrix.
Note that the original strict exogeneity assumption E[ εi | xi ] = 0 implies a far richer set of moment conditions than stated above. In particular, this assumption implies that for any vector-function f, the moment condition E[ f(xi) · εi ] = 0 will hold. However it can be shown using the Gauss–Markov theorem that the optimal choice of function f is to take f(xi) = xi, which results in the moment equation posted above.
Properties
Assumptions
There are several different frameworks in which the linear regression model can be cast in order to make the OLS technique applicable. Each of these settings produces the same formulas and same results. The only difference is the interpretation and the assumptions which have to be imposed in order for the method to give meaningful results. The choice of the applicable framework depends mostly on the nature of data in hand, and on the inference task which has to be performed.
One of the lines of difference in interpretation is whether to treat the regressors as random variables, or as predefined constants. In the first case (random design) the regressors xi are random and sampled together with the yis from some population, as in an observational study. This approach allows for more natural study of the asymptotic properties of the estimators. In the other interpretation (fixed design), the regressors X are treated as known constants set by a design, and y is sampled conditionally on the values of X as in an experiment. For practical purposes, this distinction is often unimportant, since estimation and inference is carried out while conditioning on X. All results stated in this article are within the random design framework.
Classical linear regression model
The classical model focuses on the "finite sample" estimation and inference, meaning that the number of observations n is fixed. This contrasts with the other approaches, which study the asymptotic behavior of OLS, and in which the behavior at a large number of samples is studied.
Correct specification. The linear functional form must coincide with the form of the actual data-generating process.
Strict exogeneity. The errors in the regression should have conditional mean zero: E[ ε | X ] = 0. The immediate consequence of the exogeneity assumption is that the errors have mean zero, E[ ε ] = 0 (by the law of total expectation), and that the regressors are uncorrelated with the errors, E[ XTε ] = 0. The exogeneity assumption is critical for the OLS theory. If it holds then the regressor variables are called exogenous. If it does not, then those regressors that are correlated with the error term are called endogenous, and the OLS estimator becomes biased. In such case the method of instrumental variables may be used to carry out inference.
No linear dependence. The regressors in X must all be linearly independent. Mathematically, this means that the matrix X must have full column rank almost surely: Pr[ rank(X) = p ] = 1. Usually, it is also assumed that the regressors have finite moments up to at least the second moment. Then the matrix Qxx = E[ XTX / n ] is finite and positive semi-definite. When this assumption is violated the regressors are called linearly dependent or perfectly multicollinear. In such case the value of the regression coefficient β cannot be learned, although prediction of y values is still possible for new values of the regressors that lie in the same linearly dependent subspace.
Spherical errors: Var[ ε | X ] = σ2 In, where In is the identity matrix in dimension n, and σ2 is a parameter which determines the variance of each observation. This σ2 is considered a nuisance parameter in the model, although usually it is also estimated. If this assumption is violated then the OLS estimates are still valid, but no longer efficient. It is customary to split this assumption into two parts:
Homoscedasticity: E[ εi2 | X ] = σ2, which means that the error term has the same variance σ2 in each observation. When this requirement is violated this is called heteroscedasticity; in such case a more efficient estimator would be weighted least squares. If the errors have infinite variance then the OLS estimates will also have infinite variance (although by the law of large numbers they will nonetheless tend toward the true values so long as the errors have zero mean). In this case, robust estimation techniques are recommended.
No autocorrelation: the errors are uncorrelated between observations: E[ εiεj | X ] = 0 for i ≠ j. This assumption may be violated in the context of time series data, panel data, cluster samples, hierarchical data, repeated measures data, longitudinal data, and other data with dependencies. In such cases generalized least squares provides a better alternative than the OLS. Another expression for autocorrelation is serial correlation.
Normality. It is sometimes additionally assumed that the errors have normal distribution conditional on the regressors: ε | X ~ N(0, σ2 In). This assumption is not needed for the validity of the OLS method, although certain additional finite-sample properties can be established in the case when it holds (especially in the area of hypothesis testing). Also when the errors are normal, the OLS estimator is equivalent to the maximum likelihood estimator (MLE), and therefore it is asymptotically efficient in the class of all regular estimators. Importantly, the normality assumption applies only to the error terms; contrary to a popular misconception, the response (dependent) variable is not required to be normally distributed.
Independent and identically distributed (iid)
In some applications, especially with cross-sectional data, an additional assumption is imposed: that all observations are independent and identically distributed. This means that all observations are taken from a random sample which makes all the assumptions listed earlier simpler and easier to interpret. Also this framework allows one to state asymptotic results (as the sample size n → ∞), which are understood as a theoretical possibility of fetching new independent observations from the data generating process. The list of assumptions in this case is:
iid observations: (xi, yi) is independent from, and has the same distribution as, (xj, yj) for all i ≠ j;
no perfect multicollinearity: Qxx = E[ xixiT ] is a positive-definite matrix;
exogeneity: E[ εi | xi ] = 0;
homoscedasticity: Var[ εi | xi ] = σ2.
Time series model
The stochastic process {xi, yi} is stationary and ergodic; if {xi, yi} is nonstationary, OLS results are often spurious unless {xi, yi} is co-integrating.
The regressors are predetermined: E[xiεi] = 0 for all i = 1, ..., n;
The p×p matrix Qxx = E[ xixiT ] is of full rank, and hence positive-definite;
{xiεi} is a martingale difference sequence, with a finite matrix of second moments Qxxε2 = E[ εi2 xixiT ].
Finite sample properties
First of all, under the strict exogeneity assumption the OLS estimators β̂ and s2 are unbiased, meaning that their expected values coincide with the true values of the parameters:
E[ β̂ | X ] = β,   E[ s2 | X ] = σ2.
If the strict exogeneity does not hold (as is the case with many time series models, where exogeneity is assumed only with respect to the past shocks but not the future ones), then these estimators will be biased in finite samples.
The variance-covariance matrix (or simply covariance matrix) of β̂ is equal to
Var[ β̂ | X ] = σ2 (XTX)−1.
In particular, the standard error of each coefficient β̂j is equal to the square root of the j-th diagonal element of this matrix. The estimate of this standard error is obtained by replacing the unknown quantity σ2 with its estimate s2. Thus,
ŝ.e.( β̂j ) = √( s2 [(XTX)−1]jj ).
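The estimated covariance matrix and the coefficient standard errors take one line each in the running sketch (same assumed variables as before):

    cov_beta = s2 * np.linalg.inv(X.T @ X)   # estimated Var(beta_hat | X)
    std_err = np.sqrt(np.diag(cov_beta))     # standard error of each coefficient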
It can also be easily shown that the estimator β̂ is uncorrelated with the residuals from the model:
Cov[ β̂, ε̂ | X ] = 0.
The Gauss–Markov theorem states that under the spherical errors assumption (that is, the errors should be uncorrelated and homoscedastic) the estimator β̂ is efficient in the class of linear unbiased estimators. This is called the best linear unbiased estimator (BLUE). Efficiency should be understood as if we were to find some other estimator β̃ which would be linear in y and unbiased, then
Var[ β̃ | X ] − Var[ β̂ | X ] ≥ 0
in the sense that this is a nonnegative-definite matrix. This theorem establishes optimality only in the class of linear unbiased estimators, which is quite restrictive. Depending on the distribution of the error terms ε, other, non-linear estimators may provide better results than OLS.
Assuming normality
The properties listed so far are all valid regardless of the underlying distribution of the error terms. However, if one is willing to assume that the normality assumption holds (that is, that ε ~ N(0, σ2 In)), then additional properties of the OLS estimators can be stated.
The estimator β̂ is normally distributed, with mean and variance as given before:
β̂ ~ N( β, σ2 (XTX)−1 ).
This estimator reaches the Cramér–Rao bound for the model, and thus is optimal in the class of all unbiased estimators. Note that unlike the Gauss–Markov theorem, this result establishes optimality among both linear and non-linear estimators, but only in the case of normally distributed error terms.
The estimator s2 will be proportional to the chi-squared distribution:
s2 ~ ( σ2 / (n − p) ) · χ2(n − p),
where χ2(n − p) denotes a chi-squared random variable with n − p degrees of freedom.
The variance of this estimator is equal to 2σ4/(n − p), which does not attain the Cramér–Rao bound of 2σ4/n. However it was shown that there are no unbiased estimators of σ2 with variance smaller than that of the estimator s2. If we are willing to allow biased estimators, and consider the class of estimators that are proportional to the sum of squared residuals (SSR) of the model, then the best (in the sense of the mean squared error) estimator in this class will be σ̃2 = SSR / (n − p + 2), which even beats the Cramér–Rao bound in the case when there is only one regressor (p = 1).
Moreover, the estimators β̂ and s2 are independent, a fact which comes in useful when constructing the t- and F-tests for the regression.
Influential observations
As was mentioned before, the estimator is linear in y, meaning that it represents a linear combination of the dependent variables yi. The weights in this linear combination are functions of the regressors X, and generally are unequal. The observations with high weights are called influential because they have a more pronounced effect on the value of the estimator.
To analyze which observations are influential we remove a specific j-th observation and consider how much the estimated quantities are going to change (similarly to the jackknife method). It can be shown that the change in the OLS estimator for β will be equal to
β̂(j) − β̂ = − (XTX)−1 xj ε̂j / (1 − hj),
where hj = xjT (XTX)−1 xj is the j-th diagonal element of the hat matrix P, and xj is the vector of regressors corresponding to the j-th observation. Similarly, the change in the predicted value for the j-th observation resulting from omitting that observation from the dataset will be equal to
ŷj(j) − ŷj = xjT β̂(j) − xjT β̂ = − hj ε̂j / (1 − hj).
From the properties of the hat matrix, 0 ≤ hj ≤ 1, and they sum up to p, so that on average hj ≈ p/n. These quantities hj are called the leverages, and observations with high hj are called leverage points. Usually the observations with high leverage ought to be scrutinized more carefully, in case they are erroneous, or outliers, or in some other way atypical of the rest of the dataset.
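As the formulas above show, the leverages and the leave-one-out change in the fitted values can be computed without rerunning n separate regressions; continuing the sketch with the same assumed variables:

    A = np.linalg.inv(X.T @ X)
    h = np.einsum("ij,jk,ik->i", X, A, X)    # leverages: diagonal of the hat matrix P
    assert np.isclose(h.sum(), p)            # the leverages sum to p
    # Change in the fitted value of observation j when j is omitted from the data.
    delta_fit = -h * residuals / (1 - h)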
Partitioned regression
Sometimes the variables and corresponding parameters in the regression can be logically split into two groups, so that the regression takes the form
$$y = X_1\beta_1 + X_2\beta_2 + \varepsilon,$$
where X1 and X2 have dimensions n×p1, n×p2, and β1, β2 are p1×1 and p2×1 vectors, with $p_1 + p_2 = p$.
The Frisch–Waugh–Lovell theorem states that in this regression the residuals $\hat\varepsilon$ and the OLS estimate $\hat\beta_2$ will be numerically identical to the residuals and the OLS estimate for β2 in the following regression:
$$M_1 y = M_1 X_2 \beta_2 + \eta,$$
where M1 is the annihilator matrix for the regressors X1: $M_1 = I_n - X_1(X_1^{\mathsf T}X_1)^{-1}X_1^{\mathsf T}$.
The theorem can be used to establish a number of theoretical results. For example, having a regression with a constant and another regressor is equivalent to subtracting the means from the dependent variable and the regressor and then running the regression for the de-meaned variables but without the constant term.
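The numerical identity asserted by the theorem is also easy to check directly. A small simulation sketch (all names and the simulated coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2 = 200, 2, 3
X1 = rng.normal(size=(n, p1))
X2 = rng.normal(size=(n, p2))
y = X1 @ np.array([1.0, -2.0]) + X2 @ np.array([0.5, 0.0, 3.0]) + rng.normal(size=n)

# full regression on [X1, X2]
beta_full = np.linalg.lstsq(np.hstack([X1, X2]), y, rcond=None)[0]

# FWL: regress M1*y on M1*X2, where M1 annihilates X1
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
beta2_fwl = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0]

assert np.allclose(beta_full[p1:], beta2_fwl)   # identical estimates of beta2
```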
Constrained estimation
Suppose it is known that the coefficients in the regression satisfy a system of linear equations
$$A\colon\quad Q^{\mathsf T}\beta = c,$$
where Q is a p×q matrix of full rank, and c is a q×1 vector of known constants, where q < p. In this case least squares estimation is equivalent to minimizing the sum of squared residuals of the model subject to the constraint A. The constrained least squares (CLS) estimator can be given by an explicit formula:
$$\hat\beta^c = \hat\beta - (X^{\mathsf T}X)^{-1}Q\left(Q^{\mathsf T}(X^{\mathsf T}X)^{-1}Q\right)^{-1}\left(Q^{\mathsf T}\hat\beta - c\right).$$
This expression for the constrained estimator is valid as long as the matrix XᵀX is invertible. It was assumed from the beginning of this article that this matrix is of full rank, and it was noted that when the rank condition fails, β will not be identifiable. However it may happen that adding the restriction A makes β identifiable, in which case one would like to find the formula for the estimator. The estimator is equal to
$$\hat\beta^c = R\left(R^{\mathsf T}X^{\mathsf T}XR\right)^{-1}R^{\mathsf T}X^{\mathsf T}y + \left(I_p - R\left(R^{\mathsf T}X^{\mathsf T}XR\right)^{-1}R^{\mathsf T}X^{\mathsf T}X\right)Q\left(Q^{\mathsf T}Q\right)^{-1}c,$$
where R is a p×(p − q) matrix such that the matrix [Q R] is non-singular, and RᵀQ = 0. Such a matrix can always be found, although generally it is not unique. The second formula coincides with the first in the case when XᵀX is invertible.
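The first (explicit) formula translates directly into linear algebra. A minimal sketch, assuming XᵀX is invertible (illustrative names):

```python
import numpy as np

def constrained_ls(X, y, Q, c):
    # constrained least squares subject to Q' beta = c
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                           # unconstrained OLS
    K = XtX_inv @ Q @ np.linalg.inv(Q.T @ XtX_inv @ Q)
    return beta - K @ (Q.T @ beta - c)                 # correction toward the constraint
```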
Large sample properties
The least squares estimators are point estimates of the linear regression model parameters β. However, generally we also want to know how close those estimates might be to the true values of parameters. In other words, we want to construct the interval estimates.
Since we have not made any assumption about the distribution of the error term εi, it is impossible to infer the distribution of the estimators $\hat\beta$ and $\hat\sigma^2$. Nevertheless, we can apply the central limit theorem to derive their asymptotic properties as the sample size n goes to infinity. While the sample size is necessarily finite, it is customary to assume that n is "large enough" so that the true distribution of the OLS estimator is close to its asymptotic limit.
We can show that under the model assumptions, the least squares estimator for β is consistent (that is, $\hat\beta$ converges in probability to β) and asymptotically normal:
$$\sqrt{n}\left(\hat\beta - \beta\right)\ \xrightarrow{d}\ \mathcal N\!\left(0,\ \sigma^2 Q_{xx}^{-1}\right),$$
where $Q_{xx} = \operatorname{plim}\ \tfrac{1}{n} X^{\mathsf T} X$.
Intervals
Using this asymptotic distribution, approximate two-sided confidence intervals for the j-th component of the vector $\hat\beta$ can be constructed as
$$\beta_j \in \left[\ \hat\beta_j \pm q^{\mathcal N(0,1)}_{1-\alpha/2}\,\sqrt{\hat\sigma^2\left[(X^{\mathsf T}X)^{-1}\right]_{jj}}\ \right]$$
at the 1 − α confidence level,
where q denotes the quantile function of the standard normal distribution, and [·]jj is the j-th diagonal element of a matrix.
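For example, the interval can be evaluated with the normal quantile function; a short sketch in which SciPy's `norm.ppf` plays the role of q, and `beta_hat` and `se` are assumed to have been computed as earlier:

```python
from scipy.stats import norm

def asymptotic_ci(beta_hat, se, j, alpha=0.05):
    # two-sided 1 - alpha confidence interval for beta_j
    q = norm.ppf(1 - alpha / 2)          # standard normal quantile
    return beta_hat[j] - q * se[j], beta_hat[j] + q * se[j]
```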
Similarly, the least squares estimator for σ² is also consistent and asymptotically normal (provided that the fourth moment of εi exists) with limiting distribution
$$\sqrt{n}\left(\hat\sigma^2 - \sigma^2\right)\ \xrightarrow{d}\ \mathcal N\!\left(0,\ \operatorname{E}\left[\varepsilon_i^4\right] - \sigma^4\right).$$
These asymptotic distributions can be used for prediction, testing hypotheses, constructing other estimators, etc. As an example consider the problem of prediction. Suppose $x_0$ is some point within the domain of distribution of the regressors, and one wants to know what the response variable would have been at that point. The mean response is the quantity $x_0^{\mathsf T}\beta$, whereas the predicted response is $\hat y_0 = x_0^{\mathsf T}\hat\beta$. Clearly the predicted response is a random variable; its distribution can be derived from that of $\hat\beta$:
$$\sqrt{n}\left(\hat y_0 - x_0^{\mathsf T}\beta\right)\ \xrightarrow{d}\ \mathcal N\!\left(0,\ \sigma^2\, x_0^{\mathsf T} Q_{xx}^{-1} x_0\right),$$
which allows confidence intervals for the mean response $x_0^{\mathsf T}\beta$ to be constructed:
$$x_0^{\mathsf T}\beta \in \left[\ x_0^{\mathsf T}\hat\beta \pm q^{\mathcal N(0,1)}_{1-\alpha/2}\,\sqrt{\hat\sigma^2\, x_0^{\mathsf T}(X^{\mathsf T}X)^{-1} x_0}\ \right]$$
at the 1 − α confidence level.
Hypothesis testing
Two hypothesis tests are particularly widely used. First, one wants to know if the estimated regression equation is any better than simply predicting that all values of the response variable equal its sample mean (if not, it is said to have no explanatory power). The null hypothesis of no explanatory value of the estimated regression is tested using an F-test. If the calculated F-value is found to be large enough to exceed its critical value for the pre-chosen level of significance, the null hypothesis is rejected and the alternative hypothesis, that the regression has explanatory power, is accepted. Otherwise, the null hypothesis of no explanatory power cannot be rejected.
Second, for each explanatory variable of interest, one wants to know whether its estimated coefficient differs significantly from zero, that is, whether this particular explanatory variable in fact has explanatory power in predicting the response variable. Here the null hypothesis is that the true coefficient is zero. This hypothesis is tested by computing the coefficient's t-statistic, as the ratio of the coefficient estimate to its standard error. If the magnitude of the t-statistic exceeds a predetermined critical value, the null hypothesis is rejected and the variable is found to have explanatory power, with its coefficient significantly different from zero. Otherwise, the null hypothesis of a zero value of the true coefficient cannot be rejected.
In addition, the Chow test is used to test whether two subsamples both have the same underlying true coefficient values. The sum of squared residuals of regressions on each of the subsets and on the combined data set are compared by computing an F-statistic; if this exceeds a critical value, the null hypothesis of no difference between the two subsets is rejected; otherwise, it is not rejected.
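The first two tests are simple to compute from the residuals. A hedged NumPy/SciPy sketch (illustrative names; the F-test assumes the first column of X is a constant):

```python
import numpy as np
from scipy.stats import t as t_dist, f as f_dist

def t_and_f_tests(X, y):
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    se = np.sqrt(s2 * np.diag(XtX_inv))
    t_stats = beta / se                                 # coefficient t-statistics
    t_pvals = 2 * t_dist.sf(np.abs(t_stats), df=n - p)
    # F-test of "no explanatory power" vs. the intercept-only model
    rss0 = np.sum((y - y.mean()) ** 2)
    rss1 = resid @ resid
    F = ((rss0 - rss1) / (p - 1)) / (rss1 / (n - p))
    F_pval = f_dist.sf(F, p - 1, n - p)
    return t_stats, t_pvals, F, F_pval
```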
Example with real data
The following data set gives average heights and weights for American women aged 30–39 (source: The World Almanac and Book of Facts, 1975).
{|class="wikitable" style="text-align:right;"
|-
! style="text-align:left;" | Height (m)
| 1.47 || 1.50 || 1.52 || 1.55 || 1.57
| rowspan="6" |
|-
! style="text-align:left;" | Weight (kg)
| 52.21 || 53.12 || 54.48 || 55.84 || 57.20
|-
! style="text-align:left;" | Height (m)
| 1.60 || 1.63 || 1.65 || 1.68 || 1.70
|-
! style="text-align:left;" | Weight (kg)
| 58.57 || 59.93 || 61.29 || 63.11 || 64.47
|-
! style="text-align:left;" | Height (m)
| 1.73 || 1.75 || 1.78 || 1.80 || 1.83
|-
! style="text-align:left;" | Weight (kg)
| 66.28 || 68.10 || 69.92 || 72.19 || 74.46
|}
When only one dependent variable is being modeled, a scatterplot will suggest the form and strength of the relationship between the dependent variable and regressors. It might also reveal outliers, heteroscedasticity, and other aspects of the data that may complicate the interpretation of a fitted regression model. The scatterplot suggests that the relationship is strong and can be approximated as a quadratic function. OLS can handle non-linear relationships by introducing the regressor HEIGHT². The regression model then becomes a multiple linear model:
$$w_i = \beta_1 + \beta_2 h_i + \beta_3 h_i^2 + \varepsilon_i.$$
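The fit in the table below can be reproduced directly from the data above. A short NumPy sketch (the coefficients should agree with the table to rounding):

```python
import numpy as np

h = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
              1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
w = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
              63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

X = np.column_stack([np.ones_like(h), h, h ** 2])   # columns: 1, HEIGHT, HEIGHT^2
beta, *_ = np.linalg.lstsq(X, w, rcond=None)
print(beta)   # approximately [128.8128, -143.1620, 61.9603]
```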
The output from most popular statistical packages will look similar to this:
{|style="border:1px solid #aaa; padding:2pt 10pt;"
|-
| Method || colspan="4" | Least squares
|-
| Dependent variable || colspan="4" | WEIGHT
|-
| Observations || colspan="4" | 15
|-
| colspan="5" |
|- style="text-align:right;"
! style="padding-left:0.5em; text-align:left;" | Parameter
! style="padding-left:0.5em;" | Value
! style="padding-left:0.5em;" | Std error
! style="padding-left:0.5em;" | t-statistic
! style="padding-left:0.5em;" | p-value
|-
| colspan="5" |
|- style="text-align:right;"
| style="text-align:left;" | β1
| 128.8128 || 16.3083 || 7.8986 || 0.0000
|- style="text-align:right;"
| style="text-align:left;" | β2
| –143.1620 || 19.8332 || –7.2183 || 0.0000
|- style="text-align:right;"
| style="text-align:left;" | β3
| 61.9603 || 6.0084 || 10.3122 || 0.0000
|-
| colspan="5" |
|-
| R2 || style="text-align:right;" | 0.9989
| colspan="2" | S.E. of regression || style="text-align:right;" | 0.2516
|-
| Adjusted R2 || style="text-align:right;" | 0.9987
| colspan="2" | Model sum-of-sq. || style="text-align:right;" | 692.61
|-
| Log-likelihood || style="text-align:right;" | 1.0890
| colspan="2" | Residual sum-of-sq. || style="text-align:right;" | 0.7595
|-
| Durbin–Watson stat. || style="text-align:right;" | 2.1013
| colspan="2" | Total sum-of-sq. || style="text-align:right;" | 693.37
|-
| Akaike criterion || style="text-align:right;" | 0.2548
| colspan="2" | F-statistic || style="text-align:right;" | 5471.2
|-
| Schwarz criterion || style="text-align:right;" | 0.3964
| colspan="2" | p-value (F-stat) || style="text-align:right;" | 0.0000
|}
In this table:
The Value column gives the least squares estimates of parameters βj
The Std error column shows standard errors of each coefficient estimate: $\hat\sigma_j = \sqrt{\hat\sigma^2\left[(X^{\mathsf T}X)^{-1}\right]_{jj}}$.
The t-statistic and p-value columns test whether each of the coefficients might be equal to zero. The t-statistic is calculated simply as $t = \hat\beta_j / \hat\sigma_j$. If the errors ε follow a normal distribution, t follows a Student's t-distribution with n − p degrees of freedom. Under weaker conditions, t is asymptotically normal. Large values of t indicate that the null hypothesis can be rejected and that the corresponding coefficient is not zero. The second column, p-value, expresses the results of the hypothesis test as a significance level. Conventionally, p-values smaller than 0.05 are taken as evidence that the population coefficient is nonzero.
R-squared is the coefficient of determination indicating goodness-of-fit of the regression. This statistic will be equal to one if fit is perfect, and to zero when regressors X have no explanatory power whatsoever. This is a biased estimate of the population R-squared, and will never decrease if additional regressors are added, even if they are irrelevant.
Adjusted R-squared is a slightly modified version of R², designed to penalize for the excess number of regressors which do not add to the explanatory power of the regression. This statistic is always smaller than R², can decrease as new regressors are added, and can even be negative for poorly fitting models:
$$\overline{R}^2 = 1 - \frac{n-1}{n-p}\left(1 - R^2\right).$$
Log-likelihood is calculated under the assumption that errors follow normal distribution. Even though the assumption is not very reasonable, this statistic may still find its use in conducting LR tests.
Durbin–Watson statistic tests whether there is any evidence of serial correlation between the residuals. As a rule of thumb, a value smaller than 2 is evidence of positive serial correlation.
Akaike information criterion and Schwarz criterion are both used for model selection. Generally when comparing two alternative models, smaller values of one of these criteria will indicate a better model.
Standard error of regression is an estimate of σ, the standard deviation of the error term.
Total sum of squares, model sum of squares, and residual sum of squares tell us how much of the initial variation in the sample was explained by the regression.
F-statistic tests the hypothesis that all coefficients (except the intercept) are equal to zero. This statistic has the F(p−1, n−p) distribution under the null hypothesis and normality assumption, and its p-value is the probability of observing an F-value at least this large if the null hypothesis were true. Note that when errors are not normal this statistic becomes invalid, and other tests such as the Wald test or the LR test should be used.
Ordinary least squares analysis often includes the use of diagnostic plots designed to detect departures of the data from the assumed form of the model. These are some of the common diagnostic plots:
Residuals against the explanatory variables in the model. A non-linear relation between these variables suggests that the linearity of the conditional mean function may not hold. Different levels of variability in the residuals for different levels of the explanatory variables suggest possible heteroscedasticity.
Residuals against explanatory variables not in the model. Any relation of the residuals to these variables would suggest considering these variables for inclusion in the model.
Residuals against the fitted values, $\hat y$.
Residuals against the preceding residual. This plot may identify serial correlations in the residuals.
An important consideration when carrying out statistical inference using regression models is how the data were sampled. In this example, the data are averages rather than measurements on individual women. The fit of the model is very good, but this does not imply that the weight of an individual woman can be predicted with high accuracy based only on her height.
Sensitivity to rounding
This example also demonstrates that coefficients determined by these calculations are sensitive to how the data is prepared. The heights were originally given rounded to the nearest inch and have been converted and rounded to the nearest centimetre. Since the conversion factor is one inch to 2.54 cm this is not an exact conversion. The original inches can be recovered by Round(x/0.0254) and then re-converted to metric without rounding. If this is done the results become:
Using either of these equations to predict the weight of a 5' 6" (1.6764 m) woman gives similar values: 62.94 kg with rounding vs. 62.98 kg without rounding. Thus a seemingly small variation in the data has a real effect on the coefficients but a small effect on the results of the equation.
While this may look innocuous in the middle of the data range it could become significant at the extremes or in the case where the fitted model is used to project outside the data range (extrapolation).
This highlights a common error: this example is an abuse of OLS, which inherently requires that the errors in the independent variable (in this case height) are zero or at least negligible. The initial rounding to the nearest inch plus any actual measurement errors constitute a finite and non-negligible error. As a result, the fitted parameters are not the best estimates they are presumed to be. Though not totally spurious, the error in the estimation will depend upon the relative size of the x and y errors.
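The rounding experiment described above is easy to reproduce. A NumPy sketch (the two predicted weights quoted in the text appear in the comments):

```python
import numpy as np

h_cm = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
                 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
w = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
              63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

# recover the original inches, then convert back to metres without rounding
h_exact = np.round(h_cm / 0.0254) * 0.0254

def predict_at(h, h0=1.6764):
    X = np.column_stack([np.ones_like(h), h, h ** 2])
    b, *_ = np.linalg.lstsq(X, w, rcond=None)
    return b[0] + b[1] * h0 + b[2] * h0 ** 2

print(predict_at(h_cm))     # ~62.94 kg (rounded heights)
print(predict_at(h_exact))  # ~62.98 kg (unrounded heights)
```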
Another example with less real data
Problem statement
We can use the least-squares mechanism to figure out the equation of a two-body orbit in polar base co-ordinates. The equation typically used is
$$r(\theta) = \frac{p}{1 - e\cos\theta},$$
where $r(\theta)$ is the radius, i.e. how far the object is from one of the bodies. In the equation the parameters $p$ and $e$ are used to determine the path of the orbit. We have measured the following data.
We need to find the least-squares approximation of $e$ and $p$ for the given data.
Solution
First we need to represent e and p in a linear form. So we are going to rewrite the equation as $\frac{1}{r(\theta)} = \frac{1}{p} - \frac{e}{p}\cos\theta$. Furthermore, one could fit for the apsides by expanding $\cos\theta$ to $\cos(\theta - \theta_0)$ with an extra parameter $\theta_0$; the model is then linear both in the original terms and in the extra basis function $\sin\theta$, which is used to extract $\theta_0$. We use the original two-parameter form to represent our observational data as:
$$A\hat x = b,$$
where $\hat x$ is the vector of unknowns $\left(\tfrac{1}{p},\ \tfrac{e}{p}\right)^{\mathsf T}$, $b$ is the vector of observed values $\tfrac{1}{r_i}$, and $A$ is constructed with the first column being the coefficient of $\tfrac{1}{p}$ (a column of ones) and the second column being the coefficient of $\tfrac{e}{p}$ (the values $-\cos\theta_i$).
On solving this system we obtain the least-squares estimate $\hat x$,
so $\hat p = 1/\hat x_1$ and $\hat e = \hat x_2\,\hat p$.
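A sketch of the whole computation follows. Since the measured (θ, r) table did not survive in this copy of the text, the data points below are hypothetical stand-ins used purely to illustrate the linearized fit:

```python
import numpy as np

theta = np.deg2rad([43.0, 45.0, 52.0, 93.0, 108.0, 116.0])   # hypothetical angles
r = np.array([4.71, 4.55, 4.04, 2.22, 1.89, 1.76])           # hypothetical radii

# 1/r = (1/p) - (e/p) cos(theta), linear in x = (1/p, e/p)
A = np.column_stack([np.ones_like(theta), -np.cos(theta)])
b = 1.0 / r
x, *_ = np.linalg.lstsq(A, b, rcond=None)

p_hat = 1.0 / x[0]
e_hat = x[1] * p_hat
```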
| Mathematics | Statistics and probability | null |
1653711 | https://en.wikipedia.org/wiki/Flathead%20%28fish%29 | Flathead (fish) | A flathead is one of a number of small to medium fish species with notably flat heads, distributed in membership across various genera of the family Platycephalidae. Many species are found in estuaries and the open ocean in the Indo-Pacific, especially most parts of Australia where they are popular sport and table fish. Flathead can grow to considerable length and weight, with the dusky flathead (Platycephalus fuscus) being the biggest, although the largest fish are seldom caught.
Anatomy and morphology
Flathead are notable for their unusual body shape, which their hunting strategy is based upon. Flathead are dorsally compressed, meaning their body is wide but flattened and very low in height. Both eyes are on the top of the flattened head, giving excellent binocular vision to attack overhead prey. The effect is somewhat similar to flounders. In contrast to flounder, however, flathead are much more elongated, the tail remains vertical, and the mouth is large, wide and symmetrical. Flathead use this body structure to hide in sand (their body colour changes to match their background), with only their eyes visible, and explode upwards and outwards to engulf small fish and prawns as they drift over, using a combination of ram and suction feeding thereby improving their chances to catch prey.
Flathead have two short venomous spikes on either side of their heads and on top of their heads. The venom, while not fatal, can cause pain and infection for about two days. Some anglers believe the pain of a flathead's sting can be reduced by rubbing the slime from the belly of the fish that caused the sting on the wound, attributing the effect to a particular gland in its belly.
Habitat
Dusky flatheads (Platycephalus fuscus) are found in estuaries and coastal bays from Cairns in Queensland to the Gippsland Lakes in Victoria. They occur over sand, mud, gravel and seagrass and can inhabit estuarine waters up to the tidal limit.
Oceanic flathead species (sand flathead, tiger flathead, bar-tailed flathead) are, as named, generally located more offshore than the dusky flathead, frequenting the sandy zones around and between coastal reefs; although bar-tailed flathead occur in many estuarine environments, for example the Swan/Canning River System in Perth.
Importance to humans
Fishermen catch flathead on a variety of baits and artificial lures all year round, but they are more commonly caught during summer. Only a handful of the many flathead species are regularly caught by fishermen. Saltwater estuaries along the east-coast of Australia are a popular place for recreational fishing for flathead species such as the Dusky Flathead. The Australian state of New South Wales has a substantial commercial flathead catch. Many flathead species are commonly used as food in Australia.
However, the sand flathead (Platycephalus bassensis) has been classified as depleted due to overfishing.
| Biology and health sciences | Acanthomorpha | Animals |
4331203 | https://en.wikipedia.org/wiki/Ellipsoid%20method | Ellipsoid method | In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions over convex sets. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.
When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.
History
The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin).
As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs. This was a notable step from a theoretical perspective: The standard algorithm for solving linear problems at the time was the simplex algorithm, which has a run time that typically is linear in the size of the problem, but for which examples exist for which it is exponential in the size of the problem. As such, having an algorithm that is guaranteed to be polynomial for all cases was a theoretical breakthrough.
Khachiyan's work showed, for the first time, that there can be algorithms for solving linear programs whose runtime can be proven to be polynomial. In practice, however, the algorithm is fairly slow and of little practical interest, though it provided inspiration for later work that turned out to be of much greater practical use. Specifically, Karmarkar's algorithm, an interior-point method, is much faster than the ellipsoid method in practice. Karmarkar's algorithm is also faster in the worst case.
The ellipsoidal algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in combinatorial optimization theory for many years. Only in the 21st century have interior-point algorithms with similar complexity properties appeared.
Description
A convex minimization problem consists of the following ingredients.
A convex function $f(x)\colon \mathbb R^n \to \mathbb R$ to be minimized over the vector $x$ (containing n variables);
Convex inequality constraints of the form $g_i(x) \leq 0$, where the functions $g_i$ are convex; these constraints define a convex set $G$.
Linear equality constraints of the form $h_i(x) = 0$.
We are also given an initial ellipsoid $\mathcal E^{(0)}$ defined as
$$\mathcal E^{(0)} = \left\{ z : \left(z - x_0\right)^{\mathsf T} P_{(0)}^{-1} \left(z - x_0\right) \leq 1 \right\}$$
containing a minimizer $x^*$, where $P_{(0)} \succ 0$ and $x_0$ is the center of $\mathcal E^{(0)}$.
Finally, we require the existence of a separation oracle for the convex set $G$. Given a point $x \in \mathbb R^n$, the oracle should return one of two answers:
"The point $x$ is in $G$", or
"The point $x$ is not in $G$, and moreover, here is a hyperplane that separates $x$ from $G$", that is, a vector $c$ such that $c^{\mathsf T} x > c^{\mathsf T} z$ for all $z \in G$.
The output of the ellipsoid method is either:
Any point in $G$ (i.e., any feasible point), or
A proof that $G$ is empty.
Inequality-constrained minimization of a function that is zero everywhere corresponds to the problem of simply identifying any feasible point. It turns out that any linear programming problem can be reduced to a linear feasibility problem (i.e. minimize the zero function subject to some linear inequality and equality constraints). One way to do this is by combining the primal and dual linear programs together into one program, and adding the additional (linear) constraint that the value of the primal solution is no worse than the value of the dual solution. Another way is to treat the objective of the linear program as an additional constraint, and use binary search to find the optimum value.
Unconstrained minimization
At the k-th iteration of the algorithm, we have a point $x^{(k)}$ at the center of an ellipsoid
$$\mathcal E^{(k)} = \left\{ x : \left(x - x^{(k)}\right)^{\mathsf T} P_{(k)}^{-1} \left(x - x^{(k)}\right) \leq 1 \right\}.$$
We query the cutting-plane oracle to obtain a vector $g^{(k)} \in \mathbb R^n$ such that
$$g^{(k)\,\mathsf T} \left(x^* - x^{(k)}\right) \leq 0.$$
We therefore conclude that
$$x^* \in \mathcal E^{(k)} \cap \left\{ z : g^{(k)\,\mathsf T}\left(z - x^{(k)}\right) \leq 0 \right\}.$$
We set $\mathcal E^{(k+1)}$ to be the ellipsoid of minimal volume containing the half-ellipsoid described above and compute $x^{(k+1)}$. The update is given by
$$x^{(k+1)} = x^{(k)} - \frac{1}{n+1} P_{(k)}\, \tilde g^{(k)}, \qquad P_{(k+1)} = \frac{n^2}{n^2 - 1}\left(P_{(k)} - \frac{2}{n+1}\, P_{(k)}\, \tilde g^{(k)} \tilde g^{(k)\,\mathsf T} P_{(k)}\right),$$
where
$$\tilde g^{(k)} = \frac{g^{(k)}}{\sqrt{g^{(k)\,\mathsf T} P_{(k)}\, g^{(k)}}}.$$
The stopping criterion is given by the property that
$$\sqrt{g^{(k)\,\mathsf T} P_{(k)}\, g^{(k)}} \leq \epsilon \quad \Longrightarrow \quad f\left(x^{(k)}\right) - f\left(x^*\right) \leq \epsilon.$$
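A minimal sketch of one such update, following the standard minimal-volume formulas above (illustrative NumPy code; assumes n ≥ 2):

```python
import numpy as np

def ellipsoid_step(x, P, g):
    # one central-cut update of the ellipsoid {z : (z-x)' P^(-1) (z-x) <= 1}
    # given a cut direction g (a subgradient at the center x); assumes n >= 2
    n = x.size
    g_tilde = g / np.sqrt(g @ P @ g)                 # normalized cut
    Pg = P @ g_tilde
    x_new = x - Pg / (n + 1)
    P_new = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return x_new, P_new
```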
Inequality-constrained minimization
At the k-th iteration of the algorithm for constrained minimization, we have a point $x^{(k)}$ at the center of an ellipsoid $\mathcal E^{(k)}$ as before. We also must maintain the value $f_{\text{best}}^{(k)}$, recording the smallest objective value of feasible iterates so far. Depending on whether or not the point $x^{(k)}$ is feasible, we perform one of two tasks:
If $x^{(k)}$ is feasible, perform essentially the same update as in the unconstrained case, by choosing a subgradient $g^{(k)} \in \partial f\left(x^{(k)}\right)$ that satisfies
$$g^{(k)\,\mathsf T}\left(x^* - x^{(k)}\right) + f\left(x^{(k)}\right) - f_{\text{best}}^{(k)} \leq 0.$$
If $x^{(k)}$ is infeasible and violates the j-th constraint, update the ellipsoid with a feasibility cut. Our feasibility cut may be a subgradient $g^{(k)} \in \partial g_j\left(x^{(k)}\right)$, which must satisfy
$$g^{(k)\,\mathsf T}\left(z - x^{(k)}\right) + g_j\left(x^{(k)}\right) \leq 0$$
for all feasible z.
Performance in convex programs
Theoretical run-time complexity guarantee
The run-time complexity guarantee of the ellipsoid method in the real RAM model is given by the following theorem.
Consider a family of convex optimization problems of the form: minimize f(x) s.t. x is in G, where f is a convex function and G is a convex set (a subset of a Euclidean space R^n). Each problem p in the family is represented by a data-vector Data(p), e.g., the real-valued coefficients in matrices and vectors representing the function f and the feasible region G. The size of a problem p, Size(p), is defined as the number of elements (real numbers) in Data(p). The following assumptions are needed:
G (the feasible region) is:
Bounded;
Has a non-empty interior (so there is a strictly-feasible point);
Given Data(p), one can compute using poly(Size(p)) arithmetic operations:
An ellipsoid that contains G;
A lower bound MinVol(p)>0 on the volume of G.
Given Data(p) and a point x in Rn, one can compute using poly(Size(p)) arithmetic operations:
A separation oracle for G (that is: either assert that x is in G, or return a hyperplane separating x from G).
A first-order oracle for f (that is: compute the value of f(x) and a subgradient f′(x)).
Under these assumptions, the ellipsoid method is "R-polynomial". This means that there exists a polynomial Poly such that, for every problem-instance p and every approximation-ratio ε > 0, the method finds a solution x satisfying
$$f(x) - \min_{z \in G} f(z) \;\leq\; \varepsilon \cdot \left[\max_{z \in G} f(z) - \min_{z \in G} f(z)\right],$$
using at most the following number of arithmetic operations on real numbers:
$$\operatorname{Poly}(\operatorname{Size}(p)) \cdot \ln\frac{V(p)}{\varepsilon},$$
where V(p) is a data-dependent quantity. Intuitively, it means that the number of operations required for each additional digit of accuracy is polynomial in Size(p). In the case of the ellipsoid method, V(p) is the ratio of the volume of the initial ellipsoid to the guaranteed lower bound MinVol(p) on the volume of G. The ellipsoid method requires at most $O\!\left(n^2 \ln\frac{V(p)}{\varepsilon}\right)$ steps, and each step requires Poly(Size(p)) arithmetic operations.
Practical performance
The ellipsoid method is used on low-dimensional problems, such as planar location problems, where it is numerically stable. Nemirovsky and Ben-Tal say that it is efficient if the number of variables is at most 20–30; this is so even if there are thousands of constraints, as the number of iterations does not depend on the number of constraints. However, in problems with many variables, the ellipsoid method is very inefficient, as the number of iterations grows as O(n²).
Even on "small"-sized problems, it suffers from numerical instability and poor performance in practice.
Theoretical importance
The ellipsoid method is an important theoretical technique in combinatorial optimization. In computational complexity theory, the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows.
The ellipsoid method can be used to show that many algorithmic problems on convex sets are polynomial-time equivalent.
Performance in linear programs
Leonid Khachiyan applied the ellipsoid method to the special case of linear programming: minimize cᵀx s.t. Ax ≤ b, where all coefficients in A, b, c are rational numbers. He showed that linear programs can be solved in polynomial time. Here is a sketch of Khachiyan's theorem.

Step 1: reducing optimization to search. The theorem of linear programming duality says that we can reduce the above minimization problem to the search problem: find x, y s.t. Ax ≤ b; Aᵀy = c; y ≤ 0; cᵀx = bᵀy. The first problem is solvable iff the second problem is solvable; in case the problem is solvable, the x-components of the solution to the second problem are an optimal solution to the first problem. Therefore, from now on, we can assume that we need to solve the following problem: find z ≥ 0 s.t. Rz ≤ r. Multiplying all rational coefficients by the common denominator, we can assume that all coefficients are integers.

Step 2: reducing search to feasibility-check. The problem find z ≥ 0 s.t. Rz ≤ r can be reduced to the binary decision problem: "is there a z ≥ 0 such that Rz ≤ r?". This can be done as follows. If the answer to the decision problem is "no", then the answer to the search problem is "None", and we are done. Otherwise, take the first inequality constraint R₁z ≤ r₁; replace it with an equality R₁z = r₁; and apply the decision problem again. If the answer is "yes", we keep the equality; if the answer is "no", it means that the inequality is redundant, and we can remove it. Then we proceed to the next inequality constraint. For each constraint, we either convert it to equality or remove it. Finally, we have only equality constraints, which can be solved by any method for solving a system of linear equations.

Step 3: reducing the decision problem to a different optimization problem. Define the residual function f(z) := max[(Rz)₁ − r₁, (Rz)₂ − r₂, (Rz)₃ − r₃, ...]. Clearly, f(z) ≤ 0 iff Rz ≤ r. Therefore, to solve the decision problem, it is sufficient to solve the minimization problem min_z f(z). The function f is convex (it is a maximum of linear functions). Denote the minimum value by f*. Then the answer to the decision problem is "yes" iff f* ≤ 0.

Step 4: in the optimization problem min_z f(z), we can assume that z is in a box of side-length 2^L, where L is the bit length of the problem data. Thus, we have a bounded convex program, which can be solved up to any accuracy ε by the ellipsoid method, in time polynomial in L.

Step 5: it can be proved that, if f* > 0, then f* > 2^−poly(L), for some polynomial. Therefore, we can pick the accuracy ε = 2^−poly(L). Then, the ε-approximate solution found by the ellipsoid method will be positive iff f* > 0, iff the decision problem is unsolvable.
Variants
The ellipsoid method has several variants, depending on what cuts exactly are used in each step.
Different cuts
In the central-cut ellipsoid method, the cuts are always through the center of the current ellipsoid. The input is a rational number ε>0, a convex body K given by a weak separation oracle, and a number R such that S(0,R) (the ball of radius R around the origin) contains K. The output is one of the following:
(a) A vector at a distance of at most ε from K, or
(b) A positive definite matrix A and a point a such that the ellipsoid E(A,a) contains K, and the volume of E(A,a) is at most ε.
The number of steps is N, the number of required accuracy digits is p := 8N, and the required accuracy of the separation oracle is d := 2^−p.
In the deep-cut ellipsoid method, the cuts remove more than half of the ellipsoid in each step. This makes it faster to discover that K is empty. However, when K is nonempty, there are examples in which the central-cut method finds a feasible point faster. The use of deep cuts does not change the order of magnitude of the run-time.
In the shallow-cut ellipsoid method, the cuts remove less than half of the ellipsoid in each step. This variant is not very useful in practice, but it has theoretical importance: it allows one to prove results that cannot be derived from other variants. The input is a rational number ε > 0, a convex body K given by a shallow separation oracle, and a number R such that S(0,R) contains K. The output is a positive definite matrix A and a point a such that one of the following holds:
(a) The ellipsoid E(A,a) has been declared "tough" by the oracle, or
(b) K is contained in E(A,a) and the volume of E(A,a) is at most ε.
The number of steps is N, and the number of required accuracy digits is p := 8N.
Different ellipsoids
There is also a distinction between the circumscribed ellipsoid and the inscribed ellipsoid methods:
In the circumscribed ellipsoid method, each iteration finds an ellipsoid of smallest volume that contains the remaining part of the previous ellipsoid. This method was developed by Yudin and Nemirovskii.
In the inscribed ellipsoid method, each iteration finds an ellipsoid of largest volume that is contained in the remaining part of the previous ellipsoid. This method was developed by Tarasov, Khachiyan and Erlikh.
The methods differ in their runtime complexity (below, n is the number of variables and epsilon is the accuracy):
The circumscribed method requires iterations, where each iteration consists of finding a separating hyperplane and finding a new circumscribed ellipsoid. Finding a circumscribed ellipsoid requires time.
The inscribed method requires iterations, where each iteration consists of finding a separating hyperplane and finding a new inscribed ellipsoid. Finding an inscribed ellipsoid requires time for some small .
The relative efficiency of the methods depends on the time required for finding a separating hyperplane, which depends on the application: if the runtime is for then the circumscribed method is more efficient, but if then the inscribed method is more efficient.
Related methods
The center-of-gravity method is a conceptually simpler method that requires fewer steps. However, each step is computationally expensive, as it requires computing the center of gravity of the current feasible polytope.
Interior point methods also allow solving convex optimization problems in polynomial time, but their practical performance is much better than that of the ellipsoid method.
| Mathematics | Optimization | null |
4334419 | https://en.wikipedia.org/wiki/Arizonasaurus | Arizonasaurus | Arizonasaurus was a ctenosauriscid archosaur from the Middle Triassic (243 million years ago). Arizonasaurus is found in the Middle Triassic Moenkopi Formation of northern Arizona. A fairly complete skeleton was found in 2002 by Sterling Nesbitt. The taxon has a large sailback formed by elongated neural spines of the vertebrae. The type species, Arizonasaurus babbitti, was named by Samuel Paul Welles in 1947.
Discovery and naming
The type species, Arizonasaurus babbitti, was named by Samuel Paul Welles in 1947 on the basis of a few teeth and a maxilla, labelled as specimen UCMP 36232. A fairly complete skeleton was found in 2002 by Sterling Nesbitt.
Description
Arizonasaurus had a sail made of tall neural spines. This sail was similar to those of other basal archosaurs, such as other ctenosauriscids like Ctenosauriscus, Bromsgroveia, and Hypselorhachis.
Arizonasaurus is described from two braincase specimens. Some ancestral features of these braincases are plesiomorphic for crurotarsans.
Below is a list of characteristics found by Nesbitt in 2005 that distinguish Arizonasaurus:
a deep fossa hidden from view on the posteroventral edge of the upward-pointing process of the maxilla;
and a tongue-groove attachment between the pubis and the ilium.
Classification
Arizonasaurus was closely related to Ctenosauriscus; and, together with a few other genera, they make up Ctenosauriscidae. The ctenosauriscids were closely related to the poposaurids, as shown by a few shared derived characteristics. The pelvic girdle in Arizonasaurus unites this taxon with Ctenosauriscus, Lotosaurus, Bromsgroveia, and Hypselorhachis. Together, newly identified pseudosuchian features act as evidence that poposaurids, such as Poposaurus, Sillosuchus, and Chatterjeea, and ctenosauriscids form a monophyletic group derived from within the rauisuchians.
Below is a phylogenetic cladogram simplified from Butler et al. in 2011 showing the cladistics of Archosauriformes, focusing mostly on Pseudosuchia:
Biogeography
Arizonasaurus is from the Middle Triassic Moenkopi Formation of northern Arizona. The presence of a poposauroid in the early Middle Triassic suggests that the divergence of the bird and crocodile lines occurred earlier than previously thought. Ctenosauriscids from the Middle Triassic show that Triassic faunas were more widely distributed than previously thought, occurring in Europe, Asia, North America and Africa. The fauna of the Moenkopi Formation represents a transitional stage between older and younger faunas.
| Biology and health sciences | Other prehistoric archosaurs | Animals |
11210910 | https://en.wikipedia.org/wiki/Port%20of%20Busan | Port of Busan | The Port of Busan is the largest port in South Korea, located in the city of Busan, South Korea. Its location is known as Busan Harbor.
The port is ranked sixth in the world's container throughput and is the largest seaport in South Korea. The port is operated by the Busan Port Authority, which was founded in 2004 as a public company. In 2019, around 22 million TEU were handled at 10 container terminals in Busan.
History
The Port of Busan was established in 1876 as a small port with strict trading between Korea, China and Japan. It is situated at the mouth of the Nakdong River, facing the Tsushima Island of Japan. During the Korean War (1950–1953), Busan was among the few places North Korea did not invade, causing war refugees to flee to the city of Busan. At that time Busan's port was crucial for receiving war materials and aid, such as fabrics and processed foods, to keep the economy stable. In the 1970s, a rise in the footwear and veneer industries caused factory workers to migrate to Busan, bringing Busan's population from 1.8 million to 3 million.
The Port of Busan continued to grow and by 2003 the port was the fourth largest container port in the world. South Korea accounted for 0.7% of global trade in 1970, but by 2003 it went up to 2.5%. 50% of the Busan's manufacturing jobs are related to exports, and 83% of the country's exports are containerized, making Busan the country's largest container and general cargo port. Compared to the Port of Busan, Inchon port handles only 7% of containers. Easy access to the Port of Busan between Japan, Singapore, and Hong Kong contribute to its vast growth.
Currently the Port of Busan is the fifth busiest container port in the world and the tenth busiest port in North-east Asia. It is developed, managed, and operated by the Busan Port Authority (BPA), established in 2004. Today the Port of Busan consists of four ports: North Port, South Port, Gamcheon Port, and Dadaepo Port, along with an International Passenger Terminal and the Gamman container terminal. The North Port provides passenger handling facilities and cargo, and with Gamcheon Port's help more cargo volumes can be handled (Ship Technology). The South Port is home to the Busan Cooperative Fish Market, which is the largest fishing base in Korea and handles 30% of the total marine volume. The Dadaepo Port, located west of the Busan Port, mainly handles coastal catches.
In 2007 the Busan Port handled cargo containing fertilizers, meat, scrap metal, petroleum and other gases, crude petroleum, coal, leather, fats and oils, iron ore, rough wood, natural sand, milling industry products, and sugar. In 2016, South Korea exported a total of $515B and imported $398B. Top exports of South Korea are integrated circuits, cars, refined petroleum, passenger and cargo ships, and vehicle parts. South Korea exports the most to China, the United States, Vietnam, Hong Kong, and Japan. Imports to South Korea mainly come from China, Japan, the United States, Germany, and other Asian countries. In 2017 Busan processed more than 20 million TEU, twenty-foot equivalents (a measure used to estimate the capacity of container ships).
The port is part of the Maritime Silk Road that runs from the Chinese coast towards the southern tip of India to Mombasa, from there through the Suez Canal to the Mediterranean and there to the Upper Adriatic region of Trieste with its rail connections to Central and Eastern Europe.
The current traffic volumes and urban population categorize Busan as a Large-Port Metropolis, using the Southampton system of port-city classification.
Port Statistics
Incidents
In 2021, a large cargo ship struck a number of cranes while berthing. There were no injuries or deaths.
Sister ports
The Port of Busan also has the following sister ports (listed in order of the date of agreement).
– Port of Southampton, United Kingdom (1978)
– Port of Seattle, United States (1981)
– Port of Osaka, Japan (1985)
– Port of Rotterdam, Netherlands (1985)
– Port of New York & New Jersey, USA (1988)
– Port of Shanghai, China (1994)
– Bandar Abbas, Iran
| Technology | Specific piers and ports | null |
11217970 | https://en.wikipedia.org/wiki/Promontory | Promontory | A promontory is a raised mass of land that projects into a lowland or a body of water (in which case it is a peninsula). Most promontories either are formed from a hard ridge of rock that has resisted the erosive forces that have removed the softer rock to the sides of it, or are the high ground that remains between two river valleys where they form a confluence. One type of promontory is a headland, or head.
Promontories in history
Located at the edge of a landmass, promontories offer a natural defense against enemies, as they are often surrounded by water and difficult to access. Many ancient and modern forts and castles have been built on promontories for this reason.
One of the most famous examples of promontory forts is the Citadel of Namur in Belgium. Located at the confluence of the Meuse and Sambre rivers, the citadel has been a prime fortified location since the 10th century. The surrounding rivers act as a natural moat, making it difficult for enemies to access the fort.
Another example of a promontory fort is Fort Pitt, which was built by the British during the French and Indian War on the site of the former Fort Duquesne, a French fort captured in that conflict. The fort was located at the confluence of the Allegheny and Monongahela rivers, providing an additional layer of defense. The surrounding area eventually became the city of Pittsburgh, Pennsylvania.
In Ireland, many promontory forts were built by the ancient Celts for defense against invaders. These forts were often located on isolated peninsulas or headlands and were difficult to access, making them ideal for defending against enemy attacks.
The ancient town of Ras Bar Balla in southern Somalia is another example of a promontory fort. Located on a small promontory, the town was part of the Ajuran Sultanate's domain during the Middle Ages and was strategically located to defend against potential invaders.
| Physical sciences | Fluvial landforms | Earth science |
17006 | https://en.wikipedia.org/wiki/Knot | Knot | A knot is an intentional complication in cordage which may be practical or decorative, or both. Practical knots are classified by function, including hitches, bends, loop knots, and splices: a hitch fastens a rope to another object; a bend fastens two ends of a rope to each another; a loop knot is any knot creating a loop; and splice denotes any multi-strand knot, including bends and loops. A knot may also refer, in the strictest sense, to a stopper or knob at the end of a rope to keep that end from slipping through a grommet or eye. Knots have excited interest since ancient times for their practical uses, as well as their topological intricacy, studied in the area of mathematics known as knot theory.
History
Knots and knotting have been used and studied throughout history. For example, Chinese knotting is a decorative handicraft art that began as a form of Chinese folk art in the Tang and Song Dynasty (960–1279 AD) in China, later popularized in the Ming. Knot theory is the recent mathematical study of knots.
Knots of ancient origin include the bottle sling, bowline, cat's paw, clove hitch, cow hitch, double fisherman's knot, eskimo bowline, figure-eight knot, fisherman's knot, half hitch, kalmyk loop, one-sided overhand bend, overhand knot, overhand loop, reef knot, running bowline, single hitch, thief knot, Turk's head knot, and two half-hitches.
The eleven main knots of Chinese knotting are the four-flower knot, six-flower knot, Chinese button knot, double connection knot, double coin knot, agemaki, cross knot, square knot, Plafond knot, Pan Chang knot, and the good luck knot.
Knots of more recent origin include the friendship knot of Chinese knotting. The sheepshank knot originates from 1627 while the Western Union splice originates from the beginning of telegraphy.
Use
There is a large variety of knots, each with properties that make it suitable for a range of tasks. Some knots are used to attach the rope (or other knotting material) to other objects such as another rope, cleat, ring, or stake. Some knots are used to bind or constrict objects. Decorative knots usually bind to themselves to produce attractive patterns.
Teaching
While some people can look at diagrams or photos and tie the illustrated knots, others learn best by watching how a knot is tied. Knot tying skills are often transmitted by sailors, scouts, climbers, canyoners, cavers, arborists, rescue professionals, stagehands, fishermen, linemen and surgeons. The International Guild of Knot Tyers is an organization dedicated to the promotion of knot tying.
Applications
Truckers in need of securing a load may use a trucker's hitch, gaining mechanical advantage. Knots can save spelunkers from being buried under rock. Many knots can also be used as makeshift tools, for example, the bowline can be used as a rescue loop, and the munter hitch can be used for belaying. The diamond hitch was widely used to tie packages on to donkeys and mules.
In hazardous environments such as mountains, knots are very important. In the event of someone falling into a ravine or a similar terrain feature, with the correct equipment and knowledge of knots a rappel system can be set up to lower a rescuer down to a casualty and set up a hauling system to allow a third individual to pull both the rescuer and the casualty out of the ravine. Further application of knots includes developing a high line, which is similar to a zip line, and which can be used to move supplies, injured people, or the untrained across rivers, crevices, or ravines. Note the systems mentioned typically require carabiners and the use of multiple appropriate knots. These knots include the bowline, double figure eight, munter hitch, munter mule, prusik, autoblock, and clove hitch. Thus any individual who goes into a mountainous environment should have basic knowledge of knots and knot systems to increase safety and the ability to undertake activities such as rappelling.
Knots can be applied in combination to produce complex objects such as lanyards and netting. In ropework, the frayed end of a rope is held together by a type of knot called a whipping knot. Many types of textiles use knots to repair damage. Macramé, one kind of textile, is generated exclusively through the use of knotting, instead of knits, crochets, weaves or felting. Macramé can produce self-supporting three-dimensional textile structures, as well as flat work, and is often used ornamentally or decoratively.
Properties
Strength
Knots weaken the rope in which they are made. When knotted rope is strained to its breaking point, it almost always fails at the knot or close to it, unless it is defective or damaged elsewhere. The bending, crushing, and chafing forces that hold a knot in place also unevenly stress rope fibers and ultimately lead to a reduction in strength. The exact mechanisms that cause the weakening and failure are complex and are the subject of continued study. Special fibers that show differences in color in response to strain are being developed and used to study stress as it relates to types of knots.
Relative knot strength, also called knot efficiency, is the breaking strength of a knotted rope in proportion to the breaking strength of the rope without the knot. Determining a precise value for a particular knot is difficult because many factors can affect a knot efficiency test: the type of fiber, the style of rope, the size of rope, whether it is wet or dry, how the knot is dressed before loading, how rapidly it is loaded, whether the knot is repeatedly loaded, and so on. The efficiency of common knots ranges between 40 and 80% of the rope's original strength.
In most situations forming loops and bends with conventional knots is far more practical than using rope splices, even though the latter can maintain nearly the rope's full strength. Prudent users allow for a large safety margin in the strength of rope chosen for a task due to the weakening effects of knots, aging, damage, shock loading, etc. The working load limit of a rope is generally specified with a significant safety factor, up to 15:1 for critical applications. For life-threatening applications, other factors come into play.
Security
Even if the rope does not break, a knot may still fail to hold. Knots that hold firm under a variety of adverse conditions are said to be more secure than those that do not.
The following sections describe the main ways that knots fail to hold.
Slipping
The load creates tension that pulls the rope back through the knot in the direction of the load. If this continues far enough, the working end passes into the knot and the knot unravels and fails. This behavior can worsen when the knot is repeatedly strained and let slack, dragged over rough terrain, or repeatedly struck against hard objects such as masts and flagpoles.
Even with secure knots, slippage may occur when the knot is first put under real tension. This can be mitigated by leaving plenty of rope at the working end outside of the knot, and by dressing the knot cleanly and tightening it as much as possible before loading. Sometimes, the use of a stopper knot or, even better, a backup knot can prevent the working end from passing through the knot; but if a knot is observed to slip, it is generally preferable to use a more secure knot. Life-critical applications often require backup knots to maximize safety.
Capsizing
To capsize (or spill) a knot is to change its form and rearrange its parts, usually by pulling on specific ends in certain ways. When used inappropriately, some knots tend to capsize easily or even spontaneously. Often the capsized form of the knot offers little resistance to slipping or unraveling. A reef knot, when misused as a bend, can capsize dangerously.
Sometimes a knot is intentionally capsized as a method of tying another knot, as with the "lightning method" of tying a bowline. Some knots, such as the carrick bend, are generally tied in one form then capsized to obtain a stronger or more stable form.
Sliding
In knots that are meant to grip other objects, failure can be defined as the knot moving relative to the gripped object. While the knot itself is not untied, it ceases to perform the desired function. For instance, a simple rolling hitch tied around a railing and pulled parallel to the railing might hold up to a certain tension, then start sliding. Sometimes this problem can be corrected by working-up the knot tighter before subjecting it to load, but usually the problem requires either a knot with more wraps or a rope of different diameter or material.
Releasability
Knots differ in the effort required to untie them after loading. Knots that are very difficult to untie, such as the water knot, are said to "jam" or be jamming knots. Knots that come untied with less difficulty, such as the Zeppelin bend, are referred to as "non-jamming".
Components
Bight
A bight is any curved section, slack part, or loop between the ends of a rope, string, or yarn.
Bitter end
As a ropeworker's term, "bitter end" refers to the end of a rope that is tied off. In British nautical usage, the bitter end is the ship end of the anchor cable, secured by the anchor bitts and the bitter pin in the cable locker under the forecastle. At anchor, the more anchor line that is payed out, the better the anchor's hold. In a storm, if the anchor drags, ships will pay out more and more anchor line until they reach the "bitter end." At this point, they can only hope the anchor holds, hence the expression "hanging on to the bitter end". (A bitt is a metal block with a crosspin for tying lines to, also found on piers.) Also, the working end.
Loop
A curve narrower than a bight but with separate ends.
Elbow
Two crossing points created by an extra twist in a loop or a circle.
Standing end
The standing end is the longer end of the rope not involved in the knot, often shown as unfinished. It is often (but not always) the end of the rope under load after the knot is complete. For example, when a clove hitch ties a boat to a pier, the end going to the boat is the standing end.
Standing part
Section of line between knot and the standing end (seen above).
Turn
A turn or single turn is a curve with crossed legs.
A round turn is the complete encirclement of an object; requires two passes.
Two round turns encircle the object twice; this requires three passes.
Working end
The active end of a line used in making the knot. May also be called the "running end", "live end", or "tag end".
Working part
Section of line between knot and the working end.
Knot categories
The list of knots is extensive, but common properties allow for a useful system of categorization. For example, loop knots share the attribute of having some kind of an anchor point constructed on the standing end (such as a loop or overhand knot) into which the working end is easily hitched, using a round turn. An example of this is the bowline. Constricting knots often rely on friction to cinch down tight on loose bundles; an example is the Miller's knot. Knots may belong to more than one category.
Bend
A knot uniting two lines (for knots joining two ends of the same line, see binding knots or loops).
Binding
A knot that restricts object(s) by making multiple winds.
Coil knot
Knots used to tie up lines for storage.
Decorative knot
A complex knot exhibiting repeating patterns, often constructed around and enhancing an object.
Hitch
A knot tied to a post, cable, ring, or spar.
Lashing
A knot used to hold (usually) poles together.
Loop
A knot used to create a closed circle in a line.
Plait (or braid)
A number of lines interwoven in a simple regular pattern.
Slip (or running)
A knot tied with a hitch around one of its parts. In contrast, a loop is closed with a bend. While a slip knot can be closed, a loop remains the same size.
Slipped
Some knots may be finished by passing a bight rather than the end, for ease of untying. The common shoelace knot is an example, being a reef knot with both ends slipped.
Seizing
A knot used to hold two lines or two parts of the same line together.
Sennit
A number of lines interwoven in a complex pattern. | Technology | Components_2 | null
17011 | https://en.wikipedia.org/wiki/Orca | Orca | The orca (Orcinus orca), or killer whale, is a toothed whale and the largest member of the oceanic dolphin family. It is the only extant species in the genus Orcinus and is recognizable by its black-and-white patterned body. A cosmopolitan species, it is found in diverse marine environments, from Arctic to Antarctic regions to tropical seas.
Orcas are apex predators with a diverse diet. Individual populations often specialize in particular types of prey. This includes a variety of fish, sharks, rays, and marine mammals such as seals and other dolphins and whales. They are highly social; some populations are composed of highly stable matrilineal family groups (pods). Their sophisticated hunting techniques and vocal behaviors, often specific to a particular group and passed along from generation to generation, are considered to be manifestations of animal culture.
The International Union for Conservation of Nature assesses the orca's conservation status as data deficient because of the likelihood that two or more orca types are separate species. Some local populations are considered threatened or endangered due to prey depletion, habitat loss, pollution (by PCBs), capture for marine mammal parks, and conflicts with human fisheries. In late 2005, the southern resident orcas were placed on the U.S. Endangered Species list.
Orcas are not usually a threat to humans, and no fatal attack has ever been documented in their natural habitat. There have been cases of captive orcas killing or injuring their handlers at marine theme parks.
Naming
Orcas, despite being dolphins, are commonly called 'killer whales' due to a mistranslation of the Spanish 'asesino de ballenas' (literally 'whale killer'), reflecting their historical predation on whales. Since the 1960s, the use of "orca" instead of "killer whale" has steadily grown in common use.
The genus name Orcinus means 'of the kingdom of the dead', or 'belonging to Orcus'. Ancient Romans originally used orca (plural orcae) for these animals, possibly borrowing Ancient Greek óryx, which referred (among other things) to a whale species, perhaps a narwhal. As part of the family Delphinidae, the species is more closely related to other oceanic dolphins than to other whales.
They are sometimes referred to as 'blackfish', a name also used for other whale species. 'Grampus' is a former name for the species, but is now seldom used. This meaning of 'grampus' should not be confused with the genus Grampus, whose only member is Risso's dolphin.
Taxonomy
Orcinus orca is the only recognized extant species in the genus Orcinus, and one of many animal species originally described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. Konrad Gessner wrote the first scientific description of an orca in his Piscium & aquatilium animantium natura of 1558, part of the larger Historia animalium, based on examination of a dead stranded animal in the Bay of Greifswald that had attracted a great deal of local interest.
The orca is one of 35 species in the oceanic dolphin family, which first appeared about 11 million years ago. The orca lineage probably branched off shortly thereafter. Although it has morphological similarities with the false killer whale, the pygmy killer whale and the pilot whales, a study of cytochrome b gene sequences indicates that its closest extant relatives are the snubfin dolphins of the genus Orcaella. However, a more recent (2018) study places the orca as a sister taxon to the Lissodelphininae, a clade that includes Lagenorhynchus and Cephalorhynchus. In contrast, a 2019 phylogenetic study found the orca to be the second most basal member of the Delphinidae, with only the Atlantic white-sided dolphin (Leucopleurus acutus) being more basal.
Types
The three to five types of orcas may be distinct enough to be considered different races, subspecies, or possibly even species (see Species problem). The IUCN reported in 2008, "The taxonomy of this genus is clearly in need of review, and it is likely that O. orca will be split into a number of different species or at least subspecies over the next few years." Although large variation in the ecological distinctiveness of different orca groups complicates simple differentiation into types, research off the west coast of North America has identified fish-eating "residents", mammal-eating "transients" and "offshores". Other populations have not been as well studied, although specialized fish and mammal eating orcas have been distinguished elsewhere. Mammal-eating orcas in different regions were long thought likely to be closely related, but genetic testing has refuted this hypothesis.
A 2024 study supported the elevation of eastern North Pacific resident and transient orcas to distinct species, O. ater and O. rectipinnus respectively. The Society for Marine Mammalogy declined to recognize the two species, citing uncertainty as to whether the types constituted unique species or subspecies. "Pending a more complete global review and revision", the Society provisionally recognized them as subspecies Orcinus orca ater and O. o. rectipinnus, with O. o. orca as the nominate subspecies.
Four types have been documented in the Antarctic, Types A–D. Two dwarf species, named Orcinus nanus and Orcinus glacialis, were described during the 1980s by Soviet researchers, but most cetacean researchers are skeptical about their status. Complete mitochondrial sequencing indicates the two Antarctic groups (types B and C) should be recognized as distinct species, as should the North Pacific transients, leaving the others as subspecies pending additional data. A 2019 study of Type D orcas also found them to be distinct from other populations and possibly even a unique species.
Characteristics
Orcas are the largest extant members of the dolphin family. Males typically range from 6 to 8 m (20 to 26 ft) long and weigh in excess of 6 tonnes. Females are smaller, generally ranging from 5 to 7 m (16 to 23 ft) and weighing about 3 to 4 tonnes. Orcas may attain larger sizes: males have been recorded at 9.8 m (32 ft) and females at 8.5 m (28 ft). Large males can reach a weight of over 10 tonnes. Calves at birth weigh about 180 kg (400 lb) and are about 2.4 m (7.9 ft) long. The skeleton of the orca is typical for an oceanic dolphin, but more robust.
With their distinctive pigmentation, adult orcas are seldom confused with any other species. When seen from a distance, juveniles can be confused with false killer whales or Risso's dolphins. The orca is mostly black, with sharply bordered white areas. The entire lower jaw is white, and from there the colouration stretches across the underside to the genital area, narrowing and widening along the way and extending into lateral flank patches near the tail. The underside of the tail fluke is also white, the eyes have white oval-shaped patches behind and above them, and a grey or white "saddle patch" lies behind the dorsal fin and across the back. Males and females also have different patterns of black and white skin in their genital areas. In newborns, the white areas are yellow or orange coloured. Antarctic orcas may have pale grey to nearly white backs, and some are tinged brown and yellow by diatoms in the water. Both albino and melanistic orcas have been documented.
Orca pectoral fins are large and rounded, resembling paddles, with those of males significantly larger than those of females. Dorsal fins also exhibit sexual dimorphism: the male's, at up to about 1.8 m (5.9 ft) high, is more than twice the size of the female's and shaped more like an elongated isosceles triangle, whereas the female's is shorter and more curved. In the skull, adult males have longer lower jaws than females, as well as larger occipital crests. The snout is blunt and lacks the beak of other species. The orca's teeth are very strong, and its jaws exert a powerful grip; the upper teeth fall into the gaps between the lower teeth when the mouth is closed. The firm middle and back teeth hold prey in place, while the front teeth are inclined slightly forward and outward to protect them from powerful jerking movements.
Orcas have good eyesight above and below the water, excellent hearing, and a good sense of touch. They have exceptionally sophisticated echolocation abilities, detecting the location and characteristics of prey and other objects in the water by emitting clicks and listening for echoes, as do other members of the dolphin family. The mean body temperature of the orca is 36 to 38 °C (97 to 100 °F). Like most marine mammals, orcas have a layer of insulating blubber, ranging from 7.6 to 10 cm (3.0 to 3.9 in) thick, beneath the skin. The pulse is about 60 heartbeats per minute when the orca is at the surface, dropping to 30 beats/min when submerged.
An individual orca can often be identified from its dorsal fin and saddle patch. Variations such as nicks, scratches, and tears on the dorsal fin and the pattern of white or grey in the saddle patch are unique. Published directories contain identifying photographs and names for hundreds of North Pacific animals. Photographic identification has enabled the local population of orcas to be counted each year rather than estimated, and has enabled great insight into life cycles and social structures.
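In effect, a photo-identification directory is a lookup keyed on stable visible features. The sketch below is a minimal illustration of that idea under invented data: the whale IDs echo real naming conventions, but the feature descriptions and the matching rule are hypothetical, not any research group's actual workflow.

```python
# Minimal sketch of photo-identification matching: each catalogued
# individual is described by a set of observed fin/saddle features,
# and a new sighting is matched to the entry sharing the most features.
# All feature descriptions here are hypothetical placeholders.

CATALOG = {
    "J1": {"tall straight fin", "open saddle", "notch near fin tip"},
    "J2": {"curved fin", "closed saddle", "two mid-fin nicks"},
    "L87": {"tall straight fin", "closed saddle", "scarred saddle"},
}

def best_match(sighting_features: set):
    """Return the catalogued ID sharing the most features, with the count."""
    scored = [(len(features & sighting_features), whale_id)
              for whale_id, features in CATALOG.items()]
    score, whale_id = max(scored)
    return (whale_id, score) if score > 0 else (None, 0)

print(best_match({"curved fin", "two mid-fin nicks"}))  # ('J2', 2)
```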
Range and habitat
Orcas are found in all oceans and most seas. Due to their enormous range, numbers, and density, relative distribution is difficult to estimate, but they clearly prefer higher latitudes and coastal areas over pelagic environments. Areas which serve as major study sites for the species include the coasts of Iceland, Norway, the Valdés Peninsula of Argentina, the Crozet Islands, New Zealand and parts of the west coast of North America, from California to Alaska. Systematic surveys indicate the highest densities of orcas (>0.40 individuals per 100 km2) in the northeast Atlantic around the Norwegian coast, in the north Pacific along the Aleutian Islands, the Gulf of Alaska and in the Southern Ocean off much of the coast of Antarctica. They are considered "common" (0.20–0.40 individuals per 100 km2) in the eastern Pacific along the coasts of British Columbia, Washington and Oregon, in the North Atlantic Ocean around Iceland and the Faroe Islands.
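Read as a rule, these survey bands amount to a simple two-threshold classification. The sketch below encodes just the two figures quoted here; the label used for densities below the "common" band is a placeholder of mine, not a term from the surveys.

```python
def density_category(individuals_per_100km2: float) -> str:
    """Classify orca survey density using the thresholds cited above
    (>0.40 highest density, 0.20-0.40 'common'); the label for lower
    densities is a placeholder, not a term from the surveys."""
    if individuals_per_100km2 > 0.40:
        return "highest density"
    if individuals_per_100km2 >= 0.20:
        return "common"
    return "lower density"

print(density_category(0.55))  # highest density (e.g. Norwegian coast)
print(density_category(0.30))  # common (e.g. British Columbia)
```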
In the Antarctic, orcas range up to the edge of the pack ice and are believed to venture into the denser pack ice, finding open leads much like beluga whales in the Arctic. However, orcas are merely seasonal visitors to Arctic waters, and do not approach the pack ice in the summer. With the rapid Arctic sea ice decline in the Hudson Strait, their range now extends deep into the northwest Atlantic. Occasionally, orcas swim into freshwater rivers. They have been documented up the Columbia River in the United States. They have also been found in the Fraser River in Canada and the Horikawa River in Japan.
Migration patterns are poorly understood. Each summer, the same individuals appear off the coasts of British Columbia and Washington. Despite decades of research, where these animals go for the rest of the year remains unknown. Transient pods have been sighted from southern Alaska to central California.
Population
Worldwide population estimates are uncertain, but recent consensus suggests a minimum of 50,000 (2006). Local estimates include roughly 25,000 in the Antarctic, 8,500 in the tropical Pacific, 2,250–2,700 off the cooler northeast Pacific and 500–1,500 off Norway. Japan's Fisheries Agency estimated in the 2000s that 2,321 orcas were in the seas around Japan.
Feeding
Orcas are apex predators, meaning that they themselves have no natural predators. They are sometimes called "wolves of the sea", because they hunt in groups like wolf packs. Orcas hunt varied prey including fish, cephalopods, mammals, seabirds, and sea turtles. Different populations or ecotypes may specialize, and some can have a dramatic impact on prey species. However, whales in tropical areas appear to have more generalized diets due to lower food productivity. Orcas spend most of their time at shallow depths, but occasionally dive several hundred metres depending on their prey.
Fish
Fish-eating orcas prey on around 30 species of fish. Some populations in the Norwegian and Greenland seas specialize in herring and follow that fish's autumnal migration to the Norwegian coast. Salmon account for 96% of northeast Pacific residents' diet, with large, fatty Chinook alone making up 65%. Chum salmon are also eaten, but smaller sockeye and pink salmon are not a significant food item. Depletion of specific prey species in an area is, therefore, cause for concern for local populations, despite the high diversity of prey. On average, an orca eats about 227 kg (500 lb) of food each day. While salmon are usually hunted by an individual whale or a small group, herring are often caught using carousel feeding: the orcas force the herring into a tight ball by releasing bursts of bubbles or flashing their white undersides. They then slap the ball with their tail flukes, stunning or killing up to 15 fish at a time, and eat them one by one. Among orcas, carousel feeding has been documented only in the Norwegian population, although some oceanic dolphin species use a similar technique.
In New Zealand, sharks and rays appear to be important prey, including eagle rays, long-tail and short-tail stingrays, common threshers, smooth hammerheads, blue sharks, basking sharks, and shortfin makos. Orcas may herd sharks to the surface and strike them with their tail flukes, while bottom-dwelling rays are cornered, pinned to the ground and taken to the surface. In other parts of the world, orcas have preyed on broadnose sevengill sharks, whale sharks, and even great white sharks. Competition between orcas and white sharks is probable in regions where their diets overlap; the arrival of orcas in an area can cause white sharks to flee and forage elsewhere. Orcas appear to target the liver of sharks. In one case, a lone orca was observed killing and eating a great white shark.
Mammals and birds
Orcas are sophisticated and effective predators of marine mammals. They are recorded to prey on other cetacean species, usually smaller dolphins and porpoises such as common dolphins, bottlenose dolphins, Pacific white-sided dolphins, dusky dolphins, harbour porpoises and Dall's porpoises. While hunting these species, orcas usually have to chase them to exhaustion. With highly social species, orca pods try to separate an individual from its group; larger groups have a better chance of preventing their prey from escaping, and the prey is killed by being thrown around, rammed and jumped on. Arctic orcas may attack beluga whales and narwhals stuck in pools enclosed by sea ice; belugas are also driven into shallower water, where juveniles are grabbed. By contrast, orcas appear to be wary of pilot whales, which have been recorded to mob and chase them. Nevertheless, possible predation on long-finned pilot whales has been recorded in Iceland, one study suggests short-finned pilot whales are among Caribbean orcas' prey, and orcas have been recorded attacking short-finned pilot whales in Peru as well.
Orcas also prey on larger species such as sperm whales, grey whales, humpback whales and minke whales. On three separate occasions in 2019, orcas were recorded to have killed blue whales off the south coast of Western Australia, including an individual estimated at 18–22 m (59–72 ft). Large whales require much effort and coordination to kill, and orcas often target calves. A hunt begins with a chase followed by a violent attack on the exhausted prey. Large whales often show signs of orca attack via tooth rake marks. Pods of female sperm whales sometimes protect themselves by forming a protective circle around their calves with their flukes facing outwards, using them to repel the attackers. There is also evidence that humpback whales will defend against or mob orcas that are attacking humpback calves or juveniles, as well as members of other species.
Prior to the advent of industrial whaling, great whales may have been the major food source for orcas. The introduction of modern whaling techniques may even have aided orcas: the sound of exploding harpoons signalled the availability of prey to scavenge, and compressed-air inflation kept whale carcasses afloat, exposing them to scavenging. However, the devastation of great whale populations by unfettered whaling has possibly reduced their availability for orcas and caused them to expand their consumption of smaller marine mammals, thus contributing to the decline of these as well.
Other marine mammal prey includes seal species such as harbour seals, elephant seals, California sea lions, Steller sea lions, South American sea lions and walruses. Often, to avoid injury, orcas disable their prey before killing and eating it. This may involve throwing it in the air, slapping it with their tails, ramming it, or breaching and landing on it. In steeply banked beaches off Península Valdés, Argentina, and the Crozet Islands, orcas feed on South American sea lions and southern elephant seals in shallow water, even beaching temporarily to grab prey before wriggling back to the sea. Beaching, usually fatal to cetaceans, is not an instinctive behaviour, and can require years of practice for the young. Orcas can then release the animal near juvenile whales, allowing the younger whales to practice the difficult capture technique on the now-weakened prey. In the Antarctic, type B orcas hunt Weddell seals and other prey by "wave-hunting". They "spy-hop" to locate seals resting on ice floes, and then swim in groups to create waves that wash over the floe. This washes the prey into the water, where other orcas lie in wait.
In the Aleutian Islands, a decline in sea otter populations in the 1990s was controversially attributed by some scientists to orca predation, although with no direct evidence. The decline of sea otters followed a decline in seal populations, which may themselves have been substitutes for the orcas' original prey, the great whales, now decimated by industrial whaling. Orcas have been observed preying on terrestrial mammals, such as moose swimming between islands off the northwest coast of North America. Orca cannibalism has also been reported based on analysis of stomach contents, but this is likely to be the result of scavenging remains dumped by whalers. One orca was also attacked by its companions after being shot. Although resident orcas have never been observed to eat other marine mammals, they occasionally harass and kill porpoises and seals for no apparent reason. Some dolphins recognize resident orcas as harmless and remain in the same area.
Orcas do consume seabirds but are more likely to kill and leave them uneaten. Penguin species recorded as prey in Antarctic and sub-Antarctic waters include gentoo penguins, chinstrap penguins, king penguins and rockhopper penguins. Orcas in many areas may prey on cormorants and gulls. A captive orca at Marineland of Canada discovered it could regurgitate fish onto the surface, attracting sea gulls, and then eat the birds. Four others then learned to copy the behaviour.
Behaviour
Day-to-day orca behaviour generally consists of foraging, travelling, resting and socializing. Orcas frequently engage in surface behaviour such as breaching (jumping completely out of the water) and tail-slapping. These activities may have a variety of purposes, such as courtship, communication, dislodging parasites, or play. Spyhopping is a behaviour in which a whale holds its head above water to view its surroundings. Resident orcas swim alongside porpoises and other dolphins.
Orcas will engage in surplus killing, that is, killing prey they do not then eat. As an example, a BBC film crew witnessed orcas in British Columbia playing with a male Steller sea lion to exhaustion, but not eating it.
Some orcas have been observed swimming with dead salmon on their heads, resembling hats.
Social structure
Orcas are notable for their complex societies. Only elephants and higher primates live in comparably complex social structures. Due to orcas' complex social bonds, many marine experts have concerns about how humane it is to keep them in captivity.
Resident orcas in the eastern North Pacific live in particularly complex and stable social groups. Unlike any other known mammal social structure, resident whales live with their mothers for their entire lives. These family groups are based on matrilines consisting of the eldest female (matriarch) and her sons and daughters, and the descendants of her daughters, etc. The average size of a matriline is 5.5 animals. Because females can reach age 90, as many as four generations travel together. These matrilineal groups are highly stable. Individuals separate for only a few hours at a time, to mate or forage. With one exception, an orca named Luna, no permanent separation of an individual from a resident matriline has been recorded.
Closely related matrilines form loose aggregations called pods, usually consisting of one to four matrilines. Unlike matrilines, pods may separate for weeks or months at a time. DNA testing indicates resident males nearly always mate with females from other pods. Clans, the next level of resident social structure, are composed of pods with similar dialects, and common but older maternal heritage. Clan ranges overlap, mingling pods from different clans. The highest association layer is the community, which consists of pods that regularly associate with each other but share no maternal relations or dialects.
Transient pods are smaller than resident pods, typically consisting of an adult female and one or two of her offspring. Males typically maintain stronger relationships with their mothers than with other females. These bonds can extend well into adulthood. Unlike residents, extended or permanent separation of transient offspring from natal matrilines is common, with juveniles and adults of both sexes participating. Some males become "rovers" and do not form long-term associations, occasionally joining groups that contain reproductive females. As in resident clans, transient community members share an acoustic repertoire, although regional differences in vocalizations have been noted.
As with residents and transients, the lifestyle of other populations appears to reflect their diet: fish-eating orcas off Norway have resident-like social structures, while mammal-eating orcas in Argentina and the Crozet Islands behave more like transients.
Orcas of the same sex and age group may engage in physical contact and synchronous surfacing. These behaviours do not occur randomly among individuals in a pod, providing evidence of "friendships".
Vocalizations
Like all cetaceans, orcas depend heavily on underwater sound for orientation, feeding, and communication. They produce three categories of sounds: clicks, whistles, and pulsed calls. Clicks are believed to be used primarily for navigation and discriminating prey and other objects in the surrounding environment, but are also commonly heard during social interactions.
Northeast Pacific resident groups tend to be much more vocal than transient groups in the same waters. Residents feed primarily on Chinook and chum salmon, which are insensitive to orca calls (inferred from the audiogram of Atlantic salmon). In contrast, the marine mammal prey of transients hear whale calls well and thus transients are typically silent. Vocal behaviour in these whales is mainly limited to surfacing activities and milling (slow swimming with no apparent direction) after a kill.
All members of a resident pod use similar calls, known collectively as a dialect. Dialects are composed of specific numbers and types of discrete, repetitive calls. They are complex and stable over time. Call patterns and structure are distinctive within matrilines. Newborns produce calls similar to their mothers, but have a more limited repertoire. Individuals likely learn their dialect through contact with pod members. Family-specific calls have been observed more frequently in the days following a calf's birth, which may help the calf learn them. Dialects are probably an important means of maintaining group identity and cohesiveness. Similarity in dialects likely reflects the degree of relatedness between pods, with variation growing over time. When pods meet, dominant call types decrease and subset call types increase. The use of both call types is called biphonation. The increased subset call types may be the distinguishing factor between pods and inter-pod relations.
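One way to make "similarity in dialects" concrete is to compare the sets of discrete call types two pods use. The sketch below scores overlap with a Jaccard index over hypothetical call-type labels; it illustrates the idea only and is not the method of any particular study.

```python
# Sketch: dialect similarity as overlap between pods' discrete call
# repertoires, scored with a Jaccard index. Pod groupings and the
# call-type labels (N1, N2, ...) are hypothetical placeholders.

def dialect_similarity(calls_a: set, calls_b: set) -> float:
    """Jaccard index: shared call types / all call types used by either pod."""
    return len(calls_a & calls_b) / len(calls_a | calls_b)

pod_a = {"N1", "N2", "N3", "N4", "N7"}
pod_b = {"N1", "N2", "N3", "N9"}   # large overlap: likely closely related
pod_c = {"S1", "S2", "S6"}         # no shared calls: likely a different clan

print(dialect_similarity(pod_a, pod_b))  # 0.5
print(dialect_similarity(pod_a, pod_c))  # 0.0
```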
Dialects also distinguish types. Resident dialects contain seven to 17 (mean = 11) distinctive call types. All members of the North American west coast transient community express the same basic dialect, although minor regional variation in call types is evident. Preliminary research indicates offshore orcas have group-specific dialects unlike those of residents and transients.
Norwegian and Icelandic herring-eating orcas appear to have different vocalizations for activities like hunting. A population that lives in McMurdo Sound, Antarctica, has 28 complex burst-pulse and whistle calls.
Intelligence
Orcas have the second-heaviest brains among marine mammals (after sperm whales, which have the largest brain of any animal). Orcas have more gray matter and more cortical neurons than any mammal, including humans. They can be trained in captivity and are often described as intelligent, although defining and measuring "intelligence" is difficult in a species whose environment and behavioural strategies are very different from those of humans. Orcas imitate others, and seem to deliberately teach skills to their kin. Off the Crozet Islands, mothers push their calves onto the beach, waiting to pull the youngster back if needed. In March 2023, a female orca was spotted with a newborn pilot whale in Snæfellsnes.
People who have interacted closely with orcas offer numerous anecdotes demonstrating the whales' curiosity, playfulness, and ability to solve problems. Alaskan orcas have not only learned how to steal fish from longlines, but have also overcome a variety of techniques designed to stop them, such as the use of unbaited lines as decoys. Once, fishermen placed their boats several miles apart, taking turns retrieving small amounts of their catch, in the hope that the whales would not have enough time to move between boats to steal the catch as it was being retrieved. The tactic worked initially, but the orcas adapted quickly and split into groups.
In other anecdotes, researchers describe incidents in which wild orcas playfully tease humans by repeatedly moving objects the humans are trying to reach, or suddenly start to toss around a chunk of ice after a human throws a snowball.
The orca's use of dialects and the passing of other learned behaviours from generation to generation have been described as a form of animal culture.
Life cycle
Female orcas begin to mature at around the age of 10 and reach peak fertility around 20, experiencing periods of polyestrous cycling separated by non-cycling periods of three to 16 months. Females can often breed until age 40, followed by a rapid decrease in fertility. Orcas are among the few animals that undergo menopause and live for decades after they have finished breeding. The lifespans of wild females average 50 to 80 years. Some are claimed to have lived substantially longer: Granny (J2) was estimated by some researchers to have been as old as 105 years at the time of her death, though a biopsy sample indicated her age as 65 to 80 years. It is thought that orcas held in captivity tend to have shorter lives than those in the wild, although this is subject to scientific debate.
Males mate with females from other pods, which prevents inbreeding. Gestation varies from 15 to 18 months. Mothers usually calve a single offspring about once every five years. In resident pods, births occur at any time of year, although winter is the most common. Mortality is extremely high during the first seven months of life, when 37–50% of all calves die. Weaning begins at about 12 months of age, and is complete by two years. According to observations in several regions, all male and female pod members participate in the care of the young.
Males sexually mature at the age of 15, but do not typically reproduce until age 21. Wild males live around 29 years on average, with a maximum of about 60 years. One male, known as Old Tom, was reportedly spotted every winter between the 1840s and 1930 off New South Wales, Australia, which would have made him up to 90 years old. Examination of his teeth indicated he died around age 35, but this method of age determination is now believed to be inaccurate for older animals. One male known to researchers in the Pacific Northwest (identified as J1) was estimated to have been 59 years old when he died in 2010. Orcas are unique among cetaceans, as their caudal sections elongate with age, making their heads relatively shorter.
Infanticide, once thought to occur only in captive orcas, was observed in wild populations by researchers off British Columbia on December 2, 2016. In this incident, an adult male killed the calf of a female within the same pod, with the adult male's mother also joining in the assault. It is theorized that the male killed the young calf in order to mate with its mother (something that occurs in other carnivore species), while the male's mother supported the breeding opportunity for her son. The attack ended when the calf's mother struck and injured the attacking male. Such behaviour matches that of many smaller dolphin species, such as the bottlenose dolphin.
Conservation
In 2008, the IUCN (International Union for Conservation of Nature) changed its assessment of the orca's conservation status from conservation dependent to data deficient, recognizing that one or more orca types may actually be separate, endangered species. Depletion of prey species, pollution, large-scale oil spills, and habitat disturbance caused by noise and conflicts with boats are the most significant worldwide threats. In January 2020, the first orca in England and Wales since 2001 was found dead with a large fragment of plastic in its stomach.
Like other animals at the highest trophic levels, the orca is particularly at risk of poisoning from bioaccumulation of toxins, including polychlorinated biphenyls (PCBs). European harbour seals have problems in reproductive and immune functions associated with high levels of PCBs and related contaminants, and a survey off the Washington coast found PCB levels in orcas were higher than levels that had caused health problems in harbour seals. Blubber samples from orcas in the Norwegian Arctic show higher levels of PCBs, pesticides and brominated flame retardants than samples from polar bears. A 2018 study published in Science found that global orca populations are poised to decline dramatically due to such toxic pollution.
In the Pacific Northwest, wild salmon stocks, a main resident food source, have declined dramatically in recent years. In the Puget Sound region, only 75 whales remain with few births over the last few years. On the west coast of Alaska and the Aleutian Islands, seal and sea lion populations have also substantially declined.
In 2005, the United States government listed the southern resident community as an endangered population under the Endangered Species Act. This community comprises three pods which live mostly in the Georgia and Haro Straits and Puget Sound in British Columbia and Washington. They do not breed outside of their community, which was once estimated at 200 animals and later shrank to around 90. In October 2008, the annual survey revealed seven were missing and presumed dead, reducing the count to 83. This is potentially the largest decline in the population in the past 10 years. These deaths can be attributed to declines in Chinook salmon.
Scientist Ken Balcomb has extensively studied orcas since 1976; he is the research biologist responsible for discovering that U.S. Navy sonar may harm orcas. He studied orcas from the Center for Whale Research, located in Friday Harbor, Washington. He was also able to study orcas from "his home porch perched above Puget Sound, where the animals hunt and play in summer months". In May 2003, Balcomb (along with other whale watchers near the Puget Sound coastline) noticed uncharacteristic behaviour displayed by the orcas. The whales seemed "agitated and were moving haphazardly, attempting to lift their heads free of the water" to escape the sound of the sonar. "Balcomb confirmed at the time that strange underwater pinging noises detected with underwater microphones were sonar. The sound originated from a U.S. Navy frigate 12 miles (19 kilometres) distant, Balcomb said." The impact of sonar waves on orcas is potentially life-threatening. Three years prior to Balcomb's discovery, 14 beaked whales had washed up on shore in the Bahamas, beaching on the day U.S. Navy destroyers conducted a sonar exercise. Six of the 14 whales died, and CAT scans of two of the whale heads showed hemorrhaging around the brain and the ears, which is consistent with decompression sickness.
Another conservation concern was made public in September 2008, when the Canadian government decided it was not necessary to enforce further protections for orcas (including under the Species at Risk Act, in place to protect endangered animals along with their habitats) beyond the laws already in place. In response to this decision, six environmental groups sued the federal government, claiming orcas were facing many threats on the British Columbia coast and the federal government had done nothing to protect them from these threats. A legal and scientific nonprofit organization, Ecojustice, led the lawsuit and represented the David Suzuki Foundation, Environmental Defence, Greenpeace Canada, International Fund for Animal Welfare, the Raincoast Conservation Foundation, and the Wilderness Committee. Many scientists involved in this lawsuit, including Bill Wareham, a marine scientist with the David Suzuki Foundation, noted increased boat traffic, toxic waste in the water, and low salmon populations as major threats, putting approximately 87 orcas on the British Columbia coast in danger.
Underwater noise from shipping, drilling, and other human activities is a significant concern in some key orca habitats, including Johnstone Strait and Haro Strait. In the mid-1990s, loud underwater noises from salmon farms were used to deter seals. Orcas also avoided the surrounding waters. High-intensity sonar used by the Navy disturbs orcas along with other marine mammals. Orcas are popular with whale watchers, which may stress the whales and alter their behaviour, particularly if boats approach too closely or block their lines of travel.
The Exxon Valdez oil spill adversely affected orcas in Prince William Sound and Alaska's Kenai Fjords region. Eleven members (about half) of one resident pod disappeared in the following year. The spill damaged salmon and other prey populations, which in turn damaged local orcas. By 2009, scientists estimated the AT1 transient population (considered part of a larger population of 346 transients), numbered only seven individuals and had not reproduced since the spill. This population is expected to die out.
Orcas are included in Appendix II of the Convention on International Trade in Endangered Species (CITES), meaning international trade (including in parts/derivatives) is regulated.
Relationship with humans
Indigenous cultures
The indigenous peoples of the Pacific Northwest Coast feature orcas throughout their art, history, spirituality and religion. The Haida regarded orcas as the most powerful animals in the ocean, and their mythology tells of orcas living in houses and towns under the sea. According to these stories, they took on human form when submerged, and humans who drowned went to live with them. For the Kwakwaka'wakw, the orca was regarded as the ruler of the undersea world, with sea lions for slaves and dolphins for warriors. In Nuu-chah-nulth and Kwakwaka'wakw mythology, orcas may embody the souls of deceased chiefs. The Tlingit of southeastern Alaska regarded the orca as custodian of the sea and a benefactor of humans.
The Lummi consider orca to be people, referring to them as "qwe'lhol'mechen" which means "our relations under the waves".
The Maritime Archaic people of Newfoundland also had great respect for orcas, as evidenced by stone carvings found in a 4,000-year-old burial at the Port au Choix Archaeological Site.
In the tales and beliefs of the Siberian Yupik people, orcas are said to appear as wolves in winter, and wolves as orcas in summer. Orcas are believed to assist their hunters in driving walrus. Reverence is expressed in several forms: the boat represents the animal, and a wooden carving hung from the hunter's belt. Small sacrifices such as tobacco or meat are strewn into the sea for them.
The Ainu people of Hokkaido, the Kuril Islands, and southern Sakhalin often referred to orcas in their folklore and myth as Repun Kamuy (God of Sea/Offshore) to bring fortunes (whales) to the coasts, and there had been traditional funerals for stranded or deceased orcas akin to funerals for other animals such as brown bears.
Attacks by wild orcas on humans and animals
In Western cultures, orcas were historically feared as dangerous, savage predators. The first written description of an orca was given by Pliny the Elder circa AD 70, who wrote, "Orcas (the appearance of which no image can express, other than an enormous mass of savage flesh with teeth) are the enemy of [other kinds of whale]... they charge and pierce them like warships ramming." (see citation in section "Naming", above).
Of the very few confirmed attacks on humans by wild orcas, none have been fatal. In one instance, orcas tried to tip ice floes on which a dog team and a photographer of the Terra Nova Expedition were standing. The sled dogs' barking is speculated to have sounded enough like seal calls to trigger the orcas' hunting curiosity. In the 1970s, a surfer in California was bitten, but the orca then retreated, and in 2005, a boy in Alaska who was splashing in a region frequented by harbour seals was bumped by an orca that apparently misidentified him as prey.
Orca attacks on sailboats and small vessels
Beginning around 2020, one or more pods of orcas began to attack sailing vessels off the southern tip of Europe, and a few were sunk. At least 15 interactions between orcas and boats off the Iberian coast were reported in 2020, and according to the Atlantic Orca Working Group (GTOA) as many as 500 vessels were damaged between 2020 and 2023. In one video, an orca can be seen biting one of the two rudders ripped from a catamaran near Gibraltar. The captain of the vessel reported that this was the second attack on a vessel under his command and that the orcas focused on the rudders: "Looks like they knew exactly what they are doing. They didn't touch anything else." After an orca repeatedly rammed a vessel off the coast of Norway in 2023, there is concern that the behaviour is spreading to other areas. This has led to recommendations that sailors carry bags of sand, as dropping sand into the water near the rudder is thought to confuse the animals' sonar. Experts are divided as to whether the behaviour is some sort of revenge or protection response to a previous traumatic incident, or a playful or frustrated attempt to get a boat's propeller to emit a stream of high-speed water.
Attacks on humans by captive orcas
Unlike wild orcas, captive orcas have made nearly two dozen attacks on humans since the 1970s, some of which have been fatal.
Human attacks on orcas
Competition with fishermen also led to orcas being regarded as pests. In the waters of the Pacific Northwest and Iceland, the shooting of orcas was accepted and even encouraged by governments. As an indication of the intensity of shooting that occurred until fairly recently, about 25% of the orcas captured in Puget Sound for aquariums through 1970 bore bullet scars. The U.S. Navy claimed to have deliberately killed hundreds of orcas in Icelandic waters in 1956 with machine guns, rockets, and depth charges.
Modern Western attitudes
Western attitudes towards orcas have changed dramatically in recent decades. In the mid-1960s and early 1970s, orcas came to much greater public and scientific awareness, starting with the live capture and display of an orca known as Moby Doll, a southern resident orca harpooned off Saturna Island in 1964. He was the first orca ever to be studied alive at close quarters, rather than postmortem. Moby Doll's impact on scientific research at the time, including the first scientific studies of an orca's sound production, led to two articles about him in the journal Zoologica. So little was known at the time that it was nearly two months before the whale's keepers discovered what food (fish) he was willing to eat. To the surprise of those who saw him, Moby Doll was a docile, non-aggressive whale who made no attempts to attack humans.
Between 1964 and 1976, 50 orcas from the Pacific Northwest were captured for display in aquaria, and public interest in the animals grew. In the 1970s, research pioneered by Michael Bigg led to the discovery of the species' complex social structure, its use of vocal communication, and its extraordinarily stable mother–offspring bonds. Through photo-identification techniques, individuals were named and tracked over decades.
Bigg's techniques also revealed the Pacific Northwest population was in the low hundreds rather than the thousands that had been previously assumed. The southern resident community alone had lost 48 of its members to captivity; by 1976, only 80 remained. In the Pacific Northwest, the species that had unthinkingly been targeted became a cultural icon within a few decades.
The public's growing appreciation also led to growing opposition to whale-keeping in aquaria. Only one whale has been taken in North American waters since 1976. In recent years, the extent of the public's interest in orcas has manifested itself in several high-profile efforts surrounding individuals. Following the success of the 1993 film Free Willy, the movie's captive star Keiko was returned to the coast of his native Iceland in 2002. The director of the International Marine Mammal Project for the Earth Island Institute, David Phillips, led the efforts to return Keiko to Icelandic waters. Keiko, however, did not adapt to the harsh climate of the Arctic Ocean, and died a year into his release after contracting pneumonia, at the age of 27. In 2002, the orphan Springer was discovered in Puget Sound, Washington. She became the first whale to be successfully reintegrated into a wild pod after human intervention, crystallizing decades of research into the vocal behaviour and social structure of the region's orcas. The saving of Springer raised hopes that another young orca named Luna, who had become separated from his pod, could be returned to it. However, his case was marked by controversy about whether and how to intervene, and in 2006, Luna was killed by a boat propeller.
Whaling
The earliest known records of commercial hunting of orcas date to the 18th century in Japan. During the 19th and early 20th centuries, the global whaling industry caught immense numbers of baleen and sperm whales, but largely ignored orcas because of their limited amounts of recoverable oil, their smaller populations, and the difficulty of taking them. Once the stocks of larger species were depleted, orcas were targeted by commercial whalers in the mid-20th century. Between 1954 and 1997, Japan took 1,178 orcas (although the Ministry of the Environment claims that there were domestic catches of about 1,600 whales between the late 1940s and the 1960s) and Norway took 987. Extensive hunting of orcas, including an Antarctic catch of 916 in 1979–80 alone, prompted the International Whaling Commission to recommend a ban on commercial hunting of the species pending further research. Today, no country carries out a substantial hunt, although Indonesia and Greenland permit small subsistence hunts (see Aboriginal whaling). Beyond commercial hunts, orcas were also killed along Japanese coasts out of public concern about potential conflicts with fisheries. Such cases include the killing, in 1957, of a semi-resident male-female pair in the Akashi Strait and Harimanada in the Seto Inland Sea, the killing of five whales from a pod of 11 that swam into Tokyo Bay in 1970, and a catch record in southern Taiwan in the 1990s.
Cooperation with humans
Orcas have helped humans hunt other whales. One well-known example was the orcas of Eden, Australia, including the male known as Old Tom. Whalers more often considered them a nuisance, however, as orcas would gather to scavenge meat from the whalers' catch. Some populations, such as in Alaska's Prince William Sound, may have been reduced significantly by whalers shooting them in retaliation.
Whale watching
Whale watching continues to increase in popularity, but may have some problematic impacts on orcas. Exposure to exhaust gases from heavy vessel traffic is causing concern for the overall health of the 75 southern resident orcas (SRKWs) remaining as of early 2019. This population is followed by approximately 20 vessels for 12 hours a day during the months May–September. Researchers discovered that these vessels are in the line of sight of these whales for 98–99.5% of daylight hours. With so many vessels, the air quality around these whales deteriorates and affects their health. Air pollutants that bind with exhaust fumes are responsible for the activation of the cytochrome P450 1A gene family. Researchers have successfully identified this gene in skin biopsies of live whales and in the lungs of deceased whales. A direct correlation between activation of this gene and the air pollutants cannot yet be established, because other known factors also induce the same gene. Vessels can have either wet or dry exhaust systems, with wet exhaust systems leaving more pollutants in the water due to differing gas solubilities. A modelling study determined that the lowest-observed-adverse-effect level (LOAEL) of exhaust pollutants was about 12% of the human dose.
In response, boats off the British Columbia coast have since 2017 been required to keep a minimum approach distance of 200 metres, double the previous 100 metres. This rule complements Washington State's minimum approach zone of 180 metres, in effect since 2011. If a whale approaches a vessel, the engine must be placed in neutral until the whale passes. The World Health Organization has set air quality standards in an effort to control the emissions produced by these vessels.
Captivity
The orca's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, 55 whales were taken from the wild in Iceland, 19 from Japan, and three from Argentina. These figures exclude animals that died during capture. Live captures fell dramatically in the 1990s, and by 1999, about 40% of the 48 animals on display in the world were captive-born.
Organizations such as World Animal Protection and Whale and Dolphin Conservation campaign against the practice of keeping orcas in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of captive males. Captives have vastly reduced life expectancies, on average living only into their 20s. That said, a 2015 study coauthored by staff at SeaWorld and the Minnesota Zoo suggested no significant difference in survivorship between free-ranging and captive orcas. However, in the wild, females who survive infancy live 46 years on average, and up to 70–80 years in rare cases, while wild males who survive infancy live 31 years on average, and up to 50–60 years. Captivity usually bears little resemblance to wild habitat, and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild orca behaviour (see above). Wild orcas may travel up to 160 km (100 mi) in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress. Between 1991 and 2010, the bull orca known as Tilikum was involved in the deaths of three people, and was featured in the critically acclaimed 2013 film Blackfish. Tilikum lived at SeaWorld from 1992 until his death in 2017.
In March 2016, SeaWorld announced that they would be ending their orca breeding program and their theatrical shows. However, as of 2020, theatrical shows featuring orcas are still ongoing.
| Biology and health sciences | Cetaceans | null |
17022 | https://en.wikipedia.org/wiki/Karst | Karst | Karst () is a topography formed from the dissolution of soluble carbonate rocks such as limestone and dolomite. It is characterized by features like poljes above and drainage systems with sinkholes and caves underground. There is some evidence that karst may occur in more weathering-resistant rocks such as quartzite given the right conditions.
Subterranean drainage may limit surface water, with few to no rivers or lakes. In regions where the dissolved bedrock is covered (perhaps by debris) or confined by one or more superimposed non-soluble rock strata, distinctive karst features may occur only at subsurface levels and can be totally missing above ground.
The study of paleokarst (buried karst in the stratigraphic column) is important in petroleum geology because as much as 50% of the world's hydrocarbon reserves are hosted in carbonate rock, and much of this is found in porous karst systems.
Etymology
The English word karst was borrowed in the late 19th century from German Karst, which had entered German usage much earlier, to describe a number of geological, geomorphological, and hydrological features found within the range of the Dinaric Alps, stretching from the northeastern corner of Italy above the city of Trieste, across the Balkan peninsula along the coast of the eastern Adriatic to Kosovo and North Macedonia, where the massif of the Šar Mountains begins. The karst zone is at the northwesternmost section, described in early topographical research as a plateau between Italy and Slovenia. Several other languages preserve this form of the word.
In the local South Slavic languages, all variations of the word are derived from a Romanized Illyrian base, later metathesized from the reconstructed form into forms such as kras and krš, first attested in the 18th century, with the adjective form attested in the 16th century. As a proper noun, the Slovene form Kras was first attested in 1177.
Ultimately, the word is of Mediterranean origin. It has also been suggested that it may derive from the Proto-Indo-European root *kar- ('rock'). The name may also be connected to the oronym Kar(u)sádios oros cited by Ptolemy, and perhaps also to a Latin form of the same base.
Early studies
Johann Weikhard von Valvasor, a pioneer of the study of karst in Slovenia and a fellow of the Royal Society, London, introduced the word karst to European scholars in 1689 to describe the phenomenon of underground flows of rivers in his account of Lake Cerknica.
Jovan Cvijić greatly advanced the knowledge of karst regions to the point where he became known as the "father of karst geomorphology". Primarily discussing the karst regions of the Balkans, Cvijić's 1893 publication Das Karstphänomen describes landforms such as karren, dolines and poljes. In a 1918 publication, Cvijić proposed a cyclical model for karst landscape development.
Karst hydrology emerged as a discipline in the late 1950s and the early 1960s in France. Previously, the activities of cave explorers, called speleologists, had been dismissed as more of a sport than a science and so the underground karst caves and their associated watercourses were, from a scientific perspective, understudied.
Development
Karst is most strongly developed in dense carbonate rock, such as limestone, that is thinly bedded and highly fractured. Karst is not typically well developed in chalk, because chalk is highly porous rather than dense, so the flow of groundwater is not concentrated along fractures. Karst is also most strongly developed where the water table is relatively low, such as in uplands with entrenched valleys, and where rainfall is moderate to heavy. This contributes to rapid downward movement of groundwater, which promotes dissolution of the bedrock, whereas standing groundwater becomes saturated with carbonate minerals and ceases to dissolve the bedrock.
Chemistry of dissolution
The carbonic acid that causes karst features is formed as rain passes through Earth's atmosphere picking up carbon dioxide (CO2), which readily dissolves in the water. Once the rain reaches the ground, it may pass through soil that provides additional CO2 produced by soil respiration. Some of the dissolved carbon dioxide reacts with the water to form a weak carbonic acid solution, which dissolves calcium carbonate. The primary reaction sequence in limestone dissolution is the following:
CO2 + H2O → H2CO3 (dissolved carbon dioxide forms carbonic acid)
H2CO3 + CaCO3 → Ca(HCO3)2 (carbonic acid dissolves calcium carbonate)
giving the overall reversible reaction CaCO3 + CO2 + H2O ⇌ Ca2+ + 2 HCO3-.
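Rough numbers can be attached to this sequence: the dissolved CO2 concentration follows Henry's law, and the acidity then follows from the first dissociation of carbonic acid. The sketch below is a back-of-the-envelope calculation assuming textbook constants at 25 °C and present-day atmospheric CO2; it reproduces the familiar pH of about 5.6 for unpolluted rainwater.

```python
import math

# Equilibrium of rainwater with atmospheric CO2, assuming approximate
# textbook constants at 25 degrees C (values are illustrative).
K_H = 3.4e-2    # Henry's law constant for CO2, mol/(L*atm)
K_A1 = 4.45e-7  # first dissociation constant of carbonic acid
P_CO2 = 4.2e-4  # approximate present-day CO2 partial pressure, atm

co2_aq = K_H * P_CO2               # dissolved CO2 / H2CO3*, mol/L
h_plus = math.sqrt(K_A1 * co2_aq)  # [H+] from H2CO3 <=> H+ + HCO3-

print(f"dissolved CO2: {co2_aq:.2e} mol/L")
print(f"pH of CO2-equilibrated rainwater: {-math.log10(h_plus):.2f}")  # ~5.6
```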
In very rare conditions, oxidation can play a role. Oxidation played a major role in the formation of ancient Lechuguilla Cave in the US state of New Mexico and is presently active in the Frasassi Caves of Italy.
The oxidation of sulfides leading to the formation of sulfuric acid can also be one of the corrosion factors in karst formation. As oxygen (O2)-rich surface waters seep into deep anoxic karst systems, they bring oxygen, which reacts with sulfide present in the system (pyrite or hydrogen sulfide) to form sulfuric acid (H2SO4). Sulfuric acid then reacts with calcium carbonate, causing increased erosion within the limestone formation. This chain of reactions is:
H2S + 2 O2 → H2SO4 (oxidation of hydrogen sulfide to sulfuric acid)
H2SO4 + CaCO3 + 2 H2O → CaSO4·2H2O + H2CO3 (dissolution of calcite and precipitation of gypsum)
This reaction chain forms gypsum.
Morphology
The karstification of a landscape may result in a variety of large- or small-scale features both on the surface and beneath. On exposed surfaces, small features may include solution flutes (or rillenkarren), runnels, limestone pavement (clints and grikes), and kamenitzas, collectively called karren or lapiez. Medium-sized surface features may include sinkholes or cenotes (closed basins), vertical shafts, foibe (inverted funnel shaped sinkholes), disappearing streams, and reappearing springs.
Large-scale features may include limestone pavements, poljes, and karst valleys. Mature karst landscapes, where more bedrock has been removed than remains, may result in karst towers, or haystack/eggbox landscapes. Beneath the surface, complex underground drainage systems (such as karst aquifers) and extensive caves and cavern systems may form.
Erosion along limestone shores, notably in the tropics, produces karst topography that includes a sharp makatea surface above the normal reach of the sea, and undercuts that are mostly the result of biological activity or bioerosion at or a little above mean sea level. Some of the most dramatic of these formations can be seen in Thailand's Phangnga Bay and at Halong Bay in Vietnam.
Calcium carbonate dissolved into water may precipitate out where the water discharges some of its dissolved carbon dioxide. Rivers which emerge from springs may produce tufa terraces, consisting of layers of calcite deposited over extended periods of time. In caves, a variety of features collectively called speleothems are formed by deposition of calcium carbonate and other dissolved minerals.
Interstratal karst
Interstratal karst is a karst landscape which is developed beneath a cover of insoluble rocks. Typically this will involve a cover of sandstone overlying limestone strata undergoing solution. In the United Kingdom, for example, extensive doline fields have developed at Cefn yr Ystrad, Mynydd Llangatwg and Mynydd Llangynidr in South Wales across a cover of Twrch Sandstone overlying concealed Carboniferous Limestone; the last-named locality has been declared a site of special scientific interest on that account.
Kegelkarst, salt karst, and karst forests
Kegelkarst is a type of tropical karst terrain with numerous cone-like hills, formed by cockpits, mogotes, and poljes and without strong fluvial erosion processes. This terrain is found in Cuba, Jamaica, Indonesia, Malaysia, the Philippines, Puerto Rico, southern China, Myanmar, Thailand, Laos and Vietnam.
Salt karst (or 'halite karst') is developed in areas where salt is undergoing solution underground. It can lead to surface depressions and collapses which present a geo-hazard.
Karst areas tend to have unique types of forests. The karst terrain is difficult for humans to traverse, so that their ecosystems are often relatively undisturbed. The soil tends to have a high pH, which encourages growth of unusual species of orchids, palms, mangroves, and other plants.
Paleokarst
Paleokarst or palaeokarst is a development of karst observed in geological history and preserved within the rock sequence, effectively a fossil karst. There are, for example, palaeokarst surfaces exposed within the Clydach Valley Subgroup of the Carboniferous Limestone sequence of South Wales, which developed as sub-aerial weathering of recently formed limestones took place during periods of non-deposition early in the period. Sedimentation then resumed and further limestone strata were deposited on the irregular karst surface, the cycle recurring several times in connection with fluctuating sea levels over prolonged periods.
Pseudokarst
Pseudokarsts are similar in form or appearance to karst features but are created by different mechanisms. Examples include lava caves, granite tors (for example, Labertouche Cave in Victoria, Australia) and paleocollapse features. Mud caves are another example of pseudokarst.
Hydrology
Karst formations have unique hydrology, resulting in many unusual features. A karst fenster (karst window) occurs when an underground stream emerges onto the surface between layers of rock, cascades some distance, and then disappears back down, often into a sinkhole.
Rivers in karst areas may disappear underground a number of times and spring up again in different places, even under a different name, like Ljubljanica, the "river of seven names".
Another example of this is the Popo Agie River in Fremont County, Wyoming, where, at a site named "The Sinks" in Sinks Canyon State Park, the river flows into a cave in a formation known as the Madison Limestone and then rises again down the canyon in a placid pool.
A turlough is a unique type of seasonal lake found in Irish karst areas, formed through the annual welling-up of water from the underground water system.
Aquifers
Karst aquifers typically develop in limestone. Surface water containing natural carbonic acid moves down into small fissures in limestone. This carbonic acid gradually dissolves limestone thereby enlarging the fissures. The enlarged fissures allow a larger quantity of water to enter which leads to a progressive enlargement of openings. Abundant small openings store a large quantity of water. The larger openings form a conduit system that drains the aquifer to springs.
Characterization of karst aquifers requires field exploration to locate sinkholes, swallets, sinking streams, and springs in addition to studying geologic maps. Conventional hydrogeologic methods such as aquifer tests and potentiometric mapping are insufficient to characterize the complexity of karst aquifers, and need to be supplemented with dye traces, measurement of spring discharges, and analysis of water chemistry. U.S. Geological Survey dye tracing has determined that conventional groundwater models that assume a uniform distribution of porosity are not applicable for karst aquifers.
Linear alignment of surface features such as straight stream segments and sinkholes develops along fracture traces. Locating a well on a fracture trace or at the intersection of fracture traces increases the likelihood of encountering good water production. Voids in karst aquifers can be large enough to cause destructive collapse or subsidence of the ground surface that can initiate a catastrophic release of contaminants.
Groundwater flow rate in karst aquifers is much more rapid than in porous aquifers. For example, in the Barton Springs Edwards aquifer, dye traces measured the karst groundwater flow rates from 0.5 to 7 miles per day (0.8 to 11.3 km/d). The rapid groundwater flow rates make karst aquifers much more sensitive to groundwater contamination than porous aquifers.
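The practical meaning of these rates is how quickly water, and any contaminant it carries, moves from a swallet to a spring. The comparison below is a back-of-the-envelope sketch: the karst rates are the dye-trace range quoted above, while the porous-aquifer rate and the 10 km path length are assumed illustrative values, not measurements.

```python
# Back-of-envelope travel times over an assumed 10 km flow path.
# Karst rates: the dye-trace range quoted above (0.8-11.3 km/day).
# Porous-aquifer rate: an assumed illustrative value (~0.1 m/day).
distance_km = 10.0

for label, rate_km_per_day in [
    ("karst, slow dye-trace rate", 0.8),
    ("karst, fast dye-trace rate", 11.3),
    ("porous aquifer (assumed)", 0.0001),
]:
    days = distance_km / rate_km_per_day
    print(f"{label}: {days:,.1f} days")
# Output: 12.5 days, 0.9 days, and 100,000 days (roughly 274 years),
# illustrating why karst aquifers respond so quickly to contamination.
```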
Groundwater in karst areas is just as easily polluted as surface streams, because karst formations are cavernous and highly permeable, resulting in reduced opportunity for contaminant filtration.
Well water may also be unsafe as the water may have run unimpeded from a sinkhole in a cattle pasture, bypassing the normal filtering that occurs in a porous aquifer. Sinkholes have often been used as farmstead or community trash dumps. Overloaded or malfunctioning septic tanks in karst landscapes may dump raw sewage directly into underground channels.
Geologists are concerned with these negative effects of human activity on karst hydrology, which supplies about 25% of the global demand for drinking water.
Effects of karst hydrology
Farming in karst areas must take into account the lack of surface water. The soils may be fertile enough, and rainfall may be adequate, but rainwater quickly moves through the crevices into the ground, sometimes leaving the surface soil parched between rains.
The karst topography also poses peculiar difficulties for human inhabitants. Sinkholes can develop gradually as surface openings enlarge, but progressive erosion is frequently unseen until the roof of a cavern suddenly collapses. Such events have swallowed homes, cattle, cars, and farm machinery. In the United States, sudden collapse of such a cavern-sinkhole swallowed part of the collection of the National Corvette Museum in Bowling Green, Kentucky in 2014.
Karst areas
The world's largest limestone karst is Australia's Nullarbor Plain. Slovenia has the world's highest risk of sinkholes, while the western Highland Rim in the eastern United States is at the second-highest risk of karst sinkholes.
In Canada, Wood Buffalo National Park, Northwest Territories contains areas of karst sinkholes. Mexico hosts important karst regions in the Yucatán Peninsula and Chiapas. The West of Ireland is home to The Burren, a karst limestone area. The South China Karst, in the provinces of Guizhou, Guangxi, and Yunnan, is a UNESCO World Heritage Site.
List of terms for karst-related features
Many karst-related terms derive from South Slavic languages, entering scientific vocabulary through early research in the Western Balkan Dinaric Alpine karst.
Abîme, a vertical shaft in karst that may be very deep and usually opens into a network of subterranean passages
Cenote, a deep sinkhole, characteristic of Mexico, resulting from collapse of limestone bedrock that exposes groundwater underneath
Doline, also sink or sinkhole, is a closed depression draining underground in karst areas. The name "doline" comes from dolina, meaning "valley", and derives from South Slavic languages.
Foibe, an inverted funnel-shaped sinkhole
Karst window (also known as a "karst fenster"), a feature where a spring emerges briefly, with the water discharge then abruptly disappearing into a nearby sinkhole
Karst spring, a spring emerging from karst, giving rise to a flow of water at the surface
Limestone pavement, a landform consisting of a flat, incised surface of exposed limestone that resembles an artificial pavement
Losing stream, a sinking river; called ponornica in South Slavic languages.
Polje (karst polje, karst field), a large flat plain specific to karst areas. The word polje derives from South Slavic languages.
Ponor, same as estavelle, a sink or sinkhole in South Slavic languages where surface flow enters an underground system
Scowle, a porous, irregular karst landscape found in a region of England.
Turlough (turlach), a type of disappearing lake characteristic of Irish karst.
Uvala, a collection of multiple smaller individual sinkholes that coalesce into a compound sinkhole. The term derives from South Slavic languages.
Karstology
The study of the various aspects of karst regions is called karstology. This includes biological, chemical, ecological, geomorphological, hydrogeological, hydrological, political, socio-economical, and other processes over a variety of spatial and temporal scales in karst regions, with the purpose of understanding karst aquifers and ecosystems, and the development of the surface and underground structure, so that the environment can be protected and human activities planned effectively.
| Physical sciences | Caves | null |
17025 | https://en.wikipedia.org/wiki/Kidney | Kidney | In humans, the kidneys are two reddish-brown bean-shaped blood-filtering organs that are a multilobar, multipapillary form of mammalian kidneys, usually without signs of external lobulation. They are located on the left and right in the retroperitoneal space, and in adult humans are about 12 cm (4.5 in) in length. They receive blood from the paired renal arteries; blood exits into the paired renal veins. Each kidney is attached to a ureter, a tube that carries excreted urine to the bladder.
The kidney participates in the control of the volume of various body fluids, fluid osmolality, acid-base balance, various electrolyte concentrations, and removal of toxins. Filtration occurs in the glomerulus: one-fifth of the blood volume that enters the kidneys is filtered. Examples of substances reabsorbed are solute-free water, sodium, bicarbonate, glucose, and amino acids. Examples of substances secreted are hydrogen, ammonium, potassium and uric acid. The nephron is the structural and functional unit of the kidney. Each adult human kidney contains around 1 million nephrons, while a mouse kidney contains only about 12,500 nephrons. The kidneys also carry out functions independent of the nephrons. For example, they convert a precursor of vitamin D to its active form, calcitriol; and synthesize the hormones erythropoietin and renin.
Chronic kidney disease (CKD) has been recognized as a leading public health problem worldwide. The global estimated prevalence of CKD is 13.4%, and patients with kidney failure needing renal replacement therapy are estimated between 5 and 7 million. Procedures used in the management of kidney disease include chemical and microscopic examination of the urine (urinalysis), measurement of kidney function by calculating the estimated glomerular filtration rate (eGFR) using the serum creatinine; and kidney biopsy and CT scan to evaluate for abnormal anatomy. Dialysis and kidney transplantation are used to treat kidney failure; one (or both sequentially) of these are almost always used when renal function drops below 15%. Nephrectomy is frequently used to cure renal cell carcinoma.
Renal physiology is the study of kidney function. Nephrology is the medical specialty which addresses diseases of kidney function: these include CKD, nephritic and nephrotic syndromes, acute kidney injury, and pyelonephritis. Urology addresses diseases of kidney (and urinary tract) anatomy: these include cancer, renal cysts, kidney stones and ureteral stones, and urinary tract obstruction.
The word "renal" is an adjective meaning "relating to the kidneys", and comes via French from late Latin. Some experts argue that "renal" should be replaced with "kidney" in scientific writing, as in "kidney artery", while others advocate preserving the use of "renal" where appropriate, as in "renal artery".
Structure
In humans, the kidneys are located high in the abdominal cavity, one on each side of the spine, and lie in a retroperitoneal position at a slightly oblique angle. The asymmetry within the abdominal cavity, caused by the position of the liver, typically results in the right kidney being slightly lower and smaller than the left, and being placed slightly more to the middle than the left kidney. The left kidney is approximately at the vertebral level T12 to L3, and the right is slightly lower. The right kidney sits just below the diaphragm and posterior to the liver. The left kidney sits below the diaphragm and posterior to the spleen. On top of each kidney is an adrenal gland. The upper parts of the kidneys are partially protected by the 11th and 12th ribs. Each kidney, with its adrenal gland is surrounded by two layers of fat: the perirenal fat present between renal fascia and renal capsule and pararenal fat superior to the renal fascia.
The human kidney is a bean-shaped structure with a convex and a concave border. A recessed area on the concave border is the renal hilum, where the renal artery enters the kidney and the renal vein and ureter leave. The kidney is surrounded by tough fibrous tissue, the renal capsule, which is itself surrounded by perirenal fat, renal fascia, and pararenal fat. The anterior (front) surface of these tissues is the peritoneum, while the posterior (rear) surface is the transversalis fascia.
The superior pole of the right kidney is adjacent to the liver. For the left kidney, it is next to the spleen. Both, therefore, move down upon inhalation.
A Danish study measured median renal length and volume in adults, finding the left kidney to be slightly longer and larger in volume than the right.
Gross anatomy
The functional substance, or parenchyma, of the human kidney is divided into two major structures: the outer renal cortex and the inner renal medulla. Grossly, these structures take the shape of eight to 18 cone-shaped renal lobes, each containing renal cortex surrounding a portion of medulla called a renal pyramid. Between the renal pyramids are projections of cortex called renal columns.
The tip, or papilla, of each pyramid empties urine into a minor calyx; minor calyces empty into major calyces, and major calyces empty into the renal pelvis. This becomes the ureter. At the hilum, the ureter and renal vein exit the kidney and the renal artery enters. Hilar fat and lymphatic tissue with lymph nodes surround these structures. The hilar fat is contiguous with a fat-filled cavity called the renal sinus. The renal sinus collectively contains the renal pelvis and calyces and separates these structures from the renal medullary tissue.
The kidneys possess no overtly moving structures.
Blood supply
The kidneys receive blood from the renal arteries, left and right, which branch directly from the abdominal aorta. The kidneys receive approximately 20–25% of cardiac output in adult human. Each renal artery branches into segmental arteries, dividing further into interlobar arteries, which penetrate the renal capsule and extend through the renal columns between the renal pyramids. The interlobar arteries then supply blood to the arcuate arteries that run through the boundary of the cortex and the medulla. Each arcuate artery supplies several interlobular arteries that feed into the afferent arterioles that supply the glomeruli.
Blood drains from the kidneys, ultimately into the inferior vena cava. After filtration occurs, the blood moves through a small network of venules that converge into interlobular veins. The veins follow the same pattern as the arteries: the interlobular veins drain into the arcuate veins, then into the interlobar veins, which come together to form the renal veins that exit the kidney.
Nerve supply
The kidney and nervous system communicate via the renal plexus, whose fibers course along the renal arteries to reach each kidney. Input from the sympathetic nervous system triggers vasoconstriction in the kidney, thereby reducing renal blood flow. The kidney also receives input from the parasympathetic nervous system, by way of the renal branches of the vagus nerve; the function of this input is as yet unclear. Sensory input from the kidney travels to the T10–11 levels of the spinal cord and is sensed in the corresponding dermatome. Thus, pain in the flank region may be referred from the corresponding kidney.
Microanatomy
Renal histology is the study of the microscopic structure of the kidney. The adult human kidney contains at least 26 distinct cell types, including epithelial, endothelial, stromal and smooth muscle cells. Distinct cell types include:
Kidney glomerulus parietal cell
Kidney glomerulus podocyte
Intraglomerular mesangial cell
Extraglomerular mesangial cell
Juxtaglomerular cell
Kidney proximal tubule brush border cell
Loop of Henle thin segment cell
Thick ascending limb cell
Kidney distal tubule cell
Collecting duct principal cell
Collecting duct intercalated cell
Interstitial kidney cells
Gene and protein expression
About 20,000 protein-coding genes are expressed in human cells, and almost 70% of these genes are expressed in the normal adult kidney. Just over 300 genes are more specifically expressed in the kidney, with only some 50 genes being highly specific for the kidney. Many of the corresponding kidney-specific proteins are expressed in the cell membrane and function as transporter proteins. The most highly expressed kidney-specific protein is uromodulin, the most abundant protein in urine, whose functions include preventing calcification and bacterial growth. Specific proteins are expressed in different compartments of the kidney: podocin and nephrin in the glomeruli, the solute carrier family protein SLC22A8 in the proximal tubules, calbindin in the distal tubules, and aquaporin 2 in the collecting duct cells.
Development
The mammalian kidney develops from intermediate mesoderm. Kidney development, also called nephrogenesis, proceeds through a series of three successive developmental phases: the pronephros, mesonephros, and metanephros. The metanephros are primordia of the permanent kidney.
Function
The kidneys excrete a variety of waste products produced by metabolism into the urine. These include the nitrogenous wastes urea, from protein catabolism, and uric acid, from nucleic acid metabolism. The microscopic structural and functional unit of the kidney is the nephron, which processes the blood supplied to it via filtration, reabsorption, secretion and excretion; the consequence of those processes is the production of urine. The ability of mammals and some birds to concentrate wastes into a volume of urine much smaller than the volume of blood from which the wastes were extracted depends on an elaborate countercurrent multiplication mechanism. This requires several independent nephron characteristics to operate: a tight hairpin configuration of the tubules, water and ion permeability in the descending limb of the loop, water impermeability in the ascending loop, and active ion transport out of most of the ascending limb. In addition, passive countercurrent exchange by the vessels carrying the blood supply to the nephron is essential for enabling this function.
The kidney participates in whole-body homeostasis, regulating acid–base balance, electrolyte concentrations, extracellular fluid volume, and blood pressure. The kidney accomplishes these homeostatic functions both independently and in concert with other organs, particularly those of the endocrine system. Various endocrine hormones coordinate these endocrine functions; these include renin, angiotensin II, aldosterone, antidiuretic hormone, and atrial natriuretic peptide, among others.
Formation of urine
Filtration
Filtration, which takes place at the renal corpuscle, is the process by which cells and large proteins are retained while materials of smaller molecular weights are filtered from the blood to make an ultrafiltrate that eventually becomes urine. The adult human kidney generates approximately 180 liters of filtrate a day, most of which is reabsorbed; the normal range for a twenty-four-hour urine collection is 800 to 2,000 milliliters. The process is also known as hydrostatic filtration due to the hydrostatic pressure exerted on the capillary walls.
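These figures imply that the tubules reclaim nearly all of the filtrate: against roughly 180 liters filtered per day, a typical urine output of about 1.5 liters means the reabsorbed fraction is (180 − 1.5) / 180 ≈ 0.99, i.e. more than 99% of the filtered water is returned to the blood.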
Reabsorption
Reabsorption is the transport of molecules from this ultrafiltrate into the peritubular capillary network that surrounds the nephron tubules. It is accomplished via selective receptors on the luminal cell membrane. Water is 55% reabsorbed in the proximal tubule. Glucose at normal plasma levels is completely reabsorbed in the proximal tubule; the mechanism for this is the Na+/glucose cotransporter. A plasma glucose level of 350 mg/dL will fully saturate the transporters, and glucose will be lost in the urine. A plasma glucose level of approximately 160 mg/dL is sufficient to allow glucosuria, which is an important clinical clue to diabetes mellitus.
Amino acids are reabsorbed by sodium dependent transporters in the proximal tubule. Hartnup disease is a deficiency of the tryptophan amino acid transporter, which results in pellagra.
Secretion
Secretion is the reverse of reabsorption: molecules are transported from the peritubular capillary through the interstitial fluid, then through the renal tubular cell and into the ultrafiltrate.
Excretion
The last step in the processing of the ultrafiltrate is excretion: the ultrafiltrate passes out of the nephron and travels through a tube called the collecting duct, which is part of the collecting duct system, and then to the ureters where it is renamed urine. In addition to transporting the ultrafiltrate, the collecting duct also takes part in reabsorption.
Hormone secretion
The kidneys secrete a variety of hormones, including erythropoietin, calcitriol, and renin. Erythropoietin is released in response to hypoxia (low levels of oxygen at tissue level) in the renal circulation. It stimulates erythropoiesis (production of red blood cells) in the bone marrow. Calcitriol, the activated form of vitamin D, promotes intestinal absorption of calcium and the renal reabsorption of phosphate. Renin is an enzyme which regulates angiotensin and aldosterone levels.
Blood pressure regulation
Although the kidney cannot directly sense blood pressure, long-term regulation of blood pressure predominantly depends upon the kidney. This primarily occurs through maintenance of the extracellular fluid compartment, the size of which depends on the plasma sodium concentration. Renin is the first in a series of important chemical messengers that make up the renin–angiotensin system. Changes in renin ultimately alter the output of this system, principally the hormones angiotensin II and aldosterone. Each hormone acts via multiple mechanisms, but both increase the kidney's absorption of sodium chloride, thereby expanding the extracellular fluid compartment and raising blood pressure. Conversely, when renin levels are low, angiotensin II and aldosterone levels decrease, contracting the extracellular fluid compartment and decreasing blood pressure.
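The system behaves like a classic negative-feedback controller. The toy Python sketch below illustrates only that control logic; the function name, constants, and the linear response are invented for illustration and it is not a physiological model:

    def renin_angiotensin_step(bp, setpoint=100.0, gain=0.1):
        """One step of a toy negative-feedback loop: low blood pressure
        raises renin, which raises Na+ retention and hence pressure."""
        renin = max(0.0, setpoint - bp) * gain   # renin rises when bp < setpoint
        return bp + renin                        # fluid retention nudges bp upward

    bp = 90.0
    for _ in range(30):
        bp = renin_angiotensin_step(bp)
    print(round(bp))  # converges toward the setpoint of 100

Each iteration shrinks the remaining error, so the loop settles at its setpoint, mirroring how elevated renin raises pressure only until the stimulus for renin release disappears.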
Acid–base balance
The two organ systems that help regulate the body's acid–base balance are the kidneys and the lungs. Acid–base homeostasis is the maintenance of pH around a value of 7.4. The lungs contribute by regulating the carbon dioxide (CO2) concentration in the blood, and the respiratory system is the first line of defense when the body experiences an acid–base problem. It attempts to return the body pH to 7.4 by controlling the respiratory rate: under acidic conditions, the respiratory rate increases, driving off CO2 and decreasing the H+ concentration, thereby raising the pH; under basic conditions, the respiratory rate slows so that the body retains more CO2, increasing the H+ concentration and lowering the pH.
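The quantitative relationship between these variables is conventionally summarized by the Henderson–Hasselbalch equation for the bicarbonate buffer system, pH = 6.1 + log10([HCO3−] / (0.03 × PCO2)), with PCO2 in mmHg and [HCO3−] in mmol/L. Substituting typical values ([HCO3−] = 24 mmol/L, PCO2 = 40 mmHg) gives pH = 6.1 + log10(20) ≈ 7.4; the lungs set the denominator, and, as described below, the kidneys adjust the numerator.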
The kidneys have two cell types that help to maintain acid–base homeostasis: intercalated A and B cells. The intercalated A cells are stimulated when the body is experiencing acidic conditions. Under acidic conditions, the high concentration of CO2 in the blood creates a gradient for CO2 to move into the cell and push the reaction HCO3− + H+ ⇌ H2CO3 ⇌ CO2 + H2O to the left. On the luminal side of the cell there are an H+ pump and an H+/K+ exchanger; these pumps move H+ against its gradient and therefore require ATP. These cells remove H+ from the blood and move it into the filtrate, which helps to increase the pH of the blood. On the basal side of the cell there are an HCO3−/Cl− exchanger and a Cl−/K+ co-transporter (facilitated diffusion). When the reaction is pushed to the left it also increases the HCO3− concentration in the cell, and HCO3− is then able to move out into the blood, which additionally raises the pH. The intercalated B cells respond very similarly, but their membrane proteins are flipped relative to the intercalated A cells: the proton pumps are on the basal side, and the HCO3−/Cl− exchanger and K+/Cl− co-transporter are on the luminal side. They function in the same way, but release protons into the blood, decreasing the pH.
Regulation of osmolality
The kidneys help maintain the water and salt level of the body. Any significant rise in plasma osmolality is detected by the hypothalamus, which communicates directly with the posterior pituitary gland. An increase in osmolality causes the gland to secrete antidiuretic hormone (ADH), resulting in water reabsorption by the kidney and an increase in urine concentration. These two responses work together to return the plasma osmolality to its normal level.
Measuring function
Various calculations and methods are used to measure kidney function. The renal clearance of a substance is the volume of plasma from which that substance is completely removed per unit time; for a substance x it is calculated as Cx = (Ux × V) / Px, where Ux is the urine concentration of x, V is the urine flow rate, and Px is the plasma concentration of x. The filtration fraction is the fraction of the renal plasma flow that is actually filtered through the kidney, defined as FF = GFR / RPF, where RPF is the renal plasma flow. The kidney is a very complex organ, and mathematical modelling has been used to better understand kidney function at several scales, including fluid uptake and secretion.
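As a worked example with illustrative numbers: if the urine concentration of creatinine is 100 mg/dL, urine flow is 1 mL/min, and the plasma creatinine concentration is 1 mg/dL, then the creatinine clearance is C = (100 mg/dL × 1 mL/min) / 1 mg/dL = 100 mL/min. Because creatinine is freely filtered and only modestly secreted, this value is commonly used as an approximation of the GFR.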
Clinical significance
Nephrology is the subspeciality of internal medicine that deals with kidney function, disease states related to renal malfunction, and their management, including dialysis and kidney transplantation. Urology is the surgical specialty that deals with structural abnormalities of the kidney, such as kidney cancer and cysts, and problems of the urinary tract. Nephrologists are internists and urologists are surgeons, but both are often called "kidney doctors". There are overlapping areas in which both nephrologists and urologists provide care, such as kidney stones and kidney-related infections.
There are many causes of kidney disease. Some causes are acquired over the course of life, such as diabetic nephropathy whereas others are congenital, such as polycystic kidney disease.
Medical terms related to the kidneys commonly use terms such as renal and the prefix nephro-. The adjective renal, meaning related to the kidney, is from the Latin rēnēs, meaning kidneys; the prefix nephro- is from the Ancient Greek word for kidney, nephros (νεφρός). For example, surgical removal of the kidney is a nephrectomy, while a reduction in kidney function is called renal dysfunction.
Acquired disease
Diabetic nephropathy
Glomerulonephritis
Hydronephrosis is the enlargement of one or both of the kidneys caused by obstruction of the flow of urine.
Interstitial nephritis is inflammation of the area of the kidney known as the renal interstitium.
Kidney stones (nephrolithiasis) are a relatively common and particularly painful disorder; recurrent stones can scar the kidneys. Removal of kidney stones can involve ultrasound treatment (lithotripsy) to break the stones into smaller pieces, which are then passed through the urinary tract. One common symptom of kidney stones is a sharp to disabling pain in the middle and sides of the lower back or in the groin.
Kidney tumour
Wilms tumor
Renal cell carcinoma
Lupus nephritis
Minimal change disease
In nephrotic syndrome, the glomerulus has been damaged so that a large amount of protein in the blood enters the urine. Other frequent features of the nephrotic syndrome include swelling, low serum albumin, and high cholesterol.
Pyelonephritis is infection of the kidneys and is frequently caused by complication of a urinary tract infection.
Kidney failure
Acute kidney failure
Stage 5 Chronic Kidney Disease
Renal artery stenosis
Renovascular hypertension
Kidney injury and failure
Generally, humans can live normally with just one kidney, as one has more functioning renal tissue than is needed to survive. Only when the amount of functioning kidney tissue is greatly diminished does one develop chronic kidney disease. Renal replacement therapy, in the form of dialysis or kidney transplantation, is indicated when the glomerular filtration rate has fallen very low or if the renal dysfunction leads to severe symptoms.
Dialysis
Dialysis is a treatment that substitutes for the function of normal kidneys. Dialysis may be instituted when approximately 85%–90% of kidney function is lost, as indicated by a glomerular filtration rate (GFR) of less than 15 mL/min. Dialysis removes metabolic waste products as well as excess water and sodium (thereby contributing to regulating blood pressure), and maintains many chemical levels within the body. Life expectancy is 5–10 years for those on dialysis; some live up to 30 years. Dialysis can occur via the blood (through a catheter or arteriovenous fistula) or through the peritoneum (peritoneal dialysis). Dialysis is typically administered three times a week for several hours at free-standing dialysis centers, allowing recipients to lead an otherwise essentially normal life.
Congenital disease
Congenital hydronephrosis
Congenital obstruction of urinary tract
Duplex kidneys, or double kidneys, occur in approximately 1% of the population. This occurrence normally causes no complications, but can occasionally cause urinary tract infections.
Duplicated ureter occurs in approximately one in 100 live births
Horseshoe kidney occurs in approximately one in 400 live births
Nephroblastoma (syndromic Wilms tumour)
Nutcracker syndrome
Polycystic kidney disease
Autosomal dominant polycystic kidney disease affects patients later in life. Approximately one in 1000 people will develop this condition
Autosomal recessive polycystic kidney disease is far less common, but more severe, than the dominant condition. It is apparent in utero or at birth.
Renal agenesis. Failure of one kidney to form occurs in approximately one in 750 live births. Failure of both kidneys to form used to be fatal; however, medical advances such as amnioinfusion therapy during pregnancy and peritoneal dialysis have made it possible to stay alive until a transplant can occur.
Renal dysplasia
Unilateral small kidney
Multicystic dysplastic kidney occurs in approximately one in every 2400 live births
Ureteropelvic junction obstruction (UPJO); although most cases are congenital, some are acquired.
Diagnosis
Many renal diseases are diagnosed on the basis of a detailed medical history, and physical examination. The medical history takes into account present and past symptoms, especially those of kidney disease; recent infections; exposure to substances toxic to the kidney; and family history of kidney disease.
Kidney function is tested by using blood tests and urine tests. The most common blood tests are creatinine, urea and electrolytes. Urine tests such as urinalysis can evaluate for pH, protein, glucose, and the presence of blood. Microscopic analysis can also identify the presence of urinary casts and crystals. The glomerular filtration rate (GFR) can be directly measured ("measured GFR", or mGFR), but this is rarely done in everyday practice. Instead, special equations are used to calculate GFR ("estimated GFR", or eGFR).
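As an illustration of one such estimating equation, the Python sketch below implements the 2021 CKD-EPI creatinine equation (the race-free refit). The function name and example values are our own; the coefficients are quoted from the published equation, and the snippet is illustrative only, not intended for clinical use:

    def egfr_ckd_epi_2021(scr_mg_dl, age_years, female):
        """Estimated GFR in mL/min/1.73 m2 from the 2021 CKD-EPI
        creatinine equation. Illustrative sketch; not for clinical use."""
        kappa = 0.7 if female else 0.9        # sex-specific creatinine divisor
        alpha = -0.241 if female else -0.302  # exponent used below the "knee"
        ratio = scr_mg_dl / kappa
        egfr = (142.0
                * min(ratio, 1.0) ** alpha
                * max(ratio, 1.0) ** -1.200
                * 0.9938 ** age_years)
        return egfr * 1.012 if female else egfr

    # Example: a 50-year-old man with serum creatinine 0.9 mg/dL scores roughly 104.
    print(round(egfr_ckd_epi_2021(0.9, 50, female=False)))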
Imaging
Renal ultrasonography is essential in the diagnosis and management of kidney-related diseases. Other modalities, such as CT and MRI, should always be considered as supplementary imaging modalities in the assessment of renal disease.
Biopsy
The role of the renal biopsy is to diagnose renal disease in which the etiology is not clear based upon noninvasive means (clinical history, past medical history, medication history, physical exam, laboratory studies, imaging studies). In general, a renal pathologist will perform a detailed morphological evaluation and integrate the morphologic findings with the clinical history and laboratory data, ultimately arriving at a pathological diagnosis. A renal pathologist is a physician who has undergone general training in anatomic pathology and additional specialty training in the interpretation of renal biopsy specimens.
Ideally, multiple core sections are obtained and evaluated for adequacy (presence of glomeruli) intraoperatively. A pathologist/pathology assistant divides the specimen(s) for submission for light microscopy, immunofluorescence microscopy and electron microscopy.
The pathologist will examine the specimen using light microscopy with multiple staining techniques (hematoxylin and eosin/H&E, PAS, trichrome, silver stain) on multiple level sections. Multiple immunofluorescence stains are performed to evaluate for antibody, protein and complement deposition. Finally, ultra-structural examination is performed with electron microscopy and may reveal the presence of electron-dense deposits or other characteristic abnormalities that may suggest an etiology for the patient's renal disease.
Other animals
In the majority of vertebrates, the mesonephros persists into the adult, albeit usually fused with the more advanced metanephros; only in amniotes is the mesonephros restricted to the embryo. The kidneys of fish and amphibians are typically narrow, elongated organs, occupying a significant portion of the trunk. The collecting ducts from each cluster of nephrons usually drain into an archinephric duct, which is homologous with the vas deferens of amniotes. However, the situation is not always so simple; in cartilaginous fish and some amphibians, there is also a shorter duct, similar to the amniote ureter, which drains the posterior (metanephric) parts of the kidney, and joins with the archinephric duct at the bladder or cloaca. Indeed, in many cartilaginous fish, the anterior portion of the kidney may degenerate or cease to function altogether in the adult.
In the most primitive vertebrates, the hagfish and lampreys, the kidney is unusually simple: it consists of a row of nephrons, each emptying directly into the archinephric duct. Invertebrates may possess excretory organs that are sometimes referred to as "kidneys", but, even in Amphioxus, these are never homologous with the kidneys of vertebrates, and are more accurately referred to by other names, such as nephridia. In amphibians, kidneys and the urinary bladder harbour specialized parasites, monogeneans of the family Polystomatidae.
The kidneys of reptiles consist of a number of lobules arranged in a broadly linear pattern. Each lobule contains a single branch of the ureter in its centre, into which the collecting ducts empty. Reptiles have relatively few nephrons compared with other amniotes of a similar size, possibly because of their lower metabolic rate.
Birds have relatively large, elongated kidneys, each of which is divided into three or more distinct lobes. The lobes consist of several small, irregularly arranged lobules, each centred on a branch of the ureter. Birds have small glomeruli, but about twice as many nephrons as similarly sized mammals.
The human kidney is fairly typical of that of mammals. Distinctive features of the mammalian kidney, in comparison with that of other vertebrates, include the presence of the renal pelvis and renal pyramids and a clearly distinguishable cortex and medulla. The latter feature is due to the presence of elongated loops of Henle; these are much shorter in birds, and not truly present in other vertebrates (although the nephron often has a short intermediate segment between the convoluted tubules). It is only in mammals that the kidney takes on its classical "kidney" shape, although there are some exceptions, such as the multilobed reniculate kidneys of pinnipeds and cetaceans.
Evolutionary adaptation
Kidneys of various animals show evidence of evolutionary adaptation and have long been studied in ecophysiology and comparative physiology. Kidney morphology, often indexed as the relative medullary thickness, is associated with habitat aridity and diet among species of mammals (e.g., carnivores have only long loops of Henle).
Society and culture
Significance
Egyptian
In ancient Egypt, the kidneys, like the heart, were left inside mummified bodies, unlike other organs, which were removed. Comparing this to biblical statements, and to drawings of the human body with the heart and two kidneys portrayed as a set of scales for weighing justice, it seems that Egyptian beliefs also connected the kidneys with judgement, and perhaps with moral decisions.
Hebrew
According to studies of modern and ancient Hebrew, various body organs in humans and animals were also held to serve an emotional or logical role, today mostly attributed to the brain and the endocrine system. The kidney is mentioned in several biblical verses in conjunction with the heart, much as the bowels were understood to be the "seat" of emotion – grief, joy and pain. Similarly, the Talmud (Berakhoth 61.a) states that one of the two kidneys counsels what is good, and the other evil.
In the sacrifices offered at the biblical Tabernacle and later on at the temple in Jerusalem, the priests were instructed to remove the kidneys and the adrenal gland covering the kidneys of the sheep, goat and cattle offerings, and to burn them on the altar, as the holy part of the "offering for God" never to be eaten.
India: Ayurvedic system
In ancient India, according to the Ayurvedic medical system, the kidneys were considered the beginning of the excretion channel system, the 'head' of the Mutra Srotas, receiving from all other systems, and therefore important in determining a person's health balance and temperament by the balance and mixture of the three doshas – the three health elements: Vata (or Vatha) – air, Pitta – bile, and Kapha – mucus. The temperament and health of a person can then be seen in the resulting color of the urine.
Practitioners of modern Ayurveda, a practice characterized as pseudoscience, have attempted to revive these methods in medical procedures as part of Ayurvedic urine therapy. Skeptics have called these procedures "nonsensical".
Medieval Christianity
The Latin term renes is related to the English word "reins", a synonym for the kidneys in Shakespearean English (e.g. Merry Wives of Windsor 3.5), which was also the time when the King James Version of the Bible was translated. Kidneys were once popularly regarded as the seat of the conscience and reflection, and a number of verses in the Bible (e.g. Ps. 7:9, Rev. 2:23) state that God searches out and inspects the kidneys, or "reins", of humans, together with the heart.
History
Kidney stones have been identified and recorded for about as long as written historical records exist. The urinary tract, including the ureters, and its function of draining urine from the kidneys, was described by Galen in the second century AD.
The first to examine the ureter through an internal approach, called ureteroscopy, rather than surgery, was Hampton Young in 1929. This was improved upon by V. F. Marshall, who published the first use of a flexible endoscope based on fiber optics in 1964. The insertion of a drainage tube into the renal pelvis, bypassing the ureters and lower urinary tract, called nephrostomy, was first described in 1941. Such approaches differed greatly from the open surgical approaches to the urinary system employed during the preceding two millennia.
Additional images
| Biology and health sciences | Urinary system | null |
17037 | https://en.wikipedia.org/wiki/Kumquat | Kumquat | Kumquats, or cumquats in Australian English, are a group of small, angiosperm, fruit-bearing trees in the family Rutaceae. Their taxonomy is disputed. They were previously classified as forming the now-historical genus Fortunella, or placed within Citrus. Different classifications have alternatively assigned them to anywhere from a single species, Citrus japonica, to numerous species representing each cultivar. Recent genomic analysis defines three pure species, Citrus hindsii, C. margarita and C. crassifolia, with C. × japonica being a hybrid of the last two.
The edible fruit closely resembles the orange (Citrus × sinensis) in color, texture, and anatomy, but is much smaller, being approximately the size of a large olive. The kumquat is a fairly cold-hardy citrus.
Etymology
The English word kumquat is a borrowing of the Cantonese kam kwat, from kam "golden" + kwat "orange".
Description
Kumquat plants have extremely glossy leaves and bear dainty white flowers that occur in clusters or individually inside the leaf axils. The plants can reach a height of 2.5–4.5 m (8–15 ft), with dense branches that are usually thornless but sometimes bear small thorns. They bear yellowish-orange fruits that are oval or round in shape and a few centimetres in diameter, with a sweet, pulpy skin and slightly acidic inner pulp. The fruit is often eaten whole and has a taste which is sweet and somewhat sour. Kumquat trees are self-pollinating.
Species
Citrus taxonomy is complicated and controversial. Different systems place various types of kumquats in different species or unite them into as few as two species. Botanically, many of the varieties of kumquats are classified as their own species, rather than a cultivar. Historically they were viewed as falling within the genus Citrus, but the Swingle system of citrus taxonomy elevated them to their own genus, Fortunella. Recent phylogenetic analysis suggests they do fall within Citrus. Swingle divided the kumquats into two subgenera, the Protocitrus, containing the primitive Hong Kong kumquat, and Eufortunella, comprising the round, oval kumquat, Meiwa kumquats, to which Tanaka added two others, the Malayan kumquat and the Jiangsu kumquat. Chromosomal analysis suggested that Swingle's Eufortunella represent a single 'true' species, while Tanaka's additional species were revealed to be likely hybrids of Fortunella with other Citrus, so-called xCitrofortunella.
One recent genomic analysis concluded there was only one true species of kumquat, but the analysis did not include the Hong Kong variety seen as a distinct species in all earlier analyses. A 2020 review concluded that genomic data were insufficient to reach a definitive conclusion on which kumquat cultivars represented distinct species. In 2022, a genome-level analysis of cultivated and wild varieties drew several conclusions. The authors found support for the division of kumquats into subgenera: Protocitrus, for the wild Hong Kong variety, and Eufortunella for the cultivated varieties, with a divergence predating the end of the Quaternary glaciation, perhaps between two ancestral populations isolated south and north, respectively, of the Nanling mountain range. Within the latter group, the oval, round and Meiwa kumquat each showed a level of divergence greater than between other recognized citrus species, such as between pomelo and citron, and hence each merits species-level classification. Though Swingle had speculated that the Meiwa kumquat was a hybrid of oval and round kumquats, the genomic analysis suggested instead that the round kumquat was an oval/Meiwa hybrid.
Hybrids
Hybrid forms of the kumquat include the following:
Calamansi: mandarin orange × kumquat
Citrangequat: citrange × kumquat
Limequat: key lime × kumquat
Orangequat: Satsuma mandarin × kumquat
Procimequat: limequat × kumquat
Sunquat: Meyer lemon (?) × kumquat
Yuzuquat: yuzu × kumquat
Origin and distribution
The kumquat plant is native to southern China. Historical references to kumquats appear in Chinese literature from at least the 12th century. They have been cultivated for centuries in other parts of East Asia, South Asia, and Southeast Asia. They were introduced into Europe in 1846 by Robert Fortune, a collector for the London Horticultural Society, and are now found across the world.
Cultivation
Kumquats are much hardier than citrus plants such as oranges. Sowing seed in the spring is ideal, because the temperature is pleasant with more chances of rain and sunshine; this also gives the tree enough time to become well established before winter. Early spring is the best time to transplant a sapling. They do best in direct sunlight (needing 6–7 hours a day) and planted directly in the ground. Kumquats do well in USDA hardiness zones 9 and 10 and can survive temperatures as low as about −10 °C (14 °F). On sufficiently mature trees, fruit forms in about 90 days.
In cultivation in the UK, Citrus japonica has gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017).
Propagation
Kumquats do not grow well from seeds and so are vegetatively propagated by using rootstock of another citrus fruit, air layering, or cuttings.
Varieties
The Nordmann seedless is a seedless cultivar of the Nagami kumquat (Citrus margarita). It is similar to Nagami but with a slightly different shape and lighter skin.
The Centennial Variegated cultivar of the Nagami kumquat arose spontaneously through open pollination of a Nagami (oval) kumquat (Citrus margarita) tree. The fruits are striped light green and yellow when underripe, turning orange and losing their stripes as they ripen. They are oval, sometimes necked, about 2.5 inches long, with a smooth rind, and mature in winter. The cultivar produces a greater proportion of fruit to peel than the oval kumquat, and its fruits are rounder; the tree is distinguishable by its lack of thorns.
The Puchimaru kumquat is a seedless or virtually seedless Japanese kumquat cultivar. It is resistant to citrus canker and citrus scab. The fruit weighs 11–20 grams and is ellipsoid in shape. It has a dark orange rind which is 4 millimeters thick. The juice content is relatively low. The oil glands are somewhat large and conspicuous. It ripens in January.
Uses
Nutrition
A raw kumquat is 81% water, 16% carbohydrates, 2% protein, and 1% fat (table). In a reference amount of 100 grams, raw kumquat supplies a modest amount of food energy and is a rich source of vitamin C (53% of the Daily Value), with no other micronutrients in significant content (table).
Essential oil
The essential oil of the kumquat peel contains much of the aroma of the fruit, and is composed principally of limonene, which makes up around 93% of the total. Besides limonene and alpha-pinene (0.34%), both of which are considered monoterpenes, the oil is unusually rich (0.38% total) in sesquiterpenes such as α-bergamotene (0.021%), caryophyllene (0.18%), α-humulene (0.07%) and α-muurolene (0.06%), and these contribute to the spicy and woody flavor of the fruit. Carbonyl compounds make up much of the remainder, and these are responsible for much of the distinctive flavor. These compounds include esters such as isopropyl propanoate (1.8%) and terpinyl acetate (1.26%); ketones such as carvone (0.175%); and a range of aldehydes such as citronellal (0.6%) and 2-methylundecanal. Other oxygenated compounds include nerol (0.22%) and trans-linalool oxide (0.15%).
Gallery
| Biology and health sciences | Sapindales | null |
17038 | https://en.wikipedia.org/wiki/Kyanite | Kyanite | Kyanite is a typically blue aluminosilicate mineral, found in aluminium-rich metamorphic pegmatites and sedimentary rock. It is the high pressure polymorph of andalusite and sillimanite, and the presence of kyanite in metamorphic rocks generally indicates metamorphism deep in the Earth's crust. Kyanite is also known as disthene or cyanite.
Kyanite is strongly anisotropic, in that its hardness varies depending on its crystallographic direction. In kyanite, this anisotropism can be considered an identifying characteristic, along with its characteristic blue color. Its name comes from the same origin as that of the color cyan, being derived from the Ancient Greek word κύανος. This is typically rendered into English as kyanos or kuanos and means "dark blue."
Kyanite is used as a raw material in the manufacture of ceramics and abrasives, and it is an important index mineral used by geologists to trace metamorphic zones.
Properties
Kyanite is an aluminum silicate mineral with the chemical formula Al2SiO5. It is typically patchy blue in color, though it can range from pale to deep blue and can also be gray or white or, infrequently, light green. It typically forms sprays of bladed crystals, but is less commonly found as distinct euhedral (well-shaped) crystals, which are particularly prized by collectors. It has a perfect {100} cleavage plane, parallel to the long axis of the crystal, and a second good cleavage plane {010} at an angle of 79 degrees to the {100} plane. Kyanite also shows a parting on {001} at an angle of about 85 degrees to the long axis of the crystal. Cleavage surfaces typically display a pearly luster. The crystals are slightly flexible.
Kyanite's elongated, columnar crystals are usually a good first indication of the mineral, as is its color (when the specimen is blue). Associated minerals are useful as well, especially kyanite's polymorphs andalusite and sillimanite, or staurolite, which occurs frequently with kyanite. However, the most useful characteristic in identifying kyanite is its anisotropism. If one suspects a specimen to be kyanite, verifying that it has two distinctly different hardness values on perpendicular axes is a key to identification; it has a hardness of 5.5 parallel to {001} and 7 parallel to {100}. Thus, a steel needle will easily scratch a kyanite crystal parallel to its long axis, but the crystal is impervious to being scratched by a steel needle perpendicular to the long axis.
Structure
The kyanite structure can be visualized as a distorted face centered cubic lattice of oxygen ions, with aluminium ions occupying 40% of the octahedral sites and silicon occupying 10% of the tetrahedral sites. The aluminium octahedra form chains along the length of the crystal, half of which are straight and half of which are zigzag, with silica tetrahedra linking the chains together. There is no direct linkage between the silica tetrahedra, making kyanite a member of the nesosilicate class of silicate minerals.
Occurrence
Kyanite occurs in biotite gneiss, mica schist, and hornfels, which are metamorphic rocks formed at high pressure during regional metamorphism of a protolith which is rich in aluminium (a pelitic protolith). Kyanite is also occasionally found in granite and pegmatites and associated quartz veins, and is infrequently found in eclogites. It occurs as detrital grains in sedimentary rocks, although it tends to weather rapidly. It is associated with staurolite, andalusite, sillimanite, talc, hornblende, gedrite, mullite and corundum.
Kyanite is one of three common minerals having the composition Al2SiO5. Minerals with identical compositions but different, distinct crystal structures are called polymorphs; the two other polymorphs of this composition are andalusite and sillimanite. Kyanite is the most stable at high pressure, andalusite is the most stable at lower temperature and pressure, and sillimanite is the most stable at higher temperature and lower pressure. All three are equally stable at the triple point, near 4.2 kbar and about 530 °C. This makes the presence of kyanite in a metamorphic rock an indication of metamorphism at high pressure.
Kyanite is often used as an index mineral to define and trace a metamorphic zone that was subject to a particular degree of metamorphism at great depth in the crust. For example, G. M. Barrow defined kyanite zones and sillimanite zones in his pioneering work on the mineralogy of metamorphic rocks. Barrow was characterizing a region of Scotland that had experienced regional metamorphism at depth. By contrast, the metamorphic zones surrounding the Fanad pluton of Ireland, which formed by contact metamorphism at a shallower depth in the crust, include andalusite and sillimanite zones but no kyanite zone.
Kyanite is potentially stable at low temperature and pressure. However, under these conditions the reactions that produce kyanite, such as the breakdown of pyrophyllite (Al2Si4O10(OH)2 → Al2SiO5 + 3 SiO2 + H2O), never take place, and hydrous aluminosilicate minerals such as muscovite, pyrophyllite, or kaolinite are found instead of kyanite.
Bladed crystals of kyanite are very common, but individual euhedral crystals are prized by collectors. Kyanite occurs in Manhattan schist, formed under extreme pressure as a result of a continental collision during the assembly of the supercontinent of Pangaea. It is also found in pegmatites of the Appalachian Mountains and in Minas Gerais, Brazil. Splendid specimens are found at Pizzo Forno in Switzerland.
Kyanite can take on an orange color, which notably occurs in Loliondo, Tanzania. The orange color is due to inclusions of small amounts of manganese (Mn3+) in the structure.
Uses
Kyanite is used primarily in refractory and ceramic products, including porcelain plumbing and dishware. It is also used in electronics, electrical insulators and abrasives.
At temperatures above 1100 °C, kyanite decomposes into mullite and vitreous silica via the following reaction:
3 Al2SiO5 → Al6Si2O13 (mullite) + SiO2 (vitreous silica)
This transformation results in an expansion of the material. Mullitized kyanite is used to manufacture refractory materials.
Kyanite has been used as a semiprecious gemstone, which may display cat's eye chatoyancy, though this effect is limited by its anisotropism and perfect cleavage. Color varieties include the orange kyanite from Tanzania mentioned above, whose color is due to small amounts of manganese (Mn3+) in the structure.
| Physical sciences | Silicate minerals | Earth science |
17064 | https://en.wikipedia.org/wiki/Kangaroo | Kangaroo | Kangaroos are marsupials from the family Macropodidae (macropods, meaning "large foot"). In common use the term is used to describe the largest species from this family, the red kangaroo, as well as the antilopine kangaroo, eastern grey kangaroo, and western grey kangaroo. Kangaroos are indigenous to Australia and New Guinea. The Australian government estimates that 42.8 million kangaroos lived within the commercial harvest areas of Australia in 2019, down from 53.2 million in 2013.
As with the terms "wallaroo" and "wallaby", "kangaroo" refers to a paraphyletic grouping of species. All three terms refer to members of the same taxonomic family, Macropodidae, and are distinguished according to size. The largest species in the family are called "kangaroos" and the smallest are generally called "wallabies". The term "wallaroos" refers to species of intermediate size. There are also the tree-kangaroos, another type of macropod, which inhabit the tropical rainforests of New Guinea, far northeastern Queensland and some of the islands in the region, living in the upper branches of trees. A general idea of the relative sizes of these informal groupings:
wallabies: head and body length of 45–105 cm and tail length of 33–75 cm; the dwarf wallaby (the smallest of all known macropod species) is 46 cm long and weighs 1.6 kg;
tree-kangaroos: ranging from Lumholtz's tree-kangaroo: body and head length of 48–65 cm, tail of 60–74 cm, weight of 7.2 kg (16 lb) for males and 5.9 kg (13 lb) for females; to the grizzled tree-kangaroo: length of 75–90 cm (30 to 35 in) and weight of 8–15 kg (18–33 lb);
wallaroos: the black wallaroo (the smaller of the two species) with a tail length of 60–70 cm and weight of 19–22 kg (41.8–48.5 lb) for males and 13 kg (28.6 lb) for females;
kangaroos: a large male can be 2 m (6 ft 7 in) tall and weigh 90 kg (200 lb).
Kangaroos have large, powerful hind legs, large feet adapted for leaping, a long muscular tail for balance, and a small head. Like most marsupials, female kangaroos have a pouch called a marsupium in which joeys complete postnatal development.
Because of its grazing habits, the kangaroo has developed specialized teeth that are rare among mammals. Its incisors are able to crop grass close to the ground and its molars chop and grind the grass. Since the two sides of the lower jaw are not joined or fused together, the lower incisors are farther apart, giving the kangaroo a wider bite. The silica in grass is abrasive, so kangaroo molars are ground down and they actually move forward in the mouth before they eventually fall out, and are replaced by new teeth that grow in the back. This process is known as polyphyodonty and, amongst other mammals, only occurs in elephants and manatees.
The large kangaroos have adapted much better than the smaller macropods to land clearing for pastoral agriculture and habitat changes brought to the Australian landscape by humans. Many of the smaller species are rare and endangered, while kangaroos are relatively plentiful, despite a common misconception to the contrary.
The kangaroo along with the koala are symbols of Australia. A kangaroo appears on the Australian coat of arms and on some of its currency, and is used as a logo for some of Australia's most well-known organisations, such as Qantas, and as the roundel of the Royal Australian Air Force. The kangaroo is important to both Australian culture and the national image, and consequently there are numerous popular culture references.
Wild kangaroos are shot for meat, leather hides, and to protect grazing land. Kangaroo meat has perceived health benefits for human consumption compared with traditional meats due to the low level of fat on kangaroos.
Terminology
The word kangaroo derives from the Guugu Yimithirr word gangurru, referring to eastern grey kangaroos. The name was first recorded as "kanguru" on 12 July 1770 in an entry in the diary of Sir Joseph Banks; this occurred at the site of modern Cooktown, on the banks of the Endeavour River, where HMS Endeavour, under the command of Lieutenant James Cook, was beached for almost seven weeks to repair damage sustained on the Great Barrier Reef. Cook first referred to kangaroos in his diary entry of 4 August. Guugu Yimithirr is the language of the people of the area.
A common myth about the kangaroo's English name is that it was a Guugu Yimithirr phrase for "I don't know" or "I don't understand". According to this legend, Cook and Banks were exploring the area when they happened upon the animal, and asked a nearby local what the creatures were called. The local responded "kangaroo", said to mean "I don't know/understand", which Cook then took to be the name of the creature. Anthropologist Walter Roth tried to correct this legend as far back as 1898, but few took note until 1972, when linguist John B. Haviland, in his research with the Guugu Yimithirr people, was able to confirm that gangurru referred to a rare, large, dark-coloured species of kangaroo. However, when Phillip Parker King visited the Endeavour River region in 1819 and 1820, he maintained that the local word was not kangaroo but menuah, perhaps referring to a different species of macropod. There are similar, more credible stories of naming confusion, such as with the Yucatán Peninsula.
Kangaroos are often colloquially referred to as "roos". Male kangaroos are called bucks, boomers, jacks, or old men; females are does, flyers, or jills; and the young ones are joeys. The collective noun for a group of kangaroos is a mob, court, or troupe.
Taxonomy and description
There are four extant species that are commonly referred to as kangaroos:
The red kangaroo (Osphranter rufus) is the largest surviving marsupial anywhere in the world. It occupies the arid and semi-arid centre of the country. The highest population densities of the red kangaroo occur in the rangelands of western New South Wales. Red kangaroos are commonly thought to be the most abundant species of kangaroo, but eastern greys actually have a larger population. A large male can be 2 metres (6 ft 7 in) tall and weigh 90 kg (200 lb).
The eastern grey kangaroo (Macropus giganteus) is less well-known than the red (outside Australia), but the most often seen, as its range covers the fertile eastern part of the country. The range of the eastern grey kangaroo extends from the top of the Cape York Peninsula in northern Queensland down to Victoria, as well as areas of southeastern Australia and Tasmania. Population densities of eastern grey kangaroos usually peak near 100 per km2 in suitable habitats of open woodlands. Populations are more limited in areas of land clearance, such as farmland, where forest and woodland habitats are limited in size or abundance.
The western grey kangaroo (Macropus fuliginosus) is slightly smaller again at about 54 kg (119 lb) for a large male. It is found in the southern part of Western Australia, South Australia near the coast, and the Murray–Darling basin. The highest population densities occur in the western Riverina district of New South Wales and in the western areas of the Nullarbor Plain in Western Australia. Populations may have declined, particularly in agricultural areas. The species has a high tolerance to the plant toxin sodium fluoroacetate, which indicates a possible origin from the southwest region of Australia.
The antilopine kangaroo (Osphranter antilopinus) is, essentially, the far northern equivalent of the eastern grey and western grey kangaroos. It is sometimes referred to as the antilopine wallaroo, but in behaviour and habitat it is more similar to the red, eastern grey and western grey kangaroos. Like them, it is a creature of the grassy plains and woodlands, and gregarious. Its name comes from its fur, which is similar in colour and texture to that of antelopes. Characteristically, the noses of males swell behind the nostrils. This enlarges nasal passages and allows them to release more heat in hot and humid climates.
In addition, there are about 50 smaller macropods closely related to the kangaroos in the family Macropodidae. Kangaroos and other macropods share a common ancestor with the Phalangeridae from the Middle Miocene. This ancestor was likely arboreal and lived in the canopies of the extensive forests that covered most of Australia at that time, when the climate was much wetter, and fed on leaves and stems. From the Late Miocene through the Pliocene and into the Pleistocene the climate became drier, which led to a decline of forests and expansion of grasslands. At this time, there was a radiation of macropodids characterised by enlarged body size and adaptation to the low-quality grass diet with the development of foregut fermentation. The most numerous early macropods, the Balbaridae and the Bulungamayinae, became extinct in the Late Miocene around 5–10 mya. There is dispute over the relationships of the two groups to modern kangaroos and rat-kangaroos. Some argue that the balbarines were the ancestors of rat-kangaroos and the bulungamayines were the ancestors of kangaroos, while others hold the contrary view.
The middle to late Miocene bulungamayines Ganguroo and Wanburoo lacked digit 1 of the hind foot, and digits 2 and 3 were reduced and partly under the large digit 4, much like the modern kangaroo foot. This would indicate that they were bipedal. In addition, their ankle bones had an articulation that would have prohibited much lateral movement, an adaptation for bipedal hopping. Species related to the modern grey kangaroos and wallaroos begin to appear in the Pliocene. The red kangaroo appears to be the most recently evolved kangaroo, with its fossil record not going back beyond the Pleistocene, 1–2 mya.
The first kangaroo to be exhibited in the Western world was an example shot by John Gore, an officer on Captain Cook's ship, HMS Endeavour, in 1770. The animal was shot and its skin and skull transported back to England whereupon it was stuffed (by taxidermists who had never seen the animal before) and displayed to the general public as a curiosity. The first glimpse of a kangaroo for many 18th-century Britons was a painting by George Stubbs.
Comparison with wallabies
Kangaroos and wallabies belong to the same taxonomic family (Macropodidae) and often the same genera, but the name kangaroo is applied specifically to the four largest species of the family. "Wallaby" is an informal designation generally used for any macropod smaller than a kangaroo or wallaroo that has not been designated otherwise.
Biology and behaviour
Locomotion
Kangaroos are the only large mammals to use hopping on two legs as their primary means of locomotion. The comfortable hopping speed for a red kangaroo is about 20–25 km/h (12–16 mph), but speeds of up to 70 km/h (43 mph) can be attained over short distances, while a speed of 40 km/h (25 mph) can be sustained for nearly 2 km (1.2 mi). During a hop, the powerful gastrocnemius muscles lift the body off the ground while the smaller plantaris muscle, which attaches near the large fourth toe, is used for push-off. Seventy percent of potential energy is stored in the elastic tendons. At slow speeds, a kangaroo employs pentapedal locomotion, using its tail to form a tripod with its two forelimbs while bringing its hind feet forward. Both pentapedal walking and fast hopping are energetically costly. Hopping at moderate speeds is the most energy efficient, and a kangaroo moving above moderate speeds maintains energy efficiency better than similarly sized animals running at the same speed.
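To make the stored-energy figure concrete, here is a minimal illustrative sketch in Python; the 500 J of mechanical energy per hop is an assumed number chosen only for the arithmetic, not a measured value.

    # Illustrative only: if a hop costs hop_energy_j joules of mechanical
    # energy and 70% of that is recycled through the elastic tendons, the
    # muscles need to supply only the remaining fraction on each hop.
    hop_energy_j = 500.0        # assumed mechanical energy per hop (illustrative)
    elastic_recovery = 0.70     # fraction stored and returned by the tendons
    muscular_work_j = hop_energy_j * (1.0 - elastic_recovery)
    print(muscular_work_j)      # 150.0 J of fresh muscular work per hop

This is why hopping at moderate speeds is so economical: most of each hop's energy is recovered rather than regenerated by muscle.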
Diet
Kangaroos have single-chambered stomachs quite unlike those of cattle and sheep, which have four compartments. They sometimes regurgitate the vegetation they have eaten, chew it as cud, and then swallow it again for final digestion. However, this is a different, more strenuous activity than in ruminants, and does not take place as frequently.
Different species of kangaroos have different diets, although all are strict herbivores. The eastern grey kangaroo is predominantly a grazer, and eats a wide variety of grasses, whereas some other species, such as the red kangaroo, include significant amounts of shrubs in their diets. Smaller species of kangaroos also consume hypogeal fungi. Many species are nocturnal and crepuscular, usually spending the hot days resting in shade and the cool evenings, nights and mornings moving about and feeding.
Absence of digestive methane release
Despite having herbivorous diets similar to ruminants such as cattle, which release large quantities of digestive methane through exhaling and eructation (burping), kangaroos release virtually none. The hydrogen byproduct of fermentation is instead converted into acetate, which is then used to provide further energy. Scientists are interested in the possibility of transferring the bacteria responsible for this process from kangaroos to cattle, since the greenhouse gas effect of methane is 23 times greater than that of carbon dioxide per molecule.
Social and sexual behaviour
Groups of kangaroos are called mobs, courts or troupes, which usually have 10 or more kangaroos in them. Living in mobs can provide protection for some of the weaker members of the group. The size and stability of mobs vary between geographic regions, with eastern Australia having larger and more stable aggregations than those in arid areas farther west. Larger aggregations display high levels of interaction and complex social structures, comparable to those of ungulates. One common behaviour is nose touching and sniffing, which mostly occurs when an individual joins a group. The kangaroo performing the sniffing gains much information from smell cues. This behaviour enforces social cohesion without consequent aggression. During mutual sniffing, if one kangaroo is smaller, it will hold its body closer to the ground and its head will quiver, which serves as a possible form of submission. Greetings between males and females are common, with larger males being the most involved in meeting females. Most other non-antagonistic behaviour occurs between mothers and their young. Mother and young reinforce their bond through grooming. A mother will groom her young while it is suckling or after it is finished suckling. A joey will nuzzle its mother's pouch if it wants access to it.
Sexual activity of kangaroos consists of consort pairs. Oestrous females roam widely and attract the attention of males with conspicuous signals. A male will monitor a female and follow her every movement. He sniffs her urine to see if she is in oestrus, a process exhibiting the flehmen response. The male will then proceed to approach her slowly to avoid alarming her. If the female does not run away, the male will continue by licking, pawing, and scratching her, and copulation will follow. After copulation is over, the male will move on to another female. Consort pairing may take several days and the copulation is also long. Thus, a consort pair is likely to attract the attention of a rival male. As larger males are tending bonds with females near oestrus, smaller males will tend to females that are farther from oestrus. Dominant males can avoid having to sort through females to determine their reproductive status by searching for tending bonds held by the largest male they can displace without a fight.
Fighting has been described in all species of kangaroos. Fights between kangaroos can be brief or long and ritualised. In highly competitive situations, such as males fighting for access to oestrous females or at limited drinking spots, the fights are brief. Both sexes will fight for drinking spots, but long, ritualised fighting or "boxing" is largely done by males. Smaller males fight more often near females in oestrus, while the large males in consorts do not seem to get involved. Ritualised fights can arise suddenly when males are grazing together. However, most fights are preceded by two males scratching and grooming each other. One or both of them will adopt a high standing posture, with one male issuing a challenge by grasping the other male's neck with its forepaw. Sometimes, the challenge will be declined. Large males often reject challenges by smaller males. During fighting, the combatants adopt a high standing posture and paw at each other's heads, shoulders and chests. They will also lock forearms and wrestle and push each other as well as balance on their tails to kick each other in the abdomen.
Brief fights are similar, except there is no forearm locking. The losing combatant seems to use kicking more often, perhaps to parry the thrusts of the eventual winner. A winner is decided when a kangaroo breaks off the fight and retreats. Winners are able to push their opponents backwards or down to the ground. They also seem to grasp their opponents when they break contact and push them away. The initiators of the fights are usually the winners. These fights may serve to establish dominance hierarchies among males, as winners of fights have been seen to displace their opponent from resting sites later in the day. Dominant males may also pull grass to intimidate subordinate ones.
Predators
Kangaroos have a few natural predators. The thylacine, considered by palaeontologists to have once been a major natural predator of the kangaroo, is now extinct. Other extinct predators included the marsupial lion, Megalania and Wonambi. However, with the arrival of humans in Australia at least 50,000 years ago and the introduction of the dingo about 5,000 years ago, kangaroos have had to adapt.
Along with dingoes, introduced species such as foxes, feral cats, and both domestic and feral dogs, pose a threat to kangaroo populations. Kangaroos and wallabies are adept swimmers, and often flee into waterways if presented with the option. If pursued into the water, a large kangaroo may use its forepaws to hold the predator underwater so as to drown it. Another defensive tactic described by witnesses is catching the attacking dog with the forepaws and disembowelling it with the hind legs.
Adaptations
Kangaroos have developed a number of adaptations to a dry, infertile country and highly variable climate. As with all marsupials, the young are born at a very early stage of development—after a gestation of 31–36 days. At this stage, only the forelimbs are somewhat developed, to allow the newborn to climb to the pouch and attach to a teat. In comparison, a human embryo at a similar stage of development would be at about 7 weeks gestation (even in a modern intensive care unit, premature babies born at less than 23 weeks gestation are usually not mature enough to survive). When the joey is born, it is about the size of a lima bean. The joey will usually stay in the pouch for about 9 months (180–320 days for the western grey) before starting to leave the pouch for small periods of time. It is usually fed by its mother until reaching 18 months.
The female kangaroo is usually permanently pregnant, except on the day she gives birth; however, she has the ability to freeze the development of an embryo until the previous joey is able to leave the pouch. This is known as embryonic diapause and will occur in times of drought and in areas with poor food sources. The composition of the milk produced by the mother varies according to the needs of the joey. In addition, the mother is able to produce two different kinds of milk simultaneously for the newborn and the older joey still in the pouch.
Unusually, during a dry period, males will not produce sperm and females will conceive only if enough rain has fallen to produce a large quantity of green vegetation.
Kangaroos and wallabies have large, elastic tendons in their hind legs. They store elastic strain energy in the tendons of their large hind legs, providing most of the energy required for each hop by the spring action of the tendons rather than by any muscular effort. This is true in all animal species which have muscles connected to their skeletons through elastic elements such as tendons, but the effect is more pronounced in kangaroos.
There is also a link between the hopping action and breathing: as the feet leave the ground, air is expelled from the lungs; bringing the feet forward ready for landing refills the lungs, providing further energy efficiency. Studies of kangaroos and wallabies have demonstrated that, beyond the minimum energy expenditure required to hop at all, increased speed requires very little extra effort (much less than the same speed increase requires in, say, a horse, dog or human), and that the extra energy is required to carry extra weight. For kangaroos, the key benefit of hopping is not speed to escape predators—the top speed of a kangaroo is no higher than that of a similarly sized quadruped, and the Australian native predators are in any case less fearsome than those of other countries—but economy: in an infertile country with highly variable weather patterns, the ability of a kangaroo to travel long distances at moderately high speed in search of food sources is crucial to survival.
New research has revealed that a kangaroo's tail acts as a third leg rather than just a balancing strut. Kangaroos have a unique three-stage walk where they plant their front legs and tail first, then push off their tail, followed lastly by the back legs. The propulsive force of the tail is equal to that of both the front and hind legs combined, and it performs as much work as a human leg does when walking at the same speed.
A DNA sequencing project of the genome of a member of the kangaroo family, the tammar wallaby, was started in 2004. It was a collaboration between Australia (mainly funded by the State of Victoria) and the National Institutes of Health in the US. The tammar's genome was fully sequenced in 2011. The genome of a marsupial such as the kangaroo is of great interest to scientists studying comparative genomics, because marsupials are at an ideal degree of evolutionary divergence from humans: mice are too close and have not developed many different functions, while birds are genetically too remote. The dairy industry could also benefit from this project.
Blindness
Eye disease is rare but not new among kangaroos. The first official report of kangaroo blindness took place in 1994, in central New South Wales. The following year, reports of blind kangaroos appeared in Victoria and South Australia. By 1996, the disease had spread "across the desert to Western Australia". Australian authorities were concerned the disease could spread to other livestock and possibly humans. Researchers at the Australian Animal Health Laboratories in Geelong detected a virus called the Wallal virus in two species of midges, believed to have been the carriers. Veterinarians also discovered fewer than 3% of kangaroos exposed to the virus developed blindness.
Reproduction and life cycle
Kangaroo reproduction is similar to that of opossums. The egg (still contained in the shell membrane, a few micrometres thick, and with only a small quantity of yolk within it) descends from the ovary into the uterus. There it is fertilised and quickly develops into a neonate. Even in the largest kangaroo species (the red kangaroo), the neonate emerges after only 33 days. Usually, only one young is born at a time. It is blind, hairless, and only a few centimetres long; its hindlegs are mere stumps; it instead uses its more developed forelegs to climb its way through the thick fur on its mother's abdomen into the pouch, which takes about three to five minutes. Once in the pouch, it fastens onto one of the four teats and starts to feed. Almost immediately, the mother's sexual cycle starts again. Another egg descends into the uterus and she becomes sexually receptive. Then, if she mates and a second egg is fertilised, its development is temporarily halted. This is known as embryonic diapause, and will occur in times of drought and in areas with poor food sources. Meanwhile, the neonate in the pouch grows rapidly. After about 190 days, the baby (joey) is sufficiently large and developed to make its full emergence out of the pouch, after sticking its head out for a few weeks until it eventually feels safe enough to fully emerge. From then on, it spends increasing time in the outside world and eventually, after about 235 days, it leaves the pouch for the last time. The lifespan of kangaroos averages six years in the wild and can exceed 20 years in captivity, varying by species. Most individuals, however, do not reach maturity in the wild.
Interaction with humans
The kangaroo has always been a very important animal for Aboriginal Australians, for its meat, hide, bone, and tendon. Kangaroo hides were also sometimes used for recreation; in particular there are accounts of some tribes (Kurnai) using stuffed kangaroo scrotum as a ball for the traditional football game of marngrook. In addition, there were important Dreaming stories and ceremonies involving the kangaroo. Aherrenge is a current kangaroo dreaming site in the Northern Territory.
Unlike many of the smaller macropods, kangaroos have fared well since European settlement. European settlers cut down forests to create vast grasslands for sheep and cattle grazing, added stock watering points in arid areas, and have substantially reduced the number of dingoes. This overabundance has led to the view that the kangaroo is a pest animal requiring regular culling and other forms of management. There is concern that current management practices are leading to detrimental consequences for kangaroo welfare, landscape sustainability, biodiversity conservation, resilient agricultural production and Aboriginal health and culture.
Kangaroos are shy and retiring by nature, and in normal circumstances present no threat to humans. In 2003, Lulu, an eastern grey which had been hand-reared, saved a farmer's life by alerting family members to his location when he was injured by a falling tree branch. She received the RSPCA Australia National Animal Valour Award on 19 May 2004.
There are very few records of kangaroos attacking humans without provocation; however, several such unprovoked attacks in 2004 spurred fears of a rabies-like disease possibly affecting the marsupials. Only two reliably documented cases of a fatality from a kangaroo attack have occurred in Australia. The first occurred in New South Wales in 1936, when a hunter was killed as he tried to rescue his two dogs from a heated fray. The second was inflicted by a domesticated kangaroo on a 77-year-old man in Redmond, Western Australia, in September 2022. Other suggested causes for erratic and dangerous kangaroo behaviour include extreme thirst and hunger. In July 2011, a male red kangaroo attacked a 94-year-old woman in her own backyard as well as her son and two police officers responding to the situation. The kangaroo was capsicum sprayed (pepper sprayed) and later put down after the attack.
Kangaroos, even those that are not domesticated, can communicate with humans, according to a research study.
Collisions with vehicles
Nine out of ten animal collisions in Australia involve kangaroos. A collision with a vehicle is capable of killing a kangaroo. Kangaroos dazzled by headlights or startled by engine noise often leap in front of cars. Since kangaroos in mid-bound can reach speeds of around 50 km/h (31 mph) and are relatively heavy, the force of impact can be severe. Small vehicles may be destroyed, while larger vehicles may suffer engine damage. The risk of harm or death to vehicle occupants is greatly increased if the windscreen is the point of impact. As a result, "kangaroo crossing" signs are commonplace in Australia.
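As a rough, hypothetical illustration of why these impacts are severe, the kinetic energy involved can be estimated with the standard formula E = ½mv²; the body mass below is an assumption chosen for the arithmetic, not a figure from the text.

    # Hypothetical collision-energy estimate for a kangaroo in mid-bound.
    mass_kg = 66.0                  # assumed adult body mass (illustrative)
    speed_m_s = 50.0 / 3.6          # 50 km/h converted to metres per second
    kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
    print(round(kinetic_energy_j))  # about 6366 J delivered in the impact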
Vehicles that frequent isolated roads, where roadside assistance may be scarce, are often fitted with "roo bars" to minimise damage caused by collision. Bonnet-mounted devices, designed to scare wildlife off the road with ultrasound and other methods, have been devised and marketed, but are ineffective.
If a female is the victim of a collision, animal welfare groups ask that her pouch be checked for any surviving joey, in which case it may be removed to a wildlife sanctuary or veterinary surgeon for rehabilitation. Likewise, when an adult kangaroo is injured in a collision, a vet, the RSPCA Australia or the National Parks and Wildlife Service can be consulted for instructions on proper care. In New South Wales, rehabilitation of kangaroos is carried out by volunteers from WIRES. Council road signs often list phone numbers for callers to report injured animals.
Emblems and popular culture
The kangaroo is a recognisable symbol of Australia. The kangaroo and emu feature on the Australian coat of arms. Kangaroos have also been featured on coins, most notably the five kangaroos on the Australian one dollar coin. The Australian Made logo consists of a golden kangaroo in a green triangle to show that a product is grown or made in Australia.
Registered trademarks of early Australian companies using the kangaroo included Yung, Schollenberger & Co. Walla Walla Brand leather and skins (1890); Arnold V. Henn (1892) whose emblem showed a family of kangaroos playing with a skipping rope; Robert Lascelles & Co. linked the speed of the animal with its velocipedes (1896); while some overseas manufacturers, like that of "The Kangaroo" safety matches (made in Japan) of the early 1900s, also adopted the symbol. Even today, Australia's national airline, Qantas, uses a bounding kangaroo for its logo.
The kangaroo appears in Rudyard Kipling's Just So Stories: in "The Sing-Song of Old Man Kangaroo", the Big God Nqong has a dingo chase the kangaroo, whose legs and tail grow long by five o'clock that afternoon.
The kangaroo and wallaby feature predominantly in Australian sports team names and mascots. Examples include the Australian national rugby league team (the Kangaroos) and the Australian national rugby union team (the Wallabies). In a nationwide competition held in 1978 by the Games Australia Foundation Limited for the XII Commonwealth Games of 1982, Hugh Edwards' design was chosen: a simplified form of six thick stripes arranged in pairs, extending from along the edges of a triangular centre, representing both the kangaroo in full flight and a stylised "A" for Australia.
Kangaroos are well represented in films, television, books, toys and souvenirs around the world. Skippy the Bush Kangaroo was a popular 1960s Australian children's television series about a fictional pet kangaroo. Kangaroos are featured in the Rolf Harris song "Tie Me Kangaroo Down, Sport" and several Christmas carols.
Meat
The kangaroo has been a source of food for indigenous Australians for tens of thousands of years. Kangaroo meat is high in protein and low in fat (about 2%). Kangaroo meat has a high concentration of conjugated linoleic acid (CLA) compared with other foods, and is a rich source of vitamins and minerals. Low fat diets rich in CLA have been studied for their potential in reducing obesity and atherosclerosis.
Kangaroo meat is sourced from wild animals, and many see harvesting for meat as a better outcome of population-control programs than culling kangaroos as pests and leaving the carcasses in paddocks. Kangaroos are harvested by licensed shooters in accordance with a code of practice and are protected by state and federal legislation.
Kangaroo meat is exported to many countries around the world. However, it is not considered biblically kosher by Jews or Adventists. It is considered halal according to Muslim dietary standards, because kangaroos are herbivorous.
| Biology and health sciences | Marsupials | null |
17100 | https://en.wikipedia.org/wiki/Kingfisher | Kingfisher | Kingfishers are a family, the Alcedinidae, of small to medium-sized, brightly coloured birds in the order Coraciiformes. They have a cosmopolitan distribution, with most species living in the tropical regions of Africa, Asia, and Oceania, but they can also be found in Europe and the Americas. They can be found in deep forests near calm ponds and small rivers. The family contains 118 species and is divided into three subfamilies and 19 genera. All kingfishers have large heads, long, sharp, pointed bills, short legs, and stubby tails. Most species have bright plumage with only small differences between the sexes. Most species are tropical in distribution, and a slight majority are found only in forests.
They consume a wide range of prey, usually caught by swooping down from a perch. While kingfishers are usually thought to live near rivers and eat fish, many species live away from water and eat small invertebrates. Like other members of their order, they nest in cavities, usually tunnels dug into the natural or artificial banks in the ground. Some kingfishers nest in arboreal termite nests. A few species, principally insular forms, are threatened with extinction. In Britain, the word "kingfisher" normally refers to the common kingfisher.
Taxonomy, systematics and evolution
The kingfisher family Alcedinidae is in the order Coraciiformes, which also includes the motmots, bee-eaters, todies, rollers, and ground-rollers. The name of the family was introduced (as Alcedia) by the French polymath Constantine Samuel Rafinesque in 1815. It is divided into three subfamilies: the tree kingfishers (Halcyoninae), the river kingfishers (Alcedininae), and the water kingfishers (Cerylinae). The name Daceloninae is sometimes used for the tree kingfisher subfamily, but it was introduced by Charles Lucien Bonaparte in 1841, while Halcyoninae, introduced by Nicholas Aylward Vigors in 1825, is earlier and has priority. A few taxonomists elevate the three subfamilies to family status. In spite of the word "kingfisher" in their English vernacular names, many of these birds are not specialist fish-eaters; none of the species in Halcyoninae are.
The scientific name is derived from Greek mythology and the ancient belief that the birds nested on the open sea; the Greeks called them halkyons (Latin halcyon), from hals (sea) and kyon (born), and believed the gods gave the halkyons the ability to calm the waters when nesting. In the myth, one of the Pleiades, named Alcyone (Alcedo in Latin), married Ceyx, who was killed in a shipwreck; Alcyone drowned herself in grief, and the gods revived them both as kingfishers.
The phylogenetic relationship between the kingfishers and the other five families that make up the order Coraciiformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
The centre of kingfisher diversity is the Australasian realm, but the group originated in the Indomalayan region around 27 million years ago (Mya) and invaded the Australasian realm a number of times. Fossil kingfishers have been described from Lower Eocene rocks in Wyoming and Middle Eocene rocks in Germany, around 30–40 Mya. More recent fossil kingfishers have been described in the Miocene rocks of Australia (5–25 Mya). Several fossil birds have been erroneously ascribed to the kingfishers, including Halcyornis, from the Lower Eocene rocks in Kent, which has also been considered a gull, but is now thought to have been a member of an extinct family.
Amongst the three subfamilies, the Alcedininae are basal to the other two subfamilies. The few species found in the Americas, all from the subfamily Cerylinae, suggest that the sparse representation in the Western Hemisphere resulted from just two original colonising events. The subfamily is a comparatively recent split from the Halcyoninae, diversifying in the Old World as recently as the Miocene or Pliocene.
The following cladogram is based on a molecular phylogenetic study published in 2017.
Description
The smallest species of kingfisher is the African dwarf kingfisher (Ispidina lecontei), which averages in length and between in weight. The largest kingfisher in Africa is the giant kingfisher (Megaceryle maxima), which is in length and in weight. The common Australian kingfisher, known as the laughing kookaburra (Dacelo novaeguineae), is the heaviest species, with females reaching nearly in weight.
The plumage of most kingfishers is bright, with green and blue being the most common colours. The brightness of the colours is neither the product of iridescence (except in the American kingfishers) nor of pigments, but is instead caused by the structure of the feathers, which causes scattering of blue light (the Tyndall effect). In most species, no overt differences between the sexes exist; when differences occur, they are quite small (less than 10%).
The kingfishers have long, dagger-like bills. The bill is usually longer and more compressed in species that hunt fish, and shorter and broader in species that hunt prey off the ground. The largest and most atypical bill is that of the shovel-billed kookaburra, which is used to dig through the forest floor in search of prey. They generally have short legs, although species that feed on the ground have longer tarsi. Most species have four toes, three of which are forward-pointing and fused towards the base ("syndactyl") to some extent.
The irises of most species are dark brown. The kingfishers have excellent vision; they are capable of binocular vision and are thought in particular to have good colour vision. They have restricted movement of their eyes within the eye sockets, instead using head movements to track prey. In addition, they are capable of compensating for the refraction of water and reflection when hunting prey underwater, and are able to judge depth under water accurately. They also have nictitating membranes that cover the eyes to protect them when they hit the water; the pied kingfisher has a bony plate, which slides across the eye when it hits the water.
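The refraction compensation described above can be illustrated with a simple apparent-depth calculation: for near-vertical viewing, an object under water appears at roughly its true depth divided by the refractive index of water. The prey depth below is an assumed example value.

    # Sketch of the optical problem a diving kingfisher solves: light bends
    # at the air-water boundary (Snell's law), so submerged prey appears
    # shallower than it really is when viewed from almost directly above.
    n_water = 1.33                      # refractive index of water
    true_depth_m = 0.40                 # assumed true depth of the prey
    apparent_depth_m = true_depth_m / n_water
    print(f"{apparent_depth_m:.2f} m")  # ~0.30 m: the bird must aim deeper than the image it sees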
Distribution and habitat
The kingfishers have a cosmopolitan distribution, occurring throughout the world's tropical and temperate regions. They are absent from the polar regions and some of the world's driest deserts. Several species have reached island groups, particularly those in the south and east Pacific Ocean. The Old World tropics and Australasia are the core areas for this group. Europe and North America north of Mexico are very poorly represented, each with only one common species (the common kingfisher and the belted kingfisher, respectively) and two uncommon or very local species: the ringed kingfisher and green kingfisher in the southwestern United States, and the pied kingfisher and white-throated kingfisher in southeastern Europe. The six species occurring in the Americas are four closely related green kingfishers in the genus Chloroceryle and two large crested kingfishers in the genus Megaceryle. Even tropical South America has only five species plus the wintering belted kingfisher. In comparison, the African country of the Gambia has eight resident species in its area.
Individual species may have massive ranges, like the common kingfisher, which ranges from Ireland across Europe, North Africa, and Asia as far as the Solomon Islands in Australasia, or the pied kingfisher, which has a widespread distribution across Africa and Asia. Other species have much smaller ranges, particularly insular species which are endemic to single small islands. The Kofiau paradise kingfisher is restricted to the island of Kofiau off New Guinea.
Kingfishers occupy a wide range of habitats. While they are often associated with rivers and lakes, over half the world's species are found in forests and forested streams. They also occupy a wide range of other habitats. The red-backed kingfisher of Australia lives in the driest deserts, although kingfishers are absent from other dry deserts like the Sahara. Other species live high in mountains, or in open woodland, and a number of species live on tropical coral atolls. Numerous species have adapted to human-modified habitats, particularly those adapted to woodlands, and may be found in cultivated and agricultural areas, as well as parks and gardens in towns and cities.
Behaviour and ecology
Diet and feeding
Kingfishers feed on a wide variety of prey. They are most famous for hunting and eating fish, and some species do specialise in catching fish, but other species take crustaceans, frogs and other amphibians, annelid worms, molluscs, insects, spiders, centipedes, reptiles (including snakes), and even birds and mammals. Individual species may specialise in a few items or take a wide variety of prey, and for species with large global distributions, different populations may have different diets. Woodland and forest kingfishers take mainly insects, particularly grasshoppers, whereas the water kingfishers are more specialised in taking fish. The red-backed kingfisher has been observed hammering into the mud nests of fairy martins to feed on their nestlings. Kingfishers usually hunt from an exposed perch; when a prey item is observed, the kingfisher swoops down to snatch it, then returns to the perch. Kingfishers of all three families beat larger prey against a perch to kill the prey and to dislodge or break protective spines and bones. Having beaten the prey, it is manipulated and then swallowed. Sometimes, a pellet of bones, scales, and other indigestible debris is coughed up. The shovel-billed kookaburra uses its massive, wide bill as a shovel to dig for worms in soft mud.
Breeding
Kingfishers are territorial, some species defending their territories vigorously. They are generally monogamous, although cooperative breeding has been observed in some species and is quite common in others, for example the laughing kookaburra, where helpers aid the dominant breeding pair in raising the young.
Like all Coraciiformes, the kingfishers are cavity nesters, as well as tree nesters, with most species nesting in holes dug in the ground. These holes are usually in earth banks on the sides of rivers, lakes or man-made ditches. Some species may nest in holes in trees, the earth clinging to the roots of an uprooted tree, or arboreal nests of termites (termitarium). These termite nests are common in forest species. The nests take the form of a small chamber at the end of a tunnel. Nest-digging duties are shared between the sexes. During the initial excavations, the bird may fly at the chosen site with considerable force, and birds have injured themselves fatally while doing this. The length of the tunnels varies by species and location; nests in termitaria are necessarily much shorter than those dug into the earth, and nests in harder substrates are shorter than those in soft soil or sand. The longest tunnels recorded are those of the giant kingfisher, which have been found to be long.
The eggs of kingfishers are invariably white. The typical clutch size varies by species; some of the very large and very small species lay as few as two eggs per clutch, whereas others may lay 10; the typical clutch is around three to six eggs. Both sexes incubate the eggs. The offspring of the kingfisher usually stay with the parents for 3–4 months.
Status and conservation
A number of species are considered threatened by human activities and are in danger of extinction. Most of these are forest species with limited distribution, particularly insular species. They are threatened by habitat loss caused by forest clearance or degradation and in some cases by introduced species. The Marquesan kingfisher of French Polynesia is listed as critically endangered due to a combination of habitat loss and degradation caused by introduced cattle, and possibly due to predation by introduced species.
Relationship with humans
Kingfishers are generally shy birds, but in spite of this, they feature heavily in human culture, generally due to their large heads and powerful bills, their bright plumage, or the interesting behaviour of some species.
For the Dusun people of Borneo, the Oriental dwarf kingfisher is considered a bad omen, and warriors who see one on the way to battle should return home. Another Bornean tribe considers the banded kingfisher an omen bird, albeit generally a good omen.
The sacred kingfisher, along with other Pacific kingfishers, was venerated by the Polynesians, who believed it had control over the seas and waves.
Modern taxonomy also refers to the winds and sea in naming kingfishers after a classical Greek myth. The first pair of the mythical halcyon birds (kingfishers) were created from a marriage of Alcyone and Ceyx. The couple committed the sacrilege of referring to themselves as Zeus and Hera, and died for it, but the other gods, in an act of compassion, made them into birds, thus restoring them to their original seaside habitat. In addition, special "halcyon days" were granted: the seven days on either side of the winter solstice when storms never occur. The halcyon birds' "days" were for caring for the winter-hatched clutch (or brood), but the phrase "halcyon days" also refers specifically to an idyllic time in the past, or in general to a peaceful time. In another version, a woman named Alcyone was cast into the waves by her father for her promiscuity and was turned into a kingfisher.
Various kinds of kingfishers and human cultural artifacts are named after the couple, in reference to this metamorphosis myth:
The genus Ceyx (within the river kingfishers family) is named after him.
The kingfisher subfamily Halcyoninae (tree kingfishers) is named after his wife, as is the genus Halcyon.
The belted kingfisher's specific name (Megaceryle alcyon) also references her name.
Not all the kingfishers are named in this way. The etymology of kingfisher (Alcedo atthis) is obscure; the term comes from "king's fisher", but why that name was applied is not known.
| Biology and health sciences | Coraciiformes | Animals |
17140 | https://en.wikipedia.org/wiki/Katal | Katal | The katal (symbol: kat) is that catalytic activity that will raise the rate of conversion by one mole per second in a specified assay system. It is a unit of the International System of Units (SI) used for quantifying the catalytic activity of enzymes (that is, measuring the enzymatic activity level in enzyme catalysis) and other catalysts.
The unit "katal" is not attached to a specified measurement procedure or assay condition, but any given catalytic activity is: the value measured depends on experimental conditions that must be specified. Therefore, to define the quantity of a catalyst in katals, the catalysed rate of conversion (the rate of conversion in presence of the catalyst minus the rate of spontaneous conversion) of a defined chemical reaction is measured in moles per second. One katal of trypsin, for example, is that amount of trypsin which breaks one mole of peptide bonds in one second under the associated specified conditions.
Definition
One katal refers to an amount of enzyme that gives a catalysed rate of conversion of one mole per second. Because this is such a large unit for most enzymatic reactions, the nanokatal (nkat) is used in practice.
The katal is not used to express the rate of a reaction; that is expressed in units of concentration per second, as moles per liter per second. Rather, the katal is used to express catalytic activity, which is a property of the catalyst.
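Because the katal is simply moles of substrate converted per second, converting between it and the older enzyme unit (defined as 1 µmol of substrate per minute, and discussed under History below) is straightforward arithmetic. A minimal Python sketch; the function names are illustrative, not from any standard library:

    # 1 enzyme unit (U) = 1 micromole of substrate converted per minute,
    # so 1 U = 1e-6 mol / 60 s, which is about 16.67 nanokatal (nkat).
    def enzyme_units_to_katal(units):
        """Convert catalytic activity from enzyme units (U) to katals (mol/s)."""
        return units * 1e-6 / 60.0

    def katal_to_enzyme_units(katal):
        """Convert catalytic activity from katals (mol/s) to enzyme units (U)."""
        return katal * 60.0 / 1e-6

    print(enzyme_units_to_katal(1.0))  # 1.666...e-08 kat, i.e. about 16.67 nkat

This also shows why the nanokatal is the practical working multiple: one enzyme unit corresponds to only about 16.67 nkat.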
SI multiples
History
The General Conference on Weights and Measures and other international organizations recommend use of the katal. It replaces the non-SI enzyme unit of catalytic activity. The enzyme unit is still more commonly used than the katal, especially in biochemistry. The adoption of the katal has been slow.
Origin
The name "katal" has been used for decades. The first proposal to make it an SI unit came in 1978, and it became an official SI unit in 1999. The name comes from the Ancient Greek κατάλυσις (katalysis), meaning "dissolution"; the word "catalysis" itself is a Latinized form of the Greek word.
| Physical sciences | Catalytic activity | Basics and measurement |
17143 | https://en.wikipedia.org/wiki/Koala | Koala | The koala (Phascolarctos cinereus), sometimes inaccurately called the koala bear, is an arboreal herbivorous marsupial native to Australia. It is the only extant representative of the family Phascolarctidae. Its closest living relatives are the wombats. The koala is found in coastal areas of the mainland's eastern and southern regions, inhabiting Queensland, New South Wales, Victoria, and South Australia. It is easily recognisable by its stout, tailless body and large head with round, fluffy ears and large, dark nose. The koala has a body length of 60–85 cm (24–33 in) and weighs 4–15 kg (9–33 lb). Its fur colour ranges from silver grey to chocolate brown. Koalas from the northern populations are typically smaller and lighter in colour than their counterparts further south. These populations are possibly separate subspecies, but not all researchers accept this.
Koalas typically inhabit open Eucalyptus woodland, as the leaves of these trees make up most of their diet. This eucalypt diet has low nutritional and caloric content and contains toxic compounds that deter most other mammals from feeding on them. Koalas are largely sedentary and sleep up to twenty hours a day. They are asocial; only mothers bond to dependent offspring. Adult males communicate with bellows that intimidate rivals and attract mates. Males mark their presence with secretions from scent glands located on their chests. Like other marsupials, koalas give birth to young known as joeys at a very early stage of development. They crawl into their mothers' pouches, where they live for their first six to seven months. They are fully weaned around a year old. Koalas have few natural predators and parasites, but are threatened by pathogens such as Chlamydiaceae bacteria and koala retrovirus.
Because of their distinctive appearance, koalas, along with kangaroos and emus, are recognised worldwide as symbols of Australia. They were hunted by Indigenous Australians and depicted in myths and cave art for millennia. The first recorded encounter between a European and a koala was in 1798, and an image of the animal was published in 1810 by naturalist George Perry. Botanist Robert Brown wrote the first detailed scientific description in 1814 although his work remained unpublished for 180 years. Artist John Gould illustrated and described the koala, thereby introducing the species to the British public. Further details about the animal's biology were revealed in the 19th century by English scientists. Koalas are listed as a vulnerable species by the International Union for Conservation of Nature. Among the many threats to their existence are habitat destruction caused by agriculture, urbanisation, droughts, and associated bushfires, some related to climate change. In February 2022, the koala was officially listed as endangered in the Australian Capital Territory, New South Wales, and Queensland.
Etymology
The word "koala" comes from the Dharug , meaning . Although the vowel "u" was originally written in the English orthography as "oo" (in spellings such as coola or koolah — two syllables), the spelling later became "oa" and the word is now pronounced in three syllables, possibly in error.
Related words include "kula" from Georges River to Sydney's south and west, and "kulla" (or kūlla) among southeastern Queensland’s Dippil people.
Another hypothesis is that koala was an aboriginal name from the Hawkesbury River district near Sydney.
Adopted by white settlers, the word "koala" became one of hundreds of Aboriginal loan words in Australian English, where it was also commonly referred to as "native bear", later "koala bear", for its resemblance to a bear. It is one of several Aboriginal words that made it into International English alongside words like "didgeridoo" and "kangaroo". The generic name, Phascolarctos, is derived from the Greek words phaskolos (pouch) and arktos (bear). The specific name, cinereus, is Latin for "ash coloured".
Taxonomy
The koala was given its generic name Phascolarctos in 1816 by French zoologist Henri Marie Ducrotay de Blainville, who did not give it a specific name until further review. In 1819, German zoologist Georg August Goldfuss gave it the binomial Lipurus cinereus. Because Phascolarctos was published first, according to the International Code of Zoological Nomenclature, it has priority as the official genus name. French naturalist Anselme Gaëtan Desmarest coined the name Phascolarctos fuscus in 1820, suggesting that the brown-coloured versions were a different species than the grey ones. Other names suggested by European authors included Marodactylus cinereus by Goldfuss in 1820, P. flindersii by René Primevère Lesson in 1827, and P. koala by John Edward Gray in 1827.
Evolution
The koala is classified with wombats (family Vombatidae) and several extinct families (including marsupial tapirs, marsupial lions and giant wombats) in the suborder Vombatiformes within the order Diprotodontia. The Vombatiformes are a sister group to a clade that includes macropods (kangaroos and wallabies) and possums. The koala's lineage possibly branched off around 40 million years ago during the Eocene.
The modern koala is the only extant member of Phascolarctidae, a family that includes several extinct genera and species. During the Oligocene and Miocene, koalas lived in rainforests and had broader diets. Some species, such as Nimiokoala greystanesi and some species of Perikoala, were around the same size as the modern koala, while others, such as species of Litokoala, were one-half to two-thirds its size. Like the modern species, prehistoric koalas had well developed ear structures, which suggests that they also made long-distance vocalisations and had a relatively inactive lifestyle. During the Miocene, the Australian continent began drying out, leading to the decline of rainforests and the spread of open Eucalyptus woodlands. The genus Phascolarctos split from Litokoala in the late Miocene, and had several adaptations that allowed it to live on a eucalyptus diet: the palate shifted towards the front of the skull; the upper teeth were lined by thicker bone, molars became relatively low compared to the jaw joint and with more chewing surface; the pterygoid fossa shrank; and a larger gap separated the incisor teeth and the molars.
P. cinereus may have emerged as a dwarf form of the giant koala (P. stirtoni), following the disappearance of several giant animals in the late Pleistocene. A 2008 study questioned this hypothesis, noting that P. cinereus and P. stirtoni were sympatric during the mid-late Pleistocene, and that their tooth morphology displayed major differences. The fossil record of the modern koala extends back at least to the middle Pleistocene.
Genetics and variations
Three subspecies have been described: the Queensland koala (Phascolarctos cinereus adustus, Thomas 1923), the New South Wales koala (Phascolarctos cinereus cinereus, Goldfuss 1817), and the Victorian koala (Phascolarctos cinereus victor, Troughton 1935). These forms are distinguished by pelage colour and thickness, body size, and skull shape. The Queensland koala is the smallest, with silver or grey short hairs and a shorter skull. The Victorian koala is the largest, with shaggier, brown fur and a wider skull. The geographic limits of these variations are based on state borders, and their status as subspecies is disputed. A 1999 genetic study suggests koalas exist as a cline within a single evolutionarily significant unit with limited gene flow between local populations. In 2016, a comprehensive phylogenetic study did not support the recognition of any subspecies.
Other studies have found that koala populations are highly inbred with low genetic variation. Such low genetic diversity may have been caused by population declines during the late Pleistocene. Rivers and roads limit gene flow and contribute to the isolation of southeast Queensland populations. In April 2013, scientists from the Australian Museum and Queensland University of Technology announced they had fully sequenced the koala genome.
Characteristics
The koala is a robust animal with a large head and vestigial or non-existent tail. It has a body length of and a weight of , making it among the largest arboreal marsupials. Koalas from Victoria are twice as heavy as those from Queensland. The species is sexually dimorphic: males are 50% larger than females. Males' noses are more curved and sport chest glands, which are visible as bald patches. The female's pouch opening is secured by a sphincter which holds the young in.
The pelage of the koala is denser on the back. Back fur colour varies from light grey to chocolate brown. The belly fur is whitish; on the rump it is mottled whitish and dark. The koala has the most effective insulating back fur of any marsupial and is resilient to wind and rain, while the belly fur can reflect solar radiation. The koala has curved, sharp claws well adapted for climbing trees. The large forepaws have two opposable digits (the first and second, which are opposable to the other three) that allow them to grip small branches. On the hind paws, the second and third digits are fused, a typical condition for members of the Diprotodontia, and the attached claws (which are still separate) function like a comb. The animal has a robust skeleton and a short, muscular upper body with relatively long upper limbs that contribute to its ability to climb. The thigh muscles are anchored further down the shinbone, increasing its climbing power.
For a mammal, the koala has a disproportionately small brain, 60% smaller than that of a typical diprotodont, weighing only on average. The brain's surface is fairly smooth and "primitive". It does not entirely fill the cranial cavity, unlike most mammals, and is lightened by large amounts of cerebrospinal fluid. It is possible that the fluid protects the brain should the animal fall from a tree. The koala's small brain may be an adaptation to the energy restrictions imposed by its diet, which is insufficient to sustain a larger brain. Its small brain limits its ability to perform complex behaviours. For example, it will not eat plucked eucalyptus leaves on a flat surface, which does not match its feeding routine.
The koala has a broad, dark nose with a good sense of smell, and it is known to sniff the oils of individual branchlets to assess their edibility. Its relatively small eyes are unusual among marsupials in that the pupils have vertical slits, an adaptation to living on a more vertical plane. Its round ears provide it with good hearing, and it has a well-developed middle ear. The koala larynx is located relatively low in the vocal tract and can be pulled further down. They possess unique folds in the velum (soft palate), known as velar vocal folds, in addition to the typical vocal folds of the larynx. These features allow the koala to produce deeper sounds than would otherwise be possible for their size.
The koala has several adaptations for its low nutrient, toxic, and fibrous diet. The animal's dentition consists of incisors and cheek teeth (a single premolar and four molars on each jaw) that are separated by a large gap (a characteristic feature of herbivorous mammals). The koala bites a leaf with the incisors and clips it with the premolars at the petiole, before chewing it to pieces with the cusped molars. Koalas may store food in their cheek pouches before it is ready to be chewed. The partially worn molars of koalas in their prime are optimal for breaking leaves into small particles, resulting in more efficient stomach digestion and nutrient absorption in the small intestine, which digests the eucalyptus leaves to provide most of the animal's energy. A koala sometimes regurgitates the food into the mouth to be chewed a second time.
Koalas are hindgut fermenters, and their digestive retention can last 100 hours in the wild or 200 hours in captivity. This is made possible by their caecum— long and in diameter—possibly the largest for an animal of its size. Koalas can retain food particles for longer fermentation if needed. They are more likely to keep smaller particles, as larger ones take longer to digest. While the hindgut is relatively large, only 10% of the animal's energy is obtained from digestion in this chamber. The koala's metabolic rate is only 50% of the typical mammalian rate, owing to its low energy intake, although this can vary across seasons and sexes. They can digest the toxic plant secondary metabolites, phenolic compounds and terpenes, due to their production of cytochrome P450, which neutralises these poisons in the liver. The koala replaces lost water at a lower rate than some other species, such as possums. It maintains water by absorbing it in the caecum, resulting in drier faecal pellets packed with undigested fibre.
Distribution and habitat
The koala's range covers roughly , and 30 ecoregions. It ranges throughout mainland eastern and southeastern Australia, including the states of Queensland, New South Wales, Victoria, and South Australia. The koala was introduced to several nearby islands. The population on Magnetic Island represents the northern limit of its range.
Fossil evidence shows that the koala's range stretched as far west as southwestern Western Australia during the late Pleistocene. They were likely driven to extinction in these areas by environmental changes and hunting by Indigenous Australians. Koalas were introduced to Western Australia at Yanchep in 1938 but that population was reduced to 4 individuals by 2022. Koalas can be found in both tropical and temperate habitats ranging from dense woodlands to more spaced-out forests. In semi-arid climates, they prefer riparian habitats, where nearby streams and creeks provide refuge during times of drought and extreme heat.
Behaviour and ecology
Foraging and activities
Koalas are herbivorous, and while most of their diet consists of eucalypt leaves, they can be found in trees of other genera, such as Acacia, Allocasuarina, Callitris, Leptospermum, and Melaleuca. Though the foliage of over 600 species of Eucalyptus is available, the koala shows a strong preference for around 30. They prefer plant matter with higher protein than fibre and lignin. The most favoured species are Eucalyptus microcorys, E. tereticornis, and E. camaldulensis, which, on average, make up more than 20% of their diet. Despite its reputation as a picky eater, the koala is more generalist than some other marsupial species, such as the greater glider. The koala does not need to drink often as it can get enough water from the leaves, though larger males may additionally drink water found on the ground or in tree hollows. When feeding, a koala reaches out to grab leaves with one forepaw while the other paws hang on to the branch. Depending on the size of the individual, a koala can walk to the end of a branch or must stay near the base. Each day, koalas eat up to of leaves, spread over four to six feeding periods. Despite their adaptations to a low-energy lifestyle, they have meagre fat reserves.
Their low-energy diet limits their activity and they sleep 20 hours a day. They are predominantly active at night and spend most of their waking hours foraging. They typically eat and sleep in the same tree, possibly for as long as a day. On warm days, a koala may rest with its back against a branch or lie down with its limbs dangling. When it gets hot, the koala rests lower in the canopy and near the trunk, where the surface is cooler than the surrounding air. It curls up when it gets cold and wet. It resorts to a lower, thicker branch during high winds. While it spends most of the time in the tree, the animal descends to the ground to move to another tree, with either a walking or leaping gait. The koala usually grooms itself with its hind paws, with their double claws, but it sometimes uses its forepaws or mouth.
Social life
Koalas are asocial and spend just 15 minutes a day on social behaviours. In areas of higher density and fewer trees, home ranges are smaller and more clumped. Koala society appears to consist of "residents" and "transients": the former are mostly adult females and the latter are males. Resident males appear to be territorial and dominant. The territories of dominant males are found near breeding females, while younger males must wait until they reach full size to challenge for breeding rights. Adult males occasionally venture outside their home ranges; when they do, dominant ones retain their status. As a male climbs a new tree, he rubs his chest against it and sometimes dribbles urine. This scent-marking behaviour probably serves as communication, and individuals are known to sniff the bottom of a newly found tree. Chest gland secretions are complex chemical mixtures — about 40 compounds were identified in one analysis — that vary in composition and concentration across season and age.
Adult males communicate with loud bellows — "a long series of deep, snoring inhalations and belching exhalations". Because of their low frequency, these bellows can travel far through the forest. Koalas may bellow at any time, particularly during the breeding season, when it serves to attract females and possibly intimidate other males. They also bellow to advertise their presence when they change trees. These sounds signal and exaggerate the male's body size; females pay more attention to bellows by larger males. Female koalas bellow, though more softly, in addition to making snarls, wails, and screams. These calls are produced when in distress and when making defensive threats. Younger animals squeak and older ones squawk when distraught. When another individual climbs over it, a koala makes a low closed-mouth grunt. Koalas also communicate with facial expressions. When snarling, wailing, or squawking, the animal curls the upper lip and points its ears forward. Screaming koalas pull their lips and ears back. Females form an oval shape with their lips when annoyed.
Agonistic behaviour typically consists of quarrels between individuals who are trying to pass each other on a tree. This occasionally involves biting. Strangers may wrestle, chase, and bite. In extreme situations, a larger male may try to displace a smaller rival from a tree, chasing, cornering, and biting it. Once the individual is driven away, the victor bellows and marks the tree. Pregnant and lactating females are particularly aggressive and attack individuals who come too close. In general, however, koalas tend to avoid fighting due to energy costs.
Reproduction and development
Koalas are seasonal breeders, and they give birth from October to May. Females in oestrus lean their heads back and shake their bodies. Despite these obvious signals, males try to copulate with any female during this period, mounting them from behind. Because of his much larger size, a male can overpower a female. A female may scream and vigorously fight off her suitors but will accede to one that is dominant or familiar. The commotion can attract other males to the scene, obliging the incumbent to delay mating and fight off the intruders. A female may learn who is more dominant during these fights. Older males typically accumulate scratches, scars, and cuts on the exposed parts of their noses and their eyelids.
Koalas are induced ovulators. The gestation period lasts 33–35 days, and a female gives birth to one joey or, occasionally, twins. The young are born tiny and barely formed, weighing no more than . However, their lips, forelimbs, and shoulders are relatively advanced, and they can breathe, defecate, and urinate. The joey crawls into its mother's pouch to continue its development. Female koalas do not clean their pouches, an unusual trait among marsupials.
The joey latches on to one of the female's two teats and suckles it. The female lactates for as long as a year to make up for her low energy production. Unlike in other marsupials, koala milk becomes less fatty as the joey grows. After seven weeks, the joey has a proportionally large head, clear edges around its face, more colouration, and a visible pouch (if female) or scrotum (male). At 13 weeks, the joey weighs around , and its head doubles in size. The eyes begin to open and hair begins to appear. At 26 weeks, the fully furred animal resembles an adult and can look outside the pouch.
At six or seven months, the joey weighs and fully emerges from the pouch for the first time. It explores its new surroundings cautiously, clutching its mother for support. Around this time, the mother prepares it for a eucalyptus diet by producing a faecal pap that the joey eats from her cloaca. This pap comes from the cecum, is more liquid than regular faeces, and is filled with bacteria. A nine-month-old joey has its adult coat colour and weighs . Having permanently left the pouch, it rides on its mother's back for transportation, learning to climb by grasping branches. Gradually, it becomes more independent. The mother typically becomes pregnant again a year later, after the offspring reaches around . She then permanently severs her bond with her previous offspring and no longer allows it to suckle, but it remains nearby until it is one-and-a-half to two years old.
Females become sexually mature at about three years of age; males reach sexual maturity at about age four, although they can experience spermatogenesis as early as two years. Males do not start scent-marking until they reach sexual maturity, though their chest glands become functional much earlier. Koalas can breed every year if environmental conditions are good, though the long dependence of the young usually leads to year-long gaps in births.
Health and mortality
Koalas live from 13 to 18 years in the wild, although males may die sooner because of their riskier lives. Koalas usually survive falls from trees, yet they can get hurt and even die, particularly inexperienced young and fighting males. Around age six, the koala's chewing teeth begin to wear down and their chewing efficiency decreases. Eventually, the cusps disappear completely and the animal dies of starvation. Koalas have few predators; dingoes, large pythons, and some birds of prey may take them. Koalas are generally not subject to external parasites, other than ticks in coastal areas. The mite Sarcoptes scabiei gives koalas mange, and the bacterium Mycobacterium ulcerans causes skin ulcers, but both are uncommon. Internal parasites are few and have little effect. These include the Bertiella obesa tapeworm, commonly found in the intestine, and the Marsupostrongylus longilarvatus and Durikainema phascolarcti nematodes, which are infrequently found in the lungs. In a three-year study of almost 600 koalas taken to the Australia Zoo Wildlife Hospital in Queensland, 73.8% of the animals were infected with parasitic protozoans of the genus Trypanosoma, most frequently T. irwini.
Koalas can be subject to pathogens such as Chlamydiaceae bacteria, which can cause keratoconjunctivitis, urinary tract infection, and reproductive tract infection. Such infections are common on the mainland but absent in some island populations. Efforts are underway to use vaccination to try to stem the koala chlamydia epidemic. The koala retrovirus (KoRV) may cause koala immune deficiency syndrome (KIDS), which is similar to AIDS in humans. The prevalence of KoRV in koala populations suggests it spread from north to south, since only southern populations have virus-free individuals.
The animals are vulnerable to bushfires due to their slow speed and the flammability of eucalypt trees. The koala instinctively seeks refuge in the higher branches where it is vulnerable to heat and fire. Bushfires divide the animal's habitat, which isolates them, decreases their numbers, and creates genetic bottlenecks. Dehydration and overheating can prove fatal. Consequently, the koala is vulnerable to the effects of climate change. Models of climate change predict warmer and drier climates, suggesting that the koala's range will shrink in the east and south to more mesic habitats.
Relation to humans
History
The first written reference to the koala was recorded by John Price, servant of John Hunter, the Governor of New South Wales. Price encountered the "cullawine" on 26 January 1798, during an expedition to the Blue Mountains, but his remarks were not published until nearly a century later, in Historical Records of Australia. In 1802, French-born explorer Francis Louis Barrallier encountered the animal when his two Aboriginal guides, returning from a hunt, brought back two koala feet they were intending to eat. Barrallier preserved the appendages and sent them and his notes to Hunter's successor, Philip Gidley King, who forwarded them to Joseph Banks. As with Price's remarks, Barrallier's notes were not published until 1897. Reports of the "Koolah" appeared in the Sydney Gazette in late 1803 and helped provide the impetus for King to send artist John Lewin to create watercolours of the animal. Lewin painted three pictures, one of which was printed in Georges Cuvier's Le Règne Animal (The Animal Kingdom) (1827).
Botanist Robert Brown was the first to write a formal scientific description in 1803, based on a female specimen captured near what is now Mount Kembla in the Illawarra region of New South Wales. Austrian botanical illustrator Ferdinand Bauer drew the animal's skull, throat, feet, and paws. Brown's work remained unpublished and largely unnoticed, however; his field books and notes remained in his possession until his death, when they were bequeathed to the British Museum in London. They were not identified until 1994, while Bauer's koala watercolours were not published until 1989. William Paterson, who had befriended Brown and Bauer during their stay in New South Wales, wrote an eyewitness report of his encounters with the animals, and this became the basis for British surgeon Everard Home's anatomical writings on them. Home, who published his report in 1808, coined the scientific name Didelphis coola.
George Perry officially published the first image of the koala in his 1810 natural history work Arcana. Perry called it the "New Holland Sloth", and his dislike for the koala, evident in his description of the animal, was reflected in the contemporary British attitudes towards Australian animals as strange and primitive: ... the eye is placed like that of the Sloth, very close to the mouth and nose, which gives it a clumsy awkward appearance, and void of elegance in the combination ... they have little either in their character or appearance to interest the Naturalist or Philosopher. As Nature however provides nothing in vain, we may suppose that even these torpid, senseless creatures are wisely intended to fill up one of the great links of the chain of animated nature ...
Naturalist and popular artist John Gould illustrated and described the koala in his three-volume work The Mammals of Australia (1845–1863) and introduced the species, as well as other members of Australia's little-known faunal community, to the public. Comparative anatomist Richard Owen, in a series of publications on the physiology and anatomy of Australian mammals, presented a paper on the anatomy of the koala to the Zoological Society of London. In this widely cited publication, he provided an early description of its internal anatomy, and noted its general structural similarity to the wombat. English naturalist George Robert Waterhouse, curator of the Zoological Society of London, was the first to correctly classify the koala as a marsupial in the 1840s, and compared it to the fossil species Diprotodon and Nototherium, which had only recently been discovered. Similarly, Gerard Krefft, curator of the Australian Museum in Sydney, noted evolutionary mechanisms at work when comparing the koala to fossil marsupials in his 1871 The Mammals of Australia.
Britain received its first living koala in 1881, obtained by the Zoological Society of London. As related by William Alexander Forbes, prosector to the society, the animal suffered an accidental demise when the heavy lid of a washstand fell on it and it was unable to free itself. Forbes dissected the specimen and wrote about the female reproductive system, the brain, and the liver, parts not previously described by Owen, who had access only to preserved specimens. Scottish embryologist William Caldwell, well known in scientific circles for determining the reproductive mechanism of the platypus, described the uterine development of the koala in 1884, and used this new information to convincingly map out the evolutionary timeline of the koala and the monotremes.
Cultural significance
The koala is known worldwide and is a major draw for Australian zoos and wildlife parks. It has been featured in popular culture and as soft toys. It contributed over $1 billion to the Australian tourism industry in 1998, a figure that has since grown. Its international popularity rose after World War II, when tourism increased and the animals were exported to zoos overseas. In 1997, about 75% of European and Japanese tourists placed the koala at the top of their list of animals to see. According to biologist Stephen Jackson: "If you were to take a straw poll of the animal most closely associated with Australia, it's a fair bet that the koala would come out marginally in front of the kangaroo". Factors that contribute to the koala's enduring popularity include its teddy bear-like appearance and childlike body proportions.
The koala features in the Dreamtime stories and mythology of Indigenous Australians. The Tharawal people believed that the animal helped them get to Australia by rowing the boat. Another myth tells of a tribe that killed a koala and used its long intestines to create a bridge for people from other parts of the world. How the koala lost its tail is the subject of many tales. In one, a kangaroo cuts it off to punish the koala for uncouth behaviour. Tribes in Queensland and Victoria regarded the koala as a wise animal that gave valuable guidance. Bidjara-speaking people credited the koala for making trees grow in their arid lands. The animal is depicted in rock carvings, though less so than some other species.
Early European settlers in Australia considered the koala to be a creeping sloth-like animal with a "fierce and menacing look". At the turn of the 20th century, the koala's reputation took a positive turn. It appears in Ethel Pedley's 1899 book Dot and the Kangaroo as the "funny native bear". Artist Norman Lindsay depicted a more anthropomorphic koala in The Bulletin cartoons, starting in 1904. This character also appeared as Bunyip Bluegum in Lindsay's 1918 book The Magic Pudding. The best-known fictional koala is Blinky Bill. Created by Dorothy Wall in 1933, the character appeared in books, films, TV series, merchandise, and a 1986 environmental song by John Williamson. The koala first appeared on an Australian stamp in 1930.
The song "Ode to a Koala Bear" appears on the B-side of the 1983 Paul McCartney/Michael Jackson duet single Say Say Say. A koala is the main character in animated cartoons in the early 1980s: Hanna-Barbera's The Kwicky Koala Show and Nippon Animation's Noozles. Food products shaped like the koala include the Caramello Koala chocolate bar and the bite-sized cookie snack Koala's March. Dadswells Bridge in Victoria features a tourist complex shaped like a giant koala and the Queensland Reds rugby team has a koala as its icon.
Koala diplomacy
Political leaders and members of royal families have had their pictures taken with koalas, including Queen Elizabeth II, Prince Harry, Crown Prince Naruhito, Crown Princess Masako, Pope John Paul II, US President Bill Clinton, Soviet leader Mikhail Gorbachev, and South African President Nelson Mandela. At the 2014 G20 Brisbane summit, hosted by Prime Minister Tony Abbott, many world leaders, including Russian President Vladimir Putin and US President Barack Obama, were photographed holding koalas. The event gave rise to the term "koala diplomacy", which became the Oxford Word of the Month for December 2016. The term also covers the loan of koalas by the Australian government to overseas zoos in countries such as Singapore and Japan, as a form of "soft power diplomacy" akin to the "panda diplomacy" practised by China.
Conservation
The koala was originally classified as Least Concern on the IUCN Red List and was reassessed as Vulnerable in 2014. In February 2022, the species was listed as endangered under the EPBC Act in the Australian Capital Territory, New South Wales, and Queensland; this combined population had been determined in 2012 to be "a species for the purposes of the EPBC Act 1999" under federal legislation.
Australian policymakers declined a 2009 proposal to include the koala in the Environment Protection and Biodiversity Conservation Act 1999. A 2017 WWF report found a 53% decline per generation in Queensland and a 26% decline in New South Wales. Koala populations in South Australia and Victoria appear to be abundant; however, the Australian Koala Foundation (AKF) argued that the exclusion of Victorian populations from protective measures was based on a misconception that the total population was 200,000, whereas the AKF believed in 2012 that it was probably fewer than 100,000. The AKF estimated in 2022 that there could be 43,000–100,000 koalas left. This compares with 8 to 10 million at the start of the 20th century. The Australian Government's Threatened Species Scientific Committee estimated the 2021 koala population at 92,000, down from 185,000 two decades prior.
The koala was heavily hunted by European settlers in the early 20th century, largely for its fur. As many as two million pelts had been exported from Australia by 1924. Koala furs were used to make rugs, coat linings, and muffs, and as trimming on women's garments. The first successful efforts at conserving the species came with the establishment of Brisbane's Lone Pine Koala Sanctuary and Sydney's Koala Park Sanctuary in the 1920s and 1930s. The latter's owner, Noel Burnet, created the first successful breeding program.
One of the biggest anthropogenic threats to the koala is habitat destruction and fragmentation. Near the coast, the main cause is urbanisation; in rural areas, habitat is cleared for agriculture. Its favoured trees are also harvested for wood products. In 2000, Australia had the fifth highest rate of land clearance globally, stripping of native plants. The koala's distribution has shrunk by more than 50% since European arrival, largely due to habitat fragmentation in Queensland. Nevertheless, koalas live in many protected areas.
While urbanisation can pose a threat to koala populations, the animals can survive in urban areas given enough trees. Urban populations have distinct vulnerabilities: collisions with vehicles and attacks by domestic dogs. Cars and dogs kill about 4,000 animals every year. To reduce road deaths, government agencies have been exploring various wildlife crossing options, such as the use of fencing to channel animals toward an underpass, in some cases adding a walkway to an existing culvert. Injured koalas are often taken to wildlife hospitals and rehabilitation centres. In a 30-year retrospective study performed at a New South Wales koala rehabilitation centre, trauma was found to be the most frequent cause of admission, followed by symptoms of Chlamydia infection.
| Biology and health sciences | Marsupials | null |
17160 | https://en.wikipedia.org/wiki/Knife | Knife | A knife (pl. knives; from Old Norse knífr 'knife, dirk') is a tool or weapon with a cutting edge or blade, usually attached to a handle or hilt. One of the earliest tools used by humanity, knives appeared at least 2.5 million years ago, as evidenced by the Oldowan tools. Originally made of wood, bone, and stone (such as flint and obsidian), over the centuries, in step with improvements in both metallurgy and manufacturing, knife blades have been made from copper, bronze, iron, steel, ceramic, and titanium. Most modern knives have either fixed or folding blades; blade patterns and styles vary by maker and country of origin.
Knives can serve various purposes. Hunters use a hunting knife, soldiers use the combat knife, scouts, campers, and hikers carry a pocketknife; there are kitchen knives for preparing foods (the chef's knife, the paring knife, bread knife, cleaver), table knife (butter knives and steak knives), weapons (daggers or switchblades), knives for throwing or juggling, and knives for religious ceremony or display (the kirpan).
Parts
A modern knife consists of:
the blade
the handle
the point – the end of the knife used for piercing
the edge – the cutting surface of the knife extending from the point to the heel
the grind – the cross section shape of the blade
the spine – the thickest section of the blade; on a single-edged knife, the side opposite the edge; on a two-edged knife, more toward the middle
the fuller – a groove added to make the blade lighter (optional)
the ricasso – the flat section of the blade located at the junction of the blade and the knife's bolster or guard (optional)
the guard – the barrier between the blade and the handle which prevents the hand from slipping forward onto the blade and protects the hand from the external forces that are usually applied to the blade during use (optional)
the hilt or butt – the end of the handle used for blunt force
the lanyard – a strap used to secure the knife to the wrist (optional)
The blade edge can be plain or serrated, or a combination of both. Single-edged knives may have a reverse edge or false edge occupying a section of the spine. These edges are usually serrated and are used to further enhance function.
The handle, used to grip and manipulate the blade safely, may include a tang, a portion of the blade that extends into the handle. Knives are made with partial tangs (extending part way into the handle, known as "stick tangs") or full tangs (extending the full length of the handle, often visible on top and bottom). There is also the enterçado construction method present in antique knives from Brazil, such as the Sorocaban Knife, which consists of riveting a repurposed blade to the ricasso of a bladeless handle. The handle may include a bolster, a piece of heavy material (usually metal) situated at the front or rear of the handle. The bolster, as its name suggests, is used to mechanically strengthen the knife.
Blade
Knife blades can be manufactured from a variety of materials, each of which has advantages and disadvantages. Carbon steel, an alloy of iron and carbon, can be very sharp. It holds its edge well and remains easy to sharpen, but is vulnerable to rust and stains. Stainless steel is an alloy of iron, chromium, possibly nickel, and molybdenum, with only a small amount of carbon. It cannot take quite as sharp an edge as carbon steel, but is highly resistant to corrosion. High carbon stainless steel is stainless steel with a higher amount of carbon, intended to incorporate the better attributes of carbon steel and stainless steel. High carbon stainless steel blades do not discolor or stain, and maintain a sharp edge. Laminated blades use multiple metals to create a layered structure, combining the attributes of both. For example, a harder, more brittle steel may be sandwiched between outer layers of softer, tougher stainless steel to reduce vulnerability to corrosion. In this case, however, the part most affected by corrosion, the edge, is still vulnerable. Damascus steel is a form of pattern welding with similarities to laminate construction. Layers of different steel types are welded together, but then the stock is manipulated to create patterns in the steel.
Titanium is a metal that has a better strength-to-weight ratio, is more wear resistant, and is more flexible than steel. Although titanium is less hard and cannot take as sharp an edge, carbides in titanium alloys allow blades to be heat-treated to a sufficient hardness. Ceramic blades are hard, brittle, lightweight, and do not corrode: they may maintain a sharp edge for years with no maintenance at all, but are fragile and will break if dropped on a hard surface or twisted in use. They can only be sharpened on silicon carbide sandpaper and appropriate grinding wheels. Plastic blades are not sharp and are usually serrated to enable them to cut. They are often disposable.
Steel blades are commonly shaped by forging or stock removal. Forged blades are made by heating a single piece of steel, then shaping the metal while hot using a hammer or press. Stock removal blades are shaped by grinding and removing metal. With both methods, after shaping, the steel must be heat treated. This involves heating the steel above its critical point, then quenching the blade to harden it. After hardening, the blade is tempered to remove stresses and make the blade tougher. Mass manufactured kitchen cutlery uses both the forging and stock removal processes. Forging tends to be reserved for manufacturers' more expensive product lines, and can often be distinguished from stock removal product lines by the presence of an integral bolster, though integral bolsters can be crafted through either shaping method.
Knives are sharpened in various ways. Flat ground blades have a profile that tapers from the thick spine to the sharp edge in a straight or convex line. Seen in cross section, the blade would form a long, thin triangle, or where the taper does not extend to the back of the blade, a long thin rectangle with one peaked side. Hollow ground blades have concave, beveled edges. The resulting blade has a thinner edge, so it may have better cutting ability for shallow cuts, but it is lighter and less durable than a flat ground blade and will tend to bind in deep cuts. Serrated blade knives have a wavy, scalloped or saw-like blade. Serrated blades are better suited for tasks that require aggressive 'sawing' motions, whereas plain edge blades are better suited for tasks that require push-through cuts (e.g., shaving, chopping, slicing).
Many knives have holes in the blade for various uses. Holes are commonly drilled in blades to reduce friction while cutting, increase single-handed usability of pocket knives, and, for butchers' knives, allow hanging out of the way when not in use.
Fixed-blade features
A fixed blade knife, sometimes called a sheath knife, does not fold or slide, and is typically stronger due to the tang, the extension of the blade into the handle, and lack of moving parts.
Folding blade features
A folding knife connects the blade to the handle through a pivot, allowing the blade to fold into the handle. To prevent injury to the knife user through the blade accidentally closing on the user's hand, folding knives typically have a locking mechanism. Different locking mechanisms are favored by various individuals for reasons such as perceived strength (lock safety), legality, and ease of use.
Popular locking mechanisms include:
Slip joint – Found most commonly on traditional pocket knives, the opened blade does not lock, but is held in place by a spring device that allows the blade to fold if a certain amount of pressure is applied.
Lockback – Also known as the spine lock, the lockback includes a pivoted latch affixed to a spring, and can be disengaged only by pressing the latch down to release the blade.
Linerlock – Invented by Michael Walker, a Linerlock is a folding knife with a side-spring lock that can be opened and closed with one hand without repositioning the knife in the hand. The lock is self-adjusting for wear.
Compression Lock – A variant of the Liner Lock, it uses a small piece of metal at the tip of the lock to lock into a small corresponding impression in the blade. This creates a lock that does not disengage when the blade is torqued; instead, it becomes more tightly locked. It is released by pressing the tab of metal to the side, allowing the blade to fold into its groove set into the handle.
Frame Lock – Also known as the integral lock or monolock, this locking mechanism was invented by custom knifemaker Chris Reeve for the Sebenza as an update to the liner lock. The frame lock works in a manner similar to the liner lock but uses a partial cutout of the actual knife handle, rather than a separate liner inside the handle, to hold the blade in place.
Collar lock – found on Opinel knives.
Button Lock – Found mainly on automatic knives, this type of lock uses a small push-button to open and release the knife.
Axis Lock – A locking mechanism patented by Benchmade Knife Company until 2020. A cylindrical bearing is tensioned such that it will jump between the knife blade and some feature of the handle to lock the blade open.
Arc Lock – A locking mechanism exclusively licensed to SOG Specialty Knives. It differs from an axis lock in that the cylindrical bearing is tensioned by a rotary spring rather than an axial spring.
Ball Bearing Lock – A locking mechanism exclusively licensed to Spyderco. This lock is conceptually similar to the axis and arc locks but the bearing is instead a ball bearing.
Tri-Ad Lock – A locking mechanism exclusively licensed to Cold Steel. It is a form of lockback which incorporates a thick steel stop pin between the front of the latch and the back of the tang to transfer force from the blade into the handle.
PickLock – A round post on the back base of the blade locks into a hole in a spring tab in the handle. To close, manually lift (pick) the spring tab (lock) off the blade post with your fingers, or in "Italian Style Stilettos" swivel the bolster (hand guard) clockwise to lift the spring tab off the blade post.
Another prominent feature of many folding knives is the opening mechanism. Traditional pocket knives and Swiss Army knives commonly employ the nail nick, while modern folding knives more often use a stud, hole, disk, or flipper located on the blade, all of which have the benefit of allowing the user to open the knife with one hand.
The "wave" feature is another prominent design, which uses a part of the blade that protrudes outward to catch on one's pocket as it is drawn, thus opening the blade; this was patented by Ernest Emerson and is not only used on many of the Emerson knives, but also on knives produced by several other manufacturers, notably Spyderco and Cold Steel.
Automatic or switchblade knives open using the stored energy from a spring that is released when the user presses a button or lever or other actuator built into the handle of the knife. Automatic knives are severely restricted by law in the UK and most American states.
Increasingly common are assisted opening knives which use springs to propel the blade once the user has moved it past a certain angle. These differ from automatic or switchblade knives in that the blade is not released by means of a button or catch on the handle; rather, the blade itself is the actuator. Most assisted openers use flippers as their opening mechanism. Assisted opening knives can be as fast or faster than automatic knives to deploy.
Common locking mechanisms
In the lock back, as in many folding knives, a stop pin acting on the top (or behind) the blade prevents it from rotating clockwise. A hook on the tang of the blade engages with a hook on the rocker bar which prevents the blade from rotating counter-clockwise. The rocker bar is held in position by a torsion bar. To release the knife the rocker bar is pushed downwards as indicated and pivots around the rocker pin, lifting the hook and freeing the blade.
When negative pressure (pushing down on the spine) is applied to the blade, all the stress is transferred from the hook on the blade's tang to the hook on the rocker bar and thence to the small rocker pin. Excessive stress can shear one or both of these hooks, rendering the knife effectively useless. Knife company Cold Steel uses a variant of the lock back called the Tri-Ad Lock, which introduces a pin in front of the rocker bar to relieve stress on the rocker pin, has an elongated hole around the rocker pin to allow the mechanism to wear over time without losing strength, and angles the hooks so that the faces no longer meet vertically.
The bolt in the bolt lock is a rectangle of metal that is constrained to slide only back and forward. When the knife is open a spring biases the bolt to the forward position where it rests above the tang of the blade preventing the blade from closing. Small knobs extend through the handle of the knife on both sides allowing the user to slide the bolt backward freeing the knife to close. The Axis Lock used by knife maker Benchmade is functionally identical to the bolt lock except that it uses a cylinder rather than a rectangle to trap the blade. The Arc Lock by knife maker SOG is similar to the Axis Lock except the cylinder follows a curved path rather than a straight path.
In the liner lock, an L-shaped split in the liner allows part of the liner to move sideways from its resting position against the handle to the centre of the knife, where it rests against the flat end of the tang. To disengage, this leaf spring is pushed so it again rests flush against the handle, allowing the knife to rotate. A frame lock is functionally identical, but instead of a thin liner inside the handle material, it uses a thicker piece of metal as the handle itself, and the same split allows a section of the frame to press against the tang.
Sliding blade features
A sliding knife is a knife that can be opened by sliding the knife blade out the front of the handle. One method of opening is where the blade exits out the front of the handle point-first and then is locked into place (an example of this is the gravity knife). Another form is an OTF (out-the-front) switchblade, which only requires the push of a button or spring to cause the blade to slide out of the handle and lock into place. To retract the blade back into the handle, a release lever or button, usually the same control as to open, is pressed. A very common form of sliding knife is the sliding utility knife (commonly known as a Stanley knife or boxcutter).
Handle
The handles of knives can be made from a number of different materials, each of which has advantages and disadvantages. Handles are produced in a wide variety of shapes and styles. Handles are often textured to enhance grip.
Wood handles provide good grip and are warm in the hand, but are more difficult to care for. They do not resist water well, and will crack or warp with prolonged exposure to water. Modern stabilized and laminated woods have largely overcome these problems. Many beautiful and exotic hardwoods are employed in the manufacture of custom and some production knives. In some countries it is now forbidden for commercial butchers' knives to have wood handles, for sanitary reasons.
Plastic handles are more easily cared for than wooden handles, but can be slippery and become brittle over time.
Injection-molded handles made from higher-grade plastics such as polyphthalamide, marketed under trademarked names such as Zytel or Grivory, may be reinforced with Kevlar or fiberglass. These are often used by major knife manufacturers.
Rubber handles such as Kraton or Resiprene-C are generally preferred over plastic due to their durable and cushioning nature.
Micarta is a popular handle material on user knives due to its toughness and stability. Micarta is nearly impervious to water, is grippy when wet, and is an excellent insulator. Micarta has come to refer to any fibrous material cast in resin. There are many varieties of micarta available. One very popular version is a fiberglass impregnated resin called G-10.
Leather handles are seen on some hunting and military knives, notably the KA-BAR. Leather handles are typically produced by stacking leather washers, or less commonly, as a sleeve surrounding another handle material. Russian manufacturers often use birchbark in the same manner.
Skeleton handle refers to the practice of using the tang itself as the handle, usually with sections of material removed to reduce weight. Skeleton-handled knives are often wrapped with parachute cord or other wrapping materials to enhance grip.
Stainless steel and aluminum handles are durable and sanitary, but can be slippery. To counter this, premium knife makers make handles with ridges, bumps, or indentations to provide extra grip. Another problem with knives that have metal handles is that, since metal is an excellent heat-conductor, these knives can be very uncomfortable, and even painful or dangerous, when handled without gloves or other protective handwear in (very) cold climates.
More exotic materials, usually seen only on art or ceremonial knives, include stone, bone, mammoth tooth, mammoth ivory, oosik (walrus penis bone), walrus tusk, antler (often called stag in a knife context), sheep horn, buffalo horn, teeth, and mother of pearl ("pearl", often abbreviated MOP); many other materials have been employed in knife handles as well.
Handles may be adapted to accommodate the needs of people with disabilities. For example, knife handles may be made thicker or with more cushioning for people with arthritis in their hands. A non-slip handle accommodates people with palmar hyperhidrosis.
Types
Weapons
The knife has been universally adopted as a weapon, and it is the essential element of a knife fight. For example:
Ballistic knife: A specialized combat knife with a detachable gas- or spring-propelled blade that can be fired to a distance of several feet or meters by pressing a trigger or switch on the handle.
Bayonet: A knife-shaped close-quarters combat weapon designed to attach to the muzzle of a rifle or similar weapon.
Butterfly knife: A folding pocket knife also known as a "balisong" or "batangas" with two counter-rotating handles where the blade is concealed within grooves in the handles.
Combat knife: Any knife intended to be used by soldiers in the field, as a general-use tool, but also for fighting.
Dagger: A single-edged or double-edged combat knife with a central spine and edge(s) sharpened along their full length, used primarily for thrusting or stabbing. Variations include the Stiletto and Push dagger. See List of daggers for a more detailed list.
Fighting knife: A knife with a blade designed to inflict a lethal injury in a physical confrontation between two or more individuals at very short range (grappling distance). Well known examples include the Bowie knife, Ka-Bar combat knife, and the Fairbairn–Sykes fighting knife.
Machete: A knife with a broad blade designed for chopping, often curved either in a convex or concave fashion.
Shiv: A crudely made homemade knife out of everyday materials, especially prevalent in prisons among inmates. An alternate name in some prisons is shank.
Sword: An evolution of the knife with a lengthened and strengthened blade used primarily for mêlée combat and hunting.
Throwing knife: A knife designed and weighted for throwing.
Trench knife: Purpose-made or improvised knives, intended for close-quarter fighting, particularly in trench warfare; some have a d-shaped integral hand guard.
Sports equipment
Throwing knife: A knife designed and weighted for throwing.
Utensils
A primary aspect of the knife as a tool includes dining, used either in food preparation or as cutlery. Examples of this include:
Bread knife: A knife with a serrated blade for cutting bread
Boning knife: A knife used for removing the bones of poultry, meat, and fish.
Fillet Knife: A knife with a flexible blade used to separate meat or fish from bones.
Butcher's Knife: A knife designed and used primarily for the butchering and/or dressing of animals.
Carving knife: A knife for carving large cooked meats such as poultry, roasts, and hams.
Canelle or Channel knife: The notch of the blade is used to cut a twist from a citrus fruit, usually in the preparation of cocktails
Chef's knife: Also known as a French knife, a cutting tool used in preparing food
Cleaver: A large knife that varies in its shape but usually resembles a rectangular-bladed hatchet. It is used mostly for hacking through bones as a kitchen knife or butcher knife, and its broad side can also be used for crushing food, typically garlic.
Electric knife: An electrical device consisting of two serrated blades that are clipped together, providing a sawing action when powered on
Kitchen knife: Any knife, including the chef's knife, that is intended to be used in food preparation
Oyster knife: Has a short, thick blade for prying open oyster shells
Mezzaluna: A two-handled arc-shaped knife used in a rocking motion as an herb chopper or for cutting other foods
Paring or Coring Knife: A knife with a small but sharp blade used for cutting out the cores from fruit.
Rocker knife: A knife that cuts with a rocking motion, primarily used by people whose disabilities prevent them from using a fork and knife simultaneously.
Table knife or Case knife: A piece of cutlery, either a butter knife, steak knife, or both, that is part of a table setting, accompanying the fork and spoon
Tools
As a utility tool the knife can take many forms, including:
Bowie knife: Commonly, any large sheath knife, or a specific style of large knife popularized by Jim Bowie.
Bushcraft knife: A sturdy, normally fixed blade knife used while camping in the wilderness.
Camping knife: A camping knife is used for camping and survival purposes in a wilderness environment.
Head knife or Round knife: A knife with a semicircular blade used since antiquity to cut leather.
Crooked knife: Sometimes referred to as a "curved knife", "carving knife", or, in the Algonquian language, the "mocotaugan", a utilitarian knife used for carving.
Diver's knife: A knife adapted for use in diving and water sports and a necessary part of standard diving dress.
Electrician's knife: A short-bladed knife used to cut electrical insulation; also, a folding knife with a large screwdriver as well as a blade. Typically the screwdriver locks, but the blade may not.
Folding knife: A folding knife is a knife with one or more blades that fit inside the handle that can still fit in a pocket. It is also known as a jackknife or jack-knife.
Hunting knife: A knife used to dress large game.
Kiridashi: A small Japanese knife having a chisel grind and a sharp point, used as a general-purpose utility knife.
Linoleum knife: A small knife with a short, stiff blade, a curved point, and a handle, used to cut linoleum or other sheet materials.
Machete: A large heavy knife used to cut through thick vegetation such as sugar cane or jungle undergrowth; it may be used as an offensive weapon.
Marking knife: A woodworking tool used for marking out workpieces.
Palette knife: A knife, or frosting spatula, lacking a cutting edge, used by artists for tasks such as mixing and applying paint and in cooking for spreading icing.
Paper knife: Also called a "letter opener", a knife made of metal or plastic, used for opening mail.
Pocketknife: a folding knife designed to be carried in a pants pocket. Subtypes include:
Lockback knife: a folding knife with a mechanism that locks the blade into the open position, preventing accidental closure while in use
Multi-tool and Swiss Army knife, which combine a folding knife blade with other tools and implements, such as pliers, scissors, or screwdrivers
Produce knife: A knife with a rectangular profile and a blunt front edge used by grocers to cut produce.
Rigging knife: A knife used to cut rigging in sailing vessels.
Scalpel: A medical knife, used to perform surgery.
Straight razor: A reusable knife blade used for shaving hair.
Survival knife: A sturdy knife, sometimes with a hollow handle filled with survival equipment.
Switchblade: A knife with a folding blade that springs out of the grip when a button or lever on the grip is pressed.
Utility knife: A short knife with a replaceable (typically) triangular blade, used for cutting sheet materials including card stock, paperboard, and corrugated fiberboard, also called a boxcutter knife or boxcutter
Wood carving knife and whittling knives: Knives used to shape wood in the arts of wood carving and whittling, often with short, thin replaceable blades for better control.
Craft knife: A scalpel-like form of non-retractable utility knife with a (typically) long handle and a replaceable pointed blade, used for precise, clean cutting in arts and crafts, often called an X-acto knife in the US and Canada after the popular brand name.
Traditional and religious implements
Athame: A typically black-handled and double-edged ritual knife used in Wicca and other derivative forms of Neopagan witchcraft. (see also Boline).
Dirk: A long bladed thrusting dagger worn by Scottish Highlanders for customary and ceremonial purposes.
Katar: An Indian push dagger sometimes used ceremonially.
Kilaya: A dagger used in Tibetan Buddhist rituals.
Kirpan: A ceremonial knife that all baptised Sikhs must wear as one of the five visible symbols of the Sikh faith (Kakars)
Kris: A dagger used in Indo-Malay cultures, often by nobility and sometimes in religious rituals
Kukri: A Nepalese knife used as a tool and weapon
Maguro bōchō: A traditional Japanese knife with a long specialized blade that is used to fillet large ocean fish.
Puukko: A traditional Finnish style woodcraft belt-knife used as a tool rather than a weapon
Seax: A Germanic single-edged knife, dagger or short sword used both as a tool and as a weapon.
Sgian-dubh: A small knife traditionally worn with the Highland and Isle dress (Kilt) of Scotland.
Ulu: An Inuit woman's all-purpose knife with a handle directly above a highly curved blade.
Yakutian knife: A traditional Yakuts knife used as a tool for wood carving and meat or fish cutting. Can be used as a part of yakutian ethnic costume.
Rituals and superstitions
The knife plays a significant role in some cultures through ritual and superstition, as the knife was an essential tool for survival since early man. Knife symbols can be found in various cultures to symbolize all stages of life; for example, a knife placed under the bed while giving birth is said to ease the pain, or, stuck into the headboard of a cradle, to protect the baby; knives were included in some Anglo-Saxon burial rites, so the dead would not be defenseless in the next world. The knife plays an important role in some initiation rites, and many cultures perform rituals with a variety of knives, including the ceremonial sacrifices of animals. Samurai warriors, as part of bushido, could perform ritual suicide, or seppuku, with a tantō, a common Japanese knife. An athame, a ceremonial knife, is used in Wicca and derived forms of neopagan witchcraft.
In Greece, a black-handled knife placed under the pillow is used to keep away nightmares. As early as 1646 reference is made to a superstition of laying a knife across another piece of cutlery being a sign of witchcraft. A common belief is that if a knife is given as a gift, the relationship of the giver and recipient will be severed. Something such as a small coin, dove or a valuable item is exchanged for the gift, rendering "payment."
Legislation
Some types of knives are restricted by law, and carrying of knives may be regulated, because they are often used in crime, although restrictions vary greatly by jurisdiction and type of knife. For example, some laws prohibit carrying knives in public while other laws prohibit possession of certain knives, such as switchblades.
| Technology | Weapons | null |
17198 | https://en.wikipedia.org/wiki/Kariba%20Dam | Kariba Dam | The Kariba Dam is a double curvature concrete arch dam in the Kariba Gorge of the Zambezi river basin between Zambia and Zimbabwe. The dam stands tall and long. The dam forms Lake Kariba, which extends for and holds of water.
Construction
The dam was constructed on the orders of the Government of the Federation of Rhodesia and Nyasaland, a 'federal colony' within the British Empire. The double curvature concrete arch dam was designed by Coyne et Bellier and constructed between 1955 and 1959 by Impresit of Italy at a cost of $135,000,000 for the first stage, which included only the Kariba South power cavern. Final construction and the addition of the Kariba North power cavern by Mitchell Construction was not completed until 1977, largely due to political problems, for a total cost of $480,000,000. During construction, 86 workers lost their lives.
The dam was officially opened by Queen Elizabeth The Queen Mother on 17 May 1960.
Power generation
The Kariba Dam supplies of electricity to parts of both Zambia (the Copperbelt) and Zimbabwe and generates per annum. Each country has its own power station on the north and south bank of the dam, respectively. The south station belonging to Zimbabwe has been in operation since 1960 and had six generators of capacity each for a total of .
On 11 November 2013, Zimbabwe's Finance Minister, Patrick Chinamasa, announced that capacity at the Zimbabwean (South) Kariba hydropower station would be increased by 300 megawatts. The cost of upgrading the facility was supported by a $319m loan from China. The deal is a clear example of Zimbabwe's "Look East" policy, adopted after its falling-out with Western powers. Construction on the Kariba South expansion began in mid-2014 and was initially expected to be complete in 2019.
In March 2018, President Emmerson Mnangagwa commissioned the completed expansion of the Kariba South Hydroelectric Power Station. The addition of two new turbines raised capacity at this station to . The expansion work, which started in 2014, was done by Sinohydro at a cost of US$533 million.
The north station, belonging to Zambia, has been in operation since 1976 and has four generators of each for a total of ; an expansion of this capacity was completed in December 2013, when two additional 180 MW generators were added.
Location
The Kariba Dam project was planned and carried out by the Government of the Federation of Rhodesia and Nyasaland. The Federation was often referred to as the Central African Federation (CAF). The CAF was a 'federal colony' within the British Empire in southern Africa that existed from 1953 to the end of 1963, comprising the former self-governing British colony of Southern Rhodesia and the former British protectorates of Northern Rhodesia and Nyasaland. Northern Rhodesia had decided earlier in 1953 (before the Federation was founded) to build a dam within its territory, on the Kafue River, a major tributary of the Zambezi. It would have been closer to Northern Rhodesia's Copperbelt, which was in need of more power. This would have been a cheaper and less grandiose project, with a smaller environmental impact. Southern Rhodesia, the richest of the three, objected to a Kafue dam and insisted that the dam be sited instead at Kariba. Also, the capacity of the Kafue dam was much lower than that at Kariba. Initially, the dam was managed and maintained by the Central African Power Corporation. The Kariba Dam is now owned and operated by the Zambezi River Authority, which is jointly and equally owned by Zimbabwe and Zambia.
Since Zambia's independence, three dams have been built on the Kafue River: the Kafue Gorge Upper Dam, Kafue Gorge Lower Dam and the Itezhi-Tezhi Dam.
Environmental impacts
Population displacement and resettlement
The creation of the reservoir forced resettlement of about 57,000 Tonga people living along the Zambezi on both sides.
There are many different perspectives on how much resettlement aid was given to the displaced Tonga. British author David Howarth described the efforts in Northern Rhodesia:
Anthropologist Thayer Scudder, who has studied these communities since the late 1950s, wrote:
American writer Jacques Leslie, in Deep Water (2005), focused on the plight of the people displaced by Kariba Dam, and found the situation little changed since the 1970s. In his view, Kariba remains the worst dam-resettlement disaster in African history.
Basilwizi Trust
In an effort to regain control of their lives, the local people who were displaced by the Kariba dam's reservoir formed the Basilwizi Trust in 2002. The Trust seeks mainly to improve the lives of people in the area through organizing development projects and serving as a conduit between the people of the Zambezi Valley and their country's decision-making process.
River ecology
The Kariba Dam controls 90% of the total runoff of the Zambezi River, thus changing the downstream ecology dramatically.
Wildlife rescue
From 1958 to 1961, Operation Noah captured and removed around 6,000 large animals and numerous small ones threatened by the lake's rising waters.
Recent activity
On 6 February 2008, the BBC reported that heavy rain could lead to a release of water from the dam, which would force 50,000 people downstream to evacuate.
Rising levels led to the opening of the floodgates in March 2010, requiring the evacuation of 130,000 people who lived in the floodplain, and causing concerns that flooding would spread to nearby areas.
In March 2014, at a conference organized by the Zambezi River Authority, engineers warned that the foundations of the dam had weakened and there was a possibility of dam failure unless repairs were made.
On 3 October 2014 the BBC reported that "The Kariba Dam is in a dangerous state. Opened in 1959, it was built on a seemingly solid bed of basalt. But, in the past 50 years, the torrents from the spillway have eroded that bedrock, carving a vast crater that has undercut the dam's foundations. … engineers are now warning that without urgent repairs, the whole dam will collapse. If that happened, a tsunami-like wall of water would rip through the Zambezi valley, reaching the Mozambique border within eight hours. The torrent would overwhelm Mozambique's Cahora Bassa Dam and knock out 40% of southern Africa's hydroelectric capacity. Along with the devastation of wildlife in the valley, the Zambezi River Authority estimates that the lives of 3.5 million people are at risk."
In June 2015 The Institute of Risk Management South Africa completed a Risk Research Report entitled Impact of the failure of the Kariba Dam. It concluded: "Whilst we can debate whether the Kariba Dam will fail, why it might occur and when, there is no doubt that the impact across the region would be devastating."
In January 2016 it was reported that water levels at the dam had dropped to 12% of capacity, just above the minimum operating level for hydropower. Low rainfall and overuse of the water by the power plants had left the reservoir near empty, raising the prospect that both Zimbabwe and Zambia would face water shortages.
In July and September 2018, The Lusaka Times reported that work had started relating to the plunge pool and cracks in the dam wall.
On 22 February 2019 Bloomberg reported "Zambia has reduced hydropower production at the Kariba Dam because of rapidly declining water levels" but "Zambia doesn't anticipate power cuts as a result of shortages". On 5 August that year, the same publication reported that the reservoir was near empty, and that it may have to stop hydropower production.
As of November 2020, the water level in the Kariba reservoir had remained steady at around 25% of capacity, up from nearly half that in November 2019. The Zambezi River Authority stated that it was optimistic about rainfall estimates for the 2020/2021 rainfall season, allocating an increased amount of water for power production. At that time, the reservoir held 15.77 billion cubic metres of water, with the water line sitting at around 478.30 metres (1,569.23 ft), just above the minimum level for power generation of 475.50 metres (1,560.04 ft).
It was reported in February 2022 that rehabilitation work had been underway since 2017 on the Kariba Dam. The Zambezi River Authority (ZRA) said that work on the Kariba Dam Rehabilitation Project (KDRP), which includes efforts to reconfigure the plunge pool and rebuild the spillway gates, is scheduled to be finished in 2025. The rehabilitation is being financed by the European Union (EU), the World Bank, the Swedish government and the African Development Bank (AfDB), with the Zambian and Zimbabwean governments contributing counterpart funding. The project's goal is to guarantee the structural integrity of the Kariba Dam, assuring the sustained generation of power primarily for the benefit of the inhabitants of Zimbabwe and Zambia and the broader Southern African Development Community area. Redesigning the plunge pool involves bulk excavation of the rock in the current pool to stabilise it and avoid additional scouring or erosion along the weak fault zone towards the dam foundation. The reshaping will be carried out behind a temporary watertight cofferdam so that the work can be completed under dry conditions.
An energy crisis due to drought and low water levels continued into January 2023, with water level falling to just 1% of capacity and output limited to 800 MW for a fraction of the day.
Industrial power users have proposed a 250 MW floating solar plant on Lake Kariba to improve electricity reliability.
| Technology | Dams | null |
17327 | https://en.wikipedia.org/wiki/Kinetic%20energy | Kinetic energy | In physics, the kinetic energy of an object is the form of energy that it possesses due to its motion.
In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is ½mv².
The kinetic energy of an object is equal to the work, force (F) times displacement (s), needed to accelerate it from rest to its stated velocity. Having gained this energy during its acceleration, the mass maintains this kinetic energy unless its speed changes. The same amount of work is done by the object when decelerating from its current speed to a state of rest.
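To make the work–energy statement concrete, here is a minimal derivation sketch for a constant mass accelerated from rest, using only Newton's second law and the quantities F, s, and v defined above (standard textbook reasoning, not drawn from any particular source cited here):

W = \int F \,\mathrm{d}s = \int m\,\frac{\mathrm{d}v}{\mathrm{d}t}\,v\,\mathrm{d}t = m\int_0^{v} v'\,\mathrm{d}v' = \tfrac{1}{2}mv^2

Running the same integral from v back down to zero yields −½mv², matching the statement that an equal amount of work is done by the object while decelerating to rest.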
The SI unit of kinetic energy is the joule, while the English unit of kinetic energy is the foot-pound.
In relativistic mechanics, ½mv² is a good approximation of kinetic energy only when v is much less than the speed of light.
History and etymology
The adjective kinetic has its roots in the Greek word κίνησις kinesis, meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality.
The principle of classical mechanics that E ∝ mv² is conserved was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force or vis viva. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship in 1722. By dropping weights from different heights into a block of clay, Gravesande determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation.
The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Thomas Young, who in his 1802 lecture to the Royal Society was the first to use the term energy to refer to kinetic energy in its modern sense, instead of vis viva. Gaspard-Gustave Coriolis published in 1829 the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is credited with coining the term "kinetic energy" c. 1849–1851. William Rankine, who had introduced the term "potential energy" in 1853, and the phrase "actual energy" to complement it, later cited William Thomson and Peter Tait as substituting the word "kinetic" for "actual".
Overview
Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object. Kinetic energy can be transferred between objects and transformed into other kinds of energy.
Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and produces heat within the cyclist.
The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling. The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent. The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat.
Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant.
Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. Disregarding loss or gain however, the sum of the kinetic and potential energy remains constant.
Kinetic energy can be passed from one object to another. In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down dramatically, and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound and binding energy (breaking bound structures).
Flywheels have been developed as a method of energy storage. This illustrates that kinetic energy is also stored in rotational motion.
Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula ½mv² given by classical mechanics is suitable. However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula is used. If the object is on the atomic or sub-atomic scale, quantum mechanical effects are significant, and a quantum mechanical model must be employed.
Kinetic energy for non-relativistic velocity
Treatments of kinetic energy depend upon the relative velocity of objects compared to the fixed speed of light. Speeds experienced directly by humans are non-relativistic; higher speeds require the theory of relativity.
Kinetic energy of rigid bodies
In classical mechanics, the kinetic energy of a point object (an object so small that its mass can be assumed to exist at one point), or a non-rotating rigid body depends on the mass of the body as well as its speed. The kinetic energy is equal to 1/2 the product of the mass and the square of the speed. In formula form:
E_k = ½mv²
where m is the mass and v is the speed (magnitude of the velocity) of the body. In SI units, mass is measured in kilograms, speed in metres per second, and the resulting kinetic energy is in joules.
For example, one would calculate the kinetic energy of an 80 kg mass (about 180 lbs) traveling at 18 metres per second (about 40 mph, or 65 km/h) as
E_k = ½ · 80 kg · (18 m/s)² = 12,960 J ≈ 13 kJ
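A minimal Python sketch of this calculation (the helper name kinetic_energy is ours, for illustration only):

```python
def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    """Classical kinetic energy E_k = (1/2) m v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s**2

print(kinetic_energy(80.0, 18.0))  # 12960.0 J, roughly 13 kJ
print(kinetic_energy(80.0, 36.0))  # 51840.0 J: doubling the speed quadruples the energy
```

The second call illustrates the quadratic scaling discussed just below.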
When a person throws a ball, the person does work on it to give it speed as it leaves the hand. The moving ball can then hit something and push it, doing work on what it hits. The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy, i.e.,
Fs = ½mv²
Since the kinetic energy increases with the square of the speed, an object doubling its speed has four times as much kinetic energy. For example, a car traveling twice as fast as another requires four times as much distance to stop, assuming a constant braking force. As a consequence of this quadrupling, it takes four times the work to double the speed.
The kinetic energy of an object is related to its momentum by the equation:
E_k = p²/2m
where:
p is the momentum of the body
m is the mass of the body
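As a quick numerical check that the momentum form agrees with the speed form, here is a short sketch reusing the figures from the earlier worked example (the values are illustrative):

```python
m, v = 80.0, 18.0        # mass in kg, speed in m/s (from the example above)
p = m * v                # momentum in kg*m/s
print(0.5 * m * v**2)    # 12960.0 J, from E_k = (1/2) m v^2
print(p**2 / (2 * m))    # 12960.0 J, from E_k = p^2 / 2m
```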
For the translational kinetic energy, that is the kinetic energy associated with rectilinear motion, of a rigid body with constant mass m, whose center of mass is moving in a straight line with speed v, as seen above is equal to
E_t = ½mv²
where:
m is the mass of the body
v is the speed of the center of mass of the body.
The kinetic energy of any entity depends on the reference frame in which it is measured. However, the total energy of an isolated system, i.e. one in which energy can neither enter nor leave, does not change over time in the reference frame in which it is measured. Thus, the chemical energy converted to kinetic energy by a rocket engine is divided differently between the rocket ship and its exhaust stream depending upon the chosen reference frame. This is called the Oberth effect. But the total energy of the system, including kinetic energy, fuel chemical energy, heat, etc., is conserved over time, regardless of the choice of reference frame. Different observers moving with different reference frames would however disagree on the value of this conserved energy.
The kinetic energy of such systems depends on the choice of reference frame: the reference frame that gives the minimum value of that energy is the center of momentum frame, i.e. the reference frame in which the total momentum of the system is zero. This minimum kinetic energy contributes to the invariant mass of the system as a whole.
Derivation
Without vector calculus
The work W done by a force F on an object over a distance s parallel to F equals
W = Fs.
Using Newton's Second Law
F = ma
with m the mass and a the acceleration of the object and
s = ½at²
the distance traveled by the accelerated object in time t, we find with v = at for the velocity v of the object
W = Fs = ma · ½at² = ½m(at)² = ½mv².
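The same algebra can be checked symbolically; a minimal sketch with sympy, assuming constant acceleration from rest as in the derivation above:

```python
import sympy as sp

m, a, t = sp.symbols('m a t', positive=True)
F = m * a                          # Newton's second law
s = sp.Rational(1, 2) * a * t**2   # distance traveled from rest
v = a * t                          # velocity after time t
W = F * s                          # work done over that distance
print(sp.simplify(W - sp.Rational(1, 2) * m * v**2))  # 0, so W = (1/2) m v^2
```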
With vector calculus
The work done in accelerating a particle with mass m during the infinitesimal time interval dt is given by the dot product of force F and the infinitesimal displacement dx
F · dx = F · v dt = (dp/dt) · v dt = v · dp = v · d(mv)
where we have assumed the relationship p = m v and the validity of Newton's Second Law. (However, also see the special relativistic derivation below.)
Applying the product rule we see that:
d(v · v) = (dv) · v + v · (dv) = 2 (v · dv)
Therefore, (assuming constant mass so that dm = 0), we have,
v · d(mv) = m (v · dv) = (m/2) d(v · v) = d(½mv²)
Since this is a total differential (that is, it only depends on the final state, not how the particle got there), we can integrate it and call the result kinetic energy:
E_k = ∫ v · d(mv) = ∫ d(½mv²) = ½mv²
This equation states that the kinetic energy (Ek) is equal to the integral of the dot product of the momentum (p) of a body and the infinitesimal change of the velocity (v) of the body. It is assumed that the body starts with no kinetic energy when it is at rest (motionless).
Rotating bodies
If a rigid body Q is rotating about any line through the center of mass then it has rotational kinetic energy (E_r) which is simply the sum of the kinetic energies of its moving parts, and is thus given by:
E_r = ∫ (v²/2) dm = ∫ (r²ω²/2) dm = (ω²/2) ∫ r² dm = ½Iω²
where:
ω is the body's angular velocity
r is the distance of any mass dm from that line
I is the body's moment of inertia, equal to ∫ r² dm.
(In this equation the moment of inertia must be taken about an axis through the center of mass and the rotation measured by ω must be around that axis; more general equations exist for systems where the object is subject to wobble due to its eccentric shape).
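As a worked illustration of rotational kinetic energy (and of the flywheel storage mentioned earlier), the sketch below treats the flywheel as a uniform disc, for which I = ½MR²; the mass, radius, and spin rate are made-up values:

```python
import math

M, R = 100.0, 0.5                    # flywheel mass (kg) and radius (m), illustrative
I = 0.5 * M * R**2                   # moment of inertia of a uniform disc
omega = 3000.0 * 2.0 * math.pi / 60  # 3000 rpm converted to rad/s
E_r = 0.5 * I * omega**2             # rotational kinetic energy, (1/2) I w^2
print(f"{E_r / 1000:.1f} kJ stored") # about 616.9 kJ
```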
Kinetic energy of systems
A system of bodies may have internal kinetic energy due to the relative motion of the bodies in the system. For example, in the Solar System the planets and planetoids are orbiting the Sun. In a tank of gas, the molecules are moving in all directions. The kinetic energy of the system is the sum of the kinetic energies of the bodies it contains.
A macroscopic body that is stationary (i.e. a reference frame has been chosen to correspond to the body's center of momentum) may have various kinds of internal energy at the molecular or atomic level, which may be regarded as kinetic energy, due to molecular translation, rotation, and vibration, electron translation and spin, and nuclear spin. These all contribute to the body's mass, as provided by the special theory of relativity. When discussing movements of a macroscopic body, the kinetic energy referred to is usually that of the macroscopic movement only. However, all internal energies of all types contribute to a body's mass, inertia, and total energy.
Fluid dynamics
In fluid dynamics, the kinetic energy per unit volume at each point in an incompressible fluid flow field is called the dynamic pressure at that point.
Dividing by V, the unit of volume:
E_k/V = ½(m/V)v²
q = ½ρv²
where q is the dynamic pressure, and ρ is the density of the incompressible fluid.
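A minimal numeric sketch of dynamic pressure, assuming water at roughly 1000 kg/m³ and an illustrative flow speed:

```python
rho = 1000.0          # density of water, kg/m^3 (assumed)
v = 2.0               # flow speed, m/s (illustrative)
q = 0.5 * rho * v**2  # dynamic pressure q = (1/2) rho v^2
print(f"{q:.0f} Pa")  # 2000 Pa
```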
Frame of reference
The speed, and thus the kinetic energy of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy. By contrast, the total kinetic energy of a system of objects cannot be reduced to zero by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case, the total kinetic energy has a non-zero minimum, as no inertial reference frame can be chosen in which all the objects are stationary. This minimum kinetic energy contributes to the system's invariant mass, which is independent of the reference frame.
The total kinetic energy of a system depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center of momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass.
This may be simply shown: let V be the relative velocity of the center of mass frame i in the frame k. Since
v² = (v_i + V)² = v_i · v_i + 2 v_i · V + V · V
Then,
E_k = ∫ (v²/2) dm = ∫ (v_i²/2) dm + V · ∫ v_i dm + (V²/2) ∫ dm
However, let E_i = ∫ (v_i²/2) dm be the kinetic energy in the center of mass frame; ∫ v_i dm would be simply the total momentum, which is by definition zero in the center of mass frame, and let the total mass be M = ∫ dm. Substituting, we get:
E_k = E_i + MV²/2
Thus the kinetic energy of a system is lowest in center of momentum reference frames, i.e., frames of reference in which the center of mass is stationary (either the center of mass frame or any other center of momentum frame). In any different frame of reference, there is additional kinetic energy corresponding to the total mass moving at the speed of the center of mass. The kinetic energy of the system in the center of momentum frame is a quantity that is invariant (all observers see it to be the same).
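The decomposition above is easy to confirm numerically. The sketch below checks, for two particles in one dimension with made-up masses and velocities, that the lab-frame kinetic energy equals the center-of-momentum-frame kinetic energy plus the kinetic energy of the total mass moving at the center-of-mass speed:

```python
import numpy as np

m = np.array([2.0, 3.0])                # masses, kg (illustrative)
v = np.array([4.0, -1.0])               # lab-frame velocities, m/s

M = m.sum()
V = (m * v).sum() / M                   # center-of-mass velocity
ek_lab = (0.5 * m * v**2).sum()         # kinetic energy in the lab frame
ek_com = (0.5 * m * (v - V)**2).sum()   # kinetic energy in the COM frame
print(ek_lab, ek_com + 0.5 * M * V**2)  # both 17.5 J
```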
Rotation in systems
It sometimes is convenient to split the total kinetic energy of a body into the sum of the body's center-of-mass translational kinetic energy and the energy of rotation around the center of mass (rotational energy):
E_k = E_t + E_r
where:
Ek is the total kinetic energy
Et is the translational kinetic energy
Er is the rotational energy or angular kinetic energy in the rest frame
Thus the kinetic energy of a tennis ball in flight is the kinetic energy due to its rotation, plus the kinetic energy due to its translation.
Relativistic kinetic energy
If a body's speed is a significant fraction of the speed of light, it is necessary to use relativistic mechanics to calculate its kinetic energy. In relativity, the total energy is given by the energy-momentum relation:
E² = (pc)² + (mc²)²
Here we use the relativistic expression for linear momentum: p = mγv, where γ = 1/√(1 − v²/c²), with m being an object's (rest) mass, v its speed, and c the speed of light in vacuum.
Then kinetic energy is the total relativistic energy minus the rest energy:
E_k = E − mc² = √((pc)² + (mc²)²) − mc² = (γ − 1)mc²
At low speeds, the square root can be expanded and the rest energy drops out, giving the Newtonian kinetic energy.
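A short sketch comparing the two formulas (the speeds are illustrative):

```python
import math

c = 299_792_458.0                           # speed of light, m/s

def ke_classical(m, v):
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (gamma - 1.0) * m * c**2         # E_k = (gamma - 1) m c^2

for v in (3.0e3, 3.0e7, 0.9 * c):           # far below c, 10% of c, 90% of c
    print(f"v/c = {v/c:.5f}: classical {ke_classical(1.0, v):.4g} J, "
          f"relativistic {ke_relativistic(1.0, v):.4g} J")
# The two agree at low speed and diverge sharply as v approaches c.
```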
Derivation
Start with the expression for linear momentum p = mγv, where γ = 1/√(1 − v²/c²). Then
E_k = ∫ v dp = ∫ v d(mγv)
Integrating by parts yields
E_k = mγv² − ∫ mγv dv = mγv² − (mc²/2) ∫ γ d(v²/c²)
Since γ = (1 − v²/c²)^(−1/2),
E_k = mγv² + mc² (1 − v²/c²)^(1/2) − E₀
where E₀ is a constant of integration for the indefinite integral.
Simplifying the expression we obtain
E_k = mγ(v² + c²(1 − v²/c²)) − E₀ = mγc² − E₀
E₀ is found by observing that E_k = 0 when v = 0 and γ = 1, giving
E₀ = mc²
resulting in the formula
E_k = mγc² − mc² = mc²(γ − 1) = (γ − 1)mc²
This formula shows that the work expended accelerating an object from rest approaches infinity as the velocity approaches the speed of light. Thus it is impossible to accelerate an object across this boundary.
Low speed limit
The mathematical by-product of this calculation is the mass–energy equivalence formula, that mass and energy are essentially the same thing:
E_rest = E₀ = mc²
At a low speed (v ≪ c), the relativistic kinetic energy is approximated well by the classical kinetic energy. To see this, apply the binomial approximation or take the first two terms of the Taylor expansion in powers of v²/c² for the reciprocal square root:
E_k ≈ mc² (1 + v²/(2c²)) − mc² = ½mv²
So, the total energy can be partitioned into the rest mass energy plus the non-relativistic kinetic energy at low speeds.
When objects move at a speed much slower than light (e.g. in everyday phenomena on Earth), the first two terms of the series predominate. The next term in the Taylor series approximation
E_k ≈ ½mv² + (3/8)mv⁴/c²
is small for low speeds. For example, for a speed of 10 km/s the correction to the non-relativistic kinetic energy is 0.0417 J/kg (on a non-relativistic kinetic energy of 50 MJ/kg) and for a speed of 100 km/s it is 417 J/kg (on a non-relativistic kinetic energy of 5 GJ/kg).
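These correction figures are easy to reproduce; a minimal check of the (3/8)v⁴/c² term per unit mass:

```python
c = 299_792_458.0                 # speed of light, m/s

def correction_per_kg(v):
    """Leading relativistic correction (3/8) v^4 / c^2, in J/kg."""
    return 0.375 * v**4 / c**2

print(correction_per_kg(10e3))    # ~0.0417 J/kg at 10 km/s
print(correction_per_kg(100e3))   # ~417 J/kg at 100 km/s
```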
The relativistic relation between kinetic energy and momentum is given by
E_k = √((pc)² + (mc²)²) − mc²
This can also be expanded as a Taylor series, the first term of which is the simple expression from Newtonian mechanics:
E_k ≈ p²/2m − p⁴/(8m³c²) + ...
This suggests that the formulae for energy and momentum are not special and axiomatic, but concepts emerging from the equivalence of mass and energy and the principles of relativity.
General relativity
Using the convention that
g_{μν} u^μ u^ν = −c²
where the four-velocity of a particle is
u^μ = dx^μ/dτ
and τ is the proper time of the particle, there is also an expression for the kinetic energy of the particle in general relativity.
If the particle has momentum
p_β = m g_{βα} u^α
as it passes by an observer with four-velocity uobs, then the expression for total energy of the particle as observed (measured in a local inertial frame) is
E = − p_β u_obs^β
and the kinetic energy can be expressed as the total energy minus the rest energy:
E_k = − p_β u_obs^β − mc²
Consider the case of a metric that is diagonal and spatially isotropic (g_tt, g_ss, g_ss, g_ss). Since
u^α = (dx^α/dt)(dt/dτ) = v^α u^t
where v^α is the ordinary velocity measured w.r.t. the coordinate system, we get
−c² = g_tt (u^t)² + g_ss v² (u^t)²
Solving for u^t gives
u^t = c √(−1/(g_tt + g_ss v²))
Thus for a stationary observer (v = 0)
u_obs^t = c √(−1/g_tt)
and thus the kinetic energy takes the form
E_k = −m g_tt u^t u_obs^t − mc² = mc² √(g_tt/(g_tt + g_ss v²)) − mc²
Factoring out the rest energy gives:
E_k = mc² (√(g_tt/(g_tt + g_ss v²)) − 1)
This expression reduces to the special relativistic case for the flat-space metric where
g_tt = −c², g_ss = 1
In the Newtonian approximation to general relativity
g_tt = −(c² + 2Φ), g_ss = 1 − 2Φ/c²
where Φ is the Newtonian gravitational potential. This means clocks run slower and measuring rods are shorter near massive bodies.
Kinetic energy in quantum mechanics
In quantum mechanics, observables like kinetic energy are represented as operators. For one particle of mass m, the kinetic energy operator appears as a term in the Hamiltonian and is defined in terms of the more fundamental momentum operator p̂. The kinetic energy operator in the non-relativistic case can be written as
T̂ = p̂²/2m
Notice that this can be obtained by replacing p by p̂ in the classical expression for kinetic energy in terms of momentum,
E_k = p²/2m
In the Schrödinger picture, p̂ takes the form −iℏ∇, where the derivative is taken with respect to position coordinates and hence
T̂ = −(ℏ²/2m)∇²
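As a numerical illustration of this operator, the sketch below evaluates the kinetic energy expectation value for the ground state of a particle in a box by finite differences, in units where ℏ = m = 1; the exact value is π²/2:

```python
import numpy as np

L, N = 1.0, 4000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)  # box ground state, zero at the walls

d2 = np.zeros_like(psi)
d2[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2  # psi'' by central differences

T = -0.5 * np.sum(psi * d2) * dx   # <T> = -(1/2) * integral of psi * psi''
print(T, np.pi**2 / 2)             # ~4.9348 for both
```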
The expectation value of the electron kinetic energy, ⟨T̂⟩, for a system of N electrons described by the wavefunction Ψ is a sum of 1-electron operator expectation values:
⟨T̂⟩ = ⟨Ψ | Σᵢ −(ℏ²/2mₑ)∇ᵢ² | Ψ⟩
where mₑ is the mass of the electron and ∇ᵢ² is the Laplacian operator acting upon the coordinates of the ith electron and the summation runs over all electrons.
The density functional formalism of quantum mechanics requires knowledge of the electron density only, i.e., it formally does not require knowledge of the wavefunction. Given an electron density ρ(r), the exact N-electron kinetic energy functional is unknown; however, for the specific case of a 1-electron system, the kinetic energy can be written as
T_W[ρ] = (ℏ²/8m) ∫ (∇ρ · ∇ρ / ρ) d³r
where T_W[ρ] is known as the von Weizsäcker kinetic energy functional.
| Physical sciences | Classical mechanics | null |
17360 | https://en.wikipedia.org/wiki/Komodo%20dragon | Komodo dragon | The Komodo dragon (Varanus komodoensis), also known as the Komodo monitor, is a large reptile of the monitor lizard family Varanidae that is endemic to the Indonesian islands of Komodo, Rinca, Flores, Gili Dasami, and Gili Motang. It is the largest extant species of lizard, with the males growing to a maximum length of and weighing up to .
As a result of their size, Komodo dragons are apex predators, and dominate the ecosystems in which they live. Komodo dragons hunt and ambush prey including invertebrates, birds, and mammals. Komodo dragons' group behavior in hunting is exceptional in the reptile world. The diet of Komodo dragons mainly consists of Javan rusa (Rusa timorensis), though they also eat considerable amounts of carrion. Komodo dragons also occasionally attack humans.
Mating begins between May and August, and the eggs are laid in September; as many as 20 eggs are deposited at a time in an abandoned megapode nest or in a self-dug nesting hole. The eggs are incubated for seven to eight months, hatching in April, when insects are most plentiful. Young Komodo dragons are vulnerable and dwell in trees to avoid predators, such as cannibalistic adults, which young Komodo dragons also try to repel by rolling in feces. They take 8 to 9 years to mature and are estimated to live up to 30 years.
Komodo dragons were first recorded by Western scientists in 1910. Their large size and fearsome reputation make them popular zoo exhibits. In the wild, their range has been reduced by human encroachment and is likely to contract further from the effects of climate change; hence, they are listed as Endangered by the IUCN Red List. They are protected under Indonesian law, and Komodo National Park was founded in 1980 to aid protection efforts.
Taxonomy
Komodo dragons were first documented by Europeans in 1910, when rumors of a "land crocodile" reached Lieutenant van Steyn van Hensbroek of the Dutch colonial administration. Widespread notoriety came after 1912, when Peter Ouwens, the director of the Zoological Museum of Bogor, Java, published a paper on the topic after receiving a photo and a skin from the lieutenant, as well as two other specimens from a collector.
The first two live Komodo dragons to arrive in Europe were exhibited in the Reptile House at London Zoo when it opened in 1927. Joan Beauchamp Procter made some of the earliest observations of these animals in captivity and she demonstrated their behaviour at a scientific meeting of the Zoological Society of London in 1928.
The Komodo dragon was the driving factor for an expedition to Komodo Island by W. Douglas Burden in 1926. After returning with 12 preserved specimens and two live ones, this expedition provided the inspiration for the 1933 movie King Kong. It was also Burden who coined the common name "Komodo dragon". Three of his specimens were stuffed and are still on display in the American Museum of Natural History.
The Dutch island administration, realizing the limited number of individuals in the wild, soon outlawed sport hunting and heavily limited the number of individuals taken for scientific study. Collecting expeditions ground to a halt with the occurrence of World War II, not resuming until the 1950s and 1960s, when studies examined the Komodo dragon's feeding behavior, reproduction, and body temperature. At around this time, an expedition was planned in which a long-term study of the Komodo dragon would be undertaken. This task was given to the Auffenberg family, who stayed on Komodo Island for 11 months in 1969. During their stay, Walter Auffenberg and his assistant Putra Sastrawan captured and tagged more than 50 Komodo dragons.
Research from the Auffenberg expedition proved enormously influential in raising Komodo dragons in captivity. Research after that of the Auffenberg family has shed more light on the nature of the Komodo dragon, with biologists such as Claudio Ciofi continuing to study the creatures.
Etymology
The Komodo dragon is also sometimes known as the Komodo monitor or the Komodo Island monitor in scientific literature, although these names are uncommon. To the natives of Komodo Island, it is referred to as ora, buaya darat ('land crocodile'), or biawak raksasa ('giant monitor').
Evolutionary history
Genetic analysis of mitochondrial DNA shows the Komodo dragon to be the closest relative (sister taxon) of the Australian lace monitor (V. varius), with their common ancestor diverging from a lineage that gave rise to the crocodile monitor (Varanus salvadorii) of New Guinea. A 2021 study showed that during the late Miocene, the ancestors of Komodo dragons had hybridized with the common ancestor of Australian sand monitors (including V. spenceri, V. gouldii, V. rosenbergi and V. panoptes).
Fossils from across Queensland demonstrate that the Komodo dragon was once present in Australia, with fossils spanning from the Early Pliocene (~3.8 million years ago) to the Middle Pleistocene, with the youngest confirmed records of the species in Australia dating to at latest 330,000 years ago. In Australia, it coexisted with the even larger monitor species Varanus priscus also known as megalania, the largest terrestrial lizard ever. The oldest records of the Komodo dragon on Flores date to around 1.4 million years ago, during the Early Pleistocene. Additionally, Pleistocene fossils of Varanus found in Java and Timor may belong to the Komodo dragon.
Description
In the wild, adult Komodo dragons usually weigh around , although captive specimens often weigh more. According to Guinness World Records, an average adult male will weigh and measure , while an average female will weigh and measure . The largest verified specimen in captivity was long and weighed , including its undigested food. The largest wild specimen had a length , a snout-vent length (SVL) and a mass of excluding stomach contents. The heaviest reached a mass of . The study noted that weights greater than were possible but only after the animal had consumed a large meal.
The Komodo dragon has a tail as long as its body, as well as about 60 frequently replaced, serrated teeth that can measure up to in length. Its saliva is frequently blood-tinged because its teeth are almost completely covered by gingival tissue that is naturally lacerated during feeding. It also has a long, yellow, deeply forked tongue. Komodo dragon skin is reinforced by armoured scales, which contain tiny bones called osteoderms that function as a sort of natural chain-mail. The only areas lacking osteoderms on the head of the adult Komodo dragon are around the eyes, nostrils, mouth margins, and parietal eye, a light-sensing organ on the top of the head. Where lizards typically have one or two varying patterns or shapes of osteoderms, komodos have four: rosette, platy, dendritic, and vermiform. This rugged hide makes Komodo dragon skin a poor source of leather. Additionally, these osteoderms become more extensive and variable in shape as the Komodo dragon ages, ossifying more extensively as the lizard grows. These osteoderms are absent in hatchlings and juveniles, indicating that the natural armor develops as a product of age and competition between adults for protection in intraspecific combat over food and mates.
Morphology
Dentition
Komodo dragons have ziphodont teeth, which are defined as teeth that are laterally flattened, recurved, and with serrated tooth crowns where the serrations have a dentine core and a very thin enamel outer layer. This is the same type of dentition observed in many extinct theropod dinosaurs. The teeth of the insectivorous juveniles in contrast are barely recurved, with fewer and less well-developed serrations that lack dentine cores.
A 2024 study published in Nature Ecology & Evolution found that Komodo dragons have orange, iron-enriched coatings on their tooth serrations and tips, as an adaptation for maintaining the sharp cutting edges. This feature is also observed to a lesser degree in a few other Australasian to Asian monitor species, though notably absent in a few other species from that range.
Teeth are quickly replaced every 40 days, while maintaining up to 5 replacement teeth for each tooth position at any given time. This high rate of replacement and large number of replacement teeth is similar to that of the crocodile monitor. Many other monitor species as well as Chinese crocodile lizards and beaded lizards only have 1-2 replacement teeth behind each tooth position.
Senses
As with other varanids, Komodo dragons have only a single ear bone, the stapes, for transferring vibrations from the tympanic membrane to the cochlea. This arrangement means they are likely restricted to sounds in the 400 to 2,000 hertz range, compared to humans who hear between 20 and 20,000 hertz. They were formerly thought to be deaf when a study reported no agitation in wild Komodo dragons in response to whispers, raised voices, or shouts. This was disputed when London Zoo employee Joan Procter trained a captive specimen to come out to feed at the sound of her voice, even when she could not be seen.
The Komodo dragon can see objects as far away as , but because its retinas only contain cones, it is thought to have poor night vision. It can distinguish colours, but has poor visual discrimination of stationary objects.
As with many other reptiles, the Komodo dragon primarily relies on its tongue to detect, taste, and smell stimuli, with the vomeronasal sense using the Jacobson's organ, rather than using the nostrils. With the help of a favorable wind and its habit of swinging its head from side to side as it walks, a Komodo dragon may be able to detect carrion from away. It only has a few taste buds in the back of its throat. Its scales, some of which are reinforced with bone, have sensory plaques connected to nerves to facilitate its sense of touch. The scales around the ears, lips, chin, and soles of the feet may have three or more sensory plaques.
Behaviour and ecology
The Komodo dragon prefers hot and dry places and typically lives in dry, open grassland, savanna, and tropical forest at low elevations. As an ectotherm, it is most active in the day, although it exhibits some nocturnal activity. Komodo dragons are solitary, coming together only to breed and eat. They are capable of running rapidly in brief sprints up to , diving up to , and climbing trees proficiently when young through use of their strong claws. To catch out-of-reach prey, the Komodo dragon may stand on its hind legs and use its tail as a support. As it matures, its claws are used primarily as weapons, as its great size makes climbing impractical.
For shelter, the Komodo dragon digs holes that can measure from wide with its powerful forelimbs and claws. Because of its large size and habit of sleeping in these burrows, it is able to conserve body heat throughout the night and minimise its basking period the morning after. The Komodo dragon stays in the shade during the hottest part of the day and hunts in the afternoon. These special resting places, usually located on ridges with cool sea breezes, are marked with droppings and are cleared of vegetation. They serve as strategic locations from which to ambush deer.
Diet
Komodo dragons are apex predators. They are carnivores; although they have been considered as eating mostly carrion, they will frequently ambush live prey with a stealthy approach. When suitable prey arrives near a dragon's ambush site, it will suddenly charge at the animal at high speeds and go for the underside or the throat.
Komodo dragons do not deliberately allow the prey to escape with fatal injuries but try to kill prey outright using a combination of lacerating damage and blood loss. They have been recorded as killing wild pigs within seconds, and observations of Komodo dragons tracking prey for long distances are likely misinterpreted cases of prey escaping an attack before succumbing to infection. Most prey attacked by a Komodo dragon reputedly suffer from sepsis and will later be eaten by the same or other lizards.
Komodo dragons eat by tearing large chunks of flesh and swallowing them whole while holding the carcass down with their forelegs. For smaller prey up to the size of a goat, their loosely articulated jaws, flexible skulls, and expandable stomachs allow them to swallow prey whole. The undigested vegetable contents of a prey animal's stomach and intestines are typically avoided. Copious amounts of red saliva the Komodo dragons produce help to lubricate the food, but swallowing is still a long process (15–20 minutes to swallow a goat). A Komodo dragon may attempt to speed up the process by ramming the carcass against a tree to force it down its throat, sometimes ramming so forcefully that the tree is knocked down. A small tube under the tongue that connects to the lungs allows it to breathe while swallowing.
After eating up to 80% of its body weight in one meal, it drags itself to a sunny location to speed digestion, as the food could rot and poison the dragon if left undigested in its stomach for too long. Because of their slow metabolism, large dragons can survive on as few as 12 meals a year. After digestion, the Komodo dragon regurgitates a mass of horns, hair, and teeth known as the gastric pellet, which is covered in malodorous mucus. After regurgitating the gastric pellet, it rubs its face in the dirt or on bushes to get rid of the mucus.
The eating habits of Komodo dragons follow a hierarchy, with the larger animals generally eating before the smaller ones. The largest male typically asserts his dominance and the smaller males show their submission by use of body language and rumbling hisses. Dragons of equal size may resort to "wrestling". Losers usually retreat, though they have been known to be killed and eaten by victors.
The Komodo dragon's diet varies depending on stage of growth. Young Komodo dragons will eat insects, birds, birds' eggs and small reptiles, while larger Komodo dragons (typically over ) prefer large ungulate prey, such as Javan rusa deer, wild pigs and water buffalo. Occasionally, they attack and bite humans. Sometimes they consume human corpses, digging up bodies from shallow graves. This habit of raiding graves caused the villagers of Komodo to move their graves from sandy to clay ground, and pile rocks on top of them, to deter the lizards. Dwarf species of Stegodon (a proboscidean related to living elephants) are suggested to have been a primary prey item of the Komodo dragon during the Pleistocene, prior to the introduction of their modern ungulate prey, which were only introduced to the islands in the Holocene, around 10,000–7,000 years ago.
The Komodo dragon drinks by sucking water into its mouth via buccal pumping (a process also used for respiration), lifting its head, and letting the water run down its throat.
Saliva
Although previous studies proposed that Komodo dragon saliva contains a variety of highly septic bacteria that would help to bring down prey, research in 2013 suggested that the bacteria in the mouths of Komodo dragons are ordinary and similar to those found in other carnivores. Komodo dragons have good mouth hygiene. To quote Bryan Fry: "After they are done feeding, they will spend 10 to 15 minutes lip-licking and rubbing their head in the leaves to clean their mouth ... Unlike people have been led to believe, they do not have chunks of rotting flesh from their meals on their teeth, cultivating bacteria." They do have a slashing bite, which normally includes a dose of their neurotoxin venom and anticoagulant saliva. Komodo dragons do not wait for prey to die and track it at a distance, as vipers do; observations of them hunting deer, boar and in some cases buffalo reveal that they kill prey in less than half an hour.
The observation of prey dying of sepsis would then be explained by the natural instinct of water buffalos, which are not native to the islands where the Komodo dragon lives, to run into water after escaping an attack. The warm, faeces-filled water would then cause the infections. The study used samples from 16 captive dragons (10 adults and six neonates) from three US zoos.
Antibacterial immune factor
Researchers have isolated a powerful antibacterial peptide, VK25, from the blood plasma of Komodo dragons. Based on their analysis of this peptide, they have synthesized a short peptide dubbed DRGN-1 and tested it against multidrug-resistant (MDR) pathogens. Preliminary results of these tests show that DRGN-1 is effective in killing drug-resistant bacterial strains and even some fungi. It has the added observed benefit of significantly promoting wound healing in both uninfected and mixed biofilm infected wounds.
Disputed claims of venom
In late 2005, researchers at the University of Melbourne speculated that the perentie (Varanus giganteus), other species of monitors, and agamids may be somewhat venomous. The team believes that the immediate effects of bites from these lizards were caused by mild envenomation. Bites on human digits by a lace monitor (V. varius), a Komodo dragon, and a spotted tree monitor (V. timorensis) all produced similar effects: rapid swelling, localised disruption of blood clotting, and shooting pain up to the elbow, with some symptoms lasting for several hours.
In 2009, the same researchers published further evidence demonstrating that Komodo dragons possess a venomous bite. MRI scans of a preserved skull showed the presence of two glands in the lower jaw. The researchers extracted one of these glands from the head of a terminally ill dragon in the Singapore Zoological Gardens, and found it secreted several different toxic proteins. The known functions of these proteins include inhibition of blood clotting, lowering of blood pressure, muscle paralysis, and the induction of hypothermia, leading to shock and loss of consciousness in envenomated prey. As a result of the discovery, the previous theory that bacteria were responsible for the deaths of Komodo victims was disputed.
Other scientists have stated that this allegation of venom glands "has had the effect of underestimating the variety of complex roles played by oral secretions in the biology of reptiles, produced a very narrow view of oral secretions and resulted in misinterpretation of reptilian evolution." According to these scientists "reptilian oral secretions contribute to many biological roles other than to quickly dispatch prey." These researchers concluded, "Calling all in this clade venomous implies an overall potential danger that does not exist, misleads in the assessment of medical risks, and confuses the biological assessment of squamate biochemical systems." Evolutionary biologist Schwenk says that even if the lizards have venom-like proteins in their mouths they may be using them for a different function, and he doubts venom is necessary to explain the effect of a Komodo dragon bite, arguing that shock and blood loss are the primary factors. As of 2023, no clear unambiguous evidence of Komodo dragon bites being venomous has been presented.
Reproduction
Mating occurs between May and August, with the eggs laid in September. During this period, males fight over females and territory by grappling with one another upon their hind legs, with the loser eventually being pinned to the ground. These males may vomit or defecate when preparing for the fight. The winner of the fight will then flick his long tongue at the female to gain information about her receptivity. Females are antagonistic and resist with their claws and teeth during the early phases of courtship. Therefore, the male must fully restrain the female during coitus to avoid being hurt. Other courtship displays include males rubbing their chins on the female, hard scratches to the back, and licking. Copulation occurs when the male inserts one of his hemipenes into the female's cloaca. Komodo dragons may be monogamous and form "pair bonds", a rare behavior for lizards.
Female Komodos lay their eggs from August to September and may use several types of nesting site; in one study, 60% laid their eggs in the nests of orange-footed scrubfowl (a moundbuilder or megapode), 20% at ground level and 20% in hilly areas. The females make many camouflage nests/holes to prevent other dragons from eating the eggs. Nests typically house one female; however, one study found evidence of two females occasionally occupying the same den. Clutches contain an average of 20 eggs, which have an incubation period of 7–8 months. Hatching is an exhausting effort for the neonates, which break out of their eggshells with an egg tooth that falls off before long. After cutting themselves out, the hatchlings may lie in their eggshells for hours before starting to dig out of the nest. They are born quite defenseless and are vulnerable to predation. Sixteen youngsters from a single nest were on average 46.5 cm long and weighed 105.1 grams.
Young Komodo dragons spend much of their first few years in trees, where they are relatively safe from predators, including cannibalistic adults, as juvenile dragons make up 10% of their diets. The habit of cannibalism may be advantageous in sustaining the large size of adults, as medium-sized prey on the islands is rare. When the young approach a kill, they roll around in faecal matter and rest in the intestines of eviscerated animals to deter these hungry adults. Komodo dragons take approximately 8 to 9 years to mature, and may live for up to 30 years.
Parthenogenesis
A Komodo dragon at London Zoo named Sungai laid a clutch of eggs in late 2005 after being separated from male company for more than two years. Scientists initially assumed she had been able to store sperm from her earlier encounter with a male, an adaptation known as superfecundation. On 20 December 2006, it was reported that Flora, a captive Komodo dragon living in the Chester Zoo in England, was the second known Komodo dragon to have laid unfertilised eggs: she laid 11 eggs, and seven of them hatched, all of them male. Scientists at Liverpool University in England performed genetic tests on three eggs that collapsed after being moved to an incubator, and verified Flora had never been in physical contact with a male dragon. After Flora's eggs' condition had been discovered, testing showed Sungai's eggs were also produced without outside fertilization. On 31 January 2008, the Sedgwick County Zoo in Wichita, Kansas, became the first zoo in the Americas to document parthenogenesis in Komodo dragons. The zoo has two adult female Komodo dragons, one of which laid about 17 eggs on 19–20 May 2007. Only two eggs were incubated and hatched due to space issues; the first hatched on 31 January 2008, while the second hatched on 1 February. Both hatchlings were males.
Komodo dragons have the ZW chromosomal sex-determination system, as opposed to the mammalian XY system. Male progeny prove Flora's unfertilized eggs were haploid (n) and doubled their chromosomes later to become diploid (2n) (by being fertilized by a polar body, or by chromosome duplication without cell division), rather than by her laying diploid eggs by one of the meiosis reduction-divisions in her ovaries failing. When a female Komodo dragon (with ZW sex chromosomes) reproduces in this manner, she provides her progeny with only one chromosome from each of her pairs of chromosomes, including only one of her two sex chromosomes. This single set of chromosomes is duplicated in the egg, which develops parthenogenetically. Eggs receiving a Z chromosome become ZZ (male); those receiving a W chromosome become WW and fail to develop, meaning that only males are produced by parthenogenesis in this species. It has been hypothesised that this reproductive adaptation allows a single female to enter an isolated ecological niche (such as an island) and by parthenogenesis produce male offspring, thereby establishing a sexually reproducing population (via reproduction with her offspring that can result in both male and female young). Despite the advantages of such an adaptation, zoos are cautioned that parthenogenesis may be detrimental to genetic diversity.
Encounters with humans
Attacks on humans are rare, but Komodo dragons have been responsible for several human fatalities, both in the wild and in captivity. According to data from Komodo National Park spanning a 38-year period between 1974 and 2012, there were 24 reported attacks on humans, five of them fatal. Most of the victims were local villagers living around the national park.
Conservation
The Komodo dragon is classified by the IUCN as Endangered and is listed on the IUCN Red List. The species' sensitivity to natural and human-made threats has long been recognized by conservationists, zoological societies, and the Indonesian government. Komodo National Park was founded in 1980 to protect Komodo dragon populations on islands including Komodo, Rinca, and Padar. Later, the Wae Wuul and Wolo Tado Reserves were opened on Flores to aid Komodo dragon conservation.
Komodo dragons generally avoid encounters with humans. Juveniles are very shy and will flee quickly into a hideout if a human comes closer than about . Older animals will also retreat from humans from a shorter distance away. If cornered, they may react aggressively by gaping their mouth, hissing, and swinging their tail. If they are disturbed further, they may attack and bite. Although there are anecdotes of unprovoked Komodo dragons attacking or preying on humans, most of these reports are either not reputable or have subsequently been interpreted as defensive bites. Only very few cases are truly the result of unprovoked attacks by atypical individuals who lost their fear of humans.
Volcanic activity, earthquakes, loss of habitat, fire, tourism, loss of prey due to poaching, and illegal poaching of the dragons themselves have all contributed to the vulnerable status of the Komodo dragon. A major future threat to the species is climate change via both aridification and sea level rise, which can affect the low-lying habitats and valleys that the Komodo dragon depends on, as Komodo dragons do not range into the higher-altitude regions of the islands they inhabit. Based on projections, climate change will lead to a decline in suitable habitat of 8.4%, 30.2%, or 71% by 2050 depending on the climate change scenario. Without effective conservation actions, populations on Flores would be extirpated in all scenarios, while in the more extreme scenarios only the populations on Komodo and Rinca would persist, in highly reduced numbers. Rapid climate change mitigation is crucial for conserving the species in the wild. Other scientists have disputed the conclusions about the effects of climate change on Komodo dragon populations.
Under Appendix I of CITES (the Convention on International Trade in Endangered Species), commercial international trade of Komodo dragon skins or specimens is prohibited. Despite this, there are occasional reports of illegal attempts to trade in live Komodo dragons. The most recent attempt was in March 2019, when Indonesian police in the East Java city of Surabaya reported that a criminal network had been caught trying to smuggle 41 young Komodo dragons out of Indonesia. The plan was said to include shipping the animals to several other countries in Southeast Asia through Singapore. It was hoped that the animals could be sold for up to 500 million rupiah (around US$35,000) each. It was believed that the Komodo dragons had been smuggled out of East Nusa Tenggara province through the port at Ende in central Flores.
In 2013, the total population of Komodo dragons in the wild was assessed as 3,222 individuals, declining to 3,092 in 2014 and 3,014 in 2015. Populations remained relatively stable on the bigger islands (Komodo and Rinca), but decreased on smaller islands, such as Nusa Kode and Gili Motang, likely due to diminishing prey availability. On Padar, a former population of Komodo dragons has recently become extirpated, of which the last individuals were seen in 1975. It is widely assumed that the Komodo dragon died out on Padar following a major decline of populations of large ungulate prey, for which poaching was most likely responsible.
In captivity
Komodo dragons have long been sought-after zoo attractions, where their size and reputation make them popular exhibits. They are, however, rare in zoos because they are susceptible to infection and parasitic disease if captured from the wild, and do not readily reproduce in captivity. A pair of Komodo dragons was displayed at the Bronx Zoo in New York in September 1926, but they only lasted a couple of months, dying in October and November 1926. The first Komodo dragons were displayed at London Zoo in 1927. A Komodo dragon was exhibited in 1934 in the United States at the National Zoo in Washington, D.C., but it lived for only two years. More attempts to exhibit Komodo dragons were made, but the lifespan of the animals in captivity at the time proved very short, averaging five years in the National Zoological Park. Studies were done by Walter Auffenberg, which were documented in his book The Behavioral Ecology of the Komodo Monitor, eventually allowing for more successful management and breeding of the dragons in captivity. Surabaya Zoo in Indonesia has been breeding Komodo dragons since 1990 and had 134 dragons in 2022, the largest collection outside its natural habitat. As of May 2009, there were 35 North American, 13 European, one Singaporean, two African, and two Australian institutions which housed captive Komodo dragons. In 2016, four Komodo dragons were transferred from the Bronx Zoo to Madras Crocodile Bank Trust in India.
A variety of behaviors have been observed from captive specimens. Most individuals become relatively tame within a short time, and are capable of recognising individual humans and discriminating between familiar and unfamiliar keepers. Komodo dragons have also been observed to engage in play with a variety of objects, including shovels, cans, plastic rings, and shoes. This behavior does not seem to be "food-motivated predatory behavior".
Even seemingly docile dragons may become unpredictably aggressive, especially when the animal's territory is invaded by someone unfamiliar. In June 2001, a Komodo dragon seriously injured Phil Bronstein, the then-husband of actress Sharon Stone, when he entered its enclosure at the Los Angeles Zoo after being invited in by its keeper. Bronstein was bitten on his bare foot, as the keeper had told him to take off his white shoes and socks, which the keeper stated could potentially excite the Komodo dragon as they were the same colour as the white rats the zoo fed the dragon. Although he survived, Bronstein needed to have several tendons in his foot reattached surgically.
| Biology and health sciences | Reptiles | null |
17361 | https://en.wikipedia.org/wiki/Kiln | Kiln | A kiln is a thermally insulated chamber, a type of oven, that produces temperatures sufficient to complete some process, such as hardening, drying, or chemical changes. Kilns have been used for millennia to turn objects made from clay into pottery, tiles and bricks. Various industries use rotary kilns for pyroprocessing (to calcinate ores, such as limestone to lime for cement) and to transform many other materials.
Etymology
According to the Oxford English Dictionary, kiln was derived from the words cyline, cylene, cyln(e) in Old English, in turn derived from Latin culina ('kitchen'). In Middle English, the word is attested as kulne, kyllne, kilne, kiln, kylle, kyll, kil, kill, keele, kiele. In Greek the word καίειν, kaiein, means 'to burn'.
Pronunciation
The word 'kiln' was originally pronounced 'kil' with the 'n' silent, as is referenced in Webster's Dictionary of 1828 and in English Words as Spoken and Written for Upper Grades by James A. Bowen 1900: "The digraph ln, n silent, occurs in kiln. A fall down the kiln can kill you." Bowen was noting that "kill" and "kiln" are homophones.
Uses of kilns
Pit fired pottery was produced for thousands of years before the earliest known kiln, which dates to around 6000 BCE, and was found at the Yarim Tepe site in modern Iraq. Neolithic kilns were able to produce temperatures greater than 900 °C (1652 °F).
Uses include:
Annealing, fusing and deforming glass, or fusing metallic oxide paints to the surface of glass
Heat treatment for metallic workpieces
Ceramics
Brickworks
Melting metal for casting
Calcination of ore in a rotary kiln prior to smelting
Pyrolysis of chemical materials
Heating limestone with clay in the manufacture of Portland cement, the cement kiln
Heating limestone to make quicklime or calcium oxide, the lime kiln
Heating gypsum to make plaster of Paris
For cremation (at high temperature)
Drying of tobacco leaves
Drying malted barley for brewing and other fermentations
Drying hops for brewing (known as a hop kiln or oast house)
Drying corn (grain) before grinding or storage, sometimes called a corn kiln, corn drying kiln
Drying green lumber so it can be used immediately
Drying wood for use as firewood
Heating wood to the point of pyrolysis to produce charcoal
Extracting pine tar from pine tree logs or roots
Ceramic kilns
Kilns are an essential part of the manufacture of almost all types of ceramics. Ceramics require high temperatures so chemical and physical reactions will occur to permanently alter the unfired body. In the case of pottery, clay materials are shaped, dried and then fired in a kiln. The final characteristics are determined by the composition and preparation of the clay body and the temperature at which it is fired. After a first firing, glazes may be used and the ware is fired a second time to fuse the glaze into the body. A third firing at a lower temperature may be required to fix overglaze decoration. Modern kilns often have sophisticated electronic control systems, although pyrometric devices are often also used.
Clay consists of fine-grained particles that are relatively weak and porous. Clay is combined with other minerals to create a workable clay body. The firing process includes sintering. This heats the clay until the particles partially melt and flow together, creating a strong, single mass, composed of a glassy phase interspersed with pores and crystalline material. Through firing, the pores are reduced in size, causing the material to shrink slightly.
In the broadest terms, there are two types of kilns: intermittent and continuous, both being an insulated box with a controlled inner temperature and atmosphere.
A continuous kiln, sometimes called a tunnel kiln, is long with only the central portion directly heated. From the cool entrance, ware is slowly moved through the kiln, and its temperature is increased steadily as it approaches the central, hottest part of the kiln. As it continues through the kiln, the temperature is reduced until the ware exits the kiln nearly at room temperature. A continuous kiln is energy-efficient, because heat given off during cooling is recycled to pre-heat the incoming ware. In some designs, the ware is left in one place, while the heating zone moves across it. Kilns in this type include:
Hoffmann kiln
Bull's Trench kiln
Habla (Zig-Zag) kiln
Roller kiln: A special type of kiln, common in tableware and tile manufacture, is the roller-hearth kiln, in which wares placed on bats are carried through the kiln on rollers.
In the intermittent kiln, the ware is placed inside the kiln, the kiln is closed, and the internal temperature is increased according to a schedule. After the firing is completed, both the kiln and the ware are cooled. The ware is removed, the kiln is cleaned and the next cycle begins. Kilns in this type include:
Clamp kiln
Skove kiln
Scotch kiln
Down-draft kiln
Shuttle kilns: this is a car-bottom kiln with a door on one or both ends. Burners are positioned top and bottom on each side, creating a turbulent circular air flow. This type of kiln is generally a multi-car design and is used for processing whitewares, technical ceramics and refractories in batches. Depending upon the size of ware, shuttle kilns may be equipped with car-moving devices to transfer fired and unfired ware in and out of the kiln. Shuttle kilns can be either updraft or downdraft. A shuttle kiln derives its name from the fact that kiln cars can enter a shuttle kiln from either end of the kiln, whereas a tunnel kiln has flow in only one direction.
Kiln technology is very old. Kilns developed from a simple earthen trench filled with pots and fuel pit firing, to modern methods. One improvement was to build a firing chamber around pots with baffles and a stoking hole. This conserved heat. A chimney stack improved the air flow or draw of the kiln, thus burning the fuel more completely.
Chinese kiln technology has always been a key factor in the development of Chinese pottery, and until recent centuries was the most advanced in the world. The Chinese developed kilns capable of firing at around 1,000 °C before 2000 BCE. These were updraft kilns, often built below ground. Two main types of kiln were developed by about 200 AD and remained in use until modern times. These are the dragon kiln of hilly southern China, usually fuelled by wood, long and thin and running up a slope, and the horseshoe-shaped mantou kiln of the north Chinese plains, smaller and more compact. Both could reliably produce the temperatures of up to 1300 °C or more needed for porcelain. In the late Ming, the egg-shaped kiln or zhenyao was developed at Jingdezhen and mainly used there. This was something of a compromise between the other types, and offered locations in the firing chamber with a range of firing conditions.
Both Ancient Roman pottery and medieval Chinese pottery could be fired in industrial quantities, with tens of thousands of pieces in a single firing. Early examples of simpler kilns found in Britain include those that made roof-tiles during the Roman occupation. These kilns were built up the side of a slope, such that a fire could be lit at the bottom and the heat would rise up into the kiln.
Traditional kilns include:
Dragon kiln of south China: thin and long, climbing up a hillside. This type spread to the rest of East Asia, giving the Japanese anagama kiln, which arrived via Korea in the 5th century. This kiln usually consists of one long firing chamber, pierced with smaller ware-stacking ports on one side, with a firebox at one end and a flue at the other. Firing time can vary from one day to several weeks. Traditional anagama kilns are also built on a slope to allow for a better draft. The Japanese noborigama kiln is an evolution of the anagama design: a multi-chamber kiln in which wood is stacked from the front firebox at first, then only through the side-stoking holes, with the benefit of air pre-heated by the front firebox, enabling more efficient firings.
Khmer kiln: quite similar to the anagama kiln, but traditional Khmer kilns had a flat roof, whereas Chinese, Korean and Japanese kilns have an arched roof. These kilns vary in size and can measure in the tens of meters. The firing time also varies and can last several days.
Bottle kiln: a type of intermittent kiln, usually coal-fired, formerly used in the firing of pottery; such a kiln was surrounded by a tall brick hovel or cone, of typical bottle shape. The tableware was enclosed in sealed fireclay saggars; as the heat and smoke from the fires passed through the oven it would be fired at temperatures up to .
Biscuit kiln: The first firing would take place in the biscuit kiln.
Glost kiln: The biscuit-ware was glazed and given a second glost firing in glost kilns.
Mantou kiln of north China, smaller and more compact than the dragon kiln
Muffle kiln: This was used to fire over-glaze decoration, at a temperature under . In these cool kilns the smoke from the fires passed through flues outside the oven.
Catenary arch kiln: Typically used for the firing of pottery using salt, these by their form (a catenary arch) tend to retain their shape over repeated heating and cooling cycles, whereas other types require extensive metalwork supports.
Sèvres kiln: invented in Sèvres, France, it efficiently generated high temperatures to produce waterproof ceramic bodies and easy-to-obtain glazes. It features a down-draft design that reaches high temperatures in a shorter time, even with wood firing.
Bourry box kiln: similar to the Sèvres kiln.
Modern kilns
With the industrial age, kilns were designed to use electricity and more refined fuels, including natural gas and propane. Many large industrial pottery kilns use natural gas, as it is generally clean, efficient and easy to control. Modern kilns can be fitted with computerized controls allowing for fine adjustments during the firing. A user may choose to control the rate of temperature climb or ramp, hold or soak the temperature at any given point, or control the rate of cooling. Both electric and gas kilns are common for smaller-scale production in industry and for craft, handmade and sculptural work.
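As a rough illustration of such a programmed schedule, the sketch below models each segment as a ramp rate, a target temperature, and a soak time. The numbers are illustrative only, not a recommended firing schedule, and real controller interfaces differ:

```python
# Each segment: (ramp rate in °C/h, target temperature in °C, soak in minutes).
# Hypothetical values for illustration; not a recommended schedule.
SCHEDULE = [
    (100, 600, 0),    # slow climb while remaining moisture escapes
    (150, 1000, 10),  # faster ramp once the ware is safely dry
    (60, 1220, 20),   # slow final approach, then soak to even out heat-work
]

def programmed_hours(schedule, start_temp_c=20.0):
    """Total programmed heating time, ignoring the cooling phase."""
    temp, hours = start_temp_c, 0.0
    for ramp_c_per_h, target_c, soak_min in schedule:
        hours += abs(target_c - temp) / ramp_c_per_h + soak_min / 60.0
        temp = target_c
    return hours

print(f"Programmed firing time: {programmed_hours(SCHEDULE):.1f} h")  # -> 12.6 h
```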
Modern kilns include:
Retort kiln: a type of kiln which can reach temperatures of around for extended periods of time. Typically, these kilns are used for industrial purposes, and they feature movable charging cars which make up the bottom and door of the kiln.
Electric kilns: kilns operated by electricity were developed in the 20th century, primarily for smaller scale use such as in schools, universities, and hobby centers. The atmosphere in most designs of electric kiln is rich in oxygen, as there is no open flame to consume oxygen molecules. However, reducing conditions can be created with appropriate gas input, or by using saggars in a particular way.
Feller kiln: brought contemporary design to wood firing by re-using unburnt gas from the chimney to heat intake air before it enters the firebox. This leads to an even shorter firing cycle and less wood consumption. Because the in-chimney radiator is typically metal, the design requires external ventilation to prevent it from melting. The result is a very efficient wood kiln, firing one cubic metre of ceramics with one cubic metre of wood.
Microwave assisted firing: this technique combines microwave energy with more conventional energy sources, such as radiant gas or electric heating, to process ceramic materials to the required high temperatures. Microwave-assisted firing offers significant economic benefits, chiefly through shorter firing cycles and lower energy consumption.
Microwave kiln: These small kilns are designed to be placed inside a standard microwave oven. The kiln body is made from a porous ceramic material lined with a coating that absorbs microwave energy. The microwave kiln is placed inside a microwave oven and heated to the desired temperature. The heating process is much less controlled than in most modern electric kilns, as there is no built-in temperature monitoring. The user must monitor the process closely to achieve the desired results, adjusting the time and power levels programmed on the microwave oven. A small hole in the lid of the kiln can be used to estimate the interior temperature visually, as hot materials will glow. Microwave kilns are designed to reach internal temperatures of over , hot enough to work some types of glass, metals, and ceramics, while the outside of the kiln remains cool enough to handle with hot pads or tongs. After firing, the kiln should be removed from the microwave oven and placed on a heat-proof surface while it is allowed to cool. Microwave kilns are limited in size, usually no more than in diameter.
Top-hat kiln: an intermittent kiln of a type sometimes used to fire pottery. The ware is set on a refractory hearth, or plinth, over which a box-shaped cover is lowered.
Wood-drying kiln
Green wood coming straight from the felled tree has far too high a moisture content to be commercially useful; it will rot, warp and split. Both hardwoods and softwoods must be left to dry until the moisture content is between 18% and 8%. This can be a long process unless it is accelerated by use of a kiln. A variety of kiln technologies exist today: conventional, dehumidification, solar, vacuum and radio frequency.
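Wood moisture content is conventionally expressed as a percentage of the oven-dry mass. A minimal sketch of that calculation, with hypothetical sample masses:

```python
def moisture_content_pct(green_mass_g: float, oven_dry_mass_g: float) -> float:
    """Moisture content on the conventional oven-dry basis."""
    return (green_mass_g - oven_dry_mass_g) / oven_dry_mass_g * 100.0

# Hypothetical example: a 500 g sample that weighs 420 g after oven-drying
print(f"{moisture_content_pct(500, 420):.0f}% MC")  # -> 19%, just above the 18% target
```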
Conventional wood dry kilns are either package-type (side-loader) or track-type (tram) construction. Most hardwood lumber kilns are side-loader kilns in which fork trucks are used to load lumber packages into the kiln. Most softwood kilns are track types in which the timber is loaded on kiln/track cars for loading the kiln. Modern high-temperature, high-air-velocity conventional kilns can typically dry green wood in 10 hours down to a moisture content of 18%. However, 25-mm-thick green red oak requires about 28 days to dry down to a moisture content of 8%.
Heat is typically introduced via steam running through fin/tube heat exchangers controlled by on/off pneumatic valves. Humidity is removed by a system of vents, the specific layout of which is usually particular to a given manufacturer. In general, cool dry air is introduced at one end of the kiln while warm moist air is expelled at the other. Hardwood conventional kilns also require the introduction of humidity, via either steam spray or cold-water misting systems, to keep the relative humidity inside the kiln from dropping too low during the drying cycle. Fan directions are typically reversed periodically to ensure even drying of larger kiln charges.
Most softwood kilns operate below temperature. Hardwood kiln drying schedules typically keep the dry bulb temperature below . Difficult-to-dry species might not exceed .
Dehumidification kilns are similar to other kilns in basic construction and drying times are usually comparable. Heat comes primarily from an integral dehumidification unit that also removes humidity. Auxiliary heat is often provided early in the schedule to supplement the dehumidifier.
Solar kilns are conventional kilns, typically built by hobbyists to keep initial investment costs low. Heat is provided via solar radiation, while internal air circulation is typically passive.
Vacuum and radio frequency kilns reduce the air pressure in an attempt to speed up the drying process. A variety of these vacuum technologies exist, varying primarily in the method by which heat is introduced into the wood charge. Hot-water platen vacuum kilns use aluminum heating plates, with water circulating within them as the heat source, and typically operate at significantly reduced absolute pressure. Discontinuous and SSV (super-heated steam) kilns use atmospheric pressure to introduce heat into the kiln charge: the entire charge comes up to full atmospheric pressure, the air in the chamber is then heated, and finally a vacuum is pulled as the charge cools. SSV kilns run at partial atmospheres, typically around one-third of full atmospheric pressure, in a hybrid of vacuum and conventional kiln technology. (SSV kilns are significantly more popular in Europe, where the locally harvested wood is easier to dry than North American woods.) RF/V (radio frequency + vacuum) kilns use radio-frequency radiation to heat the kiln charge, and typically have the highest operating cost because the heat of vaporization is provided by electricity rather than by local fossil fuel or waste wood sources.
The economics of different wood-drying technologies are based on the total energy, capital, insurance/risk, environmental impact, labor, maintenance, and product-degradation costs. These costs, which can be a significant part of plant costs, reflect the differential impact of having drying equipment in a specific plant. Every piece of equipment from the green trimmer to the infeed system at the planer mill is part of the "drying system". The true costs of the drying system can only be determined by comparing the total plant costs and risks with and without drying.
Kiln-dried firewood was pioneered during the 1980s and was later adopted extensively in Europe due to the economic and practical benefits of selling wood with a lower moisture content (optimal moisture levels of under 20% being much easier to achieve).
The total (harmful) air emissions produced by wood kilns, including their heat source, can be significant. Typically, the higher the temperature at which the kiln operates, the larger the quantity of emissions that are produced (per mass unit of water removed). This is especially true in the drying of thin veneers and high-temperature drying of softwoods.
| Technology | Materials | null |
17362 | https://en.wikipedia.org/wiki/Kiwi%20%28bird%29 | Kiwi (bird) | Kiwi are flightless birds endemic to New Zealand of the order Apterygiformes. The five extant species fall into the family Apterygidae and genus Apteryx. Approximately the size of a domestic chicken, kiwi are the smallest ratites (which also include ostriches, emus, rheas, cassowaries and the extinct elephant birds and moa).
DNA sequence comparisons have yielded the conclusion that kiwi are much more closely related to the extinct Malagasy elephant birds than to the moa with which they shared New Zealand. There are five recognised species, four of which are currently listed as vulnerable, and one of which is near threatened. All species have been negatively affected by historic deforestation, but their remaining habitat is well protected in large forest reserves and national parks. At present, the greatest threat to their survival is predation by invasive mammalian predators.
The vestigial wings are so small as to be invisible under their bristly, hair-like, two-branched feathers. Kiwi eggs are one of the largest in proportion to body size (up to 20% of the female's weight) of any order of bird in the world. Other unique adaptations of kiwi, such as short and stout legs and using their nostrils at the end of their long beak to detect prey before they see it, have helped the bird to become internationally well known.
The kiwi is recognised as an icon of New Zealand, and the association is so strong that the term Kiwi is used internationally as the colloquial demonym for New Zealanders.
Etymology
The Māori language word is generally accepted to be "of imitative origin" from the call. However, some linguists derive the word from Proto-Nuclear Polynesian , which refers to Numenius tahitiensis, the bristle-thighed curlew, a migratory bird that winters in the tropical Pacific islands. With its long decurved bill and brown body, the curlew resembles the kiwi. So when the first Polynesian settlers arrived, they may have applied the word kiwi to the newfound bird. The bird's name is spelled with a lower-case k and, being a word of Māori origin, normally stays as kiwi when pluralised.
The genus name Apteryx is derived from Ancient Greek 'without wing': (), 'without' or 'not'; (), 'wing'.
Taxonomy and systematics
Although it was long presumed that the kiwi was closely related to the other New Zealand ratites, the moa, recent DNA studies have identified its closest relative as the extinct elephant bird of Madagascar, and among extant ratites, the kiwi is more closely related to the emu and the cassowaries than to the moa.
Research published in 2013 on an extinct genus, Proapteryx, known from the Miocene deposits of the Saint Bathans Fauna, found that it was smaller and probably capable of flight, supporting the hypothesis that the ancestor of the kiwi reached New Zealand independently from moas, which were already large and flightless by the time kiwi appeared.
Species
There are five known species of kiwi, with a number of subspecies.
Relationships in the genus Apteryx
Great spotted kiwi or roroa (Apteryx haastii); distribution: New Zealand. The largest species, standing about tall, with females weighing about and males about . It has grey-brown plumage with lighter bands. The female lays one egg a year, which both parents incubate. The population is estimated at over 20,000, distributed through the more mountainous parts of northwest Nelson, the northern West Coast, and the Southern Alps of the South Island.
Little spotted kiwi (Apteryx owenii); distribution: Kapiti Island. A small kiwi the size of a bantam, standing tall, with the female weighing . She lays one egg, which is incubated by the male. This small, docile kiwi is unable to withstand predation by introduced pigs, stoats and cats, which led to its extinction on the mainland. There are about 1350 on Kapiti Island, and it has been introduced to other predator-free islands, where it appears to be getting established, with about 50 on each island.
Okarito kiwi, rowi or Okarito brown kiwi (Apteryx rowi); distribution: South Island. First identified as a new species in 1994, the Okarito kiwi is slightly smaller than the North Island brown kiwi, with a greyish tinge to the plumage and sometimes white facial feathers. Females lay up to three eggs in a season, each one in a different nest. Male and female both incubate. Distribution is now limited to a small area on the West Coast, but studies of ancient DNA have shown that, in prehuman times, it was far more widespread on the western side of the South Island and lived in the lower half of the North Island, where it was the only kiwi species detected.
Southern brown kiwi, tokoeka or common kiwi (Apteryx australis); distribution: South Island. Almost as big as the great spotted kiwi and similar in appearance to the brown kiwi, though its plumage is lighter in colour. It is relatively numerous. Ancient DNA studies have shown that, in prehuman times, the distribution of this species included the east coast of the South Island. Several subspecies are recognised:
The Stewart Island southern brown kiwi, Apteryx australis lawryi, is from Stewart Island/Rakiura.
The northern Fiordland tokoeka (Apteryx australis ?) and southern Fiordland tokoeka (Apteryx australis ?) live in Fiordland, the remote southwest part of the South Island. These subspecies of tokoeka are relatively common and are nearly tall.
The Haast southern brown kiwi or Haast tokoeka, Apteryx australis 'Haast', is the rarest taxon of kiwi, with only about 300 individuals. It was identified as a distinct form in 1993. It occurs only in a restricted area in the Haast Range of the Southern Alps at an altitude of . This form is distinguished by a more strongly downcurved bill and more rufous plumage.
North Island brown kiwi (Apteryx mantelli, or Apteryx australis before 2000 and still in some sources); distribution: North Island. Females stand about tall and weigh about , while the males weigh about . The plumage is streaky red-brown and spiky. The female usually lays two eggs, which are incubated by the male. The North Island brown has demonstrated remarkable resilience: it adapts to a wide range of habitats, including non-native forests and some farmland. It is widespread in the northern two-thirds of the North Island and is the most common kiwi, with about 35,000 remaining.
Description
Their adaptation to a terrestrial life is extensive: like all the other ratites (ostrich, emu, rhea and cassowary), they have no keel on the sternum to anchor wing muscles. The vestigial wings are so small that they are invisible under the bristly, hair-like, two-branched feathers. While most adult birds have bones with hollow insides to minimise weight and make flight practicable, kiwi have marrow, like mammals and the young of other birds. With no constraints on weight due to flight requirements, brown kiwi females carry and lay a single egg that may weigh as much as . Like most other ratites, they have no uropygial gland (preen gland). Their bill is long, pliable and sensitive to touch, and their eyes have a reduced pecten. Their feathers lack barbules and aftershafts, and they have large vibrissae around the gape. They have 13 flight feathers, no tail and a small pygostyle. Their gizzard is weak and their caecum is long and narrow.
The eye of the kiwi is the smallest relative to body mass of all avian species, resulting in the smallest visual field as well. The eye has small specialisations for a nocturnal lifestyle, but kiwi rely more heavily on their other senses (auditory, olfactory, and somatosensory). The sight of the kiwi is so underdeveloped that blind specimens have been observed in nature, showing how little they rely on sight for survival and foraging. In one study, one-third of a population of A. rowi in New Zealand under no environmental stress had ocular lesions in one or both eyes. The same study examined three specimens that were completely blind and found them to be in good physical condition apart from their ocular abnormalities. A 2018 study revealed that the kiwi's closest relatives, the extinct elephant birds, also shared this trait despite their great size.
Unlike virtually every other palaeognath, which are generally small-brained by bird standards, kiwi have proportionally large encephalisation quotients. Hemisphere proportions are even similar to those of parrots and songbirds, though there is no evidence of similarly complex behaviour.
Behaviour and ecology
Before the arrival of humans in the 13th century or earlier, New Zealand's only endemic mammals were three species of bat, and the ecological niches that in other parts of the world were filled by creatures as diverse as horses, wolves and mice were taken up by birds (and, to a lesser extent, reptiles, insects and gastropods).
The kiwi's mostly nocturnal habits may be a result of habitat intrusion by predators, including humans. In areas of New Zealand where introduced predators have been removed, such as sanctuaries, kiwi are often seen in daylight. They prefer subtropical and temperate podocarp and beech forests, but they are being forced to adapt to different habitats, such as sub-alpine scrub, tussock grassland, and the mountains. Kiwi have a highly developed sense of smell, unusual in a bird, and are the only birds with nostrils at the end of their long beaks. Kiwi eat small invertebrates, seeds, grubs, and many varieties of worms. They also may eat fruit, small crayfish, eels and amphibians. Because their nostrils are located at the end of their long beaks, kiwi can locate insects and worms underground using their keen sense of smell, without actually seeing or feeling them. This sense of smell is due to a highly developed olfactory chamber and surrounding regions. It is a common belief that the kiwi relies solely on its sense of smell to catch prey, but this has not been scientifically observed. Lab experiments have suggested that A. australis can rely on olfaction alone, but its doing so is not consistent under natural conditions. Instead, the kiwi may rely on auditory and/or vibrotactile cues.
Once bonded, a male and female kiwi tend to live their entire lives as a monogamous couple. During the mating season, June to March, the pair call to each other at night, and meet in the nesting burrow every three days. These relationships may last for up to 20 years. They are unusual among other birds in that, along with some raptors, they have a functioning pair of ovaries. (In most birds, and in platypuses, the right ovary never matures, so that only the left is functional; see Fitzpatrick, F.L. (1934). Unilateral and bilateral ovaries in raptorial birds. The Wilson Bulletin, 46(1): 19–22.)
Kiwi eggs can weigh up to one-quarter the weight of the female. Usually, only one egg is laid per season. The kiwi lays one of the largest eggs in proportion to its size of any bird in the world, so even though the kiwi is about the size of a domestic chicken, it is able to lay eggs that are about six times the size of a chicken's egg. The eggs are smooth in texture, and are ivory or greenish white. The male incubates the egg, except for the great spotted kiwi, A. haastii, in which both parents are involved. The incubation period is 63–92 days. Producing the huge egg places significant physiological stress on the female; for the thirty days it takes to grow the fully developed egg, the female must eat three times her normal amount of food. Two to three days before the egg is laid there is little space left inside the female for her stomach and she is forced to fast.
It was believed that the large eggs were a trait of much larger moa-like ancestors, and that kiwi retained large eggs as an evolutionarily neutral trait as they became smaller. However, research in the early 2010s suggested that kiwi were descended from smaller flighted birds that flew to New Zealand and Madagascar, where they gave rise to kiwi and elephant birds. The large egg is instead thought to be an adaptation for precocity, enabling kiwi chicks to hatch mobile and with yolk to sustain them for two and a half weeks. The large eggs would be safe in New Zealand's historical absence of egg-eating ground predators, while the mobile chicks would be able to evade chick-eating flying predators.
Lice in the genus Apterygon and in the subgenus Rallicola (Aptericola) are exclusively ectoparasites of kiwi species.
Status and conservation
Nationwide studies show that only around 5–10% of kiwi chicks survive to adulthood without management, and over 70% of kiwi populations are unmanaged. However, in areas under active pest management, survival rates for North Island brown kiwi can be far higher. For example, prior to a joint 1080 poison operation undertaken by DOC and the Animal Health Board in Tongariro Forest in 2006, 32 kiwi chicks were radio-tagged; 57% of the radio-tagged chicks survived to adulthood.
Efforts to protect kiwi have had some success, and in 2017 two species were downlisted from endangered to vulnerable by the IUCN. In 2018 the Department of Conservation released its current Kiwi Conservation Plan.
Sanctuaries
In 2000, the Department of Conservation set up five kiwi sanctuaries focused on developing methods to protect kiwi and to increase their numbers.
There are three kiwi sanctuaries in the North Island:
Whangarei Kiwi Sanctuary (for Northland brown kiwi)
Moehau Kiwi Sanctuary on the Coromandel Peninsula (Coromandel brown kiwi)
Tongariro Kiwi Sanctuary near Taupō (western brown kiwi)
and two in the South Island:
Okarito Kiwi Sanctuary (Okarito kiwi)
Haast Kiwi Sanctuary (Haast tokoeka)
A number of other mainland conservation islands and fenced sanctuaries have significant populations of kiwi, including:
Zealandia fenced sanctuary in Wellington (little spotted kiwi)
Maungatautari Restoration Project in Waikato (brown kiwi)
Bushy Park Forest Reserve near Kai Iwi, Whanganui (brown kiwi)
Otanewainuku Forest in the Bay of Plenty Region (brown kiwi)
Hurunui Mainland Island, south branch, Hurunui River, North Canterbury (great spotted kiwi)
North Island brown kiwi were introduced to the Cape Sanctuary in Hawke's Bay between 2008 and 2011, which in turn provided captive-raised chicks that were released back into Maungataniwha Native Forest.
Sanctuaries for kiwi are also referred to as 'kōhanga sites' from the Māori word for 'nest' or 'nursery'.
Operation Nest Egg
Operation Nest Egg is a programme run by the BNZ Save the Kiwi Trust—a partnership between the Bank of New Zealand, the Department of Conservation and the Royal Forest and Bird Protection Society. Kiwi eggs and chicks are removed from the wild and hatched and/or raised in captivity until big enough to fend for themselves—usually when they weigh around 1200 grams (42 ounces). They are then returned to the wild. An Operation Nest Egg bird has a 65% chance of surviving to adulthood—compared to just 5% for wild-hatched and -raised chicks. The tool is used on all kiwi species except little spotted kiwi.
1080 poison
In 2004, anti-1080 activist Phillip Anderton posed for the New Zealand media with a kiwi he claimed had been poisoned. An investigation revealed that Anderton lied to journalists and the public. He had used a kiwi that had been caught in a possum trap. Extensive monitoring shows that kiwi are not at risk from the use of biodegradable 1080 poison.
Threats
Introduced mammalian predators, namely stoats, dogs, ferrets, and cats, are the principal threats to kiwi. The biggest threat to kiwi chicks is stoats, while dogs are the biggest threat to adult kiwi. Stoats are responsible for approximately half of kiwi chick deaths in many areas throughout New Zealand. Young kiwi chicks are vulnerable to stoat predation until they reach about in weight, at which time they can usually defend themselves. Cats also prey on kiwi chicks, though to a lesser extent. These predators can cause large and abrupt declines in populations. In particular, dogs find the distinctive strong scent of kiwi irresistible and easy to track, such that they can catch and kill kiwi in seconds. Motor vehicle strike is a threat to all kiwi where roads cross through their habitat. Badly set possum traps often kill or maim kiwi.
Habitat destruction is another major threat to kiwi; restricted distribution and small size of some kiwi populations increases their vulnerability to inbreeding. Research has shown that the combined effect of predators and other mortality (accidents, etc.) results in less than 5% of kiwi chicks surviving to adulthood.
Relationship to humans
The Māori traditionally believed that kiwi were under the protection of Tāne Mahuta, god of the forest. They were used as food and their feathers were used for kahu kiwi—ceremonial cloaks. Today, while kiwi feathers are still used, they are gathered from birds that die naturally, through road accidents, or predation, and from captive birds. Kiwi are no longer hunted and some Māori consider themselves the birds' guardians.
Scientific documentation
In 1813, George Shaw named the genus Apteryx in his species description of the southern brown kiwi, which he called "the southern apteryx". Captain Andrew Barclay of the ship Providence provided Shaw with the specimen. Shaw's description was accompanied by two plates, engraved by Frederick Polydore Nodder; they were published in volume 24 of The Naturalist's Miscellany.
Zoos
In 1851, London Zoo became the first zoo to keep kiwi. The first captive breeding took place in 1945. As of 2007 only 13 zoos outside New Zealand hold kiwi. The Frankfurt Zoo has 12, the Berlin Zoo has seven, Walsrode Bird Park has one, the Avifauna Bird Park in the Netherlands has three, the San Diego Zoo has five, the San Diego Zoo Safari Park has one, the National Zoo in Washington, DC has eleven, the Smithsonian Conservation Biology Institute has one, and the Columbus Zoo and Aquarium has three.
In 2023, Zoo Miami apologized for mistreating a kiwi, after footage of visitors patting the nocturnal bird under bright lights caused outrage in New Zealand.
As a national symbol
The kiwi as a symbol first appeared in the late 19th century in New Zealand regimental badges, such as those of the South Canterbury Battalion in 1886 and the Hastings Rifle Volunteers in 1887. Soon after, the kiwi appeared in many military badges; and in 1906, when Kiwi Shoe Polish was widely sold in the UK and the US, the symbol became more widely known.
During the First World War, the name "Kiwis" for New Zealand soldiers came into general use, and a giant kiwi (now known as the Bulford kiwi) was carved on the chalk hill above Sling Camp in England. Usage has become so widespread that all New Zealanders overseas and at home are now commonly referred to as "Kiwis".
The kiwi has since become the best-known national symbol for New Zealand, and the bird is prominent in the coat of arms, crests and badges of many New Zealand cities, clubs and organisations. At the national level, the red silhouette of a kiwi is in the centre of the roundel of the Royal New Zealand Air Force. The kiwi is featured in the logo of the New Zealand Rugby League, and the New Zealand national rugby league team are nicknamed the Kiwis.
A kiwi has featured on the reverse side of three New Zealand coins: the one florin (two-shilling) coin from 1933 to 1966, the twenty-cent coin from 1967 to 1990, and the one-dollar coin since 1991. In currency trading the New Zealand dollar is often referred to as "the kiwi".
In popular culture
A song, "Sticky Beak the Kiwi", with words by Bob Edwards and music by Neil Roberts, was recorded in 1961, sung by Julie Nelson (aged 14) and accompanied by the Satins and the Don Bell Orchestra of Whangārei. A Christmas song, it portrays Sticky Beak as insisting on pulling Santa Claus's sleigh when distributing presents south of the equator.
"How the Kiwi Lost its Wings" is a fable written by broadcaster Alwyn Owen in 1963. It uses elements of Māori mythology, such as Tāne Mahuta, and the World War I symbol of cowardice, white feathers, in a pourquoi story explaining features of New Zealand birds. Owen portrays the kiwi as nobly sacrificing its wings and flight in order to protect the trees from depredation by ground-dwelling creatures, and thereby winning its unique renown. Owen's story is sometimes described as "A Maori Legend". It has been recorded as a children's story, published as a book, was made into an animated film in 1980, set to music for the Auckland Philharmonia Orchestra by Thomas Goss as "Tāne and the Kiwi" in 2002 (recorded for RNZ by Orchestra Wellington in 2008), and performed as a ballet by the Royal New Zealand Ballet in 2022.
| Biology and health sciences | Ratites | null |
17363 | https://en.wikipedia.org/wiki/Kiwifruit | Kiwifruit | Kiwifruit (often shortened to kiwi outside Australia and New Zealand), or Chinese gooseberry, is the edible berry of several species of woody vines in the genus Actinidia. The most common cultivar group of kiwifruit (Actinidia deliciosa 'Hayward') is oval, about the size of a large hen's egg: in length and in diameter. Kiwifruit has a thin, fuzzy, fibrous, tart but edible, light brown skin and light green or golden flesh with rows of tiny, black, edible seeds. The fruit has a soft texture with a sweet and unique flavour.
Kiwifruit is native to central and eastern China. The first recorded description of the kiwifruit dates to the 12th century during the Song dynasty. In the early 20th century, cultivation of kiwifruit spread from China to New Zealand, where the first commercial plantings occurred. The fruit became popular with British and American servicemen stationed in New Zealand during World War II, and later became commonly exported, first to Great Britain and then to California in the 1960s.
Etymology
Early varieties were discovered and cultivated in China. They were described in a 1904 nursery catalogue as having "...edible fruits the size of walnuts, and the flavour of ripe gooseberries", leading to the name, Chinese gooseberry.
In the late 1950s, a major New Zealand exporter began calling it "kiwifruit" () after being advised by a United States client that quarantine officials might mistakenly associate the unpopular name gooseberries – which grow close to the ground – with suspicion of anthrax. The name kiwifruit was adopted for the furry, brown fruit in relation to New Zealand's furry, brown, national bird – the kiwi. The name was first registered by Turners & Growers on 15 June 1959, and commercially adopted in 1974.
In New Zealand and Australia, the word kiwi alone either refers to the bird or is used as a nickname for New Zealanders; it is almost never used to refer to the fruit. Kiwifruit has since become a common name for all commercially grown green kiwifruit from the genus Actinidia. In the United States and Canada, the shortened name kiwi is commonly used when referring to the fruit.
History
Kiwifruit is native to central and eastern China. The first recorded description of the kiwifruit dates to 12th century China during the Song dynasty. As it was usually collected from the wild and consumed for medicinal purposes, the plant was rarely cultivated or bred. Cultivation of kiwifruit spread from China in the early 20th century to New Zealand, where the first commercial plantings occurred. After the Hayward variety was developed, the fruit became popular with British and American servicemen stationed in New Zealand during World War II. Kiwifruit were later exported, first to Great Britain and then to California in the 1960s.
In New Zealand during the 1940s and 1950s, the fruit became an agricultural commodity through the development of commercially viable cultivars, agricultural practices, shipping, storage, and marketing.
Species and cultivars
The genus Actinidia comprises around 60 species. Their fruits are quite variable, although most are easily recognised as kiwifruit because of their appearance and shape. The skin of the fruit varies in size, hairiness and colour. The flesh varies in colour, juiciness, texture and taste. Some fruits are unpalatable, while others taste considerably better than the majority of commercial cultivars.
The most commonly sold kiwifruit is derived from A. deliciosa (fuzzy kiwifruit). Other species that are commonly eaten include A. chinensis (golden kiwifruit), A. coriacea (Chinese egg gooseberry), A. arguta (hardy kiwifruit), A. kolomikta (Arctic kiwifruit), A. melanandra (purple kiwifruit), A. polygama (silver vine) and A. purpurea (hearty red kiwifruit).
Fuzzy kiwifruit
Most kiwifruit sold belongs to a few cultivars of A. deliciosa (fuzzy kiwifruit): 'Hayward', 'Blake' and 'Saanichton 12'. They have a fuzzy, dull brown skin and bright green flesh. The familiar cultivar 'Hayward' was developed by Hayward Wright in Avondale, New Zealand, around 1924. It was initially grown in domestic gardens, but commercial planting began in the 1940s.
'Hayward' is the most commonly available cultivar in stores. It is a large, egg-shaped fruit with a sweet flavour. 'Saanichton 12', from British Columbia, is somewhat more rectangular than 'Hayward' and comparably sweet, but the inner core of the fruit can be tough. 'Blake' can self-pollinate, but it has a smaller, more oval fruit and the flavour is considered inferior.
Kiwi berries
Kiwi berries are edible fruits the size of a large grape, similar to fuzzy kiwifruit in taste and internal appearance but with a thin, smooth green skin. They are primarily produced by three species: Actinidia arguta (hardy kiwi), A. kolomikta (Arctic kiwifruit) and A. polygama (silver vine). They are fast-growing, climbing vines, durable over their growing season. They are referred to as "kiwi berry, baby kiwi, dessert kiwi, grape kiwi, or cocktail kiwi".
The cultivar 'Issai' is a hybrid of hardy kiwifruit and silver vine which can self-pollinate. Grown commercially because of its relatively large fruit, 'Issai' is less hardy than most hardy kiwifruit.
Actinidia chinensis
Actinidia chinensis (yellow kiwi or golden kiwifruit) has a smooth, bronze skin, with a beak shape at the stem attachment. Flesh colour varies from bright green to a clear, intense yellow. This species is 'sweeter and more aromatic' in flavour compared to A. deliciosa. One of the most attractive varieties has a red 'iris' around the centre of the fruit and yellow flesh outside. The yellow fruit obtains a higher market price and, being less hairy than the fuzzy kiwifruit, tastes better without peeling.
A commercially viable variety of this red-ringed kiwifruit, patented as EnzaRed, is a cultivar of the Chinese hong yang variety.
'Hort16A' is a golden kiwifruit cultivar marketed worldwide as Zespri Gold. This cultivar suffered significant losses in New Zealand in 2010–2013 due to the PSA bacterium. A new cultivar of golden kiwifruit, 'Gold3', was found to be more disease-resistant, and most growers have now changed to this cultivar. 'Gold3', marketed by Zespri as SunGold, is not quite as sweet as 'Hort16A' and lacks its usually slightly pointed tip.
Clones of the new variety SunGold have been used to develop orchards in China, resulting in partially successful legal efforts in China by Zespri to protect their intellectual property. In 2021, Zespri estimated that around 5,000 hectares of Sungold orchards were being cultivated in China, mainly in the Sichuan province.
Cultivation
Kiwifruit can be grown in most temperate climates with adequate summer heat. Where fuzzy kiwifruit (A. deliciosa) is not hardy, other species can be grown as substitutes.
Breeding
Often in commercial farming, different breeds are used for rootstock, fruit bearing plants and pollinators. Therefore, the seeds produced are crossbreeds of their parents. Even if the same breeds are used for pollinators and fruit bearing plants, there is no guarantee that the fruit will have the same quality as the parent. Additionally, seedlings take seven years before they flower, so determining whether the kiwifruit is fruit bearing or a pollinator is time-consuming. Therefore, most kiwifruits, with the exception of rootstock and new cultivars, are propagated asexually. This is done by grafting the fruit producing plant onto rootstock grown from seedlings or, if the plant is desired to be a true cultivar, rootstock grown from cuttings of a mature plant.
Pollination
Kiwifruit plants generally are dioecious, meaning a plant is either male or female. The male plants have flowers that produce pollen; the females receive the pollen to fertilise their ovules and grow fruit. Most kiwifruit require a male plant to pollinate the female plant. For a good yield of fruit, one male vine for every three to eight female vines is considered adequate. Some varieties can self-pollinate, but even they produce a greater and more reliable yield when pollinated by male kiwifruit. Cross-species pollination is often (but not always) successful as long as bloom times are synchronised.
In nature, the species are pollinated by birds and native bumblebees, which visit the flowers for pollen, not nectar. The female flowers produce fake anthers with what appears to be pollen on the tips in order to attract the pollinators, although these fake anthers lack the DNA and food value of the male anthers.
Kiwifruit growers rely on honey bees, the principal "for-hire" pollinator, but commercially grown kiwifruit is notoriously difficult to pollinate. The flowers are not very attractive to honey bees, in part because the flowers do not produce nectar, and bees quickly learn to prefer flowers that do.
Honey bees are inefficient cross-pollinators for kiwifruit because they practice "floral fidelity": each honey bee visits only a single type of flower in any foray, and maybe only a few branches of a single plant. The pollen needed from a different plant (such as a male for a female kiwifruit) might never reach it were it not for the cross-pollination that occurs principally in the crowded colony; it is in the colonies that bees laden with different pollen literally cross paths.
To deal with these pollination challenges, some producers blow collected pollen over the female flowers. More common, though, is saturation pollination, in which the honey bee populations are made so large (by placing hives in the orchards at a concentration of about 8 hives per hectare) that bees are forced to work kiwifruit flowers because of the intense competition for all flowers within flight distance.
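Taking the stocking density above at face value, the hive count for a given orchard is simple arithmetic; the orchard size below is hypothetical:

```python
import math

HIVES_PER_HECTARE = 8  # density cited above; actual stocking rates vary by orchard

def hives_needed(orchard_hectares: float) -> int:
    """Round up, since a fractional hive cannot be placed."""
    return math.ceil(orchard_hectares * HIVES_PER_HECTARE)

print(hives_needed(12.5))  # hypothetical 12.5 ha orchard -> 100 hives
```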
Maturation and harvest
Kiwifruit is picked by hand and commercially grown on sturdy support structures, as it can produce several tonnes per hectare, more than the rather weak vines can support. These are generally equipped with a watering system for irrigation and frost protection in the spring.
Kiwifruit vines require vigorous pruning, similar to that of grapevines. Fruit is borne on one-year-old and older canes, but production declines as each cane ages. Canes should be pruned off and replaced after their third year. In the northern hemisphere the fruit ripens in November, while in the southern hemisphere it ripens in May. Four-year-old plants can produce 15 tonnes of fruit per hectare (14,000 lb per acre), while eight-year-old plants can produce 20 tonnes (18,000 lb per acre). The plants produce their maximum at eight to ten years old. Seasonal yields are variable: a heavy crop on a vine one season is generally followed by a light crop the following season.
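The paired metric and imperial yield figures can be cross-checked with a unit conversion; the output of this sketch is roughly consistent with the rounded numbers quoted above:

```python
LB_PER_KG = 2.20462
ACRES_PER_HECTARE = 2.47105

def tonnes_per_ha_to_lb_per_acre(tonnes_per_ha: float) -> float:
    """Convert a yield in tonnes/hectare to pounds/acre."""
    return tonnes_per_ha * 1000 * LB_PER_KG / ACRES_PER_HECTARE

for t in (15, 20):
    print(f"{t} t/ha ~ {tonnes_per_ha_to_lb_per_acre(t):,.0f} lb/acre")
# 15 t/ha ~ 13,383 lb/acre and 20 t/ha ~ 17,843 lb/acre
```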
Storage
Fruit harvested when firm will ripen when stored properly, which allows kiwifruit to be stored for up to 8 weeks after harvest.
Firm kiwifruit ripen after a few days to a week when stored at room temperature, but should not be kept in direct sunlight. Faster ripening occurs when placed in a paper bag with an apple, pear, or banana. Once a kiwifruit is ripe, however, it is preserved optimally when stored far from other fruits, as it is very sensitive to the ethylene gas they may emit, thereby tending to over-ripen even in the refrigerator. If stored appropriately, ripe kiwifruit normally keep for about one to two weeks.
Pests and diseases
Pseudomonas syringae actinidiae (PSA) was first identified in Japan in the 1980s. This bacterial strain has been controlled and managed successfully in orchards in Asia. In 1992, it was found in northern Italy. In 2007/2008, economic losses were observed as a more virulent strain became dominant (PSA V). In 2010 it was found in kiwifruit orchards in New Zealand's Bay of Plenty Region in the North Island. The yellow-fleshed cultivars were particularly susceptible. New, resistant varieties were selected in research funded by the government and fruit growers so that the industry could continue.
Scientists later reported that the strain of PSA affecting kiwifruit in New Zealand, Italy and Chile originated in China.
Production
In 2022, world production of kiwifruit was 4.5 million tonnes, led by China with 52% of the total. In China, kiwifruit is grown mainly in the mountainous area upstream of the Yangtze River, as well as Sichuan. Other major producers were New Zealand and Italy.
Production history
Kiwifruit exports rapidly increased from the late 1960s to early 1970s in New Zealand. By 1976, exports exceeded the amount consumed domestically. Outside of Australasia, New Zealand kiwifruit are marketed under the brand-name label Zespri. The general name, "Zespri", has been used for marketing of all cultivars of kiwifruit from New Zealand since 2012.
In the 1980s, many countries outside New Zealand began to grow and export kiwifruit. In Italy, the infrastructure and techniques required to support grape production were adapted to the kiwifruit. This, coupled with proximity to the European kiwifruit market, led to Italy becoming the leading producer of kiwifruit in 1989. The growing season of Italian kiwifruit does not overlap much with the New Zealand or Chilean growing seasons, so direct competition with New Zealand or Chile was not a significant factor.
Much of the breeding to refine the green kiwifruit was undertaken by the Plant & Food Research Institute (formerly HortResearch) between 1970 and 1999. In 1990, the New Zealand Kiwifruit Marketing Board opened an office for Europe in Antwerp, Belgium.
Human consumption
Kiwifruit may be eaten raw, made into juices, used in baked goods, prepared with meat or used as a garnish. The whole fruit, including the skin, is suitable for human consumption; however, the skin of the fuzzy varieties is often discarded due to its texture. Sliced kiwifruit has long been used as a garnish atop whipped cream on pavlova, a meringue-based dessert. Traditionally in China, kiwifruit was not eaten for pleasure, but was given as medicine to children to help them grow and to women who have given birth to help them recover.
Raw kiwifruit contains actinidain (also spelled actinidin), which is commercially useful as a meat tenderizer and possibly as a digestive aid. Actinidain also makes raw kiwifruit unsuitable for use in desserts containing milk or other dairy products, because the enzyme digests milk proteins. The same applies to gelatin-based desserts: actinidain dissolves the proteins in gelatin, either liquefying the dessert or preventing it from solidifying.
Nutrition
In a amount, green kiwifruit provides of food energy, is 83% water and 15% carbohydrates, with negligible protein and fat (table). It is particularly rich in vitamin C (103% DV) and vitamin K (34% DV), has a moderate content of vitamin E (10% DV), with no other micronutrients in significant content. Gold kiwifruit has similar nutritional value to green kiwifruit, but contains higher vitamin C content (179% DV) and insignificant vitamin K content (table).
Kiwifruit seed oil contains on average 62% alpha-linolenic acid, an omega-3 fatty acid. Kiwifruit pulp contains carotenoids, such as provitamin A beta-carotene, lutein and zeaxanthin.
Allergies
Allergy to kiwifruit was first described in 1981, and there have since been reports of the allergy presenting with numerous symptoms from localized oral allergy syndrome to life-threatening anaphylaxis.
The actinidain found in kiwifruit can be an allergen for some individuals, including children. The most common symptoms are unpleasant itching and soreness of the mouth, with wheezing as the most common severe symptom; anaphylaxis may occur.
| Biology and health sciences | Ericales | null |
17383 | https://en.wikipedia.org/wiki/Kayak | Kayak | A kayak is a small, narrow human-powered watercraft typically propelled by means of a long, double-bladed paddle. The word kayak originates from the Inuktitut word qajaq (). In British English, the kayak is also considered to be a kind of canoe.
There are countless types of kayak because the craft is easily adapted for different environments and purposes. The traditional kayak has an enclosed deck and one or more cockpits, each seating one occupant or kayaker, differentiating the craft from an open-deck canoe. The cockpit is sometimes covered by a spray deck that prevents unwanted entry of water from waves or splashes. Even within these confines, kayaks vary vastly in materials, length, and width, with some kayaks, such as the sprint kayak, designed to be fast and light, and others, such as the whitewater kayak, designed to be sturdy and maneuverable.
Some modern paddlecraft which still claim the title "kayak" remove integral parts of the traditional design; for instance, by eliminating the cockpit and seating the paddler on top of a canoe-like open deck, commonly known as a sit-on-top kayak. Other designs include inflated air chambers surrounding the craft; replacing the single hull with twin hulls; and replacing handheld paddles with other human-powered propulsion methods such as pedal-driven propellers and "flippers". Some kayaks are also fitted with external sources of propulsion, such as a battery-powered electric motor to drive a propeller or flippers, a sail (which essentially modifies it into a sailboat), or even a completely independent gasoline outboard engine (which converts it into a de facto motorboat).
The kayak was first used by the indigenous Aleut, Inuit, Yupik and possibly Ainu people hunters in subarctic regions of the world.
History
Kayaks (Inuktitut: qajaq (ᖃᔭᖅ), Yup'ik: qayaq (from qai- "surface; top"), Aleut: Iqyax) were originally developed by the Inuit, Yup'ik, and Aleut. They used the boats to hunt on inland lakes, rivers and the coastal waters of the Arctic Ocean, North Atlantic, Bering Sea and North Pacific. These first kayaks were constructed from stitched seal or other animal skins stretched over a wood or whalebone skeleton frame. (Western Alaskan Natives used wood, whereas the eastern Inuit used whalebone due to the treeless landscape.) Kayaks are believed to be at least 4,000 years old. The oldest surviving kayaks are exhibited in the North America department of the State Museum of Ethnology in Munich, the oldest dating from 1577.
Subarctic people made many types of boats for different purposes. The Aleut baidarka was made in double or triple cockpit designs, for hunting and transporting passengers or goods. An umiak is a large open-sea canoe, ranging from , made with seal skins and wood, originally paddled with single-bladed paddles and typically had more than one paddler.
Subarctic builders designed and built their boats based on their own experience and that of the generations before them passed on through oral tradition. The word "kayak" means "man's boat" or "hunter's boat", and subarctic kayaks were a personal craft, each built by the man who used it and closely fitting his size for maximum maneuverability. For this reason, kayaks were often designed ergonomically using one's own body proportions as units of measure. The paddler wore a tuilik, a garment that was stretched over the rim of the kayak coaming and sealed with drawstrings at the coaming, wrists, and hood edges. This enabled the "eskimo roll" and rescue to become the preferred methods of recovery after capsizing, especially as few Inuit could swim; their waters are too cold for a swimmer to survive for long.
Instead of a tuilik, most traditional kayakers today use a spray deck made of waterproof synthetic material stretchy enough to fit tightly around the cockpit rim and body of the kayaker, and which can be released rapidly from the cockpit to permit easy exit (in particular in a wet exit after a capsizing).
Inuit kayak builders had specific measurements for their boats. The length was typically three times the span of the builder's outstretched arms. The width at the cockpit was the width of the builder's hips plus two fists (sometimes less). The typical depth was his fist plus the outstretched thumb (as in a hitchhiker's thumb). Thus typical dimensions were about long by wide by deep.
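Expressed as a calculation (the sizing rules come from the text above; the body measurements are hypothetical values in metres):

```python
def kayak_dimensions_m(armspan: float, hip_width: float, fist_width: float,
                       fist_plus_thumb: float) -> dict:
    """Traditional Inuit sizing rules applied to a builder's own measurements."""
    return {
        "length": 3 * armspan,                # three times the outstretched arm span
        "width": hip_width + 2 * fist_width,  # hips plus two fists (sometimes less)
        "depth": fist_plus_thumb,             # fist plus outstretched thumb
    }

print(kayak_dimensions_m(armspan=1.75, hip_width=0.36,
                         fist_width=0.10, fist_plus_thumb=0.18))
# -> {'length': 5.25, 'width': 0.56, 'depth': 0.18}
```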
Traditional kayaks encompass three types: Baidarkas, from the Bering Sea & Aleutian Islands, the oldest design, whose rounded shape and numerous chines give them an almost blimp-like appearance; West Greenland kayaks, with fewer chines and a more angular shape, with gunwales rising to a point at the bow and stern; and East Greenland kayaks that appear similar to the West Greenland style, but often fit more snugly to the paddler and possess a steeper angle between gunwale and stem, which lends maneuverability.
From the Aleutian Islands eastward to Greenland, most of the Aleut and Inuit peoples relied on the kayak for hunting a variety of prey—primarily seals, though whales and caribou were important in some areas. Skin-on-frame kayaks are still being used for hunting by Inuit in Greenland, because the smooth and flexible skin glides silently through the waves.
20th century & contemporary kayaks
Contemporary traditional-style kayaks trace their origins primarily to the native boats of Alaska, northern Canada, and Southwest Greenland. Fabric kayaks on wooden frames, called foldboats or folding kayaks (German Faltboot or Hardernkahn), became widely popular in Europe beginning in 1907, when they were mass-produced by Johannes Klepper and others. This type of kayak was introduced to England and Europe by the sportsman John MacGregor in 1860, but Klepper was the first to mass-produce boats made of collapsible wooden frames covered by waterproof rubberized canvas. By 1929, Klepper and Company were making 90 foldboats a day. Joined by other European manufacturers, by the mid-1930s there were an estimated half-million foldboat kayaks in use throughout Europe. First Nation masters of the roll taught this technique to Europeans during this time period.
These boats were tough, and intrepid individuals were soon doing amazing things in them. In June 1928, a German named Franz Romer rigged his long foldboat with a sail and departed from Las Palmas in the Canary Islands carrying of tinned food and of water. Fifty-eight days later he reached Saint Thomas, U.S. Virgin Islands. Another German, Oskar Speck, paddled his foldboat down the Danube and four years later reached the Australian coast, having traveled roughly 14,000 miles across the Pacific.
These watercraft were brought to the United States and used competitively in 1940 at the first National Whitewater Championship held in America near Middledam, Maine, on the Rapid River (Maine). One "winner," Royal Little, crossed the finish line clinging to his overturned foldboat. Upstream, the river was "strewn with many badly buffeted and some wrecked boats." Two women were in the competition, Amy Lang and Marjory Hurd. With her partner Ken Hutchinson, Hurd won the double canoe race. Lang won the doubles foldboat event with her partner, Alexander "Zee" Grant.
In the late 1930s and early 1940s, Alexander "Zee" Grant was most likely America's best foldboat pilot. Grant kayaked the Gates of Lodore on the Green River (Colorado River tributary) in Dinosaur National Monument in 1939 and the Middle Fork Salmon River in 1940. In 1941, Grant paddled a foldboat through Grand Canyon National Park. He outfitted his foldboat, named Escalante, with a sponson on each side of his boat and filled the boat with beach balls. As with nearly all American foldboat enthusiasts of the day, he did not know how to roll his boat.
Fiberglass-and-resin composites, developed in the 1930s and 1940s, were soon used to make kayaks, and this type of watercraft saw increased use during the 1950s, including in the US. Kayak slalom world champion Walter Kirschbaum built a fiberglass kayak and paddled it through the Grand Canyon in June 1960. He knew how to roll and swam only once, in Hance Rapid (see List of Colorado River rapids and features). Like Grant's foldboat, Kirschbaum's fiberglass kayak had no seat and no thigh braces. In June 1987, Ed Gillet, using a stock, off-the-shelf, traditional-design fiberglass tandem kayak, 20 feet long by 31 inches wide, paddled over 2,000 miles non-stop from Monterey, California to Hawaii, landing there on August 27, 1987, after 64 days of paddling. Gillet navigated by traditional sextant and compass, and carried approximately 600 pounds of food and water, including a device to convert sea water to fresh water. Within six days of reaching Hawaii, both he and his yellow kayak were featured on The Tonight Show, hosted by Johnny Carson.
Inflatable rubberized fabric boats were first introduced in Europe and rotomolded plastic kayaks first appeared in 1973. Most kayaks today are made from roto-molded polyethylene resins. The development of plastic and rubberized inflatable kayaks arguably initiated the development of freestyle kayaking as we see it today since these boats could be made smaller, stronger, and more resilient than fiberglass boats.
Design principles
Typically, kayak design is largely a matter of trade-offs: directional stability ("tracking") vs maneuverability; stability vs speed; and primary vs secondary stability. Multihull kayaks face a different set of trade-offs. The paddler's body shape and size is an integral part of the structure, and will also affect the trade-offs made.
Attempting to lift and carry a kayak alone or improperly is a significant cause of kayaking injuries. Good lifting technique, sharing loads, and avoiding needlessly large and heavy kayaks help prevent injuries.
Displacement
If the displacement of a kayak is not enough to support the passenger(s) and gear, it will sink. If the displacement is excessive, the kayak will float too high, catch the wind and waves uncomfortably, and handle poorly; it will probably also be bigger and heavier than it needs to be. An excessively big hull also creates more drag, so the kayak moves more slowly and takes more effort to paddle. Rolling is easier in lower-displacement kayaks. On the other hand, a higher deck keeps the paddler(s) drier and makes self-rescue and coming through surf easier. Many beginning paddlers who use a sit-in kayak feel more secure in one with a weight capacity substantially greater than their own weight. Maximum volume in a sit-in kayak is helped by a wide hull with high sides, but paddling ease is helped by lower sides where the paddler sits and a narrower beam.
While a kayak's buoyancy must exceed the weight of the loaded boat, the optimal amount of excess buoyancy varies somewhat with kayak type, purpose, and personal taste (squirt boats, for instance, have very little positive buoyancy). Displacements vary with paddler weight: most manufacturers offer kayaks across a wide range of paddler weights, and kayaks made for the lightest paddlers are almost all very beamy and intended for beginners.
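The buoyancy rule above is simply Archimedes' principle: the hull must displace water whose weight at least equals the total weight of boat, paddler, and gear. A minimal sketch in Python; the weights and the function name are assumptions chosen for illustration, not figures from any particular design:

# Minimal buoyancy check via Archimedes' principle.
# All weights below are illustrative assumptions, not real design data.

FRESH_WATER_DENSITY = 1000.0  # kg per cubic metre
SALT_WATER_DENSITY = 1025.0   # kg per cubic metre (slightly more buoyant)

def min_displaced_volume_m3(boat_kg, paddler_kg, gear_kg, density_kg_m3):
    """Smallest volume of water the hull must displace just to float the load."""
    total_kg = boat_kg + paddler_kg + gear_kg
    return total_kg / density_kg_m3

# Example: a 25 kg kayak, an 80 kg paddler, and 15 kg of gear in fresh water.
volume = min_displaced_volume_m3(25.0, 80.0, 15.0, FRESH_WATER_DENSITY)
print(f"Minimum displaced volume: {volume:.3f} cubic metres")  # 0.120

In practice a designer would add the margin of excess buoyancy discussed above rather than sizing the hull to this bare minimum.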
Length
As a general rule, a longer kayak is faster: it has a higher hull speed. It can also be narrower for a given displacement, reducing drag, and it will generally track (follow a straight line) better than a shorter kayak. On the other hand, it is less maneuverable. Very long kayaks are less robust and may be harder to store and transport. Some recreational kayak makers try to maximize hull volume (weight capacity) for a given length, since shorter kayaks are easier to transport and store.
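The "longer is faster" rule reflects the standard hull-speed approximation for displacement hulls, in which theoretical top speed grows with the square root of waterline length. A minimal sketch using the conventional formula for speed in knots from waterline length in feet; the sample lengths are arbitrary:

import math

def hull_speed_knots(waterline_ft):
    """Classic displacement-hull rule of thumb: v ~ 1.34 * sqrt(LWL in feet)."""
    return 1.34 * math.sqrt(waterline_ft)

for lwl in (10.0, 14.0, 18.0):  # arbitrary sample waterline lengths
    print(f"{lwl:.0f} ft waterline -> about {hull_speed_knots(lwl):.1f} knots")
# 10 ft -> ~4.2 kn; 14 ft -> ~5.0 kn; 18 ft -> ~5.7 kn

Because speed grows only as the square root of length, doubling the waterline yields roughly a 40% gain in hull speed, which is why touring kayaks are long but not arbitrarily so.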
Kayaks built to cover longer distances, such as touring and sea kayaks, are longer, and their keels are generally more defined, helping the kayaker track in a straight line. Whitewater kayaks, which generally depend on river current for their forward motion, are short, to maximize maneuverability; play boats are shorter still. Recreational kayak designers try to provide more stability at the price of reduced speed and strike a compromise between tracking and maneuverability.
Rocker
Length alone does not fully predict a kayak's maneuverability: a second design element is rocker, the hull's lengthwise curvature. A boat with no rocker is in the water from end to end, whereas on a heavily rockered boat the bow and stern curve up out of the water, shortening the effective waterline. Rocker is generally most evident at the ends, and in moderation it improves handling. Although a rockered whitewater boat may be only about a meter shorter than a typical recreational kayak, its waterline is far shorter and its maneuverability far greater. When surfing, a heavily rockered boat is less likely to lock into the wave because its bow and stern are still above water; a boat with less rocker cuts into the wave and is harder to turn while surfing.
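Since rocker effectively shortens the waterline, its cost in speed can be estimated with the same hull-speed rule of thumb used above; the lengths here are assumed purely for illustration:

import math

def hull_speed_knots(waterline_ft):
    """Displacement-hull rule of thumb: v ~ 1.34 * sqrt(LWL in feet)."""
    return 1.34 * math.sqrt(waterline_ft)

overall_length_ft = 16.0  # assumed overall hull length
rockered_ends_ft = 3.0    # assumed bow + stern length lifted clear of the water

no_rocker = hull_speed_knots(overall_length_ft)
with_rocker = hull_speed_knots(overall_length_ft - rockered_ends_ft)
print(f"No rocker: about {no_rocker:.1f} kn; rockered: about {with_rocker:.1f} kn")

Under these assumptions the rockered hull gives up about half a knot of theoretical top speed in exchange for a much shorter, easier-turning waterline.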
Beam profile
The overall width of a kayak's cross-section is its beam. A wide hull is more stable and packs more displacement into a shorter length. A narrow hull has less drag and is generally easier to paddle; in waves, it rides more easily and stays drier.
A narrower kayak makes a somewhat shorter paddle appropriate, and a shorter paddle puts less strain on the shoulder joints. Some paddlers are comfortable with a sit-in kayak so narrow that their legs extend fairly straight out; others want sufficient width to permit crossing their legs inside the kayak.
Types of stability
Primary (sometimes called initial) stability describes how much a boat tips or rocks back and forth when displaced from level by paddler weight shifts. Secondary stability describes how stable a kayak feels when put on edge or when waves pass under the hull perpendicular to its length. For kayak rolling, tertiary stability, the stability of an upside-down kayak, is also important: lower tertiary stability makes rolling up easier.
Primary stability is often a big concern to beginners, while secondary stability matters both to beginners and to experienced paddlers. For example, a wide, flat-bottomed kayak has high primary stability and feels very stable on flat water, but when a steep wave breaks on such a boat it can easily be overturned, because the flat bottom is no longer level. By contrast, a kayak with a narrower, more rounded hull and more hull flare can be edged or leaned into waves and, in the hands of a skilled kayaker, provides a safer, more comfortable response on stormy seas. Kayaks with only moderate primary but excellent secondary stability are generally considered more seaworthy, especially in challenging conditions.
The shape of the cross section affects stability, maneuverability, and drag. Hull shapes are categorized by roundness, flatness, and by the presence and angle of chines. This cross-section may vary along the length of the boat.
A chine typically increases secondary stability by effectively widening the beam of the boat when it heels (tips). A V-shaped hull tends to track well but makes turning harder; V-shaped hulls also have the greatest secondary stability. Conversely, flat-bottomed hulls are easy to turn but harder to hold on a constant course. A round-bottomed boat has minimal area in contact with the water, and thus minimizes drag, but it may be so unstable that it will not remain upright when floating empty and needs continual effort to keep upright. In a skin-on-frame kayak, chine placement may be constrained by the need to avoid the bones of the pelvis.
Sea kayaks, designed for open water and rough conditions, are generally narrower and have more secondary stability than recreational kayaks, which are wider, have a flatter hull shape, and have more primary stability.
Stability from body shape and skill level
The body of the paddler must also be taken into account. A paddler with a low center of gravity (COG) will find all boats more stable; for a paddler with a high center of gravity, all boats will feel tippier. On average, women have a lower COG than men, and a woman may fit a kayak about 10% narrower than one that would fit a similarly sized man. Commercial kayaks made specifically for women are rare, and unisex kayaks are effectively sized for men. Younger children have proportionately smaller and lighter bodies but near-adult-size heads, and thus a higher center of gravity. A paddler with narrow shoulders will also want a narrower kayak.
Newcomers will often want a craft with high primary stability (see above). The southern method is a wider kayak; the West Greenland method is a removable pair of outriggers lashed across the stern deck. Such an outrigger pair is often homemade from a small plank and found floats such as empty bottles or plastic ducks. Outriggers are also made commercially, especially for fishing kayaks and sailing. If the floats are set so that both are in the water, they give primary stability but produce more drag; if they are set so that both are out of the water when the kayak is balanced, they give secondary stability.
Hull surface profile
Some kayak hulls are categorized according to their shape from bow to stern.
Common shapes include:
Symmetrical: the widest part of the boat is halfway between bow and stern.
Fish form: the widest part is forward (in front) of the midpoint.
Swede form: the widest part is aft (behind) midpoint.
Seating position and contact points
Traditional-style and some modern types of kayaks (e.g. sit-on-top) require the paddler to sit with legs stretched out in front at a right angle to the torso, in what is called the "L" kayaking position. Other kayaks offer a different sitting position, in which the paddler's legs are not stretched straight out and the thigh brace bears more on the inside than on the top of the thighs.
A kayaker must be able to move the hull of the kayak with the lower body and to brace against the hull (mostly with the feet) on each stroke. Most kayaks therefore have footrests and a backrest. Some kayaks fit snugly at the hips; others rely more on thigh braces. Mass-produced kayaks generally have adjustable bracing points, and many paddlers customize the fit with shims of closed-cell foam (usually EVA) or more elaborate structures to make the boat fit more tightly.
Paddling puts substantial force through the legs, alternating with each stroke, so the knees should not be hyperextended. Separately, if the kneecap is in contact with the boat or the knee joint is in torsion, this will cause pain and may injure the knee. Insufficient foot space will cause painful cramping and inefficient paddling. The paddler should generally be in a comfortable position.
Materials and construction
Today almost all kayaks are commercial products intended for sale rather than for the builder's personal use.
Fiberglass hulls are stiffer than polyethylene hulls, but they are more prone to damage from impact, including cracking. Most modern kayaks have steep V sections at the bow and stern and a shallow V amidships. Fiberglass kayaks may be "laid up" by hand in a mold, in which case they are usually more expensive than polyethylene kayaks, which are rotationally molded in a machine. The deck and hull are often made separately and then joined at a horizontal seam.
Plastic kayaks are rotationally molded ("rotomolded") from various grades and types of polyethylene resins, ranging from soft to hard. Such kayaks are seamless and particularly resistant to impact, but heavy.
Inflatable kayaks are increasingly popular owing to their ease of storage and transport and their ability to be deflated for extended portage. Although slower than hardshell kayaks, many higher-end models, often constructed of hypalon rather than cheaper PVC, approach the performance of traditional sea kayaks. Being inflatable, they are virtually unsinkable and often more stable than hardshell designs. Newer drop-stitch construction allows slab rather than tube shapes and higher inflation pressures, producing considerably faster, though often less stable, kayaks that rival hardshell boats in performance.
Solid wooden hulls do not necessarily require significant skill and handiwork, depending on how they are made. Three main types are popular, especially with home builders: plywood stitch-and-glue (S&G), strip-built, and hybrids with a stitch-and-glue hull and a strip-built deck. Kayaks made from wood sheathed in fiberglass have proven successful, especially as the price of epoxy resin has decreased in recent years.
Stitch-and-glue designs typically use thin, marine-grade plywood. After the pieces of hull and deck are cut out (kits often have them pre-cut), a series of small holes is drilled along the edges, and copper wire is used to "stitch" the pieces together through the holes. Once temporarily stitched, the pieces are glued with epoxy and the seams reinforced with fiberglass; when the epoxy has cured, the copper stitches are removed. Sometimes the entire boat is then covered in fiberglass for additional strength and waterproofing, though this adds greatly to the weight and is not strictly necessary. Construction is fairly straightforward, but because plywood does not bend to form compound curves, design choices are limited. This is a good choice for the first-time kayak builder, as the labor and skills required (especially for kit versions) are considerably less than for strip-built boats, which can take three times as long to build.
Strip-built designs are similar in shape to rigid fiberglass kayaks but are generally both lighter and tougher. As with their fiberglass counterparts, the shape and size of the boat determine performance and optimal uses. The hull and deck are built with thin strips of lightweight wood, often western red cedar, pine, or redwood. The strips are edge-glued together around a form, stapled or clamped in place, and allowed to dry; structural strength comes from a layer of fiberglass cloth and epoxy resin layered inside and outside the hull. Strip-built kayaks are sold commercially by a few companies, priced at US$4,000 and up. An experienced woodworker can build one for about US$400 in 200 hours, though the exact cost and time depend on the builder's skill, the materials, and the size and design. As a second kayak project, or for a serious builder with some woodworking expertise, a strip-built boat can be an impressive piece of work. Kits with pre-cut and milled wood strips are commercially available.
Skin-on-frame (SOF) boats are often more traditional in design, materials, and construction. They were traditionally made with driftwood frames, jointed, pegged, and lashed together, and covered with stretched seal skin, as those were the most readily available materials in the Arctic regions (other skins and baleen framing members were also used at need). A "poor man's kayak" might be frameless, its skin stuffed with snow to give it form. Today, seal skin is usually replaced with canvas or nylon cloth covered with paint, polyurethane, or a hypalon rubber coating, on a wooden or aluminum frame. Modern skin-on-frame kayaks often have greater impact resistance than their fiberglass counterparts but are less durable against abrasion or sharp objects. They are often the lightest kayaks, and, like the older skin-on-frame kayaks, they are often home-built to fit a specific paddler. Engineer Xyla Foxlin built a kayak of transparent fiberglass fitted with LEDs, creating a floating vessel that lights up at night, which she calls the Rainbowt.
A special type of skin-on-frame kayak is the folding kayak. It has a collapsible frame of wood, aluminum, or plastic, or some combination thereof, and a skin of water-resistant, durable fabric. Many types have air sponsons built into the hull, making the kayak float even if flooded.
Modern design
Most modern kayaks differ greatly from the original traditional subarctic kayaks in design, manufacture, and usage. They are typically designed with computer-aided design (CAD) software, sometimes in combination with CAD tools customized for naval design.
Modern kayaks serve diverse purposes, ranging from slow and easy touring on placid water, to racing and complex maneuvering in fast-moving whitewater, to fishing and long-distance ocean excursions. Modern forms, materials and construction techniques make it possible to effectively serve these needs while continuing to leverage the insights of the original Arctic inventors.
Types
Modern kayaks have evolved into specialized types that may be broadly categorized by application as sea or touring kayaks, whitewater (or river) kayaks, surf kayaks, racing kayaks, fishing kayaks, and recreational kayaks. The broader hull categories today are "sit-in" (SI), inspired mainly by traditional kayak forms; "sit-on-top" (SOT), which evolved from paddle boards outfitted with footrests and a backrest; "hybrid", essentially canoes with a narrower beam and reduced freeboard, enabling the paddler to propel them from the middle of the boat using a double-bladed ("kayak") paddle; and twin hull kayaks, which offer each of the paddler's legs a narrow hull of its own.
In recent decades, kayak designs have proliferated to the point where the only broadly accepted common denominator is that they are designed mainly to be paddled with a double-bladed kayak paddle. Even this inclusive definition is being challenged by other means of human-powered propulsion, such as foot-activated pedal drives combined with rotating or sideways-moving propellers, as well as electric motors and even outboard motors.
Recreational
Recreational kayaks are designed for the casual paddler interested in fishing, photography, or a peaceful paddle on a lake, flatwater stream, or protected salt water away from strong ocean waves. These boats presently make up the largest segment of kayak sales. Compared to other kayaks, recreational kayaks have a larger cockpit for easier entry and exit and a wider beam for more stability; they are relatively short and have limited cargo capacity. Less expensive materials such as polyethylene and fewer options keep these boats relatively inexpensive. Most canoe/kayak clubs offer introductory instruction in recreational boats. Recreational kayaks do not perform as well at sea; the recreational kayak is usually a type of touring kayak.
Sea
Sea kayaks are typically designed for travel by one, two or even three paddlers on open water and in many cases trade maneuverability for seaworthiness, stability, and cargo capacity. Sea-kayak sub-types include "skin-on-frame" kayaks with traditionally constructed frames, open-deck "sit-on-top" kayaks, and recreational kayaks.
The sea kayak, though descended directly from traditional types, is implemented in a variety of materials. Sea kayaks typically have a longer waterline, and provisions for below-deck storage of cargo. Sea kayaks may also have rudders or skegs (fixed rudder) and upturned bow or stern profiles for wave shedding. Modern sea kayaks usually have two or more internal bulkheads. Some models can accommodate two or sometimes three paddlers.
Sit-on-top
Sealed-hull ("unsinkable") craft were developed for leisure use, as derivatives of surfboards (e.g. paddle or wave skis), or for surf conditions. Variants include planing surf craft, touring kayaks, and sea marathon kayaks. Increasingly, manufacturers build leisure 'sit-on-top' variants of extreme sports craft, typically using polyethylene to ensure strength and affordability, often with a skeg for directional stability.
Sit-on-top kayaks come in one- to four-paddler configurations. They are particularly popular for fishing and scuba diving, since participants need to enter and exit the water, change seating positions, and access hatches and storage wells easily. Ordinarily the seat of a sit-on-top is slightly above water level, so the paddler's center of gravity is higher than in a traditional kayak; to compensate, sit-on-tops are often wider, and slower, than a traditional kayak of the same length.
Water that enters the cockpit of a sit-on-top kayak drains out through scupper holes, tubes that run from the cockpit through the bottom of the hull, so the cockpit is self-bailing. Contrary to popular belief, however, the hull itself is not self-bailing: it may be sealed or perforated by hatches and deck fixtures, and water that penetrates it does not drain out automatically, as it does in larger boats equipped with self-bailing systems. The hull cannot be molded in a way that guarantees water-tightness, and water may get in through various openings, usually around hatches and deck accessories. If the sit-on-top kayak is loaded to the point where such perforations are under water, or if the water is rough enough that they frequently go under, the hull may fill without the paddler noticing in time. If a sealed hull develops a split or hole, it will also fill and sink.
Surf
Specialty surf boats typically have flat bottoms and hard edges, similar to surfboards. The design of a surf kayak promotes the use of an ocean surf wave (moving wave), as opposed to a river or feature wave (moving water). They are typically made from rotomolded plastic or fiberglass.
Surf kayaking comes in two main varieties: High Performance (HP) and International Class (IC). High Performance boats tend to have a lot of nose rocker, little to no tail rocker, flat hulls, sharp rails, and up to four fins, set up as either a three-fin thruster or a quad. This enables them to move at high speed and maneuver dynamically. International Class boats must meet a minimum length and, until a recent rule change, had to have a convex hull; flat and slightly concave hulls are now also allowed, although fins are not. Surfing on international boats tends to be smoother and more flowing, and they are thought of as kayaking's equivalent of longboarding. Surf boats come in a variety of materials, ranging from tough but heavy plastics to super-light, super-stiff, but fragile foam-cored carbon fiber. Surf kayaking has become popular in traditional surfing locations, as well as in newer ones such as the Great Lakes.
Waveskis
A variation on the closed-cockpit surf kayak is the waveski. Although the waveski offers dynamics similar to a sit-on-top, its paddling technique, surfing performance, and construction can be similar to surfboard designs.
Whitewater
Whitewater kayaks are rotomolded in a semi-rigid, high-impact plastic, usually polyethylene. Careful construction ensures that the boat remains structurally sound when subjected to fast-moving water. The plastic hull allows these kayaks to bounce off rocks without leaking, although they scratch and eventually puncture with enough use. Whitewater kayaks are short compared with touring boats. There are two main types of whitewater kayak, playboats and river-running boats; creekboats (for small rivers) and squirt boats are more specialized.
Playboat
One type, the playboat, is short, with a scooped bow and blunt stern. These trade speed and stability for high maneuverability. Their primary use is performing tricks in individual water features or short stretches of river. In playboating or freestyle competition (also known as rodeo boating), kayakers exploit the complex currents of rapids to execute a series of tricks, which are scored for skill and style.
Creekboats and river-running kayaks
The other primary type is the creek boat, which gets its name from its purpose: running narrow, low-volume waterways. Creekboats are longer and have far more volume than playboats, which makes them more stable, faster and higher-floating. Many paddlers use creekboats in "short boat" downriver races, and they are often seen on large rivers where their extra stability and speed may be necessary to get through rapids.
Between the creekboat and playboat extremes is a category called river–running kayaks. These medium–sized boats are designed for rivers of moderate to high volume, and some, known as river running playboats, are capable of basic playboating moves. They are typically owned by paddlers who do not have enough whitewater involvement to warrant the purchase of more–specialized boats.
Squirt boats
Squirt boating involves paddling both on the surface of the river and underwater. Squirt boats must be custom-fitted to the paddler to ensure comfort while maintaining the low interior volume necessary to allow the paddler to submerge completely in the river.
Racing
Whitewater
Whitewater racers combine a fast, unstable lower hull portion with a flared upper hull portion, pairing flatwater racing speed with extra stability in open water; they are not fitted with rudders and have maneuverability similar to flatwater racers. They usually require substantial skill to keep stable, due to their extremely narrow hulls. Whitewater racing kayaks, like all racing kayaks, are made to regulation lengths, usually of fiber-reinforced resin (typically epoxy or polyester reinforced with Kevlar, glass fiber, carbon fiber, or some combination). This form of construction is stiffer and has a harder skin than non-reinforced plastic construction such as rotomolded polyethylene: stiffer means faster, and harder means fewer scratches and therefore also faster.
Flatwater sprint
Sprint kayaking is a sport held on calm water. Crews or individuals race over 200 m, 500 m, 1000 m, or 5000 m, the winning boat being the first to cross the finish line. The paddler is seated facing forward and uses a double-bladed paddle, pulling the blade through the water on alternate sides to propel the boat forward. In competition, the number of paddlers in a boat is indicated by a figure beside the boat type: K1 signifies an individual kayak race, K2 pairs, and K4 four-person crews. Kayak sprint has been in every Summer Olympics since it debuted at the 1936 Summer Olympics. Racing is governed by the International Canoe Federation.
Slalom
Slalom kayaks are flat-hulled and, since the early 1970s, feature low-profile decks. They are highly maneuverable and stable, but not fast in a straight line.
Surfskis
A specialized variant of racing craft, the surf ski, has an open cockpit and a long, very narrow hull, requiring expert balance and paddling skill. Surf skis were originally created for surf and are still used in races in New Zealand, Australia, and South Africa. They have become popular in the United States for ocean races, lake races, and even downriver races.
Marathon
Marathon races vary in distance from ten kilometres to over 1,000 kilometres for multi-day stage races.
Specialty and hybrids
The term "kayak" is increasingly applied to craft that look little like traditional kayaks.
Inflatable
Inflatables, also known as duckies or IKs, can usually be transported by hand using a carry bag. They are generally made of hypalon (a kind of neoprene), nitrilon (nitrile-rubberized fabric), PVC, or polyurethane-coated cloth, and can be inflated with foot, hand, or electric pumps. Multiple air compartments in all but the least expensive models increase safety. They generally use low-pressure air.
While many inflatables are non-rigid, essentially pointed rafts best suited to rivers and calm water, the higher-end inflatables are designed to be hardy, seaworthy vessels. Recently some manufacturers have added an internal, folding-style frame to a multi-section inflatable sit-on-top kayak to produce a seaworthy boat. Fully drop-stitch inflatable kayaks, inflated to 8–10 psi, are also available; they are much stiffer, which enhances their paddling characteristics enough to vastly outperform traditional inflatable kayaks.
The appeal of inflatable kayaks lies in their portability, their durability (they don't dent), their ruggedness in white water (they bounce off rocks rather than break), and their easy storage. In addition, inflatable kayaks are generally stable, have a small turning radius, and are easy to master, although some models take more effort to paddle and are slower than traditional kayaks.
Because inflatable kayaks are not as sturdy as traditional hard-shelled kayaks, many paddlers steer away from them. However, there have been considerable advancements in inflatable kayak technology in recent years.
Folding
Folding kayaks are direct descendants of the skin-on-frame boats used by the Inuit and Greenlandic peoples. Modern folding kayaks are constructed from a wooden or aluminum frame over which is placed a synthetic skin made of polyester, cotton canvas, polyurethane, or Hypalon. They are more expensive than inflatable kayaks, but have the advantage of greater stiffness and consequently better seaworthiness.
Walter Höhn (anglicized as Hoehn) built, developed, and tested his folding kayak design on the whitewater rivers of Switzerland from 1924 to 1927. On emigrating to Australia in 1928, he brought two of the boats with him, lodged a patent for the design, and proceeded to manufacture them.
In 1942 the Australian Director of Military Operations approached him to develop the design for military use. Orders were placed, and eventually a total of 1,024 boats, notably the MKII and MKIII models, were produced by him and another enterprise, based on his 1942 patent (No. 117779).
Pedal
Pedal kayaks are engineered for hands-free operation, using a propulsion system driven by the kayaker's feet. The mechanism usually consists of pedals rotated in a circular motion, as in bicycling, generating forward momentum through a propeller or fins beneath the kayak. Steering is managed by a rudder, typically operated by a hand lever or by supplementary foot pedals.
Twin hull and outrigger
Traditional multi-hull vessels such as catamarans and outrigger canoes benefit from increased lateral stability without sacrificing speed, and these advantages have been successfully applied in twin hull kayaks. Outrigger kayaks attach one or two smaller hulls to the main hull to enhance stability, especially for fishing, touring, kayak sailing and motorized kayaking.
Twin hull kayaks feature two long and narrow hulls, and since all their buoyancy is distributed as far as possible from their center line, they are more stable than mono hull kayaks outfitted with outriggers.
Fishing
While the native peoples of the Arctic regions hunted rather than fished from kayaks, in recent years kayak sport fishing has become popular in both fresh and salt water, especially in warmer regions. Traditional fishing kayaks are characterized by wide beams that increase their lateral stability. Some are equipped with outriggers for further stability, and others feature twin hulls enabling stand-up paddling and fishing. Compared with motorboats, fishing kayaks are inexpensive and have few maintenance costs. Many kayak anglers like to customize their kayaks for fishing, a process known as "rigging".
Military
Kayaks were adapted for military use in the Second World War, mainly by British Commandos and special forces, principally the Combined Operations Pilotage Parties (COPPs), the Special Boat Service, and the Royal Marines Boom Patrol Detachment. The latter made perhaps the best-known use of them, in the Operation Frankton raid on Bordeaux harbor. Both the Special Air Service (SAS) and the Special Boat Service (SBS) used kayaks for reconnaissance in the 1982 Falklands War, and US Navy SEALs reportedly used them at the start of Unified Task Force operations in Somalia in 1992. The SBS currently uses Klepper two-person folding kayaks, which can be launched from surfaced submarines or carried to the surface by divers from submerged ones; they can also be parachuted from transport aircraft at sea or dropped from the back of Chinook helicopters. US Special Forces have used Kleppers but now primarily use Long Haul folding kayaks, which are made in the US.
The Australian military MKII and MKIII folding kayaks were used extensively during World War II in the Pacific Theater, in some 33 raids and missions on and around the Southeast Asian islands. Documentation can be found in the National Archives of Australia official records, reference No. NAA K1214-123/1/06. They were deployed from disguised watercraft, submarines, Catalina aircraft, PT boats, motor launches, and by parachute.
| Technology | Naval transport | null |
17459 | https://en.wikipedia.org/wiki/Koi | Koi | Koi (鯉), or more specifically nishikigoi (錦鯉), are colored varieties of carp (Cyprinus sp.) that are kept for decorative purposes in outdoor koi ponds or water gardens.
Koi is an informal name for the colored variants of carp kept for ornamental purposes. There are many varieties of ornamental koi, originating from breeding that began in Niigata, Japan in the early 19th century.
Several varieties are recognized by Japanese breeders and owners, distinguished by coloration, patterning, and scalation. Some of the major colors are white, black, red, orange, yellow, blue, brown and cream, besides metallic shades like gold and silver-white ('platinum') scales. The most popular category of koi is the Gosanke, which is made up of the Kōhaku, Taishō Sanshoku and Shōwa Sanshoku varieties.
History
Carp are a large group of fish species originally native to Central Europe and Asia. Various carp species were originally domesticated in China, where they were used primarily for consumption. Carp are coldwater fish, and their ability to survive and adapt to many climates and water conditions allowed the domesticated species to be propagated to many new locations, including Japan.
Prehistory
In Japan, Miocene fossils of the carp family (Cyprinidae) have been excavated from Iki Island, Nagasaki Prefecture. Furthermore, numerous carp pharyngeal teeth have been excavated from Jomon and Yayoi period sites. For example, pharyngeal teeth of the extinct species Jōmon Koi (Cyprinus sp.), in addition to the modern species of carp (Cyprinus carpio or Cyprinus rubrofuscus), have been excavated from the Akanoi Bay lakebed site in Lake Biwa, dating to the end of the Early Jomon Period (11,500–7,000 years ago). In addition, pharyngeal teeth of all six subfamilies of the Cyprinidae family living in Japan today, including carp (Cyprinus), have been found at the Awazu lakebed site, dating from the Middle Jomon Period (5,500–4,400 years ago).
There are differences in the length distribution of carp excavated from Jomon and Yayoi sites, as estimated from the size of their pharyngeal teeth. Specifically, not only adult carp but also juvenile carp (less than 150 mm in length) have been found at the Yayoi site. This difference is thought to reflect that the Jomon only collected carp from lakes and rivers, while the Yayoi practiced primitive carp cultivation alongside the spread of rice paddies.
It was previously thought that all Japanese carp were introduced from China in prehistoric times. However, recent analysis of mitochondrial DNA revealed a significant evolutionary divergence (phylogenetic split) within common carp Cyprinus carpio between the native wild form found in Lake Biwa and the Eurasian wild form, along with domesticated varieties. This supports the idea of the ancient origin of the native Japanese form (Cyprinus sp.), as well as the East Asian ancient lineage of wild common carp (C. carpio), previously proposed on the basis of fossil data. However, it is unknown when the carp from the continent was introduced to Japan. In addition, a possible multiple origin of koi carp was indicated by the polyphyletic distribution of five mtDNA haplotypes of koi carp within the ‘Eurasian’ clade. Moreover, the oldest record of the introduction of non-native fish in Japan is that of goldfish from China (1502 or 1602), and there is no record of carp (including colored carp) until the introduction of the mirror carp, called Doitsugoi (German carp), in 1904.
Middle Ages
In the Japanese history book Nihon Shoki (Chronicles of Japan, 720), it is written that Emperor Keikō released carp in a pond for viewing when he visited Mino Province (present Gifu Prefecture) in the fourth year of his reign (74 AD). In Cui Bao's Gǔjīnzhù (Annotations on the Ancient and Modern Period), from the Western Jin Dynasty (4th century A.D.) in China, carp of the following colors are described: red horse, blue horse, black horse, white horse, and yellow pheasant. In China in those days, carp were called horses because they were believed to be the vehicles of hermits and to run in the sky.
Japan's oldest drug dictionary, Fukane Sukehito's Honzō Wamyō (918), mentions red carp, blue carp, black carp, white carp, and yellow carp as Japanese names corresponding to the above Chinese names, suggesting that carp of these colors existed in China and Japan in those days. In addition, Hitomi Hitsudai's drug dictionary Honchō Shokkan (Japanese Medicine Encyclopedia, 1697) states that carp of three colors, red, yellow, and white, were present in Japan at that time.
However, it is believed that these single-colored carp were not varieties created by artificial selection, as today's koi are, but rather chance color mutations. In ancient times, carp were farmed primarily for food. Mutational color variation in carp is relatively common in nature, but it was not suited to development alongside food farming in poor rural communities: color inheritance is unstable, and selection to maintain color variation is costly. For example, even in present-day farming of koi as ornamental fish, the percentage of superior colored fish among the spawn is less than 1%.
The Amur carp (Cyprinus rubrofuscus) is a member of a cyprinid species complex native to East Asia. Amur carp were previously identified as a subspecies of the common carp (as C. c. haematopterus), but recent authorities treat them as a separate species under the name C. rubrofuscus. Amur carp have been aquacultured as a food fish in China since at least the fifth century BC.
Modern period
The systematic breeding of ornamental Amur carp began in the 1820s in an area known as Nijūmuragō, which spans Ojiya and Yamakoshi in Niigata Prefecture (located on the northeastern coast of Honshu) in Japan. In Niigata Prefecture, Amur carp had been farmed for food in Musubu Shinden, Kanbara County (present Akiba Ward, Niigata City) since the end of the Genna era (1615–1624). In the Nijūmuragō area, carp were also farmed in terraced ponds near terraced rice paddies by 1781 at the latest, but the ponds ran dry in a severe drought that occurred around that time, and the carp survived the disaster only by being moved to ponds on the grounds of Senryu Shrine in Higashiyama Village and Juni Shrine in Higashitakezawa Village.
During the Bunka and Bunsei eras (1804–1830), people in the Nijūmuragō area bred red and white koi in addition to black koi, and crossed them to produce red-and-white patterned koi, which they then refined through further crossing.
Around 1875, colored koi became popular, the number of breeders increased, and some expensive koi were produced; but Niigata Prefecture then banned the aquaculture of ornamental koi because it was considered a speculative business, and the trade suffered a major blow for a time. The ban was soon lifted, however, thanks to a petition by the villagers. At that time, colored koi included Kōhaku, Asagi, and Ki Utsuri. From this original handful of koi varieties, all other Nishikigoi varieties were bred, with the exception of the Ogon variety (single-colored, metallic koi), which was developed relatively recently.
Koi breeding flourished in the Nijūmuragō area for two reasons: 1) the custom of raising koi in fallow fields for emergency food during the winter, and 2) the existence of many inden, or hidden rice fields in the mountains, unknown to the lord, which allowed the farmers to avoid taxes and become relatively wealthy. Breeding of koi was promoted as a hobby of farmers who could afford it, and high-quality individuals came to be bought and sold.
The name Nishikigoi (brocaded carp) did not exist until the 1910s. Before that time, these fish were called Madaragoi, Kawarigoi, Irogoi, Moyōgoi, and so on.
A geographical book on Suruga Province (present-day Shizuoka Prefecture), Abe Masanobu's Sunkoku Zasshi (1843), mentions that in addition to Asagi, purple, red, and white carp, there are "spotted carp (also known as Bekko carp)." This probably refers to two- or three-colored carp caused by mutation, and is a valuable record of Nishikigoi of the Edo period (1603 – 1868).
In 1900, there was a three-colored carp in Ritsurin Garden in Takamatsu, Kagawa Prefecture, and the price was over 1,000 yen per fish, which was a high price for that time. The three-colored carp had a red belly and asagi (light blue) back with black spots, and is thought to have been a mutation similar to today's Asagi koi.
The magazine "Shonen" (1910) introduced Nishikigoi under the name of Madaragoi (spotted carp) or Kawarigoi (variant carp), and said that even skilled fish breeders did not know how they could produce Nishikigoi, but only waited for them to be produced by chance. The price of Nishikigoi at a fish show in Fukagawa, Tokyo, was 100 to 150 yen per fish, which was "extremely expensive" at the time. Therefore, even at that time, mutant Nishikigoi were known to some fish breeders and hobbyists in Tokyo, but artificial breeds such as Nijūmuragō's Nishikigoi were still unknown to the general public.
In 1914, when the Tokyo Taishō Exposition was held, the "Koi Exhibit Association" was formed, mainly by koi breeders in Higashiyama and Takezawa villages, and koi were exhibited. At the time they were still called "colored carp" or "patterned carp," and they were described as "the first of their kind ever seen in the Tokyo area." The koi received much attention, winning a silver medal. After the exposition closed, eight koi were presented to the Crown Prince (the later Emperor Shōwa). This exhibition triggered an expansion of sales channels, and the market value of koi soared.
In 1917, the Taishō Sanshoku (by Eizaburo Hoshino) was fixed as a breed. The name Nishikigoi is said to have been given by Kei Abe, chief fisheries officer of the Niigata prefectural government in the Taishō era (1912–1926), in admiration of a Taishō Sanshoku he had just seen for the first time. Also in 1917, the fixation of the Kōhaku (by Kunizo Hiroi), which had first been produced in the 1880s, was assured.
Apart from the koi of Niigata Prefecture's Nijūmuragō area, there is a variety called Shūsui, created by Tokyo-based goldfish breeder Kichigoro Akiyama in 1906 by crossing a female leather carp imported from Germany with a male Japanese Asagi or spotted carp. The leather carp is a low-scaled variety bred in Austria in 1782; it was sent to Japan from Munich, Germany, in 1904, along with the mirror carp, which also has few scales. In Japan these two varieties are called Doitsugoi (German carp), and the Shūsui and its lineage are also called Doitsu or Doitsugoi among koi.
In 1927, Shōwa Sanshoku (by Shigekichi Hoshino) was fixed as a breed, and in 1939, koi were exhibited at the Japanese pavilion at the Golden Gate International Exposition held in San Francisco.
Today
The hobby of keeping koi eventually spread worldwide. They are sold in many pet and aquarium shops, with higher-quality fish available from specialist dealers. Collecting koi has become a social hobby: passionate hobbyists join clubs, share their knowledge, and help each other with their koi. In particular, in the 21st century some wealthy Chinese have imported large quantities of koi from Niigata in Japan, and the price of high-quality carp has soared; in 2018, one carp was bought by a Chinese collector for about $2 million, the highest price ever paid. There are also cases in which purchased carp are bred in China and sold abroad, and many breeds are spreading all over the world.
Etymology
The words "koi" and "nishikigoi" come from the Japanese words 鯉 (carp), and 錦鯉 (brocaded carp), respectively. In Japanese, "koi" is a homophone for 恋, another word that means "affection" or "love", so koi are symbols of love and friendship in Japan.
Colored ornamental carp were originally called Irokoi (色鯉), meaning colored carp, Hanakoi (花鯉), meaning floral carp, and Moyōkoi (模様鯉), meaning patterned carp. There are various theories as to how these words fell out of use in favor of Nishikigoi (錦鯉), the term used today. One theory holds that, during World War II, the words Irokoi and Hanakoi (which can have sexual meanings) were changed to Nishikigoi because they were not suited to the social situation of war. Another is that Nishikigoi, originally the name for the popular Taishō Sanshoku variety, gradually became the term for all ornamental koi.
Taxonomy
Koi are a group of breeds produced by artificial selection primarily from black carp, called nogoi or magoi, which inhabit lakes, ponds, and rivers in Japan. The black carp here refers to the Eurasian carp (Cyprinus carpio), which was previously thought to have been introduced to Japan from Eurasia in prehistoric times.
Philipp Franz von Siebold of the Netherlands, who stayed in Japan during the Edo period, reported in Fauna Japonica (1833–1850) that there were three species of carp in Japan: Cyprinus haematopterus, Cyprinus melanotus, and Cyprinus conirostris. This classification received little attention until recently, and it was long thought that only one species of carp existed in Japan. However, recent analysis of mitochondrial DNA has revealed that there are at least two species of carp in Japan: the native carp and carp from Eurasia. Currently, the Japanese native carp is assumed to be Cyprinus melanotus, and a new scientific name for it is being considered.
Cyprinus haematopterus is thought to refer to the Amur carp of Eurasian origin, traditionally called Yamatogoi in Japan. Yamatogoi have been famous since the Edo period as farmed carp in Yamato Province (now Nara Prefecture). Other carp of the same type are known as Yodogoi (Yodo River carp) from Osaka and Shinshūgoi (introduced Yodogoi) from Nagano Prefecture; these carp were famous for their delicious taste. Since the Meiji period, Yamatogoi have been released into lakes and rivers throughout Japan, causing genetic contamination of native carp and making research on the origin of the Japanese carp difficult. Koi are thought to be primarily of this Yamatogoi (Amur carp) lineage, but they also carry some genes of the native Japanese carp.
In the past, koi were commonly believed to have been bred from the common carp (Cyprinus carpio). Extensive hybridization between different populations, coupled with widespread translocations, has muddled the historical zoogeography of the common carp and its relatives. Traditionally, Amur carp (C. rubrofuscus) were considered a subspecies of the common carp, often under the scientific name C. carpio haematopterus. However, they differ in meristics from the common carp of Europe and Western Asia, leading recent authorities to recognize them as a separate species, C. rubrofuscus (C. c. haematopterus being a junior synonym). Although one study of mitochondrial DNA (mtDNA) was unable to find a clear genetic structure matching the geographic populations (possibly because of translocation of carp from separate regions), others based on mtDNA, microsatellite DNA and genomic DNA found a clear separation between the European/West Asian population and the East Asian population, with koi belonging in the latter. Consequently, recent authorities have suggested that the ancestral species of the koi is C. rubrofuscus (syn. C. c. haematopterus) or at least an East Asian carp species instead of C. carpio. Regardless, a taxonomic review of Cyprinus carp from eastern and southeastern Asia may be necessary, as the genetic variations do not fully match the currently recognized species pattern, with one study of mtDNA suggesting that koi are close to the Southeast Asian carp, but not necessarily the Chinese.
Varieties
According to Zen Nippon Airinkai, a group that leads the breeding and dissemination of koi in Japan, there are more than 100 varieties of koi created through breeding, and each variety is classified into 16 groups. Koi varieties are distinguished by coloration, patterning, and scalation. Some of the major colors are white, black, red, yellow, blue, and cream. Metallic shades of gold and platinum in the scales have also been developed through selective breeding. Although the possible colors are virtually limitless, breeders have identified and named a number of specific categories. The most notable category is Gosanke, which is made up of the Kōhaku, Taishō Sanshoku, and Shōwa Sanshoku varieties.
New koi varieties are still being actively developed. Ghost koi developed in the 1980s have become very popular in the United Kingdom; they are a hybrid of wild carp and Ogon koi and are distinguished by their metallic scales. Butterfly koi (also known as longfin koi, or dragon carp), also developed in the 1980s, are notable for their long and flowing fins. They are hybrids of koi with Asian carp. Butterfly koi and ghost koi are considered by some to be not true nishikigoi.
The major named varieties include:
Kōhaku is a white-skinned koi with large red markings on the top. The name means "red and white"; kōhaku was one of the first ornamental varieties to be established in Japan (late 19th century).
Taishō Sanshoku is very similar to the kōhaku, except for the addition of small black markings called sumi. This variety was first exhibited in 1914 by the koi breeder Gonzo Hiroi, during the reign of the Taishō Emperor. In the United States, the name is often abbreviated to just "Sanke". The kanji, 三色, may be read as either sanshoku or as sanke (from its earlier name 三毛).
Shōwa Sanshoku is a black koi with red (hi 緋) and white (shiroji 白地) markings. The first Shōwa Sanke was exhibited in 1927, during the reign of the Shōwa Emperor. In America, the name is often abbreviated to just "Shōwa". The amount of shiroji on Shōwa Sanke has increased in modern times (Kindai Shōwa 近代昭和), to the point that it can be difficult to distinguish from Taishō Sanke. The kanji, 三色, may be read as either sanshoku or as sanke.
Bekko is a white-, red-, or yellow-skinned koi with black markings (sumi). The Japanese name means "tortoise shell", and is commonly written as 鼈甲. The white, red, and yellow varieties are called Shiro Bekko, Aka Bekko, and Ki Bekko, respectively. It may be confused with the Utsuri.
Utsurimono is a black koi with white, red, or yellow markings, in a zebra color pattern. The oldest attested form is the yellow one, which went by a different name in the 19th century until renamed Ki Utsuri by Elizaburo Hoshino, an early 20th-century koi breeder. The red and white versions are called Hi Utsuri and Shiro Utsuri (a piebald color morph), respectively. The word utsuri means to print (the black markings are reminiscent of ink stains). Genetically, it is the same as Shōwa but lacking either red pigment (Shiro Utsuri) or white pigment (Hi Utsuri/Ki Utsuri).
Asagi koi is light blue above and usually red below, but also occasionally pale yellow or cream, generally below the lateral line and on the cheeks. The Japanese name means pale greenish-blue, spring-onion color, or indigo.
Shūsui means "autumn green"; the Shūsui was created in 1910 by Yoshigoro Akiyama (秋山 吉五郎) by crossing Japanese Asagi with German mirror carp. The fish has no scales, except for a single line of large mirror scales dorsally, extending from head to tail. The most common type of Shūsui has a pale, sky-blue/gray color above the lateral line and red or orange (and very rarely bright yellow) below the lateral line and on the cheeks.
Koromo is a white fish with a Kōhaku-style pattern, with blue or black-edged scales only over the hi pattern. This variety first arose in the 1950s as a cross between a Kōhaku and an Asagi. The most commonly encountered Koromo is the Ai Goromo, which is colored like a Kōhaku, except that each of the scales within the red patches has a blue or black edge. Less common is the Budō-Goromo, which has a darker (burgundy) hi overlay that gives it the appearance of bunches of grapes. Very rarely seen is the Sumi-Goromo, which is similar to Budō-Goromo, but whose hi pattern is such a dark burgundy that it appears nearly black.
is a "catch-all" term for koi that cannot be put into one of the other categories. This is a competition category, and many new varieties of koi compete in this one category. It is also known as .
Goshiki is a dark koi with a red (Kōhaku-style) hi pattern. The Japanese name means "five colors". It appears similar to an Asagi, with little or no hi below the lateral line and a Kōhaku hi pattern over reticulated (fishnet-patterned) scales. The base color can range from nearly black to very pale sky blue.
Hikarimuji is a variety whose whole body is a single shiny color; it is named differently depending on the color.
Hikarimoyō is a koi with colored markings over a metallic base, or in two metallic colors.
Hikari Utsuri is a cross between the Utsurimono series and Ōgon.
Kin-Gin-Rin is a koi with metallic (glittering, metal-flake-appearing) scales. The name translates into English as "gold and silver scales"; it is often abbreviated to Ginrin. Ginrin versions of almost all other varieties of koi occur, and they are fashionable. Their sparkling, glittering scales contrast with the smooth, even, metallic skin and scales seen in the Ogon varieties. Recently, these characteristics have been combined to create new Ginrin Ogon varieties.
Tanchō is any koi with a solitary red patch on its head. The fish may be a Tanchō Shōwa, Tanchō Sanke, or even Tanchō Goshiki. It is named for the Japanese red-crowned crane (Grus japonensis), which also has a red spot on its head.
, "tea-colored", this koi can range in color from pale olive-drab green or brown to copper or bronze and more recently, darker, subdued orange shades. Famous for its docile, friendly personality and large size, it is considered a sign of good luck among koi keepers.
Ōgon is a metallic koi of one color only (hikarimono 光者). The most commonly encountered colors are gold, platinum, and orange; cream specimens are very rare. Ōgon compete in the Kawarimono category, and the Japanese name means "gold". The variety was created by Sawata Aoki in 1946 from wild carp he caught in 1921.
Kumonryū (literally "nine tattooed dragons") is a black doitsu-scaled fish with curling white markings. The patterns are thought to be reminiscent of Japanese ink paintings of dragons. They famously change color with the seasons. Kumonryū compete in the Kawarimono category.
Ochiba is a light blue/gray koi with a copper, bronze, or yellow (Kōhaku-style) pattern, reminiscent of autumn leaves on water. The Japanese name means "fallen leaves".
Kikokuryū (輝黒竜, literally "sparkle" or "glitter black dragon") is a metallic-skinned version of the Kumonryu.
Kin-Kikokuryū (金輝黒竜, literally "gold sparkle black dragon" or "gold glitter black dragon") is a metallic-skinned version of the Kumonryu with a Kōhaku-style hi pattern developed by Mr. Seiki Igarashi of Ojiya City. At least six different genetic subvarieties of this general variety are seen.
Ghost koi (人面魚、じんめんぎょ), a hybrid of Ogon and wild carp with metallic scales, is considered by some not to be nishikigoi.
Butterfly koi (鰭長錦鯉、ひれながにしきごい), a hybrid of koi and Asian carp with long flowing fins, comes in various colorations depending on the koi stock used in the cross. It, too, is considered by some not to be nishikigoi.
Doitsu-goi originated by crossbreeding numerous established varieties with "scaleless" German carp (generally, fish with only a single line of scales along each side of the dorsal fin). Also written 独逸鯉, four main types of Doitsu scale patterns exist. The most common type (referred to above) has a row of scales beginning at the front of the dorsal fin and ending at the end of the dorsal fin, along both sides of the fin. The second type has a row of scales beginning where the head meets the shoulder and running the entire length of the fish, along both sides. The third type is the same as the second, with the addition of a line of (often quite large) scales running along the lateral line of the fish; this is also referred to as a "mirror koi". The fourth (and rarest) type, referred to as "armor koi", is completely (or nearly) covered with very large scales that resemble plates of armor. It is also called Kagami-goi (鏡鯉、カガミゴイ), or mirror carp (ミラーカープ).
Differences from goldfish
Goldfish (Carassius auratus) were developed in China more than a thousand years ago by selectively breeding colored varieties; by the Song dynasty (960–1279), yellow, orange, white, and red-and-white colorations had been developed. Goldfish were introduced to Japan in the 16th century and to Europe in the 17th century. On the other hand, most ornamental koi breeds currently distributed worldwide originate from Amur carp (Cyprinus rubrofuscus) bred in Japan in the first half of the 19th century. Koi are domesticated Amur carp that are selected or culled for color; they are not a different species, and will revert to the original coloration within a few generations if allowed to breed freely.
Some goldfish varieties, such as the common goldfish, comet goldfish, and shubunkin, have body shapes and coloration that are similar to koi, and can be difficult to tell apart from koi when immature. Goldfish and koi can interbreed; however, as they were developed from different species of carp, their offspring are sterile.
Health, maintenance, and longevity
The Amur carp is a hardy fish, and koi retain that durability. Koi are coldwater fish but benefit from being kept within a moderate temperature range, and they do not react well to long, cold winters; their immune systems weaken considerably in very cold water. In areas of the world that become warm during the summer, koi ponds usually have a metre or more of depth, whereas in areas with harsher winters ponds generally must be deeper. Specific pond construction has been evolved by koi keepers intent on raising show-quality koi.
The bright colors of koi put them at a severe disadvantage against predators; a white-skinned Kōhaku is highly noticeable against the dark green of a pond. Herons, kingfishers, otters, raccoons, skunks, mink, cats, foxes, and badgers are all capable of spotting and eating koi. A well-designed outdoor pond has areas too deep for herons to stand in, overhangs high enough above the water that mammals cannot reach in, and shade trees overhead to block the view of aerial passers-by. It may prove necessary to string nets or wires above the surface. A pond usually includes a pump and a filtration system to keep the water clear.
Koi are omnivorous fish. They eat a wide variety of foods, including peas, lettuce, and watermelon. Koi food is designed not only to be nutritionally balanced, but also to float so as to encourage the fish to come to the surface, where they can be checked for parasites and ulcers while eating. Naturally, koi are bottom feeders with a mouth configuration adapted for that, and some tend to eat mostly from the bottom, so food producers create mixed sinking and floating combination foods. Koi recognize the people who feed them and gather around them at feeding times, and they can be trained to take food from one's hand. In the winter, their digestive systems slow nearly to a halt, and they eat very little, perhaps no more than nibbles of algae from the bottom; their appetites do not return until the water warms in the spring. Feeding is not recommended when the water temperature drops below 10 °C (50 °F). Hobbyists should take care that proper oxygenation, pH stabilization, and off-gassing occur over the winter in small ponds.
Koi have been reported to achieve ages of 100–200 years. One famous scarlet koi named "Hanako" was owned by several individuals, the last of whom was Komei Koshihara. In July 1974, a study of the growth rings of one of Hanako's scales reported that she was 226 years old. Some sources give an accepted lifespan for the species of little more than 50 years.
Disease
Koi are very hardy. With proper care, they resist many of the parasites that affect more sensitive tropical fish species, such as Trichodina, Epistylis, and Ichthyophthirius multifiliis infections. Water changes help reduce the risk of disease and keep koi from being stressed. Two of the biggest health concerns among koi breeders are the koi herpes virus (KHV) and Rhabdovirus carpio, which causes spring viraemia of carp (SVC). No treatment is known for either disease; only biosecurity measures such as prompt detection, isolation, and disinfection of tanks and equipment can prevent the spread of disease and limit the loss of fish stock. Some koi farms in Israel use the KV3 vaccine, developed by M. Kotler of the Hebrew University of Jerusalem and produced by Kovax, to immunise fish against KHV; Israel is currently the only country in the world to vaccinate koi against KHV. The vaccine is injected into fish under one year old, and the process is assisted by ultraviolet light. The vaccine has a 90% success rate; immunised fish cannot succumb to a KHV outbreak, nor can they pass KHV to other fish in a pond. In 2002, spring viraemia struck an ornamental koi farm in Kernersville, North Carolina, requiring complete depopulation of the ponds and a lengthy quarantine period. For a while afterwards, some koi farmers in neighboring states stopped importing fish for fear of infecting their own stocks.
Breeding
When koi breed naturally, they tend to spawn in the spring and summer. The male follows the female, swimming right behind her and nudging her. After the female releases her eggs, they sink to the bottom of the pond and stay there; a sticky outer shell around each egg helps keep it in place so it does not float around. Although a female can spawn many times, many of the fry do not survive, often because they are eaten by other fish.
Like most fish, koi reproduce through spawning in which a female lays a vast number of eggs and one or more males fertilize them. Nurturing the resulting offspring (referred to as "fry") is a tricky and tedious job, usually done only by professionals. Although a koi breeder may carefully select the parents they wish based on their desired characteristics, the resulting fry nonetheless exhibit a wide range of color and quality.
Koi produce thousands of offspring from a single spawning. However, unlike cattle, purebred dogs, or, more relevantly, goldfish, the large majority of these offspring, even from the best champion-grade koi, are not acceptable as nishikigoi (they have no interesting colors) or may even be genetically defective. These unacceptable offspring are culled at various stages of development based on the breeder's expert eye and closely guarded trade techniques. Culled fry are usually destroyed or used as feeder fish (mostly fed to arowana, owing to the belief that doing so enhances the arowana's color), while older culls, within their first year and between 3 and 6 inches long (also called tosai), are often sold as lower-grade, pond-quality koi.
The semi-randomized result of the koi's reproductive process has both advantages and disadvantages for the breeder. While it requires diligent oversight to narrow down the favorable result that the breeder wants, it also makes possible the development of new varieties of koi within relatively few generations.
In the wild
Koi have been accidentally or deliberately released into the wild on every continent except Antarctica. They quickly revert to the natural coloration of the Amur carp within a few generations. In many areas, they are considered an invasive species and a pest. In the Australian states of Queensland and New South Wales, they are considered noxious fish.
In Japan, koi releases are sometimes held as events for tourism purposes. However, because koi are an artificial breed, such releases cause genetic pollution when the koi interbreed with native carp.
Koi greatly increase the turbidity of the water because they are constantly stirring up the substrate. This makes waterways unattractive, reduces the abundance of aquatic plants, and can render the water unsuitable for swimming or drinking, even by livestock. In some countries, koi have caused so much damage to waterways that vast amounts of money and effort have been spent trying to eradicate them, largely unsuccessfully.
In many areas of North America, koi are introduced into the artificial "water hazards" and ponds on golf courses to keep water-borne insect larvae under control through predation.
In common culture
In Japan, the koi is a symbol of luck, prosperity, and good fortune, and also of perseverance in the face of adversity. Ornamental koi are symbolic of Japanese culture and are closely associated with the country's national identity. The custom of koinobori (carp streamers), which began in the Edo period (1603–1867), is still practiced today and displayed in gardens on Children's Day, 5 May.
In Chinese culture, the koi represents fame, family harmony, and wealth. It is a feng shui favorite, symbolizing abundance as well as perseverance and strength, and has a mythical potential to transform into a dragon. Since the late 20th century, the keeping of koi in outdoor water gardens has become popular among the more affluent Chinese. Koi ponds are found in Chinese communities around the world, and the number of people who keep koi imported from Niigata has been increasing. In addition, increasing numbers of Japanese koi bred in China are sold domestically and exported to foreign countries.
Koi are also popular in many countries in the equatorial region, where outdoor water gardens are popular. In Sri Lanka, interior courtyards most often have one or several fish ponds dedicated to koi.
| Biology and health sciences | Cypriniformes | null |
17479 | https://en.wikipedia.org/wiki/Lynx | Lynx | A lynx ( ; : lynx or lynxes) is any of the four extant species (the Canada lynx, Iberian lynx, Eurasian lynx and the bobcat) within the medium-sized wild cat genus Lynx. The name originated in Middle English via Latin from the Greek word lynx (λύγξ), derived from the Indo-European root *leuk- ("light, brightness"), in reference to the luminescence of its reflective eyes.
Appearance
Lynx have a short tail, characteristic tufts of black hair on the tips of their ears, large, padded paws for walking on snow and long whiskers on the face. Under their neck, they have a ruff, which has black bars resembling a bow tie, although this is often not visible.
Body colour varies from medium brown to gold to beige-white, and is occasionally marked with dark brown spots, especially on the limbs. All species of lynx have white fur on their chests, bellies, and the insides of their legs, an extension of the chest and belly fur. The lynx's colouring, fur length, and paw size vary according to the climate in its range. In the Southwestern United States, lynx are short-haired, dark in colour, and their paws are smaller and less padded. In colder northern climates, lynx have thicker and lighter fur, as well as larger and more padded paws that are well adapted to snow.
The smallest species are the bobcat and the Canada lynx, while the largest is the Eurasian lynx, with considerable variations within species.
Species
All living species of Lynx are thought to descend from Lynx issiodorensis, which first appeared during the early Pliocene in Africa, around 4 million years ago, and dispersed into Eurasia shortly afterwards. The bobcat is thought to have arisen from a dispersal across the Bering Land Bridge during the Early Pleistocene, around 2.5–2.4 million years ago, and the Iberian lynx is suggested to have speciated around 1 million years ago, at the end of the Early Pleistocene. The Eurasian lynx is thought to have evolved from Asian populations of Lynx issiodorensis, while the Canada lynx is thought to descend from a separate, later migration of Eurasian lynx over the Bering Land Bridge around 200,000 years ago.
The Pliocene felid Felis rexroadensis from North America has been proposed as an even earlier ancestor; however, this was larger than any living species, and is not currently classified as a true lynx. Another extinct species of Lynx, L. shansiensis, inhabited what is now northern China during the Early Pleistocene.
Eurasian lynx
Of the four lynx species, the Eurasian lynx (Lynx lynx) is the largest in size. It is native to European, Central Asian, and Siberian forests. While its conservation status has been classified as "least concern", populations of Eurasian lynx have been reduced or extirpated from much of Europe, where it is now being reintroduced.
During the summer, the Eurasian lynx has a relatively short, reddish or brown coat, which is replaced by a much thicker silver-grey to greyish-brown coat during winter. The lynx hunts by stalking and jumping on its prey, helped by the rugged, forested country in which it resides. A favorite prey for the lynx in its woodland habitat is the roe deer. However, as an opportunistic predator much like its cousins, it will feed on whatever animal appears easiest to take.
Canada lynx
The Canada lynx (Lynx canadensis), or Canadian lynx, is a North American felid that ranges across forest and tundra regions of Canada and into Alaska, as well as some parts of the northern United States. Historically, the Canada lynx ranged from Alaska across Canada and into many of the northern U.S. states. In the eastern states, it resided in the transition zone where boreal coniferous forests yielded to deciduous forests. By 2010, after an 11-year effort, it had been successfully reintroduced into Colorado, where it had been extirpated in the 1970s. In 2000, the U.S. Fish and Wildlife Service designated the Canada lynx a threatened species in the lower 48 states.
The Canada lynx is a good climber and swimmer; it constructs rough shelters under fallen trees or rock ledges. It has a thick coat and broad paws, and is twice as effective as the bobcat at supporting its weight on the snow. The Canada lynx feeds almost exclusively on snowshoe hares; its population is highly dependent on the population of this prey animal. It will also hunt medium-sized mammals and birds if hare numbers fall.
Iberian lynx
The Iberian lynx (Lynx pardinus) is a vulnerable species native to the Iberian Peninsula in Southern Europe. It was the most endangered cat species in the world, but conservation efforts have changed its status from critical to endangered to vulnerable. The loss of the species would have been the first feline extinction since the Smilodon 10,000 years ago. The species used to be classified as a subspecies of the Eurasian lynx, but is now considered a separate species. Both species occurred together in central Europe in the Pleistocene epoch, being separated by habitat choice. The Iberian lynx is believed to have evolved from Lynx issiodorensis.
Bobcat
The bobcat (Lynx rufus) is a North American wild cat. With 13 recognized subspecies, the bobcat is common throughout southern Canada, the continental United States, and northern Mexico. Like the Eurasian lynx, its conservation status is "least concern." The bobcat is an adaptable predator that inhabits deciduous, coniferous, or mixed woodlands, but unlike other Lynx, does not depend exclusively on the deep forest, and ranges from swamps and desert lands to mountainous and agricultural areas, its spotted coat serving as camouflage. The population of the bobcat depends primarily on the population of its prey. Nonetheless, the bobcat is often killed by larger predators such as coyotes.
The bobcat resembles other species of the genus Lynx, but is on average the smallest of the four. Its coat is variable, though generally tan to grayish brown, with black streaks on the body and dark bars on the forelegs and tail. The ears are black-tipped and pointed, with short, black tufts. There is generally an off-white color on the lips, chin, and underparts. Bobcats in the desert regions of the southwest have the lightest-colored coats, while those in the northern, forested regions have the darkest.
Behavior and diet
The lynx is usually solitary, although a small group of lynx may travel and hunt together occasionally. Mating takes place in the late winter and once a year the female gives birth to between one and four kittens. The gestation time of the lynx is about 70 days. The young stay with the mother for one more winter, a total of around nine months, before moving out to live on their own as young adults. The lynx creates its den in crevices or under ledges. It feeds on a wide range of animals from white-tailed deer, reindeer, roe deer, small red deer, and chamois, to smaller, more usual prey: snowshoe hares, fish, foxes, sheep, squirrels, mice, turkeys and other birds, and goats. It also eats ptarmigans, voles, and grouse.
Distribution and habitat
The lynx inhabits high altitude forests with dense cover of shrubs, reeds, and tall grass. Although this cat hunts on the ground, it can climb trees and can swim swiftly, catching fish.
Europe and Asia
The Eurasian lynx ranges from central and northern Europe across Asia, as far as northern Pakistan and India. In Iran, they live in the Mount Damavand area. Since the beginning of the 20th century, the Eurasian lynx had been considered extinct in the wild in Slovenia and Croatia. A resettlement project, begun in 1973, has successfully reintroduced lynx to the Slovenian Alps and the Croatian regions of Gorski Kotar and Velebit, including Croatia's Plitvice Lakes National Park and Risnjak National Park. In both countries, the lynx is listed as an endangered species and protected by law. The lynx was distributed throughout Japan during the Jōmon period; the absence of paleontological evidence from later periods suggests it went extinct there around that time.
Several lynx resettlement projects begun in the 1970s have been successful in various regions of Switzerland. Since the 1990s, there have been numerous efforts to resettle the Eurasian lynx in Germany, and since 2000, a small population can now be found in the Harz mountains near Bad Lauterberg.
The lynx is found in the Białowieża Forest in northeastern Poland, and in the northern and western parts of China, particularly the Tibetan Plateau. In Romania, the numbers exceed 2,000, the largest population in Europe outside of Russia, although most experts consider the official population numbers to be overestimated.
The lynx is more common in northern Europe, especially in Norway, Sweden, Estonia, Finland, and the northern parts of Russia. The Swedish population is estimated to be 1200–1500 individuals, spread all over the country, but more common in middle Sweden and in the mountain range. The lynx population in Finland was 1900–2100 individuals in 2008, and the numbers have been increasing every year since 1992. The lynx population in Finland is estimated currently to be larger than ever before. Lynx in Britain were wiped out in the 17th century, but there have been calls to reintroduce them to curb the numbers of deer.
The endangered Iberian lynx lives in southern Spain and formerly in eastern Portugal. There is an Iberian lynx reproduction center outside Silves in the Algarve in southern Portugal.
North America
The two Lynx species in North America, Canada lynx and bobcats, are both found in the temperate zone. While the bobcat is common throughout southern Canada, the continental United States and northern Mexico, the Canada lynx is present mainly in boreal forests of Canada and Alaska.
| Biology and health sciences | Carnivora | null |
17537 | https://en.wikipedia.org/wiki/LSD | LSD | Lysergic acid diethylamide, commonly known as LSD (from German Lysergsäurediethylamid), is a potent psychedelic drug that intensifies thoughts, emotions, and sensory perception. Often referred to as acid or lucy, LSD can cause mystical, spiritual, or religious experiences. At higher doses, it primarily induces visual and auditory hallucinations. LSD is not considered addictive, because it does not produce compulsive drug-seeking behavior. Using LSD can lead to adverse psychological reactions, such as anxiety, paranoia, and delusions. Additionally, it may trigger "flashbacks," also known as hallucinogen persisting perception disorder (HPPD), in which individuals experience persistent visual distortions after use.
The effects of LSD begin within 30 minutes of ingestion and can last up to 20 hours, with most trips averaging 8–12 hours. It is synthesized from lysergic acid and commonly administered via tabs of blotter paper. LSD is mainly used recreationally or for spiritual purposes. As a serotonin receptor agonist, LSD's precise effects are not fully understood, but it is known to alter the brain’s default mode network, leading to its powerful psychedelic effects.
The drug was first synthesized by Swiss chemist Albert Hofmann in 1938 and became widely studied in the 1950s and 1960s. It was used experimentally in psychiatry for treating alcoholism and schizophrenia. However, its association with the counterculture movement of the 1960s led to its classification as a Schedule I drug in the U.S. in 1968. It was also listed as a Schedule I controlled substance by the United Nations in 1971 and remains without approved medical uses.
Despite its legal restrictions, LSD remains influential in scientific and cultural contexts. Its therapeutic potential has been explored, particularly in treating mental health disorders. As of 2017, about 10% of people in the U.S. had used LSD at some point, with 0.7% having used it in the past year. Usage rates have risen, with a 56.4% increase in adult use in the U.S. from 2015 to 2018.
Uses
Recreational
LSD is commonly used as a recreational drug.
Spiritual
LSD can catalyze intense spiritual experiences and is thus considered an entheogen. Some users have reported out of body experiences. In 1966, Timothy Leary established the League for Spiritual Discovery with LSD as its sacrament. Stanislav Grof has written that religious and mystical experiences observed during LSD sessions appear to be phenomenologically indistinguishable from similar descriptions in the sacred scriptures of the great religions of the world and the texts of ancient civilizations.
Medical
LSD currently has no approved uses in medicine. A meta-analysis concluded that a single dose was effective at reducing alcohol consumption in people suffering from alcoholism. LSD has also been studied in depression, anxiety, and drug dependence, with positive preliminary results.
Effects
LSD is exceptionally potent, with as little as 20 μg capable of producing a noticeable effect.
Physical
LSD can induce physical effects such as pupil dilation, decreased appetite, increased sweating, and wakefulness. The physical reactions to LSD vary greatly and some may be a result of its psychological effects. Commonly observed symptoms include increased body temperature, blood sugar, and heart rate, as well as goose bumps, jaw clenching, dry mouth, and hyperreflexia. In cases of adverse reactions, users may experience numbness, weakness, nausea, and tremors.
Psychological
The primary immediate psychological effects of LSD are visual pseudo-hallucinations and altered thought, often referred to as "trips". These sensory alterations are considered pseudohallucinations because the subject does not perceive the patterns seen as being located in three-dimensional space outside the body. LSD is not considered addictive. These effects typically begin within 20–30 minutes of oral ingestion, peak three to four hours after ingestion, and can last up to 20 hours, particularly with higher doses. An "afterglow" effect, characterized by an improved mood or perceived mental state, may persist for days or weeks following ingestion. Positive experiences, or "good trips", are described as intensely pleasurable and can include feelings of joy, euphoria, an increased appreciation for life, decreased anxiety, a sense of spiritual enlightenment, and a feeling of interconnectedness with the universe.
Negative experiences, commonly known as "bad trips", can induce feelings of fear, agitation, anxiety, panic, and paranoia. While the occurrence of a bad trip is unpredictable, factors such as mood, surroundings, sleep, hydration, and social setting, collectively referred to as "set and setting", can influence the risk and are considered important in minimizing the likelihood of a negative experience.
Sensory
LSD induces an animated sensory experience affecting senses, emotions, memories, time, and awareness, lasting from 6 to 20 hours, with the duration dependent on dosage and individual tolerance. Effects typically commence within 30 to 90 minutes post-ingestion, ranging from subtle perceptual changes to profound cognitive shifts. Alterations in auditory and visual perception are common.
Users may experience enhanced visual phenomena, such as vibrant colors, objects appearing to morph, ripple or move, and geometric patterns on various surfaces. Changes in the perception of food's texture and taste are also noted, sometimes leading to aversion towards certain foods.
There are reports of inanimate objects appearing animated, with static objects seeming to move in additional spatial dimensions. The auditory effects of LSD may include echo-like distortions of sounds, and an intensified experience of music. Basic visual effects often resemble phosphenes and can be influenced by concentration, thoughts, emotions, or music. Higher doses can lead to more intense sensory perception alterations, including synesthesia, perception of additional dimensions, and temporary dissociation.
Adverse effects
LSD, a classical psychedelic, is deemed physiologically safe at standard dosages (50–200 μg) and its primary risks lie in psychological effects rather than physiological harm. A 2010 study by David Nutt ranked LSD as significantly less harmful than alcohol, placing it near the bottom of a list assessing the harm of 20 drugs.
Psychological effects
Mental disorders
LSD can induce panic attacks or extreme anxiety, colloquially termed a "bad trip". Despite lower rates of depression and substance abuse found in psychedelic drug users compared to controls, LSD presents heightened risks for individuals with severe mental illnesses like schizophrenia. These hallucinogens can catalyze psychiatric disorders in predisposed individuals, although they do not tend to induce illness in emotionally healthy people.
Suggestibility
While research from the 1960s indicated increased suggestibility under the influence of LSD among both mentally ill and healthy individuals, recent documents suggest that the CIA and Department of Defense have discontinued research into LSD as a means of mind control.
Flashbacks
Flashbacks are psychological episodes where individuals re-experience some of LSD's subjective effects after the drug has worn off, persisting for days or months post-hallucinogen use. These experiences are associated with hallucinogen persisting perception disorder (HPPD), where flashbacks occur intermittently or chronically, causing distress or functional impairment.
The etiology of flashbacks is varied. Some cases are attributed to somatic symptom disorder, where individuals fixate on normal somatic experiences previously unnoticed prior to drug consumption. Other instances are linked to associative reactions to contextual cues, similar to responses observed in individuals with past trauma or emotional experiences. The risk factors for flashbacks remain unclear, but pre-existing psychopathologies may be significant contributors.
Estimating the prevalence of HPPD is challenging. It is considered rare, with occurrences ranging from 1 in 20 users experiencing the transient and less severe type 1 HPPD, to 1 in 50,000 for the more concerning type 2 HPPD. Contrary to internet rumors, LSD is not stored long-term in the spinal cord or other body parts. Pharmacological evidence indicates LSD has a half-life of 175 minutes and is metabolized into water-soluble compounds like 2-oxo-3-hydroxy-LSD, eliminated through urine without evidence of long-term storage. Clinical evidence also suggests that chronic use of SSRIs can potentiate LSD-induced flashbacks, even months after stopping LSD use.
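The half-life figure makes the "no long-term storage" point quantitative: after 24 hours, roughly eight 175-minute half-lives have elapsed, leaving well under 1% of the drug in circulation. A minimal arithmetic check, assuming simple exponential elimination:

```python
# How much LSD remains 24 h after dosing, given the 175-minute half-life
# quoted above and assuming simple exponential elimination?
half_life_min = 175
elapsed_min = 24 * 60
fraction_left = 0.5 ** (elapsed_min / half_life_min)
print(f"{fraction_left:.2%} remaining after 24 h")   # about 0.33%
```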
Drug interactions
Several psychedelics, including LSD, are metabolized by CYP2D6. Concurrent use of SSRIs, potent inhibitors of CYP2D6, with LSD may heighten the risk of serotonin syndrome. Chronic usage of SSRIs, TCAs, and MAOIs is believed to diminish the subjective effects of psychedelics, likely due to SSRI-induced 5-HT2A receptor downregulation and MAOI-induced 5-HT2A receptor desensitization. Interactions between psychedelics and antipsychotics or anticonvulsants are not well-documented; however, co-use with mood stabilizers like lithium may induce seizures and dissociative effects, particularly in individuals with bipolar disorder. Lithium notably intensifies LSD reactions, potentially leading to acute comatose states when combined.
Lethal dose
The lethal oral dose of LSD in humans is estimated at 100 mg, based on LD50 and lethal blood concentrations observed in rodent studies.
Tolerance
LSD shows significant tachyphylaxis, with tolerance developing 24 hours after administration. The progression of tolerance at intervals shorter than 24 hours remains largely unknown. Tolerance typically resets to baseline after 3–4 days of abstinence. Significant cross-tolerance occurs between LSD, mescaline and psilocybin. A slight cross-tolerance to DMT is observed in humans highly tolerant to LSD. Tolerance to LSD also builds up with consistent use, and is believed to result from serotonin 5-HT2A receptor downregulation. Researchers believe that tolerance returns to baseline after two weeks of not using psychedelics.
Addiction and dependence liability
LSD is widely considered to be non-addictive, despite its potential for abuse. Attempts to train laboratory animals to self-administer LSD have been largely unsuccessful. Although tolerance to LSD builds up rapidly, a withdrawal syndrome does not appear, suggesting that a potential syndrome does not necessarily relate to the possibility of acquiring rapid tolerance to a substance. A report examining substance use disorder for DSM-IV noted that almost no hallucinogens produced dependence, unlike psychoactive drugs of other classes such as stimulants and depressants.
Cancer and pregnancy
The mutagenic potential of LSD is unclear. Overall, the evidence seems to point to limited or no effect at commonly used doses. Studies showed no evidence of teratogenic or mutagenic effects.
Overdose
There have been no documented fatal human overdoses from LSD, although there has been no "comprehensive review since the 1950s" and "almost no legal clinical research since the 1970s". Eight individuals who had accidentally consumed an exceedingly high amount of LSD, mistaking it for cocaine, and had gastric levels of 1000–7000 μg LSD tartrate per 100 mL and blood plasma levels up to 26 μg/ml, had suffered from comatose states, vomiting, respiratory problems, hyperthermia, and light gastrointestinal bleeding; however, all of them survived without residual effects upon hospital intervention.
Individuals experiencing a bad trip after LSD intoxication may present with severe anxiety and tachycardia, often accompanied by phases of psychotic agitation and varying degrees of delusions. Cases of death on a bad trip have been reported due to prone maximal restraint (commonly known as a hogtie) and positional asphyxia when the individuals were restrained by law enforcement personnel.
Massive doses are largely managed with symptomatic treatments, and agitation can be addressed with benzodiazepines. Reassurance in a calm, safe environment is beneficial. Antipsychotics such as haloperidol are not recommended, as they may have adverse psychotomimetic effects. Gastrointestinal decontamination with activated charcoal is of little use due to the rapid absorption of LSD, unless performed within 30–60 minutes of ingesting an exceedingly large amount. Administration of anticoagulants, vasodilators, and sympatholytics may be useful for treating ergotism.
Designer drug overdose
Many novel psychoactive substances of 25-NB (NBOMe) series, such as 25I-NBOMe and 25B-NBOMe, are regularly sold as LSD in blotter papers. NBOMe compounds are often associated with life-threatening toxicity and death. Fatalities involved in NBOMe intoxication suggest that a significant number of individuals ingested the substance which they believed was LSD, and researchers report that "users familiar with LSD may have a false sense of security when ingesting NBOMe inadvertently". Researchers state that the alleged physiological toxicity of LSD is likely due to psychoactive substances other than LSD.
NBOMe compounds are reported to have a bitter taste, are not active orally, and are usually taken sublingually. When NBOMes are administered sublingually, numbness of the tongue and mouth followed by a metallic chemical taste was observed, and researchers describe this physical side effect as one of the main discriminants between NBOMe compounds and LSD. Despite its high potency, recreational doses of LSD have only produced low incidents of acute toxicity, but NBOMe compounds have extremely different safety profiles. Testing with Ehrlich's reagent gives a positive result for LSD and a negative result for NBOMe compounds.
Pharmacology
Pharmacodynamics
Most serotonergic psychedelics are not significantly dopaminergic, and LSD is therefore atypical in this regard. The agonism of the D2 receptor by LSD may contribute to its psychoactive effects in humans.
LSD binds to most serotonin receptor subtypes except for the 5-HT3 and 5-HT4 receptors. However, its affinity for most of these receptors is too low for them to be sufficiently activated by the brain concentration of approximately 10–20 nM. In humans, recreational doses of LSD can affect 5-HT1A (Ki = 1.1 nM), 5-HT2A (Ki = 2.9 nM), 5-HT2B (Ki = 4.9 nM), 5-HT2C (Ki = 23 nM), 5-HT5A (Ki = 9 nM [in cloned rat tissues]), and 5-HT6 receptors (Ki = 2.3 nM). Although not present in humans, 5-HT5B receptors found in rodents also have a high affinity for LSD. The psychedelic effects of LSD are attributed to cross-activation of 5-HT2A receptor heteromers. Many, but not all, 5-HT2A agonists are psychedelics, and 5-HT2A antagonists block the psychedelic activity of LSD. LSD exhibits functional selectivity at the 5-HT2A and 5-HT2C receptors in that it activates the signal transduction enzyme phospholipase A2 instead of phospholipase C, which the endogenous ligand serotonin activates.
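As a rough illustration of what these affinity figures imply, the sketch below applies the textbook single-site occupancy relation, occupancy = [L] / ([L] + Ki), to the Ki values quoted above at a mid-range brain concentration of 15 nM. This is a deliberate simplification (it ignores agonist efficacy, biased signaling, and receptor reserve), so the percentages are only a qualitative guide to which subtypes are plausibly engaged, not measured activation levels.

```python
# Sketch: fractional receptor occupancy under a simple single-site binding
# model, occupancy = [L] / ([L] + Ki). Ki values (nM) and the ~10-20 nM
# brain concentration are taken from the text; 15 nM is the midpoint.

KI_NM = {
    "5-HT1A": 1.1,
    "5-HT6": 2.3,
    "5-HT2A": 2.9,
    "5-HT2B": 4.9,
    "5-HT5A": 9.0,   # measured in cloned rat tissues
    "5-HT2C": 23.0,
}

def occupancy(ligand_nm: float, ki_nm: float) -> float:
    """Fraction of receptors bound at the given free-ligand concentration."""
    return ligand_nm / (ligand_nm + ki_nm)

for receptor, ki in KI_NM.items():
    print(f"{receptor}: ~{occupancy(15.0, ki):.0%} occupied at 15 nM")
```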
Exactly how LSD produces its effects is unknown, but it is thought that it works by increasing glutamate release in the cerebral cortex and therefore excitation in this area, specifically in layer V. LSD, like many other drugs of recreational use, has been shown to activate DARPP-32-related pathways. The drug enhances dopamine D2 receptor protomer recognition and signaling of D2–5-HT2A receptor complexes, which may contribute to its psychotropic effects. LSD has been shown to have low affinity for H1 receptors, displaying antihistamine effects.
LSD is a biased agonist that induces a conformation in serotonin receptors that preferentially recruits β-arrestin over activating G proteins. LSD also has an exceptionally long residence time when bound to serotonin receptors lasting hours, consistent with the long-lasting effects of LSD despite its relatively rapid clearance. A crystal structure of 5-HT2B bound to LSD reveals an extracellular loop that forms a lid over the diethylamide end of the binding cavity which explains the slow rate of LSD unbinding from serotonin receptors. The related lysergamide lysergic acid amide (LSA) that lacks the diethylamide moiety is far less hallucinogenic in comparison.
LSD, like other psychedelics, has been found to increase the expression of genes related to synaptic plasticity. This is in part due to binding to brain-derived neurotrophic factor (BDNF) receptor TrkB.
Mechanisms of action
Neuroimaging studies using resting state fMRI recently suggested that LSD changes the cortical functional architecture. These modifications spatially overlap with the distribution of serotoninergic receptors. In particular, increased connectivity and activity were observed in regions with high expression of 5-HT2A receptor, while a decrease in activity and connectivity was observed in cortical areas that are dense with 5-HT1A receptor. Experimental data suggest that subcortical structures, particularly the thalamus, play a synergistic role with the cerebral cortex in mediating the psychedelic experience. LSD, through its binding to cortical 5-HT2A receptor, may enhance excitatory neurotransmission along frontostriatal projections and, consequently, reduce thalamic filtering of sensory stimuli towards the cortex. This phenomenon appears to selectively involve ventral, intralaminar, and pulvinar nuclei.
Pharmacokinetics
The acute effects of LSD normally last between 6 and 10 hours depending on dosage, tolerance, and age. Aghajanian and Bing (1964) found LSD had an elimination half-life of only 175 minutes (about 3 hours). However, using more accurate techniques, Papac and Foltz (1990) reported that 1 μg/kg oral LSD given to a single male volunteer had an apparent plasma half-life of 5.1 hours, with a peak plasma concentration of 5 ng/mL at 3 hours post-dose.
The pharmacokinetics of LSD were not properly determined until 2015, which is not surprising for a drug with the kind of low-μg potency that LSD possesses. In a sample of 16 healthy subjects, a single mid-range 200 μg oral dose of LSD was found to produce mean maximal concentrations of 4.5 ng/mL at a median of 1.5 hours (range 0.5–4 hours) post-administration. Concentrations of LSD decreased following first-order kinetics with a half-life of 3.6±0.9 hours and a terminal half-life of 8.9±5.9 hours.
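To make these numbers concrete, the sketch below assumes simple first-order (single-exponential) decay from the reported mean peak of 4.5 ng/mL using the 3.6-hour half-life; the biphasic profile implied by the separate terminal half-life is ignored, so this is a rough one-compartment approximation rather than the study's actual model.

```python
import math

T_HALF_H = 3.6       # half-life (hours) reported for the 200 ug oral dose
C_MAX_NG_ML = 4.5    # mean maximal plasma concentration from the same study
T_MAX_H = 1.5        # median time to peak concentration, hours post-dose

def concentration(hours_after_peak: float) -> float:
    """Plasma concentration (ng/mL) under first-order decay from the peak."""
    k = math.log(2) / T_HALF_H    # elimination rate constant, per hour
    return C_MAX_NG_ML * math.exp(-k * hours_after_peak)

for t in (0.0, 3.6, 7.2, 10.8):
    print(f"{T_MAX_H + t:4.1f} h post-dose: {concentration(t):.2f} ng/mL")
```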
The effects of the dose of LSD given lasted for up to 12 hours and were closely correlated with the concentrations of LSD present in circulation over time, with no acute tolerance observed. Only 1% of the drug was eliminated in urine unchanged, whereas 13% was eliminated as the major metabolite 2-oxo-3-hydroxy-LSD (O-H-LSD) within 24 hours. O-H-LSD is formed by cytochrome P450 enzymes, although the specific enzymes involved are unknown, and it does not appear to be known whether O-H-LSD is pharmacologically active or not. The oral bioavailability of LSD was crudely estimated as approximately 71% using previous data on intravenous administration of LSD. The sample was equally divided between male and female subjects and there were no significant sex differences observed in the pharmacokinetics of LSD.
Chemistry
LSD is a chiral compound with two stereocenters at the carbon atoms C-5 and C-8, so that theoretically four different optical isomers of LSD could exist. LSD, also called (+)-d-LSD, has the absolute configuration (5R,8R). 5S stereoisomers of lysergamides do not exist in nature and are not formed during the synthesis from d-lysergic acid. Retrosynthetically, the C-5 stereocenter could be analysed as having the same configuration of the alpha carbon of the naturally occurring amino acid L-tryptophan, the precursor to all biosynthetic ergoline compounds.
However, LSD and iso-LSD, the two C-8 isomers, rapidly interconvert in the presence of bases, as the alpha proton is acidic and can be deprotonated and reprotonated. Non-psychoactive iso-LSD which has formed during the synthesis can be separated by chromatography and can be isomerized to LSD.
Pure salts of LSD are triboluminescent, emitting small flashes of white light when shaken in the dark. LSD is strongly fluorescent and will glow bluish-white under UV light.
Synthesis
LSD is an ergoline derivative. It is commonly synthesized by reacting diethylamine with an activated form of lysergic acid. Activating reagents include phosphoryl chloride and peptide coupling reagents. Lysergic acid is made by alkaline hydrolysis of lysergamides such as ergotamine, a substance usually derived from the ergot fungus grown on agar plates. Lysergic acid can also be produced synthetically, although these processes are not used in clandestine manufacture due to their low yields and high complexity.
Albert Hofmann synthesized LSD in the following manner: (1) hydrazinolysis of ergotamine into D- and L-isolysergic acid hydrazide, (2) separation of the enantiomers with di-(p-toluyl)-D-tartaric acid to get D-isolysergic acid hydrazide, (3) enantiomerization into D-lysergic acid hydrazide, (4) substitution with HNO2 to D-lysergic acid azide and (5) finally substitution with diethylamine to form D-lysergic acid diethylamide.
Research
The precursor for LSD, lysergic acid, has been produced by GMO baker's yeast.
Dosage
A single dose of LSD is typically between 40 and 500 micrograms—an amount roughly equal to one-tenth the mass of a grain of sand. Threshold effects can be felt with as little as 25 micrograms of LSD. The practice of using sub-threshold doses is called microdosing. Dosages of LSD are measured in micrograms (μg), or millionths of a gram.
In the mid-1960s, the most important black market LSD manufacturer (Owsley Stanley) distributed LSD at a standard concentration of 270 μg, while street samples of the 1970s contained 30 to 300 μg. By the 1980s, the amount had reduced to between 100 and 125 μg, dropping more in the 1990s to the 20–80 μg range, and even more in the 2000s (decade).
Reactivity and degradation
"LSD," writes the chemist Alexander Shulgin, "is an unusually fragile molecule ... As a salt, in water, cold, and free from air and light exposure, it is stable indefinitely."
LSD has two labile protons at the tertiary stereogenic C5 and C8 positions, rendering these centers prone to epimerisation. The C8 proton is more labile due to the electron-withdrawing carboxamide attachment, but the removal of the chiral proton at the C5 position (which was once also an alpha proton of the parent molecule tryptophan) is assisted by the inductively withdrawing nitrogen and pi electron delocalisation with the indole ring.
LSD also has enamine-type reactivity because of the electron-donating effects of the indole ring. Because of this, chlorine destroys LSD molecules on contact; even though chlorinated tap water contains only a slight amount of chlorine, the small quantity of compound typical to an LSD solution will likely be eliminated when dissolved in tap water. The double bond between the 8-position and the aromatic ring, being conjugated with the indole ring, is susceptible to nucleophilic attacks by water or alcohol, especially in the presence of UV or other kinds of light. LSD often converts to "lumi-LSD," which is inactive in human beings.
A controlled study was undertaken to determine the stability of LSD in pooled urine samples.
The concentrations of LSD in urine samples were followed over time at various temperatures, in different types of storage containers, at various exposures to different wavelengths of light, and at varying pH values. These studies demonstrated no significant loss in LSD concentration at 25 °C for up to four weeks. After four weeks of incubation, a 30% loss in LSD concentration at 37 °C and up to a 40% loss at 45 °C were observed. Urine fortified with LSD and stored in amber glass or nontransparent polyethylene containers showed no change in concentration under any light conditions. The stability of LSD in transparent containers under light depended on the distance between the light source and the samples, the wavelength of light, the exposure time, and the intensity of the light. After prolonged exposure to heat in alkaline pH conditions, 10 to 15% of the parent LSD epimerized to iso-LSD. Under acidic conditions, less than 5% of the LSD was converted to iso-LSD. It was also demonstrated that trace amounts of metal ions in the buffer or urine could catalyze the decomposition of LSD and that this process can be avoided by the addition of EDTA.
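If the four-week endpoint losses are treated as though they arose from first-order decay, they imply the rate constants and half-lives computed below. The study reports only endpoint concentrations, so these derived values are an extrapolation under an assumed kinetic model, not measurements.

```python
import math

# Fraction of LSD remaining in urine after 28 days, from the figures above.
remaining_after_28_days = {
    "25 C": 1.00,   # "no significant loss"
    "37 C": 0.70,   # 30% loss
    "45 C": 0.60,   # up to 40% loss
}

for temp, remaining in remaining_after_28_days.items():
    if remaining >= 1.0:
        print(f"{temp}: no measurable decay over 28 days")
        continue
    k = -math.log(remaining) / 28.0   # assumed first-order rate, per day
    print(f"{temp}: k = {k:.4f}/day, implied half-life = {math.log(2) / k:.0f} days")
```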
Detection
LSD can be detected in concentrations larger than approximately 10% in a sample using Ehrlich's reagent and Hofmann's reagent. However, detecting LSD in human tissues is more challenging due to its active dosage being significantly lower (in micrograms) compared to most other drugs (in milligrams).
LSD may be quantified in urine for drug testing programs, in plasma or serum to confirm poisoning in hospitalized victims, or in whole blood for forensic investigations. The parent drug and its major metabolite are unstable in biofluids when exposed to light, heat, or alkaline conditions, necessitating protection from light, low-temperature storage, and quick analysis to minimize losses. Maximum plasma concentrations are typically observed 1.4 to 1.5 hours after oral administration of 100 μg and 200 μg, respectively, with a plasma half-life of approximately 2.6 hours (ranging from 2.2 to 3.4 hours among test subjects).
Due to its potency in microgram quantities, LSD is often not included in standard pre-employment urine or hair analyses. However, advanced liquid chromatography–mass spectrometry methods can detect LSD in biological samples even after a single use.
History
Swiss chemist Albert Hofmann first synthesized LSD in 1938 from lysergic acid, a chemical derived from the hydrolysis of ergotamine, an alkaloid found in ergot, a fungus that infects grain. LSD was the 25th of various lysergamides Hofmann synthesized from lysergic acid while trying to develop a new analeptic, hence the alternate name LSD-25. Hofmann discovered its effects in humans in 1943, after unintentionally ingesting an unknown amount, possibly by absorbing it through his skin. LSD was the subject of exceptional interest within the field of psychiatry in the 1950s and early 1960s, with Sandoz distributing it to researchers under the trademark name Delysid in an attempt to find a marketable use for it. During this period, LSD was controversially administered to hospitalised children diagnosed with schizophrenia or autism, with varying degrees of therapeutic success.
LSD-assisted psychotherapy was used in the 1950s and early 1960s by psychiatrists such as Humphry Osmond, who pioneered the application of LSD to the treatment of alcoholism, with promising results. Osmond coined the term "psychedelic" (lit. mind manifesting) as a term for LSD and related hallucinogens, superseding the previously held "psychotomimetic" model in which LSD was believed to mimic schizophrenia. In contrast to schizophrenia, LSD can induce transcendent experiences, or mental states that transcend the experience of everyday consciousness, with lasting psychological benefit. During this time, the Central Intelligence Agency (CIA) began using LSD in the research project Project MKUltra, which used psychoactive substances to aid interrogation. The CIA administered LSD to unwitting test subjects to observe how they would react, the most well-known example of this being Operation Midnight Climax. LSD was one of several psychoactive substances evaluated by the U.S. Army Chemical Corps as possible non-lethal incapacitants in the Edgewood Arsenal human experiments.
In the 1960s, LSD and other psychedelics were adopted by, and became synonymous with, the counterculture movement due to their perceived ability to expand consciousness. This resulted in LSD being viewed as a cultural threat to American values and the Vietnam War effort, and it was designated a Schedule I substance (illegal for medical as well as recreational use) in 1968. It was listed as a Schedule I controlled substance by the United Nations in 1971 and currently has no approved medical uses. As of 2017, about 10% of people in the United States had used LSD at some point in their lives, while 0.7% had used it in the last year. It was most popular in the 1960s to 1980s. The use of LSD among US adults increased by 56.4% from 2015 to 2018.
LSD was first synthesized on November 16, 1938 by Swiss chemist Albert Hofmann at the Sandoz Laboratories in Basel, Switzerland as part of a large research program searching for medically useful ergot alkaloid derivatives. The abbreviation "LSD" is from the German "Lysergsäurediethylamid".
LSD's psychedelic properties were discovered 5 years later when Hofmann himself accidentally ingested an unknown quantity of the chemical. The first intentional ingestion of LSD occurred on April 19, 1943, when Hofmann ingested 250 μg of LSD. He said this would be a threshold dose based on the dosages of other ergot alkaloids. Hofmann found the effects to be much stronger than he anticipated. Sandoz Laboratories introduced LSD as a psychiatric drug in 1947 and marketed LSD as a psychiatric panacea, hailing it "as a cure for everything from schizophrenia to criminal behavior, 'sexual perversions', and alcoholism." Sandoz would send the drug for free to researchers investigating its effects.
Beginning in the 1950s, the US Central Intelligence Agency (CIA) began a research program code-named Project MKUltra. The CIA introduced LSD to the United States, purchasing the entire world's supply for $240,000 and propagating the LSD through CIA front organizations to American hospitals, clinics, prisons, and research centers. Experiments included administering LSD to CIA employees, military personnel, doctors, other government agents, prostitutes, mentally ill patients, and members of the general public to study their reactions, usually without the subjects' knowledge. The project was revealed in the US congressional Rockefeller Commission report in 1975.
In 1963, the Sandoz patents on LSD expired and the Czech company Spofa began to produce the substance. Sandoz stopped the production and distribution in 1965.
Several figures, including Aldous Huxley, Timothy Leary, and Al Hubbard, had begun to advocate the consumption of LSD. LSD became central to the counterculture of the 1960s. In the early 1960s the use of LSD and other hallucinogens was advocated by new proponents of consciousness expansion such as Leary, Huxley, Alan Watts and Arthur Koestler, and according to L. R. Veysey they profoundly influenced the thinking of the new generation of youth.
On October 24, 1968, possession of LSD was made illegal in the United States. The last FDA approved study of LSD in patients ended in 1980, while a study in healthy volunteers was made in the late 1980s. Legally approved and regulated psychiatric use of LSD continued in Switzerland until 1993.
In November 2020, Oregon became the first US state to decriminalize possession of small amounts of LSD after voters approved Ballot Measure 110.
Society and culture
Counterculture
By the mid-1960s, the youth countercultures in California, particularly in San Francisco, had widely adopted the use of hallucinogenic drugs, including LSD. The first major underground LSD factory was established by Owsley Stanley. Around this time, the Merry Pranksters, associated with novelist Ken Kesey, organized the Acid Tests, events in San Francisco involving LSD consumption, accompanied by light shows and improvised music. Their activities, including cross-country trips in a psychedelically decorated bus and interactions with major figures of the beat movement, were later documented in Tom Wolfe's The Electric Kool-Aid Acid Test (1968).
In San Francisco's Haight-Ashbury neighborhood, the Psychedelic Shop was opened in January 1966 by brothers Ron and Jay Thelin to promote the safe use of LSD. This shop played a significant role in popularizing LSD in the area and establishing Haight-Ashbury as the epicenter of the hippie counterculture. The Thelins also organized the Love Pageant Rally in Golden Gate Park in October 1966, protesting against California's ban on LSD.
A similar movement developed in London, led by British academic Michael Hollingshead, who first tried LSD in America in 1961. After experiencing LSD and interacting with notable figures such as Aldous Huxley, Timothy Leary, and Richard Alpert, Hollingshead played a key role in the famous LSD research at Millbrook before moving to New York City for his experiments. In 1965, he returned to the UK and founded the World Psychedelic Center in Chelsea, London.
Music and Art
The influence of LSD in the realms of music and art became pronounced in the 1960s, especially through the Acid Tests and related events involving bands like the Grateful Dead, Jefferson Airplane, and Big Brother and the Holding Company. San Francisco-based artists such as Rick Griffin, Victor Moscoso, and Wes Wilson contributed to this movement through their psychedelic poster and album art. The Grateful Dead, in particular, became central to the culture of "Deadheads," with their music heavily influenced by LSD.
In the United Kingdom, Michael Hollingshead, reputed for introducing LSD to various artists and musicians like Storm Thorgerson, Donovan, Keith Richards, and members of the Beatles, played a significant role in the drug's proliferation in the British art and music scene. Despite LSD's illegal status from 1966, it was widely used by groups including the Beatles, the Rolling Stones, and the Moody Blues. Their experiences influenced works such as the Beatles' Sgt. Pepper's Lonely Hearts Club Band and Cream's Disraeli Gears, featuring psychedelic-themed music and artwork.
Psychedelic music of the 1960s often sought to replicate the LSD experience, incorporating exotic instrumentation, electric guitars with effects pedals, and elaborate studio techniques. Artists and bands utilized instruments like sitars and tablas, and employed studio effects such as backward tapes, panning, and phasing. Songs such as John Prine's "Illegal Smile" and the Beatles' "Lucy in the Sky with Diamonds" have been associated with LSD, although the latter's authors denied such claims.
Contemporary artists influenced by LSD include Keith Haring in the visual arts, various electronic dance music creators, and the jam band Phish. The 2018 Leo Butler play All You Need is LSD is inspired by the author's interest in the history of LSD.
Legal status
The United Nations Convention on Psychotropic Substances of 1971 mandates that signing parties, including the United States, Australia, New Zealand, and most of Europe, prohibit LSD. Enforcement of these laws varies by country. The convention allows medical and scientific research with LSD.
Australia
In Australia, LSD is classified as a Schedule 9 prohibited substance under the Poisons Standard (February 2017), indicating it may be abused or misused and its manufacture, possession, sale, or use should be prohibited except for approved research purposes. In Western Australia, the Misuse of Drugs Act 1981 provides guidelines for possession and trafficking of substances like LSD.
Canada
In Canada, LSD is listed under Schedule III of the Controlled Drugs and Substances Act. Unauthorized possession and trafficking of the substance can lead to significant legal penalties.
United Kingdom
In the United Kingdom, LSD is a Class A drug under the Misuse of Drugs Act 1971, making unauthorized possession and trafficking punishable by severe penalties. The Runciman Report and Transform Drug Policy Foundation have made recommendations and proposals regarding the legal regulation of LSD and other psychedelics.
United States
In the United States, LSD is classified as a Schedule I controlled substance under the Controlled Substances Act of 1970, making its manufacture, possession, and distribution illegal without a DEA license. The law considers LSD to have a high potential for abuse, no legitimate medical use, and to be unsafe even under medical supervision. The US Supreme Court case Neal v. United States (1995) clarified the sentencing guidelines related to LSD possession.
Oregon decriminalized personal possession of small amounts of drugs, including LSD, in February 2021, and California has seen legislative efforts to decriminalize psychedelics.
Mexico
Mexico decriminalized the possession of small amounts of drugs, including LSD, for personal use in 2009. The law specifies possession limits and establishes that possession is not a crime within designated quantities.
Czech Republic
In the Czech Republic, possession of "amount larger than small" of LSD is criminalized, while possession of smaller amounts is a misdemeanor. The definition of "amount larger than small" is determined by judicial practice and specific regulations.
Economics
Production
An active dose of LSD is very minute, allowing a large number of doses to be synthesized from a comparatively small amount of raw material. Twenty-five kilograms of precursor ergotamine tartrate can produce 5–6 kg of pure crystalline LSD; this corresponds to around 50–60 million doses at 100 μg. Because the masses involved are so small, concealing and transporting illicit LSD is much easier than smuggling cocaine, cannabis, or other illegal drugs.
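The quoted dose count follows directly from the yield figures: 1 kg is 10^9 μg, so 5 kg of pure LSD divided into 100 μg doses gives 50 million of them. A minimal arithmetic check:

```python
# Back-of-the-envelope check of the yield figures quoted above.
lsd_kg = 5          # lower bound of the quoted 5-6 kg of pure LSD
dose_ug = 100       # dose size used in the text, micrograms

doses = lsd_kg * 1e9 / dose_ug   # 1 kg = 1e9 micrograms
print(f"{lsd_kg} kg at {dose_ug} ug/dose = {doses:,.0f} doses")
# -> 50,000,000 doses, matching the "around 50-60 million" figure
```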
Manufacturing LSD requires laboratory equipment and experience in the field of organic chemistry. It takes two to three days to produce 30 to 100 grams of pure compound. It is believed that LSD is not usually produced in large quantities, but rather in a series of small batches. This technique minimizes the loss of precursor chemicals in case a step does not work as expected.
Forms
LSD is produced in crystalline form and is then mixed with excipients or redissolved for production in ingestible forms. Liquid solution is either distributed in small vials or, more commonly, sprayed onto or soaked into a distribution medium. Historically, LSD solutions were first sold on sugar cubes, but practical considerations forced a change to tablet form. Appearing in 1968 as an orange tablet measuring about 6 mm across, "Orange Sunshine" acid was the first largely available form of LSD after its possession was made illegal. Tim Scully, a prominent chemist, made some of these tablets, but said that most "Sunshine" in the USA came by way of Ronald Stark, who imported approximately thirty-five million doses from Europe.
Over some time, tablet dimensions, weight, shape and concentration of LSD evolved from large (4.5–8.1 mm diameter), heavyweight (≥150 mg), round, high concentration (90–350 μg/tab) dosage units to small (2.0–3.5 mm diameter) lightweight (as low as 4.7 mg/tab), variously shaped, lower concentration (12–85 μg/tab, average range 30–40 μg/tab) dosage units. LSD tablet shapes have included cylinders, cones, stars, spacecraft, and heart shapes. The smallest tablets became known as "Microdots."
After tablets came "computer acid" or "blotter paper LSD," typically made by dipping a preprinted sheet of blotting paper into an LSD/water/alcohol solution. More than 200 types of LSD tablets have been encountered since 1969 and more than 350 blotter paper designs have been observed since 1975. About the same time as blotter paper LSD came "Windowpane" (AKA "Clearlight"), which contained LSD inside a thin gelatin square a quarter of an inch (6 mm) across. LSD has been sold under a wide variety of often short-lived and regionally restricted street names including Acid, Trips, Uncle Sid, Blotter, Lucy, Alice and doses, as well as names that reflect the designs on the sheets of blotter paper. Authorities have encountered the drug in other forms—including powder or crystal, and capsule.
Modern distribution
LSD manufacturers and traffickers in the United States can be categorized into two groups: a few large-scale producers, and an equally limited number of small, clandestine chemists, independent producers who operate on a comparatively limited scale and can be found throughout the country.
As a group, independent producers are of less concern to the Drug Enforcement Administration than the large-scale groups because their product reaches only local markets.
Many LSD dealers and chemists describe a religious or humanitarian purpose that motivates their illicit activity. Nicholas Schou's book Orange Sunshine: The Brotherhood of Eternal Love and Its Quest to Spread Peace, Love, and Acid to the World describes one such group, the Brotherhood of Eternal Love. The group was a major American LSD trafficking group in the late 1960s and early 1970s.
In the second half of the 20th century, dealers and chemists loosely associated with the Grateful Dead like Owsley Stanley, Nicholas Sand, Karen Horning, Sarah Maltzer, "Dealer McDope," and Leonard Pickard played an essential role in distributing LSD.
Mimics
Since 2005, law enforcement in the United States and elsewhere has seized several chemicals and combinations of chemicals in blotter paper which were sold as LSD mimics, including DOB, a mixture of DOC and DOI, 25I-NBOMe, and a mixture of DOC and DOB. Many mimics are toxic in comparatively small doses, or have markedly different safety profiles. Many street users of LSD are under the impression that blotter paper which is actively hallucinogenic can only be LSD because that is the only chemical with low enough doses to fit on a small square of blotter paper. While it is true that LSD requires lower doses than most other hallucinogens, blotter paper is capable of absorbing a much larger amount of material. The DEA performed a chromatographic analysis of blotter paper containing 2C-C which showed that the paper contained a much greater concentration of the active chemical than typical LSD doses, although the exact quantity was not determined. Blotter LSD mimics can have relatively small dose squares; a sample of blotter paper containing DOC seized by Concord, California police had dose markings approximately 6 mm apart. Several deaths have been attributed to 25I-NBOMe.
Research
In the United States, the earliest research began in the 1950s. Albert Kurland and his colleagues published research on LSD's therapeutic potential to treat schizophrenia. In Canada, Humphry Osmond and Abram Hoffer completed LSD studies as early as 1952. By the 1960s, controversies surrounding "hippie" counterculture began to deplete institutional support for continued studies.
Currently, several organizations—including the Beckley Foundation, MAPS, Heffter Research Institute and the Albert Hofmann Foundation—exist to fund, encourage and coordinate research into the medicinal and spiritual uses of LSD and related psychedelics. New clinical LSD experiments in humans started in 2009 for the first time in 35 years. As it is illegal in many areas of the world, potential medical uses are difficult to study.
In 2001 the United States Drug Enforcement Administration stated that LSD "produces no aphrodisiac effects, does not increase creativity, has no lasting positive effect in treating alcoholics or criminals, does not produce a 'model psychosis', and does not generate immediate personality change." More recently, experimental uses of LSD have included the treatment of alcoholism, pain and cluster headache relief, and prospective studies on depression.
A 2020 meta-review indicated possible positive effects of LSD in reducing psychiatric symptoms, mainly in cases of alcoholism. There is evidence that psychedelics induce molecular and cellular adaptations related to neuroplasticity and that these could potentially underlie therapeutic benefits.
Psychedelic therapy
In the 1950s and 1960s, LSD was used in psychiatry to enhance psychotherapy, known as psychedelic therapy. Some psychiatrists, such as Ronald A. Sandison, who pioneered its use at Powick Hospital in England, believed LSD was especially useful at helping patients to "unblock" repressed subconscious material through other psychotherapeutic methods, and also for treating alcoholism. One study concluded, "The root of the therapeutic value of the LSD experience is its potential for producing self-acceptance and self-surrender," presumably by forcing the user to face issues and problems in that individual's psyche.
Two recent reviews concluded that conclusions drawn from most of these early trials are unreliable due to serious methodological flaws. These include the absence of adequate control groups, lack of follow-up, and vague criteria for therapeutic outcome. In many cases, studies failed to convincingly demonstrate whether the drug or the therapeutic interaction was responsible for any beneficial effects.
In recent years, organizations like the Multidisciplinary Association for Psychedelic Studies (MAPS) have renewed clinical research of LSD.
It has been proposed that LSD be studied for use in the therapeutic setting, particularly for anxiety. In 2024, the FDA designated a form of LSD being developed by MindMed as a breakthrough therapy for generalized anxiety disorder.
Other uses
In the 1950s and 1960s, some psychiatrists (e.g., Oscar Janiger) explored the potential effect of LSD on creativity. Experimental studies attempted to measure the effect of LSD on creative activity and aesthetic appreciation. In 1966 Dr. James Fadiman conducted a study with the central question "How can psychedelics be used to facilitate problem solving?" Participants worked on 44 different problems and had produced 40 satisfactory solutions by the time the FDA banned all research into psychedelics. LSD was a key component of this study.
Since 2008 there has been ongoing research into using LSD to alleviate anxiety for terminally ill cancer patients coping with their impending deaths.
A 2012 meta-analysis found evidence that a single dose of LSD in conjunction with various alcoholism treatment programs was associated with a decrease in alcohol abuse, lasting for several months, but no effect was seen at one year. Adverse events included seizure, moderate confusion and agitation, nausea, vomiting, and acting in a bizarre fashion.
LSD has been used as a treatment for cluster headaches with positive results in some small studies.
LSD is a potent psychoplastogen, a compound capable of promoting rapid and sustained neural plasticity that may have wide-ranging therapeutic benefit. LSD has been shown to increase markers of neuroplasticity in human brain organoids and improve memory performance in human subjects.
LSD may have analgesic properties related to pain in terminally ill patients and phantom pain and may be useful for treating inflammatory diseases including rheumatoid arthritis.
Notable individuals
Some notable individuals have commented publicly on their experiences with LSD. Some of these comments date from the era when it was legally available in the US and Europe for non-medical uses, and others pertain to psychiatric treatment in the 1950s and 1960s. Still others describe experiences with illegal LSD, obtained for philosophic, artistic, therapeutic, spiritual, or recreational purposes.
W. H. Auden, the poet, said, "I myself have taken mescaline once and L.S.D. once. Aside from a slight schizophrenic dissociation of the I from the Not-I, including my body, nothing happened at all." He also said, "LSD was a complete frost. … What it does seem to destroy is the power of communication. I have listened to tapes done by highly articulate people under LSD, for example, and they talk absolute drivel. They may have seen something interesting, but they certainly lose either the power or the wish to communicate." He also said, "Nothing much happened but I did get the distinct impression that some birds were trying to communicate with me."
Daniel Ellsberg, an American peace activist, says he has had several hundred experiences with psychedelics.
Richard Feynman, a notable physicist at California Institute of Technology, tried LSD during his professorship at Caltech. Feynman largely sidestepped the issue when dictating his anecdotes; he mentions it in passing in the "O Americano, Outra Vez" section.
Jerry Garcia stated in a July 3, 1989 interview for Relix Magazine, in response to the question "Have your feelings about LSD changed over the years?": "They haven't changed much. My feelings about LSD are mixed. It's something that I both fear and that I love at the same time. I never take any psychedelic, have a psychedelic experience, without having that feeling of, 'I don't know what's going to happen.' In that sense, it's still fundamentally an enigma and a mystery."
Bill Gates implied in an interview with Playboy that he tried LSD during his youth.
Aldous Huxley, author of Brave New World, became a user of psychedelics after moving to Hollywood. He was at the forefront of the counterculture's use of psychedelic drugs, which led to his 1954 work The Doors of Perception. Dying from cancer, he asked his wife on 22 November 1963 to inject him with 100 μg of LSD. He died later that day.
Steve Jobs, co-founder and former CEO of Apple Inc., said, "Taking LSD was a profound experience, one of the most important things in my life."
Ernst Jünger, German writer and philosopher, throughout his life had experimented with drugs such as ether, cocaine, and hashish; and later in life he used mescaline and LSD. These experiments were recorded comprehensively in Annäherungen (1970, Approaches). The novel Besuch auf Godenholm (1952, Visit to Godenholm) is clearly influenced by his early experiments with mescaline and LSD. He met with LSD inventor Albert Hofmann and they took LSD together several times. Hofmann's memoir LSD, My Problem Child describes some of these meetings.
In a 2004 interview, Paul McCartney said that The Beatles' songs "Day Tripper" and "Lucy in the Sky with Diamonds" were inspired by LSD trips. Nonetheless, John Lennon consistently stated over many years that it was a coincidence that the initials of "Lucy in the Sky with Diamonds" spelled out L-S-D: he said the title came from a picture drawn by his son Julian, and that the band members did not notice the initials until after the song had been released. Paul McCartney corroborated that account. John Lennon, George Harrison, and Ringo Starr also used the drug, although McCartney cautioned that "it's easy to overestimate the influence of drugs on the Beatles' music."
Michel Foucault had an LSD experience with Simeon Wade in Death Valley and later said it was the greatest experience of his life, one that profoundly changed his life and his work. According to Wade, as soon as Foucault returned to Paris, he scrapped the manuscript of the second volume of The History of Sexuality and completely rethought the project.
Kary Mullis is reported to credit LSD with helping him develop DNA amplification technology, for which he received the Nobel Prize in Chemistry in 1993.
Carlo Rovelli, an Italian theoretical physicist and writer, has credited his use of LSD with sparking his interest in theoretical physics.
Oliver Sacks, a neurologist famous for writing best-selling case histories about his patients' disorders and unusual experiences, talks about his own experiences with LSD and other perception altering chemicals, in his book, Hallucinations.
Matt Stone and Trey Parker, creators of the TV series South Park, claimed to have shown up at the 72nd Academy Awards, at which they were nominated for Best Original Song, under the influence of LSD.
| Biology and health sciences | Recreational drugs | Health |
17553 | https://en.wikipedia.org/wiki/Kepler%27s%20laws%20of%20planetary%20motion | Kepler's laws of planetary motion | In astronomy, Kepler's laws of planetary motion, published by Johannes Kepler in 1609 (except the third law, which was published in 1619), describe the orbits of planets around the Sun. These laws replaced circular orbits and epicycles in the heliocentric theory of Nicolaus Copernicus with elliptical orbits and explained how planetary velocities vary. The three laws state that:
The orbit of a planet is an ellipse with the Sun at one of the two foci.
A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The square of a planet's orbital period is proportional to the cube of the length of the semi-major axis of its orbit.
The elliptical orbits of planets were indicated by calculations of the orbit of Mars. From this, Kepler inferred that other bodies in the Solar System, including those farther away from the Sun, also have elliptical orbits. The second law establishes that when a planet is closer to the Sun, it travels faster. The third law expresses that the farther a planet is from the Sun, the longer its orbital period.
Isaac Newton showed in 1687 that relationships like Kepler's would apply in the Solar System as a consequence of his own laws of motion and law of universal gravitation.
A more precise historical approach is found in Astronomia nova and Epitome Astronomiae Copernicanae.
Comparison to Copernicus
Johannes Kepler's laws improved the model of Copernicus. According to Copernicus:
The planetary orbit is a circle with epicycles.
The Sun is approximately at the center of the orbit.
The speed of the planet in the main orbit is constant.
Despite being correct in saying that the planets revolved around the Sun, Copernicus was incorrect in defining their orbits. Introducing physical explanations for movement in space beyond just geometry, Kepler correctly defined the orbit of planets as follows:
The planetary orbit is not a circle with epicycles, but an ellipse.
The Sun is not at the center but at a focal point of the elliptical orbit.
Neither the linear speed nor the angular speed of the planet in the orbit is constant, but the area speed (closely linked historically with the concept of angular momentum) is constant.
The eccentricity of the orbit of the Earth makes the time from the March equinox to the September equinox, around 186 days, unequal to the time from the September equinox to the March equinox, around 179 days. A diameter would cut the orbit into equal parts, but the plane through the Sun parallel to the equator of the Earth cuts the orbit into two parts with areas in a 186 to 179 ratio, so the eccentricity of the orbit of the Earth is approximately

$$\varepsilon \approx \frac{\pi}{4}\cdot\frac{186-179}{186+179} \approx 0.015,$$

which is close to the correct value (0.016710218). The accuracy of this calculation requires that the two dates chosen be along the elliptical orbit's minor axis and that the midpoints of each half be along the major axis. As the two dates chosen here are equinoxes, this will be correct when perihelion, the date the Earth is closest to the Sun, falls on a solstice. The current perihelion, near January 4, is fairly close to the solstice of December 21 or 22.
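As a quick numerical check of the estimate above, the same ratio can be evaluated directly; a minimal Python sketch:

```python
import math

# Days from March equinox to September equinox, and back again,
# as quoted in the text.
first_half, second_half = 186, 179

ecc = (math.pi / 4) * (first_half - second_half) / (first_half + second_half)
print(f"{ecc:.4f}")   # ~0.0151, close to the true value 0.0167
```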
Nomenclature
It took nearly two centuries for the current formulation of Kepler's work to take on its settled form. Voltaire's Eléments de la philosophie de Newton (Elements of Newton's Philosophy) of 1738 was the first publication to use the terminology of "laws". The Biographical Encyclopedia of Astronomers, in its article on Kepler (p. 620), states that the terminology of scientific laws for these discoveries was current at least from the time of Joseph de Lalande. It was the exposition of Robert Small, in An account of the astronomical discoveries of Kepler (1814), that made up the set of three laws by adding in the third. Small also claimed, against the historical record, that these were empirical laws based on inductive reasoning.
Further, the current usage of "Kepler's Second Law" is something of a misnomer. Kepler had two versions, related in a qualitative sense: the "distance law" and the "area law". The "area law" is what became the Second Law in the set of three; but Kepler himself did not privilege it in that way.
History
Kepler published his first two laws about planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe. Kepler's third law was published in 1619. Kepler had believed in the Copernican model of the Solar System, which called for circular orbits, but he could not reconcile Brahe's highly precise observations with a circular fit to Mars' orbit – Mars coincidentally having the highest eccentricity of all planets except Mercury. His first law reflected this discovery.
In 1621, Kepler noted that his third law applies to the four brightest moons of Jupiter. Godefroy Wendelin also made this observation in 1643. The second law, in the "area law" form, was contested by Nicolaus Mercator in a book from 1664, but by 1670 he had come out in its favour in the Philosophical Transactions. As the century proceeded it became more widely accepted. The reception in Germany changed noticeably between 1688, the year in which Newton's Principia was published and was taken to be basically Copernican, and 1690, by which time work of Gottfried Leibniz on Kepler had been published.
Newton was credited with understanding that the second law is not special to the inverse square law of gravitation, being a consequence just of the radial nature of that law, whereas the other laws do depend on the inverse square form of the attraction. Carl Runge and Wilhelm Lenz much later identified a symmetry principle in the phase space of planetary motion (the orthogonal group O(4) acting) which accounts for the first and third laws in the case of Newtonian gravitation, as conservation of angular momentum does via rotational symmetry for the second law.
Formulary
The mathematical model of the kinematics of a planet subject to the laws allows a large range of further calculations.
First law
Kepler's first law states that:
The orbit of every planet is an ellipse with the sun at one of the two foci.
Mathematically, an ellipse can be represented by the formula:

$$r = \frac{p}{1 + \varepsilon\cos\theta},$$

where $p$ is the semi-latus rectum, ε is the eccentricity of the ellipse, r is the distance from the Sun to the planet, and θ is the angle to the planet's current position from its closest approach, as seen from the Sun. So (r, θ) are polar coordinates.
For an ellipse 0 < ε < 1; in the limiting case ε = 0, the orbit is a circle with the Sun at the centre.

At θ = 0°, perihelion, the distance is minimum: $r_{\min} = \dfrac{p}{1+\varepsilon}$.

At θ = 90° and at θ = 270° the distance is equal to $p$.

At θ = 180°, aphelion, the distance is maximum: $r_{\max} = \dfrac{p}{1-\varepsilon}$ (by definition, aphelion is – invariably – perihelion plus 180°).

The semi-major axis a is the arithmetic mean between $r_{\min}$ and $r_{\max}$: $a = \dfrac{r_{\min}+r_{\max}}{2} = \dfrac{p}{1-\varepsilon^2}$.

The semi-minor axis b is the geometric mean between $r_{\min}$ and $r_{\max}$: $b = \sqrt{r_{\min}r_{\max}} = \dfrac{p}{\sqrt{1-\varepsilon^2}}$.

The semi-latus rectum p is the harmonic mean between $r_{\min}$ and $r_{\max}$: $p = \dfrac{2\,r_{\min}\,r_{\max}}{r_{\min}+r_{\max}}$.

The eccentricity ε is the coefficient of variation between $r_{\min}$ and $r_{\max}$: $\varepsilon = \dfrac{r_{\max}-r_{\min}}{r_{\max}+r_{\min}}$.

The area of the ellipse is $A = \pi ab$.

The special case of a circle is ε = 0, resulting in r = p = rmin = rmax = a = b and A = πr².
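These relations are straightforward to verify numerically. The following sketch (illustrative Python; the Mars-like values of a and ε are assumptions, not taken from the text) checks the arithmetic, geometric and harmonic mean identities and evaluates the orbit equation:

```python
import math

a, ecc = 1.524, 0.0934          # assumed Mars-like semi-major axis (AU) and eccentricity
p = a * (1 - ecc**2)            # semi-latus rectum
r_min = p / (1 + ecc)           # perihelion distance (theta = 0)
r_max = p / (1 - ecc)           # aphelion distance (theta = 180 degrees)

assert math.isclose(a, (r_min + r_max) / 2)                  # arithmetic mean
assert math.isclose(a * math.sqrt(1 - ecc**2),               # semi-minor axis b ...
                    math.sqrt(r_min * r_max))                # ... geometric mean
assert math.isclose(p, 2 / (1 / r_min + 1 / r_max))          # harmonic mean
assert math.isclose(ecc, (r_max - r_min) / (r_max + r_min))  # eccentricity

def radius(theta):
    """Heliocentric distance r(theta) from the first-law orbit equation."""
    return p / (1 + ecc * math.cos(theta))

print(radius(0.0), r_min)       # identical by construction
```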
Second law
Kepler's second law states that:
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The orbital radius and angular velocity of the planet in the elliptical orbit will vary: the planet travels faster when closer to the Sun, then slower when farther from the Sun. Kepler's second law states that the sector swept out by the Sun–planet line in any fixed interval of time has constant area.
History and proofs
Kepler notably arrived at this law through assumptions that were either only approximately true or outright false and can be outlined as follows:
Planets are pushed around the Sun by a force from the Sun. This false assumption relies on incorrect Aristotelian physics that an object needs to be pushed to maintain motion.
The propelling force from the Sun is inversely proportional to the distance from the Sun. Kepler reasoned this, believing that gravity spreading in three dimensions would be a waste, since the planets inhabited a plane. Thus, an inverse instead of the [correct] inverse square law.
Because Kepler believed that force would be proportional to velocity, it followed from statements #1 and #2 that velocity would be inverse to the distance from the sun. This is also an incorrect tenet of Aristotelian physics.
Since the time to cover a small piece of the orbit is inversely proportional to velocity, the distance from the Sun would be proportional to the time needed to cover a small piece of the orbit. This is approximately true for elliptical orbits.
The area swept out is proportional to the overall time. This is also approximately true.
The orbits of a planet are circular (Kepler discovered his Second Law before his First Law, which contradicts this).
Nevertheless, the result of the Second Law is exactly true, as it is logically equivalent to the conservation of angular momentum, which is true for any body experiencing a radially symmetric force. A correct proof runs as follows. Since the cross product of two vectors gives the area of a parallelogram possessing sides of those vectors, the triangular area dA swept out in a short period of time is given by half the cross product of the r and dx vectors, for some short piece of the orbit, dx:

$$dA = \tfrac{1}{2}\left|\mathbf{r}\times d\mathbf{x}\right|$$

for a small piece of the orbit dx and time dt to cover it. Thus

$$\frac{dA}{dt} = \tfrac{1}{2}\left|\mathbf{r}\times\frac{d\mathbf{x}}{dt}\right| = \tfrac{1}{2}\left|\mathbf{r}\times\mathbf{v}\right| = \frac{\left|\mathbf{r}\times m\mathbf{v}\right|}{2m}.$$
Since the final expression is proportional to the total angular momentum , Kepler's equal area law will hold for any system that conserves angular momentum. Since any radial force will produce no torque on the planet's motion, angular momentum will be conserved.
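The conservation argument can be illustrated numerically. A minimal sketch (assumed dimensionless units with GM = 1 and a simple leapfrog integrator; an illustration, not a rigorous proof) shows that r × v, and hence the areal velocity, stays constant along the orbit:

```python
# Integrate planar motion under a generic inverse-square central force
# and check that |r x v| (twice the areal velocity) is conserved.
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3            # GM = 1 in these assumed units

x, y, vx, vy = 1.0, 0.0, 0.0, 1.2      # an arbitrary bound (elliptical) orbit
dt = 1e-4
cross_initial = x * vy - y * vx        # z-component of r x v

for _ in range(200_000):               # leapfrog (kick-drift-kick) steps
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

print(cross_initial, x * vy - y * vx)  # equal to within rounding error
```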
In terms of elliptical parameters
In a small time $dt$ the planet sweeps out a small triangle having base line $r$, height $r\,d\theta$, and area $dA = \tfrac{1}{2}\,r\cdot r\,d\theta$, so the constant areal velocity is

$$\frac{dA}{dt} = \frac{r^2}{2}\,\frac{d\theta}{dt}.$$

The area enclosed by the elliptical orbit is $\pi ab$. So the period $P$ satisfies

$$P\cdot\frac{r^2}{2}\,\frac{d\theta}{dt} = \pi ab$$

and the mean motion of the planet around the Sun

$$n = \frac{2\pi}{P}$$

satisfies

$$r^2\,d\theta = ab\,n\,dt.$$

And so, the areal velocity is

$$\frac{dA}{dt} = \frac{abn}{2}.$$
Third law
Kepler's third law states that:
The ratio of the square of an object's orbital period with the cube of the semi-major axis of its orbit is the same for all objects orbiting the same primary.
This captures the relationship between the distance of planets from the Sun, and their orbital periods.
Kepler enunciated this third law in 1619, in a laborious attempt to determine what he viewed as the "music of the spheres" according to precise laws, and to express it in terms of musical notation. It was therefore known as the harmonic law. The original form of this law (referring not to the semi-major axis but to a "mean distance") holds true only for planets with small eccentricities near zero.
Using Newton's law of gravitation (published 1687), this relation can be found in the case of a circular orbit by setting the centripetal force equal to the gravitational force:

$$m r \omega^2 = G\,\frac{mM}{r^2}.$$

Then, expressing the angular velocity ω in terms of the orbital period $T$ as $\omega = \frac{2\pi}{T}$ and rearranging, results in Kepler's Third Law:

$$T^2 = \left(\frac{4\pi^2}{GM}\right) r^3, \qquad T^2 \propto r^3.$$
A more detailed derivation can be done with general elliptical orbits, instead of circles, as well as orbiting the center of mass, instead of just the large mass. This results in replacing the circular radius $r$ with the semi-major axis $a$ of the elliptical relative motion of one mass relative to the other, as well as replacing the large mass $M$ with $M + m$. However, with planet masses being so much smaller than the Sun, this correction is often ignored. The full corresponding formula is:

$$\frac{a^3}{T^2} = \frac{G(M+m)}{4\pi^2} \approx \frac{GM}{4\pi^2} \approx 7.496\times10^{-6}\ \frac{\text{AU}^3}{\text{days}^2},$$

where $M$ is the mass of the Sun, $m$ is the mass of the planet, $G$ is the gravitational constant, $T$ is the orbital period, $a$ is the elliptical semi-major axis, and AU is the astronomical unit, the average distance from Earth to the Sun.
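For illustration, the simplified form can be evaluated for an Earth-like orbit; a minimal Python sketch with standard constants (values assumed from common references, not from the text):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # mass of the Sun, kg (planet mass neglected)
a = 1.496e11        # semi-major axis of about 1 AU, in metres

T = 2 * math.pi * math.sqrt(a**3 / (G * M_sun))
print(T / 86400)    # about 365 days, as expected
```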
Table
The following table shows the data used by Kepler to empirically derive his law:
Kepler became aware of John Napier's recent invention of logarithms and log-log graphs before he discovered the pattern.
Upon finding this pattern Kepler wrote:
For comparison, here are modern estimates:
Planetary acceleration
Isaac Newton computed in his Philosophiæ Naturalis Principia Mathematica the acceleration of a planet moving according to Kepler's first and second laws.
The direction of the acceleration is towards the Sun.
The magnitude of the acceleration is inversely proportional to the square of the planet's distance from the Sun (the inverse square law).
This implies that the Sun may be the physical cause of the acceleration of planets. However, Newton states in his Principia that he considers forces from a mathematical point of view, not a physical, thereby taking an instrumentalist view. Moreover, he does not assign a cause to gravity.
Newton defined the force acting on a planet to be the product of its mass and the acceleration (see Newton's laws of motion). So:
Every planet is attracted towards the Sun.
The force acting on a planet is directly proportional to the mass of the planet and is inversely proportional to the square of its distance from the Sun.
The Sun plays an unsymmetrical part, which is unjustified. So he assumed, in Newton's law of universal gravitation:
All bodies in the Solar System attract one another.
The force between two bodies is in direct proportion to the product of their masses and in inverse proportion to the square of the distance between them.
As the planets have small masses compared to that of the Sun, the orbits conform approximately to Kepler's laws. Newton's model improves upon Kepler's model, and fits actual observations more accurately. (See two-body problem.)
Below comes the detailed calculation of the acceleration of a planet moving according to Kepler's first and second laws.
Acceleration vector
From the heliocentric point of view consider the vector to the planet $\mathbf{r} = r\,\hat{\mathbf{r}}$, where $r$ is the distance to the planet and $\hat{\mathbf{r}}$ is a unit vector pointing towards the planet.

$$\frac{d\hat{\mathbf{r}}}{dt} = \dot\theta\,\hat{\boldsymbol\theta}, \qquad \frac{d\hat{\boldsymbol\theta}}{dt} = -\dot\theta\,\hat{\mathbf{r}},$$

where $\hat{\boldsymbol\theta}$ is the unit vector whose direction is 90 degrees counterclockwise of $\hat{\mathbf{r}}$, and $\theta$ is the polar angle, and where a dot on top of the variable signifies differentiation with respect to time.

Differentiate the position vector twice to obtain the velocity vector and the acceleration vector:

$$\dot{\mathbf{r}} = \dot r\,\hat{\mathbf{r}} + r\dot\theta\,\hat{\boldsymbol\theta}, \qquad \ddot{\mathbf{r}} = (\ddot r - r\dot\theta^2)\,\hat{\mathbf{r}} + (r\ddot\theta + 2\dot r\dot\theta)\,\hat{\boldsymbol\theta}.$$

So

$$\ddot{\mathbf{r}} = a_r\,\hat{\mathbf{r}} + a_\theta\,\hat{\boldsymbol\theta},$$

where the radial acceleration is

$$a_r = \ddot r - r\dot\theta^2$$

and the transversal acceleration is

$$a_\theta = r\ddot\theta + 2\dot r\dot\theta.$$
Inverse square law
Kepler's second law says that $r^2\dot\theta$ is constant.

The transversal acceleration is zero:

$$a_\theta = r\ddot\theta + 2\dot r\dot\theta = \frac{1}{r}\,\frac{d}{dt}\left(r^2\dot\theta\right) = 0.$$

So the acceleration of a planet obeying Kepler's second law is directed towards the Sun.

The radial acceleration is

$$a_r = \ddot r - r\dot\theta^2.$$

Kepler's first law states that the orbit is described by the equation:

$$\frac{p}{r} = 1 + \varepsilon\cos\theta.$$

Differentiating with respect to time

$$-\frac{p\dot r}{r^2} = -\varepsilon\sin\theta\,\dot\theta$$

or

$$p\dot r = r^2\dot\theta\,\varepsilon\sin\theta.$$

Differentiating once more, and using the constancy of $r^2\dot\theta$,

$$p\ddot r = r^2\dot\theta\,\varepsilon\cos\theta\,\dot\theta = \frac{\left(r^2\dot\theta\right)^2}{r^2}\,\varepsilon\cos\theta.$$

The radial acceleration satisfies

$$p\,a_r = p\ddot r - p\,r\dot\theta^2 = \frac{\left(r^2\dot\theta\right)^2}{r^2}\,\varepsilon\cos\theta - \frac{p}{r}\cdot\frac{\left(r^2\dot\theta\right)^2}{r^2}.$$

Substituting the equation of the ellipse, $\frac{p}{r} = 1 + \varepsilon\cos\theta$, gives

$$p\,a_r = \frac{\left(r^2\dot\theta\right)^2}{r^2}\left(\varepsilon\cos\theta - 1 - \varepsilon\cos\theta\right) = -\frac{\left(r^2\dot\theta\right)^2}{r^2}.$$

The relation $\ell = r^2\dot\theta$, the constant specific angular momentum, gives the simple final result

$$a_r = -\frac{\ell^2}{p}\cdot\frac{1}{r^2}.$$

This means that the acceleration vector of any planet obeying Kepler's first and second law satisfies the inverse square law

$$\ddot{\mathbf{r}} = -\frac{\alpha}{r^2}\,\hat{\mathbf{r}},$$

where

$$\alpha = \frac{\ell^2}{p}$$

is a constant, and $\hat{\mathbf{r}}$ is the unit vector pointing from the Sun towards the planet, and $r$ is the distance between the planet and the Sun.

Since mean motion $n = \frac{2\pi}{P}$, where $P$ is the period, and since $\ell = abn$ and $p = \frac{b^2}{a}$, the constant is $\alpha = n^2a^3$, which according to Kepler's third law has the same value for all the planets. So the inverse square law for planetary accelerations applies throughout the entire Solar System.
The inverse square law is a differential equation. The solutions to this differential equation include the Keplerian motions, as shown, but they also include motions where the orbit is a hyperbola or parabola or a straight line. (See Kepler orbit.)
Newton's law of gravitation
By Newton's second law, the gravitational force that acts on the planet is:

$$\mathbf{F} = m\ddot{\mathbf{r}} = -\frac{m\alpha}{r^2}\,\hat{\mathbf{r}},$$

where $m$ is the mass of the planet and $\alpha$ has the same value for all planets in the Solar System. According to Newton's third law, the Sun is attracted to the planet by a force of the same magnitude. Since the force is proportional to the mass of the planet, under the symmetric consideration, it should also be proportional to the mass of the Sun, $M$. So

$$\alpha = GM,$$

where $G$ is the gravitational constant.
The acceleration of Solar System body number i is, according to Newton's laws:

$$\ddot{\mathbf{r}}_i = G\sum_{j\neq i}\frac{m_j}{r_{ij}^2}\,\hat{\mathbf{r}}_{ij},$$

where $m_j$ is the mass of body j, $r_{ij}$ is the distance between body i and body j, $\hat{\mathbf{r}}_{ij}$ is the unit vector from body i towards body j, and the vector summation is over all bodies in the Solar System, besides i itself.
In the special case where there are only two bodies in the Solar System, Earth and Sun, the acceleration becomes

$$\ddot{\mathbf{r}}_{\text{Earth}} = G\,\frac{m_{\text{Sun}}}{r_{\text{Earth,Sun}}^2}\,\hat{\mathbf{r}}_{\text{Earth,Sun}},$$

which is the acceleration of the Kepler motion. So the Earth moves around the Sun according to Kepler's laws.

If the two bodies in the Solar System are Moon and Earth the acceleration of the Moon becomes

$$\ddot{\mathbf{r}}_{\text{Moon}} = G\,\frac{m_{\text{Earth}}}{r_{\text{Moon,Earth}}^2}\,\hat{\mathbf{r}}_{\text{Moon,Earth}}.$$

So in this approximation, the Moon moves around the Earth according to Kepler's laws.
In the three-body case (Sun, Earth and Moon) the accelerations are

$$\ddot{\mathbf{r}}_{\text{Earth}} = G\,\frac{m_{\text{Sun}}}{r_{\text{Earth,Sun}}^2}\,\hat{\mathbf{r}}_{\text{Earth,Sun}} + G\,\frac{m_{\text{Moon}}}{r_{\text{Earth,Moon}}^2}\,\hat{\mathbf{r}}_{\text{Earth,Moon}},$$

and correspondingly for the Sun and the Moon.
These accelerations are not those of Kepler orbits, and the three-body problem is complicated. But Keplerian approximation is the basis for perturbation calculations. (See Lunar theory.)
Position as a function of time
Kepler used his two first laws to compute the position of a planet as a function of time. His method involves the solution of a transcendental equation called Kepler's equation.
The procedure for calculating the heliocentric polar coordinates (r,θ) of a planet as a function of the time t since perihelion is the following five steps:

Compute the mean motion $n = \frac{2\pi}{P}$, where P is the period.

Compute the mean anomaly $M = nt$, where t is the time since perihelion.

Compute the eccentric anomaly E by solving Kepler's equation $M = E - \varepsilon\sin E$, where $\varepsilon$ is the eccentricity.

Compute the true anomaly θ by solving the equation $\tan\frac{\theta}{2} = \sqrt{\frac{1+\varepsilon}{1-\varepsilon}}\,\tan\frac{E}{2}$.

Compute the heliocentric distance $r = a(1 - \varepsilon\cos E)$, where $a$ is the semimajor axis.

The position polar coordinates (r,θ) can now be written as a Cartesian vector and the Cartesian velocity vector can then be calculated as $\mathbf{v} = \frac{\sqrt{\mu a}}{r}\,\langle -\sin E,\ \sqrt{1-\varepsilon^2}\,\cos E\rangle$, where $\mu$ is the standard gravitational parameter.

The important special case of circular orbit, ε = 0, gives θ = E = M. Because the uniform circular motion was considered to be normal, a deviation from this motion was considered an anomaly.
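The five steps translate directly into a short routine. A minimal sketch (assuming simple fixed-point iteration for Kepler's equation, which converges for ε < 1; function and variable names are illustrative):

```python
import math

def kepler_position(t, P, a, ecc, tol=1e-12):
    """Heliocentric polar coordinates (r, theta) at time t since perihelion."""
    n = 2 * math.pi / P                 # step 1: mean motion
    M = n * t                           # step 2: mean anomaly
    E = M                               # step 3: solve M = E - ecc*sin(E)
    while True:                         #   by fixed-point iteration
        E_next = M + ecc * math.sin(E)
        if abs(E_next - E) < tol:
            break
        E = E_next
    theta = 2 * math.atan(              # step 4: true anomaly
        math.sqrt((1 + ecc) / (1 - ecc)) * math.tan(E / 2))
    r = a * (1 - ecc * math.cos(E))     # step 5: heliocentric distance
    return r, theta

# Example: an Earth-like orbit a quarter of a period after perihelion.
print(kepler_position(t=91.31, P=365.25, a=1.0, ecc=0.0167))
```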
The proof of this procedure is shown below.
Mean anomaly, M
The Keplerian problem assumes an elliptical orbit and the four points:
s the Sun (at one focus of ellipse);
z the perihelion
c the center of the ellipse
p the planet
and
a the distance between center and perihelion, the semimajor axis,

ε the eccentricity,

b the semiminor axis,

r the distance between Sun and planet,

θ the direction to the planet as seen from the Sun, the true anomaly.
The problem is to compute the polar coordinates (r,θ) of the planet from the time since perihelion, t.
It is solved in steps. Kepler considered the circle with the major axis as a diameter, and
x the projection of the planet to the auxiliary circle

y the point on the circle such that the sector areas |zcy| and |zsx| are equal,

M the mean anomaly, the angle ∠zcy.

The sector areas are related by $|zsp| = \frac{b}{a}\,|zsx|$, since the ellipse is the auxiliary circle compressed vertically in the ratio b/a.

The circular sector area is

$$|zcy| = \frac{a^2 M}{2}.$$

The area swept since perihelion,

$$|zsp| = \frac{b}{a}\,|zsx| = \frac{b}{a}\,|zcy| = \frac{abM}{2},$$

is by Kepler's second law proportional to time since perihelion. So the mean anomaly, M, is proportional to time since perihelion, t:

$$M = nt,$$

where n is the mean motion.
Eccentric anomaly, E
When the mean anomaly M is computed, the goal is to compute the true anomaly θ. The function θ = f(M) is, however, not elementary. Kepler's solution is to use

E, the angle ∠zcx of x as seen from the centre, the eccentric anomaly,

as an intermediate variable, and first compute E as a function of M by solving Kepler's equation below, and then compute the true anomaly θ from the eccentric anomaly E. Here are the details.

$$|zcy| = |zsx| = |zcx| - |scx| = \frac{a^2E}{2} - \frac{a\varepsilon\cdot a\sin E}{2}.$$

Division by a²/2 gives Kepler's equation

$$M = E - \varepsilon\sin E.$$
This equation gives M as a function of E. Determining E for a given M is the inverse problem. Iterative numerical algorithms are commonly used.
Having computed the eccentric anomaly E, the next step is to calculate the true anomaly θ.
But note: Cartesian position coordinates with reference to the center of the ellipse are (a cos E, b sin E);

with reference to the Sun (with coordinates (c,0) = (aε,0)), r = (a cos E − aε, b sin E).

The true anomaly would be arctan(ry/rx), and the magnitude of r would be $\sqrt{r_x^2+r_y^2} = a(1-\varepsilon\cos E)$.
True anomaly, θ
Note from the figure that

$$r\cos\theta = a\cos E - a\varepsilon,$$

so that

$$a\cos E = a\varepsilon + r\cos\theta.$$

Dividing by $a$ and inserting from Kepler's first law

$$\frac{r}{a} = \frac{1-\varepsilon^2}{1+\varepsilon\cos\theta}$$

to get

$$\cos E = \varepsilon + \frac{1-\varepsilon^2}{1+\varepsilon\cos\theta}\,\cos\theta = \frac{\varepsilon + \cos\theta}{1+\varepsilon\cos\theta}.$$

The result is a usable relationship between the eccentric anomaly E and the true anomaly θ.

A computationally more convenient form follows by substituting into the trigonometric identity:

$$\tan^2\frac{x}{2} = \frac{1-\cos x}{1+\cos x}.$$

Get

$$\tan^2\frac{E}{2} = \frac{1-\cos E}{1+\cos E} = \frac{1-\varepsilon}{1+\varepsilon}\cdot\frac{1-\cos\theta}{1+\cos\theta} = \frac{1-\varepsilon}{1+\varepsilon}\,\tan^2\frac{\theta}{2}.$$

Multiplying by $\frac{1+\varepsilon}{1-\varepsilon}$ and taking the square root gives the result

$$\tan\frac{\theta}{2} = \sqrt{\frac{1+\varepsilon}{1-\varepsilon}}\,\tan\frac{E}{2}.$$
This is the third step in the connection between time and position in the orbit.
Distance, r
The fourth step is to compute the heliocentric distance r from the true anomaly θ by Kepler's first law:

$$r = \frac{a(1-\varepsilon^2)}{1+\varepsilon\cos\theta}.$$

Using the relation above between θ and E the final equation for the distance r is:

$$r = a(1 - \varepsilon\cos E).$$
| Physical sciences | Celestial mechanics | null |
17556 | https://en.wikipedia.org/wiki/Laser | Laser | A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The word laser originated as an acronym for light amplification by stimulated emission of radiation. The first laser was built in 1960 by Theodore Maiman at Hughes Research Laboratories, based on theoretical work by Charles H. Townes and Arthur Leonard Schawlow and the optical amplifier patented by Gordon Gould.
A laser differs from other sources of light in that it emits light that is coherent. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as optical communication, laser cutting, and lithography. It also allows a laser beam to stay narrow over great distances (collimation), a feature used in applications such as laser pointers, lidar, and free-space optical communication. Lasers can also have high temporal coherence, which permits them to emit light with a very narrow frequency spectrum. Temporal coherence can also be used to produce ultrashort pulses of light with a broad spectrum but durations as short as an attosecond.
Lasers are used in fiber-optic and free-space optical communications, optical disc drives, laser printers, barcode scanners, semiconductor chip manufacturing (photolithography, etching), laser surgery and skin treatments, cutting and welding materials, military and law enforcement devices for marking targets and measuring range and speed, and in laser lighting displays for entertainment. Lasers transport the majority of Internet traffic. The laser is regarded as one of the greatest inventions of the 20th century.
Terminology
The first device using amplification by stimulated emission operated at microwave frequencies, and was called a maser, for "microwave amplification by stimulated emission of radiation". When similar optical devices were developed they were first called optical masers, until "microwave" was replaced by "light" in the acronym, to become laser.
Today, all such devices operating at frequencies higher than microwaves (approximately above 300 GHz) are called lasers (e.g. infrared lasers, ultraviolet lasers, X-ray lasers, gamma-ray lasers), whereas devices operating at microwave or lower radio frequencies are called masers.
The back-formed verb "to lase" is frequently used in the field, meaning "to give off coherent light," especially about the gain medium of a laser; when a laser is operating, it is said to be "lasing". The terms laser and maser are also used for naturally occurring coherent emissions, as in astrophysical maser and atom laser.
A laser that produces light by itself is technically an optical oscillator rather than an optical amplifier as suggested by the acronym. It has been humorously noted that the acronym LOSER, for "light oscillation by stimulated emission of radiation", would have been more correct. Some sources refer to the word laser as an anacronym, meaning an acronym so widely used as a noun that it is no longer considered an abbreviation.
Fundamentals
Photons, the quanta of electromagnetic radiation, are released and absorbed from energy levels in atoms and molecules. In a lightbulb or a star, the energy is emitted from many different levels, giving photons with a broad range of energies. This process is called thermal radiation.
The underlying physical process creating photons in a laser is the same as in thermal radiation, but the actual emission is not the result of random thermal processes. Instead, the release of a photon is triggered by the nearby passage of another photon. This is called stimulated emission. For this process to work, the passing photon must be similar in energy, and thus wavelength, to the one that could be released by the atom or molecule, and the atom or molecule must be in the suitable excited state.
The photon that is emitted by stimulated emission is identical to the photon that triggered its emission, and both photons can go on to trigger stimulated emission in other atoms, creating the possibility of a chain reaction. For this to happen, many of the atoms or molecules must be in the proper excited state so that the photons can trigger them. In most materials, atoms or molecules drop out of excited states fairly rapidly, making it difficult or impossible to produce a chain reaction. The materials chosen for lasers are the ones that have metastable states, which stay excited for a relatively long time. In laser physics, such a material is called an active laser medium. Combined with an energy source that continues to "pump" energy into the material, it is possible to have enough atoms or molecules in an excited state for a chain reaction to develop.
Lasers are distinguished from other light sources by their coherence. Spatial (or transverse) coherence is typically expressed through the output being a narrow beam, which is diffraction-limited. Laser beams can be focused to very tiny spots, achieving a very high irradiance, or they can have a very low divergence to concentrate their power at a great distance. Temporal (or longitudinal) coherence implies a polarized wave at a single frequency, whose phase is correlated over a relatively great distance (the coherence length) along the beam. A beam produced by a thermal or other incoherent light source has an instantaneous amplitude and phase that vary randomly with respect to time and position, thus having a short coherence length.
Lasers are characterized according to their wavelength in a vacuum. Most "single wavelength" lasers produce radiation in several modes with slightly different wavelengths. Although temporal coherence implies some degree of monochromaticity, some lasers emit a broad spectrum of light or emit different wavelengths of light simultaneously. Certain lasers are not single spatial mode and have light beams that diverge more than is required by the diffraction limit. All such devices are classified as "lasers" based on the method of producing light by stimulated emission. Lasers are employed where light of the required spatial or temporal coherence can not be produced using simpler technologies.
Design
A laser consists of a gain medium, a mechanism to energize it, and something to provide optical feedback. The gain medium is a material with properties that allow it to amplify light by way of stimulated emission. Light of a specific wavelength that passes through the gain medium is amplified (power increases). Feedback enables stimulated emission to amplify predominantly the optical frequency at the peak of the gain-frequency curve. As stimulated emission grows, eventually one frequency dominates over all others, meaning that a coherent beam has been formed.
The process of stimulated emission is analogous to that of an audio oscillator with positive feedback which can occur, for example, when the speaker in a public-address system is placed in proximity to the microphone. The screech one hears is audio oscillation at the peak of the gain-frequency curve for the amplifier.
For the gain medium to amplify light, it needs to be supplied with energy in a process called pumping. The energy is typically supplied as an electric current or as light at a different wavelength. Pump light may be provided by a flash lamp or by another laser.
The most common type of laser uses feedback from an optical cavity, a pair of mirrors on either end of the gain medium. Light bounces back and forth between the mirrors, passing through the gain medium and being amplified each time. Typically one of the two mirrors, the output coupler, is partially transparent. Some of the light escapes through this mirror. Depending on the design of the cavity (whether the mirrors are flat or curved), the light coming out of the laser may spread out or form a narrow beam. In analogy to electronic oscillators, this device is sometimes called a laser oscillator.
Most practical lasers contain additional elements that affect the properties of the emitted light, such as the polarization, wavelength, and shape of the beam.
Laser physics
Electrons and how they interact with electromagnetic fields are important in our understanding of chemistry and physics.
Stimulated emission
In the classical view, the energy of an electron orbiting an atomic nucleus is larger for orbits further from the nucleus of an atom. However, quantum mechanical effects force electrons to take on discrete positions in orbitals. Thus, electrons are found in specific energy levels of an atom; laser operation involves transitions between two such levels.
An electron in an atom can absorb energy from light (photons) or heat (phonons) only if there is a transition between energy levels that match the energy carried by the photon or phonon. For light, this means that any given transition will only absorb one particular wavelength of light. Photons with the correct wavelength can cause an electron to jump from the lower to the higher energy level. The photon is consumed in this process.
When an electron is excited from one state to that at a higher energy level with energy difference ΔE, it will not stay that way forever. Eventually, a photon will be spontaneously created from the vacuum having energy ΔE. Conserving energy, the electron transitions to a lower energy level that is not occupied, with transitions to different levels having different time constants. This process is called spontaneous emission. Spontaneous emission is a quantum-mechanical effect and a direct physical manifestation of the Heisenberg uncertainty principle. The emitted photon has a random direction, but its wavelength matches the absorption wavelength of the transition. This is the mechanism of fluorescence and thermal emission.
A photon with the correct wavelength to be absorbed by a transition can also cause an electron to drop from the higher to the lower level, emitting a new photon. The emitted photon exactly matches the original photon in wavelength, phase, and direction. This process is called stimulated emission.
Gain medium and cavity
The gain medium is put into an excited state by an external source of energy. In most lasers, this medium consists of a population of atoms that have been excited into such a state using an outside light source, or an electrical field that supplies energy for atoms to absorb and be transformed into their excited states.
The gain medium of a laser is normally a material of controlled purity, size, concentration, and shape, which amplifies the beam by the process of stimulated emission described above. This material can be of any state: gas, liquid, solid, or plasma. The gain medium absorbs pump energy, which raises some electrons into higher energy ("excited") quantum states. Particles can interact with light by either absorbing or emitting photons. Emission can be spontaneous or stimulated. In the latter case, the photon is emitted in the same direction as the light that is passing by. When the number of particles in one excited state exceeds the number of particles in some lower-energy state, population inversion is achieved. In this state, the rate of stimulated emission is larger than the rate of absorption of light in the medium, and therefore the light is amplified. A system with this property is called an optical amplifier. When an optical amplifier is placed inside a resonant optical cavity, one obtains a laser.
For lasing media with extremely high gain, so-called superluminescence, light can be sufficiently amplified in a single pass through the gain medium without requiring a resonator. Although often referred to as a laser (see, for example, nitrogen laser), the light output from such a device lacks the spatial and temporal coherence achievable with lasers. Such a device cannot be described as an oscillator but rather as a high-gain optical amplifier that amplifies its spontaneous emission. The same mechanism describes so-called astrophysical masers/lasers.
The optical resonator is sometimes referred to as an "optical cavity", but this is a misnomer: lasers use open resonators as opposed to the literal cavity that would be employed at microwave frequencies in a maser.
The resonator typically consists of two mirrors between which a coherent beam of light travels in both directions, reflecting on itself so that an average photon will pass through the gain medium repeatedly before it is emitted from the output aperture or lost to diffraction or absorption.
If the gain (amplification) in the medium is larger than the resonator losses, then the power of the recirculating light can rise exponentially. But each stimulated emission event returns an atom from its excited state to the ground state, reducing the gain of the medium. With increasing beam power, the net gain (gain minus loss) reduces to unity and the gain medium is said to be saturated. In a continuous wave (CW) laser, the balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the applied pump power is too small, the gain will never be sufficient to overcome the cavity losses, and laser light will not be produced. The minimum pump power needed to begin laser action is called the lasing threshold. The gain medium will amplify any photons passing through it, regardless of direction; but only the photons in a spatial mode supported by the resonator will pass more than once through the medium and receive substantial amplification.
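These gain and loss statements can be made quantitative. For a two-mirror resonator of length L with mirror reflectivities R1 and R2 and internal loss coefficient α, oscillation requires the round-trip intensity gain to reach unity, R1·R2·exp(2(g − α)L) ≥ 1. A minimal sketch with assumed, illustrative numbers (not taken from the text):

```python
import math

L = 0.10             # cavity / gain-medium length in metres (assumed)
R1, R2 = 0.99, 0.95  # mirror power reflectivities (assumed; R2 is the output coupler)
alpha = 0.01         # internal loss coefficient, 1/m (assumed)

# At threshold the round-trip gain is exactly unity:
#   R1 * R2 * exp(2 * (g - alpha) * L) = 1
g_threshold = alpha + math.log(1 / (R1 * R2)) / (2 * L)
print(f"threshold gain coefficient: {g_threshold:.3f} per metre")
```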
The light emitted
In most lasers, lasing begins with spontaneous emission into the lasing mode. This initial light is then amplified by stimulated emission in the gain medium. Stimulated emission produces light that matches the input signal in direction, wavelength, and polarization, whereas the phase of the emitted light is 90 degrees in lead of the stimulating light. This, combined with the filtering effect of the optical resonator gives laser light its characteristic coherence, and may give it uniform polarization and monochromaticity, depending on the resonator's design. The fundamental laser linewidth of light emitted from the lasing resonator can be orders of magnitude narrower than the linewidth of light emitted from the passive resonator. Some lasers use a separate injection seeder to start the process off with a beam that is already highly coherent. This can produce beams with a narrower spectrum than would otherwise be possible.
In 1963, Roy J. Glauber showed that coherent states are formed from combinations of photon number states, for which he was awarded the Nobel Prize in Physics. A coherent beam of light is formed by single-frequency quantum photon states distributed according to a Poisson distribution. As a result, the arrival rate of photons in a laser beam is described by Poisson statistics.
Many lasers produce a beam that can be approximated as a Gaussian beam; such beams have the minimum divergence possible for a given beam diameter. Some lasers, particularly high-power ones, produce multimode beams, with the transverse modes often approximated using Hermite–Gaussian or Laguerre-Gaussian functions. Some high-power lasers use a flat-topped profile known as a "tophat beam". Unstable laser resonators (not used in most lasers) produce fractal-shaped beams. Specialized optical systems can produce more complex beam geometries, such as Bessel beams and optical vortexes.
Near the "waist" (or focal region) of a laser beam, it is highly collimated: the wavefronts are planar, normal to the direction of propagation, with no beam divergence at that point. However, due to diffraction, that can only remain true well within the Rayleigh range. The beam of a single transverse mode (gaussian beam) laser eventually diverges at an angle that varies inversely with the beam diameter, as required by diffraction theory. Thus, the "pencil beam" directly generated by a common helium–neon laser would spread out to a size of perhaps 500 kilometers when shone on the Moon (from the distance of the Earth). On the other hand, the light from a semiconductor laser typically exits the tiny crystal with a large divergence: up to 50°. However even such a divergent beam can be transformed into a similarly collimated beam employing a lens system, as is always included, for instance, in a laser pointer whose light originates from a laser diode. That is possible due to the light being of a single spatial mode. This unique property of laser light, spatial coherence, cannot be replicated using standard light sources (except by discarding most of the light) as can be appreciated by comparing the beam from a flashlight (torch) or spotlight to that of almost any laser.
A laser beam profiler is used to measure the intensity profile, width, and divergence of laser beams.
Diffuse reflection of a laser beam from a matte surface produces a speckle pattern with interesting properties.
Quantum vs. classical emission processes
The mechanism of producing radiation in a laser relies on stimulated emission, where energy is extracted from a transition in an atom or molecule. This is a quantum phenomenon that was predicted by Albert Einstein, who derived the relationship between the A coefficient, describing spontaneous emission, and the B coefficient which applies to absorption and stimulated emission. In the case of the free-electron laser, atomic energy levels are not involved; it appears that the operation of this rather exotic device can be explained without reference to quantum mechanics.
Modes of operation
A laser can be classified as operating in either continuous or pulsed mode, depending on whether the power output is essentially continuous over time or whether its output takes the form of pulses of light on one or another time scale. Of course, even a laser whose output is normally continuous can be intentionally turned on and off at some rate to create pulses of light. When the modulation rate is on time scales much slower than the cavity lifetime and the period over which energy can be stored in the lasing medium or pumping mechanism, then it is still classified as a "modulated" or "pulsed" continuous wave laser. Most laser diodes used in communication systems fall into that category.
Continuous-wave operation
Some applications of lasers depend on a beam whose output power is constant over time. Such a laser is known as a continuous-wave (CW) laser. Many types of lasers can be made to operate in continuous-wave mode to satisfy such an application. Many of these lasers lase in several longitudinal modes at the same time, and beats between the slightly different optical frequencies of those oscillations will produce amplitude variations on time scales shorter than the round-trip time (the reciprocal of the frequency spacing between modes), typically a few nanoseconds or less. In most cases, these lasers are still termed "continuous-wave" as their output power is steady when averaged over longer periods, with the very high-frequency power variations having little or no impact on the intended application. (However, the term is not applied to mode-locked lasers, where the intention is to create very short pulses at the rate of the round-trip time.)
For continuous-wave operation, the population inversion of the gain medium needs to be continually replenished by a steady pump source. In some lasing media, this is impossible. In some other lasers, it would require pumping the laser at a very high continuous power level, which would be impractical, or destroying the laser by producing excessive heat. Such lasers cannot be run in CW mode.
Pulsed operation
The pulsed operation of lasers refers to any laser not classified as a continuous wave so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing many different motivations. Some lasers are pulsed simply because they cannot be run in continuous mode.
In other cases, the application requires the production of pulses having as large an energy as possible. Since the pulse energy is equal to the average power divided by the repetition rate, this goal can sometimes be satisfied by lowering the rate of pulses so that more energy can be built up between pulses. In laser ablation, for example, a small volume of material at the surface of a workpiece can be evaporated if it is heated in a very short time, while supplying the energy gradually would allow for the heat to be absorbed into the bulk of the piece, never attaining a sufficiently high temperature at a particular point.
Other applications rely on the peak pulse power (rather than the energy in the pulse), especially to obtain nonlinear optical effects. For a given pulse energy, this requires creating pulses of the shortest possible duration utilizing techniques such as Q-switching.
The optical bandwidth of a pulse cannot be narrower than the reciprocal of the pulse width. In the case of extremely short pulses, that implies lasing over a considerable bandwidth, quite contrary to the very narrow bandwidths typical of CW lasers. The lasing medium in some dye lasers and vibronic solid-state lasers produces optical gain over a wide bandwidth, making possible a laser that can generate pulses of light as short as a few femtoseconds (10⁻¹⁵ s).
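These pulse relationships are simple to quantify. A minimal sketch with assumed, illustrative values (a 1 W average-power laser emitting 10 ns pulses at 1 kHz), taking the minimum optical bandwidth as roughly the reciprocal of the pulse duration as stated above:

```python
avg_power = 1.0      # average output power, W (assumed)
rep_rate = 1e3       # pulse repetition rate, Hz (assumed)
pulse_len = 10e-9    # pulse duration, s (assumed, Q-switched regime)

pulse_energy = avg_power / rep_rate     # 1 mJ per pulse
peak_power = pulse_energy / pulse_len   # 100 kW peak for these numbers
min_bandwidth = 1 / pulse_len           # ~1e8 Hz minimum optical bandwidth

print(pulse_energy, peak_power, min_bandwidth)
```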
Q-switching
In a Q-switched laser, the population inversion is allowed to build up by introducing loss inside the resonator which exceeds the gain of the medium; this can also be described as a reduction of the quality factor or 'Q' of the cavity. Then, after the pump energy stored in the laser medium has approached the maximum possible level, the introduced loss mechanism (often an electro- or acousto-optical element) is rapidly removed (or that occurs by itself in a passive device), allowing lasing to begin which rapidly obtains the stored energy in the gain medium. This results in a short pulse incorporating that energy, and thus a high peak power.
Mode locking
A mode-locked laser is capable of emitting extremely short pulses on the order of tens of picoseconds down to less than 10 femtoseconds. These pulses repeat at the round-trip time, that is, the time that it takes light to complete one round trip between the mirrors comprising the resonator. Due to the Fourier limit (also known as energy–time uncertainty), a pulse of such short temporal length has a spectrum spread over a considerable bandwidth. Thus such a gain medium must have a gain bandwidth sufficiently broad to amplify those frequencies. An example of a suitable material is titanium-doped, artificially grown sapphire (Ti:sapphire), which has a very wide gain bandwidth and can thus produce pulses of only a few femtoseconds duration.
Such mode-locked lasers are a most versatile tool for researching processes occurring on extremely short time scales (known as femtosecond physics, femtosecond chemistry and ultrafast science), and for maximizing the effect of nonlinearity in optical materials (e.g. in second-harmonic generation, parametric down-conversion, optical parametric oscillators and the like). Unlike the giant pulse of a Q-switched laser, consecutive pulses from a mode-locked laser are phase-coherent; that is, the pulses (and not just their envelopes) are identical and perfectly periodic. For this reason, and because of the extremely large peak powers attained by such short pulses, such lasers are invaluable in certain areas of research.
Pulsed pumping
Another method of achieving pulsed laser operation is to pump the laser material with a source that is itself pulsed, either through electronic charging in the case of flash lamps, or another laser that is already pulsed. Pulsed pumping was historically used with dye lasers where the inverted population lifetime of a dye molecule was so short that a high-energy, fast pump was needed. The way to overcome this problem was to charge up large capacitors which are then switched to discharge through flashlamps, producing an intense flash. Pulsed pumping is also required for three-level lasers in which the lower energy level rapidly becomes highly populated, preventing further lasing until those atoms relax to the ground state. These lasers, such as the excimer laser and the copper vapor laser, can never be operated in CW mode.
History
Foundations
In 1917, Albert Einstein established the theoretical foundations for the laser and the maser in the paper "Zur Quantentheorie der Strahlung" ("On the Quantum Theory of Radiation") via a re-derivation of Max Planck's law of radiation, conceptually based upon probability coefficients (Einstein coefficients) for the absorption, spontaneous emission, and stimulated emission of electromagnetic radiation. In 1928, Rudolf W. Ladenburg confirmed the existence of the phenomena of stimulated emission and negative absorption. In 1939, Valentin A. Fabrikant predicted using stimulated emission to amplify "short" waves. In 1947, Willis E. Lamb and R. C. Retherford found apparent stimulated emission in hydrogen spectra and effected the first demonstration of stimulated emission. In 1950, Alfred Kastler (Nobel Prize for Physics 1966) proposed the method of optical pumping, which was experimentally demonstrated two years later by Brossel, Kastler, and Winter.
Maser
In 1951, Joseph Weber submitted a paper on using stimulated emissions to make a microwave amplifier to the June 1952 Institute of Radio Engineers Vacuum Tube Research Conference in Ottawa, Ontario, Canada. After this presentation, RCA asked Weber to give a seminar on this idea, and Charles H. Townes asked him for a copy of the paper.
In 1953, Charles H. Townes and graduate students James P. Gordon and Herbert J. Zeiger produced the first microwave amplifier, a device operating on similar principles to the laser, but amplifying microwave radiation rather than infrared or visible radiation. Townes's maser was incapable of continuous output. Meanwhile, in the Soviet Union, Nikolay Basov and Aleksandr Prokhorov were independently working on the quantum oscillator and solved the problem of continuous-output systems by using more than two energy levels. These gain media could release stimulated emissions between an excited state and a lower excited state, not the ground state, facilitating the maintenance of a population inversion. In 1955, Prokhorov and Basov suggested optical pumping of a multi-level system as a method for obtaining the population inversion, later a main method of laser pumping.
Townes reports that several eminent physicists, among them Niels Bohr, John von Neumann, and Llewellyn Thomas, argued the maser violated Heisenberg's uncertainty principle and hence could not work. Others, such as Isidor Rabi and Polykarp Kusch, expected that it would be impractical and not worth the effort. In 1964, Charles H. Townes, Nikolay Basov, and Aleksandr Prokhorov shared the Nobel Prize in Physics, "for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser–laser principle".
Laser
In April 1957, Japanese engineer Jun-ichi Nishizawa proposed the concept of a "semiconductor optical maser" in a patent application. That same year, Charles H. Townes and Arthur Leonard Schawlow, then at Bell Labs, began a serious study of infrared "optical masers". As ideas developed, they abandoned infrared radiation to instead concentrate on visible light.
Simultaneously, Columbia University graduate student Gordon Gould was working on a doctoral thesis about the energy levels of excited thallium. Gould and Townes met and talked about radiation emission as a general subject, but not the specific work they were pursuing. Later, in November 1957, Gould noted his ideas for how a "laser" could be made, including using an open resonator (an essential laser-device component). His notebook included a diagram of an optically pumped laser. It also contained the first recorded use of the term "laser," an acronym for "light amplification by stimulated emission of radiation," along with suggestions for potential applications of the coherent light beams described.
In 1958, Bell Labs filed a patent application for Schawlow and Townes's proposed optical maser; and Schawlow and Townes published a paper with their theoretical calculations in the Physical Review. That same year, Prokhorov independently proposed using an open resonator, the first published appearance of this idea.
At a conference in 1959, Gordon Gould first published the acronym "LASER" in the paper The LASER, Light Amplification by Stimulated Emission of Radiation. Gould's intention was that different "-ASER" acronyms should be used for different parts of the spectrum: "XASER" for x-rays, "UVASER" for ultraviolet, "RASER" for radio-wave, etc. Instead, the term "LASER" ended up being used for all devices operating at wavelengths shorter than microwaves.
Gould's notes included possible applications for a laser, such as optical telecommunications, spectrometry, interferometry, radar, and nuclear fusion. He continued developing the idea and filed a patent application in April 1959. The United States Patent and Trademark Office (USPTO) denied his application and awarded a patent to Bell Labs in 1960. That provoked a twenty-eight-year lawsuit, with scientific prestige and money at stake. Gould won his first minor patent in 1977, yet it was not until 1987 that he won the first significant patent lawsuit victory, when a Federal judge ordered the USPTO to issue patents to Gould for the optically pumped and the gas discharge laser devices. The question of just how to assign credit for inventing the laser remains unresolved by historians.
On May 16, 1960, Theodore H. Maiman operated the first functioning laser at Hughes Research Laboratories, Malibu, California, ahead of several research teams, including those of Townes, at Columbia University, Arthur L. Schawlow, at Bell Labs, and Gould, at the TRG (Technical Research Group) company. Maiman's functional laser used a flashlamp-pumped synthetic ruby crystal to produce red laser light at 694 nanometers wavelength. The device was only capable of pulsed operation, due to its three-level pumping scheme. Later that year, the Iranian physicist Ali Javan, William R. Bennett Jr., and Donald R. Herriott constructed the first gas laser, using helium and neon, that was capable of continuous operation in the infrared (U.S. Patent 3,149,290); later, Javan received the Albert Einstein World Award of Science in 1993. In 1962, Robert N. Hall demonstrated the first semiconductor laser, which was made of gallium arsenide and emitted in the near-infrared band of the spectrum at 850 nm. Later that year, Nick Holonyak Jr. demonstrated the first semiconductor laser with a visible emission. This first semiconductor laser could be used only in pulsed-beam operation, and only when cooled to liquid nitrogen temperatures (77 K). In 1970, Zhores Alferov, in the USSR, and Izuo Hayashi and Morton Panish of Bell Labs independently developed room-temperature, continuous-operation diode lasers, using the heterojunction structure.
Recent innovations
Since the early period of laser history, laser research has produced a variety of improved and specialized laser types, optimized for different performance goals, including:
new wavelength bands
maximum average output power
maximum peak pulse energy
maximum peak pulse power
minimum output pulse duration
minimum linewidth
maximum power efficiency
minimum cost
Research on improving these aspects of lasers continues to this day.
In 2015, researchers made a white laser, whose light is modulated by a synthetic nanosheet made out of zinc, cadmium, sulfur, and selenium that can emit red, green, and blue light in varying proportions, with each wavelength spanning 191 nm.
In 2017, researchers at the Delft University of Technology demonstrated an AC Josephson junction microwave laser. Since the laser operates in the superconducting regime, it is more stable than other semiconductor-based lasers. The device has potential applications in quantum computing. Also in 2017, researchers at the Technical University of Munich demonstrated the smallest mode-locked laser, capable of emitting pairs of phase-locked picosecond laser pulses with a repetition frequency up to 200 GHz.
In 2017, researchers from the Physikalisch-Technische Bundesanstalt (PTB), together with US researchers from JILA, a joint institute of the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder, established a new world record by developing an erbium-doped fiber laser with a linewidth of only 10 millihertz.
Types and operating principles
Gas lasers
Following the invention of the HeNe gas laser, many other gas discharges have been found to amplify light coherently.
Gas lasers using many different gases have been built and used for many purposes. The helium–neon laser (HeNe) can operate at many different wavelengths; however, the vast majority are engineered to lase at 633 nm. These relatively low-cost but highly coherent lasers are extremely common in optical research and educational laboratories. Commercial carbon dioxide (CO2) lasers can emit many hundreds of watts in a single spatial mode which can be concentrated into a tiny spot. This emission is in the thermal infrared at 10.6 μm; such lasers are regularly used in industry for cutting and welding. The efficiency of a CO2 laser is unusually high: over 30%. Argon-ion lasers can operate at several lasing transitions between 351 and 528.7 nm. Depending on the optical design, one or more of these transitions can lase simultaneously; the most commonly used lines are 458 nm, 488 nm and 514.5 nm. A nitrogen transverse electrical discharge in gas at atmospheric pressure (TEA) laser is an inexpensive gas laser, often home-built by hobbyists, which produces rather incoherent UV light at 337.1 nm. Metal ion lasers are gas lasers that generate deep ultraviolet wavelengths; helium–silver (HeAg) at 224 nm and neon–copper (NeCu) at 248 nm are two examples. Like all low-pressure gas lasers, the gain media of these lasers have quite narrow oscillation linewidths, less than 3 GHz (0.5 picometers), making them candidates for use in fluorescence-suppressed Raman spectroscopy.
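The quoted conversion between a frequency linewidth and a wavelength linewidth follows from ν = c/λ, which gives |Δλ| = λ²Δν/c. A minimal sketch of the arithmetic (the function name is ours; the values reproduce the figure quoted above for a deep-UV line):

```python
# Convert a laser linewidth from frequency units to wavelength units
# using |dlambda| = lambda**2 * dnu / c (differentiating nu = c/lambda).
C = 299_792_458.0  # speed of light, m/s

def linewidth_pm(center_nm: float, dnu_ghz: float) -> float:
    """Return the wavelength linewidth in picometers."""
    lam = center_nm * 1e-9          # center wavelength, m
    dnu = dnu_ghz * 1e9             # frequency linewidth, Hz
    return lam**2 * dnu / C * 1e12  # meters -> picometers

# A 3 GHz linewidth at the 224 nm HeAg line is about 0.5 pm.
print(f"{linewidth_pm(224.0, 3.0):.2f} pm")
```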
Lasing without maintaining the medium excited into a population inversion was demonstrated in 1992 in sodium gas and again in 1995 in rubidium gas by various international teams. This was accomplished by using an external maser to induce "optical transparency" in the medium: the ground-state electron transitions along two paths were made to interfere destructively, canceling the probability that the ground-state electrons would absorb any energy.
Chemical lasers
Chemical lasers are powered by a chemical reaction permitting a large amount of energy to be released quickly. Such very high-power lasers are especially of interest to the military; however, continuous-wave chemical lasers at very high power levels, fed by streams of gases, have been developed and have some industrial applications. As examples, in the hydrogen fluoride laser (2700–2900 nm) and the deuterium fluoride laser (3800 nm) the reaction is the combination of hydrogen or deuterium gas with combustion products of ethylene in nitrogen trifluoride.
Excimer lasers
Excimer lasers are a special sort of gas laser powered by an electric discharge in which the lasing medium is an excimer, or more precisely an exciplex in existing designs. These are molecules that can only exist with one atom in an excited electronic state. Once the molecule transfers its excitation energy to a photon, its atoms are no longer bound to each other, and the molecule disintegrates. This drastically reduces the population of the lower energy state, greatly facilitating a population inversion. Excimers currently used are all noble gas compounds; noble gases are chemically inert and can only form compounds while in an excited state. Excimer lasers typically operate at ultraviolet wavelengths, with major applications including semiconductor photolithography and LASIK eye surgery. Commonly used excimer molecules include ArF (emission at 193 nm), KrCl (222 nm), KrF (248 nm), XeCl (308 nm), and XeF (351 nm).
The molecular fluorine laser, emitting at 157 nm in the vacuum ultraviolet, is sometimes referred to as an excimer laser; however, this appears to be a misnomer since F2 is a stable compound.
Solid-state lasers
Solid-state lasers use a crystalline or glass rod that is "doped" with ions that provide the required energy states. For example, the first working laser was a ruby laser, made from ruby (chromium-doped corundum). The population inversion is maintained in the dopant. These materials are pumped optically using a shorter wavelength than the lasing wavelength, often from a flash tube or another laser. The usage of the term "solid-state" in laser physics is narrower than in typical use. Semiconductor lasers (laser diodes) are typically not referred to as solid-state lasers.
Neodymium is a common dopant in various solid-state laser crystals, including yttrium orthovanadate (Nd:YVO4), yttrium lithium fluoride (Nd:YLF) and yttrium aluminium garnet (Nd:YAG). All these lasers can produce high powers in the infrared spectrum at 1064 nm. They are used for cutting, welding, and marking of metals and other materials, and also in spectroscopy and for pumping dye lasers. These lasers are also commonly doubled, tripled or quadrupled in frequency to produce 532 nm (green, visible), 355 nm and 266 nm (UV) beams, respectively. Frequency-doubled diode-pumped solid-state (DPSS) lasers are used to make bright green laser pointers.
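Frequency doubling, tripling, and quadrupling multiply the optical frequency by an integer, so the emitted wavelength is the fundamental wavelength divided by the harmonic order. A one-line check of the Nd wavelengths quoted above:

```python
# The n-th harmonic multiplies the optical frequency by n, so its
# wavelength is the fundamental wavelength divided by n.
FUNDAMENTAL_NM = 1064.0  # Nd:YAG fundamental

for n, name in [(2, "second"), (3, "third"), (4, "fourth")]:
    print(f"{name} harmonic: {FUNDAMENTAL_NM / n:.0f} nm")
# -> 532 nm (green), 355 nm (UV), 266 nm (UV)
```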
Ytterbium, holmium, thulium, and erbium are other common "dopants" in solid-state lasers. Ytterbium is used in crystals such as Yb:YAG, Yb:KGW, Yb:KYW, Yb:SYS, Yb:BOYS, Yb:CaF2, typically operating around 1020–1050 nm. They are potentially very efficient and high-powered due to a small quantum defect. Extremely high powers in ultrashort pulses can be achieved with Yb:YAG. Holmium-doped YAG crystals emit at 2097 nm and form an efficient laser operating at infrared wavelengths strongly absorbed by water-bearing tissues. The Ho:YAG laser is usually operated in a pulsed mode, its output passed through optical-fiber surgical devices to resurface joints, remove rot from teeth, vaporize cancers, and pulverize kidney and gall stones.
Titanium-doped sapphire (Ti:sapphire) produces a highly tunable infrared laser, commonly used for spectroscopy. It is also notable for use as a mode-locked laser producing ultrashort pulses of extremely high peak power.
Thermal limitations in solid-state lasers arise from unconverted pump power that heats the medium. This heat, when coupled with a high thermo-optic coefficient (dn/dT), can cause thermal lensing and reduce the quantum efficiency. Diode-pumped thin disk lasers overcome these issues by having a gain medium that is much thinner than the diameter of the pump beam. This allows for a more uniform temperature in the material. Thin disk lasers have been shown to produce beams of up to one kilowatt.
Fiber lasers
Solid-state lasers or laser amplifiers where the light is guided due to total internal reflection in a single-mode optical fiber are instead called fiber lasers. Guiding of light allows extremely long gain regions with good cooling conditions: fibers have a high surface-area-to-volume ratio, which allows efficient cooling. In addition, the fiber's waveguiding properties tend to reduce thermal distortion of the beam. Erbium and ytterbium ions are common active species in such lasers.
Quite often, the fiber laser is designed as a double-clad fiber. This type of fiber consists of a fiber core, an inner cladding, and an outer cladding. The index of the three concentric layers is chosen so that the fiber core acts as a single-mode fiber for the laser emission while the outer cladding acts as a highly multimode core for the pump laser. This lets the pump propagate a large amount of power into and through the active inner core region while still having a high numerical aperture (NA) to have easy launching conditions.
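The numerical aperture referred to above is set by the refractive-index step between the guiding layers. For a step-index structure,

\[
\mathrm{NA} = \sqrt{n_{\text{inner}}^{2} - n_{\text{outer}}^{2}},
\]

so, taking hypothetical indices of 1.46 for the inner cladding and 1.44 for the outer cladding, NA = √(1.46² − 1.44²) ≈ 0.24, an acceptance half-angle of about 14°, which makes it comparatively easy to launch pump light into the fiber.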
Pump light can be used more efficiently by creating a fiber disk laser, or a stack of such lasers.
Fiber lasers, like other optical media, can suffer from the effects of photodarkening when they are exposed to radiation of certain wavelengths. In particular, this can lead to degradation of the material and loss in laser functionality over time. The exact causes and effects of this phenomenon vary from material to material, although it often involves the formation of color centers.
Photonic crystal lasers
Photonic crystal lasers are lasers based on nanostructures that provide the mode confinement and the density of optical states (DOS) structure required for the feedback to take place. They are typically micrometer-sized and tunable on the bands of the photonic crystals.
Semiconductor lasers
Semiconductor lasers are diodes that are electrically pumped. Recombination of electrons and holes created by the applied current introduces optical gain. Reflection from the ends of the crystal forms an optical resonator, although the resonator can be external to the semiconductor in some designs.
Commercial laser diodes emit at wavelengths from 375 nm to 3500 nm. Low to medium power laser diodes are used in laser pointers, laser printers and CD/DVD players. Laser diodes are also frequently used to optically pump other lasers with high efficiency. The highest-power industrial laser diodes, with power of up to 20 kW, are used in industry for cutting and welding. External-cavity semiconductor lasers have a semiconductor active medium in a larger cavity. These devices can generate high power outputs with good beam quality, wavelength-tunable narrow-linewidth radiation, or ultrashort laser pulses.
In 2012, Nichia and OSRAM developed and manufactured commercial high-power green laser diodes (515/520 nm), which compete with traditional diode-pumped solid-state lasers.
Vertical cavity surface-emitting lasers (VCSELs) are semiconductor lasers whose emission direction is perpendicular to the surface of the wafer. VCSEL devices typically have a more circular output beam than conventional laser diodes. As of 2005, only 850 nm VCSELs are widely available, with 1300 nm VCSELs beginning to be commercialized and 1550 nm devices being an area of research. VECSELs are external-cavity VCSELs. Quantum cascade lasers are semiconductor lasers that have an active transition between energy sub-bands of an electron in a structure containing several quantum wells.
The development of a silicon laser is important in the field of optical computing. Silicon is the material of choice for integrated circuits, and so electronic and silicon photonic components (such as optical interconnects) could be fabricated on the same chip. Unfortunately, silicon is a difficult lasing material: it has an indirect bandgap and therefore emits light very inefficiently. However, teams have recently produced silicon lasers by combining silicon with other semiconductor materials, such as indium(III) phosphide or gallium(III) arsenide, that allow coherent light to be produced from silicon. These are called hybrid silicon lasers. Recent developments have also shown the use of monolithically integrated nanowire lasers directly on silicon for optical interconnects, paving the way for chip-level applications. These heterostructure nanowire lasers capable of optical interconnects in silicon are also capable of emitting pairs of phase-locked picosecond pulses with a repetition frequency up to 200 GHz, allowing for on-chip optical signal processing. Another type is a Raman laser, which takes advantage of Raman scattering to produce a laser from materials such as silicon.
Dye lasers
Dye lasers use an organic dye as the gain medium. The wide gain spectrum of available dyes, or mixtures of dyes, allows these lasers to be highly tunable, or to produce very short-duration pulses (on the order of a few femtoseconds). Although these tunable lasers are mainly known in their liquid form, researchers have also demonstrated narrow-linewidth tunable emission in dispersive oscillator configurations incorporating solid-state dye gain media. In their most prevalent form, these solid-state dye lasers use dye-doped polymers as laser media.
Bubble lasers are dye lasers that use a bubble as the optical resonator. Whispering gallery modes in the bubble produce an output spectrum composed of hundreds of evenly spaced peaks: a frequency comb. The spacing of the whispering gallery modes is directly related to the bubble circumference, allowing bubble lasers to be used as highly sensitive pressure sensors.
Free-electron lasers
Free-electron lasers (FEL) generate coherent, high-power radiation that is widely tunable, currently ranging in wavelength from microwaves through terahertz radiation and infrared to the visible spectrum, to soft X-rays. They have the widest frequency range of any laser type. While FEL beams share the same optical traits as other lasers, such as coherent radiation, FEL operation is quite different. Unlike gas, liquid, or solid-state lasers, which rely on bound atomic or molecular states, FELs use a relativistic electron beam as the lasing medium, hence the term free-electron.
Exotic media
The pursuit of a high-quantum-energy laser using transitions between isomeric states of an atomic nucleus has been the subject of wide-ranging academic research since the early 1970s. Much of this is summarized in three review articles. This research has been international in scope but mainly based in the former Soviet Union and the United States. While many scientists remain optimistic that a breakthrough is near, an operational gamma-ray laser is yet to be realized.
Some of the early studies were directed toward short pulses of neutrons exciting the upper isomer state in a solid so the gamma-ray transition could benefit from the line-narrowing of the Mössbauer effect. In conjunction, several advantages were expected from two-stage pumping of a three-level system. It was conjectured that the nucleus of an atom, embedded in the near field of a laser-driven coherently oscillating electron cloud, would experience a larger dipole field than that of the driving laser. Furthermore, the nonlinearity of the oscillating cloud would produce both spatial and temporal harmonics, so nuclear transitions of higher multipolarity could also be driven at multiples of the laser frequency.
In September 2007, the BBC News reported that there was speculation about the possibility of using positronium annihilation to drive a very powerful gamma ray laser. David Cassidy of the University of California, Riverside proposed that a single such laser could be used to ignite a nuclear fusion reaction, replacing the banks of hundreds of lasers currently employed in inertial confinement fusion experiments.
Space-based X-ray lasers pumped by nuclear explosions have also been proposed as antimissile weapons. Such devices would be one-shot weapons.
Living cells have been used to produce laser light. The cells were genetically engineered to produce green fluorescent protein, which served as the laser's gain medium. The cells were then placed between two 20-micrometer-wide mirrors, which acted as the laser cavity. When the cell was illuminated with blue light, it emitted intensely directed green laser light.
Natural lasers
Like astrophysical masers, irradiated planetary or stellar gases may amplify light producing a natural laser. Mars, Venus, and MWC 349 exhibit this phenomenon.
Uses
When the laser was first invented, it was called "a solution looking for a problem", although Gould noted numerous possible applications in his notebook and patent applications. Since then, they have become ubiquitous, finding utility in thousands of highly varied applications in every section of modern society, including consumer electronics, information technology, science, medicine, industry, law enforcement, entertainment, and the military. Fiber-optic communication relies on multiplexed lasers in dense wave-division multiplexing (WDM) systems to transmit large amounts of data over long distances.
The first widely noticeable use of lasers was the supermarket barcode scanner, introduced in 1974. The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become common, commercialized in 1982, followed shortly by laser printers.
Some other uses are:
Communications: besides fiber-optic communication, lasers are used for free-space optical communication, including laser communication in space
Medicine: see below
Industry: cutting including converting thin materials, welding, material heat treatment, marking parts (engraving and bonding), additive manufacturing or 3D printing processes such as selective laser sintering and selective laser melting, laser metal deposition, and non-contact measurement of parts and 3D scanning, and laser cleaning.
Military: marking targets, guiding munitions, missile defense, electro-optical countermeasures (EOCM), lidar, blinding troops, firearms sights. See below
Law enforcement: LIDAR traffic enforcement. Lasers are used for latent fingerprint detection in the forensic identification field
Research: spectroscopy, laser ablation, laser annealing, laser scattering, laser interferometry, lidar, laser capture microdissection, fluorescence microscopy, metrology, laser cooling
Commercial products: laser printers, barcode scanners, thermometers, laser pointers, holograms, bubblegrams
Entertainment: optical discs, laser lighting displays, laser turntables.
Informational markings: Laser lighting display technology can be used to project informational markings onto surfaces such as playing fields, roads, runways, or warehouse floors.
In 2004, excluding diode lasers, approximately 131,000 lasers were sold, with a combined value in the billions of US dollars. In the same year, approximately 733 million diode lasers were sold, likewise valued in the billions of dollars. Global industrial laser sales in 2023 reached $21.85 billion.
In medicine
Lasers have many uses in medicine, including laser surgery (particularly eye surgery), laser healing (photobiomodulation therapy), kidney stone treatment, ophthalmoscopy, and cosmetic skin treatments such as acne treatment, cellulite and striae reduction, and hair removal.
Lasers are used to treat cancer by shrinking or destroying tumors or precancerous growths. They are most commonly used to treat superficial cancers that are on the surface of the body or the lining of internal organs. They are used to treat basal cell skin cancer and the very early stages of others like cervical, penile, vaginal, vulvar, and non-small cell lung cancer. Laser therapy is often combined with other treatments, such as surgery, chemotherapy, or radiation therapy. Laser-induced interstitial thermotherapy (LITT), or interstitial laser photocoagulation, uses lasers to treat some cancers using hyperthermia, which uses heat to shrink tumors by damaging or killing cancer cells. Lasers are more precise than traditional surgery methods and cause less damage, pain, bleeding, swelling, and scarring. A disadvantage is that surgeons must acquire specialized training, and laser treatment is therefore likely to be more expensive than other treatments.
As weapons
A laser weapon is a type of directed-energy weapon that uses lasers to inflict damage. Whether they will be deployed as practical, high-performance military weapons remains to be seen. One of the major issues with laser weapons is atmospheric thermal blooming, which is still largely unsolved. This issue is exacerbated when there is fog, smoke, dust, rain, snow, smog, foam, or purposely dispersed obscurant chemicals present. The United States Navy has tested the very short range (1 mile), 30-kW Laser Weapon System or LaWS to be used against targets like small UAVs, rocket-propelled grenades, and visible motorboat or helicopter engines. It has been described as "six welding lasers strapped together." A 60 kW system, HELIOS, is being developed for destroyer-class ships.
Lasers can be used as incapacitating non-lethal weapons. They can cause temporary or permanent vision loss when directed at the eyes. Even lasers with a power output of less than one watt can cause immediate and permanent vision loss under certain conditions, making them potentially non-lethal but incapacitating weapons. The use of such lasers is morally controversial due to the extreme handicap that laser-induced blindness represents. The Protocol on Blinding Laser Weapons bans the use of weapons designed to cause permanent blindness. Weapons designed to cause temporary blindness, known as dazzlers, are used by military and sometimes law enforcement organizations.
Hobbies
In recent years, some hobbyists have taken an interest in lasers. Lasers used by hobbyists are generally of class IIIa or IIIb, although some have made their own class IV types. However, due to the cost and potential dangers, this is an uncommon hobby. Some hobbyists salvage laser diodes from broken DVD players (red), Blu-ray players (violet), or even higher-power laser diodes from CD or DVD burners.
Hobbyists have also used surplus lasers taken from retired military applications and modified them for holography. Pulsed ruby and YAG lasers work well for this application.
Examples by power
Different applications need lasers with different output powers. Lasers that produce a continuous beam or a series of short pulses can be compared on the basis of their average power. Lasers that produce pulses can also be characterized based on the peak power of each pulse. The peak power of a pulsed laser is many orders of magnitude greater than its average power. The average output power is always less than the power consumed.
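The relationship between these quantities is simple arithmetic: peak power is approximately the pulse energy divided by the pulse duration, while average power is the pulse energy multiplied by the repetition rate. A minimal sketch with hypothetical values:

```python
# Peak power ~ pulse energy / pulse duration;
# average power = pulse energy * repetition rate.
# All values are hypothetical, chosen only to show the contrast.
energy_j = 1e-3        # 1 mJ per pulse
duration_s = 10e-9     # 10 ns pulse
rep_rate_hz = 10.0     # 10 pulses per second

peak_w = energy_j / duration_s       # 100 kW
average_w = energy_j * rep_rate_hz   # 10 mW

print(f"peak: {peak_w:.0f} W, average: {average_w * 1e3:.0f} mW")
# Here the peak power exceeds the average power by seven orders of magnitude.
```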
Examples of pulsed systems with high peak power:
700 TW (700×10¹² W) – the National Ignition Facility, a 192-beam, 1.8-megajoule laser system adjoining a 10-meter-diameter target chamber
10 PW (10×10¹⁵ W) – the world's most powerful laser as of 2019, located at the ELI-NP facility in Măgurele, Romania.
Safety
Even the first laser was recognized as being potentially dangerous. Theodore Maiman characterized the first laser as having the power of one "Gillette", as it could burn through one Gillette razor blade. Today, it is accepted that even low-power lasers with only a few milliwatts of output power can be hazardous to human eyesight when the beam hits the eye directly or after reflection from a shiny surface. At wavelengths which the cornea and the lens can focus well, the coherence and low divergence of laser light means that it can be focused by the eye into an extremely small spot on the retina, resulting in localized burning and permanent damage in seconds or even less time.
Lasers are usually labeled with a safety class number, which identifies how dangerous the laser is:
Class 1 is inherently safe, usually because the light is contained in an enclosure, for example in CD players
Class 2 is safe during normal use; the blink reflex of the eye will prevent damage. Usually up to 1 mW power, for example, laser pointers.
Class 3R (formerly IIIa) lasers are usually up to 5 mW and involve a small risk of eye damage within the time of the blink reflex. Staring into such a beam for several seconds is likely to cause damage to a spot on the retina.
Class 3B lasers (5–499 mW) can cause immediate eye damage upon exposure.
Class 4 lasers (≥ 500 mW) can burn skin, and in some cases, even scattered light from these lasers can cause eye and/or skin damage. Many industrial and scientific lasers are in this class.
The indicated powers are for visible-light, continuous-wave lasers. For pulsed lasers and invisible wavelengths, other power limits apply. People working with class 3B and class 4 lasers can protect their eyes with safety goggles which are designed to absorb light of a particular wavelength.
Infrared lasers with wavelengths longer than about 1.4 micrometers are often referred to as "eye-safe", because the cornea tends to absorb light at these wavelengths, protecting the retina from damage. The label "eye-safe" can be misleading, however, as it applies only to relatively low-power continuous wave beams; a high-power or Q-switched laser at these wavelengths can burn the cornea, causing severe eye damage, and even moderate-power lasers can injure the eye.
Lasers can be a hazard to both civil and military aviation, due to the potential to temporarily distract or blind pilots. See Lasers and aviation safety for more on this topic.
Cameras based on charge-coupled devices may be more sensitive to laser damage than biological eyes.
| Technology | Optical | null |
17561 | https://en.wikipedia.org/wiki/Lithium | Lithium | Lithium () is a chemical element; it has symbol Li and atomic number 3. It is a soft, silvery-white alkali metal. Under standard conditions, it is the least dense metal and the least dense solid element. Like all alkali metals, lithium is highly reactive and flammable, and must be stored in vacuum, inert atmosphere, or inert liquid such as purified kerosene or mineral oil. It exhibits a metallic luster. It corrodes quickly in air to a dull silvery gray, then black tarnish. It does not occur freely in nature, but occurs mainly as pegmatitic minerals, which were once the main source of lithium. Due to its solubility as an ion, it is present in ocean water and is commonly obtained from brines. Lithium metal is isolated electrolytically from a mixture of lithium chloride and potassium chloride.
The nucleus of the lithium atom verges on instability, since the two stable lithium isotopes found in nature have among the lowest binding energies per nucleon of all stable nuclides. Because of its relative nuclear instability, lithium is less common in the solar system than 25 of the first 32 chemical elements even though its nuclei are very light: it is an exception to the trend that heavier nuclei are less common. For related reasons, lithium has important uses in nuclear physics. The transmutation of lithium atoms to helium in 1932 was the first fully human-made nuclear reaction, and lithium deuteride serves as a fusion fuel in staged thermonuclear weapons.
Lithium and its compounds have several industrial applications, including heat-resistant glass and ceramics, lithium grease lubricants, flux additives for iron, steel and aluminium production, lithium metal batteries, and lithium-ion batteries. These uses consume more than three-quarters of lithium production.
Lithium is present in biological systems in trace amounts. It has no established metabolic function in humans. Lithium-based drugs are useful as a mood stabilizer and antidepressant in the treatment of mental illness such as bipolar disorder.
Properties
Atomic and physical
The alkali metals are also called the lithium family, after its leading element. Like the other alkali metals (which are sodium (Na), potassium (K), rubidium (Rb), caesium (Cs), and francium (Fr)), lithium has a single valence electron that, in the presence of solvents, is easily released to form Li+. Because of this, lithium is a good conductor of heat and electricity as well as a highly reactive element, though it is the least reactive of the alkali metals. Lithium's lower reactivity is due to the proximity of its valence electron to its nucleus (the remaining two electrons are in the 1s orbital, much lower in energy, and do not participate in chemical bonds). Molten lithium is significantly more reactive than its solid form.
Lithium metal is soft enough to be cut with a knife. It is silvery-white. In air it oxidizes to lithium oxide. Its melting point of 180.5 °C (453.7 K) and its boiling point of 1,342 °C (1,615 K) are each the highest of all the alkali metals, while its density of 0.534 g/cm3 is the lowest.
Lithium has a very low density (0.534 g/cm3), comparable with pine wood. It is the least dense of all elements that are solids at room temperature; the next lightest solid element (potassium, at 0.862 g/cm3) is more than 60% denser. Apart from helium and hydrogen, as a solid it is less dense than any other element as a liquid, being only two-thirds as dense as liquid nitrogen (0.808 g/cm3). Lithium can float on the lightest hydrocarbon oils and is one of only three metals that can float on water, the other two being sodium and potassium.
Lithium's coefficient of thermal expansion is twice that of aluminium and almost four times that of iron. Lithium is superconductive below 400 μK at standard pressure and at higher temperatures (more than 9 K) at very high pressures (>20 GPa). At temperatures below 70 K, lithium, like sodium, undergoes diffusionless phase transformations. At 4.2 K it has a rhombohedral crystal system (with a nine-layer repeat spacing); at higher temperatures it transforms to face-centered cubic and then body-centered cubic. At liquid-helium temperatures (4 K) the rhombohedral structure is prevalent. Multiple allotropic forms have been identified for lithium at high pressures.
Lithium has a mass specific heat capacity of 3.58 kilojoules per kilogram-kelvin, the highest of all solids. Because of this, lithium metal is often used in coolants for heat transfer applications.
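As a worked illustration (the temperature rise is a hypothetical figure chosen for the arithmetic): the sensible heat absorbed per kilogram for a temperature rise ΔT is

\[
q = c\,\Delta T = 3.58\ \mathrm{kJ\,kg^{-1}\,K^{-1}} \times 10\ \mathrm{K} \approx 36\ \mathrm{kJ\,kg^{-1}},
\]

close to the roughly 42 kJ absorbed by a kilogram of liquid water (c ≈ 4.18 kJ kg⁻¹ K⁻¹) over the same rise, and unmatched by any other solid.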
Isotopes
Naturally occurring lithium is composed of two stable isotopes, 6Li and 7Li, the latter being the more abundant (95.15% natural abundance). Both natural isotopes have anomalously low nuclear binding energy per nucleon (compared to the neighboring elements on the periodic table, helium and beryllium); lithium is the only low-numbered element that can produce net energy through nuclear fission. The two lithium nuclei have lower binding energies per nucleon than any other stable nuclides other than hydrogen-1, deuterium and helium-3. As a result, though very light in atomic weight, lithium is less common in the Solar System than 25 of the first 32 chemical elements. Seven radioisotopes have been characterized, the most stable being 8Li with a half-life of 838 ms and 9Li with a half-life of 178 ms. All of the remaining radioactive isotopes have half-lives that are shorter than 8.6 ms. The shortest-lived isotope of lithium is 4Li, which decays through proton emission and has a half-life of 7.6 × 10−23 s. The 6Li isotope is one of only five stable nuclides to have both an odd number of protons and an odd number of neutrons, the other four stable odd-odd nuclides being hydrogen-2, boron-10, nitrogen-14, and tantalum-180m.
7Li is one of the primordial elements (or, more properly, primordial nuclides) produced in Big Bang nucleosynthesis. Small amounts of both 6Li and 7Li are produced in stars during stellar nucleosynthesis, but they are "burned" as fast as they are produced. 7Li can also be generated in carbon stars. Additional small amounts of both 6Li and 7Li may be generated from solar wind, cosmic rays hitting heavier atoms, and from early solar system 7Be radioactive decay.
Lithium isotopes fractionate substantially during a wide variety of natural processes, including mineral formation (chemical precipitation), metabolism, and ion exchange. Lithium ions substitute for magnesium and iron in octahedral sites in clay minerals, where 6Li is preferred to 7Li, resulting in enrichment of the light isotope in processes of hyperfiltration and rock alteration. The exotic 11Li is known to exhibit a neutron halo, with 2 neutrons orbiting around its nucleus of 3 protons and 6 neutrons. The process known as laser isotope separation can be used to separate lithium isotopes, in particular 7Li from 6Li.
Nuclear weapons manufacture and other nuclear physics applications are a major source of artificial lithium fractionation, with the light isotope 6Li being retained by industry and military stockpiles to such an extent that it has caused slight but measurable change in the 6Li to 7Li ratios in natural sources, such as rivers. This has led to unusual uncertainty in the standardized atomic weight of lithium, since this quantity depends on the natural abundance ratios of these naturally-occurring stable lithium isotopes, as they are available in commercial lithium mineral sources.
Both stable isotopes of lithium can be laser cooled and were used to produce the first quantum degenerate Bose–Fermi mixture.
Occurrence
Astronomical
Although it was synthesized in the Big Bang, lithium (together with beryllium and boron) is markedly less abundant in the universe than other elements. This is a result of the comparatively low stellar temperatures necessary to destroy lithium, along with a lack of common processes to produce it.
According to modern cosmological theory, lithium—in both stable isotopes (lithium-6 and lithium-7)—was one of the three elements synthesized in the Big Bang. Though the amount of lithium generated in Big Bang nucleosynthesis is dependent upon the number of photons per baryon, for accepted values the lithium abundance can be calculated, and there is a "cosmological lithium discrepancy" in the universe: older stars seem to have less lithium than they should, and some younger stars have much more. The lack of lithium in older stars is apparently caused by the "mixing" of lithium into the interior of stars, where it is destroyed, while lithium is produced in younger stars. Although it transmutes into two atoms of helium due to collision with a proton at temperatures above 2.4 million degrees Celsius (most stars easily attain this temperature in their interiors), lithium is more abundant than computations would predict in later-generation stars.
Lithium is also found in brown dwarf substellar objects and certain anomalous orange stars. Because lithium is present in cooler, less-massive brown dwarfs, but is destroyed in hotter red dwarf stars, its presence in the stars' spectra can be used in the "lithium test" to differentiate the two, as both are smaller than the Sun. Certain orange stars can also contain a high concentration of lithium. Those orange stars found to have a higher than usual concentration of lithium (such as Centaurus X-4) orbit massive objects—neutron stars or black holes—whose gravity evidently pulls heavier lithium to the surface of a hydrogen-helium star, causing more lithium to be observed.
On 27 May 2020, astronomers reported that classical nova explosions are galactic producers of lithium-7.
Terrestrial
Although lithium is widely distributed on Earth, it does not naturally occur in elemental form due to its high reactivity. The total lithium content of seawater is very large and is estimated as 230 billion tonnes, where the element exists at a relatively constant concentration of 0.14 to 0.25 parts per million (ppm), or 25 micromolar; higher concentrations approaching 7 ppm are found near hydrothermal vents.
Estimates for the Earth's crustal content range from 20 to 70 ppm by weight. In keeping with its name, lithium forms a minor part of igneous rocks, with the largest concentrations in granites. Granitic pegmatites also provide the greatest abundance of lithium-containing minerals, with spodumene and petalite being the most commercially viable sources. Another significant mineral of lithium is lepidolite, which is now an obsolete name for a series formed by polylithionite and trilithionite. Another source for lithium is hectorite clay, the only active development of which is through the Western Lithium Corporation in the United States. At 20 mg lithium per kg of Earth's crust, lithium is the 31st most abundant element.
According to the Handbook of Lithium and Natural Calcium, "Lithium is a comparatively rare element, although it is found in many rocks and some brines, but always in very low concentrations. There are a fairly large number of both lithium mineral and brine deposits but only comparatively few of them are of actual or potential commercial value. Many are very small, others are too low in grade."
Chile is estimated (2020) to have the largest reserves by far (9.2 million tonnes), and Australia the highest annual production (40,000 tonnes). One of the largest reserve bases of lithium is in the Salar de Uyuni area of Bolivia, which has 5.4 million tonnes. Other major suppliers include Australia, Argentina and China. As of 2015, the Czech Geological Survey considered the entire Ore Mountains in the Czech Republic a lithium province. Five deposits are registered; one, near Cínovec, is considered a potentially economic deposit, with 160,000 tonnes of lithium. In December 2019, Finnish mining company Keliber Oy reported that its Rapasaari lithium deposit has estimated proven and probable ore reserves of 5.280 million tonnes.
In June 2010, The New York Times reported that American geologists were conducting ground surveys on dry salt lakes in western Afghanistan, believing that large deposits of lithium are located there. These estimates are "based principally on old data, which was gathered mainly by the Soviets during their occupation of Afghanistan from 1979–1989". The Department of Defense estimated Afghanistan's lithium reserves to be comparable to those of Bolivia and dubbed the country a potential "Saudi Arabia of lithium". In Cornwall, England, the presence of brine rich in lithium was well known due to the region's historic mining industry, and private investors have conducted tests to investigate potential lithium extraction in this area.
Biological
Lithium is found in trace amount in numerous plants, plankton, and invertebrates, at concentrations of 69 to 5,760 parts per billion (ppb). In vertebrates the concentration is slightly lower, and nearly all vertebrate tissue and body fluids contain lithium ranging from 21 to 763 ppb. Marine organisms tend to bioaccumulate lithium more than terrestrial organisms. Whether lithium has a physiological role in any of these organisms is unknown.
Lithium concentrations in human tissue average about 24 ppb (4 ppb in blood, and 1.3 ppm in bone).
Lithium is easily absorbed by plants and lithium concentration in plant tissue is typically around 1 ppm. Some plant families bioaccumulate more lithium than others. Dry weight lithium concentrations for members of the family Solanaceae (which includes potatoes and tomatoes), for instance, can be as high as 30 ppm while this can be as low as 0.05 ppb for corn grains.
Studies of lithium concentrations in mineral-rich soil give ranges between around 0.1 and 50–100 ppm, with some concentrations as high as 100–400 ppm, although it is unlikely that all of it is available for uptake by plants.
Lithium accumulation does not appear to affect the essential nutrient composition of plants. Tolerance to lithium varies by plant species and typically parallels sodium tolerance; maize and Rhodes grass, for example, are highly tolerant to lithium injury while avocado and soybean are very sensitive. Similarly, lithium at concentrations of 5 ppm reduces seed germination in some species (e.g. Asian rice and chickpea) but not in others (e.g. barley and wheat).
Many of lithium's major biological effects can be explained by its competition with other ions.
The monovalent lithium ion competes with other ions such as sodium (immediately below lithium on the periodic table), which like lithium is also a monovalent alkali metal.
Lithium also competes with bivalent magnesium ions, whose ionic radius (86 pm) is close to that of the lithium ion (90 pm).
Mechanisms that transport sodium across cellular membranes also transport lithium.
For instance, sodium channels (both voltage-gated and epithelial) are particularly major pathways of entry for lithium.
Lithium ions can also permeate through ligand-gated ion channels as well as cross both nuclear and mitochondrial membranes.
Like sodium, lithium can enter and partially block (although not permeate) potassium channels and calcium channels.
The biological effects of lithium are many and varied but its mechanisms of action are only partially understood.
For instance, studies of lithium-treated patients with bipolar disorder show that, among many other effects, lithium partially reverses telomere shortening in these patients and also increases mitochondrial function, although how lithium produces these pharmacological effects is not understood.
Even the exact mechanisms involved in lithium toxicity are not fully understood.
History
Petalite (LiAlSi4O10) was discovered in 1800 by the Brazilian chemist and statesman José Bonifácio de Andrada e Silva in a mine on the island of Utö, Sweden. However, it was not until 1817 that Johan August Arfwedson, then working in the laboratory of the chemist Jöns Jakob Berzelius, detected the presence of a new element while analyzing petalite ore. This element formed compounds similar to those of sodium and potassium, though its carbonate and hydroxide were less soluble in water and less alkaline. Berzelius gave the alkaline material the name "lithion/lithina", from the Greek word λίθος (transliterated as lithos, meaning "stone"), to reflect its discovery in a solid mineral, as opposed to potassium, which had been discovered in plant ashes, and sodium, which was known partly for its high abundance in animal blood. He named the new element "lithium".
Arfwedson later showed that this same element was present in the minerals spodumene and lepidolite. In 1818, Christian Gmelin was the first to observe that lithium salts give a bright red color to flame. However, both Arfwedson and Gmelin tried and failed to isolate the pure element from its salts. It was not isolated until 1821, when William Thomas Brande obtained it by electrolysis of lithium oxide, a process that had previously been employed by the chemist Sir Humphry Davy to isolate the alkali metals potassium and sodium. Brande also described some pure salts of lithium, such as the chloride, and, assuming that lithia (lithium oxide) contained about 55% metal, estimated the atomic weight of lithium to be around 9.8 g/mol (modern value ~6.94 g/mol). In 1855, larger quantities of lithium were produced through the electrolysis of lithium chloride by Robert Bunsen and Augustus Matthiessen. The discovery of this procedure led to commercial production of lithium in 1923 by the German company Metallgesellschaft AG, which performed an electrolysis of a liquid mixture of lithium chloride and potassium chloride.
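Brande's figure can be reconstructed as follows (this derivation is a plausible reading using the modern formula Li2O and an oxygen mass of 16; it is not a claim about his actual working): if lithia is assumed to be 55% lithium by mass, then

\[
\frac{2M_{\mathrm{Li}}}{2M_{\mathrm{Li}} + 16} = 0.55 \;\Longrightarrow\; M_{\mathrm{Li}} \approx 9.8\ \mathrm{g/mol},
\]

whereas the true metal fraction of Li2O, 2 × 6.94 / 29.88 ≈ 46%, leads to the modern value of about 6.94 g/mol.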
Australian psychiatrist John Cade is credited with reintroducing and popularizing the use of lithium to treat mania in 1949. Shortly after, throughout the mid 20th century, lithium's mood stabilizing applicability for mania and depression took off in Europe and the United States.
The production and use of lithium underwent several drastic changes in history. The first major application of lithium was in high-temperature lithium greases for aircraft engines and similar applications in World War II and shortly after. This use was supported by the fact that lithium-based soaps have a higher melting point than other alkali soaps, and are less corrosive than calcium based soaps. The small demand for lithium soaps and lubricating greases was supported by several small mining operations, mostly in the US.
The demand for lithium increased dramatically during the Cold War with the production of nuclear fusion weapons. Both lithium-6 and lithium-7 produce tritium when irradiated by neutrons, and are thus useful for producing tritium on its own, as well as serving as a form of solid fusion fuel used inside hydrogen bombs in the form of lithium deuteride. The US became the prime producer of lithium between the late 1950s and the mid-1980s. At the end, the stockpile of lithium was roughly 42,000 tonnes of lithium hydroxide. The stockpiled lithium was depleted in lithium-6 by 75%, which was enough to affect the measured atomic weight of lithium in many standardized chemicals, and even the atomic weight of lithium in some "natural sources" of lithium ion that had been "contaminated" by lithium salts discharged from isotope separation facilities, which had found their way into ground water.
Lithium is used to decrease the melting temperature of glass and to improve the melting behavior of aluminium oxide in the Hall–Héroult process. These two uses dominated the market until the middle of the 1990s. After the end of the nuclear arms race, the demand for lithium decreased, and the sale of US Department of Energy stockpiles on the open market further reduced prices. In the mid-1990s, several companies started to isolate lithium from brine, which proved to be a less expensive option than underground or open-pit mining. Most of the mines closed or shifted their focus to other materials because only the ore from zoned pegmatites could be mined for a competitive price. For example, the US mines near Kings Mountain, North Carolina, closed before the beginning of the 21st century.
The development of lithium-ion batteries increased the demand for lithium and became the dominant use in 2007. With the surge of lithium demand in batteries in the 2000s, new companies have expanded brine isolation efforts to meet the rising demand.
Chemistry
Of lithium metal
Lithium reacts with water easily, but with noticeably less vigor than other alkali metals. The reaction forms hydrogen gas and lithium hydroxide. When placed over a flame, lithium compounds give off a striking crimson color, but when the metal burns strongly, the flame becomes a brilliant silver. Lithium will ignite and burn in oxygen when exposed to water or water vapor. In moist air, lithium rapidly tarnishes to form a black coating of lithium hydroxide (LiOH and LiOH·H2O), lithium nitride (Li3N) and lithium carbonate (Li2CO3, the result of a secondary reaction between LiOH and CO2). Lithium is one of the few metals that react with nitrogen gas.
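The reactions described above can be summarized with standard balanced equations (the carbonate arises from the secondary reaction of the hydroxide with carbon dioxide):

 2 Li + 2 H2O → 2 LiOH + H2
 6 Li + N2 → 2 Li3N
 2 LiOH + CO2 → Li2CO3 + H2O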
Because of its reactivity with water, and especially nitrogen, lithium metal is usually stored in a hydrocarbon sealant, often petroleum jelly. Although the heavier alkali metals can be stored under mineral oil, lithium is not dense enough to fully submerge itself in these liquids.
Lithium has a diagonal relationship with magnesium, an element of similar atomic and ionic radius. Chemical resemblances between the two metals include the formation of a nitride by reaction with N2, the formation of an oxide (Li2O) and peroxide (Li2O2) when burnt in O2, salts with similar solubilities, and thermal instability of the carbonates and nitrides. The metal reacts with hydrogen gas at high temperatures to produce lithium hydride (LiH).
Lithium forms a variety of binary and ternary materials by direct reaction with the main group elements. These Zintl phases, although highly covalent, can be viewed as salts of polyatomic anions such as Si₄⁴⁻, P₇³⁻, and Te₅²⁻. With graphite, lithium forms a variety of intercalation compounds.
It dissolves in ammonia (and amines) to give [Li(NH3)4]+ and the solvated electron.
Inorganic compounds
Lithium forms salt-like derivatives with all halides and pseudohalides. Some examples include the halides LiF, LiCl, LiBr, LiI, as well as the pseudohalides and related anions. Lithium carbonate has been described as the most important compound of lithium. This white solid is the principal product of beneficiation of lithium ores. It is a precursor to other salts including ceramics and materials for lithium batteries.
The compounds LiBH4 and LiAlH4 are useful reagents. These salts and many other lithium salts exhibit distinctively high solubility in ethers, in contrast with salts of heavier alkali metals.
In aqueous solution, the coordination complex [Li(H2O)4]+ predominates for many lithium salts. Related complexes are known with amines and ethers.
Organic chemistry
Organolithium compounds are numerous and useful. They are defined by the presence of a bond between carbon and lithium. They serve as metal-stabilized carbanions, although their solution and solid-state structures are more complex than this simplistic view. Thus, these are extremely powerful bases and nucleophiles. They have also been applied in asymmetric synthesis in the pharmaceutical industry. For laboratory organic synthesis, many organolithium reagents are commercially available in solution form. These reagents are highly reactive, and are sometimes pyrophoric.
Like its inorganic compounds, almost all organic compounds of lithium formally follow the duet rule (e.g., BuLi, MeLi). However, in the absence of coordinating solvents or ligands, organolithium compounds form dimeric, tetrameric, and hexameric clusters (e.g., BuLi is actually [BuLi]6 and MeLi is actually [MeLi]4) which feature multi-center bonding and increase the coordination number around lithium. These clusters are broken down into smaller or monomeric units in the presence of solvents like dimethoxyethane (DME) or ligands like tetramethylethylenediamine (TMEDA). As an exception to the duet rule, a two-coordinate lithate complex with four electrons around lithium, [Li(thf)4]+[((Me3Si)3C)2Li]–, has been characterized crystallographically.
Production
Lithium production has greatly increased since the end of World War II. The main sources of lithium are brines and ores.
Lithium metal is produced through electrolysis applied to a mixture of fused 55% lithium chloride and 45% potassium chloride at about 450 °C.
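The electrode processes in this molten-salt electrolysis are the standard ones; the potassium chloride lowers the melting point of the mixture and is not itself consumed:

 cathode: Li+ + e− → Li
 anode: 2 Cl− → Cl2 + 2 e−
 overall: 2 LiCl → 2 Li + Cl2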
Lithium is one of the elements critical in a world running on renewable energy and dependent on batteries. This suggests that lithium will be one of the main objects of geopolitical competition, but this perspective has also been criticised for underestimating the power of economic incentives for expanded production.
Reserves and occurrence
The small ionic size makes it difficult for lithium to be included in early stages of mineral crystallization. As a result, lithium remains in the molten phases, where it gets enriched, until it gets solidified in the final stages. Such lithium enrichment is responsible for all commercially promising lithium ore deposits. Brines (and dry salt) are another important source of Li+. Although the number of known lithium-containing deposits and brines is large, most of them are either small or have too low Li+ concentrations. Thus, only a few appear to be of commercial value.
The US Geological Survey (USGS) estimated worldwide identified lithium reserves in 2020 and 2021 to be 17 million and 21 million tonnes, respectively. An accurate estimate of world lithium reserves is difficult. One reason for this is that most lithium classification schemes are developed for solid ore deposits, whereas brine is a fluid that is problematic to treat with the same classification scheme due to varying concentrations and pumping effects.
In 2019, world production of lithium from spodumene was around 80,000 t per annum, primarily from the Greenbushes pegmatite and from some Chinese and Chilean sources. The Talison mine in Greenbushes is reported to be the largest and to have the highest grade of ore at 2.4% Li2O (2012 figures).
Lithium triangle and other brine sources
The world's top four lithium-producing countries from 2019, as reported by the US Geological Survey, are Australia, Chile, China and Argentina.
The three countries of Chile, Bolivia, and Argentina contain a region known as the Lithium Triangle. The Lithium Triangle is known for its high-quality salt flats, which include Bolivia's Salar de Uyuni, Chile's Salar de Atacama, and Argentina's Salar de Arizaro. The Lithium Triangle is believed to contain over 75% of existing known lithium reserves. Deposits are also found in South America throughout the Andes mountain chain. Chile is the leading producer, followed by Argentina. Both countries recover lithium from brine pools. According to the USGS, Bolivia's Uyuni Desert has 5.4 million tonnes of lithium. Half the world's known reserves are located in Bolivia along the central eastern slope of the Andes. The Bolivian government has invested US$900 million in lithium production and in 2021 successfully produced 540 tons. The brines in the salt pans of the Lithium Triangle vary widely in lithium content. Concentrations can also vary over time, since brines are mobile, changeable fluids.
In the US, lithium is recovered from brine pools in Nevada. Projects are also under development in Lithium Valley in California.
Hard-rock deposits
Since 2018, the Democratic Republic of the Congo has been known to have the largest lithium spodumene hard-rock deposit in the world. The deposit, located in Manono, DRC, may hold up to 1.5 billion tons of lithium spodumene hard rock. The two largest pegmatites (known as the Carriere de l'Este Pegmatite and the Roche Dure Pegmatite) are each of similar size to or larger than the famous Greenbushes Pegmatite in Western Australia. With its high-grade, low-impurity ore, the Democratic Republic of the Congo is therefore expected to become a significant supplier of lithium to the world.
On 16 July 2018, 2.5 million tonnes of high-grade lithium resources and 124 million pounds of uranium resources were reported to have been found in the Falchani hard-rock deposit in the Puno region of Peru.
In 2020, Australia granted Major Project Status (MPS) to the Finniss Lithium Project for a strategically important lithium deposit: an estimated 3.45 million tonnes (Mt) of mineral resource at 1.4 percent lithium oxide. Operational mining began in 2022.
A deposit discovered in 2013 in Wyoming's Rock Springs Uplift is estimated to contain 228,000 tons. Additional deposits in the same formation were estimated to hold as much as 18 million tons. Similarly, in Nevada, the McDermitt Caldera hosts lithium-bearing volcanic muds that constitute the largest known deposits of lithium within the United States.
The Pampean Pegmatite Province in Argentina is known to have a total of at least 200,000 tons of spodumene with lithium oxide (Li2O) grades varying between 5 and 8 wt %.
Russia's largest lithium deposit, Kolmozerskoye, is located in the Murmansk region. In 2023, Polar Lithium, a joint venture between Nornickel and Rosatom, was granted the right to develop the deposit. The project aims to produce 45,000 tonnes of lithium carbonate and hydroxide per year and plans to reach full design capacity by 2030.
Sources
The leachates of geothermal wells, which are carried to the surface, have been identified as another potential source of lithium. Recovery of this type of lithium has been demonstrated in the field; the lithium is separated by simple filtration. Reserves of this kind are more limited than those of brine reservoirs and hard rock.
Pricing
In 1998, the price of lithium metal was about US$95/kg (US$43/lb). After the 2007 financial crisis, major suppliers, such as Sociedad Química y Minera (SQM), dropped lithium carbonate pricing by 20%. Prices rose in 2012. A 2012 Business Week article outlined an oligopoly in the lithium space: "SQM, controlled by billionaire Julio Ponce, is the second-largest, followed by Rockwood, which is backed by Henry Kravis's KKR & Co., and Philadelphia-based FMC", with Talison mentioned as the biggest producer. At the time, global consumption was projected to jump to 300,000 metric tons a year by 2020 from about 150,000 tons in 2012, to match demand for lithium batteries that had been growing at about 25% a year, outpacing the 4% to 5% overall gain in lithium production.
The price information service ISE (Institute of Rare Earths Elements and Strategic Metals) reported the following average prices per kilogram for various lithium substances from March to August 2022, with prices stable over that period: lithium carbonate (purity 99.5% min), between 63 and 72 EUR/kg from various producers; lithium hydroxide monohydrate (LiOH content 56.5% min), 66 to 72 EUR/kg from China and 73 EUR/kg delivered to South Korea; lithium metal (99.9% min), 42 EUR/kg delivered to China.
Extraction
Lithium and its compounds were historically isolated and extracted from hard rock but by the 1990s mineral springs, brine pools, and brine deposits had become the dominant source. Most of these were in Chile, Argentina and Bolivia. Large lithium-clay deposits under development in the McDermitt caldera (Nevada, United States) require concentrated sulfuric acid to leach lithium from the clay ore.
As of early 2021, much of the lithium mined globally came from either "spodumene, the mineral contained in hard rocks found in places such as Australia and North Carolina", or from salty brine pumped directly out of the ground, as at locations in Chile. In Chile's Salar de Atacama, the lithium concentration of the brine is raised by solar evaporation in a system of ponds. Enrichment by evaporation may take up to a year and a half, by which point the brine reaches a lithium content of 6%. The final processing in this example is done near the city of Antofagasta on the coast, where pure lithium carbonate, lithium hydroxide, and lithium chloride are produced from the brine.
Low-cobalt cathodes for lithium batteries are expected to require lithium hydroxide rather than lithium carbonate as a feedstock, and this trend favors rock as a source.
One method for extracting lithium, as well as other valuable minerals, is to process geothermal brine through a membrane-equipped electrolytic cell.
The use of electrodialysis and electrochemical intercalation has been proposed to extract lithium compounds from seawater (which contains lithium at 0.2 parts per million). Ion-selective membrane cells could in principle collect lithium by means of either an electric field or a concentration difference. In 2024, a redox/electrodialysis system was claimed to offer enormous cost savings, shorter timelines, and less environmental damage than traditional evaporation-based systems.
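The scale of the challenge follows from simple arithmetic: 0.2 parts per million by mass corresponds to about 0.2 g of lithium per tonne of seawater, so recovering one tonne of lithium would require processing roughly five million tonnes of seawater. This is why highly selective, energy-efficient separation methods are essential.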
Environmental issues
Lithium manufacturing processes, including the solvents used and the mining waste generated, present significant environmental and health hazards.
Lithium extraction can be fatal to aquatic life due to water pollution. It is known to cause surface water contamination, drinking water contamination, respiratory problems, ecosystem degradation and landscape damage. It also leads to unsustainable water consumption in arid regions (1.9 million liters per ton of lithium). Massive byproduct generation of lithium extraction also presents unsolved problems, such as large amounts of magnesium and lime waste.
In the United States, open-pit mining and mountaintop removal mining compete with brine extraction mining. Environmental concerns include wildlife habitat degradation, potable water pollution including arsenic and antimony contamination, unsustainable water table reduction, and massive mining waste, including radioactive uranium byproduct and sulfuric acid discharge.
Human rights issues
A study of relationships between lithium extraction companies and indigenous peoples in Argentina indicated that the state may not have protected indigenous peoples' right to free prior and informed consent, and that extraction companies generally controlled community access to information and set the terms for discussion of the projects and benefit sharing.
Development of the Thacker Pass lithium mine in Nevada, United States, has met with protests and lawsuits from several indigenous tribes who have said they were not provided free prior and informed consent and that the project threatens cultural and sacred sites. They have also expressed concerns that development of the project will create risks to indigenous women, because resource extraction is linked to missing and murdered indigenous women. Protestors have been occupying the site of the proposed mine since January 2021.
Applications
Batteries
As of 2021, most lithium is used to make lithium-ion batteries for electric cars and mobile devices.
Ceramics and glass
Lithium oxide is widely used as a flux for processing silica, reducing the melting point and viscosity of the material and leading to glazes with improved physical properties, including low coefficients of thermal expansion. Worldwide, this is one of the largest uses for lithium compounds. Glazes containing lithium oxides are used for ovenware. Lithium carbonate (Li2CO3) is generally used in this application because it converts to the oxide upon heating.
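The conversion on heating is the simple decomposition:
Li2CO3 → Li2O + CO2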
Electrical and electronic
Late in the 20th century, lithium became an important component of battery electrolytes and electrodes, because of its high electrode potential. Because of its low atomic mass, it has a high charge- and power-to-weight ratio. A typical lithium-ion battery can generate approximately 3 volts per cell, compared with 2.1 volts for lead-acid and 1.5 volts for zinc-carbon. Lithium-ion batteries, which are rechargeable and have a high energy density, differ from lithium metal batteries, which are disposable (primary) batteries with lithium or its compounds as the anode. Other rechargeable batteries that use lithium include the lithium-ion polymer battery, lithium iron phosphate battery, and the nanowire battery.
Opinions about potential growth have differed over the years. A 2008 study concluded that "realistically achievable lithium carbonate production would be sufficient for only a small fraction of future PHEV and EV global market requirements", that "demand from the portable electronics sector will absorb much of the planned production increases in the next decade", and that "mass production of lithium carbonate is not environmentally sound, it will cause irreparable ecological damage to ecosystems that should be protected and that LiIon propulsion is incompatible with the notion of the 'Green Car'".
Lubricating greases
The third most common use of lithium is in greases. Lithium hydroxide is a strong base, and when heated with a fat, it produces a soap, such as lithium stearate from stearic acid. Lithium soap has the ability to thicken oils, and it is used to manufacture all-purpose, high-temperature lubricating greases.
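For stearic acid, the soap-forming (saponification) reaction is:
C17H35COOH + LiOH → C17H35COOLi + H2O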
Metallurgy
Lithium (e.g. as lithium carbonate) is used as an additive to continuous casting mould flux slags where it increases fluidity, a use which accounts for 5% of global lithium use (2011). Lithium compounds are also used as additives (fluxes) to foundry sand for iron casting to reduce veining.
Lithium (as lithium fluoride) is used as an additive in aluminium smelters (Hall–Héroult process), reducing the melting temperature and increasing the electrical conductivity of the bath, a use which accounts for 3% of production (2011).
When used as a flux for welding or soldering, metallic lithium promotes the fusing of metals during the process and eliminates the formation of oxides by absorbing impurities. Alloys of the metal with aluminium, cadmium, copper and manganese are used to make high-performance, low density aircraft parts (see also Lithium-aluminium alloys).
Silicon nano-welding
Lithium has been found effective in assisting the perfection of silicon nano-welds in electronic components for electric batteries and other devices.
Pyrotechnics
Lithium compounds are used as pyrotechnic colorants and oxidizers in red fireworks and flares.
Air purification
Lithium chloride and lithium bromide are hygroscopic and are used as desiccants for gas streams. Lithium hydroxide and lithium peroxide are the salts most commonly used in confined areas, such as aboard spacecraft and submarines, for carbon dioxide removal and air purification. Lithium hydroxide absorbs carbon dioxide from the air by forming lithium carbonate, and is preferred over other alkaline hydroxides for its low weight.
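The absorption proceeds by the reaction:
2 LiOH + CO2 → Li2CO3 + H2O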
Lithium peroxide (Li2O2) in the presence of moisture not only reacts with carbon dioxide to form lithium carbonate, but also releases oxygen. The reaction is as follows:
2 Li2O2 + 2 CO2 → 2 Li2CO3 + O2
Some of the aforementioned compounds, as well as lithium perchlorate, are used in oxygen candles that supply submarines with oxygen. These can also include small amounts of boron, magnesium, aluminium, silicon, titanium, manganese, and iron.
Optics
Lithium fluoride, artificially grown as a crystal, is clear and transparent and often used in specialist optics for IR, UV and VUV (vacuum UV) applications. Among common materials, it has one of the lowest refractive indices and the furthest transmission range in the deep UV. Finely divided lithium fluoride powder has been used for thermoluminescent radiation dosimetry (TLD): when a sample is exposed to radiation, it accumulates crystal defects which, when the sample is heated, are resolved via a release of bluish light whose intensity is proportional to the absorbed dose, allowing the dose to be quantified. Lithium fluoride is sometimes used in the focal lenses of telescopes.
The high non-linearity of lithium niobate also makes it useful in non-linear optics applications. It is used extensively in telecommunication products such as mobile phones and optical modulators, for components such as resonant crystals. Lithium-based components are used in more than 60% of mobile phones.
Organic and polymer chemistry
Organolithium compounds are widely used in the production of polymers and fine chemicals. In the polymer industry, which is the dominant consumer of these reagents, alkyllithium compounds are catalysts/initiators in the anionic polymerization of unfunctionalized olefins. For the production of fine chemicals, organolithium compounds function as strong bases and as reagents for the formation of carbon–carbon bonds. Organolithium compounds are prepared from lithium metal and alkyl halides.
Many other lithium compounds are used as reagents to prepare organic compounds. Some popular compounds include lithium aluminium hydride (LiAlH4), lithium triethylborohydride, n-butyllithium and tert-butyllithium.
Military
Metallic lithium and its complex hydrides, such as lithium aluminium hydride (LiAlH4), are used as high-energy additives to rocket propellants. LiAlH4 can also be used by itself as a solid fuel.
The Mark 50 torpedo stored chemical energy propulsion system (SCEPS) uses a small tank of sulfur hexafluoride, which is sprayed over a block of solid lithium. The reaction generates heat, creating steam to propel the torpedo in a closed Rankine cycle.
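The propellant chemistry is the strongly exothermic reaction of lithium with sulfur hexafluoride, which in balanced form is:
8 Li + SF6 → 6 LiF + Li2S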
Lithium hydride containing lithium-6 is used in thermonuclear weapons, where it serves as fuel for the fusion stage of the bomb.
Nuclear
Lithium-6 is valued as a source material for tritium production and as a neutron absorber in nuclear fusion. Natural lithium contains about 7.5% lithium-6, from which large amounts of lithium-6 have been produced by isotope separation for use in nuclear weapons. Lithium-7 has gained interest for use in nuclear reactor coolants.
Lithium deuteride was the fusion fuel of choice in early versions of the hydrogen bomb. When bombarded by neutrons, both 6Li and 7Li produce tritium — this reaction, which was not fully understood when hydrogen bombs were first tested, was responsible for the runaway yield of the Castle Bravo nuclear test. Tritium fuses with deuterium in a fusion reaction that is relatively easy to achieve. Although details remain secret, lithium-6 deuteride apparently still plays a role in modern nuclear weapons as a fusion material.
Lithium fluoride, when highly enriched in the lithium-7 isotope, forms the basic constituent of the fluoride salt mixture LiF-BeF2 used in liquid fluoride nuclear reactors. Lithium fluoride is exceptionally chemically stable and LiF-BeF2 mixtures have low melting points. In addition, 7Li, Be, and F are among the few nuclides with low enough thermal neutron capture cross-sections not to poison the fission reactions inside a nuclear fission reactor.
In conceptualized (hypothetical) nuclear fusion power plants, lithium will be used to produce tritium in magnetically confined reactors using deuterium and tritium as the fuel. Naturally occurring tritium is extremely rare and must be synthetically produced by surrounding the reacting plasma with a 'blanket' containing lithium, where neutrons from the deuterium-tritium reaction in the plasma will fission the lithium to produce more tritium:
6Li + n → 4He + 3H.
Lithium is also used as a source of alpha particles, or helium nuclei. When 7Li is bombarded by accelerated protons, 8Be is formed, which almost immediately undergoes fission to form two alpha particles. This feat, called "splitting the atom" at the time, was the first fully human-made nuclear reaction. It was produced by Cockcroft and Walton in 1932. Injection of lithium powders is used in fusion reactors to manipulate plasma-material interactions and dissipate energy in the hot thermonuclear fusion plasma boundary.
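Written with the short-lived beryllium-8 intermediate, the reaction is:
7Li + 1H → 8Be → 2 4He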
In 2013, the US Government Accountability Office said a shortage of lithium-7 critical to the operation of 65 out of 100 American nuclear reactors "places their ability to continue to provide electricity at some risk." The problem stems from the decline of US nuclear infrastructure. The equipment needed to separate lithium-6 from lithium-7 is mostly a cold war leftover. The US shut down most of this machinery in 1963, when it had a huge surplus of separated lithium, mostly consumed during the twentieth century. The report said it would take five years and $10 million to $12 million to reestablish the ability to separate lithium-6 from lithium-7.
Reactors that use lithium-7 heat water under high pressure and transfer heat through heat exchangers that are prone to corrosion. The reactors use lithium to counteract the corrosive effects of boric acid, which is added to the water to absorb excess neutrons.
Medicine
Lithium is useful in the treatment of bipolar disorder. Lithium salts may also be helpful for related diagnoses, such as schizoaffective disorder and cyclic major depressive disorder. The active part of these salts is the lithium ion Li+. Lithium may increase the risk of developing Ebstein's cardiac anomaly in infants born to women who take lithium during the first trimester of pregnancy.
Precautions
Lithium metal is corrosive and requires special handling to avoid skin contact. Breathing lithium dust or lithium compounds (which are often alkaline) initially irritates the nose and throat, while higher exposure can cause a buildup of fluid in the lungs, leading to pulmonary edema. The metal itself is a handling hazard because contact with moisture produces the caustic lithium hydroxide. Lithium is safely stored under non-reactive liquids such as naphtha.
| Physical sciences | Chemical elements_2 | null |
17570 | https://en.wikipedia.org/wiki/Linear%20equation | Linear equation | In mathematics, a linear equation is an equation that may be put in the form
a1x1 + a2x2 + ... + anxn + b = 0,
where x1, ..., xn are the variables (or unknowns), and b, a1, ..., an are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients a1, ..., an are required to not all be zero.
Alternatively, a linear equation can be obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken.
The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true.
In the case of just one variable, there is exactly one solution (provided that a1 ≠ 0). Often, the term linear equation refers implicitly to this particular case, in which the variable is sensibly called the unknown.
In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and, conversely, every line can be viewed as the set of all solutions of a linear equation in two variables. This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space of dimension n.
Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations.
This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations.
One variable
A linear equation in one variable x can be written as
ax + b = 0, with a ≠ 0.
The solution is x = −b/a.
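For example, the equation 2x + 6 = 0 has a = 2 and b = 6, so its unique solution is x = −6/2 = −3.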
Two variables
A linear equation in two variables x and y can be written as
ax + by + c = 0,
where a and b are not both 0.
If a and b are real numbers, it has infinitely many solutions.
Linear function
If b ≠ 0, the equation
ax + by + c = 0
is a linear equation in the single variable y for every value of x. It therefore has a unique solution for y, which is given by
y = −(a/b)x − c/b.
This defines a function. The graph of this function is a line with slope −a/b and y-intercept −c/b. The functions whose graph is a line are generally called linear functions in the context of calculus. However, in linear algebra, a linear function is a function that maps a sum to the sum of the images of the summands. So, for this definition, the above function is linear only when c = 0, that is, when the line passes through the origin. To avoid confusion, the functions whose graph is an arbitrary line are often called affine functions, and the linear functions such that c = 0 are often called linear maps.
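For example, solving 2x + 3y − 6 = 0 for y gives y = −(2/3)x + 2, whose graph is a line with slope −2/3 and y-intercept 2.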
Geometric interpretation
Each solution of a linear equation
ax + by + c = 0
may be viewed as the Cartesian coordinates of a point in the Euclidean plane. With this interpretation, all solutions of the equation form a line, provided that a and b are not both zero. Conversely, every line is the set of all solutions of a linear equation.
The phrase "linear equation" takes its origin in this correspondence between lines and equations: a linear equation in two variables is an equation whose solutions form a line.
If b ≠ 0, the line is the graph of the function of x that has been defined in the preceding section. If b = 0, the line is a vertical line (that is, a line parallel to the y-axis) of equation x = −c/a, which is not the graph of a function of x.
Similarly, if a ≠ 0, the line is the graph of a function of y, and, if a = 0, one has a horizontal line of equation y = −c/b.
Equation of a line
There are various ways of defining a line. In the following subsections, a linear equation of the line is given in each case.
Slope–intercept form or Gradient-intercept form
A non-vertical line can be defined by its slope m, and its y-intercept y0 (the y coordinate of its intersection with the y-axis). In this case, its linear equation can be written
y = mx + y0.
If, moreover, the line is not horizontal, it can be defined by its slope and its x-intercept x0. In this case, its equation can be written
y = m(x − x0),
or, equivalently,
y = mx − mx0.
These forms rely on the habit of considering a nonvertical line as the graph of a function. For a line given by an equation
ax + by + c = 0, with b ≠ 0,
these forms can be easily deduced from the relations
m = −a/b, x0 = −c/a, y0 = −c/b.
Point–slope form or Point-gradient form
A non-vertical line can be defined by its slope m, and the coordinates (x1, y1) of any point of the line. In this case, a linear equation of the line is
y = m(x − x1) + y1,
or
y = mx + y1 − mx1.
This equation can also be written
y − y1 = m(x − x1)
for emphasizing that the slope of a line can be computed from the coordinates of any two points.
Intercept form
A line that is not parallel to an axis and does not pass through the origin cuts the axes at two different points. The intercept values x0 and y0 of these two points are nonzero, and an equation of the line is
x/x0 + y/y0 = 1.
(It is easy to verify that the line defined by this equation has x0 and y0 as intercept values.)
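For example, the line with x-intercept x0 = 2 and y-intercept y0 = 3 has equation x/2 + y/3 = 1, which can be rewritten as 3x + 2y − 6 = 0.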
Two-point form
Given two different points (x1, y1) and (x2, y2), there is exactly one line that passes through them. There are several ways to write a linear equation of this line.
If x1 ≠ x2, the slope of the line is (y2 − y1)/(x2 − x1). Thus, a point-slope form is
y − y1 = ((y2 − y1)/(x2 − x1))(x − x1).
By clearing denominators, one gets the equation
(x2 − x1)(y − y1) − (y2 − y1)(x − x1) = 0,
which is valid also when x1 = x2 (for verifying this, it suffices to verify that the two given points satisfy the equation).
This form is not symmetric in the two given points, but a symmetric form can be obtained by regrouping the constant terms:
(y1 − y2)x + (x2 − x1)y + (x1y2 − x2y1) = 0
(exchanging the two points changes the sign of the left-hand side of the equation).
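For example, the line through the points (1, 2) and (3, 8) has slope (8 − 2)/(3 − 1) = 3, and the symmetric form gives (2 − 8)x + (3 − 1)y + (1·8 − 3·2) = 0, that is, −6x + 2y + 2 = 0, or equivalently y = 3x − 1.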
Determinant form
The two-point form of the equation of a line can be expressed simply in terms of a determinant. There are two common ways for that.
The equation (x2 − x1)(y − y1) − (y2 − y1)(x − x1) = 0 is the result of expanding the determinant in the equation
| x − x1    y − y1  |
| x2 − x1   y2 − y1 | = 0.
The equation (y1 − y2)x + (x2 − x1)y + (x1y2 − x2y1) = 0 can be obtained by expanding with respect to its first row the determinant in the equation
| x    y    1 |
| x1   y1   1 |
| x2   y2   1 | = 0.
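To see how the symmetric two-point form (equivalently, the expansion of the 3 × 3 determinant above) translates into computation, the following minimal Python sketch may help; it is not part of the original article, and the function name and point representation are chosen here purely for illustration:

def line_through(p1, p2):
    # Coefficients (a, b, c) of ax + by + c = 0 for the line through
    # two distinct points, via the symmetric two-point form.
    (x1, y1), (x2, y2) = p1, p2
    a = y1 - y2            # coefficient of x
    b = x2 - x1            # coefficient of y
    c = x1 * y2 - x2 * y1  # constant term
    return a, b, c

# Example: the line through (1, 2) and (3, 8) is -6x + 2y + 2 = 0.
print(line_through((1, 2), (3, 8)))  # prints (-6, 2, 2)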
Besides being very simple and mnemonic, this form has the advantage of being a special case of the more general equation of a hyperplane passing through n points in a space of dimension n − 1. These equations rely on the condition of linear dependence of points in a projective space.
More than two variables
A linear equation with more than two variables may always be assumed to have the form
a1x1 + a2x2 + ... + anxn + b = 0.
The coefficient b, often denoted a0, is called the constant term (sometimes the absolute term in old books). Depending on the context, the term coefficient can be reserved for the ai with i > 0.
When dealing with three or fewer variables, it is common to use x, y and z instead of indexed variables.
A solution of such an equation is an n-tuple such that substituting each element of the tuple for the corresponding variable transforms the equation into a true equality.
For an equation to be meaningful, the coefficient of at least one variable must be non-zero. If every variable has a zero coefficient, then, as mentioned for one variable, the equation is either inconsistent (for b ≠ 0), having no solution, or every n-tuple is a solution.
The n-tuples that are solutions of a linear equation in n variables are the Cartesian coordinates of the points of an (n − 1)-dimensional hyperplane in a Euclidean space of dimension n (or an affine space if the coefficients are complex numbers or belong to any field). In the case of three variables, this hyperplane is a plane.
If a linear equation is given with aj ≠ 0, then the equation can be solved for xj, yielding
xj = −(b + a1x1 + ... + aj−1xj−1 + aj+1xj+1 + ... + anxn)/aj.
If the coefficients are real numbers, this defines a real-valued function of n real variables.
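For example, solving 2x + 3y − z + 5 = 0 for z yields z = 2x + 3y + 5, a real-valued function of the two real variables x and y.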
| Mathematics | Algebra | null |
17643 | https://en.wikipedia.org/wiki/Lavandula | Lavandula | Lavandula (common name lavender) is a genus of 47 known species of perennial flowering plants in the mint family, Lamiaceae. It is native to the Old World, primarily found across the drier, warmer regions of mainland Eurasia, with an affinity for maritime breezes.
Lavender is found on the Iberian Peninsula and around the entirety of the Mediterranean coastline (including the Adriatic coast, the Balkans, the Levant, and coastal North Africa), in parts of Eastern and Southern Africa and the Middle East, as well as in South Asia and on the Indian subcontinent.
Many members of the genus are cultivated extensively in temperate climates as ornamental plants for garden and landscape use, for use as culinary herbs, and also commercially for the extraction of essential oils. Lavender is used in traditional medicine and as an ingredient in cosmetics.
Description
The genus includes annual or short-lived herbaceous perennial plants, and shrub-like perennials, subshrubs or small shrubs.
Leaf shape is diverse across the genus. Leaves are simple in some commonly cultivated species; in other species, they are pinnately toothed, or pinnate, sometimes multiply pinnate and dissected. In most species, the leaves are covered in fine hairs or indumentum, which normally contain the essential oils.
Flowers are borne in whorls held on spikes rising above the foliage, the spikes being branched in some species. Some species produce colored bracts at the tips of the inflorescences. The flowers may be blue, violet, or lilac in the wild species, occasionally blackish purple or yellowish. The calyx, formed by the sepals, is tubular. The corolla is also tubular, usually with five lobes (the upper lip often cleft, and the lower lip with two clefts).
Phytochemicals
Some 100 individual phytochemicals have been identified in lavender oil, including major contents of linalyl acetate (30–55%), linalool (20–35%), tannins (5–10%), and caryophyllene (8%), with lesser amounts of sesquiterpenoids, perillyl alcohols, esters, oxides, ketones, cineole, camphor, beta-ocimene, limonene, caproic acid, and caryophyllene oxide. The relative amounts of these compounds vary considerably among lavender species.
Taxonomy
Lavandula stoechas, L. pedunculata, and L. dentata were known in Roman times. From the Middle Ages onwards, the European species were considered two separate groups or genera, Stoechas (L. stoechas, L. pedunculata, L. dentata) and Lavandula (L. spica and L. latifolia), until Carl Linnaeus combined them. He recognised only five species in Species Plantarum (1753), L. multifida and L. dentata (Spain) and L. stoechas and L. spica from Southern Europe. L. pedunculata was included within L. stoechas.
By 1790, L. pinnata and L. carnosa were recognised. The latter was subsequently transferred to Anisochilus. By 1826, Frédéric Charles Jean Gingins de la Sarraz listed 12 species in three sections, and by 1848 eighteen species were known.
One of the first modern major classifications was that of Dorothy Chaytor in 1937 at Kew. The six sections she proposed for 28 species still left many intermediates that could not easily be assigned. Her sections included Stoechas, Spica, Subnudae, Pterostoechas, Chaetostachys, and Dentatae. However, all the major cultivated and commercial forms resided in the Stoechas and Spica sections. There were four species within Stoechas (Lavandula stoechas, L. dentata, L. viridis, and L. pedunculata) while Spica had three (L. officinalis (now L. angustifolia), L. latifolia and L. lanata). She believed that the garden varieties were hybrids between true lavender L. angustifolia and spike lavender (L. latifolia).
Lavandula has three subgenera:
Subgenus Lavandula is mainly of woody shrubs with entire leaves. It contains the principal species grown as ornamental plants and for oils. They are found across the Mediterranean region to northeast Africa and western Arabia.
Subgenus Fabricia consists of shrubs and herbs, and it has a wide distribution from the Atlantic to India. It contains some ornamental plants.
Subgenus Sabaudia constitutes two species in the southwest Arabian peninsula and Eritrea, which are rather distinct from the other species, and are sometimes placed in their own genus Sabaudia.
In addition, there are numerous hybrids and cultivars in commercial and horticultural usage.
The first major clade corresponds to subgenus Lavandula, and the second Fabricia. The Sabaudia group is less clearly defined. Within the Lavandula clade, the subclades correspond to the existing sections but place Dentatae separately from Stoechas, not within it. Within the Fabricia clade, the subclades correspond to Pterostoechas, Subnudae, and Chaetostachys.
Thus the current classification includes 39 species distributed across 8 sections (the original 6 of Chaytor and the two new sections of Upson and Andrews), in three subgenera (see table below). However, since lavender cross-pollinates easily, countless variations present difficulties in classification.
Taxonomic table
This is based on the classification of Upson and Andrews, 2004.
Etymology
The English word lavender came into use in the 13th century, and is generally thought to derive from Old French lavandre, ultimately from Latin lavare from lavo (to wash), referring to the use of blue infusions of the plants for bathing. The botanic name Lavandula as used by Linnaeus is considered to be derived from this and other European vernacular names for the plants.
The names widely used for some of the species, "English lavender", "French lavender" and "Spanish lavender" are all imprecisely applied. "English lavender" is commonly used for L. angustifolia, though some references say the proper term is "Old English lavender". The name "French lavender" may refer to either L. stoechas or to L. dentata. "Spanish lavender" may refer to L. pedunculata, L. stoechas, or L. lanata.
Cultivation
The most common form in cultivation is the common or English lavender Lavandula angustifolia (formerly named L. officinalis). A wide range of cultivars can be found. Other commonly grown ornamental species are L. stoechas, L. dentata, and L. multifida (Egyptian lavender).
Because the cultivated forms are planted in gardens worldwide, they are occasionally found growing wild as garden escapes, well beyond their natural range. Such spontaneous growth is usually harmless, but in some cases, Lavandula species have become invasive. For example, in Australia, L. stoechas has become a cause for concern; it occurs widely throughout the continent and has been declared a noxious weed in Victoria since 1920. It is regarded as a weed in parts of Spain.
Lavenders flourish best in dry, well-drained, sandy or gravelly soils in full sun. English lavender has a long germination process (14–28 days) and matures within 100–110 days. All types need little or no fertilizer and good air circulation. In areas of high humidity, root rot due to fungus infection can be a problem. Organic mulches can trap moisture around the plants' bases, encouraging root rot. Gravelly materials such as crushed rocks give better results. It grows best in soils with a pH between 6 and 8. Most lavender is hand-harvested, and harvest times vary depending on intended use.
Health risks
The U.S. National Center for Complementary and Integrative Health (NCCIH) states that lavender is considered likely safe in food amounts, and that topical uses may cause allergic reactions. The NCCIH does not recommend the use of lavender while pregnant or breastfeeding because of lack of knowledge of its effects. It recommends caution if young boys use lavender oil because of possible hormonal effects leading to gynecomastia.
A 2007 study examined the relationship between various fragrances and photosensitivity, stating that lavender is known "to elicit cutaneous photo-toxic reactions", but does not induce photohaemolysis.
Some people experience contact dermatitis, allergic eczema, or facial dermatitis from the use of lavender oil on skin.
Uses
Lavender oil
Commercially, the plant is grown mainly for the production of lavender essential oil. English lavender (Lavandula angustifolia) yields an oil with sweet overtones and can be used in balms, salves, perfumes, cosmetics, and topical applications.
Lavandula × intermedia, also known as lavandin or Dutch lavender, is a hybrid of L. angustifolia and L. latifolia. Lavandins are widely cultivated for commercial use, since their flowers tend to be bigger than those of English lavender and the plants tend to be easier to harvest. They yield a similar essential oil, but with higher levels of terpenes, including camphor, which add a sharper overtone to the fragrance, regarded by some as of lower quality than that of English lavender.
The US Food and Drug Administration considers lavender as generally recognized as safe for human consumption. The essential oil was used in hospitals during World War I.
Culinary
Culinary lavender is usually English lavender, the most commonly used species in cooking (L. angustifolia 'Munstead'). As an aromatic, it has a sweet fragrance with lemon or citrus notes. It is used as a spice or condiment in pastas, salads and dressings, and desserts. The buds and greens are used in teas, and the buds, processed by bees, are the essential ingredient of a monofloral honey.
Culinary history
Spanish nard, referring to L. stoechas, is listed as an ingredient in making a spiced wine, namely hippocras, in The Forme of Cury.
Lavender was introduced into England in the 1600s. It is said that Queen Elizabeth I of England prized a lavender conserve (jam) at her table, so lavender was produced as a jam at that time, as well as used in teas both medicinally and for its taste.
Lavender was not used in traditional southern French cooking at the turn of the 20th century. It does not appear at all in the best-known compendium of Provençal cooking, J.-B. Reboul's Cuisinière Provençale. French lambs have been allowed to graze on lavender as it is alleged to make their meat more tender and fragrant. In the 1970s, a blend of herbs called herbes de Provence was invented by spice wholesalers. Culinary lavender is added to the mixture in the North American version.
In the 21st century, lavender is used in many world regions to flavor tea, vinegar, jellies, baked goods, and beverages.
Buds
For most cooking applications, the dried buds (also called flowers) are used.
The potency of lavender buds increases with drying, which necessitates more sparing use to avoid a heavy, soapy aftertaste. Chefs recommend reducing the amount by two-thirds when dried buds are substituted in recipes that call for fresh lavender buds.
Lavender buds can amplify both sweet and savory flavors in dishes and are sometimes paired with sheep's milk and goat's milk cheeses. Lavender flowers are occasionally blended with black, green, or herbal teas. Lavender flavors baked goods and desserts, pairing especially well with chocolate. In the United States, both lavender syrup and dried lavender buds are used to make lavender scones and marshmallows.
Lavender buds are put into sugar for two weeks to allow the essential oils and fragrance to transfer; then the sugar itself is used in baking. Lavender can be used in breads where recipes call for rosemary. Lavender can be used decoratively in dishes or spirits, or as a decorative and aromatic in a glass of champagne. Lavender is used in savory dishes, giving stews and reduced sauces aromatic flair. It is also used to scent flans, custards, and sorbets.
In honey
The flowers yield abundant nectar, from which bees make a high-quality honey. Monofloral honey is produced primarily around the Mediterranean Sea, and is marketed worldwide as a premium product. Flowers can be candied and are sometimes used as cake decorations. It is also used to make "lavender sugar".
Herbalism
The German scientific committee on traditional medicine, Commission E, reported uses of lavender flower in practices of herbalism, including its use for restlessness or insomnia, Roemheld syndrome, intestinal discomfort, and cardiovascular diseases, among others.
Other uses
Flower spikes are used for dried flower arrangements. The fragrant, pale purple flowers and flower buds are used in potpourris. Lavender is also used as herbal filler inside sachets used to freshen linens. Dried and sealed in pouches, lavender flowers are placed among stored items of clothing to give a fresh fragrance and to deter moths. Dried lavender flowers may be used for wedding confetti. Lavender is also used in scented waters, soaps, and sachets.
In culture
The ancient Greeks called the lavender herb νάρδος: nárdos, Latinized as nardus, after the Syrian city of Naarda (possibly the modern town of Duhok, Iraq). It was also commonly called nard. The species originally grown was L. stoechas.
During Roman times, flowers were sold for 100 denarii per pound, which was about the same as a month's wages for a farm laborer, or fifty haircuts from the local barber. Its late Latin name was lavandārius, from lavanda (things to be washed), from lavāre from the verb lavo (to wash).
Since the late 19th century, lavenders and their color have been associated with the queer community.
Gallery
| Biology and health sciences | Lamiales | null |
17651 | https://en.wikipedia.org/wiki/Lamiales | Lamiales | The order Lamiales (also known as the mint order) is an order in the asterid group of dicotyledonous flowering plants. It includes about 23,810 species in 1,059 genera, and is divided into about 25 families. These families include Acanthaceae, Bignoniaceae, Byblidaceae, Calceolariaceae, Carlemanniaceae, Gesneriaceae, Lamiaceae, Lentibulariaceae, Linderniaceae, Martyniaceae, Mazaceae, Oleaceae, Orobanchaceae, Paulowniaceae, Pedaliaceae, Peltantheraceae, Phrymaceae, Plantaginaceae, Plocospermataceae, Schlegeliaceae, Scrophulariaceae, Stilbaceae, Tetrachondraceae, Thomandersiaceae, and Verbenaceae.
Being one of the largest orders of flowering plants, Lamiales have representatives found all over the world. Well-known or economically important members of this order include lavender, lilac, olive, jasmine, the ash tree, teak, snapdragon, sesame, psyllium, garden sage, and a number of table herbs such as mint, basil, and rosemary.
Description
Plant species within the order Lamiales are eudicots and are herbaceous or have woody stems. Zygomorphic flowers are common, having five petals with an upper lip of two petals and a lower lip of three petals, but actinomorphic flowers are also seen. Species potentially have five stamens, but these are typically reduced to two or four. Lamiales also produce a single style attached to an ovary typically containing two carpels. The ovary is usually superior. The inflorescence is typically a cyme, raceme or spike. The fruit type is usually a dehiscent capsule. Glandular hairs are present on the plants.
A number of species of carnivorous plants are found in the families Lentibulariaceae and Byblidaceae. Protocarnivorous plant species have also been found in the order, specifically in the Martyniaceae family.
Parasitic plant species are found in the order, belonging to the family Orobanchaceae. These parasitic plants can either be hemi-parasites or holoparasites.
Taxonomy
The Lamiales previously had a restricted circumscription (e.g., by Arthur Cronquist) that included the major families Lamiaceae (Labiatae), Verbenaceae, and Boraginaceae, plus a few smaller families. In the classification system of Dahlgren the Lamiales were in the superorder Lamiiflorae (also called Lamianae). Recent phylogenetic work has shown the Lamiales are polyphyletic with respect to order Scrophulariales and the two groups are now usually combined in a single order that also includes the former orders Hippuridales and Plantaginales. Lamiales has become the preferred name for this much larger combined group. The placement of the Boraginaceae is unclear, but phylogenetic work shows this family does not belong in Lamiales.
Also, the circumscription of family Scrophulariaceae, formerly a paraphyletic group defined primarily by plesiomorphic characters and from within which numerous other families of the Lamiales were derived, has been radically altered to create a number of smaller, better-defined, and putatively monophyletic families.
Dating
Much research has been conducted in recent years on dating the Lamiales lineage, although some ambiguity remains. A 2004 study on the molecular phylogenetic dating of asterid flowering plants estimated 106 million years (MY) for the stem lineage of Lamiales. A similar study in 2009 estimated 80 million years. Another 2009 study gives several reasons why the issue is particularly difficult to solve.
Habitat
The Lamiales order can be found in almost all kinds of habitats world-wide. These habitats include forests, valleys, grasslands, rocky terrain, rainforests, the tropics, temperate regions, marshes, coastlines, and even frozen areas.
Uses
The order Lamiales has a variety of species with anthropogenic uses, the most popular belonging to the Lamiaceae and Acanthaceae families. Many species in the order produce medicinal alkaloids and saponins that help treat a variety of infections and diseases. These alkaloids and saponins may help with digestion, the common cold or flu, asthma, liver infections, and pulmonary infections, and contain antioxidant properties.
Species within the order are also known to repel insects and help control harmful insect-borne diseases, such as malaria from mosquitoes. Plants of the family Acanthaceae have bioactive secondary metabolites within their mature leaves, which have been found to be toxic to insect larvae. Botanically derived insecticides are a good alternative to chemical or synthetic insecticides, as they are inexpensive, abundant and safe for other plants, non-target organisms and the environment.
Many species within the order are used as decorations, flavouring agents, cosmetics and fragrances. Natural dyes can also be extracted from Lamiales species. For example, in Sardinian culture, the Lamiales species most commonly used for natural dyes is Lavandula stoechas, from whose stem a light-green dye is extracted.
| Biology and health sciences | Lamiales | Plants |
17703 | https://en.wikipedia.org/wiki/Leo%20%28constellation%29 | Leo (constellation) | Leo is one of the constellations of the zodiac, between Cancer the crab to the west and Virgo the maiden to the east. It is located in the Northern celestial hemisphere. Its name is Latin for lion, and to the ancient Greeks represented the Nemean Lion killed by the mythical Greek hero Heracles as one of his twelve labors. Its old astronomical symbol is (♌︎). One of the 48 constellations described by the 2nd-century astronomer Ptolemy, Leo remains one of the 88 modern constellations today, and one of the most easily recognizable due to its many bright stars and a distinctive shape that is reminiscent of the crouching lion it depicts.
Features
Stars
Leo contains many bright stars, many of which were individually identified by the ancients. Nine bright stars can be easily seen with the naked eye; four of them are of first or second magnitude, which renders this constellation especially prominent. Six of the nine stars form an asterism known as "the Sickle", which to modern observers may resemble a backwards question mark. The Sickle is marked by six stars: Epsilon Leonis, Mu Leonis, Zeta Leonis, Gamma Leonis, Eta Leonis, and Alpha Leonis. The remaining three stars form an isosceles triangle: Beta Leonis (Denebola) marks the lion's tail, and the rest of his body is delineated by Delta Leonis and Theta Leonis.
Regulus, designated Alpha Leonis, is a blue-white main-sequence star of magnitude 1.34, 77.5 light-years from Earth. It is a double star divisible in binoculars, with a secondary of magnitude 7.7. Its traditional name (Regulus) means "the little king".
Beta Leonis, called Denebola, is at the opposite end of the constellation to Regulus. It is a blue-white star of magnitude 2.23, 36 light-years from Earth. The name Denebola means "the lion's tail".
Algieba, Gamma Leonis, is a binary star with a third optical component; the primary and secondary are divisible in small telescopes and the tertiary is visible in binoculars. The primary is a gold-yellow giant star of magnitude 2.61 and the secondary is similar but at magnitude 3.6; they have a period of 600 years and are 126 light-years from Earth. The unrelated tertiary, 40 Leonis, is a yellow-tinged star of magnitude 4.8. Its traditional name, Algieba, means "the forehead".
Delta Leonis, called Zosma, is a blue-white star of magnitude 2.58, 58 light-years from Earth.
Other named stars in Leo include Mu Leonis, Rasalas (an abbreviation of "Al Ras al Asad al Shamaliyy", meaning "The Lion's Head Toward the North"); and Theta Leonis, Chertan.
Leo is also home to a bright variable star, the red giant R Leonis. It is a Mira variable with a minimum magnitude of 10 and a normal maximum magnitude of 6; it periodically brightens to magnitude 4.4. R Leonis, 330 light-years from Earth, has a period of 310 days and a diameter of 450 solar diameters.
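Since each magnitude step corresponds to a brightness factor of 100^(1/5) ≈ 2.512, the four-magnitude swing between its usual extremes of 10 and 6 corresponds to a change in apparent brightness of about 2.512^4 ≈ 40 times.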
The star Wolf 359 (CN Leonis), one of the nearest stars to Earth at 7.8 light-years away, is in Leo. Wolf 359 is a red dwarf of magnitude 13.5; it periodically brightens by one magnitude or less because it is a flare star. Gliese 436, a faint star in Leo about 33 light-years away from the Sun, is orbited by a transiting Neptune-mass extrasolar planet.
The carbon star CW Leo (IRC +10216) is the brightest star in the night sky at the infrared N-band (10 μm wavelength).
The star SDSS J102915+172927 (Caffau's star) is a population II star in the galactic halo seen in Leo. It is about 13 billion years old, making it one of the oldest stars in the Galaxy. It has the lowest metallicity of any known star.
Modern astronomers, including Tycho Brahe in 1602, excised a group of stars that once made up the "tuft" of the lion's tail and used them to form the new constellation Coma Berenices (Berenice's hair), although there was precedent for that designation among the ancient Greeks and Romans.
Deep-sky objects
Leo contains many bright galaxies; Messier 65, Messier 66, Messier 95, Messier 96, Messier 105, and NGC 3628 are the most famous, the first two being part of the Leo Triplet.
The Leo Ring, a cloud of hydrogen and helium gas, is found in orbit of two galaxies within this constellation. M66 is a spiral galaxy that is part of the Leo Triplet, whose other two members are M65 and NGC 3628. It is at a distance of 37 million light-years and has a somewhat distorted shape due to gravitational interactions with the other members of the Triplet, which are pulling stars away from M66. Eventually, the outermost stars may form a dwarf galaxy orbiting M66. Both M65 and M66 are visible in large binoculars or small telescopes, but their concentrated nuclei and elongation are only visible in large amateur instruments.
M95 and M96 are both spiral galaxies 20 million light-years from Earth. Though they are visible as fuzzy objects in small telescopes, their structure is only visible in larger instruments. M95 is a barred spiral galaxy. M105 is about a degree away from the M95/M96 pair; it is an elliptical galaxy of the 9th magnitude, also about 20 million light-years from Earth.
NGC 2903 is a barred spiral galaxy discovered by William Herschel in 1784. It is very similar in size and shape to the Milky Way and is located 25 million light-years from Earth. In its core, NGC 2903 has many "hotspots", which have been found to be near regions of star formation. The star formation in this region is thought to be due to the presence of the dusty bar, which sends shock waves through its rotation to an area with a diameter of 2,000 light-years. The outskirts of the galaxy have many young open clusters.
Leo is also home to some of the largest structures in the observable universe. Some of the structures found in the constellation are the Clowes–Campusano LQG, U1.11, U1.54, and the Huge-LQG, which are all large quasar groups; the latter being the second largest structure known (see also NQ2-NQ4 GRB overdensity).
Meteor showers
The Leonids occur in November, peaking on November 14–15, and have a radiant close to Gamma Leonis. Their parent body is Comet Tempel–Tuttle, whose roughly 33-year orbital period drives significant outbursts on that cycle. The normal peak rate is approximately 10 meteors per hour.
The January Leonids are a minor shower that peaks between January 1 and 7.
History and mythology
Leo was one of the earliest recognized constellations, with archaeological evidence that the Mesopotamians had a similar constellation as early as 4000 BCE. The Persians called Leo Ser or Shir; the Turks, Artan; the Syrians, Aryo; the Jews, Arye; the Indians, Simha, all meaning "lion".
Some mythologists believe that in Sumeria, Leo represented the monster Humbaba, who was killed by Gilgamesh.
In Babylonian astronomy, the constellation was called UR.GU.LA, the "Great Lion"; the bright star Regulus was known as "the star that stands at the Lion's breast." Regulus also had distinctly regal associations, as it was known as the King Star.
In Greek mythology, Leo was identified as the Nemean Lion which was killed by Heracles (Hercules to the Romans) during the first of his twelve labours. The Nemean Lion would take women as hostages to its lair in a cave, luring warriors from nearby towns to save the damsel in distress, to their misfortune. The Lion was impervious to any weaponry; thus, the warriors' clubs, swords, and spears were rendered useless against it. Realizing that he must defeat the Lion with his bare hands, Hercules slipped into the Lion's cave and engaged it at close quarters. When the Lion pounced, Hercules caught it in midair, one hand grasping the Lion's forelegs and the other its hind legs, and bent it backwards, breaking its back and freeing the trapped maidens. Zeus commemorated this labor by placing the Lion in the sky.
The Roman poet Ovid called it Herculeus Leo and Violentus Leo. Bacchi Sidus (star of Bacchus) was another of its titles, the god Bacchus always being identified with this animal. However, Manilius called it Jovis et Junonis Sidus (Star of Jupiter and Juno).
Astrology
The Sun appears in the constellation Leo from August 10 to September 16. In tropical astrology, the Sun is considered to be in the sign Leo from July 23 to August 22, and in sidereal astrology, from August 16 to September 17.
Namesakes
USS Leonis (AK-128) was a United States Navy Crater class cargo ship.
| Physical sciences | Zodiac | Astronomy |
17704 | https://en.wikipedia.org/wiki/Libra%20%28constellation%29 | Libra (constellation) | Libra is a constellation of the zodiac and is located in the Southern celestial hemisphere. Its name is Latin for weighing scales. Its old astronomical symbol is (♎︎). It is fairly faint, with no first magnitude stars, and lies between Virgo to the west and Scorpius to the east. Beta Librae, also known as Zubeneschamali, is the brightest star in the constellation. Three star systems are known to have planets.
Features
Stars
Overall, there are 83 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5.
The brightest stars in Libra form a quadrangle that distinguishes it for the unaided observer. Traditionally, Alpha and Beta Librae are considered to represent the scales' balance beam, while Gamma and Sigma are the weighing pans.
Alpha Librae, called Zubenelgenubi, is a multiple star system divisible into two stars when seen through binoculars. The primary (Alpha2 Librae) is a blue-white star of magnitude 2.7 and the secondary (Alpha1 Librae) is a white star of magnitude 5.2 and spectral type F3V that is 74.9 ± 0.7 light-years from Earth. Its traditional name means "the southern claw". Zubeneschamali (Beta Librae) is the corresponding "northern claw" to Zubenelgenubi. The brightest star in Libra, Zubeneschamali is a green-tinged star of magnitude 2.6, 160 (or 185 ± 2) light-years from Earth. Gamma Librae is called Zubenelakrab, which means "the scorpion's claw", completing the suite of names referring to Libra's archaic status. It is an orange giant of magnitude 3.9, 152 light-years from Earth.
Iota Librae is a complex multiple star, 377 light-years from Earth, with both optical and true binary components in it. The primary appears as a blue-white star of magnitude 4.5; it is a binary star indivisible in even the largest amateur instruments with a period of 23 years. The secondary, visible in small telescopes as a star of magnitude 9.4, is a binary with two components, magnitudes 10 and 11. There is an optical companion to Iota Librae; 25 Librae is a star of magnitude 6.1, 219 light-years from Earth and visible in binoculars. Mu Librae is a binary star divisible in medium-aperture amateur telescopes, 235 light-years from Earth. The primary is of magnitude 5.7 and the secondary is of magnitude 6.8.
Delta Librae is an Algol-type eclipsing variable star, 304 light-years from Earth. It has a period of 2 days, 8 hours; its minimum magnitude is 5.9 and its maximum magnitude is 4.9. FX Librae, designated 48 Librae, is a shell star of magnitude 4.9. Shell stars, like Pleione and Gamma Cassiopeiae, are blue supergiants with irregular variations caused by their abnormally high speed of rotation, which ejects gas from the star's equator.
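A difference of 1.0 magnitude corresponds to a brightness ratio of 100^(1/5) ≈ 2.512, so at minimum Delta Librae appears only about 1/2.512 ≈ 40% as bright as at maximum.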
Sigma Librae (the proper name is Brachium) was formerly known as Gamma Scorpii, despite lying well inside the boundaries of Libra. It was not redesignated as Sigma Librae until 1851, by Benjamin A. Gould.
History and mythology
Libra was known in Babylonian astronomy as MUL Zibanu (the "scales" or "balance"), or alternatively as the Claws of the Scorpion. The scales were held sacred to the sun god Shamash, who was also the patron of truth and justice.
It was also seen as the Scorpion's Claws in ancient Greece. Since these times, Libra has been associated with law, fairness and civility. In Arabic zubānā means "scorpion's claws", and likely similarly in other Semitic languages: this resemblance of words may be why the Scorpion's claws became the Scales. Indeed, Zubenelgenubi and Zubeneschamali, the names of the constellation's two main stars, in Arabic mean "southern claw" and "northern claw" respectively. It has also been suggested that the scales are an allusion to the fact that when the sun entered this part of the ecliptic at the autumnal equinox, the days and nights are equal. Libra's status as the location of the equinox earned the equinox the name "First Point of Libra", though this location ceased to coincide with the constellation in 730 BC because of the precession of the equinoxes.
In ancient Egypt the three brightest stars of Libra (α, β, and σ Librae) formed a constellation that was viewed as a boat. Libra is not mentioned by Eudoxus or Aratus. It is mentioned by Manetho (3rd century B.C.) and Geminus (1st century B.C.), and was included by Ptolemy in his 48 asterisms. Ptolemy catalogued 17 stars, Tycho Brahe 10, and Johannes Hevelius 20. It only became a constellation in ancient Rome, when it began to represent the scales held by Astraea, the goddess of justice, associated with Virgo in Greek mythology.
The constellation
Libra is bordered by the head of Serpens to the north, Virgo to the northwest, Hydra to the southwest, the corner of Centaurus to the southwest, Lupus to the south, Scorpius to the east and Ophiuchus to the northeast. Covering 538.1 square degrees and 1.304% of the night sky, it ranks 29th of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Lib". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 12 segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −0.47° and −30.00°. The whole constellation is visible to observers south of latitude 60°N.
Planetary systems
Libra is home to the Gliese 581 planetary system, which consists of the star Gliese 581 and three confirmed planets. This system gained attention in the late 2000s and early 2010s as the subject of some of the earliest claims of potentially habitable exoplanets, but it is now known that Gliese 581c is too hot to be potentially habitable, and the planet candidates Gliese 581d and g likely do not exist. At the time of its discovery in 2009, Gliese 581e was the smallest mass exoplanet known orbiting a normal star.
Deep-sky objects
Libra is home to one bright globular cluster, NGC 5897. It is a loose cluster, 50,000 light-years from Earth; it is fairly large and has an integrated magnitude of 9. IC 1059 is a galaxy in the constellation Libra.
Astrology
The Sun appears in the constellation Libra from October 31 to November 22. In tropical astrology, the Sun is considered to be in the sign Libra from the northern autumnal equinox (c. September 23) to on or about October 23, and in sidereal astrology, from October 16 to November 15.
Namesakes
USS Libra (AKA-12) was a United States Navy ship named after the constellation.
Tropical Storm Tembin – Tembin, the Japanese name for the constellation, has been given to four tropical cyclones in the western Pacific.
| Physical sciences | Zodiac | Astronomy |
17717 | https://en.wikipedia.org/wiki/Locomotive | Locomotive | A locomotive is a rail transport vehicle that provides the motive power for a train. If a locomotive is capable of carrying a payload, it is usually referred to instead as a multiple unit, motor coach, railcar or power car; the use of these self-propelled vehicles is increasingly common for passenger trains, but rare for freight trains.
Traditionally, locomotives pulled trains from the front. However, push–pull operation has become common, where the train may have a locomotive (or locomotives) at the front, at the rear, or at each end. Most recently, railroads have begun adopting DPU, or distributed power: the front may have one or two locomotives followed by a mid-train locomotive that is controlled remotely from the lead unit.
Etymology
The word locomotive originates from the Latin loco ('from a place', the ablative of locus, 'place') and the Medieval Latin motivus ('causing motion'), and is a shortened form of the term locomotive engine, which was first used in 1814 to distinguish between self-propelled and stationary steam engines.
Classifications
Prior to locomotives, the motive force for railways had been generated by various lower-technology methods such as human power, horse power, gravity or stationary engines that drove cable systems. Few such systems are still in existence today. Locomotives may generate their power from fuel (wood, coal, petroleum or natural gas), or they may take power from an outside source of electricity. It is common to classify locomotives by their source of energy. The common ones include:
Steam
A steam locomotive is a locomotive whose primary power source is a steam engine. The most common form of steam locomotive also contains a boiler to generate the steam used by the engine. The water in the boiler is heated by burning combustible material – usually coal, wood, or oil – to produce steam. The steam moves reciprocating pistons which are connected to the locomotive's main wheels, known as the "driving wheels". Both fuel and water supplies are carried with the locomotive, either on the locomotive itself, in bunkers and tanks (this arrangement is known as a "tank locomotive"), or pulled behind the locomotive, in tenders (this arrangement is known as a "tender locomotive").
The first full-scale working railway steam locomotive was built by Richard Trevithick in 1802. It was constructed for the Coalbrookdale ironworks in Shropshire in England though no record of it working there has survived. On 21 February 1804, the first recorded steam-hauled railway journey took place as another of Trevithick's locomotives hauled a train from the Penydarren ironworks, in Merthyr Tydfil, to Abercynon in South Wales. Accompanied by Andrew Vivian, it ran with mixed success. The design incorporated a number of important innovations including the use of high-pressure steam which reduced the weight of the engine and increased its efficiency.
In 1812, Matthew Murray's twin-cylinder rack locomotive Salamanca first ran on the edge-railed rack-and-pinion Middleton Railway; this is generally regarded as the first commercially successful locomotive. Another well-known early locomotive was Puffing Billy, built 1813–14 by engineer William Hedley for the Wylam Colliery near Newcastle upon Tyne. This locomotive is the oldest preserved, and is on static display in the Science Museum, London. George Stephenson built Locomotion No. 1 for the Stockton & Darlington Railway in the north-east of England, which was the first public steam railway in the world. In 1829, his son Robert built Rocket in Newcastle upon Tyne. Rocket was entered into, and won, the Rainhill Trials. This success led to the company emerging as the pre-eminent early builder of steam locomotives used on railways in the UK, US and much of Europe. The Liverpool & Manchester Railway, built by Stephenson, opened a year later, making exclusive use of steam power for passenger and goods trains.
The steam locomotive remained by far the most common type of locomotive until after World War II. Steam locomotives are less efficient than modern diesel and electric locomotives, and a significantly larger workforce is required to operate and service them. British Rail figures showed that the cost of crewing and fuelling a steam locomotive was about two and a half times larger than the cost of supporting an equivalent diesel locomotive, and the daily mileage they could run was lower. Between about 1950 and 1970, the majority of steam locomotives were retired from commercial service and replaced with electric and diesel–electric locomotives. While North America transitioned from steam during the 1950s, and continental Europe by the 1970s, in other parts of the world, the transition happened later. Steam was a familiar technology that used widely-available fuels and in low-wage economies did not suffer as wide a cost disparity. It continued to be used in many countries until the end of the 20th century. By the end of the 20th century, almost the only steam power remaining in regular use around the world was on heritage railways.
Internal combustion
Internal combustion locomotives use an internal combustion engine, connected to the driving wheels by a transmission. They typically keep the engine running at a near-constant speed whether the locomotive is stationary or moving. Internal combustion locomotives are categorised by their fuel type and sub-categorised by their transmission type.
The first internal combustion rail vehicle was a kerosene-powered draisine built by Gottlieb Daimler in 1887, but this was not technically a locomotive as it carried a payload.
The earliest gasoline locomotive in the western United States was built by the Best Manufacturing Company in 1891 for San Jose and Alum Rock Railroad. It was only a limited success and was returned to Best in 1892.
The first commercially successful petrol locomotive in the United Kingdom was a petrol–mechanical locomotive built by the Maudslay Motor Company in 1902, for the Deptford Cattle Market in London. It was an 80 hp locomotive using a three-cylinder vertical petrol engine, with a two speed mechanical gearbox.
Diesel
Diesel locomotives are powered by diesel engines. In the early days of diesel propulsion development, various transmission systems were employed with varying degrees of success, with electric transmission proving to be the most popular. In 1914, Hermann Lemp, a General Electric electrical engineer, developed and patented a reliable direct current electrical control system (subsequent improvements were also patented by Lemp). Lemp's design used a single lever to control both engine and generator in a coordinated fashion, and was the prototype for all diesel–electric locomotive control. In 1917–18, GE produced three experimental diesel–electric locomotives using Lemp's control design. In 1924, a diesel–electric locomotive (Eel2 original number Юэ 001/Yu-e 001) started operations. It had been designed by a team led by Yury Lomonosov and built 1923–1924 by Maschinenfabrik Esslingen in Germany. It had five driving axles (1'E1'). After several test rides, it hauled trains for almost three decades from 1925 to 1954.
Electric
An electric locomotive is a locomotive powered only by electricity. Electricity is supplied to moving trains with a (nearly) continuous conductor running along the track that usually takes one of three forms: an overhead line, suspended from poles or towers along the track or from structure or tunnel ceilings; a third rail mounted at track level; or an onboard battery. Both overhead wire and third-rail systems usually use the running rails as the return conductor but some systems use a separate fourth rail for this purpose. The type of electrical power used is either direct current (DC) or alternating current (AC).
Various collection methods exist: a trolley pole, which is a long flexible pole that engages the line with a wheel or shoe; a bow collector, which is a frame that holds a long collecting rod against the wire; a pantograph, which is a hinged frame that holds the collecting shoes against the wire in a fixed geometry; or a contact shoe, which is a shoe in contact with the third rail. Of the three, the pantograph method is best suited for high-speed operation.
Electric locomotives almost universally use axle-hung traction motors, with one motor for each powered axle. In this arrangement, one side of the motor housing is supported by plain bearings riding on a ground and polished journal that is integral to the axle. The other side of the housing has a tongue-shaped protuberance that engages a matching slot in the truck (bogie) bolster, its purpose being to act as a torque reaction device, as well as a support. Power transfer from motor to axle is effected by spur gearing, in which a pinion on the motor shaft engages a bull gear on the axle. Both gears are enclosed in a liquid-tight housing containing lubricating oil. The type of service in which the locomotive is used dictates the gear ratio employed. Numerically high ratios are commonly found on freight units, whereas numerically low ratios are typical of passenger engines.
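To make the gearing arithmetic concrete, here is a hedged Python sketch. The tooth counts, motor speed and wheel diameter below are invented for illustration, not taken from any real locomotive; the point is only that a numerically high ratio (large bull gear, small pinion) yields low rail speed but high effort for a given motor speed, and vice versa.

```python
import math

def rail_speed_kmh(motor_rpm: float, pinion_teeth: int, bull_gear_teeth: int,
                   wheel_diameter_m: float) -> float:
    """Rail speed for an axle-hung motor driving the axle through spur gears.

    The axle turns slower than the motor by the ratio of bull gear teeth
    to pinion teeth; rail speed is wheel circumference times axle turns.
    """
    axle_rpm = motor_rpm * pinion_teeth / bull_gear_teeth
    return axle_rpm * math.pi * wheel_diameter_m * 60 / 1000  # m/min -> km/h

# Illustrative figures only: same motor speed and 1 m wheels,
# a numerically high "freight" ratio versus a lower "passenger" one.
print(f"freight   (18:77): {rail_speed_kmh(1000, 18, 77, 1.0):5.1f} km/h")
print(f"passenger (25:52): {rail_speed_kmh(1000, 25, 52, 1.0):5.1f} km/h")
```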
Electricity is typically generated in large and relatively efficient generating stations, transmitted to the railway network and distributed to the trains. Some electric railways have their own dedicated generating stations and transmission lines but most purchase power from an electric utility. The railway usually provides its own distribution lines, switches and transformers.
Electric locomotives usually cost 20% less than diesel locomotives, their maintenance costs are 25–35% lower, and they cost up to 50% less to run.
Direct current
The earliest systems were DC systems. The first electric passenger train was presented by Werner von Siemens at Berlin in 1879. The locomotive was driven by a 2.2 kW, series-wound motor, and the train, consisting of the locomotive and three cars, reached a speed of 13 km/h. Over four months, the train carried 90,000 passengers on a 300-metre-long (984 feet) circular track. The electricity (150 V DC) was supplied through a third insulated rail between the tracks. A contact roller was used to collect the electricity. The world's first electric tram line opened in Lichterfelde near Berlin, Germany, in 1881. It was built by Werner von Siemens (see Gross-Lichterfelde Tramway and Berlin Straßenbahn). The Volk's Electric Railway opened in 1883 in Brighton, and is the oldest surviving electric railway. Also in 1883, the Mödling and Hinterbrühl Tram opened near Vienna in Austria. It was the first in the world in regular service powered from an overhead line. Five years later, in 1888, electric trolleys were pioneered in the U.S. on the Richmond Union Passenger Railway, using equipment designed by Frank J. Sprague.
The first electrically worked underground line was the City & South London Railway, prompted by a clause in its enabling act prohibiting the use of steam power. It opened in 1890, using electric locomotives built by Mather & Platt. Electricity quickly became the power supply of choice for subways, abetted by Sprague's invention of multiple-unit train control in 1897.
The first use of electrification on a main line was on a four-mile stretch of the Baltimore Belt Line of the Baltimore & Ohio (B&O) in 1895 connecting the main portion of the B&O to the new line to New York through a series of tunnels around the edges of Baltimore's downtown. Three Bo+Bo units were initially used, at the south end of the electrified section; they coupled onto the locomotive and train and pulled it through the tunnels.
DC was used on the earliest systems. These systems were gradually replaced by AC. Today, almost all main-line railways use AC systems. DC systems are confined mostly to urban transit such as metro systems, light rail and trams, where power requirements are lower.
Alternating current
The first practical AC electric locomotive was designed by Charles Brown, then working for Oerlikon, Zürich. In 1891, Brown had demonstrated long-distance power transmission, using three-phase AC, between a hydro-electric plant at Lauffen am Neckar and Frankfurt am Main West, a distance of 280 km. Using experience he had gained while working for Jean Heilmann on steam–electric locomotive designs, Brown observed that three-phase motors had a higher power-to-weight ratio than DC motors and, because of the absence of a commutator, were simpler to manufacture and maintain. However, they were much larger than the DC motors of the time and could not be mounted in underfloor bogies: they could only be carried within locomotive bodies.
In 1894, Hungarian engineer Kálmán Kandó developed a new type of 3-phase asynchronous electric drive motor and generator for electric locomotives.
Kandó's early 1894 designs were first applied in a short three-phase AC tramway in Evian-les-Bains (France), which was constructed between 1896 and 1898. In 1918, Kandó invented and developed the rotary phase converter, enabling electric locomotives to use three-phase motors whilst supplied via a single overhead wire, carrying the simple industrial frequency (50 Hz) single phase AC of the high voltage national networks.
In 1896, Oerlikon installed the first commercial example of the system on the Lugano Tramway. Each 30-tonne locomotive had two motors run by three-phase 750 V 40 Hz fed from double overhead lines. Three-phase motors run at constant speed and provide regenerative braking, and are well suited to steeply graded routes; the first main-line three-phase locomotives were supplied by Brown (by then in partnership with Walter Boveri) in 1899 on the 40 km Burgdorf–Thun line, Switzerland. The first implementation of industrial frequency single-phase AC supply for locomotives came from Oerlikon in 1901, using the designs of Hans Behn-Eschenburg and Emil Huber-Stockar; installation on the Seebach-Wettingen line of the Swiss Federal Railways was completed in 1904. The 15 kV, 50 Hz, 48-tonne locomotives used transformers and rotary converters to power DC traction motors.
Italian railways were the first in the world to introduce electric traction for the entire length of a main line rather than just a short stretch. The 106 km Valtellina line was opened on 4 September 1902, designed by Kandó and a team from the Ganz works. The electrical system was three-phase at 3 kV 15 Hz. The voltage was significantly higher than used earlier and it required new designs for electric motors and switching devices. The three-phase two-wire system was used on several railways in Northern Italy and became known as "the Italian system". Kandó was invited in 1905 to undertake the management of Società Italiana Westinghouse and led the development of several Italian electric locomotives.
Battery–electric
A battery–electric locomotive (or battery locomotive) is an electric locomotive powered by onboard batteries; a kind of battery electric vehicle.
Such locomotives are used where a conventional diesel or electric locomotive would be unsuitable. An example is maintenance trains on electrified lines when the electricity supply is turned off. Another use is in industrial facilities where a combustion-powered locomotive (i.e., steam- or diesel-powered) could cause a safety issue due to the risks of fire, explosion or fumes in a confined space. Battery locomotives are preferred for mines where gas could be ignited by trolley-powered units arcing at the collection shoes, or where electrical resistance could develop in the supply or return circuits, especially at rail joints, and allow dangerous current leakage into the ground. Battery locomotives in over-the-road service can recharge while absorbing dynamic-braking energy.
The first known electric locomotive was built in 1837 by chemist Robert Davidson of Aberdeen, and it was powered by galvanic cells (batteries). Davidson later built a larger locomotive named Galvani, exhibited at the Royal Scottish Society of Arts Exhibition in 1841. The seven-ton vehicle had two direct-drive reluctance motors, with fixed electromagnets acting on iron bars attached to a wooden cylinder on each axle, and simple commutators. It hauled a load of six tons at four miles per hour (6 kilometers per hour) for a distance of one and a half miles (2.4 km). It was tested on the Edinburgh and Glasgow Railway in September of the following year, but the limited power from batteries prevented its general use.
Another example was at the Kennecott Copper Mine, Latouche, Alaska, where in 1917 the underground haulage ways were widened to enable working by two battery locomotives. In 1928, Kennecott Copper ordered four 700-series electric locomotives with on-board batteries. These locomotives weighed 85 tons and operated on 750-volt overhead trolley wire, with considerable further range whilst running on batteries. The locomotives provided several decades of service using nickel–iron battery (Edison) technology. The batteries were replaced with lead–acid batteries, and the locomotives were retired shortly afterward. All four locomotives were donated to museums, but one was scrapped. The others can be seen at the Boone and Scenic Valley Railroad, Iowa, and at the Western Railway Museum in Rio Vista, California. The Toronto Transit Commission previously operated a battery electric locomotive built by Nippon Sharyo in 1968 and retired in 2009.
London Underground regularly operates battery–electric locomotives for general maintenance work.
Other types
Fireless
Atomic–electric
In the early 1950s, Lyle Borst of the University of Utah was given funding by various US railroad lines and manufacturers to study the feasibility of an electric-drive locomotive in which an onboard atomic reactor produced the steam to generate the electricity. At that time, atomic power was not fully understood; Borst believed the major stumbling block was the price of uranium. With the Borst atomic locomotive, the center section would have a 200-ton reactor chamber and steel walls 5 feet thick to prevent releases of radioactivity in case of accidents. He estimated the cost to manufacture atomic locomotives with 7000 h.p. engines at approximately $1,200,000 each. Consequently, trains with onboard nuclear generators were generally deemed unfeasible due to prohibitive costs.
Fuel cell–electric
In 2002, the first 3.6 tonne, 17 kW hydrogen-(fuel-cell)–powered mining locomotive was demonstrated in Val-d'Or, Quebec. In 2007, the educational mini-hydrail in Kaohsiung, Taiwan, went into service. The Railpower GG20B is another example of a fuel cell–electric locomotive.
Hybrid locomotives
There are many different types of hybrid or dual-mode locomotives using two or more types of motive power. The most common hybrids are electro-diesel locomotives powered either from an electricity supply or else by an onboard diesel engine. These are used to provide continuous journeys along routes that are only partly electrified. Examples include the EMD FL9 and the Bombardier ALP-45DP.
Use
There are three main uses of locomotives in rail transport operations: hauling passenger trains, hauling freight trains, and switching (UK English: shunting).
Freight locomotives are normally designed to deliver high starting tractive effort and high sustained power. This allows them to start and move long, heavy trains, but usually comes at the cost of relatively low maximum speeds. Passenger locomotives usually develop lower starting tractive effort but are able to operate at the high speeds required to maintain passenger schedules. Mixed-traffic locomotives (US English: general purpose or road switcher locomotives) meant for both passenger and freight trains do not develop as much starting tractive effort as a freight locomotive but are able to haul heavier trains than a passenger locomotive.
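The trade-off described above follows from power = force × velocity: at a fixed power rating, the effort available at the rail falls as speed rises. A minimal Python sketch, with an assumed power rating and transmission efficiency (both invented for illustration):

```python
def tractive_effort_kn(power_kw: float, speed_kmh: float,
                       efficiency: float = 0.85) -> float:
    """Tractive effort available at a given speed from a fixed power rating.

    Since power = force * velocity, effort falls off inversely with speed.
    This ignores adhesion and low-speed current limits, which in practice
    cap the effort when starting. The 0.85 efficiency is an assumption.
    """
    speed_ms = speed_kmh / 3.6
    return power_kw * efficiency / speed_ms  # kW / (m/s) = kN

# A hypothetical 3000 kW locomotive: ample effort at freight speeds,
# far less at passenger speeds.
for v in (20, 60, 120):
    print(f"{v:3d} km/h -> {tractive_effort_kn(3000, v):6.1f} kN")
```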
Most steam locomotives have reciprocating engines, with pistons coupled to the driving wheels by means of connecting rods, with no intervening gearbox. This means the combination of starting tractive effort and maximum speed is greatly influenced by the diameter of the driving wheels. Steam locomotives intended for freight service generally have smaller diameter driving wheels than passenger locomotives.
In diesel-electric and electric locomotives the control system between the traction motors and axles adapts the power output to the rails for freight or passenger service. Passenger locomotives may include other features, such as head-end power (also referred to as hotel power or electric train supply) or a steam generator.
Some locomotives are designed specifically to work steep grade railways, and feature extensive additional braking mechanisms and sometimes rack and pinion. Steam locomotives built for steep rack and pinion railways frequently have the boiler tilted relative to the locomotive frame, so that the boiler remains roughly level on steep grades.
Locomotives are also used on some high-speed trains. Some are operated in push-pull formation with a trailer control car at the other end of the train, which often has a cab with the same design as that of the locomotive; examples of such trains with conventional locomotives are the Railjet and Intercity 225.
Many high-speed trains, including all TGV, many Talgo (250 / 350 / Avril / XXI), some Korea Train Express, ICE 1/ICE 2 and Intercity 125 sets, use dedicated power cars, which carry no passengers and are technically special single-ended locomotives. The difference from conventional locomotives is that these power cars are an integral part of the train and are not adapted for operation with other types of passenger coaches. On the other hand, many high-speed trains, such as those of the Shinkansen network, never use locomotives. Instead of locomotive-like power cars, they use electric multiple units (EMUs) or diesel multiple units (DMUs) – passenger cars that also carry traction motors and power equipment. Dedicated locomotive-like power cars allow for high ride quality and less electrical equipment, but EMUs have lower axle weight, which reduces maintenance costs, and they also offer higher acceleration and higher seating capacity.
Some trains, including the TGV PSE, TGV TMST and TGV V150, use both non-passenger power cars and additional motorized passenger cars.
Operational role
Locomotives occasionally work in a specific role, such as:
Train engine is the technical name for a locomotive attached to the front of a railway train to haul that train. Alternatively, where facilities exist for push-pull operation, the train engine might be attached to the rear of the train;
Pilot engine – a locomotive attached in front of the train engine, to enable double-heading;
Banking engine – a locomotive temporarily assisting a train from the rear, due to a difficult start or a steep gradient;
Light engine – a locomotive operating without a train behind it, for relocation or operational reasons. Occasionally, a light engine is referred to as a train in and of itself.
Station pilot – a locomotive used to shunt passenger trains at a railway station.
Wheel arrangement
The wheel arrangement of a locomotive describes how many wheels it has; common methods include the AAR wheel arrangement, UIC classification, and Whyte notation systems.
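As a rough illustration of one of these systems: Whyte notation counts wheels in hyphen-separated groups – leading wheels, one or more groups of coupled driving wheels, then trailing wheels – so a 4-6-2 "Pacific" has 4 leading, 6 driving and 2 trailing wheels. A small Python sketch (the helper is hypothetical, not a standard library):

```python
def parse_whyte(notation: str) -> dict:
    """Split a Whyte notation string (e.g. '4-6-2') into wheel groups.

    Whyte notation counts wheels, not axles: the first group is leading
    wheels, the last is trailing wheels, and everything between is
    coupled driving wheels.
    """
    groups = [int(part) for part in notation.split("-")]
    return {
        "leading": groups[0],
        "driving": sum(groups[1:-1]),
        "trailing": groups[-1],
        "total": sum(groups),
    }

print(parse_whyte("4-6-2"))
# {'leading': 4, 'driving': 6, 'trailing': 2, 'total': 12}
```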
Remote control locomotives
In the second half of the twentieth century, remote control locomotives started to enter service in switching operations, being remotely controlled by an operator outside of the locomotive cab.
The main benefit is that one operator can control the loading of grain, coal, gravel, etc. into the cars. In addition, the same operator can move the train as needed, so the locomotive is loaded or unloaded in about a third of the time.
| Technology | Trains | null |
17725 | https://en.wikipedia.org/wiki/Lighthouse | Lighthouse | A lighthouse is a tower, building, or other type of physical structure designed to emit light from a system of lamps and lenses and to serve as a beacon for navigational aid for maritime pilots at sea or on inland waterways.
Lighthouses mark dangerous coastlines, hazardous shoals, reefs, rocks, and safe entries to harbors; they also assist in aerial navigation. Once widely used, the number of operational lighthouses has declined due to the expense of maintenance and the advent of much cheaper, more sophisticated, and more effective electronic navigational systems.
History
Ancient lighthouses
Before the development of clearly defined ports, mariners were guided by fires built on hilltops. Since elevating the fire would improve visibility, placing the fire on a platform became a practice that led to the development of the lighthouse. In antiquity, the lighthouse functioned more as an entrance marker to ports than as a warning signal for reefs and promontories, unlike many modern lighthouses. The most famous lighthouse structure from antiquity was the Pharos of Alexandria, Egypt, which collapsed following a series of earthquakes between 956 and 1323.
The intact Tower of Hercules at A Coruña, Spain gives insight into ancient lighthouse construction; other evidence about lighthouses exists in depictions on coins and mosaics, of which many represent the lighthouse at Ostia. Coins from Alexandria, Ostia, and Laodicea in Syria also exist.
Modern construction
The modern era of lighthouses began at the turn of the 18th century, as the number of lighthouses being constructed increased significantly due to much higher levels of transatlantic commerce. Advances in structural engineering and new and efficient lighting equipment allowed for the creation of larger and more powerful lighthouses, including ones exposed to the sea. The function of lighthouses gradually changed from indicating ports to providing a visible warning against shipping hazards, such as rocks or reefs.
The Eddystone Rocks were a major shipwreck hazard for mariners sailing through the English Channel. The first lighthouse built there was an octagonal wooden structure, anchored by 12 iron stanchions secured in the rock, and was built by Henry Winstanley from 1696 to 1698. His lighthouse was the first tower in the world to have been fully exposed to the open sea.
The civil engineer John Smeaton rebuilt the lighthouse from 1756 to 1759; his tower marked a major step forward in the design of lighthouses and remained in use until 1877. He modeled the shape of his lighthouse on that of an oak tree, using granite blocks. He rediscovered and used "hydraulic lime", a form of concrete used by the Romans that will set under water, and developed a technique of securing the granite blocks together using dovetail joints and marble dowels. The dovetailing feature served to improve the structural stability, although Smeaton also had to taper the thickness of the tower towards the top, for which he curved the tower inwards on a gentle gradient. This profile had the added advantage of allowing some of the energy of the waves to dissipate on impact with the walls. His lighthouse was the prototype for the modern lighthouse and influenced all subsequent engineers.
One such influence was Robert Stevenson, himself a seminal figure in the development of lighthouse design and construction. His greatest achievement was the construction of the Bell Rock Lighthouse in 1810, one of the most impressive feats of engineering of the age. This structure was based upon Smeaton's design, but with several improved features, such as the incorporation of rotating lights, alternating between red and white. Stevenson worked for the Northern Lighthouse Board for nearly fifty years during which time he designed and oversaw the construction and later improvement of numerous lighthouses. He innovated in the choice of light sources, mountings, reflector design, the use of Fresnel lenses, and in rotation and shuttering systems providing lighthouses with individual signatures allowing them to be identified by seafarers. He also invented the movable jib and the balance-crane as a necessary part for lighthouse construction.
Alexander Mitchell designed the first screw-pile lighthouse – his lighthouse was built on piles that were screwed into the sandy or muddy seabed. Construction of his design began in 1838 at the mouth of the Thames and was known as the Maplin Sands lighthouse, and first lit in 1841. Although its construction began later, the Wyre Light in Fleetwood, Lancashire, was the first to be lit (in 1840).
Lighting improvements
Until 1782 the source of illumination had generally been wood pyres or burning coal. The Argand lamp, invented in 1782 by the Swiss scientist Aimé Argand revolutionized lighthouse illumination with its steady smokeless flame. Early models used ground glass which was sometimes tinted around the wick. Later models used a mantle of thorium dioxide suspended over the flame, creating a bright, steady light. The Argand lamp used whale oil, colza, olive oil or other vegetable oil as fuel, supplied by a gravity feed from a reservoir mounted above the burner. The lamp was first produced by Matthew Boulton, in partnership with Argand, in 1784, and became the standard for lighthouses for over a century.
South Foreland Lighthouse was the first tower to successfully use an electric light in 1875. The lighthouse's carbon arc lamps were powered by a steam-driven magneto. John Richardson Wigham was the first to develop a system for gas illumination of lighthouses. His improved gas 'crocus' burner at the Baily Lighthouse near Dublin was 13 times more powerful than the most brilliant light then known.
The vaporized oil burner was invented in 1901 by Arthur Kitson, and improved by David Hood at Trinity House. The fuel was vaporized at high pressure and burned to heat the mantle, giving an output of over six times the luminosity of traditional oil lights. The use of gas as illuminant became widely available with the invention of the Dalén light by Swedish engineer Gustaf Dalén. He used Agamassan (Aga), a substrate, to absorb the gas, allowing the gas to be stored, and hence used, safely. Dalén also invented the 'sun valve', which automatically regulated the light and turned it off during the daytime. The technology was the predominant light source in lighthouses from the 1900s to the 1960s, when electric lighting had become dominant.
Optical systems
With the development of the steady illumination of the Argand lamp, the application of optical lenses to increase and focus the light intensity became a practical possibility. William Hutchinson developed the first practical optical system in 1777, known as a catoptric system. This rudimentary system effectively collimated the emitted light into a concentrated beam, thereby greatly increasing the light's visibility. The ability to focus the light led to the first revolving lighthouse beams, where the light would appear to the mariners as a series of intermittent flashes. It also became possible to transmit complex signals using the light flashes.
French physicist and engineer Augustin-Jean Fresnel developed the multi-part Fresnel lens for use in lighthouses. His design allowed for the construction of lenses of large aperture and short focal length, without the mass and volume of material that would be required by a lens of conventional design. A Fresnel lens can be made much thinner than a comparable conventional lens, in some cases taking the form of a flat sheet. A Fresnel lens can also capture more oblique light from a light source, thus allowing the light from a lighthouse equipped with one to be visible over greater distances.
The first Fresnel lens was used in 1823 in the Cordouan lighthouse at the mouth of the Gironde estuary; its light could be seen from more than 20 miles (32 km) out. Fresnel's invention increased the luminosity of the lighthouse lamp by a factor of four and his system is still in common use.
Modern lighthouses
The introduction of electrification and automatic lamp changers began to make lighthouse keepers obsolete. For many years, lighthouses still had keepers, partly because lighthouse keepers could serve as a rescue service if necessary. Improvements in maritime navigation and safety, such as the Global Positioning System (GPS), led to the phasing out of non-automated lighthouses across the world. Although several closed due to safety concerns, Canada still maintains 49 staffed lighthouses, split roughly evenly between its east and west coasts.
The remaining modern lighthouses are usually illuminated by a single stationary flashing light powered by solar-charged batteries and mounted on a steel skeleton tower. Where the power requirement is too great for solar power alone, cycle charging of the battery by a Diesel generator is provided. The generator only comes into use when the battery needs charging, saving fuel and increasing periods between maintenance.
Famous lighthouse builders
John Smeaton is noteworthy for having designed the third and most famous Eddystone Lighthouse, but some builders are well known for their work in building multiple lighthouses. The Stevenson family (Robert, Alan, David, Thomas, David Alan, and Charles) made lighthouse building a three-generation profession in Scotland.
Richard Henry Brunton designed and built 26 Japanese lighthouses in Meiji Era Japan, which became known as Brunton's "children". Blind Irishman Alexander Mitchell invented and built a number of screw-pile lighthouses. Englishman James Douglass was knighted for his work on the fourth Eddystone Lighthouse.
United States Army Corps of Engineers Lieutenant George Meade built numerous lighthouses along the Atlantic and Gulf coasts before gaining wider fame as the winning general at the Battle of Gettysburg. Colonel Orlando M. Poe, engineer to General William Tecumseh Sherman in the siege of Atlanta, designed and built some of the most exotic lighthouses in the most difficult locations on the U.S. Great Lakes.
French merchant navy officer Marius Michel Pasha built almost a hundred lighthouses along the coasts of the Ottoman Empire in a period of twenty years after the Crimean War (1853–1856).
Technology
In a lighthouse, the source of light is called the "lamp" (whether electric or fuelled by oil) and the light is concentrated, if needed, by the "lens" or "optic". Power sources for lighthouses in the 20th–21st centuries vary.
Power
Lighthouses were originally lit by open fires and later candles; the Argand hollow wick lamp and parabolic reflector were introduced in the late 18th century.
Whale oil was also used with wicks as the source of light. Kerosene became popular in the 1870s and electricity and acetylene gas derived on-site from calcium carbide began replacing kerosene around the turn of the 20th century. Carbide was promoted by the Dalén light, which automatically lit the lamp at nightfall and extinguished it at dawn.
In the second half of the 20th century, many remote lighthouses in Russia (then the Soviet Union) were powered by radioisotope thermoelectric generators (RTGs). These had the advantage of providing power day or night and did not need refuelling or maintenance. However, after the collapse of the Soviet government in the 1990s, most of the official records on the locations and condition of these lighthouses were reportedly lost. Over time, the condition of RTGs in Russia degraded; many of them fell victim to vandalism and scrap metal thieves, who may not have been aware of the dangerous radioactive contents.
Energy-efficient LED lights can be powered by solar panels, with batteries instead of a Diesel generator for backup.
Light source
Many Fresnel lens installations have been replaced by rotating aerobeacons, which require less maintenance.
In modern automated lighthouses, the system of rotating lenses is often replaced by a high intensity light that emits brief omnidirectional flashes, concentrating the light in time rather than direction. These lights are similar to obstruction lights used to warn aircraft of tall structures. Later innovations were "Vega Lights", and experiments with light-emitting diode (LED) panels.
LED lights, which use less energy and are easier to maintain, had come into widespread use by 2020. In the United Kingdom and Ireland about a third of lighthouses had been converted from filament light sources to use LEDs, and conversion continued with about three per year. The light sources are designed to replicate the colour and character of the traditional light as closely as possible. The change is often not noticed by people in the region, but sometimes a proposed change leads to calls to preserve the traditional light, including in some cases a rotating beam. A typical LED system designed to fit into the traditional 19th century Fresnel lens enclosure was developed by Trinity House and two other lighthouse authorities and costs about €20,000, depending on configuration, according to a supplier; it has large fins to dissipate heat. Lifetime of the LED light source is 50,000 to 100,000 hours, compared to about 1,000 hours for a filament source.
Laser light
Experimental installations of laser lights, either at high power to provide a "line of light" in the sky or, utilising low power, aimed towards mariners have identified problems of increased complexity in installation and maintenance, and high power requirements. The first practical installation, in 1971 at Point Danger lighthouse, Queensland, was replaced by a conventional light after four years, because the beam was too narrow to be seen easily.
Light characteristics
In any of these designs an observer, rather than seeing a continuous weak light, sees a brighter light during short time intervals. These instants of bright light are arranged to create a light characteristic or pattern specific to a lighthouse. For example, the intervals between flashes of the Scheveningen Lighthouse are alternately 2.5 and 7.5 seconds. Some lights have sectors of a particular color (usually formed by colored panes in the lantern) to distinguish safe water areas from dangerous shoals. Modern lighthouses often have unique reflectors or racon transponders, so the radar signature of the light is also unique.
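A flash pattern like this can be described compactly as a repeating list of gaps between flashes. The sketch below is illustrative only; the 2.5 s and 7.5 s intervals come from the text, but the function itself is hypothetical:

```python
def flash_times(intervals, cycles=3):
    """Flash instants for a light whose gaps between flashes repeat cyclically.

    `intervals` is the repeating list of gaps between successive flashes;
    with gaps alternating 2.5 s and 7.5 s (a Scheveningen-like
    characteristic), the pattern repeats every 10 seconds.
    """
    t, times = 0.0, []
    for _ in range(cycles):
        for gap in intervals:
            times.append(t)
            t += gap
    return times

print(flash_times([2.5, 7.5]))  # [0.0, 2.5, 10.0, 12.5, 20.0, 22.5]
```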
Lens
Before modern strobe lights, lenses were used to concentrate the light from a continuous source. Vertical light rays of the lamp are redirected into a horizontal plane, and horizontally the light is focused into one or a few directions at a time, with the light beam swept around. As a result, in addition to seeing the side of the light beam, the light is directly visible from greater distances, and with an identifying light characteristic.
This concentration of light is accomplished with a rotating lens assembly. In early lighthouses, the light source was a kerosene lamp or, earlier, an animal or vegetable oil Argand lamp, and the lenses rotated by a weight driven clockwork assembly wound by lighthouse keepers, sometimes as often as every two hours. The lens assembly sometimes floated in liquid mercury to reduce friction. In more modern lighthouses, electric lights and motor drives were used, generally powered by diesel electric generators. These also supplied electricity for the lighthouse keepers.
Efficiently concentrating the light from a large omnidirectional light source requires a very large diameter lens. This would require a very thick and heavy lens if a conventional lens were used. The Fresnel lens focused 85% of a lamp's light versus the 20% focused with the parabolic reflectors of the time. Its design enabled construction of lenses of large size and short focal length without the weight and volume of material in conventional lens designs.
Fresnel lighthouse lenses are ranked by order, a measure of refracting power, with a first order lens being the largest, most powerful and expensive; and a sixth order lens being the smallest. The order is based on the focal length of the lens. A first order lens has the longest focal length, with the sixth being the shortest. Coastal lighthouses generally use first, second, or third order lenses, while harbor lights and beacons use fourth, fifth, or sixth order lenses.
Some lighthouses, such as those at Cape Race, Newfoundland, and Makapuu Point, Hawaii, used a more powerful hyperradiant Fresnel lens manufactured by the firm of Chance Brothers.
Building
Components
While lighthouse buildings differ depending on the location and purpose, they tend to have common components.
A light station comprises the lighthouse tower and all outbuildings, such as the keeper's living quarters, fuel house, boathouse, and fog-signaling building. The lighthouse itself consists of a tower structure supporting the lantern room where the light operates.
The lantern room is the glassed-in housing at the top of a lighthouse tower containing the lamp and lens. Its glass storm panes are supported by metal muntins (glazing bars) running vertically or diagonally. At the top of the lantern room is a stormproof ventilator designed to remove the smoke of the lamps and the heat that builds in the glass enclosure. A lightning rod and grounding system connected to the metal cupola roof provides a safe conduit for any lightning strikes.
Immediately beneath the lantern room is usually a Watch Room or Service Room where fuel and other supplies were kept and where the keeper prepared the lanterns for the night and often stood watch. The clockworks (for rotating the lenses) were also located there. On a lighthouse tower, an open platform called the gallery is often located outside the watch room (called the Main Gallery) or Lantern Room (Lantern Gallery). This was mainly used for cleaning the outside of the windows of the Lantern Room.
Lighthouses near to each other that are similar in shape are often painted in a unique pattern so they can easily be recognized during daylight, a marking known as a daymark. The black and white barber pole spiral pattern of Cape Hatteras Lighthouse is one example. Race Rocks Light in western Canada is painted in horizontal black and white bands to stand out against the horizon.
Design
For effectiveness, the lamp must be high enough to be seen before the danger is reached by a mariner. The minimum height is calculated by trigonometry (see Distance to the horizon) as H = (D/1.17)^2, where H is the height above water in feet, and D is the distance from the lighthouse to the horizon in nautical miles, the lighthouse range.
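Rearranging the same relation gives the horizon distance for a given height, D = 1.17 √H. A minimal Python sketch of both directions (the example heights and ranges are arbitrary):

```python
import math

def horizon_range_nm(height_ft: float) -> float:
    """Distance to the horizon (nautical miles) from a height in feet,
    using the approximation D = 1.17 * sqrt(H)."""
    return 1.17 * math.sqrt(height_ft)

def min_height_ft(range_nm: float) -> float:
    """Minimum lamp height (feet) for a desired range: H = (D / 1.17) ** 2."""
    return (range_nm / 1.17) ** 2

print(f"{horizon_range_nm(100):.1f} nm from a 100 ft lamp")    # ~11.7 nm
print(f"{min_height_ft(15):.0f} ft needed for a 15 nm range")  # ~164 ft
```

Note that the observer's own height of eye adds its own horizon distance, so a light can in practice be sighted somewhat beyond the lamp's geometric range.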
Where dangerous shoals are located far off a flat sandy beach, the prototypical tall masonry coastal lighthouse is constructed to assist the navigator making a landfall after an ocean crossing. Often these are cylindrical to reduce the effect of wind on a tall structure, such as Cape May Light. Smaller versions of this design are often used as harbor lights to mark the entrance into a harbor, such as New London Harbor Light.
Where a tall cliff exists, a smaller structure may be placed on top such as at Horton Point Light. Sometimes, such a location can be too high, for example along the west coast of the United States, where frequent low clouds can obscure the light. In these cases, lighthouses are placed below the clifftop to ensure that they can still be seen at the surface during periods of fog or low clouds, as at Point Reyes Lighthouse. Another example is in San Diego, California: the Old Point Loma lighthouse was too high up and often obscured by fog, so it was replaced in 1891 with a lower lighthouse, New Point Loma lighthouse.
As technology advanced, prefabricated skeletal iron or steel structures tended to be used for lighthouses constructed in the 20th century. These often have a narrow cylindrical core surrounded by an open lattice work bracing, such as Finns Point Range Light.
Sometimes a lighthouse needs to be constructed in the water itself. Wave-washed lighthouses are masonry structures constructed to withstand water impact, such as Eddystone Lighthouse in Britain and the St. George Reef Light of California. In shallower bays, Screw-pile lighthouse ironwork structures are screwed into the seabed and a low wooden structure is placed above the open framework, such as Thomas Point Shoal Lighthouse. As screw piles can be disrupted by ice, steel caisson lighthouses such as Orient Point Light are used in cold climates. Orient Long Beach Bar Light (Bug Light) is a blend of a screw pile light that was converted to a caisson light because of the threat of ice damage. Skeletal iron towers with screw-pile foundations were built on the Florida Reef along the Florida Keys, beginning with the Carysfort Reef Light in 1852.
In waters too deep for a conventional structure, a lightship might be used instead of a lighthouse, such as the former lightship Columbia. Most of these have now been replaced by fixed light platforms (such as Ambrose Light) similar to those used for offshore oil exploration.
Range lights
Aligning two fixed points on land provides a navigator with a line of position called a range in North America and a transit in Britain. Ranges can be used to precisely align a vessel within a narrow channel such as a river. When the landmarks of a range are illuminated with a set of fixed lighthouses, nighttime navigation is possible.
Such paired lighthouses are called range lights in North America and leading lights in the United Kingdom. The closer light is referred to as the beacon or front range; the further light is called the rear range. The rear range light is almost always taller than the front.
When a vessel is on the correct course, the two lights align vertically, but when the observer is out of position, the difference in alignment indicates the direction of travel to correct the course.
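The geometry can be sketched in a few lines: the apparent horizontal angle between the two lights is zero on the range line, and its sign tells the helmsman which way the vessel has drifted. All distances below are invented for illustration:

```python
import math

def alignment_offset_deg(cross_track_m: float, front_dist_m: float,
                         light_separation_m: float) -> float:
    """Apparent horizontal angle between front and rear range lights.

    Zero when the vessel is exactly on the range line; positive or
    negative depending on which side of the line the vessel has drifted.
    `front_dist_m` is the distance to the front light and
    `light_separation_m` the distance between the lights, both measured
    along the range line.
    """
    bearing_front = math.atan2(cross_track_m, front_dist_m)
    bearing_rear = math.atan2(cross_track_m, front_dist_m + light_separation_m)
    return math.degrees(bearing_front - bearing_rear)

# 20 m off the channel centreline, 2 km from the front light, lights 500 m apart:
print(f"{alignment_offset_deg(20, 2000, 500):.3f} deg")  # ~0.115 deg
```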
Location
There are two types of lighthouses: ones that are located on land, and ones that are offshore.
Offshore lighthouses are lighthouses that are not close to land. They are built for a number of reasons, for example where a shoal, reef or submerged island lies several miles from land.
The current Cordouan Lighthouse, on a small islet some distance from the shore, was completed in 1611 on the site of a previous lighthouse that can be traced back to the 880s, and is the oldest surviving lighthouse in France. It is connected to the mainland by a causeway. The oldest surviving oceanic offshore lighthouse is Bell Rock Lighthouse in the North Sea, off the coast of Scotland.
Maintenance
Asia and Oceania
In Australia, lighthouses are operated by the Australian Maritime Safety Authority.
In India, lighthouses are maintained by the Directorate General of Lighthouses and Lightships, an office of the Ministry of Ports, Shipping and Waterways.
Europe
The former Soviet government built a number of automated lighthouses powered by radioisotope thermoelectric generators in remote locations in northern Russia. They operated for long periods without external support with great reliability. However, numerous installations deteriorated, were stolen, or vandalized. Some cannot be found due to poor record-keeping.
The United Kingdom and the Republic of Ireland together have three bodies: lighthouses around the coasts of England and Wales are looked after by Trinity House, those around Scotland and the Isle of Man by the Northern Lighthouse Board and those around Ireland by the Commissioners of Irish Lights.
North America
In Canada, lighthouses are managed by the Canadian Coast Guard.
In the United States, lighthouses are maintained by the United States Coast Guard, into which the United States Lighthouse Service was merged in 1939.
Preservation
As lighthouses became less essential to navigation, many of their historic structures faced demolition or neglect. In the United States, the National Historic Lighthouse Preservation Act of 2000 provides for the transfer of lighthouse structures to local governments and private non-profit groups, while the USCG continues to maintain the lamps and lenses. In Canada, the Nova Scotia Lighthouse Preservation Society won heritage status for Sambro Island Lighthouse, and sponsored the Heritage Lighthouse Protection Act to change Canadian federal laws to protect lighthouses.
Many groups formed to restore and save lighthouses around the world, including the World Lighthouse Society and the United States Lighthouse Society, as well as the Amateur Radio Lighthouse Society, which sends amateur radio operators to publicize the preservation of remote lighthouses throughout the world.
| Technology | Coastal infrastructure | null |
17744 | https://en.wikipedia.org/wiki/Lanthanum | Lanthanum | Lanthanum is a chemical element with the symbol La and the atomic number 57. It is a soft, ductile, silvery-white metal that tarnishes slowly when exposed to air. It is the eponym of the lanthanide series, a group of 15 similar elements between lanthanum and lutetium in the periodic table, of which lanthanum is the first and the prototype. Lanthanum is traditionally counted among the rare earth elements. Like most other rare earth elements, its usual oxidation state is +3, although some compounds are known with an oxidation state of +2. Lanthanum has no biological role in humans but is used by some bacteria. It is not particularly toxic to humans but does show some antimicrobial activity.
Lanthanum usually occurs together with cerium and the other rare earth elements. Lanthanum was first found by the Swedish chemist Carl Gustaf Mosander in 1839 as an impurity in cerium nitrate – hence the name lanthanum, from the ancient Greek λανθάνειν (lanthanein), meaning 'to lie hidden'. Although it is classified as a rare earth element, lanthanum is the 28th most abundant element in the Earth's crust, almost three times as abundant as lead. In minerals such as monazite and bastnäsite, lanthanum composes about a quarter of the lanthanide content. It is extracted from those minerals by a process of such complexity that pure lanthanum metal was not isolated until 1923.
Lanthanum compounds have numerous applications including catalysts, additives in glass, carbon arc lamps for studio lights and projectors, ignition elements in lighters and torches, electron cathodes, scintillators, and gas tungsten arc welding electrodes. Lanthanum carbonate is used as a phosphate binder to treat high levels of phosphate in the blood accompanied by kidney failure.
Characteristics
Physical
Lanthanum is the first element and prototype of the lanthanide series. In the periodic table, it appears to the right of the alkaline earth metal barium and to the left of the lanthanide cerium. Lanthanum is generally considered the first of the f-block elements by authors writing on the subject. The 57 electrons of a lanthanum atom are arranged in the configuration [Xe]5d¹6s², with three valence electrons outside the noble gas core. In chemical reactions, lanthanum almost always gives up these three valence electrons from the 5d and 6s subshells to form the +3 oxidation state, achieving the stable configuration of the preceding noble gas xenon. Some lanthanum(II) compounds are also known, but they are usually much less stable. Lanthanum monoxide (LaO) produces strong absorption bands in some stellar spectra.
Among the lanthanides, lanthanum is exceptional as it has no 4f electrons as a single gas-phase atom. Thus it is only very weakly paramagnetic, unlike the strongly paramagnetic later lanthanides (with the exceptions of the last two, ytterbium and lutetium, where the 4f shell is completely full). However, the 4f shell of lanthanum can become partially occupied in chemical environments and participate in chemical bonding. For example, the melting points of the trivalent lanthanides (all but europium and ytterbium) are related to the extent of hybridisation of the 6s, 5d, and 4f electrons (lowering with increasing 4f involvement), and lanthanum has the second-lowest melting point among them: 920 °C. (Europium and ytterbium have lower melting points because they delocalise about two electrons per atom rather than three.) This chemical availability of f orbitals justifies lanthanum's placement in the f-block despite its anomalous ground-state configuration (which is merely the result of strong interelectronic repulsion making it less profitable to occupy the 4f shell, as it is small and close to the core electrons).
The lanthanides become harder as the series is traversed: as expected, lanthanum is a soft metal. Lanthanum has a relatively high resistivity of 615 nΩ·m at room temperature; in comparison, the value for the good conductor aluminium is only 26.50 nΩ·m. Lanthanum is the least volatile of the lanthanides. Like most of the lanthanides, lanthanum has a hexagonal crystal structure at room temperature (α-La). At 310 °C, lanthanum changes to a face-centered cubic structure (β-La), and at 865 °C, it changes to a body-centered cubic structure (γ-La).
Chemical
As expected from periodic trends, lanthanum has the largest atomic radius of the lanthanides. Hence, it is the most reactive among them, tarnishing quite rapidly in air, turning completely dark after several hours, and can readily burn to form lanthanum(III) oxide, La₂O₃, which is almost as basic as calcium oxide. A centimeter-sized sample of lanthanum will corrode completely in a year as its oxide spalls off like iron rust, instead of forming a protective oxide coating like aluminium, scandium, yttrium, and lutetium. Lanthanum reacts with the halogens at room temperature to form the trihalides, and upon warming will form binary compounds with the nonmetals nitrogen, carbon, sulfur, phosphorus, boron, selenium, silicon and arsenic. Lanthanum reacts slowly with water to form lanthanum(III) hydroxide, La(OH)₃. In dilute sulfuric acid, lanthanum readily forms the aquated tripositive ion [La(H₂O)₉]³⁺: this is colorless in aqueous solution since La³⁺ has no d or f electrons. Lanthanum is the strongest and hardest base among the rare earth elements, which is again expected from its being the largest of them.
Some lanthanum(II) compounds are also known, but they are much less stable. For this reason, the oxidation number is always specified when formally naming lanthanum compounds.
Isotopes
Naturally occurring lanthanum is made up of two isotopes, the stable 139La and the primordial long-lived radioisotope 138La. 139La is by far the most abundant, making up 99.910% of natural lanthanum: it is produced in the s-process (slow neutron capture, which occurs in low- to medium-mass stars) and the r-process (rapid neutron capture, which occurs in core-collapse supernovae). It is the only stable isotope of lanthanum. The very rare isotope 138La is one of the few primordial odd–odd nuclei, with a long half-life of roughly 10¹¹ years. It is one of the proton-rich p-nuclei which cannot be produced in the s- or r-processes. 138La, along with the even rarer 180mTa, is produced in the ν-process, where neutrinos interact with stable nuclei. All other lanthanum isotopes are synthetic: with the exception of 137La, with a half-life of about 60,000 years, all of them have half-lives less than two days, and most have half-lives less than a minute. The isotopes 139La and 140La occur as fission products of uranium.
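Because that half-life is so much longer than the age of the Earth, almost all of the primordial 138La stock survives today. A minimal Python sketch of that calculation, assuming the approximate ~10¹¹-year half-life quoted above and the standard 4.54-billion-year age of the Earth:

```python
# Sketch: surviving fraction of primordial 138La, assuming a half-life of
# ~1e11 years (approximate value from the text) and Earth's age of 4.54e9 years.
import math

HALF_LIFE_YEARS = 1.0e11   # approximate 138La half-life
EARTH_AGE_YEARS = 4.54e9

decay_constant = math.log(2) / HALF_LIFE_YEARS
surviving_fraction = math.exp(-decay_constant * EARTH_AGE_YEARS)
print(f"fraction of primordial 138La remaining: {surviving_fraction:.3f}")  # ~0.969
```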
Compounds
Lanthanum oxide is a white solid that can be prepared by direct reaction of its constituent elements. Due to the large size of the La3+ ion, La2O3 adopts a hexagonal 7-coordinate structure that changes to the 6-coordinate structure of scandium oxide (Sc2O3) and yttrium oxide (Y2O3) at high temperature. When it reacts with water, lanthanum hydroxide is formed: a lot of heat is evolved in the reaction and a hissing sound is heard. Lanthanum hydroxide will react with atmospheric carbon dioxide to form the basic carbonate.
Lanthanum fluoride is insoluble in water and can be used as a qualitative test for the presence of La3+. The heavier halides are all very soluble deliquescent compounds. The anhydrous halides are produced by direct reaction of their elements, as heating the hydrates causes hydrolysis: for example, heating hydrated LaCl3 produces LaOCl.
Lanthanum reacts exothermically with hydrogen to produce the dihydride LaH2, a black, pyrophoric, brittle, conducting compound with the calcium fluoride structure. This is a non-stoichiometric compound, and further absorption of hydrogen is possible, with a concomitant loss of electrical conductivity, until the more salt-like LaH3 is reached. Like LaI2 and LaI, LaH2 is probably an electride compound.
Due to the large ionic radius and great electropositivity of La3+, there is not much covalent contribution to its bonding and hence it has a limited coordination chemistry, like yttrium and the other lanthanides. Lanthanum oxalate does not dissolve very much in alkali-metal oxalate solutions, and decomposes around 500 °C. Oxygen is the most common donor atom in lanthanum complexes, which are mostly ionic and often have high coordination numbers over 6: 8 is the most characteristic, forming square antiprismatic and dodecadeltahedral structures. These high-coordinate species, reaching up to coordination number 12 with the use of chelating ligands, often have a low degree of symmetry because of stereochemical factors.
Lanthanum chemistry tends not to involve π bonding due to the electron configuration of the element: thus its organometallic chemistry is quite limited. The best characterized organolanthanum compounds are the cyclopentadienyl complex La(C5H5)3, which is produced by reacting anhydrous LaCl3 with NaC5H5 in tetrahydrofuran, and its methyl-substituted derivatives.
History
In 1751, the Swedish mineralogist Axel Fredrik Cronstedt discovered a heavy mineral from the mine at Bastnäs, later named cerite. Thirty years later, the fifteen-year-old Wilhelm Hisinger, from the family owning the mine, sent a sample of it to Carl Wilhelm Scheele, who did not find any new elements within it. In 1803, after Hisinger had become an ironmaster, he returned to the mineral with Jöns Jacob Berzelius and isolated a new oxide, which they named ceria after the dwarf planet Ceres, which had been discovered two years earlier. Ceria was simultaneously and independently isolated in Germany by Martin Heinrich Klaproth. Between 1839 and 1843, ceria was shown to be a mixture of oxides by the Swedish surgeon and chemist Carl Gustaf Mosander, who lived in the same house as Berzelius and studied under him: he separated out two other oxides, which he named lanthana and didymia. He partially decomposed a sample of cerium nitrate by roasting it in air and then treating the resulting oxide with dilute nitric acid. That same year, Axel Erdmann, a student also at the Karolinska Institute, discovered lanthanum in a new mineral from Låven island located in a Norwegian fjord.
Mosander later explained his delay in publishing, saying that he had extracted a second element from cerium, which he called didymium. Although he did not realise it, didymium too was a mixture, and in 1885 it was separated into praseodymium and neodymium.
Since lanthanum's properties differed only slightly from those of cerium, and it occurred along with cerium in its salts, Mosander named it from the Ancient Greek λανθάνειν [lanthanein] (lit. to lie hidden). Relatively pure lanthanum metal was first isolated in 1923.
Occurrence and production
Lanthanum makes up 39 mg/kg of the Earth's crust, behind neodymium at 41.5 mg/kg and cerium at 66.5 mg/kg. Despite being among the so-called "rare earth metals", lanthanum is thus not rare at all; it is historically so named because it is rarer than "common earths" such as lime and magnesia, and at the time it was recognized, only a few deposits were known. Lanthanum can also be considered a 'rare earth' metal in a practical sense, because the process of mining and extracting it is difficult, time-consuming, and expensive. Lanthanum is rarely the dominant lanthanide found in the rare earth minerals, and in their chemical formulae it is usually preceded by cerium. Rare examples of La-dominant minerals are monazite-(La) and lanthanite-(La).
The La3+ ion is similarly sized to the early lanthanides of the cerium group (those up to samarium and europium) that immediately follow in the periodic table, and hence it tends to occur along with them in phosphate, silicate and carbonate minerals, such as monazite (MPO4) and bastnäsite (MCO3F), where M refers to all the rare earth metals except scandium and the radioactive promethium (mostly Ce, La, and Y). Bastnäsite is usually lacking in thorium and the heavy lanthanides, and the purification of the light lanthanides from it is less involved. The ore, after being crushed and ground, is first treated with hot concentrated sulfuric acid, evolving carbon dioxide, hydrogen fluoride, and silicon tetrafluoride: the product is then dried and leached with water, leaving the early lanthanide ions, including lanthanum, in solution.
The procedure for monazite, which usually contains all the rare earths as well as thorium, is more involved. Monazite, because of its magnetic properties, can be separated by repeated electromagnetic separation. After separation, it is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert the rare earths to their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid, which excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Lanthanum is separated as a double salt with ammonium nitrate by crystallization. This salt is relatively less soluble than other rare earth double salts and therefore stays in the residue. Care must be taken when handling some of the residues, as they contain 228Ra, the daughter of 232Th, which is a strong gamma emitter. Lanthanum is relatively easy to extract as it has only one neighbouring lanthanide, cerium, which can be removed by making use of its ability to be oxidised to the +4 state; thereafter, lanthanum may be separated out by the historical method of fractional crystallization of its ammonium nitrate double salt, or by ion-exchange techniques when higher purity is desired.
Lanthanum metal is obtained from its oxide by heating it with ammonium chloride or fluoride and hydrofluoric acid at 300–400 °C to produce the chloride or fluoride:
La2O3 + 6 NH4Cl → 2 LaCl3 + 6 NH3 + 3 H2O
This is followed by reduction with alkali or alkaline earth metals in vacuum or argon atmosphere:
LaCl3 + 3 Li → La + 3 LiCl
Also, pure lanthanum can be produced by electrolysis of a molten mixture of anhydrous LaCl3 and NaCl or KCl at elevated temperatures.
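As a rough illustration of the reduction step above, here is a minimal Python sketch of its stoichiometry (standard molar masses; the helper function name is ours, purely illustrative):

```python
# Sketch: how much lithium the reduction LaCl3 + 3 Li -> La + 3 LiCl consumes.
M_LA, M_CL, M_LI = 138.91, 35.45, 6.94   # molar masses in g/mol
M_LACL3 = M_LA + 3 * M_CL                # ~245.26 g/mol

def lithium_needed_g(lacl3_g: float) -> float:
    """Mass of lithium (g) needed to reduce a given mass of LaCl3."""
    moles_lacl3 = lacl3_g / M_LACL3
    return 3 * moles_lacl3 * M_LI        # 3 mol Li per mol LaCl3

print(f"{lithium_needed_g(1000):.1f} g of Li per kg of LaCl3")  # ~84.9 g
```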
Applications
The first historical application of lanthanum was in gas lantern mantles. Carl Auer von Welsbach used a mixture of lanthanum oxide and zirconium oxide, which he called Actinophor and patented in 1886. The original mantles gave a green-tinted light and were not very successful, and his first company, which established a factory in Atzgersdorf in 1887, failed in 1889.
Modern uses of lanthanum include:
One material used for the anode of nickel–metal hydride batteries is a lanthanum–nickel intermetallic of the AB5 type (based on LaNi5). Due to the high cost of extracting the other lanthanides, a mischmetal with more than 50% lanthanum is used instead of pure lanthanum. NiMH batteries can be found in many models of the Toyota Prius sold in the US. These large nickel–metal hydride batteries require massive quantities of lanthanum for their production: the 2008 Toyota Prius NiMH battery requires 10 to 15 kg (22 to 33 lb) of lanthanum. As engineers push the technology to increase fuel efficiency, twice that amount of lanthanum could be required per vehicle.
Hydrogen sponge alloys can contain lanthanum. These alloys are capable of storing up to 400 times their own volume of hydrogen gas in a reversible adsorption process. Heat energy is released every time they absorb hydrogen, so these alloys have possibilities in energy conservation systems.
Mischmetal, a pyrophoric alloy used in lighter flints, contains 25% to 45% lanthanum.
Lanthanum oxide and the boride LaB6 are used in electronic vacuum tubes as hot cathode materials with strong emissivity of electrons. Crystals of LaB6 are used in high-brightness, extended-life, thermionic electron emission sources for electron microscopes and Hall-effect thrusters.
Lanthanum trifluoride (LaF3) is an essential component of a heavy fluoride glass named ZBLAN. This glass has superior transmittance in the infrared range and is therefore used for fiber-optical communication systems.
Cerium-doped lanthanum bromide and lanthanum chloride are recently developed inorganic scintillators, combining high light yield, excellent energy resolution, and fast response. Their high yield translates into superior energy resolution; moreover, the light output is very stable and quite high over a very wide range of temperatures, making them particularly attractive for high-temperature applications. These scintillators are already widely used commercially in detectors of neutrons or gamma rays.
Carbon arc lamps use a mixture of rare earth elements to improve the light quality. This application, especially by the motion picture industry for studio lighting and projection, consumed about 25% of the rare-earth compounds produced until the phase-out of carbon arc lamps.
Lanthanum(III) oxide (La2O3) improves the alkali resistance of glass and is used in making special optical glasses, such as infrared-absorbing glass, as well as camera and telescope lenses, because of the high refractive index and low dispersion of rare-earth glasses. Lanthanum oxide is also used as a grain-growth additive during the liquid-phase sintering of silicon nitride and zirconium diboride.
Small amounts of lanthanum added to steel improve its malleability, resistance to impact, and ductility, whereas addition of lanthanum to molybdenum decreases its hardness and sensitivity to temperature variations.
Small amounts of lanthanum are present in many pool products to remove the phosphates that feed algae.
Lanthanum oxide is added to tungsten in gas tungsten arc welding electrodes, as a substitute for radioactive thorium.
Various compounds of lanthanum and other rare-earth elements (oxides, chlorides, triflates, etc.) are components of various catalysts, such as petroleum cracking catalysts.
Lanthanum–barium radiometric dating is used to estimate the age of rocks and ores, though the technique has limited popularity.
Lanthanum carbonate was approved as a medication (Fosrenol, Shire Pharmaceuticals) to absorb excess phosphate in cases of hyperphosphatemia seen in end-stage kidney disease.
Lanthanum fluoride is used in phosphor lamp coatings. Mixed with europium fluoride, it is also applied in the crystal membrane of fluoride ion-selective electrodes.
Like horseradish peroxidase, lanthanum is used as an electron-dense tracer in molecular biology.
Lanthanum-modified bentonite (or phoslock) is used to remove phosphates from water in lake treatments.
Lanthanum telluride is being considered for use in radioisotope power systems due to its significant thermoelectric conversion capabilities. The transmuted elements and isotopes in the segment will not react with the material itself, thus presenting no harm to the safety of the power system. Though iodine, which can be generated during transmutation, is suspected to react with the segment, the quantity of iodine is small enough to pose no threat to the power system.
Biological role
Lanthanum has no known biological role in humans. The element is very poorly absorbed after oral administration and when injected its elimination is very slow. Lanthanum carbonate (Fosrenol) was approved as a phosphate binder to absorb excess phosphate in cases of end stage renal disease.
While lanthanum has pharmacological effects on several receptors and ion channels, its specificity for the GABA receptor is unique among trivalent cations. Lanthanum acts at the same modulatory site on the GABA receptor as zinc, a known negative allosteric modulator. The lanthanum cation is a positive allosteric modulator at native and recombinant GABA receptors, increasing open-channel time and decreasing desensitization in a subunit-configuration-dependent manner.
Lanthanum is a cofactor for the methanol dehydrogenase of the methanotrophic bacterium Methylacidiphilum fumariolicum SolV, although the great chemical similarity of the lanthanides means that it may be substituted with cerium, praseodymium, or neodymium without ill effects, and with the smaller samarium, europium, or gadolinium giving no side effects other than slower growth.
Precautions
Lanthanum has a low to moderate level of toxicity and should be handled with care. The injection of lanthanum solutions produces hyperglycemia, low blood pressure, degeneration of the spleen, and hepatic alterations. Its application in carbon arc lights exposed people to rare earth element oxides and fluorides, which sometimes led to pneumoconiosis. As the La3+ ion is similar in size to the Ca2+ ion, it is sometimes used as an easily traced substitute for calcium in medical studies. Lanthanum, like the other lanthanides, is known to affect human metabolism, lowering cholesterol levels, blood pressure, appetite, and risk of blood coagulation. When injected into the brain, it acts as a painkiller, similarly to morphine and other opiates, though the mechanism behind this is still unknown. Lanthanum meant for ingestion, typically as a chewable tablet or oral powder, can interfere with gastrointestinal (GI) imaging by creating opacities throughout the GI tract; if chewable tablets are swallowed whole, they will dissolve but present initially as coin-shaped opacities in the stomach, potentially confused with ingested metal objects such as coins or batteries.
Prices
The price for a metric ton (1000 kg) of 99% lanthanum oxide (FOB China, in USD per tonne) is given by the Institute of Rare Earths Elements and Strategic Metals (IREESM) as below $2,000 for most of the period from early 2001 to September 2010 (briefly reaching $10,000 in 2008); it rose steeply to $140,000 in mid-2011 and fell back just as rapidly to $38,000 by early 2012. The average prices for the six months from April to September 2022 are given by the IREESM as 1308 EUR/mt for lanthanum oxide (99.9% min, FOB China) and 3706 EUR/mt for lanthanum metal (99% min, FOB China).
| Physical sciences | Chemical elements_2 | null |
17745 | https://en.wikipedia.org/wiki/Lutetium | Lutetium | Lutetium is a chemical element; it has symbol Lu and atomic number 71. It is a silvery white metal, which resists corrosion in dry air, but not in moist air. Lutetium is the last element in the lanthanide series, and it is traditionally counted among the rare earth elements; it can also be classified as the first element of the 6th-period transition metals.
Lutetium was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. All of these researchers found lutetium as an impurity in the mineral ytterbia, which was previously thought to consist entirely of ytterbium and oxygen. The dispute on the priority of the discovery occurred shortly after, with Urbain and Welsbach accusing each other of publishing results influenced by the published research of the other; the naming honor went to Urbain, as he had published his results earlier. He chose the name lutecium for the new element, but in 1949 the spelling was changed to lutetium. In 1909, the priority was finally granted to Urbain and his names were adopted as official ones; however, the name cassiopeium (or later cassiopium) for element 71 proposed by Welsbach was used by many German scientists until the 1950s.
Lutetium is not a particularly abundant element, although it is significantly more common than silver in the Earth's crust. It has few specific uses. Lutetium-176 is a relatively abundant (2.5%) radioactive isotope with a half-life of about 38 billion years, used to determine the age of minerals and meteorites. Lutetium usually occurs in association with the element yttrium and is sometimes used in metal alloys and as a catalyst in various chemical reactions. 177Lu-DOTA-TATE is used for radionuclide therapy (see Nuclear medicine) on neuroendocrine tumours. Lutetium has the highest Brinell hardness of any lanthanide, at 890–1300 MPa.
Characteristics
Physical properties
A lutetium atom has 71 electrons, arranged in the configuration [Xe] 4f145d16s2. Lutetium is generally encountered in the 3+ oxidation state, having lost its two outermost 6s and the single 5d-electron. The lutetium atom is the smallest among the lanthanide atoms, due to the lanthanide contraction, and as a result lutetium has the highest density, melting point, and hardness of the lanthanides. As lutetium's 4f orbitals are highly stabilized only the 5d and 6s orbitals are involved in chemical reactions and bonding; thus it is characterized as a d-block rather than an f-block element, and on this basis some consider it not to be a lanthanide at all, but a transition metal like its lighter congeners scandium and yttrium.
Chemical properties and compounds
Lutetium's compounds almost always contain the element in the 3+ oxidation state. Aqueous solutions of most lutetium salts are colorless and form white crystalline solids upon drying, with the common exception of the iodide, which is brown. The soluble salts, such as the nitrate, sulfate, and acetate, form hydrates upon crystallization. The oxide, hydroxide, fluoride, carbonate, phosphate, and oxalate are insoluble in water.
Lutetium metal is slightly unstable in air at standard conditions, but it burns readily at 150 °C to form lutetium oxide. The resulting compound is known to absorb water and carbon dioxide, and it may be used to remove vapors of these compounds from closed atmospheres. Similar observations are made during reaction between lutetium and water (slow when cold and fast when hot); lutetium hydroxide is formed in the reaction. Lutetium metal is known to react with the four lightest halogens to form trihalides; except the fluoride they are soluble in water.
Lutetium dissolves readily in weak acids and dilute sulfuric acid to form solutions containing the colorless lutetium ions, which are coordinated by between seven and nine water molecules, the average being 8.2.
Oxidation states
Lutetium is usually found in the +3 oxidation state, like most other lanthanides. However, it can also occur in the 0, +1, and +2 states.
Isotopes
Lutetium occurs on the Earth in the form of two isotopes: lutetium-175 and lutetium-176. Of these two, only the former is stable, making the element monoisotopic. The latter, lutetium-176, decays via beta decay with a half-life of 3.76×10¹⁰ years; it makes up about 2.5% of natural lutetium.
To date, 40 synthetic radioisotopes of the element have been characterized, ranging in mass number from 149 to 190; the most stable such isotopes are lutetium-174 with a half-life of 3.31 years, and lutetium-173 with a half-life of 1.37 years. All of the remaining radioactive isotopes have half-lives that are less than 9 days, and the majority of these have half-lives that are less than half an hour. Isotopes lighter than the stable lutetium-175 decay via electron capture (to produce isotopes of ytterbium), with some alpha and positron emission; the heavier isotopes decay primarily via beta decay, producing hafnium isotopes.
The element also has 43 known nuclear isomers, with masses of 150, 151, 153–162, and 166–180 (not every mass number corresponds to only one isomer). The most stable of them are lutetium-177m, with a half-life of 160.4 days, and lutetium-174m, with a half-life of 142 days; these are longer than the half-lives of the ground states of all radioactive lutetium isotopes except lutetium-173, 174, and 176.
History
Lutetium, derived from the Latin Lutetia (Paris), was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. They found it as an impurity in ytterbia, which was thought by Swiss chemist Jean Charles Galissard de Marignac to consist entirely of ytterbium. The scientists proposed different names for the elements: Urbain chose neoytterbium and lutecium, whereas Welsbach chose aldebaranium and cassiopeium (after Aldebaran and Cassiopeia). Both of these articles accused the other man of publishing results based on those of the author.
The International Commission on Atomic Weights, which was then responsible for the attribution of new element names, settled the dispute in 1909 by granting priority to Urbain and adopting his names as official ones, based on the fact that the separation of lutetium from Marignac's ytterbium was first described by Urbain; after Urbain's names were recognized, neoytterbium was reverted to ytterbium. An obvious issue with this decision was that Urbain was a member of the International Commission on Atomic Weights. Until the 1950s, some German-speaking chemists called lutetium by Welsbach's name, cassiopeium; in 1949, the spelling of element 71 was changed to lutetium. The reason for this was that Welsbach's 1907 samples of lutetium had been pure, while Urbain's 1907 samples only contained traces of lutetium. This later misled Urbain into thinking that he had discovered element 72, which he named celtium, but which was actually very pure lutetium. The later discrediting of Urbain's work on element 72 led to a reappraisal of Welsbach's work on element 71, so that the element was renamed cassiopeium in German-speaking countries for some time. Charles James, who stayed out of the priority argument, worked on a much larger scale and possessed the largest supply of lutetium at the time. Pure lutetium metal was first produced in 1953.
Occurrence and production
Found with almost all other rare-earth metals but never by itself, lutetium is very difficult to separate from other elements. Its principal commercial source is as a by-product from the processing of the rare earth phosphate mineral monazite, which has concentrations of only 0.0001% of the element, not much higher than the abundance of lutetium in the Earth's crust of about 0.5 mg/kg. No lutetium-dominant minerals are currently known. The main mining areas are China, United States, Brazil, India, Sri Lanka, and Australia. The world production of lutetium (in the form of oxide) is about 10 tonnes per year. Pure lutetium metal is very difficult to prepare. It is one of the rarest and most expensive of the rare earth metals, with a price of about US$10,000 per kilogram, or about one-fourth that of gold.
Crushed minerals are treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Several rare earth metals, including lutetium, are separated as a double salt with ammonium nitrate by crystallization. Lutetium is separated by ion exchange. In this process, rare-earth ions are adsorbed onto a suitable ion-exchange resin by exchange with hydrogen, ammonium, or cupric ions present in the resin. Lutetium salts are then selectively washed out by a suitable complexing agent. Lutetium metal is then obtained by reduction of anhydrous LuCl3 or LuF3 by either an alkali metal or alkaline earth metal.
177Lu is produced by neutron activation of 176Lu, or indirectly by neutron activation of 176Yb followed by beta decay. Its 6.693-day half-life allows transport from the production reactor to the point of use without significant loss of activity.
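For a rough sense of what that half-life means for shipping, here is a minimal Python sketch (simple exponential decay; the transport times are illustrative examples, not figures from the text):

```python
# Sketch: fraction of 177Lu activity remaining after a given transport time,
# using the 6.693-day half-life quoted above.
HALF_LIFE_DAYS = 6.693

def activity_fraction(elapsed_days: float) -> float:
    """A(t)/A(0) for simple exponential decay."""
    return 0.5 ** (elapsed_days / HALF_LIFE_DAYS)

for days in (1, 2, 3):
    print(f"after {days} day(s): {activity_fraction(days):.1%} remaining")
# after 1 day: ~90.2%; after 2 days: ~81.3%; after 3 days: ~73.3%
```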
Applications
Small quantities of lutetium have many speciality uses.
Stable isotopes
Stable lutetium can be used as a catalyst in petroleum cracking in refineries and can also be used in alkylation, hydrogenation, and polymerization applications.
Lutetium aluminium garnet (Lu3Al5O12) has been proposed for use as a lens material in high refractive index immersion lithography. Additionally, a tiny amount of lutetium is added as a dopant to gadolinium gallium garnet, which was used in magnetic bubble memory devices. Cerium-doped lutetium oxyorthosilicate is currently the preferred compound for detectors in positron emission tomography (PET). Lutetium aluminium garnet (LuAG) is also used as a phosphor in light-emitting diode light bulbs.
Lutetium tantalate (LuTaO4) is the densest known stable white material (density 9.81 g/cm3) and therefore is an ideal host for X-ray phosphors. The only denser white material is thorium dioxide, with a density of 10 g/cm3, but the thorium it contains is radioactive.
Lutetium is also a component of several scintillating materials, which convert X-rays to visible light. It is part of LYSO, LuAG, and lutetium iodide scintillators.
Research indicates that lutetium-ion atomic clocks could provide greater accuracy than any existing atomic clock.
Unstable isotopes
Its suitable half-life and decay mode have made lutetium-176 useful as a pure beta emitter (prepared by neutron activation of lutetium) and in lutetium–hafnium dating of meteorites.
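The underlying age relation is the standard radiometric one, given here in textbook form as a sketch (not a procedural detail from this article); λ is the 176Lu decay constant, λ = ln 2 / t½ with t½ ≈ 3.76×10¹⁰ yr as quoted above:

```latex
% Standard radiometric age relation applied to the 176Lu -> 176Hf system.
\[
  {}^{176}\mathrm{Hf}^{*} = {}^{176}\mathrm{Lu}\left(e^{\lambda t} - 1\right)
  \quad\Longrightarrow\quad
  t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{{}^{176}\mathrm{Hf}^{*}}{{}^{176}\mathrm{Lu}}\right)
\]
```

Here 176Hf* denotes the radiogenic hafnium accumulated since the rock or meteorite formed; in practice both quantities are measured as ratios against stable 177Hf.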
The isotope 177Lu emits low-energy beta particles and gamma rays and has a half-life around 7 days, positive characteristics for commercial applications, especially in therapeutic nuclear medicine.
The synthetic isotope lutetium-177 bound to octreotate (a somatostatin analogue) is used experimentally in targeted radionuclide therapy for neuroendocrine tumors. Lutetium-177 is also used as a radionuclide in neuroendocrine tumor therapy and bone pain palliation.
Lutetium (177Lu) vipivotide tetraxetan is a therapy for prostate cancer, FDA approved in 2022.
Precautions
Like other rare-earth metals, lutetium is regarded as having a low degree of toxicity, but its compounds should be handled with care nonetheless: for example, lutetium fluoride inhalation is dangerous and the compound irritates skin. Lutetium nitrate may be dangerous as it may explode and burn once heated. Lutetium oxide powder is toxic as well if inhaled or ingested.
Similarly to the other rare-earth metals, lutetium has no known biological role, but it is found even in humans, concentrating in bones, and to a lesser extent in the liver and kidneys. Lutetium salts are known to occur together with other lanthanide salts in nature; the element is the least abundant in the human body of all lanthanides. Human diets have not been monitored for lutetium content, so it is not known how much the average human takes in, but estimations show the amount is only about several micrograms per year, all coming from tiny amounts absorbed by plants. Soluble lutetium salts are mildly toxic, but insoluble ones are not.
| Physical sciences | Chemical elements_2 | null |
17746 | https://en.wikipedia.org/wiki/Lawrencium | Lawrencium | Lawrencium is a synthetic chemical element; it has symbol Lr (formerly Lw) and atomic number 103. It is named after Ernest Lawrence, inventor of the cyclotron, a device that was used to discover many artificial radioactive elements. A radioactive metal, lawrencium is the eleventh transuranium element, the third transfermium, and the last member of the actinide series. Like all elements with atomic number over 100, lawrencium can only be produced in particle accelerators by bombarding lighter elements with charged particles. Fourteen isotopes of lawrencium are currently known; the most stable is 266Lr with half-life 11 hours, but the shorter-lived 260Lr (half-life 2.7 minutes) is most commonly used in chemistry because it can be produced on a larger scale.
Chemistry experiments confirm that lawrencium behaves as a heavier homolog to lutetium in the periodic table, and is a trivalent element. It thus could also be classified as the first of the 7th-period transition metals. Its electron configuration is anomalous for its position in the periodic table, having an s2p configuration instead of the s2d configuration of its homolog lutetium. However, this does not appear to affect lawrencium's chemistry.
In the 1950s, 1960s, and 1970s, many claims of the synthesis of element 103 of varying quality were made from laboratories in the Soviet Union and the United States. The priority of the discovery and therefore the name of the element was disputed between Soviet and American scientists. The International Union of Pure and Applied Chemistry (IUPAC) initially established lawrencium as the official name for the element and gave the American team credit for the discovery; this was reevaluated in 1992, giving both teams shared credit for the discovery but not changing the element's name.
History
In 1958, scientists at Lawrence Berkeley National Laboratory claimed the discovery of element 102, now called nobelium. At the same time, they also tried to synthesize element 103 by bombarding the same curium target with nitrogen-14 ions. Eighteen tracks were noted, with decay energy around 9 MeV and half-life around 0.25 s; the Berkeley team noted that while the cause could be the production of an isotope of element 103, other possibilities could not be ruled out. While the data agrees reasonably with that later discovered for 257Lr (alpha decay energy 8.87 MeV, half-life 0.6 s), the evidence obtained in this experiment fell far short of the strength required to conclusively demonstrate synthesis of element 103. A follow-up on this experiment was not done, as the target was destroyed. Later, in 1960, the Lawrence Berkeley Laboratory attempted to synthesize the element by bombarding 252Cf with 10B and 11B. The results of this experiment were not conclusive.
The first important work on element 103 was done at Berkeley by the nuclear-physics team of Albert Ghiorso, Torbjørn Sikkeland, Almon Larsh, Robert M. Latimer, and their co-workers on February 14, 1961. The first atoms of lawrencium were reportedly made by bombarding a three-milligram target consisting of three isotopes of californium with boron-10 and boron-11 nuclei from the Heavy Ion Linear Accelerator (HILAC). The Berkeley team reported that the isotope 257103 was detected in this manner, and that it decayed by emitting an 8.6 MeV alpha particle with a half-life of 8 seconds. This identification was later corrected to 258103, as later work proved that 257Lr did not have the properties detected, but 258Lr did. This was considered at the time to be convincing proof of synthesis of element 103: while the mass assignment was less certain and proved to be mistaken, it did not affect the arguments in favor of element 103 having been synthesized. Scientists at Joint Institute for Nuclear Research in Dubna (then in the Soviet Union) raised several criticisms: all but one were answered adequately. The exception was that 252Cf was the most common isotope in the target, and in the reactions with 10B, 258Lr could only have been produced by emitting four neutrons, and emitting three neutrons was expected to be much less likely than emitting four or five. This would lead to a narrow yield curve, not the broad one reported by the Berkeley team. A possible explanation was that there was a low number of events attributed to element 103. This was an important intermediate step to the unquestioned discovery of element 103, although the evidence was not completely convincing. The Berkeley team proposed the name "lawrencium" with symbol "Lw", after Ernest Lawrence, inventor of the cyclotron. The IUPAC Commission on Nomenclature of Inorganic Chemistry accepted the name, but changed the symbol to "Lr". This acceptance of the discovery was later characterized as being hasty by the Dubna team.
252Cf + 11B → 263Lr* → 258Lr + 5 n
The first work at Dubna on element 103 came in 1965, when they reported to have made 256103 by bombarding 243Am with 18O, identifying it indirectly from its granddaughter fermium-252. The half-life they reported was somewhat too high, possibly due to background events. Later 1967 work on the same reaction identified two decay energies in the ranges 8.35–8.50 MeV and 8.50–8.60 MeV: these were assigned to 256103 and 257103. Despite repeated attempts, they were unable to confirm assignment of an alpha emitter with a half-life of 8 seconds to 257103. The Russians proposed the name "rutherfordium" for the new element in 1967: this name was later proposed by Berkeley for element 104.
243Am + 18O → 261Lr* → 256Lr + 5 n
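A quick, illustrative Python check that mass numbers balance in the two reactions reconstructed above (atomic numbers balance analogously, with Z = 103 for lawrencium):

```python
# Sketch: verifying nucleon (mass number) conservation for the synthesis reactions.
def mass_balanced(target_a: int, beam_a: int, product_a: int, neutrons: int) -> bool:
    """True if mass numbers are conserved: target + beam = product + neutrons."""
    return target_a + beam_a == product_a + neutrons

print(mass_balanced(252, 11, 258, 5))  # 252Cf + 11B -> 258Lr + 5n  -> True
print(mass_balanced(243, 18, 256, 5))  # 243Am + 18O -> 256Lr + 5n  -> True
```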
Further experiments in 1969 at Dubna and in 1970 at Berkeley demonstrated an actinide chemistry for the new element; so by 1970 it was known that element 103 is the last actinide. In 1970, the Dubna group reported the synthesis of 255103 with half-life 20 s and alpha decay energy 8.38 MeV. However, it was not until 1971, when the nuclear physics team at University of California at Berkeley successfully did a whole series of experiments aimed at measuring the nuclear decay properties of the lawrencium isotopes with mass numbers 255 to 260, that all previous results from Berkeley and Dubna were confirmed, apart from the Berkeley group's initial erroneous assignment of their first produced isotope to 257103 instead of the probably correct 258103. All final doubts were dispelled in 1976 and 1977 when the energies of X-rays emitted from 258103 were measured.
In 1971, the IUPAC granted the discovery of lawrencium to the Lawrence Berkeley Laboratory, even though they did not have ideal data for the element's existence. But in 1992, the IUPAC Transfermium Working Group (TWG) officially recognized the nuclear physics teams at Dubna and Berkeley as co-discoverers of lawrencium, concluding that while the 1961 Berkeley experiments were an important step to lawrencium's discovery, they were not yet fully convincing; and while the 1965, 1968, and 1970 Dubna experiments came very close to the needed level of confidence taken together, only the 1971 Berkeley experiments, which clarified and confirmed previous observations, finally resulted in complete confidence in the discovery of element 103. Because the name "lawrencium" had been in use for a long time by this point, it was retained by IUPAC, and in August 1997, the International Union of Pure and Applied Chemistry (IUPAC) ratified the name lawrencium and the symbol "Lr" during a meeting in Geneva.
Characteristics
Physical
Lawrencium is the last actinide. Authors considering the subject generally consider it a group 3 element, along with scandium, yttrium, and lutetium, as its filled f-shell is expected to make it resemble the other 7th-period transition metals. In the periodic table, it is to the right of the actinide nobelium, to the left of the 6d transition metal rutherfordium, and under the lanthanide lutetium with which it shares many physical and chemical properties. Lawrencium is expected to be a solid under normal conditions and have a hexagonal close-packed crystal structure (c/a = 1.58), similar to its lighter congener lutetium, though this is not yet known experimentally. The enthalpy of sublimation of lawrencium is estimated at 352 kJ/mol, close to the value of lutetium and strongly suggesting that metallic lawrencium is trivalent with three electrons delocalized, a prediction also supported by a systematic extrapolation of the values of heat of vaporization, bulk modulus, and atomic volume of neighboring elements to lawrencium. This makes it unlike the immediately preceding late actinides which are known to be (fermium and mendelevium) or expected to be (nobelium) divalent. The estimated enthalpies of vaporization show that lawrencium deviates from the trend of the late actinides and instead matches the trend of the succeeding 6d elements rutherfordium and dubnium, consistent with lawrencium's interpretation as a group 3 element. Some scientists prefer to end the actinides with nobelium and consider lawrencium to be the first transition metal of the seventh period.
Lawrencium is expected to be a trivalent, silvery metal, easily oxidized by air, steam, and acids, and having an atomic volume similar to that of lutetium and a trivalent metallic radius of 171 pm. It is expected to be a rather heavy metal with a density of around 14.4 g/cm3. It is also predicted to have a melting point of around 1900 K (1600 °C), not far from the value for lutetium (1925 K).
Chemical
In 1949, Glenn T. Seaborg, who devised the actinide concept, predicted that element 103 (lawrencium) should be the last actinide and that the Lr3+ ion should be about as stable as Lu3+ in aqueous solution. It was not until decades later that element 103 was finally conclusively synthesized and this prediction was experimentally confirmed.
Studies on the element, performed in 1969, showed that lawrencium reacts with chlorine to form a product that was most likely the trichloride, LrCl3. Its volatility was found to be similar to the chlorides of curium, fermium, and nobelium and much less than that of rutherfordium chloride. In 1970, chemical studies were performed on 1500 atoms of 256Lr, comparing it with divalent (No, Ba, Ra), trivalent (Fm, Cf, Cm, Am, Ac), and tetravalent (Th, Pu) elements. It was found that lawrencium coextracted with the trivalent ions, but the short half-life of 256Lr precluded a confirmation that it eluted ahead of Md3+ in the elution sequence. Lawrencium occurs as the trivalent Lr3+ ion in aqueous solution and hence its compounds should be similar to those of the other trivalent actinides: for example, lawrencium(III) fluoride (LrF3) and hydroxide (Lr(OH)3) should both be insoluble in water. Due to the actinide contraction, the ionic radius of Lr3+ should be smaller than that of Md3+, and it should elute ahead of Md3+ when ammonium α-hydroxyisobutyrate (ammonium α-HIB) is used as an eluant. Later 1987 experiments on the longer-lived isotope 260Lr confirmed lawrencium's trivalency and that it eluted in roughly the same place as erbium, but found that lawrencium's ionic radius was larger than would be expected from simple extrapolation from periodic trends. Later 1988 experiments with more lawrencium atoms refined this measurement and produced an estimate of the enthalpy of hydration. It was also found that the actinide contraction at the end of the actinides was larger than the analogous lanthanide contraction, with the exception of the last actinide, lawrencium: the cause was speculated to be relativistic effects.
It has been speculated that the 7s electrons are relativistically stabilized, so that in reducing conditions, only the 7p1/2 electron would be ionized, leading to the monovalent Lr+ ion. However, all experiments to reduce Lr3+ to Lr2+ or Lr+ in aqueous solution were unsuccessful, similarly to lutetium. On the basis of this, the standard electrode potential of the E°(Lr3+→Lr+) couple was calculated to be less than −1.56 V, indicating that the existence of Lr+ ions in aqueous solution was unlikely. The upper limit for the E°(Lr3+→Lr2+) couple was predicted to be −0.44 V; the values for E°(Lr3+→Lr) and E°(Lr4+→Lr3+) are predicted to be −2.06 V and +7.9 V. The stability of the group oxidation state in the 6d transition series decreases as RfIV > DbV > SgVI, and lawrencium continues the trend with LrIII being more stable than RfIV.
In the molecule lawrencium dihydride (LrH2), which is predicted to be bent, the 6d orbital of lawrencium is not expected to play a role in the bonding, unlike that of lanthanum dihydride (LaH2). LaH2 has La–H bond distances of 2.158 Å, while LrH2 should have shorter Lr–H bond distances of 2.042 Å due to the relativistic contraction and stabilization of the 7s and 7p orbitals involved in the bonding, in contrast to the core-like 5f subshell and the mostly uninvolved 6d subshell. In general, molecular LrH2 and LrH are expected to resemble the corresponding thallium species (thallium having a 6s26p1 valence configuration in the gas phase, like lawrencium's 7s27p1) more than the corresponding lanthanide species. The electron configurations of Lr+ and Lr2+ are expected to be 7s2 and 7s1 respectively. However, in species where all three valence electrons of lawrencium are ionized to give at least formally the Lr3+ cation, lawrencium is expected to behave like a typical actinide and the heavier congener of lutetium, especially because the first three ionization potentials of lawrencium are predicted to be similar to those of lutetium. Hence, unlike thallium but like lutetium, lawrencium would prefer to form LrH3 than LrH, and LrCO is expected to be similar to the also unknown LuCO, both metals having valence configuration σ2π1 in their monocarbonyls. The pπ–dπ bond is expected to be seen in LrCl3 just as it is for LuCl3 and more generally all the LnCl3. The complex anion [Lr(C5H4SiMe3)3]− is expected to be stable with a configuration of 6d1 for lawrencium; this 6d orbital would be its highest occupied molecular orbital. This is analogous to the electronic structure of the analogous lutetium compound.
Atomic
Lawrencium has three valence electrons: the 5f electrons are in the atomic core. In 1970, it was predicted that the ground-state electron configuration of lawrencium was [Rn]5f146d17s2 (ground state term symbol 2D3/2), per the Aufbau principle and conforming to the [Xe]4f145d16s2 configuration of lawrencium's lighter homolog lutetium. But the next year, calculations were published that questioned this prediction, instead expecting an anomalous [Rn]5f147s27p1 configuration. Though early calculations gave conflicting results, more recent studies and calculations confirm the s2p suggestion. 1974 relativistic calculations concluded that the energy difference between the two configurations was small and that it was uncertain which was the ground state. Later 1995 calculations concluded that the s2p configuration should be energetically favored, because the spherical s and p1/2 orbitals are nearest to the atomic nucleus and thus move quickly enough that their relativistic mass increases significantly.
In 1988, a team of scientists led by Eichler calculated that lawrencium's enthalpy of adsorption on metal surfaces would differ enough depending on its electron configuration that it would be feasible to carry out experiments to exploit this fact to measure lawrencium's electron configuration. A lawrencium atom with the s2p configuration was expected to be more volatile than one with the s2d configuration, and to behave more like the p-block element lead. No evidence for lawrencium being volatile was obtained, and the lower limit for the enthalpy of adsorption of lawrencium on quartz or platinum was significantly higher than the estimated value for the s2p configuration.
In 2015, the first ionization energy of lawrencium was measured, using the isotope 256Lr. The measured value, about 4.96 eV, agreed very well with the relativistic theoretical prediction of 4.963(15) eV, and also provided a first step into measuring the first ionization energies of the transactinides. This value is the lowest among all the lanthanides and actinides, and supports the s2p configuration as the 7p1/2 electron is expected to be only weakly bound. As ionization energies generally increase left to right in the f-block, this low value suggests that lutetium and lawrencium belong in the d-block (whose trend they follow) and not the f-block. That would make them the heavier congeners of scandium and yttrium, rather than lanthanum and actinium. Although some alkali metal-like behaviour has been predicted, adsorption experiments suggest that lawrencium is trivalent like scandium and yttrium, not monovalent like the alkali metals. A lower limit on lawrencium's second ionization energy (>13.3 eV) was experimentally found in 2021.
Even though s2p is now known to be the ground-state configuration of the lawrencium atom, ds2 should be a low-lying excited-state configuration, with an excitation energy variously calculated as 0.156 eV, 0.165 eV, or 0.626 eV. As such lawrencium may still be considered to be a d-block element, albeit with an anomalous electron configuration (like chromium or copper), as its chemical behaviour matches expectations for a heavier analogue of lutetium.
Isotopes
Fourteen isotopes of lawrencium are known, with mass numbers 251–262, 264, and 266; all are radioactive. Seven nuclear isomers are known. The longest-lived isotope, 266Lr, has a half-life of about ten hours and is one of the longest-lived superheavy isotopes known to date. However, shorter-lived isotopes are usually used in chemical experiments because 266Lr currently can only be produced as a final decay product of even heavier and harder-to-make elements: it was discovered in 2014 in the decay chain of 294Ts. 256Lr (half-life 27 seconds) was used in the first chemical studies on lawrencium: currently, the longer-lived 260Lr (half-life 2.7 minutes) is usually used for this purpose. After 266Lr, the longest-lived isotopes are 264Lr, 262Lr (3.6 h), and 261Lr (44 min). All other known lawrencium isotopes have half-lives under 5 minutes, and the shortest-lived of them (251Lr) has a half-life of 24.4 milliseconds. The half-lives of lawrencium isotopes mostly increase smoothly from 251Lr to 266Lr, with a dip from 257Lr to 259Lr.
Preparation and purification
Most isotopes of lawrencium can be produced by bombarding actinide (americium to einsteinium) targets with light ions (from boron to neon). The two most important isotopes, 256Lr and 260Lr, can be respectively produced by bombarding californium-249 with 70 MeV boron-11 ions (producing lawrencium-256 and four neutrons) and by bombarding berkelium-249 with oxygen-18 (producing lawrencium-260, an alpha particle, and three neutrons). The two heaviest and longest-lived known isotopes, 264Lr and 266Lr, can only be produced at much lower yields as decay products of dubnium, whose progenitors are isotopes of moscovium and tennessine.
Both 256Lr and 260Lr have half-lives too short to allow a complete chemical purification process. Early experiments with 256Lr therefore used rapid solvent extraction, with the chelating agent thenoyltrifluoroacetone (TTA) dissolved in methyl isobutyl ketone (MIBK) as the organic phase, and with the aqueous phase being buffered acetate solutions. Ions of different charge (+2, +3, or +4) will then extract into the organic phase under different pH ranges, but this method will not separate the trivalent actinides and thus 256Lr must be identified by its emitted 8.24 MeV alpha particles. More recent methods have allowed rapid selective elution with α-HIB to take place in enough time to separate out the longer-lived isotope 260Lr, which can be removed from the catcher foil with 0.05 M hydrochloric acid.
| Physical sciences | Group 3 | Chemistry |
17747 | https://en.wikipedia.org/wiki/Lead | Lead | Lead is a chemical element; it has symbol Pb (from Latin plumbum) and atomic number 82. It is a heavy metal that is denser than most common materials. Lead is soft and malleable, and also has a relatively low melting point. When freshly cut, lead is a shiny gray with a hint of blue. It tarnishes to a dull gray color when exposed to air. Lead has the highest atomic number of any stable element and three of its isotopes are endpoints of major nuclear decay chains of heavier elements.
Lead is a relatively unreactive post-transition metal. Its weak metallic character is illustrated by its amphoteric nature; lead and lead oxides react with acids and bases, and it tends to form covalent bonds. Compounds of lead are usually found in the +2 oxidation state rather than the +4 state common with lighter members of the carbon group. Exceptions are mostly limited to organolead compounds. Like the lighter members of the group, lead tends to bond with itself; it can form chains and polyhedral structures.
Since lead is easily extracted from its ores, prehistoric people in the Near East were aware of it. Galena is a principal ore of lead which often bears silver. Interest in silver helped initiate widespread extraction and use of lead in ancient Rome. Lead production declined after the fall of Rome and did not reach comparable levels until the Industrial Revolution. Lead played a crucial role in the development of the printing press, as movable type could be relatively easily cast from lead alloys. In 2014, the annual global production of lead was about ten million tonnes, over half of which was from recycling. Lead's high density, low melting point, ductility and relative inertness to oxidation make it useful. These properties, combined with its relative abundance and low cost, resulted in its extensive use in construction, plumbing, batteries, bullets, shots, weights, solders, pewters, fusible alloys, lead paints, leaded gasoline, and radiation shielding.
Lead is a neurotoxin that accumulates in soft tissues and bones. It damages the nervous system and interferes with the function of biological enzymes, causing neurological disorders ranging from behavioral problems to brain damage, and also affects general health, cardiovascular, and renal systems. Lead's toxicity was first documented by ancient Greek and Roman writers, who noted some of the symptoms of lead poisoning, but became widely recognized in Europe in the late 19th century AD.
Physical properties
Atomic
A lead atom has 82 electrons, arranged in an electron configuration of [Xe]4f145d106s26p2. The sum of lead's first and second ionization energies—the total energy required to remove the two 6p electrons—is close to that of tin, lead's upper neighbor in the carbon group. This is unusual; ionization energies generally fall going down a group, as an element's outer electrons become more distant from the nucleus, and more shielded by smaller orbitals.
The sum of the first four ionization energies of lead exceeds that of tin, contrary to what periodic trends would predict. This is explained by relativistic effects, which become significant in heavier atoms and which contract the s and p orbitals such that lead's 6s electrons have larger binding energies than its 5s electrons. A consequence is the so-called inert pair effect: the 6s electrons of lead become reluctant to participate in bonding, stabilising the +2 oxidation state and making the distance between nearest atoms in crystalline lead unusually long.
Lead's lighter carbon group congeners form stable or metastable allotropes with the tetrahedrally coordinated and covalently bonded diamond cubic structure. The energy levels of their outer s- and p-orbitals are close enough to allow mixing into four hybrid sp3 orbitals. In lead, the inert pair effect increases the separation between its s- and p-orbitals, and the gap cannot be overcome by the energy that would be released by extra bonds following hybridization. Rather than having a diamond cubic structure, lead forms metallic bonds in which only the p-electrons are delocalized and shared between the Pb2+ ions. Lead consequently has a face-centered cubic structure like the similarly sized divalent metals calcium and strontium.
Bulk
Pure lead has a bright, shiny gray appearance with a hint of blue. It tarnishes on contact with moist air and takes on a dull appearance, the hue of which depends on the prevailing conditions. Characteristic properties of lead include high density, malleability, ductility, and high resistance to corrosion due to passivation.
Lead's close-packed face-centered cubic structure and high atomic weight result in a density of 11.34 g/cm3, which is greater than that of common metals such as iron (7.87 g/cm3), copper (8.93 g/cm3), and zinc (7.14 g/cm3). This density is the origin of the idiom to go over like a lead balloon. Some rarer metals are denser: tungsten and gold are both at 19.3 g/cm3, and osmium—the densest metal known—has a density of 22.59 g/cm3, almost twice that of lead.
Lead is a very soft metal with a Mohs hardness of 1.5; it can be scratched with a fingernail. It is quite malleable and somewhat ductile. The bulk modulus of lead—a measure of its ease of compressibility—is 45.8 GPa. In comparison, that of aluminium is 75.2 GPa; copper 137.8 GPa; and mild steel 160–169 GPa. Lead's tensile strength, at 12–17 MPa, is low (that of aluminium is 6 times higher, copper 10 times, and mild steel 15 times higher); it can be strengthened by adding small amounts of copper or antimony.
The melting point of lead—at 327.5 °C (621.5 °F)—is very low compared to most metals. Its boiling point of 1749 °C (3180 °F) is the lowest among the carbon-group elements. The electrical resistivity of lead at 20 °C is 192 nanoohm-meters, almost an order of magnitude higher than those of other industrial metals such as copper, gold, and aluminium. Lead is a superconductor at temperatures lower than 7.19 K; this is the highest critical temperature of all type-I superconductors and the third highest of the elemental superconductors.
Isotopes
Natural lead consists of four stable isotopes with mass numbers of 204, 206, 207, and 208, and traces of six short-lived radioisotopes with mass numbers 209–214 inclusive. The high number of isotopes is consistent with lead's atomic number being even. Lead has a magic number of protons (82), for which the nuclear shell model accurately predicts an especially stable nucleus. Lead-208 has 126 neutrons, another magic number, which may explain why lead-208 is extraordinarily stable.
With its high atomic number, lead is the heaviest element whose natural isotopes are regarded as stable; lead-208 is the heaviest stable nucleus. (This distinction formerly fell to bismuth, with an atomic number of 83, until its only primordial isotope, bismuth-209, was found in 2003 to decay very slowly.) The four stable isotopes of lead could theoretically undergo alpha decay to isotopes of mercury with a release of energy, but this has not been observed for any of them; their predicted half-lives range from 10³⁵ to 10¹⁸⁹ years (at least 10²⁵ times the current age of the universe).
Three of the stable isotopes are found in three of the four major decay chains: lead-206, lead-207, and lead-208 are the final decay products of uranium-238, uranium-235, and thorium-232, respectively. These decay chains are called the uranium chain, the actinium chain, and the thorium chain. Their isotopic concentrations in a natural rock sample depend greatly on the presence of these three parent uranium and thorium isotopes. For example, the relative abundance of lead-208 can range from 52% in normal samples to 90% in thorium ores; for this reason, the standard atomic weight of lead is given to only one decimal place. As time passes, the ratio of lead-206 and lead-207 to lead-204 increases, since the former two are supplemented by radioactive decay of heavier elements while the latter is not; this allows for lead–lead dating. As uranium decays into lead, their relative amounts change; this is the basis for uranium–lead dating. Lead-207 exhibits nuclear magnetic resonance, a property that has been used to study its compounds in solution and solid state, including in the human body.
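As a sketch of how these chains underpin dating, the textbook uranium–lead age equations are given below (assuming a closed system and no initial radiogenic lead; λ238 and λ235 are the decay constants of 238U and 235U):

```latex
% Uranium–lead age equations for the two chains named above.
\[
  \frac{{}^{206}\mathrm{Pb}^{*}}{{}^{238}\mathrm{U}} = e^{\lambda_{238} t} - 1,
  \qquad
  \frac{{}^{207}\mathrm{Pb}^{*}}{{}^{235}\mathrm{U}} = e^{\lambda_{235} t} - 1
\]
```

Here Pb* denotes radiogenic lead, conventionally measured against the non-radiogenic lead-204; a sample whose two equations yield the same age t is termed concordant.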
Apart from the stable isotopes, which make up almost all lead that exists naturally, there are trace quantities of a few radioactive isotopes. One of them is lead-210; although it has a half-life of only 22.2 years, small quantities occur in nature because lead-210 is produced by a long decay series that starts with uranium-238 (that has been present for billions of years on Earth). Lead-211, −212, and −214 are present in the decay chains of uranium-235, thorium-232, and uranium-238, respectively, so traces of all three of these lead isotopes are found naturally. Minute traces of lead-209 arise from the very rare cluster decay of radium-223, one of the daughter products of natural uranium-235, and the decay chain of neptunium-237, traces of which are produced by neutron capture in uranium ores. Lead-213 also occurs in the decay chain of neptunium-237. Lead-210 is particularly useful for helping to identify the ages of samples by measuring its ratio to lead-206 (both isotopes are present in a single decay chain).
In total, 43 lead isotopes have been synthesized, with mass numbers 178–220. Lead-205 is the most stable radioisotope, with a half-life of around 1.7×10⁷ years. The second-most stable is lead-202, which has a half-life of about 52,500 years, longer than any of the natural trace radioisotopes.
Chemistry
Bulk lead exposed to moist air forms a protective layer of varying composition. Lead(II) carbonate is a common constituent; the sulfate or chloride may also be present in urban or maritime settings. This layer makes bulk lead effectively chemically inert in the air. Finely powdered lead, as with many metals, is pyrophoric, and burns with a bluish-white flame.
Fluorine reacts with lead at room temperature, forming lead(II) fluoride. The reaction with chlorine is similar but requires heating, as the resulting chloride layer diminishes the reactivity of the elements. Molten lead reacts with the chalcogens to give lead(II) chalcogenides.
Lead metal resists sulfuric and phosphoric acid but not hydrochloric or nitric acid; the outcome depends on insolubility and subsequent passivation of the product salt. Organic acids, such as acetic acid, dissolve lead in the presence of oxygen. Concentrated alkalis dissolve lead and form plumbites.
Inorganic compounds
Lead shows two main oxidation states: +4 and +2. The tetravalent state is common for the carbon group. The divalent state is rare for carbon and silicon, minor for germanium, important (but not prevailing) for tin, and is the more important of the two oxidation states for lead. This is attributable to relativistic effects, specifically the inert pair effect, which manifests itself when there is a large difference in electronegativity between lead and oxide, halide, or nitride anions, leading to a significant partial positive charge on lead. The result is a stronger contraction of the lead 6s orbital than is the case for the 6p orbital, making it rather inert in ionic compounds. The inert pair effect is less applicable to compounds in which lead forms covalent bonds with elements of similar electronegativity, such as carbon in organolead compounds. In these, the 6s and 6p orbitals remain similarly sized and sp3 hybridization is still energetically favorable. Lead, like carbon, is predominantly tetravalent in such compounds.
There is a relatively large difference in the electronegativity of lead(II) at 1.87 and lead(IV) at 2.33. This difference marks the reversal in the trend of increasing stability of the +4 oxidation state going down the carbon group; tin, by comparison, has values of 1.80 in the +2 oxidation state and 1.96 in the +4 state.
Lead(II)
Lead(II) compounds are characteristic of the inorganic chemistry of lead. Even strong oxidizing agents like fluorine and chlorine react with lead to give only PbF2 and PbCl2. Lead(II) ions are usually colorless in solution, and partially hydrolyze to form Pb(OH)+ and finally [Pb4(OH)4]4+ (in which the hydroxyl ions act as bridging ligands), but are not reducing agents as tin(II) ions are. Techniques for identifying the presence of the Pb2+ ion in water generally rely on the precipitation of lead(II) chloride using dilute hydrochloric acid. As the chloride salt is sparingly soluble in water, in very dilute solutions the precipitation of lead(II) sulfide is instead achieved by bubbling hydrogen sulfide through the solution.
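The phrase "sparingly soluble" can be made quantitative with a solubility-product estimate. A minimal Python sketch, assuming a literature Ksp of about 1.7×10⁻⁵ for lead(II) chloride at 25 °C and ignoring activity effects and chloro-complex formation:
  # PbCl2 dissolving: Ksp = [Pb2+][Cl-]^2 = (s)(2s)^2 = 4*s**3
  K_sp = 1.7e-5                  # assumed literature value at 25 degC
  s = (K_sp / 4) ** (1 / 3)      # molar solubility, mol/L
  grams_per_litre = s * 278.1    # molar mass of PbCl2 is ~278.1 g/mol
  print(f"~{s:.3f} mol/L, ~{grams_per_litre:.1f} g/L")  # ~0.016 mol/L, ~4.5 g/L
A few grams per liter is soluble enough that very dilute Pb2+ solutions give no visible chloride precipitate, which is why the sulfide test is used instead.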
Lead monoxide exists in two polymorphs, litharge α-PbO (red) and massicot β-PbO (yellow), the latter being stable only above around 488 °C. Litharge is the most commonly used inorganic compound of lead. There is no lead(II) hydroxide; increasing the pH of solutions of lead(II) salts leads to hydrolysis and condensation. Lead commonly reacts with heavier chalcogens. Lead sulfide is a semiconductor, a photoconductor, and an extremely sensitive infrared radiation detector. The other two chalcogenides, lead selenide and lead telluride, are likewise photoconducting. They are unusual in that their color becomes lighter going down the group.
Lead dihalides are well-characterized; this includes the diastatide and mixed halides, such as PbFCl. The relative insolubility of the latter forms a useful basis for the gravimetric determination of fluorine. The difluoride was the first solid ionically conducting compound to be discovered (in 1834, by Michael Faraday). The other dihalides decompose on exposure to ultraviolet or visible light, especially the diiodide. Many lead(II) pseudohalides are known, such as the cyanide, cyanate, and thiocyanate. Lead(II) forms an extensive variety of halide coordination complexes, such as [PbCl4]2−, [PbCl6]4−, and the [Pb₂Cl₉]ₙ⁵ⁿ⁻ chain anion.
Lead(II) sulfate is insoluble in water, like the sulfates of other heavy divalent cations. Lead(II) nitrate and lead(II) acetate are very soluble, and this is exploited in the synthesis of other lead compounds.
Lead(IV)
Few inorganic lead(IV) compounds are known. They are only formed in highly oxidizing solutions and do not normally exist under standard conditions. Lead(II) oxide gives a mixed oxide on further oxidation, Pb3O4. It is described as lead(II,IV) oxide, or structurally 2PbO·PbO2, and is the best-known mixed valence lead compound. Lead dioxide is a strong oxidizing agent, capable of oxidizing hydrochloric acid to chlorine gas. This is because the expected PbCl4 that would be produced is unstable and spontaneously decomposes to PbCl2 and Cl2. Analogously to lead monoxide, lead dioxide is capable of forming plumbate anions. Lead disulfide and lead diselenide are only stable at high pressures. Lead tetrafluoride, a yellow crystalline powder, is stable, but less so than the difluoride. Lead tetrachloride (a yellow oil) decomposes at room temperature, lead tetrabromide is less stable still, and the existence of lead tetraiodide is questionable.
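The oxidation of hydrochloric acid by lead dioxide mentioned above sums to the overall reaction:
PbO2(s) + 4 HCl(aq) → PbCl2(s) + Cl2(g)↑ + 2 H2O(l)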
Other oxidation states
Some lead compounds exist in formal oxidation states other than +4 or +2. Lead(III) may be obtained, as an intermediate between lead(II) and lead(IV), in larger organolead complexes; this oxidation state is not stable, as both the lead(III) ion and the larger complexes containing it are radicals. The same applies for lead(I), which can be found in such radical species.
Numerous mixed lead(II,IV) oxides are known. When PbO2 is heated in air, it becomes Pb12O19 at 293 °C, Pb12O17 at 351 °C, Pb3O4 at 374 °C, and finally PbO at 605 °C. A further sesquioxide, Pb2O3, can be obtained at high pressure, along with several non-stoichiometric phases. Many of them show defective fluorite structures in which some oxygen atoms are replaced by vacancies: PbO can be considered as having such a structure, with every alternate layer of oxygen atoms absent.
Negative oxidation states can occur as Zintl phases, either as free lead anions, as in Ba2Pb, with lead formally being lead(−IV), or in oxygen-sensitive ring-shaped or polyhedral cluster ions such as the trigonal bipyramidal [Pb5]2− ion, where two lead atoms are lead(−I) and three are lead(0). In such anions, each atom is at a polyhedral vertex and contributes two electrons to each covalent bond along an edge from their sp3 hybrid orbitals, the other two being an external lone pair. They may be made in liquid ammonia via the reduction of lead by sodium.
Organolead
Lead can form multiply-bonded chains, a property it shares with its lighter homologs in the carbon group. Its capacity to do so is much less because the Pb–Pb bond energy is less than a third of that of the C–C bond. With itself, lead can build metal–metal bonds of an order up to three. With carbon, lead forms organolead compounds similar to, but generally less stable than, typical organic compounds (due to the Pb–C bond being rather weak). This makes the organometallic chemistry of lead far less wide-ranging than that of tin. Lead predominantly forms organolead(IV) compounds, even when starting with inorganic lead(II) reactants; very few organolead(II) compounds are known. The most well-characterized exceptions are Pb[CH(SiMe3)2]2 and plumbocene.
The lead analog of the simplest organic compound, methane, is plumbane. Plumbane may be obtained in a reaction between metallic lead and atomic hydrogen. Two simple derivatives, tetramethyllead and tetraethyllead, are the best-known organolead compounds. These compounds are relatively stable: tetraethyllead only starts to decompose if heated or if exposed to sunlight or ultraviolet light. With sodium metal, lead readily forms an equimolar alloy that reacts with alkyl halides to form organometallic compounds such as tetraethyllead. The oxidizing nature of many organolead compounds is usefully exploited: lead tetraacetate is an important laboratory reagent for oxidation in organic synthesis. Tetraethyllead, once added to automotive gasoline, was produced in larger quantities than any other organometallic compound, and is still widely used in fuel for small aircraft.
Other organolead compounds are less chemically stable. For many organic compounds, a lead analog does not exist.
Origin and occurrence
In space
Lead's per-particle abundance in the Solar System is 0.121 ppb (parts per billion). This figure is two and a half times higher than that of platinum, eight times more than mercury, and seventeen times more than gold. The amount of lead in the universe is slowly increasing as most heavier atoms (all of which are unstable) gradually decay to lead. The abundance of lead in the Solar System since its formation 4.5 billion years ago has increased by about 0.75%. The solar system abundances table shows that lead, despite its relatively high atomic number, is more prevalent than most other elements with atomic numbers greater than 40.
Primordial lead—which comprises the isotopes lead-204, lead-206, lead-207, and lead-208—was mostly created as a result of repetitive neutron capture processes occurring in stars. The two main modes of capture are the s- and r-processes.
In the s-process (s is for "slow"), captures are separated by years or decades, allowing less stable nuclei to undergo beta decay. A stable thallium-203 nucleus can capture a neutron and become thallium-204; this undergoes beta decay to give stable lead-204; on capturing another neutron, it becomes lead-205, which has a half-life of around 17 million years. Further captures result in lead-206, lead-207, and lead-208. On capturing another neutron, lead-208 becomes lead-209, which quickly decays into bismuth-209. On capturing another neutron, bismuth-209 becomes bismuth-210, and this beta decays to polonium-210, which alpha decays to lead-206. The cycle hence ends at lead-206, lead-207, lead-208, and bismuth-209.
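In standard nuclear notation, the first steps of this sequence are:
²⁰³Tl + n → ²⁰⁴Tl + γ; ²⁰⁴Tl → ²⁰⁴Pb + e⁻ + ν̄
after which successive neutron captures carry the chain through lead-205 to lead-208.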
In the r-process (r is for "rapid"), captures happen faster than nuclei can decay. This occurs in environments with a high neutron density, such as a supernova or the merger of two neutron stars. The neutron flux involved may be on the order of 10²² neutrons per square centimeter per second. The r-process does not form as much lead as the s-process. It tends to stop once neutron-rich nuclei reach 126 neutrons. At this point, the neutrons are arranged in complete shells in the atomic nucleus, and it becomes harder to energetically accommodate more of them. When the neutron flux subsides, these nuclei beta decay into stable isotopes of osmium, iridium, and platinum.
On Earth
Lead is classified as a chalcophile under the Goldschmidt classification, meaning it is generally found combined with sulfur. It rarely occurs in its native, metallic form. Many lead minerals are relatively light and, over the course of the Earth's history, have remained in the crust instead of sinking deeper into the Earth's interior. This accounts for lead's relatively high crustal abundance of 14 ppm; it is the 36th most abundant element in the crust.
The main lead-bearing mineral is galena (PbS), which is mostly found with zinc ores. Most other lead minerals are related to galena in some way; boulangerite, Pb5Sb4S11, is a mixed sulfide derived from galena; anglesite, PbSO4, is a product of galena oxidation; and cerussite or white lead ore, PbCO3, is a decomposition product of galena. Arsenic, tin, antimony, silver, gold, copper, and bismuth are common impurities in lead minerals.
World lead resources exceed two billion tons. Significant deposits are located in Australia, China, Ireland, Mexico, Peru, Portugal, Russia, and the United States. Global reserves (resources that are economically feasible to extract) totaled 88 million tons in 2016, of which Australia had 35 million, China 17 million, and Russia 6.4 million.
Typical background concentrations of lead do not exceed 0.1 μg/m3 in the atmosphere, 100 mg/kg in soil, 4 mg/kg in vegetation, and 5 μg/L in fresh water and seawater.
Etymology
The modern English word lead is of Germanic origin; it comes from the Middle English leed and Old English lēad (with the macron above the "e" signifying that the vowel sound of that letter is long). The Old English word is derived from the hypothetical reconstructed Proto-Germanic *lauda- ('lead'). According to linguistic theory, this word bore descendants in multiple Germanic languages of exactly the same meaning.
There is no consensus on the origin of the Proto-Germanic *lauda-. One hypothesis suggests it is derived from Proto-Indo-European *lAudh- ('lead'; capitalization of the vowel is equivalent to the macron). Another hypothesis suggests it is borrowed from Proto-Celtic *ɸloud-io- ('lead'). This word is related to the Latin plumbum, which gave the element its chemical symbol Pb. The word *ɸloud-io- is thought to be the origin of Proto-Germanic *bliwa- (which also means 'lead'), from which stemmed the German Blei.
The name of the chemical element is not related to the verb of the same spelling, which is derived from Proto-Germanic *laidijan- ('to lead').
History
Prehistory and early history
Metallic lead beads dating back to 7000–6500 BC have been found in Asia Minor and may represent the first example of metal smelting. At that time, lead had few (if any) applications due to its softness and dull appearance. The major reason for the spread of lead production was its association with silver, which may be obtained by burning galena (a common lead mineral). The Ancient Egyptians were the first to use lead minerals in cosmetics, an application that spread to Ancient Greece and beyond; the Egyptians also used lead for sinkers in fishing nets, and in glazes, glasses, enamels, and ornaments. Various civilizations of the Fertile Crescent used lead as a writing material, as coins, and as a construction material. Lead was used by the ancient Chinese as a stimulant, as currency, as a contraceptive, and in chopsticks. The Indus Valley civilization and the Mesoamericans used it for making amulets; and the eastern and southern Africans used lead in wire drawing.
Classical era
Because silver was extensively used as a decorative material and an exchange medium, lead deposits came to be worked in Asia Minor from 3000 BC; later, lead deposits were developed in the Aegean and Laurion. These three regions collectively dominated production of mined lead until c. 1200 BC. Beginning c. 2000 BC, the Phoenicians worked deposits in the Iberian peninsula; by 1600 BC, lead mining existed in Cyprus, Greece, and Sardinia.
Rome's territorial expansion in Europe and across the Mediterranean, and its development of mining, led to it becoming the greatest producer of lead during the classical era, with an estimated annual output peaking at 80,000 tonnes. Like their predecessors, the Romans obtained lead mostly as a by-product of silver smelting. Lead mining occurred in Central Europe, Britain, the Balkans, Greece, Anatolia, and Hispania, with the last accounting for 40% of world production.
Lead tablets were commonly used as a material for letters. Lead coffins, cast in flat sand forms and with interchangeable motifs to suit the faith of the deceased, were used in ancient Judea. Lead was used to make sling bullets from the 5th century BC. In Roman times, lead sling bullets were widely used, and were effective at a distance of between 100 and 150 meters. The Balearic slingers, used as mercenaries in Carthaginian and Roman armies, were famous for their shooting distance and accuracy.
Lead was used for making water pipes in the Roman Empire; the Latin word for the metal, plumbum, is the origin of the English word "plumbing". Its ease of working, its low melting point enabling the easy fabrication of completely waterproof welded joints, and its resistance to corrosion ensured its widespread use in other applications, including pharmaceuticals, roofing, currency, and warfare. Writers of the time, such as Cato the Elder, Columella, and Pliny the Elder, recommended lead (and lead-coated) vessels for the preparation of sweeteners and preservatives added to wine and food. The lead conferred an agreeable taste due to the formation of "sugar of lead" (lead(II) acetate), whereas copper vessels imparted a bitter flavor through verdigris formation.
The Roman author Vitruvius reported the health dangers of lead and modern writers have suggested that lead poisoning played a major role in the decline of the Roman Empire. Other researchers have criticized such claims, pointing out, for instance, that not all abdominal pain is caused by lead poisoning. According to archaeological research, Roman lead pipes increased lead levels in tap water but such an effect was "unlikely to have been truly harmful". When lead poisoning did occur, victims were called "saturnine", dark and cynical, after the ghoulish father of the gods, Saturn. By association, lead was considered the father of all metals. Its status in Roman society was low as it was readily available and cheap.
Confusion with tin and antimony
Since the Bronze Age, metallurgists and engineers have understood the difference between rare and valuable tin, essential for alloying with copper to produce tough and corrosion-resistant bronze, and 'cheap and cheerful' lead. However, the nomenclature in some languages is similar. Romans called lead plumbum nigrum ("black lead"), and tin plumbum candidum ("bright lead"). The association of lead and tin can be seen in other languages: the word olovo in Czech translates to "lead", but in Russian, its cognate олово (olovo) means "tin". To add to the confusion, lead bore a close relation to antimony: both elements commonly occur as sulfides (galena and stibnite), often together. Pliny incorrectly wrote that stibnite would give lead on heating, instead of antimony. In countries such as Turkey and India, the originally Persian name surma came to refer to either antimony sulfide or lead sulfide, and in some languages, such as Russian, gave its name to antimony (сурьма).
Middle Ages and the Renaissance
Lead mining in Western Europe declined after the fall of the Western Roman Empire, with Arabian Iberia being the only region having a significant output. The largest production of lead occurred in South Asia and East Asia, especially China and India, where lead mining grew rapidly.
In Europe, lead production began to increase in the 11th and 12th centuries, when it was again used for roofing and piping. Starting in the 13th century, lead was used to create stained glass. In the European and Arabian traditions of alchemy, lead (symbol ♄ in the European tradition) was considered an impure base metal which, by the separation, purification and balancing of its constituent essences, could be transformed to pure and incorruptible gold. During the period, lead was used increasingly for adulterating wine. The use of such wine in Christian rites was forbidden by a papal bull in 1498, but it continued to be imbibed and resulted in mass poisonings up to the late 18th century. Lead was a key material in parts of the printing press, and lead dust was commonly inhaled by print workers, causing lead poisoning. Lead also became the chief material for making bullets for firearms: it was cheap, less damaging to iron gun barrels, had a higher density (which allowed for better retention of velocity), and its lower melting point made the production of bullets easier as they could be made using a wood fire. Lead, in the form of Venetian ceruse, was extensively used in cosmetics by the Western European aristocracy, as whitened faces were regarded as a sign of modesty. This practice later expanded to white wigs and eyeliners, and only faded out with the French Revolution in the late 18th century. A similar fashion appeared in Japan in the 18th century with the emergence of the geishas, a practice that continued long into the 20th century. The white faces of women "came to represent their feminine virtue as Japanese women", with lead commonly used in the whitener.
Outside Europe and Asia
In the New World, lead production was recorded soon after the arrival of European settlers. The earliest record dates to 1621 in the English Colony of Virginia, fourteen years after its foundation. In Australia, the first mine opened by colonists on the continent was a lead mine, in 1841. In Africa, lead mining and smelting were known in the Benue Trough and the lower Congo Basin, where lead was used for trade with Europeans, and as a currency by the 17th century, well before the scramble for Africa.
Industrial Revolution
In the second half of the 18th century, Britain, and later continental Europe and the United States, experienced the Industrial Revolution. This was the first time during which lead production rates exceeded those of Rome. Britain was the leading producer, losing this status by the mid-19th century with the depletion of its mines and the development of lead mining in Germany, Spain, and the United States. By 1900, the United States was the leader in global lead production, and other non-European nations—Canada, Mexico, and Australia—had begun significant production; production outside Europe exceeded that within. A great share of the demand for lead came from plumbing and painting—lead paints were in regular use. At this time, more working-class people were exposed to the metal, and lead poisoning cases escalated. This led to research into the effects of lead intake. Lead was proven to be more dangerous in its fume form than as a solid metal. Lead poisoning and gout were linked; British physician Alfred Baring Garrod noted a third of his gout patients were plumbers and painters. The effects of chronic ingestion of lead, including mental disorders, were also studied in the 19th century. The first laws aimed at decreasing lead poisoning in factories were enacted during the 1870s and 1880s in the United Kingdom.
Modern era
Further evidence of the threat that lead posed to humans was discovered in the late 19th and early 20th centuries. Mechanisms of harm were better understood, lead blindness was documented, and the element was phased out of public use in the United States and Europe. The United Kingdom introduced mandatory factory inspections in 1878 and appointed the first Medical Inspector of Factories in 1898; as a result, a 25-fold decrease in lead poisoning incidents from 1900 to 1944 was reported. Most European countries banned lead paint—commonly used because of its opacity and water resistance—for interiors by 1930.
The last major human exposure to lead was the addition of tetraethyllead to gasoline as an antiknock agent, a practice that originated in the United States in 1921. It was phased out in the United States and the European Union by 2000.
In the 1970s, the United States and Western European countries introduced legislation to reduce lead air pollution. The impact was significant: while a study conducted by the Centers for Disease Control and Prevention in the United States in 1976–1980 showed that 77.8% of the population had elevated blood lead levels, in 1991–1994, a study by the same institute showed the share of people with such high levels dropped to 2.2%. The main product made of lead by the end of the 20th century was the lead–acid battery.
From 1960 to 1990, lead output in the Western Bloc grew by about 31%. The share of the world's lead production by the Eastern Bloc increased from 10% to 30%, from 1950 to 1990, with the Soviet Union being the world's largest producer during the mid-1970s and the 1980s, and China starting major lead production in the late 20th century. Unlike the European communist countries, China was largely unindustrialized by the mid-20th century; in 2004, China surpassed Australia as the largest producer of lead. As was the case during European industrialization, lead has had a negative effect on health in China.
Production
As of 2014, production of lead is increasing worldwide due to its use in lead–acid batteries. There are two major categories of production: primary from mined ores, and secondary from scrap. In 2014, 4.58 million metric tons came from primary production and 5.64 million from secondary production. The top three producers of mined lead concentrate in that year were China, Australia, and the United States. The top three producers of refined lead were China, the United States, and India. According to the Metal Stocks in Society report of 2010, the total amount of lead in use, stockpiled, discarded, or dissipated into the environment, on a global basis, is 8 kg per capita. Much of this is in more developed countries (20–150 kg per capita) rather than less developed ones (1–4 kg per capita).
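For scale: with a world population of roughly 6.9 billion around the time of that report, 8 kg per capita corresponds to a global lead stock on the order of 5.5×10¹⁰ kg, about 55 million metric tons, or roughly five years' worth of the combined 2014 output quoted above.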
The primary and secondary lead production processes are similar. Some primary production plants now supplement their operations with scrap lead, and this trend is likely to increase in the future. Given adequate techniques, lead obtained via secondary processes is indistinguishable from lead obtained via primary processes. Scrap lead from the building trade is usually fairly clean and is re-melted without the need for smelting, though refining is sometimes needed. Secondary lead production is therefore cheaper, in terms of energy requirements, than is primary production, often by 50% or more.
Primary
Most lead ores contain a low percentage of lead (rich ores have a typical content of 3–8%) which must be concentrated for extraction. During initial processing, ores typically undergo crushing, dense-medium separation, grinding, froth flotation, and drying. The resulting concentrate, which has a lead content of 30–80% by mass (typically 50–60%), is then turned into (impure) lead metal.
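As a rough mass balance using illustrative figures from these ranges (and ignoring process losses), each metric ton of lead in a 5% ore is carried in 1/0.05 = 20 tons of mined rock, which flotation upgrades to about 1/0.55 ≈ 1.8 tons of 55% concentrate.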
There are two main ways of doing this: a two-stage process involving roasting followed by blast furnace extraction, carried out in separate vessels; or a direct process in which the extraction of the concentrate occurs in a single vessel. The latter has become the most common route, though the former is still significant.
Two-stage process
First, the sulfide concentrate is roasted in air to oxidize the lead sulfide:
2 PbS(s) + 3 O2(g) → 2 PbO(s) + 2 SO2(g)↑
As the original concentrate was not pure lead sulfide, roasting yields not only the desired lead(II) oxide, but a mixture of oxides, sulfates, and silicates of lead and of the other metals contained in the ore. This impure lead oxide is reduced in a coke-fired blast furnace to the (again, impure) metal:
2 PbO(s) + C(s) → 2 Pb(s) + CO2(g)↑
Impurities are mostly arsenic, antimony, bismuth, zinc, copper, silver, and gold. Typically they are removed in a series of pyrometallurgical processes. The melt is treated in a reverberatory furnace with air, steam, and sulfur, which oxidizes the impurities except for silver, gold, and bismuth. Oxidized contaminants float to the top of the melt and are skimmed off. Metallic silver and gold are removed and recovered economically by means of the Parkes process, in which zinc is added to lead. Zinc, which is immiscible with lead, dissolves the silver and gold. The zinc solution can be separated from the lead, and the silver and gold retrieved. De-silvered lead is freed of bismuth by the Betterton–Kroll process, treating it with metallic calcium and magnesium. The resulting bismuth dross can be skimmed off.
Alternatively to the pyrometallurgical processes, very pure lead can be obtained by processing smelted lead electrolytically using the Betts process. Anodes of impure lead and cathodes of pure lead are placed in an electrolyte of lead fluorosilicate (PbSiF6). Once electrical potential is applied, impure lead at the anode dissolves and plates onto the cathode, leaving the majority of the impurities in solution. This is a high-cost process and thus mostly reserved for refining bullion containing high percentages of impurities.
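The rate of deposition at the cathode follows Faraday's law of electrolysis. A minimal Python sketch, assuming for illustration a cell current of 100 kA (the constants are standard; two electrons are transferred per Pb2+ ion reduced):
  M = 207.2    # g/mol, molar mass of lead
  n = 2        # electrons per Pb2+ ion reduced at the cathode
  F = 96485    # C/mol, Faraday constant
  I = 100e3    # A, assumed illustrative cell current
  t = 3600     # s, one hour of electrolysis
  m = I * t * M / (n * F)                       # grams of lead plated in one hour
  print(f"~{m / 1e6:.2f} t of lead per hour")   # ~0.39 t/h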
Direct process
In this process, lead bullion and slag are obtained directly from lead concentrates. The lead sulfide concentrate is melted in a furnace and oxidized, forming lead monoxide. Carbon (as coke or coal gas) is added to the molten charge along with fluxing agents. The lead monoxide is thereby reduced to metallic lead, in the midst of a slag rich in lead monoxide.
If the input is rich in lead, as much as 80% of the original lead can be obtained as bullion; the remaining 20% forms a slag rich in lead monoxide. For a low-grade feed, all of the lead can be oxidized to a high-lead slag. Metallic lead is further obtained from the high-lead (25–40%) slags via submerged fuel combustion or injection, reduction assisted by an electric furnace, or a combination of both.
Alternatives
Research on a cleaner, less energy-intensive lead extraction process continues; a major drawback is that either too much lead is lost as waste, or the alternatives result in a high sulfur content in the resulting lead metal. Hydrometallurgical extraction, in which anodes of impure lead are immersed into an electrolyte and pure lead is deposited (electrowon) onto cathodes, is a technique that may have potential, but is not currently economical except in cases where electricity is very cheap.
Secondary
Smelting, which is an essential part of the primary production, is often skipped during secondary production. It is only performed when metallic lead has undergone significant oxidation. The process is similar to that of primary production in either a blast furnace or a rotary furnace, with the essential difference being the greater variability of yields: blast furnaces produce hard lead (10% antimony) while reverberatory and rotary kiln furnaces produce semisoft lead (3–4% antimony).
The ISASMELT process is a more recent smelting method that may act as an extension to primary production; battery paste from spent lead–acid batteries (containing lead sulfate and lead oxides) has its sulfate removed by treating it with alkali, and is then treated in a coal-fueled furnace in the presence of oxygen, which yields impure lead, with antimony the most common impurity. Refining of secondary lead is similar to that of primary lead; some refining processes may be skipped depending on the material recycled and its potential contamination.
Of the sources of lead for recycling, lead–acid batteries are the most important; lead pipe, sheet, and cable sheathing are also significant.
Applications
Contrary to popular belief, pencil leads in wooden pencils have never been made from lead. When the pencil originated as a wrapped graphite writing tool, the particular type of graphite used was named plumbago (literally, lead mockup).
Elemental form
Lead metal has several useful mechanical properties, including high density, low melting point, ductility, and relative inertness. Many metals are superior to lead in some of these aspects but are generally less common and more difficult to extract from parent ores. Lead's toxicity has led to its phasing out for some uses.
Lead has been used for bullets since their invention in the Middle Ages. It is inexpensive; its low melting point means small arms ammunition and shotgun pellets can be cast with minimal technical equipment; and it is denser than other common metals, which allows for better retention of velocity. It remains the main material for bullets, alloyed with other metals as hardeners. Concerns have been raised that lead bullets used for hunting can damage the environment. Shotgun cartridges used for waterfowl hunting must today be lead-free in the United States, Canada, and Europe.
Lead's high density and resistance to corrosion have been exploited in a number of related applications. It is used as ballast in sailboat keels; its density allows it to take up a small volume and minimize water resistance, thus counterbalancing the heeling effect of wind on the sails. It is used in scuba diving weight belts to counteract the diver's buoyancy. In 1993, the base of the Leaning Tower of Pisa was stabilized with 600 tonnes of lead. Because of its corrosion resistance, lead is used as a protective sheath for underwater cables.
Lead has many uses in the construction industry; lead sheets are used as architectural metals in roofing material, cladding, flashing, gutters and gutter joints, roof parapets. Lead is still used in statues and sculptures, including for armatures. In the past it was often used to balance the wheels of cars; for environmental reasons this use is being phased out in favor of other materials.
Lead is added to copper alloys, such as brass and bronze, to improve machinability and for its lubricating qualities. Being practically insoluble in copper, the lead forms solid globules in imperfections throughout the alloy, such as grain boundaries. In low concentrations, as well as acting as a lubricant, the globules hinder the formation of swarf as the alloy is worked, thereby improving machinability. Copper alloys with larger concentrations of lead are used in bearings. The lead provides lubrication, and the copper provides the load-bearing support.
Lead's high density, atomic number, and formability form the basis for the use of lead as a barrier that absorbs sound, vibration, and radiation. Lead has no natural resonance frequencies; as a result, sheet-lead is used as a sound-deadening layer in the walls, floors, and ceilings of sound studios. Organ pipes are often made from a lead alloy, mixed with various amounts of tin to control the tone of each pipe. Lead is an established radiation-shielding material in nuclear science and in X-ray rooms due to its density and high attenuation coefficient. Molten lead has been used as a coolant for lead-cooled fast reactors.
Batteries
The largest use of lead in the early 21st century is in lead–acid batteries. The lead in batteries undergoes no direct contact with humans, so there are fewer toxicity concerns. People who work in lead battery production or recycling plants may be exposed to lead dust and inhale it. The reactions in the battery between lead, lead dioxide, and sulfuric acid provide a reliable source of voltage. Supercapacitors incorporating lead–acid batteries have been installed in kilowatt and megawatt scale applications in Australia, Japan, and the United States in frequency regulation, solar smoothing and shifting, wind smoothing, and other applications. These batteries have lower energy density and charge-discharge efficiency than lithium-ion batteries, but are significantly cheaper.
Coating for cables
Lead is used in high voltage power cables as shell material to prevent water diffusion into insulation; this use is decreasing as lead is being phased out. Its use in solder for electronics is also being phased out by some countries to reduce the amount of environmentally hazardous waste. Lead is one of three metals used in the Oddy test for museum materials, helping detect organic acids, aldehydes, and acidic gases.
Compounds
In addition to being the main application for lead metal, lead–acid batteries are also the main consumer of lead compounds. The energy storage/release reaction used in these devices involves lead sulfate and lead dioxide:
Pb(s) + PbO2(s) + 2 H2SO4(aq) → 2 PbSO4(s) + 2 H2O(l)
Other applications of lead compounds are very specialized and often fading. Lead-based coloring agents are used in ceramic glazes and glass, especially for red and yellow shades. While lead paints are phased out in Europe and North America, they remain in use in less developed countries such as China, India, or Indonesia. Lead tetraacetate and lead dioxide are used as oxidizing agents in organic chemistry. Lead is frequently used in the polyvinyl chloride coating of electrical cords. It can be used to treat candle wicks to ensure a longer, more even burn. Because of its toxicity, European and North American manufacturers use alternatives such as zinc. Lead glass is composed of 12–28% lead oxide, changing its optical characteristics and reducing the transmission of ionizing radiation, a property used in old TVs and computer monitors with cathode-ray tubes. Lead-based semiconductors such as lead telluride and lead selenide are used in photovoltaic cells and infrared detectors.
Biological effects
Lead has no confirmed biological role, and there is no confirmed safe level of lead exposure. A 2009 Canadian–American study concluded that even at levels that are considered to pose little to no risk, lead may cause "adverse mental health outcomes". Its prevalence in the human body—at an adult average of 120 mg—is nevertheless exceeded only by zinc (2500 mg) and iron (4000 mg) among the heavy metals. Lead salts are very efficiently absorbed by the body. A small amount of lead (1%) is stored in bones; the rest is excreted in urine and feces within a few weeks of exposure. Only about a third of lead is excreted by a child. Continual exposure may result in the bioaccumulation of lead.
Toxicity
Lead is a highly poisonous metal (whether inhaled or swallowed), affecting almost every organ and system in the human body. At airborne levels of 100 mg/m3, it is immediately dangerous to life and health. Most ingested lead is absorbed into the bloodstream. The primary cause of its toxicity is its predilection for interfering with the proper functioning of enzymes. It does so by binding to the sulfhydryl groups found on many enzymes, or mimicking and displacing other metals which act as cofactors in many enzymatic reactions. The essential metals that lead interacts with include calcium, iron, and zinc. High levels of calcium and iron tend to provide some protection from lead poisoning; low levels cause increased susceptibility.
Effects
Lead can cause severe damage to the brain and kidneys and, ultimately, death. By mimicking calcium, lead can cross the blood–brain barrier. It degrades the myelin sheaths of neurons, reduces their numbers, interferes with neurotransmission routes, and decreases neuronal growth. In the human body, lead inhibits porphobilinogen synthase and ferrochelatase, preventing both porphobilinogen formation and the incorporation of iron into protoporphyrin IX, the final step in heme synthesis. This causes ineffective heme synthesis and microcytic anemia.
Symptoms of lead poisoning include nephropathy, colic-like abdominal pains, and possibly weakness in the fingers, wrists, or ankles. Small blood pressure increases, particularly in middle-aged and older people, may be apparent and can cause anemia. Several studies, mostly cross-sectional, found an association between increased lead exposure and decreased heart rate variability. In pregnant women, high levels of exposure to lead may cause miscarriage. Chronic, high-level exposure has been shown to reduce fertility in males.
In a child's developing brain, lead interferes with synapse formation in the cerebral cortex, neurochemical development (including that of neurotransmitters), and the organization of ion channels. Early childhood exposure has been linked with an increased risk of sleep disturbances and excessive daytime drowsiness in later childhood. High blood levels are associated with delayed puberty in girls. The rise and fall in exposure to airborne lead from the combustion of tetraethyl lead in gasoline during the 20th century has been linked with historical increases and decreases in crime levels.
Exposure sources
Lead exposure is a global issue since lead mining and smelting, and battery manufacturing, disposal, and recycling, are common in many countries. Lead enters the body via inhalation, ingestion, or skin absorption. Almost all inhaled lead is absorbed into the body; for ingestion, the rate is 20–70%, with children absorbing a higher percentage than adults.
Poisoning typically results from ingestion of food or water contaminated with lead, and less commonly after accidental ingestion of contaminated soil, dust, or lead-based paint. Seawater products can contain lead if affected by nearby industrial waters. Fruit and vegetables can be contaminated by high levels of lead in the soils they were grown in. Soil can be contaminated through particulate accumulation from lead in pipes, lead paint, residual emissions from leaded gasoline.
The use of lead for water pipes is a problem in areas with soft or acidic water. Hard water forms insoluble protective layers on the inner surface of the pipes, whereas soft and acidic water dissolves the lead pipes. Dissolved carbon dioxide in the carried water may result in the formation of soluble lead bicarbonate; oxygenated water may similarly dissolve lead as lead(II) hydroxide. Drinking such water, over time, can cause health problems due to the toxicity of the dissolved lead. The harder the water the more calcium bicarbonate and sulfate it contains, and the more the inside of the pipes are coated with a protective layer of lead carbonate or lead sulfate.
Ingestion of applied lead-based paint is the major source of exposure for children: a direct source is chewing on old painted window sills. Additionally, as lead paint on a surface deteriorates, it peels and is pulverized into dust. The dust then enters the body through hand-to-mouth contact or contaminated food or drink. Ingesting certain home remedies may result in exposure to lead or its compounds.
Inhalation is the second major exposure pathway, affecting smokers and especially workers in lead-related occupations. Cigarette smoke contains, among other toxic substances, radioactive lead-210. "As a result of EPA's regulatory efforts, levels of lead in the air [in the United States] decreased by 86 percent between 2010 and 2020." The concentration of lead in the air in the United States fell below the national standard of 0.15 μg/m3 in 2014.
Skin exposure may be significant for people working with organic lead compounds. The rate of skin absorption is lower for inorganic lead.
Lead in foods
Lead may be found in food when food is grown in soil that is high in lead, when airborne lead contaminates the crops, when animals eat lead in their diet, or when lead enters the food from the containers it was stored or cooked in. Ingestion of lead paint and batteries is also a route of exposure for livestock, which can subsequently affect humans. Milk produced by contaminated cattle can be diluted to a lower lead concentration and sold for consumption.
In Bangladesh, lead compounds have been added to turmeric to make it more yellow. This is believed to have started in the 1980s and continues. It is believed to be one of the main sources of high lead levels in the country. In Hong Kong, the maximum allowed lead level in food is 6 parts per million in solids and 1 part per million in liquids.
Lead-containing dust can settle on drying cocoa beans when they are set outside near polluting industrial plants. In December 2022, Consumer Reports tested 28 dark chocolate brands and found that 23 of them contained potentially harmful levels of lead, cadmium, or both. It urged the chocolate makers to reduce lead levels, which can be especially harmful to a developing fetus.
In March 2024, the US Food and Drug Administration recommended a voluntary recall of six brands of cinnamon due to contamination with lead, after 500 reports of child lead poisoning. The FDA determined that the cinnamon was adulterated with lead chromate.
Lead in plastic toys
According to the United States Centers for Disease Control and Prevention, the use of lead in plastics has not been banned as of 2024. Lead softens the plastic and makes it more flexible so that it can return to its original shape. Habitual chewing on colored plastic insulation from stripped electrical wires was found to cause elevated lead levels in a 46-year-old man. Lead may be used in plastic toys to stabilize the molecules against heat. Lead dust can form when plastic is exposed to sunlight, air, and detergents that break down the chemical bond between the lead and the plastic.
Treatment
Treatment for lead poisoning normally involves the administration of dimercaprol and succimer. Acute cases may require the use of disodium calcium edetate, the calcium chelate of the disodium salt of ethylenediaminetetraacetic acid (EDTA). It has a greater affinity for lead than for calcium, with the result that the lead chelate is formed by exchange and excreted in the urine, leaving behind harmless calcium.
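The underlying exchange can be written as:
[Ca(EDTA)]2− + Pb2+ → [Pb(EDTA)]2− + Ca2+
with the equilibrium lying far to the right because the lead complex is the more stable of the two.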
Environmental effects
The extraction, production, use, and disposal of lead and its products have caused significant contamination of the Earth's soils and waters. Atmospheric emissions of lead were at their peak during the Industrial Revolution, and the leaded gasoline period in the second half of the twentieth century.
Lead releases originate from natural sources (i.e., concentration of the naturally occurring lead), industrial production, incineration and recycling, and mobilization of previously buried lead. In particular, as lead has been phased out from other uses, lead recycling operations in the Global South, designed to extract cheap lead for global manufacturing, have become a well-documented source of exposure. Elevated concentrations of lead persist in soils and sediments in post-industrial and urban areas; industrial emissions, including those arising from coal burning, continue in many parts of the world, particularly in developing countries.
Lead can accumulate in soils, especially those with a high organic content, where it remains for hundreds to thousands of years. Environmental lead can compete with other metals found in and on plant surfaces, potentially inhibiting photosynthesis and, at high enough concentrations, negatively affecting plant growth and survival. Contamination of soils and plants can allow lead to ascend the food chain, affecting microorganisms and animals. In animals, lead exhibits toxicity in many organs, damaging the nervous, renal, reproductive, hematopoietic, and cardiovascular systems after ingestion, inhalation, or skin absorption. Fish take up lead from both water and sediment; bioaccumulation in the food chain poses a hazard to fish, birds, and sea mammals.
Anthropogenic lead includes lead from shot and sinkers. These are among the most potent sources of lead contamination along with lead production sites. Lead was banned for shot and sinkers in the United States in 2017, although that ban was only effective for a month, and a similar ban is being considered in the European Union.
Analytical methods for the determination of lead in the environment include spectrophotometry, X-ray fluorescence, atomic spectroscopy, and electrochemical methods. A specific ion-selective electrode has been developed based on the ionophore S,S′-methylenebis(N,N-diisobutyldithiocarbamate). An important biomarker assay for lead poisoning is δ-aminolevulinic acid levels in plasma, serum, and urine.
Restriction and remediation
By the mid-1980s, there was significant decline in the use of lead in industry. In the United States, environmental regulations reduced or eliminated the use of lead in non-battery products, including gasoline, paints, solders, and water systems. Particulate control devices were installed in coal-fired power plants to capture lead emissions. In 1992, U.S. Congress required the Environmental Protection Agency to reduce the blood lead levels of the country's children. Lead use was further curtailed by the European Union's 2003 Restriction of Hazardous Substances Directive. A large drop in lead deposition occurred in the Netherlands after the 1993 national ban on use of lead shot for hunting and sport shooting: from 230 tonnes in 1990 to 47.5 tonnes in 1995. The usage of lead in Avgas 100LL for general aviation is allowed in the EU as of 2022.
In the United States, the permissible exposure limit for lead in the workplace, comprising metallic lead, inorganic lead compounds, and lead soaps, was set at 50 μg/m3 over an 8-hour workday, and the blood lead level limit at 5 μg per 100 g of blood in 2012. Lead may still be found in harmful quantities in stoneware, vinyl (such as that used for tubing and the insulation of electrical cords), and Chinese brass. Old houses may still contain lead paint. White lead paint has been withdrawn from sale in industrialized countries, but specialized uses of other pigments such as yellow lead chromate remain, especially in road pavement marking paint.
Stripping old paint by sanding produces dust which can be inhaled. Lead abatement programs have been mandated by some authorities in properties where young children live. The usage of lead in Avgas 100LL for general aviation is generally allowed in United States as of 2023.
Lead waste, depending on the jurisdiction and the nature of the waste, may be treated as household waste (to facilitate lead abatement activities) or as potentially hazardous waste requiring specialized treatment or storage. Lead is released into the environment at shooting ranges, and a number of lead management practices have been developed to counter the contamination. Lead migration can be enhanced in acidic soils; to counter that, it is advised that soils be treated with lime to neutralize them and prevent leaching of lead.
Research has been conducted on how to remove lead from biosystems by biological means: fish bones are being researched for their ability to bioremediate lead in contaminated soil. The fungus Aspergillus versicolor is effective at absorbing lead ions from industrial waste before it is released into water bodies. Several bacteria have been researched for their ability to remove lead from the environment, including the sulfate-reducing bacteria Desulfovibrio and Desulfotomaculum, both of which are highly effective in aqueous solutions. The millet grass Urochloa ramosa can accumulate significant amounts of metals such as lead and zinc in its shoot and root tissues, making it an important plant for remediation of contaminated soils.
Limestone
Limestone (calcium carbonate, CaCO3) is a type of carbonate sedimentary rock which is the main source of the material lime. It is composed mostly of the minerals calcite and aragonite, which are different crystal forms of CaCO3. Limestone forms when these minerals precipitate out of water containing dissolved calcium. This can take place through both biological and nonbiological processes, though biological processes, such as the accumulation of corals and shells in the sea, have likely been more important for the last 540 million years. Limestone often contains fossils which provide scientists with information on ancient environments and on the evolution of life.
About 20% to 25% of sedimentary rock is carbonate rock, and most of this is limestone. The remaining carbonate rock is mostly dolomite, a closely related rock, which contains a high percentage of the mineral dolomite, CaMg(CO3)2. Magnesian limestone is an obsolete and poorly-defined term used variously for dolomite, for limestone containing significant dolomite (dolomitic limestone), or for any other limestone containing a significant percentage of magnesium. Most limestone was formed in shallow marine environments, such as continental shelves or platforms, though smaller amounts were formed in many other environments. Much dolomite is secondary dolomite, formed by chemical alteration of limestone. Limestone is exposed over large regions of the Earth's surface, and because limestone is slightly soluble in rainwater, these exposures often are eroded to become karst landscapes. Most cave systems are found in limestone bedrock.
Limestone has numerous uses: as a chemical feedstock for the production of lime used for cement (an essential component of concrete), as aggregate for the base of roads, as white pigment or filler in products such as toothpaste or paint, as a soil conditioner, and as a popular decorative addition to rock gardens. Limestone formations contain about 30% of the world's petroleum reservoirs.
Description
Limestone is composed mostly of the minerals calcite and aragonite, which are different crystal forms of calcium carbonate (CaCO3). Dolomite, CaMg(CO3)2, is an uncommon mineral in limestone, and siderite or other carbonate minerals are rare. However, the calcite in limestone often contains a few percent of magnesium. Calcite in limestone is divided into low-magnesium and high-magnesium calcite, with the dividing line placed at a composition of 4% magnesium. High-magnesium calcite retains the calcite mineral structure, which is distinct from dolomite. Aragonite does not usually contain significant magnesium. Most limestone is otherwise chemically fairly pure, with clastic sediments (mainly fine-grained quartz and clay minerals) making up less than 5% to 10% of the composition. Organic matter typically makes up around 0.2% of a limestone and rarely exceeds 1%.
Limestone often contains variable amounts of silica in the form of chert or siliceous skeletal fragments (such as sponge spicules, diatoms, or radiolarians). Fossils are also common in limestone.
Limestone is commonly white to gray in color. Limestone that is unusually rich in organic matter can be almost black in color, while traces of iron or manganese can give limestone an off-white to yellow to red color. The density of limestone depends on its porosity, which varies from 0.1% for the densest limestone to 40% for chalk. The density correspondingly ranges from 1.5 to 2.7 g/cm3. Although relatively soft, with a Mohs hardness of 2 to 4, dense limestone can have a crushing strength of up to 180 MPa. For comparison, concrete typically has a crushing strength of about 40 MPa.
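The density figures follow from a simple mixing rule: for a dry rock, bulk density ≈ (1 − φ) × 2.71 g/cm3, where φ is the fractional porosity and 2.71 g/cm3 is the density of calcite. A chalk with φ = 0.4 thus comes out near 1.6 g/cm3, while a dense limestone with φ near zero approaches the full calcite value.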
Although limestones show little variability in mineral composition, they show great diversity in texture. However, most limestone consists of sand-sized grains in a carbonate mud matrix. Because limestones are often of biological origin and are usually composed of sediment that is deposited close to where it formed, classification of limestone is usually based on its grain type and mud content.
Grains
Most grains in limestone are skeletal fragments of marine organisms such as coral or foraminifera. These organisms secrete structures made of aragonite or calcite, and leave these structures behind when they die. Other carbonate grains composing limestones are ooids, peloids, and limeclasts (intraclasts and extraclasts).
Skeletal grains have a composition reflecting the organisms that produced them and the environment in which they were produced. Low-magnesium calcite skeletal grains are typical of articulate brachiopods, planktonic (free-floating) foraminifera, and coccoliths. High-magnesium calcite skeletal grains are typical of benthic (bottom-dwelling) foraminifera, echinoderms, and coralline algae. Aragonite skeletal grains are typical of molluscs, calcareous green algae, stromatoporoids, corals, and tube worms. The skeletal grains also reflect specific geological periods and environments. For example, coral grains are more common in high-energy environments (characterized by strong currents and turbulence) while bryozoan grains are more common in low-energy environments (characterized by quiet water).
Ooids (sometimes called ooliths) are sand-sized grains (less than 2 mm in diameter) consisting of one or more layers of calcite or aragonite around a central quartz grain or carbonate mineral fragment. These likely form by direct precipitation of calcium carbonate onto the ooid. Pisoliths are similar to ooids, but they are larger than 2 mm in diameter and tend to be more irregular in shape. Limestone composed mostly of ooids is called an oolite or sometimes an oolitic limestone. Ooids form in high-energy environments, such as the Bahama platform, and oolites typically show crossbedding and other features associated with deposition in strong currents.
Oncoliths resemble ooids but show a radial rather than layered internal structure, indicating that they were formed by algae in a normal marine environment.
Peloids are structureless grains of microcrystalline carbonate likely produced by a variety of processes. Many are thought to be fecal pellets produced by marine organisms. Others may be produced by endolithic (boring) algae or other microorganisms or through breakdown of mollusc shells. They are difficult to see in a limestone sample except in thin section and are less common in ancient limestones, possibly because compaction of carbonate sediments disrupts them.
Limeclasts are fragments of existing limestone or partially lithified carbonate sediments. Intraclasts are limeclasts that originate close to where they are deposited in limestone, while extraclasts come from outside the depositional area. Intraclasts include grapestone, which consists of clusters of peloids cemented together by organic material or mineral cement. Extraclasts are uncommon, are usually accompanied by other clastic sediments, and indicate deposition in a tectonically active area or as part of a turbidity current.
Mud
The grains of most limestones are embedded in a matrix of carbonate mud. This is typically the largest fraction of an ancient carbonate rock. Mud consisting of individual crystals just a few micrometres or less in length is described as micrite. In fresh carbonate mud, micrite is mostly small aragonite needles, which may precipitate directly from seawater, be secreted by algae, or be produced by abrasion of carbonate grains in a high-energy environment. The aragonite is converted to calcite within a few million years of deposition. Further recrystallization of micrite produces microspar, with somewhat coarser grains.
Limestone often contains larger crystals of calcite, described as sparry calcite or sparite. Sparite is distinguished from micrite by its coarser grain size and because it stands out under a hand lens or in thin section as white or transparent crystals. It is distinguished from carbonate grains by its lack of internal structure and its characteristic crystal shapes.
Geologists are careful to distinguish between sparite deposited as cement and sparite formed by recrystallization of micrite or carbonate grains. Sparite cement fills pore space between grains, and its abundance suggests a high-energy depositional environment that winnowed away carbonate mud before cementation. Recrystallized sparite is not diagnostic of depositional environment.
Other characteristics
Limestone outcrops are recognized in the field by their softness (calcite and aragonite both have a Mohs hardness of less than 4, well below common silicate minerals) and because limestone bubbles vigorously when a drop of dilute hydrochloric acid is dropped on it. Dolomite is also soft but reacts only feebly with dilute hydrochloric acid, and it usually weathers to a characteristic dull yellow-brown color as its ferrous iron is released and oxidized. Impurities (such as clay, sand, organic remains, iron oxide, and other materials) cause limestones to exhibit different colors, especially on weathered surfaces.
The makeup of a carbonate rock outcrop can be estimated in the field by etching the surface with dilute hydrochloric acid. This etches away the calcite and aragonite, leaving behind any silica or dolomite grains. The latter can be identified by their rhombohedral shape.
Crystals of calcite, quartz, dolomite or barite may line small cavities (vugs) in the rock. Vugs are a form of secondary porosity, formed in existing limestone by a change in environment that increases the solubility of calcite.
Dense, massive limestone is sometimes described as "marble". For example, the famous Portoro "marble" of Italy is actually a dense black limestone. True marble is produced by recrystallization of limestone during regional metamorphism that accompanies the mountain building process (orogeny). It is distinguished from dense limestone by its coarse crystalline texture and the formation of distinctive minerals from the silica and clay present in the original limestone.
Classification
Two major classification schemes, the Folk and Dunham, are used for identifying the types of carbonate rocks collectively known as limestone.
Folk classification
Robert L. Folk developed a classification system that places primary emphasis on the detailed composition of grains and interstitial material in carbonate rocks. Based on composition, there are three main components: allochems (grains), matrix (mostly micrite), and cement (sparite). The Folk system uses two-part names; the first refers to the grains and the second to the interstitial material. For example, a limestone consisting mainly of ooids cemented by sparry calcite would be termed an oosparite. A petrographic microscope is helpful when using the Folk scheme, because it makes the components of each sample easier to determine.
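The two-part logic of Folk names can be sketched in a few lines of code. This is an illustration only; the helper function and its simplified categories are hypothetical, and the real scheme also weighs grain percentages and mixed allochem types:

```python
# Sketch of Folk two-part naming: allochem prefix + interstitial suffix.
# Simplified illustration, not the complete Folk scheme.

ALLOCHEM_PREFIX = {
    "ooids": "oo",
    "skeletal grains": "bio",
    "peloids": "pel",
    "intraclasts": "intra",
}

def folk_name(dominant_allochem: str, interstitial: str) -> str:
    """Combine the dominant grain type with the interstitial material."""
    prefix = ALLOCHEM_PREFIX[dominant_allochem]
    if interstitial not in ("sparite", "micrite"):
        raise ValueError("interstitial must be 'sparite' or 'micrite'")
    return prefix + interstitial

print(folk_name("ooids", "sparite"))            # oosparite
print(folk_name("skeletal grains", "micrite"))  # biomicrite
```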
Dunham classification
Robert J. Dunham published his system for limestone in 1962. It focuses on the depositional fabric of carbonate rocks. Dunham divides the rocks into four main groups based on the relative proportions of coarser clastic particles and carbonate mud, using criteria such as whether the grains were originally in mutual contact, and therefore self-supporting, or whether the rock is characterized by the presence of frame builders and algal mats. Unlike the Folk scheme, Dunham deals with the original porosity of the rock. The Dunham scheme is more useful for hand samples because it is based on texture rather than the identity of the grains in the sample.
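The Dunham groups follow a short decision tree, which a hedged sketch makes explicit (the boolean inputs are an idealization; in practice the texture is judged from the rock itself, and organically bound fabrics form the additional boundstone category):

```python
# Simplified Dunham decision tree for carbonate rocks.
# Idealized sketch: real classification is by visual inspection of fabric.

def dunham_name(bound_during_deposition: bool,
                grain_supported: bool,
                contains_mud: bool,
                grains_exceed_10_percent: bool) -> str:
    if bound_during_deposition:
        return "boundstone"    # frame builders, algal mats
    if grain_supported:
        # grains in mutual contact and self-supporting
        return "packstone" if contains_mud else "grainstone"
    # mud-supported fabric
    return "wackestone" if grains_exceed_10_percent else "mudstone"

print(dunham_name(False, True, False, True))   # grainstone
print(dunham_name(False, False, True, False))  # mudstone
```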
A revised classification was proposed by Wright (1992). It adds some diagenetic patterns to the classification scheme.
Other descriptive terms
Travertine is a term applied to calcium carbonate deposits formed in freshwater environments, particularly waterfalls, cascades and hot springs. Such deposits are typically massive, dense, and banded. When the deposits are highly porous, so that they have a spongelike texture, they are typically described as tufa. Secondary calcite deposited by supersaturated meteoric waters (groundwater) in caves is also sometimes described as travertine. This produces speleothems, such as stalagmites and stalactites.
Coquina is a poorly consolidated limestone composed of abraded pieces of coral, shells, or other fossil debris. When better consolidated, it is described as coquinite.
Chalk is a soft, earthy, fine-textured limestone composed of the tests of planktonic microorganisms such as foraminifera, while marl is an earthy mixture of carbonates and silicate sediments.
Formation
Limestone forms when calcite or aragonite precipitates out of water containing dissolved calcium, which can take place through both biological and nonbiological processes. The solubility of calcium carbonate (CaCO₃) is controlled largely by the amount of dissolved carbon dioxide (CO₂) in the water. This is summarized in the reaction:

CaCO₃ + H₂O + CO₂ ⇌ Ca²⁺ + 2 HCO₃⁻
Increases in temperature or decreases in pressure tend to reduce the amount of dissolved CO₂ and to precipitate CaCO₃. Reduction in salinity also reduces the solubility of CaCO₃, by several orders of magnitude for fresh water versus seawater.
Near-surface water of the earth's oceans is oversaturated with CaCO₃ by a factor of more than six. The failure of CaCO₃ to rapidly precipitate out of these waters is likely due to interference by dissolved magnesium ions with nucleation of calcite crystals, the necessary first step in precipitation. Precipitation of aragonite may be suppressed by the presence of naturally occurring organic phosphates in the water. Although ooids likely form through purely inorganic processes, the bulk of CaCO₃ precipitation in the oceans is the result of biological activity. Much of this takes place on carbonate platforms.
The origin of carbonate mud, and the processes by which it is converted to micrite, continue to be a subject of research. Modern carbonate mud is composed mostly of aragonite needles a few micrometres in length. Needles of this shape and composition are produced by calcareous algae such as Penicillus, making this a plausible source of mud. Another possibility is direct precipitation from the water. A phenomenon known as whitings occurs in shallow waters, in which white streaks containing dispersed micrite appear on the surface of the water. It is uncertain whether this is freshly precipitated aragonite or simply material stirred up from the bottom, but there is some evidence that whitings are caused by biological precipitation of aragonite as part of a bloom of cyanobacteria or microalgae. However, stable isotope ratios in modern carbonate mud appear to be inconsistent with either of these mechanisms, and abrasion of carbonate grains in high-energy environments has been put forward as a third possibility.
Formation of limestone has likely been dominated by biological processes throughout the Phanerozoic, the last 540 million years of the Earth's history. Limestone may have been deposited by microorganisms in the Precambrian, prior to 540 million years ago, but inorganic processes were probably more important and likely took place in an ocean more highly oversaturated in calcium carbonate than the modern ocean.
Diagenesis
Diagenesis is the process in which sediments are compacted and turned into solid rock. During diagenesis of carbonate sediments, significant chemical and textural changes take place. For example, aragonite is converted to low-magnesium calcite. Diagenesis is the likely origin of pisoliths, concentrically layered particles, typically more than 2 mm in diameter, found in some limestones. Pisoliths superficially resemble ooids but have no nucleus of foreign matter, fit together tightly, and show other signs that they formed after the original deposition of the sediments.
Silicification occurs early in diagenesis, at low pH and temperature, and contributes to fossil preservation. It takes place through the dissolution of calcium carbonate paired with the precipitation of silica, as in the reactions:

CaCO₃ + H₂O + CO₂ → Ca²⁺ + 2 HCO₃⁻
H₄SiO₄ → SiO₂ + 2 H₂O

Fossils are often preserved in exquisite detail as chert.
Cementing takes place rapidly in carbonate sediments, typically within less than a million years of deposition. Some cementing occurs while the sediments are still under water, forming hardgrounds. Cementing accelerates after the retreat of the sea from the depositional environment, as rainwater infiltrates the sediment beds, often within just a few thousand years. As rainwater mixes with groundwater, aragonite and high-magnesium calcite are converted to low-magnesium calcite. Cementing of thick carbonate deposits by rainwater may commence even before the retreat of the sea, as rainwater can infiltrate a considerable distance into sediments beneath the continental shelf.
As carbonate sediments are increasingly deeply buried under younger sediments, chemical and mechanical compaction of the sediments increases. Chemical compaction takes place by pressure solution of the sediments. This process dissolves minerals from points of contact between grains and redeposits the material in pore space, reducing the porosity of the limestone from an initial high value of 40% to 80% down to less than 10%. Pressure solution produces distinctive stylolites, irregular surfaces within the limestone at which silica-rich sediments accumulate. These may reflect dissolution and loss of a considerable fraction of the limestone bed. At greater burial depths, burial cementation completes the lithification process. Burial cementation does not produce stylolites.
When overlying beds are eroded, bringing limestone closer to the surface, the final stage of diagenesis takes place. This produces secondary porosity as some of the cement is dissolved by rainwater infiltrating the beds. This may include the formation of vugs, which are crystal-lined cavities within the limestone.
Diagenesis may include conversion of limestone to dolomite by magnesium-rich fluids. There is considerable evidence of replacement of limestone by dolomite, including sharp replacement boundaries that cut across bedding. The process of dolomitization remains an area of active research, but possible mechanisms include exposure to concentrated brines in hot environments (evaporative reflux) or exposure to diluted seawater in delta or estuary environments (Dorag dolomitization). However, Dorag dolomitization has fallen into disfavor as a mechanism for dolomitization, with one 2004 review paper describing it bluntly as "a myth". Ordinary seawater is capable of converting calcite to dolomite, if the seawater is regularly flushed through the rock, as by the ebb and flow of tides (tidal pumping). Once dolomitization begins, it proceeds rapidly, so that there is very little carbonate rock containing mixed calcite and dolomite. Carbonate rock tends to be either almost all calcite/aragonite or almost all dolomite.
Occurrence
About 20% to 25% of sedimentary rock is carbonate rock, and most of this is limestone. Limestone is found in sedimentary sequences as old as 2.7 billion years. However, the compositions of carbonate rocks show an uneven distribution in time in the geologic record. About 95% of modern carbonates are composed of high-magnesium calcite and aragonite. The aragonite needles in carbonate mud are converted to low-magnesium calcite within a few million years, as this is the most stable form of calcium carbonate. Ancient carbonate formations of the Precambrian and Paleozoic contain abundant dolomite, but limestone dominates the carbonate beds of the Mesozoic and Cenozoic. Modern dolomite is quite rare. There is evidence that, while the modern ocean favors precipitation of aragonite, the oceans of much of the Paleozoic and the middle to late Mesozoic favored precipitation of calcite. This may indicate a lower Mg/Ca ratio in the ocean water of those times. This magnesium depletion may be a consequence of more rapid sea floor spreading, which removes magnesium from ocean water. The modern ocean and the ocean of the late Paleozoic to early Mesozoic have been described as "aragonite seas".
Most limestone was formed in shallow marine environments, such as continental shelves or platforms. Such environments make up only about 5% of the ocean basins, yet they account for most preserved limestone, because limestone is rarely preserved in continental slope and deep sea environments. The best environments for deposition are warm waters, which have both a high organic productivity and increased saturation of calcium carbonate due to lower concentrations of dissolved carbon dioxide. Modern limestone deposits are almost always in areas with very little silica-rich sedimentation, reflected in the relative purity of most limestones. Reef organisms are destroyed by muddy, brackish river water, and carbonate grains are ground down by much harder silicate grains. Unlike clastic sedimentary rock, limestone is produced almost entirely from sediments originating at or near the place of deposition.
Limestone formations tend to show abrupt changes in thickness. Large moundlike features in a limestone formation are interpreted as ancient reefs, which when they appear in the geologic record are called bioherms. Many are rich in fossils, but most lack any connected organic framework like that seen in modern reefs. The fossil remains are present as separate fragments embedded in an ample mud matrix. Much of the sedimentation shows indications of occurring in the intertidal or supratidal zones, suggesting that sediments rapidly fill the available accommodation space in the shelf or platform. Deposition is also favored on the seaward margin of shelves and platforms, where upwelling deep ocean water rich in nutrients increases organic productivity. Reefs are common here, but where they are lacking, ooid shoals are found instead. Finer sediments are deposited close to shore.
The lack of deep sea limestones is due in part to rapid subduction of oceanic crust, but is more a result of dissolution of calcium carbonate at depth. The solubility of calcium carbonate increases with pressure, and even more with higher concentrations of carbon dioxide, which is produced by decaying organic matter settling into the deep ocean that is not removed by photosynthesis in the dark depths. As a result, there is a fairly sharp transition from water saturated with calcium carbonate to water unsaturated with calcium carbonate, the lysocline, which occurs near the calcite compensation depth of several thousand meters. Below this depth, foraminifera tests and other skeletal particles rapidly dissolve, and the sediments of the ocean floor abruptly transition from carbonate ooze rich in foraminifera and coccolith remains (Globigerina ooze) to silicic mud lacking carbonates.
In rare cases, turbidites or other silica-rich sediments bury and preserve benthic (deep ocean) carbonate deposits. Ancient benthic limestones are microcrystalline and are identified by their tectonic setting. Fossils typically are foraminifera and coccoliths. No pre-Jurassic benthic limestones are known, probably because carbonate-shelled plankton had not yet evolved.
Limestones also form in freshwater environments. These limestones are not unlike marine limestone, but have a lower diversity of organisms and a greater fraction of silica and clay minerals characteristic of marls. The Green River Formation is an example of a prominent freshwater sedimentary formation containing numerous limestone beds. Freshwater limestone is typically micritic. Fossils of charophyte (stonewort), a form of freshwater green algae, are characteristic of these environments, where the charophytes produce and trap carbonates.
Limestones may also form in evaporite depositional environments. Calcite is one of the first minerals to precipitate in marine evaporites.
Limestone and living organisms
Most limestone is formed by the activities of living organisms near reefs, but the organisms responsible for reef formation have changed over geologic time. For example, stromatolites are mound-shaped structures in ancient limestones, interpreted as colonies of cyanobacteria that accumulated carbonate sediments, but stromatolites are rare in younger limestones. Organisms precipitate limestone both directly as part of their skeletons, and indirectly by removing carbon dioxide from the water by photosynthesis and thereby decreasing the solubility of calcium carbonate.
Limestone shows the same range of sedimentary structures found in other sedimentary rocks. However, finer structures, such as lamination, are often destroyed by the burrowing activities of organisms (bioturbation). Fine lamination is characteristic of limestone formed in playa lakes, which lack the burrowing organisms. Limestones also show distinctive features such as geopetal structures, which form when curved shells settle to the bottom with the concave face downwards. This traps a void space that can later be filled by sparite. Geologists use geopetal structures to determine which direction was up at the time of deposition, which is not always obvious with highly deformed limestone formations.
The cyanobacterium Hyella balani can bore through limestone, as can the green alga Eugamantia sacculata and the fungus Ostracolaba implexa.
Micritic mud mounds
Micritic mud mounds are subcircular domes of micritic calcite that lack internal structure. Modern examples are up to several hundred meters thick and a kilometer across, and have steep slopes (with slope angles of around 50 degrees). They may be composed of peloids swept together by currents and stabilized by Thalassia grass or mangroves. Bryozoa may also contribute to mound formation by helping to trap sediments.
Mud mounds are found throughout the geologic record, and prior to the early Ordovician, they were the dominant reef type in both deep and shallow water. These mud mounds likely are microbial in origin. Following the appearance of frame-building reef organisms, mud mounds were restricted mainly to deeper water.
Organic reefs
Organic reefs form at low latitudes in shallow water, not more than a few meters deep. They are complex, diverse structures found throughout the fossil record. The frame-building organisms responsible for organic reef formation are characteristic of different geologic time periods: archaeocyathids appeared in the early Cambrian; these gave way to sponges by the late Cambrian; later successions included stromatoporoids, corals, algae, bryozoa, and rudists (a form of bivalve mollusc). The extent of organic reefs has varied over geologic time; they were likely most extensive in the middle Devonian, when they covered an area roughly ten times that of modern reefs. The Devonian reefs were constructed largely by stromatoporoids and tabulate corals, which were devastated by the late Devonian extinction.
Organic reefs typically have a complex internal structure. Whole body fossils are usually abundant, but ooids and intraclasts are rare within the reef. The core of a reef is typically massive and unbedded, and is surrounded by a talus that is greater in volume than the core. The talus contains abundant intraclasts and is usually either floatstone, with 10% or more of grains over 2 mm in size embedded in abundant matrix, or rudstone, which is mostly large grains with sparse matrix. The talus grades to planktonic fine-grained carbonate mud, then to noncarbonate mud away from the reef.
Limestone landscape
Limestone is partially soluble, especially in acid, and therefore forms many erosional landforms. These include limestone pavements, potholes, cenotes, caves and gorges. Such eroded landscapes are known as karst. Limestone is less resistant to erosion than most igneous rocks, but more resistant than most other sedimentary rocks. It is therefore usually associated with hills and downland, and occurs in regions with other sedimentary rocks, typically clays.
Karst regions overlying limestone bedrock tend to have few visible above-ground water sources (ponds and streams), as surface water easily drains downward through joints in the limestone. While draining, water and organic acid from the soil slowly (over thousands or millions of years) enlarge these cracks, dissolving the calcium carbonate and carrying it away in solution. Most cave systems form in limestone bedrock. Cooling groundwater or mixing of different groundwaters will also create conditions suitable for cave formation.
Coastal limestones are often eroded by organisms which bore into the rock by various means. This process is known as bioerosion. It is most common in the tropics, and it is known throughout the fossil record.
Bands of limestone emerge from the Earth's surface in often spectacular rocky outcrops and islands. Examples include the Rock of Gibraltar; the Burren in County Clare, Ireland; Malham Cove in North Yorkshire and the Isle of Wight, England; the Great Orme in Wales; Fårö near the Swedish island of Gotland; the Niagara Escarpment in Canada and the United States; Notch Peak in Utah; Ha Long Bay National Park in Vietnam; and the hills around the Lijiang River and Guilin city in China.
The Florida Keys, islands off the south coast of Florida, are composed mainly of oolitic limestone (the Lower Keys) and the carbonate skeletons of coral reefs (the Upper Keys), which thrived in the area during interglacial periods when sea level was higher than at present.
Unique habitats are found on alvars, extremely level expanses of limestone with thin soil mantles. The largest such expanse in Europe is the Stora Alvaret on the island of Öland, Sweden. Another area with large quantities of limestone is the island of Gotland, Sweden. Huge quarries in northwestern Europe, such as those of Mount Saint Peter (Belgium/Netherlands), extend for more than a hundred kilometers.
Uses
Limestone is a raw material used globally in construction and agriculture and as an industrial material. It is very common in architecture, especially in Europe and North America. Many landmarks across the world, including the Great Pyramid and its associated complex in Giza, Egypt, were made of limestone. So many buildings in Kingston, Ontario, Canada were, and continue to be, constructed from it that it is nicknamed the 'Limestone City'. Limestone, metamorphosed by heat and pressure, produces marble, which has been used for many statues, buildings and stone tabletops. On the island of Malta, a variety of limestone called Globigerina limestone was, for a long time, the only building material available, and is still very frequently used on all types of buildings and sculptures.
Limestone can be processed into many forms, such as brick, cement, powdered or crushed stone, and filler. It is readily available and relatively easy to cut into blocks or carve into more elaborate shapes. Ancient American sculptors valued limestone because it was easy to work and good for fine detail. Going back to the Late Preclassic period (by 200–100 BCE), the Maya civilization of ancient Mexico created refined sculpture using limestone because of these excellent carving properties. The Maya decorated the lintels of their sacred buildings and covered the walls with carved limestone panels. Carved on these sculptures were political and social stories, which helped communicate the messages of the king to his people. Limestone is long-lasting and stands up well to exposure, which explains why many limestone ruins survive. However, it is very heavy (with a density of about 2.6 g/cm3), making it impractical for tall buildings, and relatively expensive as a building material.
Limestone was most popular as a building material in the late 19th and early 20th centuries. Railway stations, banks and other structures from that era were made of limestone in some areas. It is used as a façade on some skyscrapers, but only in thin plates for covering, rather than solid blocks. In the United States, Indiana, most notably the Bloomington area, has long been a source of high-quality quarried limestone, called Indiana limestone. Many famous buildings in London are built from Portland limestone. Houses built in Odesa in Ukraine in the 19th century were mostly constructed from limestone, and the extensive remains of the mines now form the Odesa Catacombs.
Limestone was also a very popular building block in the Middle Ages in the areas where it occurred, since it is hard, durable, and commonly occurs in easily accessible surface exposures. Many medieval churches and castles in Europe are made of limestone. Beer stone was a popular kind of limestone for medieval buildings in southern England.
Limestone is the raw material for the production of lime, used primarily for treating soils, purifying water and smelting copper. Lime is an important ingredient in the chemical industry. Limestone and (to a lesser extent) marble are reactive to acid solutions, making acid rain a significant problem for the preservation of artifacts made from these stones. Many limestone statues and building surfaces have suffered severe damage due to acid rain. Likewise, limestone gravel has been used to protect lakes vulnerable to acid rain, acting as a pH buffering agent. Acid-based cleaning chemicals can also etch limestone, so it should only be cleaned with a neutral or mildly alkaline cleaner.
Other uses include:
It is the raw material for the manufacture of quicklime (calcium oxide), slaked lime (calcium hydroxide), cement and mortar.
Pulverized limestone is used as a soil conditioner to neutralize acidic soils (agricultural lime).
Crushed for use as aggregate—the solid base for many roads as well as in asphalt concrete.
As a reagent in flue-gas desulfurization, where it reacts with sulfur dioxide for air pollution control.
In glass making, particularly in the manufacture of soda–lime glass.
As an additive in toothpaste, paper, plastics, paint, tiles, and other materials, as both white pigment and a cheap filler.
As rock dust, to suppress methane explosions in underground coal mines.
Purified, it is added to bread and cereals as a source of calcium.
As a calcium supplement in livestock feed, such as for poultry (when ground up).
For remineralizing and increasing the alkalinity of purified water to prevent pipe corrosion and to restore essential nutrient levels.
In blast furnaces, limestone binds with silica and other impurities to remove them from the iron.
It can aid in the removal of toxic components from the emissions of coal-burning plants and of impurities from molten metals.
Many limestone formations are porous and permeable, which makes them important petroleum reservoirs. About 20% of North American hydrocarbon reserves are found in carbonate rock. Carbonate reservoirs are very common in the petroleum-rich Middle East, and carbonate reservoirs hold about a third of all petroleum reserves worldwide. Limestone formations are also common sources of metal ores, because their porosity and permeability, together with their chemical activity, promotes ore deposition in the limestone. The lead-zinc deposits of Missouri and the Northwest Territories are examples of ore deposits hosted in limestone.
Scarcity
Limestone is a major industrial raw material that is in constant demand, and it has been essential in the iron and steel industry since the nineteenth century. Supply has never actually run short, but shortages have become a concern as demand continues to increase. The major potential threats to supply in the nineteenth century were regional availability and accessibility; the two main accessibility issues were transportation and property rights. Other problems were high capital costs for plants and facilities due to environmental regulations and the requirement for zoning and mining permits. These factors led industries to select and develop alternative materials that suited economic demands.
Limestone was at one point classified as a critical raw material, and the potential risk of shortages drove industries to find alternative materials and technological systems. As production of replacement substances (minette ore is a common substitute, for example) increased, limestone was no longer classified as critical.
Occupational safety and health
Powdered limestone as a food additive is generally recognized as safe and limestone is not regarded as a hazardous material. However, limestone dust can be a mild respiratory and skin irritant, and dust that gets into the eyes can cause corneal abrasions. Because limestone contains small amounts of silica, inhalation of limestone dust could potentially lead to silicosis or cancer.
United States
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for limestone exposure in the workplace at 15 mg/m3 total exposure and 5 mg/m3 respirable exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respirable exposure over an 8-hour workday.
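Both limits are expressed as 8-hour time-weighted averages (TWAs), so short peaks can comply if the shift average stays under the limit. A minimal sketch of the standard TWA arithmetic, using hypothetical sample data:

```python
# 8-hour time-weighted average (TWA) for dust exposure.
# Sample concentrations and durations below are hypothetical.

def twa_8hr(samples: list[tuple[float, float]]) -> float:
    """samples: (concentration in mg/m3, duration in hours) pairs.
    Unsampled time in the 8-hour shift counts as zero exposure."""
    return sum(c * t for c, t in samples) / 8.0

shift = [(20.0, 3.0), (4.0, 5.0)]   # 3 h at 20 mg/m3, 5 h at 4 mg/m3
twa = twa_8hr(shift)
print(f"TWA = {twa:.1f} mg/m3")     # TWA = 10.0 mg/m3
print("Within a 15 mg/m3 total-dust limit:", twa <= 15.0)
```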
Graffiti
Removing graffiti from weathered limestone is difficult because it is a porous and permeable material. The surface is fragile, so the usual abrasion methods risk severe surface loss. Because limestone is an acid-sensitive stone, some cleaning agents cannot be used due to their adverse effects.
| Physical sciences | Petrology | null |
17839 | https://en.wikipedia.org/wiki/London%20Underground | London Underground | The London Underground (also known simply as the Underground or as the Tube) is a rapid transit system serving Greater London and some parts of the adjacent home counties of Buckinghamshire, Essex and Hertfordshire in England.
The Underground has its origins in the Metropolitan Railway, which opened on 10 January 1863 as the world's first underground passenger railway. The Metropolitan is now part of the Circle, District, Hammersmith & City and Metropolitan lines. The first line to operate underground electric traction trains, the City & South London Railway in 1890, is now part of the Northern line.
The network has expanded to 11 lines with about 250 miles (400 km) of track. However, the Underground does not cover most southern parts of Greater London; there are only 33 Underground stations south of the River Thames. The system's 272 stations collectively accommodate up to 5 million passenger journeys a day. In 2023/24 it was used for 1.181 billion passenger journeys.
The system's first tunnels were built just below the ground, using the cut-and-cover method; later, smaller, roughly circular tunnels—which gave rise to its nickname, the Tube—were dug through the London clay at a deeper level. Despite its name, only 45% of the system is under the ground: much of the network in the outer environs of London is on the surface.
The early tube lines, originally owned by several private companies, were brought together under the Underground brand in the early 20th century, and eventually merged along with the sub-surface lines and bus services in 1933 to form London Transport under the control of the London Passenger Transport Board (LPTB). The current operator, London Underground Limited (LUL), is a wholly owned subsidiary of Transport for London (TfL), the statutory corporation responsible for the transport network in London. As of recent figures, 92% of operational expenditure is covered by passenger fares. The Travelcard ticket was introduced in 1983 and the Oyster card, a contactless ticketing system, in 2003. Contactless bank card payments were introduced in 2014, the first such use on a public transport system.
The LPTB commissioned many new station buildings, posters and public artworks in a modernist style. The schematic Tube map, designed by Harry Beck in 1931, was voted a national design icon in 2006 and now includes other transport systems besides the Underground, such as the Docklands Light Railway, London Overground, Thameslink, the Elizabeth line, and Tramlink. Other famous London Underground branding includes the roundel and the Johnston typeface, created by Edward Johnston in 1916.
History
Early years
Sub-surface lines
The idea of an underground railway linking the City of London with the urban centre was proposed in the 1830s, and the Metropolitan Railway was granted permission to build such a line in 1854. To prepare for construction, a short test tunnel was built in 1855 in Kibblesworth, a small town with geological properties similar to London's. This test tunnel was used for two years in the development of the first underground train, and was later, in 1861, filled in. The world's first underground railway opened in January 1863 between Paddington and Farringdon using gas-lit wooden carriages hauled by steam locomotives. It was hailed as a success, carrying 38,000 passengers on the opening day and borrowing trains from other railways to supplement the service. The Metropolitan District Railway (commonly known as the District Railway) opened in December 1868 from South Kensington to Westminster as part of a plan for an underground "inner circle" connecting London's main-line stations. The Metropolitan and District railways completed the Circle line in 1884, built using the cut-and-cover method. Both railways expanded, the District building five branches to the west reaching Ealing, Hounslow, Uxbridge, Richmond and Wimbledon, and the Metropolitan eventually extending as far as Verney Junction in Buckinghamshire, more than 50 miles (80 km) from Baker Street and the centre of London.
Deep-level lines
For the first deep-level tube line, the City and South London Railway, two circular tunnels, about 3.1 m (10 ft 2 in) in diameter, were dug between King William Street (close to today's Monument station) and Stockwell, under the roads to avoid the need for agreement with owners of property on the surface. This opened in 1890 with electric locomotives that hauled carriages with small opaque windows, nicknamed padded cells. The Waterloo and City Railway opened in 1898, followed by the Central London Railway in 1900, known as the "twopenny tube". These two ran electric trains in circular tunnels of slightly larger diameter, whereas the Great Northern and City Railway, which opened in 1904, was built to take main line trains from Finsbury Park to a Moorgate terminus in the City and had larger-diameter tunnels still.
While steam locomotives were in use on the Underground, there were contrasting health reports. There were many instances of passengers collapsing whilst travelling, due to heat and pollution, leading to calls to clean the air through the installation of garden plants. The Metropolitan even encouraged beards for staff to act as air filters. There were other reports claiming beneficial outcomes of using the Underground, including the designation of Great Portland Street as a "sanatorium for [sufferers of ...] asthma and bronchial complaints", claims that tonsillitis could be cured with acid gas, and claims that the Twopenny Tube cured anorexia.
Electrification
With the advent of electric Tube services (the Waterloo and City Railway and the Great Northern and City Railway), the Volks Electric Railway, in Brighton, and competition from electric trams, the pioneering Underground companies needed modernising. In the early 20th century, the District and Metropolitan railways needed to electrify and a joint committee recommended an AC system, the two companies co-operating because of the shared ownership of the inner circle. The District, needing to raise the finance necessary, found an investor in the American Charles Yerkes who favoured a DC system similar to that in use on the City & South London and Central London railways. The Metropolitan Railway protested about the change of plan, but after arbitration by the Board of Trade, the DC system was adopted.
Underground Electric Railways Company era
Yerkes soon had control of the District Railway and established the Underground Electric Railways Company of London (UERL) in 1902 to finance and operate three tube lines: the Baker Street and Waterloo Railway (Bakerloo), the Charing Cross, Euston and Hampstead Railway (Hampstead) and the Great Northern, Piccadilly and Brompton Railway (Piccadilly), all of which opened between 1906 and 1907. When the "Bakerloo" was so named in July 1906, The Railway Magazine called it an undignified "gutter title". By 1907 the District and Metropolitan Railways had electrified the underground sections of their lines.
In January 1913, the UERL acquired the Central London Railway and the City & South London Railway, as well as many of London's bus and tram operators. Only the Metropolitan Railway, along with its subsidiaries the Great Northern & City Railway and the East London Railway, and the Waterloo & City Railway, by then owned by the main line London and South Western Railway, remained outside the Underground Group's control.
A joint marketing agreement between most of the companies in the early years of the 20th century included maps, joint publicity, through ticketing and UNDERGROUND signs, incorporating the first bullseye symbol, outside stations in Central London. At the time, the term Underground was selected over other proposed names; 'Tube' and 'Electric' were both officially rejected, though the term Tube was later adopted alongside Underground. The Bakerloo line was extended north to Queen's Park to join a new electric line from Euston to Watford, but the First World War delayed construction and trains reached Watford Junction in 1917. During air raids in 1915 people used the tube stations as shelters. An extension of the Central line west to Ealing was also delayed by the war and was completed in 1920. After the war, government-backed financial guarantees were used to expand the network, and the tunnels of the City and South London and Hampstead railways were linked at Euston and Kennington; the combined service was not named the Northern line until later. The Metropolitan promoted housing estates near the railway with the "Metro-land" brand and nine housing estates were built near stations on the line. Electrification was extended north from Harrow to Rickmansworth, and branches opened from Rickmansworth to Watford in 1925 and from Wembley Park to Stanmore in 1932. The Piccadilly line was extended north to Cockfosters and took over District line branches to Harrow (later Uxbridge) and Hounslow.
London Passenger Transport Board era
In 1933, most of London's underground railways, tramway and bus services were merged to form the London Passenger Transport Board, which used the London Transport brand. The Waterloo & City Railway, which was by then in the ownership of the main line Southern Railway, remained with its existing owners. In the same year that the London Passenger Transport Board was formed, Harry Beck's diagrammatic tube map first appeared.
In the following years, the outlying lines of the former Metropolitan Railway closed: the Brill Tramway in 1935, and the line from Quainton Road to Verney Junction in 1936. The 1935–40 New Works Programme included the extension of the Central and Northern lines and the Bakerloo line's takeover of the Metropolitan's Stanmore branch. The Second World War suspended these plans after the Bakerloo line had reached Stanmore and the Northern line had reached High Barnet and Mill Hill East in 1941. Following bombing in 1940, passenger services over the West London line were suspended, leaving Olympia exhibition centre without a railway service until a District line shuttle from Earl's Court began after the war. After work restarted on the Central line extensions in east and west London, these were completed in 1949.
During the war many tube stations were used as air-raid shelters. They were not always a guarantee of safety, however; on 11 January 1941, during the London Blitz, a bomb penetrated the booking hall of Bank station, and the blast killed 56 people, many of whom were sleeping in passageways and on platforms. On 3 March 1943, a test of the air-raid warning sirens, together with the firing of a new type of anti-aircraft rocket, resulted in a crush of people attempting to take shelter in Bethnal Green Underground station. A total of 173 people, including 62 children, died, making this both the worst civilian disaster in Britain during the Second World War, and the largest loss of life in a single incident on the London Underground network.
London Transport Executive and Board era
On 1 January 1948, under the provisions of the Transport Act 1947, the London Passenger Transport Board was nationalised and renamed the London Transport Executive, becoming a subsidiary transport organisation of the British Transport Commission, which was formed on the same day. Under the same act, the country's main line railways were also nationalised, and their reconstruction was given priority over the maintenance of the Underground and most of the unfinished plans of the pre-war New Works Programme were shelved or postponed.
The District line needed new trains, and an unpainted aluminium train entered service in 1953, becoming the standard for new trains. In the early 1960s, the Metropolitan line was electrified as far as Amersham, with British Railways providing services for the former Metropolitan line stations between Amersham and Aylesbury. In 1962, the British Transport Commission was abolished, and the London Transport Executive was renamed the London Transport Board, reporting directly to the Minister of Transport. Also during the 1960s, the Victoria line was dug under central London and, unlike the earlier tunnels, did not follow the roads above. The line opened in 1968–71; its trains were driven automatically, and magnetically encoded tickets collected by automatic gates gave access to the platforms.
Greater London Council era
On 1 January 1970, responsibility for public transport within Greater London passed from central government to local government, in the form of the Greater London Council (GLC), and the London Transport Board was abolished. The London Transport brand continued to be used by the GLC.
On 28 February 1975, a southbound train on the Northern City Line failed to stop at its Moorgate terminus and crashed into the wall at the end of the tunnel, in the Moorgate tube crash. There were 43 deaths and 74 injuries, the greatest loss of life during peacetime on the London Underground. In 1976, the Northern City Line was taken over by British Rail and linked up with the main line railway at Finsbury Park, a transfer that had already been planned prior to the accident.
In 1979, another new tube, the Jubilee line, named in honour of the Silver Jubilee of Elizabeth II, took over the Stanmore branch from the Bakerloo line, linking it to a newly constructed line between Baker Street and Charing Cross stations. Under the control of the GLC, London Transport introduced a system of fare zones for buses and underground trains that cut the average fare in 1981. Fares increased following a legal challenge but the fare zones were retained, and in the mid-1980s the Travelcard and the Capitalcard were introduced.
London Regional Transport era
In 1984, control of London Buses and the London Underground passed back to central government with the creation of London Regional Transport (LRT), which reported directly to the Secretary of State for Transport while still retaining the London Transport brand. One-person operation had been planned in 1968, but conflict with the trade unions delayed its introduction until the 1980s.
On 18 November 1987, fire broke out in an escalator at King's Cross St Pancras tube station. The resulting fire cost the lives of 31 people and injured a further 100. London Underground was strongly criticised in the aftermath for its attitude to fires underground, and publication of the report into the fire led to the resignation of senior management of both London Underground and London Regional Transport. Following the fire, substantial improvements to safety on the Tube were implemented – including the banning of smoking, removal of wooden escalators, installation of CCTV and fire detectors, as well as comprehensive radio coverage for the emergency services.
In April 1994, the Waterloo & City Railway, by then owned by British Rail and known as the Waterloo & City line, was transferred to the London Underground. In 1999, the Jubilee Line Extension project extended the Jubilee line from Green Park station through the growing Docklands to Stratford station. This resulted in the closure of the short section of tunnel between Green Park and Charing Cross stations. The 11 new stations were designed to be "future-proof", with wide passageways, numerous escalators and lifts, and emergency exits. The stations were the first on the Underground to have platform edge doors, and were built to have step-free access throughout. They have subsequently been praised as exemplary pieces of 20th-century architecture.
Transport for London era
In 2000, Transport for London (TfL) was created as an integrated body responsible for London's transport system. Part of the Greater London Authority, the TfL Board is appointed by the Mayor of London, who also sets the structure and level of public transport fares in London. The day-to-day running of the corporation is left to the Commissioner of Transport for London.
TfL eventually replaced London Regional Transport, and discontinued the use of the London Transport brand in favour of its own brand. The transfer of responsibility was staged, with transfer of control of London Underground delayed until July 2003, when London Underground Limited became an indirect subsidiary of TfL.
In the early 2000s, London Underground was reorganised in a public–private partnership (PPP) as part of a project to upgrade and modernise the system. Private infrastructure companies (infracos) would upgrade and maintain the railway, and London Underground would run the train service. One infraco, Metronet, went into administration in 2007, and TfL took over the other, Tube Lines, in 2010. Despite this, substantial investment to upgrade and modernise the Tube has taken place, with new trains (such as London Underground S7 and S8 Stock), new signalling, upgraded stations (such as King's Cross St Pancras) and improved accessibility (such as at Green Park). Small changes to the Tube network occurred in the 2000s, with an extension to Heathrow Terminal 5, a new station at Wood Lane, and the Circle line changed in 2009 from serving a closed loop around the centre of London to a spiral also serving Hammersmith.
In July 2005, four coordinated terrorist attacks took place, three of them occurring on the Tube network. It was the UK's deadliest terrorist incident since 1988.
Electronic ticketing in the form of the contactless Oyster card was first introduced in 2003, with payment using contactless bank cards introduced in September 2014. In recent years, over 12 million Oyster cards and 35 million contactless cards have been used annually, generating around £5 billion in ticketing revenue.
During the London 2012 Olympic and Paralympic Games, the Underground saw record passenger numbers, with over 4.3 million people using the Tube on some days. This record was subsequently beaten, with 4.82 million passengers on one day in December 2015. In 2013, the Underground celebrated its 150th anniversary with events such as steam train runs and the installation of a unique Labyrinth artwork at each station.
Under TfL, London's public transport network became more unified, with existing suburban rail lines across London upgraded and rebranded as London Overground from 2007, with the former East London line becoming part of the Overground network in 2010. Many Overground stations interchange with Underground ones, and Overground lines were added onto the Tube map.
In the 2010s, the £18.8 billion Crossrail project built a new east–west railway tunnel under central London. The project involved rebuilding and expanding several central Underground stations, including Tottenham Court Road and Whitechapel. By increasing rail capacity, the line aims to reduce overcrowding on the Tube and cut cross-London journey times. The railway opened as the Elizabeth line in May 2022. Although not part of the Underground, the line connects with several Underground stations.
In 2020, passenger numbers fell significantly during the COVID-19 pandemic and 40 stations were temporarily closed. The Northern Line Extension opened in September 2021, extending the Northern line from Kennington to Battersea Power Station via Nine Elms. The extension was privately funded, with contributions from developments across the Battersea Power Station, Vauxhall and Nine Elms areas.
Infrastructure
Railway
As of 2021, the Underground serves 272 stations. Sixteen stations (eight on each of the Metropolitan and Central lines) are outside the London region, with five of those beyond the M25 London Orbital motorway (Amersham, Chalfont & Latimer, Chesham, and Chorleywood on the Metropolitan line and Epping on the Central).
Of the thirty-two London boroughs, six (Bexley, Bromley, Croydon, Kingston, Lewisham and Sutton) are not served by the Underground network, while Hackney has Old Street (on the Northern line Bank branch) and Manor House (on the Piccadilly line) just inside its boundaries. Lewisham was served by the East London line (with stations at New Cross and New Cross Gate) until 2010 when the line and the stations were transferred to the London Overground network.
London Underground's eleven lines total about 250 miles (400 km) in length, making it the eleventh longest metro system in the world. These are made up of the sub-surface network and the deep-tube lines.
The Circle, District, Hammersmith & City, and Metropolitan lines form the sub-surface network, with cut-and-cover railway tunnels just below the surface and of a similar size to those on British main lines. They converge on a bi-directional loop in central London, sharing tracks and stations with each other at various places along their respective routes.
The Bakerloo, Central, Jubilee, Northern, Piccadilly, Victoria and Waterloo & City lines are deep-level tubes, with smaller trains that run in circular tunnels (tubes) with a diameter of about 3.6 m (11 ft 8 in), with one tube for each direction. The seven deep-level lines have the exclusive use of tracks and stations along their routes, with the exceptions of the Piccadilly line, which shares track with the District line between Acton Town and Hanger Lane Junction and with the Metropolitan line between Rayners Lane and Uxbridge, and the Bakerloo line, which shares track with London Overground's Watford DC Line for its above-ground section north of Queen's Park.
Fifty-five per cent of the system runs on the surface. There are about 20 miles (32 km) of cut-and-cover sub-surface tunnel and about 93 miles (150 km) of tube tunnel. Many of the central London Underground stations on deep-level tube routes are higher than the running lines, to assist deceleration when arriving and acceleration when departing. Trains generally run on the left-hand track. In some places, the tunnels are above each other (for example, the Central line east of St Paul's station); or trains run on the right (for example on the Victoria line between Warren Street and King's Cross St. Pancras, to allow cross-platform interchange with the Northern line at Euston).
The lines are electrified with a four-rail DC system: a conductor rail between the running rails is energised at −210 V and a rail outside the running rails at +420 V, giving a potential difference of 630 V. On the sections of line shared with main-line trains, such as the District line from East Putney to Wimbledon and Gunnersbury to Richmond, and the Bakerloo line north of Queen's Park, the centre rail is bonded to the running rails.
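The rail voltages combine as a simple difference (the specific values are standard for the four-rail system, though they are filled in here from general knowledge rather than the source):

```latex
% Traction voltage seen across the two conductor rails:
V_{\text{traction}} = V_{\text{outer}} - V_{\text{centre}}
                    = (+420\ \text{V}) - (-210\ \text{V})
                    = 630\ \text{V}
```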
The average speed on the Underground is about 33 km/h (20.5 mph). Outside the tunnels of central London, trains on many lines travel at over 64 km/h (40 mph) in the suburban and countryside areas. The Metropolitan line can reach speeds of over 100 km/h (62 mph).
Lines
The London Underground was used for 1.181 billion journeys in the year 2023/24.
Services using former and current main lines
The Underground uses several railways and alignments that were built by main-line railway companies.
Bakerloo line
Between Queen's Park and Harrow & Wealdstone, the Bakerloo line runs over the Watford DC Line, also used by London Overground, alongside the London & North Western Railway (LNWR) main line that opened in 1837. The route was laid out by the LNWR in 1912–15 and is part of the Network Rail system.
Central line
The railway from just south of Leyton to just south of Loughton was built by the Eastern Counties Railway in 1856 on the same alignment in use today. The Underground also uses the line built in 1865 by the Great Eastern Railway (GER) between Loughton and Ongar via Epping. The connection to the main line south of Leyton was closed in 1970 and removed in 1972. The line from Epping to Ongar was closed in 1994; most of it is in use today by the heritage Epping Ongar Railway. The line between Newbury Park and Woodford junction (west of Roding Valley) via Hainault was built by the GER in 1903; the connections to the main line south of Newbury Park closed in 1947 (in the Ilford direction) and 1956 (in the Seven Kings direction).
Central line
The line from just north of White City to Ealing Broadway was built in 1917 by the Great Western Railway (GWR), with passenger service introduced by the Underground in 1920. North Acton to West Ruislip was built by the GWR on behalf of the Underground in 1947–8, alongside the pre-existing tracks running west from Old Oak Common junction, which date from 1904. The original 1904 route from Old Oak Common junction still carries one main-line train a day to and from Paddington.
District line
South of Kensington (Olympia), short sections of the 1862 West London Railway (WLR) and its 1863 West London Extension Railway (WLER) were used when the District extended from Earl's Court in 1872. The District had its own bay platform at Olympia, built in 1958 along with track on the bed of the 1862–3 WLR/WLER northbound line. The southbound WLR/WLER became the new northbound main line at that time, and a new southbound main-line track was built through the site of a former goods yard. The 1872 junction closed in 1958, and a further connection to the WLR just south of Olympia closed in 1992. The branch is now segregated.
The line between Campbell Road junction (now closed), near Bromley-by-Bow, and Barking was built by the London, Tilbury & Southend Railway (LTSR) in 1858. The slow tracks were built in 1903–05, when District services were extended from Bow Road (though there were no District services east of East Ham from 1905 to 1932). The slow tracks were shared with LTSR stopping and goods trains until they were segregated by 1962, when main-line trains stopped serving the intermediate stations.
The railway from Barking to Upminster was built by the LTSR in 1885, and the District extended over the route in 1902. District services were withdrawn between 1905 and 1932, when the route was quadrupled. Main-line trains ceased serving the intermediate stations in 1962, and the District line today uses only the 1932 slow tracks.
The westbound track from east of Ravenscourt Park to Turnham Green, and the line from Turnham Green to Richmond (also used by London Overground), follow the alignment of a railway built by the London & South Western Railway (LSWR) in 1869. The eastbound track between Turnham Green and east of Ravenscourt Park follows the alignment built in 1911; this was closed in 1916 but re-used when the Piccadilly line was extended in 1932. The section between Turnham Green and Richmond still belongs to Network Rail.
The line between East Putney and Wimbledon was built by the LSWR in 1889. The last scheduled main-line service ran in 1941, but the route still sees a few through Waterloo passenger services at the start and end of the daily timetable. It is also used for scheduled empty coaching stock (ECS) movements to and from Wimbledon Park depot and for Waterloo services diverted during disruptions and track closures elsewhere. This section is now owned by London Underground, but the signalling is still operated by Network Rail.
Hammersmith & City line
Between Paddington and Westbourne Park Underground station, the line runs alongside the main line. The Great Western main line opened in 1838, serving a temporary terminus on the other side of Bishop's Road. When the current Paddington station opened in 1854, the line passed to the south of the old station. On opening in 1864, the Hammersmith & City Railway (then part of the Metropolitan Railway) ran via the main line to a junction at Westbourne Park, until 1867, when two tracks opened to the south of the main line, with a crossing near Westbourne Bridge, Paddington. The current two tracks to the north of the main line and the subway east of Westbourne Park opened in 1878. The Hammersmith & City route is now completely segregated from the main line.
Jubilee line
The rail route between Canning Town and Stratford was built by the GER in 1846, with passenger services starting in 1847. The original alignment was quadrupled "in stages between 1860 and 1892" for freight services before the extra (western) tracks were lifted as traffic declined during the 20th century, and were re-laid for Jubilee line services that started in 1999. The current Docklands Light Railway (ex-North London line) uses the original eastern alignment and the Jubilee uses the western alignment.
Northern line
The line from East Finchley to Mill Hill East was opened in 1867, and from Finchley Central to High Barnet in 1872, both by the Great Northern Railway.
Piccadilly line
The westbound track between east of Ravenscourt Park and Turnham Green was built by LSWR in 1869, and originally used for eastbound main-line and District services. The eastbound track was built in 1911; it closed in 1916 but was re-used when the Piccadilly line was extended in 1932.
Main line services using LU tracks
Chiltern Railways shares track with the Metropolitan line between Harrow-on-the-Hill and Amersham. Three South Western Railway passenger trains a day use District line tracks between Wimbledon and East Putney.
Trains
London Underground trains come in two sizes: larger sub-surface trains and smaller deep-tube trains. Since the early 1960s all passenger trains have been electric multiple units with sliding doors; a train last ran with a guard in 2000. All lines use fixed-length trains of between six and eight cars, except the Waterloo & City line, which uses four-car trains. New trains are designed to maximise the number of standing passengers and the speed of access to the cars, and have regenerative braking and public address systems. Since 1999 all new stock has had to comply with accessibility regulations that govern such things as access and room for wheelchairs and the size and location of door controls. All Underground trains were required to comply with The Rail Vehicle Accessibility (Non Interoperable Rail System) Regulations 2010 (RVAR 2010) by 2020.
Stock on sub-surface lines is identified by a letter (such as S Stock, used on the Metropolitan line), while tube stock is identified by the year of intended introduction (for example, 1996 Stock, used on the Jubilee line).
Depots
The Underground is served by the following depots:
Bakerloo: Stonebridge Park, Queen's Park, London Road
Central: Hainault, Ruislip, White City
Circle: Hammersmith
District: Ealing Common, Lillie Bridge, Upminster
Hammersmith & City: Hammersmith
Jubilee: Neasden, Stratford Market
Metropolitan: Neasden
Northern: Edgware, Golders Green, Highgate, Morden
Piccadilly: Cockfosters, Northfields
Victoria: Northumberland Park
Waterloo & City: Waterloo
London Underground: Acton Works
Disused and abandoned stations
In the years since the first parts of the London Underground opened, many stations and routes have been closed. Some stations were closed because of low passenger numbers rendering them uneconomical; some became redundant after lines were re-routed or replacements were constructed; and others are no longer served by the Underground but remain open to National Rail main line services. In some cases, such as Aldwych and Ongar, the buildings remain and are used for other purposes. In others, such as British Museum, all evidence of the station has been lost through demolition.
London Transport Museum runs guided tours of several disused stations including Down Street and Aldwych through its "Hidden London" programme. The tours look at the history of the network and feature historical details drawn from the museum's own archives and collections.
Proposed line extensions
Bakerloo line extension to Lewisham
A southern extension of the Bakerloo line from Elephant & Castle has been proposed multiple times since the line opened. In the 2010s, consultation events and preliminary design work took place on an extension. A route from Elephant & Castle to Lewisham via the Old Kent Road was chosen by Transport for London in 2019, and the line could be extended further along the Hayes National Rail line in future. Estimated to cost between £4.7 billion and £7.9 billion (at 2017 prices), the extension would take around seven years to construct. Owing to the financial impact of the COVID-19 pandemic, work to implement the extension is currently on hold.
Other proposed extensions and lines
Several other extensions have been proposed in recent years, including a further extension of the Northern line to Clapham Junction. The long-proposed Croxley Rail Link (an extension of the Metropolitan line) was cancelled in 2018 because of higher-than-expected costs and a lack of funding.
In 2019, the Canary Wharf Group suggested the construction of a new rail line between Euston and Canary Wharf, to improve connections to the future High Speed 2 railway.
In 2021, Harlow District Council proposed extending the Central line from its eastern terminus in Epping to Harlow. They argued this would reduce travel times to Epping and London, and help with efforts to add 19,000 new homes to the town and expand the population to 130,000. However, no funding has been allocated for this proposed extension.
Line improvements
Bakerloo line
The thirty-six 1972-stock trains on the Bakerloo line have already exceeded their original design life of 40 years. London Underground is therefore extending their operational life by making major repairs to many of the trains to maintain reliability. The Bakerloo line will receive new trains as part of the New Tube for London project. This will replace the existing fleet with new air-cooled articulated trains and a new signalling system to allow Automatic Train Operation. The line is predicted to run a maximum of 27 trains per hour, a 25% increase on the current 21 trains per hour during peak periods.
Central line
The Central line was the first line to be modernised in the 1990s, with 85 new 1992-stock trains and a new automatic signalling system installed to allow Automatic Train Operation. The line runs 34 trains per hour for half an hour in the morning peak but is unable to operate more frequently because of a lack of additional trains. The 85 existing 1992-stock trains are the most unreliable on the London Underground as they are equipped with the first generation of solid-state direct-current thyristor-control traction equipment. The trains often break down, have to be withdrawn from service at short notice and at times are not available when required, leading to gaps in service at peak times. Although relatively modern and well within their design life, the trains need work in the medium term to ensure the continued reliability of the traction control equipment and maintain fleet serviceability until renewal, which is expected between 2028 and 2032. Major work is to be undertaken on the fleet to ensure their continued reliability with brakes, traction control systems, doors, automatic control systems being repaired or replaced, among other components. The Central line will be part of the New Tube for London Project. This will replace the existing fleet with new air-cooled walkthrough trains and a new automatic signalling system. The line is predicted to run 36 trains per hour, a 25% increase compared to the present service of 34 trains for the busiest 30 minutes in the morning and evening peaks and 27–30 trains per hour during the rest of the peak.
Jubilee line
The signalling system on the Jubilee line has been replaced to increase capacity on the line by 20%—the line now runs 30 trains per hour at peak times, compared to the previous 24 trains per hour. As with the Victoria line, the service frequency is planned to increase to 36 trains per hour. To enable this, ventilation, power supply and control and signalling systems will be adapted and modified to allow the increase in frequency. London Underground also plans to add up to an additional 18 trains to the current fleet of 63 trains of 1996 stock.
Northern line
The signalling system on the Northern line has been replaced to increase capacity on the line by 20%; the line now runs 24 trains per hour at peak times, compared to 20 previously. Capacity can be increased further if the operation of the Charing Cross and Bank branches is separated. To enable this, up to 50 additional trains will be built to supplement the current fleet of 106 1995-stock trains: five for the Northern line extension and 45 to increase frequencies on the rest of the line. This, combined with segregation of trains at Camden Town junction, will allow 30–36 trains per hour, compared to 24 trains per hour currently.
Piccadilly line
The eighty-six 1973 stock trains that operate on the Piccadilly line are some of the most reliable trains on the London Underground. The trains have exceeded their design life of around 40 years and are in need of replacement. The Piccadilly line will be part of the New Tube for London Project. This will replace the existing fleet with new air-cooled walk-through trains and a new signalling system to allow Automatic Train Operation. The line is predicted to run 30–36 trains per hour, up to a 50% increase compared to the 24–25 train per hour service provided today. The line will be the first to be upgraded as part of the New Tube for London Project, as passenger numbers have increased over recent years and are expected to increase further. This line is important in this project because it currently provides a less frequent service than other lines.
Victoria line
The signalling system on the Victoria line has been replaced to increase capacity on the line by around 25%; the line now runs up to 36 trains per hour, compared to 27–28 previously. The trains have been replaced with 47 new higher-capacity 2009-stock trains. The peak frequency was increased to 36 trains per hour in 2016 after work was completed on the layout of the points at the Walthamstow Central crossover, which transfers northbound trains to the southbound line for their return journey. This resulted in a 40% increase in capacity between Seven Sisters and Walthamstow Central.
Waterloo & City line
The line was upgraded with five new 1992-stock trains in the early 1990s, at the same time as the Central line was upgraded. The line operates under traditional signalling and does not use Automatic Train Operation. The line will be part of the New Tube for London Project. This will replace the existing fleet with new air-cooled walk-through trains and a new signalling system to allow Automatic Train Operation. The line is predicted to run 30 trains per hour, an increase of up to 50% on the current 21 trains per hour. The line may also be one of the first to be upgraded, alongside the Piccadilly line, with new trains, systems and platform-edge doors to test the systems before the Central and Bakerloo lines are upgraded.
Sub-surface lines (District, Metropolitan, Hammersmith & City and Circle)
New S Stock trains have been introduced on the sub-surface (District, Metropolitan, Hammersmith & City and Circle) lines. These were all delivered by 2017. 191 trains have been introduced: 58 for the Metropolitan line and 133 for the Circle, District and Hammersmith & City lines. The track, electrical supply and signalling systems are also being upgraded in a programme to increase peak-hour capacity. The replacement of the signalling system and the introduction of Automatic Train Operation and Control is scheduled for 2019–22. A control room for the sub-surface network has been built in Hammersmith and an automatic train control (ATC) system is to replace ageing signalling equipment dating from between the mid-1920s and late 1980s, including the signal cabin at Edgware Road, the control room at Earl's Court, and the signalling centre at Baker Street. Bombardier won the contract in June 2011 but was released by agreement in December 2013, and London Underground has now issued another signalling contract, with Thales.
New trains for deep-level lines
In mid-2014, Transport for London issued a tender for up to 18 trains for the Jubilee line and up to 50 trains for the Northern line. These would be used to increase frequencies and cover the Battersea extension on the Northern line.
In early 2014, the Bakerloo, Central, Piccadilly and Waterloo & City line rolling-stock replacement project was renamed New Tube for London (NTfL) and moved from the feasibility stage to the design and specification stage. The study showed that, with new-generation trains and re-signalling:
Piccadilly line capacity could be increased by 60% with 33 trains per hour (tph) at peak times by 2025.
Central line capacity could be increased by 25% with 33 tph at peak times by 2030.
Waterloo & City line capacity could be increased by 50% by 2032, after the track at Waterloo station is remodelled.
Bakerloo line capacity could be increased by 25% with 27 tph at peak times by 2033.
The project is estimated to cost £16.42 billion (£9.86 billion at 2013 prices). A notice was published on 28 February 2014 in the Official Journal of the European Union asking for expressions of interest in building the trains. On 9 October 2014, TfL published a shortlist of suppliers (Alstom, Siemens, Hitachi, CAF and Bombardier) that had expressed an interest in supplying 250 trains for between £1.0 billion and £2.5 billion, and on the same day opened an exhibition with a design by PriestmanGoode. The fully automated trains may be able to run without drivers, but the ASLEF and RMT trade unions that represent the drivers strongly oppose this, saying it would affect safety. The invitation to tender for the trains was issued in January 2016, with the specifications for the Piccadilly line infrastructure expected later that year, and the first train due to run on the Piccadilly line in 2023. Siemens Mobility's Inspiro design was selected in June 2018 in a £1.5 billion contract.
Ventilation and cooling
When the Bakerloo line opened in 1906, it was advertised with a low maximum temperature, but over time the tube tunnels have warmed up. In 1938 approval was given for a ventilation improvement programme, and a refrigeration unit was installed in a lift shaft at Tottenham Court Road. High temperatures were reported during the 2006 European heat wave. It was claimed in 2002 that, if animals were being transported, temperatures on the Tube would break European Commission animal welfare laws. A 2000 study reported that air quality was 73 times worse than at street level, with a passenger inhaling the same mass of particulates during a twenty-minute journey on the Northern line as when smoking a cigarette. The main purpose of the London Underground's ventilation fans is to extract hot air from the tunnels; fans across the network are being refurbished, although complaints of noise from local residents preclude running them at full power at night.
In June 2006 a groundwater cooling system was installed at Victoria station. In 2012, air-cooling units were installed on platforms at Green Park station, using cool deep groundwater, and at Oxford Circus, using chiller units at the top of an adjacent building. New air-conditioned trains have been introduced on the sub-surface lines, but air conditioning was initially ruled out for the tube trains because space on them was considered too limited for the units, which would also have heated the tunnels even more. The New Tube for London, which will replace the trains on the Bakerloo, Central, Waterloo & City and Piccadilly lines, is planned to have air conditioning along with better energy conservation and regenerative braking.
In the original Tube design, trains passing through close-fitting tunnels act as pistons to create air-pressure gradients between stations. This pressure difference drives ventilation between platforms and the surface exits through the network of passenger foot tunnels. The system depends on an adequate cross-sectional area of the airspace above passengers' heads in the foot tunnels and escalators: by the Hagen–Poiseuille equation, laminar airflow is proportional to the fourth power of the radius. It also depends on an absence of turbulence in the tunnel headspace. In many stations the ventilation system is now ineffective because of alterations that reduce tunnel diameters and increase turbulence. An example is Green Park tube station, where false ceiling panels attached to metal frames reduce the above-head airspace diameter by more than half in many parts, cutting laminar airflow by 94%.
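The 94% figure follows directly from the fourth-power law. As a quick check (reading "more than half" as a halved effective radius, an illustrative simplification):

$$Q \propto r^4 \quad\Longrightarrow\quad \frac{Q_{\text{after}}}{Q_{\text{before}}} = \left(\frac{r/2}{r}\right)^4 = \frac{1}{16} \approx 6.25\%,$$

that is, a loss of roughly 94% of the laminar flow.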
Originally, air turbulence was kept to a minimum by keeping all signage flat against the tunnel walls. Now the ventilation space above head height is crowded with ducting, conduits, cameras, speakers and other equipment acting as baffle plates, with predictable reductions in flow. Electronic signs often have their flat surfaces at right angles to the main airflow, causing choked flow, and temporary sign boards standing at the top of escalators add further turbulence. These alterations are significant for ventilation, not only for heat exchange.
Air quality
The Committee on the Medical Effects of Air Pollutants (COMEAP) has reported on the relative risks of breathing air pollution in different situations. In January 2019, for example, it reported that particulate pollution is up to 30 times higher on the London Underground than on the streets above, with the Northern line having the worst air quality.
Lifts and escalators
Originally access to the deep-tube platforms was by lift. Each lift was staffed, and at some quiet stations in the 1920s the ticket office was moved into the lift, or the lift was arranged so that it could be controlled from the ticket office. The first escalator on the London Underground was installed in 1911, between the District and Piccadilly platforms at Earl's Court, and from the following year new deep-level stations were provided with escalators instead of lifts. The early escalators had a diagonal shunt at the top landing. In 1921 a recorded voice instructed passengers to stand on the right, and signs followed in the Second World War; travellers were asked to stand on the right so that anyone wishing to overtake them would have a clear passage on the left side of the escalator. The first 'comb' type escalator was installed in 1924 at Clapham Common. In the 1920s and 1930s many lifts were replaced by escalators. After the fatal 1987 King's Cross fire, all wooden escalators were replaced with metal ones, and the mechanisms are regularly degreased to lower the potential for fires. The only wooden escalator not replaced was at Greenford station, which remained in use until March 2014; TfL replaced it with the first incline lift on the UK transport network in October 2015.
There are 426 escalators on the London Underground system; the longest is at Angel, and the shortest is at Stratford. There are 184 lifts, and numbers have increased in recent years because of investment in making tube stations accessible. Over 28 stations will have lifts installed over the next 10 years, bringing the total of step-free stations to over 100. Lifts and escalators carry abundant advertising posters, whose layout also lends itself to artistic use.
Wi-Fi and mobile phone reception
In mid-2012, London Underground, in partnership with Virgin Media, trialled Wi-Fi hotspots in many stations, though not in the tunnels, allowing passengers free internet access. The trial proved successful and was extended to the end of 2012, after which the service remained free for subscribers of Virgin Media and certain other providers, with a paid option for other users. It was not previously possible to use mobile phones on most underground parts of the network via native 2G, 3G or 4G signals (services running overground, or occasionally sub-surface, were an exception, depending on the phone and carrier), and a project to extend coverage before the 2012 Olympics was abandoned because of commercial and technical difficulties.
In March 2020, 2G, 3G and 4G signal was made available on parts of the Jubilee line, between Westminster and Canning Town, throughout the stations and tunnels as part of an initial trial.
In June 2021, Vodafone dropped London Underground Wi-Fi connectivity across the entire network. This was restored in April 2023 after control of the Wi-Fi connectivity moved from Virgin Media to Boldyn Networks as part of their 20-year concession deal with Transport for London, providing data connectivity across the entire network.
In December 2022, additional mobile coverage, including 5G connectivity, launched at a small subset of stations and tunnel segments on the Central line, with a view to expanding to all sub-surface stations and tunnels on the London Underground, and also the Elizabeth line, by the end of 2024. Further Northern line stations were added from January 2023, with more following in June 2023. Coverage solutions are not identical at all stations, and some lack 5G connectivity. As of June 2023, testing had begun on sections of the Bakerloo, Piccadilly and Victoria lines.
In November and December 2023, mobile data coverage was extended to more stations on the Northern and Central lines: on the Northern line, all stations from Tottenham Court Road to Euston; on the Central line, from Oxford Circus to Chancery Lane.
Travelling
Ticketing
The Underground received £2.669 billion in fares in 2016/17 and uses Transport for London's zonal fare system to calculate fares. There are nine zones, with zone 1 being the central zone, which includes the loop of the Circle line and a few stations south of the River Thames. The only London Underground stations in zones 7 to 9 are on the Metropolitan line beyond Moor Park, outside the London region. Some stations are in two zones, and the cheaper fare applies. Travel can be paid for with paper tickets, contactless Oyster cards, contactless debit or credit cards, and Apple Pay or Android Pay smartphones and watches. Single and return tickets are available in either format, but Travelcards (season tickets) for longer than a day are available only on Oyster cards.
TfL introduced the Oyster card in 2003; this is a pre-payment smartcard with an embedded contactless RFID chip. It can be loaded with Travelcards and used on the Underground, the Overground, buses, trams, the Docklands Light Railway, and National Rail services within London. Fares for single journeys are cheaper than paper tickets, and a daily cap limits the total cost in a day to the price of a Day Travelcard. The Oyster card must be 'touched in' at the start and end of a journey, otherwise the journey is regarded as 'incomplete' and the maximum fare is charged. In March 2012 it was reported that incomplete journeys had cost travellers £66.5 million over the previous year.
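As a rough illustration of how a daily cap works, here is a minimal sketch with hypothetical fare values; these numbers are invented for the example and are not TfL's actual tariff:

```python
# Hypothetical figures for illustration only -- not TfL's real tariff.
SINGLE_FARE = 2.80   # pay-as-you-go single journey
DAILY_CAP = 8.10     # price of the equivalent Day Travelcard

def day_total(journeys: int) -> float:
    """Total charged for a day's journeys, never exceeding the daily cap."""
    return min(journeys * SINGLE_FARE, DAILY_CAP)

print(day_total(2))  # 5.60 -- below the cap, so pay per journey
print(day_total(5))  # 8.10 -- capped at the Day Travelcard price
```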
In 2014, TfL became the first public transport provider in the world to accept payment from contactless bank cards. The Underground first started accepting contactless debit and credit cards in September 2014. This was followed by the adoption of Apple Pay in 2015 and Android Pay in 2016, allowing payment using a contactless-enabled phone or smartwatch. Over 500 million journeys have taken place using contactless payments, and TfL has become one of Europe's largest contactless merchants, with around 1 in 10 contactless transactions in the UK taking place on the TfL network. This technology, developed in-house by TfL, has been licensed to other major cities such as New York City and Boston.
A concessionary fare scheme is operated by London Councils for residents who are disabled or meet certain age criteria. Residents born before 1951 were eligible after their 60th birthday, whereas those born in 1955 will need to wait until they are 66. Called a "Freedom Pass", it allows free travel on TfL-operated routes at all times and is valid on some National Rail services within London at weekends and after 09:30 on Monday to Friday. Since 2010, the Freedom Pass has included an embedded holder's photograph; it lasts five years between renewals.
In addition to automatic and staffed faregates at stations, the Underground also operates on a proof-of-payment system. The system is patrolled by both uniformed and plain-clothes fare inspectors with hand-held Oyster card readers. Passengers travelling without a valid ticket must pay a penalty fare of £80 (£40 if paid within 21 days) and can be prosecuted for fare evasion under the Regulation of Railways Act 1889 and Transport for London Byelaws.
Hours of operation
The Tube closes overnight during the week, but since 2016 the Central, Jubilee, Northern, Piccadilly and Victoria lines, as well as a short section of the London Overground, have operated all night on Friday and Saturday nights. The first trains run from about 05:00 and the last until just after 01:00, with later starting times on Sunday mornings. The nightly closures are used for maintenance, but some lines stay open on New Year's Eve and run longer hours during major public events such as the 2012 London Olympics. Some lines are occasionally closed at weekends for scheduled engineering work.
The Underground runs a limited service on Christmas Eve with some lines closing early, and does not operate on Christmas Day. Since 2010 a dispute between London Underground and trade unions over holiday pay has resulted in a limited service on Boxing Day.
Night Tube
On 19 August 2016, London Underground launched a 24-hour weekend service on the Victoria and Central lines, running from Friday morning right through until Sunday evening, with plans in place to extend this to the Piccadilly, Northern and Jubilee lines. The Night Tube was originally scheduled to start on 12 September 2015, following completion of upgrades, but in August 2015 it was announced that the start date had been pushed back because of ongoing talks about contract terms between trade unions and London Underground. On 23 May 2016 it was announced that the night service would launch on 19 August 2016 on the Central and Victoria lines. The service operates on the following lines:
Central line
between Ealing Broadway and Hainault via Newbury Park or Loughton. No service on the West Ruislip Branch, between Woodford and Hainault via Grange Hill or between Loughton and Epping.
Northern line
between Morden and Edgware / High Barnet via Charing Cross. No service on Mill Hill East, Battersea or Bank branches.
Piccadilly line
between Cockfosters and Heathrow Terminals 1, 2, 3 and 5. No service to Terminal 4 or between Acton Town and Uxbridge.
Jubilee line
Full line – Stratford to Stanmore.
Victoria line
Full line – Walthamstow Central to Brixton.
The Jubilee, Piccadilly and Victoria lines, and the Central line between White City and Leytonstone, operate at 10-minute intervals. The Central line operates at 20-minute intervals between Leytonstone and Hainault, between Leytonstone and Loughton, and between White City and Ealing Broadway. The Northern line operates at roughly 8-minute intervals between Morden and Camden Town via Charing Cross, and at 15-minute intervals between Camden Town and Edgware and between Camden Town and High Barnet.
Night Tube services were suspended in March 2020 during the COVID-19 pandemic. They were reinstated partially in November 2021 and fully in July 2022.
Accessibility
Accessibility for people with limited mobility was not considered when most of the system was built, and before 1993 fire regulations prohibited wheelchairs on the Underground. The stations on the Jubilee Line Extension, opened in 1999, were the first on the system designed with accessibility in mind, but retrofitting accessibility features to older stations is a major investment planned to take over twenty years. A 2010 London Assembly report concluded that over 10% of people in London had reduced mobility and that, with an ageing population, numbers would increase in the future.
The standard tube map indicates stations that are step-free from street to platform. There can also be a step from platform to train, and a gap between the train and curved platforms; these distances are marked on the map. Access from platform to train at some stations can be assisted by a staff-operated boarding ramp, and a section of some platforms has been raised to reduce the step.
There are 90 stations with step-free access from platform to train, and there are plans to provide step-free access at another 11 stations by 2024. By 2016 a third of stations had platform humps that reduce the step from platform to train. New trains, such as those introduced on the sub-surface network, have access and room for wheelchairs, improved audio and visual information systems, and accessible door controls.
Delays and overcrowding
During peak hours, stations can become so crowded that they need to be closed. Passengers may be unable to board the first train that arrives, and most do not find a seat; some trains carry more than four passengers per square metre. When asked, passengers report overcrowding as the aspect of the network they are least satisfied with, and overcrowding has been linked to poor productivity and potential poor heart health. Capacity increases have been overtaken by increased demand, and peak overcrowding has increased by 16 per cent since 2004–05.
Compared with 2003–04, the reliability of the network had increased in 2010–11, with lost customer hours reduced from 54 million to 40 million. Passengers are entitled to a refund if their journey is delayed by 15 minutes or more due to circumstances within the control of TfL, and in 2010, 330,000 passengers out of a potential 11 million Tube passengers claimed compensation for delays. Mobile phone apps and services have been developed to help passengers claim their refund more efficiently.
Safety
London Underground is authorised to operate trains by the Office of Rail and Road. At the time of reporting there had been 310 days since the last major incident, in which a passenger died after falling on the track, and there had been nine consecutive years without an employee fatality. A special staff-training facility was opened at West Ashfield tube station in TfL's Ashfield House, West Kensington, in 2010 at a cost of £800,000. The Mayor of London, Boris Johnson, later decided the building should be demolished along with the Earls Court Exhibition Centre as part of Europe's biggest regeneration scheme.
In November 2011 it was reported that 80 people had died by suicide in the previous year on the London Underground, up from 46 in 2000. Most platforms at deep tube stations have pits, often referred to as 'suicide pits', beneath the track. These were constructed in 1926 to aid drainage of water from the platforms, but also halve the likelihood of a fatality when a passenger falls or jumps in front of a train.
Tube Challenge
The Tube Challenge is the competition for the fastest time to travel to all London Underground stations, tracked by Guinness World Records since 1960. The goal is to visit all the stations on the system, but not necessarily using all the lines; participants may connect between stations on foot, or by using other forms of public transport.
As of 2021, the record for fastest completion was held by Steve Wilson (UK) and Andi James (Finland), who completed the challenge in 15 hours, 45 minutes and 38 seconds on 21 May 2015.
Design and the arts
Map
Early maps of the Metropolitan and District railways were city maps with the lines superimposed, and the District published a pocket map in 1897. A Central London Railway route diagram appears on a 1904 postcard and 1905 poster, similar maps appearing in District Railway cars in 1908. In the same year, following a marketing agreement between the operators, a joint central area map that included all the lines was published. A new map was published in 1921 without any background details, but the central area was squashed, requiring smaller letters and arrows. Although Fred H. Stingemore enlarged the central area of the map, it was Harry Beck who took this further by distorting geography and simplifying the map so that the railways appeared as straight lines with equally spaced stations. He presented his original draft in 1931, and after initial rejection it was first printed in 1933. Today's tube map is an evolution of that original design, and the ideas are used by many metro systems around the world.
The current standard Tube map shows the Docklands Light Railway, London Overground, IFS Cloud Cable Car, London Tramlink and the London Underground; a more detailed map covering a larger area, published by National Rail and Transport for London, includes suburban railway services. The Tube map came second in a BBC and London Transport Museum poll asking for a favourite UK design icon of the 20th century, and the Underground's 150th anniversary was celebrated by a Google Doodle on the search engine.
Commissioned by Art on the Underground, the cover of the pocket map is designed by various British and international artists, making it one of the largest public art commissions in the UK.
Roundel
While the first use of a roundel in a London transport context was the trademark of the London General Omnibus Company registered in 1905, it was first used on the Underground in 1908 when the UERL placed a solid red circle behind station nameboards on platforms to highlight the name. The word "UNDERGROUND" was placed in a roundel instead of a station name on posters in 1912 by Charles Sharland and Alfred France, as well as on undated and possibly earlier posters from the same period.
Transport administrator Frank Pick, wanting to establish a strong corporate identity and visual brand for the Underground, thought the solid red disc cumbersome. He took a version in which the disc became a ring from a 1915 Sharland poster, gave it to Edward Johnston to develop, and registered the symbol as a trademark in 1917. The roundel was first printed on a map cover using the Johnston typeface in June 1919, and printed in colour the following October.
After the UERL was absorbed into the London Passenger Transport Board in 1933, forms of the roundel were used for buses, trams and coaches, as well as the Underground. The words "London Transport" were added inside the ring, above and below the bar. The Carr-Edwards report, published in 1938 as possibly the first attempt at a graphics standards manual, introduced stricter guidelines. Between 1948 and 1957 the word "Underground" in the bar was replaced by "London Transport". Today, forms of the roundel, with differing colours for the ring and bar, are used for other TfL services, such as London Buses, Tramlink, London Overground, London River Services and Docklands Light Railway. Crossrail will also be identified with a roundel. The 100th anniversary of the roundel was celebrated in 2008 by TfL commissioning 100 artists to produce works celebrating the design. Roundels are featured outside many Underground stations; they are commonly mounted on a white pole known as a "Venetian mast".
In 2016, Tate Modern commissioned conceptual artist Michael Craig-Martin to "reimagine" the roundel, changing its colours for the first time since the sign was introduced. His design was displayed at Southwark Station in collaboration with Art on the Underground to mark the opening weekend of the new Tate Modern gallery situated near the station.
Architecture
Seventy of the 272 London Underground stations use buildings that are on the Statutory List of Buildings of Special Architectural or Historic Interest, and five have entrances in listed buildings. The Metropolitan Railway's original seven stations were inspired by Italianate designs, with the platforms lit by daylight from above and by gas lights in large glass globes. Early District Railway stations were similar, and on both railways, the further a station was from central London, the simpler its construction. The City & South London Railway opened with red-brick buildings, designed by Thomas Phillips Figgis, topped with a lead-covered dome that contained the lift mechanism and a weather vane (still visible at many stations, such as Clapham Common). The Central London Railway appointed Harry Bell Measures as architect; he designed its pinkish-brown steel-framed buildings with larger entrances.
In the first decade of the 20th century Leslie Green established a house style for the tube stations built by the UERL, which were clad in ox-blood faience blocks. Green pioneered the use of building design to guide passengers, with direction signs on tiled walls and each station given a unique identity by patterns on the platform walls. Many of these tile patterns survive, though a significant number are now replicas. Harry W. Ford was responsible for the design of at least 17 UERL and District Railway stations, including Barons Court and Embankment, and claimed to have first thought of enlarging the U and D in the UNDERGROUND wordmark. The Met's architect Charles Walter Clark had used a neo-classical design for rebuilding Baker Street and Paddington Praed Street stations before the First World War and, although the fashion had changed, continued with Farringdon in 1923. The buildings had metal lettering attached to pale walls. Clark later designed Chiltern Court, the large, luxurious block of apartments at Baker Street, which opened in 1929. In the 1920s and 1930s, Charles Holden designed a series of modernist and art-deco stations, some of which he described as his 'brick boxes with concrete lids'. Holden's design for the Underground's headquarters building at 55 Broadway included avant-garde sculptures by Jacob Epstein, Eric Gill and Henry Moore.
When the Central line was extended east, the stations were simplified Holden proto-Brutalist designs, and a cavernous concourse was built at Gants Hill in homage to early Moscow Metro stations. Few new stations were built in the 50 years after 1948, but Misha Black was appointed design consultant for the 1960s Victoria line, contributing to the line's uniform look, with each station given an individual tile motif. Notable stations from this period include Moor Park, the stations of the Piccadilly line extension to Heathrow, and Hillingdon.
The stations of the 1990s Jubilee Line Extension were designed in a high-tech style by architects such as Norman Foster and Michael Hopkins. The project was critically acclaimed, the Royal Fine Arts Commission describing it as "an example of patronage at its best and most enlightened", and two stations were shortlisted for the Stirling Prize. Stations were built to the latest standards and future-proofed for growth, with innovations such as platform screen doors. West Ham station was built as a homage to the red-brick tube stations of the 1930s, using brick, concrete and glass.
Many platforms have unique interior designs to help passenger identification. The tiling at Baker Street incorporates repetitions of Sherlock Holmes's silhouette; at Tottenham Court Road semi-abstract mosaics by Eduardo Paolozzi feature musical instruments, tape machines and butterflies; and at Charing Cross, David Gentleman designed the mural depicting the construction of the Eleanor Cross. Robyn Denny designed the murals on the Northern line platforms at Embankment.
Johnston typeface
The first posters used various typefaces, as was contemporary practice, and station signs used sans-serif block capitals. The Johnston typeface was developed in upper and lower case in 1916, and a complete set of blocks, marked Johnston Sans, was made by the printers the following year. A bold version of the capitals was developed by Johnston in 1929. The Metropolitan Railway changed to a serif letterform for its signs in the 1920s, used on the stations rebuilt by Clark. Johnston was adopted system-wide after the formation of the LPTB in 1933, and the LT wordmark was applied to locomotives and carriages. Johnston was redesigned for photo-typesetting in the early 1980s, becoming New Johnston, when Eiichi Kono designed a range that included Light, Medium and Bold weights, each with its italic version. The typesetters P22 developed today's electronic version, sometimes called TfL Johnston, in 1997.
Posters and patronage of the arts
Early advertising posters used various typefaces. Graphic posters first appeared in the 1890s, and it became possible to print colour images economically in the early 20th century. The Central London Railway used colour illustrations in their 1905 poster, and from 1908 the Underground Group, under Pick's direction, used images of country scenes, shopping and major events on posters to encourage use of the tube. Pick found he was limited by the commercial artists the printers used, and so commissioned work from artists and designers such as Dora Batty, Edward McKnight Kauffer, the cartoonist George Morrow, Herry (Heather) Perry, Graham Sutherland, Charles Sharland and the sisters Anna and Doris Zinkeisen. According to Ruth Artmonsky, over 150 women artists were commissioned by Pick and latterly Christian Barman to design posters for London Underground, London Transport and London County Council Tramways.
The Johnston Sans lettering began appearing on posters from 1917. The Met, strongly independent, used images on timetables and on the cover of its Metro-land guide, which promoted the country it served to the walker, visitor and, later, the house-hunter. By the time London Transport was formed in 1933, the UERL was considered a patron of the arts, and over 1,000 works were commissioned in the 1930s, such as the cartoon images of Charles Burton and Kauffer's later abstract cubist and surrealist images. Harold Hutchison became London Transport publicity officer in 1947, after the Second World War and nationalisation, and introduced the "pair poster", in which an image on one poster was paired with text on another. The number of commissions dropped to eight a year in the 1950s and just four a year in the 1970s, with images from artists such as Harry Stevens and Tom Eckersley.
Art on the Underground was launched in 2000 to revive London Underground as a patron of the arts. Today, commissions range from the pocket Tube map cover, to temporary artworks, to large-scale permanent installations in stations. Major commissions by Art on the Underground in recent years have included Labyrinth by the Turner Prize–winning artist Mark Wallinger, to mark the 150th anniversary of the Underground; Diamonds and Circles, permanent works in situ by the French artist Daniel Buren at Tottenham Court Road; and Beauty < Immortality, a memorial to Frank Pick by Langlands & Bell at Piccadilly Circus.
Similarly, since 1986 Poems on the Underground has commissioned poetry that is displayed in trains.
In popular culture
The Underground (including several fictitious stations) has appeared in many movies and television shows, including Skyfall, Death Line, Die Another Day, Sliding Doors, An American Werewolf in London, Creep, Tube Tales, Sherlock and Neverwhere. The London Underground Film Office received over 200 requests to film in 2000. The Underground has also featured in music such as the Jam's "Down in the Tube Station at Midnight" and in literature such as the graphic novel V for Vendetta. Popular legends about the Underground being haunted persist to this day. In 2016, British composer Daniel Liam Glyn released his concept album Changing Stations based on the 11 main tube lines of the London Underground network.
Call of Duty: Modern Warfare 3 has a single-player level named "Mind the Gap", most of which takes place between the dockyards and Westminster as the player and a team of SAS operatives try to stop terrorists escaping on a hijacked London Underground train. The game also features the multiplayer map "Underground", set in a fictitious Underground station. The London Underground map serves as a playing field for the conceptual game of Mornington Crescent (named after a station on the Northern line) and the board game The London Game.
In 1999, Carlton Television premiered a regional game show (Greater London area only) also called Mind the Gap.
Busking
The London Underground provides busking permits for up to 39 pitches across 25 central London stations, with over 100,000 hours of live music performed each year. Performers are chosen by audition, with previous buskers including Ed Sheeran, George Michael and Rod Stewart.
Research
The London Underground is frequently studied by academics because it is one of the largest, oldest, and most widely used systems of public transit in the world. Therefore, the transportation and complex network literatures include extensive information about the Tube system.
For London Underground passengers, research suggests that transfers are highly costly in terms of walk and wait times. Because these costs are unevenly distributed across stations and platforms, path choice analyses may be helpful in guiding upgrades and choice of new stations. Routes on the Underground can also be optimized using a global network optimization approach, akin to routing algorithms for Internet applications. Analysis of the Underground as a network may also be helpful for setting safety priorities, since the stations targeted in the 2005 London bombings were amongst the most effective for disrupting the transportation system.
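As a sketch of the "routing algorithm" view of the network, here is Dijkstra's shortest-path algorithm on a small, made-up graph: the station names are real, but the edge weights are invented stand-ins for journey plus transfer minutes, not measured data.

```python
import heapq

def shortest_time(graph, start, goal):
    """Dijkstra's algorithm: minimum total travel time from start to goal."""
    queue = [(0, start)]        # priority queue of (time so far, station)
    best = {start: 0}           # best known time to each station
    while queue:
        time, station = heapq.heappop(queue)
        if station == goal:
            return time
        if time > best.get(station, float("inf")):
            continue            # stale queue entry; a better path was found
        for neighbour, weight in graph[station]:
            t = time + weight
            if t < best.get(neighbour, float("inf")):
                best[neighbour] = t
                heapq.heappush(queue, (t, neighbour))
    return None                 # goal unreachable

# Toy network; weights are invented minutes including notional transfer cost.
tube = {
    "Oxford Circus": [("Green Park", 2), ("Bond Street", 2)],
    "Green Park": [("Oxford Circus", 2), ("Westminster", 3)],
    "Bond Street": [("Oxford Circus", 2), ("Westminster", 7)],
    "Westminster": [("Green Park", 3), ("Bond Street", 7)],
}
print(shortest_time(tube, "Oxford Circus", "Westminster"))  # 5, via Green Park
```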
A study in March 2023 showed that over £1.3 million worth of mobile phones was stolen on the London Underground in 2022, more than on the entire UK rail network combined.
Notable people
Harry Beck (1902–1974) designed the tube map, named in 2006 as a British design icon.
Hannah Dadds (1941–2011), the first female train driver on the London Underground.
John Fowler (1817–1898) was the railway engineer who designed the Metropolitan Railway.
MacDonald Gill (1884–1947), cartographer credited with drawing, in 1914, "the map that saved the London Underground".
James Henry Greathead (1844–1896) was the engineer who dug the Tower Subway using a wrought-iron tunnelling shield patented by Peter W. Barlow, and who later used the same shield method to build the deep-tube City & South London and Central London railways.
Edward Johnston (1872–1944) developed the Johnston Sans typeface, still in use today on the London Underground.
Charles Pearson (1793–1862) suggested an underground railway in London in 1845 and from 1854 promoted a scheme that eventually became the Metropolitan Railway.
Frank Pick (1878–1941) was UERL publicity officer from 1908, commercial manager from 1912 and joint managing director from 1928. He was chief executive and vice-chairman of the LPTB from 1933 to 1940. It was Pick who commissioned Edward Johnston to create the typeface and redesign the roundel, and who established the Underground's reputation as a patron of the arts through its use of the best in contemporary poster art and architecture.
Robert Selbie (1868–1930) was manager of the Metropolitan Railway from 1908 until his death, marketing it using the Metro-land brand.
Edgar Speyer (1862–1932) was a financial backer of Yerkes who served as UERL chairman from 1906 to 1915, during the company's formative years.
Albert Stanley (1874–1948) was manager of the UERL from 1907, and became the first chairman of the London Passenger Transport Board (LPTB) in 1933.
Edward Watkin (1819–1901) was chairman of the Metropolitan Railway from 1872 to 1894.
Charles Yerkes (1837–1905) was an American who founded the Underground Electric Railways Company of London (UERL) in 1902, which opened three tube lines and electrified the District Railway.
| Technology | Trains | null |
17856 | https://en.wikipedia.org/wiki/Lamiaceae | Lamiaceae | The Lamiaceae or Labiatae are a family of flowering plants commonly known as the mint, deadnettle, or sage family. Many of the plants are aromatic in all parts and include widely used culinary herbs like basil, mint, rosemary, sage, savory, marjoram, oregano, hyssop, thyme, lavender, and perilla, as well as other medicinal herbs such as catnip, salvia, bee balm, wild dagga, and oriental motherwort.
Some species are shrubs, trees (such as teak), or, rarely, vines. Many members of the family are widely cultivated, not only for their aromatic qualities, but also their ease of cultivation, since they are readily propagated by stem cuttings. Besides those grown for their edible leaves, some are grown for decorative foliage. Others are grown for seed, such as Salvia hispanica (chia), or for their edible tubers, such as Plectranthus edulis, P. esculentus, P. rotundifolius, and Stachys affinis (Chinese artichoke). Many are also grown ornamentally, notably coleus, Plectranthus, and many Salvia species and hybrids.
The family has a cosmopolitan distribution. The enlarged Lamiaceae contain about 236 genera and have been stated to contain 6,900 to 7,200 species, but the World Checklist lists 7,534. The largest genera are Salvia (900), Scutellaria (360), Stachys (300), Plectranthus (300), Hyptis (280), Teucrium (250), Vitex (250), Thymus (220), and Nepeta (200). Clerodendrum was once a genus of over 400 species, but by 2010, it had been narrowed to about 150.
The family has traditionally been considered closely related to the Verbenaceae; in the 1990s, phylogenetic studies suggested that many genera classified in the Verbenaceae should be classified in the Lamiaceae or to other families in the order Lamiales.
The alternative family name Labiatae refers to the flowers typically having petals fused into an upper lip and a lower lip (labia in Latin). Although this is still considered an acceptable alternative name, most botanists now use the name Lamiaceae in referring to this family. The flowers are bilaterally symmetrical with five united petals and five united sepals. They are usually bisexual and verticillastrate (a flower cluster that looks like a whorl of flowers, but actually consists of two crowded clusters). The leaves emerge oppositely, each pair at right angles to the previous one (decussate), or whorled. The stems are frequently square in cross-section, but this is not found in all members of the family, and is sometimes found in other plant families.
Genera
The last revision of the entire family was published in 2004. It described and provided keys to 236 genera. These are marked with an asterisk (*) in the list below. A few genera have been established or resurrected since 2004. These are marked with a plus sign (+). Other genera have been synonymised. These are marked with a minus sign (-). The remaining genera in the list are mostly of historical interest only and are from a source that includes such genera without explanation. Few of these are recognized in modern treatments of the family.
Kew Gardens provides a list of genera that includes additional information. A list at the Angiosperm Phylogeny Website is frequently updated. Plants of the World Online currently accepts 227 genera.
Recent changes
The circumscription of several genera has changed since 2004. Tsoongia, Paravitex, and Viticipremna have been sunk into synonymy with Vitex. Huxleya has been sunk into Volkameria. Kalaharia, Volkameria, Ovieda, and Tetraclea have been segregated from a formerly polyphyletic Clerodendrum. Rydingia has been separated from Leucas. The remaining Leucas is paraphyletic over four other genera.
Subfamilies and tribes
In 2004, the Lamiaceae were divided into seven subfamilies, plus 10 genera not placed in any of the subfamilies. The unplaced genera are: Tectona, Callicarpa, Hymenopyramis, Petraeovitex, Peronema, Garrettia, Cymaria, Acrymia, Holocheila, and Ombrocharis. The subfamilies are the Symphorematoideae, Viticoideae, Ajugoideae, Prostantheroideae, Nepetoideae, Scutellarioideae, and Lamioideae. The subfamily Viticoideae is probably not monophyletic. The Prostantheroideae and Nepetoideae are divided into tribes. These are shown in the phylogenetic tree below.
Phylogeny
Most of the genera of Lamiaceae have never been sampled for DNA for molecular phylogenetic studies. Most of those that have been are included in the following phylogenetic tree. The phylogeny depicted below is based on seven different sources.
| Biology and health sciences | Lamiales | null |
17860 | https://en.wikipedia.org/wiki/Logarithm | Logarithm | In mathematics, the logarithm to base $b$ is the inverse function of exponentiation with base $b$. That means that the logarithm of a number $x$ to the base $b$ is the exponent to which $b$ must be raised to produce $x$. For example, since $1000 = 10^3$, the logarithm base $10$ of $1000$ is $3$, or $\log_{10}(1000) = 3$. The logarithm of $x$ to base $b$ is denoted as $\log_b(x)$, or without parentheses, $\log_b x$. When the base is clear from the context or is irrelevant it is sometimes written $\log x$.
The logarithm base $10$ is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number $e \approx 2.718$ as its base; its use is widespread in mathematics and physics because of its very simple derivative. The binary logarithm uses base $2$ and is frequently used in computer science.
Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors, and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors:
$$\log_b(xy) = \log_b x + \log_b y,$$
provided that $b$, $x$ and $y$ are all positive and $b \ne 1$. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter $e$ as the base of natural logarithms.
Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel (dB) is a unit used to express ratio as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting.
The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in public-key cryptography.
Motivation
Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. The inverse of addition is subtraction, and the inverse of multiplication is division. Similarly, a logarithm is the inverse operation of exponentiation. Exponentiation is when a number $b$, the base, is raised to a certain power $y$, the exponent, to give a value $x$; this is denoted
$$b^y = x.$$
For example, raising $2$ to the power of $3$ gives $8$: $2^3 = 8$.
The logarithm of base $b$ is the inverse operation, which provides the output $y$ from the input $x$. That is, $y = \log_b x$ is equivalent to $x = b^y$ if $b$ is a positive real number. (If $b$ is not a positive real number, both exponentiation and logarithm can be defined but may take several values, which makes definitions much more complicated.)
One of the main historical motivations of introducing logarithms is the formula
$$\log_b(xy) = \log_b x + \log_b y,$$
by which tables of logarithms allow multiplication and division to be reduced to addition and subtraction, a great aid to calculations before the invention of computers.
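A minimal sketch of the table method in modern terms (illustrative values; Python's `math.log10` stands in for the printed table):

```python
import math

# Multiply two numbers by adding their base-10 logarithms:
# log10(a*b) = log10(a) + log10(b).
a, b = 37.5, 481.0
log_sum = math.log10(a) + math.log10(b)  # addition replaces multiplication
product = 10 ** log_sum                  # the "antilog" look-up step
print(product)  # ~18037.5
print(a * b)    # 18037.5, agreeing up to floating-point rounding
```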
Definition
Given a positive real number $b$ such that $b \ne 1$, the logarithm of a positive real number $x$ with respect to base $b$ is the exponent by which $b$ must be raised to yield $x$. In other words, the logarithm of $x$ to base $b$ is the unique real number $y$ such that $b^y = x$.
The logarithm is denoted "$\log_b x$" (pronounced as "the logarithm of $x$ to base $b$", "the base-$b$ logarithm of $x$", or most commonly "the log, base $b$, of $x$").
An equivalent and more succinct definition is that the function $x \mapsto \log_b x$ is the inverse function of the function $y \mapsto b^y$.
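A short numerical check of this inverse relationship (a sketch using only the standard library):

```python
import math

b, y = 3.0, 5.0
x = b ** y                  # exponentiation: 3**5 == 243
print(math.log(x, b))       # ~5.0: the logarithm recovers the exponent
print(b ** math.log(x, b))  # ~243.0: exponentiation undoes the logarithm
```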
Examples
$\log_2 16 = 4$, since $2^4 = 2 \cdot 2 \cdot 2 \cdot 2 = 16$.
Logarithms can also be negative: $\log_2 \tfrac{1}{2} = -1$, since $2^{-1} = \tfrac{1}{2}$.
$\log_{10} 150$ is approximately 2.176, which lies between 2 and 3, just as 150 lies between $10^2 = 100$ and $10^3 = 1000$.
For any base $b$, $\log_b b = 1$ and $\log_b 1 = 0$, since $b^1 = b$ and $b^0 = 1$, respectively.
Logarithmic identities
Several important formulas, sometimes called logarithmic identities or logarithmic laws, relate logarithms to one another.
Product, quotient, power, and root
The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. In formulas:
product: log_b(xy) = log_b x + log_b y, for example log_3 243 = log_3 (9 · 27) = log_3 9 + log_3 27 = 2 + 3 = 5;
quotient: log_b(x/y) = log_b x − log_b y, for example log_2 (16/4) = log_2 16 − log_2 4 = 4 − 2 = 2;
power: log_b(x^p) = p · log_b x, for example log_2 (2^6) = 6 · log_2 2 = 6;
root: log_b(x^(1/p)) = (log_b x)/p, for example log_10 √1000 = (log_10 1000)/2 = 1.5.
Each of the identities can be derived after substitution of the logarithm definitions x = b^(log_b x) or y = b^(log_b y) in the left hand sides.
Change of base
The logarithm log_b x can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:
log_b x = (log_k x) / (log_k b).
Typical scientific calculators calculate the logarithms to bases 10 and e. Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula:
log_b x = (log_10 x) / (log_10 b) = (ln x) / (ln b).
Given a number x and its logarithm y = log_b x to an unknown base b, the base is given by:
b = x^(1/y),
which can be seen from taking the defining equation x = b^y to the power of 1/y.
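For illustration, a minimal Python sketch of the change-of-base formula and of recovering an unknown base (the helper name log_base is ours, not a library routine):

import math

# log_b(x) = log_k(x) / log_k(b); here the arbitrary base k is e.
def log_base(x: float, b: float) -> float:
    return math.log(x) / math.log(b)

print(log_base(8, 2))    # 3.0, since 2**3 = 8
print(log_base(81, 3))   # 4.0, since 3**4 = 81

# Recovering the base from x and y = log_b(x): b = x**(1/y).
x, y = 81, 4
print(x ** (1 / y))      # 3.0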
Particular bases
Among all choices for the base, three are particularly common. These are b = 10, b = e (the irrational mathematical constant e ≈ 2.71828) and b = 2 (the binary logarithm). In mathematical analysis, the logarithm base e is widespread because of analytical properties explained below. On the other hand, base-10 logarithms (the common logarithm) are easy to use for manual calculations in the decimal number system: log_10 (10x) = 1 + log_10 x.
Thus, log_10 x is related to the number of decimal digits of a positive integer x: the number of digits is the smallest integer strictly bigger than log_10 x.
For example, log_10 5986 is approximately 3.78. The next integer above it is 4, which is the number of digits of 5986. Both the natural logarithm and the binary logarithm are used in information theory, corresponding to the use of nats or bits as the fundamental units of information, respectively.
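A short Python sketch of the digit-count relation (the helper name decimal_digits is ours; for very large integers len(str(n)) avoids floating-point rounding):

import math

def decimal_digits(n: int) -> int:
    # smallest integer strictly bigger than log10(n)
    return math.floor(math.log10(n)) + 1

print(math.log10(5986))      # about 3.777
print(decimal_digits(5986))  # 4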
Binary logarithms are also used in computer science, where the binary system is ubiquitous; in music theory, where a pitch ratio of two (the octave) is ubiquitous and the number of cents between any two pitches is a scaled version of the binary logarithm, or log_2 times 1200, of the pitch ratio (that is, 100 cents per semitone in conventional equal temperament), or equivalently the log base 2^(1/1200); and in photography, where rescaled base-2 logarithms are used to measure exposure values, light levels, exposure times, lens apertures, and film speeds in "stops".
The abbreviation log x is often used when the intended base can be inferred based on the context or discipline, or when the base is indeterminate or immaterial. Common logarithms (base 10), historically used in logarithm tables and slide rules, are a basic tool for measurement and computation in many areas of science and engineering; in these contexts log x still often means the base-ten logarithm. In mathematics log x usually refers to the natural logarithm (base e).
In computer science and information theory, log often refers to binary logarithms (base 2). Common notations for these bases are ln x for base e, lg x for base 10, and lb x for base 2; these are also the designations suggested by the International Organization for Standardization (ISO 80000-2).
History
The history of logarithms in seventeenth-century Europe saw the discovery of a new function that extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded by John Napier in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Canon of Logarithms). Prior to Napier's invention, there had been other techniques of similar scopes, such as the prosthaphaeresis or the use of tables of progressions, extensively developed by Jost Bürgi around 1600. Napier coined the term for logarithm in Middle Latin, logarithmus, literally meaning "ratio-number", derived from the Greek logos "proportion, ratio, word" + arithmos "number".
The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to the common logarithm, and was referred to by Archimedes as the "order of a number". The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities. Such methods are called prosthaphaeresis.
Invention of the function now known as the natural logarithm began as an attempt to perform a quadrature of a rectangular hyperbola by Grégoire de Saint-Vincent, a Belgian Jesuit residing in Prague. Archimedes had written The Quadrature of the Parabola in the third century BC, but a quadrature for the hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the logarithm provides between a geometric progression in its argument and an arithmetic progression of values prompted A. A. de Sarasa to make the connection of Saint-Vincent's quadrature and the tradition of logarithms in prosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for natural logarithm. Soon the new function was appreciated by Christiaan Huygens and James Gregory. The notation Log y was adopted by Leibniz in 1675, and the next year he connected it to the integral ∫ dy/y.
Before Euler developed his modern conception of complex natural logarithms, Roger Cotes had a nearly equivalent result when he showed in 1714 that ix = ln(cos x + i sin x).
Logarithm tables, slide rules, and historical applications
By simplifying difficult calculations before calculators and computers became available, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms
"...[a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations."
As the function f(x) = b^x is the inverse function of log_b x, it has been called an antilogarithm. Nowadays, this function is more commonly called an exponential function.
Log tables
A key tool that enabled the practical use of logarithms was the table of logarithms. The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention but with the innovation of using 10 as the base. Briggs' first table contained the common logarithms of all integers in the range from 1 to 1000, with a precision of 14 digits. Subsequently, tables with increasing scope were written. These tables listed the values of log_10 x for any number x in a certain range, at a certain precision. Base-10 logarithms were universally used for computation, hence the name common logarithm, since numbers that differ by factors of 10 have logarithms that differ by integers. The common logarithm of x can be separated into an integer part and a fractional part, known as the characteristic and mantissa. Tables of logarithms need only include the mantissa, as the characteristic can be easily determined by counting digits from the decimal point. The characteristic of 10 · x is one plus the characteristic of x, and their mantissas are the same. Thus using a three-digit log table, the logarithm of 3542 is approximated by
log_10 3542 = log_10 (1000 · 3.542) = 3 + log_10 3.542 ≈ 3 + log_10 3.54.
Greater accuracy can be obtained by interpolation:
log_10 3542 ≈ 3 + log_10 3.54 + 0.2 · (log_10 3.55 − log_10 3.54) ≈ 3.5492.
The value of 10^x can be determined by reverse look-up in the same table, since the logarithm is a monotonic function.
Computations
The product and quotient of two positive numbers c and d were routinely calculated as the sum and difference of their logarithms. The product cd or quotient c/d came from looking up the antilogarithm of the sum or difference, via the same table:
cd = 10^(log_10 c) · 10^(log_10 d) = 10^(log_10 c + log_10 d)
and
c/d = c · d^(−1) = 10^(log_10 c − log_10 d).
For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities.
Calculations of powers and roots are reduced to multiplications or divisions and lookups by
c^d = (10^(log_10 c))^d = 10^(d · log_10 c)
and
c^(1/d) = 10^((1/d) · log_10 c).
Trigonometric calculations were facilitated by tables that contained the common logarithms of trigonometric functions.
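The table-based workflow can be imitated in a few lines of Python; this is only a sketch of the procedure described above, with a dictionary standing in for a printed four-place table:

import math

# Toy "log table": common logarithms of 1.00 ... 9.99, rounded to four places.
table = {round(k / 100, 2): round(math.log10(k / 100), 4) for k in range(100, 1000)}

def antilog(y: float) -> float:
    return 10 ** y  # reverse look-up in a real table

# Product via addition of logarithms: 3.54 * 2.61
print(antilog(table[3.54] + table[2.61]))   # about 9.239 (exact: 9.2394)

# Square root via division of the logarithm: sqrt(7.29)
print(antilog(table[7.29] / 2))             # about 2.70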
Slide rules
Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms, as illustrated here:
For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables.
Analytic properties
A deeper study of logarithms requires the concept of a function. A function is a rule that, given one number, produces another number. An example is the function producing the x-th power of b from any real number x, where the base b is a fixed number. This function is written as f(x) = b^x. When b is positive and unequal to 1, we show below that f is invertible when considered as a function from the reals to the positive reals.
Existence
Let b be a positive real number not equal to 1 and let f(x) = b^x.
It is a standard result in real analysis that any continuous strictly monotonic function is bijective between its domain and range. This fact follows from the intermediate value theorem. Now, f is strictly increasing (for b > 1), or strictly decreasing (for 0 < b < 1), is continuous, has domain the set of all real numbers, and has range the positive real numbers. Therefore, f is a bijection from the reals to the positive reals. In other words, for each positive real number y, there is exactly one real number x such that b^x = y.
We let log_b denote the inverse of f. That is, log_b y is the unique real number x such that b^x = y. This function is called the base-b logarithm function or logarithmic function (or just logarithm).
Characterization by the product formula
The function log_b can also be essentially characterized by the product formula
log_b(xy) = log_b x + log_b y.
More precisely, the logarithm to any base b > 1 is the only increasing function f from the positive reals to the reals satisfying f(b) = 1 and
f(xy) = f(x) + f(y).
Graph of the logarithm function
As discussed above, the function log_b is the inverse to the exponential function x ↦ b^x. Therefore, their graphs correspond to each other upon exchanging the x- and the y-coordinates (or upon reflection at the diagonal line x = y): a point (t, u = b^t) on the graph of the exponential function yields a point (u, t = log_b u) on the graph of the logarithm and vice versa. As a consequence, log_b x diverges to infinity (gets bigger than any given number) if x grows to infinity, provided that b is greater than one. In that case, log_b is an increasing function. For b < 1, log_b x tends to minus infinity instead. When x approaches zero, log_b x goes to minus infinity for b > 1 (plus infinity for b < 1, respectively).
Derivative and antiderivative
Analytic properties of functions pass to their inverses. Thus, as f(x) = b^x is a continuous and differentiable function, so is log_b y. Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as the derivative of f(x) evaluates to (ln b) · b^x by the properties of the exponential function, the chain rule implies that the derivative of log_b x is given by
d/dx log_b x = 1/(x · ln b).
That is, the slope of the tangent touching the graph of the base-b logarithm at the point (x, log_b x) equals 1/(x · ln b).
The derivative of ln x is 1/x; this implies that ln x is the unique antiderivative of 1/x that has the value 0 for x = 1. It is this very simple formula that motivated to qualify as "natural" the natural logarithm; this is also one of the main reasons of the importance of the constant e.
The derivative with a generalized functional argument f(x) is
d/dx ln f(x) = f′(x)/f(x).
The quotient at the right hand side is called the logarithmic derivative of f. Computing f′(x) by means of the derivative of ln f(x) is known as logarithmic differentiation. The antiderivative of the natural logarithm ln x is:
∫ ln x dx = x · ln x − x + C.
Related formulas, such as antiderivatives of logarithms to other bases, can be derived from this equation using the change of base formula.
Integral representation of the natural logarithm
The natural logarithm of t can be defined as the definite integral:
ln t = ∫ from 1 to t of dx/x.
This definition has the advantage that it does not rely on the exponential function or any trigonometric functions; the definition is in terms of an integral of a simple reciprocal. As an integral, ln t equals the area between the x-axis and the graph of the function 1/x, ranging from x = 1 to x = t. This is a consequence of the fundamental theorem of calculus and the fact that the derivative of ln x is 1/x. Product and power logarithm formulas can be derived from this definition. For example, the product formula ln(tu) = ln t + ln u is deduced as:
ln(tu) = ∫ from 1 to tu of dx/x =(1) ∫ from 1 to t of dx/x + ∫ from t to tu of dx/x =(2) ln t + ∫ from 1 to u of dw/w = ln t + ln u.
The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (w = x/t). Geometrically, the splitting corresponds to dividing the area under the curve into the part from 1 to t and the part from t to tu. Rescaling the second part vertically by the factor t and shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the function 1/x again. Therefore, the second area, which is the integral of 1/x from t to tu, is the same as the integral of 1/x from 1 to u. This justifies the equality (2) with a more geometric proof.
The power formula ln(t^r) = r · ln t may be derived in a similar way:
ln(t^r) = ∫ from 1 to t^r of dx/x = ∫ from 1 to t of (r · w^(r−1)/w^r) dw = r ∫ from 1 to t of dw/w = r · ln t.
The second equality uses a change of variables (integration by substitution), w = x^(1/r).
The sum over the reciprocals of natural numbers,
1 + 1/2 + 1/3 + ⋯ + 1/n,
is called the harmonic series. It is closely tied to the natural logarithm: as n tends to infinity, the difference
(1 + 1/2 + 1/3 + ⋯ + 1/n) − ln n
converges (i.e. gets arbitrarily close) to a number known as the Euler–Mascheroni constant γ ≈ 0.5772. This relation aids in analyzing the performance of algorithms such as quicksort.
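The convergence is easy to observe numerically; a small, purely illustrative Python check:

import math

# H_n - ln(n) approaches the Euler–Mascheroni constant 0.57721... from above.
for n in (10, 1000, 100000):
    harmonic = sum(1 / k for k in range(1, n + 1))
    print(n, harmonic - math.log(n))
# prints values near 0.6264, 0.5777, 0.5772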
Transcendence of the logarithm
Real numbers that are not algebraic are called transcendental; for example, π and e are such numbers, but √2 is not. Almost all real numbers are transcendental. The logarithm is an example of a transcendental function. The Gelfond–Schneider theorem asserts that logarithms usually take transcendental, i.e. "difficult" values.
Calculation
Logarithms are easy to compute in some cases, such as log_10 1000 = 3. In general, logarithms can be calculated using power series or the arithmetic–geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision. Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently. Using look-up tables, CORDIC-like methods can be used to compute logarithms by using only the operations of addition and bit shifts. Moreover, the binary logarithm algorithm calculates log_2 x recursively, based on repeated squarings of x, taking advantage of the relation
log_2 (x^2) = 2 · log_2 x.
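A minimal sketch of that shift-and-square scheme in Python (the function name log2_bits is ours, not a library routine):

def log2_bits(x: float, bits: int = 30) -> float:
    # Integer part: scale x into [1, 2) by powers of two.
    result = 0.0
    while x < 1:
        x *= 2
        result -= 1
    while x >= 2:
        x /= 2
        result += 1
    # Fractional bits: log2(x**2) = 2*log2(x), so squaring shifts the binary
    # digits of log2(x) left; a square reaching [2, 4) emits a 1-bit.
    weight = 0.5
    for _ in range(bits):
        x *= x
        if x >= 2:
            x /= 2
            result += weight
        weight /= 2
    return result

print(log2_bits(10.0))   # about 3.321928, i.e. log2(10)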
Power series
Taylor series
For any real number z that satisfies 0 < z ≤ 2, the following formula holds:
ln z = (z−1) − (z−1)^2/2 + (z−1)^3/3 − (z−1)^4/4 + ⋯
Equating the function ln z to this infinite sum (series) is shorthand for saying that the function can be approximated to a more and more accurate value by the following expressions (known as partial sums):
(z−1), then (z−1) − (z−1)^2/2, then (z−1) − (z−1)^2/2 + (z−1)^3/3, and so on.
For example, with z = 1.5 the third approximation yields 0.4167, which is about 0.011 greater than ln 1.5 ≈ 0.4055, and the ninth approximation yields 0.40553, which is only about 0.0001 greater. The n-th partial sum can approximate ln z with arbitrary precision, provided the number of summands is large enough.
In elementary calculus, the series is said to converge to the function ln z, and the function is the limit of the series. It is the Taylor series of the natural logarithm at z = 1. The Taylor series of ln z provides a particularly useful approximation to ln(1+z) when z is small, |z| < 1, since then
ln(1+z) = z − z^2/2 + z^3/3 − ⋯ ≈ z.
For example, with z = 0.1 the first-order approximation gives ln 1.1 ≈ 0.1, which is less than 5% off the correct value 0.0953.
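The partial sums quoted above can be reproduced in a few lines of Python (the helper name ln_taylor is ours):

import math

def ln_taylor(z: float, terms: int) -> float:
    return sum((-1) ** (k + 1) * (z - 1) ** k / k for k in range(1, terms + 1))

print(ln_taylor(1.5, 3))   # 0.41666..., about 0.011 too high
print(ln_taylor(1.5, 9))   # 0.40553..., about 0.0001 too high
print(math.log(1.5))       # 0.405465...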
Inverse hyperbolic tangent
Another series is based on the inverse hyperbolic tangent function:
ln z = 2 · artanh((z−1)/(z+1)) = 2 · ((z−1)/(z+1) + (1/3)((z−1)/(z+1))^3 + (1/5)((z−1)/(z+1))^5 + ⋯),
for any real number z > 0. Using sigma notation, this is also written as
ln z = 2 · Σ over k ≥ 0 of (1/(2k+1)) · ((z−1)/(z+1))^(2k+1).
This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series approximate ln 1.5 with an error of about 3 × 10^(−6). The quick convergence for z close to 1 can be taken advantage of in the following way: given a low-accuracy approximation y ≈ ln z and putting A = z / exp(y),
the logarithm of z is:
ln z = y + ln A.
The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently. exp(y) can be calculated using the exponential series, which converges quickly provided y is not too large. Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10^b, so that ln z = ln a + b · ln 10.
A closely related method can be used to compute the logarithm of integers. Putting z = (n+1)/n in the above series, it follows that:
ln(n+1) = ln n + 2 · Σ over k ≥ 0 of (1/(2k+1)) · (1/(2n+1))^(2k+1).
If the logarithm of a large integer n is known, then this series yields a fast converging series for ln(n+1), with a rate of convergence of 1/(2n+1)^2.
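A small Python sketch of the artanh series and of the integer-stepping variant (the helper name ln_atanh_series is ours):

import math

def ln_atanh_series(z: float, terms: int) -> float:
    a = (z - 1) / (z + 1)
    return 2 * sum(a ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(ln_atanh_series(1.5, 3))  # 0.4054613..., error about 3e-6
print(math.log(1.5))            # 0.4054651...

# Stepping from ln(n) to ln(n+1) with z = (n+1)/n:
n = 10
print(math.log(n) + ln_atanh_series((n + 1) / n, 3))  # about ln(11) = 2.3978953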
Arithmetic–geometric mean approximation
The arithmetic–geometric mean yields high-precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work ln x is approximated to a precision of 2^(−p) (or p precise bits) by the following formula (due to Carl Friedrich Gauss):
ln x ≈ π / (2 · M(1, 2^(2−m)/x)) − m · ln 2.
Here M(x, y) denotes the arithmetic–geometric mean of x and y. It is obtained by repeatedly calculating the average (x + y)/2 (arithmetic mean) and √(xy) (geometric mean) of x and y and then letting those two numbers become the next x and y. The two numbers quickly converge to a common limit which is the value of M(x, y). m is chosen such that
x · 2^m > 2^(p/2)
to ensure the required precision. A larger m makes the M(x, y) calculation take more steps (the initial x and y are farther apart so it takes more steps to converge) but gives more precision. The constants π and ln 2 can be calculated with quickly converging series.
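A compact Python sketch of the Gauss formula (the helper names agm and ln_agm are ours; m = 30 is an ample choice for double precision at this size of x):

import math

def agm(x: float, y: float) -> float:
    # iterate arithmetic and geometric means until they coincide
    while abs(x - y) > 1e-15 * max(x, y):
        x, y = (x + y) / 2, math.sqrt(x * y)
    return x

def ln_agm(x: float, m: int = 30) -> float:
    # ln(x) ~ pi / (2 * M(1, 2**(2-m)/x)) - m*ln(2)
    return math.pi / (2 * agm(1.0, 2 ** (2 - m) / x)) - m * math.log(2)

print(ln_agm(10.0))      # about 2.302585, i.e. ln(10)
print(math.log(10.0))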
Feynman's algorithm
While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit-processing algorithm to compute the logarithm that is similar to long division and was later used in the Connection Machine. The algorithm relies on the fact that every real number x where 1 < x < 2 can be represented as a product of distinct factors of the form 1 + 2^(−k). The algorithm sequentially builds that product P, starting with P = 1 and k = 1: if P · (1 + 2^(−k)) < x, then it changes P to P · (1 + 2^(−k)). It then increases k by one regardless. The algorithm stops when k is large enough to give the desired accuracy. Because log x is the sum of the terms of the form log(1 + 2^(−k)) corresponding to those k for which the factor 1 + 2^(−k) was included in the product P, log x may be computed by simple addition, using a table of log(1 + 2^(−k)) for all k. Any base may be used for the logarithm table.
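The factor-table idea can be sketched in Python as follows (the names and the cut-off k < 40 are our choices for illustration):

import math

# Precomputed table of ln(1 + 2**-k); Feynman's point is that only this
# table and additions are needed once x is reduced to the interval (1, 2).
log_table = {k: math.log(1 + 2 ** -k) for k in range(1, 40)}

def ln_feynman(x: float) -> float:
    assert 1 < x < 2
    product, result = 1.0, 0.0
    for k in range(1, 40):
        factor = 1 + 2 ** -k
        if product * factor <= x:   # greedily keep factors that fit
            product *= factor
            result += log_table[k]
    return result

print(ln_feynman(1.7))   # about 0.5306283
print(math.log(1.7))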
Applications
Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function log x grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation.
Logarithmic scale
Scientific quantities are often expressed as logarithms of other quantities, using a logarithmic scale. For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios—10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the attenuation or amplification of electrical signals, to describe power levels of sounds in acoustics, and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels. In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm.
The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times and a 6.0 releases 1000 times the energy of a 4.0. Apparent magnitude measures the brightness of stars logarithmically. In chemistry the negative of the decimal logarithm, the decimal cologarithm, is indicated by the letter p. For instance, pH is the decimal cologarithm of the activity of hydronium ions (the form hydrogen ions take in water). The activity of hydronium ions in neutral water is 10^(−7) mol·L^(−1), hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^(−3) mol·L^(−1).
Semilog (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. Such a chart compresses a steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as an increase from 1 to 1 million. In such graphs, exponential functions of the form f(x) = a · b^x appear as straight lines with slope equal to the logarithm of b. Log–log graphs scale both axes logarithmically, which causes functions of the form f(x) = a · x^k to be depicted as straight lines with slope equal to the exponent k. This is applied in visualizing and analyzing power laws.
Psychology
Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have. Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the ratio between the distance to a target and the size of the target. In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less realistic than more recent models, such as Stevens's power law.)
Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 10 times as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.
Probability theory and statistics
Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm.
Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence.
Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables.
Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is (from 1 to 9) equals , regardless of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting.
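The Benford probabilities are one-liners to compute; a brief Python illustration:

import math

# P(d) = log10(1 + 1/d) for leading digits d = 1 ... 9
for d in range(1, 10):
    print(d, round(math.log10(1 + 1 / d), 3))
# 1 -> 0.301, 2 -> 0.176, 3 -> 0.125, ..., 9 -> 0.046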
The logarithm transformation is a type of data transformation used to bring the empirical distribution closer to the assumed one.
Computational complexity
Analysis of algorithms is a branch of computer science that studies the performance of algorithms (computer programs solving a certain problem). Logarithms are valuable for describing algorithms that divide a problem into smaller ones, and join the solutions of the subproblems.
For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, log_2 N comparisons, where N is the list's length. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to N · log N. The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model.
A function f(x) is said to grow logarithmically if f(x) is (exactly or approximately) proportional to the logarithm of x. (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any natural number N can be represented in binary form in no more than log_2 N + 1 bits. In other words, the amount of memory needed to store N grows logarithmically with N.
Entropy and chaos
Entropy is broadly a measure of the disorder of some system. In statistical thermodynamics, the entropy S of some physical system is defined as
S = −k · Σ over i of p_i · ln p_i.
The sum is over all possible states i of the system in question, such as the positions of gas particles in a container. Moreover, p_i is the probability that the state i is attained and k is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log_2 N bits.
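For the information-theoretic case, a minimal Python sketch (the helper name entropy_bits is ours):

import math

# Shannon entropy in bits: H = -sum(p_i * log2(p_i)).
def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.25] * 4))         # 2.0 bits, i.e. log2(4) for 4 equally likely messages
print(entropy_bits([0.5, 0.25, 0.25]))  # 1.5 bits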
Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive.
Fractals
Logarithms occur in definitions of the dimension of fractals. Fractals are geometric objects that are self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure ln 3 / ln 2 ≈ 1.58. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.
Music
Logarithms are related to musical tones and intervals. In equal temperament tunings, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. In the 12-tone equal temperament tuning common in modern Western music, each octave (doubling of frequency) is broken into twelve equally spaced intervals called semitones. For example, if the note A has a frequency of 440 Hz then the note B-flat has a frequency of 466 Hz. The interval between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly, the frequency ratios agree:
466/440 ≈ 493/466 ≈ 1.059 ≈ 2^(1/12).
Intervals between arbitrary pitches can be measured in octaves by taking the base-2 logarithm of the frequency ratio, can be measured in equally tempered semitones by taking the base-2^(1/12) logarithm (12 times the base-2 logarithm), or can be measured in cents, hundredths of a semitone, by taking the base-2^(1/1200) logarithm (1200 times the base-2 logarithm). The latter is used for finer encoding, as it is needed for finer measurements or non-equal temperaments.
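These three measures differ only by constant factors, as a short Python sketch shows (the helper name interval is ours):

import math

def interval(f1: float, f2: float):
    octaves = math.log2(f2 / f1)
    return octaves, 12 * octaves, 1200 * octaves   # octaves, semitones, cents

print(interval(440.0, 466.16))  # about (0.083, 1.0, 100): one semitone
print(interval(440.0, 880.0))   # (1.0, 12.0, 1200.0): one octave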
Number theory
Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x). The prime number theorem asserts that π(x) is approximately given by
x / ln x,
in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and x is prime is inversely proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by
Li(x) = ∫ from 2 to x of dt / ln t.
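The quality of the x / ln x estimate can be checked with a small sieve; a Python sketch (the helper name prime_count is ours, practical for small x only):

import math

def prime_count(x: int) -> int:
    # sieve of Eratosthenes, then count the primes up to x
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return sum(sieve)

for x in (10 ** 3, 10 ** 5):
    print(x, prime_count(x), round(x / math.log(x)))
# 1000: 168 vs 145; 100000: 9592 vs 8686; the ratio slowly approaches 1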
The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing π(x) and Li(x). The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm.
The logarithm of n factorial, n! = 1 · 2 · ⋯ · n, is given by
ln(n!) = ln 1 + ln 2 + ⋯ + ln n.
This can be used to obtain Stirling's formula, an approximation of n! for large n.
Generalizations
Complex logarithm
All the complex numbers a that solve the equation
e^a = z
are called complex logarithms of z, when z is (considered as) a complex number. A complex number is commonly represented as z = x + iy, where x and y are real numbers and i is an imaginary unit, the square of which is −1. Such a number can be visualized by a point in the complex plane. The polar form encodes a non-zero complex number z by its absolute value, that is, the (positive, real) distance r to the origin, and an angle between the real (x) axis and the line passing through both the origin and z. This angle is called the argument of z.
The absolute value r of z is given by
r = √(x^2 + y^2).
Using the geometrical interpretation of sine and cosine and their periodicity in 2π, any complex number z may be denoted as
z = x + iy = r · (cos φ + i sin φ) = r · (cos(φ + 2kπ) + i sin(φ + 2kπ)),
for any integer number k. Evidently the argument of z is not uniquely specified: both φ and φ′ = φ + 2kπ are valid arguments of z for all integers k, because adding 2kπ radians or k⋅360° to φ corresponds to "winding" around the origin counter-clock-wise by k turns. The resulting complex number is always z. One may select exactly one of the possible arguments of z as the so-called principal argument, denoted Arg(z), with a capital A, by requiring φ to belong to one, conveniently selected turn, e.g. −π < φ ≤ π or 0 ≤ φ < 2π. These regions, where the argument of z is uniquely determined, are called branches of the argument function.
Euler's formula connects the trigonometric functions sine and cosine to the complex exponential:
e^(iφ) = cos φ + i sin φ.
Using this formula, and again the periodicity, the following identities hold:
z = r · (cos(φ + 2kπ) + i sin(φ + 2kπ)) = r · e^(i(φ + 2kπ)) = e^(ln r) · e^(i(φ + 2kπ)) = e^(ln r + i(φ + 2kπ)) = e^(a_k),
where ln r is the unique real natural logarithm, a_k denote the complex logarithms of z, and k is an arbitrary integer. Therefore, the complex logarithms of z, which are all those complex values a_k for which the a_k-th power of e equals z, are the infinitely many values
a_k = ln r + i(φ + 2kπ), for arbitrary integers k.
Taking k such that φ + 2kπ is within the defined interval for the principal arguments, then a_k is called the principal value of the logarithm, denoted Log(z), again with a capital L. The principal argument of any positive real number x is 0; hence Log(x) is a real number and equals the real (natural) logarithm. However, the above formulas for logarithms of products and powers do not generalize to the principal value of the complex logarithm.
Confining the arguments of z to the interval (−π, π], the corresponding branch of the complex logarithm has discontinuities all along the negative real axis. This discontinuity arises from jumping to the other boundary in the same branch, when crossing a boundary, i.e. not changing to the corresponding k-value of the continuously neighboring branch. Such a locus is called a branch cut. Dropping the range restrictions on the argument makes the relations "argument of z", and consequently the "logarithm of z", multi-valued functions.
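Python's standard library exposes exactly this principal branch; a short illustration with cmath:

import cmath, math

z = -1 + 1j
r, phi = abs(z), cmath.phase(z)    # polar form: r = sqrt(2), phi = 3*pi/4

# Principal value Log(z) = ln(r) + i*phi, with phi in (-pi, pi]:
print(cmath.log(z))                # (0.3465736 + 2.3561945j)
print(math.log(r) + 1j * phi)      # the same value, assembled by hand

# Every other complex logarithm differs by an integer multiple of 2*pi*i:
print(math.log(r) + 1j * (phi + 2 * math.pi))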
Inverses of other exponential functions
Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential. Another example is the p-adic logarithm, the inverse function of the p-adic exponential. Both are defined via Taylor series analogous to the real case. In the context of differential geometry, the exponential map maps the tangent space at a point of a manifold to a neighborhood of that point. Its inverse is also called the logarithmic (or log) map.
In the context of finite groups exponentiation is given by repeatedly multiplying one group element b with itself. The discrete logarithm is the integer n solving the equation
b^n = x,
where x is an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field.
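The asymmetry is easy to see even in a toy group; a Python sketch with deliberately tiny, insecure parameters:

# Exponentiation mod p is fast (built-in three-argument pow) ...
p, b = 101, 2               # toy modulus and base; real systems use huge primes
x = pow(b, 47, p)           # 2**47 mod 101, computed efficiently

# ... but inverting it is a search; brute force only works at toy sizes.
def discrete_log(b, x, p):
    for n in range(p):
        if pow(b, n, p) == x:
            return n

print(x, discrete_log(b, x, p))   # recovers the exponent 47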
Further logarithm-like inverse functions include the double logarithm ln(ln(x)), the super- or hyper-4-logarithm (a slight variation of which is called iterated logarithm in computer science), the Lambert W function, and the logit. They are the inverse functions of the double exponential function, tetration, of f(w) = w · e^w, and of the logistic function, respectively.
Related concepts
From the perspective of group theory, the identity log(cd) = log c + log d expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups. By means of that isomorphism, the Haar measure (Lebesgue measure) dx on the reals corresponds to the Haar measure dx/x on the positive reals. The non-negative reals not only have a multiplication, but also have addition, and form a semiring, called the probability semiring; this is in fact a semifield. The logarithm then takes multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving an isomorphism of semirings between the probability semiring and the log semiring.
Logarithmic one-forms appear in complex analysis and algebraic geometry as differential forms with logarithmic poles.
The polylogarithm is the function defined by
Li_s(z) = Σ over k ≥ 1 of z^k / k^s.
It is related to the natural logarithm by Li_1(z) = −ln(1 − z). Moreover, Li_s(1) equals the Riemann zeta function ζ(s).
| Mathematics | Arithmetic | null |
17874 | https://en.wikipedia.org/wiki/Ligament | Ligament | A ligament is the fibrous connective tissue that connects bones to other bones. It also connects flight feathers to bones, in dinosaurs and birds. All 30,000 species of amniotes (land animals with internal bones) have ligaments.
It is also known as articular ligament, articular larua, fibrous ligament, or true ligament.
Comparative anatomy
Ligaments are similar to tendons and fasciae as they are all made of connective tissue. The differences among them are in the connections that they make: ligaments connect one bone to another bone, tendons connect muscle to bone, and fasciae connect muscles to other muscles. These are all found in the skeletal system of the human body. Ligaments cannot usually be regenerated naturally; however, there are periodontal ligament stem cells located near the periodontal ligament which are involved in the adult regeneration of the periodontal ligament.
The study of ligaments is known as desmology.
Humans
Other ligaments in the body include the:
Peritoneal ligament: a fold of peritoneum or other membranes.
Fetal remnant ligament: the remnants of a fetal tubular structure.
Periodontal ligament: a group of fibers that attach the cementum of teeth to the surrounding alveolar bone.
Articular ligaments
"Ligament" most commonly refers to a band of dense regular connective tissue bundles made of collagenous fibers, with bundles protected by dense irregular connective tissue sheaths. Ligaments connect bones to other bones to form joints, while tendons connect bone to muscle. Some ligaments limit the mobility of articulations or prevent certain movements altogether.
Capsular ligaments are part of the articular capsule that surrounds synovial joints. They act as mechanical reinforcements. Extra-capsular ligaments join in harmony with the other ligaments and provide joint stability. Intra-capsular ligaments, which are much less common, also provide stability but permit a far larger range of motion. Cruciate ligaments are paired ligaments in the form of a cross.
Ligaments are viscoelastic. They gradually strain when under tension and return to their original shape when the tension is removed. However, they cannot retain their original shape when extended past a certain point or for a prolonged period of time. This is one reason why dislocated joints must be set as quickly as possible: if the ligaments lengthen too much, then the joint will be weakened, becoming prone to future dislocations. Athletes, gymnasts, dancers, and martial artists perform stretching exercises to lengthen their ligaments, making their joints more supple.
The term hypermobility refers to the characteristic of people with more-elastic ligaments, allowing their joints to stretch and contort further; this is sometimes still called double-jointedness.
The consequence of a broken ligament can be instability of the joint. Not all broken ligaments need surgery, but, if surgery is needed to stabilise the joint, the broken ligament can be repaired. Scar tissue may prevent this. If it is not possible to fix the broken ligament, other procedures such as the Brunelli procedure can correct the instability. Instability of a joint can over time lead to wear of the cartilage and eventually to osteoarthritis.
Artificial ligaments
One of the most often torn ligaments in the body is the anterior cruciate ligament (ACL). The ACL is one of the ligaments crucial to knee stability, and persons who tear their ACL often undergo reconstructive surgery, which can be done through a variety of techniques and materials. One of these techniques is the replacement of the ligament with an artificial material. Artificial ligaments are made of synthetic polymer material, such as polyacrylonitrile fiber, polypropylene, PET (polyethylene terephthalate), or polyNaSS (poly(sodium styrene sulfonate)).
Examples
There are about 900 ligaments in an average adult human body, of which about 25 are listed here.
Peritoneal ligaments
Certain folds of peritoneum are referred to as ligaments. Examples include:
The hepatoduodenal ligament, that surrounds the hepatic portal vein and other vessels as they travel from the duodenum to the liver.
The broad ligament of the uterus, also a fold of peritoneum.
Fetal remnant ligaments
Certain tubular structures from the fetal period are referred to as ligaments after they close up and turn into cord-like structures:
| Biology and health sciences | Tissues | null |
17895 | https://en.wikipedia.org/wiki/Leap%20year | Leap year | A leap year (also known as an intercalary year or bissextile year) is a calendar year that contains an additional day (or, in the case of a lunisolar calendar, a month) compared to a common year. The 366th day (or 13th month) is added to keep the calendar year synchronised with the astronomical year or seasonal year. Since astronomical events and seasons do not repeat in a whole number of days, calendars having a constant number of days each year will unavoidably drift over time with respect to the event that the year is supposed to track, such as seasons. By inserting ("intercalating") an additional day—a leap day—or month—a leap month—into some years, the drift between a civilization's dating system and the physical properties of the Solar System can be corrected.
An astronomical year lasts slightly less than 365¼ days. The historic Julian calendar has three common years of 365 days followed by a leap year of 366 days, by extending February to 29 days rather than the common 28. The Gregorian calendar, the world's most widely used civil calendar, makes a further adjustment for the small error in the Julian algorithm. Each leap year has 366 days instead of 365. This extra leap day occurs in each year that is a multiple of 4, except for years evenly divisible by 100 but not by 400.
In the lunisolar Hebrew calendar, Adar Aleph, a 13th lunar month, is added seven times every 19 years to the twelve lunar months in its common years to keep its calendar year from drifting through the seasons. In the Solar Hijri and Bahá'í calendars, a leap day is added when needed to ensure that the following year begins on the March equinox.
The term leap year probably comes from the fact that a fixed date in the Gregorian calendar normally advances one day of the week from one year to the next, but the day of the week in the 12 months following the leap day (from 1 March through 28 February of the following year) will advance two days due to the extra day, thus leaping over one day in the week. For example, since 1 March was a Friday in 2024, it will be a Saturday in 2025, a Sunday in 2026, and a Monday in 2027, but will then "leap" over Tuesday to fall on a Wednesday in 2028.
The length of a day is also occasionally corrected by inserting a leap second into Coordinated Universal Time (UTC) because of variations in Earth's rotation period. Unlike leap days, leap seconds are not introduced on a regular schedule because variations in the length of the day are not entirely predictable.
Leap years can present a problem in computing, known as the leap year bug, when a year is not correctly identified as a leap year or when 29 February is not handled correctly in logic that accepts or manipulates dates.
Julian calendar
On 1 January 45 BC, by edict, Julius Caesar reformed the historic Roman calendar to make it a consistent solar calendar (rather than one which was neither strictly lunar nor strictly solar), thus removing the need for frequent intercalary months. His rule for leap years was a simple one: add a leap day every 4 years. This algorithm is close to reality: a Julian year lasts 365.25 days, a mean tropical year about 365.2422 days, a difference of only about 11 minutes per year. Consequently, even this Julian calendar drifts out of 'true' by about 3 days every 400 years. The Julian calendar continued in use unaltered for about 1600 years until the Catholic Church became concerned about the widening divergence between the March Equinox and 21 March, as explained at Gregorian calendar, below.
Prior to Caesar's creation of what would be the Julian calendar, February was already the shortest month of the year for Romans. In the Roman calendar (after the reform of Numa Pompilius that added January and February), all months except February had an odd number of days: 29 or 31. This was because of a Roman superstition that even numbers were unlucky. When Caesar changed the calendar to follow the solar year closely, he made all months have 30 or 31 days, leaving February unchanged except in leap years.
Gregorian calendar
In the Gregorian calendar, the standard calendar in most of the world, almost every fourth year is a leap year. Each leap year, the month of February has 29 days instead of 28. Adding one extra day in the calendar every four years compensates for the fact that a period of 365 days is shorter than a tropical year by almost six hours. However, this correction is excessive and the Gregorian reform modified the Julian calendar's scheme of leap years as follows:
Every year that is exactly divisible by four is a leap year, except for years that are exactly divisible by 100, but these centurial years are leap years if they are exactly divisible by 400. For example, the years 1700, 1800, and 1900 are not leap years, but the years 1600 and 2000 are.
Whereas the Julian calendar year incorrectly summarised Earth's tropical year as 365.25 days, the Gregorian calendar makes these exceptions to follow a calendar year of 365.2425 days. This more closely resembles a mean tropical year of 365.2422 days. Over a period of four centuries, the accumulated error of adding a leap day every four years amounts to about three extra days. The Gregorian calendar therefore omits three leap days every 400 years, which is the length of its leap cycle. This is done by omitting 29 February in the three century years (multiples of 100) that are not multiples of 400. The years 2000 and 2400 are leap years, but not 1700, 1800, 1900, 2100, 2200, and 2300. By this rule, an entire leap cycle is 400 years, which totals 146,097 days, and the average number of days per year is 365 + 1/4 − 1/100 + 1/400 = 365 + 97/400 = 365.2425. This rule could be applied to years before the Gregorian reform to create a proleptic Gregorian calendar, though the result would not match any historical records.
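The Gregorian rule translates directly into code; a minimal Python sketch:

def is_gregorian_leap(year: int) -> bool:
    # divisible by 4, except centurial years, which must be divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1600, 1700, 1800, 1900, 2000, 2024) if is_gregorian_leap(y)])
# [1600, 2000, 2024]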
The Gregorian calendar was designed to keep the vernal equinox on or close to 21 March, so that the date of Easter (celebrated on the Sunday after the ecclesiastical full moon that falls on or after 21 March) remains close to the vernal equinox. The "Accuracy" section of the "Gregorian calendar" article discusses how well the Gregorian calendar achieves this objective, and how well it approximates the tropical year.
Leap day in the Julian and Gregorian calendars
The intercalary day that usually occurs every 4 years is called the leap day and is created by adding an extra day to February. This day is added to the calendar in leap years as a corrective measure because the Earth does not orbit the Sun in precisely 365 days. Since about the 15th century, this extra day has been 29 February, but when the Julian calendar was introduced, the leap day was handled differently in two respects. First, leap day fell within February and not at the end: 24 February was doubled to create, strangely to modern eyes, two days both dated 24 February. Second, the leap day was simply not counted, so that a leap year still had 365 days.
Early Roman practice
The early Roman calendar was a lunisolar one that consisted of 12 months, for a total of 355 days. In addition, a 27- or 28-day intercalary month, the Mensis Intercalaris, was sometimes inserted into February, at the first or second day after the Terminalia (23 February), to resynchronise the lunar and solar cycles. The remaining days of Februarius were discarded. This intercalary month, named Intercalaris or Mercedonius, contained 27 days. The religious festivals that were normally celebrated in the last 5 days of February were moved to the last 5 days of Intercalaris. The lunisolar calendar was abandoned about 450 BC by the decemvirs, who implemented the Roman Republican calendar, used until 46 BC. The days of these calendars were counted down (inclusively) to the next named day, so 24 February was ante diem sextum Kalendas Martias ("the sixth day before the calends of March"), often abbreviated a. d. VI Kal. Mart. The Romans counted days inclusively in their calendars, so this was the fifth day before 1 March when counted in the modern exclusive manner (i.e., not including both the starting and ending day). Because only 22 or 23 days were effectively added, not a full lunation, the calends and ides of the Roman Republican calendar were no longer associated with the new moon and full moon.
Julian reform
In Caesar's revised calendar, there was just one intercalary day (nowadays called the leap day) to be inserted every fourth year, and this too was done after 23 February. To create the intercalary day, the existing a. d. VI Kal. Mart. (sixth day (inclusive: i.e. what we would call the fifth day before) before the Kalends (first day) of March, i.e. what we would call 24 February) was doubled, producing a. d. bis VI Kal. Mart. (a second sixth day before the Kalends of March). This bis sextum ("twice sixth") was rendered in later languages as "bissextile": the "bissextile day" is the leap day, and a "bissextile year" is a year which includes a leap day. This second instance of the sixth day before the Kalends of March was inserted in calendars between the "normal" fifth and sixth days. By legal fiction, the Romans treated both the first "sixth day" and the additional "sixth day" before the Kalends of March as one day. Thus a child born on either of those days in a leap year would have its first birthday on the following sixth day before the Kalends of March. In a leap year in the original Julian calendar, there were indeed two days both numbered 24 February. This practice continued for another fifteen to seventeen centuries, even after most countries had adopted the Gregorian calendar.
For legal purposes, the two days of the bis sextum were considered to be a single day, with the second sixth being intercalated; but in common practice by the year 238, when Censorinus wrote, the intercalary day was followed by the last five days of February, a. d. VI, V, IV, III, and pridie Kal. Mart. (the days numbered 24, 25, 26, 27, and 28 from the beginning of February in a common year), so that the intercalated day was the first of the doubled pair. Thus the intercalated day was effectively inserted between the 23rd and 24th days of February. All later writers, including Macrobius about 430, Bede in 725, and other medieval computists (calculators of Easter), continued to state that the bissextum (bissextile day) occurred before the last five days of February.
In England, the Church and civil society continued the Roman practice whereby the leap day was simply not counted, so that a leap year was only reckoned as 365 days. Henry III's 1236 statute instructed magistrates to treat the leap day and the day before as one day. The practical application of the rule is obscure. It was regarded as in force in the time of the famous lawyer Sir Edward Coke (1552–1634) because he cites it in his Institutes of the Lawes of England. However, Coke merely quotes the Act with a short translation and does not give practical examples.
29 February
Replacement (by 29 February) of the awkward practice of having two days with the same date appears to have evolved by custom and practice; the etymological origin of the term "bissextile" seems to have been lost. In England in the fifteenth century, "29 February" appears increasingly often in legal documents, although the records of the proceedings of the House of Commons of England continued to use the old system until the middle of the sixteenth century. It was not until the passage of the Calendar (New Style) Act 1750 that 29 February was formally recognised in British law.
Liturgical practices
In the liturgical calendar of the Christian churches, the placement of the leap day is significant because of the date of the feast of Saint Matthias, which is defined as the sixth day before 1 March (counting inclusively). The Church of England's Book of Common Prayer was still using the "two days with the same date" system in its 1542 edition; it first included a calendar which used entirely consecutive day counting from 1662 and showed leap day as falling on 29 February. In the 1680s, the Church of England declared 25 February to be the feast of St Matthias.
Until 1970, the Roman Catholic Church always celebrated the feast of Saint Matthias on the sixth day before the Kalends of March (a. d. VI Kal. Mart.), so if the days were numbered from the beginning of the month, it was named 24 February in common years, but the presence of the bissextum in a bissextile year immediately before that day shifted the latter day to 25 February in leap years, with the Vigil of St. Matthias shifting from 23 February to the leap day of 24 February. This shift did not take place in pre-Reformation Norway and Iceland; Pope Alexander III ruled that either practice was lawful. Other feasts normally falling on 25–28 February in common years are also shifted to the following day in a leap year (although they would be on the same day according to the Roman notation). The practice is still observed by those who use the older calendars.
In the Eastern Orthodox Church, the feast of St. John Cassian is celebrated on 29 February, but he is instead commemorated at Compline on 28 February in non-leap years. The feast of St. Matthias is celebrated in August, so leap years do not affect his commemoration, and, while the feast of the First and Second Findings of the Head of John the Baptist is celebrated on 24 February, the Orthodox church calculates days from the beginning of the current month rather than counting down days to the Kalends of the following month, so this feast is not affected either. Thus, only the feast of St. John Cassian and any movable feasts associated with the Lenten or Pre-Lenten cycles are affected.
Folk traditions
In Ireland and Britain, it is a tradition that women may propose marriage only in leap years. While it has been claimed that the tradition was initiated by Saint Patrick or Brigid of Kildare in 5th century Ireland, this is dubious, as the tradition has not been attested before the 19th century. Supposedly, a 1288 law by Queen Margaret of Scotland (then age five and living in Norway), required that fines be levied if a marriage proposal was refused by the man; compensation was deemed to be a pair of leather gloves, a single rose, £1, and a kiss. In some places the tradition was tightened to restricting female proposals to the modern leap day, 29 February, or to the medieval (bissextile) leap day, 24 February.
According to Felten: "A play from the turn of the 17th century, 'The Maydes Metamorphosis,' has it that 'this is leape year/women wear breeches.' A few hundred years later, breeches wouldn't do at all: Women looking to take advantage of their opportunity to pitch woo were expected to wear a scarlet petticoat – fair warning, if you will."
In Finland, the tradition is that if a man refuses a woman's proposal on leap day, he should buy her the fabrics for a skirt.
In France, since 1980, a satirical newspaper titled La Bougie du Sapeur is published only in leap years, on 29 February.
In Greece, marriage in a leap year is considered unlucky. One in five engaged couples in Greece will plan to avoid getting married in a leap year.
In February 1988 the town of Anthony, Texas, declared itself the "leap year capital of the world", and an international leapling birthday club was started.
Birthdays
A person born on February 29 may be called a "leapling" or a "leaper". In common years, they celebrate their birthdays on 28 February or 1 March.
Technically, a leapling will have fewer birthday anniversaries than their age in years. This phenomenon may be exploited for dramatic effect when a person is declared to be only a quarter of their actual age, by counting their leap-year birthday anniversaries only. For example, in Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance, Frederic (the pirate apprentice) discovers that he is bound to serve the pirates until his 21st birthday (that is, when he turns 88 years old, since 1900 was not a leap year) rather than until his 21st year.
For legal purposes, legal birthdays depend on how local laws count time intervals.
Taiwan
The Civil Code of Taiwan, in force since 10 October 1929, implies that the legal birthday of a leapling is 28 February in common years.
Hong Kong
Since 1990, non-retroactively, Hong Kong considers the legal birthday of a leapling to be 1 March in common years.
UK
In the UK 1 March is considered to be a leapling's legal birthday.
Revised Julian calendar
The Revised Julian calendar adds an extra day to February in years that are multiples of four, except for years that are multiples of 100 that do not leave a remainder of 200 or 600 when divided by 900. This rule agrees with the rule for the Gregorian calendar until 2799. The first year that dates in the Revised Julian calendar will not agree with those in the Gregorian calendar will be 2800, because it will be a leap year in the Gregorian calendar but not in the Revised Julian calendar.
This rule gives an average year length of 365.242222 days. This is a very good approximation to the mean tropical year, but because the vernal equinox year is slightly longer, the Revised Julian calendar, for the time being, does not do as good a job as the Gregorian calendar at keeping the vernal equinox on or close to 21 March.
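The two rules and their first divergence can be checked in a few lines of Python (helper names ours):

def is_revised_julian_leap(year: int) -> bool:
    if year % 4 != 0:
        return False
    if year % 100 != 0:
        return True
    return year % 900 in (200, 600)   # centurial years

def is_gregorian_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in range(2000, 3001, 100)
       if is_revised_julian_leap(y) != is_gregorian_leap(y)])
# [2800, 2900]: the calendars first disagree in 2800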
Baháʼí calendar
The Baháʼí calendar is a solar calendar composed of 19 months of 19 days each (361 days). Years begin at Naw-Rúz, on the vernal equinox, on or about 21 March. A period of "Intercalary Days", called Ayyam-i-Ha, is inserted before the 19th month. This period normally has 4 days, but an extra day is added when needed to ensure that the following year starts on the vernal equinox. This is calculated and known years in advance.
Bengali, Indian and Thai calendars
The Revised Bengali Calendar of Bangladesh and the Indian National Calendar organise their leap years so that every leap day is close to 29 February in the Gregorian calendar and vice versa. This makes it easy to convert dates to or from Gregorian.
The Thai solar calendar uses the Buddhist Era (BE) but has been synchronised with the Gregorian since AD 1941.
Chinese calendar
The Chinese calendar is lunisolar, so a leap year has an extra month, often called an embolismic month after the Greek word for it. In the Chinese calendar, the leap month is added according to a rule which ensures that month 11 is always the month that contains the northern winter solstice. The intercalary month takes the same number as the preceding month; for example, if it follows the second month (二月) then it is simply called "leap second month", i.e. 闰二月.
Hebrew calendar
The Hebrew calendar is lunisolar with an embolismic month. This extra month is called Adar Rishon (first Adar) and is added before Adar, which then becomes Adar Sheini (second Adar). According to the Metonic cycle, this is done seven times every nineteen years (specifically, in years 3, 6, 8, 11, 14, 17, and 19). This is to ensure that Passover (Pesach) is always in the spring as required by the Torah (Pentateuch) in many verses relating to Passover.
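The underlying arithmetic: nineteen solar years are very nearly 235 lunar months, and twelve common years of 12 months plus seven leap years of 13 months give 12 × 12 + 7 × 13 = 235 months, keeping the lunar months aligned with the solar year.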
In addition, the Hebrew calendar has postponement rules that postpone the start of the year by one or two days. These postponement rules reduce the number of different combinations of year length and starting days of the week from 28 to 14, and regulate the location of certain religious holidays in relation to the Sabbath. In particular, the first day of the Hebrew year can never be Sunday, Wednesday, or Friday. This rule is known in Hebrew as "lo adu rosh" (לא אד"ו ראש), i.e., "Rosh [ha-Shanah, first day of the year] is not Sunday, Wednesday, or Friday" (as the Hebrew word adu is written by three Hebrew letters signifying Sunday, Wednesday, and Friday). Accordingly, the first day of Passover is never Monday, Wednesday, or Friday. This rule is known in Hebrew as "lo badu pesach" (לא בד"ו פסח), which has a double meaning: "Passover is not a legend", but also "Passover is not Monday, Wednesday, or Friday" (as the Hebrew word badu is written by three Hebrew letters signifying Monday, Wednesday, and Friday).
One reason for this rule is that Yom Kippur, the holiest day in the Hebrew calendar and the tenth day of the Hebrew year, now must never be adjacent to the weekly Sabbath (which is Saturday), i.e., it must never fall on Friday or Sunday, in order not to have two adjacent Sabbath days. However, Yom Kippur can still be on Saturday. A second reason is that Hoshana Rabbah, the 21st day of the Hebrew year, will never be on Saturday. These rules for the Feasts do not apply to the years from the Creation to the deliverance of the Hebrews from Egypt under Moses. It was at that time (cf. Exodus 13) that the God of Abraham, Isaac and Jacob gave the Hebrews their "Law" including the days to be kept holy and the feast days and Sabbaths.
Years consisting of 12 months have between 353 and 355 days. In a kesidrah ("in order") 354-day year, months have alternating 30 and 29 day lengths. In a chaserah ("lacking") year, the month of Kislev is reduced to 29 days. In a shlemah ("filled") year, the month of Marcheshvan is increased to 30 days. 13-month years follow the same pattern, with the addition of the 30-day Adar Alef, giving them between 383 and 385 days.
Islamic calendars
The observed and calculated versions of the lunar Islamic calendar do not have regular leap days, even though both have lunar months containing 29 or 30 days, generally in alternating order. However, the tabular Islamic calendar used by Islamic astronomers during the Middle Ages and still used by some Muslims does have a regular leap day added to the last month of the lunar year in 11 years of a 30-year cycle. This additional day is found at the end of the last month, Dhu al-Hijjah, which is also the month of the Hajj.
The Solar Hijri calendar is the modern Iranian calendar. It is an observational calendar that starts on the spring equinox (Northern Hemisphere) and adds a single intercalated day to the last month (Esfand) once every 4 or 5 years; the first leap year occurs as the fifth year of the typical 33-year cycle and the remaining leap years occur every 4 years through the remainder of the 33-year cycle. This system has less periodic deviation or jitter from its mean year than the Gregorian calendar and operates on the simple rule that New Year's Day must fall in the 24 hours of the vernal equinox. The 33-year period is not completely regular; every so often the 33-year cycle will be broken by a cycle of 29 years.
The Hijri-Shamsi calendar, also adopted by the Ahmadiyya Community, is based on solar calculations and is similar to the Gregorian calendar in its structure with the exception that its epoch is the Hijra.
Coptic and Ethiopian calendars
The Coptic calendar has 13 months, 12 of 30 days each, and one at the end of the year of 5 days, or 6 days in leap years. The Coptic leap year follows the same rules as the Julian calendar, so that the extra month always has 6 days in the year before a Julian leap year. The Ethiopian calendar has 12 months of 30 days plus 5 or 6 epagomenal days, which comprise a 13th month.
| Technology | Timekeeping | null |
17927 | https://en.wikipedia.org/wiki/Logic%20programming | Logic programming | Logic programming is a programming, database and knowledge representation paradigm based on formal logic. A logic program is a set of sentences in logical form, representing knowledge about some problem domain. Computation is performed by applying logical reasoning to that knowledge, to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:
A :- B1, ..., Bn.
and are read as declarative sentences in logical form:
A if B1 and ... and Bn.
A is called the head of the rule, B1, ..., Bn is called the body, and the Bi are called literals or conditions. When n = 0, the rule is called a fact and is written in the simplified form:
A.
Queries (or goals) have the same syntax as the bodies of rules and are commonly written in the form:
?- B1, ..., Bn.
In the simplest case of Horn clauses (or "definite" clauses), all of the A, B1, ..., Bn are atomic formulae of the form p(t1 ,..., tm), where p is a predicate symbol naming a relation, like "motherhood", and the ti are terms naming objects (or individuals). Terms include both constant symbols, like "charles", and variables, such as X, which start with an upper case letter.
Consider, for example, the following Horn clause program:
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(X, Y) :-
mother_child(X, Y).
parent_child(X, Y) :-
father_child(X, Y).
grandparent_child(X, Y) :-
parent_child(X, Z),
parent_child(Z, Y).
Given a query, the program produces answers.
For instance, for the query ?- parent_child(X, william), the single answer is
X = charles
Various queries can be asked. For instance
the program can be queried both to generate grandparents and to generate grandchildren. It can even be used to generate all pairs of grandchildren and grandparents, or simply to check if a given pair is such a pair:
?- grandparent_child(X, william).
X = elizabeth
?- grandparent_child(elizabeth, Y).
Y = william;
Y = harry.
?- grandparent_child(X, Y).
X = elizabeth
Y = william;
X = elizabeth
Y = harry.
?- grandparent_child(william, harry).
no
?- grandparent_child(elizabeth, harry).
yes
Although Horn clause logic programs are Turing complete, for most practical applications Horn clause programs need to be extended to "normal" logic programs with negative conditions. For example, the definition of sibling uses a negative condition, where the predicate = is defined by the clause X = X:
sibling(X, Y) :-
parent_child(Z, X),
parent_child(Z, Y),
not(X = Y).
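For example, with the family program given in the introduction, the expected behaviour of this definition (shown in the style of the earlier query examples) is that two children of the same parent are siblings, but a child is not its own sibling:
?- sibling(william, harry).
yes
?- sibling(william, william).
no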
Logic programming languages that include negative conditions have the knowledge representation capabilities of a non-monotonic logic.
In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be controlled by the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures. From this point of view, clause A :- B1,...,Bn is understood as:
to solve A, solve B1, and ... and solve Bn.
Negative conditions in the bodies of clauses also have a procedural interpretation, known as negation as failure: A negative literal not B is deemed to hold if and only if the positive literal B fails to hold.
Much of the research in the field of logic programming has been concerned with trying to develop a logical semantics for negation as failure and with developing other semantics and other implementations for negation. These developments have been important, in turn, for supporting the development of formal methods for logic-based program verification and program transformation.
History
The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the clausal form of logic for representing computer programs was made by Cordell Green. This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language that places no constraints on the order in which operations are performed.
Logic programming, with its current syntax of facts and rules, can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in artificial intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert.
Although it was based on the proof methods of logic, Planner, developed by Carl Hewitt at MIT, was the first language to emerge within this proceduralist paradigm. Planner featured pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was Micro-Planner, a subset of Planner implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. Winograd used Micro-Planner to implement the landmark natural-language understanding program SHRDLU. For the sake of efficiency, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the programming languages QA4, Popler, Conniver, QLISP, and the concurrent language Ether.
Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover.
In the meanwhile, Alain Colmerauer in Marseille was working on natural-language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer invited Kowalski to Marseille, and together they discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem provers, like hyper-resolution, behave as bottom-up parsers, and others, like SL resolution (1971), behave as top-down parsers.
It was in the following summer of 1972, that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications in clausal form. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, and that SL-resolution could be restricted (and generalised) to SLD resolution. Kowalski's procedural interpretation and SLD were described in a 1973 memo, published in 1974.
Colmerauer, with Philippe Roussel, used the procedural interpretation as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David H. D. Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp. Edinburgh Prolog became the de facto standard and strongly influenced the definition of ISO standard Prolog.
Logic programming gained international attention during the 1980s, when it was chosen by the Japanese Ministry of International Trade and Industry to develop the software for the Fifth Generation Computer Systems (FGCS) project. The FGCS project aimed to use logic programming to develop advanced Artificial Intelligence applications on massively parallel computers. Although the project initially explored the use of Prolog, it later adopted the use of concurrent logic programming, because it was closer to the FGCS computer architecture.
However, the committed choice feature of concurrent logic programming interfered with the language's logical semantics and with its suitability for knowledge representation and problem solving applications. Moreover, the parallel computer systems developed in the project failed to compete with advances taking place in the development of more conventional, general-purpose computers. Together these two issues resulted in the FGCS project failing to meet its objectives. Interest in both logic programming and AI fell into world-wide decline.
In the meanwhile, more declarative logic programming approaches, including those based on the use of Prolog, continued to make progress independently of the FGCS project. In particular, although Prolog was developed to combine declarative and procedural representations of knowledge, the purely declarative interpretation of logic programs became the focus for applications in the field of deductive databases. Work in this field became prominent around 1977, when Hervé Gallaire and Jack Minker organized a workshop on logic and databases in Toulouse. The field was eventually renamed as Datalog.
This focus on the logical, declarative reading of logic programs was given further impetus by the development of constraint logic programming in the 1980s and Answer Set Programming in the 1990s. It is also receiving renewed emphasis in recent applications of Prolog.
The Association for Logic Programming (ALP) was founded in 1986 to promote logic programming. Its official journal until 2000 was The Journal of Logic Programming, whose founding editor-in-chief was J. Alan Robinson. In 2001, the journal was renamed The Journal of Logic and Algebraic Programming, and the official journal of ALP became Theory and Practice of Logic Programming, published by Cambridge University Press.
Concepts
Logic programs enjoy a rich variety of semantics and problem solving methods, as well as a wide range of applications in programming, databases, knowledge representation and problem solving.
Algorithm = Logic + Control
The procedural interpretation of logic programs, which uses backward reasoning to reduce goals to subgoals, is a special case of the use of a problem-solving strategy to control the use of a declarative, logical representation of knowledge to obtain the behaviour of an algorithm. More generally, different problem-solving strategies can be applied to the same logical representation to obtain different algorithms. Alternatively, different algorithms can be obtained with a given problem-solving strategy by using different logical representations.
The two main problem-solving strategies are backward reasoning (goal reduction) and forward reasoning, also known as top-down and bottom-up reasoning, respectively.
In the simple case of a propositional Horn clause program and a top-level atomic goal, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or".
Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal are considered at a time. However, other strategies are possible. For example, subgoals can be solved in parallel, and clauses can also be tried in parallel. The first strategy is called and-parallelism and the second strategy is called or-parallelism. Other search strategies, such as intelligent backtracking, or best-first search to find an optimal solution, are also possible.
In the more general, non-propositional case, where sub-goals can share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in concurrent logic programming.
In most cases, backward reasoning from a query or goal is more efficient than forward reasoning. But sometimes with Datalog and Answer Set Programming, there may be no query that is separate from the set of clauses as a whole, and then generating all the facts that can be derived from the clauses is a sensible problem-solving strategy. Here is another example, where forward reasoning beats backward reasoning in a more conventional computation task, where the goal ?- fibonacci(n, Result) is to find the nth Fibonacci number:
fibonacci(0, 0).
fibonacci(1, 1).
fibonacci(N, Result) :-
N > 1,
N1 is N - 1,
N2 is N - 2,
fibonacci(N1, F1),
fibonacci(N2, F2),
Result is F1 + F2.
Here the relation fibonacci(N, M) stands for the function fibonacci(N) = M, and the predicate N is Expression is Prolog notation for the predicate that instantiates the variable N to the value of Expression.
Given the goal of computing the Fibonacci number of n, backward reasoning reduces the goal to the two subgoals of computing the Fibonacci numbers of n-1 and n-2. It reduces the subgoal of computing the Fibonacci number of n-1 to the two subgoals of computing the Fibonacci numbers of n-2 and n-3, redundantly computing the Fibonacci number of n-2. This process of reducing one Fibonacci subgoal to two Fibonacci subgoals continues until it reaches the numbers 0 and 1. Its complexity is of the order 2^n. In contrast, forward reasoning generates the sequence of Fibonacci numbers, starting from 0 and 1, without any recomputation, and its complexity is linear with respect to n.
Prolog cannot perform forward reasoning directly. But it can achieve the effect of forward reasoning within the context of backward reasoning by means of tabling: Subgoals are maintained in a table, along with their solutions. If a subgoal is re-encountered, it is solved directly by using the solutions already in the table, instead of re-solving the subgoals redundantly.
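In Prolog systems that support tabling, such as XSB and SWI-Prolog, this behaviour can be obtained simply by declaring the predicate as tabled (a sketch, assuming one of those systems):
:- table fibonacci/2.
With this single directive, the backward-reasoning definition of fibonacci above runs in time linear in n, because each subgoal is solved at most once and repeated subgoals are answered from the table.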
Relationship with functional programming
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.
For example, the function, mother(X) = Y, (every X has only one mother Y) can be represented by the relation mother(X, Y). In this respect, logic programs are similar to relational databases, which also represent functions as relations.
Compared with relational syntax, functional syntax is more compact for nested functions. For example, in functional syntax the definition of maternal grandmother can be written in the nested form:
maternal_grandmother(X) = mother(mother(X)).
The same definition in relational notation needs to be written in the unnested, flattened form:
maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
However, nested syntax can be regarded as syntactic sugar for unnested syntax. Ciao Prolog, for example, transforms functional syntax into relational form and executes the resulting logic program using the standard Prolog execution strategy. Moreover, the same transformation can be used to execute nested relations that are not functional. For example:
grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).
mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.
?- grandparent(X,Y).
X = harry,
Y = elizabeth.
X = harry,
Y = phillip.
Relationship with relational programming
The term relational programming has been used to cover a variety of programming languages that treat functions as a special case of relations. Some of these languages, such as miniKanren and relational linear programming, are logic programming languages in the sense of this article. However, the relational language RML is an imperative programming language whose core construct is a relational expression, similar to an expression in first-order predicate logic. Other relational programming languages are based on the relational calculus or relational algebra.
Semantics of Horn clause programs
Viewed in purely logical terms, there are two approaches to the declarative semantics of Horn clause logic programs: One approach is the original logical consequence semantics, which understands solving a goal as showing that the goal is a theorem that is true in all models of the program.
In this approach, computation is theorem-proving in first-order logic; and both backward reasoning, as in SLD resolution, and forward reasoning, as in hyper-resolution, are correct and complete theorem-proving methods. Sometimes such theorem-proving methods are also regarded as providing a separate proof-theoretic (or operational) semantics for logic programs. But from a logical point of view, they are proof methods, rather than semantics.
The other approach to the declarative semantics of Horn clause programs is the satisfiability semantics, which understands solving a goal as showing that the goal is true (or satisfied) in some intended (or standard) model of the program. For Horn clause programs, there always exists such a standard model: It is the unique minimal model of the program.
Informally speaking, a minimal model is a model that, when it is viewed as the set of all (variable-free) facts that are true in the model, contains no smaller set of facts that is also a model of the program.
For example, the following facts represent the minimal model of the family relationships example in the introduction of this article. All other variable-free facts are false in the model:
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(elizabeth, charles).
parent_child(charles, william).
parent_child(charles, harry).
grandparent_child(elizabeth, william).
grandparent_child(elizabeth, harry).
The satisfiability semantics also has an alternative, more mathematical characterisation as the least fixed point of the function that uses the rules in the program to derive new facts from existing facts in one step of inference.
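In standard notation (not used elsewhere in this article), this function is the immediate consequence operator T_P, which maps a set I of variable-free facts to the set of facts derivable from I in one step:
T_P(I) = {A | A :- B1, ..., Bn is a variable-free instance of a rule in P, and B1, ..., Bn are all in I}
The minimal model is then the least fixed point of T_P, obtained as the limit of the increasing sequence ∅, T_P(∅), T_P(T_P(∅)), ... .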
Remarkably, the same problem-solving methods of forward and backward reasoning, which were originally developed for the logical consequence semantics, are equally applicable to the satisfiability semantics: Forward reasoning generates the minimal model of a Horn clause program, by deriving new facts from existing facts, until no new additional facts can be generated. Backward reasoning, which succeeds by reducing a goal to subgoals, until all subgoals are solved by facts, ensures that the goal is true in the minimal model, without generating the model explicitly.
The difference between the two declarative semantics can be seen with the definitions of addition and multiplication in successor arithmetic, which represents the natural numbers 0, 1, 2, ... as a sequence of terms of the form 0, s(0), s(s(0)), .... In general, the term s(X) represents the successor of X, namely X + 1. Here are the standard definitions of addition and multiplication in functional notation:
X + 0 = X.
X + s(Y) = s(X + Y).
i.e. X + (Y + 1) = (X + Y) + 1
X × 0 = 0.
X × s(Y) = X + (X × Y).
i.e. X × (Y + 1) = X + (X × Y).
Here are the same definitions as a logic program, using add(X, Y, Z) to represent X + Y = Z, and multiply(X, Y, Z) to represent X × Y = Z:
add(X, 0, X).
add(X, s(Y), s(Z)) :- add(X, Y, Z).
multiply(X, 0, 0).
multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).
The two declarative semantics both give the same answers for the same existentially quantified conjunctions of addition and multiplication goals. For example 2 × 2 = X has the solution X = 4; and X × X = X + X has two solutions X = 0 and X = 2:
?- multiply(s(s(0)), s(s(0)), X).
X = s(s(s(s(0)))).
?- multiply(X, X, Y), add(X, X, Y).
X = 0, Y = 0.
X = s(s(0)), Y = s(s(s(s(0)))).
However, with the logical-consequence semantics, there are non-standard models of the program, in which, for example, add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))), i.e. 2 + 2 = 5 is true. But with the satisfiability semantics, there is only one model, namely the standard model of arithmetic, in which 2 + 2 = 5 is false.
In both semantics, the goal ?- add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))) fails. In the satisfiability semantics, the failure of the goal means that the truth value of the goal is false. But in the logical consequence semantics, the failure means that the truth value of the goal is unknown.
Negation as failure
Negation as failure (NAF), as a way of concluding that a negative condition not p holds by showing that the positive condition p fails to hold, was already a feature of early Prolog systems. The resulting extension of SLD resolution is called SLDNF. A similar construct, called "thnot", also existed in Micro-Planner.
The logical semantics of NAF was unresolved until Keith Clark showed that, under certain natural conditions, NAF is an efficient, correct (and sometimes complete) way of reasoning with the logical consequence semantics using the completion of a logic program in first-order logic.
Completion amounts roughly to regarding the set of all the program clauses with the same predicate in the head, say:
A :- Body1.
...
A :- Bodyk.
as a definition of the predicate:
A iff (Body1 or ... or Bodyk)
where iff means "if and only if". The completion also includes axioms of equality, which correspond to unification. Clark showed that proofs generated by SLDNF are structurally similar to proofs generated by a natural deduction style of reasoning with the completion of the program.
Consider, for example, the following program:
should_receive_sanction(X, punishment) :-
is_a_thief(X),
not should_receive_sanction(X, rehabilitation).
should_receive_sanction(X, rehabilitation) :-
is_a_thief(X),
is_a_minor(X),
not is_violent(X).
is_a_thief(tom).
Given the goal of determining whether tom should receive a sanction, the first rule succeeds in showing that tom should be punished:
?- should_receive_sanction(tom, Sanction).
Sanction = punishment.
This is because tom is a thief, and it cannot be shown that tom should be rehabilitated. It cannot be shown that tom should be rehabilitated, because it cannot be shown that tom is a minor.
If, however, we receive new information that tom is indeed a minor, the previous conclusion that tom should be punished is replaced by the new conclusion that tom should be rehabilitated:
is_a_minor(tom).
?- should_receive_sanction(tom, Sanction).
Sanction = rehabilitation.
This property of withdrawing a conclusion when new information is added, is called non-monotonicity, and it makes logic programming a non-monotonic logic.
But, if we are now told that tom is violent, the conclusion that tom should be punished will be reinstated:
is_violent(tom).
?- should_receive_sanction(tom, Sanction).
Sanction = punishment.
The completion of this program is:
should_receive_sanction(X, Sanction) iff
Sanction = punishment, is_a_thief(X),
not should_receive_sanction(X, rehabilitation)
or Sanction = rehabilitation, is_a_thief(X), is_a_minor(X),
not is_violent(X).
is_a_thief(X) iff X = tom.
is_a_minor(X) iff X = tom.
is_violent(X) iff X = tom.
The notion of completion is closely related to John McCarthy's circumscription semantics for default reasoning, and to Ray Reiter's closed world assumption.
The completion semantics for negation is a logical consequence semantics, for which SLDNF provides a proof-theoretic implementation. However, in the 1980s, the satisfiability semantics became more popular for logic programs with negation. In the satisfiability semantics, negation is interpreted according to the classical definition of truth in an intended or standard model of the logic program.
In the case of logic programs with negative conditions, there are two main variants of the satisfiability semantics: In the well-founded semantics, the intended model of a logic program is a unique, three-valued, minimal model, which always exists. The well-founded semantics generalises the notion of inductive definition in mathematical logic. XSB Prolog implements the well-founded semantics using SLG resolution.
In the alternative stable model semantics, there may be no intended models or several intended models, all of which are minimal and two-valued. The stable model semantics underpins answer set programming (ASP).
Both the well-founded and stable model semantics apply to arbitrary logic programs with negation. However, both semantics coincide for stratified logic programs. For example, the program for sanctioning thieves is (locally) stratified, and all three semantics for the program determine the same intended model:
should_receive_sanction(tom, punishment).
is_a_thief(tom).
is_a_minor(tom).
is_violent(tom).
Attempts to understand negation in logic programming have also contributed to the development of abstract argumentation frameworks. In an argumentation interpretation of negation, the initial argument that tom should be punished because he is a thief, is attacked by the argument that he should be rehabilitated because he is a minor. But the fact that tom is violent undermines the argument that tom should be rehabilitated and reinstates the argument that tom should be punished.
Metalogic programming
Metaprogramming, in which programs are treated as data, was already a feature of early Prolog implementations. For example, the Edinburgh DEC10 implementation of Prolog included "an interpreter and a compiler, both written in Prolog itself". The simplest metaprogram is the so-called "vanilla" meta-interpreter:
solve(true).
solve((B,C)):- solve(B),solve(C).
solve(A):- clause(A,B),solve(B).
where true represents an empty conjunction, and (B,C) is a composite term representing the conjunction of B and C. The predicate clause(A,B) means that there is a clause of the form A :- B.
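For example, assuming the family program from the introduction has been loaded so that its clauses are accessible to clause/2 (in ISO Prolog this requires declaring the predicates dynamic), the meta-interpreter can solve object-level goals:
?- solve(grandparent_child(X, william)).
X = elizabeth.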
Metaprogramming is an application of the more general use of a metalogic or metalanguage to describe and reason about another language, called the object language.
Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. For example, in the following program, the atomic formula attends(Person, Meeting) occurs both as an object-level formula, and as an argument of the metapredicates prohibited and approved.
prohibited(attends(Person, Meeting)) :-
not(approved(attends(Person, Meeting))).
should_receive_sanction(Person, scolding) :- attends(Person, Meeting),
lofty(Person), prohibited(attends(Person, Meeting)).
should_receive_sanction(Person, banishment) :- attends(Person, Meeting),
lowly(Person), prohibited(attends(Person, Meeting)).
approved(attends(alice, tea_party)).
attends(mad_hatter, tea_party).
attends(dormouse, tea_party).
lofty(mad_hatter).
lowly(dormouse).
?- should_receive_sanction(Person, Sanction).
Person = mad_hatter,
Sanction = scolding.
Person = dormouse,
Sanction = banishment.
Relationship with the Computational-representational understanding of mind
In his popular Introduction to Cognitive Science, Paul Thagard includes logic and rules as alternative approaches to modelling human thinking. He argues that rules, which have the form IF condition THEN action, are "very similar" to logical conditionals, but they are simpler and have greater psychological plausibility (page 51). Among other differences between logic and rules, he argues that logic uses deduction, but rules use search (page 45) and can be used to reason either forward or backward (page 47). Sentences in logic "have to be interpreted as universally true", but rules can be defaults, which admit exceptions (page 44).
He states that "unlike logic, rule-based systems can also easily represent strategic information about what to do" (page 45). For example, "IF you want to go home for the weekend, and you have bus fare, THEN you can catch a bus". He does not observe that the same strategy of reducing a goal to subgoals can be interpreted, in the manner of logic programming, as applying backward reasoning to a logical conditional:
can_go(you, home) :- have(you, bus_fare), catch(you, bus).
All of these characteristics of rule-based systems - search, forward and backward reasoning, default reasoning, and goal-reduction - are also defining characteristics of logic programming. This suggests that Thagard's conclusion (page 56) that:
Much of human knowledge is naturally described in terms of rules, and many kinds of thinking such as planning can be modeled by rule-based systems.
also applies to logic programming.
Other arguments showing how logic programming can be used to model aspects of human thinking are presented by Keith Stenning and Michiel van Lambalgen in their book, Human Reasoning and Cognitive Science. They show how the non-monotonic character of logic programs can be used to explain human performance on a variety of psychological tasks. They also show (page 237) that "closed–world reasoning in its guise as logic programming has an appealing neural implementation, unlike classical logic."
In The Proper Treatment of Events, Michiel van Lambalgen and Fritz Hamm investigate the use of constraint logic programming to code "temporal notions in natural language by looking at the way human beings construct time".
Knowledge representation
The use of logic to represent procedural knowledge and strategic information was one of the main goals contributing to the early development of logic programming. Moreover, it continues to be an important feature of the Prolog family of logic programming languages today. However, many applications of logic programming, including Prolog applications, increasingly focus on the use of logic to represent purely declarative knowledge. These applications include both the representation of general commonsense knowledge and the representation of domain specific expertise.
Commonsense includes knowledge about cause and effect, as formalised, for example, in the situation calculus, event calculus and action languages. Here is a simplified example, which illustrates the main features of such formalisms. The first clause states that a fact holds immediately after an event initiates (or causes) the fact. The second clause is a frame axiom, which states that a fact that holds at a time continues to hold at the next time unless it is terminated by an event that happens at the time. This formulation allows more than one event to occur at the same time:
holds(Fact, Time2) :-
happens(Event, Time1),
Time2 is Time1 + 1,
initiates(Event, Fact).
holds(Fact, Time2) :-
happens(Event, Time1),
Time2 is Time1 + 1,
holds(Fact, Time1),
not(terminated(Fact, Time1)).
terminated(Fact, Time) :-
happens(Event, Time),
terminates(Event, Fact).
Here holds is a meta-predicate, similar to solve above. However, whereas solve has only one argument, which applies to general clauses, the first argument of holds is a fact and the second argument is a time (or state). The atomic formula holds(Fact, Time) expresses that the Fact holds at the Time. Such time-varying facts are also called fluents. The atomic formula happens(Event, Time) expresses that the Event happens at the Time.
The following example illustrates how these clauses can be used to reason about causality in a toy blocks world. Here, in the initial state at time 0, a green block is on a table and a red block is stacked on the green block (like a traffic light). At time 0, the red block is moved to the table. At time 1, the green block is moved onto the red block. Moving an object onto a place terminates the fact that the object is on any place, and initiates the fact that the object is on the place to which it is moved:
holds(on(green_block, table), 0).
holds(on(red_block, green_block), 0).
happens(move(red_block, table), 0).
happens(move(green_block, red_block), 1).
initiates(move(Object, Place), on(Object, Place)).
terminates(move(Object, Place2), on(Object, Place1)).
?- holds(Fact, Time).
Fact = on(green_block,table),
Time = 0.
Fact = on(red_block,green_block),
Time = 0.
Fact = on(green_block,table),
Time = 1.
Fact = on(red_block,table),
Time = 1.
Fact = on(green_block,red_block),
Time = 2.
Fact = on(red_block,table),
Time = 2.
Forward reasoning and backward reasoning generate the same answers to the goal holds(Fact, Time). But forward reasoning generates fluents progressively in temporal order, and backward reasoning generates fluents regressively, as in the domain-specific use of regression in the situation calculus.
Logic programming has also proved to be useful for representing domain-specific expertise in expert systems. But human expertise, like general-purpose commonsense, is mostly implicit and tacit, and it is often difficult to represent such implicit knowledge in explicit rules. This difficulty does not arise, however, when logic programs are used to represent the existing, explicit rules of a business organisation or legal authority.
For example, here is a representation of a simplified version of the first sentence of the British Nationality Act, which states that a person who is born in the UK becomes a British citizen at the time of birth if a parent of the person is a British citizen at the time of birth:
initiates(birth(Person), citizen(Person, uk)):-
time_of(birth(Person), Time),
place_of(birth(Person), uk),
parent_child(Another_Person, Person),
holds(citizen(Another_Person, uk), Time).
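For illustration, given hypothetical facts such as the following (the names and date are invented here, not part of the Act's representation):
time_of(birth(alice), 2000).
place_of(birth(alice), uk).
parent_child(bob, alice).
holds(citizen(bob, uk), 2000).
the goal ?- initiates(birth(alice), citizen(alice, uk)) succeeds, showing that the rule derives citizenship from the place of birth and the parent's citizenship at the time of birth.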
Historically, the representation of a large portion of the British Nationality Act as a logic program in the 1980s was "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences".
More recently, the PROLEG system, initiated in 2009 and consisting of approximately 2500 rules and exceptions of civil code and supreme court case rules in Japan, has become possibly the largest legal rule base in the world.
Variants and extensions
Prolog
The SLD resolution rule of inference is neutral about the order in which subgoals in the bodies of clauses can be selected for solution. For the sake of efficiency, Prolog restricts this order to the order in which the subgoals are written. SLD is also neutral about the strategy for searching the space of SLD proofs.
Prolog searches this space, top-down, depth-first, trying different clauses for solving the same (sub)goal in the order in which the clauses are written.
This search strategy has the advantage that the current branch of the tree can be represented efficiently by a stack. When a goal clause at the top of the stack is reduced to a new goal clause, the new goal clause is pushed onto the top of the stack. When the selected subgoal in the goal clause at the top of the stack cannot be solved, the search strategy backtracks, removing the goal clause from the top of the stack, and retrying the attempted solution of the selected subgoal in the previous goal clause using the next clause that matches the selected subgoal.
Backtracking can be restricted by using a subgoal, called cut, written as !, which always succeeds but cannot be backtracked. Cut can be used to improve efficiency, but can also interfere with the logical meaning of clauses. In many cases, the use of cut can be replaced by negation as failure. In fact, negation as failure can be defined in Prolog, by using cut, together with any literal, say fail, that unifies with the head of no clause:
not(P) :- P, !, fail.   % if P succeeds, the cut commits to this clause and fail makes not(P) fail
not(P).                 % reached only if P fails, so not(P) succeeds
Prolog provides other features, in addition to cut, that do not have a logical interpretation. These include the built-in predicates assert and retract for destructively updating the state of the program during program execution.
For example, the toy blocks world example above can be implemented without frame axioms using destructive change of state:
on(green_block, table).
on(red_block, green_block).
move(Object, Place2) :-
retract(on(Object, Place1)),
assert(on(Object, Place2)).
The sequence of move events and the resulting locations of the blocks can be computed by executing the query:
?- move(red_block, table), move(green_block, red_block), on(Object, Place).
Object = red_block,
Place = table.
Object = green_block,
Place = red_block.
Various extensions of logic programming have been developed to provide a logical framework for such destructive change of state.
The broad range of Prolog applications, both in isolation and in combination with other languages, is highlighted in the Year of Prolog Book, celebrating the 50th anniversary of Prolog in 2022.
Prolog has also contributed to the development of other programming languages, including ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog.
Constraint logic programming
Constraint logic programming (CLP) combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates, declared as constraint predicates, to occur as literals in the body of a clause. Constraint predicates are not defined by the facts and rules in the program, but are predefined by some domain-specific model-theoretic structure or theory.
Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are simplified and checked for satisfiability by a domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.
Notably, the first version of Prolog already included a constraint predicate dif(term1, term2), from Philippe Roussel's 1972 PhD thesis, which succeeds if its two arguments are different terms, but which is delayed if either of the terms contains a variable.
The following constraint logic program represents a toy temporal database of john's history as a teacher:
teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T) :- 2005 ≤ T, T ≤ 2012.
rank(john, instructor, T) :- 1990 ≤ T, T < 2010.
rank(john, professor, T) :- 2010 ≤ T, T < 2014.
Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both taught logic and was a professor:
?- teaches(john, logic, T), rank(john, professor, T).
The solution
2010 ≤ T, T ≤ 2012
results from simplifying the constraints
2005 ≤ T, T ≤ 2012, 2010 ≤ T, T < 2014.
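In an executable Prolog with constraint solving, such as SWI-Prolog with library(clpq), the same database might be written with constraints enclosed in braces (a sketch under that assumption; ≤ becomes =<):
:- use_module(library(clpq)).
teaches(john, logic, T) :- {2005 =< T, T =< 2012}.
rank(john, professor, T) :- {2010 =< T, T < 2014}.
The query ?- teaches(john, logic, T), rank(john, professor, T). then answers with the simplified constraint {T >= 2010, T =< 2012}.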
Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.
Datalog
Datalog is a database definition language, which combines a relational view of data, as in relational databases, with a logical view, as in logic programming.
Relational databases use a relational calculus or relational algebra, with relational operations, such as union, intersection, set difference and Cartesian product, to specify queries, which access a database. Datalog uses logical connectives, such as or, and and not in the bodies of rules to define relations as part of the database itself.
It was recognized early in the development of relational databases that recursive queries cannot be expressed in either relational algebra or relational calculus, and that this deficiency can be remedied by introducing a least-fixed-point operator. In contrast, recursive relations can be defined naturally by rules in logic programs, without the need for any new logical connectives or operators.
Datalog differs from more general logic programming by having only constants and variables as terms. Moreover, all facts are variable-free, and rules are restricted, so that if they are executed bottom-up, then the derived facts are also variable-free.
For example, consider the family database:
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(X, Y) :-
mother_child(X, Y).
parent_child(X, Y) :-
father_child(X, Y).
ancestor_descendant(X, Y) :-
parent_child(X, Y).
ancestor_descendant(X, Y) :-
ancestor_descendant(X, Z),
ancestor_descendant(Z, Y).
Bottom-up execution derives the following set of additional facts and terminates:
parent_child(elizabeth, charles).
parent_child(charles, william).
parent_child(charles, harry).
ancestor_descendant(elizabeth, charles).
ancestor_descendant(charles, william).
ancestor_descendant(charles, harry).
ancestor_descendant(elizabeth, william).
ancestor_descendant(elizabeth, harry).
Top-down execution derives the same answers to the query:
?- ancestor_descendant(X, Y).
But then it goes into an infinite loop. However, top-down execution with tabling gives the same answers and terminates without looping.
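In tabled Prolog systems such as XSB and SWI-Prolog, for example, this can be achieved by a single declaration (a sketch, assuming one of those systems):
:- table ancestor_descendant/2.
The table records each ancestor_descendant subgoal together with its answers, so the recursive clause no longer loops: a repeated call is answered from the table instead of being re-executed.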
Answer set programming
Like Datalog, Answer Set Programming (ASP) is not Turing-complete. Moreover, instead of separating goals (or queries) from the program to be used in solving the goals, ASP treats the whole program as a goal, and solves the goal by generating a stable model that makes the goal true. For this purpose, it uses the stable model semantics, according to which a logic program can have zero, one or more intended models. For example, the following program represents a degenerate variant of the map colouring problem of colouring two countries red or green:
country(oz).
country(iz).
adjacent(oz, iz).
colour(C, red) :- country(C), not(colour(C, green)).
colour(C, green) :- country(C), not(colour(C, red)).
The problem has four solutions represented by four stable models:
country(oz). country(iz). adjacent(oz, iz). colour(oz, red). colour(iz, red).
country(oz). country(iz). adjacent(oz, iz). colour(oz, green). colour(iz, green).
country(oz). country(iz). adjacent(oz, iz). colour(oz, red). colour(iz, green).
country(oz). country(iz). adjacent(oz, iz). colour(oz, green). colour(iz, red).
To represent the standard version of the map colouring problem, we need to add a constraint that two adjacent countries cannot be coloured the same colour. In ASP, this constraint can be written as a clause of the form:
:- country(C1), country(C2), adjacent(C1, C2), colour(C1, X), colour(C2, X).
With the addition of this constraint, the problem now has only two solutions:
country(oz). country(iz). adjacent(oz, iz). colour(oz, red). colour(iz, green).
country(oz). country(iz). adjacent(oz, iz). colour(oz, green). colour(iz, red).
The addition of constraints of the form :- Body. eliminates models in which Body is true.
Confusingly, constraints in ASP are different from constraints in CLP. Constraints in CLP are predicates that qualify answers to queries (and solutions of goals). Constraints in ASP are clauses that eliminate models that would otherwise satisfy goals. Constraints in ASP are like integrity constraints in databases.
This combination of ordinary logic programming clauses and constraint clauses illustrates the generate-and-test methodology of problem solving in ASP: The ordinary clauses define a search space of possible solutions, and the constraints filter out unwanted solutions.
Most implementations of ASP proceed in two steps: First they instantiate the program in all possible ways, reducing it to a propositional logic program (known as grounding). Then they apply a propositional logic problem solver, such as the DPLL algorithm or a Boolean SAT solver. However, some implementations, such as s(CASP), use a goal-directed, top-down, SLD resolution-like procedure without grounding.
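For the two-country colouring program above, for example, grounding replaces the rules by their variable-free instances:
colour(oz, red) :- country(oz), not(colour(oz, green)).
colour(oz, green) :- country(oz), not(colour(oz, red)).
colour(iz, red) :- country(iz), not(colour(iz, green)).
colour(iz, green) :- country(iz), not(colour(iz, red)).
A propositional solver then searches for stable models of this ground program.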
Abductive logic programming
Abductive logic programming (ALP), like CLP, extends normal logic programming by allowing the bodies of clauses to contain literals whose predicates are not defined by clauses. In ALP, these predicates are declared as abducible (or assumable), and are used as in abductive reasoning to explain observations, or more generally to add new facts to the program (as assumptions) to solve goals.
For example, suppose we are given an initial state in which a red block is on a green block on a table at time 0:
holds(on(green_block, table), 0).
holds(on(red_block, green_block), 0).
Suppose we are also given the goal:
?- holds(on(green_block,red_block), 3), holds(on(red_block,table), 3).
The goal can represent an observation, in which case a solution is an explanation of the observation. Or the goal can represent a desired future state of affairs, in which case a solution is a plan for achieving the goal.
We can use the rules for cause and effect presented earlier to solve the goal, by treating the happens predicate as abducible:
holds(Fact, Time2) :-
happens(Event, Time1),
Time2 is Time1 + 1,
initiates(Event, Fact).
holds(Fact, Time2) :-
happens(Event, Time1),
Time2 is Time1 + 1,
holds(Fact, Time1),
not(terminated(Fact, Time1)).
terminated(Fact, Time) :-
happens(Event, Time),
terminates(Event, Fact).
initiates(move(Object, Place), on(Object, Place)).
terminates(move(Object, Place2), on(Object, Place1)).
ALP solves the goal by reasoning backwards and adding assumptions to the program, to solve abducible subgoals. In this case there are many alternative solutions, including the following three, each consisting of three assumed events:
happens(move(red_block, table), 0).
happens(tick, 1).
happens(move(green_block, red_block), 2).
happens(tick,0).
happens(move(red_block, table), 1).
happens(move(green_block, red_block), 2).
happens(move(red_block, table), 0).
happens(move(green_block, red_block), 1).
happens(tick, 2).
Here tick is an event that marks the passage of time without initiating or terminating any fluents.
There are also solutions in which the two move events happen at the same time. For example:
happens(move(red_block, table), 0).
happens(move(green_block, red_block), 0).
happens(tick, 1).
happens(tick, 2).
Such solutions, if not desired, can be removed by adding an integrity constraint, which is like a constraint clause in ASP:
:- happens(move(Block1, Place), Time), happens(move(Block2, Block1), Time).
Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret negation as failure as a form of abductive reasoning.
Inductive logic programming
Inductive logic programming (ILP) is an approach to machine learning that induces logic programs as hypothetical generalisations of positive and negative examples. Given a logic program representing background knowledge and positive examples together with constraints representing negative examples, an ILP system induces a logic program that generalises the positive examples while excluding the negative examples.
ILP is similar to ALP, in that both can be viewed as generating hypotheses to explain observations, and as employing constraints to exclude undesirable hypotheses. But in ALP the hypotheses are variable-free facts, and in ILP the hypotheses are general rules.
For example, given only background knowledge of the mother_child and father_child relations, and suitable examples of the grandparent_child relation, current ILP systems can generate the definition of grandparent_child, inventing an auxiliary predicate, which can be interpreted as the parent_child relation:
grandparent_child(X, Y):- auxiliary(X, Z), auxiliary(Z, Y).
auxiliary(X, Y):- mother_child(X, Y).
auxiliary(X, Y):- father_child(X, Y).
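For instance, a hypothetical ILP input for this task (constructed here from the family example, not taken from a particular system) would give the mother_child and father_child facts as background knowledge, positive examples such as grandparent_child(elizabeth, william) and grandparent_child(elizabeth, harry), and negative examples such as grandparent_child(william, elizabeth).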
Stuart Russell has referred to such invention of new concepts as the most important step needed for reaching human-level AI.
Recent work in ILP, combining logic programming, learning and probability, has given rise to the fields of statistical relational learning and probabilistic inductive logic programming.
Concurrent logic programming
Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s by its choice for the systems programming language of the Japanese Fifth Generation Project (FGCS).
A concurrent logic program is a set of guarded Horn clauses of the form:
H :- G1, ..., Gn | B1, ..., Bn.
The conjunction G1, ..., Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:
H if G1 and ... and Gn and B1 and ... and Bn.
However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, ... , Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, ..., Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".
For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge), which can be used to shuffle two lists Left and Right, combining them into a single list Merge that preserves the ordering of the two lists Left and Right:
shuffle([], [], []).
shuffle(Left, Right, Merge) :-
Left = [First | Rest] |
Merge = [First | ShortMerge],
shuffle(Rest, Right, ShortMerge).
shuffle(Left, Right, Merge) :-
Right = [First | Rest] |
Merge = [First | ShortMerge],
shuffle(Left, Rest, ShortMerge).
Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2] by invoking the goal clause:
shuffle([ace, queen, king], [1, 4, 2], Merge).
The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].
Carl Hewitt has argued that, because of the indeterminacy of concurrent computation, concurrent logic programming cannot implement general concurrency. However, according to the logical semantics, any result of a computation of a concurrent logic program is a logical consequence of the program, even though not all logical consequences can be derived.
Concurrent constraint logic programming
Concurrent constraint logic programming combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to use only one.
Higher-order logic programming
Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.
Linear logic programming
Basing logic programming within linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic programming languages based on linear logic include LO, Lolli, ACL, and Forum. Forum provides a goal-directed interpretation of all linear logic.
Object-oriented logic programming
F-logic extends logic programming with objects and the frame syntax.
Logtalk extends the Prolog programming language with support for objects, protocols, and other OOP concepts. It supports most standard-compliant Prolog systems as backend compilers.
Transaction logic programming
Transaction logic is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a subset of Transaction logic is available in the Flora-2 system. Other prototypes are also available.
| Technology | Software development: General | null |
17932 | https://en.wikipedia.org/wiki/Liquid-crystal%20display | Liquid-crystal display | A liquid-crystal display (LCD) is a flat-panel display or other electronically modulated optical device that uses the light-modulating properties of liquid crystals combined with polarizers to display information. Liquid crystals do not emit light directly but instead use a backlight or reflector to produce images in color or monochrome.
LCDs are available to display arbitrary images (as in a general-purpose computer display) or fixed images with low information content, which can be displayed or hidden: preset words, digits, and seven-segment displays (as in a digital clock) are all examples of devices with these displays. They use the same basic technology, except that arbitrary images are made from a matrix of small pixels, while other displays have larger elements.
LCDs are used in a wide range of applications, including LCD televisions, computer monitors, instrument panels, aircraft cockpit displays, and indoor and outdoor signage. Small LCD screens are common in LCD projectors and portable consumer devices such as digital cameras, watches, calculators, and mobile telephones, including smartphones. Over the late 2000s and early 2010s, LCD screens replaced the heavy, bulky and less energy-efficient cathode-ray tube (CRT) displays in nearly all applications.
LCDs can either be normally on (positive) or off (negative), depending on the polarizer arrangement. For example, a character positive LCD with a backlight has black lettering on a background that is the color of the backlight, and a character negative LCD has a black background with the letters being of the same color as the backlight.
Unlike CRTs, LCDs are not subject to screen burn-in; however, they are still susceptible to image persistence.
General characteristics
Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, often made of indium tin oxide (ITO) and two polarizing filters (parallel and perpendicular polarizers), the axes of transmission of which are (in most of the cases) perpendicular to each other. Without the liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer. Before an electric field is applied, the orientation of the liquid-crystal molecules is determined by the alignment at the surfaces of electrodes. In a twisted nematic (TN) device, the surface alignment directions at the two electrodes are perpendicular to each other, and so the molecules arrange themselves in a helical structure, or twist. This induces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid crystal layer. This light will then be mainly polarized perpendicular to the second filter, and thus be blocked and the pixel will appear black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts thus constituting different levels of gray.
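This voltage-to-gray-level behaviour can be sketched with a toy model in Python. The threshold and saturation voltages and the linear-untwisting assumption below are illustrative only; real TN cells follow more complex electro-optic curves:
import math

def tn_transmission(voltage, v_threshold=1.0, v_saturation=3.0):
    # Idealized TN pixel between crossed polarizers. Below the threshold
    # voltage the 90-degree twist rotates the polarization fully, so light
    # passes (bright); above saturation the twist is gone and light is
    # blocked (dark). In between, we assume the rotation angle falls
    # linearly with voltage and apply Malus's law. This is a toy model.
    if voltage <= v_threshold:
        rotation_degrees = 90.0
    elif voltage >= v_saturation:
        rotation_degrees = 0.0
    else:
        fraction = (voltage - v_threshold) / (v_saturation - v_threshold)
        rotation_degrees = 90.0 * (1.0 - fraction)
    # Fraction of light passed by the crossed analyzer (Malus's law).
    return math.sin(math.radians(rotation_degrees)) ** 2

for v in (0.0, 1.5, 2.0, 2.5, 3.5):
    print(f"{v:.1f} V -> transmission {tn_transmission(v):.2f}")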
The chemical formula of the liquid crystals used in LCDs may vary. Formulas may be patented. An example is a mixture of 2-(4-alkoxyphenyl)-5-alkylpyrimidine with cyanobiphenyl, patented by Merck and Sharp Corporation. The patent that covered that specific mixture has expired.
Most color LCD systems use the same technique, with color filters used to generate red, green, and blue subpixels. The LCD color filters are made with a photolithography process on large glass sheets that are later glued with other glass sheets containing a thin-film transistor (TFT) array, spacers and liquid crystal, creating several color LCDs that are then cut from one another and laminated with polarizer sheets. Red, green, blue and black colored photoresists (resists) are used to create color filters. All resists contain a finely ground powdered pigment, with particles just 40 nanometers across. The black resist is the first to be applied; this creates a black grid (known in the industry as a black matrix) that separates the red, green and blue subpixels from one another, increasing contrast ratios and preventing light from leaking from one subpixel onto surrounding subpixels. After the black resist has been dried in an oven and exposed to UV light through a photomask, the unexposed areas are washed away, creating a black grid. The same process is then repeated with the remaining resists, filling the holes in the black grid with their corresponding colored resists. Black matrices made in the 1980s and 1990s, when most color LCD production was for laptop computers, were made of chromium due to its high opacity, but because of environmental concerns, manufacturers shifted to black photoresist with carbon pigment as the black matrix material. Another color-generation method, used in early color PDAs and some calculators, varied the voltage in a super-twisted nematic LCD, where the variable twist between tighter-spaced plates causes a varying birefringence and thus changes the hue. These displays were typically restricted to three colors per pixel: orange, green, and blue.
The optical effect of a TN device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, TN displays with low information content and no backlighting are usually operated between crossed polarizers such that they appear bright with no voltage (the eye is much more sensitive to variations in the dark state than the bright state). As most 2010-era LCDs are used in television sets, monitors and smartphones, they have high-resolution matrix arrays of pixels to display arbitrary images using backlighting with a dark background. When no image is displayed, different arrangements are used: for this purpose, TN LCDs are operated between parallel polarizers, whereas IPS LCDs feature crossed polarizers. In many applications IPS LCDs have replaced TN LCDs, particularly in smartphones. Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided either by applying an alternating current or by reversing the polarity of the electric field as the device is addressed (the response of the liquid crystal layer is identical regardless of the polarity of the applied field).
Displays for a small number of individual digits or fixed symbols (as in digital watches and pocket calculators) can be implemented with independent electrodes for each segment. In contrast, full alphanumeric or variable graphics displays are usually implemented with pixels arranged as a matrix consisting of electrically connected rows on one side of the liquid crystal (LC) layer and columns on the other side, which makes it possible to address each pixel at the intersections. The general method of matrix addressing consists of sequentially addressing one side of the matrix, for example by selecting the rows one-by-one and applying the picture information on the other side at the columns row-by-row. For details on the various matrix addressing schemes see passive-matrix and active-matrix addressed LCDs.
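A rough sketch of this row-at-a-time matrix addressing, in Python; the frame data, voltage levels and function names are illustrative assumptions, not any real driver's API, and polarity inversion and timing are omitted:
def level_to_voltage(level, v_off=1.0, v_on=3.0):
    # Map a target transmission level (0..1) onto a drive voltage.
    return round(v_off + level * (v_on - v_off), 2)

def scan_frame(frame):
    for row_index, row_levels in enumerate(frame):
        # 1. Select exactly one row electrode.
        # 2. Apply the picture information to every column at once.
        column_voltages = [level_to_voltage(level) for level in row_levels]
        print(f"select row {row_index}, drive columns {column_voltages}")
        # 3. Deselect and continue; repeating fast enough yields a steady image.

frame = [[0.0, 0.5, 1.0],
         [1.0, 0.5, 0.0]]  # target transmission per pixel (rows x columns)
scan_frame(frame)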
Manufacturing
History
The origin and the complex history of liquid-crystal displays from the perspective of an insider during the early days were described by Joseph A. Castellano in Liquid Gold: The Story of Liquid Crystal Displays and the Creation of an Industry.
Another report on the origins and history of LCD from a different perspective until 1991 has been published by Hiroshi Kawamoto, available at the IEEE History Center.
A description of Swiss contributions to LCD developments, written by Peter J. Wild, can be found at the Engineering and Technology History Wiki.
Background
In 1888, Friedrich Reinitzer (1858–1927) discovered the liquid crystalline nature of cholesterol extracted from carrots (that is, two melting points and generation of colors) and published his findings. In 1904, Otto Lehmann published his work "Flüssige Kristalle" (Liquid Crystals). In 1911, Charles Mauguin first experimented with liquid crystals confined between plates in thin layers.
In 1922, Georges Friedel described the structure and properties of liquid crystals and classified them in three types (nematics, smectics and cholesterics). In 1927, Vsevolod Frederiks devised the electrically switched light valve, called the Fréedericksz transition, the essential effect of all LCD technology. In 1936, the Marconi Wireless Telegraph company patented the first practical application of the technology, "The Liquid Crystal Light Valve". In 1962, the first major English language publication Molecular Structure and Properties of Liquid Crystals was published by Dr. George W. Gray. In 1962, Richard Williams of RCA found that liquid crystals had some interesting electro-optic characteristics and he realized an electro-optical effect by generating stripe patterns in a thin layer of liquid crystal material by the application of a voltage. This effect is based on an electro-hydrodynamic instability forming what are now called "Williams domains" inside the liquid crystal.
Building on early MOSFETs, Paul K. Weimer at RCA developed the thin-film transistor (TFT) in 1962. It was a type of MOSFET distinct from the standard bulk MOSFET.
1960s
In 1964, George H. Heilmeier, working at the RCA laboratories on the effect discovered by Richard Williams, achieved the switching of colors by field-induced realignment of dichroic dyes in a homeotropically oriented liquid crystal. Practical problems with this new electro-optical effect led Heilmeier to continue working on scattering effects in liquid crystals, culminating in the first operational liquid-crystal display, based on what he called the dynamic scattering mode (DSM). Application of a voltage to a DSM display switches the initially clear transparent liquid crystal layer into a milky, turbid state. DSM displays could be operated in transmissive and in reflective mode, but they required a considerable current to flow for their operation. George H. Heilmeier was inducted into the National Inventors Hall of Fame and credited with the invention of LCDs. Heilmeier's work is an IEEE Milestone.
In the late 1960s, pioneering work on liquid crystals was undertaken by the UK's Royal Radar Establishment (RRE) at Malvern, England. The team at RRE supported ongoing work by George William Gray and his team at the University of Hull, who ultimately discovered the cyanobiphenyl liquid crystals, which had the correct stability and temperature properties for application in LCDs.
The idea of a TFT-based liquid-crystal display (LCD) was conceived by Bernard Lechner of RCA Laboratories in 1968. Lechner, F.J. Marlowe, E.O. Nester and J. Tults demonstrated the concept in 1968 with an 18x2 matrix dynamic scattering mode (DSM) LCD that used standard discrete MOSFETs.
1970s
On December 4, 1970, the twisted nematic field effect (TN) in liquid crystals was filed for patent by Hoffmann-LaRoche in Switzerland (Swiss patent No. 532 261), with Wolfgang Helfrich and Martin Schadt (then working for the Central Research Laboratories) listed as inventors. Hoffmann-La Roche licensed the invention to Swiss manufacturer Brown, Boveri & Cie, its joint venture partner at that time, which produced TN displays for wristwatches and other applications during the 1970s for international markets, including the Japanese electronics industry, which soon produced the first digital quartz wristwatches with TN-LCDs and numerous other products. James Fergason, while working with Sardari Arora and Alfred Saupe at the Kent State University Liquid Crystal Institute, filed an identical patent in the United States on April 22, 1971. In 1971, Fergason's company ILIXCO (now LXD Incorporated) produced LCDs based on the TN effect, which soon superseded the poor-quality DSM types thanks to their lower operating voltages and lower power consumption. Tetsuro Hama and Izuhiko Nishimura of Seiko received a US patent dated February 1971 for an electronic wristwatch incorporating a TN-LCD. In 1972, the first wristwatch with a TN-LCD was launched on the market: the Gruen Teletime, a four-digit display watch.
In 1972, the concept of the active-matrix thin-film transistor (TFT) liquid-crystal display panel was prototyped in the United States by T. Peter Brody's team at Westinghouse, in Pittsburgh, Pennsylvania. In 1973, Brody, J. A. Asars and G. D. Dixon at Westinghouse Research Laboratories demonstrated the first thin-film-transistor liquid-crystal display (TFT LCD). All modern high-resolution and high-quality electronic visual display devices use TFT-based active-matrix displays. Brody and Fang-Chen Luo demonstrated the first flat active-matrix liquid-crystal display (AM LCD) in 1974, and Brody coined the term "active matrix" in 1975.
In 1972, North American Rockwell Microelectronics Corp introduced the use of DSM LCDs in calculators marketed by Lloyds Electronics Inc, though these required an internal light source for illumination. Sharp Corporation followed with DSM LCDs for pocket-sized calculators in 1973 and then mass-produced TN LCDs for watches in 1975. Other Japanese companies soon took a leading position in the wristwatch market, like Seiko with its first 6-digit TN-LCD quartz wristwatch, and Casio with its 'Casiotron'. Color LCDs based on guest-host interaction were invented by a team at RCA in 1968. A particular type of such a color LCD was developed by Japan's Sharp Corporation in the 1970s, which received patents for its inventions, such as a patent by Shinji Kato and Takaaki Miyazaki in May 1975, improved by Fumiaki Funada and Masataka Matsuura in December 1975. TFT LCDs similar to the prototypes developed by a Westinghouse team in 1972 were patented in 1976 by a Sharp team consisting of Fumiaki Funada, Masataka Matsuura, and Tomio Wada, then improved in 1977 by a Sharp team consisting of Kohei Kishi, Hirosaku Nonomura, Keiichiro Shimizu, and Tomio Wada. However, these TFT-LCDs were not yet ready for use in products, as problems with the materials for the TFTs were not yet solved.
1980s
In 1983, researchers at Brown, Boveri & Cie (BBC) Research Center, Switzerland, invented the super-twisted nematic (STN) structure for passive matrix-addressed LCDs. H. Amstutz et al. were listed as inventors in the corresponding patent applications filed in Switzerland on July 7, 1983, and October 28, 1983. Patents were granted in Switzerland CH 665491, Europe EP 0131216, and many more countries. In 1980, Brown Boveri started a 50/50 joint venture with the Dutch Philips company, called Videlec. Philips had the required know-how to design and build integrated circuits for the control of large LCD panels. In addition, Philips had better access to markets for electronic components and intended to use LCDs in new product generations of hi-fi, video equipment and telephones. In 1984, Philips researchers Theodorus Welzen and Adrianus de Vaan invented a video speed-drive scheme that solved the slow response time of STN-LCDs, enabling high-resolution, high-quality, and smooth-moving video images on STN-LCDs. In 1985, Philips inventors Theodorus Welzen and Adrianus de Vaan solved the problem of driving high-resolution STN-LCDs using low-voltage (CMOS-based) drive electronics, allowing the application of high-quality (high resolution and video speed) LCD panels in battery-operated portable products like notebook computers and mobile phones. In 1985, Philips acquired 100% of the Videlec AG company based in Switzerland. Afterwards, Philips moved the Videlec production lines to the Netherlands. Years later, Philips successfully produced and marketed complete modules (consisting of the LCD screen, microphone, speakers etc.) in high-volume production for the booming mobile phone industry.
The first color LCD televisions were developed as handheld televisions in Japan. In 1980, Hattori Seiko's R&D group began development of color LCD pocket televisions. In 1982, Seiko Epson released the first LCD television, the Epson TV Watch, a wristwatch equipped with a small active-matrix LCD television. Sharp Corporation introduced the dot-matrix TN-LCD in 1983. In 1984, Epson released the ET-10, the first full-color pocket LCD television. The same year, Citizen Watch introduced the Citizen Pocket TV, a 2.7-inch color LCD TV with the first commercial TFT LCD. In 1988, Sharp demonstrated a 14-inch, active-matrix, full-color, full-motion TFT-LCD. This led Japan to launch an LCD industry, which developed large-size LCDs, including TFT computer monitors and LCD televisions. Epson developed the 3LCD projection technology in the 1980s and licensed it for use in projectors in 1988. Epson's VPJ-700, released in January 1989, was the world's first compact, full-color LCD projector.
1990s
In 1990, under different titles, inventors conceived electro-optical effects as alternatives to twisted nematic field effect LCDs (TN- and STN-LCDs). One approach was to use interdigital electrodes on one glass substrate only, to produce an electric field essentially parallel to the glass substrates. To take full advantage of the properties of this in-plane switching (IPS) technology, further work was needed. After thorough analysis, details of advantageous embodiments were filed in Germany by Guenter Baur et al. and patented in various countries. The Fraunhofer Institute ISE in Freiburg, where the inventors worked, assigned these patents to Merck KGaA, Darmstadt, a supplier of LC substances. In 1992, shortly thereafter, engineers at Hitachi worked out various practical details of the IPS technology to interconnect the thin-film transistor array as a matrix and to avoid undesirable stray fields in between pixels. The first wall-mountable LCD TV was introduced by Sharp Corporation in 1992.
Hitachi also improved the viewing-angle dependence further by optimizing the shape of the electrodes (Super IPS). NEC and Hitachi became early manufacturers of active-matrix addressed LCDs based on the IPS technology. This was a milestone for implementing large-screen LCDs having acceptable visual performance for flat-panel computer monitors and television screens. In 1996, Samsung developed the optical patterning technique that enables multi-domain LCDs. Multi-domain and in-plane switching subsequently remained the dominant LCD designs through 2006. In the late 1990s, the LCD industry began shifting away from Japan, towards South Korea and Taiwan, and later towards China.
2000s
In this period, Taiwanese, Japanese, and Korean manufacturers were the dominant companies in LCD manufacturing. From 2001 to 2006, Samsung and five other major companies held 53 meetings in Taiwan and South Korea to fix prices in the LCD industry. These six companies were fined 1.3 billion dollars by the United States, 650 million Euro by the European Union, and 350 million RMB by China's National Development and Reform Commission.
In 2007 the image quality of LCD televisions surpassed that of cathode-ray-tube (CRT) TVs. In the fourth quarter of 2007, LCD televisions surpassed CRT TVs in worldwide sales for the first time. LCD TVs had been projected to account for 50% of the 200 million TVs shipped globally in 2006, according to Displaybank.
2010s
In October 2011, Toshiba announced 2560 × 1600 pixels on a 6.1-inch (155 mm) LCD panel, suitable for use in a tablet computer, especially for Chinese character display. The 2010s also saw the wide adoption of TGP (Tracking Gate-line in Pixel), which moves the driving circuitry from the borders of the display to in between the pixels, allowing for narrow bezels.
In 2016, Panasonic developed IPS LCDs with a contrast ratio of 1,000,000:1, rivaling OLEDs. This technology was later put into mass production as dual layer, dual panel or LMCL (Light Modulating Cell Layer) LCDs. The technology uses 2 liquid crystal layers instead of one, and may be used along with a mini-LED backlight and quantum dot sheets.
LCDs with quantum dot enhancement film or quantum dot color filters were introduced from 2015 to 2018. Quantum dots receive blue light from a backlight and convert it to light that allows LCD panels to offer better color reproduction. Quantum dot color filters are manufactured using photoresists containing quantum dots instead of colored pigments, and the quantum dots can have a special structure to improve their application onto the color filter. Quantum dot color filters offer superior light transmission over quantum dot enhancement films.
2020s
In the 2020s, China became the largest manufacturer of LCDs and Chinese firms had a 40% share of the global market. Chinese firms that increased their production to high levels included BOE Technology, TCL-CSOT, TIANMA, and Visionox. Local governments had a significant role in this growth, including as a result of their investments in LCD manufacturers via state-owned investment companies. China had previously imported significant amounts of LCDs, and the growth of its LCD industry decreased prices for other consumer products that use LCDs and led to growth in other sectors like mobile phones.
Illumination
LCDs do not produce light on their own, so they require external light to produce a visible image. In a transmissive type of LCD, the light source is provided at the back of the glass stack and is called a backlight. Active-matrix LCDs are almost always backlit. Passive LCDs may be backlit but many are reflective as they use a reflective surface or film at the back of the glass stack to utilize ambient light. Transflective LCDs combine the features of a backlit transmissive display and a reflective display.
The common implementations of LCD backlight technology are:
WLED array: The LCD panel is lit by a full array of white LEDs placed behind a diffuser behind the panel. LCDs that use this implementation will usually have the ability to dim or completely turn off the LEDs in the dark areas of the image being displayed, effectively increasing the contrast ratio of the display. The precision with which this can be done will depend on the number of dimming zones of the display. The more dimming zones, the more precise the dimming, with less obvious blooming artifacts which are visible as dark grey patches surrounded by the unlit areas of the LCD. As of 2012, this design gets most of its use from upscale, larger-screen LCD televisions.
CCFL: The LCD panel is lit either by two cold cathode fluorescent lamps placed at opposite edges of the display or an array of parallel CCFLs behind larger displays. A diffuser (made of PMMA acrylic plastic, also known as a wave or light guide/guiding plate) then spreads the light out evenly across the whole display. For many years, this technology had been used almost exclusively. Unlike white LEDs, most CCFLs have an even-white spectral output resulting in better color gamut for the display. However, CCFLs are less energy efficient than LEDs and require a somewhat costly inverter to convert whatever DC voltage the device uses (usually 5 or 12 V) to ≈1000 V needed to light a CCFL. The thickness of the inverter transformers also limits how thin the display can be made.
EL-WLED: The LCD panel is lit by a row of white LEDs placed at one or more edges of the screen. A light diffuser (light guide plate, LGP) is then used to spread the light evenly across the whole display, similarly to edge-lit CCFL LCD backlights. The diffuser is made out of either PMMA plastic or special glass. PMMA is used in most cases because it is rugged, while special glass is used when the thickness of the LCD is of primary concern, because it does not expand as much when heated or exposed to moisture, which allows LCDs to be just 5 mm thick. Quantum dots may be placed on top of the diffuser as a quantum dot enhancement film (QDEF, in which case they need a layer to protect them from heat and humidity) or on the color filter of the LCD, replacing the resists that are normally used. This design is the most popular one in desktop computer monitors, and it allows for the thinnest displays. Some LCD monitors using this technology have a feature called dynamic contrast, invented by Philips researchers Douglas Stanton, Martinus Stroomer and Adrianus de Vaan. Using PWM (pulse-width modulation, a technique in which the intensity of the LEDs is kept constant but the brightness adjustment is achieved by varying the time interval during which these constant-intensity light sources flash), the backlight is dimmed to the brightest color that appears on the screen while the LCD contrast is simultaneously boosted to the maximum achievable levels, allowing the 1000:1 contrast ratio of the LCD panel to be scaled to different light intensities, resulting in the "30000:1" contrast ratios seen in the advertising on some of these monitors. Since computer screen images usually have full white somewhere in the image, the backlight will usually be at full intensity, making this "feature" mostly a marketing gimmick for computer monitors; for TV screens, however, it drastically increases the perceived contrast ratio and dynamic range, improves the viewing-angle dependency, and drastically reduces the power consumption of conventional LCD televisions.
RGB-LED array: Similar to the WLED array, except the panel is lit by an array of RGB LEDs. While displays lit with white LEDs usually have a poorer color gamut than CCFL-lit displays, panels lit with RGB LEDs have very wide color gamuts. This implementation is most popular on professional graphics-editing LCDs. As of 2012, LCDs in this category usually cost more than $1,000. By 2016 the cost of this category had fallen drastically, and such LCD televisions reached the same price levels as the former 28-inch (71 cm) CRT-based categories.
Monochrome LEDs: Red, green, yellow or blue LEDs are used in the small passive monochrome LCDs typically found in clocks, watches and small appliances. Blue LEDs can be used in LCDs with quantum dot enhancement film or quantum dot color filters.
Mini-LED: Backlighting with Mini-LEDs can support over a thousand Full-area Local Area Dimming (FLAD) zones. This allows deeper blacks and higher contrast ratio.
Today, most LCD screens are being designed with an LED backlight instead of the traditional CCFL backlight, while that backlight is dynamically controlled with the video information (dynamic backlight control). The combination with the dynamic backlight control, invented by Philips researchers Douglas Stanton, Martinus Stroomer and Adrianus de Vaan, simultaneously increases the dynamic range of the display system (also marketed as HDR, high dynamic range television or FLAD, full-area local area dimming).
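A minimal sketch (Python) of dynamic backlight control with PWM dimming as described above: dim the backlight to the brightest level the current frame needs and open the LCD "valves" proportionally wider, so dark frames use less light while looking the same. The numbers and names are illustrative assumptions:
def dynamic_backlight(frame_levels, pwm_steps=255):
    peak = max(frame_levels)           # brightest pixel in the frame, scale 0..1
    if peak == 0.0:
        return 0, frame_levels         # backlight fully off for an all-black frame
    duty = round(peak * pwm_steps)     # PWM duty cycle for the LED backlight
    rescaled = [level / peak for level in frame_levels]  # open LCD valves wider
    return duty, rescaled

duty, levels = dynamic_backlight([0.02, 0.10, 0.25])
print(f"PWM duty {duty}/255, rescaled pixel levels {levels}")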
LCD backlight systems are made highly efficient by applying optical films such as prismatic structures (prism sheets) to direct the light into the desired viewing directions, and reflective polarizing films that recycle the polarized light that was formerly absorbed by the first polarizer of the LCD (invented by Philips researchers Adrianus de Vaan and Paulus Schaareman), generally achieved using so-called DBEF films manufactured and supplied by 3M. Improved versions of the prism sheet have a wavy rather than a prismatic structure, introducing waves laterally into the structure of the sheet while also varying the height of the waves, directing even more light towards the screen and reducing aliasing or moiré between the structure of the prism sheet and the subpixels of the LCD. A wavy structure is easier to mass-produce than a prismatic one using conventional diamond machine tools, which are used to make the rollers that imprint the wavy structure into plastic sheets, thus producing prism sheets. A diffuser sheet is placed on both sides of the prism sheet to distribute the light of the backlight uniformly, while a mirror is placed behind the light guide plate to direct all light forwards. The prism sheet with its diffuser sheets is placed on top of the light guide plate. The DBEF polarizers consist of a large stack of uniaxially oriented birefringent films that reflect the formerly absorbed polarization mode of the light.
DBEF polarizers using uniaxially oriented polymerized liquid crystals (birefringent polymers or birefringent glue) were invented in 1989 by Philips researchers Dirk Broer, Adrianus de Vaan and Joerg Brambring. The combination of such reflective polarizers and LED dynamic backlight control makes today's LCD televisions far more efficient than the CRT-based sets, leading to a worldwide energy saving of 600 TWh (2017), equal to 10% of the electricity consumption of all households worldwide, or twice the energy production of all solar cells in the world.
Connection to other circuits
A modern LCD television panel has over six million subpixels, all individually powered by a wire network embedded in the screen. The fine wires, or pathways, form a grid, with vertical wires across the whole screen on one side and horizontal wires across the whole screen on the other. To this grid each subpixel has a positive connection on one side and a negative connection on the other. For a 1080p display there are three vertical wires per pixel column (one each for the red, green and blue subpixels), giving 3 × 1920 = 5760 wires running vertically, plus 1080 wires running horizontally, for a total of 6840 wires. For a panel that is 28.8 inches (73 centimeters) wide, that means a wire density of 200 wires per inch along the horizontal edge.
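The arithmetic can be checked with a few lines of Python, using the panel width quoted above:
columns, rows, subpixels_per_pixel = 1920, 1080, 3   # red, green, blue

vertical_wires = columns * subpixels_per_pixel       # 3 x 1920 = 5760 column lines
horizontal_wires = rows                              # 1080 row lines
total_wires = vertical_wires + horizontal_wires      # 6840 wires in total

panel_width_inches = 28.8
print(total_wires, vertical_wires / panel_width_inches)  # 6840, 200.0 wires per inch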
The LCD panel is powered by LCD drivers that are carefully matched up with the edge of the LCD panel at the factory level. The drivers may be installed using several methods, the most common of which are COG (chip-on-glass) and TAB (tape-automated bonding). The same principles apply to smartphone screens, which are much smaller than TV screens. LCD panels typically use thinly-coated metallic conductive pathways on a glass substrate to form the cell circuitry that operates the panel. It is usually not possible to use soldering techniques to directly connect the panel to a separate copper-etched circuit board. Instead, interfacing is accomplished using anisotropic conductive film or, for lower densities, elastomeric connectors.
Passive-matrix
Monochrome and later color passive-matrix LCDs were standard in most early laptops (although a few used plasma displays) and in the original Nintendo Game Boy, until color active-matrix displays became standard on all laptops in the mid-1990s. Passive-matrix LCDs are still used in the 2010s for applications less demanding than laptop computers and TVs, such as inexpensive calculators. In particular, they are used on portable devices where less information needs to be displayed, where the lowest power consumption (no backlight) and low cost are desired, or where readability in direct sunlight is needed.
Displays having a passive-matrix structure use super-twisted nematic (STN; invented by the Brown Boveri Research Center, Baden, Switzerland, in 1983, with scientific details published) or double-layer STN (DSTN) technology (the latter of which addresses a color-shifting problem with the former), as well as color-STN (CSTN), in which color is added by using an internal color filter. STN LCDs have been optimized for passive-matrix addressing. They exhibit a sharper threshold of the contrast-vs-voltage characteristic than the original TN LCDs. This is important, because pixels are subjected to partial voltages even while not selected. Crosstalk between activated and non-activated pixels has to be handled properly by keeping the RMS voltage of non-activated pixels below the threshold voltage, as discovered by Peter J. Wild in 1972, while activated pixels are subjected to voltages above threshold (the voltages according to the "Alt & Pleshko" drive scheme). Driving such STN displays according to the Alt & Pleshko drive scheme requires very high line-addressing voltages. Welzen and de Vaan invented an alternative (non "Alt & Pleshko") drive scheme requiring much lower voltages, such that the STN display could be driven using low-voltage CMOS technologies. White-on-blue LCDs are STN displays that use a blue polarizer or birefringence, which gives them their distinctive appearance.
STN LCDs have to be continuously refreshed by alternating pulsed voltages of one polarity during one frame and pulses of opposite polarity during the next frame. Individual pixels are addressed by the corresponding row and column circuits. This type of display is called passive-matrix addressed, because the pixel must retain its state between refreshes without the benefit of a steady electrical charge. As the number of pixels (and, correspondingly, columns and rows) increases, this type of display becomes less feasible. Slow response times and poor contrast are typical of passive-matrix addressed LCDs with too many pixels driven according to the "Alt & Pleshko" drive scheme. Welzen and de Vaan also invented a non-RMS drive scheme that made it possible to drive STN displays at video rates and show smooth-moving video images on an STN display. Citizen, among others, licensed these patents and successfully introduced several STN-based LCD pocket televisions on the market.
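Why contrast collapses as rows are added can be made concrete with the Alt & Pleshko selection-ratio limit for RMS addressing; the Python sketch below uses the standard textbook formula, with the row counts chosen arbitrarily:
import math

# Alt & Pleshko limit for RMS-addressed passive matrices: with N
# multiplexed rows, the best achievable ratio between the RMS voltage on
# selected and unselected pixels shrinks toward 1, so the contrast
# between on and off pixels collapses for large N.

def selection_ratio(n_rows):
    root = math.sqrt(n_rows)
    return math.sqrt((root + 1.0) / (root - 1.0))

for n in (16, 64, 240, 480):
    print(f"{n:3d} rows -> Von/Voff limit {selection_ratio(n):.3f}")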
Bistable LCDs do not require continuous refreshing; rewriting is only required when the picture information changes. In 1984, HA van Sprang and AJSM de Vaan invented an STN-type display that could be operated in a bistable mode, enabling extremely high-resolution images of up to 4000 lines or more using only low voltages. Since a pixel may be either in an on-state or in an off-state at the moment new information needs to be written to it, the addressing method of these bistable displays is rather complex, which is why they did not make it to the market. That changed when, in the 2010s, "zero-power" (bistable) LCDs became available. Potentially, passive-matrix addressing can be used with such devices if their write/erase characteristics are suitable, which was the case for ebooks showing still pictures only. After a page is written to the display, the display may be disconnected from power while retaining readable images. This has the advantage that such ebooks can operate for long periods of time powered by only a small battery.
High-resolution color displays, such as modern LCD computer monitors and televisions, use an active-matrix structure. A matrix of thin-film transistors (TFTs) is added to the electrodes in contact with the LC layer. Each pixel has its own dedicated transistor, allowing each column line to access one pixel. When a row line is selected, all of the column lines are connected to a row of pixels and voltages corresponding to the picture information are driven onto all of the column lines. The row line is then deactivated and the next row line is selected. All of the row lines are selected in sequence during a refresh operation. Active-matrix addressed displays look brighter and sharper than passive-matrix addressed displays of the same size, and generally have quicker response times, producing much better images. Sharp produces bistable reflective LCDs with a 1-bit SRAM cell per pixel that only requires small amounts of power to maintain an image.
Segment LCDs can also have color by using field-sequential color (FSC LCD). This kind of display has a high-speed passive segment LCD panel with an RGB backlight. The backlight quickly changes color, making it appear white to the naked eye. The LCD panel is synchronized with the backlight. For example, to make a segment appear red, the segment is only turned ON when the backlight is red; to make a segment appear magenta, the segment is turned ON when the backlight is blue, stays ON while the backlight becomes red, and turns OFF when the backlight becomes green. To make a segment appear black, the segment is always turned ON. An FSC LCD divides a color image into 3 images (one red, one green and one blue) and displays them in order. Due to persistence of vision, the 3 monochromatic images appear as one color image. An FSC LCD needs an LCD panel with a refresh rate of 180 Hz, and the response time is reduced to just 5 milliseconds, compared with normal STN LCD panels, which have a response time of 16 milliseconds. FSC LCDs contain a chip-on-glass driver IC and can also be used with a capacitive touchscreen. This technique can also be applied in displays meant to show images, as it can offer higher light transmission and thus potentially reduced backlight power consumption due to the omission of color filters.
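A sketch (Python) of the field-sequential timing: a color frame is split into red, green and blue fields shown in sequence while the backlight color matches, so 60 color frames per second require the 180 Hz panel noted above. Whether "driven" means transmitting or blocking depends on the panel's normally-black or normally-white configuration; this sketch assumes a segment transmits the backlight when its field value is 1:
frame = {"seg_a": (1, 0, 1),    # magenta: lit during the red and blue fields
         "seg_b": (0, 0, 0)}    # black: lit in no field

def fields(frame):
    # Yield the backlight color for each field and the segments that
    # should transmit light during that field.
    for i, color in enumerate(("red", "green", "blue")):
        yield color, [seg for seg, rgb in frame.items() if rgb[i]]

for backlight_color, lit_segments in fields(frame):
    print(f"backlight {backlight_color}: transmit {lit_segments}")

print(f"panel refresh needed: {60 * 3} Hz")  # 3 fields per 60 Hz color frame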
Samsung introduced UFB (Ultra Fine & Bright) displays in 2002; they utilized the super-birefringent effect. UFB has the luminance, color gamut, and most of the contrast of a TFT-LCD, but consumes only as much power as an STN display, according to Samsung. It was used in a variety of Samsung cellular-telephone models produced until late 2006, when Samsung stopped producing UFB displays. UFB displays were also used in certain models of LG mobile phones.
Active-matrix technologies
Twisted nematic (TN)
Twisted nematic displays contain liquid crystals that twist and untwist at varying degrees to allow light to pass through. When no voltage is applied to a TN liquid crystal cell, polarized light passes through the 90-degree twisted LC layer. In proportion to the voltage applied, the liquid crystals untwist, changing the polarization and blocking the light's path. By properly adjusting the level of the voltage, almost any gray level or transmission can be achieved.
In-plane switching (IPS)
In-plane switching (IPS) is an LCD technology that aligns the liquid crystals in a plane parallel to the glass substrates. In this method, the electrical field is applied through opposite electrodes on the same glass substrate, so that the liquid crystals can be reoriented (switched) essentially in the same plane, although fringe fields inhibit a homogeneous reorientation. This requires two transistors for each pixel instead of the single transistor needed for a standard thin-film transistor (TFT) display. IPS technology is used in everything from televisions and computer monitors to wearable devices; almost all LCD smartphone panels are IPS/FFS mode. IPS displays belong to the LCD panel family of screen types; the other two types are VA and TN. Before enhanced IPS was introduced by Hitachi in a 17-inch monitor in 2001, the additional transistors blocked more of the transmission area, requiring a brighter backlight that consumed more power and making this type of display less desirable for notebook computers. Panasonic's Himeji G8.5 plant used an enhanced version of IPS, as did LG Display in Korea; BOE in China, currently the world's biggest LCD panel manufacturer, also produces IPS/FFS-mode TV panels.
Super In-plane switching (S-IPS)
Super-IPS was later introduced after in-plane switching with even better response times and color reproduction.
M+ or RGBW controversy
In 2015, LG Display announced the implementation of a new technology called M+, the addition of a white subpixel alongside the regular RGB dots in their IPS panel technology.
Most of the new M+ technology was employed on 4K TV sets, which led to a controversy after tests showed that the addition of a white subpixel replacing the traditional RGB structure was accompanied by a reduction in resolution of around 25%. This meant that a "4K" M+ TV could not display the full UHD TV standard. The media and internet users called them "RGBW" TVs because of the white subpixel. Although LG Display developed this technology for use in notebook displays, outdoor displays, and smartphones, it became more popular in the TV market because of the announced "4K UHD" resolution, despite being incapable of achieving true UHD resolution, defined by the CTA as 3840×2160 active pixels with 8-bit color. This negatively impacted the rendering of text, making it a bit fuzzier, which was especially noticeable when a TV was used as a PC monitor.
IPS in comparison to AMOLED
In 2011, LG claimed that the smartphone LG Optimus Black (IPS LCD, marketed as the NOVA display) had a brightness of up to 700 nits, compared with 518 nits for a competitor's IPS LCD and more than double the 305 nits of a competing active-matrix OLED (AMOLED) display. LG also claimed the NOVA display was 50 percent more efficient than regular LCDs and consumed only 50 percent of the power of AMOLED displays when producing white on screen. In terms of contrast ratio, AMOLED displays still perform best due to the underlying technology, where black levels are displayed as pitch black and not as dark gray. On August 24, 2011, Nokia announced the Nokia 701 and claimed the world's brightest display at 1000 nits. The screen also had Nokia's ClearBlack layer, improving the contrast ratio and bringing it closer to that of AMOLED screens.
Advanced fringe field switching (AFFS)
Known as fringe field switching (FFS) until 2003, advanced fringe field switching (AFFS) is similar to IPS or S-IPS, offering superior performance and color gamut with high luminosity. AFFS was developed by Hydis Technologies Co., Ltd, Korea (formerly Hyundai Electronics, LCD Task Force). AFFS-applied notebook applications minimize color distortion while maintaining a wider viewing angle for a professional display. Color shift and deviation caused by light leakage are corrected by optimizing the white gamut, which also enhances white/gray reproduction. In 2004, Hydis Technologies Co., Ltd licensed AFFS to Japan's Hitachi Displays. Hitachi used AFFS to manufacture high-end panels. In 2006, Hydis licensed AFFS to Sanyo Epson Imaging Devices Corporation. Shortly thereafter, Hydis introduced a high-transmittance evolution of the AFFS display, called HFFS (FFS+). Hydis introduced AFFS+, with improved outdoor readability, in 2007. AFFS panels are mostly utilized in the cockpit displays of the latest commercial aircraft. However, AFFS is no longer produced as of February 2015.
Vertical alignment (VA)
Vertical-alignment displays are a form of LCDs in which the liquid crystals naturally align vertically to the glass substrates. When no voltage is applied, the liquid crystals remain perpendicular to the substrate, creating a black display between crossed polarizers. When voltage is applied, the liquid crystals shift to a tilted position, allowing light to pass through and create a gray-scale display depending on the amount of tilt generated by the electric field. It has a deeper-black background, a higher contrast ratio, a wider viewing angle, and better image quality at extreme temperatures than traditional twisted-nematic displays. Compared to IPS, the black levels are still deeper, allowing for a higher contrast ratio, but the viewing angle is narrower, with color and especially contrast shift being more apparent, and the cost of VA is lower than IPS (but higher than TN).
Blue phase mode
Blue phase mode LCDs have been shown as engineering samples early in 2008, but they are not in mass-production. The physics of blue phase mode LCDs suggest that very short switching times (≈1 ms) can be achieved, so time sequential color control can possibly be realized and expensive color filters would be obsolete.
Quality control
Some LCD panels have defective transistors, causing permanently lit or unlit pixels, commonly referred to as stuck pixels or dead pixels respectively. Unlike integrated circuits (ICs), LCD panels with a few defective transistors are usually still usable. Manufacturers' policies for the acceptable number of defective pixels vary greatly. At one point, Samsung held a zero-tolerance policy for LCD monitors sold in Korea, but it later adopted the less restrictive ISO 13406-2 standard. Other companies have been known to tolerate as many as 11 dead pixels in their policies.
Dead pixel policies are often hotly debated between manufacturers and customers. To regulate the acceptability of defects and to protect the end user, ISO released the ISO 13406-2 standard, which was made obsolete in 2008 with the release of ISO 9241, specifically ISO-9241-302, 303, 305, 307:2008 pixel defects. However, not every LCD manufacturer conforms to the ISO standard and the ISO standard is quite often interpreted in different ways. LCD panels are more likely to have defects than most ICs due to their larger size.
Many manufacturers will replace a product with even one defective pixel. Even where such guarantees do not exist, the location of defective pixels is important. A display with only a few defective pixels may be unacceptable if the defective pixels are near each other. LCD panels also commonly have a defect known as clouding, dirty screen effect, or, less commonly, mura, which involves uneven patches of luminance on the panel. It is most visible in dark or black areas of displayed scenes. Most premium-branded computer LCD panel manufacturers specify their products as having zero defects.
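A sketch (Python) of an ISO 13406-2-style pixel-fault budget; the Class II limits used here (defects allowed per million pixels) are the commonly cited values, and the exact numbers should be treated as an assumption rather than a normative reference:
CLASS_II_PER_MILLION = {"always_lit": 2, "always_dark": 2, "subpixel": 5}

def allowed_defects(width, height, limits=CLASS_II_PER_MILLION):
    # Scale the per-million allowances by the panel's pixel count.
    megapixels = width * height / 1_000_000
    return {kind: int(rate * megapixels) for kind, rate in limits.items()}

print(allowed_defects(1920, 1080))
# {'always_lit': 4, 'always_dark': 4, 'subpixel': 10}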
"Zero-power" (bistable) displays
The zenithal bistable device (ZBD), developed by Qinetiq (formerly DERA), can retain an image without power. The crystals may exist in one of two stable orientations ("black" and "white"), and power is only required to change the image. ZBD Displays is a spin-off company from QinetiQ which manufactured both grayscale and color ZBD devices. Kent Displays has also developed a "no-power" display that uses polymer-stabilized cholesteric liquid crystal (ChLCD). In 2009 Kent demonstrated the use of a ChLCD to cover the entire surface of a mobile phone, allowing it to change colors and keep that color even when power is removed.
In 2004, researchers at the University of Oxford demonstrated two new types of zero-power bistable LCDs based on Zenithal bistable techniques. Several bistable technologies, like the 360° BTN and the bistable cholesteric, depend mainly on the bulk properties of the liquid crystal (LC) and use standard strong anchoring, with alignment films and LC mixtures similar to the traditional monostable materials. Other bistable technologies, e.g., BiNem technology, are based mainly on the surface properties and need specific weak anchoring materials.
Specifications
Resolution: The resolution of an LCD is expressed by the number of columns and rows of pixels (e.g., 1024×768). Each pixel is usually composed of 3 subpixels: a red, a green, and a blue one. This had been one of the few features of LCD performance that remained uniform among different designs. However, newer designs share subpixels among pixels, and some, such as Sharp's Quattron, add a fourth (yellow) subpixel; these attempt to increase the perceived resolution of a display without increasing the actual resolution, with mixed results.
Spatial performance: For a computer monitor or some other display that is viewed from a very close distance, resolution is often expressed in terms of dot pitch or pixels per inch, which is consistent with the printing industry (a worked pixel-density example follows this list of specifications). Display density varies by application, with televisions generally having a low density for long-distance viewing and portable devices having a high density for close-range detail. The viewing angle of an LCD may be important depending on the display and its usage; the limitations of certain display technologies mean the display only displays accurately at certain angles.
Temporal performance: the temporal resolution of an LCD is how well it can display changing images, or the accuracy and the number of times per second the display draws the data it is being given. LCD pixels do not flash on/off between frames, so LCD monitors exhibit no refresh-induced flicker no matter how low the refresh rate. But a lower refresh rate can mean visual artefacts like ghosting or smearing, especially with fast moving images. Individual pixel response time is also important, as all displays have some inherent latency in displaying an image which can be large enough to create visual artifacts if the displayed image changes rapidly.
Color performance: There are multiple terms to describe different aspects of the color performance of a display. Color gamut is the range of colors that can be displayed, and color depth is the fineness with which the color range is divided. Color gamut is a relatively straightforward feature, but it is rarely discussed in marketing materials except at the professional level. Having a color range that exceeds the content being shown on the screen has no benefits, so displays are only made to perform within or below the range of a certain specification. There are additional aspects to LCD color and color management, such as white point and gamma correction, which describe what color white is and how the other colors are displayed relative to white.
Brightness and contrast ratio: Contrast ratio is the ratio of the brightness of a full-on pixel to a full-off pixel. The LCD itself is only a light valve and does not generate light; the light comes from a backlight that is either fluorescent or a set of LEDs. Brightness is usually stated as the maximum light output of the LCD, which can vary greatly based on the transparency of the LCD and the brightness of the backlight. A brighter backlight allows stronger contrast and higher dynamic range (HDR displays are graded in peak luminance), but there is always a trade-off between brightness and power consumption.
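Worked examples (Python) for the spatial and contrast figures above: pixel density from resolution and diagonal size, and contrast ratio as full-on over full-off luminance. The panel sizes and luminance values are illustrative, not measured data:
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    # Pixel density: pixels along the diagonal divided by the diagonal length.
    return math.hypot(width_px, height_px) / diagonal_inches

def contrast_ratio(white_nits, black_nits):
    # Luminance of a full-on pixel divided by a full-off pixel.
    return white_nits / black_nits

print(f"{pixels_per_inch(1920, 1080, 24):.0f} ppi")    # desktop monitor: ~92 ppi
print(f"{pixels_per_inch(2560, 1440, 5.5):.0f} ppi")   # smartphone: ~534 ppi
print(f"{contrast_ratio(250, 0.25):.0f}:1")            # typical LCD panel: 1000:1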
Advantages and disadvantages
Some of these issues relate to full-screen displays, others to small displays as on watches, etc. Many of the comparisons are with CRT displays.
Advantages
Very compact, thin and light, especially in comparison with CRT displays.
Low power consumption. Depending on the set display brightness and content being displayed, the older CCFT backlit models typically use less than half of the power a CRT monitor of the same size viewing area would use, and the modern LED backlit models typically use 10–25% of the power a CRT monitor would use.
Little heat emitted during operation, due to low power consumption.
No geometric distortion.
The possible ability to have little or no flicker depending on backlight technology.
Usually no refresh-rate flicker, because the LCD pixels hold their state between refreshes (which are usually done at 200 Hz or faster, regardless of the input refresh rate).
Sharp image with no bleeding or smearing when operated at native resolution.
Emits almost no undesirable electromagnetic radiation (in the extremely low frequency range), unlike a CRT monitor.
Can be made in almost any size or shape.
No theoretical resolution limit. When multiple LCD panels are used together to create a single canvas, each additional panel increases the total resolution of the display, which is commonly called stacked resolution.
Can be made in large sizes of over 80-inch (2 m) diagonal.
LCDs can be made transparent and flexible, but they cannot emit light without a backlight like OLED and microLED, which are other technologies that can also be made flexible and transparent.
Masking effect: the LCD grid can mask the effects of spatial and grayscale quantization, creating the illusion of higher image quality.
Unaffected by magnetic fields, including the Earth's, unlike most color CRTs.
As an inherently digital device, the LCD can natively display digital data from a DVI or HDMI connection without requiring conversion to analog. Some LCD panels have native fiber-optic inputs in addition to DVI and HDMI.
Many LCD monitors are powered by a 12 V power supply, and if built into a computer can be powered by its 12 V power supply.
Can be made with very narrow frame borders, allowing multiple LCD screens to be arrayed side by side to make up what looks like one big screen.
Disadvantages
Limited viewing angle in some older or cheaper monitors, causing color, saturation, contrast and brightness to vary with user position, even within the intended viewing angle. Special films can be used to increase the viewing angles of LCDs.
Uneven backlighting in some monitors (more common in IPS-types and older TNs), causing brightness distortion, especially toward the edges ("backlight bleed").
Black levels may not be as dark as required because individual liquid crystals cannot completely block all of the backlight from passing through.
Display motion blur on moving objects caused by slow response times (>8 ms) and eye-tracking on a sample-and-hold display, unless a strobing backlight is used. However, this strobing can cause eye strain, as is noted next:
Most implementations of LCD backlighting use pulse-width modulation (PWM) to dim the display, which makes the screen flicker more acutely (though not necessarily visibly) than a CRT monitor at an 85 Hz refresh rate would. This is because the entire screen strobes on and off, whereas a CRT's sustained phosphor dot continually scans across the display, leaving some part of it always lit. This can cause severe eye strain for some people, many of whom do not realize that their eye strain is caused by the invisible strobe effect of PWM. The problem is worse on many LED-backlit monitors, because the LEDs switch on and off faster than a CCFL lamp.
Only one native resolution. Displaying any other resolution either requires a video scaler, causing blurriness and jagged edges, or running the display at native resolution using 1:1 pixel mapping, causing the image either not to fill the screen (letterboxed display) or to run off one or more edges of the screen.
Fixed bit depth (also called color depth). Many cheaper LCDs are only able to display 262,144 (2¹⁸) colors. 8-bit S-IPS panels can display 16 million (2²⁴) colors and have significantly better black levels, but are expensive and have slower response times.
Input lag, because the LCD's A/D converter waits for each frame to be completely output before drawing it to the LCD panel. Many LCD monitors do post-processing before displaying the image in an attempt to compensate for poor color fidelity, which adds additional lag. Further, a video scaler must be used when displaying non-native resolutions, which adds yet more lag. Scaling and post-processing are usually done in a single chip on modern monitors, but each function that chip performs adds some delay. Some displays have a video gaming mode which disables all or most processing to reduce perceivable input lag.
Dead or stuck pixels may occur during manufacturing or after a period of use. A stuck pixel will glow with color even on an all-black screen, while a dead one will always remain black.
Subject to a burn-in effect, although the cause differs from that of CRTs and the effect may not be permanent; a static image can cause burn-in in a matter of hours on badly designed displays.
In constant-on use, thermalization may occur under poor thermal management: part of the screen overheats and looks discolored compared to the rest of the screen.
Loss of brightness and much slower response times in low temperature environments. In sub-zero environments, LCD screens may cease to function without the use of supplemental heating.
Loss of contrast in high temperature environments.
Chemicals used
Several different families of liquid crystals are used in liquid crystal displays. The molecules used have to be anisotropic, and to exhibit mutual attraction. Polarizable rod-shaped molecules (biphenyls, terphenyls, etc.) are common. A common form is a pair of aromatic benzene rings, with a nonpolar moiety (pentyl, heptyl, octyl, or alkyl oxy group) on one end and a polar one (nitrile, halogen) on the other. Sometimes the benzene rings are separated with an acetylene group, ethylene, CH=N, CH=NO, N=N, N=NO, or ester group. In practice, eutectic mixtures of several chemicals are used, to achieve a wider operating temperature range (−10..+60 °C for low-end and −20..+100 °C for high-performance displays). For example, the E7 mixture is composed of three biphenyls and one terphenyl: 39 wt.% of 4'-pentyl[1,1'-biphenyl]-4-carbonitrile (nematic range 24..35 °C), 36 wt.% of 4'-heptyl[1,1'-biphenyl]-4-carbonitrile (nematic range 30..43 °C), 16 wt.% of 4'-octoxy[1,1'-biphenyl]-4-carbonitrile (nematic range 54..80 °C), and 9 wt.% of 4''-pentyl[1,1':4',1''-terphenyl]-4-carbonitrile (nematic range 131..240 °C).
Environmental impact
The production of LCD screens uses nitrogen trifluoride (NF3) as an etching fluid during the production of the thin-film components. NF3 is a potent greenhouse gas, and its relatively long atmospheric lifetime makes it a potentially harmful contributor to global warming. A report in Geophysical Research Letters suggested that, mass for mass, its effects were theoretically much greater than those of better-known greenhouse gases such as carbon dioxide. As NF3 was not in widespread use at the time, it was not made part of the Kyoto Protocol and was deemed "the missing greenhouse gas". NF3 was added to the Kyoto Protocol for the second compliance period during the Doha Round.
Critics of the report point out that it assumes that all of the NF3 produced would be released to the atmosphere. In reality, the vast majority of NF3 is broken down during the cleaning processes; two earlier studies found that only 2 to 3% of the gas escapes destruction after its use. Furthermore, the report failed to compare NF3's effects with what it replaced, perfluorocarbon, another powerful greenhouse gas, of which anywhere from 30 to 70% escapes to the atmosphere in typical use.
| Technology | Media and communication | null |
17939 | https://en.wikipedia.org/wiki/Light | Light | Light, visible light, or visible radiation is electromagnetic radiation that can be perceived by the human eye. Visible light spans the visible spectrum and is usually defined as having wavelengths in the range of 400–700 nanometres (nm), corresponding to frequencies of 750–420 terahertz. The visible band sits adjacent to the infrared (with longer wavelengths and lower frequencies) and the ultraviolet (with shorter wavelengths and higher frequencies), called collectively optical radiation.
In physics, the term "light" may refer more broadly to electromagnetic radiation of any wavelength, whether visible or not. In this sense, gamma rays, X-rays, microwaves and radio waves are also light. The primary properties of light are intensity, propagation direction, frequency or wavelength spectrum, and polarization. Its speed in vacuum, c, is one of the fundamental constants of nature. Like all types of electromagnetic radiation, visible light propagates by massless elementary particles called photons, which represent the quanta of the electromagnetic field, and can be analyzed as both waves and particles. The study of light, known as optics, is an important research area in modern physics.
The main source of natural light on Earth is the Sun. Historically, another important source of light for humans has been fire, from ancient campfires to modern kerosene lamps. With the development of electric lights and power systems, electric lighting has effectively replaced firelight.
Electromagnetic spectrum and visible light
Generally, electromagnetic radiation (EMR) is classified by wavelength into radio waves, microwaves, infrared, the visible spectrum that we perceive as light, ultraviolet, X-rays and gamma rays. The designation "radiation" excludes static electric, magnetic and near fields.
The behavior of EMR depends on its wavelength. Higher frequencies have shorter wavelengths and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries.
EMR in the visible light region consists of quanta (called photons) that are at the lower end of the energies capable of causing electronic excitation within molecules, which leads to changes in the bonding or chemistry of the molecule. At the lower end of the visible light spectrum, EMR becomes invisible to humans (infrared) because its photons no longer have enough individual energy to cause a lasting molecular change (a change in conformation) in the visual molecule retinal in the human retina, a change that triggers the sensation of vision.
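The energy carried per photon follows E = hc/λ; a quick Python sketch using standard constants (the wavelengths are arbitrary examples) shows why longer wavelengths fall below the threshold for exciting the visual pigment:

    # Photon energy E = h*c / wavelength.
    h = 6.62607015e-34    # Planck constant, J*s
    c = 299_792_458       # speed of light in vacuum, m/s
    eV = 1.602176634e-19  # joules per electronvolt

    for name, wavelength_nm in (("green (visible)", 555), ("near infrared", 1000)):
        E = h * c / (wavelength_nm * 1e-9)
        print(f"{name}: {E:.2e} J ≈ {E / eV:.2f} eV")
    # green: ~3.6e-19 J (~2.2 eV); infrared: ~2.0e-19 J (~1.2 eV)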
There exist animals that are sensitive to various types of infrared, but not by means of quantum-absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, which is how these animals detect it.
Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nm and the internal lens below 400 nm. Furthermore, the rods and cones located in the retina of the human eye cannot detect the very short (below 360 nm) ultraviolet wavelengths and are in fact damaged by ultraviolet. Many animals with eyes that do not require lenses (such as insects and shrimp) are able to detect ultraviolet, by quantum photon-absorption mechanisms, in much the same chemical way that humans detect visible light.
Various sources define visible light as narrowly as 420–680 nm to as broadly as 380–800 nm. Under ideal laboratory conditions, people can see infrared up to at least 1,050 nm; children and young adults may perceive ultraviolet wavelengths down to about 310–313 nm.
Plant growth is also affected by the colour spectrum of light, a process known as photomorphogenesis.
Speed of light
The speed of light in vacuum is defined to be exactly 299,792,458 metres per second (approximately 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the metre is now defined in terms of the speed of light. All forms of electromagnetic radiation move at exactly this same speed in vacuum.
Different physicists have attempted to measure the speed of light throughout history. Galileo attempted to measure the speed of light in the seventeenth century. An early experiment to measure the speed of light was conducted by Ole Rømer, a Danish physicist, in 1676. Using a telescope, Rømer observed the motions of Jupiter and one of its moons, Io. Noting discrepancies in the apparent period of Io's orbit, he calculated that light takes about 22 minutes to traverse the diameter of Earth's orbit. However, the size of the orbit was not known at that time. Had Rømer known the diameter of the Earth's orbit, he would have calculated a speed of about 227,000 kilometres per second.
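Rømer's reasoning can be replayed in a few lines; this sketch uses the modern value of the astronomical unit together with his historical 22-minute estimate:

    # Speed of light implied by Rømer's ~22-minute figure for light
    # to cross the diameter of Earth's orbit (2 AU, modern value).
    au_m = 1.495978707e11        # astronomical unit in metres
    crossing_s = 22 * 60         # Rømer's estimate

    speed = 2 * au_m / crossing_s
    print(f"implied speed ≈ {speed:.3e} m/s")   # ≈ 2.27e8 m/s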
Another more accurate measurement of the speed of light was performed in Europe by Hippolyte Fizeau in 1849. Fizeau directed a beam of light at a mirror several kilometers away. A rotating cog wheel was placed in the path of the light beam as it traveled from the source, to the mirror and then returned to its origin. Fizeau found that at a certain rate of rotation, the beam would pass through one gap in the wheel on the way out and the next gap on the way back. Knowing the distance to the mirror, the number of teeth on the wheel and the rate of rotation, Fizeau was able to calculate the speed of light as about 313,000 kilometres per second.
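Fizeau's calculation reduces to a round-trip time equal to the time the wheel takes to advance from one gap to the next. The figures in this sketch approximate his apparatus; the rotation rate is an assumed value for the next-gap condition, not a quoted historical measurement:

    # Sketch of Fizeau's toothed-wheel calculation (approximate figures).
    teeth = 720          # teeth (and gaps) around the wheel
    distance_m = 8_633   # source-to-mirror distance in metres
    rot_per_s = 25.2     # assumed rotation rate for the next-gap condition

    # The round trip takes as long as one tooth-plus-gap period,
    # i.e. 1/teeth of a revolution.
    t = 1 / (teeth * rot_per_s)
    c = 2 * distance_m / t
    print(f"c ≈ {c:.3e} m/s")   # ≈ 3.13e8 m/s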
Léon Foucault carried out an experiment which used rotating mirrors to obtain a value of 298,000 kilometres per second in 1862. Albert A. Michelson conducted experiments on the speed of light from 1877 until his death in 1931. He refined Foucault's methods in 1926 using improved rotating mirrors to measure the time it took light to make a round trip from Mount Wilson to Mount San Antonio in California. The precise measurements yielded a speed of 299,796 kilometres per second.
The effective velocity of light in various transparent substances containing ordinary matter is less than in vacuum. For example, the speed of light in water is about 3/4 of that in vacuum.
Two independent teams of physicists were said to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium, one team at Harvard University and the Rowland Institute for Science in Cambridge, Massachusetts and the other at the Harvard–Smithsonian Center for Astrophysics, also in Cambridge. However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrary later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light.
Optics
The study of light and the interaction of light and matter is termed optics. The observation and study of optical phenomena such as rainbows and the aurora borealis offer many clues as to the nature of light.
A transparent object allows light to be transmitted, or pass through. Conversely, an opaque object does not allow light to pass through, instead reflecting or absorbing the light it receives. Most objects do not reflect or transmit light specularly but scatter the incoming light to some degree, a property related to glossiness. Surface scatterance is caused by the roughness of the reflecting surfaces, and internal scatterance is caused by the difference in refractive index between the particles and the medium inside the object. Like transparent objects, translucent objects allow light to pass through, but they also scatter certain wavelengths of light via internal scatterance.
Refraction
Refraction is the bending of light rays when passing through a surface between one transparent material and another. It is described by Snell's Law: n1 sin θ1 = n2 sin θ2,
where θ1 is the angle between the ray and the surface normal in the first medium, θ2 is the angle between the ray and the surface normal in the second medium and n1 and n2 are the indices of refraction, n = 1 in a vacuum and n > 1 in a transparent substance.
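A small numerical sketch of Snell's law in Python (the indices, roughly air and water, are illustrative values):

    import math

    def refraction_angle(theta1_deg, n1, n2):
        """Angle in medium 2 from n1*sin(θ1) = n2*sin(θ2).
        Returns None beyond the critical angle (total internal reflection)."""
        s = n1 * math.sin(math.radians(theta1_deg)) / n2
        if abs(s) > 1:
            return None
        return math.degrees(math.asin(s))

    # Light entering water (n ≈ 1.33) from air (n ≈ 1.00) at 45 degrees:
    print(f"{refraction_angle(45, 1.00, 1.33):.1f}°")   # ≈ 32.1°, bent toward the normal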
When a beam of light crosses the boundary between a vacuum and another medium, or between two different media, the wavelength of the light changes, but the frequency remains constant. If the beam of light is not orthogonal (or rather normal) to the boundary, the change in wavelength results in a change in the direction of the beam. This change of direction is known as refraction.
The refractive quality of lenses is frequently used to manipulate light in order to change the apparent size of images. Magnifying glasses, spectacles, contact lenses, microscopes and refracting telescopes are all examples of this manipulation.
Light sources
There are many sources of light. A body at a given temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is sunlight, the radiation emitted by the photosphere of the Sun at around 5,800 K. Solar radiation peaks in the visible region of the electromagnetic spectrum when plotted in wavelength units, and roughly 44% of the radiation that reaches the ground is visible. Another example is incandescent light bulbs, which emit only around 10% of their energy as visible light and the remainder as infrared. A common thermal light source in history has been the glowing solid particles in flames, but these also emit most of their radiation in the infrared and only a fraction in the visible spectrum.
The peak of the black-body spectrum is in the deep infrared, at about 10 micrometre wavelength, for relatively cool objects like human beings. As the temperature increases, the peak shifts to shorter wavelengths, producing first a red glow, then a white one and finally a blue-white colour as the peak moves out of the visible part of the spectrum and into the ultraviolet. These colours can be seen when metal is heated to "red hot" or "white hot". Blue-white thermal emission is not often seen, except in stars (the commonly seen pure-blue colour in a gas flame or a welder's torch is in fact due to molecular emission, notably by CH radicals emitting a wavelength band around 425 nm and is not seen in stars or pure thermal radiation).
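The shift of the peak with temperature follows Wien's displacement law, λ_peak = b/T. A short Python sketch (standard constant; the temperatures are round illustrative values):

    # Wien's displacement law: peak wavelength of black-body emission.
    b = 2.897771955e-3   # Wien displacement constant, m*K

    for label, T in (("human body", 310), ("red-hot metal", 1000), ("the Sun", 5800)):
        print(f"{label} ({T} K): peak ≈ {b / T * 1e6:.2f} µm")
    # 310 K peaks near 9.3 µm, in the deep infrared, as noted above.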
Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum of each atom. Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as neon lamps and neon signs, mercury-vapor lamps, etc.) and flames (light from the hot gas itself—so, for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated, as in a laser or a microwave maser.
Deceleration of a free charged particle, such as an electron, can produce visible radiation: cyclotron radiation, synchrotron radiation and bremsstrahlung radiation are all examples of this. Particles moving through a medium faster than the speed of light in that medium can produce visible Cherenkov radiation. Certain chemicals produce visible radiation by chemoluminescence. In living things, this process is called bioluminescence. For example, fireflies produce light by this means and boats moving through water can disturb plankton which produce a glowing wake.
Certain substances produce light when they are illuminated by more energetic radiation, a process known as fluorescence. Some substances emit light slowly after excitation by more energetic radiation. This is known as phosphorescence. Phosphorescent materials can also be excited by bombarding them with subatomic particles. Cathodoluminescence is one example. This mechanism is used in cathode-ray tube television sets and computer monitors.
Certain other mechanisms can produce light:
Electroluminescence
Scintillation
Sonoluminescence
Triboluminescence
When the concept of light is intended to include very-high-energy photons (gamma rays), additional generation mechanisms include:
Particle–antiparticle annihilation
Radioactive decay
Measurement
Light is measured with two main alternative sets of units: radiometry consists of measurements of light power at all wavelengths, while photometry measures light with wavelength weighted with respect to a standardized model of human brightness perception. Photometry is useful, for example, to quantify illumination intended for human use.
The photometry units are different from most systems of physical units in that they take into account how the human eye responds to light. The cone cells in the human eye are of three types which respond differently across the visible spectrum, and the cumulative response peaks at a wavelength of around 555 nm. Therefore, two sources of light which produce the same intensity (W/m²) of visible light do not necessarily appear equally bright. The photometry units are designed to take this into account and are therefore a better representation of how "bright" a light appears to be than raw intensity. They relate to raw power by a quantity called luminous efficacy and are used for purposes such as determining how best to achieve sufficient illumination for various tasks in indoor and outdoor settings. The illumination measured by a photocell sensor does not necessarily correspond to what is perceived by the human eye: without filters, which may be costly, photocells and charge-coupled devices (CCDs) tend to respond to some infrared, ultraviolet or both.
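For monochromatic sources, the radiometric-to-photometric conversion can be sketched directly: luminous flux weights radiant power by the photopic luminosity function V(λ), scaled by the maximum luminous efficacy of 683 lm/W at 555 nm. The V(λ) values below are approximate tabulated points, not exact:

    # Luminous flux of a monochromatic source: Φv = 683 * V(λ) * Φe.
    V = {450: 0.038, 510: 0.503, 555: 1.000, 610: 0.503, 650: 0.107}  # approx.
    K_m = 683  # lm/W, maximum luminous efficacy (at 555 nm)

    def luminous_flux(power_w, wavelength_nm):
        return K_m * V[wavelength_nm] * power_w

    print(luminous_flux(1.0, 555))  # 683 lm
    print(luminous_flux(1.0, 650))  # ≈ 73 lm: one watt of red looks far dimmer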
Light pressure
Light exerts physical pressure on objects in its path, a phenomenon which can be deduced from Maxwell's equations, but can be more easily explained by the particle nature of light: photons strike and transfer their momentum. Light pressure is equal to the power of the light beam divided by c, the speed of light. Due to the magnitude of c, the effect of light pressure is negligible for everyday objects. For example, a one-milliwatt laser pointer exerts a force of about 3.3 piconewtons on the object being illuminated; thus, one could lift a U.S. penny with laser pointers, but doing so would require about 30 billion 1-mW laser pointers. However, in nanometre-scale applications such as nanoelectromechanical systems (NEMS), the effect of light pressure is more significant and exploiting light pressure to drive NEMS mechanisms and to flip nanometre-scale physical switches in integrated circuits is an active area of research. At larger scales, light pressure can cause asteroids to spin faster, acting on their irregular shapes as on the vanes of a windmill. The possibility of making solar sails that would accelerate spaceships in space is also under investigation.
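The laser-pointer figure is easy to check; this sketch assumes a perfectly absorbing target, so the force is simply F = P/c:

    # Radiation pressure force on an absorbing surface: F = P / c.
    c = 299_792_458   # speed of light, m/s
    P = 1e-3          # 1 mW laser pointer

    print(f"force ≈ {P / c * 1e12:.2f} pN")   # ≈ 3.34 pN, matching the text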
Although the motion of the Crookes radiometer was originally attributed to light pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a partial vacuum. This should not be confused with the Nichols radiometer, in which the (slight) motion caused by torque (though not enough for full rotation against friction) is directly caused by light pressure.
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."
Usually light momentum is aligned with its direction of motion. However, in evanescent waves, for example, momentum is transverse to the direction of propagation.
Historical theories about light, in chronological order
Classical Greece and Hellenism
In the fifth century BC, Empedocles postulated that everything was composed of four elements: fire, air, earth and water. He believed that the goddess Aphrodite made the human eye out of the four elements and that she lit the fire in the eye, which shone out from the eye making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun.
In about 300 BC, Euclid wrote Optica, in which he studied the properties of light. Euclid postulated that light travelled in straight lines, and he described the laws of reflection and studied them mathematically. He questioned whether sight is the result of a beam from the eye, asking how one sees the stars immediately if one closes one's eyes and then opens them at night. If the beam from the eye travels infinitely fast, this is not a problem.
In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote that "The light & heat of the sun; these are composed of minute atoms which, when they are shoved off, lose no time in shooting right across the interspace of air in the direction imparted by the shove." (from On the nature of the Universe). Despite being similar to later particle theories, Lucretius's views were not generally accepted. Ptolemy (c. second century) wrote about the refraction of light in his book Optics.
Classical India
In ancient India, the Hindu schools of Samkhya and Vaisheshika, from around the early centuries AD, developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. The atomicity of these elements is not specifically mentioned, and it appears that they were actually taken to be continuous.
The Vishnu Purana refers to sunlight as "the seven rays of the sun".
The Indian Buddhists, such as Dignāga in the fifth century and Dharmakirti in the seventh century, developed a type of atomism holding that reality is composed of atomic entities that are momentary flashes of light or energy. They viewed light as an atomic entity equivalent to energy.
Descartes
René Descartes (1596–1650) held that light was a mechanical property of the luminous body, rejecting the "forms" of Ibn al-Haytham and Witelo as well as the "species" of Roger Bacon, Robert Grosseteste and Johannes Kepler. In 1637 he published a theory of the refraction of light that assumed, incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes arrived at this conclusion by analogy with the behaviour of sound waves. Although Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like a wave and in concluding that refraction could be explained by the speed of light in different media.
Descartes was not the first to use mechanical analogies, but because he clearly asserted that light was only a mechanical property of the luminous body and the transmitting medium, his theory of light is regarded as the start of modern physical optics.
Particle theory
Pierre Gassendi (1592–1655), an atomist, proposed a particle theory of light which was published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age and preferred his view to Descartes's theory of the plenum. He stated in his Hypothesis of Light of 1675 that light was composed of corpuscles (particles of matter) which were emitted in all directions from a source. One of Newton's arguments against the wave nature of light was that waves were known to bend around obstacles, while light travelled only in straight lines. He did, however, explain the phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing that a light particle could create a localised wave in the aether.
Newton's theory could be used to predict the reflection of light, but could only explain refraction by incorrectly assuming that light accelerated upon entering a denser medium because the gravitational pull was greater. Newton published the final version of his theory in his Opticks of 1704. His reputation helped the particle theory of light to hold sway during the eighteenth century. The particle theory of light led Pierre-Simon Laplace to argue that a body could be so massive that light could not escape from it. In other words, it would become what is now called a black hole. Laplace withdrew his suggestion later, after a wave theory of light became firmly established as the model for light (as has been explained, neither a particle nor a wave theory is fully correct). A translation of Newton's essay on light appears in The Large Scale Structure of Space-Time, by Stephen Hawking and George F. R. Ellis.
The fact that light could be polarized was first qualitatively explained by Newton using the particle theory. Étienne-Louis Malus in 1810 created a mathematical particle theory of polarization. Jean-Baptiste Biot in 1812 showed that this theory explained all known phenomena of light polarization. At that time polarization was considered proof of the particle theory.
Wave theory
To explain the origin of colours, Robert Hooke (1635–1703) developed a "pulse theory" and compared the spreading of light to that of waves in water in his 1665 work Micrographia ("Observation IX"). In 1672 Hooke suggested that light's vibrations could be perpendicular to the direction of propagation. Christiaan Huygens (1629–1695) worked out a mathematical wave theory of light in 1678 and published it in his Treatise on Light in 1690. He proposed that light was emitted in all directions as a series of waves in a medium called the luminiferous aether. As waves are not affected by gravity, it was assumed that they slowed down upon entering a denser medium.
The wave theory predicted that light waves could interfere with each other like sound waves (as noted around 1800 by Thomas Young). Young showed by means of a diffraction experiment that light behaved as waves. He also proposed that different colours were caused by different wavelengths of light and explained colour vision in terms of three-coloured receptors in the eye. Another supporter of the wave theory was Leonhard Euler. He argued in Nova theoria lucis et colorum (1746) that diffraction could more easily be explained by a wave theory. In 1816 André-Marie Ampère gave Augustin-Jean Fresnel the idea that the polarization of light could be explained by the wave theory if light were a transverse wave.
Later, Fresnel independently worked out his own wave theory of light and presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's mathematical work to produce a convincing argument in favor of the wave theory, helping to overturn Newton's corpuscular theory. By the year 1821, Fresnel was able to show via mathematical methods that polarization could be explained by the wave theory of light if and only if light was entirely transverse, with no longitudinal vibration whatsoever.
The weakness of the wave theory was that light waves, like sound waves, would need a medium for transmission. The existence of the hypothetical substance luminiferous aether proposed by Huygens in 1678 was cast into strong doubt in the late nineteenth century by the Michelson–Morley experiment.
Newton's corpuscular theory implied that light would travel faster in a denser medium, while the wave theory of Huygens and others implied the opposite. At that time, the speed of light could not be measured accurately enough to decide which theory was correct. The first to make a sufficiently accurate measurement was Léon Foucault, in 1850. His result supported the wave theory, and the classical particle theory was finally abandoned (only to partly re-emerge in the twentieth century as photons in quantum theory).
Electromagnetic theory
In 1845, Michael Faraday discovered that the plane of polarization of linearly polarized light is rotated when the light rays travel along the magnetic field direction in the presence of a transparent dielectric, an effect now known as Faraday rotation. This was the first evidence that light was related to electromagnetism. In 1846 he speculated that light might be some form of disturbance propagating along magnetic field lines. Faraday proposed in 1847 that light was a high-frequency electromagnetic vibration, which could propagate even in the absence of a medium such as the ether.
Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light. Maxwell discovered that self-propagating electromagnetic waves would travel through space at a constant speed, which happened to be equal to the previously measured speed of light. From this, Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in 1862 in On Physical Lines of Force. In 1873, he published A Treatise on Electricity and Magnetism, which contained a full mathematical description of the behavior of electric and magnetic fields, still known as Maxwell's equations. Soon after, Heinrich Hertz confirmed Maxwell's theory experimentally by generating and detecting radio waves in the laboratory and demonstrating that these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction, diffraction and interference. Maxwell's theory and Hertz's experiments led directly to the development of modern radio, radar, television, electromagnetic imaging and wireless communications.
In the quantum theory, photons are seen as wave packets of the waves described in the classical theory of Maxwell. The quantum theory was needed to explain effects even with visible light that Maxwell's classical theory could not (such as spectral lines).
Quantum theory
In 1900 Max Planck, attempting to explain black-body radiation, suggested that although light was a wave, these waves could gain or lose energy only in finite amounts related to their frequency. Planck called these "lumps" of light energy "quanta" (from a Latin word for "how much"). In 1905, Albert Einstein used the idea of light quanta to explain the photoelectric effect and suggested that these light quanta had a "real" existence. In 1923 Arthur Holly Compton showed that the wavelength shift seen when low-intensity X-rays scattered from electrons (so-called Compton scattering) could be explained by a particle theory of X-rays, but not a wave theory. In 1926 Gilbert N. Lewis named these light-quantum particles photons.
Eventually quantum mechanics came to picture light as (in some sense) both a particle and a wave, and (in another sense) as a phenomenon which is neither a particle nor a wave (which actually are macroscopic phenomena, such as baseballs or ocean waves). Instead, under some approximations light can be described sometimes with mathematics appropriate to one type of macroscopic metaphor (particles) and sometimes another macroscopic metaphor (waves).
As in the case for radio waves and the X-rays involved in Compton scattering, physicists have noted that electromagnetic radiation tends to behave more like a classical wave at lower frequencies, but more like a classical particle at higher frequencies, but never completely loses all qualities of one or the other. Visible light, which occupies a middle ground in frequency, can easily be shown in experiments to be describable using either a wave or particle model, or sometimes both.
In 1924–1925, Satyendra Nath Bose showed that light follows statistics different from those of classical particles. With Einstein, he generalized this result to a whole class of integer-spin particles, called bosons (after Bose), that follow Bose–Einstein statistics. The photon is a massless boson of spin 1.
In 1927, Paul Dirac quantized the electromagnetic field. Pascual Jordan and Vladimir Fock generalized this process to treat many-body systems as excitations of quantum fields, a process with the misnomer of second quantization. And at the end of the 1940s a full theory of quantum electrodynamics was developed using quantum fields based on the works of Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga.
Quantum optics
John R. Klauder, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light (see degree of coherence). This led to the introduction of the coherent state as a concept which addressed variations between laser light, thermal light, exotic squeezed states, etc., as it became understood that light cannot be fully described just by referring to the electromagnetic fields of the classical wave picture. In 1977, H. Jeff Kimble et al. demonstrated a single atom emitting one photon at a time, further compelling evidence that light consists of photons. Previously unknown quantum states of light with characteristics unlike classical states, such as squeezed light, were subsequently discovered.
Development of short and ultrashort laser pulses—created by Q switching and modelocking techniques—opened the way to the study of what became known as ultrafast processes. Applications for solid state research (e.g. Raman spectroscopy) were found, and mechanical forces of light on matter were studied. The latter led to levitating and positioning clouds of atoms or even small biological samples in an optical trap or optical tweezers by laser beam. This, along with Doppler cooling and Sisyphus cooling, was the crucial technology needed to achieve the celebrated Bose–Einstein condensation.
Other remarkable results are the demonstration of quantum entanglement, quantum teleportation, and quantum logic gates. The latter are of much interest in quantum information theory, a subject which partly emerged from quantum optics, partly from theoretical computer science.
Use for light on Earth
Sunlight provides the energy that green plants use to create sugars mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Some species of animals generate their own light, a process called bioluminescence. For example, fireflies use light to locate mates and vampire squid use it to hide themselves from prey.
| Physical sciences | Physics | null |