| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
8,360,749 | https://en.wikipedia.org/wiki/Cryptographic%20Modernization%20Program | The Cryptographic Modernization Program is a Department of Defense directed, NSA Information Assurance Directorate led effort to transform and modernize Information Assurance capabilities for the 21st century. It has three phases:
Replacement: all at-risk devices are to be replaced.
Modernization: integrate modular (programmable/embedded) crypto solutions.
Transformation: become compliant with GIG/net-centric requirements.
The CM is a joint initiative to upgrade the DoD crypto inventory. Of the 1.3 million cryptographic devices in the U.S. inventory, 73 percent will be replaced over the next 10 to 15 years by ongoing and planned C4ISR systems programs, Information Technology modernization initiatives and advanced weapons platforms.
All command and control, communications, computer, intelligence, surveillance, reconnaissance, information technology and weapons systems that rely upon cryptography for the provision of assured confidentiality, integrity, and authentication services will become a part of this long-term undertaking. The Cryptographic Modernization program is a tightly integrated partnership between the NSA, the military departments, operational commands, defense agencies, the Joint Staff, federal government entities and industry.
The program is a multibillion-dollar, multi-year undertaking that will transform cryptographic security capabilities for national security systems at all echelons and points of use. It will exploit new and emerging technologies, provide advanced enabling infrastructure capabilities, and at the same time, modernize legacy devices that are now operationally employed.
The program also directly supports the DoD vision of the Global Information Grid. The security configuration features enable new cryptosystems to provide secure information delivery anywhere on the global grid while using the grid itself for security configuration and provisioning—seamless integration.
Technology
Cryptography
Most modernized devices will include both Suite A (US only) and Suite B support. This allows for protection of sensitive government data as well as interoperability with coalition partners, such as NATO. The program includes the DOD's Key Management Initiative which is designed to replace cumbersome special purpose channels for distribution of cryptographic keys with a network-based approach by 2015.
Interoperability
The NSA has also led the effort to create standards for devices to prevent vendor lock-in:
High Assurance Internet Protocol Encryptor (HAIPE)
Link Encryptor Family (LEF)
Secure Communications Interoperability Protocol (SCIP)
Devices
The modernized devices being built usually include the ability to add or replace cryptographic algorithms via firmware updates as newer ones become available.
References
Military communications
National Security Agency | Cryptographic Modernization Program | Engineering | 504 |
24,551,288 | https://en.wikipedia.org/wiki/C22H28NO3 |
The molecular formula C22H28NO3 (C22H28NO3+, molar mass: 354.46 g/mol) may refer to:
Benzilone, an antimuscarinic
Bevonium, an antimuscarinic
Pipenzolate | C22H28NO3 | Chemistry | 73 |
72,523,469 | https://en.wikipedia.org/wiki/List%20of%20euasterid%20families | The euasterids or core asterids are a group of 69 interrelated families in 15 orders of flowering plants. They tend to have petals that are fused with each other and with the bases of the stamens, and just one integument (covering) around the embryo sac. The asterids as a whole (the euasterids plus two orders of basal asterids) represent almost a third of all flowering plant species.
The euasterids include trees, shrubs, vines and herbaceous perennials and annuals. Sweet potatoes are a tropical staple food. Basil, oregano, sage, rosemary, thyme and peppermint are all kitchen herbs in the mint family. Olives have been cultivated around the Mediterranean for food and oil for at least five thousand years. The daisy family includes lettuce, artichokes, Stevia, sunflowers and tarragon.
Glossary
From the glossary of botanical terms:
annual: a plant species that completes its life cycle within a single year or growing season
basal: attached close to the base (of a plant or an evolutionary tree diagram)
climber: a vine that leans on, twines around or clings to other plants for vertical support
deciduous: falling seasonally, as with bark, leaves, or petals
glandular hair: a hair tipped with a secretory structure
herbaceous: not woody; usually green and soft in texture
mangrove: any shrub or small tree growing in brackish or salt water
perennial: not an annual or biennial
succulent (adjective): juicy or fleshy
unisexual: of one sex; bearing only male or only female reproductive organs
woody: hard and lignified; not herbaceous
The APG IV system is the fourth in a series of plant taxonomies from the Angiosperm Phylogeny Group. In this system, the euasterids are divided into the lamiids and the campanulids. The order Icacinales is basal within the lamiids.
Six euasterid orders have more than two families: Apiales, Aquifoliales, Asterales, Gentianales, Lamiales and Solanales. Apiales and Asterales are exceptionally diverse, with 2342 genera between them. Aquifoliales is basal within the campanulids. Gentianales species have pitted wood and opposite leaves that are joined across the stem. In Lamiales, plants are mostly herbaceous with opposite leaves, and the five-lobed flowers have approximate mirror-image symmetry. Solanales species usually have sepals that continue to grow with age, even when the plant is fruiting.
Families
See also
List of plant family names with etymologies
Notes
Citations
References
Systematic
Euasterid
Euasterid families
euasterid families | List of euasterid families | Biology | 591 |
2,657,367 | https://en.wikipedia.org/wiki/Awning | An awning or overhang is a secondary covering attached to the exterior wall of a building. It is typically composed of canvas woven of acrylic, cotton or polyester yarn, or of vinyl laminated to polyester fabric, stretched tightly over a light structure of aluminium, iron, steel or possibly wood; transparent material is sometimes used to cover solar thermal panels in the summer while allowing as much light as possible through in the winter. The configuration of this structure is something of a truss, space frame or planar frame. Awnings are also often constructed with an aluminium understructure and aluminium sheeting; these aluminium awnings are used where a fabric awning is not practical, such as where snow and wind loads may be a factor.
The location of an awning on a building may be above a window, a door, or above the area along a sidewalk. With the addition of columns an awning becomes a canopy, which is able to extend further from a building, as in the case of an entrance to a hotel. Restaurants often use awnings broad enough to cover substantial outdoor area for outdoor dining, parties, or reception. In commercial buildings, an awning is often painted with information as to the name, business, and address, thus acting as a sign or billboard as well as providing shade, breaking strong winds, and protecting from rain or snow. In areas with wintry weather, most awnings do not have to be taken down at the end of the summer – they can remain retracted against the building all winter long, or be designed and built for those conditions.
History
Ancient world
Awnings were first used by the ancient Egyptian and Syrian civilizations, where they are described as "woven mats" that shaded market stalls and homes. The Roman poet Lucretius, writing around 50 BC, described a "Linen-awning, stretched, over mighty theatres" that "gives forth at times, a cracking roar, when much 'tis beaten about, betwixt the poles and cross-beams".
Among the most significant awnings in the ancient world was the velarium, the massive complex of retractable shade structures that could be deployed above the seating areas of the Roman Colosseum. Made of linen shadecloths, timber framing, iron sockets and rope, the system could effectively shade about one-third of the arena and seating; another third could be shaded by the high surrounding walls, providing a majority of seats some shade on a blinding afternoon. It is believed that sailors, with their background in sailmaking and rigging were employed to build, maintain and operate the velarium.
Early 19th century
Awnings became common during the first half of the 19th century. At that time they consisted of timber or cast iron posts set along the sidewalk edge and linked by a front cross bar. To lend support to larger installations, angled rafters linked the front cross bar to the building facade. The upper end of the canvas was connected to the facade with nails, with grommets and hooks, or by lacing the canvas to a head rod bolted to the facade. The other (projecting) end of the canvas was draped over or laced to a front bar with the edge often hanging down to form a valance. On ornate examples, metal posts were adorned with filigree and the tops decorated with spear ends, balls or other embellishments. On overcast days or when rain did not threaten, the covering was often rolled up against the building facade; during the winter months proper maintenance called for the removal and storage of awnings. Photographs from the mid-19th century often show the bare framework, suggesting that the covering was extended only when necessary. Canvas duck was the predominant awning fabric, a strong, closely woven cotton cloth used for centuries to make tents and sails.
Awnings became a common feature in the years after the American Civil War. Iron plumbing pipe, which was quickly adapted for awning frames, became widely available and affordable as a result of mid-century industrialization. It was a natural material for awning frames, easily bent and threaded together to make a range of different shapes and sizes. At the same time the advent of the steamship forced canvas mills and sail makers to search for new markets. An awning industry developed offering an array of frame and fabric options adaptable to both storefronts and windows.
Late 19th century
In the second half of the 19th century, manufactured operable awnings grew in popularity. Previously, most awnings had fixed frames; the primary way to retract the covering was to roll it up the rafters by hand. Operable systems for both storefront and window awnings had extension arms that were hinged where they joined the facade. The arms were lowered to project the awning or raised to retract it using simple rope and pulley arrangements. Because the canvas remained attached to the framework, retractable awnings allowed a more flexible approach to shading: shopkeepers and owners could incrementally adjust the amount of awning coverage depending upon the weather conditions. When the sun came out from behind clouds, the awning could be deployed with ease. In case of sudden storms, owners could quickly retract the awning against the building wall, where it was protected from wind gusts.
Despite their advantages, early operable awnings had drawbacks; when retracted, their cloth coverings often bunched up against the building facade. This left part of the fabric exposed to inclement weather, and deterioration was often accelerated by moisture pooling in the folds of fabric. If poorly designed or badly placed, the retracted fabric could obscure part of the window or door opening, and even if out of the way an imperfectly folded awning presented an unkempt appearance. Modern materials and designs have eliminated all of these issues.
Benefits
Retractable awnings let owners control sun and shelter on their own terms. When passing showers threaten, or when the sun gets hot, the owner or a home automation system unrolls the awning for near-instant protection and shade. Lab tests show that it can be markedly cooler under an awning's canopy. Because awnings prevent the sun from shining through windows and sliding glass doors, they can keep interior temperatures cooler as well, which saves on air-conditioning costs. They can help prevent carpets and furniture from fading in sunlight, and they provide a sheltered place for children and pets to play, shielded from direct sun.
Some of today's awnings also offer accessories that can greatly increase the versatility and usefulness owners get from their decks or patios. A screen room add-on can easily turn an awning into a virtually bug-free outdoor room, side screening cuts down on wind and mist coming under the sides of awnings, and patio lights let people enjoy their decks evenings and nights.
It also can be used to cover the thermal solar panels in the summer.
Types
Actuation
Today's awnings come in two basic types: manually operated models opened by hand, and motorized models driven by electricity. Each offers its own advantages. Manually operated awnings are inexpensive, adapt easily to almost any deck or patio, and have support arms that can be angled back against the house or set vertically on the deck or patio floor. These arms provide extra support and stability, which some owners prefer in windy areas, and increase the awning's versatility by allowing certain accessories to be attached.
Motorized awnings have no vertical supports. Instead, they have retracting lateral arms, creating an unobstructed shaded area. These awnings are operated by an electric motor, generally hidden inside the roller bar of the awning. The arms open and close the awning at the touch of a wireless remote control or a wall-mounted switch.
Modern awnings may be constructed with covers of various types of fabrics, aluminium, corrugated fibreglass, corrugated polycarbonate or other materials. High winds can cause damage to an extended awning, and newer designs incorporate a wind sensor for automatic retraction in certain conditions.
Wind tolerance and construction
Modern awnings are rated for wind tolerance based on width, length, number of supporting arms, and material. Modern awning design incorporates urethane compression joints, steel support structures, and wind sensors. Such designs are currently in use at the White House, Grand Central Station, and the Kremlin.
Aluminium awnings
Aluminium awnings have long been popular in residential applications throughout the world. They are available in many colors and are usually finished with baked-on enamel paint. Benefits include cooler temperatures inside the home, shade for the patio, and longer life for furniture and window treatments. Perhaps their most notable feature is a usable life of well over 40 years.
Some aluminum awnings are designed to be folded down and fastened to protect windows in case of storms such as hurricanes.
Retractable awnings
Retractable awnings are now becoming very popular with homeowners in the United States. They have been popular in Europe for many years, due to higher energy costs and lack of air conditioning.
Some retractable awnings can include photovoltaic cells or rollable solar panels to generate electricity.
Retractable awnings can include the following types:
Retractable patio cover systems
Retractable patio cover systems are the latest entry into the retractable market. Most of these systems are waterproof, as opposed to the water-resistant lateral arm awnings, and therefore allow no water penetration through the fabric "roof" section. Depending on model and size, these systems withstand wind loads up to Beaufort 10. Another advantage of retractable patio cover systems is that they can be enclosed on the front and sides using solar shade screens, creating an "outdoor room" that can be heated in the winter and air-conditioned in the summer.
Retractable lateral arm awnings or folding arm awnings
These are a modern version of the old storefront crank-up awnings of the last century. Two, three or four tension arms (depending on width) and a top tube are supported by a torsion bar. With a traditional "open style" folding arm awning, the torsion bar (also known as a square bar) fits into wall-, soffit-, fascia- or roof-mounted brackets that spread the load into the wall or roof truss. The newer "full cassette" style of folding arm awning does away with the torsion bar and uses an aluminium extrusion that interlocks with brackets, which in turn spread the cantilevered force imposed by the awning into the structure it is fitted to.
Hand-cranked awnings are still available, but motorized awnings are now most common due to advances in technology, manufacturing and reliability over the last few decades. The motor sits inside the roller tube that the fabric rolls around and is therefore not visible. Many motors now have a built-in receiver and are operated by remote control, smartphone, tablet or home automation.
Lateral arm awnings are also known as folding arm, deck or patio awnings, as they can extend / project as far as and when several awnings are coupled together, can be as wide as or more – thus covering a large outdoor space. Normally a single folding arm awning can only span in a single system due to difficulties transporting, storing and powder coating extrusion greater than this size.
The most common fabric choice for folding arm awning is solution dyed acrylic fabric that comes in a variety of styles, colours, patterns as well as performance grades for water repellence and fire retardancy. Solution dyed acrylic fabric is the most suitable fabric for use in these awnings due to dimensional properties, ease of manufacturing, weight (versus woven mesh) & high levels of filtering of and resistance against UV.
Retractable side or drop arm awnings
Commonly used to shade a window, these have a roller tube at the top, spring-loaded side arms, and a motor, crank or tape-pull operator. Awnings with sides are commonly known as traditional-style awnings, having been used since the early 19th century with cotton canvas fabric. Canvas was replaced in the 1960s by acrylic materials, and by polyester in more recent times, driven by a focus on the environment. Traditional-style awnings are appropriate for historical buildings and remain popular today, using a more weather-resistant fabric and a rope-and-pulley system for retraction. Awnings without sides do not provide as much sun protection as those with sides, and they come in many different styles: drop arm awnings have roller tubes at the top and are available with motors and wind sensors for automatic retraction, while spear awnings are made with wrought iron frames and can be retracted with a rope-and-pulley system but are not available with motors.
Awnings with sides provide the best sun protection for east- and west-facing windows; north- and south-facing windows can be protected from the sun with awnings without sides. By providing shade, awnings keep the house cooler, and they also create shaded areas around porches, decks and patios.
Portable, pop-up canopies
A portable pop-up canopy or tent provides a cost-effective temporary solution to people who want to enjoy shade. The portable designs offer versatility to take the unit to social events. The frame usually incorporates an accordion-style truss which folds up compactly.
Retractable solar shade screens
Shade screens utilize acrylic canvas or a mesh fabric, which allows some view-through while blocking the sun's rays. The roller at the top may be hand-cranked or motorized. The fabric is gravity-fed, with a weighted bottom rail pulling the fabric down between guide rails or guy wires. Exterior shades are much more effective at blocking heat than interior shades, since they block the heat before it enters the glass. This style of framed screen is typically done by professional installers, because of the specialized frames and tools required. A recent advancement is frame-less shade screens, which allows a "DIY-er" to install their own exterior shades. Solar shade screens can also be installed at the end of awnings to provide horizontal shade during early morning or late afternoon sun positions.
Retractable solar window awnings
Retractable solar window awnings are made from fabric. They are resistant to hail and harsh winter weather. Many solar window awnings can be programmed to open and close automatically in response to the sun.
Shade sails
Shade sails provide semi-horizontal shading. They can be demounted with some difficulty and are usually left in place year round. Retractable versions also exist, with greater cost but also greater utility, as they can be retracted to admit winter sun if required.
Classification numbers
Construction Specifications Institute (CSI) Division 10 MasterFormat 2004 Edition:
10 73 13 – Awnings
10 73 16 – Canopies
10 71 13 – Exterior Sun Control Devices
10 71 13.43 – Fixed Sun Screen
10 73 00 – Protective Covers (Generic)
CSI MasterFormat 1995 Edition:
10530 – Protective Covers, Awnings & Canopies
See also
Brise soleil
Canopy (architecture)
Curtain
Overhang (architecture)
Rope
Solar panel
Tent
Umbrella
Verandah
Window blind
References
Architectural elements
Low-energy building
Windows
Shading (architecture)
Passive cooling | Awning | Technology,Engineering | 3,207 |
2,671,762 | https://en.wikipedia.org/wiki/Helek | The helek, also spelled chelek (Hebrew חלק, meaning "portion", plural halakim חלקים) is a unit of time used in the calculation of the Molad. Other spellings used are chelak and chelek, both with plural chalakim.
The hour is divided into 1080 halakim, so a helek is 3⅓ seconds, or 1/18 of a minute. The helek derives from a small Babylonian time period called a she, meaning "barleycorn", itself equal to 1/72 of a Babylonian time degree (1° of celestial rotation).
360 degrees × 72 shes per degree / 24 hours = 1080 shes per hour.
The Hebrew calendar defines its mean month to be exactly 29 days, 12 hours and 793 halakim, which is 29 days, 12 hours, 44 minutes and 3⅓ seconds. It defines its mean year as exactly 235/19 times this amount, or 365 days, 5 hours, 55 minutes, and 25 and 25/57 seconds (approximately 365.2468222 days).
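These definitions can be checked with exact rational arithmetic. The following Python sketch reproduces the length of a helek, the molad month (the well-known fraction 765433/25920 days), and the mean year:

```python
from fractions import Fraction

# One hour is divided into 1080 halakim
helek_seconds = Fraction(1, 1080) * 3600          # = 10/3 s = 3 1/3 seconds

# Mean (molad) month: 29 days, 12 hours, 793 halakim, expressed in days
month_days = 29 + Fraction(12, 24) + Fraction(793, 1080 * 24)
# month_days == 765433/25920 days

# Mean year: exactly 235/19 mean months (12 ordinary + 7 leap months per 19 years)
year_days = month_days * Fraction(235, 19)

print(helek_seconds)        # 10/3
print(month_days)           # 765433/25920
print(float(year_days))     # approximately 365.2468...
```

The exact excess of the mean year over 365 days works out to 1215550/57 seconds, i.e. 5 hours, 55 minutes, 25 and 25/57 seconds, matching the figure quoted above.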
Bibliography
Hebrew calendar
Units of time | Helek | Physics,Mathematics | 234 |
40,735,838 | https://en.wikipedia.org/wiki/Vincoline | Vincoline is an alkaloid isolated from Catharanthus roseus. In a mouse model, it has been found to stimulate insulin secretion.
References
Tryptamine alkaloids
Indolizidines | Vincoline | Chemistry | 46 |
46,300,006 | https://en.wikipedia.org/wiki/PSR%20J2124%E2%88%923358 | PSR J2124−3358 is a millisecond pulsar located in the constellation Microscopium. It is one of the brightest examples of its type in the X-ray spectrum. Discovered in 1997, no optical component was observed in 2003.
References
Microscopium
Pulsars | PSR J2124−3358 | Astronomy | 65 |
40,471,726 | https://en.wikipedia.org/wiki/Intel%20ADX | Intel ADX (Multi-Precision Add-Carry Instruction Extensions) is Intel's arbitrary-precision arithmetic extension to the x86 instruction set architecture (ISA). Intel ADX was first supported in the Broadwell microarchitecture.
The instruction set extension contains just two new instructions, ADCX and ADOX, though MULX from BMI2 is also considered part of the large-integer arithmetic support.
Both instructions are more efficient variants of the existing ADC instruction: ADCX updates only the carry flag (CF) and ADOX only the overflow flag (OF), whereas ADC sets both, and as a legacy x86 instruction also modifies the rest of the status flags. Because each new instruction affects a different flag, two chains of additions with carry can be computed in parallel.
AMD added support in their processors for these instructions starting with Ryzen.
References
External links
Intel
X86 instructions | Intel ADX | Technology | 186 |
616,293 | https://en.wikipedia.org/wiki/Soft%20matter | Soft matter or soft condensed matter is a type of matter that can be deformed or structurally altered by thermal or mechanical stress which is of similar magnitude to thermal fluctuations.
The science of soft matter is a subfield of condensed matter physics. Soft materials include liquids, colloids, polymers, foams, gels, granular materials, liquid crystals, flesh, and a number of biomaterials. These materials share an important common feature in that predominant physical behaviors occur at an energy scale comparable with room temperature thermal energy (of order of kT), and that entropy is considered the dominant factor. At these temperatures, quantum aspects are generally unimportant. When soft materials interact favorably with surfaces, they become squashed without an external compressive force.
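To make the energy scale concrete, the following Python sketch compares kT at room temperature with a typical covalent bond; the C–C bond energy of roughly 3.6 eV is an illustrative assumption, not a value from the text:

```python
# Thermal energy scale governing soft matter: k_B * T at room temperature
k_B = 1.380649e-23       # Boltzmann constant, J/K (exact in SI since 2019)
eV  = 1.602176634e-19    # joules per electronvolt (exact)

T = 298.0                # room temperature, K
kT_J  = k_B * T          # about 4.1e-21 J
kT_eV = kT_J / eV        # about 0.026 eV

print(f"kT at {T} K: {kT_J:.3e} J = {kT_eV:.4f} eV")
# An assumed C-C covalent bond (~3.6 eV) is roughly 140x kT, so hard matter
# is rigid at room temperature, while soft-matter structures, held together
# by interactions of order kT, are continually reshaped by thermal motion.
print(f"C-C bond / kT = {3.6 / kT_eV:.0f}")
```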
Pierre-Gilles de Gennes, who has been called the "founding father of soft matter," received the Nobel Prize in Physics in 1991 for discovering that methods developed for studying order phenomena in simple systems can be generalized to the more complex cases found in soft matter, in particular, to the behaviors of liquid crystals and polymers.
History
The current understanding of soft matter grew from Albert Einstein's work on Brownian motion, understanding that a particle suspended in a fluid must have a similar thermal energy to the fluid itself (of order of kT). This work built on established research into systems that would now be considered colloids.
The crystalline optical properties of liquid crystals and their ability to flow were first described by Friedrich Reinitzer in 1888, and further characterized by Otto Lehmann in 1889. The experimental setup that Lehmann used to investigate the two melting points of cholesteryl benzoate was still in use in liquid crystal research as of 2019.
In 1920, Hermann Staudinger, recipient of the 1953 Nobel Prize in Chemistry, was the first person to suggest that polymers are formed through covalent bonds that link smaller molecules together. The idea of a macromolecule was unheard of at the time, with the scientific consensus being that the recorded high molecular weights of compounds like natural rubber were instead due to particle aggregation.
The use of hydrogel in the biomedical field was pioneered in 1960 by Drahoslav Lím and Otto Wichterle. Together, they postulated that the chemical stability, ease of deformation, and permeability of certain polymer networks in aqueous environments would have a significant impact on medicine, and were the inventors of the soft contact lens.
These seemingly separate fields were dramatically influenced and brought together by Pierre-Gilles de Gennes. His work across different forms of soft matter was key to understanding its universality: material properties depend not on the chemistry of the underlying structure, but on the mesoscopic structures that chemistry creates. He extended the understanding of phase changes in liquid crystals, introduced the idea of reptation to describe the relaxation of polymer systems, and successfully mapped polymer behavior onto the Ising model.
Distinctive physics
Interesting behaviors arise from soft matter in ways that cannot be predicted, or are difficult to predict, directly from its atomic or molecular constituents. Materials termed soft matter exhibit this property due to a shared propensity of these materials to self-organize into mesoscopic physical structures. The assembly of the mesoscale structures that form the macroscale material is governed by low energies, and these low energy associations allow for the thermal and mechanical deformation of the material. By way of contrast, in hard condensed matter physics it is often possible to predict the overall behavior of a material because the molecules are organized into a crystalline lattice with no changes in the pattern at any mesoscopic scale. Unlike hard materials, where only small distortions occur from thermal or mechanical agitation, soft matter can undergo local rearrangements of the microscopic building blocks.
A defining characteristic of soft matter is the mesoscopic scale of physical structures. The structures are much larger than the microscopic scale (the arrangement of atoms and molecules), and yet are much smaller than the macroscopic (overall) scale of the material. The properties and interactions of these mesoscopic structures may determine the macroscopic behavior of the material. The large number of constituents forming these mesoscopic structures, and the large degrees of freedom this causes, results in a general disorder between the large-scale structures. This disorder leads to the loss of long-range order that is characteristic of hard matter.
For example, the turbulent vortices that naturally occur within a flowing liquid are much smaller than the overall quantity of liquid and yet much larger than its individual molecules, and the emergence of these vortices controls the overall flowing behavior of the material. Also, the bubbles that compose a foam are mesoscopic because they individually consist of a vast number of molecules, and yet the foam itself consists of a great number of these bubbles, and the overall mechanical stiffness of the foam emerges from the combined interactions of the bubbles.
Typical bond energies in soft matter structures are of similar scale to thermal energies. Therefore the structures are constantly affected by thermal fluctuations and undergo Brownian motion. The ease of deformation and influence of low energy interactions regularly result in slow dynamics of the mesoscopic structures which allows some systems to remain out of equilibrium in metastable states. This characteristic can allow for recovery of initial state through an external stimulus, which is often exploited in research.
Self-assembly is an inherent characteristic of soft matter systems. The characteristic complex behavior and hierarchical structures arise spontaneously as a system evolves towards equilibrium. Self-assembly can be classified as static when the resulting structure is due to a free energy minimum, or dynamic when the system is caught in a metastable state. Dynamic self-assembly can be utilized in the functional design of soft materials with these metastable states through kinetic trapping.
Soft materials often exhibit both elasticity and viscous responses to external stimuli such as shear induced flow or phase transitions. However, excessive external stimuli often result in nonlinear responses. Soft matter becomes highly deformed before crack propagation, which differs significantly from the general fracture mechanics formulation. Rheology, the study of deformation under stress, is often used to investigate the bulk properties of soft matter.
Classes of soft matter
Soft matter consists of a diverse range of interrelated systems and can be broadly categorized into certain classes. These classes are by no means distinct, as often there are overlaps between two or more groups.
Polymers
Polymers are large molecules composed of repeating subunits whose characteristics are governed by their environment and composition. Polymers encompass synthetic plastics, natural fibers and rubbers, and biological proteins. Polymer research finds applications in nanotechnology, from materials science and drug delivery to protein crystallization.
Foams
Foams consist of a liquid or solid through which a gas has been dispersed to form cavities. This structure imparts a large surface-area-to-volume ratio on the system. Foams have found applications in insulation and textiles, and are undergoing active research in the biomedical field of drug delivery and tissue engineering. Foams are also used in automotive for water and dust sealing and noise reduction.
Gels
Gels consist of non-solvent-soluble 3D polymer scaffolds, which are covalently or physically cross-linked, that have a high solvent content. Research into functionalizing gels that are sensitive to mechanical and thermal stress, as well as to solvent choice, has given rise to diverse structures with characteristics such as shape memory or the ability to bind guest molecules selectively and reversibly.
Colloids
Colloids are non-soluble particles suspended in a medium, such as proteins in an aqueous solution. Research into colloids is primarily focused on understanding the organization of matter; colloidal structures, being large relative to individual molecules, can be readily observed.
Liquid crystals
Liquid crystals can consist of proteins, small molecules, or polymers, that can be manipulated to form cohesive order in a specific direction. They exhibit liquid-like behavior in that they can flow, yet they can obtain close-to-crystal alignment. One feature of liquid crystals is their ability to spontaneously break symmetry. Liquid crystals have found significant applications in optical devices such as liquid-crystal displays (LCD).
Biological membranes
Biological membranes consist of individual phospholipid molecules that have self-assembled into a bilayer structure due to non-covalent interactions. The localized, low energy associated with the forming of the membrane allows for the elastic deformation of the large-scale structure.
Experimental characterization
Due to the importance of mesoscale structures in the overarching properties of soft matter, experimental work is primarily focused on the bulk properties of the materials. Rheology is often used to investigate the physical changes of the material under stress. Biological systems, such as protein crystallization, are often investigated through X-ray and neutron crystallography, while nuclear magnetic resonance spectroscopy can be used in understanding the average structure and lipid mobility of membranes.
Scattering
Scattering techniques, such as wide-angle X-ray scattering, small-angle X-ray scattering, neutron scattering, and dynamic light scattering can also be used for materials when probing for the average properties of the constituents. These methods can determine particle-size distribution, shape, crystallinity and diffusion of the constituents in the system. There are limitations in the application of scattering techniques to some systems, as they can be more suited to isotropic and dilute samples.
Computational
Computational methods are often employed to model and understand soft matter systems, as they have the ability to strictly control the composition and environment of the structures being investigated, as well as span from microscopic to macroscopic length scales. Computational methods are limited, however, by their suitability to the system and must be regularly validated against experimental results to ensure accuracy. The use of informatics in the prediction of soft matter properties is also a growing field in computer science thanks to the large amount of data available for soft matter systems.
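As a toy illustration (not a method from the text) of the microscopic stochastic models that such computational work builds on, a one-dimensional unbiased random walk, the discrete analogue of Brownian motion, can be sketched as follows; the step size, step count, and function name are all illustrative choices:

```python
# Toy illustration: a one-dimensional unbiased random walk, the kind of
# microscopic stochastic model that coarse-grained soft matter
# simulations build on. Step size and count are arbitrary.
import random

def brownian_walk(steps, step_size=1.0, seed=0):
    """Return the positions of a 1D random walker starting at the origin."""
    rng = random.Random(seed)
    positions = [0.0]
    for _ in range(steps):
        positions.append(positions[-1] + rng.choice((-step_size, step_size)))
    return positions

trajectory = brownian_walk(1000)
print(len(trajectory), trajectory[0])  # 1001 0.0
```

Seeding the generator makes the run reproducible, which is one way such sketches are validated against analytical expectations (for diffusion, the mean squared displacement grows roughly linearly with time).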
Microscopy
Optical microscopy can be used in the study of colloidal systems, but more advanced methods like transmission electron microscopy (TEM) and atomic force microscopy (AFM) are often used to characterize forms of soft matter due to their applicability to mapping systems at the nanoscale. These imaging techniques are not universally appropriate to all classes of soft matter and some systems may be more suited to one kind of analysis than another. For example, there are limited applications in imaging hydrogels with TEM due to the processes required for imaging. However, fluorescence microscopy can be readily applied. Liquid crystals are often probed using polarized light microscopy to determine the ordering of the material under various conditions, such as temperature or electric field.
Applications
Soft materials are important in a wide range of technological applications, and each soft material can often be associated with multiple disciplines. Liquid crystals, for example, were originally discovered in the biological sciences when the botanist and chemist Friedrich Reinitzer was investigating cholesterols. Now, however, liquid crystals have also found applications as liquid-crystal displays, liquid crystal tunable filters, and liquid crystal thermometers. Active liquid crystals are another example of soft materials, where the constituent elements in liquid crystals can self-propel.
Polymers have found diverse applications, from the natural rubber found in latex gloves to the vulcanized rubber found in tires. Polymers encompass a large range of soft matter, with applications in material science. An example of this is hydrogel. With the ability to undergo shear thinning, hydrogels are well suited for the development of 3D printing. Due to their stimuli responsive behavior, 3D printing of hydrogels has found applications in a diverse range of fields, such as soft robotics, tissue engineering, and flexible electronics. Polymers also encompass biological molecules such as proteins, where research insights from soft matter research have been applied to better understand topics like protein crystallization.
Foams can naturally occur, such as the head on a beer, or be created intentionally, such as by fire extinguishers. The physical properties available to foams have resulted in applications which can be based on their viscosity, with more rigid and self-supporting forms of foams being used as insulation or cushions, and foams that exhibit the ability to flow being used in the cosmetic industry as shampoos or makeup. Foams have also found biomedical applications in tissue engineering as scaffolds and biosensors.
Historically the problems considered in the early days of soft matter science were those pertaining to the biological sciences. As such, an important application of soft matter research is biophysics, with a major goal of the discipline being the reduction of the field of cell biology to the concepts of soft matter physics. Applications of soft matter characteristics are used to understand biologically relevant topics such as membrane mobility, as well as the rheology of blood.
See also
Biological membranes
Biomaterials
Colloids
Complex fluids
Foams
Fracture of soft materials
Gels
Granular materials
Liquids
Liquid crystals
Microemulsions
Polymers
Protein dynamics
Protein structure
Surfactants
Active matter
Roughness
References
I. Hamley, Introduction to Soft Matter (2nd edition), J. Wiley, Chichester (2000).
R. A. L. Jones, Soft Condensed Matter, Oxford University Press, Oxford (2002).
T. A. Witten (with P. A. Pincus), Structured Fluids: Polymers, Colloids, Surfactants, Oxford (2004).
M. Kleman and O. D. Lavrentovich, Soft Matter Physics: An Introduction, Springer (2003).
M. Mitov, Sensitive Matter: Foams, Gels, Liquid Crystals and Other Miracles, Harvard University Press (2012).
J. N. Israelachvili, Intermolecular and Surface Forces, Academic Press (2010).
A. V. Zvelindovsky (editor), Nanostructured Soft Matter - Experiment, Theory, Simulation and Perspectives, Springer/Dordrecht (2007), .
M. Daoud, C.E. Williams (editors), Soft Matter Physics, Springer Verlag, Berlin (1999).
Gerald H. Ristow, Pattern Formation in Granular Materials, Springer Tracts in Modern Physics, v. 161. Springer, Berlin (2000). .
de Gennes, Pierre-Gilles, Soft Matter, Nobel Lecture, December 9, 1991
S. A. Safran, Statistical thermodynamics of surfaces, interfaces and membranes, Westview Press (2003)
R.G. Larson, "The Structure and Rheology of Complex Fluids," Oxford University Press (1999)
Gang, Oleg, "Soft Matter and Biomaterials on the Nanoscale: The WSPC Reference on Functional Nanomaterials — Part I (In 4 Volumes)", World Scientific Publisher (2020)
External links
Pierre-Gilles de Gennes' Nobel Lecture
American Physical Society Topical Group on Soft Matter (GSOFT)
Softbites - a blog run by graduate students and postdocs that makes soft matter more accessible through bite-sized posts that summarize current and classic soft matter research
Softmatterworld.org
Softmatterresources.com
SklogWiki - a wiki dedicated to simple liquids, complex fluids, and soft condensed matter.
Harvard School of Engineering and Applied Sciences Soft Matter Wiki - organizes, reviews, and summarizes academic papers on soft matter.
Soft Matter Engineering - A group dedicated to Soft Matter Engineering at the University of Florida
Google Scholar page on soft matter
Condensed matter physics | Soft matter | Physics,Chemistry,Materials_science,Engineering | 3,150 |
73,563,587 | https://en.wikipedia.org/wiki/Veratric%20acid | Veratric acid, also known as 3,4-dimethoxybenzoic acid, is a benzoic acid. It is a plant metabolite found in species such as Hypericum laricifolium, Artemisia sacrorum, and Zeyheria montana.
Uses
Medical research
A 2023 study at SRM Institute of Science and Technology suggests that veratric acid has apoptotic and antiproliferative effects against triple negative breast cancer cells. These effects were substantially increased when polydopamine nanoparticles were used as a sustained release drug carrier.
References
Benzoic acids
Plant metabolism
Botany
Phytochemicals | Veratric acid | Chemistry,Biology | 134 |
15,129,272 | https://en.wikipedia.org/wiki/Window%20operator | In modal logic, the window operator is a modal operator with the following semantic definition:
for a Kripke model and . Informally, it says that w "sees" every φ-world (or every φ-world is seen by w). This operator is not definable in the basic modal logic (i.e. some propositional non-modal language together with a single primitive "necessity" (universal) operator, often denoted by '', or its existential dual, often denoted by ''). Notice that its truth condition is the converse of the truth condition for the standard "necessity" operator.
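On a finite model, this truth condition can be checked directly alongside the standard necessity operator; the following is a minimal Python sketch, in which the representation of the model and the example worlds are illustrative choices:

```python
# Minimal sketch: evaluating the window operator and standard necessity
# on a finite Kripke model (W, R, V). The model below is illustrative.

def box(model, w, p):
    """Standard necessity: p holds at every world R-accessible from w."""
    W, R, V = model
    return all(p in V[v] for v in W if (w, v) in R)

def window(model, w, p):
    """Window operator: w sees every p-world (the converse condition)."""
    W, R, V = model
    return all((w, v) in R for v in W if p in V[v])

# Example: worlds 0..2; world 0 sees 1 and 2; p holds exactly at 1 and 2.
W = {0, 1, 2}
R = {(0, 1), (0, 2)}
V = {0: set(), 1: {"p"}, 2: {"p"}}
model = (W, R, V)

print(window(model, 0, "p"))  # True: every p-world is seen by world 0
print(box(model, 0, "p"))     # True: every world seen by 0 is a p-world
print(window(model, 1, "p"))  # False: world 1 does not see p-world 2
```

The last line shows the operators coming apart: at world 1 necessity holds vacuously (no successors), while the window condition fails.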
For references to some of its applications, see the References section.
References
Logic
Modal logic | Window operator | Mathematics | 150 |
22,648,939 | https://en.wikipedia.org/wiki/Sanmon | A or is the most important mon of a Japanese Zen Buddhist temple, and is part of the Zen shichidō garan, the group of buildings that forms the heart of a Zen Buddhist temple. It can be often found in temples of other denominations too. Most sanmon are 2- or 3-bay nijūmon (a type of two-storied gate), but the name by itself does not imply any specific architecture.
Position, function and structure
Its importance notwithstanding, the sanmon is not the first gate of the temple; in fact, it usually stands between the sōmon (outer gate) and the butsuden (lit. "Hall of Buddha", i.e. the main hall). It used to be connected to a portico-like structure called , which however gradually disappeared during the Muromachi period, being replaced by the sanrō, a small building present on both sides of the gate and containing a stairway to the gate's second story. (Both sanrō are clearly visible in Tōfuku-ji's photo above.)
The sanmon's size is an indicator of a Zen temple's status. Structurally, the sanmon of a first rank temple such as Nanzen-ji in Kyoto is a two-storied, 5x2-bay, three-entrance gate (see photo below). Its three gates are called , and and symbolize the three gates to enlightenment, or satori. Entering, pilgrims can symbolically free themselves from the three passions of , , and . The fact that the gate has entrances but no doors, and cannot therefore be closed, emphasizes its purely symbolic function as a limit between the sacred and the profane.
A temple of the second rank will have a two-storied, 3x2-bay, single entrance gate (see photo below). The second story of a first or second rank temple usually contains statues of Shakyamuni or of goddess Kannon, and of the 16 Rakan, and hosts periodical religious ceremonies. The side bays of sanmon of the first two ranks may also house statues of the Niō, wardens who are in charge of repelling evil.
A third rank temple will have a single-storied, 1x2-bay, single entrance gate.
Three ranks
Second story
Some images of the second story of Kōmyō-ji's sanmon in Kamakura, Kanagawa Prefecture. It is a high rank Jōdo sect sanmon, the largest of the Kantō region.
Examples
Case 1
Chion-in's sanmon (Kyoto) – The most important sanmon in Japan
Nanzen-ji's sanmon (Kyoto)
Kuonji's sanmon (Minobu)
Case 2
Tōdai-ji's nandaimon (Nara)
Hōryū-ji's nandaimon (Ikaruga)
Tōshō-gū's yomeimon (Nikkō)
Notes
References
"Sanmon" from the Japanese Art Net User System (JAANUS) online dictionary accessed on May 2, 2009
Iwanami Nihonshi Jiten (岩波日本史辞典), CD-Rom Version. Iwanami Shoten, 1999-2001.
Gates in Japan
Japanese Buddhist architecture
Building types
Buildings and structures by type
Urban studies and planning terminology | Sanmon | Engineering | 675 |
7,922,286 | https://en.wikipedia.org/wiki/William%20Merriam%20Burton | William Merriam Burton (November 17, 1865 – December 29, 1954) was an American chemist who developed a widely used thermal cracking process for crude oil.
Burton was born in Cleveland, Ohio. In 1886, he received a Bachelor of Science degree at Western Reserve University. He earned a PhD at Johns Hopkins University in 1889.
Burton initially worked for the Standard Oil refinery at Whiting, Indiana. He became president of Standard Oil from 1918 to 1927, when he retired.
The process of thermal cracking invented by Burton, which was patented on January 7, 1913, doubled the yield of gasoline that can be extracted from crude oil.
The first thermal cracking method, the Shukhov cracking process, was invented by the Russian engineer Vladimir Shukhov (1853–1939) in the Russian Empire (Patent No. 12926, November 27, 1891).
Burton died in Miami, Florida.
See also
Cracking (chemistry)
Burton process
Shukhov cracking process
References
External links
Information on cracking in oil refining
1865 births
1954 deaths
American chemists
American energy industry businesspeople
People in the petroleum industry
Oil refining
American chemical engineers
Scientists from Cleveland
People from Whiting, Indiana
Engineers from Ohio | William Merriam Burton | Chemistry | 233 |
7,587,648 | https://en.wikipedia.org/wiki/Fizz%20buzz | Fizz buzz is a group word game for children to teach them about division. Players take turns to count incrementally, replacing any number divisible by three with the word "fizz", and any number divisible by five with the word "buzz", and any number divisible by both three and five with the word "fizzbuzz".
Play
Players generally sit in a circle. The player designated to go first says the number "one", and the players then count upwards in turn. However, any number divisible by three is replaced by the word fizz and any number divisible by five is replaced by the word buzz. Numbers divisible by both three and five (i.e. divisible by fifteen) become fizz buzz. A player who hesitates or makes a mistake is eliminated.
For example, a typical round of fizz buzz would start as follows: 1, 2, fizz, 4, buzz, fizz, 7, 8, fizz, buzz, 11, fizz, 13, 14, fizz buzz, 16, ...
Other variations
In some versions of the game, other divisors such as 7 can be used instead. Another rule that may be used to complicate the game is that a number containing a trigger digit also invokes the corresponding rule (for instance, 52 contains the digit 5 and would therefore trigger the same rule as a number divisible by 5).
Programming
Fizz buzz (often spelled FizzBuzz in this context) has been used as an interview screening device for computer programmers. Writing a program to output the first 100 FizzBuzz numbers is a relatively trivial problem requiring little more than a loop and conditional statements in any popular language, and is thus a quick way to weed out applicants with absolutely no programming experience.
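One of many possible minimal solutions, sketched here in Python:

```python
def fizzbuzz(n):
    """Return the fizz buzz term for n, following the standard rules."""
    if n % 15 == 0:          # divisible by both three and five
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

# The first 15 terms of a round:
print(", ".join(fizzbuzz(i) for i in range(1, 16)))
# 1, 2, fizz, 4, buzz, fizz, 7, 8, fizz, buzz, 11, fizz, 13, 14, fizzbuzz
```

Checking divisibility by 15 first matters: testing 3 or 5 first would return "fizz" or "buzz" for multiples of both, which is the classic mistake the screening question catches.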
References
External links
Rosetta Code: Fizz Buzz at Rosetta Code
Euler's FizzBuzz, an unorthodox programmatic solution making use of Euler's theorem
Enterprise FizzBuzz, Comical 'enterprise' implementation of FizzBuzz with intentional verbosity
Car games
Children's games
Mathematical games
Division (mathematics) | Fizz buzz | Mathematics | 404 |
57,845,230 | https://en.wikipedia.org/wiki/Profidia | Profidia is an extinct genus of leaf beetles in the subfamily Eumolpinae. It contains only one species, Profidia nitida. It is known from Oligo-Miocene amber found near Simojovel in Chiapas, Mexico.
The species was described by American entomologist Judson Linsley Gressitt in 1963, using a single specimen (UCMP 12630) from the collections of the University of California Museum of Paleontology in Berkeley, California.
References
External links
University of California Museum of Paleontology Specimen 12630
†
†
Fossil beetle genera
Mexican amber
Oligocene insects of North America
Miocene insects of North America
Monotypic prehistoric insect genera
Species known from a single specimen | Profidia | Biology | 144 |
52,127,243 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28molar%20concentration%29 | This page lists examples of the orders of magnitude of molar concentration. Source values are parenthesized where unit conversions were performed.
M denotes the non-SI unit molar:
1 M = 1 mol/L = 10⁻³ mol/cm³ = 10³ mol/m³.
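The conversion to the SI base volume unit can be checked with a short Python sketch (the function and constant names are illustrative):

```python
# Unit-conversion check: 1 mol/L expressed in mol/m^3.
# 1 L = 10^-3 m^3 by definition, so dividing by the litre's size in
# cubic metres multiplies the numeric value by 1000.
LITRE_IN_CUBIC_METRES = 1e-3

def molar_to_mol_per_m3(concentration_molar):
    """Convert a concentration in mol/L (molar) to mol/m^3."""
    return concentration_molar / LITRE_IN_CUBIC_METRES

print(molar_to_mol_per_m3(1.0))  # 1000.0  (1 M = 10^3 mol/m^3)
```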
All orders
SI multiples
See also
Molarity
Osmolarity
Metric system
Scientific notation
References
Chemical properties
Molar concentration | Orders of magnitude (molar concentration) | Chemistry,Mathematics | 79 |
21,564,296 | https://en.wikipedia.org/wiki/Ccrypt | ccrypt is a utility for the secure encryption and decryption of files and streams. It was designed as a replacement for the standard UNIX crypt utility, which is notorious for using a very weak encryption algorithm.
ccrypt is based on the Rijndael cipher, the same cipher used in the AES standard. However, the AES standard specifies a 128-bit block size, whereas ccrypt uses a 256-bit block size. ccrypt commonly uses the .cpt file extension for encrypted files.
ccrypt does not provide an authenticated encryption scheme and therefore does not protect the integrity of encrypted data.
See also
bcrypt
crypt (Unix)
mcrypt
scrypt
References
External links
ccrypt homepage
Cryptographic software | Ccrypt | Mathematics | 166 |
27,284,122 | https://en.wikipedia.org/wiki/Takahashi%20Taxol%20total%20synthesis | The Takahashi Taxol total synthesis published by Takashi Takahashi in 2006 is one of several methods in taxol total synthesis. The method starts from geraniol and differs from the other 6 published methods that it is a formal synthesis (the final product is baccatin III which lacks the amide tail found in taxol itself) and that it is racemic (the product baccatin III is optically inactive). A key feature of the published procedure is that several synthetic steps (construction of rings A, B and C) were performed in an automated synthesizer on a scale up to 300 gram and that purification steps were also automated.
A ring synthesis
Ring A was synthesised starting from geraniol 1 and involved acylation (acetic anhydride, DMAP, Et3N) to 2, epoxidation (N-bromosuccinimide, tBuOH/H2O then triethylamine) to 3, radical cyclisation (titanocene dichloride, manganese, triethylborane, 2,6-lutidine) to 4, alcohol protection (ethyl vinyl ether, camphorsulfonic acid) to 5, alcohol deprotection (NaOH, MeOH/THF/H2O) to alcohol 6, Parikh-Doering oxidation to aldehyde 7, isomerization (DBU) to enone 8, organic reduction (sodium borohydride) to alcohol 9, alcohol protection (TBSCl, Et3N) to TBS ether 10, hydrazone formation (H2NNHTs) to 11 and finally vinyl bromide formation (tBuLi, 1,2-Dibromoethane) in 12.
Ring C synthesis
The synthesis of ring C also required hydroxygeranyl acetate 2. Subsequent steps were allylic oxidation (SeO2, tBuO2H, salicylic acid) to aldehyde 13, then carbonyl reduction (NaBH4) to alcohol 14, then epoxidation (VO(acac)2, tBuO2H) to 15, then alcohol protection (MPM trichloroacetimidate) to MPM ether 16, then radical cyclisation (titanocene dichloride, manganese, triethylborane, TMSCl, K2CO3) to alcohol 17, alcohol protection (BOMCl, DIPEA) to benzyloxymethyl ether 18, acetate hydrolysis (NaOH) and Ley oxidation to aldehyde 19.
Ring B synthesis
Ring A (12) and ring C (19) reacted together to alcohol 20 in a Shapiro reaction (tBuLi, CeCl3) in a similar way as in the Nicolaou Taxol total synthesis. Subsequent steps were epoxidation (VO(acac)2, tBuO2H) to 21, reduction (LiAlH4) to the diol and alcohol protection (aqueous KOH, BnBr, Bu4NHSO4) to benzyl ether 22, alcohol protection (Me2SiHCl, imidazole) and oxidation (DDQ) to DMS ether 23, tosylation (TsCl, DMAP) to 24, deprotection to diol (TBAF) and reprotection (TMSOTf, 2,6-lutidine, DIPEA) as TMS ether 25, Ley oxidation to aldehyde 26, cyanohydrin formation (TMSCN, 18-crown-6, KCN) and alcohol protection (ethyl vinyl ether, camphorsulfonic acid) to EE ether 27.
Ring D synthesis
Cyclisation of 27 took place by alkylation (LiN(TMS)2, dioxane, microwave irradiation) to tricycle 28. Subsequent steps were cyanohydrin hydrolysis (camphorsulfonic acid), TMS deprotection (KOH) and allylic oxidation (SeO2, tBuO2H, salicylic acid) to ketone 29, then Upjohn dihydroxylation to triol 30, then acylation (AcCl, DMAP) and mesitylation (MsCl, DMAP) to 31, then benzyl group and benzyloxy group removal (hydrogenation / Palladium on carbon) followed by carbonate protection (triphosgene, pyridine) to 32, then secondary alcohol protection (TESCl, pyridine) and primary alcohol deprotection (potassium carbonate) to diol 33, then oxetane formation (DIPEA, HMPA) to 34, then acylation (Ac2O, DMAP), then benzoylation (phenyllithium) to 35, then oxidation (tBuOK, (PhSeO)2O, THF) to the acyloin 36, then isomerisation (tBuOK) and acylation (Ac2O, DMAP, pyr) to 37, then oxidation at the allylic position (PCC, celite, NaOAc, benzene), ketone group oxidation (NaBH4) and TES protecting group removal (HF·pyr) to baccatin III (38).
References
Total synthesis
Taxanes | Takahashi Taxol total synthesis | Chemistry | 1,117 |
651,802 | https://en.wikipedia.org/wiki/Newton%27s%20Cannon | Newton's Cannon (1998) is a science fantasy novel by American writer Gregory Keyes, the first book in his The Age of Unreason series. The protagonist for the novel is Benjamin Franklin; other key characters to the novel are James Franklin – Ben's brother, John Collins – Ben's friend, as well as Adrienne and King Louis XIV – the Sun King.
The Age of Unreason Series
The other three novels of the series are:
A Calculus of Angels
Empire of Unreason
The Shadow of Gods
See also
Space gun
External links
References
Green, Roland. "New SF and fantasy books." Booklist 94.18 (15 May 1998): 1601.
KILLHEFFER, ROBERT K.J. "Books." Fantasy & Science Fiction 96.3 (Mar. 1999): 35.
Cassada, Jackie, et al. "Book reviews: Fiction." Library Journal 123.9 (15 May 1998): 118.
Steinberg, Sybil, and Jonathan Bing.. "Forecasts: Fiction." Publishers Weekly 245.15 (13 Apr. 1998): 57.
1998 American novels
1998 science fiction novels
American steampunk novels
American alternate history novels
Science fantasy novels
Novels by J. Gregory Keyes
Cultural depictions of Isaac Newton
Cultural depictions of Benjamin Franklin | Newton's Cannon | Astronomy | 267 |
605,802 | https://en.wikipedia.org/wiki/Alkaline%20mucus | Alkaline mucus is a thick fluid produced by animals which confers tissue protection in an acidic environment, such as in the stomach.
Properties
Mucus that serves a protective function against acidic environments generally has a high viscosity, though the thickness and viscosity of the mucus layer can vary due to several factors. For example, alkaline mucus in the stomach increases in thickness when the stomach is distended. The pH level of the mucus also plays a role in its viscosity, as higher pH levels tend to alter the thickness of the mucus, making it less viscous. Because of this, invading agents such as Helicobacter pylori, a bacterium that causes stomach ulcers, can alter the pH of the mucus to make the mucus pliable enough to move through. Exposure to atmospheric air also tends to increase the pH level of alkaline mucus.
In humans
In humans, alkaline mucus is present in several organs and provides protection by way of its alkalinity and high viscosity. Alkaline mucus exists in the human eye, stomach, saliva, and cervix.
In the stomach, alkaline mucus is secreted by gastric glands in the gastric mucosa of the stomach wall. Secretion of alkaline mucus is necessary to protect the mucous membrane of the stomach from acids released during digestion. Ulcers can develop as a result of damage caused to the gastric mucosal barrier. Duodenal ulcers have been shown to develop in sites that are in direct contact with pepsin and acids. To prevent damage and protect the mucus epithelium, alkaline mucus secretions increase in the digestive system when food is being eaten.
In the cervix, alkaline mucus has been shown to possess bactericidal properties to protect the cervix, uterus, peritoneal cavity, and vagina from microbes.
References
Digestive system | Alkaline mucus | Biology | 423 |
28,596,193 | https://en.wikipedia.org/wiki/Phlorotannin | Phlorotannins are a type of tannins found in brown algae such as kelps and rockweeds or sargassacean species, and in a lower amount also in some red algae. Contrary to hydrolysable or condensed tannins, these compounds are oligomers of phloroglucinol (polyphloroglucinols). As they are called tannins, they have the ability to precipitate proteins. It has been noticed that some phlorotannins have the ability to oxidize and form covalent bonds with some proteins. In contrast, under similar experimental conditions three types of terrestrial tannins (procyanidins, profisetinidins, and gallotannins) apparently did not form covalent complexes with proteins.
These phenolic compounds are integral structural components of cell walls in brown algae, but they also seem to play many other secondary ecological roles such as protection from UV radiation and defense against grazing.
Biosynthesis and localization
Most of the phlorotannins' biosynthesis is still unknown, but it appears they are formed from phloroglucinols via the acetate-malonate pathway.
They are found within the cell in small vesicles called physodes, where the soluble, polar fraction is sequestered, and as part of the cell wall, where they are insoluble and act as a structural component. Their concentration is known to be highly variable among different taxa as well as across geographical areas, since they respond plastically to a variety of environmental factors. Brown algae also exude phlorotannins into the surrounding seawater.
It has been proposed that phlorotannins are first sequestered in physodes under their polar, reactive form before being oxidized and complexed to the alginic acid of brown algal cell wall by a peroxidase. To this date (2012), not much is known about phlorotannins synthesis. The formation of physodes, vesicles containing phenolic compounds, have been investigated for many years. These cytoplasmic constituents were thought to be synthesized in the chloroplast or its membrane, but more recent studies suggest that the formation may be related to the endoplasmic reticulum and Golgi bodies.
The allocation of phlorotannins among tissues varies along with the species.
The localization of phlorotannins can be investigated by light microscopy after vanillin–HCl staining giving an orange color. The ultrastructural localization of physodes can be examined through transmission electron microscopy in samples primarily fixed in 2.5% glutaraldehyde and with postfixation with 1% osmium tetroxide. For staining, uranyl acetate and lead citrate can be used.
Extraction and assays
In many studies where individual phlorotannins are isolated, extracted phlorotannins are acetylated with acetic anhydride-pyridine to protect them from oxidation. Both lowering the temperature and the addition of ascorbic acid seem to prevent oxidation.
Usual assays to quantify phlorotannins in samples are the Folin-Denis and Prussian blue assays. A more specific assay makes use of 2,4-dimethoxybenzaldehyde (DMBA), a product that reacts specifically with 1,3-and 1,3,5-substituted phenols (e.g., phlorotannins) to form a colored product.
Structural diversity
The nomenclature system for the marine phlorotannins was originally introduced by Glombitza.
Phlorotannins are classified according to the arrangement of the phloroglucinol monomers. More than 150 compounds are known, ranging from 126 Da to 650 kDa in molecular weight; most fall between 10 and 100 kDa.
They are distributed in several main subgroups: fucols, phlorethols, fucophlorethols, fuhalols and eckols, the last of which are found only in the Alariaceae.
According to linkage type, phlorotannins can be classified into four subclasses, i.e., phlorotannins with an ether linkage (fuhalols and phlorethols, fuhalols are constructed of phloroglucinol units that are connected with para- and ortho-arranged ether bridges containing one additional OH-group in every third ring), with a phenyl linkage (fucols), with an ether and a phenyl linkage (fucophlorethols) and with a dibenzodioxin linkage in eckols and carmalols (derivatives of phlorethols containing a dibenzodioxin moiety), most of which have halogenated representatives in brown algae.
Examples of phlorotannins are fucodiphlorethol G from the seaweed Ecklonia cava, eckol from Ecklonia species or phlorofucofuroeckol-B from Eisenia arborea.
The structural diversity of higher molecular weight molecules can be screened through the use of the 'EDIT' Carbon-13 NMR technique.
Roles
The functions of phlorotannins are still an actual research subject (2012). They show primary and secondary roles, at both cellular and organismic scale.
Primary roles
Structural
The structural role of phlorotannins in brown algal cell wall is a primary role of these polyphenolic compounds. This primary role may however not be the main role of the phlorotannins, since studies show they are more abundant in cytoplasm or in the exuded form than in cell wall.
Reproductive
Cytoplasmic as well as exuded phlorotannins seem to play a role in algal reproduction, by contributing to the formation of the zygote's cell wall and perhaps avoiding multiple fertilization by inhibiting spermatozoid movement.
Secondary roles
According to the Carbon Nutrient Balance Model, phlorotannins, which are predominantly carbon molecules free of nitrogen, are produced in higher yields in high-light environments; light availability has greater importance than nitrogen availability.
Studies have shown that phlorotannins seem to act as a protection for brown algae in a number of ways. Here are some examples.
Antiherbivory defense
Phlorotannin production strategy may be constitutive or inducible. As studies have demonstrated that herbivory can induce phlorotannin production, it has been suggested that they may have a role in algal defense. However, results from other studies suggest that the deterrent role of phlorotannins on herbivory is highly dependent on both the algal and herbivore species. In Fucus vesiculosus, it is galactolipids, rather than phlorotannins, that act as herbivore deterrents against the sea urchin Arbacia punctulata.
UV and heavy metals screening
Phlorotannins are mostly located at the periphery of the cells, as components of the cell wall. They also contribute to absorption of UV-B light (between 280 and 320 nm) and show absorbance maxima at 200 and 265 nm, corresponding to UV-C wavelengths. Studies have also demonstrated that sunlight intensity is related to phlorotannin production in natural populations of Ascophyllum nodosum and Fucus vesiculosus. For these reasons, it has been suggested that phlorotannins act as photoprotective substances. Further studies with Lessonia nigrescens and Macrocystis integrifolia demonstrated that both UV-A and UV-B radiation can induce soluble phlorotannins and that there is a correlation between induction of phlorotannins and reduction in the inhibition of photosynthesis and DNA damage, two major effects of UV radiation on plant tissues. The fact that phlorotannins are exuded into surrounding water enables them to reduce incident UV exposure on kelp meiospores, phytoplankton and other kelp forest inhabitants where brown algal biomass is high and water motion is low.
They may also be involved in the sequestration of divalent metal ions such as Sr2+, Mg2+, Ca2+, Be2+, Mn2+, Cd2+, Co2+, Zn2+, Ni2+, Pb2+ and Cu2+. While the chelating properties of phlorotannins have been demonstrated in vitro, in situ studies suggest that this characteristic may be species-specific.
Algicidal effect
Studies have demonstrated that phlorotannins can act as algicides against some dinoflagellate species.
Therapeutic properties
It has been demonstrated that phlorotannins can have anti-diabetic, anti-cancer, anti-oxidation, antibacterial, radioprotective and anti-HIV properties. However, in vivo studies on the effects of these compounds are lacking, most of the research having so far been done in vitro. An exception is their anti-allergic activity, for which an in vivo study exists.
References
External links
Riitta Koivikko. 2008. Brown algal phlorotannins: Improving and applying chemical methods, Ph. D. Thesis, University of Turku, Turku, Finland.
Brown algae | Phlorotannin | Biology | 1,980 |
2,465,964 | https://en.wikipedia.org/wiki/Bibliogram | A bibliogram is a graphical representation of the frequency of certain target words, usually noun phrases, in a given text. The term was introduced in 2005 by Howard D. White to name the linguistic object studied, but not previously named, in informetrics, scientometrics and bibliometrics. The noun phrases in the ranking may be authors, journals, subject headings, or other indexing terms. The "stretches of text" may be a book, a set of related articles, a subject bibliography, a set of Web pages, and so on. Bibliograms are always generated from writings, usually from scholarly or scientific literature.
Definition
A bibliogram is a verbal construct made when noun phrases from extended stretches of text are ranked high to low by their frequency of co-occurrence with one or more user-supplied seed terms. Each bibliogram has three components:
A seed term that sets a context.
Words that co-occur with the seed across some set of records.
Counts (frequencies) by which co-occurring words can be ordered high to low.
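The three components above translate directly into a small counting procedure. The sketch below is our own illustration (the function name and the toy records are invented, not from White): it ranks the terms co-occurring with a seed term across a set of indexed records.

```python
from collections import Counter

def bibliogram(records, seed):
    """Rank terms by how often they co-occur with `seed`, high to low."""
    counts = Counter()
    for terms in records:
        if seed in terms:
            # Every other term in this record co-occurs with the seed once.
            counts.update(t for t in terms if t != seed)
    return counts.most_common()

# Toy records standing in for indexed database entries (hypothetical data):
records = [
    {"Creativity", "Creativity Tests", "Divergent Thinking"},
    {"Creativity", "Problem Solving"},
    {"Creativity", "Creativity Tests"},
    {"Mathematics Education", "Problem Solving"},
]
print(bibliogram(records, "Creativity"))
# "Creativity Tests" heads the core with count 2; the scatter follows.
```

A real bibliogram would draw `records` from a bibliographic database query rather than an in-memory list.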
As a family of term-frequency distributions, the bibliogram has frequently been written about under descriptions such as:
positive skew distribution
empirical hyperbolic
scale-free (see also Scale-free network)
power law
size frequency distribution
reverse-J
It is sometimes called a "core and scatter" distribution. The "core" consists of relatively few top-ranked terms that account for a disproportionately large share of co-occurrences overall.
The "scatter” consists of relatively many lower-ranked terms that account for the remaining share of co-occurrences. Usually the top-ranked terms are not tied in frequency, but identical frequencies and tied ranks become more common as the frequencies get smaller. At the bottom of the distribution, a long tail of terms are tied in rank because each co-occurs with the seed term only once.
In most cases bibliograms can be described by power laws such as Zipf's law and Bradford's law. In this regard, they have long been studied by mathematicians and statisticians in information science. However, these treatments typically ignore the qualitative meanings of the ranked terms themselves, which are often of interest in their own right. For example, the following bibliogram was made with an author's name as seed and shows the descriptors that co-occur with her name in the ERIC database. The descriptors are ranked by how many of her articles they were used to index:
6 Creativity
4 Creativity Tests
3 Divergent Thinking
2 Elementary School Mathematics
2 Instruction
2 Mathematics Education
2 Problem Solving
2 Research
2 Time
1 Acceleration
1 Anxiety
1 Beginning Teachers
1 Behavioral Objectives
1 Child Development
1 Classroom Techniques
1 Cognitive Development
etc.
This author is a researcher in education, and it will be seen that the terms profile her intellectual interests over the years. In general, bibliograms can be used to:
suggest additional terms for search strategies
characterize the work of scholars, scientists, or institutions
show who an author cites over time
show who cites an author over time
show the other authors with whom an author is co-cited over time
show the subjects associated with a journal or an author
show the authors, organizations, or journals associated with a subject
show library classification codes associated with subject headings and vice versa
show the popularity of items in the collections of libraries
model the structure of literatures with title terms, descriptors, author names, journal names
Bibliograms can be created with the RANK command on Dialog (other vendors have similar commands), with ranking options within WorldCat, with HistCite or Google Scholar, and with inexpensive content analysis software.
White suggests that bibliograms have a parallel construct in what he calls associograms. These are the rank-ordered lists of word association norms studied in psycholinguistics. They are similar to bibliograms in statistical structure but are not generated from writings. Rather, they are generated by presenting panels of people with a stimulus term (which functions like a seed term) and tabulating the words they associate with the seed by frequency of co-occurrence. They are currently of interest to information scientists as a nonstandard way of creating thesauri for document retrieval.
Examples
Other examples of bibliograms are the ordered set of an author's co-authors or the list of authors that are published in a specific journal together with their number of articles. A popular example is the list of additional titles to consider for purchase that you get when you search an item in Amazon. These suggested titles are the top terms in the "core" of a bibliogram formed with your search term as seed. The frequencies are counts of the times they have been co-purchased with the seed.
Examples of associograms may be found in the Edinburgh Associative Thesaurus.
Other methods
Similar but different methods are used in data clustering and data mining. Google Sets also created a list of associated terms for a given set of terms.
See also
Bibliographic coupling
Co-citation
References
Howard D. White (2005). "On Extending Informetrics: An Opinion Paper". In: Proceedings of the 10th International Congress of the International Society for Scientometrics and Informetrics, Stockholm, pp. 442–449.
Bibliometrics | Bibliogram | Mathematics,Technology | 1,108 |
504,308 | https://en.wikipedia.org/wiki/Terrace%20garden | A terrace garden is a garden with a raised flat paved or gravelled section overlooking a prospect. A raised terrace keeps a house dry and provides a transition between the hardscape and the softscape.
History
Persia
Since a level site is generally regarded as a requisite for comfort and repose, the terrace as a raised viewing platform made an early appearance in the ancient Persian gardening tradition, where the enclosed orchard, or paradise, was to be viewed from a ceremonial tent. Such a terrace had its origins in the far older agricultural practice of terracing a sloping site: see Terrace (agriculture). The Hanging Gardens of Babylon must have been built on an artificial mountain with stepped terraces, like those on a ziggurat.
Ancient Rome
Lucullus brought back to Rome first-hand experience of Persian gardening in the hilly sites of Asia Minor; the villa gardens of Maecenas, which included libraries open to scholars, incurred the disdain of Seneca. At Praeneste during the early Imperial period, the sanctuary of Fortuna was enlarged and elaborated, the natural slope being shaped into a series of terraces linked by stairs.
The imperial villas at Capri were built to take advantage of varied terraces. At the seaside Villa of the Papyri in Herculaneum, the villa gardens of Julius Caesar's father-in-law fell away in a series of terraces, giving pleasant and varied views of the Bay of Naples. Only some of them have been excavated. At Villa of Livia, probably part of Livia Drusilla's dowry brought to the Julio-Claudian dynasty, rooms in the cryptoporticus beneath terracing were frescoed with trees in bloom and fruit.
Italian Renaissance
During the Italian Renaissance, the formalized, civilizing imprint of human control over wild nature expressed in terracing that was combined with stairs and water features, drew villa patrons and garden designers to escarpments that surveyed a handsome prospect. At the influential Cortile del Belvedere at the Vatican Palace, perfected under a series of popes from the earliest 16th century, the backdrop within the enclosed court was a raised terrace. The view in this case was from the Stanze of Raphael on an upper floor of the Palace.
English landscape garden
Even in the most naturalistic landscape gardens of Capability Brown, a raised gravelled or paved terrace along the garden front offered a dry walk in damp weather and a transition between the hard materials of the architecture and the rolling greensward beyond.
Contemporary
Contemporary terrace gardens, in addition to being in the garden and landscape, often occur in urban areas and are terrace architecture elements that extend out from an apartment or residence at any floor level other than ground level. They are often discussed in conjunction with roof gardens, although they are not always true roof gardens, instead being balconies and decks. These outdoor spaces can become lush gardens through the use of container gardening, automated drip irrigation and low-flow irrigation systems, and outdoor furnishings.
See also
Patio
Balcony
Terrace (building)
References
Garden features
Architectural elements | Terrace garden | Technology,Engineering | 614 |
31,255,149 | https://en.wikipedia.org/wiki/Duffin%E2%80%93Schaeffer%20theorem | The Koukoulopoulos–Maynard theorem, also known as the Duffin-Schaeffer conjecture, is a theorem in mathematics, specifically, the Diophantine approximation proposed as a conjecture by R. J. Duffin and A. C. Schaeffer in 1941 and proven in 2019 by Dimitris Koukoulopoulos and James Maynard. It states that if is a real-valued function taking on positive values, then for almost all (with respect to Lebesgue measure), the inequality
has infinitely many solutions in coprime integers with if and only if
where is Euler's totient function.
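The statement can be probed numerically. The sketch below is our own illustration, not part of the theorem: with $\alpha = \sqrt{2}$ and the choice $f(q) = 1/q$ (for which $\sum f(q)\varphi(q)/q$ diverges), coprime solutions of $|\alpha - p/q| < f(q)/q = 1/q^2$ keep appearing as $q$ grows.

```python
from math import gcd, sqrt

alpha = sqrt(2)
solutions = []
for q in range(1, 201):
    p = round(alpha * q)                 # best integer numerator for this q
    if gcd(p, q) == 1 and abs(alpha - p / q) < 1 / q**2:
        solutions.append((p, q))

print(solutions)
# Includes the continued-fraction convergents 3/2, 7/5, 17/12, 41/29, ...
```

Of course, a finite computation cannot prove infinitude; it only illustrates the kind of approximation the theorem governs.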
A higher-dimensional analogue of this conjecture was resolved by Vaughan and Pollington in 1990.
Introduction
That the existence of infinitely many rational approximations implies divergence of the series follows from the Borel–Cantelli lemma. The converse implication is the crux of the conjecture.
There have been many partial results on the Duffin–Schaeffer conjecture established to date. Paul Erdős established in 1970 that the conjecture holds if there exists a constant $c > 0$ such that for every integer $n$ we have either $f(n) = c/n$ or $f(n) = 0$. This was strengthened by Jeffrey Vaaler in 1978 to the case $f(n) = O\!\left(n^{-1}\right)$. More recently, Haynes, Pollington, and Velani showed that the conjecture holds whenever the associated series diverges at a slightly faster ("extra divergence") rate.
In 2006, Beresnevich and Velani proved that a Hausdorff measure analogue of the Duffin–Schaeffer conjecture is equivalent to the original Duffin–Schaeffer conjecture, which is a priori weaker. This result was published in the Annals of Mathematics.
See also
Khinchin's theorem
Notes
References
External links
Quanta Magazine article about the Duffin–Schaeffer conjecture.
Numberphile interview with James Maynard about the proof.
Conjectures
Conjectures that have been proved
Diophantine approximation | Duffin–Schaeffer theorem | Mathematics | 381 |
21,666,983 | https://en.wikipedia.org/wiki/Synchronous%20frame | A synchronous frame is a reference frame in which the time coordinate defines proper time for all co-moving observers. It is built by choosing some constant time hypersurface as an origin, such that has in every point a normal along the time line and a light cone with an apex in that point can be constructed; all interval elements on this hypersurface are space-like. A family of geodesics normal to this hypersurface are drawn and defined as the time coordinates with a beginning at the hypersurface. In terms of metric-tensor components , a synchronous frame is defined such that
where Such a construct, and hence, choice of synchronous frame, is always possible though it is not unique. It allows any transformation of space coordinates that does not depend on time and, additionally, a transformation brought about by the arbitrary choice of hypersurface used for this geometric construct.
Synchronization in an arbitrary frame of reference
Synchronization of clocks located at different space points means that events happening at different places can be measured as simultaneous if those clocks show the same times. In special relativity, the space distance element dl is defined as the interval between two very close events that occur at the same moment of time. In general relativity this cannot be done, that is, one cannot define dl by just substituting dt ≡ dx0 = 0 in the metric. The reason for this is the different dependence between proper time $\tau$ and the time coordinate x0 ≡ t at different points of space, i.e.,
$d\tau = \frac{1}{c} \sqrt{g_{00}}\, dx^0.$
To find dl in this case, time can be synchronized over two infinitesimally neighboring points in the following way (Fig. 1): Bob sends a light signal from some space point B with coordinates $x^\alpha + dx^\alpha$ to Alice, who is at a very close point A with coordinates $x^\alpha$, and then Alice immediately reflects the signal back to Bob. The time necessary for this operation (measured by Bob), multiplied by c, is obviously twice the distance between Alice and Bob.
The line element, with separated space and time coordinates, is:
$ds^2 = g_{\alpha\beta}\, dx^\alpha dx^\beta + 2 g_{0\alpha}\, dx^0 dx^\alpha + g_{00} \left( dx^0 \right)^2,$
where a repeated Greek index within a term means summation over the values 1, 2, 3. The interval between the events of signal arrival and its immediate reflection back at point A is zero (two events, arrival and reflection, happen at the same point in space and time). For light signals the space-time interval is zero, and thus setting $ds^2 = 0$ in the above equation, we can solve for dx0, obtaining two roots:
$dx^{0\,(1)} = \frac{1}{g_{00}} \left( -g_{0\alpha}\, dx^\alpha - \sqrt{ \left( g_{0\alpha} g_{0\beta} - g_{\alpha\beta} g_{00} \right) dx^\alpha dx^\beta } \right),$
$dx^{0\,(2)} = \frac{1}{g_{00}} \left( -g_{0\alpha}\, dx^\alpha + \sqrt{ \left( g_{0\alpha} g_{0\beta} - g_{\alpha\beta} g_{00} \right) dx^\alpha dx^\beta } \right),$
which correspond to the propagation of the signal in the two directions between Alice and Bob. If x0 is the moment of arrival/reflection of the signal to/from Alice in Bob's clock then the moments of signal departure from Bob and its arrival back to Bob correspond, respectively, to x0 + dx0 (1) and x0 + dx0 (2). The thick lines on Fig. 1 are the world lines of Alice and Bob with coordinates xα and xα + dxα, respectively, while the red lines are the world lines of the signals. Fig. 1 supposes that dx0 (2) is positive and dx0 (1) is negative, which, however, is not necessarily the case: dx0 (1) and dx0 (2) may have the same sign. The fact that in the latter case the value x0 (Alice) in the moment of signal arrival at Alice's position may be less than the value x0 (Bob) in the moment of signal departure from Bob does not contain a contradiction because clocks in different points of space are not supposed to be synchronized. It is clear that the full "time" interval between departure and arrival of the signal in Bob's place is
$dx^{0\,(2)} - dx^{0\,(1)} = \frac{2}{g_{00}} \sqrt{ \left( g_{0\alpha} g_{0\beta} - g_{\alpha\beta} g_{00} \right) dx^\alpha dx^\beta }.$
The respective proper time interval is obtained from the above relationship by multiplication by $\sqrt{g_{00}}/c$, and the distance dl between the two points by an additional multiplication by c/2. As a result:
$dl^2 = \left( -g_{\alpha\beta} + \frac{g_{0\alpha} g_{0\beta}}{g_{00}} \right) dx^\alpha dx^\beta.$
This is the required relationship that defines distance through the space coordinate elements.
It is obvious that such synchronization should be done by exchange of light signals between points. Consider again propagation of signals between infinitesimally close points A and B in Fig. 1. The clock reading in B which is simultaneous with the moment of reflection in A lies in the middle between the moments of sending and receiving the signal in B; in this moment, if Alice's clock reads y0 and Bob's clock reads x0, then by the Einstein synchronization condition
$x^0 = y^0 + \frac{1}{2} \left( dx^{0\,(1)} + dx^{0\,(2)} \right).$
Substituting here the roots found above gives the difference in "time" x0 between two simultaneous events occurring in infinitesimally close points as
$\Delta x^0 = -\frac{g_{0\alpha}\, dx^\alpha}{g_{00}}.$
This relationship allows clock synchronization in any infinitesimally small space volume. By continuing such synchronization further from point A, one can synchronize clocks, that is, determine simultaneity of events, along any open line. The synchronization condition can be written in another form by multiplying by g00 and bringing all terms to the left-hand side:
$g_{00}\, dx^0 + g_{0\alpha}\, dx^\alpha = dx_0 = 0,$
that is, the "covariant differential" dx0 between two infinitesimally close points should be zero.
However, it is impossible, in general, to synchronize clocks along a closed contour: starting out along the contour and returning to the starting point one would obtain a Δx0 value different from zero. Thus, unambiguous synchronization of clocks over the whole space is impossible. An exception are reference frames in which all components g0α are zeros.
The inability to synchronize all clocks is a property of the reference frame and not of the spacetime itself. It is always possible in infinitely many ways in any gravitational field to choose the reference frame so that the three g0α become zeros and thus enable a complete synchronization of clocks. To this class are assigned cases where g0α can be made zeros by a simple change in the time coordinate which does not involve a choice of a system of objects that define the space coordinates.
In the special relativity theory, too, proper time elapses differently for clocks moving relatively to each other. In general relativity, proper time is different even in the same reference frame at different points of space. This means that the interval of proper time between two events occurring at some space point and the time interval between the events simultaneous with those at another space point are, in general, different.
Example: Uniformly rotating frame
Consider a rest (inertial) frame expressed in cylindrical coordinates $r, \varphi', z$ and time $t$. The interval in this frame is given by
$ds^2 = c^2 dt^2 - dr^2 - r^2 d\varphi'^2 - dz^2.$
Transforming to a uniformly rotating coordinate system using the relation $\varphi' = \varphi + \Omega t$, where $\Omega$ is the angular velocity, modifies the interval to
$ds^2 = \left( c^2 - \Omega^2 r^2 \right) dt^2 - 2 \Omega r^2\, d\varphi\, dt - dr^2 - r^2 d\varphi^2 - dz^2.$
Of course, the rotating frame is valid only for $r < c/\Omega$, since the frame speed would exceed the speed of light beyond this radial location. With $x^0 = ct$, the non-zero components of the metric tensor are $g_{00} = 1 - \Omega^2 r^2 / c^2$, $g_{0\varphi} = -\Omega r^2 / c$, $g_{rr} = g_{zz} = -1$ and $g_{\varphi\varphi} = -r^2$. Along any open curve, the relation
$\Delta x^0 = -\frac{g_{0\alpha}\, dx^\alpha}{g_{00}}$
can be used to synchronize clocks. However, along any closed curve, synchronization is impossible because
$\Delta x^0 = \oint \frac{\Omega r^2 / c}{1 - \Omega^2 r^2 / c^2}\, d\varphi \neq 0.$
For instance, when $\Omega r / c \ll 1$, we have
$\Delta t = \frac{\Delta x^0}{c} \approx \pm \frac{2 \Omega}{c^2}\, S,$
where $S$ is the projected area of the closed curve on a plane perpendicular to the rotation axis (the plus or minus sign corresponds to contour traversal in, or opposite to, the rotation direction).
The proper time element in the rotating frame is given by
$d\tau = dt\, \sqrt{ 1 - \frac{\Omega^2 r^2}{c^2} },$
indicating that time slows down as we move away from the axis. Similarly, the spatial element can be calculated to find
$dl^2 = dr^2 + dz^2 + \frac{r^2\, d\varphi^2}{1 - \Omega^2 r^2 / c^2}.$
At fixed values of $r$ and $z$, the spatial element is $dl = r\, d\varphi / \sqrt{1 - \Omega^2 r^2 / c^2}$, which upon integration over a full circle shows that the ratio of the circumference of a circle to its radius is given by
$\frac{2\pi}{\sqrt{ 1 - \Omega^2 r^2 / c^2 }},$
which is greater than $2\pi$.
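The size of the synchronization gap $\Delta t \approx \pm 2\Omega S / c^2$ is easy to evaluate. The following sketch uses Earth-like sample values of our own choosing (a circular contour around the equator); it is an illustration of the formula, not a statement from the text above.

```python
from math import pi

c = 299_792_458.0     # speed of light, m/s
Omega = 7.292e-5      # angular velocity, rad/s (Earth's rotation rate)
r = 6.378e6           # contour radius, m (Earth's equatorial radius)

S = pi * r**2                      # projected area of the closed contour
delta_t = 2 * Omega * S / c**2     # synchronization gap around the contour
print(f"synchronization gap: {delta_t:.3e} s")
```

A gap of this order (roughly 200 ns) is the well-known Sagnac correction that must be accounted for when synchronizing clocks around the rotating Earth.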
Space metric tensor
The expression for $dl^2$ can be rewritten in the form
$dl^2 = \gamma_{\alpha\beta}\, dx^\alpha dx^\beta,$
where
$\gamma_{\alpha\beta} = -g_{\alpha\beta} + \frac{g_{0\alpha} g_{0\beta}}{g_{00}}$
is the three-dimensional metric tensor that determines the metric, that is, the geometrical properties of space. These equations give the relationships between the metric of the three-dimensional space $\gamma_{\alpha\beta}$ and the metric of the four-dimensional spacetime $g_{ik}$.
In general, however, depends on x0 so that changes with time. Therefore, it doesn't make sense to integrate dl: this integral depends on the choice of world line between the two points on which it is taken. It follows that in general relativity the distance between two bodies cannot be determined in general; this distance is determined only for infinitesimally close points. Distance can be determined for finite space regions only in such reference frames in which gik does not depend on time and therefore the integral along the space curve acquires some definite sense.
The tensor $\gamma_{\alpha\beta}$ is inverse to the contravariant 3-dimensional tensor $-g^{\alpha\beta}$. Indeed, writing the identity $g^{ik} g_{kl} = \delta^i_l$ in components, one has:
$g^{\alpha\beta} g_{\beta\gamma} + g^{\alpha 0} g_{0\gamma} = \delta^\alpha_\gamma, \qquad g^{\alpha\beta} g_{\beta 0} + g^{\alpha 0} g_{00} = 0, \qquad g^{0\beta} g_{\beta 0} + g^{00} g_{00} = 1.$
Determining $g^{\alpha 0}$ from the second equation and substituting it into the first proves that
$-g^{\alpha\beta} \gamma_{\beta\gamma} = \delta^\alpha_\gamma.$
This result can be presented otherwise by saying that $-g^{\alpha\beta}$ are the components of the contravariant 3-dimensional tensor corresponding to the metric $\gamma_{\alpha\beta}$:
$\gamma^{\alpha\beta} = -g^{\alpha\beta}.$
The determinants g and $\gamma$ composed of the elements $g_{ik}$ and $\gamma_{\alpha\beta}$, respectively, are related to each other by the simple relationship:
$-g = g_{00}\, \gamma.$
In many applications, it is convenient to define a 3-dimensional vector g with covariant components
$g_\alpha = -\frac{g_{0\alpha}}{g_{00}}.$
Considering g as a vector in the space with metric $\gamma_{\alpha\beta}$, its contravariant components can be written as $g^\alpha = \gamma^{\alpha\beta} g_\beta$. Using $\gamma^{\alpha\beta} = -g^{\alpha\beta}$ and the second of the component identities above, it is easy to see that
$g^\alpha = -g^{0\alpha}.$
From the third of the component identities, it follows
$g^{00} = \frac{1}{g_{00}} - g_\alpha g^\alpha.$
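These relations are easy to verify numerically. The sketch below is our own check, using the uniformly rotating frame from the previous section (units with c = 1; the values of Ω and r are arbitrary samples): it confirms $\gamma^{\alpha\beta} = -g^{\alpha\beta}$, $-g = g_{00}\gamma$, and the rotating-frame spatial element.

```python
import numpy as np

Omega, r = 0.1, 2.0        # sample values with Omega * r < 1 (c = 1)

# Metric of the uniformly rotating frame, coordinates ordered (t, r, phi, z).
g = np.diag([1 - (Omega * r)**2, -1.0, -r**2, -1.0])
g[0, 2] = g[2, 0] = -Omega * r**2        # g_{0 phi}

# Three-dimensional metric: gamma_ab = -g_ab + g_0a g_0b / g_00
gamma = -g[1:, 1:] + np.outer(g[0, 1:], g[0, 1:]) / g[0, 0]

# gamma^{ab} = -g^{ab}: the inverse of gamma equals minus the spatial
# block of the inverse four-metric.
assert np.allclose(np.linalg.inv(gamma), -np.linalg.inv(g)[1:, 1:])

# Determinants: -g = g_00 * gamma
assert np.isclose(-np.linalg.det(g), g[0, 0] * np.linalg.det(gamma))

# gamma_{phi phi} reproduces r^2 / (1 - Omega^2 r^2) from the rotating frame.
assert np.isclose(gamma[1, 1], r**2 / (1 - (Omega * r)**2))
print("three-dimensional metric relations verified")
```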
Synchronous coordinates
As concluded earlier, the condition that allows clock synchronization in different space points is that the metric tensor components g0α are zeros. If, in addition, g00 = 1, then the time coordinate x0 = t is the proper time in each space point (with c = 1). A reference frame that satisfies the conditions
$g_{00} = 1, \qquad g_{0\alpha} = 0$
is called a synchronous frame. The interval element in this system is given by the expression
$ds^2 = dt^2 - \gamma_{\alpha\beta}\, dx^\alpha dx^\beta,$
with the spatial metric tensor components identical (with opposite sign) to the components gαβ:
$\gamma_{\alpha\beta} = -g_{\alpha\beta}.$
In a synchronous frame, the time lines are normal to the hypersurfaces t = const. Indeed, the unit four-vector normal to such a hypersurface, ni = ∂t/∂xi, has covariant components nα = 0, n0 = 1. The respective contravariant components, under the conditions above, are again nα = 0, n0 = 1.
The components of the unit normal coincide with those of the four-vector ui = dxi/ds which is tangent to the world line x1, x2, x3 = const. The four-vector with components uα = 0, u0 = 1 automatically satisfies the geodesic equations:
$\frac{du^i}{ds} + \Gamma^i_{kl}\, u^k u^l = \Gamma^i_{00} = 0,$
since, under the synchronous-frame conditions, the Christoffel symbols $\Gamma^\alpha_{00}$ and $\Gamma^0_{00}$ vanish identically. Therefore, in the synchronous frame the time lines are geodesics in the spacetime.
These properties can be used to construct a synchronous frame in any spacetime (Fig. 2). To this end, choose some spacelike hypersurface as an origin, such that it has in every point a normal along the time line (lying inside the light cone with an apex in that point); all interval elements on this hypersurface are space-like. Then draw a family of geodesics normal to this hypersurface. Choose these lines as time coordinate lines and define the time coordinate t as the length s of the geodesic measured with a beginning at the hypersurface; the result is a synchronous frame.
An analytic transformation to a synchronous frame can be done with the use of the Hamilton–Jacobi equation. The principle of this method is based on the fact that particle trajectories in gravitational fields are geodesics. The Hamilton–Jacobi equation for a particle (whose mass is set equal to unity) in a gravitational field is
$g^{ik} \frac{\partial S}{\partial x^i} \frac{\partial S}{\partial x^k} = 1,$
where S is the action. Its complete integral has the form:
$S = f\left( x^0, x^1, x^2, x^3;\, \xi^1, \xi^2, \xi^3 \right) + A\left( \xi^1, \xi^2, \xi^3 \right).$
Note that the complete integral contains as many arbitrary constants as the number of independent variables, which in our case is four. In the above equation, these correspond to the three parameters ξα and the fourth constant A being treated as an arbitrary function of the three ξα. With such a representation for S, the equations for the trajectory of the particle can be obtained by equating the derivatives ∂S/∂ξα to zero, i.e.
$\frac{\partial f}{\partial \xi^\alpha} = -\frac{\partial A}{\partial \xi^\alpha}.$
For each set of assigned values of the parameters ξα, the right sides of these equations have definite constant values, and the world line determined by these equations is one of the possible trajectories of the particle. Choosing the quantities ξα, which are constant along the trajectory, as new space coordinates, and the quantity S as the new time coordinate, one obtains a synchronous frame; the transformation from the old coordinates to the new ones is given by these equations. In fact, it is guaranteed that for such a transformation the time lines will be geodesics and will be normal to the hypersurfaces S = const. The latter point is obvious from the mechanical analogy: the four-vector ∂S/∂xi which is normal to the hypersurface coincides in mechanics with the four-momentum of the particle, and therefore coincides in direction with its four-velocity ui, i.e. with the four-vector tangent to the trajectory. Finally the condition g00 = 1 is obviously satisfied, since the derivative −dS/ds of the action along the trajectory is the mass of the particle, which was set equal to 1; therefore |dS/ds| = 1.
The gauge conditions do not fix the coordinate system completely and are therefore not a fixed gauge, as the initial spacelike hypersurface can be chosen arbitrarily. One still has the freedom of performing coordinate transformations containing four arbitrary functions of the three spatial variables xα, which are easily worked out in infinitesimal form:
$x^i = \tilde{x}^i + \xi^i\!\left( \tilde{x} \right).$
Here, the collections of the four old coordinates (t, xα) and four new coordinates are denoted by the symbols x and $\tilde{x}$, respectively. The functions $\xi^i$ together with their first derivatives are infinitesimally small quantities. After such a transformation, the four-dimensional interval takes the form:
$ds^2 = \tilde{g}_{ik}\!\left( \tilde{x} \right) d\tilde{x}^i\, d\tilde{x}^k,$
where
$\tilde{g}_{ik}\!\left( \tilde{x} \right) = g_{ik}\!\left( \tilde{x} \right) - g_{il} \frac{\partial \xi^l}{\partial \tilde{x}^k} - g_{kl} \frac{\partial \xi^l}{\partial \tilde{x}^i} - \xi^l \frac{\partial g_{ik}}{\partial \tilde{x}^l}.$
In the last formula, the $g_{ik}(\tilde{x})$ are the same functions gik(x) in which x should simply be replaced by $\tilde{x}$. If one wishes to preserve the gauge also for the new metric tensor $\tilde{g}_{ik}$ in the new coordinates $\tilde{x}$, it is necessary to impose the following restrictions on the functions $\xi^i$:
$\frac{\partial \xi^0}{\partial t} = 0, \qquad \gamma_{\alpha\beta} \frac{\partial \xi^\beta}{\partial t} - \frac{\partial \xi^0}{\partial x^\alpha} = 0.$
The solutions of these equations are:
$\xi^0 = f_0\!\left( x^\alpha \right), \qquad \xi^\alpha = \frac{\partial f_0}{\partial x^\beta} \int \gamma^{\alpha\beta}\, dt + f^\alpha\!\left( x^\beta \right),$
where f0 and fα are four arbitrary functions depending only on the spatial coordinates.
For a more elementary geometrical explanation, consider Fig. 2. First, the synchronous time line ξ0 = t can be chosen arbitrarily (Bob's, Carol's, Dana's or that of any of infinitely many observers). This makes one arbitrarily chosen function: $f_0\!\left( x^\alpha \right)$. Second, the initial hypersurface can be chosen in infinitely many ways. Each of these choices changes three functions, one for each of the three spatial coordinates: $f^\alpha\!\left( x^\beta \right)$. Altogether, four (= 1 + 3) functions are arbitrary.
When discussing general solutions gαβ of the field equations in synchronous gauges, it is necessary to keep in mind that the gravitational potentials gαβ contain, among all possible arbitrary functional parameters present in them, four arbitrary functions of 3-space just representing the gauge freedom and therefore of no direct physical significance.
Another problem with the synchronous frame is that caustics can occur which cause the gauge choice to break down. These problems have caused some difficulties doing cosmological perturbation theory in synchronous frame, but the problems are now well understood. Synchronous coordinates are generally considered the most efficient reference system for doing calculations, and are used in many modern cosmology codes, such as CMBFAST. They are also useful for solving theoretical problems in which a spacelike hypersurface needs to be fixed, as with spacelike singularities.
Einstein equations in synchronous frame
Introduction of a synchronous frame allows one to separate the operations of space and time differentiation in the Einstein field equations. To make them more concise, the notation
$\varkappa_{\alpha\beta} = \frac{\partial \gamma_{\alpha\beta}}{\partial t}$
is introduced for the time derivatives of the three-dimensional metric tensor; these quantities also form a three-dimensional tensor. In the synchronous frame $\varkappa_{\alpha\beta}$ is proportional to the second fundamental form (shape tensor). All operations of shifting indices and covariant differentiation of the tensor $\varkappa_{\alpha\beta}$ are done in three-dimensional space with the metric γαβ. This does not apply to operations of shifting indices in the space components of the four-tensors Rik, Tik. Thus $T^\beta_\alpha$ must be understood to be $g^{\beta\gamma} T_{\gamma\alpha} + g^{\beta 0} T_{0\alpha}$, which reduces to $g^{\beta\gamma} T_{\gamma\alpha}$ and differs in sign from $\gamma^{\beta\gamma} T_{\gamma\alpha}$. The sum $\varkappa^\alpha_\alpha$ is the logarithmic derivative of the determinant γ ≡ |γαβ| = − g:
$\varkappa^\alpha_\alpha = \gamma^{\alpha\beta} \frac{\partial \gamma_{\alpha\beta}}{\partial t} = \frac{\partial}{\partial t} \ln \gamma.$
Then for the complete set of Christoffel symbols one obtains:
$\Gamma^0_{00} = \Gamma^\alpha_{00} = \Gamma^0_{0\alpha} = 0, \qquad \Gamma^0_{\alpha\beta} = \frac{1}{2} \varkappa_{\alpha\beta}, \qquad \Gamma^\alpha_{0\beta} = \frac{1}{2} \varkappa^\alpha_\beta, \qquad \Gamma^\alpha_{\beta\gamma} = \lambda^\alpha_{\beta\gamma},$
where $\lambda^\alpha_{\beta\gamma}$ are the three-dimensional Christoffel symbols constructed from γαβ:
$\lambda^\alpha_{\beta\gamma} = \frac{1}{2} \gamma^{\alpha\delta} \left( \gamma_{\delta\beta,\gamma} + \gamma_{\delta\gamma,\beta} - \gamma_{\beta\gamma,\delta} \right),$
where the comma denotes a partial derivative by the respective coordinate.
With these Christoffel symbols, the components Rik = gilRlk of the Ricci tensor can be written in the form:
$R^0_0 = -\frac{1}{2} \frac{\partial \varkappa^\alpha_\alpha}{\partial t} - \frac{1}{4} \varkappa^\beta_\alpha \varkappa^\alpha_\beta,$
$R^0_\alpha = \frac{1}{2} \left( \varkappa^\beta_{\alpha;\beta} - \varkappa^\beta_{\beta;\alpha} \right),$
$R^\beta_\alpha = -P^\beta_\alpha - \frac{1}{2\sqrt{\gamma}} \frac{\partial}{\partial t} \left( \sqrt{\gamma}\, \varkappa^\beta_\alpha \right).$
Semicolons (";") denote covariant differentiation, which in this case is performed with respect to the three-dimensional metric γαβ with the three-dimensional Christoffel symbols $\lambda^\alpha_{\beta\gamma}$, and Pαβ is the three-dimensional Ricci tensor constructed from $\lambda^\alpha_{\beta\gamma}$:
$P_{\alpha\beta} = \lambda^\gamma_{\alpha\beta,\gamma} - \lambda^\gamma_{\alpha\gamma,\beta} + \lambda^\gamma_{\alpha\beta} \lambda^\delta_{\gamma\delta} - \lambda^\gamma_{\alpha\delta} \lambda^\delta_{\gamma\beta}.$
It follows that the Einstein equations (in units with c = 1, k the gravitational constant, and with the mixed components of the energy–momentum tensor given in this frame by $T^0_0 = T_{00}$, $T^0_\alpha = T_{0\alpha}$, $T^\beta_\alpha = -\gamma^{\beta\gamma} T_{\gamma\alpha}$) become in a synchronous frame:
$-\frac{1}{2} \frac{\partial \varkappa^\alpha_\alpha}{\partial t} - \frac{1}{4} \varkappa^\beta_\alpha \varkappa^\alpha_\beta = 8\pi k \left( T^0_0 - \frac{1}{2} T \right),$
$\frac{1}{2} \left( \varkappa^\beta_{\alpha;\beta} - \varkappa^\beta_{\beta;\alpha} \right) = 8\pi k\, T^0_\alpha,$
$-P^\beta_\alpha - \frac{1}{2\sqrt{\gamma}} \frac{\partial}{\partial t} \left( \sqrt{\gamma}\, \varkappa^\beta_\alpha \right) = 8\pi k \left( T^\beta_\alpha - \frac{1}{2} \delta^\beta_\alpha T \right).$
A characteristic feature of the synchronous frame is that it is not stationary: the gravitational field cannot be constant in such a frame. In a constant field $\varkappa_{\alpha\beta}$ would become zero. But in the presence of matter the disappearance of all $\varkappa_{\alpha\beta}$ would contradict the first of the field equations (which has a right side different from zero). In empty space it would follow that all Pαβ, and with them all the components of the three-dimensional curvature tensor Pαβγδ (Riemann tensor), vanish, i.e. the field vanishes entirely (in a synchronous frame with a Euclidean spatial metric the space-time is flat).
At the same time the matter filling the space cannot in general be at rest relative to the synchronous frame. This is obvious from the fact that particles of matter within which there are pressures generally move along lines that are not geodesics; the world line of a particle at rest is a time line, and thus is a geodesic in the synchronous frame. An exception is the case of dust (p = 0). Here the particles interacting with one another will move along geodesic lines; consequently, in this case the condition for a synchronous frame does not contradict the condition that it be comoving with the matter. Even in this case, in order to be able to choose a synchronously comoving frame, it is still necessary that the matter move without rotation. In the comoving frame the contravariant components of the velocity are u0 = 1, uα = 0. If the frame is also synchronous, the covariant components must satisfy u0 = 1, uα = 0, so that its four-dimensional curl must vanish:
$\frac{\partial u_i}{\partial x^k} - \frac{\partial u_k}{\partial x^i} = 0.$
But this tensor equation must then also be valid in any other reference frame. Thus, in a synchronous but not comoving frame the condition curl v = 0 for the three-dimensional velocity v is additionally needed. For other equations of state a similar situation can occur only in special cases when the pressure gradient vanishes in all or in certain directions.
Singularity in synchronous frame
Use of the synchronous frame in cosmological problems requires thorough examination of its asymptotic behaviour. In particular, it must be known if the synchronous frame can be extended to infinite time and infinite space maintaining always the unambiguous labelling of every point in terms of coordinates in this frame.
It was shown above that unambiguous synchronization of clocks over the whole space is impossible because of the impossibility of synchronizing clocks along a closed contour. As concerns synchronization over infinite time, first recall that the time lines of all observers are normal to the chosen hypersurface and in this sense are "parallel". Traditionally, the concept of parallelism is defined in Euclidean geometry to mean straight lines that are everywhere equidistant from each other, but in arbitrary geometries this concept can be extended to mean lines that are geodesics. It was shown that time lines are geodesics in a synchronous frame. Another definition of parallel lines, more convenient for the present purpose, is lines that have either all or none of their points in common. Excluding the case of all points in common (obviously, the same line), one arrives at the definition of parallelism in which no two time lines have a common point.
Since the time lines in a synchronous frame are geodesics, these lines are "straight" for all observers in the generating hypersurface. The spatial metric is
$dl^2 = \gamma_{\alpha\beta}\, dx^\alpha dx^\beta.$
The determinant of the matrix composed of the three basis row vectors $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$ is, in absolute value, their triple product, which is also the volume of the parallelepiped spanned by these vectors (i.e., the parallelepiped whose adjacent sides are the vectors $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$).
If $\gamma$ turns to zero, then the volume of this parallelepiped is zero. This can happen when one of the vectors lies in the plane of the other two, so that the parallelepiped volume degenerates to the area of its base (the height becomes zero) or, more formally, when two of the vectors are linearly dependent. But then multiple points (the points of intersection) are labelled in the same way, that is, the metric has a singularity.
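The collapse of the determinant can be seen in a three-line computation. The sketch below is our own illustration with arbitrary basis vectors: the triple product (the determinant of the matrix of row vectors) vanishes as soon as one vector falls into the plane of the other two.

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])

# Triple product = determinant of the matrix of row vectors = volume.
vol = abs(np.linalg.det(np.stack([e1, e2, e3])))
print(vol)   # 1.0

# Make e3 linearly dependent: it now lies in the plane of e1 and e2.
e3_degenerate = 2 * e1 - e2
vol0 = abs(np.linalg.det(np.stack([e1, e2, e3_degenerate])))
print(vol0)  # zero volume -- the metric is singular
```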
The Landau group found that the synchronous frame necessarily forms a time singularity; that is, the time lines intersect (and, correspondingly, the metric tensor determinant turns to zero) in a finite time.
This is proven in the following way. The right-hand side of the $R_0^0$ Einstein equation

$$R_0^0 = \frac{8\pi k}{c^4}\left(T_0^0 - \tfrac{1}{2}T\right),$$

containing the stress–energy tensors of matter and electromagnetic field, is a positive number because of the strong energy condition. This can be easily seen when written in components:

$$T_0^0 - \tfrac{1}{2}T = \tfrac{1}{2}\left(\varepsilon + 3p\right)$$

for matter,

$$T_0^0 - \tfrac{1}{2}T = T_0^0 = \frac{E^2 + H^2}{8\pi}$$

for electromagnetic field (whose trace is zero).
With the above in mind, and using the synchronous-frame expression

$$R_0^0 = -\frac{1}{2}\frac{\partial \varkappa_\alpha^\alpha}{\partial t} - \frac{1}{4}\varkappa_\alpha^\beta \varkappa_\beta^\alpha, \qquad \varkappa_{\alpha\beta} \equiv \frac{\partial \gamma_{\alpha\beta}}{\partial t},$$

the $R_0^0$ equation is then re-written as an inequality

$$\frac{1}{2}\frac{\partial \varkappa_\alpha^\alpha}{\partial t} + \frac{1}{4}\varkappa_\alpha^\beta \varkappa_\beta^\alpha \le 0,$$

with the equality pertaining to empty space.
Using the algebraic inequality

$$\varkappa_\alpha^\beta \varkappa_\beta^\alpha \ge \tfrac{1}{3}\left(\varkappa_\alpha^\alpha\right)^2,$$

the inequality above becomes

$$\frac{\partial \varkappa_\alpha^\alpha}{\partial t} + \tfrac{1}{6}\left(\varkappa_\alpha^\alpha\right)^2 \le 0.$$

Dividing both sides by $\left(\varkappa_\alpha^\alpha\right)^2$ and using the equality

$$\frac{\partial}{\partial t}\frac{1}{\varkappa_\alpha^\alpha} = -\frac{1}{\left(\varkappa_\alpha^\alpha\right)^2}\frac{\partial \varkappa_\alpha^\alpha}{\partial t},$$

one arrives at the inequality

$$\frac{\partial}{\partial t}\frac{1}{\varkappa_\alpha^\alpha} \ge \frac{1}{6}.$$

Let, for example, $\varkappa_\alpha^\alpha > 0$ at some moment of time. Because this derivative is positive, the ratio $1/\varkappa_\alpha^\alpha$ decreases with decreasing time, always having a finite non-zero derivative, and therefore it should become zero, coming from the positive side, during a finite time. In other words, $\varkappa_\alpha^\alpha$ becomes $+\infty$, and because $\varkappa_\alpha^\alpha = \partial \ln \gamma / \partial t$, this means that the determinant $\gamma$ becomes zero (by the above inequality, not faster than $\sim t^6$). If, on the other hand, $\varkappa_\alpha^\alpha < 0$ initially, the same is true for increasing time.
An idea about the space at the singularity can be obtained by considering the diagonalized metric tensor. Diagonalization makes the elements of the matrix $\gamma_{\alpha\beta}$ everywhere zero except on the main diagonal, whose elements are the three eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$; these are three real values when the discriminant of the characteristic polynomial is greater than or equal to zero, or one real and two complex conjugate values when the discriminant is less than zero. Then the determinant $\gamma$ is just the product of the three eigenvalues. If only one of these eigenvalues becomes zero, then the whole determinant is zero. Let, for example, the real eigenvalue become zero ($\lambda_1 = 0$). Then the diagonalized matrix becomes a 2 × 2 matrix with the (generally complex conjugate) eigenvalues $\lambda_2$, $\lambda_3$ on the main diagonal. But this matrix is the diagonalized metric tensor of a 2-dimensional space; therefore, the above suggests that at the singularity ($\gamma = 0$) the space is 2-dimensional when only one eigenvalue turns to zero.
Geometrically, diagonalization is a rotation of the basis for the vectors comprising the matrix in such a way that the direction of the basis vectors coincides with the direction of the eigenvectors. If $\gamma_{\alpha\beta}$ is a real symmetric matrix, the eigenvectors form an orthonormal basis defining a rectangular parallelepiped whose length, width, and height are the magnitudes of the three eigenvalues. This example is especially demonstrative in that the determinant, which is also the volume of the parallelepiped, is equal to length × width × height, i.e., the product of the eigenvalues. Making the volume of the parallelepiped equal to zero, for example by equating the height to zero, leaves only one face of the parallelepiped, a 2-dimensional space, whose area is length × width. Continuing with the obliteration and equating the width to zero, one is left with a line of size length, a 1-dimensional space. Further equating the length to zero leaves only a point, a 0-dimensional space, which marks the place where the parallelepiped has been.
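To make the volume picture concrete, here is a minimal Python sketch (the eigenvalues are arbitrary example numbers): for a metric already diagonalized in its eigenbasis, the determinant is simply the product of the diagonal entries, and zeroing them one by one reproduces the obliteration sequence described above:

```python
# For a diagonal(ized) real symmetric metric, the eigenvalues sit on the main
# diagonal and the determinant -- the box volume -- is their product.
from math import prod

def det_diag(eigenvalues):
    return prod(eigenvalues)

length, width, height = 5.0, 3.0, 2.0
print(det_diag([length, width, height]))  # 30.0 -- full 3-dimensional volume

print(det_diag([length, width, 0.0]))     # 0.0  -- height gone: only a 2-D face
print(length * width)                     # 15.0 -- area of that remaining face

print(det_diag([length, 0.0, 0.0]))       # 0.0  -- width also gone: a 1-D line
```

Each successive zeroed eigenvalue drops one dimension, mirroring the face, line, and point stages in the text.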
An analogy from geometrical optics is comparison of the singularity with caustics, such as the bright pattern in Fig. 3, which shows caustics formed by a glass of water illuminated from the right side. The light rays are an analogue of the time lines of the free-falling observers localized on the synchronized hypersurface. Judging by the approximately parallel sides of the shadow contour cast by the glass, one can surmise that the light source is at a practically infinite distance from the glass (such as the sun) but this is not certain as the light source is not shown on the photo. So one can suppose that the light rays (time lines) are parallel without this being proven with certainty. The glass of water is an analogue of the Einstein equations or the agent(s) behind them that bend the time lines to form the caustics pattern (the singularity). The latter is not as simple as the face of a parallelepiped but is a complicated mix of various kinds of intersections. One can distinguish an overlap of two-, one-, or zero-dimensional spaces, i.e., intermingling of surfaces and lines, some converging to a point (cusp) such as the arrowhead formation in the centre of the caustics pattern.
The conclusion that timelike geodesic vector fields must inevitably reach a singularity after a finite time was reached independently by Raychaudhuri by another method, which led to the Raychaudhuri equation, also called the Landau–Raychaudhuri equation to honour both researchers.
See also
Normal coordinates
Congruence (general relativity), for a derivation of the kinematical decomposition and of Raychaudhuri's equation.
References
Bibliography
(English translation: )
; Physical Review Letters, 6, 311 (1961)
General relativity
Coordinate systems
Frames of reference
Coordinate charts in general relativity
Physical cosmology | Synchronous frame | Physics,Astronomy,Mathematics | 5,575 |
27,666,964 | https://en.wikipedia.org/wiki/BENGAL%20%28project%29 | BENGAL was the acronym of the research project High-resolution temporal and spatial study of the BENthic biology and Geochemistry of a north-eastern Atlantic abyssal Locality. The project was funded through the EC MAST III program from 1996 to 1998 (EC contract MAS-3 950018).
The project was a three-year multidisciplinary study of the abyssal benthic boundary layer in the northeast Atlantic. The aim of BENGAL was to determine how the seabed community and the geochemistry of the sediments change seasonally in response to a highly seasonal input of organic matter from the overlying water column. It did this by organising an intensive sampling programme on 14 research cruises over a two-year period and using a range of observational techniques including time-series sediment traps, marine snow cameras, benthic lander systems, long-term moorings and time-lapse photography. The study area was located in the middle of the Porcupine Abyssal Plain at a water depth of about 4850 m, 270 km southwest of Ireland (central location: 48°50′N 16°30′W). The BENGAL project involved 17 partners from 9 European countries.
References
Entry of BENGAL in the EU-project database CORDIS
Rice, A.L., Gage, J.D., Lampitt, R.S., Pfannkuche, O. & Sibuet, M. (1998) BENGAL: High resolution temporal and spatial study of the benthic biology and geochemistry of a north-eastern Atlantic abyssal locality. In: Barthel, K-G. et al. (eds) Third European marine science and technology conference, Lisbon, 23–27 May 1998, project synopses, volume I: marine systems, Office for Official Publications of the European Communities, Luxembourg, pp. 271–286.
Billett, D.S.M. & Rice, A.L. (2001) The BENGAL programme: introduction and overview, Progress In Oceanography, 50 (1-4), 13-25, (followed by 20 publications about the scientific outcome of BENGAL in the same volume).
Data compilation in the PANGAEA data library
Results are published in various journals of the scientific literature.
Oceanography | BENGAL (project) | Physics,Environmental_science | 459 |
56,278,327 | https://en.wikipedia.org/wiki/Margaret%20Douie%20Dougal | Margaret Douie Dougal ( – 1938, née Robertson, later Chaplin) was a British chemical publication indexer for fifteen years (1885–1909) for the Chemical Society. Dougal contributed to the compilation of volumes i-iii of A Collective Index of the Transactions, Proceedings and Abstracts of the Chemical Society. The then president of the Chemical society, Sir James Dewar, congratulated Dougal for her work as "an example of thoroughness and accuracy to her successors." The collected decennial indices were also prepared by Dougal; at the 1906 Annual Meeting of the Chemical Society it was noted that the Council "had pleasure in expressing the high appreciation of the ceaseless energy displayed by the indexer, Mrs. Margaret Dougal, on the completion of this valuable work."
Under Thomas Edward Thorpe, Dougal conducted inorganic chemistry research of mixed salts of chromium by testing their compositions. Research she conducted also provided insight on the stress and fracturing behavior of iron in Scottish craftsmanship and manufacturing in 1892.
Dougal was born Margaret Douie Robertson in Singapore in , the daughter of J. H. Robertson, M.D. She married William Dougal and, after his death, married Arnold Chaplin M.D. F.R.C.P. on 29 July 1909. She died in London on 9 November 1938 at the age of 79.
References
19th-century British chemists
19th-century British women scientists
British chemists
British women chemists
Inorganic chemists
1850s births
1938 deaths
Date of birth missing
Place of birth missing
Place of death missing
Indexers | Margaret Douie Dougal | Chemistry | 320 |
63,075,269 | https://en.wikipedia.org/wiki/Yay%C4%B1k%20ayran%C4%B1 | Yayık ayranı, also known as Turkish buttermilk, is a traditional Turkish drink produced from fermented buttermaking by-products, water and salt. It has been traditionally prepared in barrel churns or skin bags. Despite the similar name, it is distinct from ayran. Goat, sheep, or cow's milk can be used for Turkish buttermilk production. Certain acid curd cheeses such as çökelek could also be obtained from yayık ayranı when heated.
Yayık ayranı is not available in mainstream retail markets since it is not produced on an industrial scale, though it is available in local markets.
Production
Yayık ayranı is made of churned soured yogurt, water and salt. It is mostly produced in rural communities for domestic consumption during buttermaking out of yogurt. In general, yogurt for butter production is fermented longer than usual for extra acid production; yayık ayranı thus has a distinct sour taste. Lactic acid bacteria isolated from yogurt, fermented butter and yayık ayranı encompass Lactobacillus delbrueckii subsp. bulgaricus, Streptococcus thermophilus and Lactococcus diacetylactis.
Churning and salting
Traditional Turkish churns for butter and buttermilk production are made of various materials such as animal skin (dried and boiled goat or sheep skin), wooden barrels, earthenware or metal, although in modern times electrically driven churns have become popular. The mechanical force needed for the churning process can also be provided by riding animals such as horses and mules. This method of production is observed especially among nomadic pastoralist communities during seasonal yaylak migrations. There are also older reports stating that local communities in Turkey used washing machines as churns.
Before churning, the yogurt in the churn is diluted by 50 per cent with cold water, and the partly congealed butter that accumulates on top is extracted after churning. The amount of water used for dilution varies, resulting in yayık ayranı with different viscosities. Dilution with water is followed by the salting process, which comprises a 0.5–1.0% salt addition.
See also
Buttermilk
Ayran
References
External links
Turkish drinks
Yogurt-based drinks
Fermented dairy products
Fermented drinks
Sour foods | Yayık ayranı | Biology | 520 |
22,161,660 | https://en.wikipedia.org/wiki/AbsoluteTelnet | AbsoluteTelnet is a software terminal client for Windows that implements Telnet, SSH 1 and 2, SFTP, TAPI Dialup and direct COM port connections. It is commercial software, originally released in 1999 and is still in regular development by Brian Pence of Celestial Software.
Features
Some features of AbsoluteTelnet:
Emulates VT52, VT100, VT220, VT320, ANSI, Xterm, QNX, SCO-ANSI, ANSIBBS, and WYSE60
Password, Public-key, keyboard-interactive, Smartcard and GSSAPI authentication support
Supports Triple DES, Twofish, Blowfish, AES, Arcfour and CAST-128 ciphers
Tabbed interface for multiple concurrent connections (Dockable)
Scripting support using VBScript
Unicode 5.0 support, including bidirectional text, surrogates, combining characters, etc...
Passthru printing
IPv6 support.
IDNA support
Localized into 8 languages (English, German, French, Portuguese, Chinese, Russian, Norwegian and Hungarian)
Pocket PC support
XMODEM, YMODEM, and ZMODEM file transfer for all terminal connections
SSH File Transfer Protocol for ssh2 connections only
See also
Comparison of SSH clients
References
Further reading
Centrify article on Single Sign On with Absolutetelnet
External links
Official AbsoluteTelnet website
Mailing list
Cryptographic software
Internet Protocol based network software
Telnet
Terminal emulators | AbsoluteTelnet | Mathematics | 303 |
198,207 | https://en.wikipedia.org/wiki/Rec.%20601 | ITU-R Recommendation BT.601, more commonly known by the abbreviations Rec. 601 or BT.601 (or its former name CCIR 601), is a standard originally issued in 1982 by the CCIR (an organization, which has since been renamed as the International Telecommunication Union Radiocommunication sector) for encoding interlaced analog video signals in digital video form. It includes methods of encoding 525-line 60 Hz and 625-line 50 Hz signals, both with an active region covering 720 luminance samples and 360 chrominance samples per line. The color encoding system is known as YCbCr 4:2:2.
The Rec. 601 video raster format has been re-used in a number of later standards, including the ISO/IEC MPEG and ITU-T H.26x compressed formats, although compressed formats for consumer applications usually use chroma subsampling reduced from the 4:2:2 sampling specified in Rec. 601 to 4:2:0.
The standard has been revised several times in its history. Its seventh edition, referred to as BT.601-7, was approved in March 2011 and was formally published in October 2011.
Background and history
In the early 1980s, digital television equipment was beginning to emerge, but each manufacturer was developing their own proprietary digital versions of existing analog standards like PAL, SECAM, and NTSC.
At an ITU meeting in autumn 1981, CCIR Study Group 11 approved document 11/1027 describing the parameter values for a unified digital video format. This was adopted as Draft Rec. AA/11 "Encoding Parameters for Digital Television for Studios" by the CCIR Plenary Assembly in February 1982, later becoming ITU-R Rec. 601.
The key features allowing a globally accepted digital standard were the use of component coding and the choice of a luminance sampling frequency that was a common multiple of the line frequencies used in the analog standards. This "orthogonal" sampling approach originated with Stanley Baron of NBC.
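The common-multiple property can be verified with exact arithmetic. In this sketch the 525-line frequency is taken as 4.5 MHz/286 and the 625-line frequency as 15 625 Hz, the standard values for the two systems:

```python
# 13.5 MHz divided by either analog line frequency yields an integer number of
# samples per total line, so samples align vertically ("orthogonal" sampling).
from fractions import Fraction

F_S = 13_500_000  # Rec. 601 luminance sampling frequency, Hz

f_line_525 = Fraction(4_500_000, 286)  # 525-line/60 Hz system: ~15734.266 Hz
f_line_625 = Fraction(15_625, 1)       # 625-line/50 Hz system: 15625 Hz

samples_525 = Fraction(F_S) / f_line_525
samples_625 = Fraction(F_S) / f_line_625

print(samples_525, samples_625)  # 858 864
```

Both divisions come out as whole numbers, 858 and 864 total samples per line, of which 720 are the active luminance samples in each system.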
Preparation preceded the CCIR approval, including laboratory testing around the world to validate the proposed parameter values. International negotiations and efforts to build consensus were led by figures like Mark Krivosheev, Richard Green, and representatives from Japan and Europe.
Signal format
The Rec. 601 signal can be regarded as if it is a digitally encoded analog component video signal, and thus the sampling includes data for the horizontal and vertical sync and blanking intervals. Regardless of the frame rate, the luminance sampling frequency is 13.5 MHz. The samples are uniformly quantized using 8- or 10-bit PCM codes in the YCbCr domain.
For each 8-bit luminance sample, the nominal value to represent black is 16, and the value for white is 235. Eight-bit code values from 1 through 15 provide footroom and can be used to accommodate transient signal content such as filter undershoots. Similarly, code values 236 through 254 provide headroom and can be used to accommodate transient signal content such as filter overshoots. The values 0 and 255 are used to encode the sync pulses and are forbidden within the visible picture area. The Cb and Cr samples are unsigned and use the value 128 to encode the neutral color difference value, as used when encoding a white, grey or black area.
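As an illustration of these nominal levels, here is a minimal Python sketch of the 8-bit quantization, assuming the well-known Rec. 601 luma coefficients (0.299, 0.587, 0.114); the function name is illustrative, not from the standard:

```python
# Hedged sketch: map gamma-corrected R'G'B' in [0.0, 1.0] to nominal 8-bit
# Y'CbCr codes (black = 16, white = 235, neutral chroma = 128).
KR, KG, KB = 0.299, 0.587, 0.114  # Rec. 601 luma coefficients

def rgb_to_ycbcr601(r, g, b):
    y = KR * r + KG * g + KB * b       # luma, 0..1
    cb = (b - y) / (2 * (1 - KB))      # scaled B' - Y', range -0.5..0.5
    cr = (r - y) / (2 * (1 - KR))      # scaled R' - Y', range -0.5..0.5
    return (round(16 + 219 * y),       # 219 quantization levels for luma
            round(128 + 224 * cb),     # 224 levels, centred on 128
            round(128 + 224 * cr))

print(rgb_to_ycbcr601(1.0, 1.0, 1.0))  # (235, 128, 128) -- white
print(rgb_to_ycbcr601(0.0, 0.0, 0.0))  # (16, 128, 128)  -- black
```

White and black land exactly on the nominal 235 and 16 levels, with the chroma channels at the neutral value 128, as the text describes.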
Primary chromaticities
Slightly different primaries are specified for the 625-line (PAL and SECAM) and 525-line (NTSC SMPTE C primaries) systems. Earlier versions of the standard (prior to BT.601-6, approved in January 2007) did not contain an explicit definition of the color primaries.
Transfer characteristics
Rec. 601 defines a nonlinear transfer function which is linear near 0 and then transitions to a gamma curve for the rest of the luminance range:

$$V = \begin{cases} 4.500\,L & 0 \le L < 0.018 \\ 1.099\,L^{0.45} - 0.099 & 0.018 \le L \le 1 \end{cases}$$

where $L$ is the linear scene luminance and $V$ the resulting nonlinear video signal level, both normalized to the range [0, 1].
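A sketch of this piecewise transfer in Python, assuming the commonly published BT.601 constants (linear slope 4.5 below L = 0.018, then a 0.45-exponent gamma segment); treat the exact figures as an assumption rather than a quotation of the standard:

```python
def bt601_oetf(l):
    """Linear 'toe' near zero, then a gamma curve for the rest of the range."""
    if l < 0.018:
        return 4.500 * l               # linear segment for near-black values
    return 1.099 * l ** 0.45 - 0.099   # gamma segment for the rest

print(bt601_oetf(0.0))  # 0.0
print(bt601_oetf(1.0))  # 1.0  (1.099 - 0.099)
```

The two segments meet near L = 0.018 (4.5 × 0.018 ≈ 0.081 against ≈ 0.0812 from the gamma branch), the small mismatch coming from the rounded published constants.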
Awards
The CCIR received a 1982–83 Technology and Engineering Emmy Award for its development of the Rec. 601 standard.
See also
Digital component video
YCbCr
Rec. 709, the corresponding standard for high-definition television (HDTV)
Rec. 2020, ITU-R Recommendation for ultra-high-definition television (UHDTV)
ITU-R BT.656, ITU-R Recommendation for parallel and serial transmission formats for BT.601 video
Pixel aspect ratio
References
Digital television
Film and video technology
ITU-R recommendations
Color space | Rec. 601 | Mathematics | 894 |
12,697,208 | https://en.wikipedia.org/wiki/Tim%20Hunter%20%28astronomer%29 | Timothy B. Hunter, better known as Tim Hunter, is an American radiologist and amateur astronomer, who was the president of the International Dark-Sky Association.
Education and profession
Hunter received his M.D. degree from Northwestern University in 1968. He teaches as a professor of radiology and orthopaedic surgery at the University of Arizona College of Medicine. He also earned a B.S. degree in mathematics from the University of Arizona in 1980 and M.S. in astronomy from Swinburne University of Technology in 2006.
Astronomy
Since 1986, Hunter has studied the problem of increasing light pollution. In 1987, together with David Crawford, he founded the International Dark-Sky Association, which has grown to more than 10,000 members in 75 countries (as of 2007). His effort in this field brought him the Presidential Award of the Astronomical League in 2004 and the Amateur Achievement Award of the Astronomical Society of the Pacific in 2005. He is also a member of the Planetary Science Institute Board of Trustees and a past chairman of the Western Region of the Astronomical League.
Asteroid
Asteroid 6398 Timhunter, discovered by Carolyn Shoemaker at Palomar Observatory in 1991, was named in his honor. The official naming citation was published by the Minor Planet Center on 1 June 1996 ().
References
2005 ASP Annual Award Winners: Tim Hunter
University of Arizona - msk biographical sketches
Planetary Science Institute Board of Trustees
The Grasslands observatory
6398 Timhunter at JPL Small-Body Database Browser
Living people
Amateur astronomers
20th-century American astronomers
21st-century American astronomers
University of Arizona faculty
University of Arizona alumni
Feinberg School of Medicine alumni
Year of birth missing (living people) | Tim Hunter (astronomer) | Astronomy | 336 |
19,427,518 | https://en.wikipedia.org/wiki/Flag%20day%20%28computing%29 | A flag day, as used in system administration, is a change which requires a complete restart or conversion of a sizable body of software or data. The change is large and expensive, and—in the event of failure—similarly difficult and expensive to reverse.
The situation may arise if there are limitations on backward compatibility and forward compatibility among system components, which then requires that updates be performed almost simultaneously (during a "flag day cutover") for the system to function after the upgrade. This contrasts with the method of gradually phased-in upgrades, which avoids the disruption of service caused by en masse upgrades.
This systems terminology originates from a major change in the Multics operating system's definition of ASCII, which was scheduled for the United States holiday, Flag Day, on June 14, 1966.
Another historical flag day was January 1, 1983, when the ARPANET changed from NCP to the TCP/IP protocol suite. This major change required all ARPANET nodes and interfaces to be shut down and restarted across the entire network.
See also
Backward compatibility
Forward compatibility
Protocol ossification
References
External links
DNS Flag Day 2019
Computer jargon
1966 neologisms | Flag day (computing) | Technology | 239 |
30,480,219 | https://en.wikipedia.org/wiki/Harmonically%20enhanced%20digital%20audio | Harmonically Enhanced Digital Audio (HEDA) is a class of digital recordings created by using modern digital harmonic enhancement technology. With the proliferation of harmonic enhancement algorithms, pioneered by Crane Song's Dave Hill, a new class of harmonic enhancement algorithms have emerged.
Examples of algorithms used to create HEDA include the HEAT feature (available on Pro Tools HD 8.1 and later) and the multi-platform plug-ins Universal Audio's Studer A800, Slate Digital's Virtual Console Collection, Waves Non-Linear Summer, and Crane Song HEDD-192.
History
Development of harmonic enhancement was in its first stages in 1999 when Antares, developer of Autotune, noticed that the tube emulation feature in its microphone simulator was used more than its other features. In response, Antares created a specific plug-in, called Antares Tube, which implemented tube emulation. The algorithms for harmonic enhancement were just beginning and market interest was shown to be present. During its development period, harmonic exciters and harmonic enhancement plug-ins were looked down upon by most of the pro-audio industry.
Modern Harmonic Enhancement
HEAT (Harmonic Enhancement Algorithm Technology), released in 2010, is a plug-in featured on each track of HD editions of Pro Tools. Avid, developer of Pro Tools, enlisted the help of Dave Hill of Crane Song for its creation. Dave Hill's harmonic enhancement algorithms have been viewed as top-of-the-line by many professionals for years. Before Pro Tools 8.1, no Digital Audio Workstation (DAW) featured harmonic enhancement on each track.
Alternatives available on other platforms are Universal Audio's Studer A800 plug-in (released in late 2010), Slate Digital's Virtual Console Collection (VCC), Sonimus Satson, and Waves' Non-Linear Summer (NLS).
References
External links
HEDA
Audio electronics | Harmonically enhanced digital audio | Engineering | 385 |
92,385 | https://en.wikipedia.org/wiki/Eta%20Carinae | η Carinae (Eta Carinae, abbreviated to η Car), formerly known as η Argus, is a stellar system containing at least two stars with a combined luminosity greater than five million times that of the Sun, located around distant in the constellation Carina. Previously a 4th-magnitude star, it brightened in 1837 to become brighter than Rigel, marking the start of its so-called "Great Eruption". It became the second-brightest star in the sky between 11–14 March 1843 before fading well below naked-eye visibility after 1856. In a smaller eruption, it reached 6th magnitude in 1892 before fading again. It has brightened consistently since about 1940, becoming brighter than magnitude 4.5 by 2014.
At declination −59° 41′ 04.26″, η Carinae is circumpolar from locations on Earth south of latitude 30°S (for reference, the latitude of Johannesburg is 26°12′S), and is not visible north of about latitude 30°N, just south of Cairo (which is at a latitude of 30°02′N).
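These visibility limits follow from the declination alone. A minimal Python sketch (the function names are mine, not calls from any astronomy library), ignoring atmospheric refraction:

```python
# A southern star never sets for observers south of latitude -(90 - |dec|),
# and never rises for observers north of latitude +(90 - |dec|).
DEC = -(59 + 41/60 + 4.26/3600)  # eta Carinae declination, degrees (~ -59.68)

def circumpolar_south_of(dec):
    """Latitude (degrees, negative = south) below which the star never sets."""
    return -(90 + dec)

def never_visible_north_of(dec):
    """Latitude (degrees, positive = north) above which the star never rises."""
    return 90 + dec

print(round(circumpolar_south_of(DEC), 1))    # -30.3 -> circumpolar south of ~30 S
print(round(never_visible_north_of(DEC), 1))  #  30.3 -> invisible north of ~30 N
```

Both limits come out near 30°, matching the Johannesburg and Cairo comparisons in the text.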
The two main stars of the η Carinae system have an eccentric orbit with a period of 5.54 years. The primary is an extremely unusual star, similar to a luminous blue variable (LBV). It was initially around 150 solar masses, of which it has already lost at least 30, and it is expected to explode as a supernova in the astronomically near future. This is the only star known to produce ultraviolet laser emission. The secondary star is hot and also highly luminous, probably of spectral class O, around 30–80 times as massive as the Sun. The system is heavily obscured by the Homunculus Nebula, which consists of material ejected from the primary during the Great Eruption. It is a member of the Trumpler 16 open cluster, itself embedded in the much larger Carina Nebula.
Although unrelated to the star and nebula, the weak Eta Carinids meteor shower has a radiant very close to η Carinae.
Observational history
η Carinae was first recorded as a fourth-magnitude star in the 16th or 17th century. It became the second-brightest star in the sky in the mid-19th century, before fading below naked-eye visibility. During the second half of the 20th century, it slowly brightened to again become visible to the naked eye, and by 2014 was again a fourth-magnitude star.
Discovery and naming
There is no reliable evidence of η Carinae being observed or recorded before the 17th century, although Dutch navigator Pieter Keyser described a fourth-magnitude star at approximately the correct position around 1595–1596, which was copied onto the celestial globes of Petrus Plancius and Jodocus Hondius and the 1603 Uranometria of Johann Bayer. Frederick de Houtman's independent star catalogue from 1603 does not include η Carinae among the other 4th magnitude stars in the region. The earliest firm record was made by Edmond Halley in 1677 when he recorded the star simply as Sequens (i.e. "following" relative to another star) within a new constellation Robur Carolinum. His Catalogus Stellarum Australium was published in 1679. The star was also known by the Bayer designations η Roboris Caroli, η Argus, or η Navis. In 1751 Nicolas-Louis de Lacaille gave the stars of Argo Navis and Robur Carolinum a single set of Greek letter Bayer designations within his constellation Argo, and designated three areas within Argo for the purposes of using Latin letter designations three times over. The letter η fell within the keel portion of the ship which was later to become the constellation Carina. It was not generally known as η Carinae until 1879, when the stars of Argo Navis were finally given the epithets of the daughter constellations in the Uranometria Argentina of Gould.
η Carinae is too far south to be part of the mansion-based traditional Chinese astronomy, but it was mapped when the Southern Asterisms were created at the start of the 17th century. Together with s Carinae, λ Centauri and λ Muscae, η Carinae forms the asterism 海山 (Sea and Mountain). η Carinae has the names Tseen She (from the Chinese 天社 [Mandarin: tiānshè] "Heaven's altar") and Foramen. It is also known as 海山二 (Hǎi Shān èr, the Second Star of Sea and Mountain).
Halley gave an approximate apparent magnitude 4 at the time of discovery, which has been calculated as magnitude 3.3 on the modern scale. The handful of possible earlier sightings suggest that η Carinae was not significantly brighter than this for much of the 17th century. Further sporadic observations over the next 70 years show that η Carinae was probably around 3rd magnitude or fainter, until Lacaille reliably recorded it at 2nd magnitude in 1751. It is unclear whether η Carinae varied significantly in brightness over the next 50 years; there are occasional observations such as William Burchell's at 4th magnitude in 1815, but it is uncertain whether these are just re-recordings of earlier observations.
Great Eruption
In 1827, Burchell specifically noted η Carinae's unusual brightness at 1st magnitude, and was the first to suspect that it varied in brightness. John Herschel, who was in South Africa at the time, made a detailed series of accurate measurements in the 1830s showing that η Carinae consistently shone around magnitude 1.4 until November 1837. On the evening of 16 December 1837, Herschel was astonished to see that it had brightened to slightly outshine Rigel. This event marked the beginning of a roughly 18-year period known as the Great Eruption.
η Carinae was brighter still on 27 January 1838, equivalent to Alpha Centauri, before fading slightly over the following three months. Herschel did not observe the star after this, but received correspondence from the Reverend W.S. Mackay in Calcutta, who wrote in 1843, "To my great surprise I observed this March last (1843), that the star η Argus had become a star of the first magnitude fully as bright as Canopus, and in colour and size very like Arcturus." Observations at the Cape of Good Hope indicated it peaked in brightness, surpassing Canopus, from 11 to 14 March 1843, then began to fade, then brightened to between the brightness of Alpha Centauri and Canopus between 24 and 28 March before fading once again. For much of 1844 the brightness was midway between Alpha Centauri and Beta Centauri, around magnitude +0.2, before brightening again at the end of the year. At its brightest in 1843 it likely reached an apparent magnitude of −0.8, then −1.0 in 1845. The peaks in 1827, 1838 and 1843 are likely to have occurred at the periastron passage—the point the two stars are closest together—of the binary orbit. From 1845 to 1856, the brightness decreased by around 0.1 magnitudes per year, but with possible rapid and large fluctuations.
In their oral traditions, the Boorong clan of the Wergaia people of Lake Tyrrell, north-western Victoria, Australia, told of a reddish star they knew as Collowgullouric War "Old Woman Crow", the wife of War "Crow" (Canopus). In 2010, astronomers Duane Hamacher and David Frew from Macquarie University in Sydney showed that this was η Carinae during its Great Eruption in the 1840s. From 1857, the brightness decreased rapidly until it faded below naked-eye visibility by 1886. This has been calculated to be due to the condensation of dust in the ejected material surrounding the star, rather than to an intrinsic change in luminosity.
Lesser Eruption
A new brightening started in 1887, peaked at about magnitude 6.2 in 1892, then at the end of March 1895 faded rapidly to about magnitude 7.5. Although there are only visual records of the 1890 eruption, it has been calculated that η Carinae was suffering 4.3 magnitudes of visual extinction due to the gas and dust ejected in the Great Eruption. The unobscured brightness would have been magnitude 1.5–1.9, significantly brighter than the historical magnitude. Despite this, the event was similar in character to the first eruption, nearly matching its intrinsic brightness, but with far less material expelled.
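The relation between the observed and unobscured magnitudes is simple magnitude arithmetic; a minimal sketch using the figures quoted above:

```python
# Removing extinction from an observed visual magnitude: subtract the
# extinction in magnitudes; each magnitude is a factor of 10**0.4 in flux.
observed_peak = 6.2   # visual magnitude at the 1892 peak (from the text)
extinction = 4.3      # estimated circumstellar extinction, magnitudes

unobscured = observed_peak - extinction    # intrinsic (dust-free) magnitude
dimming_factor = 10 ** (0.4 * extinction)  # flux reduction caused by the dust

print(round(unobscured, 1))     # 1.9 -- consistent with the quoted 1.5-1.9
print(round(dimming_factor))    # 52 -- the dust dimmed the star ~50-fold
```

So the 1892 event, seen through the Great Eruption's dust shell, was intrinsically some fifty times brighter than it appeared.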
Twentieth century
Between 1900 and at least 1940, η Carinae appeared to have settled at a constant brightness of around magnitude 7.6, but in 1953 it was noted to have brightened again to magnitude 6.5. The brightening continued steadily, but with fairly regular variations of a few tenths of a magnitude.
In 1996, the variations were first identified as having a 5.52 year period, later measured more accurately at 5.54 years, leading to the idea of a binary system. The binary theory was confirmed by observations of radio, optical and near-infrared radial velocity and line profile changes, referred to collectively as a spectroscopic event, at the predicted time of periastron passage in late 1997 and early 1998. At the same time there was a complete collapse of the X-ray emission presumed to originate in a colliding wind zone. The confirmation of a luminous binary companion greatly modified the understanding of the physical properties of the η Carinae system and its variability.
A sudden doubling of brightness was observed in 1998–99 bringing it back to naked-eye visibility. During the 2014 spectroscopic event, the apparent visual magnitude became brighter than magnitude 4.5. The brightness does not always vary consistently at different wavelengths, and does not always exactly follow the 5.5 year cycle. Radio, infrared and space-based observations have expanded coverage of η Carinae across all wavelengths and revealed ongoing changes in the spectral energy distribution.
In July 2018, η Carinae was reported to have the strongest colliding wind shock in the solar neighbourhood. Observations with the NuSTAR satellite gave much higher resolution data than the earlier Fermi Gamma-ray Space Telescope. Using direct focussing observations of the non-thermal source in the extremely hard X-ray band that is spatially coincident with the star, they showed that the source of non-thermal X-rays varies with the orbital phase of the binary star system and that the photon index of the emission is similar to that derived through analysis of the γ-ray (gamma) spectrum.
Visibility
As a fourth-magnitude star, η Carinae is comfortably visible to the naked eye in all but the most light-polluted skies in inner-city areas according to the Bortle scale. Its brightness has varied over a wide range, from the second-brightest star in the sky for a few days in the 19th century, to well below naked-eye visibility. Its location at around 60°S in the far southern celestial hemisphere means it cannot be seen by observers in Europe and much of North America.
Located between Canopus and the Southern Cross, η Carinae is easily pinpointed as the brightest star within the large naked-eye Carina Nebula. In a telescope the "star" is framed within the dark "V" dust lane of the nebula and appears distinctly orange and clearly non-stellar. High magnification will show the two orange lobes of a surrounding reflection nebula known as the Homunculus Nebula on either side of a bright central core. Variable star observers can compare its brightness with several 4th- and 5th-magnitude stars closely surrounding the nebula.
Discovered in 1961, the weak Eta Carinids meteor shower has a radiant very close to η Carinae. Occurring from 14 to 28 January, the shower peaks around 21 January. Meteor showers are not associated with bodies outside the Solar System, making the proximity to η Carinae merely a coincidence.
Visual spectrum
The strength and profile of the lines in the η Carinae spectrum are highly variable, but there are a number of consistent distinctive features. The spectrum is dominated by emission lines, usually broad although the higher excitation lines are overlaid by a narrow central component from dense ionised nebulosity, especially the Weigelt Blobs. Most lines show a P Cygni profile but with the absorption wing much weaker than the emission. The broad P Cygni lines are typical of strong stellar winds, with very weak absorption in this case because the central star is so heavily obscured. Electron scattering wings are present but relatively weak, indicating a clumpy wind. Hydrogen lines are present and strong, showing that η Carinae still retains much of its hydrogen envelope.
HeI lines are much weaker than the hydrogen lines, and the absence of HeII lines provides an upper limit to the possible temperature of the primary star. NII lines can be identified but are not strong, while carbon lines cannot be detected and oxygen lines are at best very weak, indicating core hydrogen burning via the CNO cycle with some mixing to the surface. Perhaps the most striking feature is the rich FeII emission in both permitted and forbidden lines, with the forbidden lines arising from excitation of low density nebulosity around the star.
The earliest analyses of the star's spectrum are descriptions of visual observations from 1869, of prominent emission lines "C, D, b, F and the principal green nitrogen line". Absorption lines are explicitly described as not being visible. The letters refer to Fraunhofer's spectral notation and correspond to Hα, HeI, FeII, and Hβ.
It is assumed that the final line is from FeII very close to the green nebulium line now known to be from OIII.
Photographic spectra from 1893 were described as similar to an F5 star, but with a few weak emission lines. Analysis to modern spectral standards suggests an early F spectral type. By 1895 the spectrum again consisted mostly of strong emission lines, with the absorption lines present but largely obscured by emission. This spectral transition from F supergiant to strong emission is characteristic of novae, where ejected material initially radiates like a pseudo-photosphere and then the emission spectrum develops as it expands and thins.
The emission line spectrum associated with dense stellar winds has persisted ever since the late 19th century. Individual lines show widely varying widths, profiles and Doppler shifts, often multiple velocity components within the same line. The spectral lines also show variation over time, most strongly with a 5.5-year period but also less dramatic changes over shorter and longer periods, as well as ongoing secular development of the entire spectrum. The spectrum of light reflected from the Weigelt Blobs, and assumed to originate mainly with the primary, is similar to the extreme P Cygni-type star which has a spectral type of B0Ieq.
Direct spectral observations did not begin until after the Great Eruption, but light echoes from the eruption reflected from other parts of the Carina Nebula were detected using the U.S. National Optical Astronomy Observatory's Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. Analysis of the reflected spectra indicated the light was emitted when η Carinae had the appearance of a G2-to-G5 supergiant, some 2,000 K cooler than expected from other supernova impostor events. Further light echo observations show that following the peak brightness of the Great Eruption the spectrum developed prominent P Cygni profiles and CN molecular bands, although this is likely from the material being ejected which may have been colliding with circumstellar material in a similar way to a type IIn supernova.
In the second half of the 20th century, much higher-resolution visual spectra became available. The spectrum continued to show complex and baffling features, with much of the energy from the central star being recycled into the infrared by surrounding dust, some reflection of light from the star from dense localised objects in the circumstellar material, but with obvious high-ionisation features indicative of very high temperatures. The line profiles are complex and variable, indicating a number of absorption and emission features at various velocities relative to the central star.
The 5.5-year orbital cycle produces strong spectral changes at periastron that are known as spectroscopic events. Certain wavelengths of radiation suffer eclipses, either due to actual occultation by one of the stars or due to passage within opaque portions of the complex stellar winds. Although driven by the regular orbital motion, these events vary significantly from cycle to cycle. These changes have become stronger since 2003 and it is generally believed that long-term secular changes in the stellar winds or previously ejected material may be the culmination of a return to the state of the star before its Great Eruption.
Ultraviolet
The ultraviolet spectrum of the η Carinae system shows many emission lines of ionised metals such as FeII and CrII, as well as Lymanα (Lyα) and a continuum from a hot central source. The ionisation levels and continuum require the existence of a source with a temperature of at least 37,000 K.
Certain FeII UV lines are unusually strong. These originate in the Weigelt Blobs and are caused by a low-gain lasing effect. Ionised hydrogen between a blob and the central star generates intense Lyα emission which penetrates the blob. The blob contains atomic hydrogen with a small admixture of other elements, including iron photo-ionised by radiation from the central stars. An accidental resonance (where emission coincidentally has a suitable energy to pump the excited state) allows the Lyα emission to pump the Fe+ ions to certain pseudo-metastable states, creating a population inversion that allows the stimulated emission to take place. This effect is similar to the maser emission from dense pockets surrounding many cool supergiant stars, but the latter effect is much weaker at optical and UV wavelengths and η Carinae is the only clear instance detected of an ultraviolet astrophysical laser. A similar effect from pumping of metastable OI states by Lyβ emission has also been confirmed as an astrophysical UV laser.
Infrared
Infrared observations of η Carinae have become increasingly important. The vast majority of the electromagnetic radiation from the central stars is absorbed by surrounding dust, then emitted as mid- and far infrared appropriate to the temperature of the dust. This allows almost the entire energy output of the system to be observed at wavelengths that are not strongly affected by interstellar extinction, leading to estimates of the luminosity that are more accurate than for other extremely luminous stars. η Carinae is the brightest source in the night sky at mid-infrared wavelengths.
Far infrared observations show a large mass of dust at 100–150 K, suggesting a total mass for the Homunculus of 20 solar masses or more. This is much larger than previous estimates, and all of it is thought to have been ejected in a few years during the Great Eruption.
Near-infrared observations can penetrate the dust at high resolution to observe features that are completely obscured at visual wavelengths, although not the central stars themselves. The central region of the Homunculus contains a smaller Little Homunculus from the 1890 eruption, a butterfly of separate clumps and filaments from the two eruptions, and an elongated stellar wind region.
High energy radiation
Several X-ray and gamma-ray sources have been detected around η Carinae, for example 4U 1037–60 in the 4th Uhuru catalogue and 1044–59 in the HEAO-2 catalogue. The earliest detection of X-rays in the η Carinae region was from the Terrier-Sandhawk rocket, followed by Ariel 5, OSO 8, and Uhuru sightings.
More detailed observations were made with the Einstein Observatory, the ROSAT X-ray telescope, the Advanced Satellite for Cosmology and Astrophysics (ASCA), and the Chandra X-ray Observatory. There are multiple sources at various wavelengths right across the high-energy electromagnetic spectrum: hard X-rays and gamma rays within 1 light-month of η Carinae; hard X-rays from a central region about 3 light-months wide; a distinct partial ring "horse-shoe" structure in low-energy X-rays 0.67 parsec (2.2 light-years) across corresponding to the main shock front from the Great Eruption; diffuse X-ray emission across the whole area of the Homunculus; and numerous condensations and arcs outside the main ring.
All the high-energy emission associated with η Carinae varies during the orbital cycle. A spectroscopic minimum, or X-ray eclipse, occurred in July and August 2003, and similar events in 2009 and 2014 have been intensively observed. The highest-energy gamma rays above 100 MeV detected by AGILE show strong variability, while lower-energy gamma rays observed by Fermi show little variability.
Radio emission
Radio emission has been observed from η Carinae across the microwave band. It has been detected in the 21 cm HI line, but has been particularly closely studied in the millimetre and centimetre bands. Masing hydrogen recombination lines (from the combination of an electron and a proton to form a hydrogen atom) have been detected in this range. The emission is concentrated in a small non-point source less than 4 arcseconds across and appears to be mainly free-free emission (thermal bremsstrahlung) from ionised gas, consistent with a compact HII region at around 10,000 K. High-resolution imaging shows the radio frequencies originating from a disk a few arcseconds in diameter, 10,000 astronomical units (AU) wide at the distance of η Carinae.
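The correspondence between an angular size on the sky and a physical size follows from the small-angle approximation, under which 1 arcsecond at 1 parsec subtends 1 astronomical unit. A minimal sketch, using illustrative assumed values (an angular diameter of about 4.3 arcseconds and the commonly cited distance of roughly 2,300 parsecs), not figures restored from the article:

```python
# Small-angle approximation: physical size (AU) = angular size (arcsec) * distance (pc),
# because 1 arcsecond at 1 parsec subtends exactly 1 astronomical unit.
def angular_size_to_au(theta_arcsec: float, distance_pc: float) -> float:
    return theta_arcsec * distance_pc

# Assumed illustrative values: a ~4.3" disk at ~2,300 pc.
size_au = angular_size_to_au(4.3, 2300)
print(round(size_au))  # roughly 10,000 AU, consistent with the size quoted above
```

This is why a "few arcseconds" at the distance of η Carinae corresponds to a structure some ten thousand AU across.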
The radio emission from η Carinae shows continuous variation in strength and distribution over a 5.5-year cycle. The HII and recombination lines vary very strongly, with continuum emission (electromagnetic radiation across a broad band of wavelengths) less affected. This shows a dramatic reduction in the ionisation level of the hydrogen for a short period in each cycle, coinciding with the spectroscopic events at other wavelengths.
Surroundings
η Carinae is found within the Carina Nebula, a giant star-forming region in the Carina–Sagittarius Arm of the Milky Way. The nebula is a prominent naked-eye object in the southern skies showing a complex mix of emission, reflection and dark nebulosity. η Carinae is known to be at the same distance as the Carina Nebula and its spectrum can be seen reflected off various star clouds in the nebula. The appearance of the Carina Nebula, and particularly of the Keyhole region, has changed significantly since it was described by John Herschel over years ago. This is thought to be due to the reduction in ionising radiation from η Carinae since the Great Eruption. Prior to the Great Eruption the η Carinae system contributed up to 20% of the total ionising flux for the whole Carina Nebula, but that is now mostly blocked by the surrounding gas and dust.
Trumpler 16
η Carinae lies within the scattered stars of the Trumpler 16 open cluster. All the other members are well below naked-eye visibility, although WR 25 is another extremely massive luminous star. Trumpler 16 and its neighbour Trumpler 14 are the two dominant star clusters of the Carina OB1 association, an extended grouping of young luminous stars with a common motion through space.
Homunculus
η Carinae is enclosed by, and lights up, the Homunculus Nebula, a small emission and reflection nebula composed mainly of gas ejected during the Great Eruption event in the mid-19th century, as well as dust that condensed from the debris. The nebula consists of two polar lobes aligned with the rotation axis of the star, plus an equatorial "skirt", the whole being around long. Closer studies show many fine details: a Little Homunculus within the main nebula, probably formed by the 1890 eruption; a jet; fine streams and knots of material, especially noticeable in the skirt region; and three Weigelt Blobs—dense gas condensations very close to the star itself.
The lobes of the Homunculus are considered to be formed almost entirely due to the initial eruption, rather than shaped by or including previously ejected or interstellar material, although the scarcity of material near the equatorial plane allows some later stellar wind and ejected material to mix. Therefore, the mass of the lobes gives an accurate measure of the scale of the Great Eruption, with estimates ranging from up to as high as . The results show that the material from the Great Eruption is strongly concentrated towards the poles; 75% of the mass and 90% of the kinetic energy were released above latitude 45°.
A unique feature of the Homunculus is the ability to measure the spectrum of the central object at different latitudes by the reflected spectrum from different portions of the lobes. These clearly show a polar wind where the stellar wind is faster and stronger at high latitudes thought to be due to rapid rotation causing gravity brightening towards the poles. In contrast the spectrum shows a higher excitation temperature closer to the equatorial plane. By implication the outer envelope of η Carinae A is not strongly convective as that would prevent the gravity darkening. The current axis of rotation of the star does not appear to exactly match the alignment of the Homunculus. This may be due to interaction with η Carinae B which also modifies the observed stellar winds.
Distance
The distance to η Carinae has been determined by several different methods, resulting in a widely accepted value of , with a margin of error around . The distance to η Carinae itself cannot be measured using parallax due to its surrounding nebulosity, but other stars in the Trumpler 16 cluster are expected to be at a similar distance and are accessible to parallax. Gaia Data Release 2 has provided the parallax for many stars considered to be members of Trumpler 16, finding that the four hottest O-class stars in the region have very similar parallaxes with a mean value of (mas), which translates to a distance of . This implies that η Carinae may be more distant than previously thought, and also more luminous, although it is still possible that it is not at the same distance as the cluster or that the parallax measurements have large systematic errors.
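The conversion from parallax to distance is the reciprocal relation d(pc) = 1/p(arcsec). A sketch using an illustrative assumed parallax of 0.38 milliarcseconds (the article's specific Gaia figure is elided and is not reproduced here):

```python
def parallax_mas_to_pc(parallax_mas: float) -> float:
    # Distance in parsecs is the reciprocal of the parallax in arcseconds;
    # with the parallax given in milliarcseconds, d = 1000 / p.
    return 1000.0 / parallax_mas

d_pc = parallax_mas_to_pc(0.38)  # assumed illustrative parallax, not the published value
print(round(d_pc))  # ~2,600 pc under this assumption
```

A parallax this small illustrates why cluster membership, rather than a direct parallax of the nebulosity-shrouded star itself, is needed.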
The distances to star clusters can be estimated by using a Hertzsprung–Russell diagram or colour–colour diagram to calibrate the absolute magnitudes of the stars, for example fitting the main sequence or identifying features such as a horizontal branch, and hence their distance from Earth. It is also necessary to know the amount of interstellar extinction to the cluster and this can be difficult in regions such as the Carina Nebula. A distance of has been determined from the calibration of O-type star luminosities in Trumpler 16. After determining an abnormal reddening correction to the extinction, the distance to both Trumpler 14 and Trumpler 16 has been measured at ().
The known expansion rate of the Homunculus Nebula provides an unusual geometric method for measuring its distance. Assuming that the two lobes of the nebula are symmetrical, the projection of the nebula onto the sky depends on its distance. Values of 2,300, 2,250, and have been derived for the Homunculus, and η Carinae is clearly at the same distance.
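An expansion parallax compares the Doppler-measured expansion velocity of the lobes with their angular expansion rate on the sky; assuming the nebula expands symmetrically, the two together fix the distance. A sketch with assumed illustrative values (roughly 650 km/s and 0.06 arcsec/yr), which are not figures taken from the article:

```python
def expansion_parallax_pc(v_kms: float, mu_arcsec_per_yr: float) -> float:
    # Transverse velocity relation: v (km/s) = 4.74 * mu (arcsec/yr) * d (pc).
    # For a symmetric nebula the line-of-sight (Doppler) expansion speed equals
    # the transverse speed, so the distance is d = v / (4.74 * mu).
    return v_kms / (4.74 * mu_arcsec_per_yr)

d_pc = expansion_parallax_pc(650.0, 0.06)  # assumed illustrative values
print(round(d_pc))  # ~2,300 pc under these assumptions
```

The method is geometric and independent of stellar luminosity calibrations, which is why it provides a useful cross-check on the cluster-based distances.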
Properties
The η Carinae star system is currently one of the most massive stars that can be studied in great detail. Until recently η Carinae was thought to be the most massive single star, but the system's binary nature was proposed by the Brazilian astronomer Augusto Damineli in 1996 and confirmed in 2005. Both component stars are largely obscured by circumstellar material ejected from η Carinae A, and basic properties such as their temperatures and luminosities can only be inferred. Rapid changes to the stellar wind in the 21st century suggest that the star itself may be revealed when dust from the Great Eruption finally clears.
Orbit
The binary nature of η Carinae is clearly established, although the components have not been directly observed and cannot even be clearly resolved spectroscopically due to scattering and re-excitation in the surrounding nebulosity. Periodic photometric and spectroscopic variations prompted the search for a companion, and modelling of the colliding winds and partial "eclipses" of some spectroscopic features have constrained the possible orbits.
The period of the orbit is accurately known at 5.539 years, although this has changed over time due to mass loss and accretion. Between the Great Eruption and the smaller 1890 eruption, the orbital period was apparently 5.52 years, while before the Great Eruption it may have been lower still, possibly between 4.8 and 5.4 years. The orbital separation is only known approximately, with a semi-major axis of . The orbit is highly eccentric, . This means that the separation of the stars varies from around , similar to the distance of Mars from the Sun, to 30 AU, similar to the distance of Neptune.
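The extremes of separation follow directly from the semi-major axis a and eccentricity e, as a(1 − e) at periastron and a(1 + e) at apastron. A sketch with assumed illustrative values of a ≈ 15.5 AU and e ≈ 0.9 (the article's own figures are elided, so these are assumptions chosen to match the Mars-like and Neptune-like separations described):

```python
def orbit_extremes(a_au: float, e: float) -> tuple[float, float]:
    # Periastron and apastron distances of an elliptical orbit.
    return a_au * (1 - e), a_au * (1 + e)

peri_au, apo_au = orbit_extremes(15.5, 0.9)  # assumed illustrative values
print(peri_au, apo_au)  # ~1.6 AU (Mars-like) and ~29 AU (Neptune-like)
```

The large eccentricity is what drives the dramatic periastron events: the stars approach twenty times closer than at apastron.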
Perhaps the most valuable use of an accurate orbit for a binary star system is to directly calculate the masses of the stars. This requires the dimensions and inclination of the orbit to be accurately known. The dimensions of η Carinae's orbit are only known approximately as the stars cannot be directly and separately observed. The inclination has been modelled at 130–145 degrees, but the orbit is still not known accurately enough to provide the masses of the two components.
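Kepler's third law gives the relation in question: in solar units, with a in AU and P in years, M1 + M2 = a³/P². A sketch using the article's 5.539-year period together with an assumed illustrative semi-major axis of about 15.5 AU (the article's figure is elided):

```python
def total_mass_msun(a_au: float, period_yr: float) -> float:
    # Kepler's third law in solar units: M1 + M2 = a^3 / P^2
    # (a in AU, P in years, masses in solar masses).
    return a_au**3 / period_yr**2

m_total = total_mass_msun(15.5, 5.539)  # semi-major axis is an assumed value
print(round(m_total))  # on the order of 120 solar masses under this assumption
```

Because the mass sum scales with the cube of the semi-major axis, the poorly known orbital dimensions are the dominant uncertainty, as the text notes.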
Classification
η Carinae A is classified as a luminous blue variable (LBV) due to the distinctive spectral and brightness variations. This type of variable star is characterised by irregular changes from a high temperature quiescent state to a low temperature outburst state at roughly constant luminosity. LBVs in the quiescent state lie on a narrow instability strip, with more luminous stars being hotter. In outburst all LBVs have about the same temperature, which is near 8,000 K. LBVs in a normal outburst are visually brighter than when quiescent although the bolometric luminosity is unchanged.
An event similar to η Carinae A's Great Eruption has been observed in only one other star in the Milky Way — — and in a handful of other possible LBVs in other galaxies. None of them seem to be quite as violent as η Carinae's. It is unclear if this is something that only a very few of the most massive LBVs undergo, something that is caused by a close companion star, or a very brief but common phase for massive stars. Some similar events in external galaxies have been mistaken for supernovae and have been called supernova impostors, although this grouping may also include other types of non-terminal transients that approach the brightness of a supernova.
η Carinae A is not a typical LBV. It is more luminous than any other LBV in the Milky Way although possibly comparable to other supernova impostors detected in external galaxies. It does not currently lie on the S Doradus instability strip, although it is unclear what the temperature or spectral type of the underlying star actually is, and during its Great Eruption it was much cooler than a typical LBV outburst, with a middle-G spectral type. The 1890 eruption may have been fairly typical of LBV eruptions, with an early F spectral type, and it has been estimated that the star may currently have an opaque stellar wind, forming a pseudo-photosphere with a temperature of 9,000–.
η Carinae B is a massive luminous hot star, about which little else is known. From certain high excitation spectral lines that ought not to be produced by the primary, η Carinae B is thought to be a young O-type star. Most authors suggest it is a somewhat evolved star such as a supergiant or giant, although a Wolf–Rayet star cannot be ruled out.
Mass
The masses of stars are difficult to measure except by determination of a binary orbit. η Carinae is a binary system, but certain key information about the orbit is not known accurately. The mass can be strongly constrained to be greater than , due to the high luminosity. Standard models of the system assume masses of and for the primary and secondary, respectively. Higher masses have been suggested, to model the energy output and mass transfer of the Great Eruption, with a combined system mass of over before the Great Eruption. η Carinae A has clearly lost a great deal of mass since it formed, and it is thought that it was initially , although it may have formed through binary merger. Masses of for the primary and for the secondary best fit one mass-transfer model of the Great Eruption event.
Mass loss
Mass loss is one of the most intensively studied aspects of massive star research. Put simply, calculated mass loss rates in the best models of stellar evolution do not reproduce the observed properties of evolved massive stars such as Wolf–Rayets, the number and types of core collapse supernovae, or their progenitors. To match those observations, the models require much higher mass loss rates. η Carinae A has one of the highest known mass loss rates, currently around /year, and is an obvious candidate for study.
η Carinae A is losing a lot of mass due to its extreme luminosity and relatively low surface gravity. Its stellar wind is entirely opaque and appears as a pseudo-photosphere; this optically dense surface hides any true physical surface of the star that may be present. (At extreme rates of radiative mass loss, the density gradient of lofted material may become continuous enough that a meaningfully discrete physical surface may not exist.) During the Great Eruption the mass loss rate was a thousand times higher, around /year sustained for ten years or more. The total mass loss during the eruption was at least with much of it now forming the Homunculus Nebula. The smaller 1890 eruption produced the Little Homunculus Nebula, much smaller and only about . The bulk of the mass loss occurs in a wind with a terminal velocity of about 420 km/s, but some material is seen at higher velocities, up to 3,200 km/s, possibly material blown from the accretion disk by the secondary star.
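The eruption mass budget is a simple product of rate and duration. A sketch with assumed illustrative numbers (a quiescent rate near 10⁻³ solar masses per year, the thousandfold enhancement stated above, and a ten-year duration); the specific quiescent rate is an assumption, not a value restored from the article's elided figures:

```python
quiescent_rate = 1e-3                  # assumed illustrative rate, solar masses per year
eruption_rate = 1000 * quiescent_rate  # "a thousand times higher" during the Great Eruption
duration_yr = 10                       # "sustained for ten years or more"

ejected_msun = eruption_rate * duration_yr
print(ejected_msun)  # 10 solar masses ejected under these assumptions
```

Even at these round numbers, the arithmetic shows how a decade-long eruption can unbind enough material to build the Homunculus Nebula.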
η Carinae B is presumably also losing mass via a thin, fast stellar wind, but this cannot be detected directly. Models of the radiation observed from interactions between the winds of the two stars show a mass loss rate of the order of /year at speeds of 3,000 km/s, typical of a hot O-class star. For a portion of the highly eccentric orbit, it may actually gain material from the primary via an accretion disk. During the Great Eruption of the primary, the secondary could have accreted , producing strong jets which formed the bipolar shape of the Homunculus Nebula.
Luminosity
The stars of the η Carinae system are completely obscured by dust and opaque stellar winds, with much of the ultraviolet and visual radiation shifted to the infrared. The total electromagnetic radiation across all wavelengths for both stars combined is several million solar luminosities. The best estimate for the luminosity of the primary is , making it one of the most luminous stars in the Milky Way. The luminosity of η Carinae B is particularly uncertain, probably and almost certainly no more than .
The most notable feature of η Carinae is its giant eruption or supernova impostor event, which originated in the primary star and was observed around 1843. In a few years, it produced almost as much visible light as a faint supernova explosion, but the star survived. It is estimated that at peak brightness the luminosity was as high as . Other supernova impostors have been seen in other galaxies, for example the possible false supernova SN 1961V in NGC 1058 and SN 2006jc's pre-explosion outburst in UGC 4904.
Following the Great Eruption, η Carinae became self-obscured by the ejected material, resulting in dramatic reddening. This has been estimated at four magnitudes at visual wavelengths, meaning the post-eruption luminosity was comparable to the luminosity when first identified. η Carinae is still much brighter at infrared wavelengths, despite the presumed hot stars behind the nebulosity. The recent visual brightening is considered to be largely caused by a decrease in the extinction, due to thinning dust or a reduction in mass loss, rather than an underlying change in the luminosity.
Temperature
Until late in the 20th century, the temperature of η Carinae was assumed to be over 30,000 K because of the presence of high-excitation spectral lines, but other aspects of the spectrum suggested much lower temperatures and complex models were created to account for this. It is now known that the Eta Carinae system consists of at least two stars, both with strong stellar winds and a shocked colliding wind (wind-wind collision or WWC) zone, embedded within a dusty nebula that reprocesses 90% of the electromagnetic radiation into the mid and far infrared. All of these features have different temperatures.
The powerful stellar winds from the two stars collide in a roughly conical WWC zone and produce temperatures as high as at the apex between the two stars. This zone is the source of the hard X-rays and gamma rays close to the stars. Near periastron, as the secondary ploughs through ever denser regions of the primary wind, the colliding wind zone becomes distorted into a spiral trailing behind η Carinae B.
The wind-wind collision cone separates the winds of the two stars. For 55–75° behind the secondary, there is a thin hot wind typical of O or Wolf–Rayet stars. This allows some radiation from η Carinae B to be detected and its temperature can be estimated with some accuracy due to spectral lines that are unlikely to be produced by any other source. Although the secondary star has never been directly observed, there is widespread agreement on models where it has a temperature between 37,000 K and 41,000 K.
In all other directions on the other side of the wind-wind collision zone, there is the wind from η Carinae A, cooler and around 100 times denser than η Carinae B's wind. It is also optically dense, completely obscuring anything resembling a true photosphere and rendering any definition of its temperature moot. The observable radiation originates from a pseudo-photosphere where the optical density of the wind drops to near zero, typically measured at a particular Rosseland opacity value such as . This pseudo-photosphere is observed to be elongated and hotter along the presumed axis of rotation.
η Carinae A is likely to have appeared as an early B hypergiant with a temperature of between 20,000 K and 25,000 K at the time of its discovery by Halley. An effective temperature determined for the surface of a spherical optically thick wind at would be 9,400–15,000 K, while the temperature of a theoretical hydrostatic "core" at optical depth 150 would be 35,200 K. The effective temperature of the visible outer edge of the opaque primary wind is generally treated as being 15,000–25,000 K on the basis of visual and ultraviolet spectral features assumed to be directly from the wind or reflected via the Weigelt Blobs. During the Great Eruption, η Carinae A was much cooler at around 5,000 K.
The Homunculus contains dust at temperatures varying from 150 K to 400 K. This is the source of almost all the infrared radiation that makes η Carinae such a bright object at those wavelengths.
Further out, expanding gases from the Great Eruption collide with interstellar material and are heated to around , producing less energetic X-rays seen in a horseshoe or ring shape.
Size
The size of the two main stars in the η Carinae system is difficult to determine precisely, for neither star can be seen directly. η Carinae B is likely to have a well-defined photosphere, and its radius can be estimated from the assumed type of star. An O supergiant of with a temperature of 37,200 K has an effective radius of .
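The effective radius of a star with a well-defined photosphere follows from the Stefan–Boltzmann law, L = 4πR²σT⁴. A sketch in solar units using the 37,200 K quoted above and an assumed illustrative luminosity of 4 × 10⁵ solar luminosities (the article's luminosity and radius figures are elided):

```python
T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def radius_rsun(lum_lsun: float, teff_k: float) -> float:
    # Stefan-Boltzmann law in solar units: R/Rsun = sqrt(L/Lsun) * (Tsun/Teff)^2.
    return lum_lsun**0.5 * (T_SUN / teff_k) ** 2

r = radius_rsun(4e5, 37200.0)  # luminosity is an assumed illustrative value
print(round(r, 1))  # ~15 solar radii under these assumptions
```

The same relation cannot be applied cleanly to η Carinae A, whose opaque wind leaves no unambiguous photospheric temperature or radius, as the following paragraph describes.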
The size of η Carinae A is not even well defined. It has an optically dense stellar wind, so the typical definition of a star's surface being approximately where it becomes opaque gives a very different result to where a more traditional definition of a surface might be. One study calculated a radius of for a hot "core" of 35,000 K at optical depth 150, near the sonic point or very approximately what might be called a physical surface. At optical depth 0.67 the radius would be , indicating an extended optically thick stellar wind. At the peak of the Great Eruption the radius, so far as such a thing is meaningful during such a violent expulsion of material, would have been around , comparable to the largest-known red supergiants, including VY Canis Majoris.
The stellar sizes should be compared with their orbital separation, which is only around at periastron. The accretion radius of the secondary is around , suggesting strong accretion near periastron leading to a collapse of the secondary wind. It has been proposed that the initial brightening from 4th magnitude to 1st at relatively constant bolometric luminosity was a normal LBV outburst, albeit from an extreme example of the class. Then the companion star passing through the expanded photosphere of the primary at periastron triggered the further brightening, increase in luminosity, and extreme mass loss of the Great Eruption.
Rotation
Rotation rates of massive stars have a critical influence on their evolution and eventual death. The rotation rate of the η Carinae stars cannot be measured directly because their surfaces cannot be seen. Single massive stars spin down quickly due to braking from their strong winds, but there are hints that both η Carinae A and B are fast rotators, up to 90% of critical velocity. One or both could have been spun up by binary interaction, for example accretion onto the secondary and orbital dragging on the primary.
Eruptions
Two eruptions have been observed from η Carinae, the Great Eruption of the mid-19th century and the Lesser Eruption of 1890. In addition, studies of outlying nebulosity suggest at least one earlier eruption around . A further eruption may have occurred around , although it is possible that the material indicating this eruption is actually from the Great Eruption slowed down by colliding with older nebulosity. The mechanism producing these eruptions is unknown. It is not even clear whether the eruptions involve explosive events or so-called super-Eddington winds, an extreme form of stellar wind involving very high mass loss induced by an increase in the luminosity of the star. The energy source for the explosions or luminosity increase is also unknown.
Theories about the various eruptions must account for: repeating events, at least three eruptions of various sizes; ejecting or more without destroying the star; the highly unusual shape and expansion rates of the ejected material; and the light curve during the eruptions involving brightness increases of several magnitudes over a period of decades. The best-studied event is the Great Eruption. As well as photometry during the 19th century, light echoes observed in the 21st century give further information about the progression of the eruption, showing a brightening with multiple peaks for approximately 20 years, followed by a plateau period in the 1850s. The light echoes show that the outflow of material during the plateau phase was much higher than before the peak of the eruption. Possible explanations for the eruptions include: a binary merger in what was then a triple system; mass transfer from η Carinae B during periastron passages; or a pulsational pair-instability explosion.
Evolution
η Carinae is a unique object, with no very close analogues currently known in any galaxy. Therefore, its future evolution is highly uncertain, but almost certainly involves further mass loss and an eventual supernova.
η Carinae A would have begun life as an extremely hot star on the main sequence, already a highly luminous object. The exact properties would depend on the initial mass, which is expected to have been extremely high. A typical spectrum when first formed would be O2If and the star would be mostly or fully convective due to CNO cycle fusion at the very high core temperatures. Sufficiently massive or differentially rotating stars undergo such strong mixing that they remain chemically homogeneous during core hydrogen burning.
As core hydrogen burning progresses, a very massive star would slowly expand and become more luminous, becoming a blue hypergiant and eventually an LBV while still fusing hydrogen in the core. When hydrogen at the core is depleted after 2–2.5 million years, hydrogen shell burning continues with further increases in size and luminosity, although hydrogen shell burning in chemically homogeneous stars may be very brief or absent since the entire star would become depleted of hydrogen. In the late stages of hydrogen burning, mass loss is extremely high due to the high luminosity and enhanced surface abundances of helium and nitrogen. As hydrogen burning ends and core helium burning begins, massive stars transition very rapidly to the Wolf–Rayet stage with little or no hydrogen, increased temperatures and decreased luminosity. They are likely to have lost over half their initial mass at this point.
It is unclear whether triple-alpha helium fusion has started at the core of η Carinae A. The elemental abundances at the surface cannot be accurately measured, but ejecta within the Homunculus are around 60% hydrogen and 40% helium, with nitrogen enhanced to ten times solar levels. This is indicative of ongoing CNO cycle hydrogen fusion.
Models of the evolution and death of single very massive stars predict an increase in temperature during helium core burning, with the outer layers of the star being lost. It becomes a Wolf–Rayet star on the nitrogen sequence, moving from WNL to WNE as more of the outer layers are lost, possibly reaching the WC or WO spectral class as carbon and oxygen from the triple alpha process reach the surface. This process would continue with heavier elements being fused until an iron core develops, at which point the core collapses and the star is destroyed. Subtle differences in initial conditions, in the models themselves, and most especially in the rates of mass loss, produce different predictions for the final state of the most massive stars. They may survive to become a helium-stripped star or they may collapse at an earlier stage while they retain more of their outer layers. The lack of sufficiently luminous WN stars and the discovery of apparent LBV supernova progenitors has also prompted the suggestion that certain types of LBVs explode as a supernova without evolving further.
η Carinae is a close binary and this complicates the evolution of both stars. Compact massive companions can strip mass from larger primary stars much more quickly than would occur in a single star, so the properties at core collapse can be very different. In some scenarios, the secondary can accrue significant mass, accelerating its evolution, and in turn be stripped by the now compact Wolf–Rayet primary. In the case of η Carinae, the secondary is clearly causing additional instability in the primary, making it difficult to predict future developments.
Potential supernova
The overwhelming probability is that the next supernova observed in the Milky Way will originate from an unknown white dwarf or anonymous red supergiant, very likely not even visible to the naked eye. Nevertheless, the prospect of a supernova originating from an object as extreme, nearby, and well studied as η Carinae arouses great interest.
As a single star, a star originally around 150 times as massive as the Sun would typically reach core collapse as a Wolf–Rayet star within 3 million years. At low metallicity, many massive stars will collapse directly to a black hole with no visible explosion or a sub-luminous supernova, and a small fraction will produce a pair-instability supernova, but at solar metallicity and above, there is expected to be sufficient mass loss before collapse to allow a visible supernova of type Ib or Ic. If there is still a large amount of expelled material close to the star, the shock formed by the supernova explosion impacting the circumstellar material can efficiently convert kinetic energy to radiation, resulting in a superluminous supernova (SLSN) or hypernova, several times more luminous than a typical core collapse supernova and much longer-lasting. Highly massive progenitors may also eject sufficient nickel to cause a SLSN simply from the radioactive decay. The resulting remnant would be a black hole, for it is highly unlikely such a massive star could ever lose sufficient mass for its core not to exceed the limit for a neutron star.
The existence of a massive companion brings many other possibilities. If η Carinae A was rapidly stripped of its outer layers, it might be a less massive WC- or WO-type star when core collapse was reached. This would result in a type Ib or type Ic supernova due to the lack of hydrogen and possibly helium. This supernova type is thought to be the originator of certain classes of gamma-ray bursts, but models predict that they normally occur only in less massive stars.
Several unusual supernovae and impostors have been compared to η Carinae as examples of its possible fate. One of the most compelling is SN 2009ip, a blue supergiant which underwent a supernova impostor event in 2009 with similarities to η Carinae's Great Eruption, then an even brighter outburst in 2012 which is likely to have been a true supernova. SN 2006jc, some 77 million light-years away in UGC 4904, in the constellation Lynx, also underwent a supernova impostor brightening in 2004, followed by a magnitude 13.8 type Ib supernova, first seen on 9 October 2006. η Carinae has also been compared to other possible supernova impostors such as SN 1961V and iPTF14hls, and to superluminous supernovae such as SN 2006gy.
Possible effects on Earth
A typical core collapse supernova at the distance of η Carinae would peak at an apparent magnitude around −4, similar to Venus. A SLSN could be five magnitudes brighter, potentially the brightest supernova in recorded history (currently SN 1006). At 7,500 light-years from the star it is unlikely to directly affect terrestrial lifeforms, as they will be protected from gamma rays by the atmosphere and from some other cosmic rays by the magnetosphere. The main damage would be restricted to the upper atmosphere and ozone layer, to spacecraft, including satellites, and to any astronauts in space.
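The quoted peak brightness follows from the standard distance modulus, m = M + 5 log10(d / 10 pc). A minimal sketch of the arithmetic, assuming a typical core-collapse peak absolute magnitude of about −16 (a textbook value, not stated in this article) and η Carinae's distance of roughly 7,500 light-years:

```python
import math

def apparent_magnitude(abs_mag, distance_pc):
    # Distance modulus: m = M + 5 * log10(d / 10 pc)
    return abs_mag + 5 * math.log10(distance_pc / 10)

LY_PER_PC = 3.2616                      # light-years per parsec
d_pc = 7500 / LY_PER_PC                 # ~7,500 ly is about 2,300 pc

m_ccsn = apparent_magnitude(-16, d_pc)  # assumed typical core-collapse peak
m_slsn = apparent_magnitude(-21, d_pc)  # assumed SLSN, ~5 magnitudes brighter

print(round(m_ccsn, 1))  # roughly -4, comparable to Venus
print(round(m_slsn, 1))  # roughly -9, brighter than any recorded supernova
```

The five-magnitude difference between the two cases carries over directly to apparent magnitude, since distance is held fixed.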
At least one paper has projected that complete loss of the Earth's ozone layer is a plausible consequence of a nearby supernova, which would result in a significant increase in UV radiation reaching Earth's surface from the Sun, but this would require a typical supernova to be closer than 50 light-years from Earth, and even a potential hypernova would need to be closer than η Carinae. Another analysis of the possible impact discusses more subtle effects from the unusual illumination, such as possible melatonin suppression with resulting insomnia and increased risk of cancer and depression. It concludes that a supernova of this magnitude would have to be much closer than η Carinae to have any type of major impact on Earth.
η Carinae is not expected to produce a gamma-ray burst, and its axis is not currently aimed near Earth. The Earth's atmosphere protects its inhabitants from all the radiation apart from UV light (it is opaque to gamma rays, which have to be observed using space telescopes). The main effect would result from damage to the ozone layer. η Carinae is too far away to do that even if it did produce a gamma-ray burst.
See also
Lists of astronomical objects
Allomothering in humans

Allomothering, or allomaternal care, is parental care provided by group members other than the genetic mother. This is a common feature of many cooperative breeding species, including some mammal, bird and insect species. Allomothering in humans is universal, but the members who participate in allomothering vary from culture to culture. Common allomothers are grandmothers, older siblings, extended family members, members of religious communities and ritual kin (such as godparents).
The life history strategy of humans involves a long period of dependency, termed "secondary altriciality" by Adolf Portmann, which should result in longer interbirth intervals. However, compared to other primates, humans have short interbirth intervals resulting in numerous overlapping dependents, all without an increase in child mortality. Allomothering explains how humans can have children spaced only a few years apart and manage to raise multiple children at once. Food provisioning, help with childcare and investment in the child's learning can be provided by members of the community to help ease the mother's investment. Allomothering participants and specific helping behavior vary widely from group to group.
Theory
Cooperative breeding is a reproductive strategy that has been observed in birds, insects, and mammals. In cooperative breeding, parents receive support in childrearing from other members of the community. The parental support of helpers other than parents is known as allomaternal care (from the perspective of the mother; in species where both maternity and paternity are known it is often referred to as alloparenting or allopaternal care) and can be carried out by kin and non-kin members of the community. Cooperative breeding often arises in monogamous mating systems with high coefficients of relatedness between group members and where females give birth to multiple offspring. The presence of allomothers is associated with reductions in interbirth intervals, increases in litter size, and higher annual rates of survival. Cooperative breeding reduces the investment necessary for the parents of an offspring, allowing the freed-up resources to be directed towards producing more offspring. Studies among meerkats suggest that while helpers incur great short-term costs, long-term costs are minimal or non-existent. Help by allomothers can be conditional upon health at the onset of the reproductive cycle, such as weight in meerkats, and helpers are able to modify their behavior to offset some of the short-term costs, i.e., increased time spent foraging and alternating the breeding cycles in which they help. Cockburn shows that helpers among birds may benefit from increased numbers of non-descendant kin, increased access to territory and resources, increased access to mating options, increases in social status and longer time to acquire skills.
Cooperative breeding is common in many mammal species, but the type of care can vary greatly. Isler and van Schaik's survey of allomothering in placental mammals found that 46% engaged in no help, 10% only provided protection, 3% provided only allonursing, 24% provided all forms of help other than provisioning and 16% provided complete help, including provisioning (see Figure 1, pg. 55). Allomaternal help with provisioning was most common among members of Carnivora and Primates. The study also looked at correlations between allomaternal care and brain size, finding mixed results among many of the orders; however, among Carnivora there was a correlation between male help and brain size, and in Primates there was a correlation between allonursing and brain size. It has also been demonstrated that increases in allomothering among chimpanzees are associated with a reduction in lactation effort and shorter weaning times. Hrdy argues that in order for cooperative breeding to occur, the underlying neural circuitry must be present in both sexes as well as in pre- and post-reproductive aged individuals. This neurocircuitry includes a tendency to be attracted to, hold and protect infants, all of which are common among primates in varying degrees.
Evolution among humans
Humans produce offspring that develop slowly and are incapable of moving or retrieving food for a lengthy period after birth. In animals that have infant altriciality and extended development, interbirth intervals are often longer to allow the mother to fully invest in one offspring before conceiving another. However, humans do not follow the pattern seen in other apes: human life history features relatively short interbirth intervals, resulting in child-rearing costs that the mother cannot meet alone. Allomaternal care is a universal feature of human reproduction that helps provide additional childcare and other resources for the parents. While short interbirth intervals can negatively impact child outcomes, increasing the risk of child mortality, allomaternal care can reduce these negative outcomes while allowing interbirth intervals to remain short. When compared to other living primates, humans have short interbirth intervals (IBI), averaging 3.1 years in natural fertility populations, and total fertility rates (TFR) of 6.1 offspring, versus interbirth intervals and total fertility rates for chimpanzees (IBI = 5.5, TFR = 2), gorillas (IBI = 3.9, TFR = 3) and orangutans (IBI = 9.2). Humans also have a unique period in their prolonged dependency, childhood, which allows for increased development and social learning.
The emergence of cooperative breeding in early hominins may explain several key life history traits such as larger brains and our demographic success around the globe. Isler and van Schaik analyzed life history traits such as first age of reproduction and interbirth intervals in primates and applied the resulting data to ancestral hominins, determining that without cooperative breeding the first age of reproduction for Australopithecus afarensis would be around 12.6 years and for Qafzeh Homo sapiens around 26.1 years. The predicted interbirth intervals would range from 6 to 8.4 years. Assuming some form of allomaternal care as early as A. afarensis, first age of reproduction comes down to 10.9 years and interbirth intervals are reduced to approximately 3.4 years. For Qafzeh H. sapiens, first age of reproduction is reduced to 22.6 years and interbirth intervals are around 4.7 years. Based on their study, Isler and van Schaik conclude that a change in lifestyle resulting in substantial increases in allomothering occurred early in the Homo genus. This implies that cooperative breeding has been an important part of human history for nearly two million years.
Demographic reconstructions of hunter-gatherer populations in the Pleistocene attempt to analyze the probability of having allomothers in a community and rely on assumptions about residential patterns in early humans. Kurland and Sparks (pers. comm. in Hrdy, 2006) provide estimates of several relatives' presence assuming different mortality rates. Under low mortality rates, the chance of a primipara, a woman giving birth to her first child, having her mother around is about 50%, and under high mortality rates the chance drops to 25%. The chance of having an older sibling around is much higher, as are the chances of having cousins. While this indicates that a new mother would have at least some close kin nearby to help with childcare, residential patterns may change these likelihoods. If humans were majority patrilocal, where males remain in natal groups and females disperse, we would expect fewer maternal kin to be available for allomaternal assistance. This suggests that mothers would need to rely more on paternal kin and unrelated individuals as potential allomothers, or for at least some of the mother's kin to temporarily reside with the parents during periods of high need. However, we know that humans are ambilocal or bilocal, meaning either males or females may disperse, which can impact the availability of maternal or paternal kin. Bilocality may have led to the diverse use of both kin and non-kin as allomothers in humans. Allomothering appears to also be tied to the environment, with increased levels of allomothering seen in regions of reduced climate predictability and lower average temperatures and precipitation.
Cognitive & hormonal implications
Human females respond to social conditions during and immediately after pregnancy and may decide to abandon, neglect or commit infanticide if social support is lacking, if the infant is not considered viable (low birth weight, twins) or under certain extreme conditions (such as famine). Nearly all individuals show a response to infant crying and laughing, including fathers, virgin females, older children and even strangers. Oxytocin and prolactin, hormones released by mothers during lactation that may facilitate bonding, are also produced by males and others in the presence of a crying infant.
Infants appear to have coevolved adaptations in response to those of older children and adults. Infants that look healthier (larger or plump) have higher rates of survival. Newborn humans are also predisposed to seek out faces and will imitate faces they see or respond to attention by smiling and laughing. Smiling and laughing appear to be an attempt to draw in potential allomothers, as well as a way to bond with parents. Likewise, older infants can learn the intentions of others. Humans show advanced theory of mind, and the ability to read and predict others' behavior and point of view may be in part due to the high levels of allomothering seen in humans. Young infants engage other humans through laughter and quickly learn to discriminate between those that show them more attention and care for them, indicating that they rapidly develop an understanding of who intends to care for them. There is also evidence for cognitive and socialization implications in nonhuman primates' allomothering.
Emotional health implications
Allomothering can benefit the emotional health of both mothers and infants.
Mothers:
For mothers, it has been shown that the social network and support provided through the cooperative breeding system, or the "availability of others", may mitigate not only the physical burden of pregnancy but also the psychological and emotional burden of motherhood. This reduced pressure on mothers contributes to the improvement of their well-being and parental investment. A lack of support from social networks can lead mothers to abandon their children and increases the risk of postpartum depression. In this vein, Hagen (1999) suggested that post-partum depression might help mothers to signal their need for social support. For example, research showed that the emotional support of infants' maternal grandmothers was associated with less depression in the mothers. In this study, the interesting finding was that the positive influence of maternal grandmothers' support in decreasing mothers' depression risk was not related to the geographic proximity of the grandmothers.
Infants:
Mothers' post-partum depression can cause reduced parental investment and a failure to respond to infants' cues for their needs. The resulting neglect of infants' cues can affect their secure attachment and cognitive functions. Hence, the caregiving network can decrease these negative consequences.
Cameron (1998) noted that, among mammals generally, some offspring may suckle the breast for soothing in distressing situations rather than, or as well as, for nutritive purposes. Hewlett and Winn subsequently suggested that the need for soothing in distressing situations may be one reason for the evolution of allomaternal breast suckling (allonursing/allosuckling), although this is less accepted than the other proposed explanations of allomothering.
Immunity implications
Allonursing refers specifically to breastfeeding other individuals' offspring, both kin and non-kin. Since allonursing is costly for females, there have been many hypotheses to explain the possible reasons for its evolution. One of these hypotheses is the "neuroendocrine function of allosuckling" (NFA).
Based on the NFA, both infants and allomothers can benefit from allonursing. For example, as a benefit for the allonursers, an infant, through suckling, can stimulate the nipple to produce prolactin. Prolactin production can cause fertility suppression and immune system improvement in allonursers. In addition, suppressing fertility can be beneficial for females and their own offspring, especially when their own infants do not stimulate their mothers' nipples.
This hypothesis also predicts that allonursing, with its increased prolactin and its positive consequences for the immune system in allonursers, is more common in mammals living in environments with a higher load of parasites.
Infants can also be exposed to different bacteria by being breastfed by allomothers. The bacteria can be transmitted by skin-to-skin contact and through milk during allonursing. Indeed, there is a critical period in infants' development in which the gastrointestinal tract is colonized by bacteria to shape the gastrointestinal microbiome (GIM). According to the hygiene hypothesis and related ideas, including the "old friends" hypothesis, this period of colonization is critical because colonization helps the development and education of the immune system.
The human milk microbiome (HMM) is found to be important for the colonization of the GIM and, consequently, for the development of the immune system in infants, because during the critical period the HMM is the first and most reliable source of bacteria, and this modulates the immune system of infants.
Studies of small-scale societies, including foragers (hunter-gatherers) and horticulturalists, showed that allonursing is more common among forager women who live in tropical environments (with higher loads of parasites and bacterial infections) than in more arid environments where infectious diseases are lower.
In tropical environments, most deaths in small-scale societies are due to parasitic, bacterial, and viral infectious diseases. Some of the tropical forager societies in which allonursing is common are the Ache, Bofi, Agta, Aka, Efé, Chabu, and Ongee, but it is not common in forager societies such as the Nayaka, Hadza, Paliyan, Martu and !Kung, who live in non-tropical areas.
However, the studies showed some exceptions among horticulturalists in tropical areas, such as the Ngandu, among whom allonursing is discouraged; researchers believe this may be due to knowledge of the increased possibility of transmitting common infections through breastfeeding in their populous villages. Yet for the Aka people the primary cause of death is parasites, not the transmission of other infections. Therefore, allonursing, due to the NFA and its positive immunological consequences, can benefit Aka women, and the benefits of allonursing may outweigh the costs of infection transmission.
Furthermore, the size and frequency of the caregiving network involved in allomothering practices can affect the diversity and bacterial composition of human milk, and this may influence infants' immune systems and health. The research that has addressed the relationship between the HMM and infants' immunity emphasizes the need for more studies, since the interaction between microbes and immunology is complicated.
Allomothering by kin
Hrdy states that the altruism of allomothers can be explained by Hamilton's rule and therefore allomothers enhance their inclusive fitness by helping kin. However, in humans, bilocality and complex cultural norms around marriage or mating produce groups that may not be highly related, suggesting that allomothering is not limited to kin. Despite varying and complex residential patterns, kin do appear to be strong sources of allomaternal care in many societies. Fathers, grandparents, older siblings and other close kin help in childcare, provisioning, protection and education.
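Hamilton's rule can be stated compactly: helping is favored when r·B > C, where r is the coefficient of relatedness between helper and recipient, B the fitness benefit to the recipient, and C the fitness cost to the helper. A minimal sketch (the numeric benefit and cost values below are hypothetical, chosen only to illustrate the inequality):

```python
def helping_favored(r, benefit, cost):
    # Hamilton's rule: altruistic helping can evolve when r * B > C
    return r * benefit > cost

# Hypothetical payoffs: helping adds 3 offspring-equivalents for the
# recipient at a cost of 1 to the helper.
B, C = 3, 1
print(helping_favored(0.5, B, C))    # full sibling or own child (r = 0.5): True
print(helping_favored(0.25, B, C))   # grandchild or half-sibling (r = 0.25): False
print(helping_favored(0.0, B, C))    # unrelated group member: False
```

Under these particular payoffs, helping is favored only for close kin (0.5 × 3 > 1 but 0.25 × 3 < 1), which illustrates why kin selection alone struggles to explain allomothering by distantly related or unrelated group members.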
Allomothers are perhaps less important immediately after birth. Infants rely on the mother for milk to survive, and mothers benefit from food provisioning by the fathers to produce sufficient amounts of milk for their infants. Infants whose mothers die during childbirth have a low probability of survival, between 1–5% in some pre-demographic transition populations and higher in post-demographic transition populations. If the mother dies during the first year of life, the chance of survival increases to 35–50%, and the effect of the mother's death nearly disappears after the child reaches two. This indicates the importance of parents during the first couple of years of life, but also demonstrates that once weaning is complete, allomothers are capable of rearing children to adulthood.
Fathers
The importance of fathers varies considerably, but many authors now agree that there is little impact on child survival from the loss of a father. Sear and Mace's study of 15 populations found that in 53%, the death of the father was not correlated with an increase in child mortality. The father does not directly feed an infant, and it is possible that other males of the family or community can step in if something happens to the father. Kramer's (2010) cross-cultural study found that direct allomaternal care provided by fathers varies from less than 1% (Alyawara) to nearly 16% (Aka), with an average of 4.8% across the populations studied. However, in many societies the father does play an important role. Among the Ache of South America, the loss of the father did affect the child's survival. Meat sharing by hunters is an important part of Ache life and, while the majority of the meat acquired by a male may not go directly to the family, it is used to build relations and exchange for other goods. Interestingly, the father is not very involved in childcare among the Ache. The same study found that among the Hiwi (also of South America), where the father does provide direct childcare and food provisioning, the death of the father had no effect on child survival.
A father's most important contribution to children may be providing protection from other males, especially in groups that practice infanticide. Sear and Mace point out that many of the populations in the study look at the impact of the loss of a father on young children and suggest that, since a father cannot provide direct nourishment to breastfeeding children, their importance may come later in the child's life. Fathers teach subsistence strategies to older children (for example, teaching hunting and trapping skills to older boys in hunter-gatherer societies). Allal et al. found that the marriage and fertility of women who do not have fathers may be impacted. Some hunter-gatherer populations in South America also have "partible paternity", the idea that multiple men can impregnate a woman and all are considered the father, which could provide "back-up" fathers to children.
Grandparents
Grandmothers can care for children freeing the mother to engage in foraging or economic activities, or in the care of a child that has not been weaned. Likewise, grandmothers often continue to forage late in life to help with food provisioning for their grandchildren and a lactating mother. Child mortality in rural Gambia saw a significant decline when the maternal grandmother was present (see Table 4). In a comparison of nine populations on the proportion of direct childcare received by a child, Kramer found that grandmothers accounted for between 1.2% (Maya) and 14.3% (Mardu) of the total childcare. Sear and Mace determine that grandmothers do not have a universally positive effect on child survival and there is a difference between maternal and paternal grandmothers. Maternal grandmothers improved child survival in 69% of cases while paternal grandmothers improved survival in only 53% of observed cases. They also found that paternal grandmothers were detrimental in two cases and maternal grandmothers in one. Paternal grandmothers may on average be older than maternal grandmothers, due to common age differences in males and females at first reproduction. Paternal grandmothers may also be more reluctant to invest in their grandchildren due to the lack of certainty of paternity. The same study also found the timing of impact of paternal and maternal grandmothers varies, with maternal grandmothers having greater effects after the first year of life (allomaternal care) and paternal grandmothers having greater effects during pregnancy and in the first few months of a child's life (help with tasks during pregnancy or causing high levels of stress to the mother during pregnancy).
Grandfathers do not appear to be important sources of allomaternal care, with maternal grandfathers having no impact on child survival in 83% of cases and paternal grandfathers having no impact in 50% and a negative impact in 25% of cases. Little explanation of why grandfathers contributed less to childcare was found in the anthropological literature. It is possible that the even greater age of a grandfather compared to grandmothers made it difficult to help with childrearing. In addition, grandfathers, like fathers, may contribute primarily through food provisioning; however, a grandfather is likely to be well past his hunting prime.
Grandmother hypothesis
Grandmothers are often considered a significant source of allomaternal care, and this fact has led to the "grandmother hypothesis", suggesting that women developed long post-menopausal periods to help with their children's offspring. This long lifespan after menopause is unique to humans and may help explain early weaning and high fertility rates. Hill and Hurtado argue that early reproductive senescence in females is an evolutionary dilemma, since natural selection should favor continued reproduction. They tested the grandmother hypothesis with data collected from the Ache and concluded that the data do not support the idea that early menopause is maintained by natural selection favoring women who stop reproducing in order to invest more in their grandchildren. Despite Hill and Hurtado's finding, grandmothers often account for much of the allomaternal care seen in a variety of societies.
Siblings
Older siblings may be the greatest source of allomaternal care among kin. Kramer's (see Table 1) comparison of nine populations found that siblings accounted for between 1.1% and 33% of the direct childcare received by a child. Older sisters appear to be more important than brothers for many of the groups in the study, with sisters ranging from 5%–33% and brothers 1.1%–16.3%. Ivey (see Figures 1 & 2) found that among the Efe, both sisters and brothers contributed significantly to childcare. Sear and Mace found that five of the six studies indicated that the presence of older siblings improved the survival of younger siblings. Older siblings, while still dependent on their parents, can engage in a variety of useful tasks depending on their age. It is possible that dependence on adult allomothers was not an early selective pressure for the development of allomothering in humans, but rather maternal–juvenile cooperation likely played a more important role.
Older sisters often engage in childcare, helping to look after younger siblings. This is apparently true regardless of subsistence strategy and holds true even for industrial societies. Kramer (see Figure 4) shows a cross-cultural comparison of children in ten societies, all of whom engage in at least some childcare of younger siblings. Equally important, older children can help offset their own cost by engaging in foraging or economic activities. The same figure from Kramer shows that all of the groups engaged in more economic work than childcare. Kramer (see Figure 2) compares groups organized by subsistence strategy and shows that foraging and economic work are common among foragers, horticulturalists, agriculturalists and pastoralists, with the latter two groups showing some of the highest rates of food production and domestic tasks by children.
Extended family
Other kin may be sources of allomothers when present; however, evidence from the ethnographic literature demonstrates varied amounts of contribution and different impacts on the children. Aunts' and uncles' contributions vary depending on residential patterns, inheritance patterns and resource allocation. Among the Kipsigis of Kenya, a child's paternal uncle had a positive effect on reducing child mortality in the richer half of the sample but not in the poorer half; for maternal uncles, however, the effects were reversed, with poorer families showing a greater reduction in child mortality. Local resource competition, namely conflict over land inheritance among the father's brothers, appears to account for the pattern of paternal kin effects. Others have found that maternal aunts have similar effects in societies with female inheritance. Efe infants spend a significant amount of time in the care of aunts and male cousins.
Allomothering by non-kin
Studies of extant hunter-gatherer groups demonstrate that groups are composed of more than just direct kin, with 25–50% of group members unrelated or only distantly related. Although using modern hunter-gatherers as representatives of past hunter-gatherers is not a perfect analogy, these data, in combination with archaeological and paleoanthropological evidence, are the only sources of information available for reconstructing past groups. It is likely that past hunter-gatherer groups were also composed of a mixture of related (kin) and unrelated (non-kin) individuals, meaning that allomothering by non-kin occurred in at least some past societies. Research with contemporary hunter-gatherers, horticulturalists, and modern, industrial societies often finds that non-kin – friends, neighbors, and fictive kin – provide allomaternal care.
Surrogate breast feeders, known as wet nurses in Western medical literature, may have played an important role as allomothers prior to the introduction of bottle feeding and formula. Wet nursing was recorded in ancient Israel and Egypt, among historical and modern Sunni Arab populations, as well as in ancient India and Greece. While the frequency of wet nurses has been debated, there are ample references in the literature to suggest that it did occur (and in some societies still does).
Religion and religious communities may also increase the frequency of allomothering. Studies of religious communities in England and New Zealand show increased allomaternal care by unrelated members of the community. Religion is thought to increase prosocial behavior, with religiosity being a costly signal that indicates to other members that a practicing individual is trustworthy and likely to cooperate and reciprocate. Israeli kibbutzim are collective settlements where members share almost all aspects of their lives: all incomes are given to the kibbutz and, in return, the kibbutz distributes goods and services equally; members dine in communal dining halls, and childcare is communal. Children live in a communal nursery and later in group houses together. Parenting is also distributed among the community, perhaps an extreme form of allomothering.
Fictive kin are unrelated individuals that have kinship terms bestowed on them. There are generally two types of fictive kin: named kin, determined by factors such as age, gender and prestige and applied to a large number of community members (such as in Northern India), and ritual kin, named at a specific ceremony, such as baptism, at which time the relationship between the individual undergoing the ceremony and the named kin is formalized. Named kin may function similarly to religious communities by increasing familiarity and increasing prosocial behavior, however little research appears to have been conducted on this form of fictive kin. Godparents are one of the better-known ritual kin systems in Western culture. Godparents are common to Catholic (and other Christian) communities in Europe and throughout the Americas (due to colonization). Godparents are expected to provide extra resources to the family; naming a godparent creates a strong bond within the community or a tie to an outside community where new resources may be accessible in times of need. Other examples of ritual kin are milk kin in some Arab societies and the Japanese oyabun-kobun system.
In a literature review of alloparental care, Kenkel et al. found that children are between six and one hundred times more likely to die from abuse while under the care of unrelated adults in modern societies; however, they also state that the term alloparenting is often omitted from studies on modern populations, resulting in "blind spots" in the literature.
In urban/industrial societies
The nuclear family has dominated family life in the U.S. and some European populations for many decades. The typical family is often thought of as two parents and their children living in one house. However, a recent poll by the Pew Research Center shows a rise in the number of U.S. Americans living in multigenerational homes, from a low of 12% in 1980 to 20% in 2016. The growing price of housing in the U.S. and the overall rise in the cost of living have made owning homes and living as a nuclear family more difficult. It is once again becoming common for grandparents to live in the same house as their grandchildren, providing a source of childcare for the families. Urban areas in China also show that, while two-generation and single-parent households are on the rise, the extended family still represents the majority of Chinese households. Although older siblings in most modern, industrialized societies are required to attend school, possibly eliminating a source of allomothering, it appears that grandmothers still play an important role in childcare.
Wet nursing may still be an option in some societies. The Arab populations previously mentioned still have wet nurses, but the occurrence is quickly declining; modern technologies like formula and breast pumps have made it unnecessary in populations with access to those technologies. Ritual kin systems, such as godparents and the oyabun-kobun system, are also still active in their respective societies and may still function as a source of allomaternal care. Religious communities within modern societies are still relevant, as Shaver et al. have shown in their studies in England and New Zealand.
In industrialized, urban societies, daycare, school, nannies, etc. may provide many of the same benefits that would have traditionally been provided by kin and well-known community members. It is possible for parents in urban areas to enroll children in daycare as soon as weaning is complete (sometimes earlier with breast-pumping technology), and as long as resources or finances permit, the child can be looked after for the duration of the day. The cost of these services may be prohibitive to many low-income families, creating a divide in allomaternal care depending on income. It is also unclear whether the primary role of a service such as daycare is to allow for more attention to producing more children or to allow parents to pursue other endeavors (careers). The research into daycare as a form of allomothering may be complicated by its cost and other limitations to access. Daycare may be more common among wealthy and/or more highly educated individuals, who are less likely to have children by choice, meaning that discerning its impact on interbirth intervals or other metrics may be challenging.
Allomothering is still relevant in most industrialized societies, even if the source has shifted. Reliance on extended family may have fallen, though it has recently been on the rise again, and greater use of paid childcare systems such as daycare, or compulsory systems such as schooling, fills in some of the gaps that occur from living in a mobile, globalized, industrial society.
Critiques
There are a couple of critiques to consider related to cooperative breeding and allomothering in humans. The first is proposed by Bogin et al., who argue that humans are not actually cooperative breeders. The authors argue that because human allomothering and provisioning are not based on genetic relatedness, as they are in most other cooperative breeders, the term needs to be modified to incorporate the wider range of behavior seen in humans. They propose the term “biocultural reproduction”, which they believe better describes the high amount of allomothering by kin and non-kin, and accounts for the variation in allomothering practices seen from culture to culture in humans. This critique does not apply specifically to allomothering as discussed in this article, but rather to the reproductive strategy system that incorporates it.
The second critique relevant to allomothering concerns human kinship distinctions. Schneider, along with other anthropologists, has argued that distinguishing between real and fictive kin does not always occur in human cultures and perhaps should be abandoned. The critique may be misunderstood to mean that humans cannot tell the difference between related and unrelated individuals. Maternity in humans, as in many other animals, is known, and the relatedness of the mother's female relatives can be assumed by individuals with relative certainty. Humans are also unique in that fairly stable pair-bonding allows for some degree of paternal certainty. In fact, Chapais argues that patrilineal kinship is a prerequisite for the flexibility of residential patterns seen in humans, and that this kinship is not culturally based but has a deep biological substrate upon which it is built. Gintis's and Chapais's arguments suggest that while kinship terms are often applied to individuals beyond actual relatives, the relatedness of those individuals is known. A distinction is still useful, and we see a difference in the contribution of allomothering by related versus unrelated individuals in many, if not most, populations.
References
Parenting
Sociobiology | Allomothering in humans | Biology | 6,777 |
47,435,441 | https://en.wikipedia.org/wiki/John%20C.%20Oxtoby | John C. Oxtoby (1910–1991) was an American mathematician. In 1936, he graduated with a Master of Science in Mathematics from Harvard University. He was Professor of Mathematics at Bryn Mawr College in Pennsylvania from 1939 until his retirement in 1979.
Works
References
External links
20th-century American mathematicians
1910 births
1991 deaths
Bryn Mawr College faculty
Measure theorists
Category theorists
American topologists
Harvard University alumni | John C. Oxtoby | Mathematics | 82 |
38,776,364 | https://en.wikipedia.org/wiki/Rapyuta | Rapyuta is the online database for the RoboEarth Cloud Engine, which is a platform as a service (PaaS). The database, which is part of the European RoboEarth Project, is used as an open-source tool for developers creating robotic applications in the cloud platform. It is designed to allow robots to query the database to learn about their environment and build maps, as well as to provide guidance systems. The Rapyuta project lead was Mohanarajah Gajamohan.
Background
The name Rapyuta is derived from Hayao Miyazaki's anime, Castle in the Sky. In the film, there is a place called Rapyuta, inspired by Jonathan Swift's island of Laputa, where all the robots live. The stated purpose of the project is: "[T]he goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behavior, and ultimately, for more subtle and sophisticated human-machine interaction." In simple terms, Rapyuta is considered an online brain that describes unfamiliar objects to robots. Aside from helping users send their application to the cloud for processing, Rapyuta also enables robots to search for data (drawing from the "experience" of other robots) that can help them perform their tasks. It uses a combination of ROS and WebSocket communication protocols so that the computing environment can be employed in three types of cases: private cloud, where the robots belong to a single entity; software-as-a-service, where multiple robots access ROS software applications run by Rapyuta; and platform-as-a-service, where Rapyuta serves as a host to the developers' applications or a platform where they can be shared.
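The ROS-plus-WebSocket design described above can be illustrated with a small sketch. This is a hypothetical example, not Rapyuta's actual API: the "op"/"service"/"args" field names borrow loosely from the rosbridge JSON convention used by ROS-over-WebSocket systems, and the service path and helper functions are invented for illustration.

```python
import json

# Hypothetical sketch of the kind of JSON message a robot client might
# exchange with a cloud engine over a WebSocket connection. The service
# path "/roboearth/query_object" and the field names are illustrative
# only, not the real Rapyuta wire format.

def build_query(robot_id, object_name):
    """Encode a request asking the cloud database about an unfamiliar object."""
    return json.dumps({
        "op": "call_service",
        "service": "/roboearth/query_object",
        "args": {"robot": robot_id, "object": object_name},
    })

def parse_response(raw):
    """Decode a JSON response; return the object description, or None on failure."""
    msg = json.loads(raw)
    if msg.get("result"):
        return msg["values"].get("description")
    return None

# Fabricated round trip (no network involved):
request = build_query("robot-01", "coffee mug")
fake_response = json.dumps({
    "op": "service_response",
    "result": True,
    "values": {"description": "cylindrical container with a handle"},
})
print(parse_response(fake_response))  # → cylindrical container with a handle
```

In a real deployment the encoded request would travel over a WebSocket connection to the cloud engine; here the response is fabricated locally so the sketch stays self-contained.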
References
Cloud platforms
Cloud robotics | Rapyuta | Technology | 369 |
41,337,714 | https://en.wikipedia.org/wiki/Far%20Cry%204 | Far Cry 4 is a 2014 first-person shooter game developed by Ubisoft Montreal and published by Ubisoft. It is the successor to the 2012 video game Far Cry 3, and the fourth main installment in the Far Cry series. Set in the fictional Himalayan country of Kyrat, the game follows Ajay Ghale, a young Kyrati-American, who becomes caught in a civil war between Kyrat's Royal Army, controlled by the tyrannical king Pagan Min, and a rebel movement called the Golden Path. The gameplay focuses on combat and open world exploration; players battle enemy soldiers and dangerous wildlife using a wide array of weapons. The game features many elements found in role-playing games, such as a branching storyline and side quests. The game also features a map editor and both cooperative and competitive multiplayer modes.
Announced in May 2014, development on Far Cry 4 began immediately after the shipment of Assassin's Creed III in late 2012. The team originally intended to develop a direct sequel to Far Cry 3 that continues the narrative, but the idea was later scrapped and the team decided to develop a new setting and story for the game. Certain aspects of Far Cry 4 were inspired by the Nepalese Civil War, and the design of the game's antagonist Pagan Min was inspired by Japanese films Ichi the Killer and Brother. Troy Baker was hired to portray Pagan Min. The game's competitive multiplayer was created by Red Storm Entertainment while the Shangri-La segments in the campaign were handled by Ubisoft Toronto.
Far Cry 4 was released worldwide for PlayStation 3, PlayStation 4, Windows, Xbox 360, and Xbox One in November 2014. It received mostly positive reviews, with critics praising the open-world design, visuals, soundtrack, and characters as well as new gameplay additions and the wealth of content. However, some reviewers disliked the story and found the game too similar to its predecessor. The game sold over 10 million units by March 2020. Several releases of downloadable content were subsequently published. A spin-off title, Far Cry Primal, was released in February 2016. A successor, Far Cry 5, was released in March 2018.
Gameplay
Far Cry 4 is a first-person action-adventure game. Players assume control of Ajay Ghale, a Kyrati-American who is on a quest to spread his deceased mother's ashes in the fictional country of Kyrat. Ajay may utilize various short and long-range firearms, including pistols, revolvers, shotguns, assault rifles, submachine guns, bows, a flamethrower, rocket launchers, grenade launchers, and sniper rifles. More powerful versions of these weapons become available after considerable progress through the game. Throwable weapons include fragmentation grenades, Molotov cocktails, and throwing knives. The game allows players to take cover to avoid gunfights and to perform melee takedowns from above or up-close. Unlike previous installments in the series, Far Cry 4 gives players the ability to kick objects and the ability to hide the corpses of enemies.
Players can use a variety of methods to approach missions. For instance, players can utilize stealth to evade enemies and complete objectives without being noticed, or they also have the option to assault enemies with firearms and vehicles. The player character is equipped with a digital camera, which allows him to mark and highlight all visible enemies, animals, and loot. Players are also able to ride on elephants, which serve as tank-like offensive weapons. Players can throw bait towards enemies, which attracts nearby wildlife that is hostile to both the player and enemies. Players can also hunt and skin animals.
The game features an open world environment that is free for players to explore. It features several environments, including forests, rivers, and mountains. To allow players to travel between places faster, the game features various vehicles, including buggies, trucks, and water vehicles like speedboats. Players can drive and shoot at the same time and can enable auto-drive, in which the game's artificial intelligence takes over the role of controlling the vehicle and guides players to their objectives. Players can also hijack other vehicles while driving. The Buzzer, an aerial, gyrocopter-like vehicle, is introduced in the game, allowing players to gain a tactical advantage from the air. Parachutes, wingsuits, and grappling hooks are also featured in the game; these items help players swing across cliffs and quickly navigate the environment. Parts of the game take place in Shangri-La, a mystical dreamland where players battle demons as the Kyrati warrior Kalinag. While in Shangri-La, players are accompanied by an injured tiger which serves as their companion. Players can issue commands to the tiger, which assists them in battle.
The game world is divided into two halves: North and South Kyrat. Players start in South Kyrat and are free to explore it almost immediately, but can only unlock North Kyrat over the course of the story. The map is progressively opened by liberating bell towers, freeing them from Pagan Min's influence and allowing the Golden Path to expand. These towers help players reveal new areas and mark new locations of interest on the map. The world is scattered with outposts controlled by Pagan Min, which can be infiltrated by players. Four larger outposts, or fortresses, can also be found, and feature stronger defenses and more difficult combinations of enemies. If players successfully liberate these outposts, they will serve as fast-travel points, allowing quick navigation through the game's world. Additional missions and quests also become available. There are many side-missions that can be completed, including hostage rescues, bomb disposal quests, and hunting missions. The collected animals' parts can then be used for crafting new pouches and belts.
Like its predecessors, the game features some role-playing elements. Players can earn experience points by completing missions and defeating enemies, and these experience points can then be spent on performance boosts and upgrades. There are two sets of abilities for players to choose from, called The Tiger and The Elephant. The Tiger upgrades mainly improve players' offensive abilities, while The Elephant upgrades improve players' defensive skills. A variety of random events and hostile encounters take place throughout the game; for example, the player may unexpectedly be attacked by an eagle, be hit by a car, or witness an animal attack. Players can accumulate karma by performing kind actions towards the rebels, such as by assisting them in battles when they are attacked by wildlife or enemies. Doing so will give players discounts when purchasing new items at trading posts, and will allow players to call in support and back-up from members of the Golden Path. Players can also gain experience by collecting items like masks, propaganda posters, and prayer wheels. There is also an Arena mode, in which players battle human enemies and animals for additional experience points and rewards.
Multiplayer
Far Cry 4 features a co-operative multiplayer mode known as "Guns for Hire", which supports up to two players. The mode is separate from the game's campaign, and players are free to explore the game's world, defeat enemies, and infiltrate outposts with their companion. In addition to the co-operative mode, players can gain access to several competitive multiplayer modes which have an asymmetrical structure. Players play as either a Rakshasa or a Golden Path member. The Rakshasa are equipped with bows and arrows and have the ability to teleport, summon wildlife to assist them, and turn transparent, while Golden Path members are equipped with guns and explosives, and have access to armored vehicles. Known as "Battles of Kyrat", these matches pit players against each other in three modes, called Outpost, Propaganda, and Demon Mask. Far Cry 4 also contains a Map Editor that allows users to create and share custom content. As in Far Cry 3, players can create their own maps by customizing landscapes and by placing buildings, trees, wildlife, and vehicles. However, the Map Editor did not support competitive multiplayer levels at launch. Multiplayer support was added to the game on February 3, 2015.
Plot
After the death of his mother Ishwari, Ajay Ghale (James A. Woods) returns to his home country of Kyrat to carry out Ishwari's final wish by returning her ashes to Lakshmana. However, his mission is interrupted when the bus he is traveling on is attacked by the Royal Army and he is greeted by Pagan Min (Troy Baker), the country's eccentric and violent king. Min apologizes, brutally kills a soldier for shooting the bus, and acts warmly toward Ajay before kidnapping him and his tour guide and taking them to a dinner party at Lieutenant Paul De Pleur's mansion. After his guide is taken to be interrogated, Ajay flees with the aid of Sabal (Naveen Andrews), a commander in the Golden Path, a rebel movement established by Ajay's father, Mohan Ghale. Ajay is not able to leave the country as the Royal Army has taken control of Kyrat's only airport and sealed the borders.
In the twenty-odd years since Ishwari and Ajay fled Kyrat, the rebellion has stagnated, with the Golden Path now fighting for their very existence. As the son of Mohan Ghale, Ajay becomes a symbol for the Golden Path to rally around. After freeing a group of hostages and liberating territory held by Pagan, the Golden Path plan on breaking Pagan's stranglehold on power by targeting his three regional governors: Paul "De Pleur" Harmon (Travis Willingham), who oversees opium production and runs Pagan's torture chambers; Noore Najjar (Mylène Dinh-Robic), who runs poaching and prostitution rings and who became a victim of Pagan's cruelty herself after he kidnapped her family; and Yuma Lau (Gwendoline Yeo), Min's adopted sister and trusted general who is obsessed with uncovering the secrets of the mystical realm of Shangri-La.
However, the Golden Path's newfound momentum is threatened by deep divisions between its commanders, Sabal, who favors traditional values, and Amita (Janina Gavankar), who argues for progress, which includes relying heavily on producing drugs for export. Ajay is forced to intervene on several occasions, with his decisions influencing the direction the Golden Path takes. The first governor to fall is De Pleur, after Noore helps Ajay find a way to infiltrate De Pleur's stronghold, allowing the rebellion to capture him. Amita and Sabal later task Ajay with confronting and killing Noore. She dies in her fighting arena, either with Ajay killing her, or with Noore committing suicide upon learning Pagan had her family executed years beforehand.
As the Golden Path secures Kyrat's southern provinces, Ajay is contacted by Willis Huntley (Alain Goulem), a CIA agent who offers intelligence for the rebels and pages from his father's diary in exchange for killing Yuma's lieutenants. After Ajay kills several of them, Huntley admits they were in fact CIA assets, and that he was sent to clean up after the CIA as the agency did not see Pagan as a threat anymore. Huntley betrays Ajay to Pagan just as the Golden Path prepares to push into Northern Kyrat.
Ajay ends up in Yuma's mountain prison, where he is drugged and suffers terrifying hallucinations, but manages to escape. In the process, he finds out that Yuma has started despising Pagan, primarily because of his affection toward Ajay's late mother. The Golden Path pushes into the north, and while Ajay attempts to reconnect with another faction of the rebels, Pagan, aware of Yuma's plotting against him, betrays Yuma to the Golden Path. Ajay is drawn into a confrontation with her and prevails, but tensions between Amita and Sabal reach new heights, and Ajay is forced to make a final decision as to who will lead the Golden Path. Whichever leader he chooses then sends Ajay to kill the other to prevent them from starting another civil war, and Ajay can choose to either kill them as ordered or let them go. With the Golden Path now united under a single leader, Ajay joins them for an attack on Pagan's fortress and pushes on alone to Pagan's palace while the Golden Path holds off the military.
Endings
Ajay encounters Pagan, who chastises him for fleeing at the start of the game, claiming that he only ever intended to help Ajay. Pagan offers Ajay a final decision: shoot him now or listen to him. If Ajay shoots Pagan, the game ends immediately and the credits roll. If Ajay instead chooses to listen, Pagan reveals that Mohan sent Ishwari to spy on Pagan in the early days of the Golden Path, but they fell in love and had a daughter together: Lakshmana, Ajay's half-sister. Mohan killed Lakshmana for Ishwari's betrayal, and Ishwari killed him in turn before leaving the country with the infant Ajay. Pagan shows Ajay to a shrine containing Lakshmana's ashes, and Ajay places Ishwari's ashes inside. Pagan then boards a helicopter and departs peacefully, leaving the country in Ajay's hands. The player can choose to shoot down Pagan's helicopter as it flies away, killing Pagan in the process.
In the aftermath, the Golden Path seizes control of Kyrat. The final ending depends on which leader Ajay ultimately sided with. If Amita is placed in charge, she turns Kyrat into an authoritarian drug state, forcing villagers into work in factories and drug fields, and conscripting children into the group as soldiers to bolster their ranks against the remnants of the Royal Army; Ajay also learns that she has had her sister, Bhadra, taken away, "never to come again". If Sabal is placed in charge, Kyrat becomes a patriarchal fundamentalist theocracy where all of Amita's supporters are executed, women are denied fundamental political rights, and Bhadra is anointed as a religious symbol for the country to rally around. The player has the choice to kill the Golden Path's leader or leave them alive.
An Easter egg ending can be found at the beginning of the game. To trigger it, Ajay must simply wait at the dinner table in Pagan's mansion; when Pagan returns, he thanks Ajay for being a "gentleman" and leads him to Lakshmana's shrine, telling Ajay of his family history. After Ajay plants his mother's ashes at the shrine, Pagan invites Ajay to join him to "finally shoot some goddamn guns".
Development
The game's development was led by Ubisoft Montreal, which took over the development of the Far Cry franchise after the release of Far Cry: Instincts in 2005. Additional development was handled by four other in-house Ubisoft studios, Ubisoft Toronto, Red Storm Entertainment, Ubisoft Shanghai, and Ubisoft Kyiv. The Montreal studio worked on the game's campaign, the Toronto studio worked on the Shangri-La segments of the campaign, Red Storm handled the development of the competitive multiplayer, the Shanghai studio worked on the hunting missions, and the Kyiv studio developed the game's PC version. Development of the game began in late 2012, after the shipment of Assassin's Creed III. The game's creative director is Alex Hutchinson, who had previously worked on Maxis's Spore as well as Assassin's Creed III. The game runs on an upgraded version of the Dunia 2 engine that was used in Far Cry 3.
Gameplay design
When brainstorming ideas for the new Far Cry game, the development team originally planned on developing a direct sequel to Far Cry 3. The sequel would be set on the same tropical island, would expand upon the protagonist's story, and would bring back characters, such as Far Cry 3's secondary antagonist, Vaas Montenegro. However, after four days, the team found that a sequel was not what they wanted to achieve. As a result, they decided to scrap the idea and build a brand new game with a new setting and a new set of characters. The team adopted a "we want it all" approach, in which they hoped to experiment with all kinds of ideas. Some team members hoped that the game would allow players to fly, which led to the game's verticality. The game's director also hoped that players would be able to ride a rampaging elephant, in a place with "exotic mountainsides" and "unique culture". This led to the concept of a mountainous setting and the introduction of elephants in the game. The developers aimed for players to consider Far Cry 4 a standalone experience, and therefore they avoided bringing back any characters from Far Cry 3 except for Willis and Hurk. The decision to bring the two back was made because the team thought that they should provide some references to previous games in the series, as all of the games are set in the same universe even though they are not directly related.
Some of the gameplay elements were directly taken from Far Cry 3. Exotic locations, hunting, and the freedom for players to complete missions through different approaches were maintained in Far Cry 4. The team hoped that by incorporating and expanding upon these ideas, while introducing new features, they could make Far Cry 4 an evolution for the series. As a result, the size of the game's outposts became larger and players were given more options to customize their weapons. The team also realized that players spent a lot of time interacting with the open world of Far Cry 3, and decided to put more effort and resources into the world's design and add more quests to the game.
Setting and characters
The game's setting, Kyrat, is a fictional country in the Himalayas region. When building Kyrat, the developers merged elements from real-world regions including India, Nepal and Tibet, but exaggerated those elements. The map's size is similar to that of Far Cry 3, but is more dense, diverse, and features more varied environments. The developers hoped that players could experience a sense of exploration when traveling between different terrains. The team also hoped that the new location could be believable while remaining interesting for players. As a result, they created an identity for Kyrat by doing such things as adding different signboards to the game and creating a fictional mythology and religion for Kyrat. The game's world was also designed to accommodate new features such as the helicopter and the grappling hook. In an effort to make the world feel real, the team added improvements to the design of side-quests. Instead of simply being activities for players to complete, the quests are narrative-driven, which was done to increase the connection between them and the world. In order to increase the credibility of the game's world, the studio sent a team to Nepal to experience and record the local culture, so that they could bring those ideas back to the studio. According to the developer, the trip changed the game's design; the focus shifted from the game's civil war, which is inspired by the real-world Nepalese Civil War, to developing unique and interesting characters.
One of the game's most critically acclaimed characters is its major antagonist Pagan Min, a foreign interloper who usurps the rule of Kyrat by its royal family and names himself after the historical Burmese king within series lore. The team hoped that players would be "shocked, amazed and intrigued" by him in every encounter. Min has a complex relationship with the playable character, Ghale, as the team wanted players to guess Min's intentions and add a layer of mystery to him. The team originally hoped to have a villain that had a "punk-rock mentality", but the idea was abandoned as the team thought that the concept was not original. The pink costume Min wears throughout the game was inspired by Beat Takeshi, a character from Brother, and Kakihara, a character from Ichi the Killer. Min is designed to be sadistic yet confident, and the team hired Troy Baker to provide the voice for Min, as they thought that Baker's voice is charismatic enough to suit Min. According to Baker, Ubisoft gave him a script for the audition but he chose not to follow it, and instead decided to threaten to cut off the face of an assistant using Min's tone. The interviewer was very pleased with Baker's performance and decided to sign him for the job. As for Ghale, he was designed to be "thin", and his backstory was designed to be revealed as players progressed through the game's story. According to the game's narrative director, Mark Thompson, Ghale learns the history and culture of Kyrat along with players. The developers also hoped that Ghale could be an accessible character for players.
In hindsight, the team considered the story of Far Cry 3 "great", even though they thought that it was separated from the game's world. In order to increase players' agency and make the story feel more connected to the world for players, the team introduced a branching storyline that required players to make choices that would lead to different results and alter the game's ending. The team hoped that by adding choices, they could add additional depth and meaning to the game's campaign. Thompson added that they twisted the story of Far Cry 3 for Far Cry 4, and made outsiders the villains instead of the heroes. The team considered it a "risk", but they wanted to try something different.
For the Shangri-La missions, the team wanted to have a structure similar to Far Cry 3's mushroom missions; a game within a game. The Shangri-La missions are not related to Kyrat, but play an important role in the game's narrative. When creating these segments, the team put a lot of emphasis on the use of colors. They hoped that the artistic vision for Far Cry 4 would not feature any resemblance to other typical shooter games. It was originally designed to be a small open world but was later converted into a linear experience due to time constraints and significant creative differences between developers. The team later decided to simplify it, and re-imagined it into an "ancient, natural world". It is made up of five different colors. The main color of Shangri-La is gold; the developers thought that using gold as the foundation added "warmth" to the dreamland. Meanwhile, red was used heavily to add a sense of strangeness, as well as for establishing a tie to the game's narrative and story. Orange was used as a color of interaction, while white was used to add purity to the world. Blue is the last of Shangri-La's main colors and represents danger and honor.
Multiplayer
Ubisoft promised that Far Cry 4 would have much more of a multiplayer element than Far Cry 3. Some elements that were scrapped for Far Cry 3 due to time constraints were featured in Far Cry 4, such as the "Guns for Hire" co-operative multiplayer mode. Building a cooperative experience was the team's goal starting from the beginning of the game's development. Originally intended to be a separate mode, it was later made to be seamlessly integrated into the main campaign. The game's competitive multiplayer was designed to give players freedom, allowing players to progress and defeat enemies in a variety of different ways. Red Storm Entertainment also considered players' feedback from the multiplayer aspect of Far Cry 3, and decided to include vehicles in the game. The company chose an asymmetrical structure for this mode so that players could have different experiences in different matches, as well as to make matches feel more chaotic. The developers originally planned to feature female playable characters, but the plan was scrapped due to animation problems. Ubisoft announced a 'Keys to Kyrat' offer for players that owned a copy of the game for the PlayStation 3 or PlayStation 4. It allows those owners to send out game keys to up to ten other people who do not own a copy of the game. Players who are offered a key can join the person that sent them the key and play the cooperative mode for two hours.
Music
Cliff Martinez was hired to work on the game's soundtrack. A two-disc edition was released that contained 30 tracks heard in the game, and a deluxe edition was released that contained 15 extra tracks. The album was released just before the release of the game and received positive reviews. Particular praise was directed towards the usage of traditional Nepalese instruments which, combined with electronic samples, suggested high-octane action and mystical wondering.
Release
With Far Cry 3 being a commercial success, Ubisoft considered the Far Cry series one of their most important brands, and they hinted that a sequel was in development in June 2013. On October 3, 2013, Martinez mentioned that he was working on the soundtrack for the game. In March 2014 the game's setting and features were leaked. The game was officially announced on May 15, 2014, and the first gameplay footage was revealed during Electronic Entertainment Expo 2014. The game's cover art, which shows the light-skinned Pagan Min resting his hand on a dark-skinned person, caused controversy and accusations of racism. Hutchinson later responded and clarified by saying that Pagan Min is not a white person and that the other person depicted is not the game's protagonist. Hutchinson added that the reaction of the community regarding the cover art was "uncomfortable".
In addition to the standard version, a Limited Edition of the game was available for purchase. This edition features additional in-game missions and an Impaler Harpoon Gun. The Limited Edition was a free upgrade for players who pre-ordered the game. A Kyrat Edition was also announced; it contains a collector's box, a poster, a journal, a map of Kyrat, a figurine of Pagan Min, and the missions from the Limited Edition. Players can also purchase a season pass for the game, which grants them access to additional content, including a new competitive multiplayer mode, a mission called "the Syringe", the missions from the Limited Edition, and the two other pieces of downloadable content. Far Cry 4: Arena Master, developed by Ludomade, was released alongside the game itself as a companion app for iOS and Android.
Far Cry 4 was released worldwide on November 18, 2014, for PlayStation 3, PlayStation 4, Windows, Xbox 360 and Xbox One. The PlayStation 4, Windows, and Xbox One versions feature higher visual fidelity, such as having a higher texture resolution and more animal fur. The game was supported by downloadable content upon launch. The first DLC, Escape From Durgesh Prison, featuring a new mission set during the main campaign, was released on January 13, 2015. It can be played solo or with another player. The Overrun DLC, which added new maps, a new vehicle, and a new mode to the game's competitive multiplayer, was released on February 10, 2015, for the consoles, and February 12, 2015, for PC. The Hurk Deluxe Pack was released on January 28, 2015, and added several story missions and weapons. The last downloadable content, Valley of the Yetis, features a new region and new story missions which can be played solo or co-operatively with another player. Valley of the Yetis was released on March 10, 2015, in North America and March 11, 2015, in Europe. The game was made available on Amazon Luna on November 23, 2020.
Reception
Far Cry 4 received "generally favorable" reviews, according to review aggregator Metacritic.
The game's story received mixed responses. Chris Carter from Destructoid praised the personality of Ajay Ghale, which is "less in-your-face" than that of Far Cry 3's protagonist Jason Brody. He also praised the villain, Pagan Min, who he felt stole the spotlight every time he appeared in the game. Nick Tan, from Game Revolution, also praised Min's personality, but he complained that the character appeared too seldom in the game. Shacknews reviewer Steven Wong, however, thought most characters were multilayered and interesting, citing as examples Pagan Min's chief interrogator, Min himself, and the main character's mother. Josh Harmon from Electronic Gaming Monthly thought that the characters in this game had more depth and that the choices made by players throughout the game were meaningful. Aoife Wilson from Eurogamer thought that the game's characters were memorable, but was disappointed by the story. JeuxVideo's reviewer liked neither the plot nor the characters, writing: "If the main plot left us a little hungry [...], it is quite sad to note that our Nemesis Pagan also disappointed us" and "The key characters of this episode are unfortunately a little too caricatured and predictable". Edwin Evans-Thirlwell, writing for GamesRadar, thought that the story grew tiresome as players progressed, even though some of its characters were interesting. He further criticized the game's writing, which he thought was lackluster. Mike Splechta from GameZone praised the game's voice acting and applauded its storyline, calling it "much more satisfying".
The game's setting received positive responses. Carter thought that the vertical nature of the game's map created obstacles for players when they were traveling between places. However, he praised the interesting lore and wildlife found within the world, as well as the game's long draw distance. Harmon had similar comments, praising the game's graphics and Kyrat's culture. Harmon thought that the hilly landscape of the game's world gave players a sense of exploration, and hence made traversal enjoyable. Wilson thought, however, that the game's setting was not as compelling as the tropical setting of Far Cry 3. Furthermore, she thought that the game's vast territories largely looked the same throughout. Nevertheless, she praised the Shangri-La section, which, according to her, was one of the exceptions. She also noted that what the game "may lack in looks", "it makes up for by being positively stuffed with things to do". Matt Bertz from Game Informer praised the game's setting, which he thought was vibrant, varied and rich. Ludwig Kietzmann from Joystiq praised the content found within the world, and thought that the world itself was absorbing and interesting. Shacknews reviewer Steven Wong liked that as the player travels from southern to northern territories, the music and soldiers change from Indian to Chinese.
The game's design also received acclaim. Carter from Destructoid thought that the fortress and the outpost system provided players with a sense of accomplishment and success, and he considered having the freedom to use different ways to approach and complete missions one of the greatest parts of the game. In addition, Carter applauded the game's driving mechanics and the auto-drive feature, which he considered a significant improvement for the series. However, he criticized the upgrade system, which he thought was directly converted from Far Cry 3 and was uninspiring. Electronic Gaming Monthly's Harmon thought that the introduction of the helicopter was dull. Mitch Dyer, from IGN, praised the game's economy system, which he thought was satisfying. He added that it gives players motivation to complete side-quests. GameSpot's reviewer called the game's economy "excellent" and enjoyed the fact that it forces players to upgrade their wallet, so it can hold more money, and to craft a bigger backpack. Justin McElroy of Polygon praised the introduction of the grappling hook and the vertical map design, which he thought had allowed players to develop a strategy before taking action. He also praised the game for allowing players to use multiple approaches toward a single objective. GameSpot's reviewer noted, however, that after the player liberates an outpost and leaves, the game may report that the outpost is already under attack again; returning to defend it and leaving once more can trigger the same message.
The game's multiplayer mode received a mixed response. Carter compared the competitive multiplayer to that of Tomb Raider, and called it "skippable". He considered the cooperative multiplayer a fun addition to the game but was disappointed by its limitations. He also added that the game would still be a strong title without these multiplayer elements. Bertz from Game Informer also found the multiplayer shallow and poorly executed. He also criticized the lack of a large player pool and dedicated servers. Evans-Thirlwell of GamesRadar thought that the cooperative multiplayer was fun to play, but the asymmetrical competitive multiplayer was easy to forget. In contrast, GameZone's Splechta thought that the competitive multiplayer mode was "a surprise" for him. Dyer echoed a similar statement, and he thought that it had successfully captured the scale and freedom offered by both the game's co-op and campaign. JeuxVideo wrote about multiplayer in the game: "You will understand that exploring Kyrat is honestly funny for two, assuming we are not likely to get tired in the long run".
Harmon thought that Far Cry 4 was an improvement over Far Cry 3, but he thought that the game felt and played too similarly to Far Cry 3, and he added that the game was unambitious. Bertz thought that Ubisoft Montreal's vision for Far Cry 4 was not as bold as its predecessors', and also thought that the experience delivered by Far Cry 4 did not stray far from Far Cry 3. Tan also noted that the game's open world design felt not only similar to Far Cry 3, but also to other Ubisoft franchises like Assassin's Creed and Watch Dogs. Evans-Thirlwell thought that the experience offered by Far Cry 4 was hollow, as it had failed to innovate or reinvent the wheel. Dyer thought that the game was not ambitious, but the experience delivered was still gratifying and rewarding.
Sales
Ubisoft expected the game to sell at least six million copies in its first year of release. Far Cry 4 became the fastest-selling game and the most successful launch in the series in its first week of release. Far Cry 4 was the second best-selling game in the United Kingdom for all formats during the week of its release, behind only Grand Theft Auto V. It was also the sixth best-selling game in the US according to The NPD Group. As of December 31, 2014, the game had shipped seven million copies. The game sold more than 10 million copies during the eighth generation of video game consoles.
Awards
Notes
References
External links
Far Cry 4 at MobyGames
2014 video games
Asymmetrical multiplayer video games
Far Cry games
First-person shooters
Multiplayer and single-player video games
Open-world video games
PlayStation 3 games
PlayStation 4 games
Fiction about rebellions
Ubisoft games
Video game sequels
Video games developed in Canada
Video games scored by Cliff Martinez
Video games set in fictional countries
Video games set in mountains
Video games set in South Asia
Windows games
Xbox 360 games
Xbox One games
Video games using Havok
Works about the Nepalese Civil War
BAFTA winners (video games)
Red Storm Entertainment games
The Game Awards winners
The Taylor Key Award is one of the highest awards of the Society for Advancement of Management. This management award is presented annually to one or more persons for "the outstanding contribution to the advancement of the art and science of management as conceived by Frederick W. Taylor."
The Taylor Key has been awarded in cooperation with the American Management Association.
Award winners
The award winners have been:
1937: George W. Barnwell and George T. Trundle Jr.
1938: Asa A. Knowles and Hugo Diemer
1939: Moritz A. Dittmer and William H. Gesell
1940: Henry S. Dennison
1941: Morris L. Cooke
1942: King Hathaway
1943: Harlow S. Person
1944: Henry P. Kendall
1945: Henry P. Dutton
1946: Robert B. Wolf
1947: Harry Arthur Hopf
1948: Dexter S. Kimball
1949: Herbert C. Hoover
1950: Brehon B. Somervell
1951: No Award
1952: Harold F. Smiddy and Donald K. Davis
1954: Henning W. Prentis
1956: F. J. Roethlisberger
1958: Ralph C. Davis; Frank Henry Neely
1960: John B. Joynt
1961: Lawrence A. Appley
1963: Harold B. Maynard and Lyndall F. Urwick
1965: Phil Carroll
1966: Robert S. McNamara
1967: Peter F. Drucker
1968: Nobuo Noda
1971: Donald C. Burnham
1972: John F. Mee
1973: J. Allyn Taylor
1974: Harold Koontz
1980: Edward C. Schleh
1982: Allan H. Mogensen
1983: W. Edwards Deming
1998: Nobuo Shigenaga
2000: Moustafa H. Abdelsamad
2005: William I. Sauser, Jr.
2019: Edwin A. Fleishman
Other prominent winners of the Taylor Key Award have been Don G. Mitchell and Kaichiro Nishino.
References
Awards established in 1937
Management awards
1937 establishments in the United States
Team effectiveness (also referred to as group effectiveness) is the capacity a team has to accomplish the goals or objectives administered by an authorized personnel or the organization. A team is a collection of individuals who are interdependent in their tasks, share responsibility for outcomes, and view themselves as a unit embedded in an institutional or organizational system which operates within the established boundaries of that system. Teams and groups have established a synonymous relationship within the confines of processes and research relating to their effectiveness (i.e. group cohesiveness, teamwork) while still maintaining their independence as two separate units, as groups and their members are independent of each other's role, skill, knowledge or purpose versus teams and their members, who are interdependent upon each other's role, skill, knowledge and purpose.
There are many team effectiveness models including Rubin, Plovnick, and Fry's GRPI model, the Katzenbach and Smith model, the T7 model, the LaFasto and Larson model, the Hackman model, the Lencioni model and the Google model.
Overview
Team effectiveness is evaluated with the aid of a variety of components derived from research and theories that help describe its multifaceted nature. According to Hackman (1987), team effectiveness can be defined in terms of three criteria:
Output – The final outputs produced by the team must meet or exceed the standards set by key constituents within the organization
Social Processes – The internal social processes operating as the team interacts should enhance, or at least maintain, the group's ability to work together in the future
Learning – The experience of working in the team environment should act to satisfy rather than aggravate the personal needs of team members
In order for these criteria to be assessed appropriately, an evaluation of team effectiveness should be conducted, which involves both a measure of the teams' final task performance as well as criteria with which to assess intragroup process. The three major intragroup process constructs examined are intra-group conflict, team cohesion, and team-efficacy. Intra-group conflict is an integral part of the process a team undergoes and the effectiveness of the unit that was formed. Previous research has differentiated two components of intra-group conflict:
Relationship conflict – This is the interpersonal incompatibilities between team members such as annoyance and animosity
Task conflict – This occurs when members convey divergent ideas and opinions about specific aspects related to task accomplishment
Team cohesion is viewed as a general indicator of synergistic group interaction—or process. Furthermore, cohesion has been linked to greater coordination during team-tasks as well as improved satisfaction, productivity, and group interactions. Team efficacy refers to team members' perceptions of task-specific team competence. This construct is thought to create a sense of confidence within the team that enables the group to persevere when faced with hardship.
According to Hackman (2002), there are also 5 conditions that research has shown to optimize the effectiveness of the team:
Real Team – Stability in the group membership over time
Compelling Direction – A clear purpose that relies on end goals
Enabling Structure – The group's dynamics must help rather than hinder its work
Social Support – The group must have a system to collaborate properly
Coaching – Opportunities for a coach to give help
The Aristotle project, a multi-year initiative by Google Inc. aimed at defining the characteristics of an ideal team in the workplace, found somewhat similar conditions for group effectiveness. It found that by far the most important factor is psychological safety. The other key factors in productivity are dependability, structure and clarity, personal meaning, and each team member feeling like they have an impact.
Work teams
Work teams (also referred to as production and service teams) are continuing work units responsible for producing goods or providing services for the organization. Their membership is typically stable, usually full-time, and well-defined. These teams are traditionally directed by a supervisor who mandates what work is done, who does it, and in what manner it is executed. Work teams are effectively used in manufacturing sectors such as mining and apparel, and in service-based sectors such as accounting, which utilizes audit teams.
Self-managed work teams
Self-managed work teams (also referred to as autonomous work groups) allow their members to make a greater contribution at work and constitute a significant competitive advantage for the organization. These work teams determine how they will accomplish the objectives they are mandated to achieve and decide what route they will take to complete the current assignment. Self-managed work teams are granted the responsibility of planning, scheduling, organizing, directing, controlling and evaluating their own work process. They also select their own members and evaluate the members' performance. Self-managed work teams have been favored for their effectiveness over traditionally managed teams due to their ability to enhance productivity, costs, customer service, quality, and safety. Self-managed work teams do not always have positive results, however. These teams can be expensive to start, have the potential for the greatest conflict, and are often difficult to monitor the progress of. The move to self-managed work teams at Levi Strauss & Co. in the 1990s pitted highly skilled and efficient workers against their slower counterparts, whom the faster workers felt were not sufficiently contributing to the team.
Parallel teams
Parallel teams (also referred to as advice and involvement teams) pull together people from different work units or jobs to perform functions that the regular organization is not equipped to perform well. These teams are given limited authority and can only make recommendations to individuals higher in the organizational hierarchy. Parallel teams are used for solving problems and activities that are in need of revision or improvement. Examples of parallel teams are quality circles, task forces, quality improvement teams, and employee involvement groups. The effectiveness of parallel teams is proven by the continuation of their usage and expansion throughout organizations, due to their ability to improve quality and increase employee involvement.
Project teams
Project teams (also referred to as development teams) produce new products and services for an organization or institution on a one-time or limited basis, with the copyrights of that new product or service belonging to the establishment it was made for once it is completed. The task of these teams may vary from improving a current project, concept or plan to creating entirely new projects with very few limitations. Project teams rely on their members being knowledgeable and well versed in many disciplines and functions, as this allows them to complete the task effectively. Once a project is completed, the team either disbands, with members individually moved to other special functions, or moves on to other projects and tasks that it can accomplish or develop as a unit. A common example of project teams are cross-functional teams. A project team's effectiveness is associated with the speed with which it is able to create and develop new products and services, which reduces time spent on individual projects.
Management teams
Management teams (also referred to as action and negotiation teams) are responsible for the coordination and direction of a division within an institution or organization during various assigned projects and functional, operational and/or strategic tasks and initiatives. Management teams are responsible for the total performance of the division they oversee with regards to day-to-day operations, delegation of tasks and the supervision of employees. The authority of these teams is based on the members' positions on the company's or institution's organizational chart. These management teams are composed of managers from different divisions (e.g. Vice President of Marketing, Assistant Director of Operations). An example of management teams are executive management teams, which consist of members at the top of the organization's hierarchy, such as the chief executive officer, board of directors, board of trustees, etc., who establish the strategic initiatives that a company will undertake over a long-term period (~3–5 years). Management teams have been effective by using their expertise to aid companies in adjusting to the current landscape of a global economy, which helps them compete with their rivals in their respective markets, produce unique initiatives that set them apart from their rivals and empower the employees who are responsible for the success of the organization or institution.
See also
References
Group processes
Industrial and organizational psychology
Organizational behavior
Teams
Joanna (Joka) Maria Vandenberg (born 1938) is a Dutch solid state chemist and crystallographer who immigrated to the United States in 1968. At Bell Telephone Laboratories, she made a major contribution to the success of the Internet. She invented, developed, and applied the X-ray scanning tool for quality control essential to manufacturing indium gallium arsenide phosphide-based multi-quantum well lasers. These are the lasers that amplify and modulate light that travels through optical fibers that are at the heart of today's Internet.
Early life
Joanna Vandenberg was born January 24, 1938, in Heemstede, a small town near Amsterdam, where she was the youngest of a family of five, and the first one to go to college. Her family was in the tulip business. In 1956 she graduated cum laude from gymnasium-β and went to the State University of Leiden in the Netherlands, where she received a B.S. in Physical Sciences and Mathematics in 1959 and an M.S. in Inorganic and Solid State Chemistry with A. E. van Arkel, as well as Theoretical Chemistry, in 1962. She studied with van Arkel in Leiden and Caroline H. MacGillavry in Amsterdam for a Ph.D. thesis on X-ray diffraction analysis of metal–metal bonding in inorganic compounds (1964).
Career
She worked for 4 years (1964–1968) at the Royal Dutch Shell laboratory in Amsterdam, where she joined the research group on catalytic properties of transition metal-layered chalcogenides. In 1968 she moved to Bell Laboratories, where she continued work on structural and magnetic properties of transition-metal chalcogenides. Her career was interrupted when she was laid off seven months into her first pregnancy. She was rehired in 1972 after the AT&T operators won a historic class action lawsuit over being fired when pregnant. With Bernd Matthias of UCSD, she started to work on metal cluster formation in superconducting ternary transition metal compounds. Her extensive knowledge of structural inorganic chemistry enabled her to predict inorganic crystal structures and led to the discovery of the superconducting rare earth ternary borides.
In 1980 she changed direction and began research on contact metallization on InGaAsP/InP multi-quantum well layers used as high speed digital lasers in the internet. She designed a temperature-dependent in-situ annealing X-ray diffractometer. This technique made it possible to optimize the electrical behavior of the gold metallization contacts and became a standard reference in semiconductor industry.
In 1986 Vandenberg turned her attention to the quality control of the crystal growth of InGaAsP multi-quantum well (MQW) layers, used as laser light sources and optical modulators designed to work in the 1.3 to 1.55 μm wavelength range. Advancing the design, performance and manufacturability of these devices had been the focus of all the leading optical component suppliers for decades. These devices are manufactured using organometallic vapor phase epitaxy, a complex process involving multiple sources subject to drift. Manufacture of early devices was based on unacceptably low (much less than 1%) end-to-end yields. Dramatic improvement was needed to produce the high performance components used to transport the massive amounts of data in today's Internet. In many cases mono-layer thickness control is required along with variations in bandgap less than 0.5%. This high level of quality control must be achieved using complex crystal growth machines which can fail in hundreds of ways. To ensure that these multiple failure modes do not impact the final device, Vandenberg designed a one-room (later bench-top) non-destructive high-resolution X-ray diffractometer to provide immediate on-line feedback into the MQW growth process. She constructed robust algorithms linking X-ray features to layer thickness and strain information essential to crystal growth control and optoelectronic device performance. Her X-ray diffraction technique is used to scan every laser wafer many times during manufacture. All Internet lasers are now manufactured using her X-ray crystallography tool, and their operational lifetime exceeds 25 years.
Awards
Vandenberg received the 1995 and 1997 Optoelectronics Award in recognition of contributions to the development of characterization and process control routines for manufacture of Lucent's world class semiconductor lasers.
She is a fellow of the American Physical Society and a corresponding member of the Royal Netherlands Academy of Arts and Sciences.
Selected publications
References
20th-century Dutch chemists
Dutch women chemists
Leiden University alumni
Shell plc people
Bell Labs
Crystallographers
Fellows of the American Physical Society
Members of the Royal Netherlands Academy of Arts and Sciences
20th-century Dutch women scientists
1938 births
Rare earth scientists
Living people
Women inventors
CyberArts International was a series of conferences dealing with emerging technologies that took place during years 1990, 1991, and 1992 in Los Angeles and Pasadena, California. The gatherings brought together artists and developers in all types of new media, including software engineers, electronic musicians, and graphic artists to explore what was a new field at the time, digital media collaborations.
A fourth, reunion exposition was held in San Francisco in September 2001, but its attendance was undercut by the transportation difficulties which followed the September 11 terrorist attacks.
The conferences dealt with the interrelationship between computer technology, visual design, music and sound, education, and entertainment.
History
Background
CyberArts International was a series of three annual conferences and exhibitions held in Southern California from 1990 to 1992, focusing upon emerging technologies and techniques for artists working to build interactivity or in the multimedia field. The expositions were originally developed by Dominic Milano, editor of Keyboard Magazine, who served as conference chair, in collaboration with Robert B. Gelman, event producer and Director of Business Development for Miller Freeman Expositions and assisted by Linda Jacobson, who later edited the anthology CyberArts: Exploring Art and Technology, published by Miller Freeman, Inc. in 1992.
Other paid staff members and volunteers also assisted in event preparation, including arts organizations YLEM and EZTV, as well as author and publisher Michael Gosney of Verbum Magazine, who later co-produced a series of Digital Be-Ins with Robert Gelman from 1993 to 1998.
The notion of cyberarts
The term cyberarts is a portmanteau combining the root word of cybernetics, dealing with the study of control systems in machines and human nervous systems, and the word for the broad creative fields dealing with the creation of objects of form, beauty, and expression. Inspiration for the CyberArts International conferences revolved around the artistic implications of the rapidly changing technologies related to computers, input devices, digital storage, networking, and reproduction — parallel technologies that were revolutionizing the traditional visual and sonic arts and making possible new forms of artistic expression.
As one enthusiast noted, these new and changing computer tools served to "enhance the creative process by making it easy to experiment with color schemes, sound layers, scene transitions, 3D models, photo retouching, and animated characters."
Convention structure
The motivating concept behind the CyberArts International conventions was a desire to bring together artists working in the various new media and the firms producing tools for such work, blending artistic exhibition with trade show. During the day the gatherings featured interactive exhibitions and aisles of traditional exhibit booths found at any ordinary trade show. There were in addition numerous interactive art installations, including some that could be ridden like amusement park rides and others that were, for their time, cutting-edge demonstrations of interactive games. Iconic technologies of the future such as CD-ROMs and virtual reality were demonstrated to participants at these tech expositions. The electronic publisher Jaime Levy exhibited and sold her floppy disk magazine "Cyber Rag", which was created in HyperCard.
The CyberArts International festivals also featured lectures and workshops dealing with the process of creation of new media art forms, allowing discussion about the rationale and implications of such work. There was also spontaneous collaborative art and performance in real-time.
Evening concerts were also held in conjunction with the CyberArts International festivals, featuring performances by musicians interested in new technologies such as Jaron Lanier, Stanley Jordan, Todd Rundgren, Tod Machover, and D'Cuckoo.
Events
Three annual conferences were held from 1990 to 1992. These were covered by local and national media, including the Los Angeles Times, Macworld, PC World, and Amusement Today, which reported on various aspects of the eclectic events. Media accounts likened the concerts associated with the conferences to a "Techno-Woodstock" or a "visionary party."
CyberArts International X, a 10-year reunion commemorating the original CyberArts International events was hosted at The Exploratorium in San Francisco on September 15 and 16, 2001. All of the original participants were invited to return and update one another on the developments of the decade past, and a few new art/technology innovations were to be unveiled. The September 11 terrorist attacks intervened, dramatically impacting the American air transportation system and preventing the participation of some scheduled conference participants.
Many key figures, including Fiorella Terenzi of Italy, were limited to participation via web-conference, marred by the relatively low bandwidth of the day. A Haiku Wall was created to allow attendees to express themselves, and performances featured a number of emerging artists of that time.
Event dates
See also
Boston Cyberarts Festival
Ars Electronica
Footnotes
Further reading
Michael Czeiszperger and Atau Tanaka, "CyberArts International," Computer Music Journal, vol. 16, no. 3 (Autumn 1992), pp. 92-96. In JSTOR
Kit Galloway and Sherrie Rabinowitz, "Welcome to 'Electronic Cafe International': A Nice Place for Hot Coffe, Iced Tea, and Virtual Space: Pictorially Enhanced Version", in Linda Jacobson (ed.), Cyberarts: Exploring Art & Technology. San Francisco, CA: Miller Freeman, Inc., 1992.
Cynthia Goodman, Digital Visions. New York: Harry N. Abrams, 1997.
Linda Jacobson (ed.), Cyberarts: Exploring Art & Technology. San Francisco, CA: Miller Freeman, Inc., 1992.
Theodor Holm Nelson, "Virtual World Without End" in Proceedings of Cyber Arts International Conference, September 1990.
Frank Popper, Art of the Electronic Age. New York: Harry N. Abrams, 1993.
Atau Tanaka, "CyberArts International," Computer Music Journal, vol. 15, no. 1 (Spring 1991), pp. 55-58. In JSTOR
External links
Official website
"CyberArts International 1991 Event Schedule", Miller Freeman, Inc.
"CyberArts International 1992 Event Schedule", Miller Freeman, Inc.
Bruce Damer, "CyberArts International X Conference Photo Album", Damer.com
Art festivals in the United States
Technology conferences
Technology conventions
Computer art
Digital media organizations
Computing culture
New media art festivals
Electronic music festivals in the United States | CyberArts International | Technology | 1,247 |
11,820,876 | https://en.wikipedia.org/wiki/Lentinus%20tigrinus | Lentinus tigrinus is a mushroom in the family Polyporaceae. It is classified as nonpoisonous. The mushroom has been reported to have significant antioxidant and antimicrobial activity.
References
Further reading
Fungal plant pathogens and diseases
Polyporaceae
Fungi of Europe
Taxa named by Jean Baptiste François Pierre Bulliard
Fungus species | Lentinus tigrinus | Biology | 71 |
19,726,608 | https://en.wikipedia.org/wiki/Optical%20properties%20of%20carbon%20nanotubes | The optical properties of carbon nanotubes are highly relevant for materials science. The way those materials interact with electromagnetic radiation is unique in many respects, as evidenced by their peculiar absorption, photoluminescence (fluorescence), and Raman spectra.
Carbon nanotubes are unique "one-dimensional" materials, whose hollow fibers (tubes) have a distinctive, highly ordered atomic and electronic structure and can be made in a wide range of dimensions. The diameter typically varies from 0.4 to 40 nm (a range of about 100 times), but the length can be vastly greater, implying a length-to-diameter ratio as high as 132,000,000:1, which is unequaled by any other material. Consequently, all the electronic, optical, electrochemical and mechanical properties of carbon nanotubes are extremely anisotropic (directionally dependent) and tunable.
Applications of carbon nanotubes in optics and photonics are still less developed than in other fields. Some properties that may lead to practical use include tuneability and wavelength selectivity. Potential applications that have been demonstrated include light emitting diodes (LEDs), bolometers and optoelectronic memory.
Apart from direct applications, the optical properties of carbon nanotubes can be very useful in their manufacture and application to other fields. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes, yielding detailed measurements of non-tubular carbon content, tube type and chirality, structural defects, and many other properties that are relevant to those other applications.
Geometric structure
Chiral angle
A single-walled carbon nanotube (SWCNT) can be envisioned as a strip of graphene (a single sheet of graphite) rolled and joined into a seamless cylinder. The structure of the nanotube can be characterized by the width of this hypothetical strip (that is, the circumference c or diameter d of the tube) and the angle α of the strip relative to the main symmetry axes of the hexagonal graphene lattice. This angle, which may vary from 0 to 30 degrees, is called the "chiral angle" of the tube.
The (n,m) notation
Alternatively, the structure can be described by two integer indices (n,m) that describe the width and direction of that hypothetical strip as coordinates in a fundamental reference frame of the graphene lattice. If the atoms around any 6-member ring of the graphene are numbered sequentially from 1 to 6, the two vectors u and v of that frame are the displacements from atom 1 to atoms 3 and 5, respectively. Those two vectors have the same length, and their directions are 60 degrees apart. The vector w = n u + m v is then interpreted as the circumference of the unrolled tube on the graphene lattice; it relates each point A1 on one edge of the strip to the point A2 on the other edge that will be identified with it as the strip is rolled up. The chiral angle α is then the angle between u and w.
The pairs (n,m) that describe distinct tube structures are those with 0 ≤ m ≤ n and n > 0. All geometric properties of the tube, such as diameter, chiral angle, and symmetries, can be computed from these indices.
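The geometric properties mentioned above follow directly from the (n, m) indices. A minimal Python sketch, assuming the standard graphene lattice constant a ≈ 0.246 nm (the function names are illustrative, not from any particular library):

```python
import math

A_LATTICE = 0.246  # graphene lattice constant in nm (assumed standard value)

def diameter_nm(n: int, m: int) -> float:
    """Tube diameter d = a * sqrt(n^2 + n*m + m^2) / pi, in nm."""
    return A_LATTICE * math.sqrt(n * n + n * m + m * m) / math.pi

def chiral_angle_deg(n: int, m: int) -> float:
    """Chiral angle between u and w: 0 deg for zigzag, 30 deg for armchair."""
    return math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))

# An armchair (10,10) tube has a chiral angle of exactly 30 degrees
print(round(diameter_nm(10, 10), 2), round(chiral_angle_deg(10, 10), 1))  # → 1.36 30.0
```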
The type also determines the electronic structure of the tube. Specifically, the tube behaves like a metal if |m–n| is a multiple of 3, and like a semiconductor otherwise.
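The metallic/semiconducting rule is a one-line check; a sketch, writing the condition as (n − m) mod 3 (equivalent to the |m−n| form above for the allowed indices 0 ≤ m ≤ n):

```python
def tube_type(n: int, m: int) -> str:
    """Metallic if n - m is a multiple of 3, semiconducting otherwise."""
    return "metal" if (n - m) % 3 == 0 else "semiconductor"

print(tube_type(10, 1))  # → metal
print(tube_type(8, 3))   # → semiconductor
```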
Zigzag and armchair tubes
Tubes of type (n,m) with n=m (chiral angle = 30°) are called "armchair" and those with m=0 (chiral angle = 0°) "zigzag". These tubes have mirror symmetry, and can be viewed as stacks of simple closed paths ("zigzag" and "armchair" paths, respectively).
Electronic structure
The optical properties of carbon nanotubes are largely determined by their unique electronic structure. The rolling up of the graphene lattice affects that structure in ways that depend strongly on the geometric structure type (n,m).
Van Hove singularities
A characteristic feature of one-dimensional crystals is that their distribution of density of states (DOS) is not a continuous function of energy, but it descends gradually and then increases in a discontinuous spike. These sharp peaks are called Van Hove singularities. In contrast, three-dimensional materials have continuous DOS.
Van Hove singularities result in the following remarkable optical properties of carbon nanotubes:
Optical transitions occur between the v1 − c1, v2 − c2, etc., states of semiconducting or metallic nanotubes and are traditionally labeled as S11, S22, M11, etc., or, if the "conductivity" of the tube is unknown or unimportant, as E11, E22, etc. Crossover transitions c1 − v2, c2 − v1, etc., are dipole-forbidden and thus are extremely weak, but they were possibly observed using cross-polarized optical geometry.
The energies between the Van Hove singularities depend on the nanotube structure. Thus by varying this structure, one can tune the optoelectronic properties of carbon nanotube. Such fine tuning has been experimentally demonstrated using UV illumination of polymer-dispersed CNTs.
Optical transitions are rather sharp (~10 meV) and strong. Consequently, it is relatively easy to selectively excite nanotubes having certain (n, m) indices, as well as to detect optical signals from individual nanotubes.
Kataura plot
The band structure of carbon nanotubes having certain (n, m) indexes can be easily calculated. A theoretical graph based on these calculations was designed in 1999 by Hiromichi Kataura to rationalize experimental findings. A Kataura plot relates the nanotube diameter and its bandgap energies for all nanotubes in a diameter range. The oscillating shape of every branch of the Kataura plot reflects the intrinsic strong dependence of the SWNT properties on the (n, m) index rather than on its diameter. For example, (10, 1) and (8, 3) tubes have almost the same diameter, but very different properties: the former is a metal, but the latter is a semiconductor.
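A crude Kataura-style enumeration can be sketched with the first-order tight-binding estimate E11 ≈ 2·aCC·γ0/d for semiconducting tubes. The carbon-carbon bond length aCC = 0.142 nm and hopping energy γ0 ≈ 2.9 eV used below are assumed textbook values; a real Kataura plot rests on far more careful calculations.

```python
import math

A_CC = 0.142   # carbon-carbon bond length in nm (assumed)
GAMMA0 = 2.9   # tight-binding hopping energy in eV (assumed)

def diameter_nm(n: int, m: int) -> float:
    return 0.246 * math.sqrt(n * n + n * m + m * m) / math.pi

def e11_ev(n: int, m: int):
    """Crude S11 gap estimate for semiconducting tubes; None for metallic ones."""
    if (n - m) % 3 == 0:
        return None
    return 2 * A_CC * GAMMA0 / diameter_nm(n, m)

# List tubes in a narrow diameter window, Kataura-plot style:
# (10,1) and (8,3) have similar diameters but very different character.
for n in range(5, 12):
    for m in range(0, n + 1):
        d = diameter_nm(n, m)
        if 0.74 <= d <= 0.84:
            e = e11_ev(n, m)
            label = "metal" if e is None else f"E11 ~ {e:.2f} eV"
            print(f"({n},{m})  d = {d:.3f} nm  {label}")
```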
Optical properties
Optical absorption
Optical absorption in carbon nanotubes differs from absorption in conventional 3D materials by presence of sharp peaks (1D nanotubes) instead of an absorption threshold followed by an absorption increase (most 3D solids). Absorption in nanotubes originates from electronic transitions from the v2 to c2 (energy E22) or v1 to c1 (E11) levels, etc. The transitions are relatively sharp and can be used to identify nanotube types. Note that the sharpness deteriorates with increasing energy, and that many nanotubes have very similar E22 or E11 energies, and thus significant overlap occurs in absorption spectra. This overlap is avoided in photoluminescence mapping measurements (see below), which instead of a combination of overlapped transitions identifies individual (E22, E11) pairs.
Interactions between nanotubes, such as bundling, broaden optical lines. While bundling strongly affects photoluminescence, it has much weaker effect on optical absorption and Raman scattering. Consequently, sample preparation for the latter two techniques is relatively simple.
Optical absorption is routinely used to quantify quality of the carbon nanotube powders.
The spectrum is analyzed in terms of intensities of nanotube-related peaks, background and pi-carbon peak; the latter two mostly originate from non-nanotube carbon in contaminated samples. However, it has been recently shown that by aggregating nearly single chirality semiconducting nanotubes into closely packed Van der Waals bundles the absorption background can be attributed to free carrier transition originating from intertube charge transfer.
Carbon nanotubes as a black body
An ideal black body should have emissivity or absorbance of 1.0, which is difficult to attain in practice, especially in a wide spectral range. Vertically aligned "forests" of single-wall carbon nanotubes can have absorbances of 0.98–0.99 from the far-ultraviolet (200 nm) to far-infrared (200 μm) wavelengths.
These SWNT forests (buckypaper) were grown by the super-growth CVD method to about 10 μm height. Two factors could contribute to strong light absorption by these structures: (i) a distribution of CNT chiralities resulted in various bandgaps for individual CNTs. Thus a compound material was formed with broadband absorption. (ii) Light might be trapped in those forests due to multiple reflections.
Luminescence
Photoluminescence (fluorescence)
Semiconducting single-walled carbon nanotubes emit near-infrared light upon photoexcitation, described interchangeably as fluorescence or photoluminescence (PL). The excitation of PL usually occurs as follows: an electron in a nanotube absorbs excitation light via S22 transition, creating an electron-hole pair (exciton). Both electron and hole rapidly relax (via phonon-assisted processes) from c2 to c1 and from v2 to v1 states, respectively. Then they recombine through a c1 − v1 transition resulting in light emission.
No excitonic luminescence can be produced in metallic tubes. Their electrons can be excited, thus resulting in optical absorption, but the holes are immediately filled by other electrons out of the many available in the metal. Therefore, no excitons are produced.
Salient properties
Photoluminescence from SWNT, as well as optical absorption and Raman scattering, is linearly polarized along the tube axis. This allows monitoring of the SWNTs orientation without direct microscopic observation.
PL is quick: relaxation typically occurs within 100 picoseconds.
PL efficiency was first found to be low (~0.01%), but later studies measured much higher quantum yields. By improving the structural quality and isolation of nanotubes, emission efficiency increased. A quantum yield of 1% was reported in nanotubes sorted by diameter and length through gradient centrifugation, and it was further increased to 20% by optimizing the procedure of isolating individual nanotubes in solution.
The spectral range of PL is rather wide. Emission wavelength can vary between 0.8 and 2.1 micrometers depending on the nanotube structure.
Excitons are apparently delocalized over several nanotubes in single chirality bundles as the photoluminescence spectrum displays a splitting consistent with intertube exciton tunneling.
Interaction between nanotubes or between a nanotube and another material may quench or increase PL. No PL is observed in multi-walled carbon nanotubes. PL from double-wall carbon nanotubes strongly depends on the preparation method: CVD grown DWCNTs show emission both from inner and outer shells. However, DWCNTs produced by encapsulating fullerenes into SWNTs and annealing show PL only from the outer shells. Isolated SWNTs lying on the substrate show extremely weak PL which has been detected in few studies only. Detachment of the tubes from the substrate drastically increases PL.
Position of the (S22, S11) PL peaks depends slightly (within 2%) on the nanotube environment (air, dispersant, etc.). However, the shift depends on the (n, m) index, and thus the whole PL map not only shifts, but also warps upon changing the CNT medium.
Raman scattering
Raman spectroscopy has good spatial resolution (~0.5 micrometers) and sensitivity (single nanotubes); it requires only minimal sample preparation and is rather informative. Consequently, Raman spectroscopy is probably the most popular technique of carbon nanotube characterization. Raman scattering in SWNTs is resonant, i.e., only those tubes are probed which have one of the bandgaps equal to the exciting laser energy. Several scattering modes dominate the SWNT spectrum, as discussed below.
Similar to photoluminescence mapping, the energy of the excitation light can be scanned in Raman measurements, thus producing Raman maps. Those maps also contain oval-shaped features uniquely identifying (n, m) indices. Contrary to PL, Raman mapping detects not only semiconducting but also metallic tubes, and it is less sensitive to nanotube bundling than PL. However, requirement of a tunable laser and a dedicated spectrometer is a strong technical impediment.
Radial breathing mode
Radial breathing mode (RBM) corresponds to radial expansion-contraction of the nanotube. Its frequency νRBM (in cm−1) therefore depends on the nanotube diameter d (in nanometers) as νRBM = A/d + B, where A and B are constants that depend on the environment in which the nanotube is present (for example, B = 0 for individual nanotubes). This relation is very useful for deducing the CNT diameter from the RBM position, using constants appropriate for SWNTs or DWNTs. The typical RBM range is 100–350 cm−1. If the RBM intensity is particularly strong, its weak second overtone can be observed at double the frequency.
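Inverting νRBM = A/d + B gives the diameter from a measured RBM frequency. A sketch; the defaults A = 234 cm−1·nm and B = 10 cm−1 are illustrative values sometimes quoted for bundled SWNTs, and the right constants depend on the sample environment:

```python
def rbm_to_diameter(nu_cm1: float, a: float = 234.0, b: float = 10.0) -> float:
    """Diameter in nm from an RBM frequency in cm^-1, via nu = a/d + b."""
    if nu_cm1 <= b:
        raise ValueError("RBM frequency must exceed the environmental offset b")
    return a / (nu_cm1 - b)

print(round(rbm_to_diameter(200.0), 2))  # → 1.23
```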
Bundling mode
The bundling mode is a special form of RBM supposedly originating from collective vibration in a bundle of SWNTs.
G mode
Another very important mode is the G mode (G from graphite). This mode corresponds to planar vibrations of carbon atoms and is present in most graphite-like materials. G band in SWNT is shifted to lower frequencies relative to graphite (1580 cm−1) and is split into several peaks. The splitting pattern and intensity depend on the tube structure and excitation energy; they can be used, though with much lower accuracy compared to RBM mode, to estimate the tube diameter and whether the tube is metallic or semiconducting.
D mode
The D mode is present in all graphite-like carbons and originates from structural defects. Therefore, the ratio of the G and D mode intensities is conventionally used to quantify the structural quality of carbon nanotubes: high-quality nanotubes have a G/D ratio significantly higher than 100. Because the ratio remains almost unchanged at low levels of functionalisation, it also gives an indication of the degree of functionalisation of a nanotube.
G' mode
The name of this mode is misleading: it is given because in graphite, this mode is usually the second strongest after the G mode. However, it is actually the second overtone of the defect-induced D mode (and thus should logically be named D'). Its intensity is stronger than that of the D mode due to different selection rules. In particular, D mode is forbidden in the ideal nanotube and requires a structural defect, providing a phonon of certain angular momentum, to be induced. In contrast, G' mode involves a "self-annihilating" pair of phonons and thus does not require defects. The spectral position of G' mode depends on diameter, so it can be used roughly to estimate the SWNT diameter. In particular, G' mode is a doublet in double-wall carbon nanotubes, but the doublet is often unresolved due to line broadening.
Other overtones, such as a combination of RBM+G mode at ~1750 cm−1, are frequently seen in CNT Raman spectra. However, they are less important and are not considered here.
Anti-Stokes scattering
All the above Raman modes can be observed both as Stokes and anti-Stokes scattering. As mentioned above, Raman scattering from CNTs is resonant in nature, i.e. only tubes whose band gap energy is similar to the laser energy are excited. The difference between those two energies, and thus the band gap of individual tubes, can be estimated from the intensity ratio of the Stokes/anti-Stokes lines. This estimate however relies on the temperature factor (Boltzmann factor), which is often miscalculated – a focused laser beam is used in the measurement, which can locally heat the nanotubes without changing the overall temperature of the studied sample.
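The Boltzmann factor underlying that estimate can be sketched as follows; this ignores the resonance conditions and frequency-dependent prefactors that matter in a real Stokes/anti-Stokes analysis:

```python
import math

K_B_EV = 8.617333e-5      # Boltzmann constant in eV/K
CM1_TO_EV = 1.239842e-4   # energy of a 1 cm^-1 phonon in eV

def anti_stokes_ratio(phonon_cm1: float, temp_k: float) -> float:
    """Thermal-occupation estimate of I_antiStokes / I_Stokes: exp(-E_ph / kT)."""
    return math.exp(-phonon_cm1 * CM1_TO_EV / (K_B_EV * temp_k))

# A 200 cm^-1 RBM phonon at room temperature
print(round(anti_stokes_ratio(200.0, 300.0), 2))  # → 0.38
```

Local laser heating raises the effective temperature and hence the anti-Stokes intensity, which is why the ratio is easy to misinterpret.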
Rayleigh scattering
Carbon nanotubes have very large aspect ratio, i.e., their length is much larger than their diameter. Consequently, as expected from the classical electromagnetic theory, elastic light scattering (or Rayleigh scattering) by straight CNTs has anisotropic angular dependence, and from its spectrum, the band gaps of individual nanotubes can be deduced.
Another manifestation of Rayleigh scattering is the "antenna effect", an array of nanotubes standing on a substrate has specific angular and spectral distributions of reflected light, and both those distributions depend on the nanotube length.
Applications
Light emitting diodes (LEDs) and photo-detectors based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is yet relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes.
Photoluminescence is used for characterization purposes to measure the quantities of semiconducting nanotube species in a sample. Nanotubes are isolated (dispersed) using an appropriate chemical agent ("dispersant") to reduce intertube quenching. Then PL is measured, scanning both the excitation and emission energies and thereby producing a PL map. The ovals in the map define (S22, S11) pairs, which uniquely identify the (n, m) index of a tube. The data of Weisman and Bachilo are conventionally used for the identification.
Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications.
Sensitization
Optical properties, including the PL efficiency, can be modified by encapsulating organic dyes (carotene, lycopene, etc.) inside the tubes. Efficient energy transfer occurs between the encapsulated dye and nanotube — light is efficiently absorbed by the dye and without significant loss is transferred to the SWNT. Thus potentially, optical properties of a carbon nanotube can be controlled by encapsulating certain molecule inside it. Besides, encapsulation allows isolation and characterization of organic molecules which are unstable under ambient conditions. For example, Raman spectra are extremely difficult to measure from dyes because of their strong PL (efficiency close to 100%). However, encapsulation of dye molecules inside SWNTs completely quenches dye PL, thus allowing measurement and analysis of their Raman spectra.
Cathodoluminescence
Cathodoluminescence (CL) — light emission excited by electron beam — is a process commonly observed in TV screens. An electron beam can be finely focused and scanned across the studied material. This technique is widely used to study defects in semiconductors and nanostructures with nanometer-scale spatial resolution. It would be beneficial to apply this technique to carbon nanotubes. However, no reliable CL, i.e. sharp peaks assignable to certain (n, m) indices, has been detected from carbon nanotubes yet.
Electroluminescence
If appropriate electrical contacts are attached to a nanotube, electron-hole pairs (excitons) can be generated by injecting electrons and holes from the contacts. Subsequent exciton recombination results in electroluminescence (EL). Electroluminescent devices have been produced from single nanotubes and their macroscopic assemblies. Recombination appears to proceed via triplet-triplet annihilation giving distinct peaks corresponding to E11 and E22 transitions.
Multi-walled carbon nanotubes
Multi-walled carbon nanotubes (MWNT) may consist of several nested single-walled tubes, or of a single graphene strip rolled up multiple times, like a scroll. They are difficult to study because their properties are determined by contributions and interactions of all individual shells, which have different structures. Moreover, the methods used to synthesize them are poorly selective and result in higher incidence of defects.
See also
Allotropes of carbon
Buckypaper
Carbon nanotube
Carbon nanotubes in photovoltaics
Graphene
Hiromichi Kataura
Mechanical properties of carbon nanotubes
Nanoflower
Potential applications of carbon nanotubes
Resonance Raman spectroscopy
Selective chemistry of single-walled nanotubes
Vantablack, a substance produced in 2014; one of the blackest substances known
References
External links
Selection of free-download articles on carbon nanotubes (New Journal of Physics)
Publications of H. Kataura — many of older ones are downloadable
Carbon Nanotube Black Body (AIST nano tech 2009)
Carbon nanotubes
Carbon nanotubes | Optical properties of carbon nanotubes | Physics | 4,384 |
3,078,180 | https://en.wikipedia.org/wiki/Kiss%20%28cryptanalysis%29 | In cryptanalysis, a kiss is a pair of identical messages sent using different ciphers, one of which has been broken. The term was used at Bletchley Park during World War II. A deciphered message in the breakable system provided a "crib" (piece of known plaintext) which could then be used to read the unbroken messages. One example was where messages read in a German meteorological cipher could be used to provide cribs for reading the difficult 4-wheel Naval Enigma cipher.
cribs from re-encipherments ... were known as 'kisses' in Bletchley Park parlance because the relevant signals were marked with 'xx'.
See also
Cryptanalysis of the Enigma
Known-plaintext attack
References
Smith, Michael and Erskine, Ralph (editors): Action this Day'' (2001, Bantam London)
Bletchley Park
Classical cryptography
Cryptographic attacks | Kiss (cryptanalysis) | Technology | 192 |
1,199,462 | https://en.wikipedia.org/wiki/Widow%27s%20walk | A widow's walk, also known as a widow's watch or roofwalk, is a railed rooftop platform often having an inner cupola/turret frequently found on 19th-century North American coastal houses. The name is said to come from the wives of mariners, who would watch for their spouses' return, often in vain as the ocean took their lives, leaving the women widows. In other coastal communities, the platforms were called captain's walks, as they topped the homes of the more successful captains; supposedly, ship owners and captains would use them to search the horizon for ships due in port.
However, there is little or no evidence that widow's walks were intended or regularly used to observe shipping. Widow's walks are in fact a standard decorative feature of Italianate architecture, which was very popular during the height of the Age of Sail in many North American coastal communities. The widow's walk is a variation of the Italianate cupola. The Italianate cupola, its larger instance being an archetypal belvedere, was an important ornate finish to this style, although it was often high maintenance and prone to leaks.
Beyond their use as viewing platforms, they are frequently built around the chimney of the residence, thus creating access to the structure. This allows the residents of the home to pour sand down burning chimneys in the event of a chimney fire in the hope of preventing the house from burning down.
See also
Belvedere
Bird hide
Gazebo
References
Roofs
Architectural elements
Observation decks
Widowhood | Widow's walk | Technology,Engineering | 306 |
24,434,579 | https://en.wikipedia.org/wiki/Artificial%20Cells%2C%20Nanomedicine%2C%20and%20Biotechnology | Artificial Cells, Nanomedicine, and Biotechnology is a peer-reviewed scientific journal that publishes articles on the development of artificial cells, tissue engineering, artificial organs, blood substitutes, cell therapy, gene and drug delivery systems, bioencapsulation nanosensors, nanodevices, and other areas of biotechnology. It is published by Taylor & Francis and the editors-in-chief are R.D.K. Misra (University of Texas at El Paso) and Wojciech Chrzanowski (University of Sydney).
References
External links
Academic journals established in 1973
Biochemistry journals
Biotechnology journals
8 times per year journals
Taylor & Francis academic journals | Artificial Cells, Nanomedicine, and Biotechnology | Chemistry,Biology | 135 |
46,286,281 | https://en.wikipedia.org/wiki/Bacteriophage%20Mu | Bacteriophage Mu, also known as mu phage or mu bacteriophage, is a muvirus (the first of its kind to be identified) of the family Myoviridae which has been shown to cause genetic transposition. It is of particular importance as its discovery in Escherichia coli by Larry Taylor was among the first observations of insertion elements in a genome. This discovery opened up the world to an investigation of transposable elements and their effects on a wide variety of organisms. While Mu was specifically involved in several distinct areas of research (including E. coli, maize, and HIV), the wider implications of transposition and insertion transformed the entire field of genetics.
Anatomy
Phage Mu is nonenveloped, with a head and a tail. The head has an icosahedral structure of about 54 nm in width. The neck is knob-like, and the tail is contractile with a base plate and six short terminal fibers. The genome has been fully sequenced and consists of 36,717 nucleotides, coding for 55 proteins.
History
Mu phage was first discovered by Larry Taylor at UC Berkeley in the late 1950s. His work continued at Brookhaven National Laboratory, where he first observed the mutagenic properties of Mu; several colonies of Hfr E. coli which had been lysogenized with Mu seemed to have a tendency to develop new nutritional markers. With further investigation, he was able to link the presence of these markers to the physical binding of Mu at certain loci. He likened the observed genetic alteration to the 'controlling elements' in maize, and named the phage 'Mu', for mutation. This, however, was only the beginning. Over the next sixty years, the complexities of the phage were fleshed out by numerous researchers and labs, resulting in a far deeper understanding of mobile DNA and the mechanisms underlying transposable elements.
Key Mu-related findings
1972–1975: Ahmad Bukhari shows that Mu can insert randomly and prolifically throughout an entire bacterial genome, creating stable insertions. He also demonstrates that reversion of a gene to its original, undamaged form is possible upon excision of Mu.
1979: Jim Shapiro develops a Mu inspired model for transposition involving the ‘Shapiro Intermediate,’ in which both the donor and the target undergo two cleavages and then the donor is ligated into the target, creating two replication forks and allowing for both transposition and replication.
1983: Kiyoshi Mizuuchi develops a protocol for observing transposition in vitro using mini-Mu plasmids, allowing for a greatly increased understanding of the chemical components of transposition.
1994–2012: Because of shared mechanisms of insertion, Mu acts as a useful model for elucidating the process of HIV integration, eventually leading to HIV integrase inhibitors such as raltegravir in 2008. Additionally, Montano et al. created a crystal structure of the Mu bacteriophage transpososome, allowing for a detailed understanding of the process of Mu amplification.
References
External links
Phage Mu at ViralZone
Myoviridae
Viruses
Bacteriophages | Bacteriophage Mu | Biology | 650 |
76,268,455 | https://en.wikipedia.org/wiki/Edward%20W.%20Packel | Edward Wesler Packel (born July 23, 1941) is an American mathematician, game theorist, theoretical computer scientist, and expert on the use of Wolfram Mathematica in teaching mathematics. His 1981 book The Mathematics of Games and Gambling won the 1986 Beckenbach Book Prize.
Early life and education
Edward W. Packel was born on July 23, 1941 in Philadelphia, Pennsylvania. His parents were Israel Packel (1907–1987) and Reba Wesler Packel (1909–1995); the couple had three sons and no daughters. In 1959 Edward Packel matriculated at Amherst College and graduated there in 1963 with a B.A. in mathematics. At Amherst, he played soccer.
In autumn 1963 he became a graduate student at Massachusetts Institute of Technology (M.I.T.), where he graduated in 1967 with a Ph.D. in mathematics. His Ph.D. thesis Some Results on (C0) Semigroups and the Cauchy Problem was supervised by Gilbert Strang.
Career
From 1967 to 1971 Packel was an assistant professor of mathematics at Reed College. As a mathematician on the staff of the Chicago metropolitan area's Lake Forest College, he was an assistant professor from 1971 to 1975, an associate professor from 1975 to 1982, and a full professor from 1982 until his retirement as professor emeritus in 2013. At Lake Forest College, he was director of the computer center from 1972 to 1973 and chair of the department of mathematics and computer science from 1986 to 1996. At Columbia University he held a part-time appointment as a senior lecturer in computer science from 1983 to 1985. He is the author or co-author of several books and more than 35 refereed articles.
Packel was a visiting associate at California Institute of Technology (Caltech) for the academic year 1977–1978 and again in spring 1980. He held appointments as a visiting professor for the academic year 1989–1990 at Berkeley's Mathematical Sciences Research Institute (MSRI), for the academic year 1996–1997 at Australia's University of Sydney, and in autumn 2003 at Harvey Mudd College in Claremont, California. As a consultant, he has taught courses in Mathematica for Wolfram Research, conducted workshops for the Rocky Mountain Mathematics Consortium on consulting for business, and consulted on various topics related to gambling, including probability calculations and computer simulations for new games of chance, aspects of slot machine development, and project management for machines that play poker against people.
Personal life and avocations
In May 1968, Edward Packel married Doreen Humphreys. They divorced in 1979 after becoming the parents of three daughters, Amanda, Laura, and Lisa. Laura Packel became an epidemiologist and expert in family planning and reproductive health. In July 1980 Edward Packel married Kathryn Helen Dohrmann, whose father was a Lutheran pastor. There is a son Adrian from Edward Packel's second marriage. At Lake Forest College, Kathryn Dohrmann is an assistant professor of psychology, emerita.
Edward Packel helped to establish the Lake Forest/Lake Bluff Running Club, where he and his wife Kathryn participated together in running. In 2001, Edward set a new record time in the mile run for Illinois runners in his age group. He also enjoys golf; in August 2005 he won $180 at the Lake County Senior Invitational at Lake Bluff Golf Club.
Selected publications
Articles
1983
Books
Writing team: John B. Fink, Bonnie Gold, Robert A. Messer, and Edward W. Packel; xiii+165 pages; illustrated. Book cover & title page at maa.org
References
1941 births
Living people
American computer scientists
American education writers
American gambling writers
20th-century American mathematicians
21st-century American mathematicians
Game theorists
Amherst College alumni
Massachusetts Institute of Technology alumni
Reed College faculty
Lake Forest College faculty | Edward W. Packel | Mathematics | 778 |
1,203,932 | https://en.wikipedia.org/wiki/DVD%20recorder | A DVD recorder is an optical disc recorder that uses optical disc recording technologies to digitally record analog or digital signals onto blank writable DVD media. Such devices are available as either installable drives for computers or as standalone components for use in television studios or home theater systems.
As of March 1, 2007 all new tuner-equipped television devices manufactured or imported in the United States must include an ATSC tuner. The US Federal Communications Commission (FCC) has interpreted this rule broadly, including apparatus such as computers with TV tuner cards with video capture ability, videocassette recorders and standalone DVD recorders. NTSC DVD recorders are undergoing a transformation, either adding a digital ATSC tuner or removing over-the-air broadcast television tuner capability entirely. However, these DVD recorders can still record analog audio and analog video.
Standalone DVD recorders, alongside Blu-ray recorders, have been relatively scarce in the United States due largely to "restrictions on video recording" and piracy concerns.
The first DVD recorders appeared on the market in 1999–2000.
Technical information
Originally, DVD recorders supported one of three standards: DVD-RAM, DVD-RW (using DVD-VR), and DVD+RW (using DVD+VR), none of which are directly compatible. Most current DVD drives support both the + and - standards, while few support the DVD-RAM standard, which is not directly compatible with standard DVD drives.
Recording speed is generally denoted in values of X (similar to CD-ROM usage), where 1X in DVD usage is equal to 1.321 MB/s, roughly equivalent to a 9X CD-ROM. In practice, this is largely confined to computer-based DVD recorders, since standalone units generally record in real time, that is, 1X speed.
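The relationship between the two speed ratings can be verified with a short calculation. The sketch below uses the commonly cited nominal 1X transfer rates (1,385,000 bytes/s for DVD and 153,600 bytes/s for CD); these figures are assumptions brought in for illustration, not taken from the text above.

```python
# X-speed arithmetic using commonly cited nominal 1X rates (assumed values):
# 1X DVD = 1,385,000 bytes/s (11.08 Mbit/s); 1X CD = 153,600 bytes/s (150 KiB/s).

DVD_1X_BPS = 1_385_000   # bytes per second at 1X DVD speed
CD_1X_BPS = 153_600      # bytes per second at 1X CD speed

# 1X DVD expressed in MiB/s, close to the 1.321 MB/s figure quoted above
dvd_mib_per_s = DVD_1X_BPS / (1024 * 1024)

# How many CD "X" units one DVD "X" unit corresponds to
ratio = DVD_1X_BPS / CD_1X_BPS

print(f"1X DVD ≈ {dvd_mib_per_s:.2f} MiB/s ≈ {ratio:.1f}X CD")
```

The ratio works out to roughly 9, matching the comparison in the text.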
Recorders use a laser (usually 650 nm red) to read and write DVDs. The reading laser is usually not stronger than 5 mW, while the writing laser is considerably more powerful. The faster the writing speed is rated, the stronger the laser. DVD burner lasers often peak at about 100-400 mW in continuous wave (some are pulsed).
Computer-based DVD (Digital Versatile Disc) drives
DVD recorder drives are standard equipment in many computer systems on the market, after being initially popularized by the Pioneer/Apple SuperDrive; aftermarket drives can cost as little as $20. DVD recorder drives can be used in conjunction with DVD authoring software to create DVDs near or equal to commercial quality, and are also widely used for data backup and exchange. As a general rule, computer-based DVD recorders can also handle CD-R and CD-RW media; in fact, a number of standalone DVD recorders use drives designed for computers.
More recently, manufacturers have begun to phase out DVD drives from laptop computers in favor of portability and digital media.
Most internal drives are designed with SATA interfaces, with parallel ATA becoming increasingly rare. External drives often use the USB standard for connectivity.
DVD recorder drives manufactured since January 2000 are required by the DVD consortium to respect DVD region codes when reading a disc. The drives are incapable of assigning region codes when writing a disc as this is stored on a part of the disc to which PC based and standalone video recorders do not have write access.
DVD duplication systems are generally built out of stacks of drives, connected through a computer-based backplane.
Standalone DVD recorders
When the standalone DVD recorder first appeared on the Japanese consumer market in 1999, early units were very expensive, costing between $2500 and $4000 USD. More recently, DVD recorders from notable brands have dropped in price. Early units supported only DVD-RAM and DVD-R discs, but newer units can record to DVD-R, DVD-RW, DVD+R, DVD+RW, DVD-R DL and DVD+R DL. Certain models include mechanical hard disk drive-based digital video recorders (DVRs) to improve ease of use. Standalone DVD recorders generally have basic DVD authoring software built in.
In 2009, Panasonic introduced the world's first Blu-ray disc recorder capable of recording both DVDs and Blu-ray discs, with built-in satellite HDTV tuners. A year later, Panasonic introduced Blu-ray disc recorders with terrestrial HDTV tuners.
DVD recorders have technical advantages over VCRs, including:
Superior video and audio quality
Easy-to-handle smaller form-factor disc media, and higher durability compared to magnetic tape
Random access to video chapters, without the rewinding or fast-forwarding that serial-access tape requires
Onscreen multilingual subtitles and labeling not available on VCRs
No playback wear and tear
High-quality digital copying, with little or no generational quality loss
Improved editing on rewritable media
Playlisting
No risk of accidentally recording over existing content or unexpectedly running out of space during recording
Easily accessible recordings as a result of chapter menus
Note: Blu-ray disc recorders can record full high definition videos on BD-Rs and BD-REs.
Disadvantages include:
Slow initial access/load times due to the optical nature of the disc
Limited rewritability of DVD-RW/+RW discs (typically around 1,000 rewrites); DVD-RAM is better suited to high-frequency re-recording (around 100,000 rewrites)
Relatively short life of the laser diodes (average of about 2 years depending on usage).
In addition, DVDs recorded with DVD recorders in the standard DVD format must be finalized before they can be viewed in other DVD players. This disadvantage does not apply to discs recorded in the newer and more flexible DVD-VR format or the DVD+VR format; the latter (but not the former) is also compatible with DVD players. The implementation of MPEG-2 compression used on most standalone DVD recorders must compress the picture data in real time, producing results that may not be up to par with professionally rendered DVD video, which can take days to compress.
Standard-definition VCR-replacement DVD recorders typically have a set of standard recording modes fitting 1, 2, 4, 6, 8, or 10 hours of video (XP, SP, LP, EP, SLP, and SEP, respectively) on a single-layer 12 cm disc (DVD-5). These modes are comparable to the SP, LP, and SLP modes of VHS VCRs, which fit 2, 4, and 6 hours on a standard T-120 tape.
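The capacity arithmetic behind these modes can be sketched as follows. This assumes a nominal 4.7 GB single-layer disc and ignores audio streams and filesystem overhead, so the figures are rough upper bounds on average video bitrate, not actual encoder settings.

```python
# Back-of-envelope average bitrates implied by the recording modes above,
# on a nominal 4.7 GB (4.7e9 byte) single-layer DVD. Real encoders use
# somewhat lower video bitrates, since audio and overhead share the disc.

DISC_BYTES = 4.7e9
MODES = {"XP": 1, "SP": 2, "LP": 4, "EP": 6, "SLP": 8, "SEP": 10}  # hours

bitrates = {m: DISC_BYTES * 8 / (h * 3600) / 1e6 for m, h in MODES.items()}
for mode, mbit in bitrates.items():
    print(f"{mode:3s}: ~{mbit:.1f} Mbit/s average")
```

The longer modes trade bitrate (and hence picture quality) for recording time, which is the trade-off discussed later in this article.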
The United States converted its over-the-air television broadcasts to digital "ATSC" in June 2009. This will have a limited impact in ending the need for DVD recorders to perform real-time MPEG-2 encoding or transcoding. The only setup where ATSC could eliminate MPEG-2 encoding/transcoding in a DVD recorder would be where an antenna is hooked directly into a DVD recorder that has an integrated ATSC tuner. However, the DVD recorder will have to transcode the ATSC MPEG-2 into DVD-Video-compliant MPEG-2 if the ATSC MPEG-2 stream isn't already DVD-Video-compatible. This would require transcoding for all high-definition broadcasts and some if not all standard-definition broadcasts. The same general situation applies to digital cable service; only DVD recorders with integrated digital cable ("QAM") tuners can avoid transcoding, and then only if the digital cable system is already sending a DVD-Video-compatible MPEG-2 stream, which again requires transcoding of all HD content and some if not all SD content. All other setups (digital cable box's analog outputs to DVD recorder, satellite box's analog outputs to DVD recorder, DVD recorder tuning and recording analog cable channels which are still permitted after 2/2009, etc.) almost always involve an analog step with MPEG-2 encoding being necessary inside the DVD recorder.
A number of manufacturers have combined DVD recorders with mechanical hard disk drive-based digital video recorders, allowing for recording to large fixed disks, and the ability to view these recordings off the hard disk at a later date.
In Japan, AVCREC recorders, which are able to record MPEG-2 or AVC high-definition video from ISDB broadcasts with or without re-encoding, have become increasingly popular. Initially, AVCREC recorders used DVD recordable discs, but newer models are able to record onto Blu-ray discs as well as onto hard disk drives.
ATSC standalone DVD recorders
As a result of the North American digital switchover, tuner-equipped devices manufactured or imported into the United States are now required by the US Federal Communications Commission to include digital tuners.
This has caused most new VHS recorders to be implemented as DVD/VCR combo units, or to be manufactured without tuners. The US requirement of ATSC compatibility forces inclusion of MPEG-2 decoding hardware, which is already part of all DVD players but which otherwise would be unnecessary in an analog-only VCR.
A tunerless recorder does not have RF coaxial connections and can only be used to record from an external device, such as a cable converter box with a composite video output.
An ATSC-capable DVD unit can also serve as a more-powerful alternative to digital television adapters, which allow DTV reception with older NTSC analog televisions. The DVD recorders offer additional capabilities, such as automated VCR-style timeshifting of programming and a variety of output formats, that are deliberately not included in the most common mass-market US ATSC converters.
Unlike the more common digital television adapter boxes, newer DVD recorder units are able to tune both analog and digital signals - an advantage when receiving low-power television and foreign (analogue) signals. Some, however, do suffer from many of the same design limitations as the less costly converter boxes, including poorly designed signal strength meters, incomplete display of broadcast program information, incompatibility with antenna rotators or CEA-909 smart antennas and inability to add digital channels without wiping out all existing channels and rescanning the entire band.
A DVD recording of an over-the-air HDTV broadcast is at DVD resolution, which is inferior to the original broadcast with 720p or 1080i resolution. Some units also provide limited USB or flash memory interface capability, often only supporting viewing of digital camera still photos or playback of MP3s with no ability to write video to these media.
A number of DVD recorders are also capable of recording to SVCD, VCD and Audio CD formats. Recording to DVDs can be done at different speeds that may take between 1 and 6 hours (even up to 8 hours on certain models) on a standard (single sided 12 cm) blank DVD. A trade off exists between recording time and video quality.
MiniDVD recorders
8 cm miniDVDs are used on some digital camcorders, primarily those meant for a consumer market ("point and shoot"); such discs are usually playable on a full-sized DVD player, but may not record on a full-sized DVD recorder system. Though popular for their convenience (in the manner of VHS-C), DVD camcorders are not suitable for professional use due to higher levels of compression compared to MiniDV and the difficulty of editing MPEG-2 video.
See also
Digital video
Digital video recorder (DVR)
DVD
Optical disc recorder
Videocassette recorder (VCR, video recorder)
Video scaler "Upconverting"
References
Recorder
Optical computer storage
Recording devices
Television terminology
Video storage | DVD recorder | Technology | 2,408
11,569,517 | https://en.wikipedia.org/wiki/Pholiota%20variicystis | Pholiota variicystis is a species of fungus in the family Strophariaceae. It is a plant pathogen that infects apricots.
See also
List of Pholiota species
References
Fungi described in 1994
Fungal tree pathogens and diseases
Stone fruit tree diseases
Strophariaceae
Fungus species | Pholiota variicystis | Biology | 64 |
64,985,942 | https://en.wikipedia.org/wiki/1%2C3%2C5-Triheptylbenzene | 1,3,5-Triheptylbenzene (also called sym-triheptylbenzene) is an aromatic organic compound with the chemical formula C27H48 and a molar mass of 372.67 g/mol. It can be prepared by the hydrogenation (reduction) of 1,1',1''-(benzene-1,3,5-triyl)tris(heptan-1-one). Alternatively, 1-nonyne trimerizes to 1,3,5-triheptylbenzene when catalyzed by rhodium trichloride.
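As a sanity check on the quoted molar mass and on the trimerization route, the sketch below computes the mass of C27H48 (the formula implied by a benzene ring bearing three heptyl groups; an inference from the structure, not a value stated above) from standard atomic weights.

```python
# Mass-balance check for 1,3,5-triheptylbenzene, taking the product
# formula to be C27H48 (inferred: benzene ring with three C7H15 chains)
# and using standard atomic weights. A sanity check, not measured data.

C, H = 12.011, 1.008

def molar_mass(c_atoms, h_atoms):
    """Molar mass of a hydrocarbon C_c H_h in g/mol."""
    return c_atoms * C + h_atoms * H

product = molar_mass(27, 48)   # 1,3,5-triheptylbenzene
nonyne = molar_mass(9, 16)     # 1-nonyne, C9H16

print(f"C27H48: {product:.2f} g/mol")        # close to the quoted 372.67
print(f"3 x C9H16: {3 * nonyne:.2f} g/mol")  # trimerization conserves atoms
```

Three molecules of 1-nonyne contain exactly the atoms of one product molecule, consistent with a cyclotrimerization.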
References
Alkylbenzenes | 1,3,5-Triheptylbenzene | Chemistry | 139 |
78,531,452 | https://en.wikipedia.org/wiki/%C4%86miel%C3%B3w%20figurines | Ćmielów figurines are objects of small-scale (decorative) ceramic sculpture created in Poland during the 1950s and 1960s, reflecting the style of the so-called New Look or the Post-Sevastopol Thaw, cast in porcelain or porcelainite. They were designed by artists employed at the , particularly Henryk Jędrasiak, , , and Lubomir Tomaszewski, and produced in various ceramic manufacturing facilities across Poland. In 1964, the production of figurines designed by the Institute of Industrial Design was (with a few exceptions) moved to the Ćmielów Porcelain Works in Ćmielów.
Ćmielów figurines represent an important phenomenon in the applied arts of their time, with significance comparable to that of Nymphenburg and Meissen figurines in the 18th century and Copenhagen figurines in the era of modernism.
In 2021, the first biography of Lubomir Tomaszewski was published by Agora, authored by Katarzyna Rij and Jerzy A. Wlazło. The book explores the broader context associated with Ćmielów figurines.
Historical background
In the 1950s, a four-member team of artists was established at the Institute of Industrial Design in Warsaw under the leadership of Henryk Jędrasiak. The team included young graduates of the Warsaw Academy of Fine Arts – Mieczysław Naruszewicz, Hanna Orthwein, and Lubomir Tomaszewski. Their task was to create a new collection of porcelain sculptures, representing a contemporary take on decorative figurines. Work on the designs began in 1955, initially as experimental efforts due to the need to break away from traditional approaches to small-scale sculpture. Moreover, apart from Naruszewicz, the artists had no prior experience working with this type of material and needed to familiarize themselves with its properties.
The first design was completed in mid-1956. It was Akt – Końska Wenus by Henryk Jędrasiak. This figurine still bore features of the earlier style, characterized by numerous angular details and a decorative base. However, subsequent designs – Jędrasiak's Jeleń and Naruszewicz's Dzik – increasingly showcased new trends. These included simplified forms and a synthesis of the object's silhouette, with deliberate deformations emphasizing specific distinctive elements.
The production of the figurines was entrusted to several factories: the Bogucice Porcelain Works, the Porcelain and Porcelainite Works in Chodzież, the Ćmielów Porcelain Works, the  in Jaworzyna Śląska, the Tułowice Porcelainite Works, the Krzysztof Porcelain Works in Wałbrzych, and the Wałbrzych Porcelain Works. The figurines were nominally produced under the supervision of their creators, though in practice this supervision was seldom exercised. Designs were often adjusted locally for production, resulting in variations of the same project in different factories. For example, Sowa, designed by Hanna Orthwein, differs in claw shapes between the versions from the Ćmielów and Karolina works.
In 1964, the production of all small-scale sculpture designs was taken over by the Świt division of the Ćmielów Porcelain Works. However, Tułowice retained the rights to produce most of the Institute of Industrial Design's figurines, while Bogucice did not transfer the original molds created in their facility.
Already in 1956, Ćmielów figurines achieved exhibition success at the Poznań International Fair. Until 1964, they were featured attractions at nearly all domestic and international exhibitions and trade fairs. They were showcased at the Leipzig Trade Fair, in New York and Chicago, at the Second Polish Industrial Exhibition in Moscow in 1959, and at the Polish Exhibition of Glass and Ceramics in Berlin.
The English magazine The Studio highlighted them in its annual special editions dedicated to the best designs in applied arts: in 1959 (Dzik and Batalion by M. Naruszewicz), 1960 (Gibbon by H. Orthwein and Kura by L. Tomaszewski), 1961 (Gołębie and Gazela by H. Jędrasiak), and 1962 (Bawół afrykański by L. Tomaszewski).
The design of these figurines continued at the Institute of Industrial Design until about 1965. During the late 1950s and early 1960s, they were widely available consumer goods. By the late 1970s, the Ćmielów Porcelain Works attempted to reintroduce them into production under a new program, but this effort met resistance and was not pursued further until the 1990s. In 1991, the Ćmielów factories declared bankruptcy, but their new owner initiated a reissue of the Institute of Industrial Design patterns.
The new owner, , purchased the Ćmielów Porcelain Works in 1996 and 1997. Along with the factory, he acquired the original molds and models of the figurines. Over four years, damaged designs were restored, a new design studio was established, and production resumed in 2000. Ćmielów figurines are now made from English porcelain, with each figurine accompanied by a certificate of authenticity, a serial number, a trademark (AS Ćmielów), the year of production, and, in some cases, a limited-edition label. In 2004, Adam Spała published a catalog featuring the figurines listed by their catalog names. The catalog includes photographs of the figurines, each labeled with a consecutive number, as well as information about the designer and the year the design was created.
The catalog also explains how to interpret the markings on the figurines. For numbered figurines, the markings follow this format: 1/13/02
1 represents the catalog number
13 indicates the sequential number of the figurine produced in that year
02 shows the year the figurine was created.
For items in a limited edition series, the marking would be: 13/500
13 is the sequential number of the figurine since 2000
500 represents the size of the edition.
The first figurines sold were numbered 13. Figurines numbered 1 to 12 were given a gold certificate and reserved for the owner of the AS Porcelain Factory and his friends.
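The two marking formats described above can be illustrated with a small, hypothetical parser. The function and field names below are illustrative, not taken from the catalog, and the assumption that two-digit years refer to 2000 onward follows from production having resumed in 2000.

```python
# Hypothetical helper for interpreting the two marking formats described
# in the catalog. Names and the 2000-onward year convention are my own
# assumptions, not the catalog's.

def parse_marking(marking: str, limited_edition: bool = False) -> dict:
    parts = [int(p) for p in marking.split("/")]
    if limited_edition:
        # e.g. "13/500": sequential number since 2000 / edition size
        seq, edition_size = parts
        return {"sequential": seq, "edition_size": edition_size}
    # e.g. "1/13/02": catalog no. / sequential no. that year / year
    catalog, seq, year = parts
    return {"catalog": catalog, "sequential": seq, "year": 2000 + year}

print(parse_marking("1/13/02"))
print(parse_marking("13/500", limited_edition=True))
```

For the example marking 1/13/02, this yields catalog number 1, the 13th figurine produced that year, made in 2002.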
Characteristics
Ćmielów figurines represent the style known as the New Look or the Post-Sevastopol Thaw, characterized by biomorphic lines, asymmetry, and abstract patterns. The shapes of the figurines simplify the silhouettes of the objects depicted, omitting details except for a few distinctive elements, which are emphasized and define the overall expression of the figurine, reflecting the essence of the object. This approach was based on the observation of nature and a deep understanding of the subject being portrayed. Transparency, lightness, and a sense of openness were used, evoking associations with the work of Henry Moore, as seen in Jędrasiak's Siedząca dziewczyna and, particularly, in the works of Lubomir Tomaszewski (e.g., Dama z lustrem). Typically, the objects are depicted in a static pose, though some figurines suggest movement, such as Naruszewicz's Jeździec meksykański and Bizon.
Researchers have identified around 130 designs of Ćmielów figurines, though incomplete project documentation has led to challenges in attributing some of them. The thematic scope of the designs was inspired by the animal world, with an emphasis on domestic, farm, forest, exotic animals, and a few prehistoric creatures (e.g., Brontozaurus and Ichtiozaurus by H. Jędrasiak, Mamut by L. Tomaszewski), as well as birds. Human figures made up a smaller group.
Figurine models typically came in one size, with some exceptions, such as H. Orthwein's Pingwin, which was available in three sizes, and L. Tomaszewski's Pocałunek, which was produced in two sizes. There were also designs featuring group compositions, the first being H. Jędrasiak’s Gołąbki, which consisted of two figurines. A unique case was the two-part sculptures by Jędrasiak – Pawian, Marabut, and Bażant – which consisted of two separate pieces that formed a complete figurine when assembled. These were not introduced into mass production and remained in the prototype stage.
An important element influencing the final form of the figurines was their painting. In addition to glaze painting, spray painting and selective spraying techniques were used. These techniques enhanced the realism of the design, mimicking the feathers or fur of animals (as in Naruszewicz's Czapla), or emphasized the sculptural qualities with an abstract character. Although bold and expressive colors were used (e.g., Arabka by L. Tomaszewski), the most common palette was a range of greys, based on contrasts of white and black, reflecting the existentialist fashion prevailing in the culture. Multiple painting designs were created for one model, sometimes changing the entire expression of the figurine (e.g., Śpiewaczka by L. Tomaszewski).
The painters involved in the decoration of these figurines included designers from the Institute of Industrial Design, such as , Barbara Frybes, Zofia Przybyszewska, Zofia Galińska, , and . According to Barbara Banaś, some of the painting designs were also created within the factories that produced the figurines.
Designers
The main designers of Ćmielów figurines were members of the team formed at the Institute of Industrial Design: Henryk Jędrasiak, Mieczysław Naruszewicz, Hanna Orthwein, and Lubomir Tomaszewski:
Henryk Jędrasiak was the author of 24 models, including both human and animal figures. He did not shy away from abstraction but avoided distorting the shapes, simplifying them instead. Objects were often reduced to the shape of a triangle, as seen in Ryba skalar. By twisting the head or torso of the object, he added dynamics to the figurine. Most of his designs were decorated in black and white. He believed that decoration should fit the form well and organize its surface appropriately.
Mieczysław Naruszewicz was the author of 44 models, primarily animal figures, especially birds. When simplifying the depicted figures, he reduced the number of support points, such as in his Dzik design, where the limbs of the animal are connected.
Hanna Orthwein created 33 models and was a well-regarded animalist. She claimed that each of her projects aimed to illustrate a specific construction or compositional problem, refer to trends in contemporary art, and prepare the viewer for a better understanding of it.
Lubomir Tomaszewski designed 34 models, focusing mostly on human figures. He was the boldest of the team when it came to deformation, using straight lines to emphasize verticality. He promoted the image of the modern woman, creating slender figures with hair tied in a ponytail, such as in Dziewczyna w spodniach. He applied bold decorations with distinct color patches.
In the 1950s and 1960s, small-scale sculptural works were created that were not directly considered Ćmielów figurines by researchers, but they fit the aesthetic created by them. These works came from other designers associated with the Institute of Industrial Design, as well as from designers employed in modeling centers established by a 1952 directive at porcelain factories.
Other designers at the Institute of Industrial Design involved in the creation of figurines include Zdana Kosicka, who created models of Kotek/Kotek Siedzący and Konik/Konik mały in the early 1950s, and Liliana Borenowska-Ziemka, who created a Kotek model in the 1950s. Kosicka’s Kotek was realized at the Bogucice factory, while the other two models were produced in Ćmielów. In Ćmielów, designed Wesoły byczek (Fernando) and Kazimierz Czuba designed Zajączek/Zajączek ze stojącymi słuchami.
At the Bogucice factory, Paweł Karasek designed Dziewczyna z gitarą/Dziewczyna z mandoliną, and created five designs, including Pierwszy bal. For the Krzysztof factories in Wałbrzych, Stanisław Olszamowski designed Biały niedźwiedź and Panna plażowa, while Zbigniewa Śliwowska-Wawrzyniak created Grzybiarka/Dama z koszykiem. Jan Kwinta, head of the modeling center at Krzysztof, designed Mrówkojad.
In addition to the models mentioned above, there are several figurine designs created in Chodzież, Karolina, and Krzysztof factories whose designers remain unidentified. Some of them are attributed to Jędrasiak, Naruszewicz, Orthwein, or Tomaszewski.
References
Bibliography
Figurines
Design
Arts in Poland
Porcelain sculptures | Ćmielów figurines | Engineering | 2,745 |
198,961 | https://en.wikipedia.org/wiki/Soyuz%2011 | Soyuz 11 () was the only crewed mission to board the world's first space station, Salyut 1. The crew, Georgy Dobrovolsky, Vladislav Volkov, and Viktor Patsayev, arrived at the space station on 7 June 1971, and departed on 29 June 1971. The mission ended in disaster when the crew capsule depressurised during preparations for re-entry, killing the three-person crew. The three crew members of Soyuz 11 are the only humans to have died in space.
Crew
Backup crew
Original crew
Crew notes
The original prime crew for Soyuz 11 consisted of Alexei Leonov, Valery Kubasov, and Pyotr Kolodin. A medical X-ray examination four days before launch suggested that Kubasov might have tuberculosis, and according to the mission rules, the prime crew was replaced with the backup crew. For Dobrovolsky and Patsayev, this was to be their first space mission. After the failure of Salyut 2 to orbit, Kubasov and Leonov were reassigned to Soyuz 19 for the Apollo-Soyuz Test Project in 1975.
Mission
Parameters
Mass:
Perigee:
Apogee:
Inclination: 51.6°
Period: 88.3 minutes
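As a rough consistency check, the quoted 88.3-minute period corresponds, for an idealized circular orbit and using standard textbook constants rather than mission data, to an altitude of roughly 200 km, in line with Salyut 1's low orbit.

```python
# Sanity check on the quoted orbital period: Kepler's third law gives the
# semi-major axis for an idealized circular orbit, and subtracting a mean
# Earth radius gives an approximate altitude. Constants are standard
# textbook values, not taken from the mission record.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000         # mean Earth radius, m

period_s = 88.3 * 60
a = (MU_EARTH * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000

print(f"semi-major axis ≈ {a/1000:.0f} km, altitude ≈ {altitude_km:.0f} km")
```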
Flight
The Soyuz 7K-OKS spacecraft was launched on 6 June 1971, from the Baikonur Cosmodrome in the central Kazakh Soviet Socialist Republic, and used the callsign Yantar (Amber). Several months earlier, the first mission to the Salyut, Soyuz 10, had failed to successfully dock with the station. During the first day of the flight, maneuvers were made to effect a rendezvous with the uncrewed Salyut (1971-032A). When Soyuz 11 was from Salyut, automatic devices took over, and in 24 minutes closed the gap between the two ships to and reduced the relative speed difference to . Control of the ships went from automatic back to manual at . Docking took 3 hours 19 minutes to complete and involved making the connection mechanically rigid, engaging various electrical and hydraulic links, and establishing air-tight seals before locks could be opened. When the pressure was equalized between the ships, the locks were opened and all three members of the crew passed into Salyut 1. Soyuz 11 successfully docked with Salyut 1 on 7 June 1971 and the cosmonauts remained on board for 22 days, setting space endurance records that would hold until the American Skylab 2 mission in May and June 1973.
Upon first entering the station, the crew encountered a smoky and burnt atmosphere, and after replacing part of the ventilation system, spent the next day back in their Soyuz until the air cleared. Their stay in Salyut was productive, including live television broadcasts. A fire broke out on day 11 of their stay, causing mission planners to consider abandoning the station. The planned highlight of the mission was to have been the observation of an N1 rocket launch, but the launch was postponed. The crew also found that using the exercise treadmill, as they were required to twice a day, caused the whole station to vibrate. Pravda released news of the mission and regular updates while it was in progress.
Re-entry and death
On 29 June 1971, the three cosmonauts loaded scientific specimens, films, tapes, and other gear into Soyuz 11, then transferred manual control back from Salyut 1 to Soyuz 11 and returned to their ferry craft, with undocking occurring at 18:28 GMT. Soyuz 11 flew in a nearby orbit for a while before it retro-fired at 22:35 GMT in preparation for re-entry. Before re-entering Earth's atmosphere, both the work compartment and the service module were jettisoned. This occurred at about 22:47 GMT. Radio communications abruptly ended when the work compartment separated, well before the normal ionospheric blackout.
Almost 25 minutes later, Soyuz 11's automatic systems landed the craft at 23:16:52 GMT, southwest of Karazhal in Kazakhstan, after an abnormally silent return to Earth. The total flight duration of the crew had been 570.22 hours and involved 383 orbits—18 prior to docking, 362 docked, and three after undocking. When the recovery team opened the capsule of the Soyuz 11, they found all three men dead.
Kerim Kerimov, chair of the State Commission, recalled: "Outwardly, there was no damage whatsoever. They knocked on the side, but there was no response from within. On opening the hatch, they found all three men in their couches, motionless, with dark-blue patches on their faces and trails of blood from their noses and ears. They removed them from the descent module. Dobrovolsky was still warm. The doctors gave artificial respiration. Based on their reports, the cause of death was suffocation".
Cause of death
It quickly became apparent that the cosmonauts had asphyxiated. The fault was traced to a breathing ventilation valve, located between the orbital module and the descent module, that had been jolted open as the descent module separated from the service module, 12 minutes and 3 seconds after retrofire. The two modules were held together by explosive bolts designed to fire sequentially; in fact, they had fired simultaneously. The explosive force of the simultaneous bolt firing caused the internal mechanism of the pressure equalisation valve to loosen a seal that was usually discarded later and which normally allowed for automatic adjustment of the cabin pressure. The valve opened at an altitude of , and the resultant loss of pressure was fatal in less than a minute. The valve was located beneath the seats and was impossible to find and block before the air was lost. Flight recorder data from the single cosmonaut outfitted with biomedical sensors showed cardiac arrest occurred within 40 seconds of pressure loss. By 15 minutes 35 seconds after retrofire, the cabin pressure was zero, and remained there until the capsule entered the Earth's atmosphere. Patsayev's body was found positioned near the valve, and he may have been attempting to close or block the valve at the time he lost consciousness. An extensive investigation was conducted to study all components and systems of Soyuz 11 that could have caused the accident, although doctors quickly concluded that the cosmonauts had died of asphyxiation.
The autopsies at Burdenko Main Military Clinical Hospital found that the cause of death for the cosmonauts was haemorrhaging of the blood vessels in their brains, with lesser amounts of bleeding under their skin, in their inner ears, and in their nasal cavities, all of which occurred as exposure to a vacuum environment caused the oxygen and nitrogen in their bloodstreams to bubble and rupture vessels. Their blood was also found to contain heavy concentrations of lactic acid; lactic acid buildup (in tissues and blood) is a sign of inadequate mitochondrial oxygenation, which may be due to hypoxemia (low blood oxygen), poor blood flow (e.g., decompression) or a combination of both. Although they could have remained conscious for almost 40 seconds after decompression began, less than 20 seconds would have passed before the effects of oxygen starvation made it impossible for them to function.
Aftermath
Alexei Leonov, who would have originally commanded Soyuz 11, had advised the cosmonauts before the flight that they should manually close the valves between the orbital and descent modules, as he did not trust them to shut automatically, a procedure he thought up during extensive time in the Soyuz simulator. However, it appears that the crew did not do this. After the flight, Leonov went back and tried closing one of the valves himself, and found that it took nearly a minute to do so, too long in an emergency situation with the spacecraft's atmosphere escaping fast.
The Soviet state media attempted to downplay the tragic end of the mission, and instead emphasized its accomplishments during the crew's stay aboard Salyut 1. Since they did not publicly announce the exact cause of the cosmonauts' deaths for almost two years afterwards, United States space planners were extremely worried about the upcoming Skylab program, as they could not be certain whether prolonged time in a micro-g environment had turned out to be fatal. However, NASA doctor Charles Berry maintained a firm conviction that the cosmonauts could not have died from spending too many weeks in weightlessness. Until the Soviets finally disclosed what had really happened, Berry theorised that the crew had died from inhaling toxic substances.
A film that was later declassified showed support crews attempting cardiopulmonary resuscitation (CPR) on the cosmonauts. Until the autopsies, it was not known that the crew had died of capsule depressurisation. The ground crew had lost audio contact with the crew before re-entry began and had already begun preparations for contingencies in case the crew had been lost.
The cosmonauts were given a large state funeral and buried in the Kremlin Wall Necropolis at Red Square, Moscow, near the remains of Yuri Gagarin. They were also each posthumously awarded the Hero of the Soviet Union medal.
The United States sent Tom Stafford, then NASA's Chief Astronaut, to represent President Richard Nixon at the funeral, where the Soviets asked him to be one of the pallbearers. It came at the beginning of a period of more cordial relations between the two nations that would lead to the joint Apollo–Soyuz mission. President Nixon also issued an official statement following the accident.
The Soyuz spacecraft was extensively redesigned after this incident to carry only two cosmonauts. The extra room meant that the crew could wear Sokol space suits during launch and landing. The Sokol was a lightweight pressure suit intended for emergency use; updated versions of the suit remain in use.
Memorials
The Soyuz 11 landed south-west of Karazhal, Karagandy, Kazakhstan, and about north-east of Baikonur. A memorial monument in the form of a three-sided metallic column, with the engraved image of the face of each crew member set into a stylized triangle on each of the three sides, was placed at the site. The memorial is in open, flat country, far from any populated area, within a small, circular fence. In 2012, the memorial was found to have been vandalized beyond repair, with only the base of the metallic column remaining and any roads leading to it overgrown. In 2013, Russian space agency Roscosmos restored the site with a redesigned monument, reflecting the three-sided form of the original, but this time constructed from brick. Also placed at the site was a sign explaining the history of the location and the fate of the original monument.
Craters on the Moon were named after the three cosmonauts: Dobrovolʹskiy, Volkov, and Patsaev. The names of the three cosmonauts are included on the Fallen Astronaut commemorative plaque placed on the Moon during the Apollo 15 mission in August 1971. To honour the loss of the Soyuz 11 crew, a group of hills on Pluto is also named Soyuz Colles.
In the city of Penza, Russia, near school gymnasium No. 39, a memorial stele was erected in honour of the dead cosmonauts, bearing a quote from the poet Yevgeny Yevtushenko: "Between our Motherland and you is a two-way eternal connection".
In addition to the Soviet postage stamp depicted above, a series of postage stamps of the Emirate of Ajman and Bulgaria was issued in memory of the cosmonauts in 1971. Equatorial Guinea also released a series of stamps depicting the entire Soyuz 11 mission.
See also
List of spaceflight-related accidents and incidents
Apollo 1
Space Shuttle Challenger disaster
Space Shuttle Columbia disaster
Timeline of longest spaceflights
Notes
References
Further reading
External links
Mir Hardware Heritage – 1.7.3 (Wikisource)
Webpage with pictures of the original and replaced memorial
Crewed Soyuz missions
Space program fatalities
Space accidents and incidents in the Soviet Union
1971 in the Soviet Union
Decompression accidents and incidents
Spacecraft launched in 1971
Spacecraft which reentered in 1971
Space missions that ended in failure | Soyuz 11 | Chemistry,Engineering | 2,494 |
9,088,666 | https://en.wikipedia.org/wiki/Kenneth%20Murray%20%28biologist%29 | Sir Kenneth "Ken" Murray (30 December 1930 – 7 April 2013) was a British molecular biologist and the Biogen Professor of Molecular Biology at the University of Edinburgh.
An important early figure in genetic engineering, Murray cofounded Biogen. There, he and his team developed one of the first vaccines against hepatitis B. Along with his wife, biologist Lady Noreen (née Parker), Murray also founded the Darwin Trust of Edinburgh, a charity supporting young biologists in their doctoral studies.
Education and career
Murray achieved a first-class honours degree in chemistry followed by a PhD from the University of Birmingham. From 1960 to 1964 he was a researcher at J. Murray Luck's laboratory at Stanford University and from 1964 to 1967 he was a researcher at Fred Sanger's laboratory at Cambridge University. In 1967, he was appointed lecturer at the University of Edinburgh and in 1976 he became Head of Molecular Biology. In 1984 he was appointed Biogen Professor of Molecular Biology, a post which he retained until his retirement. He was elected a Fellow of the Royal Society in 1979, a Fellow of the Royal Society of Edinburgh in 1989, and awarded the RSE Royal Medal in 2000 with the citation "For their outstanding contribution to the development of Biotechnology, both nationally and internationally, through his development of what is now known as recombinant DNA technology."
Personal life
Murray was born in Yorkshire and brought up in the Midlands. He left school at the age of 16 to become a laboratory technician at Boots in Nottingham. He studied part-time and obtained a degree in chemistry and then a PhD in microbiology from University of Birmingham.
Sir Kenneth's wife, Lady Noreen Murray CBE, was elected a Fellow of the Royal Society in 1982. She died on 12 May 2011 aged 76.
References
External links
Laureation address on Sir Kenneth Murray at University of Dundee - Graduation 2000
English biologists
British molecular biologists
Fellows of the Royal College of Pathologists
Fellows of the Royal Society
Fellows of the Royal Society of Edinburgh
History of biotechnology
Knights Bachelor
1930 births
2013 deaths
Alumni of the University of Birmingham
Academics of the University of Edinburgh
Scientists from Yorkshire | Kenneth Murray (biologist) | Biology | 429 |
2,365,925 | https://en.wikipedia.org/wiki/NetJet | NetJet was the first commercially available web accelerator. The product was developed by Peak Technologies (changed in 1997 to PeakSoft Multinet Corp.) in 1996 and released in November 1996 at COMDEX in Las Vegas, Nevada. NetJet was named a 'Best in Show' product in the internet category.
NetJet was derived from the ExpressO Java Server designed and developed by Charles T. ("Chuck") Russell, the founder of Innovative Desktop Inc., a Delaware corporation acquired by Peak Technologies in June 1996. ExpressO was the first widely distributed, commercially available Java application server.
NetJet's features provided better response times and enhanced download speeds for web browsers by performing work in the background. NetJet drew interest from the World Wide Web community in December 1996 when CNET wrote: "Web accelerators--one of the hottest tools on the Internet--are supposed to make life easier for Net surfers, but they are causing some headaches for Web site operators." At that time, link prefetching caused a number of HTTP requests to be delivered to each website the browser visited. While this increased browsing speed for the user, it also magnified web server traffic, much to the concern of website administrators.
NetJet paved the way for commercial Java technologies and may be considered notable because of the following:
It was the first commercially available, shrink-wrapped application written for the Java platform.
It was the first software product updated over the air via the internet.
It provided the first look ahead technologies to enable link prefetching.
It contained intelligent caching algorithms ensuring frequently visited content was fresh and up-to-date.
The product was the first software product to provide web update features allowing NetJet updates to be automatically downloaded online and applied to the product without user intervention.
The product acted as a client-side caching web proxy and was compatible with most web browsers. All fetched content was cached and updated within background threads based upon the user's browsing habits. The 'smart cache' used several algorithmic tricks to ensure that content users browsed regularly was kept fresh and up-to-date. This behavior sped up browsing, much of which was done over dial-up modems at speeds up to 56 kbit/s.
Peak Technologies changed the name of the product from NetJet to PeakJet in 1997 after settling a trademark dispute with NetJets Inc. (www.netjets.com).
PeakSoft's website was shut down in 2002-2003.
References
Web accelerators
1997 software
Internet properties established in 1997 | NetJet | Technology | 517 |
71,202,789 | https://en.wikipedia.org/wiki/HD%20193721 | HD 193721 (HR 7785) is an astrometric binary in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.77, allowing it to be faintly seen with the naked eye. Parallax measurements place the system 760 light years away from the Solar System and it is currently receding with a heliocentric radial velocity .
HD 193721 has a stellar classification of G6/8 II — intermediate between a G6 and 8 bright giant. At present it has 3.49 times the mass of the Sun, but has expanded to 24.4 times its girth. It shines with a luminosity of from its enlarged photosphere at an effective temperature of , giving a yellow hue. HD 193721 is metal deficient with an iron abundance 71% that of the Sun and spins leisurely with a projected rotational velocity of .
The system has an companion designated CPD −81°900. The object has a spectral classification of F8 and is located along a position angle of (as of 1998). CPD −81°900 is a foreground object, having a higher parallax and different proper motion.
References
G-type bright giants
Octans
Astrometric binaries
Double stars
193721
PD-81 00901
101427
7785
Octantis, 47 | HD 193721 | Astronomy | 275 |
54,504,773 | https://en.wikipedia.org/wiki/Euler%20characteristic%20of%20an%20orbifold | In differential geometry, the Euler characteristic of an orbifold, or orbifold Euler characteristic, is a generalization of the topological Euler characteristic that includes contributions coming from nontrivial automorphisms. In particular, unlike a topological Euler characteristic, it is not restricted to integer values and is in general a rational number. It is of interest in mathematical physics, specifically in string theory. Given a compact manifold quotiented by a finite group , the Euler characteristic of is
where is the order of the group , the sum runs over all pairs of commuting elements of , and is the space of simultaneous fixed points of and . (The appearance of in the summation is the usual Euler characteristic.) If the action is free, the sum has only a single term, and so this expression reduces to the topological Euler characteristic of divided by .
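The displayed formula did not survive extraction; the standard form of the orbifold Euler characteristic described above is the following (writing M for the compact manifold and G for the finite group, symbol names that are reconstructions rather than taken from the text):

```latex
\chi\!\left(M/G\right)
  \;=\; \frac{1}{|G|}
  \sum_{\substack{g,h \in G \\ gh = hg}}
  \chi\!\left(M^{\langle g,h\rangle}\right)
```

Here M^{⟨g,h⟩} denotes the set of points fixed simultaneously by g and h, and χ on the right is the ordinary topological Euler characteristic. For a free action, only the pair (e, e) contributes a nonempty fixed-point set, and the sum collapses to χ(M)/|G|, in agreement with the remark above.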
See also
Kawasaki's Riemann–Roch formula
References
Further reading
External links
https://mathoverflow.net/questions/51993/euler-characteristic-of-orbifolds
https://mathoverflow.net/questions/267055/is-every-rational-realized-as-the-euler-characteristic-of-some-manifold-or-orbif
Differential geometry
String theory | Euler characteristic of an orbifold | Astronomy,Mathematics | 270 |
49,289,615 | https://en.wikipedia.org/wiki/Entoloma%20moserianum | Entoloma moserianum is a species of fungus in the family Entolomataceae. Found in the Netherlands where it grows on the ground in deciduous forests, it was described as new to science in 1983 by Machiel Noordeloos. The species is classified in Entoloma section Entoloma, and is similar to E. sinuatum. The fruit bodies of E. moserianum are characterized by pale colors, yellow spots on the cap, gills, and stipe, and gill edges that are partially to completely sterile. Its spores measure 9.3–11.5 by 8.1–9.3 μm The specific epithet honors Austrian mycologist Meinhard Michael Moser.
See also
List of Entoloma species
References
Entolomataceae
Fungi of Europe
Fungi described in 1983
Taxa named by Machiel Noordeloos
Fungus species | Entoloma moserianum | Biology | 185 |
69,614,416 | https://en.wikipedia.org/wiki/Geometric%20logic | In mathematical logic, geometric logic is an infinitary generalisation of coherent logic, a restriction of first-order logic due to Skolem that is proof-theoretically tractable. Geometric logic is capable of expressing many mathematical theories and has close connections to topos theory.
Definitions
A theory of first-order logic is geometric if it can be axiomatised using only axioms of the form
where I and J are disjoint collections of formula indices, each of which may be infinite, and the formulae φ are either atoms or negations of atoms. If all the axioms are finite (i.e., for each axiom, both I and J are finite), the theory is coherent.
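The displayed axiom scheme is missing from the text above; in one common notation (a reconstruction, with x̄ denoting the tuple of free variables), the axioms have the shape:

```latex
\forall \bar{x}\,\Bigl(\,\bigwedge_{i \in I} \varphi_i \;\longrightarrow\; \bigvee_{j \in J} \varphi_j\,\Bigr)
```

with each φ an atom or a negated atom, as the surrounding paragraph states; coherent theories are the special case in which every I and J is finite.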
Theorem
Every first-order theory has a coherent conservative extension.
Significance
list eight consequences of the above theorem that explain its significance (omitting footnotes and most references):
In the context of a sequent calculus such as G3c, special coherent implications as axioms can be converted directly to inference rules without affecting the admissibility of the structural rules (Weakening, Contraction and Cut);
In similar terms, coherent theories are “the theories expressible by natural deduction rules in a certain simple form in which only atomic formulas play a critical part”;
Coherent implications form sequents that give a Glivenko class. In this case, the result, known as the first-order Barr's Theorem, states that if each Ii (0 ≤ i ≤ n) is a coherent implication and the sequent I1, …, In ⇒ I0 is classically provable, then it is intuitionistically provable;
There are many examples of coherent/geometric theories: all algebraic theories, such as group theory and ring theory, all essentially algebraic theories, such as category theory, the theory of fields, the theory of local rings, lattice theory, projective geometry, the theory of separably closed local rings (aka “strictly Henselian local rings”) and the infinitary theory of torsion abelian groups;
Coherent/geometric theories are preserved by pullback along geometric morphisms between topoi (Maclane & Moerdijk 1992, chapter X);
Filtered colimits in Set of models of a coherent theory T are also models of T;
Special coherent implications ∀x. C ⊃ D generalise the Horn clauses from logic programming, where D is required to be an atom; in fact, they generalise the “clauses” of disjunctive logic programs, where D is allowed to be a disjunction of atoms.
Effective theorem-proving for coherent theories can, with (in relation to resolution) relative ease and clarity, be automated. As noted by Bezem et al.: "...the absence of Skolemisation (introduction of new function symbols) is no real hardship, and the non-conversion to clausal form allows the structure of ordinary mathematical arguments to be better retained."
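To illustrate the point above about Horn clauses, here is a side-by-side comparison (the predicate symbols p, q, r, s are arbitrary placeholders, not drawn from the text):

```latex
\underbrace{\forall x\,\bigl(p(x) \wedge q(x) \rightarrow r(x)\bigr)}_{\text{Horn clause: a single atom as conclusion}}
\qquad
\underbrace{\forall x\,\bigl(p(x) \wedge q(x) \rightarrow r(x) \vee \exists y\, s(x,y)\bigr)}_{\text{special coherent implication: a disjunction is allowed}}
```

Every Horn clause is thus a special coherent implication, but not conversely, which is why coherent implications also capture the "clauses" of disjunctive logic programs.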
Notes
Bibliography
(Two volumes, Oxford Logic Guides 43 & 44, 3rd volume in preparation)
Further reading
Geometry
Logic | Geometric logic | Mathematics | 626 |
12,662,863 | https://en.wikipedia.org/wiki/Sapogenin | Sapogenins are aglycones (non-saccharide moieties) of saponins, a large family of natural products. Sapogenins contain steroid or other triterpene frameworks as their key organic feature. For example, steroidal sapogenins such as tiggenin, neogitogenin, and tokorogenin have been isolated from the tubers of Chlorophytum arundinaceum. Some steroidal sapogenins can serve as a practical starting point for the semisynthesis of particular steroid hormones.
Diosgenin and hecogenin are other examples of sapogenins.
References
Phytochemicals
Triterpenes
Alcohols | Sapogenin | Chemistry | 151 |
1,675,938 | https://en.wikipedia.org/wiki/Bosque | A bosque ( ) is a type of gallery forest habitat found along the riparian flood plains of streams, river banks, and lakes. It derives its name from the Spanish word for 'forest', pronounced .
Setting
In the predominantly arid or semi-arid southwestern United States, a bosque is an oasis-like ribbon of green forest, often canopied, that only exists near rivers, streams, or other water courses. The most notable bosque is the -long forest ecosystem along the valley of the middle Rio Grande in New Mexico that extends from Santa Fe, through Albuquerque and south to El Paso, Texas. One of the most famous and ecologically intact sections of the bosque is included in the Bosque del Apache National Wildlife Refuge, located south of San Antonio, NM. Another bosque, the wildlife refuge Bosque Alegre, can be found in Costa Rica.
Middle Rio Grande bosque
There are various refuges, parks, and trails for visitors, such as the Paseo Del Bosque trail in Albuquerque, New Mexico.
Flora and fauna
As a desert riparian forest, the middle Rio Grande bosque has a characteristic variety of flora and fauna. Common trees in the bosque habitat include mesquite, cottonwood, desert willow, and desert olive. Because there is often only a single canopy layer and because the tree species found in the bosque are generally deciduous, a wide variety of shrubs, grasses, and other understory vegetation is also supported. Desert hackberry, blue palo verde, graythorn (Condalia lycioides), Mexican elder (Sambucus mexicana), virgin's bower, and Indian root all flourish in the bosque. The habitat also supports a large variety of lichens. For a semi-arid region, there is extraordinary biodiversity at the interface of the bosque and surrounding desert ecosystems. Certain subsets of vegetative association are defined within the Kuchler scheme, including the Mesquite Bosque. In 2017, 150 different species of flora (trees, shrubs, forbs, and grasses) were documented in Albuquerque's Bosque (New Mexico, United States).
The bosque is an important stopover for a variety of migratory birds, such as ducks, geese, egrets, herons, and sandhill cranes. Year-round avian residents include Red-tailed hawks, Cooper's hawks, American kestrels, hummingbirds, owls, woodpeckers, and the southwestern willow flycatcher. Over 270 species of birds can be found in Albuquerque's Bosque (New Mexico, United States). Aquatic fauna of the bosque include the endangered Rio Grande silvery minnow. Mammalian residents include desert cottontail, white-footed mouse, North American porcupine, North American beaver, long-tailed weasel, common raccoon, coyote, mountain lions, and bobcats. Cottonwood trees serve as shelter to a variety of animals. However, a September 2020 report by the Bosque Ecosystem Monitoring Program (BEMP) predicted that cottonwood trees in the middle Rio Grande bosque will be disproportionately impacted as climate change affects groundwater depth and as air temperatures rise. The report separately concluded that invasive plant species were not sensitive to such changes in groundwater, suggesting that the plant structure and animal habitats of the middle Rio Grande bosque will change dramatically as climate changes.
Inhabitants
Even though the earliest inhabitants began to settle around the bosque about 15,000 years ago, they caused only minor ecosystem changes. It was not until rapid population growth, and the water diversions inhabitants created for farming, that the bosque began to be substantially manipulated and change was noted in the ecosystem.
Restoration
Maintaining the ecosystem and habitat of the bosque is a difficult and ongoing concern for many. The creation of water diversions such as levees, ditches, and irrigation canals has caused irreparable damage, drying floodplains and lowering water levels. This has had a ripple effect: many native plant species, wildlife, and amphibians have died off or relocated. The drying waters and loss of wetlands also leave the land susceptible to fires that destroy further habitat.
There are ongoing efforts to undo damage to the bosque ecosystem caused by human development, fires, and invasive species in the 20th century. Where possible, levees and other flood control devices along the Rio Grande are being removed, to allow the river to undergo its natural cycle. However, in June 2023, the Army Corps of Engineers-Albuquerque District and the Middle Rio Grande Conservancy District signed a design agreement aiming for the reconstruction of multiple levees along the Rio Grande river between Albuquerque and Belen as part of the Middle Rio Grande, Bernalillo to Belen project, which aims to minimize flood damage along the river. To help with the regrowth and maintenance of the bosque, new trees are planted by The Open Space Division.
Since 1996, the Bosque Ecosystem Monitoring Program (BEMP) of the University of New Mexico has worked with local schools on habitat restoration and ecological monitoring within the bosque, as well as raising awareness of the ecological importance of this habitat through educational outreach initiatives. BEMP receives funding from a number of sources, including the federal government. As of 2016, the program maintained thirty permanent sites throughout the middle Rio Grande bosque.
See also
Flora of New Mexico
Riparian forest
Tugay, an analogous forest type in the deserts and steppes of Central Asia
References
External links
Save our Bosque Report (.pdf)
Bosque Management and Endangered Species (BMEP)
Fire commander: Bosque’s urban area presents challenge
Race to reduce bosque fires
Forests of the United States
Habitats
Natural history of New Mexico
Riparian zone | Bosque | Environmental_science | 1,183 |
1,790,627 | https://en.wikipedia.org/wiki/Mackey%20topology | In functional analysis and related areas of mathematics, the Mackey topology, named after George Mackey, is the finest topology for a topological vector space which still preserves the continuous dual. In other words the Mackey topology does not make linear functions continuous which were discontinuous in the default topology. A topological vector space (TVS) is called a Mackey space if its topology is the same as the Mackey topology.
The Mackey topology is the opposite of the weak topology, which is the coarsest topology on a topological vector space which preserves the continuity of all linear functions in the continuous dual.
The Mackey–Arens theorem states that all possible dual topologies are finer than the weak topology and coarser than the Mackey topology.
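In symbols (the notation σ(X, X′) for the weak topology and τ(X, X′) for the Mackey topology of a dual pair (X, X′) is customary, but it is reconstructed here, since the formulas above did not survive extraction): a locally convex topology T on X has continuous dual X′ exactly when

```latex
\sigma(X, X') \;\subseteq\; \mathcal{T} \;\subseteq\; \tau(X, X')
```

so the compatible topologies form an interval whose endpoints are the weak topology and the Mackey topology.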
Definition
Definition for a pairing
Given a pairing the Mackey topology on induced by denoted by is the polar topology defined on by using the set of all -compact disks in
When is endowed with the Mackey topology then it will be denoted by or simply or if no ambiguity can arise.
A linear map is said to be Mackey continuous (with respect to pairings and ) if is continuous.
Definition for a topological vector space
The definition of the Mackey topology for a topological vector space (TVS) is a specialization of the above definition of the Mackey topology of a pairing.
If is a TVS with continuous dual space then the evaluation map on is called the canonical pairing.
The Mackey topology on a TVS denoted by is the Mackey topology on induced by the canonical pairing
That is, the Mackey topology is the polar topology on obtained by using the set of all weak*-compact disks in
When is endowed with the Mackey topology then it will be denoted by or simply if no ambiguity can arise.
A linear map between TVSs is Mackey continuous if is continuous.
Examples
Every metrizable locally convex space with continuous dual carries the Mackey topology, that is or, to put it more succinctly, every metrizable locally convex space is a Mackey space.
Every Hausdorff barreled locally convex space is Mackey.
Every Fréchet space carries the Mackey topology and the topology coincides with the strong topology, that is
Applications
The Mackey topology has an application in economies with infinitely many commodities.
See also
Citations
Bibliography
Topological vector spaces | Mackey topology | Mathematics | 468 |
60,380,042 | https://en.wikipedia.org/wiki/Liquid%20Haskell | Liquid Haskell is a program verifier for the programming language Haskell which allows specifying correctness properties by using refinement types. Properties are verified using a satisfiability modulo theories (SMT) solver which is SMTLIB2-compliant, such as the Z3 Theorem Prover.
See also
Formal verification
References
Further reading
External links
Formal methods tools
Static program analysis tools
Type systems
Free software programmed in Haskell
Software using the BSD license | Liquid Haskell | Mathematics | 98 |
47,717,506 | https://en.wikipedia.org/wiki/GeoEcoMar | The National Institute for Research and Development of Marine Geology and Geoecology – GeoEcoMar () is a Romanian institute of geology and geo-ecology founded in 1993. It was initially named Romanian Centre for Marine Geology and Geo-ecology. Its administrative and scientific headquarters is in the capital of Romania, Bucharest; but the operational center, with the research vessels and marine infrastructure, is in Constanța, an important harbor on the Black Sea. The first director of the institute was the academician Nicolae Panin, now retired and a personal adviser of the current director, Gheorghe Oaie.
Programs and partners
GeoEcoMar is involved in European research programs on hydrological river-delta-sea macro-systems. It has advanced the study of coastal erosion and its remediation, and participates in European programs to monitor potential hazards in the Black Sea. It explores the environmental effects of the dramatic decline in sediment supply caused by sediments being trapped behind upstream dams. The Institute is also involved in carbon dioxide capture and storage.
Since 1996, GeoEcoMar has been formally authorized to develop impact studies and environmental evaluations in Romania. Since 2006 it has been certified ISO 9001 for research conducted in geology, geophysics and geo-ecology, by Lloyd's Register Quality Assurance (Romania), in accordance with ISO 9001:2008. The Institute obtained the European status of excellence (Euro-EcoGeoCentre Romania).
GeoEcoMar is, alongside similar institutes from Italy, France, UK, Greece, Spain, Ireland, the Netherlands, Germany and Portugal, a member of the European Multidisciplinary Seafloor and water column Observatory (EMSO), a network of institutes and companies monitoring the open ocean and shallow waters in order to prevent hazards and mitigate tsunami or earthquake effects. It has initiated the project Évolution du littoral danubien: vulnérabilité et prévention ("Evolution of the Danubian coast: vulnerability and prevention"), which is to collect seismic data from the mouth of the river Danube to study the morpho-sedimentary structure of the river-sea system.
Fleet
The Institute's investigations are undertaken with the help of the largest research vessel in the Black Sea: Mare Nigrum, an interdisciplinary research vessel 82 m long with a displacement of 3200 tonnes; Istros, 32 m long with a displacement of 125 tonnes; and Halmyris, a laboratory boat 32 m long with a displacement of 90 tonnes.
References
External links
GeoEcoMar home page (in Romanian)
GeoEcoMar home page (in English)
Romania
Black Sea
Research institutes in Romania
Danube Delta
Hydrology organizations
1993 establishments in Romania
Organizations established in 1993 | GeoEcoMar | Environmental_science | 552 |
570,478 | https://en.wikipedia.org/wiki/Housing%20cooperative | A housing cooperative, or housing co-op, is a legal entity which owns real estate consisting of one or more residential buildings. The entity is usually a cooperative or a corporation and constitutes a form of housing tenure. Typically housing cooperatives are owned by shareholders but in some cases they can be owned by a non-profit organization. They are a distinctive form of home ownership that have many characteristics that differ from other residential arrangements such as single family home ownership, condominiums and renting.
The cooperative is membership based, with membership granted by way of a share purchase in the cooperative. Each shareholder in the legal entity is granted the right to occupy one housing unit. A primary advantage of the housing cooperative is the pooling of the members' resources so that their buying power is leveraged; thus lowering the cost per member in all the services and products associated with home ownership.
Another key element in some forms of housing cooperatives is that the members, through their elected representatives, screen and select who may live in the cooperative, unlike any other form of home ownership.
Housing cooperatives fall into two general tenure categories: non-ownership (referred to as non-equity or continuing) and ownership (referred to as equity or strata). In non-equity cooperatives, occupancy rights are sometimes granted subject to an occupancy agreement, which is similar to a lease. In equity cooperatives, occupancy rights are sometimes granted by way of purchase agreements and legal instruments registered on the title. The corporation's articles of incorporation and bylaws, as well as the occupancy agreement, specify the cooperative's rules.
The word cooperative is also used to describe a non-share capital co-op model in which fee-paying members obtain the right to occupy a bedroom and share the communal resources of a house owned by a cooperative organization. Such is the case with student cooperatives in some college and university communities across the United States.
Legal status
As a legal entity, a co-op can contract with other companies or hire individuals to provide it with services, such as a maintenance contractor or a building manager. It can also hire employees, such as a manager or a caretaker, to deal with specific upkeep tasks at which volunteers may hesitate or may not be skilled, such as electrical maintenance.
In non-equity cooperatives and in limited equity cooperatives, a shareholder in a co-op does not own real estate, but a share of the legal entity that does own real estate. Co-operative ownership is quite distinct from condominiums where people own individual units and have little say in who moves into the other units. Because of this, most jurisdictions have developed separate legislation, similar to laws that regulate companies, to regulate how co-ops are operated and the rights and obligations of shareholders.
Ownership
Each resident or resident household has membership in the co-operative association. In non-equity cooperatives, members have occupancy rights to a specific suite within the housing co-operative as outlined in their "occupancy agreement", or "proprietary lease", which is essentially a lease. In ownership cooperatives, occupancy rights are transferred to the purchaser by way of the title transfer.
Since the housing cooperative holds title to all the property and housing structures, it bears the cost of maintaining, repairing and replacing them. This relieves the member from the cost and burden of such work. In that sense, the housing cooperative is like the landlord in a rental setting. However, another hallmark of cooperative living is that it is nonprofit, so that the work is done at cost, with no profit motive involved.
In some cases, the co-op follows Rochdale Principles, under which each shareholder has only one vote. Most cooperatives, however, are incorporated as limited stock companies, where the number of votes an owner has is tied to the number of shares owned. Whichever form of voting is employed, it is necessary to conduct an election among shareholders to determine who will represent them on the board of directors (if one exists), the governing body of the co-operative. The board of directors is generally responsible for business decisions, including the financial requirements and sustainability of the co-operative. Although policies vary from co-op to co-op and depend largely on the wishes of the members, it is a general rule that a majority vote of the board is necessary to make business decisions.
Management
In larger co-ops, members of a co-op typically elect a board of directors from amongst the shareholders at a general meeting, usually the annual general meeting. In smaller co-ops, all members sit on the board.
A housing cooperative's board of directors is elected by the membership, providing a voice and representation in the governance of the property. Rules are determined by the board, providing a flexible means of addressing the issues that arise in a community to assure the members' peaceful possession of their homes.
Finance
A housing cooperative is normally de facto non-profit, since usually most of its income comes from the rents paid by its residents (if in a formal corporation, then shareholders), who are invariably its members. There is no point in creating a deliberate surplus—except for operational requirements such as setting aside funds for replacement of assets—since that simply means that the rents paid by members are set higher than the expenses. (It is possible for a housing co-op to own other revenue-generating assets, such as a subsidiary business which could produce surplus income to offset the cost of the housing, but in those cases the housing rents are usually reduced to compensate for the additional revenue.)
In the lifecycle of buildings, the replacement of assets (capital repairs) requires significant funds, which can be obtained in a variety of ways: assessments on current owners; sales of treasury stock (former rental units) to new shareholders; drawdowns of reserves; unsecured loans; operating surpluses; fees on the sales of units between shareholders; and new mortgages or increases to existing ones.
There are housing co-ops of the rich and famous: John Lennon, for instance, lived in The Dakota, a housing co-operative, and most apartments in New York City that are owned rather than rented are held through a co-operative rather than via a condominium arrangement.
Market-rate and limited-equity co-ops
There are two main types of housing co-operative share pricing: market rate and limited equity. With market rate, the share price is allowed to rise on the open market and shareholders may sell at whatever price the market will bear when they want to move out. In many ways market rate is thus similar financially to owning a condominium, with the difference being that often the co-op may carry a mortgage, resulting in a much higher monthly fee paid to the co-op than would be so in a condominium. The purchase price of a comparable unit in the co-op is typically much lower, however.
With limited equity, the co-op has rules regarding pricing of shares when sold. The idea behind limited equity is to maintain affordable housing. A sub-set of the limited equity model is the no-equity model, which looks very much like renting, with a very low purchase price (comparable to a rental security deposit) and a monthly fee in lieu of rent. When selling, all that is re-couped is that very low purchase price.
Research on housing cooperatives
Research in Canada found that residents of housing cooperatives rated themselves as having the highest quality of life and housing satisfaction of any housing tenure in the city.
Other research among older residents from the rural United States found that those living in housing cooperatives felt much safer, independent, satisfied with life, had more friends, had more privacy, were healthier and had things repaired faster. Australian researchers found that cooperative housing built stronger social networks and support, as well as better relationships with neighbours compared to other forms of housing. They cost 14% less for residents and had lower rates of debt and vacancy. Other US research has found that housing cooperatives tended to have higher rates of building quality, building safety, feelings of security among residents, lower crime rates, stable access to housing and significantly lower costs compared to conventional housing.
By country
Australia
Housing co-operatives in Australia are primarily non-equity rental co-operatives, but there are some equity co-operatives as well. The rental co-operatives are generally a part of the Australian social housing/community housing sector and have been funded by various iterations of government funding programs.
One of the largest co-operative housing organisations in Australia is Common Equity Housing Ltd (CEHL) in the state of Victoria. CEHL is a registered housing association with its shares held by its 103-member co-operatives. As of 2023 CEHL co-operatives house 4,291 people in 2,101 homes.
Common Equity, in the state of NSW, is also a registered housing provider and manages 500 properties in 31 member housing co-operatives.
Canada
Co-ops in Canada offer an affordable alternative to renting, but waiting lists for the units can be years-long.
France
In 2013, the opening of La Maison des Babayagas, an innovative housing co-op in Paris, gained worldwide attention. It was formed as a self-help community and built with financial assistance from the municipal government, specifically for female senior citizens. Located in the Paris suburb of Montreuil after many years of planning, it looks like any other apartment building. The senior citizens stay out of nursing homes, by staying active, alert, and assisting one another.
The purpose of the Baba Yaga Association is to create and develop an innovative lay residence for aging women that is: (1) self-managed, without hierarchy and without supervision; (2) collective and united, with regard to finances as well as daily life; (3) civic-minded, through openness to the community and city and through mutual interaction, engaging in its political, cultural and social life in a spirit of participatory democracy; and (4) ecological in all aspects of life, in conformity with the values and actions expressed in the Charter of Living of the House of Babayagas.
Generally, the association's activities are tied to the purpose above, in particular, the development of a popular entity called the University of Knowledge of the Elderly (UNISAVIE: Université du savoir des vieux), and the initiation of a movement to promote other living places that are organized into similar networks.
The community charter sets out expectations for privacy. Each apartment is self-contained. Monthly meetings assure the smooth running of the building and ensure that each person may participate fully and with complete liberty of expression. Plans provide for the routine intervention of a mediator who can help uncover the causes of any conflicts so as to allow for their resolution.
The success of the Paris co-op inspired several Canadian grassroots groups to adopt similar values in senior housing initiatives; these values include autonomy and self-management, solidarity and mutual aid, civic engagement, and ecological responsibility.
Germany
Housing cooperatives, or "Wohnungsgenossenschaften" in German, are a type of housing association that provides affordable housing to its members. They are formed and run by a group of people who come together to pool their resources in order to purchase or build housing for their own use.
In Germany, housing cooperatives are typically organized as non-profit organizations, which means that any profits made from the sale or rental of the housing are reinvested in the cooperative rather than being distributed to shareholders. This allows housing cooperatives to offer lower prices for housing than would be possible for for-profit organizations.
Members of a housing cooperative typically have the right to occupy a specific unit within the cooperative's housing complex, and they also have a say in the management and decision-making of the cooperative. This can include voting on issues related to the maintenance and operation of the housing complex, as well as electing a board of directors to oversee the cooperative's operations.
Housing cooperatives are a popular form of housing in Germany, particularly in urban areas, and they are often seen as a way to provide affordable, community-oriented housing options.
During industrialisation in the 19th century, many housing cooperatives were founded in Germany. Presently, there are over 2,000 housing cooperatives with over two million apartments and over three million members in Germany. The public housing cooperatives are organised in the GdW Bundesverband deutscher Wohnungs- und Immobilienunternehmen (Federal association of German housing and real estate enterprises).
Egypt
The housing cooperative project in Egypt aims to serve low-income households by providing housing units consisting of two rooms and a hall, or three rooms and a hall, fully finished, with areas ranging from 75 to 90 square meters. These units are offered at cost price, with direct subsidies ranging from 5,000 to 25,000 pounds. The beneficiary of a unit can pay its price over a period of 20 years; 538,000 units had been completed in all governorates and new cities by 2022, implemented by the Ministry of Housing, Utilities & Urban Communities.
India
In India, most 'flats' are owned outright. i.e. the title to each individual flat is separate. There is usually a governing body/society/association to administer maintenance and other building needs. These are comparable to the Condominium Buildings in the USA. The laws governing the building, its governing body and how flats within the building are transferred differ from state to state.
Certain buildings are organized as "Cooperative Housing Societies" where one actually owns a share in the Cooperative rather than the flat itself. This structure was very popular in the past but has become less common in recent times. Most states have separate laws governing Cooperative Housing Societies.
Netherlands
In the Netherlands there are three very different types of organization that could be considered a housing cooperative:
Housing corporation
A housing corporation (woningcorporatie) is a nonprofit organization dedicated to building and maintaining housing for rent for people with lower income. The first housing corporations started in the second half of the 19th century as small cooperative associations. The first such association in the world, VAK ("association for the working class") was founded in 1852 in Amsterdam. Between 2.4 and 2.5 million apartments in the Netherlands are rented by the housing corporations, i.e. more than 30% of the total of household dwellings (apartments and houses).
Owner association
A (house) owners' association (Vereniging van Eigenaren, VvE) is by Dutch law established wherever there are separately owned apartments in one building. The members are legally owners of their own apartment but have to cooperate in the association for the maintenance of the building as a whole.
Living cooperation
A living cooperation (wooncoöperatie) is a construct in which residents jointly own an apartment building using a democratically controlled cooperative, and pay rent to their own organisation. They were prohibited after World War II and legalised in 2015.
New Zealand
"Company-share" apartments operate in the New Zealand housing system.
Philippines
In the Philippines, a tenant-owner's association often forms as a means to buy new flats. When the cooperative is set up, it takes out the major part of the loan needed to buy the property. The loan is then paid off over a fixed period of years (typically 20 to 30), and once this is done, the cooperative is dissolved and the flats are transformed into condominiums.
Nordic countries
A tenant-owner's association (Swedish: bostadsrättsförening, Norwegian: borettslag, Danish: andelsboligforening) is a legal term used in the Scandinavian countries (Sweden, Denmark, and Norway) for a type of joint ownership of property in which the whole property is owned by a co-operative association, which in its turn is owned by its members. Each member holds a share in the association that is proportional to the area of his apartment. Members are required to have a tenant-ownership, which represents the apartment, and in most cases live permanently at the address. There are some legal differences between the countries, mainly concerning the conditions of ownership.
In Sweden, 16% of the population lives in apartments in housing cooperatives, while 25% live in rented apartments (more common among young adults and immigrants) and 50% live in private one-family houses (more common among families with children), the remainder living in other forms such as student dormitories or elderly homes.
In Finland, by contrast to the Scandinavian countries, housing cooperatives in the strict sense are extremely rare; instead, Finnish tenant-owned housing properties are generally organized as limited companies, in a system peculiar to Finnish law. The Finnish arrangement is similar to a housing cooperative in that the property is owned by a non-profit corporation and the right to use each unit is tied to ownership of a certain set of shares.
United Kingdom
Housing co-operatives are uncommon in the UK, making up about 0.1% of housing stock.
Most are based in urban areas and consist of affordable shared accommodation where the members look after the property themselves. Waiting lists can be very long due to the rarity of housing co-operatives. In some areas the application procedure is integrated into the council housing application system. The laws differ between England and Scotland. The Confederation of Co-operative Housing provides information on housing cooperatives in the United Kingdom and has published a guide on setting them up. The Shelter website provides information on housing and has information specific to England and Scotland.
The Catalyst Collective provides information about starting co-operatives in the UK and explains the legal structure of a housing coop. Radical Routes offers a guide on how to set up a housing co-operative.
Student housing cooperatives
Factors of raising cost of living for students and quality of accommodation have led to a drive for Student Housing Co-operatives within the UK inspired by the existing North American Student Housing Cooperatives and their work through North American Students of Cooperation. Edinburgh Student Housing Co-operative and Birmingham Student Housing Co-operative opened in 2014 and Sheffield Student Housing Co-operative in 2015. All existing Student Housing Co-operatives are members of Students for Cooperation.
United States
In the United States, housing co-ops are usually categorized as corporations or LLCs and are found in abundance in the area from Madison, Wisconsin, to the New York metropolitan area. There are also a number of cooperative and mutual housing projects still in operation across the US that were the result of the purchase of federal defense housing developments by their tenants or groups of returning war veterans and their families. These developments include seven of the eight middle-class housing projects built by the US government between 1940 and 1942 under the auspices of the Mutual Ownership Defense Housing Division of the Federal Works Agency. There are many regional housing cooperative associations, such as the Midwest Association of Housing Cooperatives, which is based in Michigan and serves the Midwest region, covering Ohio, Michigan, Indiana, Illinois, Wisconsin, Minnesota, and more.
The National Association of Housing Cooperatives (NAHC) represents all cooperatives within the United States who are members of the organization. This organization is a nonprofit, national federation of housing cooperatives, mutual housing associations, other resident-owned or controlled housing, professionals, organizations, and individuals interested in promoting the interests of cooperative housing communities. NAHC is the only national cooperative housing organization, and aims to support and educate existing and new cooperative housing communities as the best and most economical form of homeownership.
NASCO, or North American Students of Cooperation, is an organization founded in 1968 that has helped organize cooperative living for students. With a presence in over 100 towns and cities across North America, NASCO has provided tens of thousands of students with sustainable housing.
New York metropolitan area
Cooperatives have a long history in metropolitan New York – in November 1882, Harper's Magazine describes several cooperative apartment buildings already in existence, with plans to build more – and can be found throughout New York City, Westchester County, which borders the city to the north, and towns in northern New Jersey that are close to Manhattan, including Fort Lee, Edgewater, Ramsey, Passaic and Weehawken. Alku and Alku Toinen, apartment buildings built in 1916 by the Finnish American immigrant community in the Sunset Park neighborhood of Brooklyn, New York City, were the first nonprofit housing cooperatives in New York City.
Apartment buildings and multiple-family housing make up a more significant share of the housing stock in the New York City area than in most other U.S. cities as over 75% of apartment buildings in NYC are co-ops. Reasons suggested to explain why cooperatives are relatively more common than condominiums in the New York City area are:
Inspired by Abraham Kazan, cooperatives appeared at least as far back as the 1920s while a legal basis for condominium form of ownership was not available in New York State until 1964. Passage of the Condominium Act then opened a wave of construction of condominium buildings.
The cooperative form can be advantageous as a building mortgage can be carried by the cooperative corporation, leaving less financing to be obtained by each co-op owner. Under condominium ownership only the separate condo owners provide financing. Particularly when interest rates are high, a conversion sponsor may find unit buyers more easily under the cooperative arrangement as buyers will have less financing to arrange on their own; the apparent purchase price of a unit in a cooperative building holding an underlying mortgage is lower than a condo purchase. Cooperative unit buyers may not accurately weigh their share of the building's mortgage.
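The pricing effect described above can be made concrete with a small illustrative calculation (all figures below are hypothetical assumptions, not taken from the article): when the cooperative corporation carries an underlying building mortgage, each unit's quoted price excludes that unit's pro-rata share of the debt, so the co-op unit appears cheaper than an economically equivalent condominium.

```python
# Hypothetical comparison of a co-op unit and an equivalent condo unit.
# All numbers are illustrative assumptions, not figures from the article.
building_mortgage = 10_000_000   # underlying mortgage held by the co-op corporation
total_shares = 100               # shares outstanding across the building
unit_shares = 1                  # shares allocated to this particular unit

unit_value = 500_000             # full economic value of the apartment
mortgage_share = building_mortgage * unit_shares / total_shares

coop_asking_price = unit_value - mortgage_share   # what the co-op buyer pays up front
condo_asking_price = unit_value                   # the condo buyer finances everything

print(f"Co-op quoted price:  ${coop_asking_price:,.0f}")   # $400,000
print(f"Condo quoted price:  ${condo_asking_price:,.0f}")  # $500,000
# The co-op looks cheaper, but the buyer also assumes a share of the
# building's mortgage, repaid through higher monthly maintenance fees.
```

This is why, as the article notes, cooperative unit buyers may not accurately weigh their share of the building's mortgage: the debt is invisible in the quoted price and surfaces only in the monthly fee.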
Also, later in a building's life after conversion, major new investments required to repair or replace building systems can be raised by a new central mortgage in a cooperative, while in a condominium funds could only be raised by onerous assessments being required of each individual unit owner. However, New York's condominium law was amended in 1997 to allow condominium associations to borrow money.
The 1974 creation and then subsequent influence on policy by the Urban Homesteading Assistance Board, a housing advocacy group, which enabled the conversion of over 1,600 foreclosed, city-held rentals into limited-equity, resident-controlled co-ops.
A co-op building's board can exercise its own business discretion to impose restrictions on shareholders, and reject prospective purchasers without explanation, as long as the board does not violate federal and state housing or civil rights laws.
Most of the housing cooperatives in the greater New York area were converted to that status during the 1980s; generally, they were large buildings built between the 1920s and 1950s that a single landlord or corporation owned and rented out that became unprofitable as rental properties. To encourage individual ownership of units, the initial buyers of units (buying from the owner of the entire building) did not have to be approved by a board. These units are known as sponsor units. Also, the rental tenants living in the building at the time of the conversion were usually given an option to buy at a discount. If the tenants were rent-controlled, the law usually protects them by allowing them to stay as renters and the unit may not be occupied by a purchaser until said tenant dies or moves out. Many of these buildings, especially in Manhattan, are actually quite luxurious and exclusive; many celebrities live in them and some famous people are even rejected by co-op boards. In the 1990s and 2000s some rental buildings in the Chicago, Washington, D.C., and Miami-Fort Lauderdale-West Palm Beach areas went through a similar conversion process, though not to the degree of New York.
Many of the cooperatives originally built as co-ops were sponsored by trade unions, such as the Amalgamated Clothing Workers of America. One of the largest projects was Cooperative Village in Lower East Side of Manhattan. The United Housing Foundation was set up in 1951 and built Co-op City in The Bronx, designed by architect Herman Jessor. One of the first subsidized, fixed-value cooperatives was Morningside Gardens in Manhattan's Morningside Heights.
Another dynamic also contributed to the large number of cooperatives established in the 1980s and 1990s in New York City – in this case by low- and moderate-income tenant groups. In the 1970s, many New York City private landlords were struggling to maintain their aging properties in the face of high interest rates, redlining, white flight and rising fuel costs. The period also saw some landlord-induced arson to obtain insurance proceeds and widespread non-payment of real estate taxes – over 20% of multi-family residential properties were in arrears in the mid-1970s. In 1977, the city passed Local Law #45, which allowed the city to begin foreclosure proceedings after just one year of non-payment of taxes, not three, resulting in the takeover of thousands of buildings, many of them occupied, by the city of New York through a legal action known as an in rem foreclosure. In September 1978, the city's housing agency, the New York City Department of Housing Preservation and Development (HPD), created a series of new housing programs designed to give building residents and community groups control and eventual ownership of in rem buildings.
The Urban Homesteading Assistance Board (UHAB), established in 1974, began to assist residents of these buildings to manage, rehabilitate and acquire their buildings, and form limited-equity housing co-operatives. Working with the city's housing agency, its existing loan programs and the power to dispose of abandoned property to non-profit organizations, as well as the state laws governing the establishment of co-operatives, UHAB was able to provide low-income people with the tools – seed money, legal advice, architectural plans, bookkeeping training – to build and run limited-equity housing co-operatives. Through a long-standing contract with the city to provide training and technical assistance to residents of buildings in the Tenant Interim Lease (TIL) Program, UHAB has worked with more than 1,600 coops, preserving over 30,000 units of affordable housing.
Some cooperatives in New York City do not own the land upon which their building is situated. These 'land-lease' buildings often have significant drawbacks for cooperative owners. However, there have been cases where shareholders of a building have bought the surrounding land, such as 167 East 61st Street (formerly known as Trump Plaza), where residents gathered $183 million to buy the surrounding land.
Student housing cooperatives
Student cooperatives provide housing and dining services to those who attend specific educational institutions. Some notable groups include Berkeley Student Cooperative, Santa Barbara Housing Cooperative and the Oberlin Student Cooperative Association.
See also
Cohousing
Condop
Subsidized housing
Worker cooperative
References
External links
Social programs
Human habitats
Private aid programs
Living arrangements | Housing cooperative | Biology | 5,461 |
63,265,544 | https://en.wikipedia.org/wiki/Reid%27s%20paradox%20of%20rapid%20plant%20migration | Reid's Paradox of Rapid Plant Migration or Reid's Paradox, describes the observation from the paleoecological record that plant ranges shifted northward, after the last glacial maximum, at a faster rate than the seed dispersal rates commonly occur. Rare long-distance seed dispersal events have been hypothesized to explain these fast migration rates, but the dispersal vector(s) are still unknown. The plant species' geographic range expansion rates are compared to the actualistic rates of seed dispersal using mathematical models, and are graphically visualized using dispersal kernels. These observations made in the paleontological record, which inspired Reid's Paradox, are from fossilized remains of plant parts, including needles, leaves, pollen, and seeds, that can be used to identify past shifts in plant species' ranges.
Reid's Paradox is named after Clement Reid, a paleobotanist, who made the principal observations from the paleobotanical record in Europe in 1899. His comparison of oak tree seed dispersal rates with the observed range of oak trees in the fossil record did not agree. Reid hypothesized that diffusion alone could not explain the observed paradox, and supplemented his hypothesis by noting that birds were the likely cause of long-range seed dispersal. Reid's Paradox has been subsequently documented across Europe and North America.
Dispersal kernels
Dispersal kernels are statistical models that represent the probability of seed dispersal from the source tree. Realistic biological data is required to complete the models. These data are used to accurately fill in variables such as seed number, seed size, and reproductive age. Depending on the plant species, the variables in the equation will change. In the years since Reid hypothesized the methods for seed dispersal, the models have gained more complex elements which attempt to resolve Reid's Paradox.
The dispersal of seeds from a parent tree initially follows a normal distribution, as predicted by a standard diffusion equation. However, biological phenomena complicate the diffusion model by adding biotic vectors of dispersal, such as blue jays and eastern grey squirrels (species that possess caching behaviors), and abiotic agents of dispersal, such as high-velocity wind storms. These additional vectors give the dispersal kernel a "fat tail", or large kurtosis, meaning that the probability of a long-range dispersal event is higher than under the standard diffusion kernel. In order to resolve Reid's Paradox, the vector(s) of seed dispersal that fatten the kernel's tail must be identified.
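The contrast between a thin-tailed diffusion kernel and a fat-tailed one can be sketched numerically. The kernel forms and parameter values below are illustrative assumptions, not taken from the article: a one-dimensional Gaussian kernel stands in for pure diffusion, and an exponential kernel (fatter-tailed relative to the Gaussian) stands in for a kernel with rare long-distance events.

```python
import math

def gaussian_kernel_tail(distance, sigma):
    """P(dispersal > distance) for a 1-D Gaussian (diffusion-like) kernel."""
    return math.erfc(distance / (sigma * math.sqrt(2)))

def exponential_kernel_tail(distance, scale):
    """P(dispersal > distance) for an exponential kernel,
    fatter-tailed than the Gaussian at the same typical distance."""
    return math.exp(-distance / scale)

# Both kernels share the same typical dispersal scale (50 m, assumed),
# but the exponential tail assigns far more probability to rare
# long-distance events -- the "fat tail" invoked to resolve the paradox.
sigma = scale = 50.0  # metres; illustrative value
for d in (100, 500, 1000):
    p_gauss = gaussian_kernel_tail(d, sigma)
    p_exp = exponential_kernel_tail(d, scale)
    print(f"{d:>5} m  Gaussian: {p_gauss:.2e}  Exponential: {p_exp:.2e}")
```

At a few hundred metres the Gaussian tail probability is vanishingly small while the exponential tail remains non-negligible, which is why fat-tailed kernels can reconcile slow typical dispersal with fast range expansion.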
Possible explanations for Reid's Paradox
Animal dispersal
Long-distance seed-dispersal events due to animal–seed interactions (such as caching or endozoochorous dispersal) would fatten the tail of the dispersal kernels. To fully explain Reid's Paradox, these rare animal-induced seed-dispersal events must have been more important during past migrations than is currently recognized or recorded.
Cryptic refugia
Small populations of plants may have grown closer to the ice sheets in microhabitats that possessed the characteristics needed for growth and reproduction. This would minimize the actual post-glacial dispersal distance. Such hypothetical populations would not have been abundant enough to leave fossil evidence, and so have escaped detection. In North America, there is some genetic evidence of cryptic northern refugia for sugar maple and American beech.
References
Forest ecology
Conservation biology
Environmental modelling
Paleontology
Assembled gem

An assembled gem (also called a composite gem) is a gemstone made up of other, smaller gems. An assembled gem is often a counterfeit, with a desirable piece of gemstone attached to pieces of inexpensive imitation material. For example, a combination of a thin layer of green glass and a colorless piece of quartz would be a composite gem.
Types
A doublet is a type of assembled gem composed of two parts. A false doublet is a doublet in which a piece of glass that looks like a real gem is attached to a real gem so that the two appear to be one larger gem. A triplet is a type of assembled gem composed of three distinct parts.
References
Gemstones
NGC 315

NGC 315 is an elliptical galaxy in the constellation Pisces. Its velocity with respect to the cosmic microwave background is 4635 ± 22 km/s, which corresponds to a Hubble distance of . In addition, eight non-redshift measurements give a distance of . It was discovered by the German-British astronomer William Herschel on September 11, 1784.
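The Hubble distance quoted (but missing) above follows from Hubble's law, d = v/H₀. A minimal sketch, with the caveat that the value of H₀ used here is an assumption, since the article does not state which value its distance figure relies on:

```python
# Hubble's law: d = v / H0. The value of H0 is an assumption here,
# since the article's distance figure is missing.
v_cmb = 4635.0   # km/s, recession velocity relative to the CMB
H0 = 70.0        # km/s/Mpc, assumed Hubble constant
d_mpc = v_cmb / H0
d_mly = d_mpc * 3.26156   # 1 Mpc is about 3.26156 million light-years
print(f"Hubble distance: about {d_mpc:.0f} Mpc ({d_mly:.0f} million light-years)")
```

With H₀ = 70 km/s/Mpc this gives roughly 66 Mpc, or about 216 million light-years; a different assumed H₀ shifts the result proportionally.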
The SIMBAD database lists NGC315 as a LINER galaxy, i.e. a galaxy whose nucleus has an emission spectrum characterized by broad lines of weakly ionized atoms.
According to A.M. Garcia, NGC 315 is the namesake of the NGC 315 Group (also known as LGG 14). This group contains 42 galaxies, including NGC 226, NGC 243, NGC 262, NGC 266, NGC 311, NGC 338, IC 43, IC 66, and IC 69, among others. NGC 315, along with the triple star NGC 313 and the star NGC 316, is listed as Holm 28 in Erik Holmberg's A Study of Double and Multiple Galaxies Together with Inquiries into some General Metagalactic Problems, published in 1937.
See also
List of NGC objects (1–1000)
References
External links
0315
00597
03455
17840911
Pisces (constellation)
Elliptical galaxies
Discoveries by William Herschel
+05-03-031
F00550+3004
LINER galaxies
Kvass

Kvass is a fermented, cereal-based, low-alcohol beverage with a cloudy appearance and a sweet-sour taste.
Kvass originates from northeastern Europe, where grain production was considered insufficient for beer to become a daily drink. The first written mention of kvass is found in Primary Chronicle, describing the celebration of Vladimir the Great's baptism in 988. In the traditional method, kvass is made from a mash obtained from rye bread or rye flour and malt soaked in hot water, fermented for about 12 hours with the help of sugar and bread yeast or baker's yeast at room temperature. In industrial methods, kvass is produced from wort concentrate combined with various grain mixtures. It is a popular drink in Belarus, Estonia, Latvia, Lithuania, Moldova, Poland, Russia, and Ukraine. Kvass (or beverages similar to it) are also popular in some parts of Finland, Uzbekistan, Kazakhstan, and China.
Terminology
The word kvass is ultimately from Proto-Indo-European base *kwh₂et- ('to become sour'). In English it was first mentioned in a text around 1553 as quass. Nowadays, the name of the drink is almost the same in most languages: in Polish: (, to differentiate it from , 'acid', originally from , 'sour'); Belarusian: , ; Russian: , ; Ukrainian: , ; Latvian: ; Romanian: ; Hungarian: ; Serbian: ; Chinese: , ; Eastern Finnish: . Non-cognates include Estonian , Finnish , Latvian (), Latgalian (, similar to Lithuanian ), Lithuanian (, similar to Latvian ), and Swedish ().
Production
In the traditional method, either dried rye bread or a combination of rye flour and rye malt is used. The dried rye bread is extracted with hot water and incubated for 12 hours at room temperature, after which bread yeast and sugar are added to the extract and fermented for 12 hours at . Alternatively, rye flour is boiled, mixed with rye malt, sugar, and baker's yeast and then fermented for 12 hours at .
The simplest industrial method produces kvass from a wort concentrate. The concentrate is warmed up and mixed with a water and sugar solution to create wort with a sugar concentration of 5–7% and pasteurized to stabilize it. After that, the wort is pumped into a fermentation tank, where baker's yeast and lactic acid bacteria culture is added, and the solution is fermented for 12–24 hours at . Only around 1% of the extract is fermented out into ethanol, carbon dioxide, and lactic acid. Afterwards, the kvass is cooled to , clarified through either filtration or centrifugation, and adjusted for sugar content, if necessary.
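The dilution step described above is a simple mass balance: the sugar in the concentrate is conserved, so the amount of water needed to reach the target concentration follows directly. A sketch with hypothetical figures (the 70% solids content of the concentrate and the 100 kg batch size are assumptions, not from the source):

```python
def wort_water(concentrate_kg, conc_sugar_frac, target_frac):
    """Mass of water to add so the final wort has the target sugar
    mass fraction (simple mass balance; sugar is conserved)."""
    sugar = concentrate_kg * conc_sugar_frac
    total = sugar / target_frac       # total wort mass at target strength
    return total - concentrate_kg     # water to add

# Hypothetical figures: 100 kg of concentrate at 70% fermentable solids,
# diluted to a 6% wort (mid-range of the 5-7% quoted above)
water = wort_water(100.0, 0.70, 0.06)
print(f"Add about {water:.0f} kg of water")
```

In practice the diluent is a water-and-sugar solution rather than plain water, so this is only the limiting case of the calculation.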
Initially, it was filled in large containers from which the kvass was sold on streets, but now, the vast majority of industrially produced kvass is filled and sold in 1–3-litre plastic bottles and has a shelf life of 4–6 weeks.
Kvass is usually 0.5–1.0% alcohol by weight, but may sometimes be as high as 2.0%.
History
The exact origins of kvass are unclear; whether it was invented by Slavic people or another Eastern European ethnicity is unknown, although some Polish sources claim that it was invented by Slavs. Kvass has long existed in the northeastern part of Europe, where grain production is thought to have been insufficient for beer to become a daily drink. Likely invented in Kievan Rus' and known among the Early Slavs since at least the 10th century, kvass has become one of the symbols of East Slavic cuisine. The first written mention of kvass is found in the Primary Chronicle, describing the celebration of Vladimir the Great's baptism in 988, when kvass, along with mead and food, was given out to the citizens of Kiev. Kvass-making remained a daily household activity well into the 19th century.
In the second half of the 19th century, with military engagement, increasing industrialization, and large-scale projects, such as the construction of the Trans-Siberian Railway creating a growing need to supply large numbers of people with foodstuff for extended periods of time, kvass became commercialized; more than 150 kvass varieties, such as apple, pear, mint, lemon, chicory, raspberry, and cherry were recorded. As commercial kvass producers began selling it in barrels on the streets, domestic kvass-making started to decline. For example, in the year ended 30 June 1912, there were 17 factories in the Governorate of Livonia, producing a total of 437,255 gallons of kvass.
In the 1890s, the first scientific studies into the production of kvass were conducted in Kiev, and in the 1960s, commercial mass production technology of kvass was further developed by chemists in Moscow.
By country
Russia
Although the massive influx of Western soft drinks such as Coca-Cola and Pepsi after the fall of the USSR substantially shrank the market share of kvass in Russia, in recent years it has regained its original popularity, often marketed as a national soft drink or "patriotic" alternative to Coca-Cola. For example, the Russian company Nikola has promoted its brand of kvass with an advertising campaign emphasizing "anti-cola-nisation." Moscow-based Business Analytica reported in 2008 that bottled kvass sales had tripled since 2005 and estimated that per capita kvass consumption in Russia would reach three litres in 2008. Between 2005 and 2007, cola's share of the Moscow soft drink market fell from 37% to 32%. Meanwhile, kvass's share more than doubled over the same time period, reaching 16% in 2007. In response, Coca-Cola launched its own brand of kvass in May 2008. This is the first time a foreign company has made an appreciable entrance into the Russian kvass market. Pepsi has also signed an agreement with a Russian kvass manufacturer to act as a distribution agent. The development of new technologies for storage and distribution, and heavy advertising, have contributed to this surge in popularity; three new major brands have been introduced since 2004.
Market shares for Russia (2014)
Belarus
Belarus has several breweries producing kvass: Alivaria Brewery, , and . It also has a variety of kvass tasting and entertainment festivals. The largest show takes place in the city of Lida.
Poland
Kvass may have appeared in Poland as early as the 10th century, and it quickly became a popular beverage thanks to its easy and cheap method of production as well as its thirst-quenching and digestion-aiding qualities. By the time of Jogaila's rule, kvass was universal. It was at first commonly drunk by peasants in the eastern parts of the country, but eventually the drink spread to the szlachta. One example of this is , an old type of Polish kvass that is still sold as a contemporary brand. Its origins can be traced back to the 1500s, when founded the town of Kodeń on land granted by the Polish king. He then bought the mills and 24 villages of the surrounding areas from their previous landowners. The taste of kvass then became known among the Polish szlachta, who used it for its supposed healing qualities. Throughout the 19th century, kvass remained popular among Poles who lived in the Congress Poland of Imperial Russia and in Austrian Galicia, especially inhabitants of rural areas. Up until the 19th century, recipes for local variants of kvass remained well-guarded secrets of families, religious orders, and monasteries.
The beverage production in Poland on an industrial scale can be traced back to the more recent interwar period, when the Polish state regained independence as the Second Polish Republic. In interwar Poland, kvass was brewed and sold in mass numbers by magnates of the Polish drinks market like the Varsovian brewery Haberbusch i Schiele or the Karpiński company. Kvass remained particularly popular in eastern Poland. However, with the collapse of many prewar businesses and much of the Polish industry during World War II, kvass lost popularity following the aftermath of the war. It also gradually lost favour throughout the 20th century upon introducing mass-produced soft drinks and carbonated water into the Polish market. In the early 21st century, kvass experienced a renaissance in Poland due to the heightened interest in healthy diets, natural products, and traditions.
Kvass can be found in some supermarkets and grocery stores, where it is known in Polish as (). Commercial bottled versions of the drink are the most common variant, as some companies specialise in manufacturing a more modern version of the drink (some variants are manufactured in Poland whilst others are imported from its neighbouring countries, Lithuania and Ukraine being the most popular source). However, old recipes for a traditional version of kvass exist. Some of them originate from eastern Poland; others from more central regions include adding honey for flavour. Although commercial kvass is much easier to find in Polish shops, Polish manufacturers of more natural and healthier variants of kvass have become increasingly popular both within and outside of the country's borders. A less healthy alternative of quick-to-make variants using kvass concentrate can also be purchased in shops. One colloquial Polish name for is ('rural orangeade'). In some Polish villages, such as Zaława and its surroundings, kvass was traditionally produced on every farm.
Latvia
In Latvian, kvass was also called . After the dissolution of the Soviet Union in 1991, the street vendors disappeared from the streets of Latvia due to new health laws that banned its sale on the street. Economic disruptions forced many kvass factories to close. The Coca-Cola Company moved in and began quickly dominating the soft drink market. In 1998, the local soft drink industry adapted by selling bottled kvass and launching aggressive marketing campaigns. This surge in sales was stimulated by the fact that kvass sold for about half the price of Coca-Cola. In just three years, kvass constituted as much as 30% of the soft drink market in Latvia, while the market share of Coca-Cola fell from 65% to 44%. The Coca-Cola Company had losses in Latvia of about $1 million in 1999 and 2000. Coca-Cola responded by purchasing kvass manufacturers and producing kvass at their own soft drink plants.
On 30 September 2010, the Saeima (parliament) adopted quality and classification requirements for kvass, defining it as "a beverage obtained by fermenting a mixture of kvass wort with a yeast of microorganism cultures to which sugar and other food sources and food additives are added or not added after the fermentation" with a maximum ABV of 1.2 percent, and differentiating it from an unfermented non-alcoholic mixture of grain product extract, water, flavourings, preservatives, and other ingredients, which is designated as a "kvass (malt) beverage".
In 2014, Latvian kvass producers won seven medals at the Russian Beverage exposition in Moscow, with Ilgezeem's Porter Tanheiser kvass winning two gold medals. In 2019, Iļģuciema kvass ranked second in the Most Loved Latvian Beverage Brand Top, and first in the subsequent 2020 top.
Lithuania
In Lithuania, kvass is known as and is widely available in bottles and drafts. The first written records of kvass and kvass recipes in Lithuania appeared in the 16th century. Many restaurants in Vilnius make their own kvass, which they sell on the premises. Some brands of mass-produced Lithuanian kvass are also sold on the Polish market. Strictly speaking, gira can be made from anything fermentable—such as caraway tea, beetroot juice, or berries—but it is made mainly from black bread, or barley or rye malt.
Estonia
In Estonia, kvass is known as . Initially, it was made from either brewer's spent grain or wort left to ferment in a closed container, but later, special kvass bread () or industrially produced malt concentrate started to be used. Nowadays, generally is industrially produced with the use of pasteurization, the addition of preservatives, and artificial carbonation.
Finland
In Finland, a fermented drink made from a mixture of rye flour and rye malt was ubiquitous in parts of Eastern Finland and was heated in the oven. It was called (which can also be used to refer to small beer) or (in Eastern Finnish), while nowadays the drink is often known as () and is available in many work canteens, gas stations, and lower-end restaurants.
Traditionally, was usually made in households once a week from a mixture of malted and unmalted rye grains. Other grains, such as oats or barley, were also sometimes used; occasionally, leftover potatoes or pieces of bread were added. Everything was mixed with water in a metal cauldron or a clay pot and kept warm in the oven or by the stove for at least six hours for the mixture to darken and sweeten. Sometimes, the grain solids were filtered out through lautering. In Eastern Finland, the mixture was formed into large loaves and briefly baked for the crust to turn brown. The porridge or pieces of the malt bread were mixed into a wooden cask with water and fermented for one or two days with a previous batch, a sourdough starter, spontaneously or in more recent times with commercial baker's yeast. In the early 20th century, with sugar becoming more readily available, it started replacing the malting process, and modern is made from dark rye malt, sugar, and baker's yeast.
Sweden
Kvass was also made in Sweden, where it was known as (). However, it was very likely limited only to areas where rye bread was the standard bread as opposed to crispbread, which was more common in Western Sweden and did not go stale. was still being made in Öland farms up until 1935.
China
In the mid-19th century, kvass was introduced in Xinjiang, where it became known as () and eventually became one of the region's signature drinks. It is usually consumed cold together with barbecue. In 1900, Russian merchant Ivan Churin founded Harbin Churin Food () in Harbin, offering kvass and other specialities, and by 2009, the company was already producing 5,000 tons of kvass a year, making up 90% of the local market. In 2011, it moved its kvass factory to Tianjin, increasing its sales to 20,000 tons in the first year.
Elsewhere
Following the influx of immigrants to the UK after the 2004 enlargement of the European Union, several stores selling cuisine and beverages from Eastern Europe were established, many of which stock imported (primarily pasteurised) kvass. A number of different flavours of unpasteurised kvass, fermented using a sourdough starter culture, also became available in the UK in 2023. In recent years, kvass has also become more popular in Serbia.
In 2017, a version of kvass from carrots or beets was developed in California by the producer Biotic Ferments.
Nutritional composition
Naturally fermented kvass contains 5.9 ± 0.02% carbohydrates, of which 5.7 ± 0.02% are sugars (mostly fructose, glucose, and maltose), as well as 0.71 ± 0.09, 1.28 ± 0.12, and 18.14 ± 0.48 mg/100 g of thiamine, riboflavin, and niacin respectively. In addition to that, 19 different aroma volatile compounds have also been identified in naturally fermented kvass, most notably 4-penten-2-ol (10.05×10⁷ PAU), which has a fruity odour; carvone (2.28×10⁷ PAU) originating from caraway fruits used as an ingredient in rye bread; and ethyl octanoate (1.03×10⁷ PAU), which has an odour of fruit and fat.
Traditional kvass made from rye wholemeal bread has been found to have, on average, twice the dietary fibre content, 60% more antioxidant activity (due to the addition of caramel and citric acid to the bread), and three times less reducing sugar content than industrially produced kvass.
Historically, alcohol by volume (ABV) of kvass varied depending on the ingredients, microbial flora, as well as temperature and length of fermentation, but nowadays it is usually not higher than 1.5%. The wide availability and consumption of kvass, including by children of all ages, together with the lacking indication of ABV for kvass on the labels and in advertisements, has been named a possible contributor to chronic alcoholism in the former Soviet Union.
Use
Apart from drinking, kvass is also used by families as the basis for many dishes. Traditional cold summertime soups of Russian cuisine, such as okroshka, botvinya, and tyurya, are based on kvass.
Cultural references
The name of Kvasir, a wise being in Norse mythology, is possibly related to kvass.
There is a Russian expression, (literally 'to clamber from bread to kvass'), which means 'to live from hand to mouth' or to 'scrape by' referring to the frugal practice amongst the poor peasants of making kvass from stale leftovers of rye bread. Another kvass-related term in Russian is "" (квасной патриотизм) dating back to an 1823 letter by the Russian poet Pyotr Vyazemsky who defined it as "unqualified praise of everything that is your own".
In the Polish language, several traditional sayings that reference exist. There is also an old Polish folk rhyming song. It shows the history of kvass in the country as having been drunk by generations of Polish reapers as a thirst-quenching beverage used during periods of hard work during the harvest season, long before it became popular as a medicinal drink among the szlachta. The song goes as follows:
In the Polish village of Zaława, there is a customary game known as ('volcano') that is associated with the beverage. The fermentation of sugars makes kvass slightly carbonated, thus, when shaken or heated, it can cause the liquid to suddenly and rapidly rise out of an open vessel. Playing wulkan consists of vigorously shaking a bottle of kvass shortly before handing it to someone else who is going to drink it; the sudden "shooting out" of the beverage onto the person opening the bottle is a source of entertainment for the youth of Zaława and a well-known prank during regional festivities.
In Tolstoy's War and Peace, French soldiers are aware of kvass on entering Moscow, enjoying it but referring to it as "pig's lemonade". In Sholem Aleichem's Motl, Peysi the Cantor's Son, diluted kvass is the focus of one of Motl's older brother's get-rich-quick schemes.
See also
Borș (bran)
Boza
Brottrunk
Chicha
Ginger ale
Malta
Mors
Podpiwek
Pruno
Rejuvelac
Rivella
Tepache
References
External links
Fermented drinks
Rye-based drinks
Ukrainian drinks
Chinese drinks
Estonian drinks
Latvian drinks
Lithuanian drinks
Polish drinks
Belarusian drinks
Russian drinks
Serbian drinks
Soviet cuisine
ABJM superconformal field theory

In theoretical physics, ABJM theory is a quantum field theory studied by Ofer Aharony, Oren Bergman, Daniel Jafferis, and Juan Maldacena. It provides a holographic dual to M-theory on . The ABJM theory is also closely related to Chern–Simons theory, and it serves as a useful toy model for solving problems that arise in condensed matter physics. It is a theory defined on superspace.
See also
6D (2,0) superconformal field theory
Notes
References
Conformal field theory
Supersymmetric quantum field theory
String theory
Dyno (company)

Dyno-Rod is an emergency drainage and plumbing company operating in the United Kingdom. Formed in 1963 as Dyno-Rod, Dyno initially specialised in the use of electromechanical machines for drain clearance. Since then, the company has grown considerably, consolidating its drainage business and diversifying into comprehensive plumbing services for both domestic and industrial sites. It is a franchise-granting limited company.
Dyno-Rod describes itself as "one of the market leaders in our field." Apart from clearing blockages, it uses CCTV to inspect drains, makes plumbing repairs, and installs new pipework. The company offers a twenty-four-hour emergency response service across the United Kingdom and Ireland (Éire).
Dyno-Rod was acquired by Centrica's subsidiary British Gas for £57.6 million ($104.3m) in October 2004.
History
Dyno-Rod service was launched in 1963 by Jim Zockoll, in South London, and the business was based in Surbiton for many years. Zockoll was a flight engineer for Pan American who spotted an opportunity while on a stopover in London: the hotel where he was staying was suffering drainage problems, had outdated repair equipment, and was taking too long over repairs. Zockoll flew back to the US to fetch his electromechanical drain-cleaning machine – equipment then unknown in the UK – and cleared the blockage in 20 minutes, giving birth to the company Dyno-Rod.
Dyno-Rod based its drain-cleaning service on what were then new electromechanical techniques. After initial success, Dyno began granting franchise licences in 1965. There are currently twenty-five Dyno franchisees in the United Kingdom. As the first non-fast-food franchised business in the United Kingdom, and the second franchised business of any kind there, Dyno-Rod played a prominent role in the formation of the British Franchise Association, of which it remains a full member.
Over time, the Dyno brand developed other associated businesses. Dyno-Secure was launched in 1987, to offer a range of locks and security services, while in 2001, Dyno-Plumbing offered a comprehensive plumbing service. Dyno Group was acquired by British Gas, a subsidiary of Centrica, in October 2004. Since then, Dyno franchisees have developed larger territories and operate multiple brands within the Dyno group.
In popular culture
The then Prime Minister, David Cameron, compared himself in April 2014 to the company's services, saying "If there are things that are stopping you from doing more, think of me as a giant Dyno-Rod".
References
External links
Centrica
Plumbing
Retail companies established in 1963
2004 mergers and acquisitions
Cleaning companies
1963 establishments in England
Old-age-security hypothesis

The old-age-security hypothesis is an economic hypothesis according to which parents view their children as a source of income and personal services in old age. Within the framework of this hypothesis, the demand for children is considered as the need to ensure a safe old age. As a consequence, increasing the profitability of alternative assets or introducing a universal public pension system reduces the demand for children.
Description
According to this hypothesis, the presence of a state pension system reduces the overall birth rate and hinders investment in the human capital of children, which in the long term leads to a decrease in the size of the working-age population and affects their overall income growth. On the contrary, the absence of alternative assets or state pension provision makes it necessary to have children.
This hypothesis rests on two basic assumptions: that people control the number of children born, and that people are guided by selfish motives (that is, they consider only their own consumption throughout life). According to this hypothesis, payments from children to support their elderly parents are seen as repayment of the resources parents spent to provide for their children in childhood.
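The logic of the hypothesis can be sketched as a toy two-period choice problem: a parent divides income between own consumption, savings at gross return R, and children, each of whom costs something while young and transfers something back in old age. All parameter values below are illustrative assumptions, not taken from the literature; the point is only the comparative statics — introducing a public pension or raising the asset return lowers the chosen number of children.

```python
import math

def optimal_children(R, pension, wage=1.0, cost=0.08, transfer=0.12, beta=0.9):
    """Grid search over savings s and number of children n maximizing
    log(c_young) + beta*log(c_old), where old-age income comes from
    assets (R*s), children's transfers (transfer*n), and any pension."""
    best_n, best_u = None, -math.inf
    for n in range(0, 13):
        for i in range(1, 200):
            s = i / 200.0
            c_young = wage - cost * n - s
            if c_young <= 0:
                continue
            c_old = R * s + transfer * n + pension
            u = math.log(c_young) + beta * math.log(c_old)
            if u > best_u:
                best_n, best_u = n, u
    return best_n

n_no_pension = optimal_children(R=1.0, pension=0.0)
n_with_pension = optimal_children(R=1.0, pension=0.5)
n_high_return = optimal_children(R=2.0, pension=0.0)
print(n_no_pension, n_with_pension, n_high_return)
```

In this toy calibration the selfish parent "invests" in children when their transfer-to-cost ratio beats the asset return; a pension crowds out some of that demand, and a sufficiently attractive alternative asset eliminates it entirely — exactly the comparative statics the hypothesis asserts.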
Alternative hypotheses explaining the birth rate are intergenerational altruism and various hypotheses related to the labor market.
The earliest mention of an inverse relationship between the birth rate and the level of pension provision is found in Leibenstein (1957). Van Groezen, Leers and Meijdam (2003), Sinn (2007), Cigno and Werding (2007), Ehrlich and Kim (2007), Van Groezen and Meijdam (2008), Gahvari (2009), Cigno (2010), Fenge and von Weizsäcker (2010), Regös (2014), and Boldrin, De Nardi and Jones (2015) argued that the decline in the birth rate is a consequence of the introduction of pension systems. Guinnane (2011), based on empirical evidence of declining fertility in historical time, considered the introduction of social protection to be one of the causes of the first demographic transition. Cigno and Rosati (1992), Cigno (2003), and Billari and Galasso (2009) examined this hypothesis at the level of specific countries and individual pension systems. Cigno and Werding (2007) gave an overview of work on the relationship between pensions and fertility in the modern period. According to these studies, lower pension coverage leads to a higher birth rate.
Alessandro Cigno
According to Alessandro Cigno, ensuring old age is an incentive for raising children and a dominant factor in increasing the birth rate. Cigno also believes that it has been proven that the coverage of the population by the pension system reduces the birth rate, although it increases household savings. In his opinion, the state pension system prevents parents from investing in the human capital of their children. Considering the rapidly aging population and the imbalance between the number of recipients of pensions and those who pay pension contributions as the reason for the deficit of the pension fund and the decrease in the income of pensioners, Alessandro Cigno proposes to pay the pension to parents directly from the pension contributions of their children.
Robert Fenge and Beatrice Scheubel
In 2017, Robert Fenge and Beatrice Scheubel published an article, Pensions and fertility: back to the roots, in which they studied the relationship between the development of the state pension system (more precisely, the dynamics of the share of people participating in pension insurance programs) and the dynamics of fertility, using the example of the German Empire at the end of the 19th and beginning of the 20th century. In addition to pensions, the multivariate analysis took into account the factors usually cited as causes of the first demographic transition, such as literacy and urbanization. According to Fenge and Scheubel, the introduction of pensions in Germany at the turn of the 19th and 20th centuries explains up to 15% of the decline in the birth rate in 1895–1907.
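The kind of multivariate analysis described here — regressing fertility on pension coverage while controlling for literacy and urbanization — can be sketched on synthetic data. The data-generating coefficients below are invented for illustration; this is not Fenge and Scheubel's dataset or estimation method, only a minimal ordinary-least-squares example of the approach.

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination and partial pivoting.
    Each row of X should include a leading 1 for the intercept."""
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic district data with an assumed "true" pension effect of -1.5
rng = random.Random(0)
X, y = [], []
for _ in range(500):
    pension, literacy, urban = rng.random(), rng.random(), rng.random()
    fert = 5.0 - 1.5 * pension - 0.8 * literacy - 0.5 * urban + rng.gauss(0, 0.1)
    X.append([1.0, pension, literacy, urban])
    y.append(fert)

beta = ols(X, y)
print([round(b, 2) for b in beta])  # approximately [5.0, -1.5, -0.8, -0.5]
```

With the controls included, the estimated pension coefficient isolates the association net of literacy and urbanization — the design question at the heart of the pensions-and-fertility literature discussed below.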
UN Conference on the World Population in Bucharest, 1974
A meeting dedicated to the reduction of the world population was held in Bucharest from 19 to 30 August, initiated by the USA and the UN. It was attended by more than 1,400 delegates from 136 countries (at that time there were 138 member states in the UN, and family planning was already being promoted in 59 countries).
The plan was written in advance, developed mainly by the American side. It initially contained quantitative target indicators for CPR (controlled population reduction) for individual countries, but these had to be removed as a result of protests. It was the first official international document in the field of demography and fertility reduction.
The conference set, for the first time in history, the goal of "curbing population growth" and worked out specific ways to achieve it. Among them was the introduction of benefits and pensions.
"Recognizing the diversity of social, cultural, political and economic conditions, everyone agrees that the following development goals will lead to a moderate birth rate: reduction of infant and child mortality, full involvement of women in the development process (education, economics, politics), promotion of social justice, accessibility of education, eradication of child labor and abuse of children, the introduction of social security and old-age pensions, the establishment of a minimum age for marriage."
By this time, demographic policy and "family planning" programs had been successfully applied in Asian countries, in particular in India, Japan, Pakistan and Sri Lanka. In 1974, teaching staff worked in 39 countries, in which about 80% of the population of the "developing world" lived.
Barbara Entwistle
Here is how the author explains the purpose of her work: in 1974, a meeting took place in Bucharest, which resulted in the adoption of a Plan of Action for the World Population. It strongly recommended introducing school education and pensions everywhere, both to benefit the population and to reduce its numbers.
By that time it was already generally accepted in America that pensions and education lower fertility. To test the correctness of this opinion, its effectiveness (degree of impact), its applicability to different countries, and the predictions made earlier, Barbara carried out this extensive work.
About previous studies: on page 258, the author mentions previous studies of the dependence of fertility on pensions and education, including McGreevy and Birdsall (1974), the Triangle Research Institute (1971), Kirk (1971), Kasarda (1971), Adelman (1963), Friedlander and Silver (1967), Beaver (1975), and at least 15 others. They showed very different results, from zero correlation to significant correlation (above the statistical error). Light was shed by the study of Friedlander and Silver (1967), who first examined the whole world and then developed and developing countries separately; it turned out that the negative impact of education on fertility is indeed significant in developing countries but practically zero in developed countries.
These studies used different criteria for education (for example, literacy, school enrollment, newspaper circulation) and included different numbers of variables (that is, a unified method for calculating the dependence had not yet been developed). Barbara comes to the conclusion that the correlation becomes significant when the study includes 4–5 variables, in other words, when education is complemented by something else. (p. 261)
Beaver (1975) finds that the effect of education on fertility is more pronounced in TFR but not OCD because TFR is an age-standardized unit of measure. But the difference in results is still small. The same is indicated by the studies of Adelman (1963) and Janowitz (1971). Another interesting finding from Beaver is that education has an effect on OCD with a lag of 12.5 years, but not 7.5 years. (This conclusion of Beaver is not supported by Barbara's own research, she found no dependence on lags of 5, 10 and 15 years.) (P. 262)
Opportunities for education have skyrocketed around the world since World War II, says Barbara. Other studies of the dependence of fertility on education, using, like Beaver, data for the 1950s - early 1960s (for example, Ekanem, 1972), found no relationship between them. (p. 263)
So the question of the dependence of fertility on education remains unclear, although it has been discussed comprehensively and in detail. With that, Barbara finishes with education and moves on to pensions.
Friedlander and Silver (1967) were the first to introduce pensions into country comparisons. They found a significant influence of this factor (pensions) for both developed and developing (especially developing) countries, which had not been found in the earlier data from 1960. (pp. 263–264) Holm (1975), in contrast to Friedlander and Silver, finds a significant negative correlation between fertility and pension coverage, possibly because more recent data, from the mid-1960s, are used. The size of the pension has very little effect on the birth rate. (p. 264)
Kelly, Cutwright, and Hittle (1976) criticized Holm (1975) for not taking into account so-called modernization; they argued that the drop in the birth rate actually correlates not with pensions but with modernization, and with the introduction of the modernization coefficient they created, the impact of pensions became insignificant. (p. 265)
Holm responded with a paper (1976) analyzing the correlation of fertility with another indicator, the percentage of government spending on pensions in the GDP of the state, on a larger sample of countries. It confirmed the pension hypothesis (that pensions reduce the birth rate), even with the inclusion of this modernization index. (These results are consistent with those of Barbara's dissertation in Chapters 5 and 6.) Again, Holm's work shows a stronger negative effect of pensions on fertility in developing countries, especially in the early 1960s. (p. 265)
Barbara mentions 4 more studies, which gave the same result as hers, but her work is more detailed, because it shows that pensions affect not only the birth rate, but also the rate at which it declines, that pensions have a delayed effect, and that this effect increased during the 1960s. Initially, the effect (of lower fertility due to pensions) was more pronounced in developing countries, but the difference faded as they developed over a 10-year period. (p. 266)
According to Barbara, the creation of comparative cross-country models (of the dependence of fertility on economic indicators) began with Weintraub's (1962) rather simple model of the dependence of the crude birth rate on indices of economic development. (p. 267) Since then, the models have become more complex, including more and more variables, among them education and pensions, as well as population density, urbanization, working women, family planning programs, and others (Leibenstein, 1975; Adelman, 1963; Ekanem, 1972). Curiously, many studies (Ekanem, 1972; Gregory and Campbell, 1976; Moldin et al., 1978) show a positive, albeit small, impact of urbanization, as does this dissertation, while Beaver (1975) shows a clear negative impact of urbanization on fertility. Moldin et al. (1978) showed a significant impact of family planning programs in developed countries in 1965–1975, but did not take pensions into account.
A study by Moldin (1978) showed that family planning programs (introduced mainly in the 1960s) in developing countries have mitigated the effect of economic inequality on fertility; i.e., the poor, whose fertility is usually higher, have lowered this rate. (p. 267) In the 1970s, the delayed effects of family planning programs were expected to appear sooner in developed countries (which had started earlier) than in developing countries, but Moldin found no significant differences between these groups of countries in the rate of fertility decline in the 1970s. This may indicate that some other factor was at work. (p. 268)
Research became more complex: Heer (1966), Kirk (1971), Gregory et al. (1971), Beaver (1975), and Gregory and Campbell (1976) examined a variety of relationships across a variety of variables. (p. 269) Attention is paid to threshold values; for example, Moldin (1978) estimates the threshold of female literacy at 55 to 85%, above which it begins to influence fertility.
The same threshold should apply to pensions, Barbara notes, but there is no research yet that defines this threshold.
Pages 271–276 are devoted to some controversial issues of methodology. For example, the author is a supporter of so-called homogeneity, i.e., she claims that different nations under the same conditions will behave in the same way; let the proponents of heterogeneity prove otherwise, she says. Other questions include how best to group nations for analysis; the influence of time on the results (at different points in time, the reaction to the same variables, such as pensions and education, may differ); the conversion of one unit of fertility measurement into another; the accuracy of the data; and so on.
Barbara notes that there is interdependence, but she would view education and pensions as causes and fertility as a consequence. In the chapter "Pension and education programs as a policy of fertility," Barbara notes that these programs are essentially equal to a political decision of the state to reduce the birth rate and curb the growth of its population, and the choice is up to the state. (p. 287) Thus, Tsul and Baugh (1978, p. 33) conclude that the trends of recent years are encouraging because they rid the world of the darkest prospects of overpopulation, famine and world war, predicted by the Malthusians for the year 2000. Has a remedy for overpopulation been found? It is premature to think so, says Barbara. In 1975, the birth rate was 4,688 births per 1,000 women in 24 Latin American countries, 6,264 in 40 African countries, 6,009 in 16 countries of the Middle East, and 4,572 in 25 Asian countries; the danger has not yet passed. By 1978 (according to Tsul and Baugh's report), the situation had not changed: 2/3 of developing countries have this figure above 5,000, and in more than half of them it exceeds 6,000, meaning the countries of the Third World double their numbers every generation. Tsul and Baugh pin their hopes on family planning programs that world leaders must implement. (p. 288) But these programs only reduce the number of children to the desired number, which is still quite high in developing countries, say Arnold et al. (1975), who examined 4 countries.
The policy of reducing the birth rate must continue, Barbara insists.
Barbara Entwistle, in her 1981 dissertation, proposes a "child role hypothesis" based on the thesis that mass education and pension programs are linked to fertility by feedback. (pp. 39, 277) Barbara notes that this thesis was put forward by scientists before her and mentions four previous studies of the relationship between pensions and fertility: two of them (Friedlander and Silver, 1967, and Kelly et al., 1976) did not find a connection, one revealed a very weak dependence, on the verge of statistical error (Holm, 1975), and the last (Holm, 1976) showed a strong dependence.
Barbara conducted a comprehensive analysis for 146 ethnic populations (countries), of which 120 had complete data, and for 1960, 1965 and 1970. (p. 145) It turned out that 1970 showed a more pronounced dependence than 1965, and 1965, in turn, showed a more pronounced dependence than 1960, in other words, earlier work examined earlier data, where the dependence is small.
Barbara names two explanations for this negative dependence (of fertility on pensions) accepted in American demography: the economic one, in which children are breadwinners in old age and pensions reduce this role, and the generally accepted one, that pensions change the structure of the family. Both lead to the same result, a decrease in the birth rate, so Barbara does not distinguish between them, i.e., she introduces neither indices measuring the value of children nor the strength of bonds between children and parents (p. 203). She herself leans toward the generally accepted explanation rather than the "costs and benefits" explanation, because, in her opinion, it explains the delayed effect of pensions: parents-to-be must learn about pensions and the opportunities they provide, and this takes some time. This generally accepted explanation (the weakening of the family structure) is also supported by the fact that the effect of pensions increases over time (p. 204), which, in her opinion, would not have been the case with the "costs and benefits" mechanism.
Pensions were found to have a deferred effect, i.e., their action appears with a delay.
But Barbara did not find a dependence of fertility on educational programs, or found only a very weak one, on the verge of statistical error, which "disappointed" her (p. 204). In her opinion this again fails to confirm the "costs and benefits" hypothesis, according to which the "price" of children is greatly increased precisely by education, because parents are forced to spend many years on school bags and notebooks instead of sending the child to work to bring money into the house. (pp. 204, 278, 281–282) Another confirmation of the hypothesis that pensions act precisely through the weakening of family ties Barbara sees in the fact that pensions initially had a stronger effect on developing countries, because in developed countries family ties were already weakened.
Barbara notes two studies (Müller, 1972, and Arnold et al., 1975) that introduced indices of family ties and showed that fertility changes in accordance with them, as predicted. She emphasizes the importance of developing such indices, so that they include not only the support of parents but also cohabitation (which is absent from the child role hypothesis); no such indicators exist yet, and most countries cannot be evaluated by them. (Barbara's own preliminary research did not show fertility to be a function of cohabitation indices or the number of adults in the household, so she did not include these indices in the main study.)
Caldwell (1976, 1977) put forward a similar hypothesis: that Westernization (the Western way of life, namely education and pensions) rather than industrialization reduces fertility. He based it on African countries where industrialization was in its infancy but something was already influencing fertility, and this, according to the author, was Westernization. In other words, the prospect of receiving a pension in the future already had an influence (i.e., the future parent considered himself as already working at the factory and a recipient of a pension, although the factory had not yet opened), rather than the example of ancestors (who did not work in factories). Such a claim needs more proof, adds Barbara.
But pensions fit neatly into the child role hypothesis, because they free children from the need to feed their parents, thereby weakening family ties, shrinking the family and the desire to have children of one's own; and the fertility predictions made on this basis came true.
On page 205 (also 283, 289), Barbara sums up that pensions are extremely promising for reducing the birth rate, and advises all Third World countries to introduce pensions (including funded ones) for this purpose. Educational programs are of little use for such a reduction, says Barbara, except for spreading knowledge about contraception, activities outside the family, etc., among future parents. And even such education, firstly, takes time, and secondly, has little effect.
She cites article 20 of the Universal Declaration of Human Rights (1948), which says that "everyone has the right to social security" (cited from Savi, 1972, p. 2), and notes that the World Population Plan of Action (1974) supports this thesis precisely from the standpoint of reducing the population, as well as ensuring human rights.
Barbara finds it useful to make comparative analyses not only across countries but also across individual communities that differ in pension coverage and education, down to the study of individual families (for example, such studies could clarify the role of "son preference" by examining the education of boys and girls separately); the child role hypothesis is very promising, because its predictions come true. (p. 285) Conclusions: the nuclear family is weaker than the "long" family (which includes many relatives) and is less able to pay the "price" of children, because it does not receive adequate support from relatives, although it fits better into modern industrial society; and the nuclear family accompanies lower fertility.
Barbara also calls into question the thesis, accepted before her, that all nations react to the same innovations in the same way. The results of her work confirm neither the assumption of homogeneity (that all nations react in the same way) nor that of heterogeneity. For example, in developed countries the decline in fertility due to the introduction of pensions turned out to be smaller than forecast. Pensions had the greatest influence on developing countries and the countries of Latin America, and of the years studied (1960, 1965 and 1970), the effect was most pronounced in 1970.
Borisov Vladimir Al.
In his book Prospects of Fertility [6], back in 1976, Borisov named, among the reasons for having few children, along with the separation of children from family production (in the peasant family) and the end of communal land ownership, social insurance systems, i.e. pensions, as well as developed medicine, which reduce the dependence of the elderly on their children and make children "unnecessary" for these purposes. Borisov saw the reason for the decline in the birth rate in the reduction of the need for children, and argued that no improvements in living conditions or child allowances will increase the birth rate if there is no socio-cultural need for children: the so-called behavioral approach.
For his views on the demographic situation, the threat of depopulation in the country and the demographic policy of the time, Borisov was dismissed from the university and deprived of the right to teach until 1991, A. I. Antonov reports in his article "80 years since the birth of Vladimir Alexandrovich Borisov" [7]. The reason was his speech at a student conference at Moscow State University. The Leninsky District Committee of the CPSU of Moscow declared him an "apolitical scientist."
At that time, it was customary to make ironic remarks about the "threat of depopulation", and some reviewers generally believed that such a thing was impossible in the next 500 years. In 1982, Borisov was demoted to just a researcher. The reason is again "careless performance" (the words of Borisov himself, from his autobiography). This harassment continued in the post-Soviet period, when in 1994 a meeting on the defense of his doctoral dissertation was disrupted.
Borisov considered the birth rate indicator to be the main one in demography, and the decline in the birth rate was the global problem No. 1. He saw the reason for the lack of children in the lack of motivation, intergenerational ties (the need for children). He also believed that this process (the so-called Demographic Transition) is reversible, provided that such a need is returned. Benefits and living conditions alone will not increase the birth rate-if there is no need for children, Borisov believed.
(Also, in his article, Antonov noted that the phenomenon of low birth rate has not been sufficiently studied).
Borisov was once a student of Urlanis. If Borisov wrote his landmark work Prospects of Fertility in 1976, Urlanis wrote "Problems of Population Dynamics of the USSR" in 1974, from which the censors removed the forecasts of demographic processes for 2000. The future would show that they were correct.
Borisov is also the author of the GMER model of the hypothetical minimum of natural fertility. Antonov defines the essence of this model as "behavioral."
References
Bibliography
The long-term determinants of marital fertility in the developed world (19th and 20th centuries): The role of welfare policies Jesús J. Sánchez-Barricarte
Education, Pension Programs, and Fertility: A Cross-National Investigation, with Special Reference to the Potential Held by Education and Pension Programs as Fertility Reduction Policies
Entwisle, Barbara (M.A.: Sociology, 1978) Title: The effect of pension programs on fertility : a replicative study Advisor: Kobrin, Frances E.
Barbara Entwisle, Albert I. Hermalin, William M. Mason Socioeconomic Determinants of Fertility Behavior in Developing Nations: Theory and Initial Results
The Effect of Old-Age Pensions on Fertility: Evidence from a Natural Experiment in Brazil
The impact of pension systems on fertility rate: a lesson for developing countries
Influence of women's workforce participation and pensions on total fertility rate: a theoretical and econometric study
Pension, Fertility, and Education
Fertility and Pension Programs in LDCs: A Model of Mutual Reinforcement
Fertility and education investment incentive with a pay-as-you-go pension
The effects of child-related benefits and pensions on fertility by birth order: a test on Hungarian data
Pensions with endogenous and stochastic fertility
The fertility effects of public pension: Evidence from the new rural pension scheme in China
Sociocultural evolution theory
Demographic economics
Human geography | Old-age-security hypothesis | Environmental_science | 5,295 |
3,845,621 | https://en.wikipedia.org/wiki/Microdosing | Microdosing, or micro-dosing, involves the administration of sub-therapeutic doses of drugs to study their effects in humans, aiming to gather preliminary data on safety, pharmacokinetics, and potential therapeutic benefits without producing significant physiological effects. This is called a "Phase 0 study" and is usually conducted before clinical Phase I to predict whether a drug is viable for the next phase of testing. Human microdosing aims to reduce the resources spent on non-viable drugs and the amount of testing done on animals.
Less commonly, the term "microdosing" is also sometimes used to refer to precise dispensing of small amounts of a drug substance (e.g., a powder API) for a drug product (e.g., a capsule) and, when the drug substance also happens to be liquid, this can potentially overlap what is termed microdispensing. For example, psychedelic microdosing.
Techniques
The basic approach is to label a candidate drug with the radioisotope carbon-14, then administer the compound to human volunteers at levels typically about 100 times lower than the proposed therapeutic dosage (roughly 1 to 100 micrograms, but not above 100).
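The dose arithmetic implied above can be sketched as a tiny helper. This is an illustration only: the function name is ours, and the rule shown (one-hundredth of the therapeutic dose, capped at 100 micrograms) is our reading of the parenthetical limit.

```python
def microdose_ug(therapeutic_dose_ug: float) -> float:
    """Illustrative upper bound on a microdose: 1/100 of the proposed
    therapeutic dose, capped at 100 micrograms."""
    return min(therapeutic_dose_ug / 100.0, 100.0)

# A 50 mg (50,000 microgram) therapeutic dose would be capped at 100 micrograms,
# while a 5 mg dose scales to 50 micrograms.
print(microdose_ug(50_000))  # 100.0
print(microdose_ug(5_000))   # 50.0
```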
Because only microdose levels of the drug are used, analytical methods are limited and extreme sensitivity is needed. Accelerator mass spectrometry (AMS) is the most common method for microdose analysis. AMS was developed in the late 1970s from two distinct research threads with a common goal: an improvement in radiocarbon dating that would make efficient use of datable material and extend the routine and maximum reach of radiocarbon dating. AMS is routinely used in geochronology and archaeology, but biological applications began appearing in 1990, mainly due to the work of scientists at Lawrence Livermore National Laboratory. AMS service is now more accessible for biochemical quantitation from several private companies, and non-commercial access to AMS is available at the National Institutes of Health (NIH) Research Resource at Lawrence Livermore National Laboratory or through the development of smaller, affordable spectrometers. AMS does not measure the radioactivity of carbon-14 in microdose samples; like other mass spectrometry methods, it measures ionic species according to their mass-to-charge ratio.
Psychedelic
Psychedelic microdosing is the practice of using sub-threshold doses (microdoses) of serotonergic psychedelic drugs in an attempt to improve creativity, boost physical energy levels, improve emotional balance, increase performance on problem-solving tasks, and treat anxiety, depression and addiction, though there is very little evidence supporting these purported effects as of 2019.
Impact of psychedelic microdosing
A 2021 study reported increased conscientiousness due to microdosing, as well as improved mental health after 30 days of microdosing with psychedelics. More research is needed to determine whether microdosing helps those who suffer from depression and anxiety; it has not been seen to improve participants' motor responses, attention, or cognitive problem-solving abilities. Whether microdosing works at all remains under investigation, and although researchers are looking into it more and more, the placebo effect makes research on this topic difficult.
In January 2006, the European Union Microdose AMS Partnership Programme (EUMAPP) was launched. Ten organizations from five countries (United Kingdom, Sweden, Netherlands, France, and Poland) set out to study various approaches to the basic AMS technique, with the study to be published in 2009.
One of the most meaningful potential outcomes of Phase-0/microdosing studies is the early termination of development. In 2017, Okour et al. published the first example in the literature of the termination of an oral drug based on IV microdose data.
See also
Pharmacology
References
Further reading
External links
Review article on microdosing as a means of reducing the use of animals in drug testing (PDF format)
EU announcement of EUMAPP project
Pharmacokinetics
Clinical research
Alternatives to animal testing | Microdosing | Chemistry | 839 |
2,792,484 | https://en.wikipedia.org/wiki/Cultivar%20group | A Group (previously cultivar-group) is a formal category in the International Code of Nomenclature for Cultivated Plants (ICNCP) used for cultivated plants (cultivars) that share a defined characteristic. It is represented in a botanical name by the symbol Group or Gp. "Group" or "Gp" is always written with a capital G in a botanical name, or epithet. The Group is not italicized in a plant's name. The ICNCP introduced the term and symbol "Group" in 2004, as a replacement for the lengthy and hyphenated "cultivar-group", which had previously been the category's name since 1969. For the old name "cultivar-group", the non-standard abbreviation cv. group or cv. Group is also sometimes encountered. There is a slight difference in meaning, since a cultivar-group was defined to comprise cultivars, whereas a Group may include individual plants.
The cultivar-groups, in turn, replaced the similar category convariety (convar.), which did not necessarily contain named varieties.
The ICNCP distinguishes between the terms "group" and "Group", a "group" being "an informal taxon not recognized in the ICBN", while a "Group" is the formal taxon defined by the ICNCP (see above).
This categorization does not apply to plant taxonomy generally, only to horticultural and agricultural contexts. Any given Group may have a different taxonomic classification, such as a subspecific name (typically a form or variety name, given in italics) after the genus and species.
A Group is usually united by a distinct common trait, and often includes members of more than one species within a genus. For example, early flowering cultivars in the genus Iris form the Iris Dutch Group. A plant species that loses its taxonomic status in botany, but still has agricultural or horticultural value, meets the criteria for a cultivar group, and its former botanical name can be reused as the name of its cultivar group. For example, Hosta fortunei is usually no longer recognized as a species, and the ICNCP states that the epithet fortunei can be used to form Hosta Fortunei Group.
Orthography
Every word in a Group name is capitalized (unless that conflicts with linguistic custom; for example, lower-case is used after a hyphen in a hyphenated term, like "Red-skinned", and for conjunctions and prepositions except in the first word of the name). This is followed by the capitalized word "Group". The combined Group name is not italicized or otherwise stylized, and follows the italicized Latin epithet. It can also be used after a vernacular name for the species, genus, or other category. Examples:
Lilium Darkest Red Group
Neofinetia falcata Hariba Group
hollyhock Chater's Double Group
"Group" may be abbreviated "Gp" (without a terminal . character). A cultivar group may be surrounded by parentheses (round brackets) for clarity in long epithets:
Solanum tuberosum (Maincrop Group) 'Desiree'
ICNCP illustrates this order consistently, though in actual practice the cultivar name in single quotation marks may come before that of the cultivar group (with or without parentheses):
Solanum tuberosum 'Desiree' Maincrop Group
"Group" is translated in non-English material, and uses the word order of the language in question, but is always capitalized. Translation may or may not be applied to the name itself: for example, a French text may retain the English name "Chater's Double" while translating "Group" and using French word order, whereas a German text may translate the name in full.
Groups are not necessarily mutually exclusive. For example, the same potato may be designated Solanum tuberosum Maincrop Group, or Solanum tuberosum Red-skinned Group, or given with both as Solanum tuberosum Maincrop Red-skinned Group, "depending on the purpose of the classification used".
See also
Grex (horticulture), a taxonomic category for hybrid orchids, defined by parentage rather than by characteristics
Polyploid, having extra sets of chromosomes. Polyploidy is a characteristic sometimes used to define a cultivar group.
Notes
References
group
Botanical nomenclature | Cultivar group | Biology | 919 |
36,841,426 | https://en.wikipedia.org/wiki/UMS/UHD/UHX | UMS/UHD/UHX is a forklift truck in the Tergo series from the Swedish manufacturer Atlet AB. It differs from traditional forklift truck design philosophy in that it is entirely designed around the operator. By letting the forklift truck adjust to the operator's skills and body structure the UMS/UHD/UHX is an example of how ergonomics is an important part of machine efficiency.
Sources
Atlet, Henrik Moberger, Tärnan Reportage AB 2008.
External links
Atlet AB
Engineering vehicles
sv:Atlet AB | UMS/UHD/UHX | Engineering | 139 |
67,239,644 | https://en.wikipedia.org/wiki/Website%20footer | In web design, a footer is the bottom section of a website. It is used across many websites around the internet. Footers can contain any type of HTML content, including text, images and links.
HTML5 introduced the <footer> element.
Common items that are included or linked to from footers are copyright, sitemaps, privacy policies, terms of use, contact details and directions.
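A minimal sketch of such a footer, with purely illustrative link targets and company name, might look like:

```html
<footer>
  <nav>
    <!-- Links commonly collected at the bottom of a site -->
    <a href="/sitemap">Sitemap</a>
    <a href="/privacy">Privacy policy</a>
    <a href="/terms">Terms of use</a>
    <a href="/contact">Contact</a>
  </nav>
  <!-- Copyright notice -->
  <p>&copy; 2024 Example Corp.</p>
</footer>
```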
Infinite scrolling cannot be used in combination with footers, because new content keeps loading before the user can reach the footer, making it inaccessible.
References
Web design | Website footer | Engineering | 105 |
11,512,301 | https://en.wikipedia.org/wiki/Microsphaera%20hommae | Microsphaera hommae is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Microsphaera
Fungi described in 1982
Fungus species | Microsphaera hommae | Biology | 42 |
36,971,277 | https://en.wikipedia.org/wiki/Strangulated%20graph | In graph theoretic mathematics, a strangulated graph is a graph in which deleting the edges of any induced cycle of length greater than three would disconnect the remaining graph. That is, they are the graphs in which every peripheral cycle is a triangle.
Examples
In a maximal planar graph, or more generally in every polyhedral graph, the peripheral cycles are exactly the faces of a planar embedding of the graph, so a polyhedral graph is strangulated if and only if all the faces are triangles, or equivalently it is maximal planar. Every chordal graph is strangulated, because the only induced cycles in chordal graphs are triangles, so there are no longer cycles to delete.
Characterization
A clique-sum of two graphs is formed by identifying together two equal-sized cliques in each graph, and then possibly deleting some of the clique edges. For the version of clique-sums relevant to strangulated graphs, the edge deletion step is omitted. A clique-sum of this type between two strangulated graphs results in another strangulated graph, for every long induced cycle in the sum must be confined to one side or the other (otherwise it would have a chord between the vertices at which it crossed from one side of the sum to the other), and the disconnected parts of that side formed by deleting the cycle must remain disconnected in the clique-sum. Every chordal graph can be decomposed in this way into a clique-sum of complete graphs, and every maximal planar graph can be decomposed into a clique-sum of 4-vertex-connected maximal planar graphs.
As Seymour and Weaver (1984) showed, these are the only possible building blocks of strangulated graphs: the strangulated graphs are exactly the graphs that can be formed as clique-sums of complete graphs and maximal planar graphs.
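The definition can be checked by brute force on small graphs. The sketch below (function and variable names are our own) enumerates vertex subsets that induce a cycle of length at least four, deletes each such cycle's edges, and tests whether the rest of the graph stays connected. The complete graph K4 is strangulated trivially (it has no induced cycles longer than a triangle), while the wheel W4 is not, because deleting the edges of its induced 4-cycle rim leaves a connected star around the hub.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Depth-first-search connectivity check over an undirected edge list."""
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def induced_long_cycles(vertices, edges):
    """Yield the edge sets of induced cycles with more than three vertices."""
    eset = {frozenset(e) for e in edges}
    for k in range(4, len(vertices) + 1):
        for sub in combinations(sorted(vertices), k):
            inside = [e for e in eset if e <= set(sub)]
            # An induced k-cycle has exactly k edges, every vertex of
            # degree 2, and is connected.
            if len(inside) != k:
                continue
            deg = {v: 0 for v in sub}
            for e in inside:
                for v in e:
                    deg[v] += 1
            if all(d == 2 for d in deg.values()) and \
                    is_connected(set(sub), [tuple(e) for e in inside]):
                yield inside

def is_strangulated(vertices, edges):
    """Definition test: deleting the edges of every induced cycle of
    length > 3 must disconnect the remaining graph."""
    eset = {frozenset(e) for e in edges}
    for cyc_edges in induced_long_cycles(vertices, edges):
        remaining = [tuple(e) for e in eset - set(cyc_edges)]
        if is_connected(set(vertices), remaining):
            return False
    return True

# K4: complete graph on 4 vertices, no induced cycle longer than a triangle.
K4 = ({0, 1, 2, 3}, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
# W4: 4-cycle rim a-b-c-d plus hub h joined to every rim vertex.
W4 = ({'a', 'b', 'c', 'd', 'h'},
      [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a'),
       ('h', 'a'), ('h', 'b'), ('h', 'c'), ('h', 'd')])
print(is_strangulated(*K4))  # True
print(is_strangulated(*W4))  # False
```

This brute force is exponential and only suitable for tiny graphs, but it makes the definition concrete.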
See also
Line perfect graph, a graph in which every odd cycle is a triangle
References
Seymour, P. D.; Weaver, R. W. (1984), "A characterization of strangulated graphs", Journal of Combinatorial Theory, Series B, 36: 90–93.
Graph families
Planar graphs | Strangulated graph | Mathematics | 416 |
17,800,749 | https://en.wikipedia.org/wiki/2-Methylacetoacetyl-CoA | 2-Methylacetoacetyl-CoA is an intermediate in the metabolism of isoleucine.
See also
3-hydroxy-2-methylbutyryl-CoA dehydrogenase
Thioesters of coenzyme A | 2-Methylacetoacetyl-CoA | Chemistry | 52 |
42,622,209 | https://en.wikipedia.org/wiki/John%20J.%20Abel%20Award | The John J. Abel Award is an annual award presented by the American Society for Pharmacology and Experimental Therapeutics (ASPET). The award is given for outstanding research in the field of pharmacology and/or experimental therapeutics; which comes with a $5000 prize, An engraved plaque, and all travel expenses paid to attend the ASPET Annual Meeting at Experimental Biology. The Award is named after American biochemist and pharmacologist, John Jacob Abel.
Recipients
1947 George Sayers
1948 J. Garrott Allen
1949 Mark Nickerson
1950 George B. Koelle
1951 Walter F. Riker, Jr.
1952 David F. Marsh
1953 Herbert L. Borison
1954 Eva King Killam
1955 Theodore M. Brody
1956 Fred W. Schueler
1957 Dixon M. Woodbury
1958 H. George Mandel
1959 Parkhurst A. Shore
1960 Jack L. Strominger
1961 Don W. Esplin
1962 John P. Long
1963 Steven E. Mayer
1964 James R. Fouts
1965 Eugene Braunwald
1966 Lewis S. Schanker
1967 Frank S. LaBella
1968 Richard J. Wurtman
1969 Ronald Kuntzman
1970 Solomon H. Snyder
1971 Thomas R. Tephly
1972 Pedro Cuatrecasas
1973 Colin F. Chignell
1974 Philip Needleman
1975 Alfred G. Gilman
1976 Alan P. Poland
1977 Jerry R. Mitchell
1978 Robert J. Lefkowitz
1979 Joseph T. Coyle
1980 Salvatore J. Enna
1981 Sydney D. Nelson
1982 Theodore A. Slotkin
1983 Richard J. Miller
1984 F. Peter Guengerich
1985 P. Michael Conn
1986 Gordon M. Ringold
1987 Lee E. Limbird
1988 Robert R. Ruffolo, Jr.
1989 Kenneth P. Minneman
1990 Alan R. Saltiel
1991 Terry D. Reisine
1992 Frank J. Gonzalez
1993 Susan G. Amara
1994 Brian Kobilka
1995 Thomas M. Michel
1996 John D. Scott
1997 David J. Mangelsdorf
1998 Masashi Yanagisawa
1999 Donald P. McDonell
2000 William C. Sessa
2002 Steven A. Kliewer
2003 David S. Bredt
2004 David Siderovski
2005 Randy A. Hall
2006 Christopher M. Counter
2007 Michael D. Ehlers
2008 Katerina Akassoglou
2009 John J. Tesmer
2010 Russell Debose-Boyd
2011 Laura M. Bohn
2012 Jin Zhang
2013 Arthur Christopoulos
2014 Craig W. Lindsley
2015 Pieter C. Dorrestein
2016 Jing Yang
2017 Samie R. Jaffrey
2018 Kirill A. Martemyanov
2019 Namandjé Bumpus
2020 Andrew Goodman
2021 Michael R. Bruchas
2022 Mikel Garcia-Marcos
References
Awards established in 1947
Pharmacy awards
Pharmacology | John J. Abel Award | Chemistry | 562 |
7,818,844 | https://en.wikipedia.org/wiki/Indian%20Human%20Spaceflight%20Programme | The Indian Human Spaceflight Programme (or the Gaganyaan programme) is an ongoing programme by the Indian Space Research Organisation (ISRO) to develop the technology needed to launch crewed orbital spacecraft into low Earth orbit. Three uncrewed flights, named Gaganyaan-1, Gaganyaan-2 and Gaganyaan-3, are scheduled to launch in 2025, followed by a crewed flight in 2026 on an HLVM3 rocket.
Before the Gaganyaan mission announcement in August 2018, human spaceflight was not a priority for ISRO, but it had been working on related technologies since 2007, and it performed a Crew Module Atmospheric Re-entry Experiment and a Pad Abort Test for the mission. In December 2018, the government approved a further ₹100 billion (US$1.5 billion) for a 7-day crewed flight of 2–3 astronauts.
If completed successfully, India will become the fourth nation to conduct independent human spaceflight after the Soviet Union/Russia, United States, and China. After conducting the first crewed spaceflights, the agency intends to start a space station programme, crewed lunar landings, and crewed interplanetary missions in the long term.
History
On August 9, 2007, the then Chairman of the ISRO, G. Madhavan Nair, indicated the agency was "seriously considering" the creation of the Human Spaceflight Programme. He further indicated that within a year, ISRO would report on its development of new space capsule technologies. Development of a fully autonomous orbital vehicle to carry a two-member crew into low Earth orbit (LEO) began a few months after that when the government allocated for pre-project initiatives for 2007 through 2008. A crewed orbital spaceflight would require about and a period of seven years of development. The Planning Commission estimated that a budget of was required for initial work during 2007–2012 for the crewed spaceflight. In February 2009, the Government of India authorized the human space flight programme, but fell short of fully funding it or creating the programme.
The trials for crewed space missions began in 2007 with the 600 kg Space Capsule Recovery Experiment (SRE), launched using the Polar Satellite Launch Vehicle (PSLV) rocket, and safely returned to Earth 12 days later. This was followed by the Crew Module Atmospheric Re-entry Experiment and the Pad Abort Test in 2018. This enabled India to develop the heat-resistant materials, technology, and procedures necessary for human space travel.
As per a memorandum of understanding (MoU), the Defence Research and Development Organisation (DRDO) will provide support for the human space mission with critical human-centric systems and technologies such as space-grade food, crew healthcare, radiation measurement and protection, parachutes for the safe recovery of the crew module, and fire suppression systems. The Defence Food Research Laboratory (DFRL) has worked on the space food for the crew and has been conducting trials on a G-suit for astronauts as well. A prototype called 'Advanced Crew Escape Suit', weighing 13 kg and built by Sure Safety (India) Private Limited, has been tested and its performance verified. While the crew module is designed to carry a total of 3 passengers, the maiden crewed mission may have only one or two crew members on board.
Having shown success in all preliminary tests, the decisive push for the creation of the Human Spaceflight Programme took place in 2017, and it was accepted and formally announced by the Prime Minister on August 15, 2018. The funding is approximately Rs 10,000 crore. The testing phase was expected to begin in December 2020, and the first crewed mission was to be undertaken in December 2021. However, on June 11, 2020, it was announced that the overall schedule for the Gaganyaan launches had been postponed due to the impact of the COVID-19 pandemic in India, in turn revising the timetable for the HSP. As of December 2022, the first uncrewed test flight is scheduled to launch no earlier than mid-2024, with the uncrewed second and crewed third flights to follow afterward. As per ISRO, the initial review process for food, potable water, emergency first-aid kits, and health monitoring systems for the Gaganyaan mission was complete as of March 16, 2021. The ISRO and CNES joint working group on the Human Spaceflight Programme is collaborating on space medicine for the Gaganyaan project.
Spacecraft developments
The first phase of this programme is to develop and fly the 3.7-ton spacecraft called Gaganyaan with the capacity to carry a 3-member crew in low Earth orbit and safely return to Earth after a mission duration of a few orbits to two days. An extended version of the spacecraft will allow flights of up to seven days, with rendezvous and docking capability. Before the flight of the Gaganyaan module, Group Captain Shubhanshu Shukla would fly on the Axiom-4 Mission to the International Space Station.
In the next phase, enhancements will lead to the development of a small habitat, allowing spaceflight durations of 30–40 days at once. Further advances based on experience will subsequently lead to development of a space station.
On October 7, 2016, Vikram Sarabhai Space Center Director K. Sivan stated that ISRO was gearing up to conduct a critical 'crew bailout test' called the ISRO Pad Abort Test to see how fast and effectively the crew module could be released safely in the event of an emergency. The tests were conducted successfully on July 5, 2018, at Satish Dhawan Space Centre, Sriharikota. This was the first of a series of tests to qualify a crew escape system technology. Parachute tests were scheduled before the end of 2019, and multiple in-flight abort tests were planned starting mid-2020.
India will not use any animals for life support system testing but robots resembling humans will be used. ISRO is targeting more than 99.8% reliability for its crew escape system.
ISRO plans to launch its crewed orbiter Gaganyaan atop a Launch Vehicle Mark 3 (LVM3). About 16 minutes after lift-off, the rocket will inject the orbital vehicle into an orbit 300 to km above Earth. The capsule would return for a splashdown in the Arabian Sea near the Gujarat coastline. As of May 2019, the design of the crew module was complete. The spacecraft will be flown twice uncrewed for validation before conducting actual human spaceflight. As of January 2020, the crew module was due to undergo testing in the wind tunnel facility of the Council of Scientific and Industrial Research (CSIR) at the National Aerospace Laboratories (NAL). The spacecraft will carry one crew in its maiden crewed mission to an orbit of .
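The flight profile above can be sanity-checked with basic orbital mechanics. The sketch below is illustrative only, not an ISRO figure: it computes the speed and period of a circular low Earth orbit at the roughly 400 km altitude quoted elsewhere in the programme, using the vis-viva relation for a circular orbit.

```python
import math

# Standard constants (approximate values for a back-of-the-envelope check).
MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_orbit(altitude_m: float) -> tuple[float, float]:
    """Return (orbital speed in m/s, orbital period in s) for a circular orbit."""
    r = R_EARTH + altitude_m
    v = math.sqrt(MU_EARTH / r)      # circular-orbit speed from vis-viva
    period = 2 * math.pi * r / v     # circumference divided by speed
    return v, period

v, t = circular_orbit(400_000)
print(f"speed ≈ {v / 1000:.2f} km/s, period ≈ {t / 60:.1f} min")
```

For a 400 km orbit this gives a speed of roughly 7.7 km/s and a period of about 92 minutes, consistent with typical LEO missions.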
The first uncrewed flight will involve the launch of a module which, after orbiting will re-enter the atmosphere and decelerate at an altitude of before splashing down.
Infrastructure development
Launch pad
India's maiden crewed mission is expected to take off from the Satish Dhawan Space Centre Second Launch Pad. In November 2019, ISRO released tenders for the pad's augmentation. A third launch pad in Sriharikota has been proposed for India's future launch vehicles and crewed missions. Systems for crew ingress and egress, an access platform, a recovery setup for emergencies during the flight's ascent phase, and a module preparation facility for assembly and testing will be built. All the facilities will be connected to an upcoming Gaganyaan control facility, which will be built nearby to facilitate communication with, and monitoring of, the crew capsule during flight.
Human-Rating of LVM3
Human-rating certifies that a system is capable of safely transporting humans. ISRO will be building and launching 3 missions to validate the human rating of the LVM3. Existing launch facilities will be upgraded to enable them to carry out launches under the Indian Human Spaceflight campaign.
ISRO has been modifying propulsion modules of various stages of the rocket for human rating. Theoretical parameters for human rating were expected to be achieved by August or September 2020 to be followed by simulations and three test launches.
Escape System
ISRO has successfully conducted a pad abort test to validate its launch escape system for fast and effective crew extraction in the event of an emergency. The tests were conducted successfully on July 5, 2018, at Satish Dhawan Space Centre, Sriharikota. This was the first of a series of tests to qualify a crew escape system technology. Work on parachute enlargement is also ongoing. Parachute tests are scheduled before the end of 2019, and multiple in-flight abort tests are planned starting mid-2020, using a liquid-fueled test vehicle.
A new test vehicle was designed in early 2020 for the validation of the crew escape system. The vehicle was built for in-flight crew escape testing and carries propulsion on top of the module to pull the module away to a safe distance.
Certification
ISRO is working on developing an indigenous mechanism to certify its spacecraft that will take humans to space.
Communication
The spacecraft is expected to communicate with ISTRAC and other partner antennas. For the initial test flights, two terminal ships are planned to be placed in the Pacific Ocean and the Atlantic Ocean for communication with the spacecraft. Future flights are also expected to be SATCOM-capable, communicating through Indian geostationary communication satellites.
Astronauts
Astronaut selection and training
In the spring of 2009, a full-scale mock-up of the crew capsule was built and delivered to Satish Dhawan Space Centre for astronaut training. India was to shortlist 200 Indian Air Force pilots for this purpose. The selection process would begin with the candidates completing an ISRO questionnaire, after which they would be subjected to physical and psychological analyses. Only 4 of the 200 applicants were to be selected for the first space mission training. While two would fly, the other two would act as reserves.
ISRO signed a memorandum of understanding in 2009 with the Indian Air Force's Institute of Aerospace Medicine (IAM) to conduct preliminary research on the psychological and physiological needs of the crew and the development of training facilities. IAM played a key role in determining astronaut training, the design of the crew capsule as per the anthropometric dimensions of the Indian population and a number of control and environmental systems as per psychological and physiological needs.
The announcement of Gaganyaan by Prime Minister Modi immediately attracted an enthusiastic reaction from the Indian diaspora, and ISRO received millions of letters and emails from Indian residents as well as expats willing to volunteer as astronauts for the project.
In January 2019, ISRO Chairman K. Sivan announced the creation of India's Human Space Flight Centre in Bengaluru for training astronauts. The centre will train the selected astronauts in rescue and recovery operations, operations in a zero-gravity environment, and monitoring of the radiation environment. While the HSFC will initially operate out of ISRO headquarters, another facility, a dedicated campus, has been planned to be built near Bengaluru. The facility will include offices, housing, testing and integration facilities and will also employ a workforce of 1,000 people in the long term.
An astronaut training facility will be established on a proposed site near Kempegowda International Airport in Devanahalli, Karnataka.
ISRO's Human Space Flight Centre and Glavcosmos, which is a subsidiary of the Russian state corporation Roscosmos, signed an agreement on July 1, 2019, for cooperation in the selection, support, medical examination, and space training of four Indian astronauts. An ISRO Technical Liaison Unit (ITLU) has been approved to be set up in Moscow for coordination activities. Until September 2019, level 1 of the astronaut selection process was completed in Bengaluru. The selected test pilots underwent physical exercise tests, lab investigations, radiological tests, clinical tests, and evaluations on various facets of their psychology. By November 2019 the Indian Air Force had selected 12 potential astronauts who would then go to Russia for further training in two batches.
As the selection criteria require test-pilot experience, no women will be part of the first Indian crewed spaceflight. The first crewed flight will consist of a crew of three with one backup, and this team of four went to Russia for astronaut training.
In December 2019, the selection process came to a close, and four candidates began their 12-month training at the Gagarin Research & Test Cosmonaut Training Center (GCTC) on February 10, 2020. The astronauts were trained for abnormal landings in various terrains, including forests, rivers, and sea.
In February 2020, Indian astronaut candidates completed their winter survival training.
ISRO has also proposed a plan to establish an astronaut training centre at Challakere in Chitradurga district. The facility would take at least 2–3 years to be established after the government's approval. Following their training in Russia for unexpected and extreme situations, Indian astronauts were to return to India in March 2021 for the rest of their training in an Indian module. However, due to the COVID-19 pandemic, training was put on hold from March 28 to May 11 and restarted on May 12, 2020. CNES is supplying the flight system and training flight physicians and technical teams for the Indian Human Spaceflight Program. It is also collaborating and sharing its expertise in the domains of space medicine, astronaut health monitoring and life support.
On the 91st Indian Air Force Day in 2023, the IAF released a video on Twitter, sharing a glimpse of the astronauts (without revealing their faces) training for the Gaganyaan mission. While two or three out of the four astronauts will be selected to fly on the first crewed flight, one of the remaining backup astronauts on this mission will fly before the Gaganyaan prime crew on a mission to the ISS aboard Ax-4 in early 2024, as the second Indian astronaut in space after Rakesh Sharma, though the plan is yet to be finalized. The four have been conducting mission-specific training in India ever since they returned from Russia.
Candidates announcement and First crew
On 27 February 2024, at the Vikram Sarabhai Space Center, Prime Minister Narendra Modi announced the names of the four designated astronauts who will be eligible for future flights as part of the Gaganyaan program, as well as an Indo-US joint mission (Ax-4) to the International Space Station. Kerala Governor Arif Mohammad Khan, Chief Minister Pinarayi Vijayan, Minister of State for External Affairs V. Muraleedharan, ISRO chairman S. Somanath and other high-ranking ISRO officials were present at the reveal. The selected astronauts are:
Group Captain Prasanth Nair
Group Captain Ajit Krishnan
Group Captain Angad Pratap
Group Captain Shubhanshu Shukla
They were given Indian astronaut wings and the Gaganyaan mission logo and motto. The Indo-US joint mission astronaut is Shubhanshu Shukla, while Prasanth Nair was selected as his backup. Both were thus selected to train at NASA facilities.
The necessity for additional astronauts on future space missions has already been acknowledged by ISRO. The broader pool of potential astronauts will be created in collaboration with the IAF's Institute of Aerospace Medicine. Candidates from experimental domains and those engaged directly in aeronautical research are of particular interest to ISRO. Group Captain Angad Pratap of the Gaganyaan crew has stated that priority will be given to research work that addresses the difficulties ISRO faces in its technical endeavors. While researchers and military aviators may make up the majority of the initial batches, subsequent selections are likely to be more diverse. The astronauts have to become experts in space theory, take part in simulator training, and interact with scientists. One essential part of the Astronaut Training School (ATS) is aero-medical training. Survival training in a variety of settings, including the sea, the desert, and the snow, is also required.
Ground uniform
The ground uniforms were developed by the staff and students of the National Institute of Fashion Technology (NIFT), Bengaluru. Under the direction of the former NIFT director Susan Thomas, the NIFT team (three students, Lamia Anees, Samarpan Pradhan, and Tuliya D, and two professors, Jonalee Bajpai and Mohan Kumar V) worked on designing the ground uniform for the Gaganyaan mission. The team highlighted the importance of perfectly fitting pockets and of a uniform that supports the astronaut-designates' movements. Seventy possibilities were considered before the final design was chosen. The NIFT team examined various space agency uniforms, such as those from SpaceX and NASA. The theme the NIFT team explored is asymmetry: the group worked on a two-colored, asymmetrical style line. The design was commissioned in 2021, and in 2022 the team handed it over to ISRO.
Space food
The Mysore-based Defence Food Research Laboratory (DFRL), a unit of the Defence Research and Development Organisation (DRDO), has developed dried and packaged food for astronauts. The food laboratory has developed around 70 varieties of dehydrated and processed food items that have undergone strict procedures to eliminate microbacterial and macrobacterial contamination. Special care has to be taken in the packaging, and the food items should be of limited weight but at the same time high in nutritional quality. Waste disposal systems for leftover food, liquid dispensing systems, food rehydrating systems and heaters adaptable to outer space conditions are in development, although the list of food products planned to fly aboard Gaganyaan was yet to be publicized as of August 2020. DFRL is expected to launch its Ready-to-Eat (RTE) space food by March 2021. The initial batch for the crewed spaceflight Gaganyaan-H1 will carry foodstuffs sufficient for 7 days.
Space medicine
India sent two flight surgeons to Russia and France to get hands-on experience in space medicine. The flight surgeons are doctors from the Indian Air Force, specializing in aerospace medicine.
With Gaganyaan and the Indian Human Spaceflight Programme coming up, ISRO Chairman S. Somanath encouraged the National Institute of Mental Health and Neurosciences (NIMHANS) to come up with solutions for spaceflight astronauts' psychological health.
Humanoid robots
Unlike other nations that have carried out human spaceflight, India will not fly animals into space. Instead, it will fly humanoid robots for a better understanding of what weightlessness and radiation do to the human body during long durations in space. A legless humanoid named Vyom Mitrā was displayed in January 2020; it is expected to fly onboard uncrewed experimental missions as well as assist astronauts on crewed missions.
Experiments and objectives
On 7 November 2018, ISRO released an Announcement of Opportunity seeking proposals from the Indian science community for microgravity experiments that could be carried out during the first two robotic flights of Gaganyaan. The scope of the experiments is not restricted, and other relevant ideas will be entertained. The proposed orbit for the microgravity platform is expected to be an Earth-bound orbit at approximately 400 km altitude. All the proposed internal and external experimental payloads will undergo thermal, vacuum and radiation tests under the required temperature and pressure conditions. To carry out microgravity experiments for a long duration, a satellite may be placed in orbit. Indian vyomanauts will perform four biological and two physical science experiments related to microgravity during the mission.
Space station
India plans to deploy a 20-tonne space station named Bharatiya Antariksha Station, as a follow-up programme to the Gaganyaan missions. On 13 June 2019, ISRO Chief K. Sivan announced the plan, saying that India's space station will be deployed 5–7 years after the completion of the Gaganyaan programme. He also said that India will not join the International Space Station program. The space station would be capable of harbouring a crew for 15–20 days at a time. It is expected to be placed in a low Earth orbit of 400 km altitude and be capable of harbouring three humans. Final approval is expected to be given to the programme by the Indian government only after the completion of the Gaganyaan missions.
ISRO is working to develop spacecraft docking and berthing technology, with initial funding of ₹10 crore cleared in 2017. A Space Docking Experiment, or SPADEX, is being worked out by ISRO with systems like signal analysis equipment, high-precision videometer for navigation, docking system electronics and real-time decision making for landing systems being developed in various stages. As part of SPADEX, ISRO will launch 2 small satellites for testing. This technology is crucial for a space station as it will enable the transfer of humans from one vehicle or spacecraft to another.
See also
Gaganyaan
LVM3
Chandrayaan programme
Indian Mars exploration missions
References
External links
Official website
President Kalam's vision: India will land on the Moon in August 2025
Hindustan Aeronautics Ltd (HAL) hands over the first ‘Crew Module Structural Assembly’ to ISRO. 13 February 2014.
Space programme of India
Human spaceflight programs
Articles containing video clips
2006 establishments in India | Indian Human Spaceflight Programme | Engineering | 4,345 |
33,987,543 | https://en.wikipedia.org/wiki/Clevo | Clevo is a Taiwanese OEM/ODM computer manufacturer that produces laptop computers. They sell barebones laptop chassis (barebooks) to value-added resellers who build customized laptops for individual customers.
History
Clevo was founded in 1983 as Nan Tan Computer (NTC). In 1987, the company established its laptop computer business, with production starting in 1990. In 1992, NTC set up Clevo, a U.S. subsidiary which would distribute its laptops in the country. NTC later adopted the Clevo name for itself and first listed on the Taiwan Stock Exchange in 1997. In 1999, Clevo merged with its subsidiary, Kapok, to increase efficiency. In August 2002, Clevo built a new factory in Kunshan, China.
Background
Clevo has marketed its products in over 50 countries. The company has also founded several service centers in Canada, Germany, Great Britain, China, Taiwan, South Korea and the United States. These centers serve various businesses, ranging in size from small to multinational, with a variety of product selections in either small or large quantities. Clevo focuses its business on designing, developing, manufacturing, and distributing electronic equipment and laptops.
Support
Some Clevo PCs use MXM upgradable video cards and replaceable desktop CPUs.
Customers
Clevo products are not sold to end users but to companies that configure them and sell them under their own brand.
Suppliers
Companies that use Clevo include:
See also
List of companies of Taiwan
Clevo x7200
References
Computer companies of Taiwan
Computer hardware companies
Computer systems companies
Companies established in 1983
Computer companies established in 1983
Consumer electronics brands
Taiwanese companies established in 1983
Taiwanese brands
Netbook manufacturers | Clevo | Technology | 361 |
6,889,252 | https://en.wikipedia.org/wiki/Twirling | Twirling is a form of object manipulation where an object is twirled by one or two hands, the fingers or by other parts of the body. Twirling practice manipulates the object in circular or near circular patterns. It can also be done indirectly by the use of another object or objects as in the case of devil stick manipulation where handsticks are used. Twirling is performed as a hobby, sport, exercise or performance.
Types
Twirling includes a wide variety of practices that use different equipment or props. All props are 'stick' or simulated stick shape and are rotated during the activity. The types of twirling are arranged alphabetically.
Astrowheeling
By using a heavy spinning wheel with handles, astrowheeling combines the aesthetics of twirling and the resistance of spinning wheels into a form of practical exercise. It was inspired by ancient practices that manipulate the rotational inertia of spinning objects in order to develop balance, focus, and control. The current trend of astrowheeling, which uses "bike-like" wheels, was popularized in the 1980s in North America.
Baton twirling
Baton twirling has expanded beyond parades and is now more comparable to rhythmic gymnastics (see below). The sport is popular in many countries including the United States, Japan, Spain, France, Italy, the Netherlands and Canada. Many countries compete each year at the World Baton Twirling Championships.
Routines for competitive sport baton twirling are designed for athletes of novice through elite stages of development, experience and ability. Individual competitive events utilize one-baton, two-baton, or three-baton to standardized music while group competitive events are performed with members twirling together with precision and unison. Also there are pair and group events which include Freestyle Pairs and Freestyle Team at the highest level. Groups utilize their own pre-recorded music.
Pen spinning
Pen spinning — using one's fingers to manipulate an ordinary inexpensive writing-pen — can be performed anywhere. Sometimes classified as a form of contact juggling, pen spinning may also include tossing and catching of the pen.
Called "rōnin mawashi" in Japan, where it is popular among the pre-collegiate community, pen twirling has its stars, as does any other performance or skill. Accomplished masters of the art form that are well known — at least among those who follow the sport — have developed a reputation for creation of certain signature 'moves'. David Weis is credited with creating numerous 'back' style moves, such as the "BackAround". Hideaki Kondoh is generally credited with giving the pen trick "Sonic" its name, because of the way the pen would blur in his fingers.
Pen spinning has only recently seen a rapid increase in recognition due to the emergence of internet media websites such as YouTube. From 2006 onwards, the art of pen spinning has developed subcultures in many countries of the world, including parts of Asia and Europe (France, Germany and Poland).
Poi
Poi is a form of juggling, dance or performance art, accomplished using balls, or various other weights, on ropes or chains — held in each hand, and swung in various circular patterns, similar to club-twirling. It was originally practiced by the Māori people of New Zealand (the word poi means "ball").
Rhythmic gymnastics
Combining elements of ballet, gymnastics, theatrical dance, and apparatus manipulation, rhythmic gymnastics, once largely considered a sport for women and girls, is growing in popularity among men as well. The Japanese version of men's rhythmic gymnastics includes tumbling and is performed on a spring floor. Men compete in four types of apparatus: rope, stick, double rings and clubs. Groups do not use any apparatus. Japan hosted the first men's world championships in 2003.
Rhythmic gymnastics as a sport began in the 1940s in the former Soviet Union. It was there that for the first time, the spirit of sports was combined with the sensuous art of classical ballet. (To Isadora Duncan, we credit the famous rebellion against the dogma of classical ballet and the shift toward the creation of a new discipline that would blend art and sport.)
Recognized in 1961 as 'modern gymnastics', later 'rhythmic sportive gymnastics', rhythmic gymnastics experienced its first World Championships for individual gymnasts in 1963 in Budapest.
Today, rhythmic gymnastics as a sport continues on, and hobbyists have adopted rhythmic gymnastics props such as the women's ball, clubs, hoop, ribbon, and rope, plus the stick and rings of men's gymnastics, as exercise and recreational gear. These props have found their way into the modern 'juggling and dexterity play community', where they are used to perform tricks and maneuvers for fun, fitness, and flexibility.
Sticks and staves
Devil sticks
"Twirling", "sticking," and "stick juggling" are all common terms for using the twirling prop known as devil sticks, flower sticks, or various other names. A set of devil sticks is made up of one baton and two control sticks.
In use the central stick, the baton, is pushed, lifted and caressed by the two control sticks causing the stick to flip, wobble, spin, and fly through various maneuvers or tricks.
Juggling sticks similar to the modern variants have continuously evolved as they were passed down through the centuries. Apparently originating in Africa earlier than 3000 BCE, "devil sticks" may have followed the Silk Road, from Cairo to China, and have been used in Europe since the Renaissance.
Morris dancing
In some forms of Morris dancing, a stick is twirled in one hand during a dance. For example, in stick dances from Brackley in the Cotswold tradition, each dancer twirls one or two sticks throughout the dance.
Staff twirling
Staff twirling is the art or sport of skillfully manipulating a staff, such as a quarterstaff, bo, or other long length of wood, metal, or plastic as recreation, sport, or as a performance.
In the martial art of bojutsu, a bo is used as a weapon, increasing the force delivered in a strike through leverage. Bojutsu kata—detailed patterns of movements practiced to perfect one's form—are also used in many traditional Japanese arts, such as kabuki. Some of these kata are very flowing and pleasant to experience, both as the one executing the movement and as a spectator.
Staff twirling has enjoyed recent growth in the dexterity play, juggling and fire dancing communities, in part due to the influence of martial arts, and in part due to increasing popularity of adult play as recreation.
Mathematical significance
The figure-eight twirl can be used as a demonstration that a double rotation is a loop in rotation space that can be shrunk to a point.
See also
Hooping
Plate trick
Sufi whirling
References
Physical activity and dexterity toys
Play (activity)
Syllabus-free dance
Circus skills | Twirling | Biology | 1,420 |
13,736,599 | https://en.wikipedia.org/wiki/Duquenois%E2%80%93Levine%20reagent | The Duquenois reagent is used in the Rapid Modified Duquenois–Levine test (also known as the simple Rapid Duquenois Test), which is an established screening test for the presence of cannabis. The test was initially developed in the 1930s by the French medical biochemist Pierre Duquénois (1904–1986) and was adopted in the 1950s by the United Nations as the preferred test for cannabis. The test was originally claimed to be specific to cannabis.
After several modifications, it became known as the Duquenois–Levine test. However, in the 1960s and 70s various studies showed that the test was not specific to cannabis. In 1973, the Supreme Court of Wisconsin ruled the D–L test insufficient evidence for demonstrating that a substance was cannabis, specifically noting that the D–L tests used "are not exclusive or specific for marijuana."
The test is one of several forms of modern cannabis drug testing.
Specificity
In the 1960s and 70s various studies showed that the D–L test was not specific to cannabis, although some flawed studies claimed to show the opposite. A 1969 UK government scientist reported twenty-five plant substances giving a D–L test result very similar to that of cannabis and warned that the D–L test "should never be relied upon as the only positive evidence." Another 1969 study found false positives from "a variety of vegetable extracts".
A 1972 study found that the D–L test would test positive for many commonly occurring plant substances known as resorcinols, which are found in over-the-counter medicines. For instance, Sucrets lozenges tested positive for marijuana. This study concluded that the D–L test is useful only as a "screen" test and was not sufficiently selective to be relied upon for "identification". Still another study, in 1974, showed that 12 of 40 plant oils and extracts studied gave positive D–L test results. In 1975, Dr. Marc Kurzman at the University of Minnesota, in collaboration with fourteen other scientists, published a study in The Journal of Criminal Defense that concluded: "The microscopic and chemical screening tests presently used in marijuana analysis are not specific even in combination for 'marijuana' defined in any way." In the 35 years since that study was published, no one has ever refuted this finding.
The study most cited in favour of the specificity of the D–L test is Thornton and Nakamura (1972). The authors themselves reported that the D–L test gave false positives, but declared the D–L test confirmatory when combined with the presence of cystolithic hairs, which marijuana plants possess. However, many plant species have such hairs, and the study only confirmed that 82 of them did not give D–L test false positives.
A 2000 study by the US NIST examined 12 chemical spot tests and concluded that all the tests examined "may indicate a specific drug or class of drugs is in the sample, but the tests are not always specific for a single drug or [class]". The study also noted that "mace, nutmeg and tea reacted with the modified Duquenois–Levine [test]".
A 2012 Brazilian study tested 40 vegetal drugs with the Duquenois–Levine test and obtained false positive results from Chilean boldo (Peumus boldus Molina), pot marigold (Calendula officinalis L.), leather hat (Echinodorus grandiflorus (Cham. & Schltdl.) Micheli.), cecropia (Cecropia hololeuca Miq.), lemon balm (Melissa officinalis), anise (Pimpinella anisum L.), guaraná (Paulinia cupana Kunth.), jaborandi (Pilocarpus jaborandi Holmes.) and laurel (Laurus nobilis L.).
Process
The reagent can be prepared by adding 2 grams of vanillin and 2.5 milliliters of acetaldehyde to 100 milliliters of ethanol.
The test is performed by placing approximately 10 to 20 milligrams of a target substance in a glass test tube, then 10 drops of the Duquenois reagent. After shaking, 10 drops of concentrated hydrochloric acid are added, and the tube is again shaken. Any color that results after the hydrochloric acid step is recorded. Twenty drops of chloroform (or similar solvent) are then added, and the tube is vortexed, then allowed to settle and separate into two layers. Any color that transfers into the organic layer is recorded.
Marijuana (as well as a variety of other plant substances) becomes purple with the addition of the Duquenois reagent and hydrochloric acid. Upon addition of the organic solvent, the purple color transfers to the organic layer, indicating that cannabinoids may be present.
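The screening logic described above can be summarized in a short sketch (illustrative only; the class and function names are my own invention, and a real analysis would of course follow a validated laboratory protocol, not code):

```python
from dataclasses import dataclass

@dataclass
class DuquenoisResult:
    color_after_hcl: str        # color observed after the hydrochloric acid step
    transfers_to_organic: bool  # does the color move into the chloroform layer?

def screen(result: DuquenoisResult) -> str:
    """Interpret a Duquenois-Levine screening observation.

    A purple color that transfers into the organic layer is only a
    presumptive positive: as the studies above show, many other plant
    substances react the same way, so confirmation requires a specific
    method such as GC-MS.
    """
    if result.color_after_hcl == "purple" and result.transfers_to_organic:
        return "presumptive positive (confirmatory testing required)"
    return "negative screen"

print(screen(DuquenoisResult("purple", True)))
```

The point of encoding it this way is that the decision rule itself is trivially simple; the forensic weight of the result comes entirely from what the color change does not rule out.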
References
External links
Amanda J. Hanson "Specificity of the Duquenois-Levine and Cobalt Thiocyanate Tests Substituting Methylene Chloride or Butyl Chloride for Chloroform."
John Kelly (2008), False Positives Equal False Justice
Chemical tests
Drug testing reagents | Duquenois–Levine reagent | Chemistry | 1,086 |
30,762,149 | https://en.wikipedia.org/wiki/Kepler-11b | Kepler-11b is an exoplanet discovered around the star Kepler-11 by the Kepler space telescope, a NASA-led mission to discover Earth-like planets. Kepler-11b is less than about three times as massive and twice as large as Earth, but it has a lower density (≤ 3 g/cm3), and is thus most likely not of Earth-like composition. Kepler-11b is the hottest of the six planets in the Kepler-11 system, and orbits closer to Kepler-11 than the other planets in the system. Kepler-11b and its five counterparts form the first discovered planetary system with more than three transiting planets, and the most densely packed planetary system known. The system is also the flattest known planetary system. The discovery of this planet and its five sister planets was announced on February 2, 2011, after follow-up investigations.
Naming and discovery
Kepler-11b is named in two parts. The first part of its name is derived from the fact that it orbits the star Kepler-11. As the discovery of Kepler-11b was announced simultaneously with those of other planets, Kepler-11b was given the designation b because it was the innermost of the six announced planets. The host star, Kepler-11, was named for the Kepler Mission that flagged it as host to several potential transit events under the name KOI-157. The Kepler satellite is a NASA-run space telescope that is tasked with the discovery of terrestrial planets that transit, or cross in front of, their host stars as seen from Earth. These transits cause fluctuations in the host star's brightness; these changes may suggest the presence of a planet, which can then be verified by follow-up observations.
Ground-based telescopes in California, Hawaii, the Canary Islands, Arizona, and Texas, as well as the Spitzer Space Telescope, were used to conduct these follow-up observations and verify Kepler-11b's existence. In particular, the detection of an orbital resonance effect between Kepler-11b and Kepler-11c confirmed the find. Kepler-11b's discovery was announced to the public on February 2, 2011. It is part of the first system discovered with more than three transiting planets, and is also part of the most compact and flat system yet discovered. Kepler-11's planetary system is the second known system to have multiple transiting planets, surpassing the three confirmed planets (two transiting) orbiting the star Kepler-9.
Host star
Kepler-11 is a sunlike star in the constellation Cygnus that has a mass of 0.95 (± 0.1) Msun and a radius of 1.1 (± 0.1) Rsun. In other words, Kepler-11 is approximately 5% less massive and 10% wider than the Sun. The star's metallicity is 0 (± 0.1), meaning that the level of iron (and, presumably, other elements) in the star is almost the same as that of the Sun. Metallicity plays an important role in planetary systems, and stars with higher metallicity are more likely to have planets detected around them. This may be because the higher metallicity provides more material with which to quickly build planets into gas giants or because the higher metallicity increases planet migration towards the host star, making the planet easier to detect. The star has five other known planets in orbit: Kepler-11c, Kepler-11d, Kepler-11e, Kepler-11f, and Kepler-11g. The first five planets in the system are all able to fit within the orbit of planet Mercury, while Kepler-11g orbits further out.
Kepler-11 has an apparent magnitude of 14.2. It is too dim to see from Earth with the naked eye.
Characteristics
Kepler-11b is estimated at 2.78 Earth masses and 1.8 Earth radii, making it at most around three times as massive and nearly twice as large as Earth. The mass is determined from transit timing variations with Kepler-11c and is limited by the quality of the data. Kepler-11b is the closest planet to its host star in the Kepler-11 planetary system. With an estimated density of 4.5 g/cm3, less than that of Venus, Kepler-11b is denser than the Solar System's gas giants but slightly less dense than its terrestrial planets. Early estimates suggested that it is not of Earth-like composition, yet it is nonetheless composed mostly of elements heavier than helium. The planet's effective temperature is 900 K, making it the hottest of the planets discovered in the Kepler-11 system. Kepler-11b orbits its host star every 10.30375 days at a distance of 0.091 AU; Mercury, in comparison, orbits the Sun every 87.97 days at 0.387 AU. Kepler-11b's inclination of 88.5° means that its orbit deviates slightly from edge-on as seen from Earth, more so than the orbits of the other five planets with which it was discovered.
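Since density scales as mass over radius cubed, the quoted mass and radius imply a bulk density that can be checked directly (an illustrative sketch; the Earth constants are standard values, not from the article):

```python
# Bulk density implied by the quoted planetary parameters, in Earth units.
EARTH_DENSITY_G_CM3 = 5.51  # mean density of Earth

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Density in g/cm^3 for a planet with mass and radius in Earth units.

    Density scales as M / R^3, so the Earth-relative density ratio is
    simply mass_earths / radius_earths**3.
    """
    return EARTH_DENSITY_G_CM3 * mass_earths / radius_earths**3

rho = bulk_density(2.78, 1.8)
print(f"{rho:.2f} g/cm^3")  # ~2.63 g/cm^3
```

The result, about 2.6 g/cm3, sits within the ≤ 3 g/cm3 bound quoted in the lead rather than near the 4.5 g/cm3 figure given in this section, which suggests the two figures come from different published parameter sets.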
Atmosphere
Kepler-11b's close proximity to its star implies strong insolation, which caused the planet to lose any light-element envelope it acquired during formation. The observed low density nonetheless requires a gaseous envelope, most likely produced by outgassing of hydrogen or by evaporation of H2O from the condensed core. The mass of Kepler-11b's atmosphere is not strongly constrained by observations; it may represent about 0.04% of the planetary mass, close to the corresponding value for Venus. For comparison, Earth's atmosphere is about 0.00008% of its mass.
Orbit
Kepler-11b and Kepler-11c orbit Kepler-11 in a strong orbital resonance, which gravitationally tugs the planets into stable orbits at a set period ratio; for Kepler-11b and c the ratio is 5:4. Kepler-11b lies in a regime where a secular resonance could abruptly destabilize its orbit; if that happened, it would most likely collide with Kepler-11c.
References
External links
Hot Neptunes
Exoplanets with Kepler designations
Exoplanets discovered in 2011
Transiting exoplanets
Cygnus (constellation) | Kepler-11b | Astronomy | 1,295
645,327 | https://en.wikipedia.org/wiki/Motoo%20Kimura | Motoo Kimura (November 13, 1924 – November 13, 1994) was a Japanese biologist best known for introducing the neutral theory of molecular evolution in 1968. He became one of the most influential theoretical population geneticists. He is remembered in genetics for his innovative use of diffusion equations to calculate the probability of fixation of beneficial, deleterious, or neutral alleles. Combining theoretical population genetics with molecular evolution data, he also developed the neutral theory of molecular evolution in which genetic drift is the main force changing allele frequencies. James F. Crow, himself a renowned population geneticist, considered Kimura to be one of the two greatest evolutionary geneticists, along with Gustave Malécot, after the great trio of the modern synthesis, Ronald Fisher, J. B. S. Haldane, and Sewall Wright.
Life and work
Kimura was born on November 13, 1924, in Okazaki, Aichi Prefecture. From an early age he was very interested in botany, though he also excelled at mathematics (teaching himself geometry and other maths during a lengthy convalescence due to food poisoning). After entering a selective high school in Nagoya, Kimura focused on plant morphology and cytology; he worked in the laboratory of M. Kumazawa studying the chromosome structure of lilies. With Kumazawa, he also discovered how to connect his interests in botany and mathematics: biometry.
Due to World War II, Kimura left high school early to enter Kyoto Imperial University in 1944. On the advice of the prominent geneticist Hitoshi Kihara, Kimura entered the botany program rather than cytology because the former, in the Faculty of Science rather than Agriculture, allowed him to avoid military duty. He joined Kihara's laboratory after the war, where he studied the introduction of foreign chromosomes into plants and learned the foundations of population genetics.
In 1949, Kimura joined the National Institute of Genetics in Mishima, Shizuoka. In 1953 he published his first population genetics paper (which would eventually be very influential), describing a "stepping stone" model for population structure that could treat more complex patterns of migration than Sewall Wright's earlier "island model". After meeting visiting American geneticist Duncan McDonald (part of the Atomic Bomb Casualty Commission), Kimura arranged to enter graduate school at Iowa State College in the summer of 1953 to study with J. L. Lush.
Kimura soon found Iowa State College too restricting; he moved to the University of Wisconsin to work on stochastic models with James F. Crow and to join a strong intellectual community of like-minded geneticists, including Newton Morton and most significantly, Sewall Wright. Near the end of his graduate study, Kimura gave a paper at the 1955 Cold Spring Harbor Symposium; though few were able to understand it (both because of mathematical complexity and Kimura's English pronunciation) it received strong praise from Wright and later J.B.S. Haldane.
His accomplishments at Wisconsin included a general model for genetic drift, which could accommodate multiple alleles, selection, migration, and mutations, as well as some work based on R.A. Fisher's fundamental theorem of natural selection. He also built on the work of Wright with the Fokker–Planck equation by introducing the Kolmogorov backward equation to population genetics, allowing the calculation of the probability of an allele to become fixed in a population. He received his PhD in 1956, before returning to Japan (where he would remain for the rest of his life, at the National Institute of Genetics).
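The fixation probability mentioned above is Kimura's classic diffusion-approximation result, reproduced here for clarity (standard population genetics, not a formula quoted in this article):

```latex
% Probability of eventual fixation for an allele at initial frequency p,
% with selection coefficient s and effective population size N_e:
u(p) = \frac{1 - e^{-4 N_e s p}}{1 - e^{-4 N_e s}}
% In the neutral limit s -> 0 this reduces to u(p) = p, so a single new
% mutant in a diploid population of size N, with p = 1/(2N), fixes with
% probability 1/(2N).
```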
Kimura worked on a wide spectrum of theoretical population genetics problems, many of them in collaboration with Takeo Maruyama. He introduced the "infinite alleles", "infinite sites", and "stepwise" models of mutation, all of which would be used widely as the field of molecular evolution grew alongside the number of available peptide and genetic sequences. The stepwise mutation model is a "ladder model" that can be applied to electrophoresis studies where homologous proteins differ by whole units of charge. An early statement of his approach was published in 1960, in his An Introduction to Population Genetics. He also contributed an important review article on the ongoing controversy over genetic load in 1961.
1968 marked a turning point in Kimura's career. In that year he introduced the neutral theory of molecular evolution, the idea that, at the molecular level, the large majority of genetic change is neutral with respect to natural selection—making genetic drift a primary factor in evolution. The field of molecular biology was expanding rapidly, and there was growing tension between advocates of the expanding reductionist field and scientists in organismal biology, the traditional domain of evolution. The neutral theory was immediately controversial, receiving support from many molecular biologists and attracting opposition from many evolutionary biologists.
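A one-line calculation underlies the plausibility of drift-driven molecular evolution (again a standard result, stated here for context):

```latex
% In a diploid population of size N with neutral mutation rate \mu,
% 2N\mu new mutants arise per generation, each fixing with
% probability 1/(2N), so the rate of neutral substitution is
k = 2N\mu \cdot \frac{1}{2N} = \mu
% The substitution rate is independent of population size, which is
% the basis of the molecular clock under the neutral theory.
```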
Kimura spent the rest of his life developing and defending the neutral theory. As James Crow put it, "much of Kimura's early work turned out to be pre-adapted for use in the quantitative study of neutral evolution". As new experimental techniques and genetic knowledge became available, Kimura expanded the scope of the neutral theory and created mathematical methods for testing it against the available evidence. Kimura produced a monograph on the neutral theory in 1983, The Neutral Theory of Molecular Evolution, and also worked to promote the theory through popular writings such as My Views on Evolution, a book that became a best-seller in Japan.
Though difficult to test against alternative selection-centered hypotheses, the neutral theory has become part of modern approaches to molecular evolution.
In 1992, Kimura received the Darwin Medal from the Royal Society, and the following year he was made a Foreign Member of the Royal Society.
Kimura suffered from progressive weakening caused by amyotrophic lateral sclerosis later in life. In an accidental fall at his home in Shizuoka, Japan, Kimura struck his head and died on November 13, 1994, of a cerebral hemorrhage. He was married to Hiroko Kimura. They had one child, a son, Akio, and a granddaughter, Hanako.
Honors
1959 – Genetics Society of Japan Prize
1965 – Weldon Memorial Prize, Oxford
1968 – Japan Academy Prize
1973 – Foreign member of the National Academy of Sciences of the USA
1976 – Person of Cultural Merit
1976 – Order of Culture
1982 – Member of the Japan Academy
1986 – Chevalier de l'Ordre Nationale de Merite
1986 – Asahi Prize
1987 – John J. Carty Award of the National Academy of Sciences in evolutionary biology
1988 – International Prize for Biology
1992 – Darwin Medal
1993 – Foreign member of Royal Society
See also
History of biology
History of evolutionary thought
History of molecular biology
Molecular evolution
References
1924 births
1994 deaths
Kyoto University alumni
University of Wisconsin–Madison alumni
People from Okazaki, Aichi
Population geneticists
Evolutionary biologists
Theoretical biologists
Japanese scientists
Japanese molecular biologists
Japanese geneticists
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Recipients of the Order of Culture
Deaths from motor neuron disease
Scientists with disabilities
20th-century Japanese biologists
Neutral theory | Motoo Kimura | Biology | 1,432 |
11,552,769 | https://en.wikipedia.org/wiki/Monosporascus%20eutypoides | Monosporascus eutypoides is a species of fungus in the family Diatrypaceae. It is a plant pathogen.
References
Sordariales
Fungal plant pathogens and diseases
Fungi described in 1954
Fungus species
Taxa named by Franz Petrak | Monosporascus eutypoides | Biology | 53 |
2,350,163 | https://en.wikipedia.org/wiki/Bausch%20Health | Bausch Health Companies Inc. (formerly Valeant Pharmaceuticals International, Inc.) is an American-Canadian multinational specialty pharmaceutical company based in Laval, Quebec, Canada. It develops, manufactures and markets pharmaceutical products and branded generic drugs, primarily for skin diseases, gastrointestinal disorders, eye health and neurology. Bausch Health owns Bausch & Lomb, a supplier of eye health products. Bausch Health's business model is primarily focused on acquiring small pharmaceutical companies and then sharply increasing the prices of the drugs these companies sell.
Valeant was originally founded in 1959, as ICN Pharmaceuticals by Milan Panić in California. During the 2010s, Valeant adopted a strategy of buying up other pharmaceutical companies which manufactured effective medications for a variety of medical problems, and then increasing the price of those medications. As a result, the company grew rapidly and in 2015 was the most valuable company in Canada.
Valeant was involved in a number of controversies surrounding drug price hikes and the use of a specialty pharmacy for the distribution of its drugs. This led to an investigation by the U.S. Securities and Exchange Commission, causing its stock price to plummet more than 90 percent from its peak, while its debt surpassed $30 billion. In July 2018, the name of the company was changed to Bausch Health Companies Inc., in order to distance itself from the public outrage associated with massive price increases introduced by Valeant. At the same time, a new ticker symbol, BHC replaced VRX.
History
1959–2002: the Panić years
In 1959, Yugoslavian immigrant Milan Panić, who had defected to the US three years earlier, founded ICN Pharmaceuticals (International Chemical and Nuclear Corporation) in his Pasadena garage. Panić ran the company for 43 years, during which ICN established a foothold in the industry by acquiring niche pharmaceuticals and through the development of Ribavirin, an antiviral drug that became the standard treatment for hepatitis C.
In 1994, ICN merged with SPI Pharmaceuticals Inc., Viratek Inc., and ICN Biomedicals Inc.
On June 12, 2002, following a series of controversies, Panić was forced to retire under pressure from shareholders.
2002–2010: Rebranding as Valeant
In 2003, not long after Panić's ouster, ICN changed its name to Valeant. In 2006, the company received approval in the U.S. to market Cesamet (nabilone), a synthetic cannabinoid. The company also acquired the European rights to the drug for $14 million.
In 2008, the Swedish pharmaceutical company Meda AB bought Western and Eastern Europe branches from Valeant for $392 million. In September 2008, Valeant acquired Coria Laboratories for $95 million. In November 2008, Valeant acquired DermaTech for $12.6 million.
In January 2009, Valeant acquired Dow Pharmaceutical Sciences for $285 million. In July 2009, Valeant announced its acquisition of Tecnofarma, a Mexican generic drug company. In December 2009, Valeant announced its Canadian subsidiary would acquire Laboratoire Dr. Renaud, for C$23 million.
In March 2010, Valeant announced its acquisition of a Brazilian generics and over-the-counter company for $28 million and manufacturing plant for a further $28 million. In April 2010, Valeant announced that its Canadian subsidiary would acquire Vital Science Corp. for C$10.5 million. In May 2010, Valeant acquired Aton Pharmaceuticals for $318 million.
2010–2016: the Pearson years
On September 28, 2010, Valeant merged with Biovail. The company retained the Valeant name and J. Michael Pearson as CEO, but was incorporated in Canada and temporarily kept Biovail's headquarters. Setting out on a path of aggressive acquisitions, Pearson ultimately turned Valeant into a platform company that grew by systematically acquiring other companies.
In February 2011, Valeant acquired PharmaSwiss S.A. for €350 million. In May 2011, former Biovail Corporation Chairman and CEO Eugene Melnyk was banned from senior roles at public companies in Canada for five years and ordered to pay $565,000 by the Ontario Securities Commission. In the year before the merger with Valeant, Melnyk had settled with the United States Securities and Exchange Commission (SEC), agreeing to pay a civil penalty of $150,000 after having previously paid $1 million to settle other claims with the SEC. In July 2011, Valeant acquired Ortho Dermatologics from Janssen Pharmaceuticals for $345 million. The acquisition included the products Retin-A Micro, Ertaczo, and Renova, also known as tretinoin. In August 2011, Valeant acquired 87.2% of the outstanding shares of Sanitas Group for €314 million. In December 2011, Valeant acquired iNova Pharmaceuticals for A$625 million from Australian private equity firm Archer Capital with additional milestone payments of up to A$75 million. In December 2011, Valeant acquired Dermik, a dermatology unit of Sanofi.
In January 2012, Valeant acquired Brazilian sports nutrition company Probiotica for R$150 million. In February 2012, Valeant acquired ophthalmic biotechnology company Eyetech Inc. In April 2012, Valeant acquired Pedinol. In April 2012, Valeant acquired assets from Atlantis Pharma in Mexico for $71 million. In May 2012, Valeant acquired AcneFree for $64 million plus milestone payments. In June 2012, Valeant acquired OraPharma for approximately $312 million with up to $144 million being paid in milestone payments. In August 2012, Valeant agreed to buy skin-care company Medicis Pharmaceutical for $2.6 billion. In January 2013, Valeant acquired the Russian company Natur Produkt for $163 million. In March 2013, Valeant acquired Obagi Medical Products, Inc. In May 2013, the company acquired Bausch & Lomb from Warburg Pincus for $8.7 billion in a move to dominate the market for specialty contact lenses and related products.
In January 2014, Valeant acquired Solta Medical for approximately $250 million. In May 2014, Nestle acquired the commercial rights to some of Valeant's products for $1.4 billion. In July 2014, Valeant acquired PreCision Dermatology Inc for $475 million. Along with hedge fund manager Bill Ackman, Valeant made a bid to acquire Allergan; however, in November 2014, Allergan announced that it would be acquired by Actavis in a $66 billion transaction. Valeant and Pershing Square were subsequently accused of insider trading prior to their Allergan bid, and eventually settled the case in 2017.
On April 1, 2015, Valeant completed the purchase of gastrointestinal treatment drug developer Salix Pharmaceuticals for $14.5 billion after outbidding Endo Pharmaceuticals. On the final day of trading, Salix shares traded for $172.81, giving a market capitalisation of $10.9 billion. After the acquisition, Valeant raised the price of the diabetes pill Glumetza drastically. In July 2015, the company announced it would acquire Mercury (Cayman) Holdings, the holding company of Amoun Pharmaceutical, one of Egypt's largest drugmakers, for $800 million. In August 2015, Valeant said it would purchase Sprout Pharmaceuticals Inc for $1 billion, a day after Sprout received approval to market the women's libido drug Addyi. In September 2015, Valeant licensed psoriasis drug Brodalumab from AstraZeneca for up to $445 million. In September 2015, the company announced its intention to acquire eye surgery product manufacturer Synergetics USA, for $192 million in order to strengthen the company's Bausch & Lomb division. In October 2015, the company's Bausch & Lomb division acquired Doctor's Allergy Formula for an undisclosed sum.
On October 21, 2015, Citron Research founder Andrew Left, a short seller of Valeant shares, published claims that Valeant recorded false sales of products to specialty pharmacy Philidor Rx Services and its affiliates. These specialty companies were controlled by Valeant, and allegedly resulted in improper bookkeeping of revenues. In addition, by controlling the pharmacy services offered by Philidor, Valeant allegedly steered Philidor's customers to expensive drugs sold by Valeant. One alleged practice entailed Valeant employees directly managing Philidor's business operations while posing as Philidor employees, and with all written communication under fictitious names. Valeant responded that the allegations by Citron Research were "erroneous". On October 30, 2015, Valeant said that it would cut ties with Philidor in response to allegations of aggressive billing practices. Walgreens Boots Alliance Inc, owner of Walgreens, took over distribution for Valeant.
In 2018, Gary Tanner, who was a former Valeant executive, and Andrew Davenport, the former chief executive of Philidor Rx Services, were prosecuted over a kickback scheme. They were sentenced to a year in prison after being convicted on four charges, including wire fraud and conspiracy to commit money laundering. They were also ordered to forfeit $9.7 million in kickbacks. Tanner had been responsible for managing Valeant's relationship with Philidor as well as Valeant's "alternative fulfillment" program, which the company used to increase prescriptions for its own (expensive) drugs instead of generic substitutes.
An important part of the growth strategy for Valeant under Michael Pearson had been the acquisition of medical and pharmaceutical companies and the subsequent price increases for their products. Valeant's strategy of exponential price increases on life-saving medicines was at the time described by Berkshire Hathaway vice chairman Charlie Munger as "deeply immoral" and "similar to the worst abuses in for-profit education." This strategy had also attracted the attention of regulators in the United States, particularly after the publication in The New York Times of an article on price gouging of specialty drugs.
In September 2015, an influential group of politicians criticized Valeant over its pricing strategies. The company raised prices on all its brand-name drugs by 66% in 2015, five times more than its closest industry peer. The cost of Valeant's flucytosine was 10,000% higher in the United States than in Europe. In late September 2015, members of the United States House Committee on Oversight and Government Reform urged the Committee to subpoena Valeant for its documents regarding the sharp increases in the price of "two heart medications it had just bought the rights to sell: Nitropress and Isuprel. Valeant had raised the price of Nitropress by 212% and Isuprel by 525%".
By October 2015, Valeant had received subpoenas from the U.S. Attorney's Office for the District of Massachusetts and the United States Attorney for the Southern District of New York in regards to an investigation on Valeant's "drug pricing, distribution and patient assistance program." The House Oversight Committee also requested documents from Valeant amid public concern around drug prices.
In October 2015, the Federal Trade Commission began an investigation into Valeant's increasing control of the production of rigid gas permeable contact lenses. Valeant's acquisition of Bausch & Lomb in 2013, and Paragon Vision Services in 2015, is alleged to have given the company control of over 80% of the production pipeline for hard contact lenses. A series of unilateral price increases beginning in Fall 2015 spurred the FTC's investigation. On November 15, 2016, Valeant agreed to divest itself of Paragon Holdings and Pelican Products to settle charges that its May 2015 acquisition of Paragon reduced competition for the sale of FDA-approved "buttons", the polymer discs used to make gas permeable contact lenses.
In their 2015 annual report filed on April 29, 2016, Valeant said that it was the "subject of investigations" by the Securities and Exchange Commission, the U.S. Attorney's Offices in Massachusetts and New York, the state of Texas, the North Carolina Department of Justice, the Senate's Special Committee on Aging, and the House's Committee on Oversight and Reform, and had received document requests from the Autorité des marchés financiers in Canada and the New Jersey State Bureau of Securities.
In January 2016, presidential candidate Hillary Clinton said she would be "going after" Valeant for its price hikes, causing its stock price to fall 9 percent on the New York Stock Exchange.
On April 27, 2016, Bill Ackman, J. Michael Pearson, and Howard Schiller were forced to appear before the United States Senate Special Committee on Aging to answer to concerns about the repercussions for patients and the health care system faced with Valeant's business model.
By April 2016, the market value of hedge fund holdings in Valeant had fallen by $7.3 billion. Hedge fund herding nevertheless kept inciting portfolio managers to buy Valeant shares. From 2015 to 2017, Valeant shares plummeted more than 90 percent. This was later featured in episode 3 of the first season of the Netflix documentary Dirty Money. In 2017, Ackman's Pershing Square fund, which held a major stake in the company, sold out for a reported loss of $2.8 billion.
2016–2022: Valeant under Joseph Papa
On April 25, 2016, Valeant named Perrigo chief executive Joseph Papa as a permanent replacement for Pearson, and entrusted him with turning around the company. Papa set out on a path of strategic sales, debt reduction, and organic growth.
By January 2017, the company had sold its skincare brands to L'Oréal for $1.3 billion and its Dendreon biotech unit to Sanpower for $819.9 million. In June, the company sold iNova Pharmaceuticals for $910 million. In July, the company also divested Obagi Medical Products for $190 million. In November, it announced it would sell Sprout Pharmaceuticals back to its original owners, two years after acquiring the business for $1 billion.
Under Papa's leadership, by early 2018, the company had become profitable again; had settled the Allergan case for less than expected; and had lowered its debt by $6.5 billion. The company had divested itself of 13 non-core businesses, reducing its debt to $25 billion, and had settled or dismissed 70 pending lawsuits, including the Allergan insider trading case. On January 8, 2018, the company announced that its Bausch + Lomb unit had received a CE Mark indicating conformity with health, safety, and environmental protection standards from the European Commission for the distribution of its Stellaris product in Europe.
On December 16, 2019, the company settled a shareholder class action lawsuit under Section 11 of the U.S. Securities Act of 1933, alleging the company misled investors about its business operations and financial performance, for approximately $1.21 billion. The company denied allegations of all wrongdoing as part of the settlement.
On July 31, 2020, the SEC announced that Bausch Health had agreed to pay a $45 million penalty to settle charges of improper revenue recognition and misleading disclosures in SEC filings and earnings presentations. It also announced that Pearson would pay $250,000 in civil penalties to the SEC, as well as $450,000 to reimburse Valeant. Howard Schiller and Tanya Carro, two other executives who settled, paid the SEC $100,000 and $75,000, respectively.
Recent developments
Following Ackman's exit, Paulson & Co. increased its stake in the company and became its largest shareholder; its founder, John Paulson, joined the board and vowed to rebuild the company's core franchises and reduce its debt. In May 2022, Papa was replaced by Thomas Appio as the company's chief executive officer. Paulson, who had served on the board of directors from June 2017 to May 2022, replaced Papa as chairman of Bausch Health.
In May 2023, Judge Richard G. Andrews upheld the court's original ruling, which blocks the FDA from approving Norwich Pharmaceuticals' 550 mg rifaximin generic until October 2029. Judge Andrews' decision bolsters his previous ruling that Norwich's abbreviated new drug application for rifaximin infringed on Bausch Health's Xifaxan patents for the reduction in risk of hepatic encephalopathy recurrence.
In June 2023, the company's oral health business OraPharma partnered with Alex Rodriguez to launch its "Cover Your Bases" gum disease awareness campaign.
In 2023, the company's revenue totalled $8.76 billion (7.9% growth compared to 2022). Adjusted EBITDA totalled $3.01 billion (unchanged from 2022). Net debt was $21.65 billion, excluding $947 million on the balance sheet (5.7 times the company's market value). Three of Bausch Health's four operating segments increased revenue, with the highest result coming from Salix Pharmaceuticals (8% growth), the company's largest business (nearly half of revenue, 59% of segment profit). About 80% of Salix's sales are Xifaxan, an antibiotic for treating diarrhea.
Bausch Health Companies (Salix Pharmaceuticals) is supporting an investigator-initiated Phase 2 study of Relistor. The drug is intended for patients with resectable squamous cell carcinoma of the head and neck.
In July 2023, 404 Media reported that Bausch Health had been hacked: 1.6 million registration numbers, issued by the United States Drug Enforcement Administration to healthcare providers for the prescription of controlled substances, were stolen and held for ransom. Cybersecurity firm Mandiant attributed the breach to a threat actor, UNC5537, which was also involved in the major Snowflake Inc. hack.
Products
Bausch Health's main products include drugs in the fields of dermatology, neurology, and infectious disease.
Medications
The company's major prescription drugs are:
Rifaximin (Xifaxan), for treatment of traveler's diarrhea and irritable bowel syndrome with diarrhea
Budesonide (Uceris), to help get mild to moderate ulcerative colitis under control
Minocycline (Arestin, Solodyn), an antibiotic used for procedures related to periodontitis
Efinaconazole (Jublia), for treatment of toenail fungus
Acne drugs: clindamycin/tretinoin (Ziana), benzoyl peroxide/clindamycin (Acanya, Onexton), tretinoin (Atralin, Retin-A Micro), benzoyl peroxide (Microsphere), tazarotene (Arazlo)
Pimecrolimus (Elidel), used to treat atopic dermatitis
Metformin (Glumetza), to improve glycemic control in adults with type 2 diabetes mellitus
Bupropion (Wellbutrin XL), for treatment of depression
Isoprenaline (Isuprel), for treatment of mild or transient episodes of heart block
Tetrabenazine (Xenazine), for treatment of chorea associated with Huntington's disease
Sodium nitroprusside (Nitropress), for the immediate reduction of blood pressure of patients in hypertensive crises
Penicillamine (Cuprimine), to treat Wilson's disease (a condition in which high levels of copper in the body cause damage to the liver, brain, and other organs), cystinuria (a condition which leads to cystine stones in the kidneys), and in people with severe, active rheumatoid arthritis who have failed to respond to an adequate trial of conventional therapy.
Bexarotene (Targretin), a retinoid for treatment of Cutaneous T-Cell Lymphoma
Aciclovir (Zovirax), a topical antiviral used against herpes viruses
Triethylenetetramine (Syprine), used for treatment of patients with Wilson's disease
Loteprednol (Lotemax) gel, a topical corticosteroid indicated for the treatment of inflammation and pain following ocular surgery
Over the counter products
The company's major over the counter drugs are:
Ocuvite, an eye vitamin
PreserVision, an eye vitamin
ReNu Multiplus, for lubrication of contact lenses
Biotrue, an eye lubricant
Artelac, to treat dry eyes
Boston, for cleaning of contact lenses
SootheXP, an eye lubricant
References
External links
Companies listed on the Toronto Stock Exchange
Pharmaceutical companies of Canada
Canadian brands
Companies based in Laval, Quebec
Pharmaceutical companies established in 1959
1959 establishments in California
S&P/TSX 60
Life sciences industry
Specialty drugs
Tax inversions

High-level waste (https://en.wikipedia.org/wiki/High-level%20waste)

High-level waste (HLW) is a type of nuclear waste created by the reprocessing of spent nuclear fuel. It exists in two main forms:
First and second cycle raffinate and other waste streams created by nuclear reprocessing.
Waste formed by vitrification of liquid high-level waste.
Liquid high-level waste is typically held temporarily in underground tanks pending vitrification. Most of the high-level waste created by the Manhattan Project and the weapons programs of the Cold War exists in this form because funding for further processing was typically not part of the original weapons programs. Both spent nuclear fuel and vitrified waste are considered as suitable forms for long term disposal, after a period of temporary storage in the case of spent nuclear fuel.
HLW contains many of the fission products and transuranic elements generated in the reactor core and is the type of nuclear waste with the highest activity. HLW accounts for over 95% of the total radioactivity produced in the nuclear power process. In other words, while most nuclear waste by volume is low-level and intermediate-level waste, such as protective clothing and equipment contaminated with radioactive material, the majority of the radioactivity produced by nuclear power generation comes from high-level waste.
Some countries, particularly France, reprocess commercial spent fuel.
High-level waste is very radioactive and, therefore, requires special shielding during handling and transport. Initially it also needs cooling, because it generates a great deal of heat. Most of the heat, at least after short-lived nuclides have decayed, is from the medium-lived fission products caesium-137 and strontium-90, which have half-lives on the order of 30 years.
A typical large 1000 MWe nuclear reactor produces 25–30 tons of spent fuel per year. If the fuel were reprocessed and vitrified, the waste volume would be only about three cubic meters per year, but the decay heat would be almost the same.
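The cooling requirement described above follows from simple half-life arithmetic. A minimal sketch (the half-life values of roughly 30.1 years for caesium-137 and 28.8 years for strontium-90 are approximate, and real decay heat also depends on each nuclide's decay energy):

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of a radionuclide (and hence of its activity) remaining
    after t_years: N(t)/N(0) = 2 ** (-t / half_life)."""
    return 2.0 ** (-t_years / half_life_years)

# Approximate half-lives: Cs-137 about 30.1 years, Sr-90 about 28.8 years.
for nuclide, t_half in (("Cs-137", 30.1), ("Sr-90", 28.8)):
    for t in (30, 100, 300):
        print(f"{nuclide}: {remaining_fraction(t, t_half):.4f} "
              f"of the original inventory left after {t} years")
```

After roughly ten half-lives (about 300 years), less than a thousandth of either nuclide remains, which is why the medium-term heat load falls off over a few centuries while the long-lived transuranics still require geological isolation.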
It is generally accepted that the final waste will be disposed of in a deep geological repository, and many countries have developed plans for such a site, including Finland, France, Japan, the United States, and Sweden.
Definitions
High-level waste is the highly radioactive waste material resulting from the reprocessing of spent nuclear fuel, including liquid waste produced directly in reprocessing and any solid material derived from such liquid waste that contains fission products in sufficient concentrations; and other highly radioactive material that is determined, consistent with existing law, to require permanent isolation.
Spent (used) reactor fuel.
Spent nuclear fuel is used reactor fuel that is no longer efficient at generating electricity because its fission process has slowed due to a build-up of neutron-absorbing reaction poisons. However, it is still thermally hot, highly radioactive, and potentially harmful.
Waste materials from reprocessing.
Materials for nuclear weapons are acquired by reprocessing spent nuclear fuel from breeder reactors. Reprocessing is a method of chemically treating spent fuel to separate out uranium and plutonium. The byproduct of reprocessing is a highly radioactive sludge residue.
Storage
High-level radioactive waste is stored for 10 or 20 years in spent fuel pools, and then can be put in dry cask storage facilities.
In 1997, in the 20 countries which account for most of the world's nuclear power generation, spent fuel storage capacity at the reactors was 148,000 tonnes, with 59% of this utilized. Away-from-reactor storage capacity was 78,000 tonnes, with 44% utilized.
See also
Radioactive waste
Low-level waste
Transuranic waste
Mixed waste
Into Eternity (film)
Notes
References
Fentiman, Audeen W. and James H. Saling. Radioactive Waste Management. New York: Taylor & Francis, 2002. Second ed.
Large, John H. Risks and Hazards arising the Transportation of Irradiated Fuel and Nuclear Materials in the United Kingdom R3144-A1, March 2006
External links
NRC Backgrounder on Radioactive Waste
Radioactive waste

Circle bundle (https://en.wikipedia.org/wiki/Circle%20bundle)

In mathematics, a circle bundle is a fiber bundle where the fiber is the circle S^1.
Oriented circle bundles are also known as principal U(1)-bundles, or equivalently, as principal SO(2)-bundles. In physics, circle bundles are the natural geometric setting for electromagnetism. A circle bundle is a special case of a sphere bundle.
As 3-manifolds
Circle bundles over surfaces are an important example of 3-manifolds. A more general class of 3-manifolds is Seifert fiber spaces, which may be viewed as a kind of "singular" circle bundle, or as a circle bundle over a two-dimensional orbifold.
Relationship to electrodynamics
The Maxwell equations correspond to an electromagnetic field represented by a 2-form F, with π*F being cohomologous to zero, i.e. exact. In particular, there always exists a 1-form A, the electromagnetic four-potential (equivalently, the affine connection), such that π*F = dA.
Given a circle bundle P over M and its projection π : P → M, one has the homomorphism

π* : H^2(M, Z) → H^2(P, Z),

where π* is the pullback. Each homomorphism corresponds to a Dirac monopole; the integer cohomology groups correspond to the quantization of the electric charge. The Aharonov–Bohm effect can be understood as the holonomy of the connection on the associated line bundle describing the electron wave-function. In essence, the Aharonov–Bohm effect is not a quantum-mechanical effect (contrary to popular belief), as no quantization is involved or required in the construction of the fiber bundles or connections.
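Written out, the relations just described take the following compact form (a sketch using standard differential-geometry conventions; signs and normalizations vary by author):

```latex
% A is a local connection 1-form (the four-potential) on the circle bundle,
% and F is its curvature (the field strength):
F = \mathrm{d}A \qquad\Longrightarrow\qquad \mathrm{d}F = 0
  \quad\text{(the homogeneous Maxwell equations)}.
% Integrality of the class of F encodes Dirac's charge quantization:
\frac{1}{2\pi}\int_{\Sigma} F \;\in\; \mathbb{Z}
  \quad\text{for every closed surface } \Sigma \subset M .
```

The integer on the second line is the monopole charge enclosed by Σ, matching the statement that the integer cohomology classes classify Dirac monopoles.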
Examples
The Hopf fibration is an example of a non-trivial circle bundle.
The unit tangent bundle of a surface is another example of a circle bundle.
The unit tangent bundle of a non-orientable surface is a circle bundle that is not a principal bundle. Only orientable surfaces have principal unit tangent bundles.
Another method for constructing circle bundles is to take a complex line bundle L → M and its associated sphere (here circle) bundle. Since this bundle carries an orientation induced from L, it is a principal U(1)-bundle. Moreover, the characteristic classes from Chern–Weil theory of the U(1)-bundle agree with the characteristic classes of L.
For example, consider the analytification of a complex plane curve. Since the characteristic classes pull back non-trivially, we have that the line bundle associated to the sheaf has non-trivial Chern class.
Classification
The isomorphism classes of principal U(1)-bundles over a manifold M are in one-to-one correspondence with the homotopy classes of maps M → BU(1), where BU(1) is called the classifying space for U(1). Note that BU(1) = CP^∞, the infinite-dimensional complex projective space, and that it is an example of the Eilenberg–MacLane space K(Z, 2). Such bundles are classified by an element of the second integral cohomology group of M, since

[M, BU(1)] ≅ H^2(M, Z).
This isomorphism is realized by the Euler class; equivalently, it is the first Chern class of a smooth complex line bundle (essentially because a circle is homotopically equivalent to C^*, the complex plane with the origin removed; and so a complex line bundle with the zero section removed is homotopically equivalent to a circle bundle).
A circle bundle is a principal U(1)-bundle if and only if the associated map M → BZ/2 is null-homotopic, which is true if and only if the bundle is fibrewise orientable. Thus, for the more general case, where the circle bundle over M might not be orientable, the isomorphism classes are in one-to-one correspondence with the homotopy classes of maps M → BO(2). This follows from the extension of groups SO(2) → O(2) → Z/2, where SO(2) ≅ U(1).
Deligne complexes
The above classification only applies to circle bundles in general; the corresponding classification for smooth circle bundles, or, say, the circle bundles with an affine connection requires a more complex cohomology theory. Results include that smooth circle bundles are classified by the second Deligne cohomology, while circle bundles with an affine connection and line bundle gerbes are classified by other Deligne cohomology groups.
See also
Wang sequence
References
.
Circles
Fiber bundles
K-theory

Curviacus (https://en.wikipedia.org/wiki/Curviacus)

Curviacus is a genus of Ediacaran organism of uncertain lineage that displays a modular body plan consisting of crescent-shaped chambers. It contains a single species, Curviacus ediacaranus.
Etymology
The genus name Curviacus references the shape of the crescent chambers; coming from Latin curvus meaning curved and acus meaning needle.
Phylogeny
The phylogeny of this fossil is not yet known. Some scientists believe the genus to be a coralline algal or fungal stem group.
Occurrence
C. ediacaranus dates to the late Ediacaran and has been found in the Shibantan Member of the Dengying Formation, the bituminous limestone section of the formation. It is unusual for Ediacaran biota to be preserved in limestone; C. ediacaranus is the only Palaeopascichnus fossil reported from carbonate rather than siliciclastic rock. This type of fossilization allows for three-dimensional analysis.
Description
These fossils occur on the bedding surfaces of bituminous limestone. The fossilized specimens have calcispar walls, with the inner chambers filled with micrite. The walls stand in raised relief because the calcispar does not erode as easily. C. ediacaranus is a slightly oblong macrofossil ranging from 5–14 cm in length. It is characterized by curved or crescent-shaped chambers arranged in series, with adjacent chambers sharing walls. All of the chambers are convex in the same direction. Each chamber is narrow, about 1–3 mm in width. Chamber length can be consistent or inconsistent; inconsistencies can give a false impression of branching. Additionally, the walls of the chambers sometimes converge laterally.
References
Ediacaran life
Incertae sedis

Block cellular automaton (https://en.wikipedia.org/wiki/Block%20cellular%20automaton)

A block cellular automaton or partitioning cellular automaton is a special kind of cellular automaton in which the lattice of cells is divided into non-overlapping blocks (with different partitions at different time steps) and the transition rule is applied to a whole block at a time rather than a single cell. Block cellular automata are useful for simulations of physical quantities, because it is straightforward to choose transition rules that obey physical constraints such as reversibility and conservation laws.
Definition
A block cellular automaton consists of the following components:
A regular lattice of cells
A finite set of the states that each cell may be in
A partition of the cells into a uniform tessellation in which each tile of the partition has the same size and shape
A rule for shifting the partition after each time step
A transition rule, a function that takes as input an assignment of states for the cells in a single tile and produces as output another assignment of states for the same cells.
In each time step, the transition rule is applied simultaneously and synchronously to all of the tiles in the partition. Then, the partition is shifted and the same operation is repeated in the next time step, and so forth. In this way, as with any cellular automaton, the pattern of cell states changes over time to perform some nontrivial computation or simulation.
Neighborhoods
The simplest partitioning scheme is probably the Margolus neighborhood, named after Norman Margolus, who first studied block cellular automata using this neighborhood structure. In the Margolus neighborhood, the lattice is divided into 2 × 2 cell blocks (2 × 2 squares in two dimensions, or 2 × 2 × 2 cubes in three dimensions, etc.) which are shifted by one cell (along each dimension) on alternate timesteps.
A closely related technique due to K. Morita and M. Harao consists in partitioning each cell into a finite number of parts, each part being devoted to some neighbor. The evolution proceeds by exchanging the corresponding parts between neighbors and then applying on each cell a purely local transformation depending only on the state of the cell (and not on the states of its neighbors). With such a construction scheme, the cellular automaton is guaranteed to be reversible if the local transformation is itself a bijection. This technique may be viewed as a block cellular automaton on a finer lattice of cells, formed by the parts of each larger cell; the blocks of this finer lattice alternate between the sets of parts within a single large cell and the sets of parts in neighboring cells that share parts with each other.
Reversibility and conservation
As long as the rule for evolving each block is reversible, the entire automaton will also be. More strongly, in this case, the time-reversed behavior of the automaton can also be described as a block cellular automaton, with the same block structure and with a transition rule that inverts the original automaton's rule within each block. The converse is also true: if the blocks are not individually reversible, the global evolution cannot be reversible: if two different configurations x and y of a block lead to the same result state z, then a global configuration with x in one block would be indistinguishable after one step from the configuration in which the x is replaced by y. That is, a cellular automaton is reversible globally if and only if it is reversible at the block level.
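Because reversibility reduces to bijectivity of the block rule, it can be checked by brute force for small blocks. A minimal sketch (the example rules `swap` and `erase` are illustrative, not taken from the text):

```python
from itertools import product

def is_reversible(block_rule, states, block_size):
    """A block cellular automaton is globally reversible if and only if its
    block transition function is a bijection on block configurations.
    Brute-force check over all len(states) ** block_size blocks."""
    seen = set()
    for block in product(states, repeat=block_size):
        image = block_rule(block)
        if image in seen:  # two blocks collapse to one image: information lost
            return False
        seen.add(image)
    return True

# Swapping the two cells of a 2-cell block is a bijection, hence reversible;
# copying the first cell over the second is not.
swap = lambda b: (b[1], b[0])
erase = lambda b: (b[0], b[0])
assert is_reversible(swap, (0, 1), 2)
assert not is_reversible(erase, (0, 1), 2)
```

Since the check only inspects a finite table, it contrasts sharply with the undecidability of reversibility for non-block cellular automata mentioned below.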
The ease of designing reversible block cellular automata, and of testing block cellular automata for reversibility, is in strong contrast to cellular automata with other non-block neighborhood structures, for which it is undecidable whether the automaton is reversible and for which the reverse dynamics may require much larger neighborhoods than the forward dynamics. Any reversible cellular automaton may be simulated by a reversible block cellular automaton with a larger number of states; however, because of the undecidability of reversibility for non-block cellular automata, there is no computable bound on the radius of the regions in the non-block automaton that correspond to blocks in the simulation, and the translation from a non-block rule to a block rule is also not computable.
Block cellular automata are also a convenient formalism in which to design rules that, in addition to reversibility, implement conservation laws such as the conservation of particle number, conservation of momentum, etc.. For instance, if the rule within each block preserves the number of live cells in the block, then the global evolution of the automaton will also preserve the same number. This property is useful in the applications of cellular automata to physical simulation.
Simulation by conventional cellular automata
As Toffoli and Margolus write, the block cellular automaton model does not introduce any additional power compared to a conventional cellular automaton that uses the same neighborhood structure at each time step: any block cellular automaton may be simulated on a conventional cellular automaton by using more states and a larger neighborhood. Specifically, let the two automata use the same lattice of cells, but let each state of the conventional automaton specify the state of the block automaton, the phase of its partition shifting pattern, and the position of the cell within its block. For instance, with the Margolus neighborhood, this would increase the number of states by a factor of eight: there are four possible positions that a cell may take in its block, and two phases to the partition. Additionally, let the neighborhood of the conventional automaton be the union of the blocks containing the given cell in the block cellular automaton. Then with this neighborhood and state structure, each update to the block automaton may be simulated by a single update to the conventional cellular automaton.
Applications
Block cellular automata are commonly used to implement lattice gases and other quasi-physical simulations, due to the ease of simulating physical constraints such as conservation laws in these systems.
For instance, the Margolus model may be used to simulate the HPP lattice gas model, in which particles move in two perpendicular directions and scatter at right angles when they collide with each other. In the block cellular simulation of this model, the update rule moves each cell to the cell diagonally opposite in its block, except in the case that a cell contains two diagonally opposite particles, in which case they are replaced by the complementary pair of diagonally opposite particles. In this way, particles move diagonally and scatter according to the HPP model. An alternative rule that simulates the HPP lattice gas model with horizontal and vertical motion of particles, rather than with diagonal motion, involves rotating the contents of each block clockwise or counterclockwise in alternating phases, except again in the case that a cell contains two diagonally opposite particles, in which case it remains unchanged.
In either of these models, momentum (the sum of the velocity vectors of the moving particles) is conserved, as well as their number, an essential property for simulating physical gases. However, the HPP models are somewhat unrealistic as a model of gas dynamics, because they have additional non-physical conservation rules: the total momentum within each line of motion, as well as the total momentum of the overall system, is conserved. More complex models based on the hexagonal grid avoid this problem.
These automata may also be used to model the motion of grains of sand in sand piles and hourglasses. In this application, one may use a Margolus neighborhood with an update rule that preserves the number of grains within each block but that moves each grain as far down within its block as possible. If a block includes two grains that are stacked vertically on top of each other, the transition function of the automaton replaces it by a block in which the grains are side-by-side, in effect allowing tall sand piles to topple and spread. This model is not reversible, but it still obeys a conservation law on the number of particles. A modified rule, using the same neighborhood but moving the particles sideways to the extent possible as well as down, allows the simulated sandpiles to spread even when they are not very steep. More sophisticated cellular automaton sand pile models are also possible, incorporating phenomena such as wind transport and friction.
Margolus' original application for the block cellular automaton model was to simulate the billiard ball model of reversible computation, in which Boolean logic signals are simulated by moving particles and logic gates are simulated by elastic collisions of those particles. It is possible, for instance, to perform billiard-ball computations in the two-dimensional Margolus model, with two states per cell, and with the number of live cells conserved by the evolution of the model. In the "BBM" rule that simulates the billiard-ball model in this way, signals consist of single live cells, moving diagonally. To accomplish this motion, the block transition function replaces a block containing a single live cell with another block in which the cell has been moved to the opposite corner of the block. Similarly, elastic collisions may be performed by a block transition function that replaces two diagonally opposite live cells by the other two cells of the block. In all other configurations of a block, the block transition function makes no change to its state. In this model, rectangles of live cells (carefully aligned with respect to the partition) remain stable, and may be used as mirrors to guide the paths of the moving particles. For instance, the illustration of the Margolus neighborhood shows four particles and a mirror; if the next step uses the blue partition, then two particles are moving towards the mirror while the other two are about to collide, whereas if the next step uses the red partition, then two particles are moving away from the mirror and the other two have just collided and will move apart from each other.
Additional rules
Toffoli and Margolus suggest two more reversible rules for the Margolus neighborhood with two-state cells that, while not motivated by physical considerations, lead to interesting dynamics.
Critters
In the "Critters" rule, the transition function reverses the state of every cell in a block, except for a block with exactly two live cells which remains unchanged. Additionally, blocks with three live cells undergo a 180-degree rotation as well as the state reversal. This is a reversible rule, and it obeys conservation laws on the number of particles (counting a particle as a live cell in even phases and as a dead cell in odd phases) and on the parity of the number of particles along diagonal lines. Because it is reversible, initial states in which all cells take randomly chosen states remain unstructured throughout their evolution. However, when started with a smaller field of random cells centered within a larger region of dead cells, this rule leads to complex dynamics similar to those in Conway's Game of Life in which many small patterns similar to life's glider escape from the central random area and interact with each other. Unlike the gliders in Life, reversibility and the conservation of particles together imply that when gliders crash together in Critters, at least one must escape, and often these crashes allow both incoming gliders to reconstitute themselves on different outgoing tracks. By means of such collisions, this rule can also simulate the billiard ball model of computing, although in a more complex way than the BBM rule. The Critters rule can also support more complex spaceships of varying speeds as well as oscillators with infinitely many different periods.
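The Critters block rule is easy to state in code, and its bijectivity (hence reversibility) can be verified directly. A sketch (the block is represented here as a flat 4-tuple in reading order, so reversing the tuple implements the 180-degree rotation):

```python
from itertools import product

def critters_block(b):
    """Margolus "Critters" rule on a 2 x 2 block, given as a flat 4-tuple in
    reading order. Every cell is complemented, except that a block with
    exactly two live cells is left unchanged, and a block with exactly three
    live cells is rotated 180 degrees before being complemented."""
    n = sum(b)
    if n == 2:
        return b
    if n == 3:
        b = b[::-1]  # reversing reading order rotates a 2x2 block by 180 degrees
    return tuple(1 - c for c in b)

# The rule is a bijection on the 16 possible block states, so the automaton
# built from it is reversible.
images = {critters_block(b) for b in product((0, 1), repeat=4)}
assert len(images) == 16
```

Note that a live cell count of 0, 1, 3, or 4 changes parity under the rule, which is why a "particle" must be counted as a live cell on even phases and a dead cell on odd phases for the conservation law to hold.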
Tron
In the "Tron" rule, the transition function leaves each block unchanged except when all four of its cells have the same state, in which case their states are all reversed. Running this rule from initial conditions in the form of a rectangle of live cells, or from similar simple straight-edged shapes, leads to complex rectilinear patterns. Toffoli and Margolus also suggest that this rule can be used to implement a local synchronization rule that allows any Margolus-neighborhood block cellular automaton to be simulated using an asynchronous cellular automaton. In this simulation, each cell of an asynchronous automaton stores both a state for the simulated automaton and a second bit representing the parity of a timestamp for that cell; therefore, the resulting asynchronous automaton has twice as many states as the automaton it simulates. The timestamps are constrained to differ by at most one between adjacent cells, and any block of four cells whose timestamps all have the correct parity may be updated according to the block rule being simulated. When an update of this type is performed, the timestamp parities should also be updated according to the Tron rule, which necessarily preserves the constraint on adjacent timestamps. By performing local updates in this way, the evolution of each cell in the asynchronous automaton is identical to its evolution in the synchronous block automaton being simulated.
See also
Toothpick sequence, a fractal pattern that can be emulated by cellular automata with the Margolus neighborhood
References
External links
Critters simulation, Seth Koehler, Univ. of Florida
Cellular automata

Ecology and Evolution (https://en.wikipedia.org/wiki/Ecology%20and%20Evolution)

Ecology and Evolution is a biweekly open-access scientific journal covering all areas of ecology, evolution, and conservation. Its editors-in-chief are Allen Moore, Andrew Beckerman, Jenn Firn, Chris Foote, and Gareth Jenkins.
Abstracting and indexing
According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.167. The journal is indexed in Web of Science: Science Citation Index Expanded, BIOSIS Previews, Biological Abstracts, and Scopus.
References
External links
Ecology journals
English-language journals
Wiley-Blackwell academic journals
Biweekly journals
Academic journals established in 2011
Evolutionary biology journals

Mark Van Raamsdonk (https://en.wikipedia.org/wiki/Mark%20Van%20Raamsdonk)

Mark Van Raamsdonk has been a professor in the Department of Physics and Astronomy at the University of British Columbia since 2002. Before that, he was a postdoc at Stanford University from 2000 until 2002, and a graduate student at Princeton University from 1995 until 2000, when he received his PhD under the supervision of Washington Taylor. As an undergraduate, he completed a combined mathematics and physics degree at the University of British Columbia, graduating with what is believed to have been the highest GPA in the university's history to that point.
In 2009 Mark Van Raamsdonk started to work on the relationship between quantum mechanics and gravity during his first sabbatical year. He published his results "Building up spacetime with quantum entanglement" as an essay in 2010, which won the first prize of the annual essay contest run by the Gravity Research Foundation. Van Raamsdonk is a member of the "It from Qubit" collaboration, which was formed in 2015.
Mark Van Raamsdonk plays the saxophone and has organized a concert series at UBC, inspired by a similar one that existed during his time at Princeton.
In 2021, Van Raamsdonk published a short picture book titled "The Hot and Cold Adventures of Mr. Brick".
References
Theoretical physicists
Year of birth missing (living people)
Living people
Academic staff of the University of British Columbia
Princeton University alumni
Simons Investigator

Diphosphene (https://en.wikipedia.org/wiki/Diphosphene)

Diphosphene is a compound having the formula (PH)2. It exists as two geometric isomers, E and Z. Diphosphene is also the parent member of the entire class of diphosphene compounds with the formula (PR)2, where R is an organyl group.
References
Phosphorus hydrides

Brilacidin (https://en.wikipedia.org/wiki/Brilacidin)

Brilacidin (formerly PMX-30063), an investigational new drug, is a polymer-based antibiotic currently in human clinical trials, and represents a new class of antibiotics called host defense protein mimetics, or HDP-mimetics, which are non-peptide synthetic small molecules modeled after host defense peptides (HDPs). HDPs, also called antimicrobial peptides, some of which are defensins, are part of the innate immune response and are common to most higher forms of life. As brilacidin is modeled after a defensin, it is also called a defensin mimetic.
Brilacidin is an antibiotic that works by disrupting bacterial cell membranes, mimicking defensins that play a role in innate immunity. Several mimics of antimicrobial peptides, both peptides and non-peptides, have been studied, but none have overcome difficulties to reach the market.
Structure and action
Brilacidin, a non-peptide chemical mimic, is an aryl amide foldamer designed to replicate the amphiphilic properties of antimicrobial peptides while solving the problems encountered by peptide-based antimicrobials.
Brilacidin, a broad-spectrum antibiotic, has potent Gram positive activity and Gram negative coverage, and is highly effective in treating the 'superbug' methicillin-resistant Staphylococcus aureus (MRSA). Brilacidin has low cytotoxicity against mammalian cells while selectively targeting bacteria, directly and rapidly disrupting their membranes, resulting in the bacteria's death. Due to this unique mechanism of action (mimicking the host's natural immune response, proven to be successful in fighting off infections over millions of years of evolution), bacterial antibiotic resistance is less likely to develop.
Potential significance
There has not been a new drug approval from a new class of antibiotics since 1987. While six antibiotics have been approved over the last year, all of them are adaptations of existing antibiotic classes rather than entirely new ones. Novel antibiotics are crucial, as antibiotic resistance poses a global health risk. The World Health Organization, warning of a "post-antibiotic era", has stated that antimicrobial resistance (AMR) is a "problem so serious that it threatens the achievements of modern medicine".
History
Brilacidin and other defensin mimetics were first developed by University of Pennsylvania-based researchers leveraging advanced computational bioinformatics. Their efforts were consolidated and officially incorporated in 2002 under the company name PolyMedix.
PolyMedix conducted pre-clinical and clinical research with brilacidin through a completed Phase 2a human clinical trial with positive results. After discontinuing a clinical trial for an unrelated compound, PolyMedix filed for Chapter 7 bankruptcy protection on 1 April 2013. Cellceutix acquired the PolyMedix assets and intellectual property, including the licenses and patents for brilacidin and the rest of the HDP-mimetic pipeline, from the bankruptcy court, which approved Cellceutix's stalking-horse bid on 4 September 2013.
On 7 June 2017, Cellceutix announced a company name change to Innovation Pharmaceuticals Inc. On 9 June 2017, the stock ticker was changed to "IPIX".
Clinical trials
Innovation Pharmaceuticals advanced brilacidin through early-stage human clinical trials to a completed Phase 2a proof-of-concept clinical trial. Since the acquisition, brilacidin has entered a Phase 2b clinical trial. Brilacidin was granted Qualified Infectious Disease Product (QIDP) designation by the FDA under the Generating Antibiotic Incentives Now Act of 2011 (GAIN Act).
Phase 2a clinical trial – ABSSSI
Randomized, Dose Ranging, Active Controlled Efficacy and Safety Evaluation of PMX-30063 as Initial Treatment for Acute Bacterial Skin and Skin Structure Infections (ABSSSI) Caused by Staphylococcus aureus
The study started in October 2010 and had a primary completion date of December 2011 for final data collection for the primary outcome measure. Overall, 215 patients were randomized into either one of the three brilacidin arms or the active comparator Daptomycin arm. There were three dosing regimens for brilacidin, a low, medium and high dose administered for three days, and one dosing regimen for Daptomycin administered for seven days.
The clinical trial was successful, demonstrating safety and clinical efficacy for all evaluated doses of brilacidin, with three-day brilacidin cure rates of all dosing regimens comparable with seven days of Daptomycin. The results indicated the potential for a shorter brilacidin dosing regimen. Shorter dosing regimens are important as they reduce the risks from Intravenous therapy complications, reduce costs such as reduced hospital stays and clinic visits, and can help reduce the emergence of antibiotic resistance through a combination of a quick bacterial kill, shorter duration of treatment, and increased patient compliance.
Phase 2b clinical trial – ABSSSI
Efficacy and Safety Study of Brilacidin to Treat Serious Skin Infections
The study started in February 2014, and completed enrollment was announced on 19 August 2014. Overall, 215 patients were randomized to one of three dosing regimens of brilacidin (single dose 0.6 mg/kg; single dose 0.8 mg/kg; 1.2 mg/kg over 3 days) or 7 days of once-daily daptomycin; a single dose of brilacidin was found to be comparable to 7 days of daptomycin. The primary endpoint was clinical success in the intent-to-treat population, defined as a reduction of at least 20% in the area of the ABSSSI lesion, relative to baseline, observed 48–72 hours after the first dose of study drug, with no rescue antibiotics administered.
Phase 2 clinical trial – oral mucositis
The brilacidin trial for oral mucositis (brilacidin-OM) started in May 2015 and was expected to be completed in December 2017. Brilacidin-OM is an oral rinse of brilacidin in water. Approximately 60 patients receiving chemoradiation for head and neck cancer were randomized to receive either brilacidin-OM or placebo three times daily for seven weeks. Various primary and secondary outcome measures were recorded to assess the efficacy of brilacidin-OM in preventing or reducing the severity of oral mucositis in patients receiving chemoradiation.
Phase 2 clinical trial – Covid-19 / SARS-CoV-2
The brilacidin trial for the treatment of COVID-19 infection started in February 2021 and was expected to be completed in July 2021. The study is a randomized, blinded, placebo-controlled, parallel-group design enrolling 120 patients. The placebo or drug is administered via IV infusion to patients with moderate to severe COVID-19, with SARS-CoV-2 infection confirmed by a positive standard polymerase chain reaction test (or equivalent/other approved diagnostic test) within 4 days prior to starting study treatment, who are hospitalized with respiratory distress but not yet requiring high-level respiratory support.
The HDP-mimetic pipeline
Development is ongoing for numerous brilacidin analogs, selected by laboratory testing of the various HDP-mimetic and defensin-mimetic compounds in the antibiotic pipeline. Pre-clinical research has shown select brilacidin analogs to be effective in killing a variety of important Gram-negative pathogens (the so-called superbugs), such as Pseudomonas aeruginosa, Klebsiella pneumoniae, Escherichia coli and Acinetobacter baumannii, as well as highly multi-drug-resistant NDM-1-producing K. pneumoniae. An abstract update on these efforts was presented at the European Congress of Clinical Microbiology and Infectious Disease (ECCMID) 2015 annual conference. The footnote links to the full presentation. Other HDP-mimetic analogs have proven effective in vitro against C. albicans and other Candida species.
Also acquired with brilacidin and the HDP-mimetic pipeline were the rights to the related PolyCide family of compounds, polymeric formulations that function as antimicrobial agents. These compounds are similar to brilacidin in that they are also synthetic mimics of HDPs, and they have bacterial-killing activity superior to triclosan and silver nitrate, common biocidal agents. PolyCide compounds could be used as additives to paints, plastics, textiles and other materials to create self-sterilizing products and surfaces.
Notes
References
Antibiotics
Experimental drugs
Trifluoromethyl compounds
Guanidines
Pyrrolidines
Pyrimidines | Brilacidin | Chemistry,Biology | 1,856 |
75,252,094 | https://en.wikipedia.org/wiki/Flip%20distance | In discrete mathematics and theoretical computer science, the flip distance between two triangulations of the same point set is the number of flips required to transform one triangulation into another. A flip removes an edge between two triangles in the triangulation and then adds the other diagonal in the edge's enclosing quadrilateral, forming a different triangulation of the same point set.
This problem is known to be NP-hard. However, the computational complexity of determining the flip distance between triangulations of convex polygons, a special case of this problem, is unknown. Computing the flip distance between convex polygon triangulations is also equivalent to computing the rotation distance, the number of rotations required to transform one binary tree into another.
Definition
Given a family of triangulations of some geometric object, a flip is an operation that transforms one triangulation to another by removing an edge between two triangles and adding the opposite diagonal to the resulting quadrilateral. The flip distance between two triangulations is the minimum number of flips needed to transform one triangulation into another. It can also be described as the shortest path distance in a flip graph, a graph that has a vertex for each triangulation and an edge for each flip between two triangulations. Flips and flip distances can be defined in this way for several different kinds of triangulations, including triangulations of sets of points in the Euclidean plane, triangulations of polygons, and triangulations of abstract manifolds.
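For small instances, the flip distance can be computed directly by breadth-first search over the flip graph. The sketch below is illustrative only (the representation and names are my own, not from the article): it encodes a triangulation of a convex n-gon as a frozenset of its diagonals and flips a diagonal by locating the two apexes of the triangles that share it.

```python
from collections import deque

def edge(a, b):
    """Normalize an edge to a sorted vertex pair."""
    return (a, b) if a < b else (b, a)

def flips(tri, n):
    """Yield every triangulation reachable from `tri` by one flip.

    `tri` is a frozenset of diagonals of a convex n-gon; polygon
    sides are implicit. In convex position, a diagonal (a, c) is
    shared by exactly two triangles, whose apexes are the vertices
    adjacent (by an edge) to both a and c.
    """
    sides = {edge(i, (i + 1) % n) for i in range(n)}
    edges = sides | set(tri)
    for d in tri:
        a, c = d
        opp = [v for v in range(n)
               if v not in d and edge(a, v) in edges and edge(c, v) in edges]
        if len(opp) == 2:
            # remove (a, c), add the other diagonal of the quadrilateral
            yield (tri - {d}) | {edge(opp[0], opp[1])}

def flip_distance(t1, t2, n):
    """Shortest-path distance in the flip graph, by BFS from t1."""
    t1, t2 = frozenset(t1), frozenset(t2)
    seen, queue = {t1}, deque([(t1, 0)])
    while queue:
        tri, dist = queue.popleft()
        if tri == t2:
            return dist
        for nxt in flips(tri, n):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
```

For a pentagon, the flip graph is a 5-cycle, so for example the two "fan" triangulations with apexes at vertices 0 and 1 are two flips apart. BFS is exponential in general, which is consistent with the hardness results discussed below; it is only practical for small point sets.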
Feasibility
The flip distance is well-defined only if any triangulation can be converted to any other triangulation via a sequence of flips. An equivalent condition is that the flip graph must be connected.
In 1936, Klaus Wagner showed that any maximal planar graph on a sphere can be transformed into any other maximal planar graph with the same vertices through flips. A. K. Dewdney generalized this result to triangulations on the surface of a torus, while Charles Lawson did so for triangulations of a point set in the 2-dimensional plane.
For triangulations of a point set in dimension 5 or above, there exist examples where the flip graph is disconnected, so that some triangulations cannot be obtained from others via flips. Whether all flip graphs of finite 3- or 4-dimensional point sets are connected is an open problem.
Diameter of the flip graph
The maximum number of flips required to transform one triangulation into another is the diameter of the flip graph. The diameter of the flip graph of a convex n-gon was obtained by Daniel Sleator, Robert Tarjan, and William Thurston when n is sufficiently large, and by Lionel Pournin for all n. This diameter is equal to 2n − 10 when n ≥ 13.
The diameter of other flip graphs has been studied. For instance, Klaus Wagner provided a quadratic upper bound on the diameter of the flip graph of a set of unmarked points on the sphere; the best-known upper and lower bounds on this diameter are now both linear in the number of points. The diameter of the flip graphs of arbitrary topological surfaces with boundary has also been studied, and its exact value is known in several cases.
Equivalence with other problems
The flip distance between triangulations of a convex polygon is equivalent to the rotation distance between two binary trees.
Computational complexity
Computing the flip distance between triangulations of a point set is both NP-complete and APX-hard. However, it is fixed-parameter tractable (FPT) in the flip distance, and several FPT algorithms have been proposed.
Computing the flip distance between triangulations of a simple polygon is also NP-hard.
The complexity of computing the flip distance between triangulations of a convex polygon remains an open problem.
Algorithms
Let n be the number of points in the point set and k the flip distance. The known FPT algorithms run in time exponential in k but polynomial in n; a faster FPT algorithm, likewise exponential only in k, exists for the flip distance between convex polygon triangulations.
If no five points of a point set form an empty pentagon, there exists a polynomial-time algorithm for the flip distance between triangulations of this point set.
See also
Associahedron
Flip graph
Rotation distance
Tamari lattice
References
Triangulation (geometry)
Reconfiguration | Flip distance | Mathematics | 884 |
8,017,307 | https://en.wikipedia.org/wiki/Development%20%28topology%29 | In the mathematical field of topology, a development is a countable collection of open covers of a topological space that satisfies certain separation axioms.
Let X be a topological space. A development for X is a countable collection F1, F2, … of open coverings of X such that, for any closed subset C of X and any point p in the complement of C, there exists a cover Fi such that no element of Fi which contains p intersects C. A space with a development is called developable.
A development F1, F2, … such that F_{i+1} ⊆ F_i for all i is called a nested development. A theorem from Vickery states that every developable space in fact has a nested development. If F_{i+1} is a refinement of F_i for all i, then the development is called a refined development.
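As a standard illustration not stated in the article, every metric space X is developable: the coverings by open balls of a fixed small radius form a development,

```latex
\mathcal{F}_n \;=\; \bigl\{\, B(x, 2^{-n}) : x \in X \,\bigr\}, \qquad n \in \mathbb{N}.
```

Indeed, given a closed set C and a point p outside C, the distance d(p, C) is some ε > 0; choosing n with 2^{-n+1} < ε, any ball of F_n containing p is contained in B(p, 2^{-n+1}) and hence misses C.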
Vickery's theorem implies that a topological space is a Moore space if and only if it is regular and developable.
References
General topology | Development (topology) | Mathematics | 171 |
29,558,954 | https://en.wikipedia.org/wiki/Besselian%20elements | The Besselian elements are a set of values used to calculate and predict the local circumstances of occultations for an observer on Earth. This method is particularly used for solar eclipses, but is also applied for occultations of stars or planets by the Moon and transits of Venus or Mercury. In addition, for lunar eclipses a similar method is used, in which the shadow is cast on the Moon instead of the Earth.
For solar eclipses, the Besselian elements are used to calculate the path of the umbra and penumbra on the Earth's surface, and hence the circumstances of the eclipse at a specific location. This method was developed in the 1820s by the German mathematician and astronomer Friedrich Bessel, and later improved by William Chauvenet.
The basic concept is that Besselian elements describe the movement of the shadow cast by the occulting body – for solar eclipses this is the shadow of the Moon – on a specifically chosen plane, called the fundamental plane. This is the geocentric, normal plane of the shadow axis. In other words, it is the plane through the Earth's center that is perpendicular to the line through the centers of the occulting and the occulted bodies. One advantage, among others, of choosing this plane is that the outline of the shadow on it is always a circle, and there is no perspective distortion.
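The geometry of the fundamental plane can be sketched with elementary vector algebra. The following is a minimal, hypothetical illustration (the function name and frame conventions are my own, not from the article): given geocentric positions of the occulting and occulted bodies, it builds an orthonormal basis with one axis along the shadow axis and the other two spanning the fundamental plane.

```python
import math

def fundamental_plane_basis(moon, sun):
    """Orthonormal basis (x, y, z) for a fundamental-plane frame.

    moon, sun: geocentric Cartesian positions in any consistent unit.
    z points along the shadow axis (from the occulting body toward
    the Sun); x and y span the fundamental plane, the plane through
    the Earth's center perpendicular to z. Assumes the shadow axis
    is not parallel to the Earth's rotation axis (the +Z direction).
    """
    # Shadow-axis direction: from the occulting body toward the Sun.
    g = [s - m for s, m in zip(sun, moon)]
    norm = math.sqrt(sum(c * c for c in g))
    z = [c / norm for c in g]
    # x: unit vector in the equatorial plane, perpendicular to z.
    hx = math.hypot(z[0], z[1])
    x = [-z[1] / hx, z[0] / hx, 0.0]
    # y = z x (cross) x completes the right-handed triad.
    y = [z[1] * x[2] - z[2] * x[1],
         z[2] * x[0] - z[0] * x[2],
         z[0] * x[1] - z[1] * x[0]]
    return x, y, z
```

Because the shadow axis is perpendicular to the fundamental plane by construction, the outline of the shadow on that plane is a circle, as the article notes; actual Besselian elements then track the circle's center coordinates and radii over time in such a frame.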
Comparatively few values are needed to accurately describe the movement of the shadow on the fundamental plane. Based on this, the next step is to project the shadow cone on to the Earth's surface, taking into account the figure of the Earth, its rotation, and the observer's latitude, longitude and elevation.
Although the Besselian elements determine the overall geometry of an eclipse, which longitudes on the Earth's surface will experience an eclipse are determined by the Earth's rotation. A variable called ΔT measures how much that rotation has slowed over time and must also be taken into account when predicting local eclipse circumstances.
References
Further reading
Robin M. Green: Spherical astronomy. Cambridge University Press, Cambridge 1985,
William Chauvenet: A manual of spherical and practical astronomy. J. B. Lippincott & Co, Philadelphia 1863
Eclipses
Solar eclipses
Celestial mechanics | Besselian elements | Physics,Astronomy | 455 |