| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
46,266,195 | https://en.wikipedia.org/wiki/UniKey%20%28software%29 | UniKey is the most popular third-party software and input method editor (IME) for encoding Vietnamese on Windows. The core, the UniKey Vietnamese Input Method, is also the engine embedded in many Vietnamese software-based keyboards on Windows, Android, Linux, macOS and iOS. UniKey is free, and the source code for the UniKey Vietnamese Input Method is distributed under the GNU General Public License. The official website of UniKey is unikey.org, which supports both English and Vietnamese.
Overview
UniKey supports:
Many Vietnamese character sets/encodings:
TCVN3 (ABC), VN Unicode, VIQR
VNI, VPS, VISCII, BK HCM1, BK HCM2, etc.
Unicode UTF-8, Unicode NCR Decimal/Hexadecimal for Web editors
All 3 popular input methods: TELEX, VNI and VIQR
Win32 platforms: Windows 10, 8, 7, Vista, 2000, XP, 9x/ME
UniKey is minimalistic software and does not require additional libraries.
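As an illustration of how a Telex-style input method maps plain keystrokes to Vietnamese letters, here is a minimal toy sketch in Python. The rule table is a small, hypothetical subset: UniKey's real engine also handles tone marks, context and backspace, none of which is modelled here.

```python
# Toy Telex-style composition: a few letter rules only (no tone marks).
# Illustrative subset; not UniKey's engine, rule set, or API.
TELEX_RULES = {
    "aa": "â", "aw": "ă", "ee": "ê",
    "oo": "ô", "ow": "ơ", "dd": "đ", "w": "ư",
}

def compose(keystrokes: str) -> str:
    out = keystrokes
    # Apply the two-key rules before the single-key "w" rule.
    for seq, char in sorted(TELEX_RULES.items(), key=lambda kv: -len(kv[0])):
        out = out.replace(seq, char)
    return out

print(compose("ddaay"))  # 'đây' ("here"; tone handling omitted)
```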
About UniKey
UniKey for Windows was released as a free program in 1999. It gained popularity for encoding Vietnamese thanks to its speed, simplicity, and reliability. It became the most popular keyboard program for inputting Vietnamese.
The core engine, the UniKey Vietnamese Input Method, is open source and was first released as part of the x-unikey Vietnamese keyboard for Linux in 2001. Since then, the engine has been integrated into input methods on different operating systems and software frameworks. ibus-unikey (developed by Le Quoc Tuan using the UniKey engine) is widely used on Linux distributions.
From Mac OS X Leopard onwards, released in 2007, Apple has integrated the UniKey Vietnamese Input Method into the built-in Vietnamese input of Mac OS. Since 2010, the engine has also been integrated into the built-in Vietnamese keyboard in iOS (starting from iOS 4.0). The UniKey engine now also runs on iPhones, iPads, and other devices that use Vietnamese input.
UniKey is developed by Pham Kim Long.
References
External links
Vietnamese software
Vietnamese character input
Windows-only free software
Free software programmed in C++
Software using the GNU General Public License | UniKey (software) | Technology | 478 |
68,016,367 | https://en.wikipedia.org/wiki/Amaurodon%20aeruginascens | Amaurodon aeruginascens is a species of fungus belonging to the family Thelephoraceae. It is native to Central America.
References
Thelephorales
Fungi described in 1988
Fungi of Central America
Taxa named by Leif Ryvarden
Fungus species | Amaurodon aeruginascens | Biology | 55 |
3,854,094 | https://en.wikipedia.org/wiki/Countertop | A countertop, also counter top, counter, benchtop, worktop (British English) or kitchen bench (Australian or New Zealand English), bunker (Scottish English) is a raised, firm, flat, and horizontal surface. They are built for work in kitchens or other food preparation areas, bathrooms or lavatories, and workrooms in general. The surface is frequently installed upon and supported by cabinets, positioned at an ergonomic height for the user and the particular task for which it is designed. A countertop may be constructed of various materials with different attributes of functionality, durability and aesthetics, and may have built-in appliances, or accessory items relative to the intended application.
In Australian and British English, the term counter is generally reserved for a surface of this type that forms a boundary between a space for public access and a space for workers to carry out service tasks. In other contexts, the term bench, benchtop, or "sink table" is used.
Kitchen countertops
The common fitted Western-style kitchen, developed in the early 20th century, is typically an arrangement of assembled unit cabinetry covered with a more-or-less continuous countertop work surface. The "unfitted" kitchen design style exemplified by Johnny Grey may also include detached and/or varied countertop surfaces mounted on discrete base support structures. Primary considerations of material choice and conformation are durability, functionality, hygienics, appearance, and cost.
When installed in a kitchen on standard (U.S) wall-mounted base unit cabinets, countertops are typically about from front to back and are designed with a slight overhang on the front (leading) edge. This allows for a convenient reach to objects at the back of the countertop while protecting the base cabinet faces. It can also act as kick space that may not have been provided at the floor, allowing a person to stand closer to the countertop, improving ergonomics. In the UK the standard width is 600 mm (approximately 24 inches). Finished heights from the floor will vary depending on usage but typically will be , with a material thickness depending on that chosen. They may include an integrated or applied backsplash (UK: upstand) to prevent spills and objects from falling behind the cabinets. Kitchen countertops may also be installed on freestanding islands, dining areas or bars, desk and table tops, and other specialized task areas; as before, they may incorporate cantilevers, freespans and overhangs depending on application. The horizontal surface and vertical edges of the countertop can be decorated in manners ranging from plain to very elaborate. They are often conformed to accommodate the installation of sinks, stoves (cookers), ranges, and cooktops, or other accessories such as dispensers, integrated drain boards, and cutting boards.
Laboratory countertops
Laboratory countertops are countertops used specifically in scientific fields, for educational labs or research purposes. They can be used to hold equipment, tools, projects and chemicals. The characteristics of a laboratory countertop are generally determined by the reagents or corrosive chemicals being used. The purpose of the countertop differs depending on whether it is used in a chemistry, physics, food science, microbiology or biology lab. Preferred laboratory countertops are generally strong, durable, and water-, moisture- or chemical-resistant. Depending on the objectives of a lab, they may additionally be required to resist acids or high temperatures.
Many laboratory countertops are equipped with drawers that can be used to store materials that might get in the way while conducting an experiment. Materials such as lab notebooks, pencils, extra papers and folders are advised and expected to be stored away in the provided spaces or inside the drawer. The laboratory countertops' styles and variations may differ according to where they are (geographical location) and what labs they are being used for. They are also often made of different materials depending on their usage.
The most common and durable material is phenolic resin, because it is lightweight, strong, and chemical- and moisture-resistant. It can handle heat exposure up to ; beyond this temperature, epoxy resin is used. Phenolic resin and epoxy resin are functionally equivalent but differ in their heat tolerance. Other materials used to build laboratory countertops include plastic laminate, stainless steel and even wood.
Materials
Countertops can be made from a wide range of materials and the cost of the completed countertop can vary widely depending on the material chosen. The durability and ease of use of the material often rises with the increasing cost of the material but some costly materials are neither particularly durable nor user-friendly. Some common countertop materials are as follows:
Natural stones
Granite
Limestone
Marble
Soapstone
Gabbro
Slate
Quartzite
Silicate mineral
Travertine
Quartz
Wood
Hardwood
Softwood
Metals
Stainless steel
Copper
Zinc
Aluminium
Crafted glass
Manufactured materials
Concrete
Cast-in-place
Precast
Processed slabs
Compressed paper or fiber
Cultured marble
High-pressure laminates
Post-formed high-pressure decorative laminates
Self-edged high-pressure decorative laminates
Quartz surfacing or engineered stone — 99.9% solid, composed of about 93% aggregate and 7% polyester resin by weight, plus colors and binders
Recycled glass surfaces, with either concrete or polyester resin binders
Solid-surface acrylic plastic materials
Solid-surface polyester acrylic
Terrazzo
Tile
Cast-in-place materials
Natural stone suspended in a resin
Post-consumer glass suspended in a resin
Epoxy
Phenolic resin
Natural stone
Natural stone is one of the most commonly used materials in countertops. Natural stone or dimension stone slabs (e.g. granite) are shaped using cutting and finishing equipment in the fabricator's shop. The edges are commonly put on by hand-held routers, grinders, or CNC equipment. If the stone has a highly variegated pattern, it may be laid out in final position in the shop for the customer's inspection, or the slabs may be selected by experienced inspectors. Emerging technology allows for virtual stone placement on a computer: exact photographs can be taken that allow a DXF file to be laid on top of a stone image. Multiple slabs of material may be used in this layout process. The countertop assembly is then installed on the job site by professionals.
Commonly, initial countertop fabrication takes place at or near the quarry of origin, with blocks being sawn to thickness and then machined into standard widths (600 mm and upwards), before being surface polished and edged.
This method removes the need to ship waste material and reduces the time needed to prepare client orders. This practice is called "cut to size".
A wide range of details may be pre-machined by the fabricator, allowing for installation of different sinks and cooker designs. A common drawback to natural stone is the need for sealing to prevent harboring of bacteria and/or fluids that may cause staining. In recent years oleophobic impregnators have been introduced as an alternative to surface sealers. With the advent of impregnators the frequency of sealing has been cut down to once every five to ten years on most materials.
Wood
Wooden countertops come in a variety of designs, ranging from butcher block to joined planks to single wide stave. Wood is considered the most eco-friendly option for a kitchen countertop because wood is a renewable resource. Wooden countertops must be thoroughly cleaned and disinfected after contact with foods such as raw meat. Studies have shown that while bacteria are absorbed by the wood, they do not multiply and eventually die. While brand-new plastic work surfaces are indeed easy to disinfect, once they have become heavily knife-scarred they are nearly impossible to disinfect completely. This is not a problem with wooden work surfaces, where the number of knife cuts makes little difference.
High-pressure laminates
Post-formed plastic laminate
"Post-formed" (or literally "formed after being laminated" to the substrate) high pressure laminate countertop, often referred to as "plastic laminate countertop" is a material made more of wood product than plastic. The composition is of kraft paper, decorative papers, and melamine resins, bonded through high heat and pressure. This product is sometimes referred to as Formica or Arborite, but these are trade names of a manufactured high pressure laminate, of which there are many manufacturers.
The postform countertop is typically a high-volume, factory-produced product, which accounts for its economy. It consists of a single thin sheet of laminate (typically .030"–.040" thick) bonded to a 45# density particle board substrate (or a similar base material such as MDF — medium-density fiberboard — or plywood) with a PVA adhesive (polyvinyl acetate, a water-based adhesive). Traditionally, postform countertops were manufactured with a solvent-based contact cement (a highly flammable volatile organic compound, or VOC). In today's marketplace, however, PVA adhesives have taken over for reasons of environmental responsibility (no VOCs), safety (non-combustible), economy, and strength of the glue line.
A typical system consists of the following:
An automated infeed system for sequencing the particle board into production.
The CorFab machine, an automated feed-through machine that cuts the substrate to size, cuts and bonds build-down sticks with a hot-melt adhesive to the underside of the substrate, and shapes the edge detail, all in a single pass.
An automated laminating system that applies the adhesive to both the substrate and laminate.
An indexing unit that aligns the laminate to the substrate with the proper overhang.
A Pinch Roller that makes the bond between the laminate and substrate.
The Postforming Machine, that not only heats and forms the laminate around the substrate, but also cuts away the backsplash (when the top is to be used against a wall) from the main deck, all in a feed-through motion machine.
The AutoCove Machine, which heats and forms the backsplash upward 90 degrees, locking it into place with what is referred to as a cove stick, utilizing hot melt adhesive technology to hold it all together.
The final stage of the system usually consists of a trim saw that cuts the countertops to rough lengths, typically 8', 10' and 12', ready for distribution.
Once manufactured, the tops need only be cut to length, mitered, fitted for assembly, and end-capped (only if there is a visible finished end). A machine specific to cutting the postform countertop is manufactured by only a few companies; it is commonly called a cutting station, top saw, or simply miter saw. This machine accurately cuts the countertop to field dimensions, making it easy for the installer to make the final scribe cuts on-site to complete the work. Sink cutouts can be made either in the field or at the installer's shop.
Overall, the postform countertop is the most economical countertop on the market and has the broadest selection of surface materials to choose from. Surfaces can be either a solid color or a pattern, and textures range from a satin furniture finish to a heavily textured stone or pebbled appearance to a high gloss. Because of this diversity, the postform countertop can satisfy a wide variety of design applications, and due to its economy, it can easily be replaced to provide a fresh appearance in any room.
Self edge or wood edge laminate
Self-edge or wood-edge plastic laminate countertops are also very popular with those who choose to have few or no surface seams. In this style, the fabricator builds a substrate for the countertop out of MDF or particle board and then glues sheets of laminate to the substrate using contact cement. The laminate is then trimmed using a router. This method cannot reproduce the curved contours of post-formed countertops, but it can easily conform to a much wider range of floor plans with fewer seams.
Crafted glass
Custom architectural crafted glass, tempered glass, textured glass pieces, and the ancient art of verre églomisé, or reverse-gilded glass, are applied to contemporary uses including countertops, backsplashes, and tabletops. Glass work may be customized to suit by craftsmen in the studio, then installed on site either in small components (such as a kitchen countertop composed of three rectangles of verre églomisé) or as immense single units (for example, a glass countertop and sink basin formed of one continuous piece of textured glass). Surface texture comes in several variations, such as sanded, melted, pixel, and linear. Glass countertops also often have customized edges, including brushed, polished, textured, and fire-polished edges. The glass is non-porous, relatively stain-proof, extremely hygienic, and "extremely heat resistant (up to 700 degrees)".
Much work is being done to "recycle" glass using sources such as post-consumer glass or post-industrial float glass. The material can be crushed or cut into strips that are heated to the softening point of glass, binding the loose material back into a solid form.
Tile
Tile, including ceramic tile and stone tile, is installed in much the same way as floor tiles or wall tiles through the use of mortar and grouting the tile gaps after they have been cemented down. The tiles that sit on the wall typically behind a countertop are called a backsplash.
Solid surface materials
Solid-surface acrylic or polyester materials are usually prefabricated at the installer's shop and then assembled on site. The material is readily glued, and the glue joints are then sanded, leaving almost no visible trace of the joint. The edge treatment for solid-surface countertops can be very elaborate. The material itself is usually only about thick, so an edge is usually created by stacking two or three layers of the material. The built-up edge can then be shaped to a rounded edge or an ogee. Fancier edge treatments are more expensive.
Engineered quartz surfacing
Engineered stone quartz surfacing is made from a mixture of ground quartz and resins. Testing has shown that it retains much of the toughness of quartz but displays increased ductility due to the resin, improving impact resistance. Tests have also shown that this countertop surface is the most resistant to discoloration from foods and household products among common household surfaces, the second most stain-resistant being granite. The countertops are custom made, more scratch-resistant as well as less porous than natural quartz surfaces, and do not need to be sealed like other stone surfaces. Due to the presence of the resins, quartz counters are less prone to staining. Thicknesses may be 6 mm, 1.2 cm (1/2 inch), 2 cm (3/4 inch), 3 cm (1¼ inch) or 4 cm (1½ inch).
Concrete
Concrete may be utilized as a surfacing material in one of several forms: cast-in-place (in which the fabricator creates forms atop the previously installed cabinetry, places, and then finishes the material in situ), custom precast (in which the fabricator creates site templates, duplicates the pattern in a production facility offsite, and installs the finished product atop the cabinetry), and the machining of pre-manufactured gauged slabs (similar to natural stone fabrication).
Concrete, especially precast, lends itself to a high degree of customization due to the phase-change nature of its creation, filling a specific form with a fluid material which hardens (through mineral hydration) to a durable cast stone. Color choices, edge styles, three-dimensional sculpting, and integral features such as sinks, drainboards, and decorative embedments are design options which may be incorporated. Due to its site-specific and generally handmade nature, concrete countertops are often produced by small shops and individual artisans although there are several large-scale manufacturers of gauged slabs.
Cultured marble
Cultured marble countertops are man-made vanity tops that resemble real stone marble. They are made by mixing a high-strength polyester resin with real marble stone dust; the combination is then formulated with additional chemicals and poured into a cast mold. Such molds can produce bathtubs, whirlpool decks, shower pans, window sills, and vanity tops. The finished material is significantly less expensive than natural marble and four times stronger than natural stones such as granite or marble. The molding process also allows the fabricated countertops to have features such as varied surface textures and a vast array of colors that natural stone cannot offer. Cultured marble countertops are aesthetically pleasing and a more economical and durable alternative to real stone marble.
Paper composites
Paper composite panels fabricated from paper and resin laminated under heat and pressure to form a solid, dense material have been used as countertops in residences and science labs since the 1950s.
Other materials
Stainless steel, stone, terrazzo, bamboo, and other materials are usually prefabricated and assembled on site as well. The difficulty of prefabrication rises with the more exotic materials. As with solid-surface synthetic materials, the edge treatments can vary widely, but the material is usually thicker so there is often no need to build up the edge with multiple layers of the material.
Many predesigned, prefabricated units (including sinks, drainboards, and other accessories) are available in stainless steel. These may be used "stand-alone" or integrated into larger custom assemblies. Some stainless steel systems stand on integrated legs and do not require the support of cabinetry.
Sink installation
In any of these styles, "self-rimming" sinks can be used. They are mounted in templated holes cut in the countertop (or substrate material) using a jigsaw or other cutter appropriate to the material at hand and are suspended by their rim. The rim forms a close fit, reinforced with a sealant, on the top surface of the countertop, especially when the sink is clamped into the hole from below.
Most materials also allow the installation of a "bottom-mount" or "under-mount" sink. With these, the edge of the countertop material is exposed at the hole created for the sink (and so must be a carefully finished edge rather than a rough cut; this cut is generally done at the fabricator's workshop). The sink is then mounted to the bottom of the material from below. Especially for under-mount sinks, silicone-based sealants are typically used to assure a waterproof joint between the sink and the countertop material. The advantage of an under-mount sink is that it gives a contemporary look to the kitchen; the disadvantage is the extra cost of both the sink and the countertop.
Solid-surface plastic materials allow a third option: sinks made of the same plastic material as the countertop can easily be glued to the underside of the countertop material and the joint sanded flat, creating the usual invisible joint and eliminating any dirt-catching seam between the sink and the countertop. The disadvantage is that the sinks do not have the same impact resistance of stainless or cast iron and may differentially expand and contract with extreme temperature change (as might be caused by a pot of hot water dumped into the sink). In a similar fashion, with stainless steel, a sink may be welded into the countertop; the joint is then ground to create a finished, concealed appearance.
References
Building materials
Furniture | Countertop | Physics,Engineering | 4,083 |
41,918,279 | https://en.wikipedia.org/wiki/Talaromyces%20atroroseus | Talaromyces atroroseus is a species of fungus described as new to science in 2013. Found in soil and fruit, it was first identified from house dust collected in South Africa. The fungus produces a stable red pigment with no known toxins that, it is speculated, could be used in manufacturing, especially mass-produced foods.
References
Trichocomaceae
Fungi described in 2013
Fungi of Africa
Fungus species | Talaromyces atroroseus | Biology | 85 |
10,090,902 | https://en.wikipedia.org/wiki/Mobile%20database | A mobile database is a database that mobile computing devices (e.g., smartphones and PDAs) store and share over a mobile network, or one that is stored on the mobile device itself. This could be a list of contacts, price information, distance travelled, or any other information.
Many applications require the ability to download information from an information repository and operate on this information even when out of range or disconnected; an example is accessing contacts and calendar entries on a mobile phone. In this scenario, a user requires access to update information from files in the home directories on a server or customer records from a database. The type of access and workload generated by such users is different from the traditional workloads seen in client–server systems.
Mobile databases are not used solely for the revision of company contacts and calendars, but are also utilized in a number of industries.
Considerations
Mobile users must be able to work without a network connection due to poor or even non-existent connections. A cache could be maintained to hold recently accessed data and transactions so that they are not lost due to connection failure. Users might not require access to truly live data, only recently modified data, and uploading of changes might be deferred until reconnected.
Bandwidth must be conserved (a common requirement on wireless networks that charge per megabyte of data transferred).
Mobile computing devices tend to have slower CPUs and limited battery life.
Users with multiple devices (e.g. smartphone and tablet) need to synchronize their devices to a centralized data store. This may require application-specific automation features.
In database theory this is known as "replication". A good mobile database system should provide tools for automatic replication that take into account that others may have modified the same data while the user was away, so that not just the last update is kept but a "merge" of variants is also supported; a sketch follows this list.
Users may change location geographically and on the network. Usually dealing with this is left to the operating system, which is responsible for maintaining the wireless network connection.
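A minimal sketch of the field-level merge idea from the replication consideration above, in Python; real sync engines also track tombstones, clocks and per-table conflict policies, all omitted here:

```python
# Three-way, field-level merge of two offline revisions of one record
# against their common ancestor ("base"). Hypothetical sketch only.
def merge(base: dict, local: dict, remote: dict) -> dict:
    merged = {}
    for key in base.keys() | local.keys() | remote.keys():
        b, l, r = base.get(key), local.get(key), remote.get(key)
        if l == r:
            merged[key] = l      # both replicas agree
        elif l == b:
            merged[key] = r      # only the remote replica changed this field
        elif r == b:
            merged[key] = l      # only the local replica changed this field
        else:
            merged[key] = r      # true conflict: here, defer to the server
    return merged

base   = {"name": "Ann",  "phone": "555-0100"}
local  = {"name": "Ann",  "phone": "555-0199"}   # edited on the device
remote = {"name": "Anna", "phone": "555-0100"}   # edited on the server
print(merge(base, local, remote))  # {'name': 'Anna', 'phone': '555-0199'}
```

Unlike last-write-wins, both non-conflicting edits survive the synchronization.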
Products
A number of mobile databases are commercially available.
See also
Cloud computing
References
External links
Mobile Database Review: Microsoft Databases for Windows CE, By Bryan Morgan, Apr 5, 2002, InformIT
Mobile Database Review: Sybase SQL Anywhere Studio 8.0, By Bryan Morgan, Feb 15, 2002, InformIT
Types of databases
Mobile technology | Mobile database | Technology | 489 |
58,646,832 | https://en.wikipedia.org/wiki/Hind%20Al-Abadleh | Hind Al-Abadleh was a professor of chemistry at Wilfrid Laurier University in Waterloo, Ontario, Canada. She studied the physical chemistry of environmental interfaces, aerosols and climate change.
Early life and education
Al-Abadleh grew up in the United Arab Emirates, where she became interested in chemistry during high school. She was excited that science could be used to protect the environment. She eventually studied chemistry at the United Arab Emirates University, graduating in 1999. She joined the University of Iowa in 1999 for her doctoral studies, earning her PhD in 2003. She was awarded the University of Iowa Dissertation Prize in Mathematics, Physical Sciences and Engineering.
Research
She moved to Northwestern University for a postdoctoral scholarship working with Franz Geiger. While she loved Iowa, the aftermath of 9/11 made America a hostile climate for Muslim women and men. She was appointed to the Department of Chemistry at Wilfrid Laurier University as an assistant professor in 2005 and was eventually promoted to full professor. She was awarded a Research Corporation Cottrell College Science Award to study, spectroscopically, the surface interactions of organoarsenical compounds with geosorbents. Al-Abadleh holds an adjunct professor appointment at the University of Waterloo. She has also been a visiting professor at the University of Toronto and at Trent University (as the inaugural Ray March Visiting Professor). The 2008 Petro-Canada award allowed her to study organic arsenic in soil and water. Her research has been supported by the American Chemical Society, Ontario's Ministry of Research and Innovation, Imperial Oil and the Canadian Foundation for Climate and Atmospheric Sciences. She studies the ageing of aerosols using computational chemistry, mathematical modelling and spectroscopy. She gave a talk at TEDx Laurier University in 2014, To Dream and To Act. In 2015 she published a study showing that aqueous-phase reactions of guaiacol and catechol with iron lead to the formation of secondary colored particles. This study highlighted additional pathways for particle growth in the atmosphere beyond particle nucleation and growth from gas-phase precursors.
In 2018 she was named the Fulbright Canada Research Chair in Climate Change, working at the University of California, Irvine, in 2019. The position allowed her to teach a course on environmental catalysis and conduct research on multiphase chemistry in atmospheric aerosols catalyzed by metals. She is also a board member of Nano Ontario.
Bibliography
FT-IR Study of Water Adsorption on Aluminum Oxide Surfaces
Surface Water Structure and Hygroscopic Properties of Light Absorbing Secondary Organic Polymers of Atmospheric Relevance
Efficient Formation of Light-Absorbing Polymeric Nanoparticles from the Reaction of Soluble Fe(III) with C4 and C6 Dicarboxylic Acids
ATR-FTIR and Flow Microcalorimetry Studies on the Initial Binding Kinetics of Arsenicals at the Organic–Hematite Interface
Density functional theory calculations on the adsorption of monomethylarsonic acid onto hydrated iron (oxyhydr)oxide clusters
Dispersion Effects on the Thermodynamics and Transition States of Dimethylarsinic Acid Adsorption on Hydrated Iron (Oxyhydr)oxide Clusters from Density Functional Theory Calculations
Awards
2018 Wilfrid Laurier University Faculty Association Merit Award
2018 Environmental Science Leader of the Society of Environmental Toxicology and Chemistry
2017 Kitchener-Waterloo Coalition of Muslim Women Women Who Inspire Award
2016 Canadian Arab Institute in Toronto Canadian Arab to Watch
2016 Muslim Awards for Excellence (MAX) Platinum Award of Excellence
2016 Wilfrid Laurier University Faculty Association Merit Award
2012 Wilfrid Laurier University Faculty Association Merit Award
2008 Petro-Canada Young Innovator Award
References
External links
Year of birth missing (living people)
Living people
Emirati women scientists
University of Iowa alumni
Emirati chemists
Academic staff of Wilfrid Laurier University
Environmental scientists | Hind Al-Abadleh | Environmental_science | 771 |
8,836,978 | https://en.wikipedia.org/wiki/Calypso%20%28electronic%20ticketing%20system%29 | Calypso is an international electronic ticketing standard for microprocessor contactless smart cards, originally designed by a group of transit operators from 11 countries including Belgium, Canada, France, Germany, Italy, Latvia, Mexico and Portugal. It ensures multiple sources of compatible products and allows interoperability between several transport operators in the same area.
History
Calypso was born in 1993 from a partnership between the Paris transit operator RATP and Innovatron, a company owned by the French smartcard inventor, Roland Moreno. The key features of the scheme were patented by Innovatron. Most European transit operators from Belgium, Germany, France, Italy and Portugal eventually joined the group in the following years. The first use of the technology was in 1996.
At the same time, the international standard ISO/IEC 14443 for contactless smart cards was being designed, and the actors of Calypso lobbied strongly to have their technology included in the standard, but Innovatron's patents—and the price of the related royalties—were not compatible with ISO's policy. Therefore, despite their closeness, there are a few significant differences between Calypso's historical contactless protocol and the ISO/IEC 14443 Type B international standard.
The actors of Calypso also contributed to the European standard for ticketing data (EN 1545).
After a few years of trials, the system was deployed widely in the early 2000s in major European cities such as Strasbourg, Paris, Venice and Lisbon, later followed by Turin, Porto, Marseille, Lyon, and many smaller cities. Calypso has since expanded to other countries such as Belgium, Israel, Canada, Mexico and Colombia.
Technical aspects
Calypso is based on two main technologies:
The microprocessor smartcard, widely used in many monetary transactions;
The contactless interface (improperly called RFID) ensuring both remote powering and communication between the reader and the card.
A Calypso card, whatever its form (card, watch, mobile phone or other NFC object, etc.), has a microprocessor which contains all the information related to its owner's rights for the application, and which implements the Calypso authentication scheme for security. This distinguishes it from other e-ticketing systems, such as London's Oyster card, where the card is only a memory chip with no processing capabilities.
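To illustrate the kind of on-card cryptography this enables, here is a generic challenge-response sketch in Python. This is a hypothetical illustration of the general principle, not Calypso's actual authentication scheme, algorithms, or key management:

```python
# Generic challenge-response between a reader and a smart card that share a
# secret key. Real Calypso security (keys, algorithms, session mechanics)
# is defined by the standard and is not reproduced here.
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(16)   # in practice, provisioned into card and reader

def card_answer(challenge: bytes) -> bytes:
    """What the card's microprocessor computes from the reader's challenge."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

# Reader side: issue a fresh random challenge, then verify the card's answer.
challenge = os.urandom(8)
answer = card_answer(challenge)
expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(answer, expected))  # True only for a genuine card
```

A memory-only card cannot perform such a computation, which is exactly the distinction drawn above.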
Calypso Networks Association
A not-for-profit association, Calypso Networks Association (CNA), has been created to bring together the transit network operators using Calypso and the suppliers of Calypso-compliant equipment. The association promotes the standard to new operators and manufacturers, defines the certification policy that guarantees the compatibility of all current and future products, and governs the evolution of the standard. The technical work is performed mainly by a subcontractor, Spirtech.
See also
CIPURSE, open security standard for transit fare collection systems by Open Standard for Public Transportation (OSPT) Alliance
References
External links
Calypso Networks Association
Radio-frequency identification
Contactless smart cards | Calypso (electronic ticketing system) | Engineering | 637 |
73,421,121 | https://en.wikipedia.org/wiki/Pertusaria%20elixii | Pertusaria elixii is a rare species of corticolous (bark-dwelling), crustose lichen in the family Pertusariaceae. Found in Thailand, it was formally described as a new species in 2005 by Sureeporn Jariangprasert. The type specimen was collected by the author from Doi Inthanon National Park (Chom Thong district, Chiang Mai) at an altitude of , where it was found growing on Betula alnoides. The species epithet honours Australian lichenologist John Elix, who assisted the author in the chemical analysis of lichen specimens. Pertusaria elixii is distinguished from related species by the number of ascospores in its ascus (four), and by the presence of 2'-O-methyl-substituted homologues of perlatolic acid.
See also
List of Pertusaria species
References
elixii
Lichen species
Lichens described in 2005
Lichens of Indo-China
Species known from a single specimen | Pertusaria elixii | Biology | 202 |
32,864,292 | https://en.wikipedia.org/wiki/Fungal%20fucose-specific%20lectin | In molecular biology, the fungal fucose-specific lectin family is a family of lectins. Lectins are proteins involved in many recognition events at the molecular or cellular level. These fungal lectins, such as the Aleuria aurantia lectin (AAL), specifically recognise fucosylated glycans. AAL is a dimeric protein, with each monomer organised into a six-bladed beta-propeller fold and a small antiparallel two-stranded beta-sheet. The beta-propeller fold is important in fucose recognition: five binding pockets are found between the propeller blades. The small beta-sheet, on the other hand, is involved in the dimerisation process.
References
Protein domains | Fungal fucose-specific lectin | Biology | 156 |
35,430,753 | https://en.wikipedia.org/wiki/Ministry%20of%20Petroleum%20and%20Mining | The Ministry of Petroleum is a ministry of the Government of South Sudan. The incumbent minister is Puot Kang Chol. The ministry contributes more than 90% of South Sudan's total income through oil production and exportation via pipeline from the oil fields in South Sudan to Port Sudan on the Red Sea. The Government of Sudan taxes South Sudan heavily for using the pipeline and its associated infrastructure. The South Sudanese economy is sensitive to changes in global oil prices, with declines severely affecting the country's national income. In June 2021, the ministry launched its first oil licensing round in Juba. According to new studies commissioned by the ministry, approximately 90% of the country's oil and gas reserves remain unexplored.
List of ministers of petroleum
Major companies in South Sudan petroleum
Nile Petroleum Corporation
Petroliam Nasional Berhad (Petronas)
Akon Refinery company Ltd.
China National Petroleum Corporation
References
Petroleum
South Sudan, Petroleum
Energy ministries | Ministry of Petroleum and Mining | Engineering | 197 |
617,522 | https://en.wikipedia.org/wiki/Issai%20Schur | Issai Schur (10 January 1875 – 10 January 1941) was a Russian mathematician who worked in Germany for most of his life. He studied at the University of Berlin. He obtained his doctorate in 1901, became lecturer in 1903 and, after a stay at the University of Bonn, professor in 1919.
As a student of Ferdinand Georg Frobenius, he worked on group representations (the subject with which he is most closely associated), but also in combinatorics and number theory and even theoretical physics. He is perhaps best known today for his result on the existence of the Schur decomposition and for his work on group representations (Schur's lemma).
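To see the Schur decomposition concretely, here is a minimal numerical sketch in Python, assuming NumPy and SciPy are available:

```python
# Schur decomposition: A = Z T Z^H with Z unitary and T upper triangular
# (in the complex form). scipy.linalg.schur computes it directly.
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])
T, Z = schur(A, output="complex")

print(np.allclose(A, Z @ T @ Z.conj().T))  # True: A is recovered
print(np.diag(T))                          # eigenvalues of A (here ±2j)
```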
Schur published under the name of both I. Schur, and J. Schur, the latter especially in Journal für die reine und angewandte Mathematik. This has led to some confusion.
Childhood
Issai Schur was born into a Jewish family, the son of the businessman Moses Schur and his wife Golde Schur (née Landau). He was born in Mogilev on the Dnieper River in what was then the Russian Empire. Schur used the name Schaia (Isaiah — the epitaph on his grave) rather than Issai until his mid-twenties. Schur's father may have been a wholesale merchant.
In 1888, at the age of 13, Schur went to Liepāja (Courland, now in Latvia), where his married sister and his brother lived, 640 km north-west of Mogilev. Courland was one of the three Baltic governorates of Tsarist Russia, and since the Middle Ages the Baltic Germans had been the upper social class there. The local Jewish community spoke mostly German, not Yiddish.
Schur attended the German-speaking Nicolai Gymnasium in Libau from 1888 to 1894, reached the top grade in his final examination, and received a gold medal. Here he became fluent in German.
Education
In October 1894, Schur enrolled at the University of Berlin, concentrating in mathematics and physics. In 1901, he graduated summa cum laude under Frobenius and Lazarus Immanuel Fuchs with his dissertation On a class of matrices that can be assigned to a given matrix, which contains a general theory of the representation of linear groups. According to Vogt, he began to use the name Issai at this time. Schur thought that his chances of success in the Russian Empire were rather poor, and because he spoke German so perfectly, he remained in Berlin. He completed his habilitation in 1903 and became a lecturer at the University of Berlin, a position he held for the ten years from 1903 to 1913.
In 1913 he accepted an appointment as associate professor and successor of Felix Hausdorff at the University of Bonn. In the following years Frobenius tried various ways to get Schur back to Berlin. Among other things, Schur's name was mentioned in a letter dated 27 June 1913 from Frobenius to Robert Gnehm (the School Board President of the ETH) as a possible successor to Carl Friedrich Geiser. Frobenius complained that they had never followed his advice before and then said: "That is why I can't even recommend Prof. J. Schur (now in Bonn) to you. He's too good for Zurich, and should be my successor in Berlin". Hermann Weyl got the job in Zurich. The efforts of Frobenius were finally successful in 1916, when Schur succeeded Johannes Knoblauch as adjunct professor. Frobenius died a year later, on 3 August 1917. Schur and Carathéodory were both named as the frontrunners for his successor. But they chose Constantin Carathéodory in the end. In 1919 Schur finally received a personal professorship, and in 1921 he took over the chair of the retired Friedrich Hermann Schottky. In 1922, he was also added to the Prussian Academy of Sciences.
During the time of Nazism
After the Nazi takeover and the elimination of the parliamentary opposition, the Law for the Restoration of the Professional Civil Service of 7 April 1933 prescribed the dismissal of all public servants who held unpopular political opinions or who were "Jewish" in origin; a subsequent regulation extended this to professors, and therefore also to Schur. Schur was suspended and excluded from the university system. His colleague Erhard Schmidt fought for his reinstatement, and since Schur had been a Prussian official before the First World War, he was allowed to participate in certain special lectures on teaching in the winter semester of 1933/1934. Schur withdrew his application for leave from the Science Minister and passed up the offer of a visiting professorship at the University of Wisconsin–Madison for the academic year 1933–34. One element that likely played a role in the rejection of the offer was that Schur no longer felt he could cope with the demands that would have come with a new beginning in an English-speaking environment.
Already in 1932, Schur's daughter Hilde had married the doctor Chaim Abelin in Bern. As a result, Issai Schur visited his daughter in Bern several times. In Zurich he met often with George Pólya, with whom he was on friendly terms since before the First World War.
On such a trip to Switzerland in the summer of 1935, a letter from Ludwig Bieberbach, signed on behalf of the Rector, reached Schur, stating that he should urgently see Bieberbach at the University of Berlin to discuss an important matter. The matter was Schur's dismissal on 30 September 1935.
Schur remained a member of the Prussian Academy of Sciences after his dismissal as a professor, but a little later he lost this last remnant of his official position. Due to an intervention by Bieberbach in the spring of 1938, he was forced to declare his resignation from the commissions of the Academy. His membership on the advisory board of Mathematische Zeitschrift was ended in early 1939.
Emigration
Schur found himself isolated after the flight of many of his students and the expulsion of renowned scientists from his former place of work. Only Helmut Grunsky had been friendly to him, as Schur reported in the late thirties to his expatriate student Max Menachem Schiffer. The Gestapo was everywhere. Schur had told his wife that he intended to commit suicide if he were ever summoned by the Gestapo, so in the summer of 1938 she intercepted his mail, among it a summons from the Gestapo; she sent Schur on a restorative stay at a home outside Berlin and, with a medical certificate excusing him, met the Gestapo in his place. There they flatly asked why the Schurs were still in Germany. But there were economic obstacles to the planned emigration: emigrating Germans had to pay the Reich Flight Tax, a quarter of their assets, before departure. Schur's wife had inherited a mortgage on a house in Lithuania which, because of Lithuanian foreign-exchange rules, could not be repaid; on the other hand, Schur was forbidden to forgive the debt or assign the mortgage to the German Reich. Thus the Schurs lacked cash and cash equivalents. Finally, the missing sum of money was somehow supplied, and to this day it does not seem to be clear who the donors were.
Schur was able to leave Germany in early 1939. His health, however, was already severely compromised. He traveled in the company of a nurse to his daughter in Bern, where his wife also followed a few days later. There they remained for several weeks and then emigrated to Palestine. Two years later, on his 66th birthday, on 10 January 1941, he died in Tel Aviv of a heart attack.
Work
Schur continued the work of his teacher Frobenius with many important works for group theory and representation theory. In addition, he published important results and elegant proofs of known results in almost all branches of classical algebra and number theory. His collected works are proof of this. There, his work on the theory of integral equations and infinite series can be found.
Linear groups
In his doctoral thesis Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen, Issai Schur determined the polynomial representations of the general linear group GL_n over the field of complex numbers. The results and methods of this work are still relevant today. In his book, J. A. Green determined the polynomial representations of GL_n over infinite fields of arbitrary characteristic, based mainly on Schur's dissertation. Green writes: "This remarkable work (of Schur) contained many very original ideas, developed with superb algebraic skill. Schur showed that these (polynomial) representations are completely reducible, that each irreducible one is 'homogeneous' of some degree r, and that the equivalence types of irreducible polynomial representations of GL_n, of fixed homogeneous degree r, are in one-one correspondence with the partitions of r into not more than n parts. Moreover Schur showed that the character of an irreducible representation of type λ is given by a certain symmetric function in n variables (since described as a 'Schur function')." According to Green, the methods of Schur's dissertation remain important for the theory of algebraic groups.
In 1927, in his work On the rational representations of the general linear group, Schur gave new proofs for the main results of his dissertation. If E is the natural n-dimensional vector space on which GL_n operates, and if r is a natural number, then the r-fold tensor product of E with itself is a module on which GL_n acts, and on which the symmetric group S_r of degree r also operates by permuting the tensor factors. By exploiting these commuting bimodule actions on the tensor space, Schur found elegant proofs of his theorems. This work of Schur was once very well known.
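For context, the "Schur function" mentioned in Green's account has the classical bialternant form, stated here in LaTeX as a standard reference definition:

```latex
% Schur polynomial of a partition \lambda = (\lambda_1 \ge \cdots \ge \lambda_n \ge 0)
% in n variables; the denominator is the Vandermonde determinant.
s_\lambda(x_1,\dots,x_n)
  = \frac{\det\bigl(x_i^{\lambda_j + n - j}\bigr)_{1 \le i,j \le n}}
         {\det\bigl(x_i^{n - j}\bigr)_{1 \le i,j \le n}},
\qquad
\det\bigl(x_i^{n-j}\bigr) = \prod_{1 \le i < j \le n} (x_i - x_j).
```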
Professorship in Berlin
Schur lived in Berlin as a highly respected member of the academic world, an apolitical scholar. A leading mathematician and outstanding and very successful teacher, he held a prestigious chair at the University of Berlin for 16 years. Until 1933, his research group had an excellent reputation at the University of Berlin in Germany and beyond. With Schur in the center, his faculty worked with representation theory, which was extended by his students in different directions (including solvable groups, combinatorics, matrix theory). Schur made fundamental contributions to algebra and group theory which, according to Hermann Weyl, were comparable in scope and depth to those of Emmy Noether (1882–1935).
When Schur's lectures were canceled in 1933, there was an outcry among the students and professors who appreciated and liked him. Through the efforts of his colleague Erhard Schmidt, Schur was allowed to continue lecturing until the end of September 1935. Schur was the last Jewish professor to lose his job at this time.
Zurich lecture
In Switzerland, Schur's colleagues Heinz Hopf and George Pólya were informed of Schur's dismissal in 1935. They tried to help as best they could. On behalf of the head of the Mathematical Seminar, Michel Plancherel, on 12 December 1935 the school board president Arthur Rohn invited Schur to give "a series of lectures on the representation theory of finite groups". Plancherel asked at the same time that the formal invitation come from President Rohn, "since Prof. Schur must obtain the authorization of the competent ministry to give these lectures". From this invitation of the Mathematical Seminar, George Pólya arranged the invitation of the Conference of the Department of Mathematics and Physics on 16 December. Meanwhile, on 14 December, the official invitation letter from President Rohn had already been dispatched to Schur. Schur was promised a fee of CHF 500 for his guest lecture.
Schur did not reply until 28 January 1936, the day he first received the required approval from the local authority. He declared himself willing to accept the invitation and envisaged beginning the lectures on 4 February. Schur spent most of the month of February in Switzerland. Before his return to Germany he visited his daughter in Bern for a few days, and on 27 February he returned to Berlin via Karlsruhe, where his sister lived. In a letter to Pólya from Bern, he closed with the words: "I take my leave of Switzerland with a heavy heart."
In Berlin, meanwhile, the mathematician and Nazi Ludwig Bieberbach, in a letter dated 20 February 1936, informed the Reich Minister for Science, Art and Education of Schur's journey, and announced that he wanted to find out the content of the Zurich lecture.
Significant students
Schur had a total of 26 graduate students, some of whom acquired a mathematical reputation. Among them are
Alfred Brauer, University of Berlin (1928)
Richard Brauer, University of Berlin (1925)
, University of Berlin (1925)
Bernhard Neumann, University of Berlin, Cambridge University (1932, 1935)
Félix Pollaczek, University of Berlin (1922)
Heinz Pruefer, University of Berlin, (1921)
Richard Rado, University of Berlin, Cambridge University (1933, 1935)
Isaac Jacob Schoenberg, Alexandru Ioan Cuza University of Iaşi (1926)
Wilhelm Specht, University of Berlin (1932)
Helmut Wielandt, University of Berlin (1935)
Legacy
Concepts named after Schur
Among others, the following concepts are named after Issai Schur:
List of things named after Issai Schur
Schur algebra
Schur complement
Schur index
Schur indicator
Schur multiplier
Schur orthogonality relations
Schur polynomial
Schur product
Schur test
Schur's inequality
Schur's theorem
Schur-convex function
Schur–Weyl duality
Lehmer–Schur algorithm
Schur's property for normed spaces.
Jordan–Schur theorem
Schur–Zassenhaus theorem
Schur triple
Schur decomposition
Schur's lower bound
Quotes
In his commemorative speech, Alfred Brauer (a PhD student of Schur) spoke about Issai Schur as follows: As a teacher, Schur was excellent. His lectures were very clear, but not always easy, and required cooperation. During the winter semester of 1930, the number of students who wanted to attend Schur's number theory lecture was such that the second-largest university lecture hall, with about 500 seats, was too small. His most human characteristics were probably his great modesty, his helpfulness and his personal interest in his students.
Heinz Hopf, who had been a Privatdozent in Berlin before his appointment to the ETH in Zurich, held Issai Schur in high esteem — as is clear from oral statements and also from letters — both as a mathematician and as a man. The appreciation was entirely mutual: in a 1930 letter to George Pólya on the occasion of the re-appointment of Hermann Weyl, Schur says of Hopf: Hopf is a very excellent teacher, a mathematician of strong temperament and strong effect, a master of his discipline, also excellently trained in other areas. If I have to characterize him as a man, it may suffice to say that I sincerely look forward to each meeting with him.
Schur was, however, known for keeping a proper distance in personal affairs. The testimony of Hopf accords with statements by Schur's former Berlin students Walter Ledermann and Bernhard Neumann.
Publications
Notes
References
Review
External links
1875 births
1941 deaths
People from Mogilev
People from Mogilyovsky Uyezd (Mogilev Governorate)
Belarusian Jews
Emigrants from the Russian Empire to Germany
German people of Belarusian-Jewish descent
19th-century German mathematicians
Combinatorialists
20th-century German mathematicians
Group theorists
Linear algebraists
Humboldt University of Berlin alumni
Academic staff of the Humboldt University of Berlin
Academic staff of the University of Bonn
Members of the Prussian Academy of Sciences
Corresponding Members of the USSR Academy of Sciences
Jewish emigrants from Nazi Germany to Mandatory Palestine
Deaths from coronary artery disease
Burials at Trumpeldor Cemetery
Issai Schur | Issai Schur | Mathematics | 3,294 |
2,670,022 | https://en.wikipedia.org/wiki/Iota2%20Scorpii | ι2 Scorpii, Latinised as Iota2 Scorpii, is a single star in the tail of the zodiac constellation of Scorpius. It has an apparent visual magnitude of +4.82 and is visible to the naked eye. Because of parallax measurement errors, the distance to this star is only approximately known: it lies around 2,500 light years away from the Sun. It has a visual companion, a magnitude 11.0 star at an angular separation of 31.60 arcseconds along a position angle of 36°, as of 2000.
In the literature, there are two different stellar classifications for this star: A2 Ib and A6 Ib. In either case it is an A-type supergiant star with an estimated age of 30 million years and a mass 8.8 times that of the Sun. It shines with a luminosity 5,798 times the Sun's from an outer atmosphere that has an effective temperature of 6,372 K. As with other stars of its type, ι2 Scorpii varies slightly in brightness, showing an amplitude of 0.05 in magnitude.
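The distance estimate follows from the standard parallax relation; a rough consistency check in LaTeX (using 1 pc ≈ 3.26 ly; the parallax value below is inferred from the quoted distance, not taken from a catalogue):

```latex
d\,[\mathrm{pc}] \approx \frac{1}{p\,[\mathrm{arcsec}]},
\qquad
d \approx 2500\ \mathrm{ly} \approx \frac{2500}{3.26}\ \mathrm{pc} \approx 770\ \mathrm{pc}
\;\Longrightarrow\;
p \approx 1.3\ \mathrm{mas}.
```

At parallaxes this small, the measurement error is a large fraction of the signal, which is why the distance is described as only approximate.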
References
External links
Scorpii, Iota2
Scorpius
A-type supergiants
087294
6631
161912
Durchmusterung objects | Iota2 Scorpii | Astronomy | 290 |
208,254 | https://en.wikipedia.org/wiki/16%20%28number%29 | 16 (sixteen) is the natural number following 15 and preceding 17. It is the fourth power of two.
In English speech, the numbers 16 and 60 are sometimes confused, as they sound very similar.
Mathematics
16 is the ninth composite number, and a square number: 4^2 = 4 × 4. It is the first non-unitary fourth power of a prime (of the form p^4, with p = 2). It is the smallest number with exactly five divisors, its proper divisors being 1, 2, 4 and 8.
Sixteen is the only integer that equals m^n and n^m for some unequal integers m and n (m = 2, n = 4, or vice versa). It has this property because 2^4 = 4^2 = 16. It is also equal to ³2, i.e. 2^(2^2) (see tetration).
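This uniqueness is easy to probe numerically; a quick brute-force sketch in Python over a small search range:

```python
# Find values equal to both m**n and n**m for unequal integers m, n >= 2.
hits = {m ** n for m in range(2, 100) for n in range(2, 100)
        if m != n and m ** n == n ** m}
print(hits)  # {16}, from 2**4 == 4**2
```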
The aliquot sum of 16 is 15, within an aliquot sequence of four composite members (16, 15, 9, 4, 3, 1, 0) that belong to the prime 3-aliquot tree.
Sixteen is the largest known integer n for which 2^n + 1 is prime (namely 65,537).
It is the first Erdős–Woods number.
There are 16 partially ordered sets with four unlabeled elements.
16 is the only number that can be both the perimeter and the area of the same square, due to being equal to 4 × 4: a square with side length 4 has perimeter 4 + 4 + 4 + 4 = 16 and area 4 × 4 = 16.
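Equating a square's perimeter and area shows why the side length must be 4, as a one-line check in LaTeX:

```latex
4s = s^2 \;\Longrightarrow\; s = 4,
\qquad \text{perimeter} = 4 \cdot 4 = 16 = 4^2 = \text{area}.
```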
The sedenions form a 16-dimensional hypercomplex number system.
Hexadecimal
Sixteen is the base of the hexadecimal number system, which is used extensively in computer science.
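A short Python illustration of base-16 conversion, and of why hexadecimal suits computing (each hex digit encodes exactly four bits, so two digits describe one byte):

```python
# Decimal <-> hexadecimal round trip, plus the digit/bit correspondence.
value = 254
hex_string = hex(value)           # '0xfe'
round_trip = int(hex_string, 16)  # 254
print(hex_string, round_trip)

print(f"{value:08b} -> {value:02x}")  # 11111110 -> fe (4 bits per hex digit)
```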
Technology
In some computer programming languages, 16 is the size in bits of certain data types (for example, a short in C is typically 16 bits).
16-bit computing
A 16-bit integer can represent up to 65,536 values.
In the 16-bit era, 16-bit microprocessors ran 16-bit applications.
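A minimal Python sketch of the 16-bit value range and its wraparound behaviour, emulated by masking:

```python
# An unsigned 16-bit integer spans 2**16 = 65,536 values (0 through 65535).
MASK_16 = 0xFFFF

print(2 ** 16)                  # 65536 distinct values
print((65_535 + 1) & MASK_16)   # 0 -- incrementing the maximum wraps to zero
```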
Culture
As a unit of measurement
A low power of two, 16 was used in weighing light objects in several cultures. Early civilizations used the weighing scale to measure mass, which made splitting resources into equal halves (and hence into sixteenths by repeated halving) a simple task. In the imperial system, 16 ounces equal one pound. Until the State Council of the People's Republic of China decreed a decimal conversion for currency in 1959, China equated 16 liǎng to one jīn. Chinese Taoists did finger computation on the trigrams and hexagrams by counting the fingertips and joints of the fingers with the tip of the thumb; each hand can count up to 16 in this manner. The Chinese abacus uses two upper beads to represent the 5s and five lower beads to represent the 1s; the seven beads can represent a hexadecimal digit from 0 to 15 in each column.
Age 16
A "sweet sixteen" is celebrated by many sixteen-year-old girls in the United States and Canada. It is a coming-of-age celebration that traditionally marks a girl's transition into womanhood.
In the United States and Canada, 16 is the most common age of sexual consent, as well as the age in the United Kingdom and several European countries. Sixteen is also the minimum age for being allowed a beginner's driver's license with parental consent in many US states and in Canada.
References
External links
Integers | 16 (number) | Mathematics | 638 |
23,090,985 | https://en.wikipedia.org/wiki/NKK%20switches | (formerly Nihon Kaiheiki Kogyo Co., Ltd.) is a designer and manufacturer of diversified industrial operational switches. The company offers illuminated, process sealed, miniature, specialty, surface mount and LCD programmable switches. The company also manufactures toggle, rocker, pushbutton, slide, DIP, rotary, keypad and keylock switches.
Affiliates
NKK Switches of America, Inc. (Scottsdale, AZ)
NKK Switches Hong Kong Co., Ltd. (Hong Kong)
NKK Switches Mactan, Inc. (Philippines)
References
External links
NKK group portal site
NKK switches of America
NKK switches Co., Ltd.
Electronics companies of Japan
Electrical equipment manufacturers
Electrical engineering companies of Japan
Companies based in Kawasaki, Kanagawa
Companies listed on the Tokyo Stock Exchange
Electronics companies established in 1953
1953 establishments in Japan
Japanese brands | NKK switches | Engineering | 173 |
5,064,230 | https://en.wikipedia.org/wiki/Sodium%20thiocyanate | Sodium thiocyanate (sometimes called sodium sulphocyanide) is the chemical compound with the formula NaSCN. This colorless deliquescent salt is one of the main sources of the thiocyanate anion. As such, it is used as a precursor for the synthesis of pharmaceuticals and other specialty chemicals. Thiocyanate salts are typically prepared by the reaction of cyanide with elemental sulfur:
8 NaCN + S8 → 8 NaSCN
Sodium thiocyanate crystallizes in an orthorhombic cell. Each Na+ center is surrounded by three sulfur and three nitrogen ligands provided by the triatomic thiocyanate anion. It is commonly used in the laboratory as a test for the presence of Fe3+ ions.
Applications in chemical synthesis
Sodium thiocyanate is employed to convert alkyl halides into the corresponding alkyl thiocyanates. Closely related reagents include ammonium thiocyanate and potassium thiocyanate, which has twice the solubility in water. Silver thiocyanate may be used as well; the precipitation of insoluble silver halides helps simplify workup. Treatment of isopropyl bromide with sodium thiocyanate in a hot ethanolic solution affords isopropyl thiocyanate. Protonation of sodium thiocyanate affords isothiocyanic acid, S=C=NH (pKa = −1.28). This species is generated in situ from sodium thiocyanate; it adds to organic amines to afford derivatives of thiourea.
References
Sodium compounds
Thiocyanates | Sodium thiocyanate | Chemistry | 359 |
1,067,204 | https://en.wikipedia.org/wiki/Griseofulvin | Griseofulvin is an antifungal medication used to treat a number of types of dermatophytoses (ringworm). This includes fungal infections of the nails and scalp, as well as the skin when antifungal creams have not worked. It is taken by mouth.
Common side effects include allergic reactions, nausea, diarrhea, headache, trouble sleeping, and feeling tired. It is not recommended in people with liver failure or porphyria. Use during or in the months before pregnancy may result in harm to the baby. Griseofulvin works by interfering with fungal mitosis.
Griseofulvin was discovered in 1939 from the soil fungus Penicillium griseofulvum. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Griseofulvin is used orally only for dermatophytosis. It is ineffective topically. It is reserved for cases in which topical treatment with creams is ineffective.
Terbinafine given for 2 to 4 weeks is at least as effective as griseofulvin given for 6 to 8 weeks for treatment of Trichophyton scalp infections. However, griseofulvin is more effective than terbinafine for treatment of Microsporum scalp infections.
Pharmacology
Pharmacodynamics
The drug binds to tubulin, interfering with microtubule function, thus inhibiting mitosis. It binds to keratin in keratin precursor cells and makes them resistant to fungal infections. The drug reaches its site of action only when hair or skin is replaced by the keratin-griseofulvin complex. Griseofulvin then enters the dermatophyte through energy-dependent transport processes and binds to fungal microtubules. This alters the processing for mitosis and also underlying information for deposition of fungal cell walls.
Biosynthesis
It is produced industrially by fermenting the fungus Penicillium griseofulvum.
The first step in the biosynthesis of griseofulvin by P. griseofulvum is the synthesis of the 14-carbon poly-β-keto chain by a type I iterative polyketide synthase (PKS), via iterative addition of six malonyl-CoA units to an acyl-CoA starter unit. The 14-carbon poly-β-keto chain undergoes cyclization and aromatization, catalyzed by a cyclase and an aromatase respectively, through Claisen and aldol condensations to form the benzophenone intermediate. The benzophenone intermediate is then methylated twice via S-adenosyl methionine (SAM) to yield griseophenone C, which is halogenated at the activated site ortho to the phenol group on the left aromatic ring to form griseophenone B. The halogenated species then undergoes a single phenolic oxidation in each ring, forming the two-oxygen diradical species. The right oxygen radical shifts alpha to the carbonyl via resonance, allowing a stereospecific radical coupling by the oxygen radical on the left ring to form a tetrahydrofuranone species. The newly formed grisan skeleton with its spiro center is then O-methylated by SAM to generate dehydrogriseofulvin. Finally, a stereoselective reduction of the olefin of dehydrogriseofulvin by NADPH affords griseofulvin.
Toxicology
Mice
Griseofulvin was found to alter the bile metabolism of mice by Yokoo et al. (1979). The same team went on to find a similar effect in mice from a chemically unrelated substance, 3,5-diethoxycarbonyl-1,4-dihydrocollidine (Yokoo et al. 1982; Tsunoo et al. 1987).
References
Antifungals
Aromatic ketones
Chloroarenes
Cyclohexenes
Disulfiram-like drugs
Halogen-containing natural products
IARC Group 2B carcinogens
Mutagens
Resorcinol ethers
Wikipedia medicine articles ready to translate
Spiro compounds
World Health Organization essential medicines
Terpenes and terpenoids | Griseofulvin | Chemistry | 889 |
61,086,513 | https://en.wikipedia.org/wiki/Amino%20%28app%29 | Amino is a social media application originally developed by Narvii, Inc. It was originally created by Yin Wang and Ben Anderson in 2012, and then launched as an app in 2016. Amino was acquired by MediaLab in January 2021, and the founders are no longer associated with the application.
History
In 2012, Wang and Anderson came up with the idea for a convention-like community while attending an anime convention in Boston, Massachusetts. Later that year, they would release two apps revolving around K-pop and photography that allowed fans of those subjects to chat freely.
In 2012, Amino was officially released.
Growth
Amino received $1.65 million of seed funding in 2014, primarily from Union Square Ventures. Additional seed investors included Google Ventures, SV Angel, Box Group, and other interested parties.
By July 2014, Amino's apps were downloaded 500,000 times. Though only having 15 communities at that time, Amino eventually grew to have 41 communities in September 2015. Amino's apps had been downloaded 13 million times by July 2016. Fandoms had migrated from websites like Facebook and Reddit to Amino, partly because of the app's mobile-native experience.
Before 2016, when a user wanted to join a new Amino, they had to download another app for the Amino they wanted to join. In 2016, Amino Apps launched a centralized portal that hosted every Amino community in one app, meaning users no longer had to download multiple apps.
In July of the same year, ACM, an app that allowed users to create their own communities, was launched. This resulted in the number of communities on Amino skyrocketing to over 2.5 million as of June 2018.
Features
The main feature of Amino is communities dedicated to particular topics that users can join. Users can chat with other members of a community in three ways: text, voice, or screening room, which allows users to watch videos together while voice chatting. Other features include polls, blog posts, image posts, wiki entries, stories, and quizzes. In some cases, a well-made post that has been noticed by a community's administration will be featured, making it appear on the front page along with other featured content.
In 2018, a premium membership option called Amino+ was added. Amino+ comes with additional features such as exclusive stickers, the ability to make stickers, custom chat bubbles, high resolution images, and other perks. Membership can be purchased with money or Amino coins. Amino coins can be purchased or earned through enabling ads, watching ad videos, completing activities on the Offer Wall, and playing Lucky Draw when checking in. Members can give and receive coins through props.
In 2019, Amino introduced six original short-form animated series, labelled "Amino Originals," produced by independent artists from across the internet. ATJ's "Little Red," a re-imagining of Little Red Riding Hood, premiered on November 15, 2019. "Little Red" was joined by five other shows in late December. Sophie Feher's "The Reef," a comedy featuring an aspiring marine biologist meeting a merman, premiered on December 27 alongside "Princely," an LGBT fairy tale created by Matt Bruneau-Richardson of Tiny Siren Animation. "Spaced Out," an alien abduction comedy by Michael Jae, and YouTuber Alex Clark's "Wyndvania II" premiered on December 28. Mysie Pereira's fairy tale "Turned to Stone" and Marcin Pawlowski's "Stranded" premiered on December 29, 2019.
Administration
Each community has two types of staff members: leaders and curators. Leaders are higher in rank than curators. Curators are usually the ones who feature posts or post important announcements for users to see.
Curators are able to disable a post or public chat, delete comments or chat threads, manage featured content, manage posts in topic categories, and approve Wiki entries.
Leaders have more power than curators. In addition to curator powers, leaders can submit a community to be listed, change the Amino's features, change navigation, alter the community appearance, change the Amino's privacy settings, manage the Amino's join requests, send invites, appoint or demote Curators, strike or ban members, manage flagged content, change users' custom titles, manage topics and wiki categories, and create broadcasts (notifications sent for posts).
One leader will have the status of agent. An agent is the primary leader of a community; the person who created the community is automatically agent. An agent has the ability to delete their community as long as it is not too large or too active. An agent can appoint and remove both leaders and curators. Agent status can be transferred voluntarily to another leader, curator, or community member. If an agent is inactive, Team Amino may assist in transferring agent status.
Apps
Amino Community Manager
Otherwise known as ACM, this application is what users use to create and manage their own community in Amino. This app allows moderators to customize a community's theme, icon, and categories. ACM also allows moderation to customize community descriptions, pick leaders, change language settings, create a tagline for the community, change the home page lay out, alter the side navigation menu, and more. Unlisted communities are able to change their community's title and Amino ID, but this is not an option once a community is listed. A leader can use ACM to submit a request for their community to be listed on the explore page, after which the community will be reviewed by Team Amino for approval. Communities can be deleted on ACM, but only by the agent of that community.
Guidelines
Amino has a set of guidelines that all communities must comply with. Amino does not allow harassment or hate, spam or self-promotion (including promotion of one's own Amino community), sexual/NSFW content, self-harm, real graphic/gross content (fictional content is generally acceptable), unsafe/illegal content, or content that violates copyright. Communities are allowed to have additional rules so long as they do not violate Amino's rules. In addition to Amino's rules, users are required to be at least 13 years of age in the U.S. and 16 years of age in European Union countries. While sexual imagery is not allowed in any community and text-based sexual content is not allowed in public areas, some private communities are allowed to discuss sexual themes. However, they are not exempt from Amino's rules on NSFW content.
If guidelines are broken, a leader may disable content or impose a warning, strike, or ban, depending on the severity of the infringement. A warning is a message informing the user that they have violated a guideline and may face further punishment unless they change their behaviour. A strike will put the user in read-only mode for up to 24 hours; this mode prevents the user from posting, chatting, or interacting with posts in that community. A ban removes the user from the community. Team Amino can separately give strikes or bans across the entire platform.
References
Social media
Social networking mobile apps | Amino (app) | Technology | 1,470 |
63,105,084 | https://en.wikipedia.org/wiki/ASUSat | ASUSat (Arizona State University Satellite, also known as ASU-OSCAR 37) was a U.S. amateur radio satellite that was developed and built for educational purposes by students at Arizona State University. It was equipped with two digital cameras for tracking changes to Earth's coasts and forests.
ASUSat was launched on January 27, 2000, along with JAWSAT on a Minotaur I rocket from Vandenberg Air Force Base, Lompoc, California. Its signal was received in South Africa 50 minutes after launch, and later in New Zealand and the United States. During two passes over Arizona, Arizona State University students were able to receive data from and command the satellite remotely. A problem with the power supply was reported on the third pass, 14 hours after launch: the solar cells did not provide any electrical energy, so the batteries were exhausted shortly afterwards.
Frequencies
Uplink: 145.900 MHz
Downlink: 436.700 MHz
See also
OSCAR
References
Satellites orbiting Earth
Amateur radio satellites
Nanosatellites
Spacecraft launched in 2000 | ASUSat | Astronomy | 216 |
16,388,251 | https://en.wikipedia.org/wiki/Monomial%20group | In mathematics, in the area of algebra studying the character theory of finite groups, an M-group or monomial group is a finite group whose complex irreducible characters are all monomial, that is, induced from characters of degree 1.
Only finite groups are considered in what follows. A monomial group is solvable. Every supersolvable group and every solvable A-group is a monomial group. Factor groups of monomial groups are monomial, but subgroups need not be, since every finite solvable group can be embedded in a monomial group.
The symmetric group S4 is an example of a monomial group that is neither supersolvable nor an A-group. The special linear group SL(2,3) is the smallest finite group that is not monomial: since the abelianization of this group has order three, its irreducible characters of degree two are not monomial.
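A sketch of the final claim, assuming (as the order-three abelianization indicates) that the group in question is SL(2,3):

```latex
% |SL(2,3)| = 24, with irreducible character degrees 1,1,1,2,2,2,3
% (check: 3*1^2 + 3*2^2 + 3^2 = 24).
% A monomial character of degree 2 would be induced from a linear
% character \lambda of a subgroup H of index 2:
\[
  \chi = \operatorname{Ind}_H^G \lambda, \qquad
  \deg \chi = [G:H]\cdot\deg\lambda = [G:H] = 2 .
\]
% But an index-2 subgroup is normal with quotient C_2, so G would
% surject onto C_2 and |G^{ab}| would be even -- contradicting
% G^{ab} \cong C_3. Hence SL(2,3) has no index-2 subgroup, and its
% degree-2 irreducible characters cannot be monomial.
```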
Notes
References
Finite groups
Properties of groups | Monomial group | Mathematics | 198 |
24,000,848 | https://en.wikipedia.org/wiki/Glossary%20of%20equestrian%20terms | This is a basic glossary of equestrian terms that includes both technical terminology and jargon developed over the centuries for horses and other equidae, as well as various horse-related concepts. Where noted, some terms are used only in American English (US), only in British English (UK), or are regional to a particular part of the world, such as Australia (AU).
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
See also
For additional terminology, see also:
Equine anatomy (includes definitions and illustration of the points of a horse)
Equine coat color (lists all coat colors)
Equine conformation (includes terms that describe conformation flaws)
Horse breeding (explains relevant concepts)
List of horse breeds (includes horse breeds and types)
Horse racing:
Glossary of Australian and New Zealand punting
Glossary of North American horse racing
Equipment:
Bridle (includes a list of bridle parts)
Horse tack (horse equipment)
Horse harness (includes a list of harness parts)
Horse grooming (includes list of tools)
Saddle (includes a list of saddle parts)
References
.
Glossary
Equestrian
Equestrian
Wikipedia glossaries using description lists | Glossary of equestrian terms | Biology | 255 |
11,032,409 | https://en.wikipedia.org/wiki/Ternium | Ternium S.A. is a manufacturer of flat and long steel products with production centers in Argentina, Brazil, Mexico, Guatemala, Colombia, and the United States. Ternium owns a 51.5% interest in Usiminas of Brazil. The company has an annual production capacity of 15.4 million tons. In 2023, 55% of its sales were from Mexico; 21% of sales were from Argentina; Bolivia, Chile, Paraguay and Uruguay; 13% of sales were from Brazil; and 11% of sales were from the United States, Colombia and Central America.
Approximately 21% of the company is publicly traded; the remainder is controlled by San Faustin S.A., which is in turn controlled by Rocca & Partners Stichting Administratiekantoor Aandelen San Faustin, a Dutch foundation (stichting).
The company takes its name from the Latin words Ter (three) and Eternium (eternal) in reference to the integration of the three steel mills.
History
Ternium was formed in 2005 by the consolidation of three companies: Siderar of Argentina, Sidor of Venezuela and Hylsa of Mexico.
In 2006, Ternium was listed on the New York Stock Exchange.
In 2007, Ternium acquired Grupo IMSA, expanding its operations into Guatemala and the United States.
In April 2008, after a series of worker disputes over pay which led to strike actions, Sidor was nationalized by the government of Venezuela. In May 2009, compensation of US$1.65 billion was paid for Ternium's 59.7% stake in Sidor.
In August 2010, Ternium acquired a 54% interest in Ferrasa, and in April 2015, Ternium acquired the remainder of the company, which was renamed Ternium Colombia.
In 2017, Ternium acquired CSA Siderúrgica do Atlântico for €1.4 billion and renamed it Ternium Brazil.
In July 2023, Ternium increased its ownership in Usiminas to 51.5%.
References
External links
Companies based in Luxembourg City
Companies listed on the Buenos Aires Stock Exchange
Companies listed on the New York Stock Exchange
Iron and steel mills
Luxembourgian companies established in 2005
Manufacturing companies established in 2005
Manufacturing companies of Argentina
Manufacturing companies of Mexico
Steel companies of Luxembourg
Techint | Ternium | Chemistry | 464 |
3,733,715 | https://en.wikipedia.org/wiki/Oxamniquine | Oxamniquine, sold under the brand name Vansil among others, is a medication used to treat schistosomiasis due to Schistosoma mansoni. Praziquantel, however, is often the preferred treatment. It is given by mouth and used as a single dose.
Common side effects include sleepiness, headache, nausea, diarrhea, and reddish urine. It is typically not recommended during pregnancy, if possible. Seizures may occur and therefore caution is recommended in people with epilepsy. It works by causing paralysis of the parasitic worms. It is in the anthelmintic family of medications.
Oxamniquine was first used medically in 1972. It is on the World Health Organization's List of Essential Medicines. It is not commercially available in the United States. It is more expensive than praziquantel.
Medical uses
Oxamniquine is used for treatment of schistosomiasis. According to one systematic review, praziquantel is the standard treatment for S. mansoni infections and oxamniquine also appears effective.
Side effects
It is generally well tolerated following oral doses. Dizziness with or without drowsiness occurs in at least a third of patients, beginning up to three hours after a dose, and usually lasts for up to six hours. Headache and gastrointestinal effects, such as nausea, vomiting, and diarrhoea, are also common.
Allergic-type reactions, including urticaria, pruritic skin rashes, and fever, may occur. Liver enzyme values have been raised transiently in some patients. Epileptiform convulsions have been reported, especially in patients with a history of convulsive disorders. Hallucinations and excitement have occurred rarely.
A reddish discoloration of urine, probably due to a metabolite of oxamniquine, has been reported.
Oxamniquine is not recommended during pregnancy.
Pharmacokinetics
Peak plasma concentrations are achieved one to three hours after a dose, and the plasma half-life is 1.0 to 2.5 hours.
It is extensively metabolised to inactive metabolites, principally the 6-carboxy derivative, which are excreted in the urine. About 70% of a dose of oxamniquine is excreted as the 6-carboxy metabolite within 12 hours of a dose; traces of the 2-carboxy metabolite have also been detected in the urine.
Mechanism of action
It is an anthelmintic with schistosomicidal activity against Schistosoma mansoni, but not against other Schistosoma spp. Oxamniquine is a potent single-dose agent for treatment of S. mansoni infection, and it causes worms to shift from the mesenteric veins to the liver, where the male worms are retained; the female worms return to the mesentery, but can no longer release eggs.
Oxamniquine is a semisynthetic tetrahydroquinoline and possibly acts by DNA binding, resulting in contraction and paralysis of the worms, eventual detachment from terminal venules in the mesentery, and death. Its biochemical mechanisms are hypothesized to be related to an anticholinergic effect, which increases the parasite's motility, as well as inhibition of nucleic acid synthesis. Oxamniquine acts mainly on male worms, but also induces small changes in a small proportion of females. Like praziquantel, it causes more severe damage to the dorsal tegument than to the ventral surface. The drug causes the male worms to shift from the mesenteric circulation to the liver, where the cellular host response causes their final elimination. The changes caused in the females are reversible and are due primarily to the discontinued male stimulation rather than to the direct effect of oxamniquine.
History
Oxamniquine was first described by Kaye and Woolhouse in 1972 as a metabolite of the compound UK 3883 (2-isopropylaminomethyl-6-methyl-7-nitro-1,2,3,4-tetrahydroquinoline). Initially, it was prepared by enzymatic hydroxylation via the fungus Aspergillus sclerotiorum. In 1979, Pfizer at Sandwich was presented with the Queen's Award for Technological Achievement in recognition of the outstanding contribution made to tropical medicine by MANSIL (oxamniquine).
Brand names
Vansil; (Pfizer) 250 mg capsules, syrup 250 mg/5 mL
Mansil; 250 mg Tablets
Stereochemistry
Oxamniquine contains a stereocenter and consists of two enantiomers. The drug is used as the racemate, i.e. a 1:1 mixture of the (R)- and (S)-forms.
References
External links
Antiparasitic agents
Secondary amines
Nitroarenes
Primary alcohols
Tetrahydroquinolines
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Oxamniquine | Biology | 1,063 |
1,034,588 | https://en.wikipedia.org/wiki/Octadecane | Octadecane is an alkane hydrocarbon with the chemical formula CH3(CH2)16CH3.
Properties
Octadecane is distinguished by being the alkane with the lowest carbon number that is unambiguously solid at room temperature and pressure.
References
External links
Phytochemical and Ethnobotanical Databases
Alkanes | Octadecane | Chemistry | 76 |
17,423,952 | https://en.wikipedia.org/wiki/History%20of%20the%20steam%20engine | The first recorded rudimentary steam engine was the aeolipile mentioned by Vitruvius between 30 and 15 BC and, described by Heron of Alexandria in 1st-century Roman Egypt. Several steam-powered devices were later experimented with or proposed, such as Taqi al-Din's steam jack, a steam turbine in 16th-century Ottoman Egypt, Denis Papin's working model of the steam digester in 1679 and Thomas Savery's steam pump in 17th-century England. In 1712, Thomas Newcomen's atmospheric engine became the first commercially successful engine using the principle of the piston and cylinder, which was the fundamental type of steam engine used until the early 20th century. The steam engine was used to pump water out of coal mines.
During the Industrial Revolution, steam engines started to replace water and wind power, and eventually became the dominant source of power in the late 19th century, remaining so into the early decades of the 20th century, when the more efficient steam turbine and the internal combustion engine brought about the rapid replacement of reciprocating steam engines. The steam turbine has become the most common method by which electrical power generators are driven. Investigations are being made into the practicalities of reviving the reciprocating steam engine as the basis for the new wave of advanced steam technology.
Precursors
Early uses of steam power
The first to use steam as a way to transform heat into movement was Archytas, who propelled a wooden bird along wires using steam as propellant around 400 BC. The earliest known rudimentary steam engine and reaction steam turbine, the aeolipile, is described by a mathematician and engineer named Heron of Alexandria in 1st century Roman Egypt, as recorded in his manuscript Spiritalia seu Pneumatica.
The same device was also mentioned by Vitruvius in De Architectura about 100 years earlier. Steam ejected tangentially from nozzles caused a pivoted ball to rotate. This suggests that the conversion of steam pressure into mechanical movement was known in Roman Egypt in the 1st century; however, its thermal efficiency was low. Heron also devised a machine that used air heated in an altar fire to displace a quantity of water from a closed vessel. The weight of the water was made to pull a hidden rope to operate temple doors. Some historians have conflated the two inventions to assert, incorrectly, that the aeolipile was capable of useful work.
According to William of Malmesbury, in 1125, Reims was home to a church that had an organ powered by air escaping from compression "by heated water", apparently designed and constructed by professor Gerbertus.
Among the papers of Leonardo da Vinci dating to the late 15th century is the design for a steam-powered cannon called the Architonnerre, which works by the sudden influx of hot water into a sealed, red-hot cannon.
A rudimentary impact steam turbine was described in 1551 by Taqi al-Din, a philosopher, astronomer and engineer in 16th century Ottoman Egypt, who described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. A similar device for rotating a spit was also later described by John Wilkins in 1648. These devices were then called "mills" but are now known as steam jacks. Another similar rudimentary steam turbine is shown by Giovanni Branca, an Italian engineer, in 1629 for turning a cylindrical escapement device that alternately lifted and let fall a pair of pestles working in mortars. The steam flow of these early steam turbines, however, was not concentrated and most of its energy was dissipated in all directions. This would have led to a great waste of energy and so they were never seriously considered for industrial use.
In 1605, French mathematician David Rivault de Fleurance in his treatise on artillery wrote on his discovery that water, if confined in a bombshell and heated, would explode the shells.
In 1606, the Spaniard Jerónimo de Ayanz y Beaumont demonstrated and was granted a patent for a steam-powered water pump. The pump was successfully used to drain the inundated mines of Guadalcanal, Spain.
In 1679, the French physicist Denis Papin invented the steam digester (a type of pressure cooker), which used high-pressure steam to extract fats from bones so that they could be ground into bone meal.
Development of the commercial steam engine
"The discoveries that, when brought together by Thomas Newcomen in 1712, resulted in the steam engine were:"
The concept of a vacuum (i.e. a reduction in pressure below ambient)
The concept of pressure
Techniques for creating a vacuum
A means of generating steam
The piston and cylinder
In the late 15th century, the Italian polymath Leonardo da Vinci described the Architonnerre, a steam-powered cannon. The design resembled a conventional cannon, with a long cylindrical barrel for directing the projectile attached to a chamber in which water was heated to steam. When the weapon was ready to fire, a small cap was fitted tightly over a hole on top of the chamber; the rapid build-up of steam pressure then expelled the projectile toward the target. The Architonnerre was designed to shoot a projectile weighing one Roman talent. Several of the principles da Vinci employed were later used in the development of the steam engine.
In 1643, Evangelista Torricelli conducted experiments on suction-lift water pumps to test their limits, finding that the limit was about 32 feet. (One standard atmosphere can support a column of water about 33.9 feet, or 10.3 meters, high; the vapor pressure of the water lowers the theoretical lift height.) He devised an experiment using a tube filled with mercury and inverted in a bowl of mercury (a barometer) and observed an empty space above the column of mercury, which he theorized contained nothing, that is, a vacuum.
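The roughly 32-foot limit follows directly from hydrostatics; as a quick worked check (standard values assumed):

```latex
% Height of water column supported by one standard atmosphere:
\[
  h = \frac{p_{\mathrm{atm}}}{\rho g}
    = \frac{101{,}325\ \mathrm{Pa}}{(1000\ \mathrm{kg/m^3})(9.81\ \mathrm{m/s^2})}
    \approx 10.3\ \mathrm{m} \approx 33.9\ \mathrm{ft},
\]
% reduced slightly in practice by the vapor pressure of the water
% and by imperfections in the pump.
```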
Influenced by Torricelli, Otto von Guericke invented a vacuum pump by modifying an air pump used for pressurizing an air gun. Guericke put on a demonstration in 1654 in Magdeburg, Germany, where he was mayor. Two copper hemispheres were fitted together and air was pumped out. Weights strapped to the hemispheres could not pull them apart until the air valve was opened. The experiment was repeated in 1656 using two teams of 8 horses each, which could not separate the Magdeburg hemispheres.
Gaspar Schott was the first to describe the hemisphere experiment in his Mechanica Hydraulico-Pneumatica (1657).
After reading Schott's book, Robert Boyle built an improved vacuum pump and conducted related experiments.
Denis Papin became interested in using a vacuum to generate motive power while working with Christiaan Huygens and Gottfried Leibniz in Paris in the early 1670s. Papin worked for Robert Boyle from 1676 to 1679, publishing an account of his work in Continuation of New Experiments (1680), and gave a presentation to the Royal Society in 1689. From 1690 on, Papin began experimenting with a piston to produce power with steam, building model steam engines. He experimented with atmospheric and pressure steam engines, publishing his results in 1707.
In 1663, Edward Somerset, 2nd Marquess of Worcester published a book of 100 inventions which described a method for raising water between floors employing a similar principle to that of a coffee percolator. His system was the first to separate the boiler (a heated cannon barrel) from the pumping action. Water was admitted into a reinforced barrel from a cistern, and then a valve was opened to admit steam from a separate boiler. The pressure built over the top of the water, driving it up a pipe. He installed his steam-powered device on the wall of the Great Tower at Raglan Castle to supply water through the tower. The grooves in the wall where the engine was installed were still to be seen in the 19th century. However, no one was prepared to risk money for such a revolutionary concept, and without backers the machine remained undeveloped.
Samuel Morland, a mathematician and inventor who worked on pumps, left notes at the Vauxhall Ordinance Office on a steam pump design that Thomas Savery read. In 1698 Savery built a steam pump called "The Miner's Friend." It employed both vacuum and pressure. These were used for low horsepower service for a number of years.
Thomas Newcomen was a merchant who dealt in cast iron goods. Newcomen's engine was based on the piston and cylinder design proposed by Papin. In Newcomen's engine, steam was condensed by water sprayed inside the cylinder, causing atmospheric pressure to move the piston. Newcomen's first engine was installed for pumping at a mine in 1712 at Dudley Castle in Staffordshire.
Cylinders
Denis Papin (22 August 1647 – ) was a French physicist, mathematician and inventor, best known for his pioneering invention of the steam digester, the forerunner of the pressure cooker. In the mid-1670s Papin collaborated with the Dutch physicist Christiaan Huygens on an engine which drove out the air from a cylinder by exploding gunpowder inside it. Realising the incompleteness of the vacuum produced by this means and on moving to England in 1680, Papin devised a version of the same cylinder that obtained a more complete vacuum from boiling water and then allowing the steam to condense; in this way he was able to raise weights by attaching the end of the piston to a rope passing over a pulley. As a demonstration model, the system worked, but in order to repeat the process, the whole apparatus had to be dismantled and reassembled. Papin quickly saw that to make an automatic cycle the steam would have to be generated separately in a boiler; however, he did not take the project further. Papin also designed a paddle boat driven by a jet playing on a mill-wheel in a combination of Taqi al Din and Savery's conceptions and he is also credited with a number of significant devices such as the safety valve. Papin's years of research into the problems of harnessing steam was to play a key part in the development of the first successful industrial engines that soon followed his death.
Savery steam pump
The first steam engine to be applied industrially was the "fire-engine" or "Miner's Friend", designed by Thomas Savery in 1698. This was a pistonless steam pump, similar to the one developed by Worcester. Savery made two key contributions that greatly improved the practicality of the design. First, in order to allow the water supply to be placed below the engine, he used condensed steam to produce a partial vacuum in the pumping reservoir (the barrel in Worcester's example), and using that to pull the water upward. Secondly, in order to rapidly cool the steam to produce the vacuum, he ran cold water over the reservoir.
Operation required several valves; at the start of a cycle, when the reservoir was empty, a valve would be opened to admit steam. This valve would be closed to seal the reservoir, and the cooling water valve would be opened to condense the steam and create a partial vacuum. A supply valve would then be opened, pulling water upward into the reservoir; the typical engine could pull water up to 20 feet. This was then closed, and the steam valve reopened, building pressure over the water and pumping it upward, as in the Worcester design. This cycle essentially doubled the distance that water could be pumped for any given pressure of steam, and production examples raised water about 40 feet.
Savery's engine solved a problem that had only recently become a serious one; raising water out of the mines in southern England as they reached greater depths. Savery's engine was somewhat less efficient than Newcomen's, but this was compensated for by the fact that the separate pump used by the Newcomen engine was inefficient, giving the two engines roughly the same efficiency of 6 million foot pounds per bushel of coal (less than 1%). Nor was the Savery engine very safe because part of its cycle required steam under pressure supplied by a boiler, and given the technology of the period the pressure vessel could not be made strong enough and so was prone to explosion. The explosion of one of his pumps at Broad Waters (near Wednesbury), about 1705, probably marks the end of attempts to exploit his invention.
The Savery engine was less expensive than Newcomen's and was produced in smaller sizes. Some builders were manufacturing improved versions of the Savery engine until late in the 18th century. Bento de Moura Portugal, FRS, introduced an ingenious improvement of Savery's construction "to render it capable of working itself", as described by John Smeaton in the Philosophical Transactions published in 1751.
Atmospheric condensing engines
Newcomen "atmospheric" engine
It was Thomas Newcomen with his "atmospheric-engine" of 1712 who can be said to have brought together most of the essential elements established by Papin in order to develop the first practical steam engine for which there could be a commercial demand. This took the shape of a reciprocating beam engine installed at surface level, driving a succession of pumps at one end of the beam. The piston, attached by chains to the other end of the beam, worked on the atmospheric, or vacuum, principle.
Newcomen's design used some elements of earlier concepts. Like the Savery design, Newcomen's engine used steam, cooled with water, to create a vacuum. Unlike Savery's pump, however, Newcomen used the vacuum to pull on a piston instead of pulling on water directly. The upper end of the cylinder was open to the atmospheric pressure, and when the vacuum formed, the atmospheric pressure above the piston pushed it down into the cylinder. The piston was lubricated and sealed by a trickle of water from the same cistern that supplied the cooling water. Further, to improve the cooling effect, he sprayed water directly into the cylinder.
The piston was attached by a chain to a large pivoted beam. When the piston pulled the beam, the other side of the beam was pulled upward. This end was attached to a rod that pulled on a series of conventional pump handles in the mine. At the end of this power stroke, the steam valve was reopened, and the weight of the pump rods pulled the beam down, lifting the piston and drawing steam into the cylinder again.
Using the piston and beam allowed the Newcomen engine to power pumps at different levels throughout the mine, as well as eliminating the need for any high-pressure steam. The entire system was isolated to a single building on the surface. Although inefficient and extremely heavy on coal (compared to later engines), these engines raised far greater volumes of water and from greater depths than had previously been possible. Over 100 Newcomen engines were installed around England by 1735, and it is estimated that as many as 2,000 were in operation by 1800 (including Watt versions).
John Smeaton made numerous improvements to the Newcomen engine, notably the seals, and by improving these was able to almost triple their efficiency. He also preferred to use wheels instead of beams for transferring power from the cylinder, which made his engines more compact. Smeaton was the first to develop a rigorous theory of steam engine design of operation. He worked backward from the intended role to calculate the amount of power that would be needed for the task, the size and speed of a cylinder that would provide it, the size of boiler needed to feed it, and the amount of fuel it would consume. These were developed empirically after studying dozens of Newcomen engines in Cornwall and Newcastle, and building an experimental engine of his own at his home in Austhorpe in 1770. By the time the Watt engine was introduced only a few years later, Smeaton had built dozens of ever-larger engines into the 100 hp range.
Watt's separate condenser
While working at the University of Glasgow as an instrument maker and repairman in 1759, James Watt was introduced to the power of steam by Professor John Robison. Fascinated, Watt took to reading everything he could on the subject, and independently developed the concept of latent heat, only recently published by Joseph Black at the same university. When Watt learned that the university owned a small working model of a Newcomen engine, he pressed to have it returned from London where it was being unsuccessfully repaired. Watt repaired the machine, but found it was barely functional even when fully repaired.
After working with the design, Watt concluded that 80% of the steam used by the engine was wasted. Instead of providing motive force, it was being used to heat the cylinder. In the Newcomen design, every power stroke was started with a spray of cold water, which not only condensed the steam, but also cooled the walls of the cylinder. This heat had to be replaced before the cylinder would accept steam again. In the Newcomen engine the heat was supplied only by the steam, so when the steam valve was opened again a high proportion condensed on the cold walls as soon as it was admitted to the cylinder. It took a considerable amount of time and steam before the cylinder warmed back up and the steam started to fill it up.
Watt solved the problem of the water spray by moving the condensation to a separate cylinder, placed beside the power cylinder. Once the induction stroke was complete, a valve was opened between the two, and any steam that entered the cylinder would condense inside this cold cylinder. This would create a vacuum that would pull more of the steam into the cylinder, and so on until the steam was mostly condensed. The valve was then closed, and operation of the main cylinder continued as it would on a conventional Newcomen engine. As the power cylinder remained at operational temperature throughout, the system was ready for another stroke as soon as the piston was pulled back to the top. The temperature was maintained by a jacket around the cylinder into which steam was admitted. Watt produced a working model in 1765.
Convinced that this was a great advance, Watt entered into partnerships to provide venture capital while he worked on the design. Not content with this single improvement, Watt worked tirelessly on a series of other improvements to practically every part of the engine. Watt further improved the system by adding a small vacuum pump to pull the steam out of the cylinder into the condenser, further improving cycle times. A more radical change from the Newcomen design was closing off the top of the cylinder and introducing low-pressure steam above the piston. Now the power was not due to the difference of atmospheric pressure and the vacuum, but the pressure of the steam and the vacuum, a somewhat higher value. On the upward return stroke, the steam on top was transferred through a pipe to the underside of the piston ready to be condensed for the downward stroke. Sealing of the piston on a Newcomen engine had been achieved by maintaining a small quantity of water on its upper side. This was no longer possible in Watt's engine due to the presence of the steam. Watt spent considerable effort to find a seal that worked, eventually obtained by using a mixture of tallow and oil. The piston rod also passed through a gland on the top cylinder cover sealed in a similar way.
The piston sealing problem was due to having no way to produce a sufficiently round cylinder. Watt tried having cylinders bored from cast iron, but they were too out of round. Watt was forced to use a hammered iron cylinder. The following quotation is from Roe (1916):
"When [John] Smeaton first saw the engine he reported to the Society of Engineers that 'neither the tools nor the workmen existed who could manufacture such a complex machine with sufficient precision' "
Watt finally considered the design good enough to release in 1774, and the Watt engine was released to the market. As portions of the design could be easily fitted to existing Newcomen engines, there was no need to build an entirely new engine at the mines. Instead, Watt and his business partner Matthew Boulton licensed the improvements to engine operators, charging them a portion of the money they would save in reduced fuel costs. The design was wildly successful, and the Boulton and Watt company was formed to license the design and help new manufacturers build the engines. The two would later open the Soho Foundry to produce engines of their own.
In 1774, John Wilkinson invented a boring machine with the shaft holding the boring tool supported on both ends, extending through the cylinder, unlike the then used cantilevered borers. With this machine he was able to successfully bore the cylinder for Boulton and Watt's first commercial engine in 1776.
Watt never ceased improving his designs. He further improved the operating cycle speed, and introduced governors, automatic valves, double-acting pistons, a variety of rotary power takeoffs, and many other refinements. Watt's technology enabled the widespread commercial use of stationary steam engines.
Humphrey Gainsborough produced a model condensing steam engine in the 1760s, which he showed to Richard Lovell Edgeworth, a member of the Lunar Society. Gainsborough believed that Watt had used his ideas for the invention; however, James Watt was not a member of the Lunar Society at this period and his many accounts explaining the succession of thought processes leading to the final design would tend to belie this story.
Power was still limited by the low pressure, the displacement of the cylinder, combustion and evaporation rates and condenser capacity. Maximum theoretical efficiency was limited by the relatively low temperature differential on either side of the piston; this meant that for a Watt engine to provide a usable amount of power, the first production engines had to be very large, and were thus expensive to build and install.
Watt double-acting and rotative engines
Watt developed a double-acting engine in which steam drove the piston in both directions, thereby increasing the engine speed and efficiency. The double-acting principle also significantly increased the output of a given physical sized engine.
Boulton & Watt developed the reciprocating engine into the rotative type. Unlike the Newcomen engine, the Watt engine could operate smoothly enough to be connected to a drive shaft – via sun and planet gears – to provide rotary power along with double-acting condensing cylinders. The earliest example was built as a demonstrator and was installed in Boulton's factory to work machines for lapping (polishing) buttons or similar. For this reason it was always known as the Lap Engine. In early steam engines the piston is usually connected by a rod to a balanced beam, rather than directly to a flywheel, and these engines are therefore known as beam engines.
Early steam engines did not provide constant enough speed for critical operations such as cotton spinning. To control speed the engine was used to pump water for a water wheel, which powered the machinery.
High-pressure engines
As the 18th century advanced, the call was for higher pressures; this was strongly resisted by Watt who used the monopoly his patent gave him to prevent others from building high-pressure engines and using them in vehicles. He mistrusted the boiler technology of the day, the way they were constructed and the strength of the materials used.
The important advantages of high-pressure engines were:
They could be made much smaller than previously for a given power output. There was thus the potential for steam engines to be developed that were small and powerful enough to propel themselves and other objects. As a result, steam power for transportation now became a practicality in the form of ships and land vehicles, which revolutionized cargo businesses, travel, military strategy, and essentially every aspect of society.
Because of their smaller size, they were much less expensive.
They did not require the significant quantities of condenser cooling water needed by atmospheric engines.
They could be designed to run at higher speeds, making them more suitable for powering machinery.
The disadvantages were:
In the low-pressure range they were less efficient than condensing engines, especially if steam was not used expansively.
They were more susceptible to boiler explosions.
The main difference between how high-pressure and low-pressure steam engines work is the source of the force that moves the piston. In the engines of Newcomen and Watt, it is the condensation of the steam that creates most of the pressure difference, causing atmospheric pressure (Newcomen) and low-pressure steam, seldom more than 7 psi boiler pressure, plus condenser vacuum (Watt), to move the piston. In a high-pressure engine, most of the pressure difference is provided by the high-pressure steam from the boiler; the low-pressure side of the piston may be at atmospheric pressure or connected to the condenser pressure. Newcomen's indicator diagram, almost all below the atmospheric line, would see a revival nearly 200 years later with the low pressure cylinder of triple expansion engines contributing about 20% of the engine power, again almost completely below the atmospheric line.
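In either case, the force available at the piston is simply the pressure difference acting over the piston area; a worked example with purely illustrative numbers:

```latex
% Piston force = pressure difference x piston area.
% Example: 50 psi steam on one side, atmospheric exhaust on the other,
% 10-inch bore (all numbers illustrative):
\[
  F = (p_{\mathrm{high}} - p_{\mathrm{low}})\,A
    = 50\ \mathrm{psi} \times \pi\,(5\ \mathrm{in})^2
    \approx 50 \times 78.5\ \mathrm{lbf}
    \approx 3{,}900\ \mathrm{lbf} \;(\approx 17.5\ \mathrm{kN}).
\]
```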
The first known advocate of "strong steam" was Jacob Leupold, whose scheme for a high-pressure engine appeared in his encyclopaedic works of the 1720s. Various projects for steam-propelled boats and vehicles also appeared throughout the century, one of the most promising being the construction of Nicolas-Joseph Cugnot, who demonstrated his "fardier" (steam wagon) in 1769. Whilst the working pressure used for this vehicle is unknown, the small size of the boiler gave an insufficient steam production rate to allow the fardier to advance more than a few hundred metres at a time before having to stop to raise steam. Other projects and models were proposed, but as with William Murdoch's model of 1784, many were blocked by Boulton and Watt.
This did not apply in the US, and in 1788 a steamboat built by John Fitch operated in regular commercial service along the Delaware River between Philadelphia, Pennsylvania, and Burlington, New Jersey, carrying as many as 30 passengers. This boat could typically make 7 to 8 miles per hour, and covered a substantial total distance during its short length of service. The Fitch steamboat was not a commercial success, as this route was adequately covered by relatively good wagon roads. In 1802, William Symington built a practical steamboat, and in 1807, Robert Fulton used a Watt steam engine to power the first commercially successful steamboat.
Oliver Evans in his turn was in favour of "strong steam" which he applied to boat engines and to stationary uses. He was a pioneer of cylindrical boilers; however, Evans' boilers did suffer several serious boiler explosions, which tended to lend weight to Watt's qualms. He founded the Pittsburgh Steam Engine Company in 1811 in Pittsburgh, Pennsylvania.
The company introduced high-pressure steam engines to the riverboat trade in the Mississippi watershed.
The first high-pressure steam engine was invented in 1800 by Richard Trevithick.
The importance of raising steam under pressure (from a thermodynamic standpoint) is that it attains a higher temperature. Thus, any engine using high-pressure steam operates at a higher temperature and pressure differential than is possible with a low-pressure vacuum engine. The high-pressure engine thus became the basis for most further development of reciprocating steam technology. Even so, around the year 1800, "high pressure" amounted to what today would be considered very low pressure, i.e. 40-50 psi (276-345 kPa), the point being that the high-pressure engine in question was non-condensing, driven solely by the expansive power of the steam, and once that steam had performed work it was usually exhausted at higher-than-atmospheric pressure. The blast of the exhausting steam into the chimney could be exploited to create induced draught through the fire grate and thus increase the rate of burning, hence creating more heat in a smaller furnace, at the expense of creating back pressure on the exhaust side of the piston.
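As a rough quantitative check (saturation values recalled from standard steam tables; approximate):

```latex
% Saturation temperature of steam rises with pressure:
\[
  T_{\mathrm{sat}}(0.101\ \mathrm{MPa}) = 100\ ^{\circ}\mathrm{C},
  \qquad
  T_{\mathrm{sat}}(0.4\ \mathrm{MPa}) \approx 144\ ^{\circ}\mathrm{C},
\]
% so an engine working at 40-50 psi operated several tens of degrees
% hotter than an atmospheric engine, widening the temperature
% differential available for doing work.
```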
On 21 February 1804, at the Penydarren ironworks at Merthyr Tydfil in South Wales, the first self-propelled railway steam engine or steam locomotive, built by Richard Trevithick, was demonstrated.
Cornish engine and compounding
Around 1811, Richard Trevithick was required to update a Watt pumping engine in order to adapt it to one of his new large cylindrical Cornish boilers. When Trevithick left for South America in 1816, his improvements were continued by William Sims. In parallel, Arthur Woolf developed a compound engine with two cylinders, so that steam expanded in a high-pressure cylinder before being released into a low-pressure one. Efficiency was further improved by Samuel Grose, who insulated the boiler, engine, and pipes.
Steam pressure above the piston was progressively increased and now provided much of the power for the downward stroke; at the same time condensing was improved. This considerably raised efficiency, and further pumping engines on the Cornish system (often known as Cornish engines) continued to be built new throughout the 19th century. Older Watt engines were updated to conform.
The take-up of these Cornish improvements was slow in textile manufacturing areas where coal was cheap, due to the higher capital cost of the engines and the greater wear that they suffered. The change only began in the 1830s, usually by compounding through adding another (high-pressure) cylinder.
Another limitation of early steam engines was speed variability, which made them unsuitable for many textile applications, especially spinning. In order to obtain steady speeds, early steam powered textile mills used the steam engine to pump water to a water wheel, which drove the machinery.
Many of these engines were supplied worldwide and gave reliable and efficient service over a great many years with greatly reduced coal consumption. Some of them were very large and the type continued to be built right down to the 1890s.
Corliss engine
The Corliss steam engine (patented 1849) was called the greatest improvement since James Watt. The Corliss engine had greatly improved speed control and better efficiency, making it suitable to all sorts of industrial applications, including spinning.
Corliss used separate ports for steam supply and exhaust, which prevented the exhaust from cooling the passage used by the hot steam. Corliss also used partially rotating valves that provided quick action, helping to reduce pressure losses. The valves themselves were also a source of reduced friction, especially compared to the slide valve, which typically used 10% of an engine's power.
Corliss used automatic variable cut off. The valve gear controlled engine speed by using the governor to vary the timing of the cut off. This was partly responsible for the efficiency improvement in addition to the better speed control.
Porter-Allen high speed steam engine
The Porter-Allen engine, introduced in 1862, used an advanced valve gear mechanism developed for Porter by Allen, a mechanic of exceptional ability, and was at first generally known as the Allen engine. The high speed engine was a precision machine that was well balanced, achievements made possible by advancements in machine tools and manufacturing technology.
The high speed engine ran at piston speeds from three to five times the speed of ordinary engines. It also had low speed variability. The high speed engine was widely used in sawmills to power circular saws. Later it was used for electrical generation.
The engine had several advantages. It could, in some cases, be directly coupled. If gears or belts and drums were used, they could be much smaller sizes. The engine itself was also small for the amount of power it developed.
Porter greatly improved the fly-ball governor by reducing the rotating weight and adding a weight around the shaft. This significantly improved speed control. Porter's governor became the leading type by 1880.
The efficiency of the Porter-Allen engine was good, but not equal to the Corliss engine.
Uniflow (or unaflow) engine
The uniflow engine was the most efficient type of high-pressure reciprocating engine. Although first patented in 1885 by Leonard Jennett Todd, it came into commercial use only in the early 20th century. Steam is admitted through valves (typically poppet valves) at each end of the cylinder and exhausted through ports uncovered by the piston near mid-stroke, so that steam flows through the cylinder in one direction only; this maintains a steady temperature gradient along the cylinder and reduces condensation losses. It was used in ships, steam locomotives and steam wagons but was displaced by steam turbines and later marine diesel engines.
References
Bibliography
see Thomas Tredgold
Further reading
Stuart, Robert, A Descriptive History of the Steam Engine, London: J. Knight and H. Lacey, 1824.
Steam power
Steam engines
Steam engine | History of the steam engine | Physics,Technology | 6,622 |
35,043,787 | https://en.wikipedia.org/wiki/NGC%206881 | NGC 6881 is a planetary nebula, located in the constellation of Cygnus. It is formed of an inner nebula, estimated to be about one fifth of a light-year across, and a symmetrical structure that spreads out about one light-year from one tip to the other. The symmetry could be due to a binary star at the nebula's centre.
References
External links
Cygnus (constellation)
Planetary nebulae
6881 | NGC 6881 | Astronomy | 91 |
658,601 | https://en.wikipedia.org/wiki/DOSKEY | DOSKEY is a command for DOS, IBM OS/2, Microsoft Windows, and ReactOS that adds command history, macro functionality, and improved editing features to the command-line interpreters COMMAND.COM and cmd.exe.
History
The command was included as a terminate-and-stay-resident program with MS-DOS and PC DOS versions 5 and later, then Windows 9x, and finally Windows 2000 and later.
In early 1989, functionality similar to DOSKEY was introduced with DR-DOS 3.40 with its HISTORY CONFIG.SYS directive. This enabled a user-configurable console input history buffer and recall as well as pattern search functionality on the console driver level, that is, fully integrated into the operating system and transparent to running applications. In the summer of 1991, DOSKEY was introduced in MS-DOS/PC DOS 5.0 in order to provide some of the same functionality. DOSKEY also added a macro expansion facility, though special support was required before applications such as command line processors could take advantage of it. Starting with Novell DOS 7 in 1993, the macro capabilities were provided by an external DOSKEY command as well. In order to also emulate the DOSKEY history buffer functionality under DR-DOS, the DR-DOS DOSKEY worked as a front end to the resident history buffer functionality, which remained part of the kernel.
DOSKEY has also been included in IBM OS/2 Version 2.0.
In current Windows NT-based operating systems, the DOSKEY functionality is built into CMD.EXE, although the DOSKEY command is still used to change its operation.
The DOSKEY command is not available in FreeDOS, which has such features built into the command interpreter.
Usage
Command switches
DOSKEY allows the use of several command switches:
DOSKEY [/switch ...] [macroname=[text]]
/REINSTALL
Installs a new copy of DOSKEY.
/BUFSIZE=[size]
Sets size of command history buffer to size.
/MACROS
Displays all DOSKEY macros.
/MACROS:ALL
Displays all DOSKEY macros for all executables which have DOSKEY macros.
/MACROS:[executable name]
Displays all DOSKEY macros for the given executable.
/HISTORY
Displays all commands stored in memory.
/INSERT
Specifies that new text typed is inserted in old text.
/OVERSTRIKE
Specifies that new text overwrites old text.
/EXENAME=exename
Specifies the executable.
/MACROFILE=filename
Specifies a file of macros to install.
Several further switches exist but are undocumented (since MS-DOS 7).
[macroname]
Specifies a name for a macro created.
[text]
Specifies commands to record.
Keyboard shortcuts
During a DOSKEY session, the following keyboard shortcuts can be used:
UP and DOWN arrow keys
Recall commands
ESC
Clears command line
CTRL+HOME
Clears command line from the cursor to the beginning of the line.
CTRL+END
Clears command line from the cursor to the end of the line.
F7
Displays command history
ALT+F7
Clears command history
F8
Searches command history
F9
Selects a command by number
ALT+F10
Clears macro definitions
Command macros
DOSKEY implements support for command macros, a simple text-substitution facility which is used somewhat like command line aliases in other environments.
$T
Command separator. Allows multiple commands in a macro.
$1–$9
Batch parameters. Equivalent to %1–%9 in batch programs.
$*
Symbol replaced by everything following the macro name on command line.
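For example, in a CMD.EXE or COMMAND.COM session these symbols can be combined into working macros (the macro names ls and mcd below are arbitrary illustrations, not built-in names):

```
C:\> DOSKEY ls=dir /b $*
C:\> DOSKEY mcd=md $1 $T cd $1
C:\> ls *.txt
C:\> mcd projects
```

The first macro passes everything typed after ls through to dir via $*, while the second uses $T to chain two commands, creating a directory and then changing into it.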
Alternatives
The absence of a command history in COMMAND.COM was a serious inconvenience ever since the earliest versions of MS-DOS. Numerous third-party programs have been written to address the issue; many were available long before Microsoft supplied DOSKEY. Some of them, including JP Software's 4DOS and NDOS, also provide additional editing capabilities lacking in DOSKEY, such as filename completion. Some of the better-known DOSKEY alternatives are Jack Gersbach's DOSEDIT, Chris Dunford's CED, Sverre Huseby's DOSED, Ashok Nadkarni's CMDEDIT, Steven Calwas's ANARKEY, Eric Tauck's TODDY, and enhanced DOSKEY written by Paul Houle.
Paul Houle's Enhanced DOSKEY is designed to be an enhanced drop-in replacement for the DOSKEY.COM that ships with MS-DOS and Windows 9x/Windows Me. It also has a smaller disk and memory-resident footprint. The primary added feature is command and file "auto-completion" via the Tab key. Version 2.5, released in 2014, also adds full support for long filenames (LFN).
See also
List of DOS commands
References
Further reading
(NB. NWDOSTIP.TXT is part of MPDOSTIP.ZIP, maintained up to 2001 and distributed on many sites at the time. The provided link points to a HTML-converted older version of the NWDOSTIP.TXT file.)
External links
doskey | Microsoft Docs
Paul Houle's enhanced DOSKEY
External DOS commands
OS/2 commands
ReactOS commands
Utilities for Windows
Windows administration | DOSKEY | Technology | 1,054 |
4,597,604 | https://en.wikipedia.org/wiki/NS5-brane | In string theory, the NS5-brane is a fundamental extended object in six-dimensional spacetime that carries magnetic charge under the Neveu–Schwarz B-field. The tension of the NS5-brane is inversely proportional to the Newton gravitational constant, making it a solitonic object of the theory. Coincident NS5-branes cannot be described by weakly coupled string theory, making them non-perturbative. When NS5-branes are separated on a circle transverse to their worldvolume, their description is given by a particular conformal field theory due to Giveon and Kutasov. When these fivebranes rotate on a circular orbit, their description is given by more complicated conformal field theories written by Martinec and Massai. Separated fivebranes that preserve a certain fraction of supersymmetry and wiggle in space are well-described by supergravity solutions of Lunin and Mathur.
References
String theory | NS5-brane | Astronomy | 200 |
25,499,621 | https://en.wikipedia.org/wiki/HadCRUT | HadCRUT is the dataset of worldwide monthly instrumental temperature records formed by combining the sea surface temperature records compiled by the Hadley Centre of the UK Met Office and the land surface air temperature records compiled by the Climatic Research Unit (CRU) of the University of East Anglia.
"HadCRUT" stands for Hadley Centre/Climatic Research Unit Temperature.
The data is provided on a grid of boxes covering the globe, with values provided for only those boxes containing temperature observations in a particular month and year. Interpolation is not applied to infill missing values. The first version of HadCRUT initially spanned the period 1881–1993, and this was later extended to begin in 1850 and to be regularly updated to the current year/month in near real-time.
HadCRUT4
HadCRUT4 was introduced in March 2012. It "includes the addition of newly digitised measurement data, both over land and sea, new sea-surface temperature bias adjustments and a more comprehensive error model for describing uncertainties in sea-surface temperature measurements". Overall, the net effect of HadCRUT4 versus HadCRUT3 is an increase in the average temperature anomaly, especially around 1950 and 1855, and less significantly around 1925 and 2005.
HadCRUT3
HadCRUT3 is the third major revision of this dataset, combining the CRUTEM3 land surface air temperature dataset with the HadSST2 sea surface temperature dataset. First published in 2006, this initially spanned the period 1850–2005, but has since been regularly updated to 2012. Its spatial grid boxes are 5° of latitude and longitude. A more complete statistical model of uncertainty was introduced with this revision, including estimates of measurements errors, biases due to changing exposure and urbanisation, and uncertainty due to incomplete coverage of the globe by observations of temperature.
HadCRUT2
HadCRUT2 was the second major version of this dataset, combining the CRUTEM2 land surface air temperature dataset with the HadSST sea surface temperature dataset. First published in 2003, this initially spanned the period 1856–2001, but was subsequently updated to end in 2005. Its spatial grid boxes are 5° of latitude and longitude. An estimate of uncertainty due to incomplete coverage of the globe by observations of temperature was included, as was a version with the variance adjusted to remove artificial changes arising from changing numbers of observations.
HadCRUT1
HadCRUT1 was the first version of this dataset. Although not initially referred to as HadCRUT1, this name was introduced later to distinguish it from subsequent versions. First published in 1994, this initially spanned the period 1881–1993, but was subsequently extended to span 1856–2002. HadCRUT1 at first combined two sea surface temperature datasets (MOHSST for 1881–1981 and GISST for 1981–1993) with an earlier land surface air temperature dataset from the Climatic Research Unit. The land surface air temperature dataset was replaced in 1995 with the newly published CRUTEM1 dataset. Its spatial grid boxes are 5° of latitude and longitude.
History of CRU Climate Data
The Climatic Research Unit had as an early priority the objective of filling gaps in available information "to establish the past record of climate over as much of the world as possible, as far back in time as was feasible, and in enough detail to recognise and establish the basic processes, interactions, and evolutions in the Earth's fluid envelopes and those involving the Earth's crust and its vegetation cover". Through the 1970s the unit worked on interpreting documentary historical records. From 1978 onward CRU began production of its gridded data set of land air temperature anomalies based on instrumental temperature records held by National Meteorological Organisations around the world. In 1986 sea temperatures were added to form a synthesis of data which was the first global temperature record. From 1989 this work proceeded in conjunction with the Met Office Hadley Centre for Climate Prediction and Research, and it demonstrated unequivocally that the globe has warmed by almost 0.8 °C over the last 157 years.
Access to weather station temperature records was often under formal or informal confidentiality agreements that restricted use of this raw data to academic purposes. From the 1990s onwards the unit received requests for this weather station temperature data from people who hoped to independently verify the impact of various adjustments, and after the UK Freedom of Information Act (FOIA) came into effect in 2005, there were Freedom of Information requests to the Climatic Research Unit for this raw data. On 12 August 2009 CRU announced that they were seeking permission to waive these restrictions, and on 24 November 2009 the university stated that over 95% of the CRU weather station temperature data set had already been available for several years, with the remainder to be released when permissions were obtained. In a decision announced on 27 July 2011, the Information Commissioner's Office (ICO) required release of raw data even though permissions had not been obtained or in one instance had been refused, and on 27 July 2011 CRU announced the release of the raw temperature data not already in the public domain, with the exception of Poland, which was outside the area covered by the FOIA request.
See also
Climatic Research Unit
References
External links
Met Office Hadley Centre observations datasets
CRU datasets including HadCRUT3, HadCRUT4, CRUTEM3 and CRUTEM4 at Climatic Research Unit
Met Office
Climate and weather statistics
Historical climatology
Meteorological data and networks | HadCRUT | Physics | 1,124 |
25,742,359 | https://en.wikipedia.org/wiki/Topcon%20RE%20Super | The Topcon RE Super, or Beseler Topcon Super D in USA, was launched by Tokyo Kogaku KK in 1963 and manufactured until 1971, at which point it was upgraded to the Super D and again to the Super DM the following year. General sale continued for several years. These later models have a shutter release lock lever on the shutter release collar. It is a professionally oriented 35mm SLR camera that had a comprehensive range of accessories available. It has a removable pentaprism viewfinder and focusing screen. It features the Exakta bayonet lens mount for interchangeable lenses. A special accessory shoe is situated at the base of the rewind knob with a standard PC sync contact next to it. The release button is placed at the right-hand camera front, but there is no mirror-up facility; this was included on the upgraded versions. The standard lens is the RE. Auto-Topcor 1:1.4 f=5.8cm or the slightly slower 1:1.8 version. A battery-operated winder could be attached to the camera base.
Some common features of 35mm SLR photography were first seen on the Topcon RE Super. Among these is the through-the-lens exposure metering. This enabled improved exposure accuracy, especially in close-up macro photography using bellows or extension rings, and in telephotography with long lenses. In addition to this feature, the metering is at full aperture. For this purpose the RE-lenses have an aperture simulator that relays the preset aperture to the exposure meter at full aperture, retaining a bright viewfinder image while determining the correct exposure, avoiding the stop-down method. The meter also works independently of the pentaprism finder, which allows for different viewfinder configurations. The meter cell is actually incorporated in the camera's reflex finder mirror. This was accomplished by milling narrow slits in the mirror surface letting a fraction of the light through to the CdS cell placed just behind it.
Identifying the different models (elsewhere/USA)
Topcon RE Super / Beseler Topcon Super D: type 46A, serial no. prefix 46. Prod. period: 1963 to 1971
Topcon Super D / Beseler Topcon Super D: type 71A, serial no. prefix 71. Prod. period: 1972 only
Topcon Super DM / Topcon Super DM: type 72A, serial no. prefix 72. Prod. period: 1973 only
All models were available in chrome or black enamel finish.
Tokyo Kogaku KK
Tokyo Kogaku KK launched their first 35mm SLR camera in 1957, about two years before the Nikon F and the Canonflex. This was the Topcon R, with the bayonet lens mount from the Exakta Varex camera from Ihagee in Dresden, successor to the Kine Exakta of 1936. It was also inspired by the Zeiss Ikon Contax S as well as the Japanese Miranda T, most obviously the body shape by the former, and the detachable finder prism by the latter. However, it was not until 1963 that the Topcon name became famous, with the introduction of the Topcon RE Super, an event that took the entire camera industry by surprise: this camera featured through-the-lens (TTL) exposure metering at full lens aperture. The RE Super was fully prepared for professional work, supported by a choice of lenses and accessories to complement it. The United States importer was the Charles Beseler Company and it was sold as the Beseler Topcon Super D.
The interchangeable lenses for the RE Super
The following lenses have their own focusing thread:
RE. Auto-Topcor 1:4.0 f= 20mm 62 mm filter, introduced 1969
RE. Auto-Topcor 1:3.5 f= 25mm 62 mm filter, introduced 1965
RE. Auto-Topcor 1:2.8 f= 28mm 49 mm filter, introduced 1971
RE. Auto-Topcor 1:2.8 f= 35mm 49 mm filter, introduced 1963
RE. GN Auto-Topcor 1:1.8 f= 50mm 62 mm filter, introduced 1973 with lens aperture interconnected to distance set on the lens' focusing ring.
RE. GN Auto-Topcor M 1:1.4 f= 50mm 62 mm filter, introduced 1973 with lens aperture interconnected to distance set on the lens' focusing ring.
RE. Auto-Topcor 1:1.4 f= 58mm 62 mm filter, introduced 1963
RE. Auto-Topcor 1:1.8 f= 58mm 49 mm filter, introduced 1963
RE. Macro Auto-Topcor 1:3.5 f=58mm 49 mm filter
RE. Auto-Topcor 1:1.8 f= 85mm 62 mm filter, introduced 1973
RE. Auto-Topcor 1:2.8 f= 100mm 49 mm filter, introduced 1965
RE. Auto-Topcor 1:3.5 f= 135mm 49 mm filter, introduced 1963
RE. Auto-Topcor 1:5.6 f= 200mm 49 mm filter, introduced 1966
RE. Auto-Topcor 1:5.6 f= 300mm 62 mm filter, introduced 1965
RE. Auto-Topcor 1:5.6 f= 500mm, introduced 1969
RE. Zoom Auto-Topcor 1:4.7 f= 87~205mm 58 mm filter, introduced 1967
In addition, a range of special lenses without focusing thread (to be used with bellows or focusing extension tube) were available for macro work:
Macro Topcor 1:3.5 f= 30mm
Macro Topcor 1:3.5 f= 58mm
Macro Topcor 1:4 f= 135mm
References
135 film cameras
Single-lens reflex cameras | Topcon RE Super | Technology | 1,203 |
12,630 | https://en.wikipedia.org/wiki/Geometric%20series | In mathematics, a geometric series is a series summing the terms of an infinite geometric sequence, in which the ratio of consecutive terms is constant. For example, the series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is a geometric series with common ratio 1/2, which converges to the sum of 1. Each term in a geometric series is the geometric mean of the term before it and the term after it, in the same way that each term of an arithmetic series is the arithmetic mean of its neighbors.
While Greek philosopher Zeno's paradoxes about time and motion (5th century BCE) have been interpreted as involving geometric series, such series were formally studied and applied a century or two later by Greek mathematicians, for example used by Archimedes to calculate the area inside a parabola (3rd century BCE). Today, geometric series are used in mathematical finance, calculating areas of fractals, and various computer science topics.
Though geometric series most commonly involve real or complex numbers, there are also important results and applications for matrix-valued geometric series, function-valued geometric series, p-adic number geometric series, and most generally geometric series of elements of abstract algebraic fields, rings, and semirings.
Definition and examples
The geometric series is an infinite series derived from a special type of sequence called a geometric progression. This means that it is the sum of infinitely many terms of a geometric progression: starting from the initial term a, each subsequent term is the previous term multiplied by a constant number known as the common ratio r. By multiplying each term with the common ratio continuously, the geometric series can be defined mathematically as
a + ar + ar^2 + ar^3 + ⋯ = ∑_{k=0}^{∞} ar^k.
The sum of a finite initial segment of an infinite geometric series is called a finite geometric series, that is:
a + ar + ar^2 + ⋯ + ar^n = ∑_{k=0}^{n} ar^k.
When r > 1 it is often called a growth rate or rate of expansion, and when 0 < r < 1 it is often called a decay rate or shrink rate, where the idea that it is a "rate" comes from interpreting the exponent k as a sort of discrete time variable. When an application area has specialized vocabulary for specific types of growth, expansion, shrinkage, and decay, that vocabulary will also often be used to name parameters of geometric series. In economics, for instance, rates of increase and decrease of price levels are called inflation rates and deflation rates, while rates of increase in values of investments include rates of return and interest rates.
When summing infinitely many terms, the geometric series can either be convergent or divergent. Convergence means there is a value after summing infinitely many terms, whereas divergence means no value after summing. The convergence of a geometric series can be described depending on the value of the common ratio, see the section on convergence below. Grandi's series is an example of a divergent series that can be expressed as 1 − 1 + 1 − 1 + ⋯, where the initial term is 1 and the common ratio is −1; it diverges because its partial sums alternate between 1 and 0 rather than approaching a single value.
Decimal numbers that have repeated patterns that continue forever can be interpreted as geometric series and thereby converted to expressions of the ratio of two integers. For example, the repeated decimal fraction 0.7777… can be written as the geometric series
7/10 + 7/100 + 7/1000 + 7/10000 + ⋯,
where the initial term is a = 7/10 and the common ratio is r = 1/10.
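To make the decimal-to-fraction conversion above concrete, a short Python sketch (using the repeating digit 7 from the example) sums the series with exact rational arithmetic:

```python
from fractions import Fraction

# Geometric series for the repeating decimal 0.7777...
a = Fraction(7, 10)   # initial term
r = Fraction(1, 10)   # common ratio

partial = sum(a * r**k for k in range(20))  # finite truncation 0.77...7
closed_form = a / (1 - r)                   # sum of the infinite series

print(float(partial))  # 0.7777777777777778 (rounded)
print(closed_form)     # 7/9
```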
Convergence of the series and its proof
The convergence of the infinite sequence of partial sums of the infinite geometric series depends on the magnitude of the common ratio r alone:
If |r| < 1, the terms of the series approach zero (becoming smaller and smaller in magnitude) and the sequence of partial sums converges to a limit value of a / (1 − r).
If |r| > 1, the terms of the series become larger and larger in magnitude and the partial sums of the terms also get larger and larger in magnitude, so the series diverges.
If |r| = 1, the terms of the series become no larger or smaller in magnitude and the sequence of partial sums of the series does not converge. When r = 1, all the terms of the series are the same and the partial sums grow to infinity. When r = −1, the terms take two values a and −a alternately, and therefore the sequence of partial sums of the terms oscillates between the two values a and 0. One example can be found in Grandi's series. More generally, when r is a complex root of unity other than 1, so that r^τ = 1 for some integer τ > 1, the partial sums of the series circulate periodically among τ values, never converging to a limit.
The rate of convergence shows how quickly a sequence approaches its limit. In the case of the geometric series, the relevant sequence is the sequence of partial sums S_n and its limit is S = a / (1 − r), and the rate and order are found via
lim_{n→∞} |S_{n+1} − S| / |S_n − S|^q,
where q represents the order of convergence. Using |S_n − S| = |a r^{n+1} / (1 − r)| and choosing the order of convergence q = 1 gives a rate of convergence of |r|.
When the series converges, the rate of convergence gets slower as |r| approaches 1. The pattern of convergence also depends on the sign or complex argument of the common ratio. If 0 < r < 1 and a > 0, then the terms all share the same sign and the partial sums of the terms approach their eventual limit monotonically. If −1 < r < 0 and a > 0, adjacent terms in the geometric series alternate between positive and negative, and the partial sums of the terms oscillate above and below their eventual limit S. For complex r with |r| < 1, the partial sums S_n converge in a spiraling pattern.
The convergence is proved as follows. The partial sum of the first n + 1 terms of a geometric series, up to and including the ar^n term,
S_n = a + ar + ar^2 + ⋯ + ar^n,
is given by the closed form
S_n = a (1 − r^{n+1}) / (1 − r),
where r ≠ 1 is the common ratio. The case r = 1 is merely a simple addition, a case of an arithmetic series, giving S_n = a(n + 1). The formula for the partial sums with r ≠ 1 can be derived as follows: multiplying S_n by r shifts every term one power of r higher, so subtracting gives
S_n − rS_n = a − ar^{n+1},
and therefore
S_n = a (1 − r^{n+1}) / (1 − r)
for r ≠ 1. As r approaches 1, polynomial division or L'Hôpital's rule recovers the case S_n = a(n + 1).
As n approaches infinity, the absolute value of r must be less than one for this sequence of partial sums to converge to a limit. When it does, the series converges absolutely. The infinite series then becomes
S = a + ar + ar^2 + ⋯ = a / (1 − r)
for |r| < 1.
This convergence result is widely applied to prove the convergence of other series as well, whenever those series' terms can be bounded from above by a suitable geometric series; that proof strategy is the basis for the ratio test and root test for the convergence of infinite series.
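As an illustrative numerical check of the closed form above (the values of a and r are arbitrary), the partial sums approach a/(1 − r), with the error shrinking by a factor of |r| at each step:

```python
# Compare partial sums of a geometric series with the closed form a / (1 - r).
a, r = 3.0, 0.5   # arbitrary example values with |r| < 1

limit = a / (1 - r)   # 6.0
s = 0.0
for n in range(10):
    s += a * r**n
    print(n, s, abs(s - limit))   # error halves each step since r = 0.5
```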
Connection to the power series
Like the geometric series, a power series has one parameter for a common variable raised to successive powers, corresponding to the geometric series's powers of r, but it has additional parameters, one for each term in the series, for the distinct coefficients of each power of the variable, rather than just a single additional parameter for all terms, the common coefficient a of r^k in each term of a geometric series. The geometric series can therefore be considered a class of power series in which the sequence of coefficients satisfies c_k = a for all k and in which the variable of the power series equals the common ratio r.
This special class of power series plays an important role in mathematics, for instance for the study of ordinary generating functions in combinatorics and the summation of divergent series in analysis. Many other power series can be written as transformations and combinations of geometric series, making the geometric series formula a convenient tool for calculating formulas for those power series as well.
As a power series, the geometric series has a radius of convergence of 1. This could be seen as a consequence of the Cauchy–Hadamard theorem and the fact that lim_{n→∞} |a|^{1/n} = 1 for any a ≠ 0, or as a consequence of the ratio test for the convergence of infinite series, with the ratio of successive terms |a r^{n+1}| / |a r^n| = |r| implying convergence only for |r| < 1. However, both the ratio test and the Cauchy–Hadamard theorem are proven using the geometric series formula as a logically prior result, so such reasoning would be subtly circular.
Background
2,500 years ago, Greek mathematicians believed that an infinitely long list of positive numbers must sum to infinity. Therefore, Zeno of Elea created a paradox, demonstrating as follows: in order to walk from one place to another, one must first walk half the distance there, and then half of the remaining distance, and half of that remaining distance, and so on, covering infinitely many intervals before arriving. In doing so, he partitioned a fixed distance into an infinitely long list of halved remaining distances, each with a length greater than zero. Zeno's paradox revealed to the Greeks that their assumption about an infinitely long list of positive numbers needing to add up to infinity was incorrect.
Euclid's Elements has the distinction of being the world's oldest continuously used mathematical textbook, and it includes a demonstration of the sum of finite geometric series in Book IX, Proposition 35, illustrated in an adjacent figure.
Archimedes in his The Quadrature of the Parabola used the sum of a geometric series to compute the area enclosed by a parabola and a straight line. Archimedes' theorem states that the total area under the parabola is 4/3 of the area of the blue triangle. His method was to dissect the area into infinite triangles as shown in the adjacent figure. He determined that each green triangle has 1/8 the area of the blue triangle, each yellow triangle has 1/8 the area of a green triangle, and so forth. Assuming that the blue triangle has area 1, then, the total area is the sum of the infinite series
1 + 2(1/8) + 4(1/8)^2 + ⋯
Here the first term represents the area of the blue triangle, the second term is the area of the two green triangles, the third term is the area of the four yellow triangles, and so on. Simplifying the fractions gives
1 + 1/4 + 1/16 + 1/64 + ⋯,
a geometric series with common ratio r = 1/4, and its sum is 1 / (1 − 1/4) = 4/3.
In addition to his elegantly simple proof of the divergence of the harmonic series, Nicole Oresme proved that the arithmetico-geometric series known as Gabriel's Staircase,
1/2 + 2/4 + 3/8 + 4/16 + ⋯ = ∑_{k=1}^{∞} k/2^k,
converges to 2. The diagram for his geometric proof, similar to the adjacent diagram, shows a two-dimensional geometric series. The first dimension is horizontal, in the bottom row, representing the geometric series with initial value a = 1/2 and common ratio r = 1/2, which sums to 1:
1/2 + 1/4 + 1/8 + 1/16 + ⋯ = 1.
The second dimension is vertical, where the bottom row is a new initial term a = 1 and each subsequent row above it shrinks according to the same common ratio r = 1/2, making another geometric series with sum 2:
1 + 1/2 + 1/4 + 1/8 + ⋯ = 2.
This approach generalizes usefully to higher dimensions, and that generalization is described below.
Applications
As mentioned above, geometric series can be applied in the field of economics, where the common ratio may represent rates of increase and decrease of price levels, called inflation rates and deflation rates, or rates of increase in values of investments, such as rates of return and interest rates. More specifically in mathematical finance, geometric series can also be applied in time value of money; that is, to represent the present values of perpetual annuities, sums of money to be paid each year indefinitely into the future. This sort of calculation is used to compute the annual percentage rate of a loan, such as a mortgage loan. It can also be used to estimate the present value of expected stock dividends, or the terminal value of a financial asset assuming a stable growth rate. However, the assumption that interest rates are constant is generally incorrect and payments are unlikely to continue forever since the issuer of the perpetual annuity may lose its ability or end its commitment to make continued payments, so estimates like these are only heuristic guidelines for decision making rather than scientific predictions of actual current values.
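As a minimal sketch of the perpetuity valuation just described (the payment C and rate i are arbitrary illustrative values), each payment discounted at rate i contributes to a geometric series with common ratio 1/(1 + i), whose closed-form sum is C/i:

```python
# Present value of a perpetuity: a geometric series with first term
# C/(1+i) and common ratio 1/(1+i), which sums to C/i.
C = 100.0   # annual payment (arbitrary)
i = 0.05    # annual discount rate (arbitrary)

pv_closed_form = C / i
pv_partial = sum(C / (1 + i)**t for t in range(1, 1001))  # first 1000 payments

print(pv_closed_form)  # 2000.0
print(pv_partial)      # ~2000.0 (converges toward the closed form)
```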
In addition to finding the area enclosed by a parabola and a line in Archimedes' The Quadrature of the Parabola, the geometric series may also be applied in finding the Koch snowflake's area, described as the union of infinitely many equilateral triangles (see figure). Each side of a green triangle is exactly 1/3 the size of a side of the large blue triangle and therefore has exactly 1/9 the area. Similarly, each yellow triangle has 1/9 the area of a green triangle, and so forth. All of these triangles can be represented in terms of a geometric series: the blue triangle's area is the first term, the three green triangles' area is the second term, the twelve yellow triangles' area is the third term, and so forth. Excluding the initial 1, this series has a common ratio r = 4/9, and by taking the blue triangle as a unit of area, the total area of the snowflake is 1 + (1/3) / (1 − 4/9) = 8/5.
Various topics in computer science may include the application of geometric series in the following:
Algorithm analysis: analyzing the time complexity of recursive algorithms (like divide-and-conquer) and in amortized analysis for operations with varying costs, such as dynamic array resizing (see the sketch after this list).
Data structures: analyzing the space and time complexities of operations in data structures like balanced binary search trees and heaps.
Computer graphics: crucial in rendering algorithms for anti-aliasing, for mipmapping, and for generating fractals, where the scale of detail varies geometrically.
Networking and communication: modelling retransmission delays in exponential backoff algorithms and are used in data compression and error-correcting codes for efficient communication.
Probabilistic and randomized algorithms: analyzing random walks, Markov chains, and geometric distributions, which are essential in probabilistic and randomized algorithms.
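As a sketch of the amortized-analysis point above (a standard textbook illustration, not tied to any particular library), the total number of element copies performed by a doubling dynamic array is bounded by the geometric series 1 + 2 + 4 + ⋯ < 2n, so n appends cost O(n) overall:

```python
# Copies made by a doubling dynamic array: each resize at capacity c
# copies c elements, so total copies are 1 + 2 + 4 + ... < 2n.
def total_copy_cost(n_appends):
    capacity, size, copies = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:   # array full: double capacity, copy contents
            copies += size
            capacity *= 2
        size += 1
    return copies

n = 1_000_000
print(total_copy_cost(n) / n)  # < 2 regardless of n: constant amortized cost
```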
Beyond real and complex numbers
While geometric series with real and complex number parameters a and r are most common, geometric series of more general terms such as functions, matrices, and p-adic numbers also find application. The mathematical operations used to express a geometric series given its parameters are simply addition and repeated multiplication, and so it is natural, in the context of modern algebra, to define geometric series with parameters from any ring or field. Further generalization to geometric series with parameters from semirings is more unusual, but also has applications; for instance, in the study of fixed-point iteration of transformation functions, as in transformations of automata via rational series.
In order to analyze the convergence of these general geometric series, then on top of addition and multiplication, one must also have some metric of distance between partial sums of the series. This can introduce new subtleties into the questions of convergence, such as the distinctions between uniform convergence and pointwise convergence in series of functions, and can lead to strong contrasts with intuitions from the real numbers, such as in the convergence of the series with a = 1 and r = 2 to the value −1 in the 2-adic numbers using the 2-adic absolute value as a convergence metric. In that case, the 2-adic absolute value of the common ratio is |2|₂ = 1/2, and while this is counterintuitive from the perspective of real number absolute value (where |2| = 2, naturally), it is nonetheless well-justified in the context of p-adic analysis.
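A quick numerical illustration of the 2-adic example above: the partial sums 1 + 2 + 4 + ⋯ + 2^(n−1) equal 2^n − 1, which agrees with −1 modulo 2^n, so their 2-adic distance to −1 shrinks like 2^(−n):

```python
# Partial sums of 1 + 2 + 4 + ... equal 2**n - 1, which is congruent to
# -1 modulo 2**n, so in the 2-adic metric they converge to -1.
for n in range(1, 11):
    partial = sum(2**k for k in range(n))    # 2**n - 1
    print(n, partial, (partial + 1) % 2**n)  # last column is always 0
```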
When the multiplication of the parameters is not commutative, as it often is not for matrices or general physical operators, particularly in quantum mechanics, then the standard way of writing the geometric series, a + ar + ar^2 + ⋯, multiplying from the right, may need to be distinguished from the alternative a + ra + r^2 a + ⋯, multiplying from the left, and also the symmetric a + r^{1/2} a r^{1/2} + r a r + ⋯, multiplying half on each side. These choices may correspond to important alternatives with different strengths and weaknesses in applications, as in the case of ordering the mutual interferences of drift and diffusion differently at infinitesimal temporal scales in Itô integration and Stratonovich integration in stochastic calculus.
References
Beyer, W. H. CRC Standard Mathematical Tables, 28th ed. Boca Raton, FL: CRC Press, p. 8, 1987.
Courant, R. and Robbins, H. "The Geometric Progression." §1.2.3 in What Is Mathematics?: An Elementary Approach to Ideas and Methods, 2nd ed. Oxford, England: Oxford University Press, pp. 13–14, 1996.
James Stewart (2002). Calculus, 5th ed., Brooks Cole.
Larson, Hostetler, and Edwards (2005). Calculus with Analytic Geometry, 8th ed., Houghton Mifflin Company.
Pappas, T. "Perimeter, Area & the Infinite Series." The Joy of Mathematics. San Carlos, CA: Wide World Publ./Tetra, pp. 134–135, 1989.
Roger B. Nelsen (1997). Proofs without Words: Exercises in Visual Thinking, The Mathematical Association of America.
History and philosophy
C. H. Edwards Jr. (1994). The Historical Development of the Calculus, 3rd ed., Springer. .
Eli Maor (1991). To Infinity and Beyond: A Cultural History of the Infinite, Princeton University Press.
Morr Lazerowitz (2000). The Structure of Metaphysics (International Library of Philosophy), Routledge.
Economics
Carl P. Simon and Lawrence Blume (1994). Mathematics for Economists, W. W. Norton & Company.
Mike Rosser (2003). Basic Mathematics for Economists, 2nd ed., Routledge.
Biology
Edward Batschelet (1992). Introduction to Mathematics for Life Scientists, 3rd ed., Springer.
Richard F. Burton (1998). Biology by Numbers: An Encouragement to Quantitative Thinking, Cambridge University Press.
External links
"Geometric Series" by Michael Schreiber, Wolfram Demonstrations Project, 2007.
Articles containing proofs
Ratios | Geometric series | Mathematics | 3,357 |
18,101,742 | https://en.wikipedia.org/wiki/Drill%20line | In a drilling rig, the drill line is a multi-thread, twisted wire rope that is reeved, typically in 6 to 12 parts, between the traveling block and crown block to facilitate the lowering and lifting of the drill string into and out of the wellbore.
On larger diameter lines, traveling block loads of over a million pounds are possible.
To make a connection is to add another segment of drill pipe onto the top of the drill string. A segment is added by pulling the kelly above the rotary table, stopping the mud pump, hanging off the drill string in the rotary table, unscrewing the kelly from the drill pipe below, swinging the kelly over to permit connecting it to the top of the new segment (which had been placed in the mousehole), and then screwing this assembly into the top of the existing drill string. Mud circulation is resumed, and the drill string is lowered into the hole until the bit takes weight at the bottom of the hole. Drilling then resumes.
External links
Drill line
Oilfield terminology
Drilling technology | Drill line | Chemistry | 213 |
77,009,362 | https://en.wikipedia.org/wiki/Fusion%20Engineering%20and%20Design | Fusion Engineering and Design is a peer-reviewed scientific journal, published monthly by Elsevier. Established under the name Nuclear Engineering and Design/Fusion in 1984 and retitled to its current name in 1987, it covers research on fusion power and plasma science. Its editors-in-chief are Seungyon Cho (Korea Institute of Fusion Energy) and Rudolf Neu (Max Planck Institute for Plasma Physics).
Abstracting and indexing
The journal is abstracted and indexed in several bibliographic databases.
According to the Journal Citation Reports, the journal has a 2023 impact factor of 1.9.
References
External links
English-language journals
Academic journals established in 1984
Monthly journals
Plasma science journals
Engineering journals
Elsevier academic journals
Fusion power | Fusion Engineering and Design | Physics,Chemistry | 142 |
65,372,852 | https://en.wikipedia.org/wiki/WD%201856%2B534 | WD 1856+534 is a white dwarf located in the constellation of Draco. At a distance of about from Earth, it is the outer component of a visual triple star system whose inner pair consists of two red dwarf stars, named G229-20. The white dwarf displays a featureless absorption spectrum, lacking strong optical absorption or emission features in its atmosphere. It has an effective temperature of , corresponding to an age of approximately 5.8 billion years. WD 1856+534 is approximately half as massive as the Sun, while its radius is far smaller, only about 40% larger than Earth's.
Planetary system
The white dwarf is known to host one exoplanet, WD 1856+534 b, in orbit around it. The exoplanet was detected through the transit method by the Transiting Exoplanet Survey Satellite (TESS) between July and August 2019. An analysis of the transit data in 2020 revealed that it is a Jupiter-like giant planet with a radius over ten times that of Earth's, and orbits its host star closely at a distance of 0.02 astronomical units (AU), with an orbital period 60 times shorter than that of Mercury around the Sun.
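As a rough consistency check of the quoted orbit (a back-of-the-envelope sketch in Python using Kepler's third law and the approximate values given above, not a published calculation):

```python
# Kepler's third law in solar units: P[yr]**2 = a[AU]**3 / M[Msun].
a_au = 0.02      # orbital distance quoted above
m_star = 0.5     # white dwarf mass, roughly half the Sun's

p_days = (a_au**3 / m_star) ** 0.5 * 365.25
print(p_days)          # ~1.5 days
print(88.0 / p_days)   # ~60, matching "60 times shorter than Mercury"
```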
The unexpectedly close distance of the exoplanet to the white dwarf implies that it must have migrated inward after its host star evolved from a red giant to a white dwarf; otherwise it would have been engulfed by its star. This migration may be related to the fact that WD 1856+534 belongs to a hierarchical triple-star system: the white dwarf and its planet are gravitationally bound to a distant companion, G 229–20, which itself is a binary system of two red dwarf stars. Gravitational interactions with the companion stars may have triggered the planet's migration through the Lidov–Kozai mechanism in a manner similar to some hot Jupiters. An alternative hypothesis is that the planet instead survived a common envelope phase. In the latter scenario, other planets engulfed earlier may have contributed to the expulsion of the stellar envelope. JWST observations seem to disfavour formation via a common envelope and instead favour high-eccentricity migration.
The planetary transmission spectrum obtained with GTC OSIRIS is gray and featureless, likely because of a high level of hazes. A transmission spectrum was also obtained with Gemini GMOS. It does not show any features besides a possible dip at 0.55 μm, which could be caused by auroral emission on the nightside of the planet. The researchers found a minimum mass of 0.84 by accounting for the transit geometry of a grazing transit. They also revised the white dwarf parameters and found a total age of 8–10 billion years, in agreement with the system belonging to the thin disk.
A search using transit timing variations found no additional planets. The search excludes planets with a mass of more than 2 with orbital periods as long as 500 days and planets of more than 10 with orbital periods as long as 1000 days.
See also
WD 1145+017, a white dwarf with a transiting disrupted planetary-mass object
WD J0914+1914, a white dwarf with a disk of debris originating from a possible giant planet
ZTF J0139+5245, another white dwarf with a disk of debris from a disrupted planetary-mass object
CWISEP J1935-1546 a free-floating object with aurora emission in the infrared
List of exoplanets and planetary debris around white dwarfs
Notes
References
External links
NASA Missions Spy First Possible ‘Survivor’ Planet Hugging White Dwarf Star, Sean Potter, NASA, 16 September 2020
Planet discovered transiting a dead star, Steven Parsons, Nature News and Views, 16 September 2020
White dwarfs
Astronomical objects discovered in 2020
Draco (constellation)
Planetary systems with one confirmed planet
Gas giants
1690, TOI | WD 1856+534 | Astronomy | 775 |
7,062,509 | https://en.wikipedia.org/wiki/Biosignal | A biosignal is any signal in living beings that can be continually measured and monitored. The term biosignal is often used to refer to bioelectrical signals, but it may refer to both electrical and non-electrical signals. The usual understanding is to refer only to time-varying signals, although spatial parameter variations (e.g. the nucleotide sequence determining the genetic code) are sometimes subsumed as well.
Electrical biosignals
Electrical biosignals, or bioelectrical time signals, usually refer to the change in electric current produced by the sum of an electrical potential difference across a specialized tissue, organ or cell system like the nervous system. Thus, among the best-known bioelectrical signals are:
Electroencephalogram (EEG)
Electrocardiogram (ECG)
Electromyogram (EMG)
Electrooculogram (EOG)
Electroretinogram (ERG)
Electrogastrogram (EGG)
Galvanic skin response (GSR) or electrodermal activity (EDA)
EEG, ECG, EOG and EMG are measured with a differential amplifier which registers the difference between two electrodes attached to the skin. However, the galvanic skin response measures electrical resistance, and magnetoencephalography (MEG) measures the magnetic field induced by electrical currents (electroencephalogram) of the brain.
With the development of methods for remote measurement of electric fields using new sensor technology, electric biosignals such as EEG and ECG can be measured without electric contact with the skin. This can be applied, for example, for remote monitoring of brain waves and heart beat of patients who must not be touched, in particular patients with serious burns.
Electrical currents and changes in electrical resistances across tissues can also be measured from plants.
Biosignals may also refer to any non-electrical signal that is capable of being monitored from biological beings, such as mechanical signals (e.g. the mechanomyogram or MMG), acoustic signals (e.g. phonetic and non-phonetic utterances, breathing), chemical signals (e.g. pH, oxygenation) and optical signals (e.g. movements).
Use in artistic contexts
In recent years, the use of biosignals has gained interest amongst an international artistic community of performers and composers who use biosignals to produce and control sound. Research and practice in the field go back decades in various forms and have lately been enjoying a resurgence, thanks to the increasing availability of more affordable and less cumbersome technologies. An entire issue of eContact!, published by the Canadian Electroacoustic Community in July 2012, was dedicated to this subject, with contributions from the key figures in the domain.
See also
Bioindicator
Biomarker
Biosignature
Molecular marker
Multimedia information retrieval
References
Bibliography
Donnarumma, Marco. "Proprioception, Effort and Strain in "Hypo Chrysos": Action art for vexed body and the Xth Sense." eContact! 14.2 — Biotechnological Performance Practice / Pratiques de performance biotechnologique (July 2012). Montréal: CEC.
Tanaka, Atau. "The Use of Electromyogram Signals (EMG) in Musical Performance: A Personal survey of two decades of practice." eContact! 14.2 — Biotechnological Performance Practice / Pratiques de performance biotechnologique (July 2012). Montréal: CEC.
External links
Applications
Using electroencephalograph signals for task classification and activity recognition Microsoft
NASA scientists use hands-off approach to land passengers jet
Hardware
University of Vienna : cours Biomedical Engineering, Electromyography (EMG)
Electroencephalographe,EEG, sans fil ( Cornell University, Ithaca, NY, USA)
Biology terminology
Electrophysiology | Biosignal | Biology | 807 |
4,860,678 | https://en.wikipedia.org/wiki/Einstein-aether%20theory | In physics the Einstein-aether theory, also called ae-theory, is the name coined in 2004 for a modification of general relativity that has a preferred reference frame and hence violates Lorentz invariance. These generally covariant theories describe a spacetime endowed with both a metric and a unit timelike vector field named the aether. The aether in this theory is "a Lorentz-violating vector field" unrelated to older luminiferous aether theories; the "Einstein" in the theory's name comes from its use of Einstein's general relativity equation.
Relation to other theories of gravity
An Einstein-aether theory is an alternative theory of gravity that adds a vector field to the theory of general relativity. There are also scalar field modifications, including Brans–Dicke theory, all included with Horndeski's theory. Going the other direction, there are theories that add tensor fields, under the name Bimetric gravity or both scalar and vector fields can be added, as in Tensor–vector–scalar gravity.
History
The name "Einstein-aether theory" was coined in 2004 by T. Jacobson and D. Mattingly. This type of theory originated in the 1970s with the work of C. M. Will and K. Nordtvedt Jr. on gravitationally coupled vector field theories.
In the 1980s Maurizio Gasperini added a scalar field, which intuitively corresponded to a universal notion of time, to the metric of general relativity. Such a theory will have a preferred reference frame, that in which the universal time is the actual time.
In 2000, Ted Jacobson and David Mattingly developed a model that allows the consequences of preferred frames to be studied. Their theory contains less information than that of Gasperini, instead of a scalar field giving a universal time it contains only a unit vector field which gives the direction of time. Thus observers who follow the aether at different points will not necessarily age at the same rate in the Jacobson–Mattingly theory. In 2008 Ted Jacobson presented a status report on Einstein-aether theory.
Breaking Lorentz symmetry
The existence of a preferred, dynamical time vector breaks the Lorentz symmetry of the theory, more precisely it breaks the invariance under boosts. This symmetry breaking may lead to a Higgs mechanism for the graviton which would alter long distance physics, perhaps yielding an explanation for recent supernova data which would otherwise be explained by a cosmological constant. The effect of breaking Lorentz invariance on quantum field theory has a long history leading back at least to the work of Markus Fierz and Wolfgang Pauli in 1939. Recently it has regained popularity with, for example, the paper Effective Field Theory for Massive Gravitons and Gravity in Theory Space by Nima Arkani-Hamed, Howard Georgi and Matthew Schwartz. Einstein-aether theories provide a concrete example of a theory with broken Lorentz invariance and so have proven to be a natural setting for such investigations.
Action
The action of the Einstein-aether theory is generally taken to consist of the sum of the Einstein–Hilbert action with a Lagrange multiplier λ that ensures that the time vector is a unit vector and also with all of the covariant terms involving the time vector u but having at most two derivatives.
In particular it is assumed that the action may be written as the integral of a local Lagrangian density,
S = (1 / (16π G_N)) ∫ √(−g) L d⁴x,
where G_N is Newton's constant and g is a metric with Minkowski signature. The Lagrangian density is of the form
L = R − K^{ab}_{mn} ∇_a u^m ∇_b u^n + λ (g_{ab} u^a u^b − 1).
Here R is the Ricci scalar, ∇ is the covariant derivative and the tensor K is defined by
K^{ab}_{mn} = c1 g^{ab} g_{mn} + c2 δ^a_m δ^b_n + c3 δ^a_n δ^b_m − c4 u^a u^b g_{mn}.
Here the ci are dimensionless adjustable parameters of the theory.
Solutions
Stars
Several spherically symmetric solutions to ae-theory have been found. Most recently Christopher Eling and Ted Jacobson have found solutions resembling stars and solutions resembling black holes.
In particular, they demonstrated that there are no spherically symmetric solutions in which stars are constructed entirely from the aether. Solutions without additional matter always have either naked singularities or else two asymptotic regions of spacetime, resembling a wormhole but with no horizon. They have argued that static stars must have static aether solutions, which means that the aether points in the direction of a timelike Killing vector.
Black holes and potential problems
However this is difficult to reconcile with static black holes, as at the event horizon there are no timelike Killing vectors available and so the black hole solutions cannot have static aethers. Thus when a star collapses to form a black hole, somehow the aether must eventually become static even very far away from the collapse.
In addition the stress tensor does not obviously satisfy the Raychaudhuri equation, one needs to make recourse to the equations of motion. This is in contrast with theories with no aether, where this property is independent of the equations of motion.
Experimental constraints
In a 2005 paper, Nima Arkani-Hamed, Hsin-Chia Cheng, Markus Luty and Jesse Thaler have examined experimental consequences of the breaking of boost symmetries inherent in aether theories. They have found that the resulting Goldstone boson leads to, among other things, a new kind of Cherenkov radiation.
In addition they have argued that spin sources will interact via a new inverse square law force with a very unusual angular dependence. They suggest that the discovery of such a force would be very strong evidence for an aether theory, although not necessarily that of Jacobson, et al.
See also
Aether theories
Modern searches for Lorentz violation
References
Aether theories
Theories of gravity | Einstein-aether theory | Physics | 1,159 |
34,165,756 | https://en.wikipedia.org/wiki/Breeding%20for%20drought%20stress%20tolerance | Breeding for drought resistance is the process of breeding plants with the goal of reducing the impact of dehydration on plant growth.
Dehydration stress
Crop plants
In nature or crop fields, water is often the most limiting factor for plant growth. If plants do not receive adequate rainfall or irrigation, the resulting dehydration stress can reduce growth more than all other environmental stresses combined.
Drought can be defined as the absence of rainfall or irrigation for a period of time sufficient to deplete soil moisture and cause dehydration in plant tissues. Dehydration stress results when water loss from the plant exceeds the ability of the plant's roots to absorb water and when the plant's water content is reduced enough to interfere with normal plant processes.
Global phenomenon
About 15 million km2 of the land surface is covered by crop-land, and about 16% of this area is equipped for irrigation (Siebert et al. 2005). Thus, in many parts of the world, including the United States, plants may frequently encounter dehydration stress. Rainfall is very seasonal and periodic drought occurs regularly. The effect of drought is more prominent in sandy soils with low water holding capacity. On such soils some plants may experience dehydration stress after only a few days without water.
During the 20th century, the rate of increase in `blue' water withdrawal (from rivers, lakes, and aquifers) for irrigation and other purposes was higher than the growth rate of the world population (Shiklomanov 1998). Country-wise maps of irrigated areas are available.
Future challenges to crop production
Soil moisture deficit is a significant challenge to the future of crop production. Severe drought in parts of the U.S., Australia, and Africa in recent years drastically reduced crop yields and disrupted regional economies. Even in average years, however, many agricultural regions, including the U.S. Great Plains, suffer from chronic soil moisture deficits. Cereal crops typically attain only about 25% of their potential yield due to the effects of environmental stress, with dehydration stress the most important cause. Two major trends will likely increase the frequency and severity of soil moisture deficits:
Climate change: Higher temperatures are likely to increase crop water use due to increased transpiration. A warmer atmosphere will also speed up melting of mountain snow pack, resulting in less water available for irrigation. More extreme weather patterns will increase the frequency of drought in some regions.
Limited water supplies: Increased demand from municipal and industrial users will further reduce the amount of water available for irrigated crops.
Although changes in tillage and irrigation practices can improve production by conserving water, enhancing the genetic tolerance of crops to drought stress is considered an essential strategy for addressing moisture deficits.
Plant physiology
A plant responds to a lack of water by halting growth and reducing photosynthesis and other plant processes in order to reduce water use. As water loss progresses, leaves of some species may appear to change colour — usually to blue-green. Foliage begins to wilt and, if the plant is not irrigated, leaves will fall off and the plant will eventually die. Soil moisture deficit lowers the water potential of a plant's root and, upon extended exposure, abscisic acid is accumulated and eventually stomatal closure occurs. This reduces a plant's leaf relative water content.
The time required for dehydration stress to occur depends on the water-holding capacity of the soil, environmental conditions, stage of plant growth, and plant species. Plants growing in sandy soils with low water-holding capacity are more susceptible to dehydration stress than plants growing in clay soils. A limited root system will accelerate the rate at which dehydration stress develops. A plant's root system may be limited by the presence of competing root systems from neighbouring plants, by site conditions such as compacted soils or high water tables, or by container size (if growing in a container). A plant with a large mass of leaves in relation to the root system is prone to drought stress as the leaves may lose water faster than the roots can supply it. Newly planted plants and poorly established plants may be especially susceptible to dehydration stress because of the limited root system or the large mass of stems and leaves in comparison to roots.
Other stress factors
Aside from the moisture content of the soil, environmental conditions of high light intensity, high temperature, low relative humidity and high wind speed will significantly increase plant water loss. The prior environment of a plant also can influence the development of dehydration stress. A plant that has been exposed to dehydration stress (hardened) previously and has recovered may become more drought resistant. Also, a plant that was well-watered prior to being water-limited will usually survive a period of drought better than a continuously dehydration-stressed plant.
Mechanisms of Drought Resistance
The degree of resistance to drought depends upon individual crops. Generally three strategies can help a crop to mitigate the effect of dehydration stress:
The drought resistance terms in summary (Levitt, J. (1980); Blum, A. (2011))
Avoidance
If the plant shows dehydration avoidance, the environmental factor is excluded from the plant tissues by reducing water loss ("water savers", e.g. by thick leaf epicuticular wax, leaf rolling, leaf posture) or maintaining water uptake ("water spenders", e.g. by growing deeper roots).
Dehydration avoidance is desirable in modern agriculture, where drought resistance requires the maintenance of economically viable plant production under dehydration stress. The role of dehydration avoidance is maintaining water supply and sustaining leaf hydration and turgidity with the purpose of maintaining stomatal opening and transpiration as long as possible under water deficit. This is essential for leaf gas exchange, photosynthesis and plant production through carbon assimilation.
Tolerance
If the plant shows dehydration tolerance, the environmental factor enters the plant tissues but the tissues survive, by e.g. maintaining turgor and osmotic adjustment.
Escape
Dehydration escape involves e.g. early maturing or seed dormancy, where the plant uses previous optimal conditions to develop vigor. Dehydration recovery refers to some plant species being able to recuperate after brief drought periods.
A proper timing of life-cycle, resulting in the completion of the most sensitive developmental stages while water is abundant, is considered to be a dehydration escape strategy. Avoiding dehydration stress with a root system capable of extracting water from deep soil layers, or by reducing evapotranspiration without affecting yields, is considered as dehydration avoidance. Mechanisms such as osmotic adjustment (OA) whereby a plant maintains cell turgor pressure under reduced soil water potential are categorised as dehydration tolerance mechanisms. Dehydration avoidance mechanisms can be expressed even in the absence of stress and are then considered constitutive. Dehydration tolerance mechanisms are the result of a response triggered by dehydration stress itself and are therefore considered adaptive. When the stress is terminal and predictable, dehydration escape through the use of shorter duration varieties is often the preferable method of improving yield potential. Dehydration avoidance and tolerance mechanisms are required in situations where the timing of drought is mostly unpredictable.
Drought resistance mechanisms are genetically controlled and genes or QTL responsible for drought resistance have been discovered in several crops which opens avenue for molecular breeding for drought resistance.
Drought resistance traits
Resistance to drought is a quantitative trait, with a complex phenotype, often confounded by plant phenology. Breeding for drought resistance is further complicated since several types of abiotic stress, such as high temperatures, high irradiance, and nutrient toxicities or deficiencies can challenge crop plants simultaneously.
Osmotic adjustment
When a plant is exposed to water deficit, it may accumulate a variety of osmotically active compounds such as amino acids and sugars, resulting in a lowering of the osmotic potential. Examples of amino acids that may be up-regulated are proline and glycine betaine. This is termed osmotic adjustment and enables the plant to take up water, maintain turgor and survive longer.
Cell membrane stability
The ability to survive dehydration is influenced by a cell's ability to survive at reduced water content. This can be considered complementary to OA because both traits will help maintain leaf growth (or prevent leaf death) during water deficit. Crop varieties differ in dehydration tolerance and an important factor for such differences is the capacity of the cell membrane to prevent electrolyte leakage at decreasing water content, or “cell membrane stability (CMS)”. The maintenance of membrane function is assumed to mean that cell activity is also maintained. Measurements of CMS have been used in different crops and are known to be correlated with yields under high temperature and possibly under dehydration stress.
Epicuticular wax
In sorghum (Sorghum bicolor L. Moench), drought resistance is a trait that is highly correlated with the thickness of the epicuticular wax layer. Experiments have demonstrated that rice varieties with a thick cuticle layer retain their leaf turgor for longer periods of time after the onset of a water-stress.
Partitioning and stem reserve mobilisation
As photosynthesis becomes inhibited by dehydration, the grain filling process becomes increasingly reliant on stem reserve utilisation. Numerous studies have reported that stem reserve mobilisation capacity is related to yield under dehydration stress in wheat. In rice, a few studies also indicated that this mechanism maintains grain yield under dehydration stress at the grain filling stage. This dehydration tolerance mechanism is stimulated by a decrease in gibberellic acid concentration and an increase in abscisic acid concentration.
Manipulation and stability of flowering processes
Seedling drought resistance traits
Emergence from deep sowing (to exploit moisture below a dry upper soil layer) is practised to help seedlings reach the receding moisture profile and to avoid high soil-surface temperatures, which inhibit germination. Screening at the seedling stage provides practical advantages, especially when managing large amounts of germplasm.
The Drought Resistant Ideotype
Ideotypes are usually developed to describe an ideal plant variety. The following traits constitute the drought-resistant wheat ideotype of the International Maize and Wheat Improvement Center (CIMMYT):
Large seed size. Helps emergence, early ground cover and initial biomass.
Long coleoptiles. For emergence from deep sowing
Early ground cover.
Thinner, wider leaves (i.e., with a relatively low specific leaf weight) and a more prostrate growth habit help to increase ground cover, thus conserving soil moisture and potentially increasing radiation use efficiency.
High pre-anthesis biomass.
Good capacity for stem reserves and remobilisation
High spike photosynthetic capacity
High relative leaf water content (RLWC), stomatal conductance (Gs), and/or canopy temperature depression (CTD) during grain filling to indicate ability to extract water
Osmotic adjustment
Accumulation of abscisic acid (ABA).
The benefit of ABA accumulation under dehydration has been demonstrated (Innes et al. 1984).
It appears to pre-adapt plants to stress by reducing stomatal conductance, rates of cell division and organ size, while increasing the rate of development. However, high ABA levels can also result in sterility, since they may abort developing florets.
Heat Tolerance: The contribution of heat tolerance to performance under dehydration stress needs to be quantified, but it is relatively easy to screen for (Reynolds et al. 1998).
Leaf anatomy: Waxiness, pubescence, rolling, thickness, posture. These traits decrease the radiation load on the leaf surface. Benefits include a lower evapotranspiration rate and reduced risk of irreversible photo-inhibition. However, they may also be associated with reduced radiation use efficiency, which would reduce yield under more favourable conditions.
High tiller survival: Comparisons of old and new varieties have shown that, under dehydration, older varieties over-produce tillers, many of which fail to set grain, while modern drought-resistant lines produce fewer tillers, most of which survive.
Stay-green: The trait may indicate the presence of drought resistance mechanisms, but probably does not contribute to yield per se if there is no water left in the soil profile by the end of the cycle to support leaf gas exchange. It may be detrimental if it indicates a lack of ability to remobilise stem reserves. However, research in sorghum has indicated that stay-green is associated with higher leaf chlorophyll content at all stages of development, and both were associated with improved yield and transpiration efficiency under dehydration.
Combination phenomics: overall health of crops
The concept of combination phenomics comes from the idea that two or more plant stresses have common physiological effects or common traits, which are indicators of overall plant health.
As both biotic and abiotic stresses can result in similar physiological consequences, drought-resistant plants can be separated from sensitive plants. Some imaging or infrared measurement techniques can help to speed up the breeding process; for example, both spot blotch intensity and drought response can be assessed by monitoring canopy temperature depression.
Molecular breeding for drought resistance
Recent research breakthroughs in biotechnology have revived interest in targeted drought resistance breeding and the use of new genomics tools to enhance crop water productivity. Marker-assisted breeding is revolutionising the improvement of temperate field crops and will have similar impacts on the breeding of tropical crops. Other molecular breeding tools include the development of genetically modified crops that can tolerate plant stress. As a complement to the recent rapid progress in genomics, a better understanding of the physiological mechanisms of dehydration response will also contribute to the progress of genetic enhancement of crop drought resistance. It is now well accepted that the complexity of the dehydration syndrome can only be tackled with a holistic approach that integrates physiological dissection of crop dehydration avoidance and tolerance traits using molecular genetic tools such as marker-assisted selection (MAS), micro-arrays and transgenic crops, with agronomic practices that lead to better conservation and utilisation of soil moisture, and better matching of crop genotypes with the environment. MAS has been implemented in rice varieties to assess drought tolerance and to develop new abiotic stress-tolerant varieties.
See also
Upland rice
Molecular breeding
References
External links
Plant Breeding For Drought Tolerance, USDA initiative
Genetic and genomic tools to improve drought tolerance in wheat
Evaluating a Conceptual Model for Drought Tolerance
Website on plant stresses by Dr. Abraham Blum
Plant breeding | Breeding for drought stress tolerance | Chemistry | 3,018 |
420,422 | https://en.wikipedia.org/wiki/Histochemical%20tracer | A histochemical tracer is a compound used to reveal the location of cells and track neuronal projections. A neuronal tracer may be retrograde, anterograde, or work in both directions. A retrograde tracer is taken up in the terminal of the neuron and transported to the cell body, whereas an anterograde tracer moves away from the cell body of the neuron.
List
DiI (DiC18(3)) - retrograde and anterograde
Diamidino yellow
Fast blue
Horseradish peroxidase - retrograde
Cholera toxin B - retrograde and anterograde
Pseudorabies virus
Hydroxystilbamidine - retrograde
Texas Red
Fluorescein isothiocyanate
References
Histology | Histochemical tracer | Chemistry | 158 |
614,798 | https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Extraterrestrial%20Physics | The Max Planck Institute for Extraterrestrial Physics is part of the Max Planck Society, located in Garching, near Munich, Germany.
In 1991 the Max Planck Institute for Physics and Astrophysics split up into the Max Planck Institute for Extraterrestrial Physics, the Max Planck Institute for Physics and the Max Planck Institute for Astrophysics. The Max Planck Institute for Extraterrestrial Physics had been founded as a sub-institute in 1963. The scientific activities of the institute are mostly devoted to astrophysics with telescopes orbiting in space. A large share of its resources is devoted to studying black holes in the Milky Way Galaxy and in the remote universe.
History
The Max Planck Institute for Extraterrestrial Physics (MPE) was preceded by the department for extraterrestrial physics in the Max Planck Institute for Physics and Astrophysics. This department was established by Professor Reimar Lüst on October 23, 1961. A Max Planck Senate resolution transformed this department into a sub-institute of the Max Planck Institute for Physics and Astrophysics on May 15, 1963. Professor Lüst was appointed director of the institute. Another Senate resolution on March 8, 1991, finally established MPE as an autonomous institute within the Max-Planck-Gesellschaft. It is dedicated to the experimental and theoretical exploration of space outside the Earth as well as astrophysical phenomena.
Timeline
Major events in the history of the institute include:
1963 Foundation as a sub-institute within the MPI für Physik und Astrophysik; director Reimar Lüst
1969 Klaus Pinkau becomes director at the institute (cosmic rays, gamma-astronomy)
1972 Gerhard Haerendel becomes director at the institute (plasma physics)
1975 Joachim Trümper becomes director and scientific member at the institute (X-ray astronomy)
1981 The MPE X-ray test facility "Panter" located in Neuried starts operation
1985 Gregor Morfill becomes director and scientific member at the institute (theory)
1986 Reinhard Genzel becomes director and scientific member at the institute (infrared astronomy)
1990 Joachim Trümper together with the MPI for Physics (MPP) founds the semiconductor laboratory as a joint project between the MPE and the MPP (since 2012 operated by the MPG)
2000 R. Genzel together with the University of California Berkeley founds the "UCB-MPG Center for International Exchange in Astrophysics and Space Science"
2000 G. Morfill together with the IPP founds the "Center for Interdisciplinary Plasma Science" (CIPS) (until 2004)
2001 The "International Max-Planck- Research School on Astrophysics" (IMPRS) is opened by MPE, MPA, ESO, MPP and the universities of Munich
2001 Günther Hasinger becomes scientific member and director at the institute (X-ray astronomy)
2002 Ralf Bender becomes scientific member and director at the institute (optical and interpretative astronomy)
2010 Kirpal Nandra becomes scientific member and director at the institute (high-energy astrophysics)
2014 Paola Caselli becomes scientific member and director at the institute (Center for Astrochemical Studies)
2020 Nobel Prize in Physics for Reinhard Genzel for his research on the black hole at the centre of the Milky Way (Sagittarius A*)
2023 Frank Eisenhauer becomes scientific member and director at the institute (infrared-/submillimeter astronomy)
Detailed history
As outlined above, MPE grew out of the department for extraterrestrial physics established by Reimar Lüst on October 23, 1961, became a sub-institute of the Max Planck Institute for Physics and Astrophysics in 1963, and was established as an autonomous institute within the Max Planck Society in 1991. A continuous reorientation to new, promising fields of research and the appointment of new members has ensured steady advancement.
Among the 29 employees of the Institute when it was founded in 1963 were 9 scientists and 1 Ph.D. student. Twelve years later, in 1975, the number of employees had grown to 180, with 55 scientists and 13 Ph.D. students; today (as of 2015) there are some 400 staff (130 scientists and 75 PhD students). It is noteworthy that permanent positions at the institute have not increased since 1973, despite its celebrated scientific achievements. The increasingly complex tasks and international obligations have been met mainly by staff members on fixed-term positions funded by external organizations.
Because the institute has assumed a leading position in astronomy internationally, it has attracted guest scientists throughout the world. The number of long-term guests increased from 12 in 1974 to a maximum of 72 in 2000. In recent years MPE has hosted an average of about 50 guest scientists each year.
During the early years, the scientific work at the Institute concentrated on the investigation of extraterrestrial plasmas and the magnetosphere of the Earth. This work was performed with measurements of particles and electromagnetic fields as well as a specially developed ion-cloud technique using sounding rockets.
Another field of research also became important: astrophysical observations of electromagnetic radiation which could not be observed from the surface of the Earth because the wavelengths are such that the radiation is absorbed by the Earth's atmosphere. These observations, and the inferences drawn from them, are the subject matter of infrared astronomy as well as X-ray and gamma-ray astronomy. In addition to more than 100 rockets, an increasing number of high-altitude balloons (so far more than 50; e.g. HEXE) have been used to carry experiments to high altitudes.
Since the 1990s, satellites have become the preferred observation platforms because of their favorable observation-time/cost ratio. Nevertheless, high-flying observation airplanes and ground-based telescopes are also used to obtain data, especially for optical and near-infrared observations.
New observation techniques using satellites have necessitated the recording, processing and accessible storage of high data fluxes over long periods of time. This demanding task is performed by a data processing group, which has grown quickly in the last decade. Special data centers were established for the large satellite projects.
Besides the many successes, there have also been disappointments. The malfunctions of the Ariane carrier rockets on test launches in 1980 and 1996 were particularly bitter setbacks. The satellite "Firewheel", in which many members of the Institute had invested years of work, was lost on May 23, 1980, because of a combustion instability in the first stage of the launch rocket. The same fate was to overtake the four satellites of the CLUSTER mission on June 4, 1996, when the first Ariane 5 was launched. This time the disaster was attributed to an error in the rocket's software. The most recent loss was "ABRIXAS", an X-ray satellite built by industry under the leadership of MPE. After a few hours in orbit, a malfunction of the power system caused the total loss of the satellite.
Over the years, however, the history of MPE is primarily a story of scientific successes.
Selected achievements
Exploration of the Ionosphere and Magnetosphere by means of ion clouds (1963–1985)
The first map of the galactic gamma-ray emission ( > 70 MeV) as measured with the satellite COS-B (1978)
Measurement of the magnetic field of the neutron star Her-X1 using the cyclotron line emission (balloon experiments 1978)
Experimental proof of the reconnection process (1979)
The artificial comet (AMPTE 1984/85)
Numerical simulation of a collision-free shock wave (1990)
The first map of the X-ray sky as measured with the imaging X-ray telescope on board the ROSAT satellite (1993)
First gamma-ray sky map in the energy range 3 to 10 MeV as measured with the imaging Compton telescope COMPTEL on board CGRO (1994)
The plasma-crystal experiment and its successors on the International Space Station (1996–2013)
The measurement of the element- and isotope-composition of the solar wind by the CELIAS experiment on board the SOHO satellite (1996)
The first detection of water-molecule lines in an expanding shell of a star using the Fabry-Perot spectrometer on board the ISO satellite (1996)
First detection of X-ray emission from comets and planets (1996, 2001)
Determining the energy source for ultraluminous infrared galaxies with the satellite ISO (1998)
Detection of gamma-ray line emission (44Ti) from supernova remnants (1998)
Deep observations of the extragalactic X-ray sky with ROSAT, XMM-Newton and Chandra and resolving the background radiation into individual sources (since 1998)
Confirmation that a supermassive black hole resides at the centre of the Milky Way Galaxy (2002)
Detection of a binary active galactic nucleus in X-rays (2003)
Reconstruction of the evolution history of stars in elliptical galaxies (2005)
Stellar disks rotating around the black hole in the Andromeda galaxy (2005)
Determining the gas content of normal galaxies in the early universe (since 2010)
Resolving the cosmic infrared background into individual galaxies with Herschel (2011)
Scientific work
The institute was founded in 1963 as a sub-institute of the Max-Planck-Institut für Physik und Astrophysik and established as an independent institute in 1991.
Its main research topics are astronomical observations in spectral regions which are only accessible from space because of the absorbing effects of the Earth's atmosphere, but also instruments on ground-based observatories are used whenever possible.
Scientific work is done in four major research areas, each supervised by one of the directors: optical and interpretative astronomy (Bender), infrared and sub-millimeter/millimeter astronomy (Genzel), high-energy astrophysics (Nandra), and the Centre for Astrochemical Studies (Caselli). Within these areas scientists lead individual experiments and research projects organised in about 25 project teams. The research topics pursued at MPE range from the physics of cosmic plasmas and of stars to the physics and chemistry of interstellar matter, from star formation and nucleosynthesis to extragalactic astrophysics and cosmology.
Many experiments of the Max Planck Institute for Extraterrestrial Physics (MPE) have to be carried out above the dense Earth's atmosphere using aircraft, rockets, satellites and space probes. In the early days experiments were also flown on balloons. To pursue advanced extraterrestrial physics and state-of-the-art experimental astrophysics, the institute continues to develop high-tech instrumentation in-house. This includes detectors, spectrometers, and cameras as well as telescopes and complete payloads (e.g. ROSAT and eROSITA) and even entire satellites (as in the case of AMPTE and EQUATOR-S). For this purpose the technical and engineering departments are of particular importance for the institute's research work.
Observers and experimenters perform their research work at the institute in close contact with each other. Their interaction while interpreting observations and propounding new hypotheses underlies the successful progress of the institute's research projects.
At the end of the year 2022 a total of 508 employees were working at the institute, among them about 100 scientists, 60 junior scientists, 10 apprentices and 140 visiting researchers.
Projects
Scientific projects at the MPE are often the efforts of the different research departments to build, maintain, and use experiments and facilities which are needed by the many different scientific research interests at the institute. Apart from hardware projects, there are also projects that use archival data and are not necessarily connected to a new instrument. A brief overview of the most recent projects follows.
For the EUCLID space telescope, which was launched in July 2023 and from which researchers hope to gain new insights into dark matter and dark energy, the institute contributed the NISP optical system.
The GRAVITY instrument enables the four 8-metre telescopes at the Very Large Telescope (VLT) in Chile to be interconnected by means of interferometry to form a virtual telescope with a diameter of 130 metres. The follow-up project GRAVITY Plus is currently being developed, which is expected to achieve an even sharper resolution thanks to a new system of adaptive optics, laser guide stars and an extended field of view.
For the 39-metre European Extremely Large Telescope (E-ELT), which is currently being built in the Chilean Atacama Desert and is planned to be finished by 2027, MPE is developing the first-light instrument MICADO (Multi-AO Imaging Camera for Deep Observations).
The ERIS (Enhanced Resolution Imager and Spectrograph) infrared camera will replace the NACO and SINFONI instruments at the VLT.
With eROSITA (extended ROentgen Survey with an Imaging Telescope Array), the main instrument of the Russian X-ray gamma-ray satellite Spektr-RG launched from Baikonur in July 2019, the first complete sky survey in the X-ray range was achieved.
External links
http://www.mpe.mpg.de
https://web.archive.org/web/20120609132517/http://www.mpia.de/Public/menu-e.php
http://www.nasa.gov/
http://www.esa.int/esaCP/index.html
References
Extraterrestrial Physics
Education in Munich
Astronomy institutes and departments
Physics research institutes
Garching bei München | Max Planck Institute for Extraterrestrial Physics | Astronomy | 2,805 |
68,657,850 | https://en.wikipedia.org/wiki/Ro07-9749 | Ro07-9749 is a benzodiazepine derivative with sedative and anxiolytic effects, which has been used as an internal standard in the analysis of other benzodiazepines, and also sold as a designer drug.
See also
Flubromazepam
Norflurazepam
Phenazepam
Ro05-4435
References
Designer drugs
2-Fluorophenyl compounds
Benzodiazepines
Iodobenzene derivatives | Ro07-9749 | Chemistry | 103 |
4,106,777 | https://en.wikipedia.org/wiki/Pelargonic%20acid | Pelargonic acid, also called nonanoic acid, is an organic compound with the structural formula CH3(CH2)7COOH. It is a nine-carbon fatty acid. Nonanoic acid is a colorless oily liquid with an unpleasant, rancid odor. It is nearly insoluble in water, but very soluble in organic solvents. The esters and salts of pelargonic acid are called pelargonates or nonanoates.
The acid is named after the pelargonium plant, since oil from its leaves contains esters of the acid.
Preparation
Together with azelaic acid, it is produced industrially by ozonolysis of oleic acid.
Alternatively, pelargonic acid can be produced in a two-step process beginning with coupled dimerization and hydroesterification of 1,3-butadiene. This step produces a doubly unsaturated C9-ester, which can be hydrogenated to give esters of pelargonic acid.
A laboratory preparation involves permanganate oxidation of 1-decene.
Occurrence and uses
Pelargonic acid occurs naturally as esters in the oil of Pelargonium.
Synthetic esters of pelargonic acid, such as methyl pelargonate, are used as flavorings. Pelargonic acid is also used in the preparation of plasticizers and lacquers. The derivative 4-nonanoylmorpholine is an ingredient in some pepper sprays.
The ammonium salt of pelargonic acid, ammonium pelargonate, is a herbicide. It is commonly used in conjunction with glyphosate, a non-selective herbicide, for a quick burn-down effect in the control of weeds in turfgrass. It works by causing leaks in plant cell membranes, allowing chlorophyll molecules to escape the chloroplast. Under sunlight, these misplaced molecules cause immense oxidative damage to the plant.
The methyl ester and ethylene glycol pelargonate act as nematicides against Meloidogyne javanica on Solanum lycopersicum, and the methyl ester also acts against Heterodera glycines and M. incognita on Glycine max.
Esters of pelargonic acid are precursors to lubricants.
Pharmacological effects
Pelargonic acid may be more potent than valproic acid in treating seizures. Moreover, in contrast to valproic acid, pelargonic acid exhibited no effect on HDAC inhibition, suggesting that it is unlikely to show HDAC inhibition-related teratogenicity.
See also
List of saturated fatty acids
List of carboxylic acids
References
External links
MSDS at affymetrix.com
Alkanoic acids
Herbicides
Nematicides | Pelargonic acid | Biology | 583 |
67,871,989 | https://en.wikipedia.org/wiki/Flag%20flying%20day | A flag flying day is a day, decreed officially or by tradition, on which the national flag should be hoisted by every official agency in the country. Private citizens and corporations are also encouraged to fly the flag rather than leaving the flag staff empty or flying family or corporate flags. Flag flying days may also be observed for some provincial flags.
Flag flying days are different from Flag Day holidays that celebrate the flag itself and are usually held just one day per year. Flag flying days normally occur multiple times each year to celebrate national holidays or other occasions.
For flag flying days in various countries, see:
Flag-flying days in Estonia
Flag-flying days in Finland
Flag-flying days in Germany
Flag-flying days in Lithuania
Flag-flying days in Mexico
Flag-flying days in the Netherlands
Flag-flying days in Norway
Flag-flying days in Sweden
Flag-flying days in the United Kingdom
Flag-flying days in the United States
See also
Dance of Flags, Israeli celebration
Flag Day
References
Flags
Flag flying days
Flag practices
Observances | Flag flying day | Mathematics | 203 |
68,661,942 | https://en.wikipedia.org/wiki/Methoxyeugenol | Methoxyeugenol is a naturally occurring allylbenzene and eugenol derivative. It is found in the pericarp and leaves of the toxic Japanese star anise, as well as in crude nutmeg extract, but not in nutmeg essential oil. It also activates PPAR-gamma in vivo.
See also
Acetyleugenol
References
Phenylpropenes
Phenylpropanoids
Secondary metabolites
Methoxy compounds | Methoxyeugenol | Chemistry | 94 |
66,601,521 | https://en.wikipedia.org/wiki/Time%20Lord%20Victorious | Time Lord Victorious is a multiplatform story set within the British science fiction television series Doctor Who. Although the story was not announced until April 2020, its first instalment was released in March 2020, and the final instalment was made available in April 2021 as a ticketed live experience. The serialised story is told through a variety of multimedia including audio dramas, comics, books, short stories, immersive experiences, collectables, and an animated series.
The title refers to an alias the Doctor assumed, claiming supremacy over time and final victory in the Time War.
Plot
The overall storyline includes events linking back to the Fourth Doctor's era, but essentially begins for the Tenth Doctor just after the events of "The Waters of Mars".
The Doctor's actions on Bowie Base One having created a temporal rift, he travels back to the Dark Times of the universe, where he meets the race known as the Kotturuh, who brought death itself into the universe in the early days of history, either making species mortal or killing them in a matter of hours for nothing more than the Kotturuhs' belief that they would contribute nothing to the future. The Doctor gathers an army of mercenaries and even goes so far as to create a virus based on the Kotturuhs' death touch that gives the entire species a 'lifespan' of fifteen minutes, eventually adopting the 'Time Lord Victorious' title as he tries to stop the Kotturuh's influence on history in the first place.
In the future, the Eighth Doctor discovers various temporal anomalies that he eventually traces back to the Dark Times, and he is forced to accompany a Dalek Time Squad into the past to investigate the source. The Ninth Doctor finds himself in the past while on a trip with Rose, and has to save a group of vampires from a previously-unknown female incarnation of Rassilon. The three Doctors meet about the Kotturuh homeworld, but the Tenth initially assumes that the Eighth and Ninth Doctors are a deception, resulting in the three fleets attacking each other until the Doctors make telepathic contact with each other. The Daleks attempt to steal samples from a Great Vampire in the Ninth's coffin ship to create a group of immortal Dalek/vampire hybrids, but the Doctors are able to find the last of the Kotturuh, who had abandoned her peoples' vendetta, and convince her to kill the Dalek hybrids. The Eighth Doctor takes the Dalek squad back into the Time Vortex, and the Ninth and Tenth are able to find a new planet for the vampires to settle on with the aid of a blood substitute.
The Eighth Doctor is able to destroy the Dalek saucer and escape, but a single Dalek survives and is picked up by a colony ship, but is eventually destroyed by the Fourth Doctor. The Tenth Doctor also takes the opportunity to tie up a loose end left by the Eighth Doctor, in the form of a telepathic entity that the Eighth defeated but was unable to properly trap at the time.
Multimedia
Animated series
Daleks!
Daleks! is an animated series based on the eponymous fictional extra-terrestrial race of mutants from the British science fiction television series Doctor Who. The series was written by James Goss as the final instalment in the multi-platform story arc Time Lord Victorious. The series was released in 5 weekly 10-minute episodes from 12 November 2020 on the official Doctor Who YouTube channel. The CGI animation was created by Studio Liddell. The cast includes Nicholas Briggs as the Daleks, Joe Sugg as R-41, Anjli Mohindra as the Mechanoid Queen, and Ayesha Antoine as Mechonoid 2150 and the Chief Archivist.
Following the previous events of Time Lord Victorious, the Daleks ransack the Archive of Islos, only to find that their home planet Skaro has been invaded. Huw Fullerton wrote in Radio Times that the series was "an enjoyable little corner of the Doctor Who universe", although he criticized the visual effects as "a little lacklustre". Aidan Mason of Pop Culture Beast stated that, "this series is a decent watch, especially Planet of the Mechanoids", but stated that the finale was, "full of holes and tropes, as well as slightly rushed."
Live experiences
Doctor Who: Time Fracture
Doctor Who: Time Fracture is an immersive experience offered by UK company Immersive Everywhere in collaboration with the BBC.
The experience is set across multiple times and worlds within its host area, Mayfair. Attendees follow a story across multiple time periods, interact with characters from the Doctor Who universe, and encounter recurring adversaries including the Daleks and the Cybermen.
The immersive experience Time Fracture was due to launch 17 February 2021, but was postponed due to the COVID-19 pandemic. The experience launched on 21 April 2021.
A Dalek Awakens
A Dalek Awakens is an escape room game provided by UK based Escape Hunt Group Ltd in collaboration with the BBC. The escape room launched in March 2020 and was later revealed to be part of Time Lord Victorious. It is available at venues in Birmingham and Reading, and is due to be made available in Norwich and Basingstoke. Players board a mock spaceship and have to solve puzzles in order to prevent an invading Dalek from destroying them and the passengers aboard.
Audio dramas
All audio productions were produced by Big Finish Productions, except The Minds of Magnox, which was produced by BBC Audio. Reviewing Lesser Evils and Master Thief, Bryn Mitchell of We Are Cult stated that, "Like many of Big Finish’s Short Trips, these stories combine a cheap price with quality production, an elegant reading, and just plain good storytelling."
Big Finish Productions
BBC Audio
The Minds of Magnox
Books
All Flesh Is Grass – BBC Books
The Knight, the Fool and The Dead – BBC Books
The Wintertime Paradox – Penguin Books
Short stories
The Dawn of Kotturuh – Released on the BBC Doctor Who mailing list
The Last Message – Included with Doctor Who Figurine Collection: Time Lord Victorious #1 – Hero Collector
Mission to the Known – Included with Doctor Who Figurine Collection: Time Lord Victorious #2 – Hero Collector
Exit Strategy – Included with Doctor Who Figurine Collection: Time Lord Victorious #3 – Hero Collector
The Guide to the Dark Times – Released in the Doctor Who Annual 2021 – Penguin Books
Comics
Defender of the Daleks – Titan Comics
Monstrous Beauty – Doctor Who Magazine
Tales of the Dark Times – via Doctor Who: Comic Creator app
Blu-ray collection
Time Lord Victorious: Road to the Dark Times is a compendium of previously released television stories which link to the larger Time Lord Victorious narrative. The stories included are:
Planet of the Daleks (1973)
Genesis of the Daleks (1975)
The Deadly Assassin (1976)
State of Decay (1980)
The Curse of Fenric (1989)
"The Runaway Bride" (2006)
"The Waters of Mars" (2009)
References
Doctor Who stories
Multimedia works
Works based on Doctor Who | Time Lord Victorious | Technology | 1,455 |
40,525,780 | https://en.wikipedia.org/wiki/Intel%20SHA%20extensions | Intel SHA Extensions are a set of extensions to the x86 instruction set architecture which support hardware acceleration of the Secure Hash Algorithm (SHA) family. They were specified in 2013. Instructions for SHA-512 were introduced with Arrow Lake and Lunar Lake in 2024.
The original SSE-based extensions added four instructions supporting SHA-1 and three for SHA-256.
SHA-1: SHA1RNDS4, SHA1NEXTE, SHA1MSG1, SHA1MSG2
SHA-256: SHA256RNDS2, SHA256MSG1, SHA256MSG2
The newer SHA-512 instruction set comprises AVX-based versions of the original SHA instruction set marked with a V prefix and these three new AVX-based instructions for SHA-512:
VSHA512RNDS2, VSHA512MSG1, VSHA512MSG2
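Support for these instructions can be detected at run time with the CPUID instruction: the original SHA extensions are reported in bit 29 of the EBX register for leaf 7, sub-leaf 0. A minimal C sketch, using the GCC/Clang <cpuid.h> helper (the helper function and header are compiler-specific conveniences, not part of the instruction set itself):

#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang wrapper for the CPUID instruction */

/* Returns 1 if CPUID.(EAX=7,ECX=0):EBX bit 29 (SHA) is set. */
static int has_sha_extensions(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0;                /* CPUID leaf 7 not available */
    return (ebx >> 29) & 1;
}

int main(void)
{
    printf("SHA extensions: %s\n",
           has_sha_extensions() ? "supported" : "not supported");
    return 0;
}

Once support is confirmed, the instructions are normally reached through compiler intrinsics such as _mm_sha256rnds2_epu32 in <immintrin.h> rather than hand-written assembly.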
x86 architecture processors
AMD
All recent AMD processors support the original SHA instruction set:
AMD Zen (2017) and later processors.
Intel
The following Intel processors support the original SHA instruction set:
Intel Goldmont (2016) and later Atom microarchitecture processors.
Intel Cannon Lake (2018/2019), Ice Lake (2019) and later processors for laptops ("mainstream mobile").
Intel Rocket Lake (2021) and later processors for desktop computers.
The following Intel processors support the newer SHA-512 instruction set:
Intel Arrow Lake and Lunar Lake processors.
References
External links
AMD
Intel
X86 instructions
X86 architecture | Intel SHA extensions | Technology | 319 |
12,215,865 | https://en.wikipedia.org/wiki/Code%3A%20The%20Hidden%20Language%20of%20Computer%20Hardware%20and%20Software | Code: The Hidden Language of Computer Hardware and Software (1999) is a book by Charles Petzold that seeks to teach how personal computers work at a hardware and software level. In the preface to the 2000 softcover edition, Petzold wrote that his goal was for readers to understand how computers work at a concrete level that "just might even rival that of electrical engineers and programmers" and that he "went as far back" as he could go in regard to the history of technological development. Petzold describes Code as being structured as moving "up each level in the hierarchy" in which computers are constructed. On June 10, 2022, Petzold announced that an expanded second edition would be published later that year. The second edition was released on July 28, 2022, along with an interactive companion website developed by Petzold.
The idea of writing the book came to him in 1987 while writing a column called "PC Tutor" for PC Magazine.
Chapter outline
Best Friends
Codes and Combinations
Braille and Binary Codes
Anatomy of a Flashlight
Communicating Around Corners
Logic with Switches
Telegraphs and Relays
Relays and Gates
Our Ten Digits
Alternative 10s
Bit by Bit by Bit
Bytes and Hexadecimal
From ASCII to Unicode
Adding with Logic Gates
Is This for Real?
But What About Subtraction?
Feedback and Flip-Flops
Let's Build a Clock!
An Assemblage of Memory
Automating Arithmetic
The Arithmetic Logic Unit
Registers and Buses
CPU Control Signals
Loops, Jumps, and Calls
Peripherals
The Operating System
Coding
The World Brain
Content
Petzold begins Code by discussing older technologies like Morse code, Braille, and Boolean logic, which he uses to explain vacuum tubes, transistors, and integrated circuits. Code is notable for its explanations of historical technologies in order to build the pieces for further understanding. Electricity is explained through the example of a basic flashlight, which is then expanded upon through the explanation of the electrical telegraph. He noted that "very smart people" had to go down the "dead ends" of mechanical computers and decimal computing before reaching a scalable solution—namely, the electronic, binary computer with a von Neumann architecture. The book also covers more recent developments, including topics like floating point math, operating systems, and ASCII.
The book focuses on "pre-networked computers" and does not cover concepts like distributed computing because Petzold thought that it would not be as useful for "most people using the Internet", his intended audience. Specifically, he said in an interview that his "main hope" in writing Code was to impart upon his readers a "really good feeling for what a bit is, and how bits are combined to convey information".
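As an illustration of this bottom-up treatment of bits (the example is not taken from the book itself), the one-bit full adder that the chapter "Adding with Logic Gates" assembles from relays can be written directly in C, one bitwise operator per gate:

#include <stdio.h>

/* One-bit full adder, gate by gate:
   sum       = a XOR b XOR carry_in
   carry_out = (a AND b) OR (carry_in AND (a XOR b)) */
static void full_adder(unsigned a, unsigned b, unsigned cin,
                       unsigned *sum, unsigned *cout)
{
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (cin & (a ^ b));
}

int main(void)
{
    unsigned sum, carry;
    full_adder(1, 1, 0, &sum, &carry);   /* 1 + 1 = 10 in binary */
    printf("sum=%u carry=%u\n", sum, carry);
    return 0;
}

Chaining eight such adders, each carry-out feeding the next carry-in, yields the kind of byte-wide ripple-carry adder the book builds up to.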
Reception
Software engineer and blogger Jeff Atwood described Code as a "love letter to the computer".
Publishers Weekly, shortly after Code's publication, said "Initial response, at least among traditional tech book readers, has been positive" and quotes the book's editor, Ben Ryan, as saying "We're trying to cross the boundary of the computer section, and break out Code as general nonfiction science". It also praises both the quality of the physical book and the style of the writing as easy to read and understand.
Ryan Holihan, writing for Input, calls Code "excellent" and says that "it is, by far, the most straightforward way of explaining the earth shattering power humans can wield when working with 1s and 0s", in a brief but positive review. Code has been included in the syllabi of post-secondary education technical courses, such as "Fundamentals of Modern Software", where it was called "a little dated, but it is a really clear and incredibly accessible presentation of how computers get from electrical currents flowing down wires to programs you can actually use", and other introductory and mid-level computer science and engineering courses.
See also
Bit
Computer memory
History of computing hardware
References
External links
Code by Charles Petzold, interactive companion to the book
1999 non-fiction books
Computer books
Computer science books
Computer programming books
History of computing hardware
American non-fiction books
Microsoft Press books | Code: The Hidden Language of Computer Hardware and Software | Technology | 842 |
53,890,794 | https://en.wikipedia.org/wiki/NGC%20450 | NGC 450 is a spiral galaxy located in the constellation Cetus. It was discovered in 1785 by William Herschel. NGC 450 has a very close companion, UGC 807 (or PGC 4545), which appears attached at the northeast side of the halo. UGC 807 appears fairly faint, fairly small, and elongated. Although UGC 807 appears to form a double system with NGC 450, its redshift is over six times greater than that of NGC 450, so the two are a line-of-sight pair.
References
External links
Galaxies discovered in 1785
Cetus
0450
806
4540
Spiral galaxies
Overlapping galaxies | NGC 450 | Astronomy | 130 |
12,382,234 | https://en.wikipedia.org/wiki/International%20Review%20of%20Cell%20and%20Molecular%20Biology | The International Review of Cell and Molecular Biology is a scientific book series that publishes articles on plant and animal cell biology. Until 2008 it was known as the International Review of Cytology.
References
Molecular and cellular biology journals
English-language journals
Elsevier academic journals | International Review of Cell and Molecular Biology | Chemistry | 52 |
7,198,571 | https://en.wikipedia.org/wiki/Monoclonal%20antibody%20therapy | Monoclonal antibodies (mAbs) have varied therapeutic uses. It is possible to create a mAb that binds specifically to almost any extracellular target, such as cell surface proteins and cytokines. They can be used to render their target ineffective (e.g. by preventing receptor binding), to induce a specific cell signal (by activating receptors), to cause the immune system to attack specific cells, or to bring a drug to a specific cell type (such as with radioimmunotherapy which delivers cytotoxic radiation).
Major applications include cancer, autoimmune diseases, asthma, organ transplants, blood clot prevention, and certain infections.
Antibody structure and function
Immunoglobulin G (IgG) antibodies are large heterodimeric molecules, approximately 150 kDa, composed of two kinds of polypeptide chain, called the heavy (~50 kDa) and the light chain (~25 kDa). The two types of light chains are kappa (κ) and lambda (λ). By cleavage with the enzyme papain, the Fab (fragment-antigen binding) part can be separated from the Fc (fragment crystallizable region) part of the molecule. The Fab fragments contain the variable domains, which consist of three antibody hypervariable amino acid domains, responsible for the antibody's specificity, embedded in constant regions. The four known IgG subclasses are involved in antibody-dependent cellular cytotoxicity.
Antibodies are a key component of the adaptive immune response, playing a central role in both in the recognition of foreign antigens and the stimulation of an immune response to them. The advent of monoclonal antibody technology has made it possible to raise antibodies against specific antigens presented on the surfaces of tumors. Monoclonal antibodies can be acquired in the immune system via passive immunity or active immunity. The advantage of active monoclonal antibody therapy is the fact that the immune system will produce antibodies long-term, with only a short-term drug administration to induce this response. However, the immune response to certain antigens may be inadequate, especially in the elderly. Additionally, adverse reactions from these antibodies may occur because of long-lasting response to antigens. Passive monoclonal antibody therapy can ensure consistent antibody concentration, and can control for adverse reactions by stopping administration. However, the repeated administration and consequent higher cost for this therapy are major disadvantages.
Monoclonal antibody therapy may prove to be beneficial for cancer, autoimmune diseases, and neurological disorders that result in the degeneration of body cells, such as Alzheimer's disease. Monoclonal antibody therapy can aid the immune system because the innate immune system responds to the environmental factors it encounters by discriminating against foreign cells from cells of the body. Therefore, tumor cells that are proliferating at high rates, or body cells that are dying which subsequently cause physiological problems are generally not specifically targeted by the immune system, since tumor cells are the patient's own cells. Tumor cells, however are highly abnormal, and many display unusual antigens. Some such tumor antigens are inappropriate for the cell type or its environment. Monoclonal antibodies can target tumor cells or abnormal cells in the body that are recognized as body cells, but are debilitating to one's health.
History
Immunotherapy developed in the 1970s following the discovery of the structure of antibodies and the development of hybridoma technology, which provided the first reliable source of monoclonal antibodies. These advances allowed for the specific targeting of tumors both in vitro and in vivo. Initial research on malignant neoplasms found mAb therapy of limited and generally short-lived success with blood malignancies. Treatment also had to be tailored to each individual patient, which was impracticable in routine clinical settings.
Four major antibody types that have been developed are murine, chimeric, humanised and human. Antibodies of each type are distinguished by suffixes on their name.
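As a side note on the naming convention, the suffix-to-type mapping is mechanical enough to express in a few lines of C (an illustrative sketch only; real nonproprietary names carry further sub-stems, and pre-convention names such as muromonab do not follow the rule):

#include <stdio.h>
#include <string.h>

/* Classify a monoclonal antibody name by the suffixes described above.
   The longer five-letter suffixes are tested first so that "-ximab"
   and "-zumab" are not mistaken for the four-letter endings. */
static const char *mab_source(const char *name)
{
    size_t n = strlen(name);
    if (n >= 5 && strcmp(name + n - 5, "ximab") == 0) return "chimeric";
    if (n >= 5 && strcmp(name + n - 5, "zumab") == 0) return "humanized";
    if (n >= 4 && strcmp(name + n - 4, "omab") == 0)  return "murine";
    if (n >= 4 && strcmp(name + n - 4, "umab") == 0)  return "human";
    return "unknown";
}

int main(void)
{
    printf("infliximab  -> %s\n", mab_source("infliximab"));   /* chimeric  */
    printf("omalizumab  -> %s\n", mab_source("omalizumab"));   /* humanized */
    printf("tositumomab -> %s\n", mab_source("tositumomab"));  /* murine    */
    printf("adalimumab  -> %s\n", mab_source("adalimumab"));   /* human     */
    return 0;
}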
Murine
Initial therapeutic antibodies were murine analogues (suffix -omab). These antibodies have a short half-life in vivo (due to immune complex formation), show limited penetration into tumour sites, and recruit host effector functions inadequately. Chimeric and humanized antibodies have generally replaced them in therapeutic antibody applications. Understanding of proteomics has proven essential in identifying novel tumour targets.
Initially, murine antibodies were obtained by hybridoma technology, for which Jerne, Köhler and Milstein received a Nobel prize. However the dissimilarity between murine and human immune systems led to the clinical failure of these antibodies, except in some specific circumstances. Major problems associated with murine antibodies included reduced stimulation of cytotoxicity and the formation of complexes after repeated administration, which resulted in mild allergic reactions and sometimes anaphylactic shock. Hybridoma technology has been replaced by recombinant DNA technology, transgenic mice and phage display.
Chimeric and humanized
To reduce murine antibody immunogenicity (attacks by the immune system against the antibody), murine molecules were engineered to remove immunogenic content and to increase immunologic efficiency. This was initially achieved by the production of chimeric (suffix -ximab) and humanized antibodies (suffix -zumab). Chimeric antibodies are composed of murine variable regions fused onto human constant regions. Taking human gene sequences from the kappa light chain and the IgG1 heavy chain results in antibodies that are approximately 65% human. This reduces immunogenicity, and thus increases serum half-life.
Humanised antibodies are produced by grafting murine hypervariable amino acid domains into human antibodies. This results in a molecule of approximately 95% human origin. Humanised antibodies bind antigen much more weakly than the parent murine monoclonal antibody, with reported decreases in affinity of up to several hundredfold. Increases in antibody-antigen binding strength have been achieved by introducing mutations into the complementarity determining regions (CDR), using techniques such as chain-shuffling, randomization of complementarity-determining regions and antibodies with mutations within the variable regions induced by error-prone PCR, E. coli mutator strains and site-specific mutagenesis.
Human monoclonal antibodies
Human monoclonal antibodies (suffix -umab) are produced using transgenic mice or phage display libraries by transferring human immunoglobulin genes into the murine genome and vaccinating the transgenic mouse against the desired antigen, leading to the production of appropriate monoclonal antibodies. Murine antibodies in vitro are thereby transformed into fully human antibodies.
The heavy and light chains of human IgG proteins are expressed in structural polymorphic (allotypic) forms. Human IgG allotype is one of the many factors that can contribute to immunogenicity.
Targeted conditions
Cancer
Anti-cancer monoclonal antibodies can be targeted against malignant cells by several mechanisms. Ramucirumab is a recombinant human monoclonal antibody and is used in the treatment of advanced malignancies. In childhood lymphoma, phase I and II studies have found a positive effect of using antibody therapy.
Using monoclonal antibodies to boost an anticancer immune response is another strategy to fight cancer, in which cancer cells are not targeted directly. Strategies include antibodies engineered to block mechanisms which downregulate anticancer immune responses, such as the checkpoints PD-1 and CTLA-4 (checkpoint therapy), and antibodies modified to stimulate activation of immune cells.
Autoimmune diseases
Monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab, which are effective in rheumatoid arthritis, Crohn's disease and ulcerative colitis through their ability to bind to and inhibit TNF-α. Basiliximab and daclizumab inhibit IL-2 on activated T cells and thereby help prevent acute rejection of kidney transplants. Omalizumab inhibits human immunoglobulin E (IgE) and is useful in moderate-to-severe allergic asthma.
Alzheimer's disease
Alzheimer's disease (AD) is a multi-faceted, age-dependent, progressive neurodegenerative disorder, and a major cause of dementia. According to the amyloid hypothesis, the accumulation of extracellular amyloid beta (Aβ) peptides into plaques via oligomerization leads to the hallmark symptoms of AD through synaptic dysfunction and neurodegeneration. Immunotherapy via exogenous monoclonal antibody (mAb) administration has been used to treat various central nervous system disorders. In the case of AD, immunotherapy is believed to inhibit Aβ oligomerization or promote clearance of Aβ from the brain, and thereby prevent neurotoxicity.
However, mAbs are large molecules, and due to the blood–brain barrier, uptake of mAbs into the brain is extremely limited; only approximately 1 in 1000 mAb molecules is estimated to pass. However, the peripheral sink hypothesis proposes a mechanism by which mAbs may not need to cross the blood–brain barrier. Many research studies are therefore being conducted, building on past failed attempts to treat AD.
Anti-Aβ vaccines can promote antibody-mediated clearance of Aβ plaques in transgenic mouse models expressing amyloid precursor protein (APP), and can reduce cognitive impairment. Vaccines stimulate the immune system to produce its own antibodies, in the case of Alzheimer's disease by administration of the antigen Aβ; this is known as active immunotherapy. Another strategy is so-called passive immunotherapy, in which the antibodies are produced externally in cultured cells and are delivered to the patient in the form of a drug. In mice expressing APP, both active and passive immunization with anti-Aβ antibodies have been shown to be effective in clearing plaques, and can improve cognitive function.
Currently, there are two FDA-approved antibody therapies for Alzheimer's disease, Aducanumab and Lecanemab. Aducanumab has received accelerated approval, while Lecanemab has received full approval. Several clinical trials using passive and active immunization have been performed and some are under way, with results expected in a couple of years. These drugs are often administered during the early onset of AD. Other research and drug development for early intervention and AD prevention is ongoing. Examples of important mAb drugs that have been or are under evaluation for treatment of AD include Bapineuzumab, Solanezumab, Gantenerumab, Crenezumab, Aducanumab, Lecanemab and Donanemab.
Bapineuzumab
Bapineuzumab, a humanized anti-Aβ mAb, is directed against the N-terminus of Aβ. Phase II clinical trials of Bapineuzumab in mild to moderate AD patients resulted in reduced Aβ concentration in the brain. However, in patients carrying the apolipoprotein E (APOE) ε4 allele, Bapineuzumab treatment is also accompanied by vasogenic edema, a cytotoxic condition in which the blood–brain barrier has been disrupted, affecting white matter through excess accumulation of fluid from capillaries in intracellular and extracellular spaces of the brain.
In Phase III clinical trials, Bapineuzumab showed a promising positive effect on biomarkers of AD but failed to show an effect on cognitive decline. Bapineuzumab was therefore discontinued after failing in the Phase III clinical trials.
Solanezumab
Solanezumab, an anti-Aβ mAb, targets the N-terminus of Aβ. In Phase I and Phase II clinical trials, Solanezumab treatment resulted in elevation of Aβ in cerebrospinal fluid, indicating a reduced concentration of Aβ plaques. Additionally, there were no associated adverse side effects. Phase III clinical trials of Solanezumab brought about a significant reduction in cognitive impairment in patients with mild AD, but not in patients with severe AD. However, Aβ concentration did not significantly change, nor did other AD biomarkers, including phospho-tau expression and hippocampal volume. The Phase III clinical trials ultimately failed, as Solanezumab did not show an effect on cognitive decline in comparison to placebo.
Lecanemab
Lecanemab (BAN2401) is a humanized mAb that selectively targets toxic soluble Aβ protofibrils. In phase 3 clinical trials, Lecanemab showed a 27% slower cognitive decline after 18 months of treatment in comparison to placebo. The phase 3 clinical trials also reported infusion-related reactions, amyloid-related imaging abnormalities and headaches as the most common side effects of Lecanemab. In July 2023 the FDA gave Lecanemab full approval for the treatment of Alzheimer's disease, and it was given the commercial name Leqembi.
Preventive trials
Failure of several drugs in Phase III clinical trials has led to endeavours in AD prevention and early intervention for onset AD treatment. Passive anti-Aβ mAb treatment can be used in preventive attempts to modify AD progression before it causes extensive brain damage and symptoms. Trials using mAb treatment for patients positive for genetic risk factors, and elderly patients positive for indicators of AD, are under way. These include anti-Aβ treatment in Asymptomatic Alzheimer's Disease (A4), the Alzheimer's Prevention Initiative (API), and DIAN-TU.
The A4 study on older individuals who are positive for indicators of AD but negative for genetic risk factors will test Solanezumab in Phase III clinical trials, as a follow-up to previous Solanezumab studies.
DIAN-TU, launched in December 2012, focuses on young patients positive for genetic mutations that are risks for AD. This study uses Solanezumab and Gantenerumab. Gantenerumab, the first fully human mAb that preferentially interacts with oligomerized Aβ plaques in the brain, caused a significant reduction in Aβ concentration in Phase I clinical trials, preventing plaque formation and accumulation without altering the plasma concentration of Aβ. Phase II and III clinical trials are currently being conducted.
Therapy types
Radioimmunotherapy
Radioimmunotherapy (RIT) involves the use of radioactively conjugated murine antibodies against cellular antigens. Most research involves their application to lymphomas, as these are highly radio-sensitive malignancies. To limit radiation exposure, murine antibodies were chosen, as their high immunogenicity promotes rapid clearance from the body. Tositumomab is an example used for non-Hodgkin's lymphoma.
Antibody-directed enzyme prodrug therapy
Antibody-directed enzyme prodrug therapy (ADEPT) involves the application of cancer-associated monoclonal antibodies that are linked to a drug-activating enzyme. Systemic administration of a non-toxic agent results in the antibody's conversion to a toxic drug, resulting in a cytotoxic effect that can be targeted at malignant cells. The clinical success of ADEPT treatments is limited.
Antibody-drug conjugates
Antibody-drug conjugates (ADCs) are antibodies linked to one or more drug molecules. Typically, when the ADC meets the target cell (e.g. a cancerous cell), the drug is released to kill it. Many ADCs are in clinical development; a few have been approved.
Immunoliposome therapy
Immunoliposomes are antibody-conjugated liposomes. Liposomes can carry drugs or therapeutic nucleotides and when conjugated with monoclonal antibodies, may be directed against malignant cells. Immunoliposomes have been successfully used in vivo to convey tumour-suppressing genes into tumours, using an antibody fragment against the human transferrin receptor. Tissue-specific gene delivery using immunoliposomes has been achieved in brain and breast cancer tissue.
Checkpoint therapy
Checkpoint therapy uses antibodies and other techniques to circumvent the defenses that tumors use to suppress the immune system. Each defense is known as a checkpoint. Compound therapies combine antibodies to suppress multiple defensive layers. Known checkpoints include CTLA-4 targeted by ipilimumab, PD-1 targeted by nivolumab and pembrolizumab and the tumor microenvironment.
The tumor microenvironment (TME) has features that prevent the recruitment of T cells to the tumor. These include nitration of the chemokine CCL2, which traps T cells in the stroma. Tumor vasculature helps tumors preferentially recruit other immune cells over T cells, in part through endothelial cell (EC)–specific expression of FasL, ETBR, and B7H3. Myelomonocytic and tumor cells can up-regulate expression of PD-L1, partly driven by hypoxic conditions and cytokine production, such as IFNβ. Aberrant metabolite production in the TME, such as the pathway regulation by IDO, can affect T cell functions directly and indirectly via cells such as Treg cells. CD8 cells can be suppressed by B cell regulation of TAM phenotypes. Cancer-associated fibroblasts (CAFs) have multiple TME functions, in part through extracellular matrix (ECM)–mediated T cell trapping and CXCL12-regulated T cell exclusion.
FDA-approved therapeutic antibodies
The first FDA-approved therapeutic monoclonal antibody was a murine IgG2a CD3 specific transplant rejection drug, OKT3 (also called muromonab), in 1986. This drug found use in solid organ transplant recipients who became steroid resistant. Hundreds of therapies are undergoing clinical trials. Most are concerned with immunological and oncological targets.
Tositumomab – Bexxar – 2003 – CD20
Mogamulizumab – Poteligeo – August 2018 – CCR4
Moxetumomab pasudotox – Lumoxiti – September 2018 – CD22
Cemiplimab – Libtayo – September 2018 – PD-1
Polatuzumab vedotin – Polivy – June 2019 – CD79B
Bispecific antibodies have also reached the clinic. In 2009, the bispecific antibody catumaxomab was approved in the European Union; it was later withdrawn for commercial reasons. Others include amivantamab, blinatumomab, teclistamab, and emicizumab.
Economics
Since 2000, the therapeutic market for monoclonal antibodies has grown exponentially. The "big 5" therapeutic antibodies on the market in 2006, bevacizumab, trastuzumab (both oncology), adalimumab, infliximab (both autoimmune and inflammatory disorders, 'AIID'), and rituximab (oncology and AIID), accounted for 80% of revenues that year. In 2007, eight of the 20 best-selling biotechnology drugs in the U.S. were therapeutic monoclonal antibodies. This rapid growth in demand for monoclonal antibody production has been well accommodated by the industrialization of mAb manufacturing.
References
External links
Cancer Management Handbook: Principles of Oncologic Pharmacotherapy
Immunology
Antiviral drugs
170,853 | https://en.wikipedia.org/wiki/Supergiant | Supergiants are among the most massive and most luminous stars. Supergiant stars occupy the top region of the Hertzsprung–Russell diagram with absolute visual magnitudes between about −3 and −8. The temperature range of supergiant stars spans from about 3,400 K to over 20,000 K.
Definition
The title supergiant, as applied to a star, does not have a single concrete definition. The term giant star was first coined by Hertzsprung when it became apparent that the majority of stars fell into two distinct regions of the Hertzsprung–Russell diagram. One region contained larger and more luminous stars of spectral types A to M and received the name giant. Subsequently, as they lacked any measurable parallax, it became apparent that some of these stars were significantly larger and more luminous than the bulk, and the term super-giant arose, quickly adopted as supergiant.
Supergiants with spectral classes of O to A are typically referred to as blue supergiants, supergiants with spectral classes F and G are referred to as yellow supergiants, while those of spectral classes K to M are red supergiants. Another convention uses temperature: supergiants with effective temperatures below 4800 K are deemed red supergiants; those with temperatures between 4800 and 7500 K are yellow supergiants, and those with temperatures exceeding 7500 K are blue supergiants. These correspond approximately to spectral types M and K for red supergiants, G, F, and late A for yellow supergiants, and early A, B, and O for blue supergiants.
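For concreteness, the temperature convention above amounts to a simple threshold rule. The following minimal Python sketch encodes it; the function name and the handling of the boundary values are our own illustrative choices, not part of any formal standard.

```python
def supergiant_colour(t_eff: float) -> str:
    """Classify a supergiant by effective temperature in kelvin,
    following the approximate thresholds quoted above."""
    if t_eff < 4800:
        return "red supergiant"      # roughly spectral types K and M
    if t_eff <= 7500:
        return "yellow supergiant"   # roughly G, F, and late A
    return "blue supergiant"         # roughly early A, B, and O

# Illustrative temperatures:
print(supergiant_colour(3600))   # red supergiant
print(supergiant_colour(6000))   # yellow supergiant
print(supergiant_colour(20000))  # blue supergiant
```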
Spectral luminosity class
Supergiant stars can be identified on the basis of their spectra, with distinctive lines sensitive to high luminosity and low surface gravity. In 1897, Antonia C. Maury had divided stars based on the widths of their spectral lines, with her class "c" identifying stars with the narrowest lines. Although it was not known at the time, these were the most luminous stars. In 1943, Morgan and Keenan formalised the definition of spectral luminosity classes, with class I referring to supergiant stars. The same system of MK luminosity classes is still used today, with refinements based on the increased resolution of modern spectra. Supergiants occur in every spectral class from young blue class O supergiants to highly evolved red class M supergiants. Because they are enlarged compared to main-sequence and giant stars of the same spectral type, they have lower surface gravities, and changes can be observed in their line profiles. Supergiants are also evolved stars with higher levels of heavy elements than main-sequence stars. This is the basis of the MK luminosity system which assigns stars to luminosity classes purely from observing their spectra.
In addition to the line changes due to low surface gravity and fusion products, the most luminous stars have high mass-loss rates and resulting clouds of expelled circumstellar materials which can produce emission lines, P Cygni profiles, or forbidden lines. The MK system assigns stars to luminosity classes: Ib for supergiants; Ia for luminous supergiants; and 0 (zero) or Ia+ for hypergiants. In reality there is much more of a continuum than well defined bands for these classifications, and classifications such as Iab are used for intermediate luminosity supergiants. Supergiant spectra are frequently annotated to indicate spectral peculiarities, for example B2 Iae or F5 Ipec.
Evolutionary supergiants
Supergiants can also be defined as a specific phase in the evolutionary history of certain stars. Stars with initial masses above quickly and smoothly initiate helium core fusion after they have exhausted their hydrogen, and continue fusing heavier elements after helium exhaustion until they develop an iron core, at which point the core collapses to produce a Type II supernova. Once these massive stars leave the main sequence, their atmospheres inflate, and they are described as supergiants. Stars initially under will never form an iron core and in evolutionary terms do not become supergiants, although they can reach luminosities thousands of times the sun's. They cannot fuse carbon and heavier elements after the helium is exhausted, so they eventually just lose their outer layers, leaving the core of a white dwarf. The phase where these stars have both hydrogen and helium burning shells is referred to as the asymptotic giant branch (AGB), as stars gradually become more and more luminous class M stars. Stars of may fuse sufficient carbon on the AGB to produce an oxygen-neon core and an electron-capture supernova, but astrophysicists categorise these as super-AGB stars rather than supergiants.
Categorisation of evolved stars
There are several categories of evolved stars that are not supergiants in evolutionary terms but may show supergiant spectral features or have luminosities comparable to supergiants.
Asymptotic-giant-branch (AGB) and post-AGB stars are highly evolved lower-mass red giants with luminosities that can be comparable to more massive red supergiants, but because of their low mass, being in a different stage of development (helium shell burning), and their lives ending in a different way (planetary nebula and white dwarf rather than supernova), astrophysicists prefer to keep them separate. The dividing line becomes blurred at around (or as high as in some models) where stars start to undergo limited fusion of elements heavier than helium. Specialists studying these stars often refer to them as super AGB stars, since they have many properties in common with AGB such as thermal pulsing. Others describe them as low-mass supergiants since they start to burn elements heavier than helium and can explode as supernovae. Many post-AGB stars receive spectral types with supergiant luminosity classes. For example, RV Tauri has an Ia (bright supergiant) luminosity class despite being less massive than the sun. Some AGB stars also receive a supergiant luminosity class, most notably W Virginis variables such as W Virginis itself, stars that are executing a blue loop triggered by thermal pulsing. A very small number of Mira variables and other late AGB stars have supergiant luminosity classes, for example α Herculis.
Classical Cepheid variables typically have supergiant luminosity classes, although only the most luminous and massive will actually go on to develop an iron core. The majority of them are intermediate mass stars fusing helium in their cores and will eventually transition to the asymptotic giant branch. δ Cephei itself is an example with a luminosity of and a mass of .
Wolf–Rayet stars are also high-mass luminous evolved stars, hotter than most supergiants and smaller, visually less bright but often more luminous because of their high temperatures. They have spectra dominated by helium and other heavier elements, usually showing little or no hydrogen, which is a clue to their nature as stars even more evolved than supergiants. Just as the AGB stars occur in almost the same region of the HR diagram as red supergiants, Wolf–Rayet stars can occur in the same region of the HR diagram as the hottest blue supergiants and main-sequence stars.
The most massive and luminous main-sequence stars are almost indistinguishable from the supergiants they quickly evolve into. They have almost identical temperatures and very similar luminosities, and only the most detailed analyses can distinguish the spectral features that show they have evolved away from the narrow early O-type main-sequence to the nearby area of early O-type supergiants. Such early O-type supergiants share many features with WNLh Wolf–Rayet stars and are sometimes designated as slash stars, intermediates between the two types.
Luminous blue variable (LBV) stars occur in the same region of the HR diagram as blue supergiants but are generally classified separately. They are evolved, expanded, massive, and luminous stars, often hypergiants, but they have very specific spectral variability, which defies the assignment of a standard spectral type. LBVs observed only at a particular time, or over a period when they are stable, may simply be designated as hot supergiants or as candidate LBVs due to their luminosity.
Hypergiants are frequently treated as a different category of star from supergiants, although in all important respects they are just a more luminous category of supergiant. They are evolved, expanded, massive and luminous stars like supergiants, but at the most massive and luminous extreme, and with particular additional properties of undergoing high mass-loss due to their extreme luminosities and instability. Generally only the more evolved supergiants show hypergiant properties, since their instability increases after high mass-loss and some increase in luminosity.
Some B[e] stars are supergiants, although other B[e] stars are clearly not. Some researchers distinguish the B[e] objects as separate from supergiants, while others prefer to define massive evolved B[e] stars as a subgroup of supergiants. The latter has become more common with the understanding that the B[e] phenomenon arises separately in a number of distinct types of stars, including some that are clearly just a phase in the life of supergiants.
Properties
Supergiants have masses from 8 to 12 times the Sun upwards, and luminosities from about 1,000 to over a million times the Sun. They vary greatly in radius, usually from 30 to 500, or even in excess of 1,000, solar radii. They are massive enough to begin helium-core burning gently before the core becomes degenerate, without a flash and without the strong dredge-ups that lower-mass stars experience. They go on to successively ignite heavier elements, usually all the way to iron. Also because of their high masses, they are destined to explode as supernovae.
The Stefan–Boltzmann law dictates that the relatively cool surfaces of red supergiants radiate much less energy per unit area than those of blue supergiants; thus, for a given luminosity, red supergiants are larger than their blue counterparts. Radiation pressure limits the largest cool supergiants to around 1,500 and the most massive hot supergiants to around a million (Mbol around −10). Stars near and occasionally beyond these limits become unstable, pulsate, and experience rapid mass loss.
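The size comparison follows directly from the Stefan–Boltzmann law. The relation below is standard; the numerical comparison is our own illustrative example.

```latex
% Luminosity of a spherical star of radius R and effective temperature T:
L = 4\pi R^{2} \sigma T_{\mathrm{eff}}^{4}
\qquad\Longrightarrow\qquad
R \propto \sqrt{L}\; T_{\mathrm{eff}}^{-2}.
% At equal L, a 3,500 K red supergiant is larger than a 20,000 K blue
% supergiant by a factor of (20000/3500)^{2} \approx 33 in radius.
```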
Surface gravity
The supergiant luminosity class is assigned on the basis of spectral features that are largely a measure of surface gravity, although such stars are also affected by other properties such as microturbulence. Supergiants typically have surface gravities of around log(g) 2.0 cgs and lower, although bright giants (luminosity class II) have statistically very similar surface gravities to normal Ib supergiants. Cool luminous supergiants have lower surface gravities, with the most luminous (and unstable) stars having log(g) around zero. Hotter supergiants, even the most luminous, have surface gravities around one, due to their higher masses and smaller radii.
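As a worked example of the numbers quoted above, surface gravity scales with mass and radius as follows; the specific masses and radii are our own illustrative values.

```latex
% Surface gravity relative to the Sun (log g_sun ≈ 4.44 in cgs units):
\log g = \log g_{\odot} + \log\frac{M}{M_{\odot}} - 2\log\frac{R}{R_{\odot}}.
% A cool luminous supergiant with M = 10 M_sun and R = 500 R_sun:
%   log g ≈ 4.44 + 1.00 - 5.40 ≈ 0.04, i.e. close to zero.
% A hotter supergiant of the same mass with R = 150 R_sun:
%   log g ≈ 4.44 + 1.00 - 4.35 ≈ 1.1, i.e. around one.
```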
Temperature
There are supergiant stars at all of the main spectral classes and across the whole range of temperatures from mid-M class stars at around 3,400 K to the hottest O class stars over 40,000 K. Supergiants are generally not found cooler than mid-M class. This is expected theoretically since they would be catastrophically unstable; however, there are potential exceptions among extreme stars such as VX Sagittarii.
Although supergiants exist in every class from O to M, the majority are spectral type B (blue supergiants), more than at all other spectral classes combined. A much smaller grouping consists of very low-luminosity G-type supergiants, intermediate mass stars burning helium in their cores before reaching the asymptotic giant branch. A distinct grouping is made up of high-luminosity supergiants at early B (B0-2) and very late O (O9.5), more common even than main sequence stars of those spectral types. The number of post-main-sequence blue supergiants is greater than expected from theoretical models, leading to the "blue supergiant problem".
The relative numbers of blue, yellow, and red supergiants are an indicator of the speed of stellar evolution and are used as a powerful test of models of the evolution of massive stars.
Luminosity
The supergiants lie more or less on a horizontal band occupying the entire upper portion of the HR diagram, but there are some variations at different spectral types. These variations are due partly to different methods for assigning luminosity classes at different spectral types, and partly to actual physical differences in the stars.
The bolometric luminosity of a star reflects its total output of electromagnetic radiation at all wavelengths. For very hot and very cool stars, the bolometric luminosity is dramatically higher than the visual luminosity, sometimes several magnitudes or a factor of five or more. This bolometric correction is approximately one magnitude for mid B, late K, and early M stars, increasing to three magnitudes (a factor of 15) for O and mid M stars.
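The conversion between a magnitude difference and a luminosity factor used here is the standard one; the worked numbers simply check the figures quoted above.

```latex
% A bolometric correction of \Delta m magnitudes is a flux factor of
\frac{L_{\mathrm{bol}}}{L_{V}} = 10^{0.4\,\Delta m}.
% \Delta m = 1 gives 10^{0.4} \approx 2.5;
% \Delta m = 3 gives 10^{1.2} \approx 15.8, the "factor of 15" quoted
% for O and mid M stars.
```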
All supergiants are larger and more luminous than main sequence stars of the same temperature. This means that hot supergiants lie on a relatively narrow band above bright main sequence stars. A B0 main sequence star has an absolute magnitude of about −5, meaning that all B0 supergiants are significantly brighter than absolute magnitude −5. Bolometric luminosities for even the faintest blue supergiants are tens of thousands of times that of the sun. The brightest can be and are often unstable, such as α Cygni variables and luminous blue variables.
The very hottest supergiants with early O spectral types occur in an extremely narrow range of luminosities above the highly luminous early O main sequence and giant stars. They are not classified separately into normal (Ib) and luminous (Ia) supergiants, although they commonly have other spectral type modifiers such as "f" for nitrogen and helium emission (e.g. O2 If for HD 93129A).
Yellow supergiants can be considerably fainter than absolute magnitude −5, with some examples around −2 (e.g. 14 Persei). With bolometric corrections around zero, they may only be a few hundred times the luminosity of the sun. These are not massive stars, though; instead, they are stars of intermediate mass that have particularly low surface gravities, often due to instability such as Cepheid pulsations. The classification of these intermediate-mass stars as supergiants during a relatively long-lasting phase of their evolution accounts for the large number of low-luminosity yellow supergiants. The most luminous yellow stars, the yellow hypergiants, are amongst the visually brightest stars, with absolute magnitudes around −9, although still less than .
There is a strong upper limit to the luminosity of red supergiants at around . Stars that would be brighter than this shed their outer layers so rapidly that they remain hot supergiants after they leave the main sequence. The majority of red supergiants were main sequence stars and now have luminosities below , and there are very few bright supergiant (Ia) M class stars. The least luminous stars classified as red supergiants are some of the brightest AGB and post-AGB stars, highly expanded and unstable low mass stars such as the RV Tauri variables. The majority of AGB stars are given giant or bright giant luminosity classes, but particularly unstable stars such as W Virginis variables may be given a supergiant classification (e.g. W Virginis itself). The faintest red supergiants are around absolute magnitude −3.
Variability
While most supergiants such as Alpha Cygni variables, semiregular variables, and irregular variables show some degree of photometric variability, certain types of variables amongst the supergiants are well defined. The instability strip crosses the region of supergiants, and specifically many yellow supergiants are Classical Cepheid variables. The same region of instability extends to include the even more luminous yellow hypergiants, an extremely rare and short-lived class of luminous supergiant. Many R Coronae Borealis variables, although not all, are yellow supergiants, but this variability is due to their unusual chemical composition rather than a physical instability.
Further types of variable stars such as RV Tauri variables and PV Telescopii variables are often described as supergiants. RV Tau stars are frequently assigned spectral types with a supergiant luminosity class on account of their low surface gravity, and they are amongst the most luminous of the AGB and post-AGB stars, having masses similar to the sun; likewise, the even rarer PV Tel variables are often classified as supergiants, but have lower luminosities than supergiants and peculiar B[e] spectra extremely deficient in hydrogen. Possibly they are also post-AGB objects or "born-again" AGB stars.
The LBVs are variable with multiple semi-regular periods and less predictable eruptions and giant outbursts. They are usually supergiants or hypergiants, occasionally with Wolf-Rayet spectra—extremely luminous, massive, evolved stars with expanded outer layers, but they are so distinctive and unusual that they are often treated as a separate category without being referred to as supergiants or given a supergiant spectral type. Often their spectral type will be given just as "LBV" because they have peculiar and highly variable spectral features, with temperatures varying from about 8,000 K in outburst up to 20,000 K or more when "quiescent."
Chemical abundances
The abundance of various elements at the surface of supergiants is different from less luminous stars. Supergiants are evolved stars and may have undergone convection of fusion products to the surface.
Cool supergiants show enhanced helium and nitrogen at the surface due to convection of these fusion products to the surface during the main sequence of very massive stars, to dredge-ups during shell burning, and to the loss of the outer layers of the star. Helium is formed in the core and shell by fusion of hydrogen, and nitrogen accumulates relative to carbon and oxygen during CNO-cycle fusion. At the same time, carbon and oxygen abundances are reduced. Red supergiants can be distinguished from luminous but less massive AGB stars by unusual chemicals at the surface, enhancement of carbon from deep third dredge-ups, as well as carbon-13, lithium and s-process elements. Late-phase AGB stars can become highly oxygen-enriched, producing OH masers.
Hotter supergiants show differing levels of nitrogen enrichment. This may be due to different levels of mixing on the main sequence due to rotation or because some blue supergiants are newly evolved from the main sequence while others have previously been through a red supergiant phase. Post-red supergiant stars have a generally higher level of nitrogen relative to carbon due to convection of CNO-processed material to the surface and the complete loss of the outer layers. Surface enhancement of helium is also stronger in post-red supergiants, representing more than a third of the atmosphere.
Evolution
O type main-sequence stars and the most massive of the B type blue-white stars become supergiants. Because of their extreme masses, they have short lifespans, from a few hundred thousand years to about 30 million years. They are mainly observed in young galactic structures such as open clusters, the arms of spiral galaxies, and in irregular galaxies. They are less abundant in spiral galaxy bulges and are rarely observed in elliptical galaxies or globular clusters, which are composed mainly of old stars.
Supergiants develop when massive main-sequence stars run out of hydrogen in their cores, at which point they start to expand, just like lower-mass stars. Unlike lower-mass stars, however, they begin to fuse helium in the core smoothly and not long after exhausting their hydrogen. This means that they do not increase their luminosity as dramatically as lower-mass stars, and they progress nearly horizontally across the HR diagram to become red supergiants. Also unlike lower-mass stars, red supergiants are massive enough to fuse elements heavier than helium, so they do not puff off their atmospheres as planetary nebulae after a period of hydrogen and helium shell burning; instead, they continue to burn heavier elements in their cores until they collapse. They cannot lose enough mass to form a white dwarf, so they will leave behind a neutron star or black hole remnant, usually after a core collapse supernova explosion.
Stars more massive than about cannot expand into a red supergiant. Because they burn too quickly and lose their outer layers too quickly, they reach the blue supergiant stage, or perhaps yellow hypergiant, before returning to become hotter stars. The most massive stars, above about , hardly move at all from their position as O main-sequence stars. These convect so efficiently that they mix hydrogen from the surface right down to the core. They continue to fuse hydrogen until it is almost entirely depleted throughout the star, then rapidly evolve through a series of stages of similarly hot and luminous stars: supergiants, slash stars, WNh-, WN-, and possibly WC- or WO-type stars. They are expected to explode as supernovae, but it is not clear how far they evolve before this happens. The existence of these supergiants still burning hydrogen in their cores may necessitate a slightly more complex definition of supergiant: a massive star with increased size and luminosity due to fusion products building up, but still with some hydrogen remaining.
The first stars in the universe are thought to have been considerably brighter and more massive than the stars in the modern universe. Part of the theorized population III of stars, their existence is necessary to explain observations of elements other than hydrogen and helium in quasars. Possibly larger and more luminous than any supergiant known today, their structure was quite different, with reduced convection and less mass loss. Their very short lives are likely to have ended in violent photodisintegration or pair instability supernovae.
Supernova progenitors
Most type II supernova progenitors are thought to be red supergiants, while the less common type Ib/c supernovae are produced by hotter Wolf–Rayet stars that have completely lost more of their hydrogen atmosphere. Almost by definition, supergiants are destined to end their lives violently. Stars large enough to start fusing elements heavier than helium do not seem to have any way to lose enough mass to avoid catastrophic core collapse, although some may collapse, almost without trace, into their own central black holes.
The simple "onion" models showing red supergiants inevitably developing to an iron core and then exploding have been shown, however, to be too simplistic. The progenitor for the unusual type II Supernova 1987A was a blue supergiant, thought to have already passed through the red supergiant phase of its life, and this is now known to be far from an exceptional situation. Much research is now focused on how blue supergiants can explode as a supernova and when red supergiants can survive to become hotter supergiants again.
Well known examples
Supergiants are rare and short-lived stars, but their high luminosity means that there are many naked-eye examples, including some of the brightest stars in the sky. Rigel, the brightest star in the constellation Orion, is a typical blue-white supergiant; the three stars of Orion's Belt are all blue supergiants; Deneb is the brightest star in Cygnus, another blue supergiant; and Delta Cephei (itself the prototype) and Polaris are Cepheid variables and yellow supergiants. Antares and VV Cephei A are red supergiants. μ Cephei is considered a red hypergiant due to its large luminosity; it is one of the reddest stars visible to the naked eye and one of the largest in the galaxy. Rho Cassiopeiae, a variable yellow hypergiant, is one of the most luminous naked-eye stars. Betelgeuse, the second brightest star in the constellation Orion, is a red supergiant that may have been a yellow supergiant in antiquity.
See also
List of stars with resolved images
Planetary nebula
List of largest stars
References
External links
http://alobel.freeshell.org/rcas.html
http://www.solstation.com/x-objects/rho-cas.htm
Star types
Stellar phenomena | Supergiant | Physics,Astronomy | 5,178 |
43,765,436 | https://en.wikipedia.org/wiki/Sponsoring%20Consortium%20for%20Open%20Access%20Publishing%20in%20Particle%20Physics | The Sponsoring Consortium for Open Access Publishing in Particle Physics (or SCOAP3) is an international collaboration in the high-energy physics community to convert traditional closed access physics journals to open access, freely available for everyone to read and reuse, shifting the burden of the publishing cost away from readers (the traditional model) and from authors (in the case of hybrid open access journals). Under the terms of the agreement, authors retain copyright, and the articles published under SCOAP3 remain available in perpetuity under a CC BY license. The initiative was promoted by CERN in collaboration with international partners.
Participating countries in the agreement sponsor SCOAP3 journals through the consortium, and contribute according to their scientific output. More productive countries pay more, while lower-output countries pay less.
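Purely as an illustrative sketch (the actual SCOAP3 apportionment formula is set by the consortium and is not reproduced here), a contribution scheme proportional to publication output can be expressed as follows; all names and numbers are hypothetical.

```python
def pro_rata_shares(total_cost: float, articles: dict[str, int]) -> dict[str, float]:
    """Split a total cost in proportion to each member's article count.
    Hypothetical helper for illustration; not the real SCOAP3 formula."""
    total = sum(articles.values())
    return {member: total_cost * count / total
            for member, count in articles.items()}

# Hypothetical figures, for illustration only:
print(pro_rata_shares(1_000_000, {"Country A": 500,
                                  "Country B": 300,
                                  "Country C": 200}))
# {'Country A': 500000.0, 'Country B': 300000.0, 'Country C': 200000.0}
```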
Participating journals
SCOAP3 fully supports journals that mostly publish High-Energy Physics content, and supports individual articles in other journals that their authors have submitted to a High-Energy Physics category on arXiv.org. Each year more than 4,000 articles are published in open access as part of the initiative.
In 2012, SCOAP3 reached agreements with 12 subscription journals to make their articles openly accessible. This agreement would cover 90% of all published particle physics articles from 2014 onwards. Of the original 12 journals, two journals pulled out of the agreement: Physical Review C and Physical Review D.
On April 19, 2016, SCOAP3 announced the extension of the initiative until 2019, with 8 journals participating. From 2018, the APS also joined with three journals, bringing the total to 11 supported journals at present.
The following journals participate currently, or have participated in the first phase of the consortium:
Book programme
In 2022 SCOAP3 entered into partnerships with leading academic publishers, including Cambridge University Press, Oxford University Press, Springer Nature, Taylor & Francis, and World Scientific, to make over 100 textbooks available open access.
Members
Countries are usually represented by one or few library consortia, funding agencies or research organizations that act as a coordinator for the country. Currently, 44 countries as well as 3 intergovernmental organisations (CERN, IAEA, JINR) are members of the consortium.
References
Further reading
Open Access publishing initiative, SCOAP3, to start on 1 January 2014
SCOAP3 Lifts Off: An 8 Interview with Ann Okerson
External links
Open access projects
Physics organizations
CERN
Particle physics journals | Sponsoring Consortium for Open Access Publishing in Particle Physics | Physics | 490 |
9,437,083 | https://en.wikipedia.org/wiki/Environmental%20flow | Environmental flows describe the quantity, timing, and quality of water flows required to sustain freshwater and estuarine ecosystems and the human livelihoods and well-being that depend on these ecosystems. In the Indian context, river flows required for cultural and spiritual needs also assume significance. Through implementation of environmental flows, water managers strive to achieve a flow regime, or pattern, that provides for human uses and maintains the essential processes required to support healthy river ecosystems. Environmental flows do not necessarily require restoring the natural, pristine flow patterns that would occur absent human development, use, and diversion but, instead, are intended to produce a broader set of values and benefits from rivers than from management focused strictly on water supply, energy, recreation, or flood control.
Rivers are parts of integrated systems that include floodplains and riparian corridors. Collectively these systems provide a large suite of benefits. However, the world's rivers are increasingly being altered through the construction of dams, diversions, and levees. More than half of the world's large rivers are dammed, a figure that continues to increase. Almost 1,000 dams are planned or under construction in South America and 50 new dams are planned on China's Yangtze River alone. Dams and other river structures change the downstream flow patterns and consequently affect water quality, temperature, sediment movement and deposition, fish and wildlife, and the livelihoods of people who depend on healthy river ecosystems. Environmental flows seek to maintain these river functions while at the same time providing for traditional offstream benefits.
Evolution of environmental flow concepts and recognition
From the turn of the 20th century through the 1960s, water management in developed nations focused largely on maximizing flood protection, water supplies, and hydropower generation. During the 1970s, the ecological and economic effects of these projects prompted scientists to seek ways to modify dam operations to maintain certain fish species. The initial focus was on determining the minimum flow necessary to preserve an individual species, such as trout, in a river. Environmental flows evolved from this concept of "minimum flows" and, later, "instream flows", which emphasized the need to keep water within waterways.
By the 1990s, scientists came to realize that the biological and social systems supported by rivers are too complicated to be summarized by a single minimum flow requirement. Since the 1990s, restoring and maintaining more comprehensive environmental flows has gained increasing support, as has the capability of scientists and engineers to define these flows to maintain the full spectrum of riverine species, processes and services. Furthermore, implementation has evolved from dam reoperation to an integration of all aspects of water management, including groundwater and surface water diversions and return flows, as well as land use and storm water management. The science to support regional-scale environmental flow determination and management has likewise advanced.
In a global survey of water specialists undertaken in 2003 to gauge perceptions of environmental flow, 88% of the 272 respondents agreed that the concept is essential for sustainably managing water resources and meeting the long-term needs of people. In 2007, the Brisbane Declaration on Environmental Flows was endorsed by more than 750 practitioners from more than 50 countries. The declaration announced an official pledge to work together to protect and restore the world's rivers and lakes. By 2010, many countries throughout the world had adopted environmental flow policies, although their implementation remains a challenge.
Examples
One effort currently underway to restore environmental flows is the Sustainable Rivers Project, a collaboration between The Nature Conservancy (TNC) and U.S. Army Corps of Engineers (USACE), which is the largest water manager in the United States. Since 2002, TNC and the USACE have been working to define and implement environmental flows by altering the operations of USACE dams in 8 rivers across 12 states. Dam reoperation to release environmental flows, in combination with floodplain restoration, has in some instances increased the water available for hydropower production while reducing flood risk.
Arizona's Bill Williams River, flowing downstream of Alamo Dam, is one of the rivers featured in the Sustainable Rivers Project. Having discussed modifying dam operations since the early 1990s, local stakeholders began to work with TNC and USACE in 2005 to identify specific strategies for improving the ecological health and biodiversity of the river basin downstream from the dam. Scientists compiled the best available information and worked together to define environmental flows for the Bill Williams River. While not all of the recommended environmental flow components could be implemented immediately, the USACE has changed its operations of Alamo Dam to incorporate more natural low flows and controlled floods. Ongoing monitoring is capturing resulting ecological responses such as rejuvenation of native willow-cottonwood forest, suppression of invasive and non-native tamarisk, restoration of more natural densities of beaver dams and associated lotic-lentic habitat, changes in aquatic insect populations, and enhanced groundwater recharge. USACE engineers continue to consult with scientists on a regular basis and use the monitoring results to further refine operations of the dam.
Another case in which stakeholders developed environmental flow recommendations is Honduras' Patuca III Hydropower Project. The Patuca River, the second longest river in Central America, has supported fish populations, nourished crops, and enabled navigation for many indigenous communities, including the Tawahka, Pech, and Miskito Indians, for hundreds of years. To protect the ecological health of the largest undisturbed rainforest north of the Amazon and its inhabitants, TNC and Empresa Nacional de Energía Eléctrica (ENEE, the agency responsible for the project) agreed to study and determine flows necessary to sustain the health of human and natural communities along the river. Due to very limited available data, innovative approaches were developed for estimating flow needs based on experiences and observations of the local people who depend on this nearly pristine river reach.
Methods, tools, and models
More than 200 methods are used worldwide to prescribe river flows needed to maintain healthy rivers. However, very few of these are comprehensive and holistic, accounting for seasonal and inter-annual flow variation needed to support the whole range of ecosystem services that healthy rivers provide. Such comprehensive approaches include DRIFT (Downstream Response to Imposed Flow Transformation), BBM (Building Block Methodology), and the "Savannah Process" for site-specific environmental flow assessment, and ELOHA (Ecological Limits of Hydrologic Alteration) for regional-scale water resource planning and management. The "best" method, or more likely, methods, for a given situation depends on the amount of resources and data available, the most important issues, and the level of certainty required. To facilitate environmental flow prescriptions, a number of computer models and tools have been developed by groups such as the USACE's Hydrologic Engineering Center to capture flow requirements defined in a workshop setting (e.g., HEC-RPT) or to evaluate the implications of environmental flow implementation (e.g., HEC-ResSim, HEC-RAS, and HEC-EFM).
Additionally, a 2D model has been developed from a 3D turbulence model based on the Smagorinsky large-eddy closure to model large-scale environmental flows more appropriately. This model is based on a slow manifold of the turbulent Smagorinsky large-eddy closure instead of on conventional depth-averaged flow equations.
Other tried and tested environmental flow assessment methods include DRIFT (King et al. 2003), which was recently used in the Kishenganga HPP dispute between Pakistan and India at the International Court of Arbitration.
In India
In India, the need for environmental flows has emerged from the hundreds of large dams being planned on Himalayan rivers for hydropower generation. The cascades of dams planned across the Lohit and Dibang rivers in the Brahmaputra basin, the Alaknanda and Bhagirathi rivers in the Ganga basin, and the Teesta in Sikkim, for example, would leave the rivers flowing more through tunnels and penstocks than through their channels. There have been some recommendations by various authorities (Courts, Tribunals, the Expert Appraisal Committee of the Ministry of Environment and Forests (India)) on releasing e-flows from dams. However, these recommendations have never been backed by strong objectives about why certain e-flow releases are needed.
See also
Freshwater inflow
Water scarcity
References
Rivers
Aquatic ecology | Environmental flow | Biology | 1,673 |
3,043,978 | https://en.wikipedia.org/wiki/Generic%20point | In algebraic geometry, a generic point P of an algebraic variety X is a point in a general position, at which all generic properties are true, a generic property being a property which is true for almost every point.
In classical algebraic geometry, a generic point of an affine or projective algebraic variety of dimension d is a point such that the field generated by its coordinates has transcendence degree d over the field generated by the coefficients of the equations of the variety.
In scheme theory, the spectrum of an integral domain has a unique generic point, which is the zero ideal. As the closure of this point for the Zariski topology is the whole spectrum, the definition has been extended to general topology, where a generic point of a topological space X is a point whose closure is X.
Definition and motivation
A generic point of the topological space X is a point P whose closure is all of X, that is, a point that is dense in X.
The terminology arises from the case of the Zariski topology on the set of subvarieties of an algebraic set: the algebraic set is irreducible (that is, it is not the union of two proper algebraic subsets) if and only if the topological space of the subvarieties has a generic point.
Examples
The only Hausdorff space that has a generic point is the singleton set.
Any integral scheme has a (unique) generic point; in the case of an affine integral scheme (i.e., the prime spectrum of an integral domain) the generic point is the point associated to the prime ideal (0).
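A small worked example makes the integral-scheme case concrete; the facts below are standard, spelled out here for clarity.

```latex
% The affine line over a field k:
X = \operatorname{Spec} k[x], \qquad \eta = (0) \in X.
% The closure of a point \mathfrak{p} of \operatorname{Spec} R is
% V(\mathfrak{p}) = \{\, \mathfrak{q} \text{ prime} : \mathfrak{q} \supseteq \mathfrak{p} \,\}.
% Since every prime ideal contains (0),
\overline{\{\eta\}} = V\bigl((0)\bigr) = X,
% so \eta is dense in X: it is the (unique) generic point of the affine line.
```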
History
In the foundational approach of André Weil, developed in his Foundations of Algebraic Geometry, generic points played an important role, but were handled in a different manner. For an algebraic variety V over a field K, generic points of V were a whole class of points of V taking values in a universal domain Ω, an algebraically closed field containing K but also an infinite supply of fresh indeterminates. This approach worked, without any need to deal directly with the topology of V (K-Zariski topology, that is), because the specializations could all be discussed at the field level (as in the valuation theory approach to algebraic geometry, popular in the 1930s).
This was at a cost of there being a huge collection of equally generic points. Oscar Zariski, a colleague of Weil's at São Paulo just after World War II, always insisted that generic points should be unique. (This can be put back into topologists' terms: Weil's idea fails to give a Kolmogorov space and Zariski thinks in terms of the Kolmogorov quotient.)
In the rapid foundational changes of the 1950s Weil's approach became obsolete. In scheme theory, though, from 1957, generic points returned: this time à la Zariski. For example, for R a discrete valuation ring, Spec(R) consists of two points, a generic point (coming from the prime ideal {0}) and a closed point or special point coming from the unique maximal ideal. For morphisms to Spec(R), the fiber above the special point is the special fiber, an important concept for example in reduction modulo p, monodromy theory and other theories about degeneration. The generic fiber, equally, is the fiber above the generic point. Geometry of degeneration is largely then about the passage from generic to special fibers, or in other words how specialization of parameters affects matters. (For a discrete valuation ring the topological space in question is the Sierpinski space of topologists. Other local rings have unique generic and special points, but a more complicated spectrum, since they represent general dimensions. The discrete valuation case is much like the complex unit disk, for these purposes.)
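The discrete valuation ring example can be made completely explicit; the choice of the integers localized at a prime p is our own, standard illustration.

```latex
% R = \mathbb{Z}_{(p)}, a discrete valuation ring:
\operatorname{Spec} R = \{\, \eta = (0),\ s = (p) \,\}.
% Closures: \overline{\{\eta\}} = V(0) = \operatorname{Spec} R, so \eta is
% the generic point, while \overline{\{s\}} = V(p) = \{s\}, so s is the
% closed (special) point. The open sets are \emptyset, \{\eta\}, and the
% whole space: exactly the Sierpinski space mentioned above.
```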
References
Algebraic geometry
General topology | Generic point | Mathematics | 789 |
5,376 | https://en.wikipedia.org/wiki/Cladistics | Cladistics (from Ancient Greek 'branch') is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade'; such groups are difficult to delineate precisely, especially when extinct species are included. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group.) To keep only valid clades, upon finding that the group is paraphyletic in this way, either the excluded groups should be admitted to the clade, or the group should be abolished.
Branches down to the divergence from the next significant (e.g. extant) sister are considered stem groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. In particular, extinct groups are always placed on a side branch, without distinguishing whether an actual ancestor of other groupings has been found.
The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.)
Cladistic findings pose a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent.
Cladistics is now the most commonly used method to classify organisms.
History
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); but the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field.
What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and subsequently by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr.
Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.
In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics.
Methodology
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping. Synapomorphies (shared, derived character states) are viewed as evidence of grouping, while symplesiomorphies (shared ancestral character states) are not. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets.
Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
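To make the parsimony criterion concrete, here is a minimal sketch of Fitch's small-parsimony algorithm for a single character on a fixed rooted binary tree. The tree encoding, names, and example data are our own illustrative choices; real analyses use dedicated phylogenetics software.

```python
def fitch(tree, states):
    """Return the minimum number of state changes needed to explain one
    character on a fixed rooted binary tree (Fitch, 1971).

    tree   : nested pairs of leaf names, e.g. (("A", "B"), ("C", "D"))
    states : dict mapping each leaf name to its observed character state
    """
    changes = 0

    def state_set(node):
        nonlocal changes
        if isinstance(node, str):      # leaf: its observed state
            return {states[node]}
        left, right = node
        l, r = state_set(left), state_set(right)
        if l & r:                      # take the intersection if possible...
            return l & r
        changes += 1                   # ...otherwise a state change is implied
        return l | r

    state_set(tree)
    return changes

# Illustrative example: warm-bloodedness scored on four taxa.
tree = (("mammal", "lizard"), ("crocodile", "bird"))
states = {"mammal": "warm", "lizard": "cold",
          "crocodile": "cold", "bird": "warm"}
print(fitch(tree, states))  # 2: at least two changes are required on this tree
```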
Until recently, for example, cladograms like the following have generally been accepted as accurate representations of the ancestral relations among turtles, lizards, crocodilians, and birds:
If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, produces cladograms more like this:
If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the cladograms show two mutually exclusive hypotheses to describe the evolutionary history, at most one of them is correct.
A cladogram of the primates represents the current universally accepted hypothesis that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea.
Lemurs and tarsiers may look closely related to humans, in the sense of being close to humans on the evolutionary tree. However, from the perspective of a tarsier, humans and lemurs would look close in the exact same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historic relationships between the groups.
Terminology for character states
The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:
A plesiomorphy ("close form") or ancestral state is a character state that a taxon has retained from its ancestors. When two or more taxa that are not nested within each other share a plesiomorphy, it is a symplesiomorphy (from syn-, "together"). Symplesiomorphies do not mean that the taxa that exhibit that character state are necessarily closely related. For example, Reptilia is traditionally characterized by (among other things) being cold-blooded (i.e., not maintaining a constant high body temperature), whereas birds are warm-blooded. Since cold-bloodedness is a plesiomorphy, inherited from the common ancestor of traditional reptiles and birds, and thus a symplesiomorphy of turtles, snakes and crocodiles (among others), it does not mean that turtles, snakes and crocodiles form a clade that excludes the birds.
An apomorphy ("separate form") or derived state is an innovation. It can thus be used to diagnose a clade – or even to help define a clade name in phylogenetic nomenclature. Features that are derived in individual taxa (a single species or a group that is represented by a single terminal in a given phylogenetic analysis) are called autapomorphies (from auto-, "self"). Autapomorphies express nothing about relationships among groups; clades are identified (or defined) by synapomorphies (from syn-, "together"). For example, the possession of digits that are homologous with those of Homo sapiens is a synapomorphy within the vertebrates. The tetrapods can be singled out as consisting of the first vertebrate with such digits homologous to those of Homo sapiens together with all descendants of this vertebrate (an apomorphy-based phylogenetic definition). Importantly, snakes and other tetrapods that do not have digits are nonetheless tetrapods: other characters, such as amniotic eggs and diapsid skulls, indicate that they descended from ancestors that possessed digits which are homologous with ours.
A character state is homoplastic or "an instance of homoplasy" if it is shared by two or more organisms but is absent from their common ancestor or from a later ancestor in the lineage leading to one of the organisms. It is therefore inferred to have evolved by convergence or reversal. Both mammals and birds are able to maintain a high constant body temperature (i.e., they are warm-blooded). However, the accepted cladogram explaining their significant features indicates that their common ancestor is in a group lacking this character state, so the state must have evolved independently in the two clades. Warm-bloodedness is separately a synapomorphy of mammals (or a larger clade) and of birds (or a larger clade), but it is not a synapomorphy of any group including both these clades. Hennig's Auxiliary Principle states that shared character states should be considered evidence of grouping unless they are contradicted by the weight of other evidence; thus, homoplasy of some feature among members of a group may only be inferred after a phylogenetic hypothesis for that group has been established.
The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.
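Continuing the sketch above (the fitch function defined earlier is assumed to be in scope), parsimony compares candidate trees by the total score summed over all characters and prefers the smallest. The character scorings below are hypothetical, chosen only to illustrate how a homoplastic character can be outweighed by other evidence.

```python
# Assumes fitch() from the earlier sketch is in scope.
characters = [
    # two hypothetical states shared by crocodiles and birds:
    {"mammal": "0", "lizard": "0", "crocodile": "1", "bird": "1"},
    {"mammal": "0", "lizard": "0", "crocodile": "1", "bird": "1"},
    # warm-bloodedness, homoplastic between mammals and birds:
    {"mammal": "warm", "lizard": "cold", "crocodile": "cold", "bird": "warm"},
]
candidates = [
    (("mammal", "lizard"), ("crocodile", "bird")),
    (("mammal", "bird"), ("crocodile", "lizard")),
]
scores = {tree: sum(fitch(tree, c) for c in characters) for tree in candidates}
print(scores)                       # first tree: 1+1+2 = 4; second tree: 2+2+1 = 5
print(min(scores, key=scores.get))  # the crocodile + bird grouping is preferred
```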
Terminology for taxa
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states: monophyletic groups are diagnosed by synapomorphies (shared derived states), paraphyletic groups are united only by symplesiomorphies (shared ancestral states), and polyphyletic groups are united only by homoplasies (convergently evolved states).
Criticism
Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.
Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.
Issues
Ancestors
The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.
Extinction status
An otherwise extinct group with any extant descendants is not considered (literally) extinct, and for instance does not have a date of extinction.
Hybridization, interbreeding
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception. Many species reproduce sexually, and are capable of interbreeding for millions of years. Worse, during such a period, many branches may have radiated, and it may take hundreds of millions of years for them to have whittled down to just two. Only then can one theoretically assign proper last common ancestors of groupings which do not inadvertently include earlier branches. The process of true cladistic bifurcation can thus take a much more extended time than one is usually aware of. In practice, for recent radiations, cladistically guided findings only give a coarse impression of the complexity. A more detailed account will give details about fractions of introgressions between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings, but typically other reasons are quoted.
Horizontal gene transfer
Horizontal gene transfer is the mobility of genetic information between different organisms that can have immediate or delayed effects for the reciprocal host. There are several processes in nature which can cause horizontal gene transfer. This typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes themselves, by determining the phylogeny of the individual genes using cladistics.
Naming stability
If mutual relationships are unclear, there are many possible trees, and assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations that may no longer hold, such as when additional groups are found to have emerged within them. Naming changes are the direct result of changes in the recognition of mutual relationships, which are often still in flux, especially for extinct species. Hanging on to older names and/or connotations can be counter-productive, as they typically do not reflect actual mutual relationships precisely. For example, Archaea, Asgard archaea, protists, slime molds, worms, Invertebrata, fishes, Reptilia, monkeys, Ardipithecus, Australopithecus and Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which may then come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would need to be restricted to a single branch on the stem. Other branches then get their own name and rank. This is commensurate with the fact that more senior stem branches are in fact more closely related to the resulting group than the more basal stem branches; that those stem branches may only have lived for a short time does not affect that assessment in cladistics.
In disciplines other than biology
The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured.
Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features.
Comparative mythology and folktale research: Cladistic methods have been used to reconstruct the proto-versions of many myths. Mythological phylogenies constructed with mythemes clearly support low rates of horizontal transmission (borrowing), historical (sometimes Palaeolithic) diffusion, and punctuated evolution. They are also a powerful way to test hypotheses about cross-cultural relationships among folktales.
Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita.
Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics).
Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time.
Astrophysics: Cladistic methods have been used to infer the history of relationships between galaxies and to create branching-diagram hypotheses of galaxy diversification.
See also
Bioinformatics
Biomathematics
Coalescent theory
Common descent
Glossary of scientific naming
Language family
Patrocladogram
Phylogenetic network
Scientific classification
Stratocladistics
Subclade
Systematics
Three-taxon analysis
Tree model
Tree structure
Notes and references
Bibliography
Available free online at Gallica (no direct URL). This is the paper credited with the first use of the term 'clade'.
Translated from manuscript in German eventually published in 1982 (Phylogenetische Systematik, Verlag Paul Parey, Berlin).
d'Huy, Julien (2012b), "Le motif de Pygmalion : origine afrasienne et diffusion en Afrique". Sahara, 23: 49-59 .
d'Huy, Julien (2013a), "Polyphemus (Aa. Th. 1137): A phylogenetic reconstruction of a prehistoric tale". Nouvelle Mythologie Comparée / New Comparative Mythology, 1.
d'Huy, Julien (2013c) "Les mythes évolueraient par ponctuations". Mythologie française, 252, 2013c: 8–12.
d'Huy, Julien (2013d) "A Cosmic Hunt in the Berber sky : a phylogenetic reconstruction of Palaeolithic mythology". Les Cahiers de l'AARS, 15, 2013d: 93–106.
Reissued 1997 in paperback. Includes a reprint of Mayr's 1974 anti-cladistics paper at pp. 433–476, "Cladistic analysis or cladistic classification."
Tehrani, Jamshid J., 2013, "The Phylogeny of Little Red Riding Hood", PLOS ONE, 13 November.
External links
OneZoom: Tree of Life – all living species as intuitive and zoomable fractal explorer (responsive design)
Willi Hennig Society
Cladistics (scholarly journal of the Willi Hennig Society)
Phylogenetics
Evolutionary biology
Zoology
Philosophy of biology | Cladistics | Biology | 4,553 |
4,466,513 | https://en.wikipedia.org/wiki/Eckert%E2%80%93Mauchly%20Award | The Eckert–Mauchly Award recognizes contributions to digital systems and computer architecture. It is known as the computer architecture community’s most prestigious award. First awarded in 1979, it was named for John Presper Eckert and John William Mauchly, who between 1943 and 1946 collaborated on the design and construction of the first large scale electronic computing machine, known as ENIAC, the Electronic Numerical Integrator and Computer. A certificate and $5,000 are awarded jointly by the Association for Computing Machinery (ACM) and the IEEE Computer Society for outstanding contributions to the field of computer and digital systems architecture.
Recipients
1979 Robert S. Barton
1980 Maurice V. Wilkes
1981 Wesley A. Clark
1982 Gordon C. Bell
1983 Tom Kilburn
1984 Jack B. Dennis
1985 John Cocke
1986 Harvey G. Cragon
1987 Gene M. Amdahl
1988 Daniel P. Siewiorek
1989 Seymour Cray
1990 Kenneth E. Batcher
1991 Burton J. Smith
1992 Michael J. Flynn
1993 David J. Kuck
1994 James E. Thornton
1995 John Crawford
1996 Yale Patt
1997 Robert Tomasulo
1998 Tadashi Watanabe
1999 James E. Smith
2000 Edward Davidson
2001 John Hennessy
2002 Bantwal Ramakrishna "Bob" Rau
2003 Joseph A. (Josh) Fisher
2004 Frederick P. Brooks
2005 Robert P. Colwell
2006 James H. Pomerene
2007 Mateo Valero
2008 David Patterson
2009 Joel Emer
2010 Bill Dally
2011 Gurindar S. Sohi
2012 Algirdas Avizienis
2013 James R. Goodman
2014 Trevor Mudge
2015 Norman Jouppi
2016 Uri Weiser
2017 Charles P. Thacker
2018 Susan J. Eggers
2019 Mark D. Hill
2020 Luiz André Barroso
2021 Margaret Martonosi
2022 Mark Horowitz
2023 Kunle Olukotun
2024 Wen-Mei W. Hwu
See also
ACM Special Interest Group on Computer Architecture
Computer engineering
Computer science
Computing
List of computer science awards
References
ACM-IEEE CS Eckert-Mauchly Award winners
Eckert Mauchly Award
Computer science awards
IEEE society and council awards | Eckert–Mauchly Award | Technology,Engineering | 434 |
4,277,843 | https://en.wikipedia.org/wiki/Improper%20input%20validation | Improper input validation or unchecked user input is a type of vulnerability in computer software that may be used for security exploits. This vulnerability is caused when "[t]he product does not validate or incorrectly validates input that can affect the control flow or data flow of a program."
Examples include:
Buffer overflow
Cross-site scripting
Directory traversal
Null byte injection
SQL injection
Uncontrolled format string
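As a concrete illustration of the vulnerability class, the Python sketch below contrasts a SQL query assembled by string concatenation, which lets attacker-controlled input alter the query's logic, with a parameterized query that treats the input strictly as data; the table and its rows are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"   # attacker-controlled string

# VULNERABLE: the input is spliced directly into the SQL text, so the
# attacker's quote characters change the query's control flow.
unsafe = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())               # returns every row

# SAFE: a parameterized query treats the input strictly as data.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows
```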
References
Computer security exploits | Improper input validation | Technology | 90 |
834,135 | https://en.wikipedia.org/wiki/Zentai | A zentai suit () is a skin-tight garment that covers the entire body. The word is a portmanteau of zenshin taitsu (). Zentai is most commonly made using nylon/spandex blends.
Use
The costumes are seen at major sporting events in North America and the United Kingdom, where they created internationally recognized personalities such as The Green Men, two fans of the Vancouver Canucks NHL team. Various professional street dance/hip-hop dance groups, such as The Body Poets in the United States and the Remix Monkeys in the United Kingdom, use the outfits.
Full-body suits are used for video special effects: their unique colors enable the person wearing the chroma key suit to be digitally removed from a video image. Other applications have included music videos (Black Eyed Peas' song "Boom Boom Pow", including the live performance at the Super Bowl), breast cancer awareness, fashion modeling on an episode of America's Next Top Model, social anxiety workshops, television (Charlie Kelly as Green Man), a participant in public art project "One & Other", and social experiments.
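Chroma-key removal works by masking out the pixels whose color lies near the suit's known hue and substituting replacement footage. A minimal NumPy sketch of the idea follows, assuming a green suit, synthetic images, and ad-hoc thresholds:

```python
import numpy as np

h, w = 240, 320
frame = np.zeros((h, w, 3), dtype=np.float32)   # synthetic camera frame (RGB)
frame[...] = (0.5, 0.5, 0.5)                    # grey scene
frame[60:180, 120:200] = (0.1, 0.9, 0.2)        # region occupied by a green suit

background = np.zeros_like(frame)               # replacement footage
background[...] = (0.0, 0.3, 0.8)

r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
# Key out pixels that are bright and predominantly green (ad-hoc thresholds).
mask = (g > 0.6) & (g > r + 0.2) & (g > b + 0.2)

composite = frame.copy()
composite[mask] = background[mask]              # suited performer "removed"
print(f"{mask.mean():.1%} of the frame was keyed out")
```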
Legal limitations
Since zentai suits cover the face, a fine of up to €150 may be imposed on those who wear them publicly in France. Furthermore, some sports leagues, such as Major League Baseball, ban the use of the costume hoods.
Brands
Companies have created brands of the suits, including RootSuit and Superfan Suit in the United States; Bodysocks and Second Skins by Smiffy's, and Morphsuits, in the United Kingdom; and Jyhmiskin in Finland. Morphsuits has achieved relative commercial success internationally: between January and late October 2010, the company shipped 10,000 suits to Canada alone. The Morphsuits brand has actively tried to disassociate itself from the existing zentai community, while Superfan Suits acknowledges in interviews that the outfits have existed previously. In the process, "morphsuit" has become a generic term; one New Zealand-based newspaper refers to the competing brand Jaskins as "one of the main online morphsuit brands." Jaskins company founder Josh Gaskin says their origins are unclear, pegging the first notable usage to It's Always Sunny in Philadelphia.
See also
Bodystocking
Black light theatre, which can use all-black zentai attire for its performances
Catsuit
Cosplay
Dancewear
Kigurumi
Spandex fetishism
Morphsuits
Notable users of zentai
Pink Guy, musical artist
Jonathan Bree, musical artist
The Great Morgani, performance artist
Pandemonia, performance artist
References
Further reading
Will Doig, "Men Who Love Lycra", The Daily Beast, 3 March 2010.
External links
Costume design
Fetish clothing
One-piece suits | Zentai | Engineering | 566 |
77,556,775 | https://en.wikipedia.org/wiki/Tritium%20breeding%20module | A tritium breeding module or TBM is a component of a fusion reactor that produces tritium.
ITER will have four easily removable TBMs for testing various material combinations to develop the breeding process.
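For context, lithium-based breeder concepts, including the breeder materials to be tested in the ITER TBMs, produce tritium through neutron capture on the two lithium isotopes; the standard reactions are:

```latex
{}^{6}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV}
{}^{7}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + \mathrm{T} + n - 2.5\ \mathrm{MeV}
```

The ⁶Li reaction is exothermic and dominates at thermal neutron energies, while the ⁷Li reaction is endothermic but regenerates a neutron.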
See also
Breeding blanket
References
ITER
Fusion reactors | Tritium breeding module | Chemistry | 56 |
71,708,395 | https://en.wikipedia.org/wiki/Translational%20Research%20Institute%20for%20Space%20Health | The Translational Research Institute for Space Health (TRISH) is a virtual, applied research consortium that pursues and funds translational research and technologies to keep astronauts healthy during space exploration, with the added benefit of potential applications on Earth. TRISH is specifically focused on human health in preparation for deep space exploration efforts, including National Aeronautics and Space Administration's (NASA) Artemis missions to the Moon, and future human missions to Mars. TRISH also supports research to collect and study biomedical data gathered on commercial spaceflight missions to better understand the effect of spaceflight on the human body.
The consortium is led by Baylor College of Medicine's Center for Space Medicine, and includes Massachusetts Institute of Technology, and California Institute of Technology, with funding awarded to scientists and organizations around the United States. TRISH works directly with NASA's Human Research Program (HRP) to establish and coordinate research efforts that align with NASA’s goal of safely furthering human exploration while mitigating risks to human health.
History
TRISH was founded in 2016, and Baylor College of Medicine was selected as the lead institution in an agreement with a maximum potential value of $246 million for a six-year performance period. TRISH succeeded the National Space Biomedical Research Institute (NSBRI), a similar research institute also led by Baylor College of Medicine.
In 2021, NASA opted to renew TRISH, granting additional funding of up to $134.6 million between 2022 and 2028. When NASA reviewed TRISH in December 2020, it found that “TRISH had developed and transitioned 34 completed astronaut health and protection projects to NASA and had connected 415 first-time NASA researchers with opportunities to develop space health solutions.”
TRISH supports NASA's Human Research Program (HRP), founded in 2005, as outlined in TRISH's strategic plan. The goals of the HRP are to provide knowledge and technology to mitigate risks to human health and performance and develop tools to enable safe and productive human space exploration.
Effects of space on the human body
In January 2023, The Washington Post reported an interactive feature on the known effects of space travel to the human body, and noted TRISH's work. In the article, former TRISH Chief Medical Officer Emmanuel Urquieta stated “Space is just not very hospitable to the human body,” explaining that humans evolved on Earth with abundant gravity and low radiation, whereas space is characterized by minimal gravity and higher radiation exposure.
This environment can lead astronauts to experience space adaptation syndrome, muscle atrophy, decreased blood volume, altered immunity and DNA damage from radiation exposure, loss of bone, sensory changes, psychological stress, and inflammation, among other potential complications. Interventions to prevent these outcomes include routine exercise while in space, as well as pharmaceutical and dietary supplements. Additionally, changes in blood flow and digestion rate are likely to affect how the body processes and tolerates medications, an area requiring further study.
Trips to the Moon and Mars will require astronauts to spend more time in space than ever before, potentially exacerbating known deleterious effects of space travel on the human body. In April 2022, NPR's Brendan Byrne described one of TRISH's goals as “to understand how and why the body changes while in space and prepar[e] future astronauts for those health effects. That's important to understand if space agencies like NASA want to send humans to places like the Moon or Mars. Those trips could be longer than Vande Hei['s] almost yearlong mission. And the environments on the lunar surface and the red planet will be harsh, with limited medical resources.”
Leadership
TRISH's leadership includes executive director Dorit B. Donoviel, chief scientific officer Jennifer Fogarty, and chief medical officer Emmanuel Urquieta.
TRISH's board of directors includes chair Jeffrey P. Sutton, along with members Barbara Wold, and Thomas Heldt.
Consortium members
Baylor College of Medicine
Massachusetts Institute of Technology
California Institute of Technology
Research areas
TRISH researchers pursue scientific research in several fields, including:
Cellular and Molecular Biology
Behavioral Health
Environment, Food and Medication
Medical Technology
Radiation
Involvement with private spaceflight missions
As part of its EXPAND (Enhancing eXploration Platforms and Analog Definition) Program, TRISH has partnered with several commercial space providers on private spaceflight missions to gather spaceflight participant health data before, during, and after space travel. These may include tests on motor function, eye health, motion sickness, and cognitive wellbeing, among others.
TRISH-funded researchers have collected biomedical data from spaceflight participants aboard the Inspiration4 mission, the Axiom Mission 1, and Space Adventures’ MZ Mission. TRISH researchers have also collected biomedical data from astronauts on the Polaris Dawn, Ax-2, and Ax-3 missions. In 2024, TRISH has also entered an agreement with Blue Origin to collect biomedical data during suborbital missions.
Biomedical data gathered from private spaceflight participants adds to the diversity and volume of data available for space health researchers. TRISH maintains a centralized research database, the EXPAND Program, which hosts pre-, in-, and post-flight health data from multiple commercial space flights.
Outreach
TRISH leadership regularly appears at conferences and workshops, including SXSW, HRP's annual Investigator's Workshop, and conferences and meetings hosted by the Committee on Space Research (COSPAR), Aerospace Medical Association (AsMA), International Astronautical Congress, and others.
Funding for researchers and companies
TRISH offers funding for innovative research and technology projects through several mechanisms. TRISH's open solicitations are housed on the Institute's Grant Research Integrated Dashboard (GRID), an online portal, or the NASA NSPIRES portal. Previous solicitation topics have requested proposals on topics such as endogenous repair, metabolic manipulation, microphysiological systems, such as Tissue on a Chip, technologies in support of autonomous health care, and the training of postdoctoral fellows and future scientists in the field.
External links
Translational Research Institute for Space Health
The Human Body in Space
Open Funding Opportunities With TRISH
TRISH Strategic Plan
References
Consortia
Space exploration | Translational Research Institute for Space Health | Astronomy | 1,247 |
18,008,163 | https://en.wikipedia.org/wiki/Groundwater%20remediation | Groundwater remediation is the process that is used to treat polluted groundwater by removing the pollutants or converting them into harmless products. Groundwater is water present below the ground surface that saturates the pore space in the subsurface. Globally, between 25 per cent and 40 per cent of the world's drinking water is drawn from boreholes and dug wells. Groundwater is also used by farmers to irrigate crops and by industries to produce everyday goods. Most groundwater is clean, but groundwater can become polluted, or contaminated as a result of human activities or as a result of natural conditions.
The many and diverse activities of humans produce innumerable waste materials and by-products. Historically, the disposal of such waste has not been subject to many regulatory controls. Consequently, waste materials have often been disposed of or stored on land surfaces, from which they percolate into the underlying groundwater. As a result, the contaminated groundwater is unsuitable for use.
Current practices can still impact groundwater, such as the over application of fertilizer or pesticides, spills from industrial operations, infiltration from urban runoff, and leaking from landfills. Using contaminated groundwater causes hazards to public health through poisoning or the spread of disease, and the practice of groundwater remediation has been developed to address these issues. Contaminants found in groundwater cover a broad range of physical, inorganic chemical, organic chemical, bacteriological, and radioactive parameters. Pollutants and contaminants can be removed from groundwater by applying various techniques, thereby bringing the water to a standard that is commensurate with various intended uses.
Techniques
Ground water remediation techniques span biological, chemical, and physical treatment technologies. Most ground water treatment techniques utilize a combination of technologies. Some of the biological treatment techniques include bioaugmentation, bioventing, biosparging, bioslurping, and phytoremediation. Some chemical treatment techniques include ozone and oxygen gas injection, chemical precipitation, membrane separation, ion exchange, carbon adsorption, aqueous chemical oxidation, and surfactant enhanced recovery. Some chemical techniques may be implemented using nanomaterials. Physical treatment techniques include, but are not limited to, pump and treat, air sparging, and dual phase extraction.
Biological treatment technologies
Bioaugmentation
If a treatability study shows no degradation (or an extended lab period before significant degradation is achieved) in contamination contained in the groundwater, then inoculation with strains known to be capable of degrading the contaminants may be helpful. This process increases the reactive enzyme concentration within the bioremediation system and subsequently may increase contaminant degradation rates over the nonaugmented rates, at least initially after inoculation.
Bioventing
Bioventing is an on site remediation technology that uses microorganisms to biodegrade organic constituents in the groundwater system. Bioventing enhances the activity of indigenous bacteria and archaea and stimulates the natural in situ biodegradation of hydrocarbons by inducing air or oxygen flow into the unsaturated zone and, if necessary, by adding nutrients. During bioventing, oxygen may be supplied through direct air injection into residual contamination in soil. Bioventing primarily assists in the degradation of adsorbed fuel residuals, but also assists in the degradation of volatile organic compounds (VOCs) as vapors move slowly through biologically active soil.
Biosparging
Biosparging is an in situ remediation technology that uses indigenous microorganisms to biodegrade organic constituents in the saturated zone. In biosparging, air (or oxygen) and nutrients (if needed) are injected into the saturated zone to increase the biological activity of the indigenous microorganisms. Biosparging can be used to reduce concentrations of petroleum constituents that are dissolved in groundwater, adsorbed to soil below the water table, and within the capillary fringe.
Bioslurping
Bioslurping combines elements of bioventing and vacuum-enhanced pumping of free-product that is lighter than water (light non-aqueous phase liquid or LNAPL) to recover free-product from the groundwater and soil, and to bioremediate soils. The bioslurper system uses a “slurp” tube that extends into the free-product layer. Much like a straw in a glass draws liquid, the pump draws liquid (including free-product) and soil gas up the tube in the same process stream. Pumping lifts LNAPLs, such as oil, off the top of the water table and from the capillary fringe (i.e., an area just above the saturated zone, where water is held in place by capillary forces). The LNAPL is brought to the surface, where it is separated from water and air. The biological processes in the term “bioslurping” refer to aerobic biological degradation of the hydrocarbons when air is introduced into the unsaturated zone contaminated soil.
Phytoremediation
In the phytoremediation process, certain plants and trees are planted whose roots absorb contaminants from ground water over time. This process can be carried out in areas where the roots can tap the ground water. A few examples of plants used in this process are the Chinese ladder fern (Pteris vittata), also known as the brake fern, a highly efficient accumulator of arsenic; genetically altered cottonwood trees, which are good absorbers of mercury; and transgenic Indian mustard plants, which soak up selenium well.
Permeable reactive barriers
Certain types of permeable reactive barriers use biological organisms to remediate groundwater.
Chemical treatment technologies
Chemical precipitation
Chemical precipitation is commonly used in wastewater treatment to remove hardness and heavy metals. In general, the process involves the addition of a precipitating agent to an aqueous waste stream in a stirred reaction vessel, either batchwise or with steady flow. Most metals can be converted to insoluble compounds by chemical reactions between the agent and the dissolved metal ions. The insoluble compounds (precipitates) are removed by settling and/or filtering.
Ion exchange
Ion exchange for ground water remediation is virtually always carried out by passing the water downward under pressure through a fixed bed of granular medium (either cation exchange media or anion exchange media) or spherical beads. Cations in the medium are displaced by certain cations from the solution, and anions in the medium are displaced by certain anions from the solution. Ion exchange media most often used for remediation are zeolites (both natural and synthetic) and synthetic resins.
Carbon adsorption
The most common activated carbon used for remediation is derived from bituminous coal. Activated carbon adsorbs volatile organic compounds from ground water; the compounds attach to the graphite-like surface of the activated carbon.
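The loading that activated carbon can achieve for a given compound is normally characterized by an adsorption isotherm fitted to laboratory data; one common empirical form is the Freundlich isotherm (the constants are compound- and carbon-specific):

```latex
q_e = K_F \, C_e^{1/n}
```

where $q_e$ is the mass of contaminant adsorbed per unit mass of carbon at equilibrium, $C_e$ is the equilibrium concentration remaining in the water, and $K_F$ and $n$ are fitted constants.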
Chemical oxidation
In this process, called in situ chemical oxidation or ISCO, chemical oxidants are delivered into the subsurface to destroy the organic molecules, converting them to water and carbon dioxide or to nontoxic substances. The oxidants are introduced as either liquids or gases. Oxidants include air or oxygen, ozone, and certain liquid chemicals such as hydrogen peroxide, permanganate and persulfate.
Ozone and oxygen gas can be generated on site from air and electricity and directly injected into soil and groundwater contamination. The process has the potential to oxidize and/or enhance naturally occurring aerobic degradation. Chemical oxidation has proven to be an effective technique for dense non-aqueous phase liquid or DNAPL when it is present.
Surfactant enhanced recovery
Surfactant enhanced recovery increases the mobility and solubility of the contaminants adsorbed to the saturated soil matrix or present as dense non-aqueous phase liquid. Surfactant-enhanced recovery injects surfactants (surface-active agents that are the primary ingredients in soaps and detergents) into contaminated groundwater. A typical system uses an extraction pump to remove groundwater downstream from the injection point. The extracted groundwater is treated aboveground to separate the injected surfactants from the contaminants and groundwater. Once separated from the groundwater, the surfactants are re-used. The surfactants used are non-toxic, food-grade, and biodegradable. Surfactant enhanced recovery is used most often when the groundwater is contaminated by dense non-aqueous phase liquids (DNAPLs). These dense compounds, such as trichloroethylene (TCE), sink in groundwater because they have a higher density than water. They then act as a continuous source for contaminant plumes that can stretch for miles within an aquifer. These compounds may biodegrade very slowly. They are commonly found in the vicinity of the original spill or leak where capillary forces have trapped them.
Permeable reactive barriers
Some permeable reactive barriers utilize chemical processes to achieve groundwater remediation.
Physical treatment technologies
Pump and treat
Pump and treat is one of the most widely used ground water remediation technologies. In this process, ground water is pumped to the surface and then treated by biological or chemical means to remove the impurities.
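One crude, screening-level way to reason about pump-and-treat timescales is the idealized "batch-flushing" model, in which the dissolved concentration declines exponentially with the number of pore volumes pumped. The Python sketch below implements this textbook approximation; every parameter value is hypothetical, and real plumes (with desorption tailing and rebound) clean up far more slowly than the ideal model suggests.

```python
import math

# Idealized flushing model: C(t) = C0 * exp(-Q * t / (n * V * R)),
# where Q*t/(n*V) is the number of pore volumes pumped and R is the
# retardation factor accounting for sorption to the soil matrix.
C0 = 500.0        # initial concentration, ug/L          (hypothetical)
Q  = 50.0         # pumping rate, m^3/day                (hypothetical)
V  = 1.0e5        # contaminated aquifer volume, m^3     (hypothetical)
n  = 0.3          # effective porosity
R  = 2.0          # retardation factor (R = 1: no sorption)
target = 5.0      # cleanup target, ug/L

pore_volume = n * V
t_days = -pore_volume * R / Q * math.log(target / C0)
print(f"pore volumes required: {Q * t_days / pore_volume:.1f}")  # ~9.2
print(f"estimated time: {t_days / 365.25:.1f} years")            # ~15.1
```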
Air sparging
Air sparging is the process of blowing air directly into the ground water. As the bubbles rise, the contaminants are removed from the groundwater by physical contact with the air (i.e., stripping) and are carried up into the unsaturated zone (i.e., soil). As the contaminants move into the soil, a soil vapor extraction system is usually used to remove vapors.
Dual phase vacuum extraction
Dual-phase vacuum extraction (DPVE), also known as multi-phase extraction, is a technology that uses a high-vacuum system to remove both contaminated groundwater and soil vapor. In DPVE systems, a high-vacuum extraction well is installed with its screened section in the zone of contaminated soils and groundwater. Fluid/vapor extraction systems depress the water table and water flows faster to the extraction well. DPVE removes contaminants from above and below the water table. As the water table around the well is lowered from pumping, unsaturated soil is exposed. This area, called the capillary fringe, is often highly contaminated, as it holds undissolved chemicals, chemicals that are lighter than water, and vapors that have escaped from the dissolved groundwater below. Contaminants in the newly exposed zone can be removed by vapor extraction. Once above ground, the extracted vapors and liquid-phase organics and groundwater are separated and treated. Use of dual-phase vacuum extraction with these technologies can shorten the cleanup time at a site, because the capillary fringe is often the most contaminated area.
Monitoring-well oil skimming
Monitoring-wells are often drilled for the purpose of collecting ground water samples for analysis. These wells, which are usually six inches or less in diameter, can also be used to remove hydrocarbons from the contaminant plume within a groundwater aquifer by using a belt-style oil skimmer. Belt oil skimmers, which are simple in design, are commonly used to remove oil and other floating hydrocarbon contaminants from industrial water systems.
A monitoring-well oil skimmer remediates various oils, ranging from light fuel oils such as petrol, light diesel or kerosene to heavy products such as No. 6 oil, creosote and coal tar. It consists of a continuously moving belt that runs on a pulley system driven by an electric motor. The belt material has a strong affinity for hydrocarbon liquids and for shedding water. The belt, which can have a vertical drop of 100+ feet, is lowered into the monitoring well past the LNAPL/water interface. As the belt moves through this interface, it picks up liquid hydrocarbon contaminant which is removed and collected at ground level as the belt passes through a wiper mechanism. To the extent that DNAPL hydrocarbons settle at the bottom of a monitoring well, and the lower pulley of the belt skimmer reaches them, these contaminants can also be removed by a monitoring-well oil skimmer.
Typically, belt skimmers remove very little water with the contaminant, so simple weir-type separators can be used to collect any remaining hydrocarbon liquid, which often makes the water suitable for its return to the aquifer. Because the small electric motor uses little electricity, it can be powered from solar panels or a wind turbine, making the system self-sufficient and eliminating the cost of running electricity to a remote location.
See also
Toxic torts
Brownfield
CERCLA
Groundwater pollution
Plume (hydrodynamics)
Groundwater remediation applications of nanotechnology
References
External links
EPA Alternative Cleanup Technologies for Underground Storage Tank Sites
Aquifers
Environmental science
Ecological restoration
Environmental issues with water
Water chemistry
Water pollution | Groundwater remediation | Chemistry,Engineering,Biology,Environmental_science | 2,655 |
184,897 | https://en.wikipedia.org/wiki/Reagent | In chemistry, a reagent ( ) or analytical reagent is a substance or compound added to a system to cause a chemical reaction, or test if one occurs. The terms reactant and reagent are often used interchangeably, but reactant specifies a substance consumed in the course of a chemical reaction. Solvents, though involved in the reaction mechanism, are usually not called reactants. Similarly, catalysts are not consumed by the reaction, so they are not reactants. In biochemistry, especially in connection with enzyme-catalyzed reactions, the reactants are commonly called substrates.
Definitions
Organic chemistry
In organic chemistry, the term "reagent" denotes a chemical ingredient (a compound or mixture, typically of inorganic or small organic molecules) introduced to cause the desired transformation of an organic substance. Examples include the Collins reagent, Fenton's reagent, and Grignard reagents.
Analytical chemistry
In analytical chemistry, a reagent is a compound or mixture used to detect the presence or absence of another substance, e.g. by a color change, or to measure the concentration of a substance, e.g. by colorimetry. Examples include Fehling's reagent, Millon's reagent, and Tollens' reagent.
Commercial or laboratory preparations
In commercial or laboratory preparations, reagent-grade designates chemical substances meeting standards of purity that ensure the scientific precision and reliability of chemical analysis, chemical reactions or physical testing. Purity standards for reagents are set by organizations such as ASTM International or the American Chemical Society. For instance, reagent-quality water must have very low levels of impurities such as sodium and chloride ions, silica, and bacteria, as well as a very high electrical resistivity. Laboratory products which are less pure, but still useful and economical for undemanding work, may be designated as technical, practical, or crude grade to distinguish them from reagent versions.
Biology
In the field of biology, the biotechnology revolution in the 1980s grew from the development of reagents that could be used to identify and manipulate the chemical matter in and on cells. These reagents included antibodies (polyclonal and monoclonal), oligomers, all sorts of model organisms and immortalised cell lines, reagents and methods for molecular cloning and DNA replication, and many others.
Tool compounds
Tool compounds are an important class of reagent in biology. They are small molecules or biochemicals like siRNA or antibodies that are known to affect a given biomolecule—for example a drug target—but are unlikely to be useful as drugs themselves, and are often starting points in the drug discovery process.
However, many natural substances are hits in almost any assay in which they are tested, and therefore not useful as tool compounds. Medicinal chemists class them instead as pan-assay interference compounds. One example is curcumin.
See also
Limiting reagent
Common reagents
Product
Reagent bottle
Substrate
References
External links
Biological techniques and tools
Chemical reactions
Reagents for biochemistry | Reagent | Chemistry,Biology | 638 |
25,237,396 | https://en.wikipedia.org/wiki/Achaete-scute%20complex | The achaete-scute complex (AS-C) is a group of four genes (achaete, scute, lethal of scute, and asense) in the fruit fly Drosophila melanogaster. These genes encode basic helix-loop-helix transcription factors that have been best studied in their regulation of nervous system development. Because of their role in specifying neuroblast fate, the genes of the AS-C are called proneural genes. However, the AS-C has non-proneural functions, such as specifying muscle and gut progenitors. Homologues of AS-C in other animals, including humans and other vertebrates, have similar functions.
Genes of the AS-C interact with the Notch pathway in both their proneural functions as well as their specification of gut and muscle cells.
Genetic structure
The complex is found near the tip of the X chromosome, just 3' of yellow, in chromosome bands 1A6 through 1B3. It occupies around 93 kb of the genome, with all four genes oriented in the same direction.
achaete
The 5′-most gene of the achaete-scute complex, achaete (short form ac), is a small gene of less than 1000 bp. The Achaete protein is 201 amino acids long and has a relative size of 23 kDa. As with most classically described Drosophila genes, achaete is named for its mutant phenotype, which is the lack of sensory hairs (macrochaetae and microchaetae) on the back of the adult fly. Achaete functions to specify sensory hair cell fate. It functions downstream of other genes, including hairy and extramacrochaete, that set up fields of cells that may express achaete.
scute
scute (short form sc) is found about 25 kb 3′ of achaete. It is a 1.45 kb gene encoding a 345 aa protein of 38.2 kDa.
lethal of scute
lethal of scute (short form l(1)sc) is found about 12 kb 3′ of scute. It is a 1.1 kb gene encoding a 257 aa protein of 29 kDa.
asense
asense (short form ase) is found 45 kb 3′ of l(1)sc. It is a 2.8 kb gene encoding a 486 aa protein of 53.2 kDa.
See also
ASCL1 – Achaete-scute homolog 1
ASCL2 – Achaete-scute homolog 2
References
Drosophila melanogaster genes
Transcription factors | Achaete-scute complex | Chemistry,Biology | 545 |
67,130,460 | https://en.wikipedia.org/wiki/Barry%20Sinervo | Barry R. Sinervo (1961–2021) was a behavioral ecologist and evolutionary biologist. He was a full professor at University of California Santa Cruz where his research interests included game theory, climate change, herpetology, and animal behavior. One of his major discoveries was of a rock-paper-scissors game in side-blotched lizard mating behaviour. He also discovered evidence of the Baldwin effect in the side-blotched lizard. Sinervo was born in Port Arthur, Ontario, Canada, and educated at Dalhousie University, Nova Scotia, and the University of Washington, Seattle. He died from cancer at age 60 on March 15, 2021.
Honors
A species of lizard was named after Sinervo, Phymaturus sinervoi Scolaro et al., 2012.
References
1961 births
2021 deaths
University of California, Santa Cruz faculty
Ethologists
Evolutionary biologists
American herpetologists | Barry Sinervo | Biology | 191 |
37,858,095 | https://en.wikipedia.org/wiki/Orthanc%20%28server%29 | Orthanc is a standalone DICOM server. It is designed to improve the DICOM flows in hospitals and to support research about the automated analysis of medical images. Orthanc lets its users focus on the content of the DICOM files, hiding the complexity of the DICOM format and of the DICOM protocol. It is licensed under the GPLv3.
Orthanc can turn any computer running Windows, Linux or OS X into a DICOM store (in other words, a mini-PACS system). Its architecture is standalone, meaning that no complex database administration is required, nor the installation of third-party dependencies. Orthanc is also available as Docker images.
Orthanc provides a RESTful API on top of a DICOM server. Therefore, it is possible to drive Orthanc from any computer language. The DICOM tags of the stored medical images can be downloaded in the JSON file format. Furthermore, standard PNG images can be generated on the fly from the DICOM instances by Orthanc.
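For example, a stored image's tags and a PNG rendering can be fetched over HTTP with a few requests. The Python sketch below is a minimal illustration assuming a default local Orthanc server on port 8042; the /instances, /instances/{id}/simplified-tags and /instances/{id}/preview endpoints reflect the documented REST API, but the credentials and the presence of stored instances are assumptions about the local setup.

```python
import requests

BASE = "http://localhost:8042"        # default Orthanc HTTP port
AUTH = ("orthanc", "orthanc")         # assumed credentials; ignored if auth is off

# List the Orthanc identifiers of all stored DICOM instances.
instances = requests.get(f"{BASE}/instances", auth=AUTH).json()

for uuid in instances[:3]:
    # DICOM tags of one instance, flattened to human-readable JSON.
    tags = requests.get(f"{BASE}/instances/{uuid}/simplified-tags", auth=AUTH).json()
    print(uuid, tags.get("PatientName"), tags.get("StudyDate"))

    # Ask Orthanc to render the instance as a standard PNG image on the fly.
    png = requests.get(f"{BASE}/instances/{uuid}/preview", auth=AUTH)
    with open(f"{uuid}.png", "wb") as fh:
        fh.write(png.content)
```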
Plugins
Orthanc also features a plugin mechanism to add new modules that extend the core capabilities of its REST API. As of May 2022, about a dozen plugins are available:
multiple DICOM Web viewers,
a PostgreSQL database back-end,
a MySQL database back-end,
an ODBC database back-end,
a reference implementation of DICOMweb,
a Whole Slide Imaging viewer and tools to convert to/from WSI formats.
three object-storage back-ends,
a Python plugin
a plugin to access data from The Cancer Imaging Archive
a plugin to index a local storage
a plugin to handle Neuroimaging file formats
History and awards
Orthanc was initiated by Sébastien Jodogne in 2011 as a postdoctoral researcher at CHU de Liège. The initial public release followed shortly afterwards.
For his work on Orthanc, Sébastien Jodogne received the 2014 Award for the Advancement of Free Software. Orthanc also received the Agoria award for the best 2015 e-health project in Belgium.
Between 2017 and 2021, the development has been supported by the Osimis company, still with Sébastien Jodogne as the lead developer. Since 2021, the development has been handled both by Alain Mazy with financial support from Open Collective, and by the health informatics research team led by Sébastien Jodogne at Université Catholique de Louvain.
Orthanc was recognized as a digital public good by the Digital Public Goods Alliance in August 2023.
Distribution
Orthanc is part of the Debian Med project. Official packages are available for numerous Linux distributions including Debian, Ubuntu and Fedora. Ports are available for FreeBSD and OpenBSD. Windows/MacOS binary installer packages may be freely downloaded from the Orthanc website but are provided by a commercial partner.
Orthanc is also available as a beta package through the Package Center for Synology NAS users.
See also
Picture Archiving and Communication System
List of open-source health software
References
External links
Medical imaging
Medical software
Free software programmed in C++
Free server software
Free health care software
Free DICOM software | Orthanc (server) | Biology | 658 |
59,413,690 | https://en.wikipedia.org/wiki/Kerr%E2%80%93Dold%20vortex | In fluid dynamics, Kerr–Dold vortex is an exact solution of Navier–Stokes equations, which represents steady periodic vortices superposed on the stagnation point flow (or extensional flow). The solution was discovered by Oliver S. Kerr and John W. Dold in 1994. These steady solutions exist as a result of a balance between vortex stretching by the extensional flow and viscous diffusion, which are similar to Burgers vortex. These vortices were observed experimentally in a four-roll mill apparatus by Lagnado and L. Gary Leal.
Mathematical description
The stagnation point flow, which is already an exact solution of the Navier–Stokes equations, is given by $\mathbf{U} = (Ax, -Ay, 0)$, where $A > 0$ is the strain rate. To this flow, an additional periodic disturbance can be added such that the new velocity field can be written as

$\mathbf{u} = \mathbf{U} + \mathbf{u}'(y, z),$

where the disturbance $\mathbf{u}'$ is independent of the coordinate $x$ along the stretching direction and is assumed to be periodic in the $z$ direction with a fundamental wavenumber $k$. Kerr and Dold showed that such disturbances exist with finite amplitude, thus making the solution an exact solution of the Navier–Stokes equations. Introducing a stream function $\psi(y, z)$ for the disturbance velocity components in the $(y, z)$ plane, the equations for the disturbances reduce, in vorticity–streamfunction formulation, to a balance between viscous diffusion, advection, and amplification of the disturbance vorticity $\omega = -\nabla^2 \psi$ by the stretching of the base flow. A single parameter

$Re = \frac{A}{\nu k^2}$

can be obtained upon non-dimensionalization, which measures the strength of the converging flow relative to viscous dissipation. The solution is assumed to take the form of a Fourier series in $kz$ with $y$-dependent coefficients. Since $\psi$ is real, the coefficients satisfy conjugate-symmetry relations, and the expected symmetry of the vortex structure imposes further conditions on them. Upon substitution, an infinite sequence of differential equations is obtained which are coupled non-linearly; the Cauchy product rule is used in deriving the nonlinear terms. These equations, together with the boundary conditions requiring the disturbance to decay as $y \to \pm\infty$ and the corresponding symmetry condition, are enough to solve the problem. It can be shown that non-trivial solutions exist only when $Re \geq 1$. On solving this system numerically, it is verified that keeping the first 7 to 8 terms suffices to produce accurate results. The solution when $Re = 1$ was already discovered by Craik and Criminale in 1986.
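The physical meaning of $Re$ can be seen from a heuristic single-mode balance (an illustrative linearization, not the full nonlinear computation): for a disturbance vorticity aligned with the stretching direction and varying as $e^{ikz}$, stretching amplifies the mode at rate $A$ while viscosity damps it at rate $\nu k^2$,

```latex
\frac{d\hat{\omega}}{dt} \approx A\,\hat{\omega} - \nu k^{2}\,\hat{\omega},
```

so the strain can sustain such a mode against viscous diffusion only when $A \gtrsim \nu k^{2}$, that is, $Re \geq 1$, consistent with the existence condition above.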
See also
Sullivan vortex
References
Flow regimes
Vortices | Kerr–Dold vortex | Chemistry,Mathematics | 431 |
878,898 | https://en.wikipedia.org/wiki/Oral%20candidiasis | Oral candidiasis (Acute pseudomembranous candidiasis), which is also known as oral thrush, among other names, is candidiasis that occurs in the mouth. That is, oral candidiasis is a mycosis (yeast/fungal infection) of Candida species on the mucous membranes of the mouth.
Candida albicans is the most commonly implicated organism in this condition. C. albicans is carried in the mouths of about 50% of the world's population as a normal component of the oral microbiota. This candidal carriage state is not considered a disease, but when Candida species become pathogenic and invade host tissues, oral candidiasis can occur. This change usually constitutes an opportunistic infection by normally harmless micro-organisms because of local (i.e., mucosal) or systemic factors altering host immunity.
Classification
Oral candidiasis is a mycosis (fungal infection). Traditionally, oral candidiasis is classified using the Lehner system, originally described in the 1960s, into acute and chronic forms. Some of the subtypes almost always occur as acute (e.g., acute pseudomembranous candidiasis), and others chronic. However, these typical presentations do not always hold true, which created problems with this system. A more recently proposed classification of oral candidiasis distinguishes primary oral candidiasis, where the condition is confined to the mouth and perioral tissues, and secondary oral candidiasis, where there is involvement of other parts of the body in addition to the mouth. The global human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) pandemic has been an important factor in the move away from the traditional classification since it has led to the formation of a new group of patients who present with atypical forms of oral candidiasis.
By appearance
Three main clinical appearances of candidiasis are generally recognized: pseudomembranous, erythematous (atrophic) and hyperplastic. Most often, affected individuals display one clear type or another, but sometimes there can be more than one clinical variant in the same person.
Pseudomembranous
Acute pseudomembranous candidiasis is a classic form of oral candidiasis, commonly referred to as thrush. Overall, this is the most common type of oral candidiasis, accounting for about 35% of oral candidiasis cases.
It is characterized by a coating or individual patches of pseudomembranous white slough that can be easily wiped away to reveal erythematous (reddened), and sometimes minimally bleeding, mucosa beneath. These areas of pseudomembrane are sometimes described as "curdled milk" or "cottage cheese". The white material is made up of debris, fibrin, and desquamated epithelium that has been invaded by yeast cells and hyphae, which penetrate to the depth of the stratum spinosum. As an erythematous surface is revealed beneath the pseudomembranes, some consider pseudomembranous candidiasis and erythematous candidiasis stages of the same entity. Some sources state that if there is bleeding when the pseudomembrane is removed, then the mucosa has likely been affected by an underlying process such as lichen planus or chemotherapy. Pseudomembranous candidiasis can involve any part of the mouth, but usually it appears on the tongue, buccal mucosae or palate.
It is classically an acute condition, appearing in infants, people taking antibiotics or immunosuppressant medications, or immunocompromising diseases. However, sometimes it can be chronic and intermittent, even lasting for many years. Chronicity of this subtype generally occurs in immunocompromised states, (e.g., leukemia, HIV) or in persons who use corticosteroids topically or by aerosol. Acute and chronic pseudomembranous candidiasis are indistinguishable in appearance.
Erythematous
Erythematous (atrophic) candidiasis is when the condition appears as a red, raw-looking lesion. Some sources consider denture-related stomatitis, angular stomatitis, median rhomboid glossitis, and antibiotic-induced stomatitis as subtypes of erythematous candidiasis, since these lesions are commonly erythematous/atrophic. It may precede the formation of a pseudomembrane, be left when the membrane is removed, or arise without prior pseudomembranes. Some sources state that erythematous candidiasis accounts for 60% of oral candidiasis cases. Where it is associated with inhalation steroids (often used for treatment of asthma), erythematous candidiasis commonly appears on the palate or the dorsum of the tongue. On the tongue, there is loss of the lingual papillae (depapillation), leaving a smooth area.
Acute erythematous candidiasis usually occurs on the dorsum of the tongue in persons taking long term corticosteroids or antibiotics, but occasionally it can occur after only a few days of using a topical antibiotic. This is usually termed "antibiotic sore mouth", "antibiotic sore tongue", or "antibiotic-induced stomatitis" because it is commonly painful as well as red.
Chronic erythematous candidiasis is more usually associated with denture wearing (see denture-related stomatitis).
Hyperplastic
This variant is also sometimes termed "plaque-like candidiasis" or "nodular candidiasis". The most common appearance of hyperplastic candidiasis is a persistent white plaque that does not rub off. The lesion may be rough or nodular in texture. Hyperplastic candidiasis is uncommon, accounting for about 5% of oral candidiasis cases, and is usually chronic and found in adults. The most common site of involvement is the commissural region of the buccal mucosa, usually on both sides of the mouth.
Another term for hyperplastic candidiasis is "candidal leukoplakia". This term is a largely historical synonym for this subtype of candidiasis, rather than a true leukoplakia. Indeed, it can be clinically indistinguishable from true leukoplakia, but tissue biopsy shows candidal hyphae invading the epithelium. Some sources use this term to describe leukoplakia lesions that become colonized secondarily by Candida species, thereby distinguishing it from hyperplastic candidiasis. It is known that Candida resides more readily in mucosa that is altered, such as may occur with dysplasia and hyperkeratosis in an area of leukoplakia.
Associated lesions
Candida-associated lesions are primary oral candidiases (confined to the mouth), where the causes are thought to be multiple. For example, bacteria as well as Candida species may be involved in these lesions. Frequently, antifungal therapy alone does not permanently resolve these lesions, but rather the underlying predisposing factors must be addressed, in addition to treating the candidiasis.
Angular cheilitis
Angular cheilitis is inflammation at the corners (angles) of the mouth, very commonly involving Candida species, when sometimes the terms "Candida-associated angular cheilitis", or less commonly "monilial perlèche" are used. Candida organisms alone are responsible for about 20% of cases, and a mixed infection of C. albicans and Staphylococcus aureus for about 60% of cases. Signs and symptoms include soreness, erythema (redness), and fissuring of one, or more commonly both the angles of the mouth, with edema (swelling) seen intraorally on the commissures (inside the corners of the mouth). Angular cheilitis generally occurs in elderly people and is associated with denture related stomatitis.
Denture-related stomatitis
This term refers to a mild inflammation and erythema of the mucosa beneath a denture, usually an upper denture in elderly edentulous individuals (with no natural teeth remaining). Some report that up to 65% of denture wearers have this condition to some degree. About 90% of cases are associated with Candida species, where sometimes the terms "Candida-associated denture stomatitis" or "Candida-associated denture-induced stomatitis" (CADIS) are used. Some sources state that this is by far the most common form of oral candidiasis. Although this condition is also known as "denture sore mouth", there is rarely any pain.
Median rhomboid glossitis
This is an elliptical or rhomboid lesion in the center of the dorsal tongue, just anterior (in front) of the circumvallate papillae. The area is depapillated, reddened (or red and white) and rarely painful. There is frequently Candida species in the lesion, sometimes mixed with bacteria.
Linear gingival erythema
This is a localized or generalized, linear band of erythematous gingivitis (inflammation of the gums). It was first observed in HIV infected individuals and termed "HIV-gingivitis", but the condition is not confined to this group. Candida species are involved, and in some cases the lesion responds to antifungal therapy, but it is thought that other factors exist, such as oral hygiene and human herpesviruses. This condition can develop into necrotizing ulcerative periodontitis.
Others
Chronic multifocal oral candidiasis
This is an uncommon form of chronic (more than one month in duration) candidal infection involving multiple areas in the mouth, without signs of candidiasis on other mucosal or cutaneous sites. The lesions are variably red and/or white. Unusually for candidal infections, there is an absence of predisposing factors such as immunosuppression, and it occurs in apparently healthy individuals, normally elderly males. Smoking is a known risk factor.
Chronic mucocutaneous candidiasis
This refers to a group of rare syndromes characterized by chronic candidal lesions on the skin, in the mouth and on other mucous membranes (i.e., a secondary oral candidiasis). These include localized chronic mucocutaneous candidiasis, diffuse mucocutaneous candidiasis (Candida granuloma), candidiasis–endocrinopathy syndrome and candidiasis–thymoma syndrome. About 90% of people with chronic mucocutaneous candidiasis have candidiasis in the mouth.
Signs and symptoms
Signs and symptoms are dependent upon the type of oral candidiasis. Often, apart from the appearance of the lesions, there are usually no other signs or symptoms. Most types of oral candidiasis are painless, but a burning sensation may occur in some cases. Candidiasis can, therefore, sometimes be misdiagnosed as burning mouth syndrome. A burning sensation is more likely with erythematous (atrophic) candidiasis, whilst hyperplastic candidiasis is normally entirely asymptomatic. Acute atrophic candidiasis may feel like the mouth has been scalded with a hot liquid. Another potential symptom is a metallic, acidic, salty or bitter taste in the mouth. The pseudomembranous type rarely causes any symptoms apart from possibly some discomfort or bad taste due to the presence of the membranes. Sometimes the patient describes the raised pseudomembranes as "blisters." Occasionally there can be dysphagia (difficulty swallowing), which indicates that the candidiasis involves the oropharynx or the esophagus, as well as the mouth. The trachea and the larynx may also be involved where there is oral candidiasis, and this may cause hoarseness of the voice.
Causes
Species
The causative organism is usually Candida albicans, or less commonly other Candida species such as (in decreasing order of frequency) Candida tropicalis, Candida glabrata, Candida parapsilosis, Candida krusei, or other species (Candida stellatoidea, Candida pseudotropicalis, Candida famata, Candida rugosa, Candida geotrichium, Candida dubliniensis, and Candida guilliermondii). C. albicans accounts for about 50% of oral candidiasis cases, and together C. albicans, C. tropicalis and C. glabrata account for over 80% of cases. Candidiasis caused by non-C. albicans Candida (NCAC) species is associated more with immunodeficiency. For example, in HIV/AIDS, C. dubliniensis and C. geotrichium can become pathogenic.
About 35-50% of humans possess C. albicans as part of their normal oral microbiota. With more sensitive detection techniques, this figure is reported to rise to 90%. This candidal carrier state is not considered a disease, since there are no lesions or symptoms of any kind. Oral carriage of Candida is a prerequisite for the development of oral candidiasis. For Candida species to colonize and survive as a normal component of the oral microbiota, the organisms must be capable of adhering to the epithelial surface of the mucous membrane lining the mouth. This adhesion involves adhesins (e.g., hyphal wall protein 1) and extracellular polymeric materials (e.g., mannoprotein). Therefore, strains of Candida with more adhesion capability have more pathogenic potential than other strains. The prevalence of Candida carriage varies with geographic location and many other factors. Higher carriage is reported during the summer months, in females, in hospitalized individuals, in persons with blood group O and in non-secretors of blood group antigens in saliva. Increased rates of Candida carriage are also found in people who eat a diet high in carbohydrates, people who wear dentures, people with xerostomia (dry mouth), people taking broad-spectrum antibiotics, smokers, and immunocompromised individuals (e.g., due to HIV/AIDS, diabetes, cancer, Down syndrome or malnutrition). Age also influences oral carriage, with the lowest levels occurring in newborns, increasing dramatically in infants, and then decreasing again in adults. Investigations have quantified oral carriage of Candida albicans at 300-500 colony forming units in healthy persons. More Candida is detected in the early morning and the late afternoon. The greatest quantity of Candida species is harbored on the posterior dorsal tongue, followed by the palatal and the buccal mucosae. Mucosa covered by an oral appliance such as a denture harbors significantly more Candida species than uncovered mucosa.
When Candida species cause lesions - the result of invasion of the host tissues - this is termed candidiasis. Some consider oral candidiasis a change in the normal oral environment rather than an exposure or true "infection" as such. The exact process by which Candida species switch from acting as normal oral commensals (saprophytic) state in the carrier to acting as a pathogenic organism (parasitic state) is not completely understood.
Several Candida species are polymorphogenic, that is, capable of growing in different forms depending on the environmental conditions. C. albicans can appear as a yeast form (blastospores), which is thought to be relatively harmless; and a hyphal form associated with invasion of host tissues. Apart from true hyphae, Candida can also form pseudohyphae — elongated filamentous cells, lined end to end. As a general rule, candidiasis presenting with white lesions is mainly caused by Candida species in the hyphal form and red lesions by yeast forms. C. albicans and C. dubliniensis are also capable of forming germ tubes (incipient hyphae) and chlamydospores under the right conditions. C. albicans is categorized serologically into A or B serotypes. The prevalence is roughly equal in healthy individuals, but type B is more prevalent in immunocompromised individuals.
Predisposing factors
The host defenses against opportunistic infection of candida species are
The oral epithelium, which acts both as a physical barrier preventing micro-organisms from entering the tissues, and is the site of cell mediated immune reactions.
Competition and inhibition interactions between candida species and other micro-organisms in the mouth, such as the many hundreds of different kinds of bacteria.
Saliva, which possesses both mechanical cleansing action and immunologic action, including salivary immunoglobulin A antibodies, which aggregate candida organisms and prevent them adhering to the epithelial surface; and enzymatic components such as lysozyme, lactoperoxidase and antileukoprotease.
Disruption to any of these local and systemic host defense mechanisms constitutes a potential susceptibility to oral candidiasis, which rarely occurs without predisposing factors. It is often described as being "a disease of the diseased", occurring in the very young, the very old, or the very sick.
Immunodeficiency
Immunodeficiency is a state of reduced function of the immune system, which can be caused by medical conditions or treatments.
Acute pseudomembranous candidiasis occurs in about 5% of newborn infants. Candida species are acquired from the mother's vaginal canal during birth. At very young ages, the immune system is yet to develop fully and there is no individual immune response to candida species; an infant's antibodies to the fungus are normally supplied by the mother's breast milk.
Other forms of immunodeficiency which may cause oral candidiasis include HIV/AIDS, active cancer and its treatment with chemotherapy or radiotherapy.
Corticosteroid medications may contribute to the appearance of oral candidiasis, as they cause suppression of immune function either systemically or on a local/mucosal level, depending on the route of administration. Topically administered corticosteroids in the mouth may take the form of mouthwashes, dissolving lozenges or mucosal gels; sometimes being used to treat various forms of stomatitis. Systemic corticosteroids may also result in candidiasis.
Inhaled corticosteroids (e.g., for treatment of asthma or chronic obstructive pulmonary disease) are not intended to be administered topically in the mouth, but inevitably there is contact with the oral and oropharyngeal mucosa as the drug is inhaled. In asthmatics treated with inhaled steroids, clinically detectable oral candidiasis may occur in about 5-10% of adults and 1% of children. Where inhaled steroids are the cause, the candidal lesions are usually of the erythematous variety. Candidiasis appears at the sites where the steroid has contacted the mucosa, typically the dorsum of the tongue (median rhomboid glossitis) and sometimes also on the palate. Candidal lesions on both sites are sometimes termed "kissing lesions" because they approximate when the tongue is in contact with the palate.
Denture wearing
Denture wearing and poor denture hygiene, particularly wearing the denture continually rather than removing it during sleep, is another risk factor for both candidal carriage and oral candidiasis. Dentures provide a relatively acidic, moist and anaerobic environment because the mucosa covered by the denture is sheltered from oxygen and saliva. Loose, poorly fitting dentures may also cause minor trauma to the mucosa, which is thought to increase the permeability of the mucosa and increase the ability of C. albicans to invade the tissues. These conditions all favor the growth of C. albicans. Sometimes dentures become very worn, or they have been constructed to allow insufficient lower facial height (occlusal vertical dimension), leading to over-closure of the mouth (an appearance sometimes described as "collapse of the jaws"). This causes deepening of the skin folds at the corners of the mouth (nasolabial crease), in effect creating intertriginous areas where another form of candidiasis, angular cheilitis, can develop. Candida species are capable of adhering to the surface of dentures, most of which are made from polymethylacrylate. They exploit micro-fissures and cracks in the surface of dentures to aid their retention. Dentures may therefore become covered in a biofilm, and act as reservoirs of infection, continually re-infecting the mucosa. For this reason, disinfecting the denture is a vital part of treatment of oral candidiasis in persons who wear dentures, as well as correcting other factors like inadequate lower facial height and fit of the dentures.
Dry mouth
Both the quantity and quality of saliva are important oral defenses against candida. Decreased salivary flow rate or a change in the composition of saliva, collectively termed salivary hypofunction or hyposalivation is an important predisposing factor. Xerostomia is frequently listed as a cause of candidiasis, but xerostomia can be subjective or objective, i.e., a symptom present with or without actual changes in the saliva consistency or flow rate.
Diet
Malnutrition, whether by malabsorption, or poor diet, especially hematinic deficiencies (iron, vitamin B12, folic acid) can predispose to oral candidiasis, by causing diminished host defense and epithelial integrity. For example, iron deficiency anemia is thought to cause depressed cell-mediated immunity. Some sources state that deficiencies of vitamin A or pyridoxine are also linked.
There is limited evidence that a diet high in carbohydrates predisposes to oral candidiasis. In vitro and in vivo studies show that candidal growth, adhesion and biofilm formation are enhanced by the presence of carbohydrates such as glucose, galactose and sucrose.
Smoking
Smoking, especially heavy smoking, is an important predisposing factor but the reasons for this relationship are unknown. One hypothesis is that cigarette smoke contains nutritional factors for C. albicans, or that local epithelial alterations occur that facilitate colonization of candida species.
Antibiotics
Broad-spectrum antibiotics (e.g. tetracycline) eliminate the competing bacteria and disrupt the normally balanced ecology of oral microorganisms, which can cause antibiotic-induced candidiasis.
Other factors
Several other factors can contribute to infection, including endocrine disorders (e.g. diabetes when poorly controlled), and/or the presence of certain other mucosal lesions, especially those that cause hyperkeratosis and/or dysplasia (e.g. lichen planus). Such changes in the mucosa predispose it to secondary infection with candidiasis. Other physical mucosal alterations are sometimes associated with candida overgrowth, such as fissured tongue (rarely), tongue piercing, atopy, and/or hospitalization.
Diagnosis
The diagnosis can typically be made from the clinical appearance alone, but not always. As candidiasis can be variable in appearance, and present with white, red or combined white and red lesions, the differential diagnosis can be extensive. In pseudomembraneous candidiasis, the membranous slough can be wiped away to reveal an erythematous surface underneath. This is helpful in distinguishing pseudomembraneous candidiasis from other white lesions in the mouth that cannot be wiped away, such as lichen planus or oral hairy leukoplakia. Erythematous candidiasis can mimic geographic tongue. Erythematous candidiasis usually has a diffuse border that helps distinguish it from erythroplakia, which normally has a sharply defined border.
Special investigations to detect the presence of candida species include oral swabs, oral rinse or oral smears. Smears are collected by gentle scraping of the lesion with a spatula or tongue blade and the resulting debris directly applied to a glass slide. Oral swabs are taken if culture is required. Some recommend that swabs be taken from 3 different oral sites. Oral rinse involves rinsing the mouth with phosphate-buffered saline for 1 minute and then spitting the solution into a vessel that is then examined in a pathology laboratory. The oral rinse technique can distinguish between commensal candidal carriage and candidiasis. If candidal leukoplakia is suspected, a biopsy may be indicated. Smears and biopsies are usually stained with periodic acid-Schiff, which stains carbohydrates in fungal cell walls in magenta. Gram staining is also used, as Candida stains strongly Gram-positive.
Sometimes an underlying medical condition is sought, and this may include blood tests for full blood count and hematinics.
If a biopsy is taken, the histopathologic appearance can be variable depending upon the clinical type of candidiasis. Pseudomembranous candidiasis shows hyperplastic epithelium with a superficial parakeratotic desquamating (i.e., separating) layer. Hyphae penetrate to the depth of the stratum spinosum, and appear as weakly basophilic structures. Polymorphonuclear cells also infiltrate the epithelium, and chronic inflammatory cells infiltrate the lamina propria.
Atrophic candidiasis appears as thin, atrophic epithelium, which is non-keratinized. Hyphae are sparse, and there is inflammatory cell infiltration of the epithelium and the lamina propria. In essence, atrophic candidiasis appears like pseudomembranous candidiasis without the superficial desquamating layer.
Hyperplastic candidiasis is variable. Usually there is hyperplastic and acanthotic epithelium with parakeratosis. There is an inflammatory cell infiltrate and hyphae are visible. Unlike other forms of candidiasis, hyperplastic candidiasis may show dysplasia.
Treatment
Oral candidiasis can be treated with topical anti-fungal drugs, such as nystatin, miconazole, gentian violet or amphotericin B. Surgical excision of the lesions may be required in cases that do not respond to anti-fungal medications.
Underlying immunosuppression may be medically manageable once it is identified, and this helps prevent recurrence of candidal infections.
Patients who are immunocompromised, either with HIV/AIDS or as a result of chemotherapy, may require systemic prevention or treatment with oral or intravenous administered anti-fungals. However, there is strong evidence that drugs that are absorbed or partially absorbed from the GI tract can prevent candidiasis more effectively than drugs that are not absorbed in the same way.
If candidiasis is secondary to corticosteroid or antibiotic use, then use may be stopped, although this is not always a feasible option. Candidiasis secondary to the use of inhaled steroids may be treated by rinsing out the mouth with water after taking the steroid. Use of a spacer device to reduce the contact with the oral mucosa may greatly reduce the risk of oral candidiasis.
In recurrent oral candidiasis, the use of azole antifungals risks selection and enrichment of drug-resistant strains of candida organisms. Drug resistance is increasingly more common and presents a serious problem in persons who are immunocompromised.
Prophylactic use of antifungals is sometimes employed in persons with HIV disease, during radiotherapy, during immunosuppressive or prolonged antibiotic therapy as the development of candidal infection in these groups may be more serious.
The candidal load in the mouth can be reduced by improving oral hygiene measures, such as regular toothbrushing and use of anti-microbial mouthwashes. Since smoking is associated with many forms of oral candidiasis, cessation may be beneficial.
Denture hygiene
Good denture hygiene involves regular cleaning of the dentures, and leaving them out of the mouth during sleep. This gives the mucosa a chance to recover, while wearing a denture during sleep is often likened to sleeping in one's shoes. In oral candidiasis, the dentures may act as a reservoir of Candida species, continually reinfecting the mucosa once antifungal medication is stopped; infection of the denture-bearing mucosa in this way is termed denture-related stomatitis. Therefore, they must be disinfected as part of the treatment for oral candidiasis. There are commercial denture cleaner preparations for this purpose, but it is readily accomplished by soaking the denture overnight in a 1:10 solution of sodium hypochlorite (Milton, or household bleach). Bleach may corrode metal components, so if the denture contains metal, soaking it twice daily in chlorhexidine solution can be carried out instead. An alternative method of disinfection is to use a 10% solution of acetic acid (vinegar) as an overnight soak, or to microwave the dentures in 200 mL water for 3 minutes at 650 watts. Microwave sterilization is only suitable if no metal components are present in the denture. Antifungal medication can also be applied to the fitting surface of the denture before it is put back in the mouth. Other problems with the dentures, such as inadequate occlusal vertical dimension, may also need to be corrected in the case of angular cheilitis.
Prognosis
The severity of oral candidiasis is subject to great variability from one person to another and in the same person from one occasion to the next. The prognosis of such infection is usually excellent after the application of topical or systemic treatments. However, oral candidiasis can be recurrent. Individuals continue to be at risk of the condition if underlying factors such as reduced salivary flow rate or immunosuppression are not rectifiable.
Candidiasis can be a marker for underlying disease, so the overall prognosis may also be dependent upon this. For example, a transient erythematous candidiasis that developed after antibiotic therapy usually resolves after antibiotics are stopped (but not always immediately), and therefore carries an excellent prognosis—but candidiasis may occasionally be a sign of more sinister undiagnosed pathology, such as HIV/AIDS or leukemia.
It is possible for candidiasis to spread to/from the mouth, from sites such as the pharynx, esophagus, lungs, liver, anogenital region, skin or the nails. The spread of oral candidiasis to other sites usually occurs in debilitated individuals. It is also possible that candidiasis is spread by sexual contact. Rarely, a superficial candidal infection such as oral candidiasis can cause invasive candidiasis, and even prove fatal. The observation that Candida species are normally harmless commensals on the one hand, but are also occasionally capable of causing fatal invasive candidiases, has led to the description "Dr Jekyll and Mr Hyde".
The role of thrush in hospitalized and ventilated patients is not entirely clear; however, there is a theoretical risk of positive interaction of candida with topical bacteria.
Epidemiology
In humans, oral candidiasis is the most common form of candidiasis, by far the most common fungal infection of the mouth, and it also represents the most common opportunistic oral infection in humans, with lesions only occurring when the environment favors pathogenic behavior.
Oropharyngeal candidiasis is common during cancer care, and it is a very common oral sign in individuals with HIV. Oral candidiasis occurs in about two thirds of people with concomitant AIDS and esophageal candidiasis.
The incidence of all forms of candidiasis has increased in recent decades. This is due to developments in medicine, with more invasive medical procedures and surgeries, more widespread use of broad spectrum antibiotics and immunosuppression therapies. The HIV/AIDS global pandemic has been the greatest factor in the increased incidence of oral candidiasis since the 1980s. The incidence of candidiasis caused by NCAC species is also increasing, again thought to be due to changes in medical practice (e.g., organ transplantation and use of indwelling catheters).
History
Oral candidiasis has been recognized throughout recorded history. The first description of this condition is thought to have occurred in the 4th century B.C. in "Epidemics" (a treatise that is part of the Hippocratic corpus), where descriptions of what sounds like oral candidiasis are stated to occur with severe underlying disease.
The colloquial term "thrush" is of unknown origin but may stem from an unrecorded Old English word *þrusc or from a Scandinavian root. The term is not related to the bird of the same name.
Society and culture
Many pseudoscientific claims by proponents of alternative medicine surround the topic of candidiasis. Oral candidiasis is sometimes presented in this manner as a symptom of a widely prevalent systemic candidiasis, candida hypersensitivity syndrome, yeast allergy, or gastrointestinal candida overgrowth, which are medically unrecognized conditions. (See: Alternative medicine in Candidiasis)
References
External links
Animal fungal diseases
Mycosis-related cutaneous conditions
Oral mucosal pathology
Fungal diseases | Oral candidiasis | Biology | 7,199
1,939,326 | https://en.wikipedia.org/wiki/Harry%20C.%20Oberholser | Harry Church Oberholser (June 25, 1870 – December 25, 1963) was an American ornithologist.
Biography
Harry Oberholser was born to Jacob and Lavera S. Oberholser on June 25, 1870, in Brooklyn, New York. He attended Columbia University, but did not graduate. Later, Oberholser was awarded degrees (B.A., M.S., and Ph.D.) from the George Washington University. He married Mary Forrest Smith on June 30, 1914.
From 1895 to 1941, he was employed by the United States Bureau of Biological Survey (later the United States Fish and Wildlife Service) as an ornithologist, biologist, and editor. During his career, he collected bird specimens while on trips with Vernon Bailey and Louis Agassiz Fuertes. In 1928, Oberholser helped organize the Winter Waterfowl Survey, which continues to this day.
In 1941, at the age of 70, he became curator of ornithology at the Cleveland Museum of Natural History. Oberholser was the author of a number of books and articles. A complete manuscript of his work is available at the Dolph Briscoe Center for American History.
He died on December 25, 1963.
Memory
Empidonax oberholseri (dusky flycatcher) was named in his honor.
Books
The Bird Life of Texas (1974)
Birds of Mt. Kilimanjaro (1905)
Birds of the Anamba Islands (1917)
Birds of the Natuna Islands (1932)
The Bird Life of Louisiana (1938)
When Passenger Pigeons Flew in the Killbuck Valley (1999)
Critical notes on the subspecies of the spotted owl (1915)
The birds of the Tambelan Islands, South China Sea (1919)
The great plains waterfowl breeding grounds and their protection (1918)
References
External links
Mid-Winter Waterfowl Survey
1870 births
1963 deaths
American ornithologists
Taxon authorities
United States Department of Agriculture people | Harry C. Oberholser | Biology | 410 |
9,806,339 | https://en.wikipedia.org/wiki/Diane%20Bell%20%28anthropologist%29 | Diane Robin Bell (born 1943) is an Australian feminist anthropologist, author, and social justice advocate. Her work focuses on the Aboriginal people of Australia, Indigenous land rights, human rights, Indigenous religions, environmental issues, and feminist theory and practice.
Bell has undertaken fieldwork in central and southeastern Australia and in North America, and held senior positions in higher education in Australia and the USA. In 2005, after 17 years in the United States, she returned to Australia, and worked on a number of projects in South Australia. She is Professor Emerita of Anthropology at the George Washington University in Washington, D.C., U.S., and Distinguished Honorary Professor of Anthropology at the Australian National University, Canberra.
Her books include Daughters of the Dreaming (1983/1993/2002); Generations: Grandmothers, mothers, and daughters (1987); Law: The old and the new (1980/1984); and Ngarrindjeri Wurruwarrin: A world that is, was, and will be (1998/2014). Evil: A novel (2005) was adapted to a play.
Early life and education
Diane Robin Bell was born in 1943 in Melbourne, Victoria.
After leaving school at the age of 16, she trained as a primary school teacher at Frankston Teachers College (1960-1) and taught in a range of state schools in Victoria and New South Wales between 1962 and 1970.
After the birth of her children, Genevieve (1967) and Morgan (1969), she completed high school through night classes at Box Hill High School, Victoria (1970-1), her BA (Hons) in Anthropology at Monash University (1975), and a PhD from Australian National University (ANU) (1981).
Career
In 1981, Bell worked for the newly established Northern Territory Aboriginal Sacred Sites Protection Authority; set up her own anthropological consultancy in Canberra (1982-8); and consulted for the Central Land Council, the Northern Land Council, Aboriginal Legal Aid Services, the Australian Law Reform Commission, and the Aboriginal Land Commissioner. Her academic posts included Research Fellow at the ANU (1983-6) and then the Chair of Australian Studies at Deakin University in Geelong, where she was the first female professor on staff.
In 1989, Bell moved to the United States to take up the Chair of Religion, Economic Development and Social Justice endowed by the Henry R. Luce Foundation, at the College of the Holy Cross in Worcester, Massachusetts.
In 1999, she took up the position of Director of Women's Studies and Professor of Anthropology at the George Washington University (GWU) in Washington, D.C. As the recipient of a fellowship in 2003–4, awarded by the peak educational body, the American Council on Education, Bell also worked closely with the senior administration of Virginia Tech as they revised their curriculum.
On her retirement from GWU in 2005 she was awarded the title "Professor Emerita of Anthropology".
On her return to Australia in 2005, she was appointed writer and editor in residence at Flinders University, and visiting professor, School of Social Sciences at the University of Adelaide.
She is Emerita Professor at the ANU.
Anthropological work
Bell's first full-length anthropological monograph was Daughters of the Dreaming, which focused on the religious, spiritual and ceremonial lives of Aboriginal women in central Australia. The book has been in continuous print since its first publication in 1983 and subsequent editions in 1993 and 2002 engage with the debates the work stimulated. It is now well-established practice to have women's councils as part of the decision-making and consultative structures in Aboriginal affairs. Through her research and in giving expert evidence, Bell has been able to demonstrate that Aboriginal women are owners and managers of land in their own right.
In 1986, Melbourne publishers McPhee Gribble, with Bell as author, won the competitive tender from the Australian Bicentennial Authority (ABA) to write a book about women in Australia for the 1988 Bicentenary. The book Generations: Grandmothers, Mothers and Daughters traces ways in which significant objects in the lives of Australian women have been passed from generation to generation and explores how stories of the objects forge links with female kin. Bell used an ethnographic approach to explore the commonalities of Australian women's cultures across age, time, race and region. Shortly after it was published, the book reached number one on The Age bestseller list for works of non-fiction.
Throughout the latter part of the 1970s, and through most of the 1980s, Bell was involved in issues about Aboriginal land rights and law reform. With lawyer Pam Ditton, she authored Law: the old and the new. Aboriginal Women in Central Australia Speak Out (1980/1984), which addressed issues of law reform in Central Australia, in the wake of the passage of the Northern Territory Land Rights Act 1976. Bell worked on some 10 land claims for the Central Land Council, the Northern Land Council and the then Aboriginal Land Commissioner, Justice Toohey.
In the late 1990s, Bell was drawn into the Hindmarsh Island bridge controversy. In 1994, a group of Ngarrindjeri women, traditional owners of the Lower River Murray, Lakes Alexandrina, and Lake Albert and the Coorong (South Australia) had objected that a proposal to build a bridge from Goolwa to Kumarangk (Hindmarsh Island) near the Murray mouth would desecrate sites sacred to them as women. The gender-restricted knowledge that underwrote their claim became known as "secret women's business", and was contested in the media, courts, and academy. In 1996, a South Australian Royal Commission found that the women had deliberately fabricated their beliefs to thwart the development. However, with one exception, the women who claimed knowledge of the sacred tradition did not give evidence at the Royal Commission because they considered it to be a violation of their religious freedoms.
In 1997, in the Federal Court of Australia, the developers sought compensation for the losses incurred by the delays in the building of the bridge. Mr. Justice von Doussa heard from all parties to the dispute and, although the court had been informed that the case would not be a rerun of the Royal Commission, the matter of restricted women's knowledge recurred such that 'late in the trial the applicants amended their pleadings to specifically allege that the restricted women's knowledge, which they refer to as "women's business", was not a genuine Ngarrindjeri tradition'. In his 'Reasons for Decision' of August 2001, von Doussa noted 'the evidence received by the Court on this topic is significantly different to that which was before the Royal Commission. Upon the evidence before this Court I am not satisfied that the restricted women's knowledge was fabricated or that it was not part of genuine Aboriginal tradition'. As to Ngarrindjeri key witness, Doreen Kartinyeri, he wrote, "I am not persuaded that I should reject Dr. Kartinyeri's evidence and find that she has lied about the restricted women's business".
On 4 May 2009, "The Meeting of the Waters", the site complex the Ngarrindjeri women had sought to protect through the courts, was registered by the Government of South Australia. On 6 July 2010, on behalf of the SA Government, Paul Caica, Minister for Aboriginal Affairs, acknowledged Von Doussa's Decision that Ngarrindjeri knowledge was a genuine part of Aboriginal tradition and apologised for the great pain and hurt to the community.
Bell became involved in this matter of gender-restricted knowledge after the Royal Commission. On the basis of her research in the SA archives and fieldwork with the women in 1996–8, Bell was convinced there was sufficient evidence to support the women's claims that there was gender-restricted knowledge in Ngarrindjeri society and that the women had told the truth. Bell's subsequent monograph, Ngarrindjeri Wurruwarrin (1998/2014), was acclaimed. From 2005 to 2013, Bell lived on Ngarrindjeri lands as she researched and wrote the Connection Report for their Native Title Claim.
Writing
Bell is the author/editor of 10 books and numerous articles and book chapters dealing with religion, land rights, law reform, art, history, feminist theory and practice, and social change.
Bell's first published novel, Evil, addresses secrets within the Roman Catholic church and is set on the campus of a fictional American liberal arts college. Adapted by Leslie Jacobson to a play, Evil was performed for the "From Page to Stage" season on new plays at the Kennedy Center, Washington, D.C., USA, 2006 and in Adelaide, 2008. Bell's play "Weaving and Whispers" was performed at the TarraWarra Museum of Arts Biennial in 2014.
Politics
Bell ran as an independent candidate in the 2008 Mayo by-election, caused by the resignation of former foreign minister and Liberal leader Alexander Downer. Her campaign was called Vote4Di and was supported by a campaign website. South Australian independent Senator Nick Xenophon gave support to Bell's campaign. In a field of eleven candidates and the absence of a Labor candidate, Bell finished third on a 16.3 percent primary vote, behind the Greens on 21.4 (+10.4) percent and the Liberals on 41.3 (–9.8) percent. The seat became marginal for the Liberals on a 53.0 (–4.0) two-candidate vote.
River advocate
Bell campaigned for fresh water flows for the River Murray, Lakes Alexandrina and Albert and the Coorong. In 2007, she was a co-founder of the 'StoptheWeir' website and worked with the River, Lakes and Coorong Action Group Inc to stop the construction of a weir across the River Murray at Pomanda Island (at the point where the river enters Lake Alexandrina). She administered the "Hurry Save The Murray" website and has been a frequent speaker and commentator on environmental matters online, in the media and in preparing submissions and giving evidence to various environmental inquiries.
Other activities and roles
Bell served on the board of trustees for Hampshire College for eight years.
She has served on the editorial boards of several journals (Aboriginal History 1979–1988; Women's Studies International Forum 1990–2005) and was the Australian contributing member of the international editorial board for the Longmans Encyclopedia (1989), the Macmillan Encyclopedia of World Religions (2005), and the Encyclopedia of Religion in Australia (2009), Cambridge University Press.
Bell was a contributing consultant to National Geographic on their Taboo TV series (2002-4).
Recognition and awards
1999: Ngarrindjeri Wurruwarrin (1998), winner, NSW Premier's Gleebooks Prize for Critical Writing in the NSW Premier's Literary Awards
1999: Ngarrindjeri Wurruwarrin, shortlisted for The Age Book of the Year
2000: Ngarrindjeri Wurruwarrin, winner, Australian Literature Society Gold Medal
2021: Medal of the Order of Australia, for "service to literature" in the 2021 Queen's Birthday Honours
2023: Hazel Rowley Literary Fellowship
Works
As author
Evil: A novel Spinifex Press, Melbourne, 2005
Ngarrindjeri Wurruwarrin: A world that is, was, and will be Spinifex Press, Melbourne, 1998 (New edition 2014)
Generations: Grandmothers, Mothers and Daughters Melbourne, Penguin, 1987, with photographs by Ponch Hawke.
Daughters of the Dreaming, First ed. Melbourne, McPheeGribble/Sydney, Allen and Unwin, 1983 (Second ed. Minneapolis, University of Minnesota Press 1993; Third ed. Melbourne, Spinifex Press 2002)
Law: The Old and the New (with Pam Ditton) Aboriginal History, Canberra, 1980
As editor
Kungun Ngarrindjeri Miminar Yunnan: Listen to Ngarrindjeri Women Speaking Melbourne, Spinifex Press, 2008 (in collaboration with Ngarrindjeri women)
All about Water: All about the River (co-edited with Gloria Jones for the River, Lakes and Coorong Action Group, www.stoptheweir.com)
Radically Speaking: Feminism Reclaimed (Contributing co-editor with Renate Klein) Spinifex Press, Melbourne, 1996
Gendered Fields: Women, Men and Ethnography (Contributing co-editor with Pat Caplan and Wazir Karim) Routledge, London, 1993
This is My Story: The Use of Oral Sources (Contributing co-editor Shelley Schreiner) Centre for Australian Studies, Deakin University, Geelong, 1990
Longman's Encyclopedia (Australian Contributing Editor) Longmans, 1989
Religion in Aboriginal Australia (Contributing co-editor with Max Charlesworth, Kenneth Maddock and Howard Morphy) University of Queensland Press, St Lucia, 1984
As reviewer
Miles Franklin and the Serbs still matter: a review essay, Honest History, 1 December 2015
Sex, soldiers and the South Pacific, Honest History, 8 February 2016
An anthropologist, an historian and his historians, Honest History, 26 October 2016
Clare Wright's You Daughters of Freedom is a Big Book about Big Ideas, Honest History, 7 October 2018
Read and savour the salt of Bruce Pascoe's stories and essays of our land, Honest History, 1 November 2019
References
External links
Medal (OAM) of the Order of Australia in the General Division: Emeritus Professor Diane Robin BELL, ACT (includes a summary of academic positions and publications)
1943 births
Living people
Recipients of the Medal of the Order of Australia
Australian women anthropologists
Monash University alumni
Australian National University alumni
Columbian College of Arts and Sciences faculty
Australian anthropologists
Anthropology writers
Anthropology educators
Australian women novelists
Australian schoolteachers
Radical feminists
20th-century Australian novelists
Women science writers
20th-century Australian women writers
Australian feminist writers
College of the Holy Cross faculty | Diane Bell (anthropologist) | Technology | 2,856 |
23,316,944 | https://en.wikipedia.org/wiki/SN%202005E | SN 2005E (aka 2005-1032) was a calcium-rich supernova first observed in January 2005 that scientists concluded was a new type of cosmic explosion. The explosion originated in the galaxy NGC 1032, approximately 100 million light years away.
Location: (Epoch J2000)
Research and Conclusions
On May 19, 2010, a team of astronomers released a report on the discoveries made in their research of SN 2005E. The articles were published in the British journal Nature.
The researchers have determined that the blast emitted a large amount of calcium and titanium, which is evidence of a nuclear reaction involving helium, instead of the carbon and oxygen that are characteristic of Type Ia supernovae.
References
External links
Light curves and spectra on the Open Supernova Catalog
Supernovae
Cetus | SN 2005E | Chemistry,Astronomy | 160 |
33,035,640 | https://en.wikipedia.org/wiki/List%20of%20computer%20magazines%20in%20Spain | This is a list of computer magazines published in Spain.
List of computer magazines
Arroba
CiberSur
Computer Hoy
Gaceta Tecnológica
Hobby consolas
Linux Magazine
Micromanía
Novática
PC Actual
PC Forma
PC Pro
PC World
Linux+
Linux Actual
Linux User
Superjuegos
Todo Linux
Defunct
8000 Plus
Amiga World
Amigos del Amstrad
Amstrad Acción
Amstrad Educativo
Amstrad Mania
Amstrad Personal
Amstrad Sinclair Ocio
Amstrad User
Computer Music
CPC Attack
CPC User
FamilyPC
Megaocio
MicroHobby
Microhobby Amstrad Especial
Microhobby Amstrad Semanal
Mundo Amstrad
PC Magazine
PC Manía
PC Soft
PC User
PC Útil
PC World
PCW Plus
Programación Actual
Programando mi Amstrad
Solo Programadores
Super máquinas
Todo sobre el Amstrad
Tu Micro Amstrad
Users
Xtreme PC
See also
List of magazines in Spain
Computer magazines
Computer magazines published in Spain
Computer magazines | List of computer magazines in Spain | Technology | 211 |
72,701,354 | https://en.wikipedia.org/wiki/Journal%20of%20Rehabilitation%20in%20Civil%20Engineering | The Journal of Rehabilitation in Civil Engineering is a quarterly peer-reviewed open-access scientific journal published by Semnan University and the editor-in-chief is Ali Kheyroddin (Semnan University). The journal covers all aspects of rehabilitation engineering. It was established in 2012 and is indexed and abstracted in Scopus.
References
External links
Academic journals established in 2012
Civil engineering journals
Quarterly journals
English-language journals | Journal of Rehabilitation in Civil Engineering | Engineering | 87 |
1,627,370 | https://en.wikipedia.org/wiki/Wood%20shaper | A wood shaper (usually just shaper in North America or spindle moulder in the UK and Europe), is a stationary woodworking machine in which a vertically oriented spindle drives one or more stacked cutter heads to mill profiles on wood stock.
Description
Wood shaper cutter heads typically have three blades, and turn at one-half to one-eighth the speed of the smaller, much less expensive two-bladed bits used on a hand-held wood router. Adapters are sold allowing a shaper to drive router bits, a compromise on several levels, as are router tables, which are cost-saving adaptations of hand-held routers mounted to comparatively light-duty dedicated work tables.
The wood being fed into a moulder is commonly referred to as either stock or blanks. The spindle may be raised and lowered relative to the shaper's table, and rotates between 3,000 and 10,000 rpm, with stock running along a vertical fence.
Being both larger and much more powerful than routers, shapers can cut much larger profiles than routers, such as those for crown moulding and raised-panel doors, and readily drive custom-made bits fabricated with unique profiles. Shapers feature belt-driven motors, which run much more quietly and smoothly than direct-drive routers that typically run at 10,000 to 25,000 rpm. Speed adjustments are typically made by relocating the belts on a stepped pulley system, much like that on a drill press. Unlike routers, shapers are also able to run in reverse, which is necessary in performing some cuts.
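As an illustration of the stepped-pulley speed adjustment described above, the spindle speed follows the ordinary belt-drive ratio; the motor speed and pulley diameters below are hypothetical example values chosen only to show the calculation, not specifications of any particular machine:
\[
n_{\text{spindle}} = n_{\text{motor}} \times \frac{d_{\text{motor pulley}}}{d_{\text{spindle pulley}}}, \qquad \text{e.g. } 3450\ \text{rpm} \times \frac{4\ \text{in}}{2\ \text{in}} = 6900\ \text{rpm},
\]
which falls within the 3,000 to 10,000 rpm operating range quoted above; moving the belt to a different pair of steps changes the diameter ratio and hence the spindle speed.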
The most common form of wood shaper has a vertical spindle, some with tilting spindles or tables.
Shapers can be adapted to perform specialized cuts employing accessories such as sliding tables, tenon tables, tilting arbor, tenoning hoods, and interchangeable spindles. The standard US spindle shaft is 3/4 inch (19 mm), with 1/2 inch (13 mm) on small shapers and 30 mm on European models. Most spindles are tall enough to accommodate more than one cutter head, allowing rapid tooling changes by raising or lowering desired heads into position. Additional spindles can be fitted with pre-spaced cutter heads when more are needed for a job than fit on one.
A wood moulder (American English) is similar to a shaper, but is a more powerful and complex machine with multiple cutting heads at both 90-degrees and parallel to its table. A wood shaper has only a single cutting head, mounted on a perpendicular axis to its table.
Safety
The primary safety feature on a wood shaper is a guard mounted above the cutter protecting hands and garments from being drawn into its blades. Jigs, fixtures such as hold-downs, and accessories that include featherboards, also help prevent injury and generally result in better cuts. The starter, or fulcrum, pin is a metal rod which threads into the table a few inches away from the cutter allowing stock to be fed into it in a freehand cut.
In addition to aiding productivity and setting a consistent rate of milling, a power feeder keeps appendages and garments out of harm's way. It may be multi-speed, and employ rubber wheels to feed stock past the cutter head.
Types
Single head moulder (a "shaper" in the US):
Have a top (horizontal) head only.
Cost less to buy, and are less complex and easier to both set up and run.
Multi Head Moulder (a "moulder" in the US):
Has multiple cutting heads.
Can process more work on complex jobs than a single head tool, in a single pass.
May have (in its standard form) up to four cutting heads, two parallel to the table and two perpendicular to it.
An alternative configuration has two bottom and two top heads, in order to size the lumber with the first top and the first bottom head, and then shape the lumber with the remaining top and bottom heads. Machines with two or more right heads are more common in the furniture industry to give the ability to run shorter stock and deeper more detailed cuts on the edge of the stock.
Tooling
Tooling refers to cutters, knives, and blades, as well as planer blades. Most blades are made from either a tool steel alloy known as high speed steel (HSS) or from carbide. Cutter heads are normally made from either steel or aluminum. High speed steel, carbide, aluminum, and steel for the cutter heads all come in a wide variety of grades.
References
External links
Shaper | Wood shaper | Physics,Technology | 933 |
53,400,691 | https://en.wikipedia.org/wiki/Anorthoscope | An anorthoscope is a device that demonstrates an optical illusion that turns an anamorphic picture on a disc into a normal image by rotating it behind a counter-rotating disk with four radial slits. It was invented in 1829 by Belgian physicist Joseph Plateau, whose further studies of the principle led him to the 1832 invention of a stroboscopic animation that would become known as the phénakisticope (commonly regarded as a pinnacle in the development of cinematography, and thus as an important step in the history of modern media).
Anorthoscopes with a black background have a translucent picture and need a luminous slit revolving behind the image disc. To make them translucent, the discs were impregnated with oil on the back and varnished on both sides.
History
During early experiments for his study of physics at the University of Ghent, Plateau looked at two concentric cogwheels rotating in opposite directions, and noted how the fast-moving cogs formed the shadowy image of a motionless wheel. He later read Peter Mark Roget's 1824 article Explanation of an optical deception in the appearance of the spokes of a wheel when seen through vertical apertures that addressed a similar illusion and decided to further investigate the phenomenon. Some of his findings were published in Correspondance Mathématique et Physique in 1828.
On 9 June 1829, Plateau presented his (still unnamed) anorthoscope as "une espèce toute nouvelle d'anamorphoses" (a very new sort of anamorphoses) in his doctoral thesis Sur quelques propriétés des impressions produites par la lumière sur l'organe de la vue (On some properties of the impressions that light produces in the organ of sight), at the University of Liège. It was later translated and published in the German scientific magazine Annalen der Physik und Chemie. A letter to Correspondance Mathématique et Physique dated 5 December 1829 included pictures of a disc and the resulting image as an illustration of these "new species of anamorphoses".
Plateau revisited this concept several times in the Correspondance Mathématique et Physique and by January 1836 he had arranged for the device to be released commercially. He sent a box with the instrument to Michael Faraday on 8 January 1836, since they both studied this type of optical illusion. Faraday had previously inspired Plateau to use a mirror with revolving discs, which helped Plateau to develop his Fantascope a.k.a. Phénakisticope. Faraday thought the anorthoscope was a beautiful machine with an exceedingly curious and good effect, and mentioned: "It has wonderfully surprised many to whom I have showed it and they all refuse to believe their own eyes and cannot admit that the forms seen are the things looked at".
The device was marketed starting in 1836 by publishers like Newton & Co in London, Susse in Paris (12 different discs) and J. Duboscq in Paris (at least 18 different discs).
Plateau apparently first used the name "anorthoscope" in a letter to his mentor, publisher and good friend Adolphe Quetelet, planning to send an example of the device to Miss Quetelet as a gift. Soon after, he presented it to the Royal Academy in Brussels in 1836.
Joseph Plateau created a combination of his Fantascope (or phénakisticope) and the Anorthoscope sometime between 1844 and 1849, resulting in a back-lit transparent disc with a sequence of figures that are animated when it is rotated behind a counter-rotating black disc with four illuminated slits, spinning four times as fast. Unlike the phénakisticope, several people could view the animation at the same time. This system has not been commercialized; the only two known handmade discs are in the Joseph Plateau Collection of the Ghent University. Belgian painter Jean Baptiste Madou created the first images on these discs and Plateau painted the successive parts.
21st century
A scientific paper on the effects of the anorthoscope was published in 2007.
A rare completed 1836 anorthoscope set by Susse with twelve discs was auctioned in 2013. The other two known extant sets of this edition are in the Werner Nekes collection and in the Joseph Plateau Collection in the Science Museum of the Ghent University.
References
External links
Optical illusions
Optical toys
Precursors of film
1820s toys
Audiovisual introductions in 1829 | Anorthoscope | Physics | 907 |
7,562,237 | https://en.wikipedia.org/wiki/Frankie%20%28magazine%29 | Frankie, styled as frankie, is a bi-monthly Australian magazine that features music, art, fashion, photography, craft and other cultural content.
In 2012, it was awarded Australian Magazine of the Year at the Australian Magazine Awards, as well as winning out over both Vogue and Harper's Bazaar for the Australian Fashion Magazine of The Year.
History and profile
frankie was launched in October 2004 by editor Louise Bannister and creative director Lara Burke.
In early 2008, Bannister was replaced by Jo Walker as editor, and Bannister became publisher. Walker was promoted to editor-in-chief in May 2016, and left the company in September 2018.
Sophie Kalagas, who joined the magazine in 2013 as assistant editor and online editor, served as the magazine's editor until August 2021, when she left to pursue new opportunities and was replaced by assistant editor Emma Do.
Frankie celebrated its 100th issue in February 2021.
frankie is owned and published by Australian magazine publisher Nextmedia, which bought its niche publisher, Morrison Media, from Pacific Star Network in September 2018 for AUD $2.4 million.
In November 2014, Pacific Star Network had paid AUD $10 million to acquire Morrison Media and its chief entity "frankie press", which published frankie, men's publication Smith Journal, plus books and annuals. Both founders of the magazine, Louise Bannister and Lara Burke, left the magazine before the Pacific Star acquisition.
The magazine was headquartered in Queensland until 2013 when it was relocated in Melbourne.
Readership
The magazine's audience has grown rapidly since its inception. As of March 2019 it is estimated to be read by 335,000 globally. Despite the 2008 Global Financial Crisis that saw Australian magazine sales drop 3%, frankie's circulation rose by 31.60% in 2009, making frankie the fastest growing magazine in Australia during that period. The publication continued this trend in 2010, when circulation rose another 43.20%, according to the Audit Bureau of Circulations' January–June 2010 audit figures. This was the second year in a row that frankie had the highest growth out of all Australian magazines. In comparison, Harper's Bazaar had an increase of 9.04% over the same time period.
However, following Morrison Media's 2014 sale to Pacific Star Network the magazine's circulation dropped in 2015 for the first time in its history: a 3.5% drop from 63,645 copies in 2014 to 61,427. This was a difficult period for magazines targeted to young women; Dolly, Cleo, Girlfriend and Total Girl all saw their circulations slump.
By February 2017, when fewer than 20 circulation-audited magazine titles remained in Australia, frankie's circulation had slid by 5.4% to 50,167.
frankie has significant social media impact: in 2015 it had a strong online component and steadily increased its popularity on Facebook, Instagram and its dedicated iPad app. In March 2020 it had over 358,000 Facebook fans and 73,700 Twitter followers.
Content
The magazine's content features DIY and vintage culture as well as music, art, fashion, photography, craft, humour, hipster culture, illustration and design. It has a distinctively feminine, hand-crafted aesthetic.
ABC's 7:30 Report described frankie as being between "quite edgy" and "quite daggy", having a strong emphasis on strong, curious stories instead of diets and celebrity culture, supporting emerging artists, musicians, entrepreneurs and designers and preferring to profile up-and-coming hipsters rather than existing ones.
Its senior contributors have included broadcaster and writer Marieke Hardy, and author Benjamin Law.
Other publications
In addition to its magazine and website, frankie has also published a series of books through its imprint, frankie press. These include two recipe books, "Afternoon Tea" and "Sweet Treats", an anthology of frankie magazine photography called "Photo Album", a book series on creative and collaborative areas called "Spaces", a showcase of Australian craftspeople called "Look What We Made", and a "Gift Paper Book".
frankie also brings out a calendar and a diary annually, which have both sold out every year to date.
In 2011 frankie launched a quarterly publication, titled Smith Journal. Smith Journal was part of the sale of Morrison Media to Nextmedia but ceased publication in December 2019.
Staff
Editor: Shannon Jenkins
Creative Director: Alice Buda
Designer: Caitlyn Bendall
Partnerships Director: Claire Mullins
Branded Content Director: Emily Naismith
Production and Office Manager: Lizzie Dynon
Digital and Assistant Editor: Elle Burnard
Marketing Coordinator: Iris McPherson
Digital Marketing Manager: Kelsey Caruana
References
External links
2004 establishments in Australia
Women's magazines published in Australia
Bi-monthly magazines published in Australia
Design magazines
Women's fashion magazines
Magazines established in 2004
Pacific Star Network
Mass media in Queensland
Magazines published in Melbourne | Frankie (magazine) | Engineering | 990 |
19,895 | https://en.wikipedia.org/wiki/Molecular%20cloud | A molecular cloud, sometimes called a stellar nursery if star formation is occurring within, is a type of interstellar cloud of which the density and size permit absorption nebulae, the formation of molecules (most commonly molecular hydrogen, H2), and the formation of H II regions. This is in contrast to other areas of the interstellar medium that contain predominantly ionized gas.
Molecular hydrogen is difficult to detect by infrared and radio observations, so the molecule most often used to determine the presence of H2 is carbon monoxide (CO). The ratio between CO luminosity and H2 mass is thought to be constant, although there are reasons to doubt this assumption in observations of some other galaxies.
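As a rough sketch of how this assumed proportionality is used in practice, observers adopt a "CO-to-H2 conversion factor"; the numerical values below are those conventionally assumed for the Milky Way disc, not figures given in this article:
\[
N(\mathrm{H_2}) = X_{\mathrm{CO}}\, W_{\mathrm{CO}}, \qquad X_{\mathrm{CO}} \approx 2 \times 10^{20}\ \mathrm{cm^{-2}\,(K\,km\,s^{-1})^{-1}},
\]
where \(W_{\mathrm{CO}}\) is the velocity-integrated intensity of the CO line. Equivalently, a cloud's molecular mass can be estimated from its CO luminosity as \(M(\mathrm{H_2}) \approx \alpha_{\mathrm{CO}} L_{\mathrm{CO}}\), with \(\alpha_{\mathrm{CO}} \approx 4.3\ M_\odot\,(\mathrm{K\,km\,s^{-1}\,pc^2})^{-1}\) once helium is included; the doubts mentioned above amount to evidence that the conversion factor varies with metallicity and environment in other galaxies.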
Within molecular clouds are regions with higher density, where much dust and many gas cores reside, called clumps. These clumps are the beginning of star formation if gravitational forces are sufficient to cause the dust and gas to collapse.
Research and discovery
The history pertaining to the discovery of molecular clouds is closely related to the development of radio astronomy and astrochemistry. During World War II, at a small gathering of scientists, Henk van de Hulst first reported that he had calculated that the neutral hydrogen atom should emit a detectable radio signal. This discovery was an important step towards the research that would eventually lead to the detection of molecular clouds.
Once the war ended, and aware of the pioneering radio astronomical observations performed by Jansky and Reber in the US, Dutch astronomers repurposed the dish-shaped antennas along the Dutch coastline, once used by the Germans as a warning radar system, into radio telescopes, initiating the search for the hydrogen signature in the depths of space.
The neutral hydrogen atom consists of a proton with an electron in its orbit. Both the proton and the electron have spin. When the relative spin state flips from the parallel condition to the antiparallel condition, which has less energy, the atom sheds the excess energy by radiating a spectral line at a frequency of 1420.405 MHz.
This frequency is generally known as the 21 cm line, referring to its wavelength in the radio band. The 21 cm line is the signature of HI and makes the gas detectable to astronomers back on earth. The discovery of the 21 cm line was the first step towards the technology that would allow astronomers to detect compounds and molecules in interstellar space.
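The quoted wavelength follows directly from the transition frequency given above:
\[
\lambda = \frac{c}{\nu} = \frac{2.998 \times 10^{8}\ \mathrm{m\,s^{-1}}}{1.420405 \times 10^{9}\ \mathrm{Hz}} \approx 0.211\ \mathrm{m} \approx 21\ \mathrm{cm}.
\]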
In 1951, two research groups nearly simultaneously discovered radio emission from interstellar neutral hydrogen. Ewen and Purcell reported the detection of the 21-cm line in March, 1951. Using the radio telescope at the Kootwijk Observatory, Muller and Oort reported the detection of the hydrogen emission line in May of that same year.
Once the 21-cm emission line was detected, radio astronomers began mapping the neutral hydrogen distribution of the Milky Way Galaxy. Van de Hulst, Muller, and Oort, aided by a team of astronomers from Australia, published the Leiden-Sydney map of neutral hydrogen in the galactic disk in 1958 in the Monthly Notices of the Royal Astronomical Society. This was the first neutral hydrogen map of the galactic disc and also the first map showing the spiral arm structure within it.
Following the work on atomic hydrogen detection by van de Hulst, Oort and others, astronomers began to regularly use radio telescopes, this time looking for interstellar molecules. In 1963 Alan Barrett and Sander Weinreb at MIT found the absorption line of OH in the supernova remnant Cassiopeia A. This was the first detection of an interstellar molecule at radio wavelengths. More interstellar OH detections quickly followed and in 1965, Harold Weaver and his team of radio astronomers at Berkeley identified OH emission lines coming from the direction of the Orion Nebula and in the constellation of Cassiopeia.
In 1968, Cheung, Rank, Townes, Thornton and Welch detected NH₃ inversion line radiation in interstellar space. A year later, Lewis Snyder and his colleagues found interstellar formaldehyde. Also in the same year George Carruthers managed to identify molecular hydrogen. The numerous detections of molecules in interstellar space would help pave the way to the discovery of molecular clouds in 1970.
Hydrogen is the most abundant species of atom in molecular clouds, and under the right conditions it will form the H2 molecule. Despite its abundance, the detection of H2 proved difficult. Because it is a symmetrical molecule with no permanent electric dipole moment, H2 has only weak rotational and vibrational transitions, making it virtually invisible to direct observation.
The solution to this problem came when Arno Penzias, Keith Jefferts, and Robert Wilson identified CO in the star-forming region in the Omega Nebula. Carbon monoxide is much easier to detect than H2 because its asymmetrical structure gives it a permanent dipole moment and strong, low-energy rotational transitions. CO soon became the primary tracer of the clouds where star-formation occurs.
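A brief sketch of why CO is so readily excited even in cold clouds; the rigid-rotor expression and the rotational constant used here are standard textbook values for CO, not figures taken from this article:
\[
E_J = h B J(J+1), \qquad \nu(J{=}1 \to 0) = 2B \approx 115.27\ \mathrm{GHz}, \qquad \frac{E_{J=1}}{k_B} \approx 5.5\ \mathrm{K},
\]
so the first excited rotational level lies only a few kelvin above the ground state and is easily populated at typical molecular cloud temperatures of 10 to 20 K, which is why the CO J = 1-0 line traces cold molecular gas so well.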
In 1970, Penzias and his team quickly detected CO in other locations close to the galactic center, including the giant molecular cloud identified as Sagittarius B2, 390 light years from the galactic center, making it the first detection of a molecular cloud in history. Penzias and Wilson would later receive the Nobel Prize in Physics for their discovery of the cosmic microwave background, the microwave emission left over from the Big Bang.
Due to their pivotal role, research about these structures have only increased over time. A paper published in 2022 reports over 10,000 molecular clouds detected since the discovery of Sagittarius B2.
Occurrence
Within the Milky Way, molecular gas clouds account for less than one percent of the volume of the interstellar medium (ISM), yet it is also the densest part of it. The bulk of the molecular gas is contained in a ring between 3.5 and 7.5 kiloparsecs from the center of the Milky Way (the Sun is about 8.5 kiloparsecs from the center). Large scale CO maps of the galaxy show that the position of this gas correlates with the spiral arms of the galaxy. That molecular gas occurs predominantly in the spiral arms suggests that molecular clouds must form and dissociate on a timescale shorter than 10 million years—the time it takes for material to pass through the arm region.
Perpendicular to the plane of the galaxy, the molecular gas inhabits the narrow midplane of the galactic disc with a characteristic scale height, Z, of approximately 50 to 75 parsecs, much thinner than the warm atomic (Z from 130 to 400 parsecs) and warm ionized (Z around 1000 parsecs) gaseous components of the ISM. The exceptions to the ionized-gas distribution are H II regions, which are bubbles of hot ionized gas created in molecular clouds by the intense radiation given off by young massive stars; and as such they have approximately the same vertical distribution as the molecular gas.
This distribution of molecular gas is averaged out over large distances; however, the small scale distribution of the gas is highly irregular, with most of it concentrated in discrete clouds and cloud complexes.
General structure and chemistry of molecular clouds
Molecular clouds typically have interstellar medium densities of 10 to 30 particles per cubic centimetre, and constitute approximately 50% of the total interstellar gas in a galaxy. Most of the gas is found in a molecular state. The visual boundaries of a molecular cloud are not where the cloud effectively ends, but where molecular gas changes to atomic gas in a fast transition, forming "envelopes" of mass that give the impression of an edge to the cloud structure. The structure itself is generally irregular and filamentary.
Cosmic dust and the ultraviolet radiation emitted by stars are key factors that determine not only the gas density and column density, but also the molecular composition of a cloud. The dust provides shielding to the molecular gas inside, preventing dissociation by the ultraviolet radiation. The dissociation caused by UV photons is the main mechanism for transforming molecular material back to the atomic state inside the cloud. Molecular content in a region of a molecular cloud can change rapidly due to variation in the radiation field and movement and disturbance of the dust.
Most of the gas constituting a molecular cloud is molecular hydrogen, with carbon monoxide being the second most common compound. Molecular clouds also usually contain other elements and compounds. Astronomers have observed long-chain compounds such as methanol and ethanol, as well as benzene rings and several of their hydrides. Large molecules known as polycyclic aromatic hydrocarbons have also been detected.
The density across a molecular cloud is fragmented, and its regions can generally be categorized into clumps and cores. Clumps form the larger substructure of the cloud, with an average size of about 1 pc. Clumps are the precursors of star clusters, though not every clump will eventually form stars. Cores are much smaller (by a factor of 10) and have higher densities. Cores are gravitationally bound and go through a collapse during star formation.
In astronomical terms, molecular clouds are short-lived structures that are either destroyed or go through major structural and chemical changes approximately 10 million years into their existence. Their short life span can be inferred from the range in age of young stars associated with them, of 10 to 20 million years, matching molecular clouds’ internal timescales.
Direct observation of T Tauri stars inside dark clouds and OB stars in star-forming regions match this predicted age span. The fact that OB stars older than 10 million years do not have a significant amount of cloud material about them suggests that most of the cloud is dispersed after this time. The lack of large amounts of frozen molecules inside the clouds also suggests a short-lived structure. Some astronomers propose that the molecules never froze in very large quantities due to turbulence and the fast transition between atomic and molecular gas.
Cloud formation and destruction
Due to their short lifespan, it follows that molecular clouds are constantly being assembled and destroyed. By calculating the rate at which stars are forming in our galaxy, astronomers can estimate the amount of interstellar gas being collected into star-forming molecular clouds. The rate of mass being assembled into stars is approximately 3 M☉ per year. Since only 2% of the mass of a molecular cloud is assembled into stars, this implies that around 150 M☉ of gas is assembled into molecular clouds in the Milky Way per year.
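The 150 M☉ figure is simply the star formation rate divided by the 2% efficiency; as a worked restatement of the arithmetic above (with ε denoting the efficiency):

```latex
\dot{M}_{\mathrm{clouds}} \approx \frac{\dot{M}_{*}}{\epsilon}
  = \frac{3\ M_{\odot}\,\mathrm{yr}^{-1}}{0.02}
  = 150\ M_{\odot}\,\mathrm{yr}^{-1}
```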
Two possible mechanisms for molecular cloud formation have been suggested by astronomers: cloud growth by collision, and gravitational instability in the gas layer spread throughout the galaxy. Models of the collision theory have shown that it cannot be the main mechanism for cloud formation, because the very long timescale it would take to form a molecular cloud exceeds the average lifespan of such structures.
Gravitational instability is likely to be the main mechanism. Regions with more gas exert a greater gravitational force on their neighboring regions and draw in surrounding material. This extra material increases the density, further increasing their gravitational attraction. Mathematical models of gravitational instability in the gas layer predict formation times consistent with the estimated timescale of cloud formation.
Once a molecular cloud assembles enough mass, the densest regions of the structure will start to collapse under gravity, creating star-forming clusters. This process is highly destructive to the cloud itself. Once stars are formed, they begin to ionize portions of the cloud around them with their intense radiation. The ionized gas then evaporates and is dispersed in formations called ‘champagne flows’. This process begins when approximately 2% of the mass of the cloud has been converted into stars. Stellar winds are also known to contribute to cloud dispersal. The cycle of cloud formation and destruction is closed when the gas dispersed by stars cools again and is pulled into new clouds by gravitational instability.
Star formation
Star formation involves the collapse of the densest part of the molecular cloud, fragmenting the collapsed region into smaller clumps. These clumps aggregate more interstellar material, increasing in density by gravitational contraction. This process continues until the temperature reaches a point where the fusion of hydrogen can occur. The burning of hydrogen then generates enough heat to push against gravity, creating hydrostatic equilibrium. At this stage, a protostar is formed, and it will continue to aggregate gas and dust from the cloud around it.
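Whether a given clump collapses can be estimated with the classical Jeans criterion: gravity wins once a region's mass exceeds the Jeans mass for its temperature and density. The sketch below is illustrative, using textbook core conditions (T = 10 K, n = 10⁴ particles per cm³) rather than values from this article.

```python
import math

# Illustrative sketch: Jeans mass for a dense molecular core, assuming
# T = 10 K and n = 1e4 particles/cm^3 (typical textbook core conditions,
# not figures from this article).

k_B = 1.380649e-23   # Boltzmann constant, J/K
G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6726e-27     # hydrogen mass, kg
mu  = 2.33           # mean molecular weight of molecular gas

T = 10.0             # gas temperature, K
n = 1e4 * 1e6        # number density in m^-3 (1e4 per cm^3)
rho = n * mu * m_H   # mass density, kg/m^3

# Jeans mass: the largest mass thermal pressure can support against
# self-gravity; above it, collapse toward a protostar begins.
M_J = (5 * k_B * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5

M_sun = 1.989e30
print(f"Jeans mass ~ {M_J / M_sun:.1f} solar masses")  # ~5 M_sun
```

With these numbers the Jeans mass comes out to a few solar masses, consistent with cores being the birthplaces of individual stars or small systems.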
One of the most studied star formation regions is the Taurus molecular cloud, due to its close proximity to Earth (140 pc or 430 ly away), making it an excellent object for collecting data about the relationship between molecular clouds and star formation. Embedded in the Taurus molecular cloud are T Tauri stars. These are a class of variable stars in an early stage of stellar development, still gathering gas and dust from the cloud around them. Observations of star-forming regions have helped astronomers develop theories about stellar evolution. Many O and B type stars have been observed in or very near molecular clouds. Since these star types belong to population I (some are less than 1 million years old), they cannot have moved far from their birthplace. Many of these young stars are found embedded in cloud clusters, suggesting that stars are formed inside them.
Types of molecular cloud
Giant molecular clouds
A vast assemblage of molecular gas that has more than 10 thousand times the mass of the Sun is called a giant molecular cloud (GMC). GMCs are around 15 to 600 light-years (5 to 200 parsecs) in diameter, with typical masses of 10 thousand to 10 million solar masses. Whereas the average density in the solar vicinity is one particle per cubic centimetre, the average volume density of a GMC is about ten to a thousand times higher. Although the Sun is much denser than a GMC, the volume of a GMC is so great that it contains much more mass than the Sun. The substructure of a GMC is a complex pattern of filaments, sheets, bubbles, and irregular clumps.
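The quoted mass range can be checked with a back-of-the-envelope calculation: treat a GMC as a uniform sphere and multiply its volume by its mean density. The diameter and particle density below are illustrative values chosen from within the ranges given in the text.

```python
import math

# Illustrative sketch: order-of-magnitude mass of a giant molecular
# cloud modeled as a uniform sphere, assuming D = 100 pc and
# n = 100 particles/cm^3 (values within the ranges quoted above).

pc    = 3.086e18     # parsec in cm
m_H   = 1.6726e-24   # hydrogen mass in g
mu    = 2.33         # mean molecular weight of molecular gas
M_sun = 1.989e33     # solar mass in g

R = 50 * pc          # radius of a 100 pc diameter cloud, cm
n = 100.0            # particles per cm^3

volume = 4 / 3 * math.pi * R**3   # cm^3
mass = volume * n * mu * m_H      # g
print(f"GMC mass ~ {mass / M_sun:.1e} solar masses")  # ~3e6 M_sun
```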
Filaments are ubiquitous in molecular clouds. Dense molecular filaments will fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed manner in which the filaments fragment. In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with a spacing of about 0.15 parsec, comparable to the filament's inner width. A substantial fraction of filaments contain prestellar and protostellar cores, supporting the important role of filaments in the formation of gravitationally bound cores. Recent studies have suggested that filamentary structures in molecular clouds play a crucial role in the initial conditions of star formation and the origin of the stellar initial mass function (IMF).
The densest parts of the filaments and clumps are called molecular cores, while the densest molecular cores are called dense molecular cores and have densities in excess of 10⁴ to 10⁶ particles per cubic centimetre. Typical molecular cores are traced with CO and dense molecular cores are traced with ammonia. The concentration of dust within molecular cores is normally sufficient to block light from background stars so that they appear in silhouette as dark nebulae.
GMCs are so large that local ones can cover a significant fraction of a constellation; thus they are often referred to by the name of that constellation, e.g. the Orion molecular cloud (OMC) or the Taurus molecular cloud (TMC). These local GMCs are arrayed in a ring in the neighborhood of the Sun coinciding with the Gould Belt. The most massive collection of molecular clouds in the galaxy forms an asymmetrical ring about the galactic center at a radius of 120 parsecs; the largest component of this ring is the Sagittarius B2 complex. The Sagittarius region is chemically rich and is often used as an exemplar by astronomers searching for new molecules in interstellar space.
Small molecular clouds
Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules. The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies.
High-latitude diffuse molecular clouds
In 1984 IRAS identified a new type of diffuse molecular cloud. These were diffuse filamentary clouds that are visible at high galactic latitudes. These clouds have a typical density of 30 particles per cubic centimetre.
List of molecular cloud complexes
Sagittarius B2
Serpens-Aquila Rift
Rho Ophiuchi cloud complex
Corona Australis molecular cloud
Musca–Chamaeleonis molecular cloud
Vela Molecular Ridge
Radcliffe wave
Orion molecular cloud complex
Taurus molecular cloud
Perseus molecular cloud
See also
References
External links
Nebulae
Cosmic dust
Concepts in astronomy | Molecular cloud | Physics,Astronomy | 3,365 |
52,772,351 | https://en.wikipedia.org/wiki/Nasal-associated%20lymphoid%20tissue | Nasal- or nasopharynx-associated lymphoid tissue (NALT) represents the immune system of the nasal mucosa and is a part of the mucosa-associated lymphoid tissue (MALT) in mammals. It protects the body from airborne viruses and other infectious agents. In humans, NALT is considered analogous to Waldeyer's ring.
Structure
NALT in mice is localized on the cartilaginous soft palate of the upper jaw, situated bilaterally on the posterior side of the palate. It consists mainly of lymphocytes, T-cell and B-cell enriched zones, follicle-associated epithelium (FAE) with epithelial M cells, and some erythrocytes. M cells are specialized for antigen uptake from the mucosa. In some areas of NALT, there are lymphatic vessels and HEVs (high endothelial venules). Dendritic cells and macrophages are also present.
NALT contains about the same amounts of T cells and B cells. The T-cell population contains about 3–4 times more CD4+ T cells than CD8+ T cells. Most T cells carry the αβ T-cell receptor (TCR) and only a few carry the γδ TCR. CD4+ T cells are in a naive state, marked by high expression of CD45RB. B cells are mostly in an unswitched state, with an sIgM+ IgD+ phenotype.
Development
Formation of NALT starts early after birth; it is not present during embryogenesis or in newborn mice. The first signs of NALT (HEVs with associated lymphocytes) occur one week after birth, but full formation is established after 5–8 weeks. In contrast to Peyer's patches and lymph nodes, NALT formation is independent of IL-7R, LT-βR and ROR-γ signalling. It requires the Id2 gene, which induces the genesis of CD3−CD4+CD45+ cells. These cells accumulate at the site of NALT after birth and induce NALT formation.
Function
NALT in mice has a strategic position for incoming pathogens and is the first site of recognition and elimination of inhaled pathogens. It has a key role in inducing mucosal and systemic immune responses. NALT is an inductive site of MALT, similar to Peyer's patches in the small intestine.
After intranasal immunization or pathogen recognition, lymphocytes in NALT proliferate and differentiate. They start to produce cytokines such as IFN-γ, type I interferons, IL-2, IL-4, IL-5, IL-6 or IL-10 (the amounts depend on the immunizing agent or adjuvant used). B cells go through isotype switching and produce antigen-specific IgM, IgG and mainly IgA. Activated B cells can migrate through the body to the respiratory and genitourinary tracts, because they express the chemokine receptor CCR10 and α4β1-integrin. Memory T and B cells are established and last for a long time after immunization.
Vaccination
Intranasal (i.n.) immunization or vaccination is an effective way to stimulate the respiratory immune system. This route of immunization can provoke both cell-mediated and humoral immune responses and is capable of stimulating both the mucosal and systemic immune systems. A dose of intranasally administered antigen can be much smaller than an orally administered one, because the antigen is not exposed to digestive enzymes. It would thus be a suitable route of vaccination against airborne viruses and bacteria. In 1997, a nasal-spray vaccine containing inactivated influenza virus with nLT (heat-labile enterotoxin) as an adjuvant was used in Switzerland, but it had to be withdrawn from the market because it caused Bell's palsy in some patients. Scientists are therefore looking for more suitable and safe adjuvants; for example, Masafumi Yamamoto et al. demonstrated safe and efficient i.n. vaccination in a mouse model against Streptococcus pneumoniae in 1998 and against influenza virus in 2002.
References
Immune system | Nasal-associated lymphoid tissue | Biology | 891 |
20,069,953 | https://en.wikipedia.org/wiki/Formulations%20of%20special%20relativity | The theory of special relativity was initially developed in 1905 by Albert Einstein. However, other interpretations of special relativity have been developed, some on the basis of different foundational axioms. While some are mathematically equivalent to Einstein's theory, others aim to revise or extend it.
Einstein's formulation was based on two postulates, as detailed below. Some formulations modify these postulates or attempt to derive the second postulate by deduction. Others differ in their approach to the geometry of spacetime and the linear transformations between frames of reference.
Einstein's two postulates
As formulated by Albert Einstein in 1905, the theory of special relativity was based on two main postulates:
The principle of relativity: The form of a physical law is the same in any inertial frame (a frame of reference that is not accelerating in any direction).
The speed of light is constant: In all inertial frames, the speed of light c is the same whether the light is emitted from a source at rest or in motion. (Note that this does not apply in non-inertial frames; indeed, between accelerating frames the speed of light cannot be constant, although the postulate can still be applied in non-inertial frames if an observer is confined to making local measurements.)
Einstein developed the theory of special relativity based on these two postulates. This theory made many predictions which have been experimentally verified, including the relativity of simultaneity, length contraction, time dilation, the relativistic velocity addition formula, the relativistic Doppler effect, relativistic mass, a universal speed limit, mass–energy equivalence, the speed of causality and the Thomas precession.
Single-postulate approaches
Several physicists have derived a theory of special relativity from only the first postulate – the principle of relativity – without assuming the second postulate that the speed of light is constant. The term "single-postulate" is misleading because these formulations may rely on unsaid assumptions such as the cosmological principle, that is, the isotropy and homogeneity of space. As such, the term does not refer to the exact number of postulates, but is rather used to distinguish such approaches from the "two-postulate" formulation. Single postulate approaches generally deduce, rather than assume, that the speed of light is constant.
Without assuming the second postulate, the Lorentz transformations can be obtained. However, there is a free parameter k, which renders it incapable of making experimental predictions unless further assumptions are made. The case k = 0 is equivalent to Newtonian physics.
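A minimal sketch of this one-parameter family, in one common parametrization (the symbols k and γ_k below are illustrative conventions, not notation from a particular source):

```latex
x' = \gamma_k\,(x - vt), \qquad
t' = \gamma_k\,(t - kvx), \qquad
\gamma_k = \frac{1}{\sqrt{1 - kv^2}}
```

Setting k = 1/c² recovers the Lorentz transformations, while k = 0 gives γ_k = 1 and the Galilean transformations x' = x − vt, t' = t, i.e. Newtonian physics.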
Lorentz ether theory
Hendrik Lorentz and Henri Poincaré developed their version of special relativity in a series of papers from about 1900 to 1905. They used Maxwell's equations and the principle of relativity to deduce a theory that is mathematically equivalent to the theory later developed by Einstein.
Taiji relativity
Taiji relativity is a formulation of special relativity developed by Jong-Ping Hsu and Leonardo Hsu. The name of the theory, Taiji, is a Chinese word which refers to ultimate principles which predate the existence of the world. Hsu and Hsu claimed that measuring time in units of distance allowed them to develop a theory of relativity without using the second postulate in their derivation.
It is the principle of relativity, Hsu & Hsu say, that when applied to 4D spacetime implies the invariance of the 4D spacetime interval s² = w² − r², where w is time measured in units of distance (w = ct in special relativity) and r is the spatial distance. The difference between this and the spacetime interval s² = c²t² − r² in Minkowski space is that s² = w² − r² is invariant purely by the principle of relativity, whereas s² = c²t² − r² requires both postulates. The "principle of relativity" in spacetime is taken to mean invariance of laws under 4-dimensional transformations. They claim that there are versions of relativity which are consistent with experiment but have a definition of time where the "speed" of light is not constant. They develop one such version called common relativity which is more convenient for performing calculations for "relativistic many body problems" than using special relativity.
Several authors have made the case that Taiji relativity still assumes a further postulate – the cosmological principle that time and space look the same in all directions. Behara (2003) wrote that "the postulation on the speed of light in special relativity is an inevitable consequence of the relativity principle taken in conjunction with the idea of the homogeneity and isotropy of space and the homogeneity of time in all inertial frames".
Test theories of special relativity
Test theories of special relativity are flat spacetime theories which are used to test the predictions of special relativity. They differ from the two-postulate special relativity by differentiating between the one-way speed of light and the two-way speed of light. This results in different notions of time simultaneity. There is Robertson's test theory (1949) which predicts different experimental results from Einstein's special relativity, and there is the Mansouri–Sexl theory (1977) which is equivalent to Robertson's theory. There is also Edward's theory (1963) which cannot be called a test theory because it is physically equivalent to special relativity.
Geometric formulations
Minkowski spacetime
Minkowski space (or Minkowski spacetime) is a mathematical setting in which special relativity is conveniently formulated. Minkowski space is named for the German mathematician Hermann Minkowski, who around 1907 realized that the theory of special relativity (previously developed by Poincaré and Einstein) could be elegantly described using a four-dimensional spacetime, which combines the dimension of time with the three dimensions of space.
Mathematically, there are a number of ways in which the four-dimensions of Minkowski spacetime are commonly represented: as a four-vector with 4 real coordinates, as a four-vector with 3 real and one complex coordinate, or using tensors.
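The defining feature of Minkowski space is the invariant interval s² = (ct)² − x² − y² − z², on whose value all inertial observers agree. The snippet below is an illustrative numerical check, with arbitrary example coordinates and boost speed, that a Lorentz boost leaves the interval unchanged.

```python
import math

# Illustrative sketch: checking numerically that the Minkowski interval
# s^2 = (ct)^2 - x^2 - y^2 - z^2 is unchanged by a Lorentz boost along x.
# Event coordinates and boost speed are arbitrary example values.

c = 1.0                           # units where c = 1
t, x, y, z = 2.0, 1.0, 0.5, -0.3  # an example event
v = 0.6                           # boost speed along x
gamma = 1 / math.sqrt(1 - v**2 / c**2)

t2 = gamma * (t - v * x / c**2)   # boosted time
x2 = gamma * (x - v * t)          # boosted x; y and z are unchanged

s2_before = (c * t)**2 - x**2 - y**2 - z**2
s2_after  = (c * t2)**2 - x2**2 - y**2 - z**2
print(s2_before, s2_after)        # both 2.66: the interval is invariant
```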
Spacetime algebra
Spacetime algebra is a type of geometric algebra that is closely related to Minkowski space, and is equivalent to other formalisms of special relativity. It uses mathematical objects such as bivectors to replace tensors in traditional formalisms of Minkowski spacetime, leading to much simpler equations than in matrix mechanics or vector calculus.
de Sitter relativity
According to the works of Cacciatori, Gorini, Kamenshchik, Bacry and Lévy-Leblond and the references therein, if Minkowski's ideas are taken to their logical conclusion, then not only are boosts non-commutative, but translations are also non-commutative. This means that the symmetry group of spacetime is a de Sitter group rather than the Poincaré group. This results in spacetime being slightly curved even in the absence of matter or energy. This residual curvature is caused by a cosmological constant to be determined by observation. Due to the small magnitude of the constant, special relativity with the Poincaré group is more than accurate enough for all practical purposes, although near the Big Bang and during inflation de Sitter relativity may be more useful because the cosmological constant was larger back then. Note that this is not the same thing as solving Einstein's field equations for general relativity to get a de Sitter universe; rather, de Sitter relativity is about obtaining a de Sitter group for special relativity, which neglects gravity.
Euclidean relativity
Euclidean relativity uses a Euclidean (++++) metric in four-dimensional Euclidean space as opposed to the traditional Minkowski (+---) or (-+++) metric in four-dimensional space-time. The Euclidean metric is derived from the Minkowski metric (c dτ)² = (c dt)² − dx² − dy² − dz² by rewriting it into the equivalent (c dt)² = (c dτ)² + dx² + dy² + dz². The roles of time t and proper time τ have switched, so that proper time τ takes the role of the coordinate for the 4th spatial dimension. A universal velocity c for all objects moving through four-dimensional space appears from the regular time derivative: (c dτ/dt)² + (dx/dt)² + (dy/dt)² + (dz/dt)² = c². The approach differs from the so-called Wick rotation or complex Euclidean relativity. In Wick rotation, time t is replaced by it, which also leads to a positive definite metric, but it maintains proper time τ as the Lorentz-invariant value, whereas in Euclidean relativity τ becomes a coordinate. Because dτ = 0 for photons, photons travel at the speed of light in the subspace {x, y, z}, while baryonic matter that is at rest in {x, y, z} travels normal to photons along τ; a paradox thus arises as to how photons can be propagated in a space-time. The possible existence of parallel space-times or parallel worlds shifted and co-moving along τ is the approach of Giorgio Fontana. Euclidean geometry is consistent with Minkowski's classical theory of relativity. When the geometric projection of 4D properties to 3D space is made, the hyperbolic Minkowski geometry transforms into a rotation in 4D circular geometry.
Very special relativity
Ignoring gravity, experimental bounds seem to suggest that special relativity with its Lorentz symmetry and Poincaré symmetry describes spacetime. Cohen and Glashow have demonstrated that a small subgroup of the Lorentz group is sufficient to explain all the current bounds.
The minimal subgroup in question can be described as follows: The stabilizer of a null vector is the special Euclidean group SE(2), which contains T(2) as the subgroup of parabolic transformations. This T(2), when extended to include either parity or time reversal (i.e. subgroups of the orthochronous and time-reversal respectively), is sufficient to give us all the standard predictions. Their new symmetry is called Very Special Relativity (VSR).
Doubly special relativity
Doubly special relativity (DSR) is a modified theory of special relativity in which there is not only an observer-independent maximum velocity (the speed of light), but an observer-independent minimum length (the Planck length).
The motivation to these proposals is mainly theoretical, based on the following observation: the Planck length is expected to play a fundamental role in a theory of quantum gravity, setting the scale at which quantum gravity effects cannot be neglected and new phenomena are observed. If special relativity is to hold up exactly to this scale, different observers would observe quantum gravity effects at different scales, due to the Lorentz–FitzGerald contraction, in contradiction to the principle that all inertial observers should be able to describe phenomena by the same physical laws.
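The Planck length itself is fixed by three constants of nature, so its value is easy to reproduce; the sketch below simply evaluates the standard definition.

```python
import math

# Sketch: the Planck length, the observer-independent minimum length
# scale that doubly special relativity postulates alongside c.

hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {l_planck:.2e} m")  # ~1.62e-35 m
```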
A drawback of the usual doubly special relativity models is that they are valid only at the energy scales where ordinary special relativity is supposed to break down, giving rise to a patchwork relativity. On the other hand, de Sitter relativity is found to be invariant under a simultaneous re-scaling of mass, energy and momentum, and is consequently valid at all energy scales.
See also
Alternative derivations of special relativity
Derivations of the Lorentz transformations
History of special relativity
Notes
References
Special relativity | Formulations of special relativity | Physics | 2,213 |
42,599,874 | https://en.wikipedia.org/wiki/Elevation%20%28emotion%29 | Elevation is an emotion elicited by witnessing actual or imagined virtuous acts of remarkable moral goodness. It is experienced as a distinct feeling of warmth and expansion that is accompanied by appreciation and affection for the individual whose exceptional conduct is being observed. Elevation motivates those who experience it to open up to, affiliate with, and assist others. Elevation makes an individual feel lifted up and optimistic about humanity.
Elevation can also be a deliberate act, characteristic habit, or virtue that is characterized by disdaining the trivial or undignified in favor of more exalted or noble themes. Thoreau recommended, for example, that a person "read not the Times [but rather] read the Eternities" so that he "elevates his aim."
Background/overview
Elevation is defined as an emotional response to moral beauty. It is related to awe and wonder. It encompasses both the physical feelings and motivational effects that an individual experiences after witnessing acts of compassion or virtue.
Psychologist Jonathan Haidt also posits that elevation is the opposite of social disgust, which is the reaction to reading about or witnessing "any atrocious deed." Haidt insists that elevation is worth studying because we cannot fully understand human morality until we can explain how and why humans are so powerfully affected by the sight of strangers helping one another.
The goal of positive psychology is to bring about a balanced reappraisal of human nature and human potential. Positive psychologists are interested in understanding the motivations behind prosocial behavior in order to learn how to encourage individuals to help and care for each other. Thus, the field attempts to discern what causes individuals to act altruistically. While there is a great deal of research about individual acts of altruism, the amount of research done about a person's reaction to the altruism of others is surprisingly low. It is an oversight that Jonathan Haidt and others like him have striven to correct.
Major theories
Haidt's third dimension of social cognition
Haidt asserts that elevation elicits warm, pleasurable sensations in the chest, and it also motivates individuals to act more virtuously themselves. In his explanation of elevation, Haidt describes the three dimensions of social cognition:
The horizontal dimension of solidarity: People vary in distance to the self with regard to affection and mutual obligation. For example, across cultures individuals act differently toward their friends than toward strangers.
The vertical dimension of hierarchy, status, or power: People moderate their social exchanges by the relative status of the people with whom they are interacting.
The vertical dimension of "elevation versus degradation" or "purity versus pollution": People vary in their state and trait levels of spiritual purity. When people feel disgust toward certain behaviors, this emotion informs them that someone else is moving down on this third dimension. Haidt defines elevation as the opposite of disgust, because witnessing others rise on the third dimension causes the viewer to also feel higher on this dimension.
Fredrickson's broaden and build theory
Elevation exemplifies Barbara Fredrickson's broaden and build theory of positive emotions, which asserts that positive emotions expand an individual's scope of attention and cognition in the moment while also building resources for the future. Elevation makes an individual feel admiration for the altruist and also more motivated to help others. Elevation has the potential to spread by creating an upward helping spiral in which individuals view others doing good deeds and then feel an increased urge to help others.
Elevation as an other-praising emotion
Sara Algoe and Jonathan Haidt claim that elevation is in the "other-praising" family of emotions along with gratitude and admiration. These three emotions are positive reactions to witnessing the actions of exemplary others. The outcome of all three "other-praising" emotions is a focus on other people.
Algoe and Haidt provided empirical evidence to support this theory. They conducted a study in which participants were prompted to remember a time when they had experienced an event that would elicit elevation, gratitude, admiration, or joy. The participants then completed a questionnaire. Their results suggest that the "other-praising" emotions are different from happiness and distinct from each other due to differing motivational impulses. Elevation motivates individuals to be open and compassionate towards other people. Compared to joy or amusement, people experiencing elevation were more likely to express a desire to perform kind or helpful actions for others, become better people, and imitate the virtuous exemplar.
Elevation as a self-transcendent positive emotion
Michelle Shiota and others assert that elevation is a self-transcendent positive emotion that serves to direct attention away from the self towards appreciating an exceptional human action or remarkable aspect of the natural world. In doing so, elevation encourages individuals to transcend daily routines, limits, and perceived boundaries.
Shiota et al. describe how elevation functions as a moral emotion. It directs a person's judgments regarding others' morality and influences the person's own ensuing moral decisions in ways that may circumvent or precede logical moral reasoning. Elevation may have the adaptive function of motivating people to help others while also assisting those who experience the emotion. For example, elevation may help individuals select caring relationship partners by eliciting affection for people who exhibit altruism or compassion. Elevation may also help foster norms of helping in groups or communities. When one member of a community witnesses another helping, they are likely to feel elevated and to react immediately, or shortly thereafter, by helping someone else in the group. This is due to the mutual benefits of altruism.
Major empirical findings
Difference from happiness
Researchers have shown that the patterns of physical sensations and motivations generated by elevation are different from those caused by happiness. They induced elevation in a laboratory setting by showing undergraduates a ten-minute video clip documenting the life of Mother Teresa. In the control conditions, students were either shown a documentary that was emotionally neutral or a clip from America's Funniest Home Videos. Those in the elevation condition were more likely to report physical feelings of warmth or tingling in their chests. They were also more likely to express a desire to help or associate with others and to cultivate themselves to become better people. They found that happiness caused people to engage in more self-focused or internal pursuits, while elevation appeared to turn participants' attention outward toward other people.
Increased oxytocin in nursing mothers
Jennifer Silvers and Jonathan Haidt found that elevation may increase the amount of oxytocin circulating in the body by promoting the release of the hormone. In their study, nursing mothers and their infants watched video clips that either evoked elevation or amusement. Mothers who watched the elevation-inducing clip were more likely to nurse, leak milk, or cuddle their babies. These actions are associated with oxytocin and thus suggest a possible physiological mechanism underlying feelings of elevation.
Increased prosocial behavior
Results from two studies conducted by Simone Schnall and others suggest that viewing an altruistic act increases a person's motivation to act prosocially.
In the first study, participants either viewed a clip of professional musicians expressing gratitude to their mentors, which was designed to elicit elevation, or a neutral video. People who watched the elevation-evoking video were more likely to agree to help with a later, uncompensated study than those in a neutral state.
In the second experiment, participants were assigned to watch either an elevation film clip, a control film clip, or a clip from a British comedy program. They were then asked if they would help the researcher complete a tedious questionnaire filled with math problems for as long as they agreed to keep going. Participants who reported feeling elevated helped the experimenter with the tedious task for almost twice as long as the participants who were amused or were in the control condition. Also, the length of time that the participants assisted was predicted by self-reported characteristics of subjective elevation, such as desiring to help others and feeling hopeful about humanity; however, helping time was individually variable and not predicted by positive affect in general.
Keith Cox studied undergraduates on a spring break service trip and discovered that those who reported more extreme and repeated experiences of elevation during the trip did more trip-specific volunteer activities related to their outing when they arrived home. These findings imply that the experience of elevation moved students to volunteer in the area in which they felt elevation.
Improving functioning in clinically depressed and anxious individuals
Research shows that elevation can contribute to emotional and social functioning in clinically depressed and anxious individuals. For ten days, participants completed brief daily surveys to assess elevation, feelings of competence, interpersonal functioning, symptoms, and compassionate goals. The findings indicated that on days when clinically distressed individuals experienced high elevation relative to their normal levels, they reported a greater desire to help others and to be close to others. They also reported less interpersonal conflict and fewer symptoms of distress. Elevation thus appears to motivate these individuals toward others while also making them feel better.
Applications
In the workplace
In a 2010 study, Michelangelo Vianello, Elisa Maria Galliani, and Jonathan Haidt found that an employer's ability to inspire elevation in employees strengthened positive attitudes and enhanced virtuous organizational behavior. It appears that employees pay a great deal of attention to the moral behavior of their superiors and respond positively to the display of fairness and moral integrity. Such displays inspire moral elevation and result in intense positive emotions. According to this study, employers could benefit from the positive effects associated with elevation and should actively strive to inspire it in their subordinates.
Promoting altruistic behavior
A study done at the University of Cambridge shows that elevation leads to an increase in altruism. In the study, individuals experiencing elevation were more likely to volunteer to participate in an unpaid study. Those experiencing elevation also spent twice as long helping an experimenter to perform tedious tasks as those who were experiencing mirth or a neutral emotional state. The researchers concluded that witnessing another person's altruistic behavior elicits elevation, which leads to tangible increases in altruism. According to these results, the best method of encouraging altruistic behavior may be simply to lead by example.
Increasing spirituality
Researchers found that elevation and other self-transcendent positive emotions cause people to view others and the world as more benevolent. This perception leads to increased spirituality, because seeing a person or action that is greater than oneself results in greater faith in the goodness of people and the world. It may also cause those who experience the emotion to view life as more meaningful. The researchers observed the greatest effect of elevation on spirituality in people who were less religious or non-religious. Because spirituality has been connected to prosocial behavior, this link could indicate other benefits of elevation. Holding a more positive view of the world could lead to increased helping behavior, which could encourage many positive interactions. This increase in positive experiences could lead to improved well-being and better health outcomes in individuals; instead of getting caught up in daily stress and negativity, they will be better able to identify and cultivate the positive aspects of their lives through the actions toward others that elevation motivates.
Elevation emotion in other species
There has been some debate in the scientific community over whether elevation is a uniquely human trait. Primatologist Jane Goodall argues that other animals are capable of experiencing awe, elevation, and wonder. Goodall is famous for conducting the longest uninterrupted study of a group of animals. She lived among wild chimpanzees in Tanzania, observing them for 45 years. Several times, she witnessed signs of heightened arousal in chimpanzees in the presence of spectacular waterfalls or rainstorms. Each time, the chimp would perform a magnificent display, swaying rhythmically from one foot to the other, stamping in the water, and throwing rocks. Goodall postulates that such displays are the precursors of religious ritual and are inspired by feelings akin to elevation or awe.
Further research directions
Most research concerning elevation has emphasized its impact on social interactions and behaviors. However, researchers are investigating the precise physiological mechanisms responsible for the warm, open sensation in the chest elicited by elevation. Video clips designed to evoke elevation have been observed to lead to a decrease in vagal parasympathetic influence on the heart. However, further investigation is necessary in order to determine whether elevation has a unique physiological profile.
Researchers are exploring the idea that profound experiences of elevation can be peak experiences that can alter people's identities and spiritual lives. While moral development is often conceptualized as a lifelong process, Haidt offers an "inspire and rewire" hypothesis which proposes that certain momentary experiences have the potential to induce temporary or even lasting moral changes by exposing individuals to these transformative experiences. Haidt suggests that instances of profound elevation can function as a "mental reset button," replacing cynical or pessimistic emotions with feelings of hope, love, and moral inspiration.
See also
References
Emotion
Moral psychology
Positive psychology | Elevation (emotion) | Biology | 2,626 |
25,678,280 | https://en.wikipedia.org/wiki/Kepler-4b | Kepler-4b, initially known as KOI 7.01, is an extrasolar planet first detected as a transit by the Kepler spacecraft. Its radius and mass are similar to that of Neptune; however, due to its proximity to its host star, it is substantially hotter than any planet in the Solar System. The planet's discovery was announced on January 4, 2010, in Washington, D.C., along with four other planets that were initially detected by the Kepler spacecraft and subsequently confirmed by telescopes at the W.M. Keck Observatory.
Nomenclature and history
Kepler-4b was named because it was the first planet discovered in the orbit of its star, Kepler-4. The star was, in turn, named for the Kepler Mission, a NASA satellite whose purpose is to discover Earth-like planets in a section of the sky between constellations Cygnus and Lyra using the transit method. Using this method, Kepler notes small and steady decreases in a star's brightness that are measured as a planet crosses in front of it. Initially, Kepler-4b was detected as a transit event by the Kepler telescope and considered a Kepler Object of Interest with the designation KOI 7.01.
Subsequent radial velocity measurements by the High Resolution Echelle Spectrometer on the telescopes of W.M. Keck Observatory confirmed the planetary nature of the transit event and established a mass estimate for the planet. The planet's discovery was announced on January 4, 2010, along with four other planets detected by Kepler: Kepler-5b, 6b, 7b and 8b at the 215th meeting of the American Astronomical Society in Washington, D.C.
Host star
Kepler-4 is a star very similar to the Sun, located about 1610 light-years away from Earth in the constellation of Draco.
Characteristics
Kepler-4b orbits its host star every 3.213 days at a distance of 0.046 AU. This places it almost 10 times closer to its star than Mercury is to the Sun. Consequently, Kepler-4b is thought to be extremely hot, with an equilibrium temperature greater than 1700 kelvins (1426 °C; 2600 °F). The planet is estimated to be about 25 times more massive than Earth, with a radius about 4 times greater than Earth's. This makes it similar to Neptune in terms of size and mass, but with a temperature that no planet in the Solar System approaches (Venus, the hottest planet, is only 735 kelvins). Kepler-4b's eccentricity was assumed to be 0; however, an independent reanalysis of the discovery data found a value of 0.25 ± 0.12, and a later reanalysis of the light curve discovered a secondary eclipse with a depth of 7.47 ± 1.82 ppm at a phase of about 0.7.
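Two of the quantities above can be illustrated with the basic transit-method relations. In the sketch below, the stellar radius and temperature (1.5 R☉ and 5860 K) are assumed illustrative values for the Sun-like host, not figures quoted in this article; the planet radius and orbital distance come from the text.

```python
import math

# Illustrative sketch of two transit-method quantities for a
# Kepler-4b-like planet. The host-star radius and temperature are
# assumed values for a Sun-like star, not figures from this article.

R_sun, R_earth, AU = 6.957e8, 6.371e6, 1.496e11   # metres

R_star = 1.5 * R_sun   # assumed host-star radius
T_star = 5860.0        # assumed host-star effective temperature, K
R_p    = 4 * R_earth   # planet radius: 4 Earth radii (from the text)
a      = 0.046 * AU    # orbital distance (from the text)

depth = (R_p / R_star) ** 2                   # fractional dip during transit
T_eq = T_star * math.sqrt(R_star / (2 * a))   # zero-albedo equilibrium temp.

print(f"transit depth ~ {depth:.2%}")   # ~0.06% drop in brightness
print(f"T_eq ~ {T_eq:.0f} K")           # ~1600 K, near the quoted ~1700 K
```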
See also
List of exoplanets discovered by the Kepler space telescope
References
External links
NASA.gov
Exoplanets with Kepler designations
Exoplanets discovered in 2010
Kepler-04b
Giant planets
Transiting exoplanets
Draco (constellation)
4b | Kepler-4b | Astronomy | 634 |
1,841,226 | https://en.wikipedia.org/wiki/Romer%27s%20gap | Romer's gap is an apparent gap in the Paleozoic tetrapod fossil record used in the study of evolutionary biology, which represent periods from which excavators have not yet found relevant fossils. It is named after American paleontologist Alfred Romer, who first recognised it in 1956. Recent discoveries in Scotland are beginning to close this gap in palaeontological knowledge.
Age
Romer's gap runs from approximately 360 to 345 million years ago, corresponding to the first 15 million years of the Carboniferous, the early Mississippian (starting with the Tournaisian and moving into the Visean). The gap forms a discontinuity between the primitive forests and high diversity of fishes in the end Devonian and more modern aquatic and terrestrial assemblages of the early Carboniferous.
Mechanism behind the gap
There has been long debate as to why there are so few fossils from this time period. Some have suggested the problem lay with fossilization itself, proposing that there may have been differences in the geochemistry of the time that did not favour fossil formation. Also, excavators simply may not have dug in the right places. The existence of a true low point in vertebrate diversity has been supported by independent lines of evidence; however, recent finds at five new locations in Scotland have yielded multiple fossils of early tetrapods and amphibians. They have also allowed the most accurate logging of the geology of this period. This new evidence suggests that, at least locally, there was no gap in diversity or change in oxygen geochemistry.
While initial arthropod terrestriality was well under way before the gap, and some digited tetrapods might have come on land, there are remarkably few terrestrial or aquatic fossils that date from the gap itself. Recent work on Paleozoic geochemistry has provided evidence for the biological reality of Romer's gap in both terrestrial vertebrates and arthropods, and has correlated it with a period of unusually low atmospheric oxygen concentration, which was determined from the idiosyncratic geochemistry of rocks formed during Romer's gap. The new sedimentary logging in the Ballagan Formation in Scotland challenges this, suggesting oxygen was stable throughout Romer's Gap.
Aquatic vertebrates, which include most tetrapods during the Carboniferous, were recovering from the Hangenberg event, a major extinction event that preceded Romer's gap, one on par with that which killed the dinosaurs. In this end-Devonian extinction, most marine and freshwater groups became extinct or were reduced to a few lineages, although the precise mechanism of the extinction is unclear. Before the event, oceans and lakes were dominated by lobe-finned fishes and armored fishes called placoderms. After the gap, modern ray finned fish, as well as sharks and their relatives were the dominant forms. The period also saw the demise of the Ichthyostegalia, the early fish-like amphibians with more than five digits.
The low diversity of marine fishes, particularly shell-crushing predators (durophages), at the beginning of Romer's gap is supported by the sudden abundance of hard-shelled crinoid echinoderms during the same period. The Tournaisian has even been called the "Age of Crinoids". Once the number of shell-crushing ray-finned fishes and sharks increased later in the Carboniferous, coincident with the end of Romer's gap, the diversity of crinoids with Devonian-type armor plummeted, following the pattern of a classic predator-prey (Lotka-Volterra) cycle. There is increasing evidence that lungfish and stem tetrapods and amphibians recovered quickly and diversified in the rapidly changing environment of the end-Devonian and Romer's Gap.
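The predator-prey dynamic invoked here is the classic Lotka-Volterra cycle. The sketch below integrates the standard equations with arbitrary illustrative coefficients, not values fitted to the fossil record, simply to show the characteristic boom-and-bust oscillation.

```python
# Illustrative sketch: a Lotka-Volterra predator-prey cycle of the kind
# invoked for crinoids (prey) and shell-crushing fishes (predators).
# All coefficients and initial values are arbitrary example numbers.

alpha, beta  = 1.0, 0.1    # prey growth rate, predation rate
delta, gamma = 0.075, 1.5  # predator growth per prey eaten, predator death rate

x, y = 10.0, 5.0           # initial prey and predator abundances
dt = 0.01                  # time step for simple Euler integration

for step in range(5001):
    if step % 1000 == 0:
        print(f"t={step * dt:5.1f}  prey={x:7.2f}  predators={y:6.2f}")
    dx = (alpha * x - beta * x * y) * dt    # prey grow, and get eaten
    dy = (delta * x * y - gamma * y) * dt   # predators eat, and die off
    x, y = x + dx, y + dy
```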
Gap fauna
The gap in the tetrapod record has been progressively closed with the discoveries of such early Carboniferous tetrapods as Pederpes and Crassigyrinus. There are a few sites where vertebrate fossils have been found to help fill in the gap, such as the East Kirkton Quarry, in Bathgate, Scotland, a long-known fossil site that was revisited by Stanley P. Wood in 1984 and has since been revealing a number of early tetrapods in the mid Carboniferous; "literally dozens of tetrapods came rolling out: Balanerpeton (a temnospondyl), Silvanerpeton and Eldeceeon (basal anthracosaurs), all in multiple copies, and one spectacular proto-amniote, Westlothiana", Paleos Project reports. In 2016, five new species were found across the Ballagan Formation: Perittodus apsconditus, Koilops herma, Ossirarus kierani, Diploradus austiumensis, Aytonerpeton microps. These stem tetrapods and amphibians provide evidence for an early split between the two groups, and rapid diversification in the Early Carboniferous.
However, tetrapod material in the earliest stage of the Carboniferous, the Tournaisian, remains scarce relative to fishes in the same habitats, which can appear in large death assemblages, and is unknown until late in the stage. Fish faunas from Tournaisian sites around the world are very alike in composition, containing common and ecologically similar species of ray-finned fishes, rhizodont lobe-finned fishes, acanthodians, sharks, and holocephalans.
Recent analysis of the Blue Beach deposits in Nova Scotia suggests that "the early tetrapod fauna is not easily divisible into Devonian and Carboniferous faunas, suggesting that some tetrapods passed through the end Devonian extinction event unaffected."
Tournaisian-age locations
For many years after Romer's gap was first recognised, only two sites yielding Tournaisian-age tetrapod fossils were known; one is in East Lothian, Scotland, and another in Blue Beach, Nova Scotia, where in 1841, Sir William Logan, the first Director of the Geological Survey of Canada, found footprints from a tetrapod. Blue Beach maintains a fossil museum that displays hundreds of Tournaisian fossils, which continue to be found as the cliff erodes to reveal new fossils.
In 2012, 350-million-year-old tetrapod remains from four new Tournaisian sites in Scotland were announced, including those from a primitive amphibian nicknamed "Ribbo". In 2016, five more species were unearthed from these localities, proving Scotland to be one of the most important sites in the world for understanding this time period.
These localities are the coast of Burnmouth, the banks of the Whiteadder Water near Chirnside, the River Tweed near Coldstream, and the rocks near Tantallon Castle alongside the Firth of Forth. Fossils of both aquatic and terrestrial tetrapods are known from these localities, providing an important record of the transition between life in water and life on land and filling some of the lacunae in Romer's gap. These new localities may represent a larger fauna, as all lie within a short distance of each other and share many fishes with the nearby and contemporary Foulden fish bed locality (which has not produced tetrapods thus far). As with East Kirkton Quarry, tetrapods at these sites were discovered through the long-term efforts of Stan Wood and colleagues.
In April 2013 scientists associated with the British Geological Survey (BGS) and the National Museums of Scotland announced the TW:eed project (Tetrapod World: early evolution and diversification). This project includes collaborators from across the UK, and aims to gather knowledge on the end-Devonian Early-Carboniferous world. One aim has been to drill a continuous borehole at an undisclosed location near Berwick-upon-Tweed. This has produced a complete, centimetre-scale sampling of Tournaisian sediment, without discontinuities, providing a timeline on which fossil discoveries can be accurately placed. In the most recent paper to be produced by the TW:eed team, they announced some initial results from the core, including the apparent lack of oxygen excursion across Romer's Gap. This suggests that previous theories about low oxygen being the cause of Romer's Gap will need to be re-evaluated.
See also
Arthropod gap
Sauropod hiatus
References
Notes
Paleontological concepts and hypotheses
Evolution of tetrapods
Carboniferous Scotland
Fossils of Scotland
Gaps in the fossil record | Romer's gap | Biology | 1,770 |
23,887,706 | https://en.wikipedia.org/wiki/Alliance%20%28taxonomy%29 | An alliance is an informal grouping used in biological taxonomy. The term "alliance" is not a taxonomic rank defined in any of the nomenclature codes. It is used for any group of species, genera or tribes to which authors wish to refer, that have at some time provisionally been considered to be closely related.
The term is often used for a group that authors are studying in further detail in order to refine the complex taxonomy. For example, a molecular phylogenetics study of the Aerides–Vanda Alliance (Orchidaceae: Epidendroideae) confirmed that the group is monophyletic, and clarified which species belong in each of the 14 genera. In other orchid groups, the various alliances that have been defined do not correspond well to clades.
Historically, some 19th century botanical authors used alliance to denote groups that would now be considered orders. This usage is now obsolete, and the ICN (Article 17.2) specifies that such taxa are treated as orders.
See also
Species aggregate
Association (ecology)
Bioindicator
References
Taxa by rank
Botanical nomenclature
Plant taxonomy
Zoological nomenclature | Alliance (taxonomy) | Biology | 224 |
2,922,444 | https://en.wikipedia.org/wiki/14%20Canis%20Minoris | 14 Canis Minoris, also known as HD 65345, is a single star in the equatorial constellation of Canis Minor. It is faintly visible to the naked eye with an apparent visual magnitude of +5.30. The distance to this star, as determined from an annual parallax shift of , is approximately 242 light years. 14 CMI has a relatively large proper motion, traversing the celestial sphere at the rate of . It is moving further from the Sun with heliocentric radial velocity of +42.6 km/s.
This is an evolved G-type giant star with a stellar classification of G8 IIIb. At an age of around 550 million years, it is a red clump giant, which means it has already undergone the helium flash and is generating energy through helium fusion at its core. The star has an estimated 2.5 times the mass of the Sun and has expanded to 8.7 times the Sun's radius. It is radiating roughly 48 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,070 K.
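The quoted distance and luminosity can be recovered from the other figures with two standard relations: d(pc) = 1/parallax(arcsec), and the Stefan-Boltzmann scaling L/L☉ = (R/R☉)²(T/T☉)⁴. A quick consistency-check sketch, using a nominal solar effective temperature of 5772 K:

```python
# Sketch: consistency checks for 14 Canis Minoris using the figures in
# the text and a nominal solar effective temperature of 5772 K.

parallax_mas = 13.5                  # annual parallax, milliarcseconds
d_pc = 1000.0 / parallax_mas         # distance in parsecs: d = 1/p(arcsec)
d_ly = d_pc * 3.2616                 # parsecs to light years
print(f"distance ~ {d_ly:.0f} ly")   # ~242 ly, matching the text

# Stefan-Boltzmann scaling: L/L_sun = (R/R_sun)^2 * (T/T_sun)^4
R, T, T_sun = 8.7, 5070.0, 5772.0
L = R**2 * (T / T_sun)**4
print(f"L ~ {L:.0f} L_sun")          # ~45, close to the quoted ~48
```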
References
G-type giants
Horizontal-branch stars
Canis Minor
Durchmusterung objects
Canis Minoris, 14
065345
038962
3110 | 14 Canis Minoris | Astronomy | 261 |
28,244,920 | https://en.wikipedia.org/wiki/Human%20presence%20in%20space | Human presence in space (also anthropogenic presence in space or humanity in space) is the direct and mediated presence or telepresence of humans in outer space, and in an extended sense across space including astronomical bodies. Human presence in space, particularly through mediation, can take many physical forms from space debris, uncrewed spacecraft, artificial satellites, space observatories, crewed spacecraft, art in space, to human outposts in outer space such as space stations.
While human presence in space, particularly its continuation and permanence can be a goal in itself, human presence can have a range of purposes and modes from space exploration, commercial use of space to extraterrestrial settlement or even space colonization and militarisation of space. Human presence in space is realized and sustained through the advancement and application of space sciences, particularly astronautics in the form of spaceflight and space infrastructure.
Humans have achieved some mediated presence throughout the Solar System, but the most extensive presence has been in orbit around Earth. Humans first reached outer space in a mediated way in 1944 (MW 18014) and have sustained a mediated presence since 1958 (Vanguard 1); they first reached space directly on 12 April 1961 (Yuri Gagarin) and have been present continuously since the year 2000 with the crewed International Space Station (ISS), or, counting its predecessor Mir, since the late 1980s with a few interruptions. The increasing and extensive human presence in orbital space around Earth, besides its benefits, has also produced a threat to it by carrying with it space debris, potentially cascading into the so-called Kessler syndrome. This has raised the need for regulation and mitigation of such debris to secure sustainable access to outer space.
Securing the access to space and human presence in space has been pursued and allowed by the establishment of space law and space industry, creating a space infrastructure. But sustainability has remained a challenging goal, with the United Nations seeing the need to advance long-term sustainability of outer space activities in space science and application, and the United States having it as a crucial goal of its contemporary space policy and space program.
Terminology
Since outer space is the dominant expanse of space, "space" is often used synonymously with outer space, and human presence in space refers to human presence across all of space, including the astronomical bodies which outer space surrounds.
The United States has been using the term "human presence" to identify one of the long-term goals of its space program and its international cooperation. While it traditionally means and is used to name direct human presence, it is also used for mediated presence. Differentiating human presence in space between direct and mediated human presence, meaning human or non-human presence, such as with crewed or uncrewed spacecraft, is rooted in a history of how human presence is to be understood (see dedicated chapter).
The term human presence in space, particularly direct presence, is sometimes replaced with "boots on the ground" or equated with space colonization. But such terms, particularly colonization and even settlement, have been avoided and questioned as descriptions of human presence in space, since they employ very particular concepts of appropriation, with historic baggage, addressing the forms of human presence in a particular and not a general way.
Alternatively some have used the term "humanization of space", which differs in focusing on the general development, impact and structure of human presence in space.
On an international level the United Nations uses the phrase of "outer space activity" for the activity of its member states in space.
History
Human presence in outer space began with the first launches of artificial objects in the mid 20th century, and has increased to the point where Earth is orbited by a vast number of artificial objects and the far reaches of the Solar System have been visited and explored by a range of space probes. Human presence throughout the Solar System is continued by different contemporary and future missions, most of them mediating human presence through robotic spaceflight.
First realized as a project of the Soviet Union and pursued in competition by the United States, human presence in space is now an increasingly international and commercial field.
Representation and participation
Participation in and representation of humanity in space have been issues of human access to and presence in space ever since the beginning of spaceflight. Different space agencies, space programs and interest groups such as the International Astronomical Union have been formed, supporting or producing humanity's, or a particular, human presence in space. Representation has been shaped by the inclusiveness, scope and varying capabilities of these organizations and programs.
Some rights of non-spacefaring countries to partake in spaceflight have been secured through international space law, which declares space the "province of all mankind" and understands spaceflight as its resource, though the sharing of space for all humanity is still criticized as imperialist and lacking, particularly regarding the regulation of private spaceflight.
In addition to international inclusion, the inclusion of women, people of colour and people with disabilities has also been lacking. To make spaceflight more inclusive, organizations such as the JustSpace Alliance and the IAU-featured Inclusive Astronomy have been formed in recent years.
Law and governance
Space activity is legally based on the Outer Space Treaty, the main international treaty, though there are other international agreements such as the significantly less ratified Moon Treaty.
The Outer Space Treaty established the basic framework for space activity in its first article:
"The exploration and use of outer space, including the Moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind."
It continues in article two by stating:
"Outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means."
The development of international space law has revolved much around outer space being defined as common heritage of mankind. The Magna Carta of Space presented by William A. Hyman in 1966 framed outer space explicitly not as terra nullius but as res communis, which subsequently influenced the work of the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS).
The United Nations Office for Outer Space Affairs and the International Telecommunication Union are international organizations central for facilitating space regulation, such as space traffic management.
Forms
Signals and radiation
Humans have produced a range of radiation that has reached space, unintentionally as well as intentionally, since well before any direct human presence in space.
Human-made electromagnetic radiation, such as light, has by now reached stars as many light-years away as the age of that radiation in years.
Beginning in the 20th century, humans have been sending significant radiation into space. Nuclear explosions, especially high-altitude ones, have at times introduced strong and broad human-made radiation into space, starting in 1958, just a year after the first satellite Sputnik was launched. Such explosions produce electromagnetic pulses and orbital radiation belts, adding to their destructive potential on the ground and in orbit.
While Earth's and humanity's radiation profile is the main material for space-based remote Earth observation, radiation from human activity on Earth and in space has also been an obstacle to human activities, such as spiritual life and astronomy, through light pollution and radio spectrum pollution from Earth and space. In the case of radio astronomy, radio quiet zones have been kept and sought out, with the far side of the Moon, facing away from human-made electromagnetic interference, being the most pristine.
Space junk and human impact
Space junk, as a product and form of human presence in space, has existed ever since the first orbital spaceflights and comes mostly in the form of space debris in outer space. Space debris possibly includes the first human objects present in space beyond Earth, which reached escape velocity after being purposefully ejected from an exploded Aerobee rocket in 1957. Most space debris is in orbit around Earth, where it can stay for years to centuries at altitudes from hundreds to thousands of kilometers before falling to Earth. Space debris is a hazard, since it can hit and damage spacecraft. As debris has reached considerable amounts around Earth, policies have been put into place to prevent space debris and its hazards, such as international regulation to prevent nuclear hazards in Earth's orbit and the Registration Convention as part of space traffic management.
But space junk can also come as a result of human activity on astronomical bodies, such as the remains of space missions, like the many artificial objects left behind on the Moon and on other bodies.
Robotic
Human presence in space has been strongly based on the many robotic spacecraft, particularly the many artificial satellites in orbit around Earth.
Many firsts of human presence in space have been achieved by robotic missions. The first artificial object to reach space, above the 100 km altitude Kármán line, and therefore to perform the first sub-orbital flight, was MW 18014 in 1944, but the first sustained presence in space was established by the orbital flight of Sputnik in 1957. It was followed by a rich number of robotic space probes, achieving human presence and exploration throughout the Solar System for the first time.
Human presence at the Moon was established by the Luna programme starting in 1959, with a first flyby and heliocentric orbit (Luna 1), the first arrival of an artificial object on the surface with an impactor (Luna 2), and the first successful flight to the far side of the Moon (Luna 3). In 1966 the Moon was visited for the first time by a lander (Luna 9) as well as an orbiter (Luna 10), and in 1970 a rover (Lunokhod 1) landed on an extraterrestrial body for the first time.
Interplanetary presence was established at Venus by the Venera program, with a flyby in 1961 (Venera 1) and a crash in 1966 (Venera 3).
Presence in the outer Solar System was achieved by Pioneer 10, launched in 1972, and continuous presence in interstellar space by Voyager 1, which crossed the heliopause in 2012.
Launched in 1958, Vanguard 1 is the fourth artificial satellite and the oldest spacecraft still in space and in orbit around Earth, though it is inactive.
Presence of non-human life from Earth
Since the very beginning of human outer space activities in 1944, and possibly before that, life has been present in space, first with microscopic life as a space contaminant and after 1960 as a space research subject. Prior to crewed spaceflight, non-human animals were subjects of space research, specifically bioastronautics and astrobiology, being exposed to ever higher test flights. The first animals and plant seeds in space above the 100 km Kármán line, preceding any human, were corn seeds and fruit flies, launched for the first time on 9 July 1946, with the first fruit flies launched and returned alive in 1947. In 1949 Albert II became the first mammal and first primate to reach the 100 km Kármán line, and in 1957 the dog Laika became the first animal in orbit, the two also becoming the first fatalities of spaceflight and in space, respectively. In 1968, on Zond 5, tortoises, insects and plants became the first animals and plants, ahead of any human, to fly to and return safely from the Moon, and from any extraterrestrial flight. In 2019 Chang'e 4 landed fruit flies on the Moon, the first extraterrestrial stay of non-human animals.
Visits of organisms to extraterrestrial bodies have been a significant issue of planetary protection, as with the crash of tardigrades on the Moon in 2019.
Plants were first grown in space in 1966 with Kosmos 110 and in 1971 on Salyut 1, with the first plants producing seeds on August 4, 1982 on Salyut 7. The first plant to sprout on the Moon, and on any extraterrestrial body, grew in 2019 on the Chang'e 4 lander.
Plants, and growing them in space and in places such as the Moon, have been important subjects of space research, but also serve as psychological support and possibly nutrition during continuous crewed presence in space.
Direct human presence in space
Direct human presence in space was first achieved by Yuri Gagarin, flying a space capsule in 1961 for one orbit around Earth. Direct human presence in open space, by exiting a spacecraft in a spacesuit in a so-called extravehicular activity, has been achieved since Alexei Leonov became the first person to do so in 1965.
Though Valentina Tereshkova was the first woman in space in 1963, women saw no further presence in space until the 1980s and are still underrepresented; for example, no woman has ever been present on the Moon. The internationalization of direct human presence in space started with the first space rendezvous of two crews of different human spaceflight programs, the Apollo–Soyuz mission in 1975, and continued at the end of the 1970s with the Interkosmos program.
Space stations have so far harboured the only long-duration direct human presence in space. After the first station, Salyut 1 (1971), and its tragic Soyuz 11 crew, space stations have been operated consecutively since Skylab (1973), allowing a progression of long-duration direct human presence in space. Long-duration crews have been joined by visiting crews since 1977 (Salyut 6). Consecutive direct human presence in space has been achieved since the Salyut successor Mir, starting in 1987 and continuing until the operational transition from Mir to the ISS, whose first occupation began an uninterrupted direct human presence in space in 2000.
Human population records in orbit developed from 1 in 1961, 2 in 1962, 4–7 in 1969, 7–11 in 1984 and 13 in 1995, to 14 in 2021, 17 in 2023 and 19 in 2024, developing into a continuous population of no fewer than 10 people on two space stations since 5 June 2022 (as of 2024). The ISS has hosted the most people in space at the same time, reaching 13 for the first time during the eleven-day docking of STS-127 in 2009.
Beyond Earth, the Moon has been the only astronomical object to see direct human presence, through the week-long Apollo missions between 1968 and 1972, beginning with the first orbit by Apollo 8 in 1968 and the first landing by Apollo 11 in 1969. The longest extraterrestrial human stay was the three days of Apollo 17.
While most persons who have been to space are astronauts, professional members of human spaceflight programs, particularly governmental ones, the few others, starting in the 1980s, have trained and gone to space as spaceflight participants, with the first space tourist staying in space in 2001.
By the end of the 2010s, several hundred people from more than 40 countries had gone into space, most of them reaching orbit. 24 people have traveled to the Moon, and 12 of them have walked on it.
By 2007, space travelers had spent over 29,000 person-days (a cumulative total of over 77 years) in space, including over 100 person-days of spacewalks.
Usual durations for individuals inhabiting space on long-duration stays are six months, with the longest stays on record lasting about a year.
Space infrastructure
A permanent human presence in space depends on an established space infrastructure which harbours, supplies and maintains that presence. Such infrastructure was originally Earth-based, but with increased numbers of satellites and long-duration missions beyond the near side of the Moon, space-to-space infrastructure is coming into use. The first simple interplanetary infrastructures have been created by space probes, particularly when employing a system that combines a lander with a relaying orbiter.
Space stations are space habitats which have provided crucial infrastructure for sustaining a continuous direct human, including non-human, presence in space. Space stations have been continuously present in orbit around Earth from Skylab in 1973, through the Salyut stations and Mir, to the ISS.
The planned Artemis program includes the Lunar Gateway, a future space station around the Moon intended as a multi-mission waystation.
Spiritual and artistic
Human presence has also been expressed through spiritual and artistic installations in outer space or on the Moon.
Apollo 15 Mission Commander David Scott, for example, left a Bible on the crew's Lunar Roving Vehicle during an extravehicular activity on the Moon. Space has furthermore been the site of religious festivities, such as Christmas on the International Space Station.
Locations
Particular orbits
Human presence in Earth orbit and heliocentric orbit has been the case with a range of artificial objects since the beginning of spaceflight (both possibly with debris since 1957, but certainly since 1957 with Sputnik 1 and since 1959 with Luna 1, respectively), and in more distant interplanetary heliocentric orbits since 1961 with Venera 1. Extraterrestrial orbits other than heliocentric orbit have been achieved since 1966, starting with Luna 10 around the Moon, joined by several more lunar orbiters that same year starting with Lunar Orbiter 1, and since 1971 with Mariner 9 around another planet (Mars).
Humans have also used and occupied co-orbital configurations, particularly halo orbits around different libration points, to harness the benefits of these so-called Lagrange points.
Some interplanetary missions, particularly the Ulysses solar polar probe and notably Voyager 1 and 2, as well as others like Pioneer 10 and 11, have entered trajectories taking them out of the ecliptic plane.
Extraterrestrial bodies
Humanity has reached different types of astronomical bodies, but the longest and most diverse presence (including non-human presence, e.g. sprouting plants) has been on the Moon, particularly because it is the first and only extraterrestrial body to have been directly visited by humans.
Space probes have been establishing and mediating human presence interplanetarily since their first visits to Venus. Mars has seen a continuous presence since 1997, after being first flown by in 1964 and landed on in 1971. A group of missions has been present on Mars since 2001, including a continuous presence by a series of rovers since 2003.
Besides having reached some planetary-mass objects (that is, planets, dwarf planets and the largest, so-called planetary-mass moons), humans have also reached, landed on, and in some cases even returned robotic probes from small Solar System bodies, like asteroids and comets, with a range of space probes.
The Solar System region near the Sun's corona, inside Mercury's orbit, with its high gravitational potential difference from Earth and the consequently high delta-v needed to reach it, has only been pierced on highly elliptical orbits by a few solar probes such as Helios 1 and 2, as well as the more contemporary Parker Solar Probe. The latter has come closest to the Sun, breaking speed records at its very low solar altitudes at perihelion.
Future direct human presence beyond Earth orbit may be re-introduced if current plans for crewed research stations on the Moon and on Mars continue to be developed.
Outer Solar System
Human presence in the outer Solar System was established by the first visit to Jupiter in 1973 by Pioneer 10. Fifty years later, nine probes had traveled to the outer Solar System, and the first such probe by a space agency other than NASA (JUICE, the Jupiter Icy Moons Explorer) had just been launched on its way. Jupiter and Saturn are the only outer Solar System bodies to have been orbited by probes (Jupiter: Galileo in 1995 and Juno in 2016; Saturn: Cassini–Huygens in 2004), with all other outer Solar System probes performing flybys.
The Saturn moon Titan, with its exceptional atmosphere for a moon, has so far been the only body in the outer Solar System to be landed on, by the Cassini–Huygens lander Huygens in 2005.
Outbound
Several probes have reached Solar escape velocity, with Voyager 1 being the first to cross the heliopause and enter interstellar space, after 36 years of flight, on August 25, 2012, at a distance of 121 AU from the Sun.
Living in space
Living in outer space is fundamentally different from living on Earth. It is shaped by the characteristic environment of outer space, particularly its microgravity (producing weightlessness) and its near-perfect vacuum (supplying little protection and allowing unhindered exposure to radiation and material from far away). Mundane needs such as air, pressure, temperature and light have to be accommodated completely by life support systems, and movement, food intake and hygiene are confronted with challenges.
Long-duration stays are particularly endangered by the prevalent radiation exposure and the health effects of microgravity. Human fatalities have occurred due to accidents during spaceflight, particularly at launch and reentry. With the last in-flight accident to kill humans, the Columbia accident in 2003, the sum of in-flight fatalities rose to 15 astronauts and 4 cosmonauts, in five separate incidents. Over 100 others have died in accidents during activity directly related to spaceflight or its testing.
None of them remained in space, but small parts of the remains of deceased people have been flown as space burials to orbital space since 1992 and, controversially, even to the Moon since 1999.
Bioastronautics, space medicine, space technology and space architecture are fields which are occupied with alleviating the effects of space on humans and non-humans.
Culture
Research has begun into the culture and "microsocieties" that form in space, with space archaeologists analyzing residue from space environments to learn about astronaut life. A few incidents of astronauts from different countries having difficulties getting along have also been studied.
Impact, environmental protection and sustainability
Human space activity, and its subsequent presence, can have and has had an impact on space as well as on the capacity to access it. This impact, or its potential, has created the need to address its issues, ranging from planetary protection, space debris, nuclear hazards, radio pollution and light pollution to the reusability of launch systems, so that space does not become a sacrifice zone.
Sustainability has been a goal of space law, space technology and space infrastructure, with the United Nations seeing the need to advance long-term sustainability of outer space activities in space science and application, and the United States having it as a crucial goal of its contemporary space policy and space program.
Human presence in space is felt particularly in orbit around Earth. The orbital space around Earth has seen an increasing and extensive human presence which, besides its benefits, has also produced a threat by carrying with it space debris, potentially cascading into the so-called Kessler syndrome. This has raised the need for regulation and mitigation to secure sustainable access to outer space.
Study and reception
Individually and as societies, humans have engaged since prehistory in developing their perception of the space above the ground, and of the cosmos at large, and in developing their place in it.
The social sciences have been studying such works, from prehistory to the contemporary era, in fields ranging from archaeoastronomy to cultural astronomy. With actual human activity and presence in space, the need for fields like astrosociology and space archaeology has been added.
Human presence observed from space
Earth observation has been one of the first missions of spaceflight, resulting in a dense contemporary presence of Earth observation satellites with a wealth of uses and benefits for life on Earth.
Viewing human presence from space, particularly by humans directly, has been reported by some astronauts to cause a cognitive shift in perception, especially while viewing the Earth from outer space; this has been called the overview effect.
Observation of space from space
Parallel to the above overview effect, the term "ultraview effect" has been introduced for the subjective response of intense awe some astronauts have experienced viewing large "starfields" while in space.
Space observatories like the Hubble Space Telescope have been present in Earth orbit, benefiting from being outside Earth's atmosphere and away from its radio noise, resulting in less distorted observations.
Direct and mediated human presence
Related to the long discussion of what human presence constitutes and how it should be lived, the discussion about direct (e.g. crewed) and mediated (e.g. uncrewed) human presence has been decisive for how space policy makers have chosen forms of human presence and their purposes.
The relevance of this issue for space policy has risen with the advancement and resulting possibilities of telerobotics, to the point where most human presence in space has been realized robotically, leaving direct human presence behind.
Localization in space
The location of human presence has been studied throughout history by astronomy and was significant for relating to the heavens, that is, to outer space and its bodies.
The historic argument between geocentrism and heliocentrism is one example of a debate about the location of human presence.
Scenarios of and relations to space beyond human presence
Realizations of the scales of space have been taken as occasions to discuss human and life's existence, and their relations to space and time beyond them, with some understanding humanity's or life's presence as a singularity or as one in isolation, pondering the Fermi paradox.
A diverse range of arguments about how to relate to space beyond human presence has been raised, with some seeing space beyond humans as a reason to venture out and explore it, some aiming for contact with extraterrestrial life, and others arguing for the protection of humanity or life from its possibilities.
Considerations about the ecological integrity and independence of celestial bodies, countering exploitative understandings of space as dead, particularly in the sense of terra nullius, have raised issues such as rights of nature.
Purposes and uses
Space, and human presence in it, has been the subject of different agendas.
Human presence in space at its beginnings was fueled by the Cold War and its outgrowth, the Space Race. During this time technological, nationalist, ideological and military competition were the dominant driving factors of space policy and of the resulting activity and, particularly direct human, presence in space.
With the waning of the Space Race, concluded by cooperation in human spaceflight, the focus shifted in the 1970s further to space exploration and telerobotics, yielding a range of achievements and technological advances. Space exploration by then also meant engagement by governments in the search for extraterrestrial life.
Since human activity and presence in space has been producing spin-off benefits beyond the above purposes, such as Earth observation and communication satellites for civilian use, international cooperation to advance such benefits has grown with time. Particularly for the purpose of continuing the benefits of space infrastructure and space science, the United Nations has been pushing to safeguard human activity in outer space in a sustainable way.
With the contemporary so-called NewSpace, the aim of commercializing space has grown, along with a narrative of space habitation for the survival of some humans away from and without Earth, a narrative which has in turn been critically analyzed as highlighting colonialist purposes for human activity and presence in space. This has given rise to a deeper engagement in the fields of space environment and space ethics.
Overview of different purposes and uses
Rationales for space exploration
Benefits of space exploration
NASA spinoff technologies
Space research
Earth observation
Astronomy
Space observatory
Search for extraterrestrial life (see also first contact)
Communication
Spaceflight/Space transportation
Commercial use of space
Space tourism
Space mining (see also Surface chauvinism and surfacism)
Space manufacturing
Environmental dumping
Planetary protection
Planetary defense
Isolationism
Presence in itself (see human outpost and space basing)
Space imperialism
National or private potency and competition (see Space Race)
Militarization of space
Extraterrestrial settlement
Emigration from Earth
Integration or naturalization (see also biological naturalization such as Pantropy)
Demographic push (e.g. due to overconsumption)
Forced displacement
Space and survival
Escapism
Space development as a purpose of progress (see also New Frontier)
Expansionism
Space colonization
Civilizing mission
Terraforming (see also Ethics of terraforming)
Directed panspermia
See also
Outline of space science
Outline of space exploration
Timeline of Solar System exploration
List of spaceflight records
Rights of nature
Anthropogenic metabolism
Anthroposphere
Collective consciousness
Scale (analytical tool)
Noosphere
Ecological civilization
Human impact on the environment
Human ecology
Technosignature
Extremophile
Explanatory notes
References
Further reading
Space
Outer space
Space exploration | Human presence in space | Physics,Astronomy,Mathematics | 5,756 |
7,930,220 | https://en.wikipedia.org/wiki/Decapentaplegic | Decapentaplegic (Dpp) is a key morphogen involved in the development of the fruit fly Drosophila melanogaster and is the first validated secreted morphogen. It is known to be necessary for the correct patterning and development of the early Drosophila embryo and the fifteen imaginal discs, which are tissues that will become limbs and other organs and structures in the adult fly. It has also been suggested that Dpp plays a role in regulating the growth and size of tissues. Flies with mutations in decapentaplegic fail to form these structures correctly, hence the name (decapenta-, fifteen; -plegic, paralysis). Dpp is the Drosophila homolog of the vertebrate bone morphogenetic proteins (BMPs), which are members of the TGF-β superfamily, a class of proteins that are often associated with their own specific signaling pathway. Studies of Dpp in Drosophila have led to greater understanding of the function and importance of their homologs in vertebrates like humans.
Function in Drosophila
Dpp is a classic morphogen, which means that it is present in a spatial concentration gradient in the tissues where it is found, and its presence as a gradient gives it functional meaning in how it affects development. The most studied tissues in which Dpp is found are the early embryo and the imaginal wing discs, which later form the wings of the fly. During embryonic development, Dpp is expressed uniformly along the dorsal side of the embryo, from which a sharp signaling gradient is subsequently established. In the imaginal discs, Dpp is strongly expressed in a narrow stripe of cells down the middle of the disc where the tissue marks the border between the anterior and posterior sides. Dpp diffuses from this stripe towards the edges of the tissue, forming a gradient as expected of a morphogen. However, although cells in the Dpp domain in the embryo do not proliferate, cells in the imaginal wing disc proliferate heavily, causing tissue growth. Although gradient formation in the early embryo is well understood, how the Dpp morphogen gradient forms in the wing imaginal disc remains controversial.
Role and formation in embryonic development
At the early blastoderm stage, Dpp signaling is uniform and low along the dorsal side. A sharp signaling profile emerges at the dorsal midline of the embryo during cellularization, with high levels of Dpp specifying the extraembryonic amnioserosa and low levels specifying the dorsal ectoderm. Dpp signaling also incorporates a positive feedback mechanism that promotes future Dpp binding. The morphogen gradient in embryos is established via a known active transport mechanism. Gradient formation depends on the BMP inhibitors Short gastrulation (Sog) and Twisted gastrulation (Tsg), and other extracellular proteins such as Tolloid (Tld), and Screw (Scw). Sog is produced in the ventral-lateral region of the embryo (perpendicular to the Dpp gradient) and forms a BMP-inhibiting gradient that prevents Dpp from binding to its receptor. Sog and Tsg form a complex with Dpp and are actively transported toward the dorsal midline (middle of the embryo), following the Sog concentration gradient. Tld, a metalloprotease, releases Dpp from the complex by mediating Sog processing, activating Dpp signaling at the midline. After gastrulation of the embryo, the Dpp gradient induces cardiac and visceral mesoderm formation.
Signaling pathway
Dpp, like its vertebrate homologs, is a signaling molecule. In Drosophila, the receptor for Dpp is formed by two proteins, Thickveins (Tkv) and Punt. Like Dpp itself, Tkv and Punt are highly similar to homologs in other species. When a cell receives a Dpp signal, the receptors are able to activate an intracellular protein called mothers against Dpp (MAD) by phosphorylation. The initial discovery of MAD in Drosophila paved the way for later experiments that identified the responder to TGF-β signaling in vertebrates, called SMADs. Activated MAD is able to bind to DNA and act as a transcription factor to affect the expression of different genes in response to Dpp signaling. Genes activated by Dpp signaling include optomotor blind (omb) and spalt, and the activity of these genes is often used as an indicator of Dpp signaling in experiments. Another gene with a more complicated regulatory interaction with Dpp is brinker. Brinker is a transcription factor that represses the activation targets of Dpp, so in order to turn on these genes Dpp must repress brinker as well as activate the other targets.
Role in imaginal wing disc
In the fly wing, the posterior and anterior halves of the tissue are populated by different kinds of cells that express different genes. Cells in the posterior but not the anterior express the transcription factor engrailed (En). One of the genes activated by En is hedgehog (hh), a signaling factor. Hedgehog signaling instructs neighboring cells to express Dpp, but Dpp expression is also repressed by En. The result is that Dpp is only produced in a narrow stripe of cells immediately adjacent to but not within the posterior half of the tissue. Dpp produced at this anterior/posterior border then diffuses out to the edges of the tissue, forming a spatial concentration gradient.
By reading their position along the gradient of Dpp, cells in the wing are able to determine their location relative to the anterior/posterior border, and they behave and develop accordingly.
It is possible that it is not actually the diffusion and gradient of Dpp that patterns tissues, but instead cells that receive Dpp signal instruct their neighbors on what to be, and those cells in turn signal their neighbors in a cascade through the tissue. Several experiments have been done to disprove this hypothesis and establish that it is actually the gradient of actual Dpp molecules that are responsible for patterning.
Mutant forms of the Dpp receptor Tkv exist that behave as if they are receiving high amounts of Dpp signal even in the absence of Dpp. Cells that contain this mutant receptor behave as if they are in an environment of high Dpp such as the area near the stripe of cells producing Dpp. By generating small patches of these cells in different parts of the wing tissue, investigators were able to distinguish how Dpp acts to pattern the tissue. If cells that receive a Dpp signal instruct their neighbors in a cascade, then additional tissue patterning centers should appear at the sites of the mutant cells that seem to receive high Dpp signaling but do not produce any Dpp themselves. However, if the physical presence of Dpp is necessary, then the cells near the mutants should not be affected at all. Experiments found the second case to be true, indicating that Dpp acts like a morphogen.
The common way to assess differences in tissue patterning in the fly wing is to look at the pattern of veins in the wing. In flies where the ability of Dpp to diffuse through the tissue is impaired, the positioning of the veins is shifted from that in normal flies, and the wing is generally smaller.
Dpp has also been proposed as a regulator of tissue growth and size, a classic problem in development. A problem common to organisms with multicellular organs that must grow from an initial size is how to know when to stop growing after the appropriate size is reached. Since Dpp is present in a gradient, it is conceivable that the slope of the gradient could be the measurement by which a tissue determines how large it is. If the amount of Dpp at the source is fixed and the amount at the edge of the tissue is zero, then the steepness of the gradient will decrease as the size of the tissue and the distance between the source and the edge increase. Experiments where an artificially steep gradient of Dpp is induced in wing tissue resulted in significantly increased amounts of cell proliferation, lending support to the steepness hypothesis.
Formation of the Dpp gradient in the imaginal wing disc
The shape of the Dpp gradient is determined by four ligand kinetic parameters that are affected by biological parameters:
The effective diffusion coefficient, which is dependent on extracellular diffusion, intracellular transport rates, and receptor binding/unbinding kinetics.
The effective extracellular and intracellular degradation rates.
The production rate, dependent on the Dpp production pathway.
The immobile fraction (a parameter associated with the method used to measure Dpp kinetics, FRAP).
It is important to note that a single biological parameter can affect multiple kinetic parameters. For example, receptor levels will affect both the diffusion coefficient and the degradation rates.
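The interplay of these parameters is often summarized in a synthesis-diffusion-degradation picture, in which a steady-state exponential gradient forms with a decay length set by the effective diffusion coefficient and degradation rate. The following minimal Python sketch illustrates this generic relationship; the parameter values are illustrative assumptions, not measured Dpp kinetics.

```python
# Minimal synthesis-diffusion-degradation sketch relating the kinetic
# parameters above to the gradient shape. All values are illustrative
# assumptions, not measured Dpp kinetics.
import numpy as np

D = 0.1        # effective diffusion coefficient (um^2/s), assumed
k = 2e-4       # effective degradation rate (1/s), assumed
j = 1.0        # production flux from the source stripe (a.u.), assumed

lam = np.sqrt(D / k)                 # decay length of the steady gradient (um)
x = np.linspace(0, 100, 5)           # distance from the source stripe (um)

# Steady state of dC/dt = D C'' - k C with flux j entering at x = 0:
C = (j / np.sqrt(D * k)) * np.exp(-x / lam)

print(f"decay length = {lam:.0f} um")
print("concentration profile:", np.round(C, 2))
# Doubling the degradation rate shortens the gradient by a factor sqrt(2),
# showing how a single kinetic parameter reshapes the whole profile.
```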
However, the mechanism by which the Dpp gradient is formed is still controversial, and no complete explanation has been proposed or proven. The four main categories of theories behind the formation of the gradient are free diffusion, restricted diffusion, transcytosis, and cytoneme-assisted transport.
Free/restricted diffusion model
The free diffusion model assumes Dpp to diffuse freely through the extracellular matrix, degrading via receptor-mediated degradation events. FRAP assays have argued against this model by noting that diffusion of GFP-Dpp does not match that expected of a similarly sized molecule. However, others have argued that a rate-limiting slow step further downstream of the process such as slow immobilization and slow degradation of Dpp itself could account for the observed differences in diffusion. Single molecules of Dpp have been tracked using fluorescence correlation spectroscopy (FCS), showing that 65% of Dpp molecules diffuse rapidly (consistent with the free diffusion model) and 35% diffuse slowly (consistent with Dpp bound to receptors or glypicans).
The restricted diffusion model includes the effects of cell packing geometry and interactions with the extracellular matrix via binding events with receptors such as Tkv and the heparin sulfate proteoglycans dally and dally-like.
Transcytosis model
The transcytosis model assumes Dpp to be transported via repeated rounds of intracellular receptor-mediated endocytosis, with the gradient severity determined by endocytotic sorting of Dpp toward recycling through cells versus degradation. This model was based on the initial observation that Dpp could not accumulate across clones in which dynamin, a protein critical for endocytosis, had been mutated into the shibire (shi) phenotype. However, other experiments showed that Dpp was able to accumulate over shi clones, challenging the transcytosis model. A revision of the theory behind the model proposes that endocytosis is not essential for Dpp movement but is involved in Dpp signaling. Dpp fails to move across cells with mutated dally and dally-like, two heparin sulfate proteoglycans (HSPGs) commonly found in the extracellular matrix. These results suggest that Dpp moves along the cell surface via restricted extracellular diffusion involving dally and dally-like, but the transport of Dpp itself does not rely on transcytosis.
Cytoneme-mediated transport model
The cytoneme-mediated model suggests that Dpp is directly transported to target cells via actin-based filopodia called cytonemes that extend from the apical surface of Dpp-responding cells to the Dpp-producing source cells. These cytonemes have been observed, but the dependence of the Dpp gradient on cytonemes has not been definitively proven in imaginal wing discs. However, Dpp is known to be required for and sufficient to extend and maintain cytonemes. Experiments analyzing the dynamics between Dpp and cytonemes have been conducted in the air sac primordium, where Dpp signaling was found to have a functional link with cytonemes. However, these experiments have not been replicated in imaginal wing discs.
Role in molluscs
Dpp is also found in molluscs, where it plays a key role in shell formation by controlling the shape of the conch. In bivalves, it is expressed until the protoconch has taken on the required shape, after which point its expression ceases. It is also associated with shell formation in gastropods, with an asymmetric distribution that may be associated with their coiling: shell growth appears to be inhibited where Dpp is expressed.
References
External links
Drosophila decapentaplegic - The Interactive Fly
Developmental genetics
Drosophila melanogaster genes
Morphogens | Decapentaplegic | Biology | 2,605 |
69,503,348 | https://en.wikipedia.org/wiki/Middle%20ear%20implant | A middle ear implant is a hearing device that is surgically implanted into the middle ear. They help people with conductive, sensorineural or mixed hearing loss to hear.
Middle ear implants work by improving the conduction of sound vibrations from the middle ear to the inner ear. There are two types of middle ear devices: active and passive. Active middle ear implants (AMEI) consist of an external audio processor and an internal implant, which actively vibrates the structures of the middle ear. Passive middle ear implants (PMEIs) are sometimes known as ossicular replacement prostheses, TORPs or PORPs. They replace damaged or missing parts of the middle ear, creating a bridge between the outer ear and the inner ear, so that sound vibrations can be conducted through the middle ear and on to the cochlea. Unlike AMEIs, PMEIs contain no electronics and are not powered by an external source.
PMEIs are the usual first-line surgical treatment for conductive hearing loss, due to their lack of external components and cost-effectiveness. However, each patient is assessed individually as to whether an AMEI or PMEI would bring more benefit. This is especially true if the patient has already had several surgeries with PMEIs.
Active middle ear implant
Parts
An active middle ear implant (AMEI) has two parts: an internal implant and an external audio processor. The microphone of the audio processor picks up sounds from the environment. The processor then converts these acoustic signals into digital signals and sends them to the implant through the skin. The implant sends the signals to the Floating Mass Transducer (FMT): a small vibratory part that is surgically fixed either on one of the three ossicles or against the round window of the cochlea. The FMT vibrates and sends sound vibrations to the cochlea. The cochlea converts these vibrations into nerve signals and sends them to the brain, where they are interpreted as sound.
Indications
AMEIs are intended for patients with mild-to-severe sensorineural hearing loss, as well as those with conductive or mixed hearing loss. They can be used by adults and children over the age of 5.
Sensorineural hearing loss
An AMEI can be beneficial for patients with mild-to-severe sensorineural hearing loss who have an intact ossicular chain and healthy middle ear, but who either cannot wear hearing aids or who do not get sufficient benefit from them. Reasons for not being able to wear hearing aids include earmold allergies, skin problems, narrow, collapsed or closed ear canals, or malformed ears. In cases of sensorineural hearing loss, the FMT is usually attached to the incus.
Conductive or mixed hearing loss
An AMEI is also indicated for patients with conductive or mixed hearing loss with bone conduction thresholds from 45 dB in the low frequencies to 65 dB in the high frequencies. In these cases, the FMT can be coupled to various parts of the middle ear, depending on the patient's pathology:
The oval window, causing stimulation of the cochlea in patients without an ossicular chain.
The round window, causing reverse stimulation of the cochlea in patients without an ossicular chain.
The mobile stapes in patients with absence or fixation of other ossicles, usually in cases of chronic otitis media or malformations.
Efficacy
AMEIs have been shown by several studies to be equal or superior to both hearing aids and bone conduction implants. Lee et al. used the PBmax test to study speech intelligibility in patients before and after receiving an AMEI. All patients had used hearing aids pre-implantation. The researchers found that speech intelligibility improved with the AMEI, particularly in patients with a down-sloping hearing loss. These findings were supported by Iwasaki et al., who found that both speech intelligibility and quality of life improved after implantation with an AMEI applied to the round window.
AMEIs can also offer improved hearing performance over bone conduction implants for patients with mixed hearing loss. Mojallal et al. found that patients whose mixed hearing loss was treated with an AMEI experienced both better word recognition and speech understanding in noise than those who received a bone conduction implant, provided that their bone conduction pure-tone average (0.5 to 4 kHz) was poorer than 35 dB HL.
Passive middle ear implant
Parts
Passive middle ear implants (PMEI) are ossicular replacement prostheses designed to replace some or all of the ossicular chain in the middle ear. They create a bridge between the outer ear and the inner ear, so that sound vibrations can be conducted through the middle ear and on to the cochlea.
There are two types of PMEIs: tympanoplasty implants and stapes implants. Tympanoplasty implants (also known as PORPs or TORPs) are suitable for patients with a mobile stapes footplate, i.e. a stapes footplate that moves in the normal way. Either a partial or a total tympanoplasty implant can be used, depending on the condition of the stapes. If the stapes is fixed and cannot transfer vibrations to the inner ear, then a stapes implant would be used.
PMEIs are made from different materials including titanium, teflon, hydroxylapatite, platinum, and nitinol, all of which are suitable for use within the human body. Titanium implants can safely undergo MRIs of up to 7.0 Tesla.
Indications
Tympanoplasty implant
The tympanoplasty implant is indicated in cases of congenital or acquired defects of the ossicular chain, due to e.g.:
Chronic otitis media
Traumatic injury
Malformation
Cholesteatoma
It can also be used to treat patients with inadequate conductive hearing from previous middle ear surgery.
Stapes implant
The stapesplasty prosthesis is indicated in cases of congenital or acquired defects of the stapes due to e.g.:
Otosclerosis
Congenital fixation of the stapes
Traumatic injury
Malformation of the ossicular chain/middle ear
It can also be used to treat patients with inadequate conductive hearing from previous stapes surgery.
See also
Ossicular replacement prosthesis
References
Medical technology | Middle ear implant | Biology | 1,310 |
22,507,026 | https://en.wikipedia.org/wiki/Photothermal%20optical%20microscopy | Photothermal optical microscopy / "photothermal single particle microscopy" is a technique that is based on detection of non-fluorescent labels. It relies on the absorption properties of labels (gold nanoparticles, semiconductor nanocrystals, etc.), and can be realized on a conventional microscope using a resonant, modulated heating beam, a non-resonant probe beam and lock-in detection of photothermal signals from a single nanoparticle. It is the extension of macroscopic photothermal spectroscopy to the nanoscopic domain. The high sensitivity and selectivity of photothermal microscopy allows even the detection of single molecules by their absorption. Similar to Fluorescence Correlation Spectroscopy (FCS), the photothermal signal may be recorded with respect to time to study the diffusion and advection characteristics of absorbing nanoparticles in a solution. This technique is called photothermal correlation spectroscopy (PhoCS).
Forward detection scheme
In this detection scheme a conventional scanning sample or laser-scanning transmission microscope is employed. Both the heating and the probing laser beam are coaxially aligned and
superimposed using a dichroic mirror. Both beams are focused onto a sample, typically via a high-NA illumination microscope objective, and recollected using a detection microscope objective. The thereby collimated transmitted beam is then imaged onto a photodiode after filtering out the heating beam. The photothermal signal is then the change $\Delta P_d$ in the transmitted probe beam power due to the heating laser. To increase the signal-to-noise ratio a lock-in technique may be used. To this end, the heating laser beam is modulated at a high frequency of the order of MHz and the detected probe beam power is then demodulated at the same frequency. For quantitative measurements, the photothermal signal may be normalized to the background detected power $P_d$ (which is typically much larger than the change $\Delta P_d$), thereby defining the relative photothermal signal $\Phi = \Delta P_d / P_d$.
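As an illustration of the lock-in scheme just described, the following minimal Python sketch simulates a detector trace modulated at the heating frequency and demodulates it into its two quadratures; the sampling rate, modulation depth and noise level are assumed values chosen for illustration.

```python
# Minimal sketch of the lock-in detection described above: a probe trace
# modulated at the heating frequency is demodulated into its quadratures.
# Sampling rate, modulation depth and noise level are assumed values.
import numpy as np

fs = 50e6                              # sampling rate (Hz), assumed
f_mod = 1e6                            # heating-beam modulation frequency (Hz)
t = np.arange(0, 2e-3, 1 / fs)

P_bg = 1.0                             # transmitted probe power (a.u.)
dP = 1e-5                              # photothermal modulation depth, assumed
rng = np.random.default_rng(0)
trace = P_bg + dP * np.sin(2 * np.pi * f_mod * t) + 1e-4 * rng.standard_normal(t.size)

# Multiply by both reference quadratures and average (the lock-in operation).
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
X = 2 * np.mean(trace * ref_i)
Y = 2 * np.mean(trace * ref_q)
amplitude = np.hypot(X, Y)             # recovered modulation depth

print(f"recovered dP = {amplitude:.2e}, relative signal = {amplitude / P_bg:.2e}")
```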
Detection mechanism
The physical basis for the photothermal signal in the transmission detection scheme is the lensing action of the refractive index profile that is created upon the absorption of the heating laser power by the nanoparticle. The signal is homodyne in the sense that a steady-state difference signal accounts for the mechanism, and the forward-scattered field's self-interference with the transmitted beam corresponds to an energy redistribution as expected for a simple lens. The lens is a gradient refractive index (GRIN) particle determined by the 1/r refractive index profile established by the point-source temperature profile around the nanoparticle. For a nanoparticle of radius $R$ embedded in a homogeneous medium of refractive index $n_m$ with a thermorefractive coefficient $\mathrm{d}n/\mathrm{d}T$, the refractive index profile reads:

$n(r) = n_m + \Delta n \, \frac{R}{r}, \quad r \ge R,$
in which the contrast of the thermal lens $\Delta n$ is determined by the nanoparticle absorption cross-section $\sigma_{\mathrm{abs}}$ at the heating beam wavelength, the heating beam intensity $I_h$ at the point of the particle and the embedding medium's thermal conductivity $\kappa$ via $\Delta n = \frac{\mathrm{d}n}{\mathrm{d}T}\,\frac{\sigma_{\mathrm{abs}} I_h}{4 \pi \kappa R}$.
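For a sense of the magnitudes involved, the following sketch evaluates the surface temperature rise and the index contrast from the expression above; the parameter values are rough assumptions for a gold nanoparticle in water.

```python
# Order-of-magnitude evaluation of the thermal-lens contrast defined above.
# All parameter values are illustrative assumptions (roughly a gold
# nanoparticle in water).
import math

R = 30e-9          # particle radius (m), assumed
sigma_abs = 3e-15  # absorption cross-section (m^2), assumed
I_h = 1e9          # heating intensity (W/m^2), ~1 mW focused to ~1 um^2
kappa = 0.6        # thermal conductivity of water (W/m/K)
dn_dT = -1e-4      # thermorefractive coefficient of water (1/K)

dT_surface = sigma_abs * I_h / (4 * math.pi * kappa * R)  # rise at r = R
dn = dn_dT * dT_surface                                   # index contrast

print(f"surface heating ~ {dT_surface:.1f} K, index contrast dn ~ {dn:.1e}")
```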
Although the signal can be well explained in a scattering framework, the most intuitive description is an analogy to the Coulomb scattering of wave packets in particle physics.
Backwards detection scheme
In this detection scheme a conventional scanning sample or laser-scanning transmission microscope is employed. Both the heating and the probing laser beam are coaxially aligned and
superimposed using a dichroic mirror. Both beams are focused onto a sample, typically via a high-NA illumination microscope objective. Alternatively, the probe beam may be laterally displaced with respect to the heating beam. The retroreflected probe-beam power is then imaged onto a photodiode, and the change $\Delta P_d$ induced by the heating beam provides the photothermal signal.
Detection mechanism
The detection is heterodyne in the sense that the field of the probe beam scattered by the thermal lens interferes in the backwards direction with a well-defined retroreflected part of the incident probe beam.
References
Microscopy | Photothermal optical microscopy | Chemistry | 799 |
38,541,578 | https://en.wikipedia.org/wiki/Theta%20operator | In mathematics, the theta operator is a differential operator defined by

$\theta = z \frac{d}{dz}.$
This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:

$\theta (z^k) = k z^k, \quad k = 0, 1, 2, \ldots$
In n variables the homogeneity operator is given by

$\theta = \sum_{k=1}^n x_k \frac{\partial}{\partial x_k}.$
As in one variable, the eigenspaces of θ are the spaces of homogeneous functions. (Euler's homogeneous function theorem)
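The eigenvalue property can be checked symbolically; the following minimal SymPy sketch applies the operator to monomials in one and two variables.

```python
# Symbolic check: theta acts on monomials as multiplication by their degree,
# in one variable and, via Euler's theorem, on homogeneous functions of n.
import sympy as sp

z, x1, x2 = sp.symbols('z x1 x2')

theta_1d = lambda f: z * sp.diff(f, z)
assert sp.simplify(theta_1d(z**5) - 5 * z**5) == 0  # theta(z^k) = k z^k

theta_2d = lambda f: x1 * sp.diff(f, x1) + x2 * sp.diff(f, x2)
h = x1**2 * x2  # homogeneous of degree 3
assert sp.simplify(theta_2d(h) - 3 * h) == 0  # Euler's homogeneous function theorem

print("theta eigenvalue checks passed")
```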
See also
Difference operator
Delta operator
Elliptic operator
Fractional calculus
Invariant differential operator
Differential calculus over commutative algebras
References
Further reading
Differential operators | Theta operator | Mathematics | 108 |
14,584,919 | https://en.wikipedia.org/wiki/Robert%20J.%20Elliott | Robert James Elliott (born 1940) is a British-Canadian mathematician, known for his contributions to control theory, game theory, stochastic processes and mathematical finance.
He was schooled at Swanwick Hall Grammar School in Swanwick, Derbyshire and studied mathematics, earning a B.A. (1961) and M.A. (1965) at the University of Oxford, as well as a Ph.D. (with the 1965 thesis Some results in spectral synthesis, advised by John Hunter Williamson) and an Sc.D. (1983) from the University of Cambridge.
He taught and conducted research at
University of Newcastle (1964),
Yale University (1965–66),
University of Oxford (1966–68),
University of Warwick (1969–73),
Northwestern University (1972–73),
University of Hull (1973–86),
University of Alberta (1985-2001),
University of Calgary (2001-2009) and
University of Adelaide (2009-2013).
Books
Stochastic Processes, Finance and Control: A Festschrift in Honor of Robert J Elliott (World Scientific Publishing, 2012)
with Nigel Kalton, The Existence of Value for Differential Games (American Mathematical Society, 1972)
Stochastic Calculus and Applications (Springer-Verlag, 1982)
Viscosity Solutions and Optimal Control (Longman, 1987)
Stokasticheski Analiz i evo Prilozeniya (M.I.R. Publications Moscow, 1986)
with Lakhdar Aggoun and John B. Moore, Hidden Markov Models: Estimation and Control (Springer-Verlag, 1994)
with P. Ekkehard Kopp, Mathematics of Financial Markets (Springer Verlag, 1999, in Hungarian 2000).
with J. van der Hoek, Binomial Models in Finance (Springer Verlag, 2005)
with Rogemar S. Mamon, Hidden Markov Models in Finance (Springer, 2007)
with Samuel N. Cohen, Stochastic Calculus and Applications (Springer, 2015)
References
20th-century British mathematicians
21st-century British mathematicians
Academics of the University of Hull
Academics of the University of Warwick
Alumni of the University of Oxford
British emigrants to Canada
Game theorists
People from Amber Valley
Academic staff of the University of Alberta
Academic staff of the University of Calgary
1940 births
Living people | Robert J. Elliott | Mathematics | 462 |
247,124 | https://en.wikipedia.org/wiki/Ceratophyllaceae | Ceratophyllaceae is a cosmopolitan family of flowering plants including one living genus, commonly found in ponds, marshes, and quiet streams in tropical and temperate regions. It is the only extant family in the order Ceratophyllales. Species are commonly called coontails or hornworts, although hornwort is also used for unrelated plants of the division Anthocerotophyta.
Living Ceratophyllum grows completely submerged, usually, though not always, floating on the surface, and does not tolerate drought.
Taxonomy
Ceratophyllaceae was considered a relative of Nymphaeaceae and included in Nymphaeales in the Cronquist system, but research has shown that it is not closely related to Nymphaeaceae or any other extant plant family. Some early molecular phylogenies suggested it was the sister group to all other angiosperms, but more recent research suggests that it is the sister group to the eudicots. The APG III system placed the family in its own order, the Ceratophyllales, a placement also accepted by the APG IV system, with Ceratophyllales as the sister group to the eudicots.
The extinct family Montsechiaceae, containing the genus Montsechia, has also been placed in the order Ceratophyllales.
Genera
The family contains one living genus and several extinct genera described from the fossil record, including one of the earliest fruit-bearing plants (with fruit in the form of an achene), the freshwater genus Donlesia from the Early Cretaceous Dakota Formation.
Ceratophyllum
†Ceratostratiotes
†Donlesia
References
Freshwater plants
Angiosperm families | Ceratophyllaceae | Biology | 334 |
9,633,614 | https://en.wikipedia.org/wiki/Delay%20equalization | In signal processing, delay equalization corresponds to adjusting the relative phases of different frequencies to achieve a constant group delay, typically by adding an all-pass filter in series with an uncompensated filter. Machine-learning techniques have also been applied to the design of such filters.
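As an illustration, the following minimal SciPy sketch cascades a first-order all-pass section with a low-pass filter and compares the passband group delays; the filter order and pole location are illustrative assumptions, and a practical equalizer would tune several all-pass sections to flatten the delay.

```python
# Minimal sketch of delay equalization: cascade a first-order all-pass
# section (flat magnitude, frequency-dependent phase) with a low-pass
# filter and compare passband group delays. Values are illustrative.
import numpy as np
from scipy import signal

b_lp, a_lp = signal.butter(4, 0.3)        # uncompensated 4th-order low-pass

p = 0.5                                   # all-pass pole, a tunable parameter
b_ap, a_ap = [-p, 1.0], [1.0, -p]         # H_ap(z) = (-p + z^-1) / (1 - p z^-1)

b = np.convolve(b_lp, b_ap)               # series connection multiplies the
a = np.convolve(a_lp, a_ap)               # transfer functions

w, gd_lp = signal.group_delay((b_lp, a_lp))
_, gd_eq = signal.group_delay((b, a))

band = w < 0.3 * np.pi                    # compare over the passband only
print("passband delay ripple (samples):",
      round(float(np.ptp(gd_lp[band])), 2), "->",
      round(float(np.ptp(gd_eq[band])), 2))
# A single untuned section rarely flattens the delay by itself; real designs
# optimize the poles of several sections to minimize this ripple.
```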
References
Signal processing
Digital signal processing | Delay equalization | Technology,Engineering | 66 |
31,389,801 | https://en.wikipedia.org/wiki/Dragoslav%20D.%20%C5%A0iljak | Dragoslav D. Šiljak is professor emeritus of Electrical Engineering at Santa Clara University, where he held the title of Benjamin and Mae Swig University Professor. He is best known for developing the mathematical theory and methods for control of complex dynamic systems characterized by large-scale, information structure constraints and uncertainty.
Biography
Šiljak was born on September 10, 1933, in Belgrade, Serbia to Dobrilo and Ljubica (née Živanović). He earned his bachelor's degree from the School of Electrical Engineering at the University of Belgrade in the field of Automatic Control Systems in 1957. By 1963, he had received both his Master's and Ph.D. degrees under the supervision of Professor Dušan Mitrović, and he was appointed Docent Professor in that same year. At the University of Belgrade he sought out books in Russian by mathematical masters like Lyapunov, Pontryagin, and Krasovsky. As a graduate student, he also managed to get papers published in the top U.S. journal in control engineering.
His published papers caught the attention of U.S. academics, including G.J. Thaler, a lecturer at Santa Clara University, who convinced the Dean of Engineering Robert Parden to extend an invitation to Šiljak. He arrived on the Mission Campus in 1964 to teach and conduct research. There he taught courses in Electrical Engineering and Applied Mathematics and developed methods for the design of control systems. In 1967, Šiljak married Dragana (née Todorovic). They have two children, Ana and Matija, and five grandchildren.
Research
In 1964, Šiljak was awarded a multi-year grant from the National Aeronautics and Space Administration (NASA) to apply parameter space methods for the design of robust control systems to space structures. He collaborated with Sherman Selzer in the Astrionics Laboratory of NASA's George C. Marshall Space Flight Center to design the navigation and control systems for the Saturn V Large Booster that propelled the 1969 Apollo 11 lunar mission. He then began to develop his theory of stability and control of large-scale systems, based on graph-theoretic methods and vector Lyapunov Functions. He applied the theory to the decentralized control of the Large Space Telescope and Skylab built by NASA.
In the early 1970s, Šiljak considered large-scale dynamic systems composed of interconnected sub-systems with uncertain interconnections. He defined the concept of "connective stability": a system is considered stable when it remains stable despite the disconnection and re-connection of subsystems during operation. He established the methods for determining the conditions for connective stability within the framework of comparison principle and vector Lyapunov functions. He applied these methods to a wide variety of models, including large space structures, competitive equilibrium in multi-market systems, multi-species communities in population biology, and large scale power systems.
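The flavor of this approach can be illustrated with the aggregate test used alongside vector Lyapunov functions: diagonal entries bound each subsystem's stability margin, off-diagonal entries bound the interconnection strengths, and connective stability follows if the aggregate matrix is an M-matrix. The following minimal sketch, with purely illustrative numbers, checks that condition.

```python
# Hedged sketch of the aggregate M-matrix test associated with connective
# stability via vector Lyapunov functions. The numbers are illustrative
# assumptions, not values from Siljak's publications.
import numpy as np

margins = np.array([2.0, 1.5, 1.8])          # subsystem stability margins pi_i
bounds = np.array([[0.0, 0.4, 0.3],          # interconnection gain bounds xi_ij
                   [0.5, 0.0, 0.2],
                   [0.3, 0.6, 0.0]])

W = np.diag(margins) - bounds                # aggregate comparison matrix

def is_m_matrix(W):
    """A Z-matrix is an M-matrix iff all leading principal minors are positive."""
    n = W.shape[0]
    return all(np.linalg.det(W[:k, :k]) > 0 for k in range(1, n + 1))

# If W is an M-matrix, stability survives any weakening or severing of the
# interconnections (gains anywhere between 0 and xi_ij): connective stability.
print("connectively stable:", is_m_matrix(W))
```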
In the 1980s, Šiljak and his collaborators developed a large number of new and highly original concepts and methods for the decentralized control of uncertain large-scale interconnected systems. He introduced the notions of overlapping subsystems and decompositions to formulate the inclusion principle. The principle describes the process of expansion and contraction of dynamic systems that serves the purpose of rewriting overlapping decompositions as disjoint ones, which, in turn, allows standard methods for control design. Structurally fixed modes, multiple controllers for reliable stabilization, decentralized optimization, and hierarchical, epsilon, and overlapping decompositions laid the foundation for a powerful and efficient approach to a broad set of problems in the control design of large complex systems. This development was reported in the comprehensive monograph Decentralized Control of Complex Systems.
In the following two decades, Šiljak and his collaborators raised the research on complex systems to a higher level. Decomposition schemes involving inputs and outputs were developed for and applied to complex systems of unprecedented dimensions. Dynamic graphs were defined in a linear space as one parameter groups of transformations of the graph space into itself. This new mathematical entity opened the possibility to include continuous Boolean networks in a theoretical study of gene regulation and modeling of large-scale organic structures. These new and exciting developments were published in Control of Complex Systems: Structural Constraints and Uncertainty.
In 2004, a special issue in his honor was published in two numbers of the mathematical journal Dynamics of Continuous, Discrete, and Impulsive System, and it contained articles from leading scholars in the field of dynamic systems. A survey of the selected works of Dragoslav Šiljak can be found in "An Overview of the Collected Works of Dragoslav Siljak" by Zoran Gajić and Masao Ikeda, published in Dynamics of Continuous, Discrete and Impulsive Systems Series A: Mathematical Analysis.
Awards
In 1981, Šiljak served as a Distinguished Scholar of the Japan Society for the Promotion of Science. In that same year he became a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), "for contributions to the theory of nonlinear control and large-scale systems". He was selected as a Distinguished Professor of the Fulbright Foundation in 1984, and in 1985 became an international member of the Serbian Academy of Sciences and Arts. In 1986, he served as Director of the NSF workshop “Challenges to Control: A Collective View,” organizing a forum of top control scientists at Santa Clara University for the purpose of assessing the state of the art of the field and outlining directions of research. In 1991, he gave a week-long seminar on decentralized control at Seoul National University as a Hoam Distinguished Foreign Scholar. In 2001, he became a Life Fellow of the IEEE.
In 2010 he received the Richard E. Bellman Control Heritage Award from the American Automatic Control Council, "for his fundamental contributions to the theory of large-scale systems, decentralized control, and parametric approach to robust stability".
Sports career
Šiljak was a member of the national water polo team of Yugoslavia that won the silver medal at the 1952 Olympic Games in Helsinki, Finland. He was again a member of the team when it won the World Cup “Trofeo Italia” played in Nijmegen, the Netherlands, in 1953. Šiljak played water polo for the club “Jadran” of Herceg Novi when the club won the national championship of Yugoslavia in 1958 and 1959. He was a member of the club “Partizan” of Belgrade when the club won the Yugoslav championship in 1963 and became the “Champion of Champions” by winning the Tournament of European Water Polo Champions in Zagreb, Croatia, in 1964.
Works
Books
Nonlinear Systems: The Parameter Analysis and Design, John Wiley (1969)
Large-Scale Dynamic Systems: Stability and Structure, North-Holland (1978)
Decentralized Control of Complex Systems, Academic Press (1991), published in Russian as Децентрализованное управление сложными системами, Mir (1994).
Control of Complex Systems: Structural Constraints and Uncertainty, Springer Verlag (2010, with A. I. Zečević)
Dragoslav Šiljak, Stabilnost sistema upravljanja (The Stability of Control Systems), Elektrotehnički fakultet u Beogradu (1974)
Select Articles
"Connective Stability of Complex Ecosystems," Nature (1974).
"Connective Stability of Competitive Equilibrium," Automatica (1975).
"Competitive Economic Systems: Stability, Decomposition, and Aggregation," IEEE Transactions on Automatic Control (1976).
"An Improved Block-Parallel Newton Method via Epsilon Decompositions for Load Flow Calculations," IEEE Transactions on Power Systems (1978).
"Lotka-Volterra Equations: Decomposition, Stability, and Structure," Journal of Mathematical Biology (1980) (with M. Ikeda).
"Structurally Fixed Modes," Systems and Control Letters (1981).
"Decentralized Control with Overlapping Information Sets," Journal of Optimization Theory and Applications (1981).
"An Inclusion Principle for Hereditary Systems," Journal of Mathematical Analysis and Applications (1984).
"Nested Epsilon Decompositions of Linear Systems: Weakly Coupled and Overlapping Blocks," SIAM Journal on Matrix Analysis and Applications (1991).
"Optimal Decentralized Control for Stochastic Dynamic Systems," Recent Trends in Optimization Theory and Applications (1995).
"Coherency Recognition Using Epsilon Decomposition," IEEE Transactions on Power Systems (1998).
"Dynamic Graphs," Nonlinear Analysis: Hybrid Systems (2008).
"Inclusion Principle for Descriptor Systems," IEEE Transactions on Automatic Control (2009).
"Consensus at Competitive Equilibrium: Dynamic Flow of Autonomous Cars in Traffic Networks" (2017).
External links
IFAC Biography
Collected Works of Professor Dragoslav Siljak
References
Control theorists
American electrical engineers
21st-century Serbian engineers
American people of Serbian descent
Living people
Fellows of the IEEE
Richard E. Bellman Control Heritage Award recipients
1933 births | Dragoslav D. Šiljak | Engineering | 1,845 |
17,283,420 | https://en.wikipedia.org/wiki/Edinburgh%20Handedness%20Inventory | The Edinburgh Handedness Inventory is a measurement scale used to assess the dominance of a person's right or left hand in everyday activities, sometimes referred to as laterality. The inventory can be used by an observer assessing the person, or by a person self-reporting hand use. The latter method tends to be less reliable due to a person over-attributing tasks to the dominant hand.
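Responses are conventionally summarized as a laterality quotient (LQ) computed from the tallies for each hand. The sketch below is an illustrative implementation of that common scoring convention (the per-item tallies shown are invented example data, not from any study):

```python
def laterality_quotient(right_scores, left_scores):
    """Edinburgh-style laterality quotient: LQ = 100 * (R - L) / (R + L),
    ranging from -100 (strongly left-handed) to +100 (strongly right-handed);
    values near 0 indicate mixed-handedness."""
    r, l = sum(right_scores), sum(left_scores)
    return 100.0 * (r - l) / (r + l)

# Illustrative data: per-item tallies (0, 1, or 2 checks per hand) for the
# ten classic items: writing, drawing, throwing, scissors, toothbrush,
# knife, spoon, broom, striking a match, opening a box.
right = [2, 2, 1, 2, 1, 1, 2, 1, 1, 1]
left  = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]

lq = laterality_quotient(right, left)
print(f"LQ = {lq:+.1f}")   # +40.0 here: moderate right-hand preference
```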
The Edinburgh Handedness Inventory was published in 1971 by Richard Carolus Oldfield and has been used in various scientific studies as well as popular literature. According to Google Scholar, it has been cited tens of thousands of times. Within the very substantial literature on handedness it is used far more than any rival, such as FLANDERS or the Annett Hand Preference Questionnaire, the latter of which is poor at eliciting either-hand responses.
Nevertheless, profound dissatisfaction with the Inventory has been expressed, and statistical analysis has shown that the two-handed items broom and box correlate poorly with the other eight items, while drawing correlates too strongly with writing to add information. A major revision has been published.
See also
Ambidexterity
Cross-dominance
Handedness
References
External links
An online example of the tool authored by Mark Cohen, hosted at http://www.brainmapping.org
Motor skills
Handedness
Chirality | Edinburgh Handedness Inventory | Physics,Chemistry,Biology | 269 |
379,828 | https://en.wikipedia.org/wiki/Cyanosis | Cyanosis is the change of body tissue color to a bluish-purple hue, as a result of a decrease in the amount of oxygen bound to the hemoglobin in the red blood cells of the capillary bed. Cyanosis is usually apparent in body tissues covered with thin skin, including the mucous membranes, lips, nail beds, and ear lobes. Some medications, such as those containing amiodarone or silver, may cause discoloration. Furthermore, Mongolian spots, large birthmarks, and the consumption of food products with blue or purple dyes can also result in bluish skin discoloration and may be mistaken for cyanosis. Careful physical examination and history taking are crucial to diagnosing cyanosis. Management of cyanosis involves treating the underlying cause, as cyanosis is not a disease but a symptom.
Cyanosis is further classified into central cyanosis and peripheral cyanosis.
Pathophysiology
The mechanism behind cyanosis is different depending on whether it is central or peripheral.
Central cyanosis
Central cyanosis occurs due to a decrease in arterial oxygen saturation (SaO2), and begins to show once the concentration of deoxyhemoglobin in the blood reaches ≥ 5.0 g/dL (≥ 3.1 mmol/L, or an oxygen saturation of ≤ 85%). This indicates a cardiopulmonary condition.
Causes of central cyanosis are discussed below.
Peripheral cyanosis
Peripheral cyanosis happens when there is an increased concentration of deoxyhemoglobin on the venous side of the peripheral circulation. In other words, cyanosis depends on the absolute concentration of deoxyhemoglobin rather than on oxygen saturation alone: patients with severe anemia may appear normal even at markedly reduced oxygen saturations, because their total hemoglobin is too low for deoxyhemoglobin to reach the visible threshold, while patients with increased numbers of red blood cells (e.g., polycythemia vera) can appear cyanotic even at comparatively mild desaturation.
Causes
Central cyanosis
Central cyanosis is often due to a circulatory or ventilatory problem that leads to poor blood oxygenation in the lungs. It develops when arterial oxygen saturation drops below approximately 85%, and it may not become apparent until saturation falls below 75% in individuals with darker skin pigmentation.
Acute cyanosis can be a result of asphyxiation or choking and is one of the definite signs that ventilation is being blocked.
Central cyanosis may be due to the following causes:
Central nervous system (impairing normal ventilation):
Intracranial hemorrhage
Drug overdose (e.g., heroin)
Generalized tonic–clonic seizure (GTCS)
Respiratory system:
Pneumonia
Bronchiolitis
Bronchospasm (e.g., asthma)
Pulmonary hypertension
Pulmonary embolism
Hypoventilation
Chronic obstructive pulmonary disease, or COPD (emphysema)
Cardiovascular system:
Congenital heart disease (e.g., Tetralogy of Fallot, right to left shunts in heart or great vessels)
Heart failure
Valvular heart disease
Myocardial infarction
Hemoglobinopathies:
Methemoglobinemia
Sulfhemoglobinemia
Polycythemia
Congenital cyanosis (HbM Boston) arises from a mutation in the α-globin gene that changes the primary sequence, replacing a histidine with a tyrosine (H → Y). The tyrosine stabilizes the Fe(III) form (methaemoglobin), creating a permanent T-state of Hb.
Others:
High altitude, cyanosis may develop in ascents to altitudes >2400 m.
Hypothermia
Frostbite
Obstructive sleep apnea
Peripheral cyanosis
Peripheral cyanosis is the blue tint in fingers or extremities due to inadequate or obstructed circulation. The blood reaching the extremities is not oxygen-rich, and when viewed through the skin a combination of factors can lead to the appearance of a blue color. All factors contributing to central cyanosis can also cause peripheral symptoms to appear, but peripheral cyanosis can be observed in the absence of heart or lung failure. Small blood vessels may be constricted, and this can be treated by increasing the oxygenation level of the blood.
Peripheral cyanosis may be due to the following causes:
All common causes of central cyanosis
Reduced cardiac output (e.g., heart failure or hypovolemia)
Cold exposure
Chronic obstructive pulmonary disease (COPD)
Arterial obstruction (e.g., peripheral vascular disease, Raynaud phenomenon)
Venous obstruction (e.g., deep vein thrombosis)
Differential cyanosis
Differential cyanosis is the bluish coloration of the lower but not the upper extremity and the head. This is seen in patients with a patent ductus arteriosus. Patients with a large ductus develop progressive pulmonary vascular disease, and pressure overload of the right ventricle occurs. As soon as pulmonary pressure exceeds aortic pressure, shunt reversal (right-to-left shunt) occurs. The upper extremity remains pink because deoxygenated blood flows through the patent duct and directly into the descending aorta while sparing the brachiocephalic trunk, left common carotid, and left subclavian arteries.
Evaluation
A detailed history and physical examination (particularly focusing on the cardiopulmonary system) can guide further management and help determine the medical tests to be performed. Tests that can be performed include pulse oximetry, arterial blood gas, complete blood count, methemoglobin level, electrocardiogram, echocardiogram, X-Ray, CT scan, cardiac catheterization, and hemoglobin electrophoresis.
In newborns, peripheral cyanosis typically presents in the distal extremities, circumoral, and periorbital areas. Of note, mucous membranes remain pink in peripheral cyanosis as compared to central cyanosis where the mucous membranes are cyanotic.
Skin pigmentation and hemoglobin concentration can affect the evaluation of cyanosis. Cyanosis may be more difficult to detect in people with darker skin pigmentation. However, it can still be diagnosed with careful examination of the typical body areas such as nail beds, tongue, and mucous membranes, where the skin is thinner and more vascular. As mentioned above, patients with severe anemia may not appear cyanotic even at markedly reduced oxygen saturations. Signs of severe anemia may include pale mucosa (lips, eyelids, and gums), fatigue, lightheadedness, and irregular heartbeats.
Management
Cyanosis is a symptom, not a disease itself, so management should be focused on treating the underlying cause.
If it is an emergency, management should always begin with securing the airway, breathing, and circulation. In patients with significant respiratory distress, supplemental oxygen (via nasal cannula or continuous positive airway pressure, depending on severity) should be given immediately.
If the methemoglobin levels are positive for methemoglobinemia, first-line treatment is to administer methylene blue.
History
The name cyanosis literally means the blue disease or the blue condition. It is derived from the color cyan, which comes from cyanós (κυανός), the Greek word for blue.
It is postulated by Dr. Christen Lundsgaard that cyanosis was first described in 1749 by Jean-Baptiste de Sénac, a French physician who served King Louis XV. De Sénac concluded from an autopsy that cyanosis was caused by a heart defect that led to the mixing of arterial and venous blood. It was not until 1919 that Lundsgaard was able to derive the concentration of deoxyhemoglobin (8 volumes per cent) that could cause cyanosis.
See also
Acrocyanosis
Blue baby syndrome
Raynaud's phenomenon
Blue Fugates
References
External links
Medical signs
Symptoms and signs: Skin and subcutaneous tissue | Cyanosis | Biology | 1,695 |
277,262 | https://en.wikipedia.org/wiki/Alcoholate | Originally, an alcoholate was the crystalline form of a salt in which alcohol took the place of water of crystallization, such as [SnCl3(OC2H5)·C2H5OH]2 and C8H6N4O5·CH3OH. However, this name should no longer be used, since the ending -ate commonly occurs in names for anions.
The second meaning of the word is that of a tincture, or alcoholic extract of plant material.
The third, and most usual, meaning of the word is as a synonym for alkoxide, the conjugate base of an alcohol.
References
Salts
Alcohol | Alcoholate | Chemistry | 134 |
20,007,702 | https://en.wikipedia.org/wiki/Pinaverium%20bromide | Pinaverium bromide (INN) is a medication used for functional gastrointestinal disorders. It belongs to a drug group called antispasmodics and acts as a calcium channel blocker in helping to restore the normal contraction process of the bowel. It is most effective when taken for a full course of treatment and is not designed for immediate symptom relief or sporadic, intermittent use.
Pinaverium bromide was first registered in 1975 by Solvay Pharmaceuticals (now a division of Abbott Laboratories), and marketed globally using the brand names Dicetel and Eldicet. Generic pinaverium is available in South Korea under a trade name of Disten and in Argentina as Nulite.
Indications
It is indicated for the treatment and relief of symptoms associated with irritable bowel syndrome (IBS) including abdominal pain, bowel disturbances and intestinal discomfort; and treatment of symptoms related to functional disorders of biliary tract.
References
Calcium channel blockers
Quaternary ammonium compounds
Morpholines
Drugs developed by AbbVie
Phenol ethers
Ethers
Organobromides
Methoxy compounds | Pinaverium bromide | Chemistry | 230 |
61,175,228 | https://en.wikipedia.org/wiki/Scott%E2%80%93Curry%20theorem | In mathematical logic, the Scott–Curry theorem is a result in lambda calculus stating that if two non-empty sets of lambda terms A and B are closed under beta-convertibility then they are recursively inseparable.
Explanation
A set A of lambda terms is closed under beta-convertibility if for any lambda terms X and Y, if $X \in A$ and X is β-equivalent to Y then $Y \in A$. Two sets A and B of natural numbers are recursively separable if there exists a computable function $f : \mathbb{N} \to \{0, 1\}$ such that $f(x) = 0$ if $x \in A$ and $f(x) = 1$ if $x \in B$. Two sets of lambda terms are recursively separable if their corresponding sets under a Gödel numbering are recursively separable, and recursively inseparable otherwise.
The Scott–Curry theorem applies equally to sets of terms in combinatory logic with weak equality. It has parallels to Rice's theorem in computability theory, which states that all non-trivial semantic properties of programs are undecidable.
The theorem has the immediate consequence that it is an undecidable problem to determine if two lambda terms are β-equivalent.
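The theorem explains why any mechanical check of β-equivalence can at best be a semi-decision procedure. The following sketch is an illustrative aid, not part of the theorem's literature; all names (beta_equal, the step bound, etc.) are assumptions of this example. It normalizes two terms under normal-order reduction with a step limit and therefore sometimes has to answer "unknown":

```python
from dataclasses import dataclass
from typing import Union, Optional

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: "Term"

@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def free_vars(t: Term) -> set:
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fun) | free_vars(t.arg)

_counter = 0
def fresh(name: str) -> str:
    global _counter
    _counter += 1
    return f"{name}_{_counter}"

def subst(t: Term, x: str, s: Term) -> Term:
    # Capture-avoiding substitution t[x := s].
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fun, x, s), subst(t.arg, x, s))
    if t.param == x:
        return t
    if t.param in free_vars(s):
        p = fresh(t.param)          # rename the binder to avoid capture
        return Lam(p, subst(subst(t.body, t.param, Var(p)), x, s))
    return Lam(t.param, subst(t.body, x, s))

def step(t: Term) -> Optional[Term]:
    # One leftmost-outermost (normal-order) beta step; None if t is normal.
    if isinstance(t, App):
        if isinstance(t.fun, Lam):
            return subst(t.fun.body, t.fun.param, t.arg)
        f = step(t.fun)
        if f is not None:
            return App(f, t.arg)
        a = step(t.arg)
        return None if a is None else App(t.fun, a)
    if isinstance(t, Lam):
        b = step(t.body)
        return None if b is None else Lam(t.param, b)
    return None

def alpha_equal(a: Term, b: Term, env=()) -> bool:
    # Structural equality up to renaming of bound variables.
    if isinstance(a, Var) and isinstance(b, Var):
        for (x, y) in env:                     # innermost binders first
            if a.name == x or b.name == y:
                return a.name == x and b.name == y
        return a.name == b.name
    if isinstance(a, Lam) and isinstance(b, Lam):
        return alpha_equal(a.body, b.body, ((a.param, b.param),) + env)
    if isinstance(a, App) and isinstance(b, App):
        return alpha_equal(a.fun, b.fun, env) and alpha_equal(a.arg, b.arg, env)
    return False

def beta_equal(a: Term, b: Term, max_steps: int = 1000) -> Optional[bool]:
    # Semi-decision: True/False when both terms normalize within the bound,
    # None ("unknown") otherwise -- by Scott-Curry no total decider exists.
    def normalize(t):
        for _ in range(max_steps):
            n = step(t)
            if n is None:
                return t
            t = n
        return None
    na, nb = normalize(a), normalize(b)
    if na is None or nb is None:
        return None
    return alpha_equal(na, nb)

# (λx.x) y is beta-equal to y; Omega never normalizes, so we get None.
print(beta_equal(App(Lam("x", Var("x")), Var("y")), Var("y")))   # True
omega = App(Lam("x", App(Var("x"), Var("x"))),
            Lam("x", App(Var("x"), Var("x"))))
print(beta_equal(omega, Var("y")))                               # None
```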
Proof
The proof is adapted from Barendregt in The Lambda Calculus.
Let A and B be closed under beta-convertibility and let a and b be lambda term representations of elements from A and B respectively. Suppose for a contradiction that f is a lambda term representing a computable function such that $f\,x = 0$ if $x \in A$ and $f\,x = 1$ if $x \in B$ (where equality is β-equality). Then define $G \equiv \lambda x.\,\mathrm{if}\ (\mathrm{zero?}\,(f\,x))\ b\ a$. Here, $\mathrm{zero?}$ is true if its argument is zero and false otherwise, and $\mathrm{if}$ is the identity so that $\mathrm{if}\ p\ x\ y$ is equal to x if p is true and y if p is false.
Then $G\,\ulcorner x \urcorner = b$ if $x \in A$ and similarly, $G\,\ulcorner x \urcorner = a$ if $x \in B$. By the Second Recursion Theorem, there is a term X which is equal to G applied to the Church numeral of its Gödel numbering, $X = G\,\ulcorner X \urcorner$. Then $X \in A$ implies that $X = G\,\ulcorner X \urcorner = b$, so in fact $X \in B$. The reverse assumption $X \in B$ gives $X = a$, so $X \in A$. Either way we arrive at a contradiction, and so f cannot be a function which separates A and B. Hence A and B are recursively inseparable.
History
Dana Scott first proved the theorem in 1963. The theorem, in a slightly less general form, was independently proven by Haskell Curry. It was published in Curry's 1969 paper "The undecidability of λK-conversion".
References
Lambda calculus
Undecidable problems | Scott–Curry theorem | Mathematics | 473 |
76,772,557 | https://en.wikipedia.org/wiki/IC%201166 | IC 1166 is a pair of galaxies in the constellation Corona Borealis, comprising IC 1166 NED01 and IC 1166 NED02. They are located 977 million light-years from the Solar System and were discovered on July 28, 1892, by Stéphane Javelle.
Galaxies
IC 1166 NED01
IC 1166 NED01, or PGC 56771, is a type E elliptical galaxy. Located above IC 1166 NED02, it has a diameter of approximately 110,000 light-years. PGC 56771 has an active nucleus and is classified as a Seyfert type 1 galaxy. It has a quasar-like appearance, but its host galaxy is clearly seen, and it presents two sets of emission lines superimposed on each other. PGC 56771 is classified as a Markarian galaxy (designated Mrk 867) because its nucleus emits excessive amounts of ultraviolet radiation compared to other galaxies. It has a surface brightness of 23.2 magnitude and is located at right ascension 16:02:08.92 and declination +26:19:45.60.
IC 1166 NED02
IC 1166 NED02, or PGC 1771884, is a type SBbc spiral galaxy. Located below IC 1166 NED01, it has an approximate diameter of 160,000 light-years, making it slightly larger than the other galaxy, and it does not have an active galactic nucleus. PGC 1771884 has a surface brightness of 23.4 magnitude, a right ascension of 16:02:08.83, and a declination of +26:19:31.20.
References
1166
Corona Borealis
Spiral galaxies
Elliptical galaxies
Astronomical objects discovered in 1892
Seyfert galaxies
0867
056771
2MASS objects
SDSS objects | IC 1166 | Astronomy | 379 |
76,269,454 | https://en.wikipedia.org/wiki/Decauville%20factory%20in%20Aulnay-sous-Bois | The Decauville factory in Aulnay-sous-Bois (previously known as Société Lilloise, or colloquially La Lilloise) produced prefabricated narrow gauge railway track and rolling stock from 1914 to the 1950s in Aulnay-sous-Bois, France.
History
The factory belonged to Etablissements Decauville ainé, a French manufacturer focussing on the production and sale of narrow gauge railway material. It was located on a piece of land that Decauville had acquired in 1914 in the Seine-Saint-Denis department of the Île-de-France region in the north-eastern suburbs of Paris. It focussed on tippers, i.e. V-skip railway wagons and road-going dump trucks.
The factory was renamed Société Industrielle d'Aulnay in 1946, enabling the company to win a major contract. At the Aulnay factory the last Decauville steam locomotives were built, for the lignite mines of Yugoslavia.
References
Decauville
Dump trucks
Defunct rolling stock manufacturers of France
Aulnay-sous-Bois | Decauville factory in Aulnay-sous-Bois | Engineering | 221 |
1,614,549 | https://en.wikipedia.org/wiki/Time%20consistency%20%28finance%29 | Time consistency in the context of finance is the property of not having mutually contradictory evaluations of risk at different points in time. This property implies that if investment A is considered riskier than B at some future time, then A will also be considered riskier than B at every prior time.
Time consistency and financial risk
Time consistency is a property in financial risk related to dynamic risk measures. The purpose of the time consistency property is to characterize the risk measures which satisfy the condition that if portfolio (A) is riskier than portfolio (B) at some time in the future, then it is guaranteed to be riskier at any time prior to that point. This is an important property, since if it were to fail there would be an event with positive probability in which B is riskier than A at time $t$, although it is certain that A is riskier than B at time $t+1$. As the name suggests, a time inconsistent risk measure can lead to inconsistent behavior in financial risk management.
Mathematical definition
A dynamic risk measure $(\rho_t)_{t=0}^{T}$ on $L^{0}(\mathcal{F}_T)$ is time consistent if $\rho_{t+1}(X) \leq \rho_{t+1}(Y)$ for all $X, Y \in L^{0}(\mathcal{F}_T)$ implies $\rho_t(X) \leq \rho_t(Y)$.
Equivalent definitions
Equality
For all $t$ and $X, Y \in L^{0}(\mathcal{F}_T)$: $\rho_{t+1}(X) = \rho_{t+1}(Y)$ implies $\rho_t(X) = \rho_t(Y)$.
Recursive
For all $t$ and $X \in L^{0}(\mathcal{F}_T)$: $\rho_t(X) = \rho_t(-\rho_{t+1}(X))$.
Acceptance Set
For all $t$: $A_t = A_{t,t+1} + A_{t+1}$, where $A_t$ is the time $t$ acceptance set and $A_{t,t+1} = A_t \cap L^{p}(\mathcal{F}_{t+1})$.
Cocycle condition (for convex risk measures)
For all $t$: $\alpha_t(Q) = \alpha_{t,t+1}(Q) + \mathbb{E}^{Q}[\alpha_{t+1}(Q) \mid \mathcal{F}_t]$, where $\alpha_t(Q) = \operatorname*{ess\,sup}_{X \in A_t} \mathbb{E}^{Q}[-X \mid \mathcal{F}_t]$ is the minimal penalty function (where $A_t$ is an acceptance set and $\operatorname*{ess\,sup}$ denotes the essential supremum) at time $t$ and $\alpha_{t,t+1}(Q) = \operatorname*{ess\,sup}_{X \in A_{t,t+1}} \mathbb{E}^{Q}[-X \mid \mathcal{F}_t]$.
Construction
Due to the recursive property it is simple to construct a time consistent risk measure. This is done by composing one-period measures over time. This would mean that: $\rho^{\mathrm{com}}_{T-1} := \tilde{\rho}_{T-1}$ and, backwards in time, $\rho^{\mathrm{com}}_{t}(X) := \tilde{\rho}_{t}(-\rho^{\mathrm{com}}_{t+1}(X))$ for a given sequence of one-period risk measures $(\tilde{\rho}_t)$.
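As a toy illustration of this composition (the binomial tree, payoffs, probabilities, and the choice of one-period entropic measures are assumptions of this sketch, not prescribed by the theory), the following composes one-period entropic risk measures backwards through a recombining binomial tree; because the risk-aversion parameter is constant, the composed measure coincides with the one-shot entropic risk, exhibiting the recursive property:

```python
import math
from math import comb

THETA = 1.0   # constant risk aversion (assumed); constancy gives consistency
P_UP = 0.5    # one-period up-move probability (assumed)

def one_period_entropic(x_up: float, x_down: float) -> float:
    # Conditional one-period entropic risk: (1/theta) * log E[exp(-theta X)].
    return math.log(P_UP * math.exp(-THETA * x_up)
                    + (1 - P_UP) * math.exp(-THETA * x_down)) / THETA

def composed_risk(terminal_payoffs):
    # Backward recursion rho_t = rho~_t(-rho_{t+1}), node by node, on a
    # recombining binomial tree; index i counts down-moves so far.
    risk = [-x for x in terminal_payoffs]          # rho_T(X) = -X
    while len(risk) > 1:
        risk = [one_period_entropic(-risk[i], -risk[i + 1])
                for i in range(len(risk) - 1)]
    return risk[0]

T = 3
payoffs = [10.0, 4.0, -2.0, -8.0]                  # terminal values (assumed)

rho_composed = composed_risk(payoffs)

# Sanity check of time consistency: with constant THETA the composition
# coincides with the one-shot entropic risk over the terminal distribution.
probs = [comb(T, i) * P_UP ** (T - i) * (1 - P_UP) ** i for i in range(T + 1)]
rho_direct = math.log(sum(p * math.exp(-THETA * x)
                          for p, x in zip(probs, payoffs))) / THETA
print(rho_composed, rho_direct)                    # equal up to rounding
```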
Examples
Value at risk and average value at risk
Neither the dynamic value at risk nor the dynamic average value at risk is a time consistent risk measure.
Time consistent alternative
The time consistent alternative to the dynamic average value at risk with parameter $\alpha$ at time t is defined by $\rho_t(X) = \operatorname*{ess\,sup}_{Q \in \mathcal{Q}} \mathbb{E}^{Q}[-X \mid \mathcal{F}_t]$, where $\mathcal{Q}$ is a set of equivalent probability measures obtained by bounding the one-step conditional density ratios in terms of $\alpha$.
Dynamic superhedging price
The dynamic superhedging price is a time consistent risk measure.
Dynamic entropic risk
The dynamic entropic risk measure is a time consistent risk measure if the risk aversion parameter is constant.
Continuous time
In continuous time, a time consistent coherent risk measure can be given by:
$\rho^{g}(X) := \mathbb{E}^{g}[-X]$ for a sublinear choice of function $g$, where $\mathbb{E}^{g}$ denotes a g-expectation. If the function $g$ is convex, then the corresponding risk measure is convex.
References
Financial risk modeling
Mathematical finance
Financial economics | Time consistency (finance) | Mathematics | 464 |
5,181,068 | https://en.wikipedia.org/wiki/Pentolite | Pentolite is a composite high explosive used for military and civilian purposes, e.g., warheads and booster charges. It is made of pentaerythritol tetranitrate (PETN) phlegmatized with trinitrotoluene (TNT) by melt casting.
The most common military variety of pentolite (designated "Pentolite 50/50") is a mixture of 50% PETN and 50% TNT. (Unlike in other compound explosives, the number before the slash is the mass percentage of TNT and the number after it is the mass percentage of PETN.) This 50:50 mixture has a density of 1.65 g/cm3 and a detonation velocity of 7400 m/s.
Pentolite is a common explosive for cast boosters used in blasting work (as in mining). Civilian pentolite may contain a lower percentage of PETN, e.g. around 2% ("Pentolite 98/2"), 5% ("Pentolite 95/5") or 10% ("Pentolite 90/10"). These civilian pentolites have a detonation velocity of about 7,800 metres per second.
References
External links
Additional information in re Pentolite boosters
Explosives
Trinitrotoluene | Pentolite | Chemistry | 275 |
75,976,393 | https://en.wikipedia.org/wiki/Danny%20Ionescu | Danny Ionescu () is an aquatic microbial ecologist leading a research group in the department of Environmental Microbiomics at the Technische Universität Berlin. His primary research focus centers around the biology of giant bacteria and microbial life in the Dead Sea.
Education and career
Between 2000 and 2003, Ionescu earned a BSc degree in Marine Sciences and Marine Environmental Sciences from the School of Marine Sciences at the Ruppin Academic Center in Israel.
His academic journey continued with a master's degree between 2003 and 2005, conducted under the guidance of Prof. Aharon Oren at the Hebrew University of Jerusalem, in collaboration with Prof. Karlheinz Altendorf and Dr. Andre Lipski at the University of Osnabrueck in Germany. His Master's thesis, titled "Characterization of an endoevaporitic microbial community in the Eilat salterns by fatty acid analysis and stable isotope labeling", reflected his research focus during this period.
Ph.D. and postdoctoral research
In 2005, Ionescu began a Ph.D. at the Hebrew University of Jerusalem, as part of the peace project "Bridging the Rift", in collaboration with Prof. Muna Hindiyeh and Prof. Mohammad Wedyan. His doctoral thesis was titled "Cyanobacterial Biogeography and Nitrogen Fixation: Lessons from environmental and model organisms".
During his first postdoctoral research, starting in 2009, at the Max Planck Institute for Marine Microbiology in Bremen, Ionescu led the first scientific diving exploration of the Dead Sea.
The expedition revealed abundant microbial life forms in and around underwater freshwater springs. The underwater scenery of the Dead Sea as documented by Ionescu and Dr. Christian Lott of the Hydra Institute was featured in several documentary movies.
His work at the Max Planck Institute included studies on the interaction between minerals and microbial cells, conducted at the Äspö Hard Rock Laboratory in Sweden and on the island of Kiritimati, as part of the collaborative research group FOR 571.
Between 2014 and 2024, Ionescu conducted his research at the Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB), where his research focused on the genomics and ecology of giant bacteria, specifically the genus Achromatium. Notably, in 2017, Ionescu discovered that the multiple chromosomes harbored by these large bacteria are not identical, highlighting their adaptability to different environments.
In 2021, Ionescu received an independent research grant from the German Research Foundation (DFG) to further explore these topics.
Academic contributions
Ionescu has made significant contributions to the field of aquatic microbial ecology. A comprehensive list of his publications can be found on his ORCID and Google Scholar pages.
Additionally, Ionescu actively participates in the scientific community. He serves on the Managing Board of the open access research platform PCI Genomics and contributes as a recommender for PCI Microbiology. Moreover, he holds positions as an Associate Editor for Frontiers in Microbiology and editorial board member of Scientific Reports.
Personal life
Born in Bucharest, Romania, in 1976, Ionescu and his family immigrated to Israel in 1984. He is married to Dr. Mina Bizic, also a scientist, and the couple has two children. Ionescu's brother, Ariel Ionescu, is a Neurobiologist, and his brother-in-law, David Bizic, is an opera singer.
Ionescu is a certified SSI Gold-level diving instructor.
References
External links
Danny Ionescu's profile on the website of Leibniz Institute of Freshwater Ecology and Inland Fisheries
Danny Ionescu's's publications at Google Scholar
Danny Ionescu's publications at PubMed
Danny Ionescu's publications at ORCID
Living people
Romanian Jews in Israel
Environmental scientists
Microbiologists
Freshwater ecologists
Limnologists
Israeli marine biologists
German marine biologists
Marine biologists
Israeli microbiologists
1976 births | Danny Ionescu | Environmental_science | 795 |