id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
1,610,890 | https://en.wikipedia.org/wiki/Uniform%20data%20access | Uniform data access is a computational concept describing an evenness of connectivity and controllability across numerous target data sources.
Necessary to fields such as Enterprise Information Integration (EII) and Electronic Data Interchange (EDI), it most often concerns the analysis of disparate data types and data sources. The data being analyzed is typically heterogeneous, varying widely in size, type, and original representation, yet it must be rendered into a uniform information representation and generally must appear homogeneous to the analysis tools.
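In practice, uniform data access is often achieved by wrapping each heterogeneous source in a shared interface, so that analysis tools consume a single homogeneous stream of records. The Python sketch below is a minimal illustration of that pattern; the class and method names are hypothetical, not taken from any particular EII product.

```python
from abc import ABC, abstractmethod
import csv

class DataSource(ABC):
    """Uniform interface: every source yields records as plain dicts."""
    @abstractmethod
    def records(self):
        ...

class CsvSource(DataSource):
    """Adapter for a flat-file source."""
    def __init__(self, path):
        self.path = path
    def records(self):
        with open(self.path, newline="") as f:
            yield from csv.DictReader(f)       # each row becomes a dict

class SqlSource(DataSource):
    """Adapter for a DB-API connection (e.g. sqlite3)."""
    def __init__(self, conn, query):
        self.conn, self.query = conn, query
    def records(self):
        cur = self.conn.execute(self.query)
        cols = [d[0] for d in cur.description]
        for row in cur:
            yield dict(zip(cols, row))         # each row becomes a dict

def analyze(sources):
    """Analysis code sees one homogeneous record stream, whatever the origin."""
    for src in sources:
        yield from src.records()
```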
Data management | Uniform data access | Technology | 111 |
25,933,913 | https://en.wikipedia.org/wiki/Montreal%20East%20Refinery%20%28Gulf%20Oil%20Canada%29 | The Montreal East Refinery (Gulf Canada) is a small petrochemical refinery located inside the city of Montréal-Est and inside the Coastal Petrochemical fields. The operator of the refining unit is Coastal Petrochemical (Petrochimie Coastal du Canada).
History
The refinery was constructed by the British-American Oil Company in the 1930s to process crude oil imported from Texas. It was shut down by B/A's successor company, Gulf Canada, in 1983. Ultramar Canada purchased the 74,000 b/d capacity refinery from Gulf Canada in 1986 and closed it soon after, with the loss of 450 jobs. In June 1986, Montreal-based engineering firm Lavalin Inc. announced it was purchasing the refinery and would re-open it. In 1986 the refinery and its 210,000 m² site were sold to Kemtec Petrochemicals, which converted the plant to produce paraxylene. The plant came on line in 1989 and operated until 1991, when Kemtec filed for bankruptcy. The site was determined to be heavily contaminated and, facing potentially large clean-up costs, all creditors of the company except the Government of Quebec declined to take over the property.
Recent operations
In 1994 the refinery was purchased by Coastal Canada Petroleum, Inc., (CCP) a subsidiary of Houston-based Coastal Corporation, for US$1.2 million. CCP acquired the processing equipment, entered into a long-term lease for the site, and agreed to make payments to an environmental trust fund for remediation of the contaminated site.
See also
Montreal Oil Refining Center
Montreal East Refinery (Shell Canada)
Montreal Refinery
References
Oil refineries in Canada
Montréal-Est
Industrial buildings and structures in Montreal
1963 establishments in Quebec | Montreal East Refinery (Gulf Oil Canada) | Chemistry | 348 |
31,595,206 | https://en.wikipedia.org/wiki/Hexaferrum | Hexaferrum and epsilon iron (ε-Fe) are synonyms for the hexagonal close-packed (HCP) phase of iron that is stable only at extremely high pressure.
A 1964 study at the University of Rochester mixed 99.8% pure α-iron powder with sodium chloride, and pressed a 0.5-mm diameter pellet between the flat faces of two diamond anvils. The deformation of the NaCl lattice, as measured by x-ray diffraction (XRD), served as a pressure indicator. At a pressure of 13 GPa and room temperature, the body-centered cubic (BCC) ferrite powder transformed to the HCP phase (Figure 1). When the pressure was lowered, ε-Fe transformed back to ferrite (α-Fe) rapidly. A specific volume change of −0.20 ± 0.03 cm³/mol was measured. Hexaferrum, much like austenite, is denser than ferrite at the phase boundary. A shock wave experiment confirmed the diamond anvil results. The designation epsilon was chosen for the new phase to correspond with the HCP form of cobalt.
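The reported volume change implies how much denser ε-iron is than ferrite at the transition. As a rough, purely illustrative check (it uses the ambient-pressure molar volume of α-iron, whereas at 13 GPa both phases are in fact compressed):

```python
M = 55.845                # molar mass of iron, g/mol
rho_alpha = 7.87          # density of alpha-iron at ambient pressure, g/cm^3
V_alpha = M / rho_alpha   # molar volume of alpha-iron: ~7.10 cm^3/mol
V_eps = V_alpha - 0.20    # apply the measured change of -0.20 +/- 0.03 cm^3/mol
rho_eps = M / V_eps       # ~8.10 g/cm^3: epsilon is denser than alpha
print(f"rho(eps-Fe) ~ {rho_eps:.2f} g/cm^3")
```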
The triple point between the alpha, gamma and epsilon phases in the unary phase diagram of iron has been calculated as T = 770 K and P = 11 GPa, although it was determined at a lower temperature of T = 750 K (477 °C) in Figure 1. The Pearson symbol for hexaferrum is hP2 and its space group is P63/mmc.
Another study concerning the ferrite-hexaferrum transformation metallographically determined that it is a martensitic rather than equilibrium transformation.
While hexaferrum is purely academic in metallurgical engineering, it may have significance in geology. The pressure and temperature of Earth's iron core are on the order of 150–350 GPa and 3000 ± 1000 °C. An extrapolation of the austenite-hexaferrum phase boundary in Figure 1 suggests hexaferrum could be stable or metastable in Earth's core. For this reason, many experimental studies have investigated the properties of HCP iron under extreme pressures and temperatures. Figure 2 shows the compressional behaviour of ε-iron at room temperature up to pressures comparable to those encountered about halfway through Earth's outer core; there are no points at pressures below approximately 6 GPa, because this allotrope is not thermodynamically stable at low pressures and slowly transforms into α-iron.
References
Metallurgy
Iron
Steel | Hexaferrum | Chemistry,Materials_science,Engineering | 530 |
38,031 | https://en.wikipedia.org/wiki/Hazardous%20waste | Hazardous waste is waste that must be handled properly to avoid damaging human health or the environment. Waste can be hazardous because it is toxic, reacts violently with other chemicals, or is corrosive, among other traits. As of 2022, humanity produces 300–500 million metric tons of hazardous waste annually. Some common examples are electronics, batteries, and paints. An important aspect of managing hazardous waste is safe disposal. Hazardous waste can be stored in hazardous waste landfills, burned, or recycled into something new. Managing hazardous waste is important to achieving worldwide sustainability. Hazardous waste is regulated on a national scale by national governments as well as on an international scale by the United Nations (UN) and international treaties.
Types
Universal wastes
Universal wastes are a special category of hazardous wastes that (in the U.S.) generally pose a lower threat relative to other hazardous wastes, are ubiquitous and produced in very large quantities by a large number of generators. Some of the most common "universal wastes" are: fluorescent light bulbs, some specialty batteries (e.g. lithium or lead containing batteries), cathode-ray tubes, and mercury-containing devices.
Universal wastes are subject to somewhat less stringent regulatory requirements. Small quantity generators of universal wastes may be classified as "conditionally exempt small quantity generators" (CESQGs), which releases them from some of the regulatory requirements for the handling and storage of hazardous wastes. Universal wastes must still be disposed of properly.
Household Hazardous Waste
Household Hazardous Waste (HHW), also referred to as domestic hazardous waste or home generated special materials, is a waste that is generated from residential households. HHW only applies to waste coming from the use of materials that are labeled for and sold for "home use". Waste generated by a company or at an industrial setting is not HHW.
The following list includes categories often applied to HHW. It is important to note that many of these categories overlap and that many household wastes can fall into multiple categories:
Paints and solvents
Automotive wastes (used motor oil, antifreeze, etc.)
Pesticides (insecticides, herbicides, fungicides, etc.)
Mercury-containing wastes (thermometers, switches, fluorescent lighting, etc.)
Electronics (computers, televisions, mobile phones)
Aerosols / Propane cylinders
Caustics / Cleaning agents
Refrigerant-containing appliances
Some specialty batteries (e.g. lithium, nickel cadmium, or button cell batteries)
Ammunition
Asbestos
Car batteries
Radioactive wastes (some home smoke detectors are classified as radioactive waste because they contain very small amounts of the radioactive isotope americium-241).
Disposal
Historically, some hazardous wastes were disposed of in regular landfills. Hazardous wastes must often be stabilized and solidified before they can enter a landfill. Most flammable materials can be recycled into industrial fuel. Some materials with hazardous constituents, such as lead-acid batteries, can be recycled. Many landfills require countermeasures against groundwater contamination; for example, a barrier has to be installed along the foundation of the landfill to contain the hazardous substances that may remain in the disposed waste.
Recycling
Some hazardous wastes can be recycled into new products; examples include lead-acid batteries and electronic circuit boards. When heavy metals in incineration ashes (fly ash and bottom ash) go through proper treatment, they can bind to other pollutants and be converted into easier-to-dispose solids, or they can be used as pavement filling. Such treatments reduce the threat posed by harmful constituents while recycling the safe product.
Incineration
Incinerators burn hazardous waste at high temperatures (1,600–2,500 °F; 870–1,400 °C), greatly reducing its amount by decomposing it into ash and gases. Incineration works with many types of hazardous waste, including contaminated soil, sludge, liquids, and gases. An incinerator can be built directly at a hazardous waste site, or, more commonly, waste can be transported from a site to a permanent incineration facility.
The ash and gases left over from incineration can also be hazardous. Metals are not destroyed; they can either remain in the furnace or convert to gas and join the gas emissions. The ash needs to be stored in a hazardous waste landfill, although it takes less space than the original waste. Incineration releases gases such as carbon dioxide, nitrogen oxides, ammonia, and volatile organic compounds. Reactions in the furnace can also form hydrochloric acid gas and sulfur dioxide. To avoid releasing hazardous gases and the solid waste suspended in them, modern incinerators are designed with systems to capture these emissions.
Landfill
Hazardous waste may be sequestered in a hazardous waste landfill or permanent disposal facility. "In terms of hazardous waste, a landfill is defined as a disposal facility or part of a facility where hazardous waste is placed in or on land and which is not a pile, a land treatment facility, a surface impoundment, an underground injection well, a salt dome formation, a salt bed formation, an underground mine, a cave, or a corrective action management unit (40 CFR 260.10)."
Pyrolysis
Some hazardous waste types may be eliminated by pyrolysis at high temperature, not necessarily through an electrical arc, but starved of oxygen to avoid combustion. When an electrical arc is used to generate the required ultra-high heat (in excess of 3,000 °C), however, all waste materials introduced into the process melt into a molten slag, and the technology is termed plasma rather than pyrolysis. Plasma technology produces inert materials that, when cooled, solidify into rock-like material. These treatment methods are very expensive but may be preferable to high-temperature incineration in some circumstances, such as the destruction of concentrated organic waste types, including PCBs, pesticides, and other persistent organic pollutants.
In society
Management and health effects
Hazardous waste management and disposal has consequences if not done properly. If disposed of improperly, hazardous gaseous substances can be released into the air, resulting in higher morbidity and mortality. These gaseous substances can include hydrogen chloride, carbon monoxide, nitrogen oxides, and sulfur dioxide, and some emissions may also carry heavy metals. Given the prospect of gaseous material being released into the atmosphere, several U.S. statutes (RCRA, TSCA, HSWA, CERCLA) established an identification scheme in which hazardous materials and wastes are categorized so that potential leaks can be quickly identified and mitigated: F-list materials are wastes from non-specific industrial practices; K-list materials are wastes generated by specific industrial processes, such as the pesticide, petroleum, and explosives industries; and the P and U lists cover discarded commercial chemical products, including shelf-stable pesticides. Mismanagement of hazardous wastes not only causes adverse direct health consequences through air pollution; mismanaged waste can also contaminate groundwater and soil. In an Austrian study, people who live near industrial sites were found to be "more often unemployed, have lower education levels, and are twice as likely to be immigrants." This creates disproportionately larger problems for those who depend heavily on the land for harvests and on streams for drinking water, including Native American populations. Though all lower-class and/or social minorities are at a higher risk of toxic exposure, Native Americans are at a multiplied risk for the reasons stated above (Brook, 1998). Improper disposal of hazardous waste has resulted in extreme health complications within certain tribes: members of the Mohawk Nation at Akwesasne have suffered elevated levels of PCBs (polychlorinated biphenyls) in their bloodstreams, leading to higher rates of cancer.
Global goals
The UN has a mandate on hazardous substances and wastes, with recommendations to countries for dealing with hazardous waste. 199 countries have signed the Basel Convention, which entered into force in 1992 and seeks to stop the flow of hazardous waste from developed countries to developing countries with less stringent environmental regulations.
The international community has defined the responsible management of hazardous waste and chemicals as an important part of sustainable development by including it in Sustainable Development Goal 12. Target 12.4 of this goal is to "achieve the environmentally sound management of chemicals and all wastes throughout their life cycle". One of the indicators for this target is: "hazardous waste generated per capita; and proportion of hazardous waste treated, by type of treatment".
Regulatory history
In the United States
Resource Conservation and Recovery Act (RCRA)
Hazardous wastes are wastes with properties that make them dangerous or potentially harmful to human health or the environment. Hazardous wastes can be liquids, solids, contained gases, or sludges. They can be by-products of manufacturing processes or simply discarded commercial products, like cleaning fluids or pesticides. In regulatory terms, RCRA hazardous wastes are wastes that appear on one of the four hazardous waste lists (F-list, K-list, P-list, or U-list), or that exhibit at least one of four characteristics: ignitability, corrosivity, reactivity, or toxicity. In the US, hazardous wastes are regulated under the Resource Conservation and Recovery Act (RCRA), Subtitle C.
The EPA has determined that some specific wastes are hazardous by definition. These wastes are incorporated into lists published by the agency, organized into three categories: the F-list (non-specific source wastes), found in the regulations at 40 CFR 261.31; the K-list (source-specific wastes), found at 40 CFR 261.32; and the P-list and U-list (discarded commercial chemical products), found at 40 CFR 261.33.
RCRA's record keeping system helps to track the life cycle of hazardous waste and reduces the amount of hazardous waste illegally disposed.
Comprehensive Environmental Response, Compensation, and Liability Act
The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) was enacted in 1980. The primary contribution of CERCLA was to create a "Superfund" and provide for the clean-up and remediation of closed and abandoned hazardous waste sites. CERCLA addresses historic releases of hazardous materials, but does not specifically manage hazardous wastes.
In India
Environmental Act and Hazardous Waste Rules
In 1984, a deadly methyl isocyanate gas leak known as the Bhopal disaster raised environmental awareness in India. In response, the Indian government produced the Environmental Act in 1986, followed by the Hazardous Waste Rules in 1989. Under these rules, companies are permitted by the state to produce hazardous waste only if they are able to dispose of it safely. However, state governments did not make these rules effective. There was around a decade of delay between when hazardous waste landfills were requested and when they were built. During this time, companies disposed of hazardous waste in various "temporary" locations, such as along roads and in canal pits, with no immediate plan to move it to proper facilities.
Supreme Court action
The Supreme Court stepped in to prevent damage from hazardous waste in order to protect the right to life. A 1995 petition by the Research Foundation for Science, Technology, and Natural Resource Policy spurred the Supreme Court to create the High Powered Committee (HPC) on Hazardous Waste, since data from pre-existing government boards was not usable. This committee found studies linking pollution and improper waste treatment with elevated amounts of hexavalent chromium, lead, and other heavy metals. Industries and regulators were effectively ignoring these studies. In addition, the state was not acting in accordance with the Basel Convention, an international treaty on the transport of hazardous waste. The Supreme Court modified the Hazardous Waste Rules and established the Supreme Court Monitoring Committee to follow up on its decisions. With this committee, the Court has been able to force companies mishandling hazardous wastes to close.
Country examples
United States
In the United States, the treatment, storage, and disposal of hazardous waste are regulated under the Resource Conservation and Recovery Act (RCRA). Hazardous wastes are defined under RCRA in 40 CFR 261 and divided into two major categories: characteristic and listed.
The requirements of the RCRA apply to all the companies that generate hazardous waste and those that store or dispose of hazardous waste in the United States. Many types of businesses generate hazardous waste. Dry cleaners, automobile repair shops, hospitals, exterminators, and photo processing centers may all generate hazardous waste. Some hazardous waste generators are larger companies such as chemical manufacturers, electroplating companies, and oil refineries.
A U.S. facility that treats, stores, or disposes of hazardous waste must obtain a permit under the RCRA. Generators and transporters of hazardous waste must meet specific requirements for handling, managing, and tracking waste. Through the RCRA, Congress directed the United States Environmental Protection Agency (EPA) to create regulations to manage hazardous waste. Under this mandate, the EPA has developed strict requirements for all aspects of hazardous waste management, including treating, storing, and disposing of hazardous waste. In addition to these federal requirements, states may develop more stringent requirements that are broader in scope than the federal regulations. Furthermore, RCRA allows states to develop regulatory programs that are at least as stringent as RCRA, and after review by EPA, the states may take over responsibility for implementing the requirements under RCRA. Most states take advantage of this authority, implementing their own hazardous waste programs that are at least as stringent and, in some cases, stricter than the federal program.
The U.S. government provides several tools for mapping hazardous wastes to particular locations. These tools also allow the user to view additional information.
TOXMAP was a Geographic Information System (GIS) service from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that used maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Program. The US Federal Government funded this resource. TOXMAP's chemical and environmental health information was taken from NLM's Toxicology Data Network (TOXNET), PubMed, and other authoritative sources.
The US Environmental Protection Agency (EPA) "Where You Live" allows users to select a region from a map to find information about Superfund sites in that region.
See also
Toxic waste
Bamako Convention
Brownfield Regulation and Development
Environmental hazard
Environmental remediation
Environmental racism
Gade v. National Solid Wastes Management Association
List of solid waste treatment technologies
List of Superfund sites in the United States
List of waste management companies
List of waste management topics
List of waste types
Mixed waste (radioactive/hazardous)
National Priorities List (in the US)
Pollution
Recycling
Retail hazardous waste
Toxicity characteristic leaching procedure
Triad (environmental science)
Vapor intrusion
References
External links
Agency for Toxic Substances and Disease Registry
The EPA's hazardous waste page
The U.S. EPA's Hazardous Waste Cleanup Information System
Waste Management: A Half Century of Progress, a report by the EPA Alumni Association
Environment and health
Occupational safety and health
Environmental history of Canada | Hazardous waste | Technology | 3,113 |
70,631,248 | https://en.wikipedia.org/wiki/Thermally%20induced%20shape-memory%20effect%20%28polymers%29 | The thermally induced unidirectional shape-memory effect is classified among the effects of the new so-called smart materials. Polymers with a thermally induced shape-memory effect are new materials whose applications are currently being studied in different fields of science (e.g., medicine), communications, and entertainment.
Several such systems have been reported and are in commercial use. The possibility of programming other polymers remains open, however: given the number of copolymers that can be designed, the possibilities are almost endless.
General information
Polymers with thermally induced shape-memory effect are those polymers that respond to external stimuli and because of this have the ability to change their shape. The thermally induced shape-memory effect results from a combination of proper processing and programming of the system.
This effect can be observed in polymers with very different chemical composition, which opens a great possibility of applications.
Description of the effect on polymers
In the first step the polymer is processed by common techniques, such as injection molding, extrusion, or thermoforming, at a temperature (THigh) at which the polymer melts, yielding a final shape called the "permanent" shape.
The next step, called programming the system, involves heating the sample to a transition temperature (TTrans). At that temperature the polymer is deformed into a shape called the "temporary" shape. Immediately afterwards the temperature of the sample is lowered.
The final step of the effect involves the recovery of the permanent shape. The sample is heated to the transition temperature (TTrans) and within a short time the recovery of the permanent shape is observed.
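Taken together, the processing-programming-recovery sequence behaves like a small state machine. The Python sketch below is purely illustrative: the function, its arguments, and the threshold logic are assumptions made for exposition, not a materials model.

```python
def smp_shape(shape, T, T_trans, load_applied):
    """Illustrative shape logic for one thermo-mechanical cycle."""
    if T >= T_trans and load_applied:
        return "temporary"    # programming: deform above T_trans, then cool
    if T >= T_trans:
        return "permanent"    # recovery: reheating without a load
    return shape              # below T_trans the current shape is frozen in

shape = "permanent"                                             # after processing at T_high
shape = smp_shape(shape, T=80, T_trans=60, load_applied=True)   # -> "temporary"
shape = smp_shape(shape, T=20, T_trans=60, load_applied=False)  # stays "temporary"
shape = smp_shape(shape, T=80, T_trans=60, load_applied=False)  # -> "permanent"
```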
This effect is not a natural property of the polymer, but results from proper programming of the system with the appropriate chemistry.
For a polymer to exhibit this effect, it must have two components at the molecular level: bonds (chemical or physical) to determine the permanent shape and "trigger" segments with a TTrans to fix the temporary shape.
Characteristics of the effect on polymers
Metals exhibit a bidirectional shape-memory effect, maintaining one shape at each temperature. Polymers recover their shape only once.
Polymers can change their shape with elongations up to 200% while metals have a maximum of 8-10% elongation.
Recovery in metals and ceramics involves a change in crystal structure, while recovery in polymers is due to the action of entropic forces and anchor points.
Polymers can be designed according to the desired application, they can be: biodegradable, drug delivery systems (medicinal), antibacterial, etc.
The transition temperature is designed with "trigger" segments, which makes temperature adjustment easier than in ceramics, since they depend on equiatomic quantities.
Functioning
It should first be noted that the primary inelastic mechanism of these polymers is the mobility of the chains and the conformational rearrangement of their groups. The effect must then be distinguished for semi-crystalline and amorphous polymers. In both cases, anchor points must be created that act as "triggers" for the effect: in amorphous polymers these are the knots or "entanglements" of the chains, while in semi-crystalline polymers the crystals themselves form the anchor points.
When the shape of the material is modified above a minimal critical stress, the chains slide and a metastable structure is created in which the organization and order of the chains increase (lower entropy). When the deformation load is removed, the anchor points provide a storage mechanism for macroscopic stresses, in the form of small localized stresses and decreased entropy.
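The entropic character of this stored stress can be made explicit with the standard ideal-elastomer relation, a textbook result quoted here for illustration rather than taken from this article. The retractive force at constant temperature is

```latex
f = \left(\frac{\partial U}{\partial L}\right)_T - T\left(\frac{\partial S}{\partial L}\right)_T \approx -\,T\left(\frac{\partial S}{\partial L}\right)_T
```

Since stretching lowers the entropy, the entropy derivative is negative and the force is restoring; heating above TTrans unfreezes chain motion and lets this entropic force drive shape recovery.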
In the glassy state the rotational motions of the molecules are frozen and impeded. As the temperature increases and the glass transition is passed, these motions thaw, rotations and relaxations occur, and the molecules take the form that is entropically most favorable to them (the one with the lowest free energy). These movements are called the relaxation process, and the formation of "random coils" that eliminates stresses is called shape-memory loss.
A polymer will exhibit the shape-memory effect if it can be stabilized in a given state of deformation, preventing the molecules from slipping and regaining their higher-entropy (lower free energy) form. This can be achieved almost entirely by creating crosslinks or by vulcanization; these new bonds act as anchors and prevent the relaxation of the chains. The anchor points can be physical or chemical.
Comparison with metals and ceramics
The unidirectional shape-memory effect was first observed by Chang and Read in 1951 in a gold-cadmium alloy, and in 1963 Buehler described this effect for nitinol, an equiatomic nickel-titanium alloy.
This effect in metals and ceramics is based on a change in the crystal structure, called the martensitic phase transition. The disadvantage of these materials is that they are equiatomic alloys: deviations of 1% in the composition modify the transition temperature by approximately 100 K.
Some metals and ceramics present the effect bidirectionally: the material has one shape at a given temperature, which can be changed by changing the temperature, and if the first temperature is restored, the first shape is also recovered. This is achieved by training the material into each shape at each temperature.
Metals and ceramics with a thermally induced bidirectional shape-memory effect have found great application in medical implants, sensors, transducers, etc. However, many present a risk due to their high toxicity.
Phases in the system
To obtain the effect, it is necessary to achieve a phase separation. One of the phases works as the trigger for the temporary shape, using a transition temperature that can be Tm or Tg and is called TTrans in this context. A second phase has the higher transition temperature; above this temperature the polymer melts and is processed by conventional methods.
The ratio of the components forming the phase separation largely regulates the transition temperature TTrans; this is much easier to control than in metallic alloys.
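For systems whose trigger is a glass transition, a common first estimate of how composition sets the transition temperature is the Fox equation, a standard approximation offered here for illustration (it is not cited in this article), with w1 and w2 the weight fractions of the two components:

```latex
\frac{1}{T_g} = \frac{w_1}{T_{g,1}} + \frac{w_2}{T_{g,2}}
```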
An example of this is the poly(ethylene oxide-ethylene terephthalate) or EOET copolymer. The poly(ethylene terephthalate) (PET) segment has relatively high Tg and Tm and is commonly referred to as the "hard" segment, whereas the poly(ethylene oxide) (PEO) segment has relatively low Tm and Tg and is referred to as the "soft" segment. In the final polymer these segments separate into two phases in the solid state. PET has a high degree of crystallinity, and the formation of these crystals provides the anchor points for the flow and rearrangement of the PEO chains as they are stretched at temperatures above the PEO Tm.
Experimentation
Achieving of the effect
A commercial, high purity (non-recycled) polymer sample with known molecular mass distribution can be obtained or synthesized according to standard procedures.
Common properties such as elastic modulus, tan δ, crystallinity, viscosity, density should be characterized.
Anchor points, physical or chemical (chain entanglement, crystallinity or vulcanization), must be decided.
If crosslinking with slight vulcanization is desired, standardized methods for each polymer must be taken into account. PCO (polycyclooctene), for example, is a polymer without a shape-memory effect because it does not present a clear "plateau", but the addition of a minimal amount of peroxide (~1%) gives PCO all the requirements to present this effect.
A permanent stress-free shape with known dimensions is prepared by conventional methods.
The system is programmed, i.e. it is heated up to TTrans and at that temperature the shape is modified by applying pressure or stress. Then the material is cooled and finally the pressure or stress is removed.
After heating the sample again to TTrans, the stresses are released and the permanent shape is recovered.
Some polymers fatigue early, so each system can be evaluated with a simple experiment that consists of programming the system 10 or 20 times in a row and measuring the recovery (in percent) and the recovery time.
Crystallizable polymers
Polymers that can crystallize are (with the exception of PP) practically guaranteed to exhibit this effect, mainly due to their ordering capacity, which is reflected in the crystallinity. The crystals have affinity for their constituent elements and form new bonds; these create the anchoring forces that give stability to the temporary shape.
Crystallization, vulcanization, and final properties
To analyze the behavior of the crystals in this type of polymer, the WAXS and DSC techniques are used; these techniques help determine what percentage of the polymer is crystalline and how the crystals are organized. This matters because crystallinity decreases as crosslinking increases: the chains lose the ability to arrange themselves, and order is essential for crystallinity.
A second problem present when crosslinking molecules is melting: an excess of crosslinking modifies the molecule in such a way that it stops melting (similar to a thermoset), and therefore the temporary shape cannot be obtained.
The control of curing, whether by electromagnetic waves or with peroxides, is very important, since curing increases TTrans and decreases crystallinity, both determining factors in the shape-memory effect.
In the case of biocompatible semicrystalline systems such as poly(ε-caprolactone) and poly(n-butyl acrylate) crosslinked by photopolymerization, it has been reported that the crystallization behavior is affected by the cooling rate, as in any other semicrystalline polymer, but the heat of crystallization remains independent of the cooling rate.
The influence of the crosslinking of the molecules, the cooling rate and the crystallization behavior are specific to each system and impossible to enumerate since the synthesis possibilities are almost infinite.
Crystallizable polymers such as oligo(ε-caprolactone) can have amorphous segments such as poly(n-butyl acrylate), and the molecular mass ratio of the two determines the behavior of the system in programming the temporary shape and recovering the permanent shape.
Factors that influence the effect
Molecular mass of the crosslinked polymer.
Molecular weight of the crystallizable polymer.
Degree of crosslinking.
Phase separation.
Moduli of the original polymers and proportion in the copolymer.
Moisture (in polymers susceptible to moisture degradation).
Cooling speed.
Amorphous polymers
If the polymeric system is amorphous, then the anchor points of the crystalline structure are not available and the only way to ensure the stability of the temporary shape is through chain entanglements (physical entanglements and not chemical crosslinking), in addition to the possibility of crosslinking.
Relaxation processes
In the glassy state, the movements of the long chain segments are frozen. These movements depend on an activation temperature that brings the polymer to a softened, elastic state; rotation about the carbon bonds and the movements of the chains then no longer face strong impediments, so the chains accommodate themselves and acquire the conformation that requires the least energy. The chains then "unravel", forming random coils, without order and therefore with higher entropy.
If a polymer sample is stretched for a short time in the elastic range, the sample will recover its original shape when the load is removed; but if the load remains for a sufficiently long period, the chains rearrange and the original shape is not recovered. The result is an irreversible deformation, also called a relaxation process (in this case: creep).
In order for a polymer to exhibit the thermally induced shape-memory effect, it is necessary to fix the chains with anchor points to avoid these relaxation processes that inelastically modify the system.
Glass transition
Amorphous polymers do not have a melting temperature (Tm) like semi-crystalline polymers; they have only a glass transition temperature (Tg). This has a decisive influence on the behavior of shape-memory polymer systems.
Even an initially crystalline copolymer system can, after crosslinker treatment, lose its crystallinity and become practically amorphous.
An amorphous polymer depends on the level of crosslinking or the degree of polymerization to exhibit this effect. Poly(norbornene), for example, is a linear, amorphous polymer with a 70 to 80% content of trans bonds in commercial products, a molecular mass of approximately 3×10⁶ g/mol, and a Tg of approximately 35 to 45 °C. Because it achieves an unusually high degree of polymerization, chain entanglements can be relied upon as anchor points to achieve the thermally induced shape-memory effect; this polymer therefore relies solely on physical anchor points. When heated to Tg, the material abruptly changes from a rigid state to a rubbery state (it softens). To achieve the effect, the shape must be changed rapidly to avoid rearrangement of the polymer chain segments, and the material must then be cooled very rapidly below Tg. Reheating the material to Tg will show the recovery of the original shape.
Influence of chemical structure
In designing copolymers for the thermally induced shape-memory effect, it is very important to keep in mind that a slight change in chemical structure (cis/trans ratios, tacticity, molecular mass, etc.) produces a significant change in the shape-memory polymer. An example is the copolymer of poly(methyl methacrylate-co-methacrylic acid), or poly(MAA-co-MMA), compared to poly(MAA-co-MMA)-PEG, where PEG is short for poly(ethylene glycol) and forms complexes in the copolymer.
The changes in the morphology of the material upon including PEG provide the shape-memory effect to the copolymer, which shows two phases: the three-dimensional network provides the stable phase, and the reversible phase is formed by the amorphous part of the PEG-PMAA complexes. The complexes show a high storage modulus, so when a PEG of higher molecular mass is introduced into the copolymer, an increase in the elastic modulus, a higher modulus in the glassy state, and faster recovery are observed.
Its properties can be studied with differential scanning calorimetry (DSC), wide-angle X-ray diffraction (WAXD) and dynamic mechanical analysis (DMA) techniques to determine its physicochemical arrangement.
Overview
For a polymer to exhibit the thermally induced shape-memory effect, it must have anchor points for temporary and permanent shape. These can be physical (chain entanglements, crystals) or chemical (chemical crosslinking, curing, vulcanization).
This effect in polymers depends on entropic forces and not on martensitic transitions like metals.
The most important physical properties are: elastic modulus, recovery speed, temporary shape stability.
The transition temperature TTrans can be Tm or Tg or a mixture of both.
All crystalline polymers (except for PP) can exhibit the thermally induced shape-memory effect.
Inelastic mechanisms that decrease the effect are: moisture degradation (for moisture sensitive polymers e.g. polyurethanes), unraveling of the chains, degradation of the bonds that fix the permanent or temporary shape.
Applications
Most applications of polymers with this effect are only suggestions for now; many possibilities have been proposed, but so far only a few have been realized, the most important being medical devices and automotive elements. The greatest success, however, has been achieved with heat-shrinkable polyethylene, which is also an exception in the programming step, since it is processed in a different way.
Healthcare applications
Orthodontic items, such as wires and foams for endovascular procedures.
Microelements for intelligent suturing.
Intravenous needles that soften in the body, and laparoscopy devices.
Drug delivery systems.
In-body degradable implants for minimally invasive surgeries.
Inner soles of orthopedic or special needs shoes and utensils for people with disabilities.
Intravenous catheters.
Everyday life applications
Seals for adjustable pipes and fittings, shrinkable or adjustable pipes.
Braille reprintable boards and reprintable advertisements.
Adjustable anti-corrosion films.
Hair for dolls, toys, hair styling items.
New items packaged in smaller volume and that change their shape upon first use.
Protections for automobiles, fenders, etc.
Artificial nails.
Smart textiles.
See also
Shape-memory polymer
Shape-memory alloy
Polymer
Copolymer
Smart material
Bibliographical references
Charlesby A. Atomic Radiation and Polymers. Pergamon Press, Oxford, pp. 198–257 (1960).
Gall, K; Dunn, M; Liu, Y. Internal stress storage in shape memory polymer nanocomposites. Applied Physics Letters. 85, (Jul-2004).
Jeong, Han Mo; Song H, Chi W. Shape-memory effect of poly(methylene-1,3-cyclopentane) and its copolymer with polyethylene. Polymer International, 51:275-280 (2002).
Kawate, K. Creep Recovery of Acrylate Urethane Oligomer/Acrylate Networks. Creep recovery, shape memory. Journal of Polymer Science. 35.
Kim B K, Lee S Y, Xu M. Polyurethanes having shape-memory effects. Polymer 37: 5781–93, (1998).
Langer, R; Tirrell, D. A. Designing materials for biology and medicine. Nature 428: (Apr-2004).
Lendlein, A; Kelch, S; Kratz, K. Shape-Memory Polymers. Encyclopedia of Materials: Science and Technology. 1–9. (2005).
Lendlein, A; Langer, R. Biodegradable, elastic shape-memory polymers for potential biomedical applications. Science. 296, 1673–1676 (2002).
Lendlein, A; Kelch, S. Shape-Memory Polymers. Angew. Chemie. Chem. Int. 41: 2034 – 2057. (2002).
Lendlein, A; Schmidt, A M; Langer R. AB-polymer networks based on oligo(ε-caprolactone) segments showing shape-memory properties. Proc. Natl. Acad. Sci. USA. 98(3): 842–7 (2001).
Li F, Chen Y, Zhu W, Zhang X, Xu M. Shape memory effects of polyethylene/nylon 6 graft copolymers. Polymer 39(26):6929–6934 (1998).
Liu, Chun, Mather. Chemically Cross-Linked Polycyclooctene: Synthesis, Characterization, and Shape Memory Behavior. Macromolecules, 35: 9868-9874 (2002).
Nakasima A, Hu J, Ichinosa M, Shimada H. Potential application of shape-memory plastic as elastic material in clinical orthodontics. (1991) Eur. J. Orthodontics 13:179–86.
Ortega, Alicia M; Gall, Ken. The Effect of Crosslink Density on the Thermo-Mechanical Response of Shape Memory Polymers.
Peng P; Wang, W; Xuesi C; and Jing X. Poly(ε-caprolactone) Polyurethane and Its Shape-Memory Property. Biomacromolecules 6:587-592 (2005).
Wang, M; Zhang, L. Recovery as a Measure of Oriented Crystalline Structure in Poly (ether ester) s Based on Poly (ethylene oxide) and poly(ethylene terephthalate) Used as Shape Memory Polymers. Journal of Polymer Science: Part B: Polymer Physics, 37: 101–112 (1999).
Yiping C. Ying G; Juan D; Juan L; Yuxing P; Albert S. Hydrogen-bonded polymer network—poly(ethylene glycol) complexes with shape memory effect. Journal of Materials Chemistry. 12: 2957–2960 (2002).
Katime I, Katime O, Katime D "Los materiales inteligentes de este Milenio: los hidrogeles polímeros". Editorial de la Universidad del País Vasco, Bilbao 2004. ISBN 84-8373-637-3.
Katime I, Katime O y Katime D."Introducción a la Ciencia de los materiales polímeros: Síntesis y caracterización". Servicio Editorial de la Universidad del País Vasco, Bilbao 2010. ISBN 978-84-9860-356-9
Polymer chemistry
Polymer physics
Polymers | Thermally induced shape-memory effect (polymers) | Chemistry,Materials_science,Engineering | 4,337 |
11,238,434 | https://en.wikipedia.org/wiki/Animas-La%20Plata%20Water%20Project | The Animas-La Plata water project is a water project designed to fulfill the water rights settlement of the Ute Mountain and the Southern Ute tribes of the Ute Nation in Colorado, USA.
Congress authorized planning for the United States Bureau of Reclamation project with Public Law 84–485 on 11 April 1956, and construction was authorized by the Colorado River Basin Project Act of 30 September 1968 (Public Law 90-537). The project was to supply water for irrigation, industrial, and municipal use in Colorado and New Mexico.
In 1978, Congress appropriated $710 million for the project, but President Carter vetoed the entire appropriations bill to protest what he viewed as wasteful pork barrel projects. Congress overrode the veto. Cynthia Barnett, in her book Mirage: Florida and the Vanishing Water of the Eastern U.S. (University of Michigan Press, 2007), writes that the project was the legacy of Congressman Wayne Aspinall of Colorado, the longtime chair of the House Interior Committee. According to Barnett, the Department of the Interior's Inspector General called the project economically unfeasible, and government auditors estimated that the project would return only 40 cents of benefits for every dollar spent (Mirage, pp. 46–47).
The final environmental impact statement was approved and released in 1980. Construction was expected to begin in 1980 or 1981. In 1988, the project was incorporated into the Colorado Ute Indian Water Rights Settlement Act (Public Law 100-585). In 1996–97, Colorado Gov. Roy Romer and his lieutenant governor, Gail Schoettler, undertook an initiative to bring supporters and opponents together to address and resolve the issues and gain consensus on project alternatives. In 1998, the Department of the Interior issued a recommendation for a substantially scaled-down project designed primarily to satisfy Native American water rights, secondarily to serve municipal and industrial needs in the immediate area, and completely excluding other non-Indian irrigation systems.
In April 2002 work on the project began. The project consists of three major components:
A 280 cubic feet per second (7.9 m³/s) pumping plant on the Animas River just south of downtown Durango, Colorado;
An underground pipeline to carry project water from the pumping plant to the off-stream reservoir; and
the reservoir, Lake Nighthorse, at Ridges Basin, southwest of Durango.
Construction officially ended in March 2013 when project status was changed to maintenance.
Lake Nighthorse serves as the project's water storage. The reservoir started filling on 4 May 2009 and was filled to capacity by 29 June 2011.
As of March 2015, the Bureau of Reclamation was working with the City of Durango on a recreation lease and annexation agreement, as well as a cultural resource management plan to comply with Section 106 of the National Historic Preservation Act. Additional construction at the reservoir was planned to start in the summer of 2015.
In addition, the project includes a future buried pipeline from the Farmington, New Mexico, area to the Shiprock, New Mexico, area, supplying water for Navajo Nation usage.
See also
Animas River
Lake Nighthorse
References
External links
Buildings and structures in Colorado
United States Bureau of Reclamation
Interbasin transfer
Colorado River Storage Project | Animas-La Plata Water Project | Engineering,Environmental_science | 665 |
53,605,457 | https://en.wikipedia.org/wiki/Vacuum%20metallurgy | Vacuum metallurgy is the field of materials technology that deals with making, shaping, or treating metals in a controlled atmosphere, at pressures significantly less than normal atmospheric pressure. The purpose of vacuum metallurgy is to prevent contamination of metal by gases in the atmosphere. Alternatively, in some processes, a reactive gas may be introduced into the process to become part of the resultant product. Examples of vacuum metallurgy include vacuum degassing of molten steel in steelmaking operations, vacuum deposition of thin metal layers in manufacture of optics and semiconductors, vacuum casting, vacuum arc remelting of alloys, and vacuum induction melting.
See also
Electron-beam welding
References
Metallurgy | Vacuum metallurgy | Chemistry,Materials_science,Engineering | 137 |
324,863 | https://en.wikipedia.org/wiki/Formalism%20%28philosophy%29 | The term formalism describes an emphasis on form over content or meaning in the arts, literature, or philosophy. A practitioner of formalism is called a formalist. A formalist, with respect to some discipline, holds that there is no transcendent meaning to that discipline other than the literal content created by a practitioner. For example, formalists within mathematics claim that mathematics is no more than the symbols written down by the mathematician, based on logic and a few elementary rules alone. This is opposed to non-formalists within that field, who hold that some things are inherently true and not necessarily dependent on the symbols within mathematics so much as on a greater truth. Formalists within a discipline are completely concerned with "the rules of the game," as there is no external truth that can be achieved beyond those given rules. In this sense, formalism lends itself well to disciplines based upon axiomatic systems.
Religion
Formalism in religion means an emphasis on ritual and observance over their meanings. Within Christianity, the term legalism is a derogatory term that is loosely synonymous to religious formalism.
Law
Formalism is a school of thought in law and jurisprudence which assumes that the law is a system of rules that can determine the outcome of any case, without reference to external norms. For example, formalism animates the commonly heard criticism that "judges should apply the law, not make it." To formalism's rival, legal realism, this criticism is incoherent, because legal realism assumes that, at least in difficult cases, all applications of the law will require that a judge refer to external (i.e. non-legal) sources, such as the judge's conception of justice, or commercial norms.
Criticism
In general in the study of the arts and literature, formalism refers to the style of criticism that focuses on artistic or literary techniques in themselves, in separation from the work's social and historical context.
Art criticism
Generally speaking, formalism is the concept that everything necessary in a work of art is contained within it. The context for the work, including the reason for its creation, the historical background, and the life of the artist, is not considered to be significant. Examples of formalist aestheticians are Clive Bell, Jerome Stolnitz, and Edward Bullough.
Literary criticism
In contemporary discussions of literary theory, the school of criticism of I. A. Richards and his followers, traditionally the New Criticism, has sometimes been labelled 'formalist'. The formalist approach, in this sense, is a continuation of aspects of classical rhetoric.
Russian formalism was a twentieth century school, based in Eastern Europe, with roots in linguistic studies and also theorising on fairy tales, in which content is taken as secondary since the tale 'is' the form, the princess 'is' the fairy-tale princess.
The arts
Poetry
In modern poetry, Formalist poets may be considered as the opposite of writers of free verse. These are only labels, and rarely sum up matters satisfactorily. 'Formalism' in poetry represents an attachment to poetry that recognises and uses schemes of rhyme and rhythm to create poetic effects and to innovate. To distinguish it from archaic poetry the term 'neo-formalist' is sometimes used.
See for example:
The Formalist, a literary magazine (now defunct) for formalist poetry
New Formalism, a movement within the poetry of the United States
The New Formalist, a literary magazine for formalist poetry. It was published from 2001 to 2010.
Film
In film studies, formalism is a trait in filmmaking, which overtly uses the language of film, such as editing, shot composition, camera movement, set design, etc., so as to emphasise graphical (as opposed to diegetic) qualities of the image. Strict formalism, condemned by realist film theorists such as André Bazin, has declined substantially in popular usage since the 1950s, though some more postmodern filmmakers reference it to suggest the artificiality of the film experience.
Examples of formalist films may include Resnais's Last Year at Marienbad and Parajanov's The Color of Pomegranates.
Intellectual method
Formalism can be applied to a set of notations and rules for manipulating them which yield results in agreement with experiment or other techniques of calculation. These rules and notations may or may not have a corresponding mathematical semantics. In the case no mathematical semantics exists, the calculations are often said to be purely formal. See for example scientific formalism.
Mathematics
In the foundations of mathematics, formalism is associated with a certain rigorous mathematical method: see formal system. In common usage, a formalism means the out-turn of the effort towards formalisation of a given limited area. In other words, matters can be formally discussed once captured in a formal system, or commonly enough within something formalisable with claims to be one. Complete formalisation is in the domain of computer science.
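The formalist's "rules of the game" can be made concrete with a toy formal system such as Hofstadter's MIU system, in which strings over the alphabet {M, I, U} are rewritten by four rules with no appeal to meaning; theorems are simply the strings reachable from the axiom "MI". A minimal sketch in Python (the encoding is an illustrative assumption):

```python
def successors(s):
    """All strings derivable from s in one step of the MIU system."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)              # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (deleted)
    return out

print(successors("MI"))   # {'MII', 'MIU'}: theorems follow from the rules alone
```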
Formalism also more precisely refers to a certain school in the philosophy of mathematics, stressing axiomatic proofs through theorems, specifically associated with David Hilbert. In the philosophy of mathematics, therefore, a formalist is a person who belongs to the school of formalism, which is a certain mathematical-philosophical doctrine descending from Hilbert.
Anthropology
In economic anthropology, formalism is the theoretical perspective that the principles of neoclassical economics can be applied to our understanding of all human societies.
See also
Zhdanov Doctrine, Stalinist "anti-formalist" doctrine leading to purges in the arts and culture of the USSR and satellite states
References
External links
"Formalism in the Philosophy of Mathematics" by the Stanford Encyclopedia of Philosophy.
Theories of aesthetics
Theories of deduction
Literary concepts | Formalism (philosophy) | Mathematics | 1,179 |
14,760,937 | https://en.wikipedia.org/wiki/EEF1B2 | Elongation factor 1-beta is a protein that in humans is encoded by the EEF1B2 gene.
Function
This gene encodes a translation elongation factor. The protein is a guanine nucleotide exchange factor involved in the transfer of aminoacylated tRNAs to the ribosome. Alternative splicing results in three transcript variants which differ only in the 5' UTR.
Interactions
EEF1B2 has been shown to interact with EEF1G and HARS.
References
Further reading | EEF1B2 | Chemistry | 108 |
2,254,112 | https://en.wikipedia.org/wiki/Asymmetric%20C-element | Asymmetric C-elements are extended C-elements which allow inputs which only effect the operation of the element when transitioning in one of the directions. Asymmetric inputs are attached to either the minus (-) or plus (+) strips of the symbol. The common inputs which effect both the transitions are connected to the centre of the symbol. When transitioning from zero to one, the C-element will take into account the common and the asymmetric plus inputs. All these inputs must be high for the up transition to take place. Similarly when transitioning from one to zero the C-element will take into account the common and the asymmetric minus inputs. All these inputs must be low for the down transition to happen.
The figure shows the gate-level and transistor-level implementations and symbol of the asymmetric C-element. In the figure the plus inputs are marked with a 'P', the minus inputs are marked with an 'm' and the common inputs are marked with a 'C'.
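The behavior described above can be captured in a small next-state function. The following Python sketch is a behavioral model for illustration only; the function and argument names are assumptions, not drawn from the source.

```python
def acel_next(out, common, plus, minus):
    """Next output of an asymmetric C-element with 0/1 inputs.
    Rise needs ALL common and plus inputs high (minus inputs ignored);
    fall needs ALL common and minus inputs low (plus inputs ignored);
    otherwise the element holds its current state."""
    if all(common) and all(plus):
        return 1
    if not any(common) and not any(minus):
        return 0
    return out

# One common input, one plus input, one minus input:
state = 0
state = acel_next(state, common=[1], plus=[1], minus=[0])  # -> 1 (up transition)
state = acel_next(state, common=[0], plus=[1], minus=[0])  # -> 0 (down transition)
```

Note that in the second call the high plus input does not block the down transition, which consults only the common and minus inputs.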
In addition, it is possible to extend the asymmetric input convention to inverted C-elements, where a plus (minus) on an input port means that an input is required for the inverted output to fall (rise).
References
Digital electronics | Asymmetric C-element | Engineering | 262 |
6,222,223 | https://en.wikipedia.org/wiki/Harte%20Hanks | Harte Hanks is a global marketing services company headquartered in Boston, Massachusetts. Harte Hanks services include analytics, strategy, marketing technology, creative services, digital marketing, customer care, direct mail, logistics, and fulfillment.
History
Founded by Houston Harte and Bernard Hanks in 1923 as Harte-Hanks Newspapers (and later Harte-Hanks Communications), the company spent its first 50 years operating newspapers in Texas. In 1968, the company relocated from Abilene to San Antonio. It made its first IPO on March 8, 1972, later diversifying into television and radio properties. In 1984, the company's managers took it private, later going public again in 1993. In the mid-1990s, the company withdrew from the newspaper and broadcasting business and focused solely on direct marketing and shopper publications.
Newspapers
Harte Hanks' first newspapers were Hanks' Abilene Reporter-News and Harte's San Angelo Standard. Other early acquisitions in the 1920s and 1930s included the Harlingen Star, Corpus Christi Times, Big Spring Herald and Paris News. The company incorporated as Harte-Hanks Newspapers, Inc. in 1948.
The company bought two competing newspapers in Greenville, Texas in the mid-1950s, consolidating them into the Herald-Banner after two years of fierce rivalry. A court case followed, with Harte Hanks accused of unfair competition. The chain was acquitted of the charges in 1959.
In 1962, the company took full ownership of San Antonio Express-News, its largest circulation newspaper. The Express-News was one of the first properties Harte Hanks sold off, however, as it began to narrow its focus to smaller newspapers and eventually to direct marketing. Rupert Murdoch paid $19 million for the Express-News in 1973.
At the time of the first IPO in 1972, the firm owned properties in 19 markets across six states. The paper expanded outside of Texas that year with the purchase of the Anderson Independent and Anderson Daily Mail of Anderson, South Carolina, merging them into the Anderson Independent-Mail.
By 1980, the company owned 29 daily and 68 weekly newspapers.
In 1995, Harte Hanks sold to Community Newspaper Company its interest in the Massachusetts-based Middlesex News, two other dailies, and associated weeklies in the western suburbs of Boston. It had owned the News since 1972 and bought the News-Tribune and Daily Transcript in 1986.
The Abilene, Anderson, Corpus Christi, and San Angelo papers were among the last remaining Harte Hanks newspaper properties and were sold to E. W. Scripps Company in May 1997. Scripps spun out its newspaper assets into Journal Media Group in April 2015. Journal was then absorbed into Gannett in April 2016.
Television and radio
The company made its first foray into other media as early as 1962, when Harte Hanks bought KENS-AM-TV, San Antonio's CBS radio and television affiliates, as part of its acquisition of the Express-News. Harte Hanks turned KENS from a perennial ratings also-ran to the market leader by 1968. In the 1970s, the newspaper-dominated company further diversified its holdings by purchasing the WAIM radio and TV stations in Anderson as part of its purchase of the Independent and Mail, as well as television stations in Jacksonville, Florida, Greensboro, North Carolina, and Springfield, Missouri. In 1978, Harte Hanks bought radio stations formerly owned by Southern Broadcasting. In 1980, the company's broadcast holdings were four television stations, 11 radio stations and four cable television systems. It sold off most of these assets in the mid-1980s to pay down debt incurred in the leveraged buyout that took the company private. Harte Hanks continued to hold KENS until 1997, when it and the company's remaining newspaper properties were sold to Scripps.
Television stations
Other businesses
Harte Hanks was formerly associated with the publication of weekly shopper publications, with a circulation at one time of 13 million weekly in 1,100 separate editions of The PennySaver and The Flyer in California and Florida, respectively. The company sold The Flyer to Coda Media in 2012, having owned it since 1983. The PennySaver and website PennySaverUSA.com, a nationwide network of local advertising content online for consumers and businesses, were sold to OpenGate Capital in 2013. Harte Hanks had owned the publication since 1972.
In 2006, Harte Hanks acquired Global Address, a software company based in the United Kingdom that developed International Address Validation technology. In 2008, Global Address was renamed to Trillium Software. Trillium Software was later sold to Syncsort in 2016.
In 2008, Harte Hanks acquired Mason Zimbler, a UK-based digital marketing and media provider.
In 2008, Harte Hanks acquired Strange & Dawson, a UK-based digital advertising service.
In 2010, Harte Hanks acquired Information Arts, a UK-based data insight, data management and database-marketing firm.
In 2015, Harte Hanks acquired San Mateo, California-based digital marketing firm 3Q Digital. In 2018, Harte Hanks sold 3Q back to an entity owned by previous 3Q Digital owners.
Notes
References
Pederson, J. (2004), International Directory of Company Histories, Volume 63, St. James Press
Plunkett's Outsourcing and Offshoring Industry Almanac 2007, Plunkett Research, Limited
External links
Companies formerly listed on the New York Stock Exchange
Newspaper companies of the United States
Marketing companies established in 1923
Harte family (United States)
Data companies
Data quality companies
1923 establishments in Texas
Data collection | Harte Hanks | Technology | 1,146 |
6,973,815 | https://en.wikipedia.org/wiki/Avonite | Avonite Surfaces is an acrylic solid surface material brand.
External links
"Slabs of Color", Popular Science, June 1989
"High-Tech Countertop: How to build a new countertop with the latest material", Popular Mechanics, April 1990
Cabinets and Countertops, Taunton Press, 2006
"A Solid History: Reviewing 40 Years Of Solid Surface", Surface Fabrication, November 2007
The professional practice of architectural detailing, Wiley, 1987
Building materials
Kitchen countertops | Avonite | Physics,Engineering | 95 |
3,895,745 | https://en.wikipedia.org/wiki/Smart%20meter | A smart meter is an electronic device that records information—such as consumption of electric energy, voltage levels, current, and power factor—and communicates the information to the consumer and electricity suppliers. Advanced metering infrastructure (AMI) differs from automatic meter reading (AMR) in that it enables two-way communication between the meter and the supplier.
Description
The term smart meter often refers to an electricity meter, but it also may mean a device measuring natural gas, water or district heating consumption. More generally, a smart meter is an electronic device that records information such as consumption of electric energy, voltage levels, current, and power factor. Smart meters communicate the information to the consumer for greater clarity of consumption behavior, and electricity suppliers for system monitoring and customer billing. Smart meters typically record energy near real-time, and report regularly, in short intervals throughout the day. Smart meters enable two-way communication between the meter and the central system. Smart meters may be part of a smart grid, but do not themselves constitute a smart grid.
Such an advanced metering infrastructure (AMI) differs from automatic meter reading (AMR) in that it enables two-way communication between the meter and the supplier. Communications from the meter to the network may be wireless, or via fixed wired connections such as power line carrier (PLC). Wireless communication options in common use include cellular communications, Wi-Fi (readily available), wireless ad hoc networks over Wi-Fi, wireless mesh networks, low power long-range wireless (LoRa), Wize (high radio penetration rate, open, using the 169 MHz frequency), Zigbee (low power, low data rate wireless), and Wi-SUN (Smart Utility Networks).
Similar meters, usually referred to as interval or time-of-use meters, have existed for years, but smart meters usually involve real-time or near real-time sensors, power outage notification, and power quality monitoring. These additional features are more than simple automated meter reading (AMR). They are similar in many respects to Advanced Metering Infrastructure (AMI) meters. Interval and time-of-use meters historically have been installed to measure commercial and industrial customers, but may not have automatic reading. Research by the UK consumer group Which? showed that as many as one in three people confuse smart meters with energy monitors, also known as in-home display monitors.
History
In 1972, Theodore Paraskevakos, while working with Boeing in Huntsville, Alabama, developed a sensor monitoring system that used digital transmission for security, fire, and medical alarm systems as well as meter reading capabilities. This technology was a spin-off from the automatic telephone line identification system, now known as Caller ID.
In 1974, Paraskevakos was awarded a U.S. patent for this technology. In 1977, he launched Metretek, Inc., which developed and produced the first smart meters. Since this system was developed pre-Internet, Metretek utilized the IBM series 1 mini-computer. For this approach, Paraskevakos and Metretek were awarded multiple patents.
The installed base of smart meters in Europe at the end of 2008 was about 39 million units, according to analyst firm Berg Insight. Globally, Pike Research found that smart meter shipments were 17.4 million units for the first quarter of 2011. Visiongain determined that the value of the global smart meter market would reach US$7 billion in 2012.
In 2013, H.M. Zahid Iqbal, M. Waseem, and Tahir Mahmood, researchers at the University of Engineering & Technology Taxila, Pakistan, published the article "Automatic Energy Meter Reading using Smart Energy Meter", which outlined key features of a smart energy meter: automatic remote meter reading via GSM for utility companies and customers, real-time monitoring of a customer's running load, remote disconnection and reconnection of customer connections by the utility company, and convenient billing that eliminates the need for meter readers to physically visit customers.
As of the late 2010s, over 99 million smart electricity meters had been deployed across the European Union, with an estimated 24 million more to be installed by the end of 2020. The European Commission DG Energy estimates the 2020 installed base to have required €18.8 billion in investment, growing to €40.7 billion by 2030, with a total deployment of 266 million smart meters.
By the end of 2018, the U.S. had over 86 million smart meters installed. In 2017, there were 665 million smart meters installed globally. Revenue generation is expected to grow from $12.8 billion in 2017 to $20 billion by 2022.
Purpose
Since the inception of electricity deregulation and market-driven pricing throughout the world, utilities have been looking for a means to match consumption with generation. Non-smart electrical and gas meters only measure total consumption, providing no information about when the energy was consumed. Smart meters provide a way of measuring electricity consumption in near real-time. This allows utility companies to charge different prices for consumption according to the time of day and the season. It also facilitates more accurate cash-flow models for utilities. Since smart meters can be read remotely, labor costs are reduced for utilities.
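To illustrate time-based pricing, the following minimal Python sketch bills interval readings under a hypothetical time-of-use tariff; the rates and period boundaries are invented for illustration and do not represent any utility's actual tariff:

# Hypothetical time-of-use (TOU) billing sketch: rates and hours are
# illustrative only, not an actual utility tariff.
TOU_RATES = {          # $ per kWh
    "off_peak": 0.08,  # 22:00-07:00
    "shoulder": 0.15,  # 07:00-17:00
    "peak":     0.30,  # 17:00-22:00
}

def period_for(hour):
    """Map an hour of day (0-23) to a tariff period."""
    if 17 <= hour < 22:
        return "peak"
    if 7 <= hour < 17:
        return "shoulder"
    return "off_peak"

def bill(interval_readings):
    """interval_readings: list of (hour, kwh) pairs from a smart meter.
    A non-smart meter would instead price sum(kwh) at one flat rate."""
    return sum(kwh * TOU_RATES[period_for(hour)] for hour, kwh in interval_readings)

# Three hourly readings: the 18:00 unit costs 3.75x the 03:00 unit.
print(bill([(3, 1.2), (12, 1.2), (18, 1.2)]))  # ~0.64 for the day

With these illustrative rates, the same 1.2 kWh costs nearly four times as much at 18:00 as at 03:00, which is the kind of price signal interval metering makes possible.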
Smart metering offers potential benefits to customers. These include (a) an end to estimated bills, which are a major source of complaints for many customers, and (b) a tool to help consumers better manage their energy purchases: smart meters with a display outside their homes could provide up-to-date information on gas and electricity consumption, helping people to manage their energy use and reduce their energy bills. Consumption reduction is critical to understanding the benefits of smart meters, because the relatively small percentage savings per household are multiplied across millions of users. Smart meters for water consumption can also provide detailed and timely information about customer water use and early notification of possible water leaks on their premises. Electricity prices usually peak at certain predictable times of the day and the season. In particular, if generation is constrained, prices can rise if power from other jurisdictions or more costly generation is brought online. Proponents assert that billing customers at a higher rate for peak times encourages consumers to adjust their consumption habits to be more responsive to market prices, and that regulatory and market design agencies hope these "price signals" could delay the construction of additional generation, or at least the purchase of energy from higher-priced sources, thereby controlling the steady and rapid increase of electricity prices.
An academic study based on existing trials showed that homeowners' electricity consumption on average is reduced by approximately 3-5% when provided with real-time feedback.
Another advantage of smart meters that benefits both customers and the utility is the monitoring capability they provide for the whole electrical system. As part of an AMI, utilities can use the real-time data from smart meters measurements related to current, voltage, and power factor to detect system disruptions more quickly, allowing immediate corrective action to minimize customer impact such as blackouts. Smart meters also help utilities understand the power grid needs with more granularity than legacy meters. This greater understanding facilitates system planning to meet customer energy needs while reducing the likelihood of additional infrastructure investments, which eliminates unnecessary spending or energy cost increases.
As intermittent renewable generation sources make up a greater proportion of the energy mix, the task of matching national electricity demand with accurate supply is becoming ever more challenging. The real-time data provided by smart meters allow grid operators to integrate renewable energy onto the grid and balance their networks. As a result, smart meters are considered an essential technology for the decarbonisation of the energy system.
Advanced metering infrastructure
Advanced metering infrastructure (AMI) refers to systems that measure, collect, and analyze energy usage, and communicate with metering devices such as electricity meters, gas meters, heat meters, and water meters, either on request or on a schedule. These systems include hardware, software, communications, consumer energy displays and controllers, customer associated systems, meter data management software, and supplier business systems.
Government agencies and utilities are turning toward advanced metering infrastructure (AMI) systems as part of larger "smart grid" initiatives. AMI extends automatic meter reading (AMR) technology by providing two-way meter communications, allowing commands to be sent toward the home for multiple purposes, including time-based pricing information, demand-response actions, or remote service disconnects. Wireless technologies are critical elements of the neighborhood network, aggregating a mesh configuration of up to thousands of meters for backhaul to the utility's IT headquarters.
The network between the measurement devices and business systems allows the collection and distribution of information to customers, suppliers, utility companies, and service providers. This enables these businesses to participate in demand response services. Consumers can use the information provided by the system to change their normal consumption patterns to take advantage of lower prices. Pricing can be used to curb the growth of peak demand consumption. AMI differs from traditional automatic meter reading (AMR) in that it enables two-way communications with the meter. Systems only capable of meter readings do not qualify as AMI systems.
AMI implementation relies on four key components: physical layer connectivity, which establishes connections between smart meters and networks; communication protocols, which ensure secure and efficient data transmission; server infrastructure, consisting of centralized or distributed servers that store, process, and manage data for billing, monitoring, and demand response; and data analysis, in which analytical tools provide insights, load forecasting, and anomaly detection for optimized energy management. Together, these components help utilities and consumers monitor and manage energy use efficiently, supporting smarter grid management.
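The relationship between these components can be sketched as a simple data flow in Python; the class and field names below are illustrative and not taken from any real AMI product:

# Minimal sketch of the AMI data path: meter -> concentrator -> head-end.
# All names are illustrative; real systems add encryption, retries, etc.
from dataclasses import dataclass, field

@dataclass
class Reading:
    meter_id: str
    timestamp: str   # ISO 8601
    kwh: float

@dataclass
class DataConcentrator:
    buffer: list = field(default_factory=list)
    def collect(self, reading: Reading):
        self.buffer.append(reading)       # physical layer: PLC, RF mesh, ...
    def forward(self, head_end):
        head_end.ingest(self.buffer)      # backhaul: cellular, fiber, ...
        self.buffer = []

class HeadEndSystem:
    def __init__(self):
        self.store = []
    def ingest(self, readings):
        self.store.extend(r for r in readings if r.kwh >= 0)  # basic validation

dcu = DataConcentrator()
dcu.collect(Reading("meter-001", "2024-01-01T00:30:00Z", 0.42))
hes = HeadEndSystem()
dcu.forward(hes)
print(len(hes.store))  # 1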
Physical Layer Connectivity
Communication is a cornerstone of smart meter technology, enabling reliable and secure data transmission to central systems. However, the diversity of environments in which smart meters operate presents significant challenges. Solutions to these challenges encompass a range of communication methods, including power-line communication (PLC), cellular networks, wireless mesh networks, short-range radio, and satellite:
Power-line communication for Smart Metering
Power Line Communication (PLC) stands out among smart metering connectivity technologies because it leverages existing electrical power infrastructure for data transmission. Unlike cellular, radio-frequency (RF), or Wi-Fi-based solutions, PLC does not require building or maintaining separate communication networks, making it inherently more cost-effective and easier to scale. Two major PLC standards in smart metering are G3-PLC and the PRIME Alliance protocol. G3-PLC supports IPv6-based communications and adaptive data rates, providing robust performance even in noisy environments, while PRIME (PoweRline Intelligent Metering Evolution) focuses on efficient, high-speed communication with low-cost implementation. PLC-based smart metering is deployed extensively in regions like Europe, South America, and parts of Asia where dense infrastructure supports its use. Utilities favor PLC for its reliability in urban environments and for connecting large numbers of meters within smart grid networks.
An important feature of G3-PLC and PRIME is their ability to enable mesh networking (also called multi-hop), where smart meters act as repeaters for other meters in the network. This functionality allows meters to relay data from neighboring meters to ensure that the information reaches the Data Concentrator Unit (DCU), even if direct communication is not possible due to distance or signal obstructions. This approach enhances network reliability and coverage, particularly in dense urban environments or geographically challenging areas.
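This relaying behaviour amounts to finding a multi-hop route to the concentrator. The following Python sketch uses a breadth-first search over a hypothetical neighbourhood link table; the topology and node names are invented for illustration:

# Sketch of multi-hop routing in a PLC mesh: each meter can reach only its
# direct neighbours, so readings hop meter-to-meter toward the DCU.
from collections import deque

# Hypothetical link table: which nodes can hear each other directly.
links = {
    "DCU": ["m1", "m2"],
    "m1":  ["DCU", "m3"],
    "m2":  ["DCU"],
    "m3":  ["m1", "m4"],   # m3 cannot reach the DCU directly
    "m4":  ["m3"],
}

def route_to_dcu(source):
    """Breadth-first search for the shortest relay path to the DCU."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == "DCU":
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route_to_dcu("m4"))  # ['m4', 'm3', 'm1', 'DCU']: two meters act as repeaters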
Cellular Network (GPRS, NB-IoT, LTE-M): "Cellular technologies are highly scalable and secure. With national coverage, cellular connectivity can support a large number of meters in densely populated areas as well as reach those in remote locations."
Wireless mesh network (e.g. Wirepas and Wi-SUN): ideal for urban areas, where devices can relay data to optimize coverage and reliability. It is mostly used for water and gas meters.
Short-range: protocols such as Wireless M-Bus (WMBUS) are commonly used in smart metering applications to enable reliable, low-power communication between utility meters and local data collectors within buildings or neighborhoods.
Hybrid PLC/RF: the PRIME and G3-PLC standards define an integrated approach for the seamless combination of PLC and wireless communication, enhancing reliability and flexibility in smart grids.
Additional options, such as Wi-Fi and internet-based networks, are also in use. However, no single communication solution is universally optimal. The challenges faced by rural utilities differ significantly from those of urban counterparts or utilities in remote, mountainous, or poorly serviced areas.
Smart meters often extend their functionality through integration into Home Area Networks (HANs). These networks enable communication within the household and may include:
In-Premises Displays: Providing real-time energy usage insights for consumers.
Hubs: Interfacing multiple meters with the central head-end system.
Technologies used in HANs vary globally but typically include PLC, wireless ad hoc networks, and Zigbee. By leveraging appropriate connectivity solutions, smart meters can address diverse environmental and infrastructural needs while delivering seamless communication and enhanced functionality.
Smart meters used as a gateway for water and gas meters
Electricity smart meters are starting to be used as gateways for gas and water meters, creating integrated smart metering systems. In this configuration, gas and water meters communicate with the electricity meter using Wireless M-Bus (Wireless Meter-Bus), a European standard (EN 13757-4) designed for secure and efficient data transmission between utility meters and data collectors. The electricity meter then aggregates this data and transmits it to the central utility network via Power Line Communication (PLC), which leverages existing electrical wiring for data transfer.
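A rough Python sketch of the gateway role is shown below; the frame handling and payload format are invented for illustration and do not reflect the actual EN 13757-4 encoding:

# Sketch of an electricity meter acting as a gateway: it aggregates readings
# received over Wireless M-Bus from gas/water meters and forwards the bundle
# upstream over PLC. Field names and the JSON payload are illustrative only.
import json

class GatewayMeter:
    def __init__(self, meter_id):
        self.meter_id = meter_id
        self.pending = []        # readings received from wM-Bus devices

    def on_wmbus_frame(self, device_id, medium, value, unit):
        # A real meter would first decrypt and parse the EN 13757-4 frame.
        self.pending.append({"device": device_id, "medium": medium,
                             "value": value, "unit": unit})

    def build_plc_payload(self, own_kwh):
        payload = {"gateway": self.meter_id, "electricity_kwh": own_kwh,
                   "sub_meters": self.pending}
        self.pending = []
        return json.dumps(payload)   # handed to the PLC modem for transmission

gw = GatewayMeter("elec-7781")
gw.on_wmbus_frame("gas-0042", "gas", 13.7, "m3")
gw.on_wmbus_frame("water-0117", "water", 2.4, "m3")
print(gw.build_plc_payload(own_kwh=412.6))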
Communication Protocols
Smart meter communication protocols are essential for enabling reliable, efficient, and secure data exchange between meters, utilities, and other components of advanced metering infrastructure (AMI). These protocols address the diverse requirements of global markets, supporting various communication methods, from optical ports and serial connections to power line communication (PLC) and wireless networks. Below is an overview of key protocols, including ANSI standards widely used in North America, IEC protocols prevalent in Europe, the globally recognized OSGP for smart grid applications, and the PLC-focused Meters and More, each designed to meet specific needs in energy monitoring and management.
IEC 62056
"IEC 62056 is the most widely adopted protocol" for smart meter communication, enabling reliable, two-way data exchange within Advanced Metering Infrastructure (AMI) systems. It encompasses the DLMS/COSEM protocol for structuring and managing metering data. "It is widely used because of its flexibility, scalability, and ability to support different communication media such as Power Line Communication (PLC), TCP/IP, and wireless networks.". It also supports data transmission over serial connections using ASCII or binary formats, with physical media options such as modulated light (via LED and photodiode) or wired connections (typically EIA-485).
ANSI C12.18
ANSI C12.18 is an ANSI Standard that describes a protocol used for two-way communications with a meter, mostly used in North American markets. The C12.18 Standard is written specifically for meter communications via an ANSI Type 2 Optical Port, and specifies lower-level protocol details. ANSI C12.19 specifies the data tables that are used. ANSI C12.21 is an extension of C12.18 written for modem instead of optical communications, so it is better suited to automatic meter reading. ANSI C12.22 is the communication protocol for remote communications.
OSGP
The Open Smart Grid Protocol (OSGP) is a family of specifications published by the European Telecommunications Standards Institute (ETSI) used in conjunction with the ISO/IEC 14908 control networking standard for smart metering and smart grid applications. Millions of smart meters based on OSGP are deployed worldwide. On July 15, 2015, the OSGP Alliance announced the release of a new security protocol (OSGP-AES-128-PSK) and its availability from OSGP vendors. This deprecated the original OSGP-RC4-PSK security protocol which had been identified to be vulnerable.
Meters and More
"Meters and More was created in 2010 from the coordinated work between Enel and Endesa to adopt, maintain and evolve the field-proven Meters and More open communication protocol for smart grid solutions." . In 2010, the Meters and More Association was established to promote the protocol globally, ensuring interoperability and efficiency in power line communication (PLC)-based smart metering systems. Meters and More is an open communication protocol designed for advanced metering infrastructure (AMI). It facilitates reliable, high-speed data exchange over PLC networks, focusing on energy monitoring, demand response, and secure two-way communication between utilities and consumers.
Unlike DLMS/COSEM, which is a globally standardized and versatile protocol supporting multiple utilities (electricity, gas, and water), Meters and More is tailored specifically for PLC-based systems, emphasizing efficiency, reliability, and ease of deployment in electricity metering.
There is a growing trend toward the use of TCP/IP technology as a common communication platform for Smart Meter applications, so that utilities can deploy multiple communication systems, while using IP technology as a common management platform. A universal metering interface would allow for development and mass production of smart meters and smart grid devices prior to the communication standards being set, and then for the relevant communication modules to be easily added or switched when they are. This would lower the risk of investing in the wrong standard as well as permit a single product to be used globally even if regional communication standards vary.
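The idea of a universal metering interface can be sketched as meter logic written against an abstract transport, with the communication module swapped underneath; all names in this Python sketch are hypothetical:

# Sketch of a universal metering interface: the metering code is written once
# against an abstract transport, and the communication module can be swapped
# (PLC, cellular, mesh) without touching it. Names are illustrative.
from abc import ABC, abstractmethod

class CommModule(ABC):
    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class PlcModule(CommModule):
    def send(self, payload):
        print(f"PLC frame out: {len(payload)} bytes")

class CellularModule(CommModule):
    def send(self, payload):
        print(f"Cellular uplink: {len(payload)} bytes")

class Meter:
    def __init__(self, comm: CommModule):
        self.comm = comm            # module chosen per region or standard
    def report(self, kwh: float):
        self.comm.send(f"{kwh:.3f}".encode())

Meter(PlcModule()).report(12.345)       # same metering code,
Meter(CellularModule()).report(12.345)  # different communication module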
Server Infrastructure for Smart Meter AMI
In Advanced Metering Infrastructure (AMI), the server infrastructure is crucial for managing, storing, and processing the large volumes of data generated by smart meters. This infrastructure ensures seamless communication between smart meters, utility providers, and end-users, supporting real-time monitoring, billing, and grid management.
Key Components of AMI Server Infrastructure
Data Concentrator
A Data Concentrator Unit (DCU) aggregates data from multiple smart meters within a localized area (e.g., a neighborhood or building) before transmitting it to the central server. Data concentrators reduce the communication load on the network and help overcome connectivity challenges by acting as intermediaries between smart meters and the head-end system (HES). They typically support communication protocols such as IEC 62056 and DLMS/COSEM.
Head-End System (HES)
The HES is responsible for collecting, validating, and managing data received from data concentrators and smart meters. It serves as the central communication hub, facilitating two-way communication between the smart meters and the utility's central servers. The HES supports meter configuration, firmware updates, and real-time data retrieval, ensuring data integrity and security.
Meter Data Management System (MDMS)
The MDMS is a specialized software platform that stores and processes large volumes of meter data collected by the HES. Key functions of the MDMS include data validation, estimation, and editing, as well as billing preparation, load analysis, and anomaly detection. The MDMS integrates with other utility systems, such as billing, customer relationship management (CRM), and demand response systems, to enable efficient energy management.
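The validation-estimation-editing (VEE) step can be illustrated in a few lines of Python; here a missed interval read is estimated from its neighbours and flagged as such, while real VEE rule sets are far more elaborate:

# Sketch of the MDMS "validation, estimation and editing" (VEE) step:
# missing interval readings are flagged and estimated here as the midpoint
# of their neighbours. Production VEE rules are far more elaborate.
def vee(intervals):
    """intervals: list of kWh values, or None for a missed read."""
    out = []
    for i, v in enumerate(intervals):
        if v is not None:
            out.append((v, "actual"))
            continue
        prev = next((intervals[j] for j in range(i - 1, -1, -1)
                     if intervals[j] is not None), None)
        nxt = next((x for x in intervals[i + 1:] if x is not None), None)
        if prev is not None and nxt is not None:
            out.append(((prev + nxt) / 2, "estimated"))
        else:
            out.append((prev if prev is not None else nxt, "estimated"))
    return out

print(vee([0.4, 0.5, None, 0.7]))
# [(0.4, 'actual'), (0.5, 'actual'), (0.6, 'estimated'), (0.7, 'actual')]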
Data Analytics
Data analytics for smart meters leverages machine learning to extract insights from energy consumption data. Key applications include demand forecasting, dynamic pricing, Energy Disaggregation, and fault detection, enabling optimized grid performance and personalized energy management. These techniques drive efficiency, cost savings, and sustainability in modern energy systems.
"Energy Disaggregation, or the breakdown of your energy use based on specific appliances or devices", is an exploratory technique for analyzing energy consumption in households, commercial buildings, and industrial settings. By using data from a single energy meter, it employs algorithms and machine learning to estimate individual appliance usage without separate monitors. Known as Non-Intrusive Load Monitoring (NILM), this emerging method offers insights into energy efficiency, helping users optimize usage and reduce costs. While promising, energy disaggregation is still being refined for accuracy and scalability as part of smart energy management innovations.
Data management
The other critical technology for smart meter systems is the information technology at the utility that integrates the Smart Meter networks with utility applications, such as billing and CIS. This includes the Meter Data Management system.
It also is essential for smart grid implementations that power line communication (PLC) technologies used within the home over a Home Area Network (HAN), are standardized and compatible. The HAN allows HVAC systems and other household appliances to communicate with the smart meter, and from there to the utility. Currently there are several broadband or narrowband standards in place, or being developed, that are not yet compatible. To address this issue, the National Institute for Standards and Technology (NIST) established the PAP15 group, which studies and recommends coexistence mechanisms with a focus on the harmonization of PLC Standards for the HAN. The objective of the group is to ensure that all PLC technologies selected for the HAN coexist as a minimum. The two leading broadband PLC technologies selected are the HomePlug AV / IEEE 1901 and ITU-T G.hn technologies. Technical working groups within these organizations are working to develop appropriate coexistence mechanisms. The HomePlug Powerline Alliance has developed a new standard for smart grid HAN communications called the HomePlug Green PHY specification. It is interoperable and coexistent with the widely deployed HomePlug AV technology and with the latest IEEE 1901 global Standard and is based on Broadband OFDM technology. ITU-T commissioned in 2010 a new project called G.hnem, to address the home networking aspects of energy management, built upon existing Low Frequency Narrowband OFDM technologies.
Opposition and concerns
Some groups have expressed concerns regarding the cost, health, fire risk, security and privacy effects of smart meters and the remotely controllable "kill switch" that is included with most of them. Many of these concerns regard wireless-only smart meters with no home energy monitoring or control or safety features. Metering-only solutions, while popular with utilities because they fit existing business models and have cheap up-front capital costs, often result in such "backlash". Often the entire smart grid and smart building concept is discredited in part by confusion about the difference between home control and home area network technology and AMI. The (now former) attorney general of Connecticut has stated that he does not believe smart meters provide any financial benefit to consumers; moreover, the cost of installing the new system is absorbed by those customers.
Security
Smart meters expose the power grid to cyberattacks that could lead to power outages, both by cutting off people's electricity and by overloading the grid. However, many cybersecurity experts state that the smart meters deployed in the UK and Germany have relatively high cybersecurity and that any such attack there would thus require extraordinarily high effort or financial resources. The EU Cybersecurity Act took effect in June 2019, complementing the Directive on Security of Network and Information Systems, which establishes notification and security requirements for operators of essential services.
Through the Smartgrid Cybersecurity Committee, the U.S. Department of Energy published cybersecurity guidelines for grid operators in 2010 and updated them in 2014. The guidelines "...present an analytical framework that organizations can use to develop effective cybersecurity strategies..."
Implementing security protocols that protect these devices from malicious attacks has been problematic, due to their limited computational resources and long operational life.
The current version of IEC 62056 includes the possibility to encrypt, authenticate, or sign the meter data.
One proposed smart meter data verification method involves analyzing the network traffic in real-time to detect anomalies using an Intrusion Detection System (IDS). By identifying exploits as they are being leveraged by attackers, an IDS mitigates the suppliers' risks of energy theft by consumers and denial-of-service attacks by hackers. Energy utilities must choose between a centralized IDS, embedded IDS, or dedicated IDS depending on the individual needs of the utility. Researchers have found that for a typical advanced metering infrastructure, the centralized IDS architecture is superior in terms of cost efficiency and security gains.
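As an illustration of the anomaly-detection idea, the Python sketch below flags meters whose daily message volume deviates sharply from their own historical baseline; the threshold and message counts are invented:

# Minimal sketch of a centralized AMI intrusion-detection check: flag a meter
# whose daily message count deviates sharply from its baseline.
# Thresholds and counts are invented for illustration.
from statistics import mean, stdev

def flag_anomalies(history, today, z_threshold=3.0):
    """history: past daily message counts per meter; today: current counts."""
    alerts = []
    for meter_id, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(meter_id, 0)
        if sigma > 0 and abs(observed - mu) / sigma > z_threshold:
            alerts.append((meter_id, observed, round(mu, 1)))
    return alerts

history = {"m1": [96, 98, 97, 99, 96], "m2": [96, 97, 95, 98, 96]}
today = {"m1": 97, "m2": 410}   # m2 suddenly chatty: possible compromise
print(flag_anomalies(history, today))  # [('m2', 410, 96.4)]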
In the United Kingdom, the Data Communication Company, which transports the commands from the supplier to the smart meter, performs an additional anomaly check on commands issued (and signed) by the energy supplier.
Smart meters are intelligent measurement devices that periodically record measured values and send the data, encrypted, to the service provider. In Switzerland, such devices must therefore be evaluated by an accredited evaluation laboratory and, from 1 January 2020, certified by METAS according to the Prüfmethodologie (the test methodology for the execution of data security evaluations of Swiss smart metering components).
According to a report published by Brian Krebs, in 2009 a Puerto Rico electricity supplier asked the FBI to investigate large-scale thefts of electricity related to its smart meters. The FBI found that former employees of the power company and the company that made the meters were being paid by consumers to reprogram the devices to show incorrect results, as well as teaching people how to do it themselves. Several hacking tools that allow security researchers and penetration testers verify the security of electric utility smart meters have been released so far.
Health
Most health concerns about the meters arise from the pulsed radiofrequency (RF) radiation emitted by wireless smart meters.
Members of the California State Assembly asked the California Council on Science and Technology (CCST) to study the issue of potential health impacts from smart meters, in particular whether current FCC standards are protective of public health. The CCST report in April 2011 found no health impacts, based both on lack of scientific evidence of harmful effects from radio frequency (RF) waves and that the RF exposure of people in their homes to smart meters is likely to be minuscule compared to RF exposure to cell phones and microwave ovens. Daniel Hirsch, retired director of the Program on Environmental and Nuclear Policy at UC Santa Cruz, criticized the CCST report on the grounds that it did not consider studies that suggest the potential for non-thermal health effects such as latent cancers from RF exposure. Hirsch also stated that the CCST report failed to correct errors in its comparison to cell phones and microwave ovens and that, when these errors are corrected, smart meters "may produce cumulative whole-body exposures far higher than that of cell phones or microwave ovens."
The Federal Communications Commission (FCC) has adopted recommended Permissible Exposure Limit (PEL) for all RF transmitters (including smart meters) operating at frequencies of 300 kHz to 100 GHz. These limits, based on field strength and power density, are below the levels of RF radiation that are hazardous to human health.
Other studies substantiate the finding of the California Council on Science and Technology (CCST). In 2011, the Electric Power Research Institute performed a study to gauge human exposure to smart meters as compared to the FCC PEL. The report found that most smart meters only transmit RF signals 1% of the time or less. At this rate, and at a distance of 1 foot from the meter, RF exposure would be at a rate of 0.14% of the FCC PEL.
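That figure can be sanity-checked with a back-of-envelope duty-cycle calculation. In the Python sketch below, the FCC general-population exposure limit of f/1500 mW/cm^2 for 300-1500 MHz is a published rule, while the transmit power, antenna gain, and duty cycle are assumed illustrative values rather than the EPRI study's inputs; with these assumptions the result lands in the same well-below-1% regime as the cited 0.14%:

# Back-of-envelope duty-cycle exposure estimate. Transmit power, antenna gain,
# and duty cycle are assumed illustrative values, not the EPRI study's inputs.
import math

freq_mhz   = 900.0     # typical smart-meter band
tx_mw      = 1000.0    # 1 W transmitter (assumed)
gain       = 2.0       # linear antenna gain (assumed)
duty_cycle = 0.01      # transmitting 1% of the time
r_cm       = 30.48     # 1 foot from the meter

# FCC general-population MPE limit, 300-1500 MHz: f/1500 mW/cm^2.
pel = freq_mhz / 1500.0                              # 0.6 mW/cm^2 at 900 MHz
peak = tx_mw * gain / (4 * math.pi * r_cm ** 2)      # far-field power density
time_averaged = peak * duty_cycle
print(f"{100 * time_averaged / pel:.2f}% of the FCC PEL")  # ~0.3% with these inputs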
An indirect potential for harm to health by smart meters is that they enable energy companies to disconnect consumers remotely, typically in response to difficulties with payment. This can cause health problems to vulnerable people in financial difficulty; in addition to denial of heat, lighting, and use of appliances, there are people who depend on power to use medical equipment essential for life. While there may be legal protections in place to protect the vulnerable, many people in the UK were disconnected in violation of the rules.
Safety
Issues surrounding smart meters causing fires have been reported, particularly involving the manufacturer Sensus. In 2012, PECO Energy Company replaced the Sensus meters it had deployed in the Philadelphia, US region after reports that a number of the units had overheated and caused fires. In July 2014, SaskPower, the province-run utility company of the Canadian province of Saskatchewan, halted its roll-out of Sensus meters after similar, isolated incidents were discovered. Shortly afterward, Portland General Electric announced that it would replace 70,000 smart meters that had been deployed in the state of Oregon after similar reports. The company noted that it had been aware of the issues since at least 2013, and that they were limited to specific models it had installed between 2010 and 2012. On July 30, 2014, after a total of eight recent fire incidents involving the meters, SaskPower was ordered by the Government of Saskatchewan to immediately end its smart meter program and remove the 105,000 smart meters it had installed.
Privacy concerns
One technical reason for privacy concerns is that smart meters report detailed information about how much electricity is being used, and more frequent reports reveal more detail. Infrequent reports may be of little benefit to the provider, as they do not allow demand management to respond to changing needs for electricity. On the other hand, frequent reports would allow the utility company to infer behavioral patterns of the occupants of a house, such as when the members of the household are probably asleep or absent. Furthermore, the fine-grained information collected by smart meters raises growing concerns of privacy invasion due to the exposure of personal behavior (private activities, daily routines, etc.). The current trend is to increase the frequency of reports. A solution that benefits both provider and user privacy would be to adapt the reporting interval dynamically. Another solution involves energy storage installed at the household, used to reshape the energy consumption profile. In British Columbia the electric utility is government-owned and as such must comply with privacy laws that prevent the sale of data collected by smart meters; many parts of the world are serviced by private companies that are able to sell their data. In Australia, debt collectors can make use of the data to know when people are at home. In Austin, Texas, police agencies secretly collected smart meter power usage data from thousands of residences to determine which used more power than "typical" in order to identify marijuana growing operations; the data were later used as evidence in a court case.
Smart meter power data usage patterns can reveal much more than how much power is being used. Research has demonstrated that smart meters sampling power levels at two-second intervals can reliably identify when different electrical devices are in use.
Ross Anderson wrote about privacy concerns "It is not necessary for my meter to tell the power company, let alone the government, how much I used in every half-hour period last month"; that meters can provide "targeting information for burglars"; that detailed energy usage history can help energy companies to sell users exploitative contracts; and that there may be "a temptation for policymakers to use smart metering data to target any needed power cuts."
Opt-out options
Reviews of smart meter programs, moratoriums, delays, and "opt-out" programs are some responses to the concerns of customers and government officials. In response to residents who did not want a smart meter, in June 2012 a utility in Hawaii changed its smart meter program to "opt out". The utility said that once the smart grid installation project is nearing completion, KIUC may convert the deferral policy to an opt-out policy or program and may charge a fee to those members to cover the costs of servicing the traditional meters. Any fee would require approval from the Hawaii Public Utilities Commission.
After receiving numerous complaints about health, hacking, and privacy concerns with the wireless digital devices, the Public Utility Commission of the US state of Maine voted to allow customers to opt-out of the meter change at the cost of $12 a month. In Connecticut, another US state to consider smart metering, regulators declined a request by the state's largest utility, Connecticut Light & Power, to install 1.2 million of the devices, arguing that the potential savings in electric bills do not justify the cost. CL&P already offers its customers time-based rates. The state's Attorney General George Jepsen was quoted as saying the proposal would cause customers to spend upwards of $500 million on meters and get few benefits in return, a claim that Connecticut Light & Power disputed.
Abuse of dynamic pricing
Smart meters allow dynamic pricing; it has been pointed out that, while this allows prices to be reduced at times of low demand, it can also be used to increase prices at peak times if all consumers have smart meters. Additionally, smart meters allow energy suppliers to switch customers to expensive prepay tariffs instantly in the event of payment difficulties. In the UK, during a period of very high energy prices from 2022, companies were remotely switching smart meters from a credit tariff to an expensive prepay tariff which disconnects supplies unless credit has been purchased. While regulations do not permit this without appropriate precautions to help those in financial difficulties and to protect the vulnerable, the rules were often flouted. (Prepaid tariffs could also be levied without smart meters, but this required a dedicated prepay meter to be installed.) In 2022, 3.2 million people were left without power at some point after running out of prepay credit.
Limited benefits
There are questions about whether electricity is, or should be, primarily a "when you need it" service for which the inconvenience/cost-benefit ratio of time-shifting loads is poor. In the Chicago area, Commonwealth Edison ran a test installing smart meters on 8,000 randomly selected households, together with variable rates and rebates to encourage cutting back during peak usage. The Crain's Chicago Business article "Smart grid test underwhelms. In the pilot, few power down to save money." reported that fewer than 9% exhibited any amount of peak-usage reduction and that the overall amount of reduction was "statistically insignificant". This was from a report by the Electric Power Research Institute, the utility industry think tank that conducted the study and prepared the report. Susan Satter, senior assistant Illinois attorney general for public utilities, said "It's devastating to their plan... The report shows zero statistically different result compared to business as usual."
By 2016, the 7 million smart meters in Texas had not persuaded many people to check their energy data as the process was too complicated.
A report from a parliamentary group in the UK suggests people who have smart meters installed are expected to save an average of £11 annually on their energy bills, much less than originally hoped. The 2016 cost-benefit analysis was updated in 2019 and estimated a similar average saving.
The Australian Victorian Auditor-General found in 2015 that 'Victoria's electricity consumers will have paid an estimated $2.239 billion for metering services, including the rollout and connection of smart meters. In contrast, while a few benefits have accrued to consumers, benefits realisation is behind schedule and most benefits are yet to be realised'.
Erratic demand
Smart meters can allow real-time pricing, and in theory this could help smooth power consumption as consumers adjust their demand in response to price changes. However, modelling by researchers at the University of Bremen suggests that in certain circumstances, "power demand fluctuations are not dampened but amplified instead."
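The mechanism can be caricatured in a few lines of Python: the price is set from the last observed demand and demand responds to the price, so a strong enough consumer response amplifies fluctuations instead of damping them. All coefficients are invented for illustration:

# Caricature of price/demand feedback under real-time pricing: the price is
# set from the last observed demand, and demand responds to the price. With a
# strong enough response the loop amplifies fluctuations instead of damping
# them. All coefficients are invented for illustration.
def simulate(elasticity, steps=6, demand=1.10):
    target, price_gain = 1.0, 1.0
    out = []
    for _ in range(steps):
        price = 1.0 + price_gain * (demand - target)   # utility reacts to demand
        demand = target - elasticity * (price - 1.0)   # consumers react to price
        out.append(round(demand, 3))
    return out

print(simulate(elasticity=0.5))   # deviations shrink: damped
print(simulate(elasticity=1.5))   # deviations grow: amplified oscillation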
In the media
In 2013, Take Back Your Power, an independent Canadian documentary directed by Josh del Sol, was released, describing "dirty electricity" and the aforementioned issues with smart meters. The film explores the various contexts of the health, legal, and economic concerns. It features narration from the mayor of Peterborough, Ontario, Daryl Bennett, as well as American researcher De-Kun Li, journalist Blake Levitt, and Dr. Sam Milham. It won a Leo Award for best feature-length documentary and the Annual Humanitarian Award from Indie Fest the following year.
UK roll-out criticism
In a 2011 submission to the Public Accounts Committee, Ross Anderson wrote that Ofgem was "making all the classic mistakes which have been known for years to lead to public-sector IT project failures" and that the "most critical part of the project—how smart meters will talk to domestic appliances to facilitate demand response—is essentially ignored."
Citizens Advice said in August 2018 that 80% of people with smart meters were happy with them, but that it had received 3,000 calls in 2017 about problems. These related to first-generation smart meters losing their functionality, aggressive sales practices, and customers still having to send meter readings.
Ross Anderson of the Foundation for Information Policy Research has criticised the UK's program on the grounds that it is unlikely to lower energy consumption, is rushed and expensive, and does not promote metering competition. Anderson writes, "the proposed architecture ensures continued dominance of metering by energy industry incumbents whose financial interests are in selling more energy rather than less," and urged ministers "to kill the project and instead promote competition in domestic energy metering, as the Germans do – and as the UK already has in industrial metering. Every consumer should have the right to appoint the meter operator of their choice."
The high number of SMETS1 meters installed has been criticized by Peter Earl, head of energy at the price comparison website comparethemarket.com. He said, "The Government expected there would only be a small number of the first-generation of smart meters before Smets II came in, but the reality is there are now at least five million and perhaps as many as 10 million Smets I meters."
UK smart meters in southern England and the Midlands use the mobile phone network to communicate, so they do not work correctly when phone coverage is weak. A solution has been proposed, but was not operational as of March 2017.
In March 2018 the National Audit Office (NAO), which watches over public spending, opened an investigation into the smart meter program, which had cost £11bn by then, paid for by electricity users through higher bills. The National Audit Office published the findings of its investigation in a report titled "Rolling out smart meters" published in November 2018. The report, amongst other findings, indicated that the number of smart meters installed in the UK would fall materially short of the Department for Business, Energy & Industrial Strategy (BEIS) original ambitions of all UK consumers having a smart meter installed by 2020. In September 2019, smart meter rollout in the UK was delayed for four years.
Ross Anderson and Alex Henney wrote that "Ed Miliband cooked the books" to make a case for smart meters appear economically viable. They say that the first three cost-benefit analyses of residential smart meters found that it would cost more than it would save, but "ministers kept on trying until they got a positive result... To achieve 'profitability' the previous government stretched the assumptions shamelessly".
A counter-fraud officer at Ofgem with oversight of the roll-out of the smart meter program who raised concerns with his manager about many millions of pounds being misspent was threatened in 2018 with imprisonment under section 105 of the Utilities Act 2000, prohibiting disclosure of some information relevant to the energy sector, with the intention of protecting national security. The Employment Appeal Tribunal found that the law was in contravention of the European Convention on Human Rights.
Main Suppliers
The ranking of the top ten smart electricity meter suppliers depends on the ranking method used. Among them are:
Landis+Gyr
Itron
Xylem (formerly Sensus)
Sagemcom
Honeywell / Elster
Kamstrup A/S
Wasion Holdings Limited
Holley Technology Ltd
Gallery
See also
DASH7
Distributed generation
DLMS
Electranet
Home energy monitor
Home idle load
Home network
Meter-Bus
Meter data management
Net metering
Nonintrusive load monitoring
Open metering system
Open smart grid protocol
Power line communication
Smart grid
Utility submetering
Virtual power plant
Notes
References
External links
TIA Smart Utility Networks - U.S. Standardization Process
Demand Response and Advanced Metering Coalition. Definitions.
Advanced Metering Infrastructure (AMI), Department of Primary Industries, Victoria, Australia
Smart Metering Projects Map - Google Maps
Mad about metered billing? They were in 1886, too—Ars Technica
UK Smart Meters
Energy measurement
Electric power distribution
Electric power
Meter
Internet of things | Smart meter | Physics,Chemistry,Technology,Engineering | 8,189 |
549,072 | https://en.wikipedia.org/wiki/National%20Climatic%20Data%20Center | The United States National Climatic Data Center (NCDC), previously known as the National Weather Records Center (NWRC), in Asheville, North Carolina, was the world's largest active archive of weather data.
In 2015, the NCDC merged with two other federal environmental records agencies to become the National Centers for Environmental Information (NCEI).
History
In 1934, the U.S. government established a tabulation unit in New Orleans, Louisiana, to process weather records. Climate records and upper air observations were punched onto cards in 1936. This organization was transferred to Asheville, North Carolina, in 1951, where it became the National Weather Records Center (NWRC), housed in the Grove Arcade Building.
Processing of the climate data was accomplished at Weather Records Processing Centers at Chattanooga, Tennessee; Kansas City, Missouri; and San Francisco, California, until January 1, 1963, when it was consolidated with the NWRC.
In 1967, the agency was renamed the National Climatic Data Center.
In 1995, the NCDC moved into the newly completed Veach-Baley Federal Complex in downtown Asheville.
In 2015, the NCDC merged with the National Geophysical Data Center and the National Oceanographic Data Center to become the National Centers for Environmental Information (NCEI).
Sources
Data were received from a wide variety of sources, including weather satellites, radar, automated airport weather stations, National Weather Service (NWS) Cooperative Observers, aircraft, ships, radiosondes, wind profilers, rocketsondes, solar radiation networks, and NWS Forecast/Warnings/Analyses Products.
Climate focus
The Center provided historical perspectives on climate which were vital to studies on global climate change, the greenhouse effect, and other environmental issues. The Center stored information essential to industry, agriculture, science, hydrology, transportation, recreation, and engineering. These services are still provided by the NCEI.
The NCDC said:
Evidence is mounting that global climate is changing. While it is generally accepted that humans are negatively influencing the climate, the extent to which humans are responsible is still under study. Regardless of the causes, it is essential that a baseline of long-term climate data be compiled; therefore, global data must be acquired, quality controlled, and archived. Working with international institutions such as the International Council of Scientific Unions, the World Data Centers, and the World Meteorological Organization, NCDC develops standards by which data can be exchanged and made accessible.

NCDC provides the historical perspective on climate. Through the use of over a hundred years of weather observations, reference data bases are generated. From this knowledge the clientele of NCDC can learn from the past to prepare for a better tomorrow. Wise use of our most valuable natural resource, climate, is the goal of climate researchers, state and regional climate centers, business, and commerce.
Associated entities
NCDC also maintained the World Data Center for Meteorology, Asheville. The four World Centers (U.S., Russia, Japan and China) have created a free and open environment in which data and dialogue are exchanged.
NCDC maintained the U.S. Climate Reference Network datasets and a vast number of other climate monitoring products.
See also
Climate Prediction Center
Environmental data rescue
Monthly Climatic Data for the World
National Severe Storms Laboratory
NOAA National Operational Model Archive and Distribution System (NOMADS)
State of the Climate
Storm Prediction Center
References
Climate change organizations based in the United States
Climate change assessment and attribution
Climatic Data Center
Oceanography
Asheville, North Carolina
1934 establishments in Louisiana
2015 disestablishments in North Carolina | National Climatic Data Center | Physics,Environmental_science | 728 |
4,055,011 | https://en.wikipedia.org/wiki/Eastern%20falanouc | The eastern falanouc (Eupleres goudotii) is a rare mongoose-like mammal in the carnivoran family Eupleridae endemic to Madagascar .
It is classified alongside the western falanouc (Eupleres major), recognized only in 2010, in the genus Eupleres. Falanoucs have several peculiarities. They have no anal or perineal glands (unlike their closest relative, the fanaloka), nonretractile claws, and a unique dentition: the canines and premolars are backwards-curving and flat. This is thought to be related to their prey, mostly invertebrates, such as worms, slugs, snails, and larvae.
It lives primarily in the lowland rainforests of eastern Madagascar, while E. major is found in northwest Madagascar. It is solitary and territorial, but whether nocturnal or diurnal is unknown. It is small (about 50 centimetres long with a 24-centimetre-long tail) and shy (clawing, not biting, in self-defence). It most closely resembles the mongooses with its long snout and low body, though its colouration is plain and brown (most mongooses have colouring schemes such as striping, banding, or other variations on the hands and feet).
Its life cycle displays periods of fat buildup during April and May, before the dry months of June and July. It has a brief courting period and weaning period, the young being weaned before the next mating season. Its reproductive cycle is fast. The offspring (one per litter) are born in burrows with opened eyes and can move with the mother through dense foliage at only two days old. In nine weeks, the already well-developed young are on solid food and shortly thereafter they leave their mothers. Though it is fast in gaining mobility (so as to follow its mother on forages), it grows at a slower rate than comparatively-sized carnivorans.
"Falanoucs are threatened by habitat loss, humans, dogs and an introduced competitor, the small Indian civet (Viverricula indica)."
Viverricula indica is also a carnivore, and where it has been introduced into the same ecosystems as Eupleres goudotii it shows substantial spatial and temporal overlap with that species. This overlap has been shown to potentially have a negative impact on native carnivore populations such as Eupleres goudotii, because the two species compete for similar resources.
References
Sources
Macdonald, David (ed). The Encyclopedia of Mammals. (New York, 1984)
External links
Eupleres goudotii - Animal Diversity Web
Images and Video - ARKive.org
EDGE species
eastern falanouc
Mammals of Madagascar
Endemic fauna of Madagascar
45,038,082 | https://en.wikipedia.org/wiki/Cross-sector%20biodiversity%20initiative | The Cross-Sector Biodiversity Initiative (CSBI)] is a partnership between IPIECA - the global oil and gas industry association for environmental and social issues, the International Council on Mining and Metals (ICMM) and the Equator Principles Association to develop and share good practices related to management of biodiversity and ecosystem services in the extractive industries.
The initiative supports the broader goals of innovative and transparent application of the mitigation hierarchy in relation to biodiversity and ecosystem services, as defined in the International Finance Corporation (IFC) Performance Standard 6: Biodiversity Conservation and Sustainable Management of Living Natural Resources (2012).
CSBI Charter and Governance
The vision and mission of the initiative are presented in its Charter, developed and released by the three partner associations in 2013. CSBI is run by its member associations and volunteers from member companies and multilateral financing institutions, with the support of a part-time coordinator.
CSBI's Leading Practice Tools
Between 2013 and 2015, CSBI released to the public three tools related to applying the mitigation hierarchy for biodiversity management, available for free on CSBI's website:
The Tool for Aligning Timelines for Project Execution, Biodiversity Management and Financing
Good Practices for the Collection of Biodiversity Baseline Data
The Cross-Sector Guide for Implementing the Mitigation Hierarchy
One-page summaries are available in English, French, Spanish, Italian, Japanese and Russian. Further translations are pending.
References
External links
Official CSBI website
Cross-Sector Guide for Implementing the Mitigation Hierarchy (on The Biodiversity Consultancy's website)
Biodiversity | Cross-sector biodiversity initiative | Biology | 312 |
2,086,384 | https://en.wikipedia.org/wiki/Paul%20Alivisatos | Armand Paul Alivisatos (born November 12, 1959) is a Greek-American chemist and academic administrator who has served as the 14th president of the University of Chicago since September 2021. He is a pioneer in nanomaterials development and an authority on the fabrication of nanocrystals and their use in biomedical and renewable energy applications. He was ranked fifth among the world's top 100 chemists for the period 2000–2010 in the list released by Thomson Reuters.
On September 1, 2021, Alivisatos became the 14th president of the University of Chicago, where he also holds a faculty appointment as the John D. MacArthur Distinguished Service Professor in the Department of Chemistry, the Pritzker School of Molecular Engineering, and the College; and serves as the Chair of the Board of Governors of Argonne National Laboratory and Chair of the Board of Directors of Fermi Forward Discovery Group LLC, the operator of Fermi National Accelerator Laboratory.
Prior to joining the University of Chicago, Alivisatos was the Executive Vice Chancellor and Provost (2017–2021) of the University of California, Berkeley, where he had taught since 1988. He previously served as the Director of the Lawrence Berkeley National Laboratory (2009–2016), and as Berkeley’s interim Vice Chancellor for Research (2016–2017). He held a number of faculty appointments at Berkeley, including the Samsung Distinguished Professor in Nanoscience and Nanotechnology Research and Professor of Chemistry and Materials Science & Engineering. Alivisatos was also the Founding Director of the Kavli Energy Nanosciences Institute (ENSI), an institute on the Berkeley campus launched by the Kavli Foundation to explore the application of nanoscience to sustainable energy technologies.
Early life and education
Paul Alivisatos was born in Chicago, Illinois, to a Greek family, and lived there until the age of 10, when his family moved to Athens, Greece. Alivisatos has said of his years in Greece that it was a great experience for him because he had to learn the Greek language and culture and then catch up with the more advanced students. "When I found something very interesting it was sometimes a struggle for me to understand it the very best that I could," he has said of that experience. "That need to work harder became an important motivator for me." Alivisatos returned to the United States to attend the University of Chicago in the late 1970s.
In 1981, Alivisatos earned a B.A. with honors in chemistry from the University of Chicago. In 1986, he received a Ph.D. in physical chemistry from the University of California, Berkeley, where he worked under Charles Harris. His Ph.D. thesis concerned the photophysics of electronically excited molecules near metal and semiconductor surfaces. He then joined AT&T Bell Labs working with Louis E. Brus, and began research in the field of nanotechnology.
Alivisatos returned to Berkeley in 1988 as an assistant professor of chemistry, becoming associate professor in 1993 and professor in 1995. He served as Chancellor's Professor from 1998 to 2001, and added an appointment as a professor of materials science and engineering in 1999.
Alivisatos' affiliation with Lawrence Berkeley National Lab (or Berkeley Lab) began in 1991 when he joined the staff of the Materials Sciences Division. From 2005 to 2007 Alivisatos served as Berkeley Lab's Associate Laboratory Director for the Physical Sciences area. In 2008, he served as Deputy Lab Director under Berkeley Lab Director Steven Chu, and then as interim director when Chu stepped down to become the Secretary of Energy. He was named the seventh Director of Berkeley Lab on November 19, 2009, by the University of California Board of Regents on the recommendation of UC President Mark Yudof and with the concurrence of the U.S. Department of Energy. He played a critical role in the establishment of the Molecular Foundry, a U.S. Department of Energy's Nanoscale Science Research Center; and was the facility's founding director.
Energy Secretary, Nobel laureate, and fellow Berkeley alumnus Steven Chu noted that Alivisatos is "an incredible scientist with incredible judgment on a variety of issues. He's level-headed and calm, and he has an ability to inspire people…[and he can] take projects from material science to real-world applications."
Research
Alivisatos is an internationally recognized authority on nanochemistry in the synthesis of semiconductor quantum dots and multi-shaped artificial nanostructures. Further, he is a world expert on the chemistry of nanoscale crystals; one of his papers (Science, 271: 933–937, 1996) has been cited over 13,800 times. He is also an expert on how these can be applied, for example as biological markers (e.g., Science, 281: 2013–16, 1998; a paper cited over 10,900 times). In addition, his use of DNA in this area (DNA nanotechnology) has shown the surprising versatility of this molecule. He has used it to direct crystal growth and create new materials, as in Nature, 382: 609–11, 1996, and even to measure nanoscale distances (see Nature Nanotechnology, 1: 47–52, 2006).
He is widely recognized as being the first to demonstrate that semiconductor nanocrystals can be grown into complex elongated and branched shapes, as opposed to simple spheres. Alivisatos proved that controlling the growth of nanocrystals is the key to controlling both their size and shape. This achievement altered the nanoscience landscape and paved the way for a range of new potential applications, including biomedical diagnostics, revolutionary photovoltaic cells, and LED materials.
Nanocrystals
Nanocrystals are aggregates of anywhere from a few hundred to tens of thousands of atoms that combine into a crystalline form of matter known as a "cluster." Typically a few nanometers in diameter, nanocrystals are larger than molecules but smaller than bulk solids and therefore often exhibit physical and chemical properties somewhere in between. Given that a nanocrystal is virtually all surface and no interior, its properties can vary considerably as the crystal grows in size.
Prior to Alivisatos' research, all non-metal nanocrystals were dot-shaped, meaning they were essentially zero-dimensional. No techniques had been reported for making rod-shaped semiconductor nanocrystals that would also be of uniform size. However, in a landmark paper that appeared in the March 2, 2000 issue of the journal Nature, Alivisatos reported on techniques used to select the size but vary the shapes of the nanocrystals produced. This was hailed as a major breakthrough in nanocrystal fabrication because rod-shaped semiconductor nanocrystals can be stacked to create nano-sized electronic devices.
The rod-shaped nanocrystal research, coupled with earlier work led by Alivisatos in which it was shown that quantum dots or "qdots" (nanometer-sized crystal spheres a few billionths of a meter in size) made from semiconductors such as cadmium selenide can emit multiple colors of light depending upon the size of the crystal, opened the door to using nanocrystals as fluorescent probes for the study of biological materials, biomedical research tools and aids to diagnosis, and as light-emitting diodes (LEDs). Alivisatos went on to use his techniques to create an entirely new generation of hybrid solar cells that combined nanotechnology with plastic electronics.
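The size dependence of quantum dot emission described above can be illustrated with the Brus effective-mass approximation, in which quantum confinement widens the band gap as the crystal shrinks. The following is a minimal sketch, not Alivisatos' actual method; the CdSe material parameters are approximate literature values assumed here only for illustration:

# Sketch: size-dependent band gap of a CdSe quantum dot via the Brus
# effective-mass approximation. Material parameters are approximate
# literature values, assumed for illustration only.
import math

H = 6.626e-34         # Planck constant, J*s
E_CHARGE = 1.602e-19  # elementary charge, C
M_E = 9.109e-31       # electron rest mass, kg
EPS0 = 8.854e-12      # vacuum permittivity, F/m
C = 2.998e8           # speed of light, m/s

E_GAP_BULK = 1.74 * E_CHARGE  # bulk CdSe band gap, J (assumed)
ME_EFF = 0.13 * M_E           # effective electron mass (assumed)
MH_EFF = 0.45 * M_E           # effective hole mass (assumed)
EPS_REL = 10.6                # relative permittivity (assumed)

def brus_gap(radius_m):
    """Approximate confined band gap (J) of a spherical nanocrystal."""
    confinement = (H**2 / (8 * radius_m**2)) * (1 / ME_EFF + 1 / MH_EFF)
    coulomb = 1.8 * E_CHARGE**2 / (4 * math.pi * EPS_REL * EPS0 * radius_m)
    return E_GAP_BULK + confinement - coulomb

for r_nm in (1.5, 2.0, 3.0, 4.0):
    energy = brus_gap(r_nm * 1e-9)
    wavelength_nm = H * C / energy * 1e9
    print(f"radius {r_nm} nm -> emission ~{wavelength_nm:.0f} nm")

The computed trend, with smaller crystals emitting shorter (bluer) wavelengths, matches the behavior described above, although the effective-mass approximation overestimates the shift for the smallest dots.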
Applications
Alivisatos is the founding scientist of Quantum Dot Corporation, a company that makes crystalline nanoscale tags that are used in the study of cell behavior. (Quantum Dot is now part of Life Technologies.) He also founded the nanotechnology company Nanosys, and Solexant, a photovoltaic start-up that has since restarted as Siva Power. His research has led to the development of applications in a range of industries, including bioimaging (for example, the use of quantum dots for luminescent labeling of biological tissue); display technologies (his quantum dot emissive film is found in the Kindle Fire HDX tablet); and renewable energy (solar applications of quantum dots).
University of Chicago
Alivisatos became president of the University of Chicago on September 1, 2021. He is the 14th president of the University of Chicago, succeeding Robert J. Zimmer who was president from 2006 to 2021. Alivisatos also serves as a John D. MacArthur Distinguished Service Professor in the Department of Chemistry, Pritzker School of Molecular Engineering, and the College.
Lawrence Berkeley National Lab
Under Alivisatos’ leadership, the Lawrence Berkeley National Lab shifted its priorities to the more interdisciplinary areas of renewable energy and climate-change research. During his tenure, the Lab began construction on new buildings for computational research, building efficiencies, solar energy research, and biological science.
Alivisatos focused on integrating the Lab into the nation's innovation ecosystem, especially in the areas of energy and the environment. While some of the groundwork for this integration was laid by former Director Steve Chu, Alivisatos led efforts to leverage the wide range of scientific capabilities at Berkeley Lab with a variety of industry partners and entrepreneurs. These public/private sector collaborations resulted in technology transfer for industries as diverse as automobiles and medicine, and contributed to an increased speed of development in manufacturing and renewable energy. On March 23, 2015, Alivisatos announced that he would step down as Director when a replacement was identified.
Alivisatos has also been outspoken on the issue of basic science funding at the federal level and America's ability to stay competitive in the areas of global scientific research and development.
Personal life
Alivisatos is married to Nicole Alivisatos, a retired chemist, former editor of the journal Nano Letters, and daughter of the noted chemist Gábor A. Somorjai. They have two daughters.
Awards and honors
1991–1995 – Presidential Young Investigator Award;
1991 – Alfred P. Sloan Foundation fellowship;
1991 – ACS Exxon Solid State Chemistry Fellowship;
1994 – Coblentz Award for Advances in Molecular Spectroscopy;
1994 – Wilson Prize at Harvard;
1994 – Department of Energy Award for Outstanding Scientific Accomplishment in Materials Chemistry;
1995 – Materials Research Society Outstanding Young Investigator Award;
1997 – Department of Energy Award for Sustained Outstanding Research in Materials Chemistry;
2005 – Colloid and Surface Chemistry American Chemical Society Award;
2006 – E. O. Lawrence Award;
2006 – Eni Italgas prize for Energy and Environment;
2006 – The Rank Prize (Optoelectronics);
2006 – University of Chicago's Distinguished Alumni Award (Professional Achievement);
2008 – Kavli Distinguished Lectureship in Nanoscience, Materials Research Society;
2009 – Nanoscience Prize, International Society for Nanoscale Science, Computation & Engineering;
2010 – Medaglia teresiana, University of Pavia;
2011 – Linus Pauling Award;
2011 – Von Hippel Award, Materials Research Society;
2012 – Wolf Prize in Chemistry;
2014 – National Medal of Science;
2014 – ACS Award in the Chemistry of Materials;
2015 – Axion Award, Hellenic American Professional Society;
2015 – Spiers Memorial Award, Royal Society of Chemistry;
2016 – Dan David Prize for nanoscience research;
2017 – NAS Award in Chemical Sciences;
2019 – Welch Award in Chemistry;
2020 – BBVA Foundation Frontiers of Knowledge Award;
2021 – Priestley Medal;
2024 – Kavli Prize in Nanoscience;
2024 – Enrico Fermi Award
In addition to those listed above, Alivisatos has held fellowships with the American Association for the Advancement of Science, the American Physical Society (1996), and the American Chemical Society. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences.
Selected publications
For a full list of publications, see http://www.cchem.berkeley.edu/pagrp/publications.html
Editorships
Alivisatos is the founding editor of Nano Letters, a publication of the American Chemical Society. He formerly served on the Senior Editorial Board of Science. He has also served on the editorial advisory boards of ACS Nano, the Journal of Physical Chemistry, Chemical Physics, the Journal of Chemical Physics, and Advanced Materials.
References
External links
Alivisatos Research Group at the University of California at Berkeley
Lawrence Berkeley National Lab
Kavli Energy Nanoscience Institute
21st-century American chemists
American people of Greek descent
Living people
American nanotechnologists
UC Berkeley College of Letters and Science alumni
University of Chicago alumni
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Wolf Prize in Chemistry laureates
Enrico Fermi Award recipients
1959 births
UC Berkeley College of Chemistry faculty
Fellows of the American Physical Society
Solid state chemists
20th-century Greek Americans
20th-century Greek scientists | Paul Alivisatos | Chemistry | 2,664 |
907,227 | https://en.wikipedia.org/wiki/Blurb | A blurb is a short promotional piece accompanying a piece of creative work. It may be written by the author or publisher or quote praise from others. Blurbs were originally printed on the back or rear dust jacket of a book. With the development of the mass-market paperback, they were placed on both covers by most publishers. Now they are also found on web portals and news websites. A blurb may introduce a newspaper or a book.
History
In the US, the history of the blurb is said to begin with Walt Whitman's collection, Leaves of Grass. In response to the publication of the first edition in 1855, Ralph Waldo Emerson sent Whitman a congratulatory letter, including the phrase "I greet you at the beginning of a great career": the following year, Whitman had these words stamped in gold leaf on the spine of the second edition.
The word blurb was coined in 1906 by American humorist Gelett Burgess (1866–1951). The October 1906 first edition of his short book Are You a Bromide? was presented in a limited edition to an annual trade association dinner. The custom at such events was to have a dust jacket promoting the work with, as Burgess' publisher B. W. Huebsch described it, "the picture of a damsel—languishing, heroic, or coquettish—anyhow, a damsel on the jacket of every novel".
In this case, the jacket proclaimed "YES, this is a 'BLURB'!" and the picture was of a (fictitious) young woman "Miss Belinda Blurb" shown calling out, described as "in the act of blurbing." The name and term stuck for any publisher's contents on a book's back cover, even after the picture was dropped and only the text remained.
In Germany, the blurb is regarded as having been invented by Karl Robert Langewiesche around 1902. In German bibliographic usage, it is usually located on the second page of the book underneath the half title, or on the dust cover.
Books
A blurb on a book can be any combination of quotes from the work, the author, the publisher, reviews or fans, a summary of the plot, a biography of the author or simply claims about the importance of the work.
In the 1980s, Spy ran a regular feature called "Logrolling in Our Time" which exposed writers who wrote blurbs for one another's books.
Blurb requests
Prominent writers can receive large volumes of blurb requests from aspiring authors. This has led some writers to turn down such requests as a matter of policy. For example, Gary Shteyngart announced in The New Yorker that he would no longer write blurbs, except for certain writers with whom he had a professional or personal connection. Neil Gaiman reports that "Every now and again, I stop doing blurbs.... The hiatus lasts for a year or two, and then I feel guilty or someone asks me at the right time, and I relent." Jacob M. Appel reports that he received fifteen to twenty blurb requests per week and tackles "as many as I can."
Parody blurbs
Many humorous books and films parody blurbs that deliver exaggerated praise by unlikely people and insults disguised as praise.
Monty Python and the Holy Grail – "Makes Ben Hur look like an Epic"
1066 and All That – "We look forward keenly to the appearance of their last work"
The Harvard Lampoon satire of The Lord of the Rings, titled Bored of the Rings, deliberately used fake blurbs by deceased authors on the inside cover. One of the blurbs stated "One of the two or three books ...", and nothing else.
Film
Movie blurbs are part of the promotional campaign for films, and usually consist of positive, colorful extracts from published reviews.
Movie blurbs have often been faulted for taking words out of context. The New York Times reported that "the blurbing game is also evolving as newspaper film critics disappear and studios become more comfortable quoting Internet bloggers and movie Web sites in their ads, a practice that still leaves plenty of potential for filmgoers to be bamboozled. Luckily for consumers, there is a cavalry: blurb watchdog sites have sprung up and the number of Web sites that aggregate reviews by established critics is steadily climbing. ... Helping to keep studios in line these days are watchdog sites like eFilmCritic.com and The Blurbs, a Web column for Gelf magazine written by Carl Bialik of The Wall Street Journal."
Slate wrote in an "Explainer" column: "How much latitude do movie studios have in writing blurbs? A fair amount. There's no official check on running a misleading movie blurb, aside from the usual laws against false advertising. Studios do have to submit advertising materials like newspaper ads and trailers to the Motion Picture Association of America for approval. But the MPAA reviews the ads for their tone and content, not for the accuracy of their citations. ... As a courtesy, studios will often run the new, condensed quote by the critic before sending it to print."
Many examples exist of blurb used in marketing a film being traceable directly back to the film's marketing team.
References and sources
References
Sources
The story of Miss Belinda Blurb at wordorigins.org
Original dust jacket at the Library of Congress
Bibliography
(Includes bibliography)
"'Riveting!': The Quandary of the Book Blurb", New York Times, March 6, 2012
External links
Book design
Book terminology
Book publishing
Promotion and marketing communications | Blurb | Engineering | 1,164 |
169,934 | https://en.wikipedia.org/wiki/Modal%20particle | In linguistics, modal particles are always uninflected words, and are a type of grammatical particle. They are used to indicate how the speaker thinks that the content of the sentence relates to the participants' common knowledge or to add emotion to the meaning of the sentence. Languages that use many modal particles in their spoken form include Dutch, Danish, German, Hungarian, Russian, Telugu, Nepali, Norwegian, Indonesian, Sinitic languages, and Japanese. The translation is often not straightforward and depends on the context.
Examples
German
The German particle ja is used to indicate that a sentence contains information that is obvious or already known to both the speaker and the hearer. The sentence Der neue Teppich ist rot means "The new carpet is red". Der neue Teppich ist ja rot may thus mean "As we are both aware, the new carpet is red", which would typically be followed by some conclusion from this fact. However, if the speaker says the same thing upon first seeing the new carpet, the meaning is "I'm seeing that the carpet is obviously red", which would typically express surprise. In speech the latter meaning can be inferred from a strong emphasis on rot and higher-pitched voice.
Dutch
In Dutch, modal particles are frequently used to add mood to a sentence, especially in spoken language. For instance:
Politeness
Kan je even het licht aandoen? (literally: "Can you briefly turn on the light?" with the added "even" indicating that it will not take you long to do so.)
Weet u misschien waar het station is? ("Do you perhaps know where the train station is?") Misschien here denotes a very polite and friendly request: "Could you tell me the way to the train station, please?"
Wil je soms wat drinken? ("Do you occasionally want a drink?") Soms here conveys a sincere interest in the answer to a question: "I'm curious if you would like to drink something?"
Frustration
Doe het toch maar. ("Do it nevertheless, however.") Toch here indicates anger and maar lack of consideration: "I don't really care what you think, just do it!"
Ben je nou nog niet klaar? ("Are you still not ready yet?") Nou here denotes loss of patience: "Don't tell me you still haven't finished!"
Modal particles may be combined to indicate mood in a very precise way. In this combination of six modal particles the first two emphasise the command, the second two tone down the command, and the final two transform the command into a request:
Luister dan nu toch maar eens even. ("Listen + at this moment + now + just + will you? + only once + only for a while", meaning: "Just listen, will you?")
Because of this progressive alteration these modal particles cannot move around freely when stacked in this kind of combination. However, some other modal particles can be added to the equation on any given place, such as gewoon, juist, trouwens. Also, replacing the "imperative weakener" maar by gewoon (indicating normalcy or acceptable behavior), changes the mood of the sentence completely, now indicating utter frustration with someone who is failing to do something very simple:
Luister dan nou toch gewoon eens even! ("For once, can you just simply listen for a minute?")
References
Parts of speech | Modal particle | Technology | 746 |
4,238,932 | https://en.wikipedia.org/wiki/Glow%20plate | Glow plates are sheets of glass or plastic that "glow" when light is supplied to one of their edges.
The light source for a glow plate can be artificial, such as fluorescent light, or natural, with sunlight being directly exposed to the plate or fed through a fiber-optic system.
A joint effort between Florida State University and Oak Ridge National Laboratory is focused on the design of a "spiral bio-reactor light sheet", which consists of a plexiglas sheet that has been micro-etched on one side and rolled into a spiral shape.
Aside from aesthetic or utilitarian lighting purposes, much interest in using glow plates as a source of light comes from recent developments in algal cultivation.
External links
Algae used to mitigate carbon dioxide emissions The energy blog
Fabrication of spiral bio-reactor light sheets Student abstracts: engineering at ORNL
Lighting
Fiber optics
Algaculture | Glow plate | Biology | 177 |
76,549,012 | https://en.wikipedia.org/wiki/Sichere%20Inter-Netzwerk%20Architektur | Sichere Inter-Netzwerk Architektur (SINA) is a cryptographic system developed by and a product of the Bundesamt für Sicherheit in der Informationstechnik, an arm of the German government. It also has the anglicized name Secure Inter-Network Architecture. As of April 2024 the cryptosystem was employed by "many European governments for transmitting classified information".
History
SINA came to prominence in March 2023, when a court warrant for the arrest of the Austrian intelligence officer Egisto Ott, an associate of Wirecard fraudster Jan Marsalek, divulged that a SINA laptop had been handed to the Russian secret services, specifically the Lubyanka Building office of the FSB.
The Austrian authorities report that Ott received €20,000 for the laptop from Marsalek.
References
Cryptographic software
Communications in Germany
Federal Ministry of the Interior (Germany)
German history stubs | Sichere Inter-Netzwerk Architektur | Mathematics | 204 |
42,133,703 | https://en.wikipedia.org/wiki/HTC%20Desire%20310 | The HTC Desire 310 is an Android-based smartphone designed and manufactured by HTC. The entry-level, dual-SIM phone runs Android 4.2 Jelly Bean and features a 4.5-inch screen, a 1.3 GHz quad-core processor, and 1 GB of RAM.
See also
HTC Desire 610
HTC Desire 816
References
Android (operating system) devices
Desire 310
Mobile phones introduced in 2014
Discontinued smartphones
Mobile phones with user-replaceable battery | HTC Desire 310 | Technology | 117 |
3,973 | https://en.wikipedia.org/wiki/Bicycle | A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle, with two wheels attached to a frame, one behind the other. A bicycle rider is called a cyclist, or bicyclist.
Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion bicycles. There are many more bicycles than cars. Bicycles are the principal means of transport in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys. Bicycles are used for fitness, military and police applications, courier services, bicycle racing, and artistic cycling.
The basic shape and configuration of a typical upright or "safety" bicycle has changed little since the first chain-driven model was developed around 1885. However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular.
The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets, and tension-spoked wheels.
Etymology
The word bicycle first appeared in English print in The Daily News in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne". The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage. The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time.
Other words for bicycle include "bike", "pushbike", "pedal cycle", or "cycle". In Unicode, the code point for "bicycle" is U+1F6B2. In HTML, the character reference &#x1F6B2; produces 🚲.
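The code point can be checked in a few lines; this minimal Python sketch shows the character, its code point, and its UTF-8 encoding:

# Minimal check of the Unicode BICYCLE character, U+1F6B2.
bike = chr(0x1F6B2)
print(bike)                    # 🚲
print(f"U+{ord(bike):X}")      # U+1F6B2
print(bike.encode("utf-8"))    # b'\xf0\x9f\x9a\xb2'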
Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle refers only to a two-wheeler using pedal power whereas the term bike is used to describe a two-wheeler using internal combustion engine or electric motors as a source of motive power instead of motorcycle/motorbike.
History
The "dandy horse", also called Draisienne or Laufmaschine ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle and von Drais is seen as the "father of the bicycle", but it did not have pedals. Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818. Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel.
The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed. He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings.
In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This was the first bicycle design to be mass-produced. Another French inventor named Douglas Grasso had a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris. The French vélocipède, made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind). It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley, used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory.
The dwarf ordinary addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing—effected in a variety of ways—to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J.K. Starley (nephew of James Starley), J.H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson), connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry is usually described as the first recognizably modern bicycle. Soon the seat tube was added which created the modern bike's double-triangle diamond frame.
Further innovations increased comfort and ushered in a second bicycle craze, the 1890s Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tyres in 1889, winning the tyre's first-ever races in Ireland and then England. Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders.
The Svea Velocipede with vertical pedal arrangement and locking hubs was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World Fair and was produced in a few thousand units.
In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles, such as the horse and buggy or the horsecar. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878. By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year.
Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices. More than 1 billion bicycles have been manufactured worldwide as of the early 21st century. Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motor vehicle, is the Chinese Flying Pigeon, with numbers exceeding 500 million. The next most numerous vehicle, the Honda Super Cub motorcycle, has more than 100 million units made, while the most-produced car, the Toyota Corolla, has reached 44 million and counting.
Uses
Bicycles are used for transportation, bicycle commuting, and utility cycling. They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries.
They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX.
Technical aspects
The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort.
Types
Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles.
Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles".
Dynamics
A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels. This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself.
The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it. This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands or indirectly by leaning the bicycle.
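The lean required for a given turn follows from balancing gravity against centripetal acceleration. The sketch below uses the idealized point-mass relation tan(theta) = v^2 / (g * r); it ignores tire width and gyroscopic effects, so it is an illustration rather than a full dynamic model:

# Sketch: lean angle needed to balance a steady circular turn, from the
# idealized point-mass relation tan(theta) = v^2 / (g * r).
import math

G = 9.81  # gravitational acceleration, m/s^2

def lean_angle_deg(speed_ms: float, turn_radius_m: float) -> float:
    """Lean angle (degrees from vertical) for a steady circular turn."""
    return math.degrees(math.atan(speed_ms**2 / (G * turn_radius_m)))

# Example: 18 km/h (5 m/s) around a 10 m radius corner
print(f"{lean_angle_deg(5.0, 10.0):.1f} degrees")  # ~14.3 degrees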
Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally. The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie.
Performance
The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of energy a person must expend to travel a given distance. From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%. In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation.
A human traveling on a bicycle at low to medium speeds of around 16–24 km/h (10–15 mph) uses only the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position. Drag can also be reduced by covering the bicycle with an aerodynamic fairing. The fastest recorded unpaced speed on a flat surface, 144.17 km/h (89.59 mph), was achieved in a fully faired recumbent.
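Since drag force grows with the square of speed, the power needed to overcome it grows roughly with its cube, which is why small speed gains at the high end are expensive. The sketch below combines this with rolling resistance and the roughly 95% drivetrain efficiency mentioned above; all coefficient values are illustrative assumptions, not measurements:

# Rough model of the rider power needed to hold a steady speed on flat
# ground with no wind: aerodynamic drag plus rolling resistance, divided
# by an assumed drivetrain efficiency. Coefficients are illustrative.
RHO = 1.225            # air density, kg/m^3
CDA = 0.5              # drag area of an upright rider, m^2 (assumed)
CRR = 0.005            # rolling resistance coefficient (assumed)
MASS = 85.0            # rider plus bicycle, kg (assumed)
G = 9.81               # gravitational acceleration, m/s^2
DRIVETRAIN_EFF = 0.95  # assumed chain-drive efficiency

def rider_power_watts(v: float) -> float:
    """Rider power (W) to sustain speed v (m/s) on flat ground."""
    p_air = 0.5 * RHO * CDA * v**3   # grows with the cube of speed
    p_roll = CRR * MASS * G * v      # grows linearly with speed
    return (p_air + p_roll) / DRIVETRAIN_EFF

for kmh in (15, 25, 35):
    v = kmh / 3.6
    print(f"{kmh} km/h -> ~{rider_power_watts(v):.0f} W")

With these assumptions, roughly 40 W suffices at 15 km/h (comparable to walking), while 35 km/h demands over 300 W, illustrating the cubic growth of aerodynamic power.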
In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than that generated by energy efficient motorcars.
Parts
Frame
The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike. These upright bicycles almost always feature the diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends.
Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a step-through frame or as an open frame, allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the mixte, which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of its persistent image as a "women's" bicycle, step-through frames are not common for larger frames.
Step-throughs were popular partly for practical reasons and partly for social mores of the day. For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top-tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount—in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horse sidesaddle.
Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle but this type was banned from competition in 1934 by the Union Cycliste Internationale.
Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength to weight ratio. A typical modern carbon fiber frame can weigh less than 1 kg (2.2 lb).
Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with high strength-to-weight ratio and stiffness has been used for bicycles since 1894. Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models.
Drivetrain and gearing
The drivetrain begins with pedals which rotate the cranks, which are held in axis by the bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive to transmit power, or special belts. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex.
Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 11 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer.
An alternative to chaindrive is to use a synchronous belt. These are toothed and work much the same as a chain—popular with commuters and long distance cyclists they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear.
Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals.
With a chain drive transmission, a chainring attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: two-speed hub gear integrated with chain ring, up to 3 chain rings, up to 12 sprockets, hub gear built into rear wheel (3-speed to 14-speed). The most common options are either a rear hub or multiple chain rings combined with multiple sprockets (other combinations of options are possible but less common).
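The arithmetic behind these options is simple: each chainring-to-sprocket pairing gives a ratio, and that ratio times the wheel circumference gives the distance travelled per crank revolution. A minimal sketch, assuming a hypothetical 2x11 road drivetrain and a ~2.1 m rolling circumference:

# Sketch: gear ratios and development (metres travelled per crank turn)
# for an assumed 2x11 road drivetrain with a 700c wheel. The tooth counts
# and circumference are typical values assumed for illustration.
WHEEL_CIRCUMFERENCE_M = 2.1  # assumed effective rolling circumference
CHAINRINGS = (34, 50)        # teeth, assumed compact crankset
SPROCKETS = (11, 13, 15, 17, 19, 21, 23, 25, 28, 32, 36)  # assumed cassette

combos = [(f, r, f / r) for f in CHAINRINGS for r in SPROCKETS]
print(f"{len(combos)} theoretical gears")  # 2 x 11 = 22

lowest = min(combos, key=lambda c: c[2])
highest = max(combos, key=lambda c: c[2])
for label, (f, r, ratio) in (("lowest", lowest), ("highest", highest)):
    dev = ratio * WHEEL_CIRCUMFERENCE_M
    print(f"{label}: {f}x{r} -> ratio {ratio:.2f}, {dev:.2f} m per crank turn")

With this assumed setup, the lowest pairing moves the bicycle about 2 m per crank revolution and the highest nearly 10 m, spanning the climbing-to-descending range described above; in practice some of the 22 theoretical gears overlap or are avoided because of chain angle.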
Steering
The handlebars connect to the stem that connects to the fork that connects to the front wheel, and the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. Upright handlebars, the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. Drop handlebars "drop" as they curve forward and down, offering the cyclist best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of sweep backward and centimeters rise upwards, as well as wider widths which can provide better handling due to increased leverage against the wheel.
Seating
Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating the genders' differing anatomies and sit bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock but can add to the overall weight of the bicycle.
A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat steering.
Brakes
Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, where the mechanism is contained within the wheel hub, or disc brakes, where pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disc brakes. Disc brakes are more common for mountain bikes, tandems and recumbent bicycles than on other types of bicycles, due to their increased power, coupled with an increased weight and complexity.
With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back pedal coaster brakes which were popular in North America until the 1960s.
Track bicycles do not have brakes, because all riders ride in the same direction around a track which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear wheel brake, but not as effective as a front wheel brake.
Suspension
Bicycle suspension refers to the system or systems used to suspend the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort.
Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot.
Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension.
Wheels and tires
The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels.
Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually between 38 and 64 mm (1.5 and 2.5 in) wide, and have treads for gripping in muddy conditions or metal studs for ice.
Groupset
Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars.
Accessories
Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, UK), fenders are called mudguards. The chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth. Kick stands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one, or both of the wheel hubs to either help the rider perform certain tricks, or allow a place for extra riders to stand, or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both.
Toe-clips, toestraps, and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data, etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bells. Bicycle lights, reflectors, and helmets are required by law in some geographic regions depending on the legal code. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form-fitting and high-visibility clothing.
Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads. Training wheels are sometimes used when learning to ride, but a dedicated balance bike teaches independent riding more effectively.
Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions. Helmets may be classified as an accessory or as an item of clothing.
Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races or indoors when riding conditions are unfavorable.
Standards
A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety.
The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, that has the scope of "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability".
The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Their mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry.
Maintenance and repair
Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools.
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. bottom bracket).
There are several hundred assisted-service Community Bicycle Organizations worldwide. At a Community Bicycle Organization, laypeople bring in bicycles needing repair or maintenance; volunteers teach them how to do the required steps.
Full service is available from bicycle mechanics at a local bike shop.
In areas where it is available, some cyclists purchase roadside assistance from companies such as the Better World Club or the American Automobile Association.
Maintenance
The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference as to how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycles use much higher pressures than cars: car tires are normally inflated to around 2 bar (30 psi), whereas bicycle tires typically range from roughly 2.5 bar (35 psi) for wide mountain-bike tires to 8 bar (115 psi) or more for narrow road tires.
Another basic maintenance item is regular lubrication of the chain and pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last for 10,000 km (6,000 mi) or more. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease.
The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time, typically every few hundred kilometres or so. Most local bike shops will do such checks for free. Note that when a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chain ring(s), so replacing a chain when only moderately worn will prolong the life of other components.
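One common way to decide when a chain is "badly worn" is to measure its elongation. The conventional rule of thumb, an assumption here rather than something stated in this article, is to replace at roughly 0.75% elongation, or about 0.5% for 11- and 12-speed drivetrains. A minimal sketch:

# Sketch: chain-wear check based on elongation. The replacement threshold
# is a conventional rule of thumb (an assumption), not from this article.
NOMINAL_PITCH_MM = 12.7  # standard half-inch pitch of bicycle chains

def elongation_percent(measured_span_mm: float, links: int) -> float:
    """Percent elongation of a chain span covering `links` full links."""
    nominal = links * NOMINAL_PITCH_MM
    return (measured_span_mm - nominal) / nominal * 100

# Example: a 10-link span (nominal 127.0 mm) measuring 128.1 mm
wear = elongation_percent(128.1, 10)
print(f"elongation: {wear:.2f}% -> {'replace' if wear > 0.75 else 'ok'}")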
Over the longer term, tires do wear out after a few thousand kilometres; a rash of punctures is often the most visible sign of a worn tire.
Repair
Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice.
The most common roadside problem is a puncture of the tire's inner tube. A patch kit may be employed to fix the puncture or the tube can be replaced, though the latter solution comes at a greater cost and waste of material. Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove.
Tools
There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches, or tube-patching material, an adhesive, a piece of sandpaper or a metal grater (for roughening the tube surface to be patched) and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically, the frequently lubricated ball-bearing "cones". There are also cycling-specific multi-tools that combine many of these implements into a single compact device. More specialized bicycle components may require more complex tools, including proprietary tools specific for a given manufacturer.
Social and historical aspects
The bicycle has had a considerable effect on human society, in both the cultural and industrial realms.
In daily life
Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses. Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast.
In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution. A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs. The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution. In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax break scheme (IR 176) that allows employees to buy a new bicycle tax free to use for commuting.
In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that the parking capacity may be exceeded, while in some places such as Delft the capacity is usually exceeded. In Trondheim in Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front.
There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal.
In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheel chairs.
Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker".
Bicycles offer an important mode of transport in many developing countries. Until recently, bicycles have been a staple of everyday life throughout Asian countries. They are the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are commonly used. They also offer a degree of exercise to keep individuals healthy.
Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world.
Poverty alleviation
Female emancipation
The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States. The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets and ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers.
The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a New York World interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood." In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way, a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism. Willard used a cycling metaphor to urge other suffragists to action.
In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach.
Economic implications
Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft.
Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company which designed, manufactured and sold their bicycles during the bike boom of the 1890s.
They also served to pioneer the industrial models later adopted elsewhere, including mechanization and mass production (later copied and adopted by Ford and General Motors), vertical integration (also later copied and adopted by Ford), aggressive advertising (as much as 10% of all advertising in U.S. periodicals in 1898 was placed by bicycle makers), and lobbying for better roads (which had the side benefit of acting as advertising and of improving sales by providing more places to ride), all first practiced by Pope. In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful.
Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites. In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, it paved the way for the likes of the Barbie doll.
Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers, traveling seamstresses, riding academies, and racing rinks. Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners, and specialized lights, socks and shoes, and even cameras, such as the Eastman Company's Poco. Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap.
They also presaged a move away from public transit that would explode with the introduction of the automobile.
J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and then renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers. Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started from manufacturing bicycles, in Glasgow in March 1885.
In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.) In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames.
Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become more wealthy, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles. One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China.
Amid the European financial crisis, bicycle sales in Italy in 2011 (1.75 million) surpassed new car sales.
Environmental impact
One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption. (Ballantine, 1972) The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility. Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities. They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of any mode of travel.
Manufacturing
The global bicycle market was worth $61 billion in 2011. As of 2011, 130 million bicycles were sold every year globally, and 66% of them were made in China.
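As a rough illustration of what the figures above imply, here is a minimal Python sketch. The inputs are the numbers quoted in this section; the per-bicycle average revenue is a derived illustration, not a sourced statistic.

```python
# Figures quoted above (2011): global market size, annual unit sales, Chinese share.
market_usd = 61e9      # $61 billion global bicycle market
units_sold = 130e6     # 130 million bicycles sold per year
china_share = 0.66     # 66% of units made in China

made_in_china = units_sold * china_share        # ~85.8 million units
avg_revenue_per_bike = market_usd / units_sold  # ~$469 per bicycle (derived, illustrative)

print(f"Units made in China: {made_in_china:,.0f}")
print(f"Implied average revenue per bicycle: ${avg_revenue_per_bike:,.2f}")
```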
Legal requirements
Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists.
The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator or driver. The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition.
In some countries, bicycles must have functioning front and rear lights when ridden after dark.
Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world. Proponents argue that it reduces head injuries and is thus an acceptable requirement; opponents argue that by making cycling seem more dangerous and cumbersome, it reduces the number of cyclists on the streets, creating an overall negative health effect (fewer people cycling for their own health, and the remaining cyclists being more exposed through a reversed safety-in-numbers effect).
Theft
Bicycles are popular targets for theft, due to their value and ease of resale. The number of bicycles stolen annually is difficult to quantify, as a large number of thefts are not reported. In a Montreal survey published in the International Journal of Sustainable Transportation, around 50% of participants reported having had a bicycle stolen at some point during their lives as active cyclists. Most bicycles have serial numbers that can be recorded to verify identity in case of theft.
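Serial-number registration, mentioned above, is at heart a simple lookup problem: record each frame's serial number against an owner, then check recovered bicycles against the record. The sketch below in Python is purely illustrative; the registry structure, function names, and serial numbers are hypothetical and do not reflect any real registration scheme's API.

```python
# Hypothetical in-memory registry mapping frame serial numbers to owners.
registry: dict[str, str] = {}

def register(serial: str, owner: str) -> None:
    """Record a bicycle's frame serial number against its owner."""
    registry[serial] = owner

def check(serial: str) -> str | None:
    """Look up the recorded owner of a recovered bicycle, if any."""
    return registry.get(serial)

register("WTU123K4567L", "A. Rider")  # hypothetical serial number and owner
print(check("WTU123K4567L"))          # -> A. Rider
print(check("UNKNOWN000"))            # -> None (no record on file)
```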
See also
Bicycle and motorcycle geometry
Bicycle drum brake
Bicycle fender
Bicycle lighting
Bicycle parking station
Bicycle-friendly
Bicycle-sharing system
Cyclability
Cycling advocacy
Cycling in the Netherlands
Danish bicycle VIN-system
List of bicycle types
List of films about bicycles and cycling
Outline of bicycles
Outline of cycling
rattleCAD (software for bicycle design)
Skirt guard
Twike
Velomobile
Wooden bicycle
World Bicycle Day
Notes
References
Citations
Sources
General
Further reading
External links
A History of Bicycles and Other Cycles at the Canada Science and Technology Museum
19th-century inventions
Appropriate technology
Articles containing video clips
German inventions
Sustainable technologies
Sustainable transport | Bicycle | Physics | 8,770 |
319,888 | https://en.wikipedia.org/wiki/Blame | Blame is the act of censuring, holding responsible, or making negative statements about an individual or group, asserting that their actions or inaction are socially or morally irresponsible; it is the opposite of praise. When someone is morally responsible for doing something wrong, their action is blameworthy. By contrast, when someone is morally responsible for doing something right, it may be said that their action is praiseworthy. There are other senses of praise and blame that are not ethically relevant. One may praise someone's good dress sense, and blame their poor sense of style for their bad dress sense.
Philosophy
Philosophers discuss the concept of blame as one of the reactive attitudes, a term coined by P. F. Strawson, which includes attitudes like blame, praise, gratitude, resentment, and forgiveness. In contrast to physical or intellectual concepts, reactive attitudes are formed from the point of view of an active participant regarding objects. This is to be distinguished from the objective standpoint.
Neurology
Blaming appears to involve brain activity in the temporoparietal junction (TPJ). The amygdala has been found to contribute when we blame others, but not when we respond to their positive actions.
Sociology and psychology
Humans—consciously and unconsciously—constantly make judgments about other people. The psychological criteria for judging others may be partly ingrained, negative, and rigid, indicating some degree of grandiosity.
Blaming provides a way of devaluing others, with the result that the blamer feels superior, seeing others as less worthwhile and/or making the blamer "perfect". Off-loading blame means putting the other person down by emphasizing their flaws.
Victims of manipulation and abuse frequently feel responsible for causing negative feelings in the manipulator/abuser towards them and the resultant anxiety in themselves. This self-blame often becomes a major feature of victim status.
The victim gets trapped into a self-image of victimization. The psychological profile of victimization includes a pervasive sense of helplessness, passivity, loss of control, pessimism, negative thinking, strong feelings of guilt, shame, remorse, self-blame, and depression. This way of thinking can lead to hopelessness and despair.
Self-blame
Two main types of self-blame exist:
behavioral self-blame – undeserved blame based on actions. Victims who experience behavioral self-blame feel that they should have done something differently, and therefore feel at fault.
characterological self-blame – undeserved blame based on character. Victims who experience characterological self-blame feel there is something inherently wrong with them which has caused them to deserve to be victimized.
Behavioral self-blame is associated with feelings of guilt within the victim. While the belief that one had control during the abuse (past control) is associated with greater psychological distress, the belief that one has more control during the recovery process (present control) is associated with less distress, less withdrawal, and more cognitive reprocessing.
Counseling responses found helpful in reducing self-blame include:
supportive responses
psychoeducational responses (for example, learning about rape trauma syndrome)
responses addressing the issue of blame.
A helpful type of therapy for self-blame is cognitive restructuring or cognitive–behavioral therapy. Cognitive reprocessing is the process of taking the facts and forming a logical conclusion from them that is less influenced by shame or guilt.
Victim blaming
Victim blaming is holding the victims of a crime, an accident, or any type of abusive maltreatment to be entirely or partially responsible for the incident that has occurred. The fundamental attribution error concept explains how people tend to blame negative behavior more on the victim's traits than on the situation at the time of the event.
Individual blame versus system blame
In sociology, individual blame is the tendency of a group or society to hold the individual responsible for their situation, whereas system blame is the tendency to focus on social factors that contribute to one's fate.
Blame shifting
Blaming others can lead to a "kick the dog" effect where individuals in a hierarchy blame their immediate subordinate, and this propagates down a hierarchy until the lowest rung (the "dog"). A 2009 experimental study has shown that blaming can be contagious even for uninvolved onlookers.
In complex international organizations, such as enforcers of national and supranational policies and regulations, the blame is usually attributed to the last echelon, the implementing actors.
As a propaganda technique
Labeling theory accounts for blame by postulating that when intentional actors act out to continuously blame an individual for nonexistent psychological traits and for nonexistent variables, those actors aim to induce irrational guilt at an unconscious level. Blame in this case becomes a propaganda tactic, using repetitive blaming behaviors, innuendos, and hyperbole in order to assign negative status to normative humans. When innocent people are blamed fraudulently for nonexistent psychological states and nonexistent behaviors, and there is no qualifying deviance for the blaming behaviors, the intention is to create a negative valuation of innocent humans to induce fear, by using fear mongering. For centuries, governments have used blaming in the form of demonization to influence public perceptions of various other governments, as well as to induce feelings of nationalism in the public. Blame can objectify people, groups, and nations, typically negatively influencing the intended subjects of propaganda, compromising their objectivity. Blame is utilized as a social-control technique.
In organizations
The flow of blame in an organization may be a primary indicator of that organization's robustness and integrity. Blame flowing downwards, from management to staff, or laterally between professionals or partner organizations, indicates organizational failure. In a blame culture, problem-solving is replaced by blame-avoidance. Blame coming from the top generates "fear, malaise, errors, accidents, and passive-aggressive responses from the bottom", with those at the bottom feeling powerless and lacking emotional safety. Employees have expressed that an organizational blame culture made them fear prosecution for errors and/or accidents, and thus unemployment, which may make them more reluctant to report accidents, since trust is crucial to encouraging accident reporting. This makes it less likely that weak and/or long-term indicators of safety threats get picked up, thus preventing the organization from taking adequate measures to prevent minor problems from escalating into uncontrollable situations. Several issues identified in organizations with a blame culture contradict the best practices adopted by high reliability organizations. Organisational chaos, such as confused roles and responsibilities, is strongly associated with blame culture and workplace bullying. Blame culture promotes a risk-averse approach, which prevents organizations and their agents from adequately assessing risks.
According to Mary Douglas, blame is systematically used in the micro-politics of institutions, with three latent functions: explaining disasters, justifying allegiances, and stabilizing existing institutional regimes. Within a politically stable regime, blame tends to be asserted on the weak or unlucky one, but in a less stable regime, blame shifting may involve a battle between rival factions. Douglas was interested in how blame stabilizes existing power structures within institutions or social groups. She devised a two-dimensional typology of institutions, the first attribute being named "group", which is the strength of boundaries and social cohesion, the second "grid", the degree and strength of the hierarchy. According to Douglas, blame will fall on different entities depending on the institutional type. For markets, blame is used in power struggles between potential leaders. In bureaucracies, blame tends to flow downwards and is attributed to a failure to follow rules. In a clan, blame is asserted on outsiders or involves allegations of treachery, to suppress dissidence and strengthen the group's ties. In the fourth type, isolation, individuals face the competitive pressures of the marketplace alone; in other words, there is a condition of fragmentation with a loss of social cohesion, potentially leading to feelings of powerlessness and fatalism. This type was renamed by various other authors as "donkey jobs". It is suggested that progressive changes in managerial practices in healthcare are leading to an increase in donkey jobs.
The requirement of accountability and transparency, assumed to be key to good governance, worsens behaviors of blame avoidance, both at the individual and institutional levels, as is observed in various domains such as politics and healthcare. Indeed, institutions tend to be risk-averse and blame-averse, and where the management of societal risks (the threats to society) and institutional risks (threats to the organizations managing the societal risks) are not aligned, there may be organizational pressures to prioritize the management of institutional risks at the expense of societal risks. Furthermore, "blame-avoidance behaviour at the expense of delivering core business is a well-documented organizational rationality". The desire to maintain one's reputation may be a key factor explaining the relationship between accountability and blame avoidance. This may produce a "risk colonization", where institutional risks are transferred to societal risks, as a strategy of risk management. Some researchers argue that there is "no risk-free lunch" and "no blame-free risk", an analogy to the "no free lunch" adage.
See also
References
Further reading
Douglas, Tom. Scapegoats: Transferring Blame, London-New York, Routledge, 1995.
Wilcox, Clifton W. Scapegoat: Targeted for Blame, Denver, Outskirts Press, 2009.
External links
Blaming
Moral Responsibility (also on praise and blame), in the Stanford Encyclopedia of Philosophy
Praise and Blame, in the Internet Encyclopedia of Philosophy
Social psychology
Concepts in ethics
Behavior
Accountability
Moral psychology | Blame | Biology | 1,968 |
17,364,737 | https://en.wikipedia.org/wiki/Prix%20Charles%20Peignot | The Prix Charles Peignot (Charles Peignot Prize) is a major award in typeface design, given "to a designer under the age of 35 who has made an outstanding contribution to type design". It is awarded irregularly, typically every three to five years, by the Association Typographique Internationale (ATypI, the international typographic association). It was first given in 1982.
The prize is named after Charles Peignot (1897–1983), type designer, director of the Deberny & Peignot type foundry, and founder and first president of ATypI.
Award winners
Winners to date of this award have been:
Claude Mediavilla (1982)
Jovica Veljović (1985)
Petr van Blokland (1988)
Robert Slimbach (1991)
Carol Twombly (1994)
Jean François Porchez (1998)
Jonathan Hoefler (2002)
Christian Schwartz (2007)
Alexandra Korolkova (2013)
David Jonathan Ross (2018)
References
External links
ATypI page about Charles Peignot himself
Post-war history of Deberny & Peignot
Typography | Prix Charles Peignot | Engineering | 237 |
310,234 | https://en.wikipedia.org/wiki/Battery%20%28crime%29 | Battery is a criminal offense involving unlawful physical contact, distinct from assault, which is the act of creating reasonable fear or apprehension of such contact.
Battery is a specific common law offense, although the term is used more generally to refer to any unlawful offensive physical contact with another person. Battery is defined by American common law as "any unlawful and/or unwanted touching of the person of another by the aggressor, or by a substance put in motion by them". In more severe cases, and for all types in some jurisdictions, it is chiefly defined by statutory wording. Assessment of the severity of a battery is determined by local law.
Generally
Specific rules regarding battery vary among different jurisdictions, but some elements remain constant across jurisdictions. Battery generally requires that:
an offensive touch or contact is made upon the victim, instigated by the actor; and
the actor intends or knows that their action will cause the offensive touching.
Under the US Model Penal Code and in some jurisdictions, there is battery when the actor acts recklessly without specific intent of causing an offensive contact. Battery is typically classified as either simple or aggravated. Although battery typically occurs in the context of physical altercations, it may also occur under other circumstances, such as in medical cases where a doctor performs a non-consented medical procedure.
Specific countries
Canada
Battery is not defined in the Canadian Criminal Code. Instead, the Code has an offense of assault, and assault causing bodily harm.
England and Wales
Battery is a common law offence within England and Wales.
As with the majority of offences in the UK, it has two elements:
Actus reus: The defendant unlawfully touched or applied force to the victim
Mens rea: The defendant intended or was reckless as to the unlawful touch or application of force
This offence is a crime against autonomy, with more violent crimes such as ABH and GBH being statutory offences under the Offences against the Person Act 1861.
As such, even the slightest of touches can amount to an unlawful application of force. However, it is assumed that everyday encounters (such as making contact with others on public transportation) are consented to and not punishable.
Much confusion can come between the terms "assault" and "battery". In everyday use the term assault may be used to describe a physical attack, which is indeed a battery. An assault is causing someone to apprehend that they will be the victim of a battery. This issue is so prevalent that the crime of sexual assault would be better labelled a sexual battery. This confusion stems from the fact that both assault and battery can be referred to as common assault. In practice, if charged with such an offence, the wording will read "assault by beating", but this means the same as "battery".
There is no separate offence for a battery relating to domestic violence; however, the introduction of the crime of "controlling or coercive behaviour in an intimate or family relationship" in section 76 of the Serious Crime Act 2015 has given rise to new sentencing guidelines that take into account significant aggravating factors such as abuse of trust, resulting in potentially longer sentences for acts of battery within the context of domestic violence.
Whether it is a statutory offence
In DPP v Taylor, DPP v Little, it was held that battery is a statutory offence, contrary to section 39 of the Criminal Justice Act 1988. This decision was criticised in Haystead v DPP where the Divisional court expressed the obiter opinion that battery remains a common law offence.
Therefore, whilst it may be a better view that battery and assault have statutory penalties, rather than being statutory offences, it is still the case that until review by a higher court, DPP v Little is the preferred authority.
Mode of trial and sentence
In England and Wales, battery is a summary offence under section 39 of the Criminal Justice Act 1988. However, by virtue of section 40, it can be tried on indictment where another indictable offence is also charged which is founded on the same facts or together with which it forms part of a series of offences of similar character. Where it is tried on indictment a Crown Court has no greater powers of sentencing than a magistrates' court would, unless the battery itself constitutes actual bodily harm or greater.
It is punishable with imprisonment for a term not exceeding six months, or a fine not exceeding level 5 on the standard scale, or both.
Defences
There are numerous defences to a charge of assault, namely
Intoxication due to drugs/alcohol - voluntary or involuntary (does not apply to offences which may be committed recklessly, intentionally or with negligence i.e. assault/battery)
Defence of self or others
Prevention of crime
Mistake
Duress
Necessity
Insanity
Automatism
Provocation
Alibi
Diminished responsibility
Consent (does not apply when the assault/battery results in ABH or greater)
Superior orders
Reasonable chastisement of a child
Medical procedure
Sporting activities
Arrest by Constable
Arrest by citizen
For provocation, see Tuberville v Savage.
Russia
There is an offence which could be (loosely) described as battery in Russia. Article 116 of the Russian Criminal Code provides that battery or similar violent actions which cause pain are an offence.
Scotland
There is no distinct offence of battery in Scotland. The offence of assault includes acts that could be described as battery.
United States
In the United States, criminal battery, or simple battery, is the use of force against another, resulting in harmful or offensive contact, including sexual contact. At common law, simple battery is a misdemeanor. The prosecutor must prove all three elements beyond a reasonable doubt:
an unlawful application of force
to the person of another
resulting in either bodily injury or offensive touching.
The common-law elements serve as a basic template, but individual jurisdictions may alter them, and they may vary slightly from state to state.
Under modern statutory schemes, battery is often divided into grades that determine the severity of punishment. For example:
Simple battery may include any form of non-consensual harmful or insulting contact, regardless of the injury caused. Criminal battery requires intent to inflict an injury on another.
Sexual battery may be defined as non-consensual touching of the intimate parts of another. At least in Florida, "Sexual battery means oral, anal, or vaginal penetration by, or union with, the sexual organ of another or the anal or vaginal penetration of another by any other object": See section 794.011.
Family-violence battery may be limited in its scope between persons within a certain degree of relationship: statutes for this offense have been enacted in response to increasing awareness of the problem of domestic violence.
Aggravated battery generally is seen as a serious offense of felony grade. Aggravated battery charges may occur when a battery causes serious bodily injury or permanent disfigurement. As a successor to the common law crime of mayhem, this is sometimes subsumed in the definition of assault. In Florida, aggravated battery is the intentional infliction of great bodily harm and is a second-degree felony, whereas battery that unintentionally causes great bodily harm is considered a third-degree felony.
Kansas
In the state of Kansas, battery is defined as follows:
Battery.
(a) Battery is:
(1) Knowingly or recklessly causing bodily harm to another person; or
(2) knowingly causing physical contact with another person when done in a rude, insulting, or angry manner.
Louisiana
The law on battery in Louisiana reads:
§ 33. Battery defined:
Battery is the intentional use of force or violence upon the person of another; or the intentional administration of a poison or other noxious liquid or substance to another.
Jurisdictional differences
In some jurisdictions, battery has recently been constructed to include directing bodily secretions (i.e., spitting) at another person without their permission. Some of those jurisdictions automatically elevate such a battery to the charge of aggravated battery. In some jurisdictions, the charge of criminal battery also requires evidence of a mental state (mens rea). The terminology used to refer to a particular offense can also vary by jurisdiction. Some jurisdictions, such as New York, refer to what would, under the common law, be battery as assault, and then use another term for the crime that would have been assault, such as menacing.
Distinction between battery and assault
A typical overt behavior constituting assault is Person A chasing Person B and swinging a fist toward their head; the corresponding behavior for battery is A actually striking B.
Battery requires:
a volitional act (that is, the defendant was acting voluntarily), that
results in physical (or in the USA, "harmful or offensive") contact with another person, and
is committed for the purpose of causing that contact, or, in the USA, under circumstances that render such contact substantially certain to occur or with a reckless disregard as to whether such contact will result, or in England and Wales, reckless that it might occur (meaning that the defendant foresaw the risk of that contact and carried on unreasonably to take that risk).
Assault, where rooted in English law, is the act of intentionally causing a person to apprehend physical contact with their person. Elsewhere it is often similarly worded as the threat of violence to a person, while aggravated assault is the threat with the clear and present ability and willingness to carry it out. Aggravated battery is, typically, offensive touching without a tool or weapon with attempt to harm or restrain.
See also
Assault (tort)
Assault occasioning actual bodily harm
Battery (tort)
Non-fatal offences against the person in English law
Right of self-defense
References
Common law offences in England and Wales
Crimes
Criminology
Offences against the person
Violence | Battery (crime) | Biology | 1,956 |
48,711,516 | https://en.wikipedia.org/wiki/Border%20art | Border Art is a contemporary art practice rooted in socio-political experience(s), such as those of the U.S.-Mexico borderlands, or frontera. Since its conception in the mid-1980s, this artistic practice has assisted in the development of questions surrounding homeland, borders, surveillance, identity, race, ethnicity, and national origin(s).
Border art as a conceptual artistic practice, however, opens up the possibility for artists to explore similar concerns of identity and national origin(s) whose location is not specific to the Mexico-United States border. A border can be a division, separating groups of people and families. Borders can include but are not limited to language, culture, social and economic class, religion, and national identity. In addition to being a division, a border can also give rise to a borderland area that creates a cohesive community separate from the mainstream cultures and identities portrayed in communities away from the borders, such as the Tijuana-San Diego border between Mexico and the United States.
Border art can be defined as an art that is created in reference to any number of physical or imagined boundaries. This art can but is not limited to social, political, physical, emotional and/or nationalist issues. Border art is not confined to one particular medium. Border art/artists often address the forced politicization of human bodies and physical land and the arbitrary, yet incredibly harmful, separations that are created by these borders and boundaries. These artists are often "border crossers" themselves. They may cross borders of traditional art-making (through performance, video, or a combination of mediums). They may at once be artists and activists, existing in multiple social roles at once. Many border artists defy easy classifications in their artistic practice and work.
History of border art specific to the Mexico-United States border
Ila Nicole Sheren states, "Border Art didn't become a category until the Border Art Workshop/Taller de Arte Fronterizo (BAW/TAF). Starting in 1984, and continuing in several iterations through the early twenty-first century, the binational collective transformed San Diego-Tijuana into a highly charged site for conceptual performance art ...The BAW/TAF artists were to link performance, site-specificity, and the U.S.-Mexico Border, as well as the first to export "border art" to other geographic locations and situations." A proponent of Border Art is Guillermo Gómez-Peña, founder of The Border Arts Workshop/Taller de Arte Fronterizo. The Border Arts Workshop/Taller de Arte Fronterizo pioneered tackling the political tensions of the borderlands, at a time when the region was gaining increased attention from the media due to the NAFTA debates. The contradiction of the border opening to the free flow of capital but simultaneously closing to the flow of immigrants provided the opportunity to address other long-existing conflicts within the region.
Antonio Prieto argues that "As opposed to folk artists, the new generation belongs to the middle class, has formal training and self-consciously conceives itself as a producer of 'border art.' Moreover, their art is politically charged, and assumes a confrontational stance vis-à-vis both Mexican and U.S. government policies."
In their introduction to the exhibition, La Frontera/The Border: Art About the Mexico/United States Border Experience, Patricio Chávez and Madeleine Grynstejn state, “For the artists represented here, the border is not a physical boundary line separating two sovereign nations, but rather a place of its own, defined by a confluence of cultures that is not geographically bounded either to the north or to the south. The border is the specific nexus of an authentic zone of hybridized cultural experience, reflecting the migration and cross-pollination of ideas and images between different cultures that arise from real and constant human, cultural, and sociopolitical movements. In this decade, borders and international boundaries have become paramount in our national consciousness and in international events. As borders define the economy, political ideology, and national identity of countries throughout the world, so we should examine our own borderlands for an understanding of ourselves and each other.”
Prieto notes that “While the first examples of Chicano art in the late sixties took up issues of land, community and oppression, it was not until later that graphic artists like Rupert García began to explicitly depict the border in their work. García's 1973 silkscreen "¡Cesen Deportación!," for example, calls for an end to the exploitative treatment of migrant workers who are allowed to cross the border and are then deported at the whim of U.S. economic and political interests.”
Prieto notes that for Mexican and Chicano artists, the aesthetics of rascuache created a hybrid of Mexican and American visual culture. While it does not have an exact English translation, the term rascuache translates most closely from Spanish as "leftover", with a sensibility closest to the English artistic term kitsch.
Photographer David Taylor focused on the U.S.-Mexico border by following monuments that mark the official borders of the United States and México outlined as a result of the 1848 Treaty of Guadalupe Hidalgo. He quotes on his website, “My travels along the border have been done both alone and in the company of agents. In total, the resulting pictures are intended to offer a view into locations and situations that we generally do not access and portray a highly complex physical, social and political topography during a period of dramatic change.” In his project, Taylor has covered physical borders by documenting the environment and landscape along the border but also addresses social issues by engaging with locals, patrolman, smugglers, and many other people living in and being affected by the U.S.-México border. He also chooses to address political issues by focusing on the large issue of drug trafficking.
Related artworks
Border Tuner is a project by the Mexican-Canadian Artist Rafael Lozano-Hemmer that explores connections which exist between cities and people on either side of the Mexico - United States border. Situated in the El Paso / Juarez borderlands, this interactive work utilizes large searchlights as a means for participants on either side of the border to communicate with one another. When one beam of light interacts with another, a microphone and speaker automatically switch on allowing participants on both sides to communicate across the hardened infrastructure which divides their two countries. The searchlight, most commonly used in applications of surveillance and apprehension of migrants by the United States Border Patrol is one of the symbols which Lozano-Hemmer subverts in his work. Of this loaded symbol he says: “I find searchlights absolutely abhorrent, that’s why I must work with them.”
Borrando La Frontera (Erasing the Border) by Ana Teresa Fernandez challenges the materiality of the U.S./Mexico border through its erasure of the structure. In the film, Ana Teresa Fernandez hopes to "[turn] a wall that divides into the ocean and the sky that expands" into a symbol for potential future mobility. By making the border the same color as the sky, rendering it invisible, the artist draws attention to the naturalized sense of nation in opposition to the natural landscape. The artist creates new meaning for the sky's natural blue color, as she uses it to symbolize a geography with open borders and freedom of movement, painting this idea over the border fence with her collaborators. The film also emphasizes the natural elements of the scene: the birds' and the water's movement, unfazed by the fence, attest to the redundancy of the fence and the politics of the U.S./Mexico border.
Artesania & Cuidado (Craft & Care) by Tanya Aguiñiga serves as a collection of the artist's work in activism, design, and documentation. Specifically, Aguiñiga's entryway to the gallery sets the tone for the exhibition. Aguiñiga is also responsible for AMBOS (Art Made Between Opposite Sides), a project consisting of artworks that foster a sense of interconnectedness in border regions. The project is multifaceted and presents itself in the form of documentary, activism, community engagement, and collaboration, activating the U.S./Mexico border, exploring identities affected by the liminal zone of the border, and promoting healthy relationships from one side of the border to the other.
"World Trade Center Walk" by (Philippe Petit) Called the "artistic crime of the century," Petit's daring feat became the focus of a media sensation. On the morning of August 7, Petit stepped onto the tightrope, which was suspended between the two towers. A crowd of thousands soon gathered to watch the man on the wire more than 1,300 feet above them. For 45 minutes, Petit practically danced on the thin metal line. He was arrested for his efforts and was ordered to give a performance in Central Park as his sentence.
Another artist tackling the contentions of the United States/Mexico border is Judi Werthein, who in 2005 created a line of shoes titled, Brinco, Spanish for the word Jump. These shoes would be distributed, free of charge, to people in Tijuana looking to cross the border. Werthein explains, "The shoe includes a compass, a flashlight because people cross at night, and inside is included also some Tylenol painkillers because many people get injured during crossing." Additionally, the shoes featured removable soles with a map of the San Diego/Tijuana border, specifically indicating favorable routes to take. On the back of the ankle of the shoe is an image of Toribio Romo González, the saint dedicated to Mexican migrants. The shoes themselves were made cheaply and mass-produced from China, imitating the means of production abused by many American companies. These shoes would also be sold in small boutique shops in San Diego for $215 a pair, advertised to the higher class audience as "one-of-a-kind art objects." The profits of this venture would then be donated to a Tijuana shelter aiding migrants in need.
Jorge Rojas's performance art is complex in its approach of reflecting his cross-cultural experiences in both Mexico and America. Rojas was born in Morelos, Mexico, and now lives in Salt Lake City, Utah. This change in residence has informed the changes in his work regarding his feelings of home vs. homeland. His work examines this change in homeland in ways that highlight his foreignness and his awareness of both cultures. His performance pieces often combine Mexican cultural themes with a performance style that creates a new space to identify the constant change in cultural identity.
Shinji Ohmaki's piece "Liminal Air Space-Time" addresses the physical sense of liminal space and how it represents a border. The liminal space is represented by a thin white piece of cloth that blows in the air; vents underneath keep it constantly floating. Ohmaki says, "The cloth moves up and down, causing a fluctuation of the borders that divide various territories… some people they will feel that time is passing quickly while others might feel that time is being slowed down. By tilting the sensations, a dimension of time and space that differs from everyday life can be created." Just like at an actual border, the viewer gets a sense of not knowing where they are and how long they will be stuck floating in the air.
History of border art specific to Palestine-Israel
In June 2005, performance artist Francis Alÿs walked from one end of Jerusalem to the other performing The Green Line. In this performance, Alÿs is carrying a can filled with green paint. The bottom of the can was perforated with a small hole, so the paint dripped out as a continuous squiggly line on the ground as he walked. The route he followed was one drawn in green on a map as part of the armistice after the 1948 Arab-Israeli War, indicating land under the control of the new state of Israel. Alÿs restricted his walking to a 15-mile stretch through a divided Jerusalem, a hike that took him down streets, through yards and parks, and over rocky abandoned terrain. Julien Deveaux documented the walk alongside Alÿs.
Artists invested in Palestine/Israel art:
Sama Alshaibi: a well-known artist who uses her body as an instrument in her artwork. Her work focuses on the context of double generational displacement, as well as the notion of being "illegal" in the United States, described as psychological displacement. Her body serves as an "allegorical sight" and "captures feminine perspective." She focuses on portraying the life of a displaced Palestinian woman who immigrated to the United States at an early age with her family. She also describes embroidery practices shared with Palestinian and Arab women. Some of her artworks are: Milk Maid, Carry Over, Together Apart, and Between Two Rivers.
International artists and their influence
There have been several artists from other countries who have come to the Israeli West Bank barrier and used the wall itself as a canvas to express their condemnation of its establishment. They have worked hand in hand with local Palestinian street artists to express their sentiments and ultimately get their message across. These better-known international artists have also helped turn the public eye toward the conflict between Palestine and Israel. Many of the artists who work on the separation barrier have taken something perceived as an instrument of division and turned it into the canvas on which they create their message.
Banksy
The anonymous, UK-based artist Banksy is a prominent figure in the way individuals have used the separation wall as a surface to express their dissent against its establishment. He has used the dimensions of the wall, the division it represents, and the context behind it to make works that succeed in their environment. One of his more popular works depicts a dove, a symbol of peace, juxtaposed against the bullet-proof vest it wears. Here, it can be inferred that Banksy is trying to express that there is a desire for peace between the two nations, yet given the history of violence, they must be prepared for conflict. Essentially, he is demonstrating the false sense of peace generated as a byproduct of this wall.
Other works Banksy has done over the years include creating optical illusions to break up the solidity of the wall. He tries to emphasize the elements of a barrier: how it divides up space and creates a disconnect from the world around the viewer. He has works such as children in front of a "hole" in the wall that reveals a paradise, a world unseen by the viewer due to the obstruction of the wall.
Banksy has expressed his different opinions on the Israeli-Palestine conflict and the experiences he has encountered while working on border art. An often cited conversation between Banksy and a Palestinian man helps illustrate the sentiments towards the wall from the Palestinian perspective:
"Old man: You paint the wall, you make it look beautiful.
Me [Banksy]: Thanks.
Old man: We don't want it to be beautiful, we hate this wall. Go home."
The gravity of this conversation demonstrates how border art can carry a political message or help a group of people express their opinion, yet the art cannot take away the wall. In this interaction, the wall is the antagonist to the Palestinian people, and any attempt to beautify the wall is rendered useless because it does not remove the rift that is produced.
Banksy has made other comments regarding the size and scale of the separation barrier in regards to how it essentially isolates the Palestinian population, nearly surrounding them on every side. He says, "It essentially turns Palestine into the world's largest open-air prison."
Swoon (artist)
Another artist is the American-born street artist Swoon, who has worked on the separation barrier as one of the few prominent female artists to influence the male-dominated world of street art. Swoon is instrumental in creating a female narrative in this increasingly studied area of art. Many of her pieces depict women as the key figures and protagonists of their respective compositions, ultimately giving another perspective to the border art phenomenon.
Her border art on the separation barrier focuses on the characteristics of scale and location, causing the viewer to comprehend the sheer size of the wall in relation to the body. Swoon explains why scale is important to her by saying, " '...I think it's important that people understand the scale of it because it helps in understanding the grotesque power imbalance that the Palestinian people are facing.' " By creating this contrast in size of the viewer to her art work, it causes the individual to question the wall, bring attention to it, and consider the lengths Israel has taken to protect itself from external forces.
One of her works that demonstrates this concept is her Lace-Weaving Woman; here the subject rises about halfway up the wall and looks as if she is weaving her skirt. The action of weaving implies a sense of unity, juxtaposed against the wall as a symbol of division. Other pieces by Swoon have focused on location, such as creating art where a Palestinian youth had placed a Palestinian flag at the top of the barrier and was subsequently arrested by Israeli officials. Swoon has not given definitive meanings behind her work, allowing viewers to interpret these spaces where she has worked and how her art has changed them, if at all.
Conceptual border(s)
Borders can also be conceptual, for example, borders between social classes or races. Gloria Anzaldúa's conceptualization of borders goes beyond national borders. Anzaldúa states: "The U.S.-Mexican border es una herida abierta where the Third World grates against the first and bleeds. And before a scab forms, it hemorrhages again, the lifeblood of two worlds merging to form a third country - a border culture." Anzaldúa also refers to the border as being a locus of rupture and resistance, fragmented (Borderlands). Border artists include Ana Mendieta, Guillermo Gómez-Peña, Coco Fusco, and Mona Hatoum.
Conceptually speaking, borders, as discussed by Claire Fox, can be found anywhere; it is portable. Especially wherever poor, ethnic, immigrant, and/or minority communities collide with hegemonic society.
Prieto notes that “This double task --being critical while at the same time proposing a utopian borderless future-- was undertaken with the tools of conceptual art.” Conceptual art was a European avant-garde artistic practice which focused on the intellectual development of an artwork rather than art as object.
Borders are further discussed in Adalberto Aguirre Jr. and Jennifer K. Simmers's academic journal article, which discusses the fluidity of borders, saying that "The border merges land and people into a shared body of social and cultural representation." The article also continues by saying that the meaning of borders changes with the people who experience them.
Sheren additionally echoes that “‘Border’ began to refer to a variety of non-physical boundaries: those between cultural or belief systems, those separating the colonial and the postcolonial, and even those demarcating various kinds of subjects.” In this way, borders transcend physicality and become ‘portable’.
In a conceptual mindset, the human body can be viewed as a borderline. This is explored in Gloria Anzaldúa's article La Frontera = The Border: Art about the Mexico/United States Border Experience. She discusses at length the layers of our identities and how we become these boundaries within our environment. She mentions the dynamics that affect our identities, such as sex, gender, education, ethnicity, race, and class. The author asks whether these are equal parts, or whether some pieces of the self are more prevalent due to our surroundings. She speaks about the concept of unified consciousness, a mix of identity from the universal collective in human existence. She continues by saying we must articulate a person not as categorized by one thing but as a history of identities such as student, mother, sister, brother, teacher, craftsman, and coworker.
Another individual who also explores ideas of the human body acting as a conceptual border is Sama Alshaibi. This is expressed in her personal essay and art titled Memory Work in the Palestinian Diaspora. In contextualizing her art work that is mentioned later in the piece, she discusses her and her family's personal history. "My body, pictured in my American passport, had the ability to travel and move freely in this world and could come back to the U.S. and speak for those whom I met in Occupied Palestine, confined to a single city and cut off from the world by massive walls." After referencing her own personal and familial narratives, Sama then shifts her discussion toward utilizing her body again within her art; her body acts as "a vehicle to embody and illustrate visual narratives of the Palestinian past and present." Overall, her photographs and videos depicting her body at the center of focus are all an attempt to construct a "collective memory," "...[a memory] which culminates in a different mediation of history, one that resists the "official" and mediated history of Palestine and Israel."
Stuart Hall also elaborates on the concept of identity in his article Ethnicity, Identity, and Difference. He replaces the idea of an intersectional identity model with a layered identity model. The layered model lists titles of identity within one person in order of which is more prevalent depending on the circumstances. He considers the intersectional model outdated because it rests on the idea of one central identity, with a multitude of descriptions such as race, class, and gender branching off of it.
Trinh T. Minh-ha additionally observes “boundaries not only express the desire to free/to subject one practice, one culture, one national community from/to another, but also expose the extent to which cultures are products of the continuing struggle between official and unofficial narratives–those largely circulated in favor of the State and its policies of inclusion, incorporation, and validation, as well as of exclusion, appropriation, and dispossession.”
Patssi Valdez touches on the idea of the border in her screenprint, "L.A./TJ." Valdez is an American Chicana artist currently living and working in Los Angeles. Unlike most who hear the word border and immediately assume separation, her idea of a border is a frame. Seen in L.A./TJ, Valdez frames the two cities, thus exaggerating the idea of mixing reality rather than separating the two. This mixing of reality is a symbol of her belonging and interacting with both Mexico and the United States.
Expanding notions of “Border Art”
There exist inherent difficulties in articulating the traumas of the Holocaust. The art created between direct and post-generational participants redefines notions of "memory-as-border." In other words, understanding notions of "border" becomes complex in relation to firsthand and secondhand trauma. One example of this is the question of how the experiences of those directly involved in the Holocaust affect their offspring. Marianne Hirsch describes this phenomenon as "postmemory."
Postmemory most specifically describes the relationship of children of survivors of cultural or collective trauma to the experiences of their parents, experiences that they “remember” only as the narratives and images with which they grew up, but that are so powerful, so monumental, as to constitute memories in their own right. The term “postmemory” is meant to convey its temporal and qualitative difference from survivor memory, its secondary, or second-generation memory quality, its basis in displacement, its vicariousness and belatedness. The work of postmemory defines the familial inheritance and transmission of cultural trauma. The children of victims, survivors, witnesses, or perpetrators have different experiences of postmemory, even though they share the familial ties that facilitate intergenerational identification.
Artist Sama Alshaibi, considers Hirsch's conception of postmemory as "key to my life and to my art practice, which is, after all, an extension of who I am." Born to a Palestinian mother and an Iraqi father, Alshaibi describes her upbringing as "...dominated by traumatic narratives of losing Palestine, and all along I was mourning for a place unknown to me." As a result, her work is "based on narratives of my mother's family's forced migration from Palestine to Iraq and then on to America."
In Headdress of the Disinherited, Alshaibi's photographic work features the artist wearing her recreation of a traditional Palestinian headdress lined with coins that were used as a bride's wedding dowry. Alshaibi describes the headdress as part of an inter-generational transmission: "Fashioned after my mother's faint memory of her grandmother's, our collaborative effort constructs a memorial to our family's continual migrations." Alshaibi recreated the headdress using familial ephemera and travel documentation rather than coins: "Substituting the no longer minted Palestinian currency with coins embossed with our visas, passport stamps, and pictures suggests an intellectual dowry rather than a monetary one." The dowry-money hat symbolizes migration and displacement, a displacement that continues to hold an effect over heads today, with the dematerialization of women's bodies and cultures from the region.
Border art in practice and examples of work
Doris Salcedo, Shibboleth, 2007, Installation Art, Tate Modern
Sama Alshaibi
Ahlam Shibli
Francis Alÿs
Yishay Garbasz
Mona Hatoum
Susan Meiselas
Christo and Jeanne-Claude, Running Fence, 1972–76, Sonoma and Marin Counties, California
Ana Mendieta, Silueta Series, 1973-1980
Anila Quayyum Agha
References
Hall, S. (1996). Ethnicity, Identity, and Difference . Becoming National, 337–349.
External links
La Frontera: Artists along the U.S.-Mexico Border with Stefan Falke
Resources for further education and services
U.S. Customs and Border Protection
Colibrí Center for Human Rights – Report or Find a Missing Person on the U.S.-Mexico Border
American art
Mexican art
Biopolitics
Nationalism
Identity (social science) | Border art | Engineering,Biology | 5,451 |
13,794,001 | https://en.wikipedia.org/wiki/Janus-faced%20molecule | A Janus molecule (or Janus-faced molecule) is a molecule that can produce both beneficial and toxic effects. The term Janus-faced molecule is derived from the ancient Roman god Janus, who is depicted as having two faces, one facing the past and one facing the future. Analogously, a Janus molecule serves two distinct purposes, one beneficial and one toxic, depending on its quantity.
Examples
Examples of a Janus-faced molecule are nitric oxide and cholesterol. In the case of cholesterol, the property that makes cholesterol useful in cell membranes, namely its absolute insolubility in water, also makes it lethal. When cholesterol accumulates in the wrong place, for example within the walls of an artery, it cannot be readily mobilized, and its presence eventually leads to the development of an atherosclerotic plaque.
Another example of a Janus-faced molecule is the S100A8/A9 protein complex, which is associated with autoimmune disorders and disorders of abnormal cell growth. At the same time, S100 is integral in the fight against cancer: it induces phagocytes that phagocytize malignant tumor cells, resulting in apoptosis.
Proteoglycans are another class of molecules that display this duality; under certain chemical conditions they can act as inhibitors or as promoters. Recent studies have shown that proteoglycans can play an integral role in the metastasis of cancer. Another molecule in this class is DKK1, whose presence can cause cancers, particularly breast cancers, to display both metastatic and anti-metastatic properties: DKK1 secretion has been associated with promoting breast cancer metastasis to the bone while suppressing metastasis to the lungs. Botulinum neurotoxins also play these dichotomous roles. They are produced by Clostridium botulinum, a spore-forming bacterium; if the bacterium contaminates food, the results can be fatal. Yet despite a toxicity that is lethal even in small doses, these molecules are used in a wide array of pharmacological applications, one of which is cosmetology.
Gamma peptide nucleic acid (PNA), a synthetic analog of DNA and RNA, is another Janus molecule; it slips between DNA strands. Gamma PNA can be inserted between strands of DNA or RNA to recognize, through its bifacial recognition, sequences or elements that could potentially cause known diseases. It does so by inserting itself while the DNA or RNA strand is undergoing transcription, thereby conducting transcriptional regulation. There are, however, still ongoing challenges with this Janus molecule that require further research and experimentation.
Some fungi are capable of producing secondary metabolites called mycotoxins, which are toxic and affect human and animal health. Mycotoxins are often found in farmed ingredients such as corn and rice during harvest or storage; when these ingredients are processed into food for humans and animals, the toxins can be consumed. The toxicity of these mycotoxins has been studied intensively, and they appear effective at killing microbes as well as inhibiting or killing tumor cell growth. They exhibit Janus-faced characteristics because they kill indiscriminately: a consequence of using mycotoxins against tumor cell growth in cancer treatment is an increased risk of mutations.
See also
Janus
Toxicity
References
Molecules | Janus-faced molecule | Physics,Chemistry | 737 |
31,036,887 | https://en.wikipedia.org/wiki/Immersive%20virtual%20musical%20instrument | An immersive virtual musical instrument, or immersive virtual environment for music and sound, represents sound processes and their parameters as 3D entities of a virtual reality, so that they can be perceived not only through auditory feedback but also visually in 3D, and possibly through tactile as well as haptic feedback. Interaction relies on 3D interface metaphors consisting of techniques such as navigation, selection and manipulation (NSM). It builds on the trend in electronic musical instruments to develop new ways to control sound and perform music, as explored in conferences like NIME.
Development
Florent Berthaut created a variety of 3D reactive widgets involving novel representations of musical events and sound, that required a special 3D input device to interact with them using adapted 3D interaction techniques.
Jared Bott created an environment that used 3D spatial control techniques as used in known musical instruments, with symbolic 2D visual representation of musical events.
Richard Polfreman made a 3D virtual environment for musical composition with visual representations of musical and sound data similar to 2D composition environments but placed in a 3D space.
Leonel Valbom created a 3D immersive virtual environment with visual 3D representations of musical events and audio spatialization with which could be interacted using NSM interaction techniques.
Teemu Mäki-Patola explored interaction metaphors based on existing musical instruments as seen in his Virtual Xylophone, Virtual Membrane, and Virtual Air Guitar implementations.
Sutoolz from su-Studio Barcelona used real-time 3D video game technology to allow a live performer to construct and play a fully audiovisual immersive environment.
Axel Mulder explored the sculpting interaction metaphor by creating a 3D virtual environment that allowed interaction with abstract deformable shapes, such as a sheet and a sphere, whose parameters were mapped to sound effects in innovative ways. The work focused on proving the technical feasibility of 3D virtual musical instruments. Gestural control was based on 3D object manipulation such as a subset of prehension.
Early work was done by Jaron Lanier with his Chromatophoria band and separately by Niko Bolas who developed the Soundsculpt Toolkit, a software interface that allows the world of music to communicate with the graphical elements of virtual reality.
References
External links
Immersive Virtual Musical Instruments
Virtual Air Guitar
Virtual Musical Instruments
sutoolz 1.0 alpha: 3D software music interface
SU-TOOLZ Interactive 3D Sound-Scape software
Human–computer interaction
Electronic musical instruments
Virtual reality | Immersive virtual musical instrument | Engineering | 496 |
174,901 | https://en.wikipedia.org/wiki/Hartree | The hartree (symbol: Eh), also known as the Hartree energy, is the unit of energy in the atomic units system, named after the British physicist Douglas Hartree. Its CODATA recommended value is Eh = 4.3597447222071(85)×10−18 J.
The hartree is approximately the negative electric potential energy of the electron in a hydrogen atom in its ground state and, by the virial theorem, approximately twice its ionization energy; the relationships are not exact because of the finite mass of the nucleus of the hydrogen atom and relativistic corrections.
The hartree is usually used as a unit of energy in atomic physics and computational chemistry: for experimental measurements at the atomic scale, the electronvolt (eV) or the reciprocal centimetre (cm−1) are much more widely used.
Other relationships
= 2 Ry = 2 R∞hc
= ħ²/(me a0²)
= me (e²/(4πε0ħ))²
= me c²α²
≘ 27.211386245988 eV
≘ 2625.4996 kJ/mol
≘ 627.5095 kcal/mol
≘ 219474.63 cm−1
where:
ħ is the reduced Planck constant,
me is the electron mass,
e is the elementary charge,
a0 is the Bohr radius,
ε0 is the electric constant,
c is the speed of light in vacuum, and
α is the fine-structure constant.
Effective hartree units are used in semiconductor physics, where ε0 is replaced by ε0ε and ε is the static dielectric constant. Also, the electron mass is replaced by the effective band mass m*. The effective hartree in semiconductors becomes small enough to be measured in millielectronvolts (meV).
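As a hedged numerical illustration (the function name and the GaAs-style parameters m*/me ≈ 0.067 and ε ≈ 12.9 are our own assumptions, typical literature values rather than figures from this article), the scaling Eh* = Eh · (m*/me)/ε² yields values of order 10 meV:

HARTREE_EV = 27.211386245988  # one hartree expressed in electronvolts (CODATA)

def effective_hartree_mev(m_eff_ratio, eps_r):
    # Effective hartree: Eh* = Eh * (m*/me) / eps_r**2, returned in meV.
    return HARTREE_EV * m_eff_ratio / eps_r ** 2 * 1000.0

# GaAs-like parameters (assumed for illustration): m*/me = 0.067, eps_r = 12.9
print(round(effective_hartree_mev(0.067, 12.9), 2))  # prints about 10.96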
References
Units of energy
Physical constants | Hartree | Physics,Mathematics | 293 |
24,324,554 | https://en.wikipedia.org/wiki/Armillaria%20tigrensis | Armillaria tigrensis is a species of mushroom in the family Physalacriaceae. This species is found in South America.
See also
List of Armillaria species
References
Fungal tree pathogens and diseases
tigrensis
Fungus species | Armillaria tigrensis | Biology | 50 |
10,518,751 | https://en.wikipedia.org/wiki/California%20species%20of%20special%20concern | A species of special concern is a legal designation by the California Department of Fish and Wildlife for native wildlife facing significant risks. This label is applied to species that:
Have vanished from California, or for birds, no longer play their primary roles in the ecosystem
Are deemed threatened or endangered under the Federal Endangered Species Act but lack state listing
Meet the state Endangered Species Act criteria for threatened or endangered status but await formal listing
Have experienced or are currently undergoing substantial declines in population or habitat range, potentially leading to consideration for threatened or endangered status under the state Endangered Species Act if these declines persist
Possess naturally small populations that are exposed to various threats, such as habitat loss or human interference, which could result in declines meeting the criteria for threatened or endangered status under the state Endangered Species Act
Definitions from the California Department of Fish and Wildlife
"Species of special concern" is a designation given to animals that are not categorized under the California Endangered Species Act, however they are still (1) exhibiting a declining population that may suggest a future listing, or (2) have historically low numbers partially due to ongoing threats to their existence. To be listed as a species of special concern, it must meet one or more of the following criteria:
Reside in small, isolated populations or in fragmented habitat, resulting in further isolation or population decline
Exhibit significant decline in population. A majority of taxa do not have population information available. Species that are still abundant, even if they exhibit population decline, do not meet the species of special concern requirements, whereas uncommon and rare species do.
Rely on a habitat that has undergone significant historical or recent decline in size, affecting the species' ability to thrive. Examples of California habitats that have experienced substantial reductions in recent history include coastal wetlands, particularly in the urbanized San Francisco Bay and south-coastal areas, alluvial fan sage scrub and coastal sage scrub in the southern coastal basins, and arid scrub in the San Joaquin Valley
Occupy areas where habitat conversion to incompatible land uses threatens their survival
Have limited California records or historical presence without recent sightings
Primarily inhabit public lands where management practices threaten the animal's persistence
Purpose
The categorization of a species as a Species of Special Concern is intended to result in enhanced consideration for these animals by agencies involved in the environment, such as California Department of Fish and Wildlife, land managers, consulting biologists, and others. This designation is intended to avoid the need for costly listings under State endangered species laws and the subsequent recovery efforts. Additionally, this designation has the intended purpose of encouraging the collection of additional information on these species' biology, distribution, and status while directing management and research efforts towards their conservation.
The California Department of Fish and Wildlife should consider species of special concern during any of the following processes: (1) the environmental review process, (2) conservation planning process, (3) the preparation of management plans for California Department of Fish and Wildlife lands, or (4) inventories, surveys, and monitoring (conducted either by the California Department of Fish and Wildlife or others with whom they are cooperating).
Designation process
The California Bird Species of Special Concern document (Shuford and Gardali 2008) outlines the state's preferred process for designating species. This methodology has been developed through collaboration between the California Department of Fish and Wildlife and the scientific community. Steps in the process of designation include:
Maintain consistency by ensuring a unified definition of species of special concern across different taxonomic groups
Establish a technical advisory group composed of biology experts who are knowledgeable on the taxonomic group's status
Develop a list of taxa with potential for species of special concern nomination through an open, collaborative process
Apply relevant metrics established by the technical advisory group to assess the status of the taxon
Include federally-listed taxa automatically
Exclude State-listed taxa automatically
Use a ranking scheme to develop conservation priorities
Offer an explanation for taxa that were previously designated as species of special concern but have been omitted from the revised list
References
Endangered species
Environment of California
Nature conservation in the United States | California species of special concern | Biology | 803 |
1,477,166 | https://en.wikipedia.org/wiki/Total%20enclosure%20fetishism | Total enclosure fetishism is a form of sexual fetishism whereby a person becomes aroused when having their entire body enclosed in a certain way. Total enclosure is often accompanied by some element of bondage.
Examples
Some total enclosure activities include:
In rubber fetishism, rubber suits, gas masks, a bondage suit, and similar garments and accessories are used for total enclosure.
Vacuum beds rigidly enclose the entire body under a rubber sheet with a small breathing tube.
Sleepsacks and body bags are also used as a less rigid enclosure alternative to vacuum beds, although some are made in inflatable form to increase pressure on the occupant's body.
In spandex fetishism, zentai suits are used for total enclosure in skintight fabric from head to toe. In the case of zentai, the wearer breathes through the loose-woven fabric itself, the garment is not as tight as a rubber or PVC garment would be, and the costume generally comes off with a zipper that can be operated by the wearer.
Being sealed within a giant stuffed animal or murrsuit (sexual fursuit).
Although these activities may be regarded as claustrophobic, total enclosure fetishists practice them willingly, sometimes combining them with bondage to intensify feelings of helplessness.
Risks
As with all activities involving bondage or potential risk to breathing, this is a risky activity. Maintaining an airway, preventing positional asphyxia, and ensuring that the enclosed person has a means of escape at all times are of paramount importance, if these activities are not to result in death.
See the articles on bondage and erotic asphyxiation for some discussion of the risks involved.
See also
Bondage (BDSM)
Bondage hood
Bondage suit
BDSM
Endosomaphilia
Human furniture
Partialism
Mummification (BDSM)
Sources
Gillian Freeman, "The Undergrowth of Literature", Nelson, 1967, pp. 141–143
David Kunzle, "Fashion and fetishism: a social history of the corset, tight-lacing, and other forms of body-sculpture in the West", Rowman and Littlefield, 1982, , p. 39
Simon LeVay, Sharon McBride Valente, "Human sexuality", Sinauer Associates, 2006, , p. 494
http://en.wikifur.com/wiki/Murrsuit
Fashion-related fetishism | Total enclosure fetishism | Biology | 508 |
53,694,722 | https://en.wikipedia.org/wiki/Bathymodiolus%20platifrons | Bathymodiolus platifrons, described by Hashimoto and Okutani in 1994, is a deep-sea mussel that is common in hydrothermal vents and methane seeps in the Western Pacific Ocean.
Symbiosis
Bathymodiolus platifrons harbours methane-oxidizing bacteria in its gills, which convert methane into biomass and energy, helping the mussel to thrive in such environments.
References
platifrons
Molluscs described in 1994
Chemosynthetic symbiosis | Bathymodiolus platifrons | Biology | 111 |
40,054,744 | https://en.wikipedia.org/wiki/Auxiliary%20line | An auxiliary line (or helping line) is an extra line needed to complete a proof in plane geometry. Other common auxiliary constructs in elementary plane synthetic geometry are the helping circles.
As an example, a proof of the theorem on the sum of angles of a triangle can be done by adding a straight line parallel to one of the triangle sides (passing through the opposite vertex).
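A sketch of that argument (the angle names ∠1 and ∠2 are ours, introduced for illustration): draw through vertex $C$ the auxiliary line $\ell$ parallel to side $AB$. Then

$$\ell \parallel AB \;\Rightarrow\; \angle_1 = \angle A,\quad \angle_2 = \angle B \quad \text{(alternate interior angles)},$$
$$\angle_1 + \angle C + \angle_2 = 180^\circ \;\Rightarrow\; \angle A + \angle B + \angle C = 180^\circ,$$

where $\angle_1$ and $\angle_2$ are the angles that $\ell$ makes at $C$ with $CA$ and $CB$ respectively.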
Although adding auxiliary constructs can often make a problem obvious, discovering the helpful construct among all the possibilities is not at all obvious, and for this reason many prefer more systematic methods for the solution of geometric problems (such as the coordinate method, which requires much less ingenuity).
References
External links
http://www.cut-the-knot.org/Generalization/MenelausByEinstein.shtml On Einstein's opinion regarding proofs that use the introduction of additional constructs
Line (geometry) | Auxiliary line | Mathematics | 192 |
49,298,442 | https://en.wikipedia.org/wiki/CC398 | CC398 or MRSA CC398 is a new variant of MRSA that has emerged in animals and is found in intensively reared production animals (primarily pigs, but also cattle and poultry), where it can be transmitted to humans as LA-MRSA (livestock-associated MRSA). A 2009 study shows, however, that dissemination of CC398 from exposed humans to other, non-exposed humans is infrequent. Though dangerous to humans, CC398 is often asymptomatic in food-producing animals. A single study conducted in Denmark showed MRSA spreading from livestock to humans, though the strain itself may have originated in humans before being transmitted to livestock.
A 2011 study reported 47% of the meat and poultry sold in surveyed U.S. grocery stores was contaminated with S. aureus, and of those 5–24.4% of the total were resistant to at least three classes of antibiotics. "Now we need to determine what this means in terms of risk to the consumer," said Dr. Keim, a co-author of the paper. Some samples of commercially sold meat products in Japan were also found to harbor MRSA strains.
An investigation of 100 pig-meat samples purchased from major UK retailers conducted by the Guardian in 2015 showed that some 10% of the samples were contaminated.
In 2017, 17 out of 401 examined horses in Denmark were found to carry MRSA, typically strains of CC398. The same year it was reported that from 2014 to 2016, 44 persons in Denmark were infected with LA-MRSA from fur-farmed mink, and that LA-MRSA was found in 88% of Danish pig herds.
See also
Carbapenem resistant enterobacteriaceae
Necrotizing fasciitis
Staphylococcus aureus
Toxic shock syndrome
XF-73
Teixobactin
References
Staphylococcaceae
Bacterial diseases
Antibiotic-resistant bacteria
Healthcare-associated infections | CC398 | Biology | 403 |
24,673,684 | https://en.wikipedia.org/wiki/Hu%20%28ritual%20baton%29 | A () is a flat scepter originating from China, where they were originally used as narrow tablets for recording notes and orders. They were historically used by officials throughout East Asia, including Japan, Korea, Ryukyu, and Vietnam. They are known as in Japan, and are worn as part of the ceremonial outfit. They continue to be used in Daoist and Shinto ritual contexts in some parts of East Asia.
Origin
The use of the originated in ancient China, where the Classic of Rites required a to have a length of two six , and its mid part a width of three (). Originally, the was held by officials in court to record significant orders and instructions by the emperors. From the Jin dynasty onwards, following the increased proliferation of paper, the became a ceremonial instrument. In China, it was customary to hold the with the broad end down and the narrow end up.
The was originally used at court for the taking of notes and was usually made of bamboo. Officials could record speaking notes on the tablet ahead of the audience, and record the emperor's instructions during the audience. Likewise, the emperor could use one for notes during ceremonies.
The eventually became a ritual implement; it also became customary for officials to shield their mouths with their when speaking to the emperor.
A can be made of different material according to the holder's rank: sovereigns used jade (similar to, but not the same as, the ceremonial jade sceptre), nobles used ivory, and court officials used bamboo.
A is often seen in portraits of Chinese mandarins, but is now mostly used by Daoist priests (). The Buddhist deity King Yama, judge of the underworld, is often depicted bearing a .
Use in China
During the Tang dynasty, court etiquette required officials to wear the in their belts when riding horses. The chancellor was provided with a rack, which was carried into the palace. After an audience, the could be left on the rack. Lesser officials had bags, which were held by their attendants. During the early Tang dynasty, Mandarins of the fifth rank or above used ivory , while those below used wooden ones. The rules were further elaborated later to require that mandarins of the third rank or above used which were curved at the front and straight at the back, while those of the fifth rank or above used which were curved at the front and angled at the back. The used by lower rank mandarins were made of bamboo and were angled at the top and square at the bottom. In the Ming dynasty, Mandarins of the fourth rank or above used ivory , while those of the fifth rank or below used wooden ones.
The fell out of use in the Imperial Court system during the Qing dynasty. The greater ceremonial deference demanded by Qing emperors meant that officials had to greet the emperor by kowtowing, making it impractical to carry the to an audience.
In contemporary times, the is mostly used by as part of the traditional outfit of during formal and ceremonial functions such as the performing of rites.
Use in Japan
The standard reading in Japanese for the character used to write is , but as this is also one of the readings for the character , it is avoided and considered bad luck. The character's unusual pronunciation seems to derive from the fact the baton is approximately one (an old unit of measurement equivalent to ) in length.
A or is a baton or scepter about long, held vertically in the right hand, and was traditionally part of a nobleman's formal attire. Today, the is mostly used by Shinto priests during official and ceremonial functions, not only when wearing the but when wearing other types of formal clothing such as the , the and the . The emperor's is roughly square at both ends, whereas a retainer's is rounded at the top and square at the bottom. Both become progressively narrower towards the bottom. Oak is considered the best material for the , followed in order by holly, cherry, , and Japanese cedar.
The originally had a strip of paper attached to the back containing instructions and memoranda for the ceremony or event about to take place, but it later evolved into a purely ceremonial implement meant to add solemnity to rituals. According to the Taihō Code, a set of administrative laws implemented in the year 701, nobles of the fifth rank and above had to use an ivory , while those below that rank were to use oak, Japanese yew, holly, cherry, sakaki, Japanese cedar, or other woods. Ivory, however, was too hard to obtain, and the law was changed. The , a Japanese book of laws and regulations written in 927, permits to all the use of of unfinished wood, except when wearing special ceremonial clothes called . The Japanese is usually made of woods like Japanese yew, holly, cherry, , or Japanese cedar. The is often seen in portraits of the Japanese , emperors, nobleman, and Shinto priests ().
Gallery
See also
Ruyi (scepter)
Sceptre
References
Ceremonial objects
Chinese traditional clothing
Confucian culture
East Asian traditions
Japanese religious terminology
Korean clothing
Regalia
Religious objects
Shinto in Japan
Shinto religious objects
Shinto religious clothing
Taoist culture
Vietnamese clothing
Wands
Writing media | Hu (ritual baton) | Physics | 1,068 |
470,198 | https://en.wikipedia.org/wiki/SSLIOP | In distributed computing, SSLIOP is an Internet Inter-ORB Protocol (IIOP) over Secure Sockets Layer (SSL), providing confidentiality and authentication.
SSLIOP is implemented by (at least) TAO, JacORB, OpenORB, and MICO.
See also
CSIv2
SECIOP
Common Object Request Broker Architecture | SSLIOP | Technology | 73 |
1,986,011 | https://en.wikipedia.org/wiki/Simply%20typed%20lambda%20calculus | The simply typed lambda calculus (λ→), a form of type theory, is a typed interpretation of the lambda calculus with only one type constructor (→) that builds function types. It is the canonical and simplest example of a typed lambda calculus. The simply typed lambda calculus was originally introduced by Alonzo Church in 1940 as an attempt to avoid paradoxical use of the untyped lambda calculus.
The term simple type is also used to refer to extensions of the simply typed lambda calculus with constructs such as products, coproducts or natural numbers (System T) or even full recursion (like PCF). In contrast, systems that introduce polymorphic types (like System F) or dependent types (like the Logical Framework) are not considered simply typed. The simple types, except for full recursion, are still considered simple because the Church encodings of such structures can be done using only → and suitable type variables, while polymorphism and dependency cannot.
Syntax
In the 1930s Alonzo Church sought to use the logistic method: his lambda calculus, as a formal language based on symbolic expressions, consisted of a denumerably infinite series of axioms and variables, but also a finite set of primitive symbols denoting abstraction and scope, as well as four constants (negation, disjunction, universal quantification, and selection), and a finite set of rules I to VI. This finite set of rules included rule V, modus ponens, as well as rules IV and VI for substitution and generalization respectively. Rules I to III are known as alpha, beta, and eta conversion in the lambda calculus. Church sought to use English only as a syntax language (that is, a metamathematical language) for describing symbolic expressions with no interpretations.
In 1940 Church settled on a subscript notation for denoting the type in a symbolic expression. In his presentation, Church used only two base types: o for "the type of propositions" and ι for "the type of individuals". The type o has no term constants, whereas ι has one term constant. Frequently the calculus with only one base type, usually o, is considered. The Greek letter subscripts α, β, etc. denote type variables; the parenthesized subscript (βα) denotes the function type α→β. Church 1940 p.58 used the arrow ("→") to denote "stands for, or is an abbreviation for".
By the 1970s stand-alone arrow notation was in use; for example in this article non-subscripted symbols σ and τ can range over types. The infinite number of axioms were then seen to be a consequence of applying rules I to VI to the types (see Peano axioms). Informally, the function type σ→τ refers to the type of functions that, given an input of type σ, produce an output of type τ.
By convention, → associates to the right: σ→τ→ρ is read as σ→(τ→ρ).
To define the types, a set of base types, B, must first be defined. These are sometimes called atomic types or type constants. With this fixed, the syntax of types is:

τ ::= T | τ → τ, where T ∈ B.
For example, B = {a, b} generates an infinite set of types starting with a, b, a→a, a→b, b→a, b→b, a→(a→a), ..., (a→b)→(b→a), ...
A set of term constants is also fixed for the base types. For example, it might be assumed that one of the base types is nat, and its term constants could be the natural numbers.
The syntax of the simply typed lambda calculus is essentially that of the lambda calculus itself. The notation x:τ denotes that the variable x is of type τ. The term syntax, in Backus–Naur form, is variable reference, abstractions, application, or constant:

e ::= x | λx:τ.e | e e | c

where c is a term constant. A variable reference x is bound if it is inside of an abstraction binding x. A term is closed if there are no unbound variables.
In comparison, the syntax of untyped lambda calculus has no such typing or term constants:

e ::= x | λx.e | e e

Whereas in typed lambda calculus every abstraction (i.e. function) must specify the type of its argument.
Typing rules
To define the set of well-typed lambda terms of a given type, one defines a typing relation between terms and types. First, one introduces typing contexts, or typing environments Γ, which are sets of typing assumptions. A typing assumption has the form x:σ, meaning variable x has type σ.
The typing relation Γ ⊢ e:σ indicates that e is a term of type σ in context Γ. In this case e is said to be well-typed (having type σ). Instances of the typing relation are called typing judgments. The validity of a typing judgment is shown by providing a typing derivation, constructed using typing rules (wherein the premises above the line allow us to derive the conclusion below the line). Simply typed lambda calculus uses these rules:
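In standard natural-deduction notation the four rules can be written as follows (a reconstruction consistent with the numbered descriptions just below; premises sit above the line):

$$\frac{x{:}\sigma \in \Gamma}{\Gamma \vdash x : \sigma}\;(1) \qquad \frac{c \text{ is a constant of type } T}{\Gamma \vdash c : T}\;(2)$$
$$\frac{\Gamma,\, x{:}\sigma \vdash e : \tau}{\Gamma \vdash (\lambda x{:}\sigma.\ e) : \sigma \to \tau}\;(3) \qquad \frac{\Gamma \vdash e_1 : \sigma \to \tau \quad \Gamma \vdash e_2 : \sigma}{\Gamma \vdash e_1\, e_2 : \tau}\;(4)$$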
In words,
If x has type σ in the context, then x has type σ.
Term constants have the appropriate base types.
If, in a certain context with x having type σ, e has type τ, then, in the same context without x, λx:σ.e has type σ→τ.
If, in a certain context, e₁ has type σ→τ, and e₂ has type σ, then e₁ e₂ has type τ.
Examples of closed terms, i.e. terms typable in the empty context, are:
For every type τ, a term λx:τ.x (identity function/I-combinator),
For types σ, τ, a term λx:σ.λy:τ.x (the K-combinator), and
For types σ, τ, ρ, a term λx:σ→τ→ρ.λy:σ→τ.λz:σ.x z (y z) (the S-combinator).
These are the typed lambda calculus representations of the basic combinators of combinatory logic.
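The typing rules translate directly into a recursive checker. The following minimal Python sketch is our own illustration (the names Arrow, Var, Lam, App, and typecheck are invented for this example, not a standard API); it verifies the type of the K-combinator. Rule (2), for term constants, is omitted for brevity.

from dataclasses import dataclass

# Types: base types are plain strings; Arrow(a, b) is the function type a -> b.
@dataclass(frozen=True)
class Arrow:
    src: object
    tgt: object

# Terms: Var(name), Lam(name, type, body), App(fn, arg).
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    name: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def typecheck(ctx, term):
    """Return the type of term in context ctx (a dict), or raise TypeError."""
    if isinstance(term, Var):                      # rule (1)
        return ctx[term.name]
    if isinstance(term, Lam):                      # rule (3)
        body_ty = typecheck({**ctx, term.name: term.ty}, term.body)
        return Arrow(term.ty, body_ty)
    if isinstance(term, App):                      # rule (4)
        fn_ty = typecheck(ctx, term.fn)
        arg_ty = typecheck(ctx, term.arg)
        if isinstance(fn_ty, Arrow) and fn_ty.src == arg_ty:
            return fn_ty.tgt
        raise TypeError("ill-typed application")
    raise TypeError("unknown term")

# K-combinator λx:a. λy:b. x has type a -> (b -> a):
k = Lam("x", "a", Lam("y", "b", Var("x")))
assert typecheck({}, k) == Arrow("a", Arrow("b", "a"))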
Each type τ is assigned an order, a number o(τ). For base types, o(b) = 0; for function types, o(σ→τ) = max(o(σ) + 1, o(τ)). That is, the order of a type measures the depth of the most left-nested arrow. Hence: o(ι→ι→ι) = 1, while o((ι→ι)→ι) = 2.
Semantics
Intrinsic vs. extrinsic interpretations
Broadly speaking, there are two different ways of assigning meaning to the simply typed lambda calculus, as to typed languages more generally, variously called intrinsic vs. extrinsic, ontological vs. semantical, or Church-style vs. Curry-style.
An intrinsic semantics only assigns meaning to well-typed terms, or more precisely, assigns meaning directly to typing derivations. This has the effect that terms differing only by type annotations can nonetheless be assigned different meanings. For example, the identity term on integers and the identity term on booleans may mean different things. (The classic intended interpretations
are the identity function on integers and the identity function on boolean values.)
In contrast, an extrinsic semantics assigns meaning to terms regardless of typing, as they would be interpreted in an untyped language. In this view, λx:int.x and λx:bool.x mean the same thing (i.e., the same thing as λx.x).
The distinction between intrinsic and extrinsic semantics is sometimes associated with the presence or absence of annotations on lambda abstractions, but strictly speaking this usage is imprecise. It is possible to define an extrinsic semantics on annotated terms simply by ignoring the types (i.e., through type erasure), as it is possible to give an intrinsic semantics on unannotated terms when the types can be deduced from context (i.e., through type inference). The essential difference between intrinsic and extrinsic approaches is just whether the typing rules are viewed as defining the language, or as a formalism for verifying properties of a more primitive underlying language. Most of the different semantic interpretations discussed below can be seen through either an intrinsic or extrinsic perspective.
Equational theory
The simply typed lambda calculus (STLC) has the same equational theory of βη-equivalence as untyped lambda calculus, but subject to type restrictions. The equation for beta reduction

(λx:σ.t) u =β t[x := u]

holds in context Γ whenever Γ, x:σ ⊢ t:τ and Γ ⊢ u:σ, while the equation for eta reduction

λx:σ.(t x) =η t

holds whenever Γ ⊢ t:σ→τ and x does not appear free in t.
The advantage of typed lambda calculus is that STLC allows potentially nonterminating computations to be cut short (that is, reduced).
Operational semantics
Likewise, the operational semantics of simply typed lambda calculus can be fixed as for the untyped lambda calculus, using call by name, call by value, or other evaluation strategies. As for any typed language, type safety is a fundamental property of all of these evaluation strategies. Additionally, the strong normalization property described below implies that any evaluation strategy will terminate on all simply typed terms.
Categorical semantics
The simply typed lambda calculus enriched with product types, pairing and projection operators (with βη-equivalence) is the internal language of Cartesian closed categories (CCCs), as was first observed by Joachim Lambek. Given any CCC, the basic types of the corresponding lambda calculus are the objects, and the terms are the morphisms. Conversely, the simply typed lambda calculus with product types and pairing operators over a collection of base types and given terms forms a CCC whose objects are the types, and morphisms are equivalence classes of terms.
There are typing rules for pairing, projection, and a unit term. Given two terms s:σ and t:τ, the term (s, t) has type σ×τ. Likewise, if one has a term u:σ×τ, then there are terms π₁(u):σ and π₂(u):τ, where the πᵢ correspond to the projections of the Cartesian product. The unit term, of type 1, written as () and vocalized as 'nil', is the final object. The equational theory is extended likewise, so that one has

π₁(s, t) = s    π₂(s, t) = t    (π₁(u), π₂(u)) = u    t:1 ⊢ t = ()

This last is read as "if t has type 1, then it reduces to nil".
The above can then be turned into a category by taking the types as the objects. The morphisms are equivalence classes of pairs (x:σ, t:τ) where x is a variable (of type σ) and t is a term (of type τ), having no free variables in it, except for (optionally) x.
The set of terms in the language is the closure of this set of terms under the operations of abstraction and application.
This correspondence can be extended to include "language homomorphisms" and functors between the category of Cartesian closed categories, and the category of simply typed lambda theories.
Part of this correspondence can be extended to closed symmetric monoidal categories by using a linear type system.
Proof-theoretic semantics
The simply typed lambda calculus is closely related to the implicational fragment of propositional intuitionistic logic, i.e., the implicational propositional calculus, via the Curry–Howard isomorphism: terms correspond precisely to proofs in natural deduction, and inhabited types are exactly the tautologies of this logic.
From his logistic method, Church (1940, p. 60) laid out an axiom schema, which Henkin (1949) filled in with type domains (e.g. the natural numbers, the real numbers, etc.). Henkin (1996, p. 146) described how Church's logistic method could seek to provide a foundation for mathematics (Peano arithmetic and real analysis), via model theory.
Alternative syntaxes
The presentation given above is not the only way of defining the syntax of the simply typed lambda calculus. One alternative is to remove type annotations entirely (so that the syntax is identical to the untyped lambda calculus), while ensuring that terms are well-typed via Hindley–Milner type inference. The inference algorithm is terminating, sound, and complete: whenever a term is typable, the algorithm computes its type. More precisely, it computes the term's principal type, since often an unannotated term (such as λx.x) may have more than one type (ι→ι, (ι→ι)→(ι→ι), etc., which are all instances of the principal type α→α).
Another alternative presentation of simply typed lambda calculus is based on bidirectional type checking, which requires more type annotations than Hindley–Milner inference but is easier to describe. The type system is divided into two judgments, representing both checking and synthesis, written Γ ⊢ e ⇐ τ and Γ ⊢ e ⇒ τ respectively. Operationally, the three components Γ, e, and τ are all inputs to the checking judgment Γ ⊢ e ⇐ τ, whereas the synthesis judgment Γ ⊢ e ⇒ τ only takes Γ and e as inputs, producing the type τ as output. These judgments are derived via the following rules:
Observe that rules [1]–[4] are nearly identical to rules (1)–(4) above, except for the careful choice of checking or synthesis judgments. These choices can be explained like so:
If x:σ is in the context, we can synthesize type σ for x.
The types of term constants are fixed and can be synthesized.
To check that λx.e has type σ→τ in some context, we extend the context with x:σ and check that e has type τ.
If e₁ synthesizes type σ→τ (in some context), and e₂ checks against type σ (in the same context), then e₁ e₂ synthesizes type τ.
Observe that the rules for synthesis are read top-to-bottom, whereas the rules for checking are read bottom-to-top. Note in particular that we do not need any annotation on the lambda abstraction in rule [3], because the type of the bound variable can be deduced from the type at which we check the function. Finally, we explain rules [5] and [6] as follows:
To check that e has type τ, it suffices to synthesize type τ.
If e checks against type τ, then the explicitly annotated term (e:τ) synthesizes τ.
Because of these last two rules coercing between synthesis and checking, it is easy to see that any well-typed but unannotated term can be checked in the bidirectional system, so long as we insert "enough" type annotations. And in fact, annotations are needed only at β-redexes.
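A minimal Python sketch of these two judgments (our own illustration; the names Ann, synth, and check are invented) shows how annotations are consulted only at the synthesis/checking boundary, so lambda binders need no annotation:

from dataclasses import dataclass

# Term and type shapes, mirroring the checker sketched earlier, plus Ann.
@dataclass(frozen=True)
class Arrow:
    src: object
    tgt: object

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:           # note: no type annotation on the binder
    name: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

@dataclass(frozen=True)
class Ann:           # explicit annotation (e : t), used by rule [6]
    term: object
    ty: object

def synth(ctx, t):
    """Synthesis judgment: inputs ctx and t, output the type."""
    if isinstance(t, Var):                     # rule [1]
        return ctx[t.name]
    if isinstance(t, Ann):                     # rule [6]
        check(ctx, t.term, t.ty)
        return t.ty
    if isinstance(t, App):                     # rule [4]
        fn_ty = synth(ctx, t.fn)
        assert isinstance(fn_ty, Arrow)
        check(ctx, t.arg, fn_ty.src)
        return fn_ty.tgt
    raise TypeError("cannot synthesize; add an annotation")

def check(ctx, t, ty):
    """Checking judgment: ctx, t, and ty are all inputs."""
    if isinstance(t, Lam):                     # rule [3]
        assert isinstance(ty, Arrow)
        check({**ctx, t.name: ty.src}, t.body, ty.tgt)
    elif synth(ctx, t) != ty:                  # rule [5]
        raise TypeError("type mismatch")

# The unannotated identity checks against a -> a:
check({}, Lam("x", Var("x")), Arrow("a", "a"))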
General observations
Given the standard semantics, the simply typed lambda calculus is strongly normalizing: every sequence of reductions eventually terminates. This is because recursion is not allowed by the typing rules: it is impossible to find types for fixed-point combinators and the looping term Ω = (λx.x x)(λx.x x). Recursion can be added to the language by either having a special operator fix of type (τ→τ)→τ or adding general recursive types, though both eliminate strong normalization.
Unlike the untyped lambda calculus, the simply typed lambda calculus is not Turing complete. All programs in the simply typed lambda calculus halt. For the untyped lambda calculus, there are programs that do not halt, and moreover there is no general decision procedure that can determine whether a program halts.
Important results
Tait showed in 1967 that βη-reduction is strongly normalizing. As a corollary, βη-equivalence is decidable. Statman showed in 1979 that the normalisation problem is not elementary recursive, a proof that was later simplified by Mairson. The problem is known to be in the set of the Grzegorczyk hierarchy. A purely semantic normalisation proof (see normalisation by evaluation) was given by Berger and Schwichtenberg in 1991.
The unification problem for βη-equivalence is undecidable. Huet showed in 1973 that 3rd order unification is undecidable and this was improved upon by Baxter in 1978 then by Goldfarb in 1981 by showing that 2nd order unification is already undecidable. A proof that higher order matching (unification where only one term contains existential variables) is decidable was announced by Colin Stirling in 2006, and a full proof was published in 2009.
We can encode natural numbers by terms of the type (ι→ι)→(ι→ι) (Church numerals). Schwichtenberg showed in 1975 that in λ→ exactly the extended polynomials are representable as functions over Church numerals; these are roughly the polynomials closed up under a conditional operator.
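Concretely, a Church numeral iterates its function argument (the standard encoding, shown here for illustration):

$$\underline{n} = \lambda f{:}\iota\to\iota.\ \lambda x{:}\iota.\ f^{\,n}(x), \qquad \text{e.g.}\quad \underline{2} = \lambda f{:}\iota\to\iota.\ \lambda x{:}\iota.\ f\,(f\,x),$$

so each numeral indeed has type (ι→ι)→(ι→ι).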
A full model of λ→ is given by interpreting base types as sets and function types by the set-theoretic function space. Friedman showed in 1975 that this interpretation is complete for βη-equivalence, if the base types are interpreted by infinite sets. Statman showed in 1983 that βη-equivalence is the maximal equivalence that is typically ambiguous, i.e. closed under type substitutions (Statman's Typical Ambiguity Theorem). A corollary of this is that the finite model property holds, i.e. finite sets are sufficient to distinguish terms that are not identified by βη-equivalence.
Plotkin introduced logical relations in 1973 to characterize the elements of a model that are definable by lambda terms. In 1993 Jung and Tiuryn showed that a general form of logical relation (Kripke logical relations with varying arity) exactly characterizes lambda definability. Plotkin and Statman conjectured that it is decidable whether a given element of a model generated from finite sets is definable by a lambda term (Plotkin–Statman conjecture). The conjecture was shown to be false by Loader in 2001.
Notes
References
H. Barendregt, Lambda Calculi with Types, Handbook of Logic in Computer Science, Volume II, Oxford University Press, 1993. .
External links
Lambda calculus
Theory of computation
Type theory | Simply typed lambda calculus | Mathematics | 3,389 |
45,449,667 | https://en.wikipedia.org/wiki/Obangsaek | The traditional Korean color spectrum, also known as Obangsaek (), is the color scheme of the five Korean traditional colors of white, black, blue, yellow and red. In Korean traditional arts and textile patterns, the colors of Obangsaek represent the five cardinal directions. Obangsaek theory is a combination of the Five Elements and Five Colours theory and originated in China.
Five orientations
Blue: east
Red: south
Yellow: center
White: west
Black: north
These colors are also associated with the Five Elements of Culture of Korea:
Blue: Wood
Red: Fire
Yellow: Earth
White: Metal
Black: Water
References
Korean art
Korean clothing
Orientation (geometry)
Color in culture
Wuxing (Chinese philosophy) | Obangsaek | Physics,Mathematics | 145 |
7,674,809 | https://en.wikipedia.org/wiki/Network%20tomography | Network tomography is the study of a network's internal characteristics using information derived from end point data. The word tomography is used to link the field, in concept, to other processes that infer the internal characteristics of an object from external observation, as is done in MRI or PET scanning (even though the term tomography strictly refers to imaging by slicing). The field is a recent development in electrical engineering and computer science, dating from 1996. Network tomography seeks to map the path data takes through the Internet by examining information from "edge nodes," the computers where the data originate and where they are requested.
The field is useful for engineers attempting to develop more efficient computer networks. Data derived from network tomography studies can be used to increase quality of service by limiting link packet loss and increasing routing optimization.
Recent developments
There have been many published papers and tools in the area of network tomography, which aim to monitor the health of various links in a network in real-time. These can be classified into loss and delay tomography.
Loss tomography
Loss tomography aims to find “lossy” links in a network by sending active “probes” from various vantage points in the network or the Internet.
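A common formulation (sketched below with an invented two-path, three-link topology; the numbers are illustrative, not measurements) treats loss tomography as a linear inverse problem: per-path success is the product of per-link success rates, so taking logarithms yields a linear system y = A x that can be solved, for example, by least squares:

import numpy as np

# Routing matrix for a toy topology: A[i, j] = 1 if probed path i uses link j.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

path_success = np.array([0.90, 0.85])   # measured end-to-end delivery rates

# log(path success) = A @ log(link success), so the system is linear in logs.
# Real deployments are typically under-determined; lstsq returns the
# minimum-norm least-squares estimate.
y = np.log(path_success)
x, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.exp(x))                        # estimated per-link success rates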
Delay tomography
The area of delay tomography has also attracted attention in the recent past. It aims to find link delays using end-to-end probes sent from vantage points. This can potentially help isolate links with large queueing delays caused by congestion.
More applications
Network tomography may be able to infer network topology using end-to-end probes. Topology discovery involves a tradeoff between accuracy and overhead. With network tomography, the emphasis is on achieving as accurate a picture of the network as possible with minimal overhead. In comparison, other network topology discovery techniques using SNMP or route analytics aim for greater accuracy with less emphasis on overhead reduction.
Network tomography may find links which are shared by multiple paths (and can thus become potential bottlenecks in the future).
Network tomography may improve the control of a smart grid.
See also
Network science
Computer network
References
Networks
Electrical engineering | Network tomography | Engineering | 430 |
73,108,245 | https://en.wikipedia.org/wiki/Pleurotus%20smithii | Pleurotus smithii is a species of fungus in the family Pleurotaceae, described as new to science by mycologist Gastón Guzmán in 1975. Like other species of the Pleurotus cystidiosus clade, it has an anamorphic form, named Antromycopsis guzmanii. P. smithii can be distinguished from P. cystidiosus by the absence of pleurocystidia (or their presence only in young stages as cystidioid elements), by the short hyphal segments of the conidiophores in the anamorph, and by the long subcylindrical cheilocystidia in the teleomorph form.
See also
List of Pleurotus species
References
External links
Fungi described in 1975
Pleurotaceae
Fungus species | Pleurotus smithii | Biology | 165 |
50,448,376 | https://en.wikipedia.org/wiki/CCDC180 | Coiled-coil domain containing protein 180 (CCDC180) is a protein that in humans is encoded by the CCDC180 gene. This protein is known to localize to the nucleus and, like many proteins containing coiled-coil domains, is thought to be involved in the regulation of transcription. As it is expressed most highly in the testes and is regulated by SRY and SOX transcription factors, it could be involved in sex determination.
Gene
Locus
CCDC180 is located on chromosome 9 at the locus 9q22.33.
Common aliases
CCDC180 is also known by the aliases KIAA1529, BDAG1 (Behçet's Disease Associated Gene 1), and C9orf174.
Gene features
The CCDC180 gene is 71,221 bases long. It contains 37 exons and is oriented on the forward strand of the chromosome.
mRNA
There are no known isoforms or alternative splicing variants of the CCDC180 mRNA.
Protein
General features
CCDC180 contains 1,701 amino acids and has a molecular weight of 197.3 kDa. The isoelectric point (pI) is 5.74. The low pI is attributed to a relatively high concentration of glutamic acid when compared to other human proteins at 12.9%. CCDC180 also contains a relatively low concentration of glycine when compared to the average human protein at 3.5%.
Domains
CCDC180 contains two domains of unknown function (DUFs): DUF4455 and DUF4456. There are also two coiled-coil regions which overlap with the DUFs. There is a region of low complexity that is very rich in glutamic acid.
Secondary and tertiary structure
The secondary structure of CCDC180 is predicted to be almost completely composed of alpha helices, with only a few predicted beta sheets. The tertiary structure has not been completely characterized as yet, but a model has been predicted by the I-TASSER server at the University of Michigan.
Post-translational modifications
CCDC180 is predicted to undergo a variety of post-translational modifications:
Phosphorylation on serine, threonine, and tyrosine residues
Tyrosine sulfation
Sumoylation
O-linked β-N-acetylglucosamine modification of a serine residue
Subcellular localization
CCDC180 is predicted to localize to the nucleus, and it contains four nuclear localization sequences.
Expression
CCDC180 is expressed ubiquitously at low levels throughout the body, and the highest expression is consistently seen to be in the testes. Other replicated tissues of high expression include the trachea and eye.
Regulation of expression
Transcriptional regulation
Transcription of CCDC180 is predicted to be regulated by a 664 base pair promoter region, with the ID GXP_1829211. This prediction is supported by the transcripts GXT_23217882, GXT_24495001, GXT_24495002, and GXT_24495003. Transcription factors predicted to bind to this promoter region are described below.
Ccaat-enhancer binding protein
KRAB domain zinc finger protein 57
Krüppel-like C2H2 zinc finger factors
Octamer binding protein
SRY box 9
GLI zinc finger family
RXR heterodimers
SOX factors
E-box binding factors
Nerve growth factor-induced protein C
Myc-associated zinc finger
GC-binding factor 2
X-box binding protein 1
Histone nuclear factor P
Interacting proteins
Several proteins have been shown to interact with CCDC180 in yeast two-hybrid assays.
Clinical significance
A single-nucleotide polymorphism (SNP) in the gene that leads to a single amino acid change (S995C) has been shown in a genome-wide association study to be significantly associated with Behçet's disease, and this designation led to the alias Behcet's disease-associated gene 1 (BDAG1). The role of CCDC180 in the disease phenotype is unknown.
Homology
There are no paralogs in humans for this gene, but there are orthologs in a wide variety of organisms, extending back to single-celled green algae. CCDC180 is not conserved in bacteria, archaea, plants, fungi, or protists.
Evolutionary history
CCDC180 is a relatively quickly-evolving gene compared to other well-known genes. There are no known family members, splice variants or isoforms, or evidence of gene duplications in the history of the gene.
References
Proteins
Genes | CCDC180 | Chemistry | 1,009 |
12,280,369 | https://en.wikipedia.org/wiki/Subsurface%20ocean%20current | A subsurface ocean current is an oceanic current that runs beneath surface currents. Examples include the Equatorial Undercurrents of the Pacific, Atlantic, and Indian Oceans, the California Undercurrent, and the Agulhas Undercurrent, the deep thermohaline circulation in the Atlantic, and bottom gravity currents near Antarctica. The forcing mechanisms vary for these different types of subsurface currents.
Density current
The most common of these is the density current, epitomized by the thermohaline current. The density current works on a basic principle: denser water sinks below less dense water, which is displaced in turn. There are numerous factors controlling density.
Salinity
One is the salinity of water, a prime example of this being the Mediterranean/Atlantic exchange. The saltier waters of the Mediterranean sink to the bottom and flow along there, until they reach the ledge between the two bodies of water. At this point, they rush over the ledge into the Atlantic, pushing the less saline surface water into the Mediterranean.
Temperature
Another factor of density is temperature. Thermohaline (literally "heat-salt") currents are strongly influenced by heat. Cold water from glaciers, icebergs, etc. descends to join the ultra-deep, cold section of the worldwide thermohaline circulation. After spending an exceptionally long time in the depths, it eventually warms and rises to join the upper section of the circulation. Because of its temperature and vast extent, the thermohaline circulation is substantially slower than surface currents, taking nearly 1000 years to complete its worldwide circuit.
Turbidity current
One factor of density is so unique that it warrants its own current type: the turbidity current. A turbidity current is caused when the density of water is increased by sediment; it is the underwater equivalent of a landslide. When sediment increases the density of the water, the water falls to the bottom and then follows the form of the land. In doing so, the current gathers more sediment from the ocean bed, which in turn gathers more, and so on. Because a given amount of water can carry only a limited amount of sediment, more and more water becomes laden with sediment, until a huge, destructive current is washing down a marine hillside. It is theorized that submarine depths, such as the Mariana Trench, have been shaped in part by this action. There is one additional effect of turbidity currents: upwelling. The water rushing into ocean valleys displaces a significant amount of water, which has nowhere to go but up. The upwelling current rises almost straight up, carrying nutrient-rich water to the surface and feeding some of the world's largest fisheries. This current also helps thermohaline currents return to the surface.
Ekman Spiral
An entirely different class of subsurface current is caused by friction with surface currents and objects. When the wind or some other surface force sets surface currents in motion, some of this motion is transferred to the water below. The Ekman spiral, named after Vagn Walfrid Ekman, is the standard model for this transfer of energy. It works as follows: when the surface layer moves, the layer beneath inherits some, but not all, of this motion. Due to the Coriolis effect, however, each layer moves at an angle to the right of the layer above (to the left in the Southern Hemisphere). Each deeper layer is slower yet and rotated further. This process continues in the same manner until, at about 100 meters below the surface, the current is moving in the opposite direction of the surface current.
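The classical steady-state Ekman solution makes this spiral quantitative (a textbook sketch assuming a constant vertical eddy viscosity A_z and Coriolis parameter f, with z ≤ 0 measured upward from the surface):

$$u(z) = V_0\, e^{\pi z/D_E} \cos\!\left(\tfrac{\pi}{4} + \tfrac{\pi z}{D_E}\right), \qquad v(z) = V_0\, e^{\pi z/D_E} \sin\!\left(\tfrac{\pi}{4} + \tfrac{\pi z}{D_E}\right), \qquad D_E = \pi\sqrt{\frac{2 A_z}{|f|}},$$

so the current rotates and decays with depth; at the Ekman depth z = −D_E it is directed opposite to the surface flow and attenuated to e^{−π} ≈ 4% of the surface speed.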
Subsidence
The final type of subsurface current is subsidence, caused when forces push water against some obstacle (like a rock), causing it to pile up there. The water at the bottom of the pileup flows away from it, causing a subsidence current.
Wave Patterns
Various subsurface currents conflict at times, causing bizarre wave patterns. One of the most noticeable of these is the maelstrom. The word is derived from Nordic words meaning to grind and stream. Essentially, the maelstrom is a large, very powerful whirlpool: a swirling body of water drawn down and inward toward its center. This is usually the result of tidal currents.
Effect
Subsurface currents have a large effect on life on earth. They flow beneath the surface of the water, allowing them to be relatively free of external influence. Thus, they function like clockwork, providing nutrient transportation, water transfer, etc., as well as affecting the ocean floor and submarine processes.
See also
Oceanography
References
Ocean currents
Oceanography | Subsurface ocean current | Physics,Chemistry,Environmental_science | 993 |
3,271,928 | https://en.wikipedia.org/wiki/Bamako%20Convention | The Bamako Convention (in full: Bamako Convention on the Ban of the Import into Africa and the Control of Transboundary Movement and Management of Hazardous Wastes within Africa) is a treaty of African nations prohibiting the import of any hazardous (including radioactive) waste. The convention was negotiated by twelve nations of the Organisation of African Unity at Bamako, Mali in January, 1991, and came into force in 1998.
Impetus for the Bamako Convention arose from the failure of the Basel Convention to prohibit trade of hazardous waste to less developed countries (LDCs), and from the realization that many developed nations were exporting toxic wastes to Africa. This impression was strengthened by several prominent cases. One important case, which occurred in 1987, concerned the importation into Nigeria of hazardous waste from the Italian companies Ecomar and Jelly Wax, which had agreed to pay local farmer Sunday Nana $100 per month for storage. The barrels, found in storage in the port of Koko, contained toxic waste including polychlorinated biphenyls, and their eventual shipment back to Italy led to protests closing three Italian ports.
The Bamako Convention uses a format and language similar to that of the Basel Convention, but is much stronger in prohibiting all imports of hazardous waste. Additionally, it does not make exceptions on certain hazardous wastes (like those for radioactive materials) made by the Basel Convention.
Bamako Conference
The first Conference of the Parties to the Bamako Convention convened from 24 to 26 June 2013 at Bamako, Mali.
During the conference, parties agreed that the United Nations Environmental Programme would carry out the Bamako Convention Secretariat functions. Parties also resolved to encourage the Secretariat of the Bamako Convention to strengthen its ties with the Secretariat of the Basel, Rotterdam and Stockholm Conventions.
The following parties to the Bamako Convention attended COP 1: Benin, Burkina Faso, Burundi, Cameroon, Congo, Democratic Republic of the Congo (DRC), Côte d'Ivoire, Ethiopia, Gambia, Libya, Mali, Mozambique, Mauritius, Niger, Senegal, Togo and Tunisia. In addition, Eswatini, Guinea, Guinea-Bissau, Liberia, Nigeria and Zambia participated as observers.
See also
Basel Convention
Rotterdam Convention
Stockholm Convention
References
External links
Text of the Convention
List of Countries which have Signed, Ratified/Acceded
Basel Action Network
Nigerian case entry at Trade and Environment Database
Waste treaties
Chemical safety
Treaties concluded in 1991
Treaties entered into force in 1998
African Union treaties
Hazardous waste
1991 in Mali
1998 in the environment
Treaties of Benin
Treaties of Burkina Faso
Treaties of Burundi
Treaties of Cameroon
Treaties of Ivory Coast
Treaties of the Comoros
Treaties of the Republic of the Congo
Treaties of Zaire
Treaties of Egypt
Treaties of the Transitional Government of Ethiopia
Treaties of Gabon
Treaties of the Gambia
Treaties of the Libyan Arab Jamahiriya
Treaties of Mali
Treaties of Mozambique
Treaties of Mauritius
Treaties of Niger
Treaties of Senegal
Treaties of the Republic of the Sudan (1985–2011)
Treaties of Tanzania
Treaties of Togo
Treaties of Tunisia
Treaties of Uganda
Treaties of Zimbabwe
History of Bamako | Bamako Convention | Chemistry,Technology | 609 |
52,837,586 | https://en.wikipedia.org/wiki/Pseudo%20Jahn%E2%80%93Teller%20effect | The pseudo Jahn–Teller effect (PJTE), occasionally also known as second-order JTE, is a direct extension of the Jahn–Teller effect (JTE) where spontaneous symmetry breaking in polyatomic systems (molecules and solids) occurs even when the relevant electronic states are not degenerate.
The PJTE can occur under the influence of sufficiently low-lying electronic excited states of appropriate symmetry.
"The pseudo Jahn–Teller effect is the only source of instability and distortions of high-symmetry configurations of polyatomic systems in nondegenerate states, and it contributes significantly to the instability in degenerate states".
History
In their early 1957 paper on what is now called pseudo Jahn–Teller effect (PJTE), Öpik and Pryce showed that a small splitting of the degenerate electronic term does not necessarily remove the instability and distortion of a polyatomic system induced by the Jahn–Teller effect (JTE), provided that the splitting is sufficiently small (the two split states remain "pseudo degenerate"), and the vibronic coupling between them is strong enough. From another perspective, the idea of a "mix" of different electronic states induced by low-symmetry vibrations was introduced in 1933 by Herzberg and Teller to explore forbidden electronic transitions, and extended in the late 1950s by Murrell and Pople and by Liehr.
The role of excited states in softening the ground state with respect to distortions in benzene was demonstrated qualitatively by Longuet-Higgins and Salem by analyzing the π electron levels in the Hückel approximation, while a general second-order perturbation formula for such vibronic softening was derived by Bader in 1960. In 1961 Fulton and Gouterman presented a symmetry analysis of the two-level case in dimers and introduced the term "pseudo Jahn–Teller effect". The first application of the PJTE to solving a major solid-state structural problem with regard to the origin of ferroelectricity was published in 1966 by Isaac Bersuker, and the first book on the JTE covering the PJTE was published in 1972 by Englman. The second-order perturbation approach was employed by Pearson in 1975 to predict instabilities and distortions in molecular systems; he called it "second-order JTE" (SOJTE). The first explanation of PJT origin of puckering distortion as due to the vibronic coupling to the excited state, was given for the N3H32+ radical by Borden, Davidson, and Feller in 1980 (they called it "pyramidalization").
Methods of numerical calculation of the PJT vibronic coupling effect, with applications to spectroscopic problems, were developed in the early 1980s.
A significant step forward in this field was achieved in 1984, when it was shown by numerical calculations that the energy gap to the active excited state may not be the ultimate limiting factor in the PJTE, as there are two other compensating parameters in the condition of instability. It was also shown that, in extension of the initial definition, the PJT interacting electronic states are not necessarily components emerging from the same symmetry type (as in the split degenerate term). As a result, the applicability of the PJTE became a priori unlimited. Moreover, it was shown by Bersuker that the PJTE is the only source of instability of high-symmetry configurations of polyatomic systems in nondegenerate states, and that degeneracy and pseudo degeneracy are the only sources of spontaneous symmetry breaking in matter in all its forms. The many applications of the PJTE to the study of a variety of properties of molecular systems and solids are reflected in a number of reviews and books, as well as in proceedings of conferences on the JTE.
Theoretical background
General theory
The equilibrium geometry of any polyatomic system in nondegenerate states is defined as corresponding to the point of the minimum of the adiabatic potential energy surface (APES), where its first derivatives are zero and the second derivatives are positive. If we denote the energy of the system as a function of the normal displacements Q as E(Q), then at the minimum point of the APES (Q = 0) the curvature K of E(Q) in the direction Q,

K = \left( \frac{\partial^2 E}{\partial Q^2} \right)_0    (1)

is positive, i.e., K > 0. Very often the geometry of the system at this point of equilibrium on the APES does not coincide with the highest possible (or even with any high) symmetry expected from general symmetry considerations. For instance, linear molecules are bent at equilibrium, planar molecules are puckered, octahedral complexes are elongated, or compressed, or tilted, cubic crystals are tetragonally polarized (or have several structural phases), etc. The PJTE is the general driving force of all these distortions if they occur in the nondegenerate electronic states of the high-symmetry (reference) geometry. If at the reference configuration the system is structurally unstable with respect to some nuclear displacements Q, then K < 0 in this direction. The general formula for the energy is E = \langle \psi_0 | H | \psi_0 \rangle, where H is the Hamiltonian and \psi_0 is the wavefunction of the nondegenerate ground state. Substituting in Eq. (1), we get (omitting the index for simplicity)

K = K_0 + K_v    (2)

K_0 = \langle \psi_0 | \left( \frac{\partial^2 H}{\partial Q^2} \right)_0 | \psi_0 \rangle    (3)

K_v = -2 \sum_n \frac{ \left| \langle \psi_0 | (\partial H / \partial Q)_0 | \psi_n \rangle \right|^2 }{ E_n - E_0 }    (4)

where \psi_n are the wavefunctions of the excited states, and the K_v expression, obtained as a second order perturbation correction, is always negative, K_v < 0. Therefore, if K < 0, the K_v contribution is the only source of instability. The matrix elements in Eq. (4) are off-diagonal vibronic coupling constants,

F_{0n} = \langle \psi_0 | \left( \frac{\partial H}{\partial Q} \right)_0 | \psi_n \rangle    (5)

These measure the mixing of the ground and excited states under the nuclear displacements Q, and therefore K_v is termed the vibronic contribution. Together with the K_0 value and the energy gap between the mixing states, the constants F_{0n} are the main parameters of the PJTE (see below).

In a series of papers beginning in 1980 it was proved that for any polyatomic system in the high-symmetry configuration

K_0 > 0    (6)

and hence the vibronic contribution K_v is the only source of instability of any polyatomic system in nondegenerate states. Since K_0 > 0 for the high-symmetry configuration of any polyatomic system, a negative curvature, K < 0, can be achieved only due to the negative vibronic coupling component K_v, and only if |K_v| > K_0. It follows that any distortion of the high-symmetry configuration is due to, and only to, the mixing of its ground state with excited electronic states by the distortive nuclear displacements realized via the vibronic coupling in Eq. (5). The latter softens the system with respect to certain nuclear displacements (K_v < 0), and if this softening is larger than the original (nonvibronic) hardness K_0 in this direction, the system becomes unstable with respect to the distortions under consideration, leading to its equilibrium geometry of lower symmetry, or to dissociation.
There are many cases when neither the ground state is degenerate, nor is there a significant vibronic coupling to the lowest excited states to realize PJTE instability of the high-symmetry configuration of the system, and still there is a ground state equilibrium configuration with lower symmetry. In such cases the symmetry breaking is produced by a hidden PJTE (similar to a hidden JTE); it takes place due to a strong PJTE mixing of two excited states, one of which crosses the ground state to create a new (lower) minimum of the APES with a distorted configuration.
The two-level problem
The use of the second order perturbation correction, Eq. (4), for the calculation of the K value in the case of PJTE instability is incorrect, because in this case |K_v| > K_0, meaning the first perturbation correction is larger than the main term, and hence the criterion of applicability of the perturbation theory in its simplest form does not hold. In this case, we should consider the contribution of the lowest excited states (that make the total curvature negative) in a pseudo degenerate problem of perturbation theory. For the simplest case when only one excited state creates the main instability of the ground state, we can treat the problem via a pseudo degenerate two-level problem, including the contribution of the higher, weaker-influencing states as a second order correction.

In the PJTE two-level problem we have two electronic states of the high-symmetry configuration, ground \psi_1 and excited \psi_2, separated by an energy interval of 2\Delta, that become mixed under nuclear displacements Q_\Gamma of certain symmetry \Gamma; the denotations \Gamma, \Gamma_1, and \Gamma_2 indicate, respectively, the irreducible representations to which the symmetry coordinate and the two states belong. In essence, this is the original formulation of the PJTE. Assuming that the excited state is sufficiently close to the ground one, the vibronic coupling between them should be treated as a perturbation problem for two near-degenerate states. With both interacting states non-degenerate, the vibronic coupling constant F in Eq. (5) (omitting indices) is non-zero for only one coordinate, of the symmetry \Gamma = \Gamma_1 \times \Gamma_2. This gives us directly the symmetry of the direction of softening and possible distortion of the ground state. Assuming that the primary force constants K_0 in the two states are the same (for the case of different constants see [1]), we get a 2×2 secular equation with the following solution for the energies \varepsilon_\pm of the two states interacting under the linear vibronic coupling (energy is referred to the middle of the gap between the levels at the undistorted geometry):

\varepsilon_\pm(Q) = \tfrac{1}{2} K_0 Q^2 \pm \sqrt{ \Delta^2 + F^2 Q^2 }    (7)

It is seen from these expressions that, on taking into account the vibronic coupling F, the two APES curves change in different ways: in the upper sheet the curvature (the coefficient at Q^2 in the expansion on Q) increases, whereas in the lower one it decreases. But as long as F^2 < K_0 \Delta, the minima of both states correspond to the point Q = 0, as in the absence of vibronic mixing. However, if

F^2 > K_0 \Delta    (8)

the curvature of the lower curve of the APES becomes negative, and the system is unstable with respect to the Q displacements (Fig. 1). Under condition (8), the minima points on the APES are given by

Q_0 = \pm \sqrt{ \frac{F^2}{K_0^2} - \frac{\Delta^2}{F^2} }    (9)

From these expressions and Fig. 1 it is seen that while the ground state is softened (destabilized) by the PJTE, the excited state is hardened (stabilized), and this effect is the larger, the smaller \Delta and the larger F. It takes place in any polyatomic system and influences many molecular properties, including the existence of stable excited states of molecular systems that are unstable in the ground state (e.g., excited states of intermediates of chemical reactions); in general, even in the absence of instability the PJTE softens the ground state and increases the vibrational frequencies in the excited state.
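As an illustration of Eqs. (7)–(9), the following minimal numerical sketch (not from the sources; the values of K_0, F, and \Delta are arbitrary assumptions chosen to satisfy condition (8)) evaluates the two APES branches and locates the minima of the lower one:

import numpy as np

# Illustrative (assumed) parameters, arbitrary units
K0 = 1.0      # primary (non-vibronic) force constant, K0 > 0
F = 1.5       # off-diagonal linear vibronic coupling constant
Delta = 1.0   # half of the 2*Delta energy gap between the two states

# Instability condition of Eq. (8)
assert F**2 > K0 * Delta, "no PJTE instability for these parameters"

Q = np.linspace(-2, 2, 401)
E_lower = 0.5 * K0 * Q**2 - np.sqrt(Delta**2 + F**2 * Q**2)  # softened branch of Eq. (7)
E_upper = 0.5 * K0 * Q**2 + np.sqrt(Delta**2 + F**2 * Q**2)  # hardened branch

# Minima of the lower branch predicted by Eq. (9)
Q0 = np.sqrt(F**2 / K0**2 - Delta**2 / F**2)
print("predicted minima at Q = +/-%.4f" % Q0)
print("numerical minimum at |Q| = %.4f" % abs(Q[np.argmin(E_lower)]))

With these numbers the predicted and numerical minima agree at |Q| ≈ 1.34, and the lower branch shows the double-well shape sketched in Fig. 1b.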
Comparison with the Jahn-Teller effect
The two branches of the APES for the case of strong PJTE resulting in the instability of the ground state (when the condition of instability (8) holds) are illustrated in Fig. 1b, in comparison with the case when the two states have the same energy (Fig. 1a), i.e. when they are degenerate and the Jahn–Teller effect (JTE) takes place. We see that the two cases, degenerate and nondegenerate but close-in-energy (pseudo degenerate), are similar in generating two minima with distorted configurations, but there are important differences: while in the JTE there is a crossing of the two terms at the point of degeneracy (leading to conical intersections in more complicated cases), in the nondegenerate case with strong vibronic coupling there is an "avoided crossing" or "pseudo crossing". An even more important difference between the two vibronic coupling effects emerges from the fact that the two interacting states in the JTE are components of the same symmetry type, whereas in the PJTE each of the two states may have any symmetry. For this reason, the possible kinds of distortion are very limited in the JTE, and unlimited in the PJTE. It is also notable that while systems with JTE are limited by the condition of electron degeneracy, the applicability of the PJTE has no a priori limitations, as it also includes the cases of degeneracy. Even when the PJT coupling is weak and the inequality (8) does not hold, the PJTE is still significant in softening (lowering the corresponding vibrational frequency of) the ground state and increasing it in the excited state. When considering the PJTE in an excited state, all the higher in energy states destabilize it, while the lower ones stabilize it.
For a better understanding it is important to follow up on how the PJTE is related to intramolecular interactions. In other words, what is the physical driving force of the PJTE distortions (transformations) in terms of well-known electronic structure and bonding? The driving force of the PJTE is added (improved) covalence: the PJTE distortion takes place when it results in an energy gain due to greater covalent bonding between the atoms in the distorted configuration. Indeed, in the starting high-symmetry configuration the wavefunctions of the electronic states, ground and excited, are orthogonal by definition. When the structure is distorted, their orthogonality is violated, and a nonzero overlap between them occurs. If for two near-neighbor atoms the ground state wavefunction pertains (mainly) to one atom and the excited state wavefunction belongs (mainly) to the other, the orbital overlap resulting from the distortion adds covalency to the bond between them, so the distortion becomes energetically favorable (Fig. 2).
Applications
Examples of the PJTE being used to explain chemical, physical, biological, and materials science phenomena are innumerable; as stated above, the PJTE is the only source of instability and distortions in high-symmetry configurations of molecular systems and solids with nondegenerate states, hence any phenomenon stemming from such instability can be explained in terms of the PJTE. Below are some illustrative examples.
Linear systems
PJTE versus Renner–Teller effect in bending distortions. Linear molecules are exceptions from the JTE, and for a long time it was assumed that their bending distortions in degenerate states (observed in many molecules) are produced by the Renner–Teller effect (RTE) (the splitting of the degenerate state by the quadratic terms of the vibronic coupling). However, it was recently proved that the RTE, by splitting the degenerate electronic state, just softens the lower branch of the APES, but this lowering of the energy is not enough to overcome the rigidity of the linear configuration and to produce bending distortions. It follows that the bending distortion of linear molecular systems is due to, and only to, the PJTE, which mixes the electronic state under consideration with higher in energy (excited) states. This statement is reinforced by the fact that many linear molecules in nondegenerate states (and hence with no RTE) are, too, bent in the equilibrium configuration. The physical reason for the difference between the PJTE and the RTE in influencing the degenerate term is that while in the former case the vibronic coupling with the excited state produces additional covalent bonding that makes the distorted configuration preferable (see above, section 2.3), the RTE has no such influence; the splitting of the degenerate term in the RTE takes place just because the charge distribution in the two states becomes nonequivalent under the bending distortion.
Peierls distortion in linear chains. In linear molecules with three or more atoms there may be PJTE distortions that do not violate the linearity but change the interatomic distances. For instance, as a result of the PJTE a centrosymmetric linear system may become non-centrosymmetric in the equilibrium configuration, as in the BNB molecule. An interesting extension of such distortions in sufficiently long (infinite) linear chains was first considered by Peierls. In this case the electronic states, combinations of atomic states, are in fact band states, and it was shown that if the chain is composed of atoms with unpaired electrons, the valence band is only half filled, and the PJTE interaction between the occupied and unoccupied band states leads to the doubling of the period of the linear chain.
Broken cylindrical symmetry. It was shown also that the PJTE not only produces the bending instability of linear molecules, but if the mixing electronic states involve a Δ state (a state with a nonzero momentum with respect to the axis of the molecule, its projection quantum number being Λ=2), the APES, simultaneously with the bending, becomes warped along the coordinate of rotations around the molecular axis, thus violating both the linear and cylindrical symmetry. It happens because the PJTE, by mixing the wavefunctions of the two interacting states, transfers the high momentum of the electrons from states with Λ=2 to states with lower momentum, and this may alter significantly their expected rovibronic spectra.
Nonlinear molecules and two-dimensional (2D) systems
PJTE and combined PJTE plus JTE effects in molecular structures. There is a practically unlimited number of molecular systems for which the origin of their structural properties was revealed and/or rationalized based on the PJTE, or a combination of the PJTE and JTE. The latter stems from the fact that in any system with a JTE in the ground state the presence of a PJT active excited state is not excluded, and vice versa, the active excited state for the PJTE of the ground one may be degenerate, and hence JT active. Examples include molecular systems like Na3, C3H3, C4X4 (X = H, F, Cl, Br), CO3, Si4R4 (with R as large ligands), planar cyclic CnHn, all kinds of coordination systems of transition metals, mixed-valence compounds, biological systems, the origin of conformations, the geometry of ligand coordination, and others. Indeed, it is difficult to find a molecular system for which the PJTE implications are a priori excluded, which is understandable in view of the above-mentioned unique role of the PJTE in such instabilities. Three methods to quench the PJTE have been documented: changing the electronic charge of the molecule, sandwiching the molecule with other ions and cyclic molecules, and manipulating the environment of the molecule.
Hidden PJTE, spin crossover, and magnetic-dielectric bistability. As mentioned above, there are molecular systems in which the ground state in the high-symmetry configuration is neither degenerate to trigger the JTE, nor does it interact with the low-lying excited states to produce the PJTE (e.g., because of their different spin multiplicity). In these situations, the instability is produced by a strong PJTE in the excited states; this is termed "hidden PJTE" in the sense that its origin is not seen explicitly as a PJTE in the ground state. An interesting typical situation of hidden PJTE emerges in molecular and solid-state systems with half-filled valence electronic configurations e2 and t3. For instance, in the e2 case the ground state in the high-symmetry equilibrium geometry is an orbital non-degenerate triplet 3A, while the nearby low-lying two excited electronic states are close-in-energy singlets 1E and 1A; due to the strong PJT interaction between the latter, the lower component of 1E crosses the triplet state to produce a global minimum with lower symmetry. Fig. 3 illustrates the hidden PJTE in the CuF3 molecule, showing also the singlet-triplet spin crossover and the resulting two coexisting configurations of the molecule: a high-symmetry (undistorted) spin-triplet state with a nonzero magnetic moment, and a lower in energy dipolar-distorted singlet state with zero magnetic moment. Such magnetic-dielectric bistability is inherent to a whole class of molecular systems and solids.
Puckering in planar molecules and graphene-like 2D and quasi-2D systems. Special attention has been paid recently to 2D systems in view of a variety of their planar-surface-specific physical and chemical properties and possible graphene-like applications in electronics. Similar-to-graphene properties are sought in silicene, phosphorene, boron nitride, zinc oxide, gallium nitride, as well as in 2D transition metal dichalcogenides and oxides, plus a number of other organic and inorganic 2D and quasi-2D compounds with expected similar properties. One of the most important features of these systems is their planarity or quasi-planarity, but many of the quasi-2D compounds are subject to out-of-plane deviations known as puckering (buckling).
The instability and distortions of the planar configuration (as in any other systems in nondegenerate state) was shown to be due to the PJTE.
Detailed exploration of the PJTE in such systems allows one to identify the excited states that are responsible for the puckering, and suggest possible external influence that restores their planarity, including oxidation, reduction, substitutions, or coordination to other species. Recent investigations have also extended to 3D compounds.
Solid state and materials science
Cooperative PJTE in BaTiO3-type crystals and ferroelectricity. In crystals with PJTE centers the interaction between the local distortions may lead to their ordering to produce a phase transition to a regular crystal phase with lower symmetry. Such cooperative PJTE is quite similar to the cooperative JTE; it was shown in one of the first studies of the PJTE in solid state systems that in the case of ABO3 crystals with perovskite structure the local dipolar PJTE distortions at the transition metal B center and their cooperative interactions lead to ferroelectric phase transitions. Provided the criterion for the PJTE is met, each [BO6] center has an APES with eight equivalent minima along the trigonal axes, six orthorhombic saddle points, and (higher in energy) twelve tetragonal saddle points between them. With temperature, the gradually reached transitions between the minima via the different kinds of saddle points explain the origin of all four phases (three ferroelectric and one paraelectric) in perovskites of the BaTiO3 type and their properties. The trigonal displacement of the Ti ion in all four phases predicted by the theory, the fully disordered PJTE distortions in the paraelectric phase, and their partially disordered state in two other phases were confirmed by a variety of experimental investigations.
Multiferroicity and magnetic-ferroelectric crossover. The PJTE theory of ferroelectricity in ABO3 crystals was expanded to show that, depending on the number of electrons in the dn shell of the transition metal ion B4+ and their low spin or high spin arrangement (which controls the symmetry and spin multiplicity of the ground and PJTE active excited states of the [BO6] center), the ferroelectricity may coexist with a magnetic moment (multiferroicity). Moreover, in combination with the temperature dependent spin crossover phenomenon (which changes the spin multiplicity), this kind of multiferroicity may lead to a novel effect known as a magnetic-ferroelectric crossover.
Solid state magnetic-dielectric bistability. Similar to the above-mentioned molecular bistability induced by the hidden PJTE, a magnetic-dielectric bistability due to two coexisting equilibrium configurations with corresponding properties may take place also in crystals with transition metal centers, subject to the electronic configuration with half-filled e2 or t3 shells. As in molecular systems, the latter produce a hidden PJTE and local bistability which, distinguished from the molecular case, are enhanced by the cooperative interactions, thus acquiring larger lifetimes. This crystal bistability was proved by calculations for LiCuO2 and NaCuO2 crystals, in which the Cu3+ ion has the electronic e2(d8) configuration (similar to the CuF3 molecule).
Giant enhancement of observable properties in interaction with external perturbations. In a recent development it was shown that in inorganic crystals with PJTE centers, in which the local distortions are not ordered (before the phase transition to the cooperative phase), the effect of interaction with external perturbations contains an orientational contribution which enhances the observable properties by several orders of magnitude. This was demonstrated on the properties of crystals like paraelectric BaTiO3 in interaction with electric fields (in permittivity and electrostriction), or under a strain gradient (flexoelectricity). These giant enhancement effects occur due to the dynamic nature of the PJTE local dipolar distortions (their tunneling between the equivalent minima); the independently rotating dipole moments on each center become oriented (frozen) along the external perturbation, resulting in an orientational polarization which is not there in the absence of the PJTE.
References
Condensed matter physics
Inorganic chemistry
Solid-state chemistry | Pseudo Jahn–Teller effect | Physics,Chemistry,Materials_science,Engineering | 5,323 |
30,734,176 | https://en.wikipedia.org/wiki/Isotopes%20of%20unbinilium | Unbinilium (120Ubn) has not yet been synthesised, so there is no experimental data and a standard atomic weight cannot be given. Like all synthetic elements, it would have no stable isotopes.
List of isotopes
No isotopes of unbinilium are known.
Nucleosynthesis
Target-projectile combinations leading to Z = 120 compound nuclei
The sections below describe various combinations of targets and projectiles that could be used to form compound nuclei with Z = 120.
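As a minimal sketch (the code is illustrative, not from the sources), each combination can be checked for conservation of protons and nucleons, which fixes the compound nucleus quoted in the section headings below:

# Proton (Z) and nucleon (A) conservation for the target + projectile
# combinations described below; every pair sums to Z = 120.
combos = [
    ((92, 238), (28, 64)),   # 238U + 64Ni -> 302Ubn*
    ((94, 244), (26, 58)),   # 244Pu + 58Fe -> 302Ubn*
    ((96, 245), (24, 54)),   # 245Cm + 54Cr -> 299Ubn*
    ((96, 248), (24, 54)),   # 248Cm + 54Cr -> 302Ubn*
    ((98, 249), (22, 50)),   # 249Cf + 50Ti -> 299Ubn*
]
for (z_t, a_t), (z_p, a_p) in combos:
    print(f"Z = {z_t + z_p}, compound nucleus A = {a_t + a_p}")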
Hot fusion
238U(64Ni,xn)302-xUbn
In April 2007, the team at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany attempted to create unbinilium using a 238U target and a 64Ni beam:
238U + 64Ni → 302Ubn* → no atoms
No atoms were detected, providing a limit of 1.6 pb for the cross section at the energy provided. The GSI repeated the experiment with higher sensitivity in three separate runs in April–May 2007, January–March 2008, and September–October 2008, all with negative results, reaching a cross section limit of 90 fb.
244Pu(58Fe,xn)302-xUbn
Following their success in obtaining oganesson by the reaction between 249Cf and 48Ca in 2006, the team at the Joint Institute for Nuclear Research (JINR) in Dubna started experiments in March–April 2007 to attempt to create unbinilium with a 58Fe beam and a 244Pu target. Initial analysis revealed that no atoms of unbinilium were produced, providing a limit of 400 fb for the cross section at the energy studied.
244Pu + 58Fe → 302Ubn* → no atoms
The Russian team planned to upgrade their facilities before attempting the reaction again.
245Cm(54Cr,xn)299-xUbn
There are indications that this reaction may be tried by the JINR in the future. The expected products of the 3n and 4n channels, 296Ubn and 295Ubn, could undergo five alpha decays to reach the darmstadtium isotopes 276Ds and 275Ds respectively; these darmstadtium isotopes were synthesised at the JINR in 2022 and 2023 respectively, both in the 232Th+48Ca reaction.
248Cm(54Cr,xn)302-xUbn
In 2011, after upgrading their equipment to allow the use of more radioactive targets, scientists at the GSI attempted the rather asymmetrical fusion reaction:
248Cm + 54Cr → 302Ubn* → no atoms
It was expected that the change in reaction would quintuple the probability of synthesizing unbinilium, as the yield of such reactions is strongly dependent on their asymmetry. Although this reaction is less asymmetric than the 249Cf+50Ti reaction, it also creates more neutron-rich unbinilium isotopes that should receive increased stability from their proximity to the shell closure at N = 184. Three signals were observed in May 2011; a possible assignment to 299Ubn and its daughters was considered, but could not be confirmed, and a different analysis suggested that what was observed was simply a random sequence of events.
In March 2022, Yuri Oganessian gave a seminar at the JINR considering how one could synthesise element 120 in the 248Cm+54Cr reaction. In 2023, the director of the JINR, Grigory Trubnikov, stated that he hoped that the experiments to synthesise element 120 will begin in 2025.
249Cf(50Ti,xn)299-xUbn
In August–October 2011, a different team at the GSI using the TASCA facility tried a new, even more asymmetrical reaction:
249Cf + 50Ti → 299Ubn* → no atoms
Because of its asymmetry, the reaction between 249Cf and 50Ti was predicted to be the most favorable practical reaction for synthesizing unbinilium, although it is also somewhat cold, and is further away from the neutron shell closure at N = 184 than any of the other three reactions attempted. No unbinilium atoms were identified, implying a limiting cross section of 200 fb. Jens Volker Kratz predicted the actual maximum cross section for producing unbinilium by any of the four reactions 238U+64Ni, 244Pu+58Fe, 248Cm+54Cr, or 249Cf+50Ti to be around 0.1 fb; in comparison, the world record for the smallest cross section of a successful reaction was 30 fb for the reaction 209Bi(70Zn,n)278Nh, and Kratz predicted a maximum cross section of 20 fb for producing ununennium. If these predictions are accurate, then synthesizing ununennium would be at the limits of current technology, and synthesizing unbinilium would require new methods.
This reaction was investigated again in April to September 2012 at the GSI. This experiment used a 249Bk target and a 50Ti beam to produce element 119, but since 249Bk decays to 249Cf with a half-life of about 327 days, both elements 119 and 120 could be searched for simultaneously:
249Bk + 50Ti → 299Uue* → no atoms
249Cf + 50Ti → 299Ubn* → no atoms
Neither element 119 nor element 120 was observed. This implied a limiting cross section of 65 fb for producing element 119 in these reactions, and 200 fb for element 120.
In May 2021, the JINR announced plans to investigate the 249Cf+50Ti reaction in their new facility. The 249Cf target would have been produced by the Oak Ridge National Laboratory in Oak Ridge, Tennessee, United States; the 50Ti beam would be produced by the Hubert Curien Pluridisciplinary Institute in Strasbourg, Alsace, France. However, after the Russian invasion of Ukraine began in 2022, collaboration between the JINR and other institutes completely ceased due to sanctions. Thus, the JINR's plans have since shifted to the 248Cm+54Cr reaction, where the target and projectile beam could both be made in Russia.
Starting from 2022, plans began to be made to use the 88-inch cyclotron in the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California, United States to attempt to make new elements using 50Ti projectiles. The plan was to first test them on a plutonium target to create livermorium (element 116), which was successful in 2024. Thus, an attempt to make element 120 in the 249Cf+50Ti reaction is now planned for 2025.
References
Sources
Unbinilium
unbinilium | Isotopes of unbinilium | Chemistry | 1,345 |
18,477,184 | https://en.wikipedia.org/wiki/Vector-field%20consistency | Vector-Field Consistency is a consistency model for replicated data (for example, objects), initially described in a paper which was awarded the best-paper prize in the ACM/IFIP/Usenix Middleware Conference 2007. It has since been enhanced for increased scalability and fault-tolerance in a recent paper.
Description
This consistency model was initially designed for replicated data management in ad hoc gaming in order to minimize bandwidth usage without sacrificing playability. Intuitively, it captures the notion that although players want, and take advantage of, information regarding the whole of the game world (as opposed to the restricted view of rooms, arenas, etc. of limited size employed in many multiplayer video games), they need information with greater freshness, frequency, and accuracy for game entities located closer to the player's position.
It prescribes a multidimensional divergence bounding scheme based on a vector field of consistency vectors k = (θ, σ, ν), whose components bound, respectively, time divergence (replica staleness), sequence divergence (missing updates), and value divergence (a user-defined measure of replica divergence), applied to all space coordinates in the game scenario or world.
The consistency vector-fields emanate from field-generators designated as pivots (for example, players) and field intensity attenuates as distance grows from these pivots in concentric or square-like regions. This consistency model unifies locality-awareness techniques employed in message routing and consistency enforcement for multiplayer games, with divergence bounding techniques traditionally employed in replicated database and web scenarios.
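As an illustrative sketch only (the zone thresholds, names, and values below are assumptions, not taken from the published papers), a consistency vector can be assigned to a replicated object from its distance to the nearest pivot, with bounds that weaken over square-like zones:

from dataclasses import dataclass

@dataclass
class ConsistencyVector:
    theta: float  # maximum allowed staleness (e.g., seconds)
    sigma: int    # maximum number of missing updates
    nu: float     # maximum value divergence (user-defined metric)

# Assumed zone table: consistency requirements weaken with distance.
ZONES = [ConsistencyVector(0.1, 1, 0.01),   # innermost zone, around the pivot
         ConsistencyVector(1.0, 10, 0.1),   # middle zone
         ConsistencyVector(5.0, 100, 1.0)]  # outermost zone

def vector_for(obj_pos, pivots):
    # Chebyshev distance produces the square-like regions around each pivot.
    d = min(max(abs(obj_pos[0] - p[0]), abs(obj_pos[1] - p[1])) for p in pivots)
    if d <= 1:
        return ZONES[0]
    if d <= 4:
        return ZONES[1]
    return ZONES[2]

print(vector_for((3, 2), pivots=[(0, 0), (10, 10)]))  # distance 3 -> middle zone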
Notes
References
Data management | Vector-field consistency | Technology | 340 |
8,491,096 | https://en.wikipedia.org/wiki/Poynting%20effect | The Poynting effect may refer to two unrelated physical phenomena. Neither should be confused with the Poynting–Robertson effect. All of these effects are named after John Henry Poynting, an English physicist.
Solid mechanics
In solid mechanics, the Poynting effect is a finite strain theory effect observed when an elastic cube is sheared between two plates and stress is developed in the direction normal to the sheared faces, or when a cylinder is subjected to torsion and the axial length changes. The Poynting phenomenon in torsion was noticed experimentally by J. H. Poynting.
Chemistry and thermodynamics
In thermodynamics, the Poynting effect generally refers to the change in the fugacity of a liquid when a non-condensable gas is mixed with the vapor at saturated conditions.
Equivalently in terms of vapor pressure, if one assumes that the vapor and the non-condensable gas behave as ideal gases and form an ideal mixture, it can be shown that:

P_v = P_v^{sat} \exp\left( \frac{ v_L \left( P_{total} - P_v^{sat} \right) }{ R T } \right)

where
P_v is the modified vapor pressure
P_v^{sat} is the unmodified (saturation) vapor pressure
v_L is the liquid molar volume
R is the gas constant
T is the temperature
P_{total} is the total pressure (vapor pressure + non-condensable gas)
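As a numerical sketch (the property values are rough figures for water near 25 °C, and the 10 MPa total pressure is an arbitrary assumption), the correction is modest even at high total pressure:

import math

# Assumed illustrative values for water at about 25 °C
p_sat = 3.17e3    # saturation vapor pressure, Pa
v_l = 1.807e-5    # liquid molar volume, m^3/mol
R = 8.314         # gas constant, J/(mol K)
T = 298.15        # temperature, K
p_total = 1.0e7   # total pressure (vapor + non-condensable gas), Pa

# Poynting correction to the vapor pressure
p_v = p_sat * math.exp(v_l * (p_total - p_sat) / (R * T))
print(f"corrected vapor pressure: {p_v:.1f} Pa (factor {p_v / p_sat:.4f})")

For these numbers the correction factor is only about 1.08 at 10 MPa, which is why the effect is often neglected at modest pressures.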
A common example is the production of the medicine Entonox, a high-pressure mixture of nitrous oxide and oxygen. The ability to combine N2O and O2 at high pressure while remaining in the gaseous form is due to the Poynting effect.
References
Elasticity (physics)
Rubber properties
Gases | Poynting effect | Physics,Chemistry,Materials_science | 315 |
2,376,407 | https://en.wikipedia.org/wiki/V382%20Velorum | V382 Velorum, also known as Nova Velorum 1999, was a bright nova which occurred in 1999 in the southern constellation Vela. V382 Velorum reached a brightness of 2.6 magnitude, making it easily visible to the naked eye. It was discovered by Peter Williams of Heathcote, New South Wales, Australia at 09:30 UT on 22 May 1999. Later that same day it was discovered independently at 10:49 UT by Alan C. Gilmore at Mount John University Observatory in New Zealand.
In its quiescent state, V382 Velorum has a mean visual magnitude of 16.56. It is classified as a fast nova with a smooth light curve.
Like all novae, V382 Velorum is a binary system with two stars orbiting so close to each other that one star, the "donor" star, transfers matter to its companion star which is a white dwarf. The orbital period is 3.5 hours. The white dwarf in this system has a mass of 1.23M⊙. V382 Velorum is a neon nova, a relatively rare type of nova with a O-Ne-Mg white dwarf, rather than the more common C-O white dwarf.
The stars forming V382 Velorum are surrounded by a small emission nebula about 10 arc seconds in diameter.
References
External links
https://web.archive.org/web/20060517025834/http://institutocopernico.org/cartas/v382velb.gif
https://web.archive.org/web/20050915104557/http://www.tsm.toyama.toyama.jp/curators/aroom/var/nova/1990.htm
Novae
Vela (constellation)
1999 in science
Velorum, V382 | V382 Velorum | Astronomy | 388 |
2,922,328 | https://en.wikipedia.org/wiki/UW%20Canis%20Majoris | UW Canis Majoris is a star in the constellation Canis Major. It is classified as a Beta Lyrae eclipsing contact binary and given the variable star designation UW Canis Majoris. Its brightness varies from magnitude +4.84 to +5.33 with a period of 4.39 days. Bode had initially labelled it as Tau2 Canis Majoris, but this designation had been dropped by Gould and subsequent authors. It is visible to the naked eye of a person under good observing conditions.
Sergei Gaposchkin analyzed 376 photographic plates taken over a twelve-year period and announced in 1936 that 29 Canis Majoris is a variable star. It was given its variable star designation that same year.
UW Canis Majoris A is a rare blue supergiant of spectral type O7.5-8 Iab. The precise characteristics of the system are still uncertain, in part because the spectral signature of the secondary is very hard to disentangle from the spectrum of the primary and the surrounding envelope of stellar wind. A detailed spectral study by Gies et al. found that the primary had a diameter 13 times that of the Sun, while its secondary companion is a slightly cooler, less evolved and less luminous supergiant of spectral type O9.7Ib that is 10 times the Sun's diameter. According to this study, the brighter star is the more luminous, its luminosity 200,000 times that of the Sun as opposed to the secondary's 63,000 times. However, the secondary is the more massive of the two stars, at 19 solar masses.
However, a more recent photometric analysis finds several configurations of mass and luminosity ratios that match the observed data.
Parallax measurements showed it to be approximately 3,000 light years from Earth, but this is unexpectedly close for a star of its spectral type and brightness. More accurate Hipparcos parallax data gives an even closer result of around 2,000 light years, but Gaia Data Release 3 gives a parallax corresponding to a distance of around 3,800 light years. It is thought to be a distant member of NGC 2362, which would place it about 5,000 light years away and more closely match its expected luminosity. The contradiction between the different distance results is still a subject of research.
UW Canis Majoris is part of the giant HII region Sh2-310, and it, along with Tau Canis Majoris (the brightest member of NGC 2362), HD 58011, and VY Canis Majoris, is thought to be a probable source of ionization of the gases in Sh2-310.
References
Beta Lyrae variables
O-type supergiants
Canis Major
Canis Majoris, 29
Canis Majoris, UW
035412
Durchmusterung objects
057060
2781
Emission-line stars
Sh2-310 | UW Canis Majoris | Astronomy | 601 |
883,068 | https://en.wikipedia.org/wiki/Analysis%20of%20flows | In theoretical physics, an analysis of flows is the study of "gauge" or "gaugelike" "symmetries" (i.e. flows the formulation of a theory is invariant under). It is generally agreed that flows indicate nothing more than a redundancy in the description of the dynamics of a system, but often, it is simpler computationally to work with a redundant description.
Flows in classical mechanics
Flows in the action formalism
Classically, the action is a functional on the configuration space. The on-shell solutions are given by the variational problem of extremizing the action subject to boundary conditions.
While the boundary is often ignored in textbooks, it is crucial in the study of flows. Suppose we have a "flow", i.e. the generator of a smooth one-dimensional group of transformations of the configuration space, which maps on-shell states to on-shell states while preserving the boundary conditions. Because of the variational principle, the action for all of the configurations on the orbit is the same. This is not the case for more general transformations, which map on-shell states to on-shell states but change the boundary conditions.
Here are several examples. In a theory with translational symmetry, timelike translations are not flows because in general they change the boundary conditions. However, now take the case of a simple harmonic oscillator, where the boundary points are at a separation of a multiple of the period from each other, and the initial and final positions are the same at the boundary points. For this particular example, it turns out there is a flow. Even though this is technically a flow, this would usually not be considered a gauge symmetry because it is not local.
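A minimal sketch of why the oscillator example works (standard harmonic-oscillator facts; the boundary condition is taken to be x(0) = x(T), as described above): every solution is periodic with period 2π/ω, so when the boundary points are separated by a whole number of periods, a time translation maps on-shell states to on-shell states, preserves the boundary condition, and leaves the action unchanged:

x(t) = A\cos(\omega t + \varphi), \qquad T = \frac{2\pi n}{\omega}

x_\tau(t) \equiv x(t + \tau) \;\Rightarrow\; x_\tau(0) = x_\tau(T), \qquad S[x_\tau] = \int_0^T \left( \tfrac{1}{2} m \dot{x}_\tau^2 - \tfrac{1}{2} m \omega^2 x_\tau^2 \right) \mathrm{d}t = S[x]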
Flows can be given as derivations over the algebra of smooth functionals over the configuration space. If we have a flow distribution (i.e. flow-valued distribution) such that the flow convolved over a local region only affects the field configuration in that region, we call the flow distribution a gauge flow.
Given that we are only interested in what happens on shell, we would often take the quotient by the ideal generated by the Euler–Lagrange equations, or in other words, consider the equivalence class of functionals/flows which agree on shell.
Flows in the Hamiltonian formalism
First class constraints
Second class constraints
BRST formalism
Batalin–Vilkovisky formalism
References
Theoretical physics | Analysis of flows | Physics | 495 |
13,148,240 | https://en.wikipedia.org/wiki/Ground%20reaction%20force | In physics, and in particular in biomechanics, the ground reaction force (GRF) is the force exerted by the ground on a body in contact with it.
For example, a person standing motionless on the ground exerts a contact force on it (equal to the person's weight) and at the same time an equal and opposite ground reaction force is exerted by the ground on the person.
In the above example, the ground reaction force coincides with the notion of a normal force. However, in a more general case, the GRF will also have a component parallel to the ground, for example when the person is walking – a motion that requires the exchange of horizontal (frictional) forces with the ground.
The use of the word reaction derives from Newton's third law, which essentially states that if a force, called action, acts upon a body, then an equal and opposite force, called reaction, must act upon another body. The force exerted by the ground is conventionally referred to as the reaction, although, since the distinction between action and reaction is completely arbitrary, the expression ground action would be, in principle, equally acceptable.
The component of the GRF parallel to the surface is the frictional force. At the point where slippage begins, the ratio of the magnitude of the frictional force to that of the normal force yields the coefficient of static friction.
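As a minimal sketch (the force values are made-up force-plate readings, not from any study), the coefficient follows directly from the measured GRF components:

import math

# Assumed force-plate readings at the instant slippage begins, in newtons
f_normal = 700.0         # vertical (normal) component of the GRF
f_x, f_y = 180.0, 60.0   # horizontal (shear) components

f_friction = math.hypot(f_x, f_y)     # magnitude of the frictional force
mu_static = f_friction / f_normal     # coefficient of static friction
print(f"friction force = {f_friction:.1f} N, mu_s = {mu_static:.2f}")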
GRF is often measured to evaluate force production in various populations, athletes in particular, to help assess a subject's ability to exert force and power. This can help establish baseline parameters for strength and conditioning regimens from both rehabilitation and coaching standpoints. Plyometric jumps such as the drop-jump are activities often used to build greater power and force, which can lead to overall better ability on the playing field. In bilateral comparisons of landings from a safe height, in which the dominant foot touches down first followed by the non-dominant limb, the literature has shown no significant bilateral differences in vertical GRF output during drop-jump landings.
References
Mechanics
Biomechanics | Ground reaction force | Physics,Engineering | 452 |
50,250,122 | https://en.wikipedia.org/wiki/Nanomechanical%20resonator | A nanomechanical resonator is a nanoelectromechanical systems ultra-small resonator that oscillates at a specific frequency depending on its mass and stiffness.
See also
Quartz crystal microbalance
Atomic force microscopy
References
Further reading
Nanoelectronics | Nanomechanical resonator | Materials_science | 60 |
1,625,348 | https://en.wikipedia.org/wiki/IC%20power-supply%20pin | IC power-supply pins are voltage and current supply terminals found on integrated circuits (ICs) in electrical engineering, electronic engineering, and integrated circuit design. ICs have at least two pins that connect to the power rails of the circuit in which they are installed. These are known as the power-supply pins. However, the labeling of the pins varies by IC family and manufacturer. The double subscript notation usually corresponds to a first letter in a given IC family (transistors) notation of the terminals (e.g. VDD supply for a drain terminal in FETs etc.).
The simplest labels are V+ and V−, but internal design and historical traditions have led to a variety of other labels being used. V+ and V− may also refer to the non-inverting (+) and inverting (−) voltage inputs of ICs like op amps.
For power supplies, sometimes one of the supply rails is referred to as ground (abbreviated "GND"); positive and negative voltages are then relative to this ground. In digital electronics, negative voltages are seldom present, and the ground is nearly always the lowest voltage level. In analog electronics (e.g. an audio power amplifier) the ground can be a voltage level between the most positive and most negative voltage levels.
Double subscript notation, in which subscripted letters denote the difference between two points, uses similar-looking placeholders with subscripts, but the double-letter supply-voltage notation is not directly linked to it (though it may have been an influencing factor).
BJTs
ICs using bipolar junction transistors have VCC (+, positive) and VEE (−, negative) power-supply pins, though VCC is often used for CMOS devices as well.
In circuit diagrams and circuit analysis, there are long-standing conventions regarding the naming of voltages, currents, and some components. In the analysis of a bipolar junction transistor, for example, in a common-emitter configuration, the DC voltage at the collector, emitter, and base (with respect to ground) may be written as VC, VE, and VB respectively.
Resistors associated with these transistor terminals may be designated RC, RE, and RB. In order to create the DC voltages, the furthest voltage, beyond these resistors or other components if present, was often referred to as VCC, VEE, and VBB. In practice VCC and VEE then refer to the positive and negative supply lines respectively in common NPN circuits. Note that VCC would be negative, and VEE would be positive in equivalent PNP circuits.
VBB specifies the reference bias supply voltage in ECL logic.
FETs
Exactly analogous conventions were applied to field-effect transistors with their drain, source and gate terminals. This led to VD and VS being created by supply voltages designated VDD and VSS in the more common circuit configurations. In equivalence to the difference between NPN and PNP bipolars, VDD is positive with regard to VSS in the case of n-channel FETs and MOSFETs and negative for circuits based on p-channel FETs and MOSFETs.
CMOS
CMOS ICs have generally borrowed the NMOS convention of VDD for positive and VSS for negative, even though both positive and negative supply rails connect to source terminals (the positive supply goes to PMOS sources, the negative supply to NMOS sources).
In many single-supply digital and analog circuits the negative power supply is also called "GND". In "split-rail" supply systems there are multiple supply voltages. Examples of such systems include modern cell phones, with GND and voltages such as 1.2 V, 1.8 V, 2.4 V, 3.3 V, and PCs, with GND and voltages such as −5 V, 3.3 V, 5 V, 12 V. Power-sensitive designs often have multiple power rails at a given voltage, using them to conserve energy by switching off supplies to components that are not in active use.
More advanced circuits often have pins carrying voltage levels for more specialized functions, and these are generally labeled with some abbreviation of their purpose. For example, VUSB for the supply delivered to a USB device (nominally 5 V), VBAT for a battery, or Vref for the reference voltage for an analog-to-digital converter. Systems combining both digital and analog circuits often distinguish digital and analog grounds (GND and AGND), helping isolate digital noise from sensitive analog circuits. High-security cryptographic devices and other secure systems sometimes require separate power supplies for their unencrypted and encrypted (red/black) subsystems to prevent leakage of sensitive plaintext.
BJTs and FETs mixed
Although still in relatively common use, there is limited relevance of these device-specific power-supply designations in circuits that use a mixture of bipolar and FET elements, or in those that employ either both NPN and PNP transistors or both n- and p-channel FETs. This latter case is very common in modern chips, which are often based on CMOS technology, where the C stands for complementary, meaning that complementary pairs of n- and p-channel devices are common throughout.
These naming conventions were part of a bigger picture, where, to continue with bipolar-transistor examples, although the FET remains entirely analogous, DC or bias currents into or out of each terminal may be written IC, IE, and IB. Apart from DC or bias conditions, many transistor circuits also process a smaller audio-, video-, or radio-frequency signal that is superimposed on the bias at the terminals. Lower-case letters and subscripts are used to refer to these signal levels at the terminals, either peak-to-peak or RMS as required. So we see vc, ve, and vb, as well as ic, ie, and ib. Using these conventions, in a common-emitter amplifier, the ratio vc/vb represents the small-signal voltage gain at the transistor, and vc/ib the small-signal trans-resistance, from which the name transistor is derived by contraction. In this convention, vi and vo usually refer to the external input and output voltages of the circuit or stage.
Similar conventions were applied to circuits involving vacuum tubes, or thermionic valves, as they were known outside of the U.S. Therefore, we see VP, VK, and VG referring to plate (or anode outside of the U.S.), cathode (note K, not C) and grid voltages in analyses of vacuum triode, tetrode, and pentode circuits.
See also
4000 series
7400 series
Bob Widlar
Common collector
Differential amplifier
List of 4000 series integrated circuits
List of 7400 series integrated circuits
Logic family
Logic gate
Open collector
Operational amplifier applications
Pin-compatibility
Reference designator
Notes
References
Integrated circuits
fr:Boîtier de circuit intégré#Broches d'alimentation d'un circuit intégré | IC power-supply pin | Technology,Engineering | 1,474 |
17,710,437 | https://en.wikipedia.org/wiki/Co-rumination | The theory of co-rumination refers to extensively discussing and revisiting problems, speculating about problems, and focusing on negative feelings with peers. Although it is similar to self-disclosure in that it involves revealing and discussing a problem, it is more focused on the problems themselves and thus can be maladaptive. While self-disclosure is seen in this theory as a positive aspect found in close friendships, some types of self-disclosure can also be maladaptive. Co-rumination is a type of behavior that is positively correlated with both rumination and self-disclosure and has been linked to a history of anxiety because co-ruminating may exacerbate worries about whether problems will be resolved, about negative consequences of problems, and depressive diagnoses due to the consistent negative focus on troubling topics, instead of problem-solving. However, co-rumination is also closely associated with high-quality friendships and closeness.
Developmental psychology and gender differences
Girls are hypothesized to be more likely than boys to co-ruminate with their close friends, and co-rumination increases with age in children. Female adolescents are more likely to co-ruminate than younger girls because their social worlds become increasingly complex and stressful. This is not true for boys, however, as age differences are not expected among them because their interactions remain activity-focused and the tendency to extensively discuss problems is likely to remain inconsistent with male norms.
While providing social support, this tendency may also reinforce internalizing problems such as anxiety or depression, especially in adolescent girls, which may account for higher rates of depression among girls than boys. For boys, lower levels of co-rumination may help buffer them against emotional problems if they spend less time with friends dwelling on problems and concerns, though less sharing of personal thoughts and feelings can potentially interfere with creating high-quality friendships.
Co-rumination has been found to partially explain (or mediate) gender differences in anxiety and depression; females have reported engaging in more co-rumination in close friendships than males, as well as elevated co-rumination was associated with females' higher levels of depression, but not anxiety. Co-rumination is also linked with romantic activities, which have been shown to correlate with depressive symptoms over time, because they are often the problem discussed among adolescents.
Research suggests that, within adolescents, children who currently exhibit high levels of co-rumination are more likely to later receive depressive diagnoses than children who exhibit lower levels of co-rumination. This link was maintained even when children with current diagnoses were excluded and when current depressive symptoms were statistically controlled for, further suggesting that the relation between co-rumination and a history of depressive diagnoses is not due simply to current levels of depression. Another study, looking at 146 adolescents (69% female) ranging in age from 14 to 19, suggests that when comparing gender differences in co-rumination across samples, these differences appear to intensify through early adolescence but begin to narrow shortly thereafter and remain steady through emerging adulthood.
Stress hormones, co-rumination and depression
Co-rumination, or talking excessively about each other's problems, is common during adolescent years, especially among girls, as mentioned before. On a biological basis, a study has shown that there is an increase in the levels of stress hormones during co-rumination. This suggests that since stress hormones are released during co-rumination, they may also be released in greater amounts during other life stressors. If someone exhibits co-rumination in response to a life problem it may become more and more common for them to co-ruminate about all problems in their life.
Studies have also shown that co-rumination can predict internalizing symptoms such as depression and anxiety. Since co-rumination involves repeatedly going over problems again and again this clearly may lead to depression and anxiety. Catastrophizing, when one takes small possibilities and blows them out of proportion into something negative, is common in depression and anxiety and may very well be a result of constantly going over problems that may not be as bad as they seem.
Effects in daily life
Co-rumination, or lack thereof, leads to different behaviors in daily life. For example, studies have examined the link between co-rumination about specific negative emotions and weekly drinking habits. Worry co-rumination leads to less weekly drinking, while angry co-rumination leads to a significant increase in drinking. Some gender differences were found as well in the same study. In general, negative co-rumination increased the likelihood that women would binge drink weekly, whereas men would drink less weekly. When dealing with specific negative emotions, women drank less when taking part in worry co-rumination (as opposed to other negative emotions), while there appeared to be no significant difference in men (Ciesla et al., 2011).
Therapy
Co-rumination treatment typically consists of cognitive emotion regulation therapy for rumination with the patient. This therapy focuses both on the patients themselves and on their habits of continually co-ruminating with a friend or friends. Therapies may need to be altered depending on the gender of each patient. As suggested by Zlomke and Hahn (2010), men showed vast improvement in anxiety and worrying symptoms by focusing their attention on how to handle a negative event through "refocus on planning". For women, accepting a negative event/emotion and re-framing it in a positive light was associated with decreased levels of worry. In other words, some of the cognitive emotion regulation strategies that work for men do not necessarily work for women and vice versa. Patients are encouraged to talk about their problems with friends and family members, but need to focus on a solution instead of focusing on the exact problem.
Types of relationships
While the majority of studies have been conducted with youth same-sex friendships, others have explored co-rumination and correlates of co-rumination within other types of relationships. Research on co-rumination in the workplace has shown that discussions about workplace problems lead to mixed results, especially regarding gender differences. In high abusive supervision settings, co-rumination was shown to intensify negative effects for women, while being associated with lower negative effects for men. In low abusive supervision settings, results show no significant effects for women, but negative outcomes for men. The study suggests that the reason men are at risk for job dissatisfaction and depression under low-stress supervision lies in gender differences present from an early age. At a young age, girls report co-ruminating more than boys, and as they age girls' scores tend to rise, while boys' scores tend to drop. The study further suggests that in adulthood, men have less experience with co-rumination than women, though some men may learn such skills through interacting with women, or the interaction style among men may change in adulthood from activity-based to conversation-based; this suggests that not only do men and women co-ruminate differently, but that the level of stress may be a factor as well. In another study, co-rumination was seen to increase the negative effects of burnout on perceived stress among co-workers, thereby indicating that, while co-rumination may be seen as a socially supportive interaction, it could have negative psychological outcomes for co-workers.
Within the context of mother-adolescent relationships, a study of 5th, 8th, and 11th graders found greater levels of co-rumination in mother-daughter than in mother-son relationships. In addition, mother-adolescent co-rumination was related to positive relationship quality, but also to enmeshment, which was unique to co-rumination. These enmeshment and internalizing relations were strongest when co-rumination focused on the mother's problems.
Other relationships have also been studied. For instance, one study found that graduate students engage in co-rumination. Furthermore, for those graduate students, co-rumination acted as a partial mediator, which suppressed the positive effects of social support on emotional exhaustion.
Primary researchers
Researchers in psychology and communication have studied the conceptualization of co-rumination along with the effects of the construct. A few primary researchers have focused attention on the construct including Amanda Rose Professor of Psychology at the University of Missouri, who was one of the first scholars to write about the construct. Others who are doing work on co-rumination include Justin P. Boren, Associate Professor of Communication at Santa Clara University, Jennifer Byrd-Craven, associate professor of psychology at Oklahoma State University, and Dana L. Haggard, professor of management at Missouri State University.
See also
Developmental psychology
References
Developmental psychology | Co-rumination | Biology | 1,789 |
24,039,538 | https://en.wikipedia.org/wiki/List%20of%20wireless%20sensor%20nodes | A sensor node, also known as a mote (chiefly in North America), is a node in a sensor network that is capable of performing some processing, gathering sensory information and communicating with other connected nodes in the network. A mote is a node but a node is not always a mote.
List of Wireless Sensor Nodes
See also
Wireless sensor network
Sensor node
Mesh networking
Sun SPOT
Embedded computer
Embedded system
Mobile ad hoc network (MANETS)
Smartdust
Sensor Web
References
Wireless sensor network
Computer networking
Embedded systems
Wireless sensor nodes | List of wireless sensor nodes | Technology,Engineering | 106 |
1,137,568 | https://en.wikipedia.org/wiki/Artificial%20gravity | Artificial gravity is the creation of an inertial force that mimics the effects of a gravitational force, usually by rotation.
Artificial gravity, or rotational gravity, is thus the appearance of a centrifugal force in a rotating frame of reference (the transmission of centripetal acceleration via normal force in the non-rotating frame of reference), as opposed to the force experienced in linear acceleration, which by the equivalence principle is indistinguishable from gravity.
In a more general sense, "artificial gravity" may also refer to the effect of linear acceleration, e.g. by means of a rocket engine.
Rotational simulated gravity has been used in simulations to help astronauts train for extreme conditions.
Rotational simulated gravity has been proposed as a solution in human spaceflight to the adverse health effects caused by prolonged weightlessness.
However, there are no current practical outer space applications of artificial gravity for humans due to concerns about the size and cost of a spacecraft necessary to produce a useful centripetal force comparable to the gravitational field strength on Earth (g).
Scientists are also concerned that using centripetal force to create artificial gravity would disturb the inner ear of the occupants, leading to nausea and disorientation; these adverse effects may prove intolerable for the occupants.
Centripetal force
In the context of a rotating space station, it is the radial force provided by the spacecraft's hull that acts as centripetal force. Thus, the "gravity" force felt by an object is the centrifugal force perceived in the rotating frame of reference as pointing "downwards" towards the hull.
By Newton's Third Law, the value of little g (the perceived "downward" acceleration) is equal in magnitude and opposite in direction to the centripetal acceleration. It was tested with satellites like Bion 3 (1975) and Bion 4 (1977); they both had centrifuges on board to put some specimens in an artificial gravity environment.
Differences from normal gravity
From the perspective of people rotating with the habitat, artificial gravity by rotation behaves similarly to normal gravity but with the following differences, which can be mitigated by increasing the radius of a space station.
Centrifugal force varies with distance: Unlike real gravity, the apparent force felt by observers in the habitat pushes radially outward from the axis, and the centrifugal force is directly proportional to the distance from the axis of the habitat. With a small radius of rotation, a standing person's head would feel significantly less gravity than their feet. Likewise, passengers who move in a space station experience changes in apparent weight in different parts of the body.
The Coriolis effect gives an apparent force that acts on objects that are moving relative to a rotating reference frame. This apparent force acts at right angles to the motion and the rotation axis and tends to curve the motion in the opposite sense to the habitat's spin. If an astronaut inside a rotating artificial gravity environment moves towards or away from the axis of rotation, they will feel a force pushing them in or against the direction of spin. These forces act on the semicircular canals of the inner ear and can cause dizziness. Lengthening the period of rotation (lower spin rate) reduces the Coriolis force and its effects. It is generally believed that at 2 rpm or less, no adverse effects from the Coriolis forces will occur, although humans have been shown to adapt to rates as high as 23 rpm.
Changes in the rotation axis or rate of a spin would cause a disturbance in the artificial gravity field and stimulate the semicircular canals (refer to above). Any movement of mass within the station, including a movement of people, would shift the axis and could potentially cause a dangerous wobble. Thus, the rotation of a space station would need to be adequately stabilized, and any operations to deliberately change the rotation would need to be done slowly enough to be imperceptible. One possible solution to prevent the station from wobbling would be to use its liquid water supply as ballast which could be pumped between different sections of the station as required.
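These relations follow from the basic kinematics of rotation: a point at radius r in a habitat spinning with angular velocity ω feels a centripetal acceleration a = ω²r. The following Python sketch (an illustrative calculation, not drawn from the article's sources) solves for the spin rate that yields 1 g at a given radius, showing why the roughly 2 rpm comfort guideline mentioned above forces a large structure.

```python
import math

G_EARTH = 9.81  # m/s^2

def rpm_for_gravity(radius_m: float, accel: float = G_EARTH) -> float:
    """Spin rate (rpm) producing centripetal acceleration `accel` at `radius_m`."""
    omega = math.sqrt(accel / radius_m)      # a = omega^2 * r  ->  omega = sqrt(a / r)
    return omega * 60.0 / (2.0 * math.pi)    # rad/s -> revolutions per minute

def gravity_at(radius_m: float, rpm: float) -> float:
    """Centripetal acceleration (m/s^2) at `radius_m` for a given spin rate."""
    omega = rpm * 2.0 * math.pi / 60.0
    return omega ** 2 * radius_m

for r in (10, 50, 224, 500):
    print(f"radius {r:4d} m -> {rpm_for_gravity(r):5.2f} rpm for 1 g")
# A 10 m habitat needs ~9.5 rpm, well above the ~2 rpm comfort guideline;
# holding 2 rpm, 1 g requires a radius of roughly 224 m.
```

Because the acceleration scales linearly with r, the head-to-feet difference noted above also falls out directly: in a habitat of 10 m radius, the head of a 2 m tall person standing on the hull sits at 8 m and feels only 80% of the floor-level acceleration.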
Human spaceflight
The Gemini 11 mission attempted in 1966 to produce artificial gravity by rotating the capsule around the Agena Target Vehicle to which it was attached by a 36-meter tether. They were able to generate a small amount of artificial gravity, about 0.00015 g, by firing their side thrusters to slowly rotate the combined craft like a slow-motion pair of bolas. The resultant force was too small to be felt by either astronaut, but objects were observed moving towards the "floor" of the capsule.
Health benefits
Artificial gravity has been suggested as a solution to various health risks associated with spaceflight. In 1964, the Soviet space program believed that a human could not survive more than 14 days in space for fear that the heart and blood vessels would be unable to adapt to the weightless conditions. This fear was eventually discovered to be unfounded as spaceflights have now lasted up to 437 consecutive days, with missions aboard the International Space Station commonly lasting 6 months. However, the question of human safety in space did launch an investigation into the physical effects of prolonged exposure to weightlessness. In June 1991, a Spacelab Life Sciences 1 flight performed 18 experiments on two men and two women over nine days. In an environment without gravity, it was concluded that the response of white blood cells and muscle mass decreased. Additionally, within the first 24 hours spent in a weightless environment, blood volume decreased by 10%. Long weightless periods can cause brain swelling and eyesight problems. Upon return to Earth, the effects of prolonged weightlessness continue to affect the human body as fluids pool back to the lower body, the heart rate rises, a drop in blood pressure occurs, and there is a reduced tolerance for exercise.
Artificial gravity, for its ability to mimic the behavior of gravity on the human body, has been suggested as one of the most encompassing manners of combating the physical effects inherent in weightless environments. Other measures that have been suggested as symptomatic treatments include exercise, diet, and Pingvin suits. However, criticism of those methods lies in the fact that they do not fully eliminate health problems and require a variety of solutions to address all issues. Artificial gravity, in contrast, would remove the weightlessness inherent in space travel. By implementing artificial gravity, space travelers would never have to experience weightlessness or the associated side effects. Especially in a modern-day six-month journey to Mars, exposure to artificial gravity is suggested in either a continuous or intermittent form to prevent extreme debilitation to the astronauts during travel.
Proposals
Several proposals have incorporated artificial gravity into their design:
Discovery II: a 2005 vehicle proposal capable of delivering a 172-metric-ton crew to Jupiter's orbit in 118 days. A very small portion of the 1,690-metric-ton craft would incorporate a centrifugal crew station.
Multi-Mission Space Exploration Vehicle (MMSEV): a 2011 NASA proposal for a long-duration crewed space transport vehicle; it included a rotational artificial gravity space habitat intended to promote crew health for a crew of up to six persons on missions of up to two years in duration. The torus-ring centrifuge would utilize both standard metal-frame and inflatable spacecraft structures and would provide 0.11 to 0.69 g if built with the diameter option.
ISS Centrifuge Demo: a 2011 NASA proposal for a demonstration project preparatory to the final design of the larger torus centrifuge space habitat for the Multi-Mission Space Exploration Vehicle. The structure would have an outside diameter of with a ring interior cross-section diameter of . It would provide 0.08 to 0.51 g partial gravity. This test and evaluation centrifuge would have the capability to become a Sleep Module for the ISS crew.
Mars Direct: A plan for a crewed Mars mission created by NASA engineers Robert Zubrin and David Baker in 1990, later expanded upon in Zubrin's 1996 book The Case for Mars. The "Mars Habitat Unit", which would carry astronauts to Mars to join the previously launched "Earth Return Vehicle", would have had artificial gravity generated during flight by tying the spent upper stage of the booster to the Habitat Unit, and setting them both rotating about a common axis.
The proposed Tempo3 mission rotates two halves of a spacecraft connected by a tether to test the feasibility of simulating gravity on a crewed mission to Mars.
The Mars Gravity Biosatellite was a proposed mission meant to study the effect of artificial gravity on mammals. An artificial gravity field of 0.38 g (equivalent to Mars's surface gravity) was to be produced by rotation (32 rpm, radius of ca. 30 cm). Fifteen mice would have orbited Earth (Low Earth orbit) for five weeks and then land alive. However, the program was canceled on 24 June 2009, due to a lack of funding and shifting priorities at NASA.
Vast Space is a private company that proposes to build the world's first artificial gravity space station using the rotating spacecraft concept.
A Mars gravity simulator could be built on the Moon to prepare for Mars missions. The surface gravity of Mars is somewhat more than twice that of the Moon. It has been proposed to build a large low-pressure bubble, and within it up to twenty higher-pressure rotating tori, all within a cave or lava tube. An analogous system could be built on Mars to prepare people to return to Earth, whose surface gravity is more than twice that of Mars.
Issues with implementation
Some of the reasons that artificial gravity remains unused today in spaceflight trace back to the problems inherent in implementation. One of the realistic methods of creating artificial gravity is the centrifugal effect caused by the centripetal force of the floor of a rotating structure pushing up on the person. In that model, however, issues arise in the size of the spacecraft. As expressed by John Page and Matthew Francis, the smaller a spacecraft (the shorter the radius of rotation), the more rapid the rotation that is required. As such, to simulate gravity, it would be better to utilize a larger spacecraft that rotates slowly.
The requirements on size about rotation are due to the differing forces on parts of the body at different distances from the axis of rotation. If parts of the body closer to the rotational axis experience a force that is significantly different from parts farther from the axis, then this could have adverse effects. Additionally, questions remain as to what the best way is to initially set the rotating motion in place without disturbing the stability of the whole spacecraft's orbit. At the moment, there is not a ship massive enough to meet the rotation requirements, and the costs associated with building, maintaining, and launching such a craft are extensive.
In general, given the limited health effects seen in today's typically shorter spaceflights and the very large cost of researching a technology that is not yet truly needed, present-day development of artificial gravity technology has been stunted and sporadic.
As the length of typical spaceflights increases, so will the need for artificial gravity for their passengers, along with the knowledge and resources available to create it. In summary, it is probably only a question of time before conditions are suitable for completing the development of artificial gravity technology, which will almost certainly be required as the average length of a spaceflight grows.
In science fiction
Several science fiction novels, films, and series have featured artificial gravity production.
In the movie 2001: A Space Odyssey, a rotating centrifuge in the Discovery spacecraft provides artificial gravity.
The 1999 television series Cowboy Bebop, a rotating ring in the Bebop spacecraft creates artificial gravity throughout the spacecraft.
In the novel The Martian, the Hermes spacecraft achieves artificial gravity by design; it employs a ringed structure, at whose periphery forces around 40% of Earth's gravity are experienced, similar to Mars' gravity.
In the novel Project Hail Mary by the same author, weight on the titular ship Hail Mary is provided initially by engine thrust, as the ship is capable of constant acceleration up to and is also able to separate, turn the crew compartment inwards, and rotate to produce while in orbit.
The movie Interstellar features a spacecraft called the Endurance that can rotate on its central axis to create artificial gravity, controlled by retro thrusters on the ship.
The 2021 film Stowaway features the upper stage of a launch vehicle connected by 450-meter long tethers to the ship's main hull, acting as a counterweight for inertia-based artificial gravity.
In the television series For All Mankind, the space hotel Polaris, later renamed Phoenix after being purchased and converted into a space vessel by Helios Aerospace for their own Mars mission, features a wheel-like structure controlled by thrusters to create artificial gravity, whilst a central axial hub operates in zero gravity as a docking station.
Linear acceleration
Linear acceleration is another method of generating artificial gravity, by using the thrust from a spacecraft's engines to create the illusion of being under a gravitational pull. A spacecraft under constant acceleration in a straight line would have the appearance of a gravitational pull in the direction opposite to that of the acceleration, as the thrust from the engines would cause the spacecraft to "push" itself up into the objects and persons inside of the vessel, thus creating the feeling of weight. This is because of Newton's third law: the weight that one would feel standing in a linearly accelerating spacecraft would not be a true gravitational pull, but simply the reaction of oneself pushing against the craft's hull as it pushes back. Similarly, objects that would otherwise be free-floating within the spacecraft if it were not accelerating would "fall" towards the engines when it started accelerating, as a consequence of Newton's first law: the floating object would remain at rest, while the spacecraft would accelerate towards it, and appear to an observer within that the object was "falling".
To emulate artificial gravity on Earth, spacecraft using linear acceleration gravity may be built similar to a skyscraper, with its engines as the bottom "floor". If the spacecraft were to accelerate at the rate of 1 g—Earth's gravitational pull—the individuals inside would be pressed into the hull at the same force, and thus be able to walk and behave as if they were on Earth.
This form of artificial gravity is desirable because it could functionally create the illusion of a gravity field that is uniform and unidirectional throughout a spacecraft, without the need for large, spinning rings, whose fields may not be uniform, not unidirectional with respect to the spacecraft, and require constant rotation. This would also have the advantage of relatively high speed: a spaceship accelerating at 1 g, 9.8 m/s2, for the first half of the journey, and then decelerating for the other half, could reach Mars within a few days. Similarly, a hypothetical space travel using constant acceleration of 1 g for one year would reach relativistic speeds and allow for a round trip to the nearest star, Proxima Centauri. As such, low-impulse but long-term linear acceleration has been proposed for various interplanetary missions. For example, even heavy (100 ton) cargo payloads to Mars could be transported to Mars in and retain approximately 55 percent of the LEO vehicle mass upon arrival into a Mars orbit, providing a low-gravity gradient to the spacecraft during the entire journey.
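The "reach Mars within a few days" figure can be checked with a simple accelerate-then-decelerate (flip-and-burn) profile: covering a distance d at constant acceleration a for the first half and constant deceleration for the second half takes t = 2·√(d/a). The Python sketch below is an illustrative back-of-the-envelope check; the 0.5 AU Earth-Mars distance is an assumed close-approach value (the real distance varies widely), and relativistic effects are ignored.

```python
import math

G = 9.81                 # 1 g in m/s^2
AU = 1.496e11            # astronomical unit in meters
d = 0.5 * AU             # assumed Earth-Mars close-approach distance (varies widely)

t_half = math.sqrt((d / 2) / (0.5 * G))   # d/2 = (1/2) * a * t^2 for each leg
t_total = 2 * t_half                      # equivalently t = 2 * sqrt(d / G)
v_max = G * t_half                        # speed at the halfway flip

print(f"trip time : {t_total / 86400:.1f} days")   # ~2 days
print(f"peak speed: {v_max / 1000:.0f} km/s")      # ~860 km/s, under 0.3% of c
```

The peak speed of roughly 860 km/s is under 0.3% of the speed of light, so the non-relativistic formula is adequate here; the year-long 1 g journeys mentioned above would instead require relativistic expressions.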
This form of gravity is not without challenges, however. At present, the only practical engines that could propel a vessel fast enough to reach speeds comparable to Earth's gravitational pull require chemical reaction rockets, which expel reaction mass to achieve thrust, and thus the acceleration could only last for as long as a vessel had fuel. The vessel would also need to be constantly accelerating and at a constant speed to maintain the gravitational effect, and thus would not have gravity while stationary, and could experience significant swings in g-forces if the vessel were to accelerate above or below 1 g. Further, for point-to-point journeys, such as Earth-Mars transits, vessels would need to constantly accelerate for half the journey, turn off their engines, perform a 180° flip, reactivate their engines, and then begin decelerating towards the target destination, requiring everything inside the vessel to experience weightlessness and possibly be secured down for the duration of the flip.
A propulsion system with a very high specific impulse (that is, good efficiency in the use of reaction mass that must be carried along and used for propulsion on the journey) could accelerate more slowly producing useful levels of artificial gravity for long periods of time. A variety of electric propulsion systems provide examples. Two examples of this long-duration, low-thrust, high-impulse propulsion that have either been practically used on spacecraft or are planned in for near-term in-space use are Hall effect thrusters and Variable Specific Impulse Magnetoplasma Rockets (VASIMR). Both provide very high specific impulse but relatively low thrust, compared to the more typical chemical reaction rockets. They are thus ideally suited for long-duration firings which would provide limited amounts of, but long-term, milli-g levels of artificial gravity in spacecraft.
In a number of science fiction plots, acceleration is used to produce artificial gravity for interstellar spacecraft, propelled by as yet theoretical or hypothetical means.
This effect of linear acceleration is well understood, and is routinely used for 0 g cryogenic fluid management for post-launch (subsequent) in-space firings of upper stage rockets.
Roller coasters, especially launched roller coasters or those that rely on electromagnetic propulsion, can provide linear acceleration "gravity", and so can relatively high acceleration vehicles, such as sports cars. Linear acceleration can be used to provide air-time on roller coasters and other thrill rides.
Simulating lunar gravity
In January 2022, China was reported by the South China Morning Post to have built a small ( diameter) research facility to simulate low lunar gravity with the help of magnets. The facility was reportedly partly inspired by the work of Andre Geim (who later shared the 2010 Nobel Prize in Physics for his research on graphene) and Michael Berry, who both shared the Ig Nobel Prize in Physics in 2000 for the magnetic levitation of a frog.
Graviton control or generator
Speculative or fictional mechanisms
In science fiction, artificial gravity (or cancellation of gravity) or "paragravity" is sometimes present in spacecraft that are neither rotating nor accelerating. At present, there is no confirmed technique as such that can simulate gravity other than actual rotation or acceleration. There have been many claims over the years of such a device. Eugene Podkletnov, a Russian engineer, has claimed since the early 1990s to have made such a device consisting of a spinning superconductor producing a powerful "gravitomagnetic field." In 2006, a research group funded by ESA claimed to have created a similar device that demonstrated positive results for the production of gravitomagnetism, although it produced only 0.0001 g.
See also
References
External links
List of peer review papers on artificial gravity
TEDx talk about artificial gravity
Overview of artificial gravity in Sci-Fi and Space Science
NASA's Java simulation of artificial gravity
Variable Gravity Research Facility (xGRF), concept with tethered rotating satellites, perhaps a Bigelow expandable module and a spent upper stage as a counterweight
Gravity
Gravity
Space colonization
Scientific speculation
Space medicine
Rotation | Artificial gravity | Physics | 4,038 |
65,677,758 | https://en.wikipedia.org/wiki/Lactarius%20psammicola | Lactarius psammicola is a species of mushroom in the genus Lactarius, family Russulaceae, and order Russulales. Its mushroom cap is convex when young and becomes funnel shaped as it ages. The cap has concentric rings of orangish brown. The taste is described as acrid.
Further reading
Hesler & Smith's monograph of North American Lactarius species
References
psammicola
Fungus species | Lactarius psammicola | Biology | 90 |
21,318,521 | https://en.wikipedia.org/wiki/Graph%20structure%20theorem | In mathematics, the graph structure theorem is a major result in the area of graph theory. The result establishes a deep and fundamental connection between the theory of graph minors and topological embeddings. The theorem is stated in the seventeenth of a series of 23 papers by Neil Robertson and Paul Seymour. Its proof is very long and involved. Surveys accessible to nonspecialists describe the theorem and its consequences.
Setup and motivation for the theorem
A minor of a graph G is any graph that is isomorphic to a graph that can be obtained from a subgraph of G by contracting some edges. If G does not have a graph H as a minor, then we say that G is H-free. Let H be a fixed graph. Intuitively, if G is a huge H-free graph, then there ought to be a "good reason" for this. The graph structure theorem provides such a "good reason" in the form of a rough description of the structure of G. In essence, every H-free graph G suffers from one of two structural deficiencies: either G is "too thin" to have H as a minor, or G can be (almost) topologically embedded on a surface that is too simple to embed H upon. The first reason applies if H is a planar graph, and both reasons apply if H is not planar. We first make precise these notions.
Tree width
The tree width of a graph G is a positive integer that specifies the "thinness" of G. For example, a connected graph G has tree width one if and only if it is a tree, and G has tree width two if and only if it is a series–parallel graph. Intuitively, a huge graph G has small tree width if and only if G takes the structure of a huge tree whose nodes and edges have been replaced by small graphs. We give a precise definition of tree width in the subsection regarding clique-sums. It is a theorem that if H is a minor of G, then the tree width of H is not greater than that of G. Therefore, one "good reason" for G to be H-free is that the tree width of G is not very large. The graph structure theorem implies that this reason always applies in case H is planar.
Corollary 1. For every planar graph H, there exists a positive integer k such that every H-free graph has tree width less than k.
It is unfortunate that the value of k in Corollary 1 is generally much larger than the tree width of H (a notable exception is when H is K4, the complete graph on four vertices, for which the bound is tight: the K4-free graphs are exactly the graphs of tree width at most two). This is one reason that the graph structure theorem is said to describe the "rough structure" of H-free graphs.
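As a concrete illustration (not from the article's sources), the Python networkx library provides a heuristic that builds a tree decomposition and therefore an upper bound on tree width; on the small examples quoted above the bound is tight.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# A tree: tree width 1.
tree = nx.path_graph(6)
# A series-parallel graph (K4 minus an edge, the "diamond"): tree width 2.
sp = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])

for name, g in [("tree", tree), ("series-parallel", sp)]:
    width, decomposition = treewidth_min_degree(g)  # heuristic upper bound
    print(name, "tree width <=", width)
# tree width <= 1 ; series-parallel tree width <= 2
```

Because the routine eliminates vertices greedily, it guarantees only an upper bound in general; computing exact tree width is NP-hard.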
Surface embeddings
Roughly, a surface is a set of points with a local topological structure of a disc. Surfaces fall into two infinite families: the orientable surfaces include the sphere, the torus, the double torus and so on; the nonorientable surfaces include the real projective plane, the Klein bottle and so on. A graph embeds on a surface if the graph can be drawn on the surface as a set of points (vertices) and arcs (edges) that do not cross or touch each other, except where edges and vertices are incident or adjacent. A graph is planar if it embeds on the sphere. If a graph G embeds on a particular surface then every minor of G also embeds on that same surface. Therefore, a "good reason" for G to be H-free is that G embeds on a surface that H does not embed on.
When H is not planar, the graph structure theorem may be looked at as a vast generalization of the Kuratowski theorem. A version of this theorem proved by Wagner states that if a graph is both K5-free and K3,3-free, then it is planar. This theorem provides a "good reason" for a graph not to have K5 or K3,3 as minors; specifically, such a graph embeds on the sphere, whereas neither K5 nor K3,3 embeds on the sphere. Unfortunately, this notion of "good reason" is not sophisticated enough for the graph structure theorem. Two more notions are required: clique-sums and vortices.
Clique-sums
A clique in a graph G is any set of vertices that are pairwise adjacent in G. For a non-negative integer k, a k-clique-sum of two graphs G and K is any graph obtained by selecting a non-negative integer m ≤ k, selecting a clique of size m in each of G and K, identifying the two cliques into a single clique of size m, then deleting zero or more of the edges that join vertices in the new clique.
If G1, G2, ..., Gn is a list of graphs, then we may produce a new graph by joining the list of graphs via k-clique-sums. That is, we take a k-clique-sum of G1 and G2, then take a k-clique-sum of G3 with the resulting graph, and so on. A graph has tree width at most k if it can be obtained via k-clique-sums from a list of graphs, where each graph in the list has at most k + 1 vertices.
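The operation is straightforward to make concrete in code. The sketch below (an illustrative helper built on the Python networkx library; graph and vertex names are invented for the example) glues two triangles along a shared 2-clique, which is exactly a 2-clique-sum.

```python
import networkx as nx

def clique_sum(g, h, g_clique, h_clique, drop_edges=()):
    """Identify h_clique with g_clique (equal sizes), merge the graphs,
    then optionally delete some edges of the shared clique."""
    assert len(g_clique) == len(h_clique)
    relabeled = nx.relabel_nodes(h, dict(zip(h_clique, g_clique)))
    merged = nx.compose(g, relabeled)   # union that identifies equal labels
    merged.remove_edges_from(drop_edges)
    return merged

# Two triangles glued along the 2-clique {a, b}. Keeping the shared edge
# gives the "diamond" (K4 minus an edge); dropping it gives the 4-cycle
# a-c-b-z. Both have tree width 2, as the definition above predicts.
t1 = nx.Graph([("a", "b"), ("b", "c"), ("a", "c")])
t2 = nx.Graph([("x", "y"), ("y", "z"), ("x", "z")])
s = clique_sum(t1, t2, ["a", "b"], ["x", "y"], drop_edges=[("a", "b")])
print(sorted(s.edges()))   # [('a', 'c'), ('a', 'z'), ('b', 'c'), ('b', 'z')]
```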
Corollary 1 indicates to us that k-clique-sums of small graphs describe the rough structure of H-free graphs when H is planar. When H is nonplanar, we also need to consider k-clique-sums of a list of graphs, each of which is embedded on a surface. The following example with H = K5 illustrates this point. The graph K5 embeds on every surface except for the sphere. However, there exist K5-free graphs that are far from planar. In particular, the 3-clique-sum of any list of planar graphs results in a K5-free graph. Wagner determined the precise structure of K5-free graphs, as part of a cluster of results known as Wagner's theorem:
Theorem 2. If G is K5-free, then G can be obtained via 3-clique-sums from a list of planar graphs and copies of one special non-planar graph having 8 vertices (the Wagner graph).
We point out that Theorem 2 is an exact structure theorem since the precise structure of K5-free graphs is determined. Such results are rare within graph theory. The graph structure theorem is not precise in this sense because, for most graphs H, the structural description of H-free graphs includes some graphs that are not H-free.
Vortices (rough description)
One might be tempted to conjecture that an analog of Theorem 2 holds for graphs other than K5. Perhaps it is true that: for any non-planar graph H, there exists a positive integer k such that every H-free graph can be obtained via k-clique-sums from a list of graphs, each of which either has at most k vertices or embeds on some surface that H does not embed on. Unfortunately, this statement is not yet sophisticated enough to be true. We must allow each embedded graph to "cheat" in two limited ways. First, we must allow a bounded number of locations on the surface at which we may add some new vertices and edges that are permitted to cross each other in a manner of limited complexity. Such locations are called vortices. The "complexity" of a vortex is limited by a parameter called its depth, closely related to pathwidth. The reader may prefer to defer reading the following precise description of a vortex of depth k. Second, we must allow a limited number of new vertices to be added to each of the embedded graphs with vortices.
Vortices (precise definition)
A face of an embedded graph is an open 2-cell in the surface that is disjoint from the graph, but whose boundary is the union of some of the edges of the embedded graph. Let F be a face of an embedded graph G and let v0, v1, ..., vn−1 be the vertices lying on the boundary of F (in that circular order). A circular interval for F is a set of vertices of the form {va, va+1, ..., va+s} where a and s are integers and where subscripts are reduced modulo n. Let Λ be a finite list of circular intervals for F. We construct a new graph as follows. For each circular interval L in Λ we add a new vertex vL that joins to zero or more of the vertices in L. Finally, for each pair {L, M} of intervals in Λ, we may add an edge joining vL to vM provided that L and M have nonempty intersection. The resulting graph is said to be obtained from G by adding a vortex of depth at most k (to the face F) provided that no vertex on the boundary of F appears in more than k of the intervals in Λ.
Statement of the graph structure theorem
Graph structure theorem. For any graph H, there exists a positive integer k such that every H-free graph can be obtained as follows:
We start with a list of graphs, where each graph in the list is embedded on a surface on which H does not embed
to each embedded graph in the list, we add at most k vortices, where each vortex has depth at most k
to each resulting graph we add at most k new vertices (called apexes) and add any number of edges, each having at least one of its endpoints among the apexes.
finally, we join via k-clique-sums the resulting list of graphs.
Note that steps 1. and 2. result in an empty graph if H is planar, but the bounded number of vertices added in step 3. makes the statement consistent with Corollary 1.
Refinements
Strengthened versions of the graph structure theorem are possible depending on the set of forbidden minors. For instance, when one of the graphs in the forbidden set is planar, then every graph avoiding that set has a tree decomposition of bounded width; equivalently, it can be represented as a clique-sum of graphs of constant size. When one of the graphs in the forbidden set can be drawn in the plane with only a single crossing, then the graphs avoiding that set admit a decomposition as a clique-sum of graphs of constant size and graphs of bounded genus, without vortices.
A different strengthening is also known when one of the graphs in the forbidden set is an apex graph.
See also
Robertson–Seymour theorem
Notes
References
Graph minor theory
Theorems in graph theory | Graph structure theorem | Mathematics | 2,042 |
10,551,079 | https://en.wikipedia.org/wiki/Comparison%20of%20recording%20media | This article details a comparison of audio recording mediums.
Comparison
The typical duration of a vinyl album is about 15 to 25 minutes per side. Classical music and spoken word recordings can extend to over 30 minutes on a side. If a side exceeds the average time, the maximum groove amplitude is reduced to make room for the additional program material. This can cause hiss in the sound from lower quality amplifiers when the volume is turned up to compensate for the lower recorded level. An extreme example, Todd Rundgren's Initiation LP, with 36 minutes of music on one side, has a "technical note" at the bottom of the inner sleeve: "if the sound does not seem loud enough on your system, try re-recording the music onto tape." The total of around 40–45 minutes often influences the arrangement of tracks, with the preferred positions being the opening and closing tracks of each side.
Although the term EP is commonly used to describe a 7" single with more than two tracks, technically such records are no different from a normal 7" single. The EP uses reduced dynamic range and a smaller run-off groove area to extend the playing time. However, there are examples of singles, such as The Beatles' "Hey Jude" or Queen's "Bohemian Rhapsody", which are six minutes long or more. (In 1989, RCA released "Dreamtime" by the band Love and Rockets, which clocks in at 8:40.) These longer recordings would require the same technical approach as an EP. The term EP has also been used for 10" 45 rpm records, typically containing a reduced number of tracks.
Vinyl albums have a large 12" (30 cm) album cover, which also allows cover designers scope for imaginative designs, often including fold-outs and leaflets.
See also
Audio format
Audio storage
CardTalk
DJ
Hard drive
Magnetic cartridge
RCA
Record changer
Record press
Sound recording
Unusual types of gramophone records
Voyager Golden Record
Vinyl Emulation Software
References
Lawrence, Harold; "Mercury Living Presence." Compact disc liner notes. Bartók, Antal Dorati, Mercury 432 017-2. 1991.
International standard IEC 60098: Analogue audio disk records and reproducing equipment. Third edition, International Electrotechnical Commission, 1987.
College Physics, Sears, Zemansky, Young, 1974, LOC #73-21135, chapter: Acoustic Phenomena
Further reading
From Tin Foil to Stereo — Evolution of the Phonograph by Oliver Read and Walter L. Welch.
Where have all the good times gone? — the rise and fall of the record industry Louis Barfe.
Pressing the LP record by Ellingham, Niel, published at 1 Bruach Lane, PH16 5DG, Scotland.
External links
Creating a vinyl record
YouTube — Record Making With Duke Ellington (1937) A look at how early 78 rpm records were made.
Kiddie Records Weekly — Recordings and case images from children's records of the 1940s and 1950s.
Recorded music
Technological comparisons | Comparison of recording media | Technology | 604 |
25,510,801 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20May%2022%2C%202077 | A total solar eclipse will occur at the Moon's ascending node of orbit on Saturday, May 22, 2077, with a magnitude of 1.029. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Because the eclipse occurs about 3.2 days after perigee (on May 18, 2077, at 20:50 UTC), the Moon's apparent diameter will be larger than average.
The path of totality will be visible from parts of Australia, Papua New Guinea, and the Solomon Islands. A partial solar eclipse will also be visible for parts of Australia, Indonesia, Antarctica, and Oceania.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2077
A total solar eclipse on May 22.
A partial lunar eclipse on June 6.
An annular solar eclipse on November 15.
A partial lunar eclipse on November 29.
Metonic
Preceded by: Solar eclipse of August 3, 2073
Followed by: Solar eclipse of March 10, 2081
Tzolkinex
Preceded by: Solar eclipse of April 11, 2070
Followed by: Solar eclipse of July 3, 2084
Half-Saros
Preceded by: Lunar eclipse of May 17, 2068
Followed by: Lunar eclipse of May 28, 2086
Tritos
Preceded by: Solar eclipse of June 22, 2066
Followed by: Solar eclipse of April 21, 2088
Solar Saros 129
Preceded by: Solar eclipse of May 11, 2059
Followed by: Solar eclipse of June 2, 2095
Inex
Preceded by: Solar eclipse of June 11, 2048
Followed by: Solar eclipse of May 3, 2106
Triad
Preceded by: Solar eclipse of July 22, 1990
Followed by: Solar eclipse of March 23, 2164
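The intervals behind these cycles can be verified with simple date arithmetic. The short Python sketch below (an illustrative check; the cycle lengths are the usual mean values) confirms that the listed saros, inex, and half-saros partners are separated from this eclipse by approximately one cycle each.

```python
from datetime import date

eclipse = date(2077, 5, 22)

# Mean cycle lengths in days: saros ~6585.32, inex ~10571.95, half-saros ~3292.66.
print((eclipse - date(2059, 5, 11)).days)   # 6586  ~ one saros before (Saros 129)
print((eclipse - date(2048, 6, 11)).days)   # 10572 ~ one inex before
print((date(2086, 5, 28) - eclipse).days)   # 3293  ~ half a saros later (lunar eclipse)
```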
Solar eclipses of 2076–2079
Saros 129
Metonic series
Tritos series
Inex series
Notes
References
2077 05 22
2077 in science
2077 05 22
2077 05 22 | Solar eclipse of May 22, 2077 | Astronomy | 598 |
71,756,479 | https://en.wikipedia.org/wiki/EUICC | eUICC (embedded UICC) refers to the architectural standards for eSIM published by the GSM Association (GSMA), or to implementations of those standards. An eSIM is a device used to securely store one or more SIM card profiles, which are the unique identifiers and cryptographic keys used by cellular network service providers to uniquely identify and securely connect to mobile network devices. Applications of eUICC are found in mobile network devices (cell phones, tablets, portable computers, security controllers, medical devices, etc.) that use GSM cellular network eSIM technology.
Standards
The core standards that define eUICC are published by the GSM Association in two topical areas.
Consumer and IOT
Core standards for implementing eSIM on mobile devices include the following articles:
eSIM Architecture Specification
eSIM IoT Architecture and Requirement Specification
SGP.22 Remote SIM Provisioning (RSP) Architecture for Consumer Devices
Machine to Machine (M2M)
GSMA publishes standards for machine-to-machine (M2M) third-party provisioning of eSIM which includes the following articles:
SGP.01 M2M eSIM Architecture v4.2
SGP.02 eSIM Technical Specifications V4.2.1
Implementation
eUICC can refer to any implementation or application of the eUICC standards in an eSIM device. Each implementation of eUICC includes software code, a processor to emulate the software, non-volatile memory used to store the unique identifiers and cryptographic keys that are part of a SIM profile, and a bus interface to communicate the SIM profile to the mobile device. eUICC standards specify that only one eUICC security controller (ECASD) may be implemented in an eSIM, but the eSIM may store multiple SIM profiles.
EID
GSMA standards define EID as "eUICC Identifier". Some developers and implementers have referred to this using the descriptive term "eSIM identifier", which summarizes the function of an eUICC identifier. Some third parties have conflated this acronym with the term "electronic identity document", which is a general concept of any identifier stored or presented in electronic format.
References
Architecture | EUICC | Engineering | 446 |
39,626,985 | https://en.wikipedia.org/wiki/Wireless%20powerline%20sensor | A Wireless powerline sensor hangs from an overhead power line and sends measurements to a data collection system. Because the sensor does not contact anything but a single live conductor, no high-voltage isolation is needed. The sensor, installed simply by clamping it around a conductor, powers itself from energy scavenged from electrical or magnetic fields surrounding the conductor being measured. Overhead power line monitoring helps distribution system operators provide reliable service at optimized cost.
Communication
In the photos on the right, an antenna on the sensor transmits data to a communication device attached to a nearby utility pole. The communication device gets power from the 240 volt utility line in a residential neighborhood. The device has two antennas. One antenna collects data from the sensors, and the other antenna forwards the data to the electrical utility control center over cell phone service.
In some systems, powerline sensors may transmit information on the high voltage conductor itself rather than by transmission of a radio signal.
Measurements
The primary purpose of a powerline sensor is to measure current, however, some sensors can either directly measure or derive other data such as:
Conductor temperature
Ambient temperature
Inclination or the amount of line sagging
Wind movement
Electric fields
Power generation
Distribution and consumption of electricity
See also
Energy management system
List of wireless sensor nodes
Sensor node
Supervisory control and data acquisition
Wireless sensor network
References
Patel N., Kumar S. (2017). "Enhanced Clear Channel Assessment for Slotted CSMA/CA in IEEE 802.15.4", Wireless Personal Communications (Springer), Vol. 95, No. 4, pp. 4063–4081. https://link.springer.com/article/10.1007/s11277-017-4042-5
External links
A Wireless Sensors Suite for Smart Grid Applications
Power Line Monitoring for Energy Demand Control
Specifications for a commercially available product
Electric power distribution
Wireless sensor network
Electric power transmission systems | Wireless powerline sensor | Technology | 384 |
57,416,301 | https://en.wikipedia.org/wiki/ZANU%E2%80%93PF%20Building | The ZANU–PF Building is a 15-story high-rise building in Harare, Zimbabwe, which serves as the headquarters of ZANU–PF, the country's ruling party. The top floors of the building hold the offices of the ZANU–PF Politburo, lower floors hold other party offices, and the first floor is home to the ZANU Archives, which holds many records from the Rhodesian Bush War. The building hosts annual meetings of the party's politburo, central committee, and other organizations.
Location
The ZANU–PF Building is located in Harare, Zimbabwe, at the corner of Samora Machel Avenue and Rotten Row, next to Willoughby Crescent.
History
Fundraising for a new ZANU–PF headquarters began on 24 October 1983, when the party set a goal of raising Z$15 million in one year. Ultimately paid for by the Chinese Communist Party, construction began in the late 1980s, and the building was completed in 1990. Constructed during the post-independence building boom, the ZANU–PF Building, unlike many others at the time, was designed by Zimbabwean architects, Peter Martin and Tony Wales-Smith. At the time of its completion, it was the tallest building in Harare. It became nicknamed the "Shake Shake" building, for its resemblance to Chibuku Shake Shake, a type of sorghum beer sold in cartons.
Architecture
The ZANU–PF Building is a 15-story grey concrete structure, topped by a large emblem of a cockerel, a symbol of ZANU–PF. It is of the postmodern style, and is sometimes described as Brutalist.
See also
List of tallest buildings in Zimbabwe
References
Buildings and structures in Harare
Headquarters of political parties
Office buildings completed in 1990
Politics of Zimbabwe
Postmodern architecture
ZANU–PF | ZANU–PF Building | Engineering | 382 |
3,389,593 | https://en.wikipedia.org/wiki/Current%20divider | In electronics, a current divider is a simple linear circuit that produces an output current (IX) that is a fraction of its input current (IT). Current division refers to the splitting of current between the branches of the divider. The currents in the various branches of such a circuit will always divide in such a way as to minimize the total energy expended.
The formula describing a current divider is similar in form to that for the voltage divider. However, the ratio describing current division places the impedance of the considered branches in the denominator, unlike voltage division, where the considered impedance is in the numerator. This is because in current dividers, total energy expended is minimized, resulting in currents that favor paths of least impedance, hence the inverse relationship with impedance. By comparison, a voltage divider satisfies Kirchhoff's voltage law (KVL): the voltages around a loop must sum to zero, so the voltage drops divide in direct proportion to the impedances.
To be specific, if two or more impedances are in parallel, the current that enters the combination will be split between them in inverse proportion to their impedances (according to Ohm's law). It also follows that if the impedances have the same value, the current is split equally.
Current divider
A general formula for the current IX in a resistor RX that is in parallel with a combination of other resistors of total resistance RT (see Figure 1) is IX = IT · RT / (RX + RT),
where IT is the total current entering the combined network of RX in parallel with RT. Notice that when RT is composed of a parallel combination of resistors, say R1, R2, ... etc., then the reciprocal of each resistor must be added to find the reciprocal of the total resistance RT: 1/RT = 1/R1 + 1/R2 + ...
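As a quick numerical check, the sketch below (an illustrative Python helper, not part of any cited source) splits a total current among parallel resistive branches using conductances, so each branch current is proportional to 1/R and the branch currents sum back to the input current.

```python
def branch_currents(i_total, resistances):
    """Split i_total among parallel resistors; returns one current per branch."""
    conductances = [1.0 / r for r in resistances]      # Y = 1/R
    y_total = sum(conductances)                        # parallel: admittances add
    return [i_total * y / y_total for y in conductances]

# 6 A into 2, 3, and 6 ohm branches: currents of 3 A, 2 A, and 1 A.
print(branch_currents(6.0, [2.0, 3.0, 6.0]))
```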
General case
Although the resistive divider is most common, the current divider may be made of frequency-dependent impedances. In the general case, the equivalent impedance of the circuit is ZT = 1 / (1/Z1 + 1/Z2 + ... + 1/Zn),
and the current IX is given by IX = IT · ZT / ZX,
where ZT refers to the equivalent impedance of the entire circuit.
Using admittance
Instead of using impedances, the current divider rule can be applied just like the voltage divider rule if admittance (the inverse of impedance) is used: IX = IT · YX / YT.
Take care to note that YT is a straightforward addition, YT = Y1 + Y2 + ..., not the sum of the inverses inverted (as would be done for a standard parallel resistive network). For Figure 1, the current IX would be IX = IT · (1/RX) / (1/RX + 1/RT).
Example: RC combination
Figure 2 shows a simple current divider made up of a capacitor and a resistor. Using the formula above, the current in the resistor is IR = IT · ZC / (ZC + R) = IT / (1 + jωCR),
where ZC = 1/(jωC) is the impedance of the capacitor, and j is the imaginary unit.
The product τ = CR is known as the time constant of the circuit, and the frequency for which ωCR = 1 is called the corner frequency of the circuit. Because the capacitor has zero impedance at high frequencies and infinite impedance at low frequencies, the current in the resistor remains at its DC value IT for frequencies up to the corner frequency, whereupon it drops toward zero for higher frequencies as the capacitor effectively short-circuits the resistor. In other words, the current divider is a low-pass filter for current in the resistor.
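A minimal numerical sketch of this low-pass behavior (Python, with illustrative component values): the fraction of the input current flowing through the resistor is |ZC/(ZC + R)|, which stays near 1 below the corner frequency 1/(2πCR) and rolls off above it.

```python
import math

R = 1_000.0      # ohms (illustrative value)
C = 1e-6         # farads (illustrative value)
f_corner = 1.0 / (2.0 * math.pi * R * C)   # ~159 Hz

def resistor_current_fraction(f_hz):
    zc = 1.0 / (1j * 2.0 * math.pi * f_hz * C)   # capacitor impedance
    return abs(zc / (zc + R))                     # |I_R / I_T|

for f in (f_corner / 100, f_corner, f_corner * 100):
    print(f"{f:10.1f} Hz -> {resistor_current_fraction(f):.3f}")
# ~1.000 well below the corner, 0.707 at the corner, ~0.010 far above it.
```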
Loading effect
The gain of an amplifier generally depends on its source and load terminations. Current amplifiers and transconductance amplifiers are characterized by a short-circuit output condition, and current amplifiers and transresistance amplifiers are characterized using ideal infinite-impedance current sources. When an amplifier is terminated by a finite, non-zero termination, and/or driven by a non-ideal source, the effective gain is reduced due to the loading effect at the output and/or the input, which can be understood in terms of current division.
Figure 3 shows a current amplifier example. The amplifier (gray box) has input resistance Rin, output resistance Rout and an ideal current gain Ai. With an ideal current driver (infinite Norton resistance) all the source current iS becomes input current to the amplifier. However, for a Norton driver with finite source resistance RS, a current divider is formed at the input that reduces the input current to ii = iS · RS / (RS + Rin),
which clearly is less than iS. Likewise, for a short circuit at the output, the amplifier delivers an output current iout = Ai · ii to the short circuit. However, when the load is a non-zero resistor RL, the current delivered to the load is reduced by current division to the value iL = Ai · ii · Rout / (Rout + RL).
Combining these results, the ideal current gain Ai realized with an ideal driver and a short-circuit load is reduced to the loaded gain Aloaded = iL / iS = Ai · [RS / (RS + Rin)] · [Rout / (Rout + RL)].
The resistor ratios in the above expression are called the loading factors.
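To see how quickly loading erodes the gain, the short sketch below (Python, with assumed illustrative component values) evaluates the two loading factors and the resulting loaded gain.

```python
def loaded_gain(a_i, r_s, r_in, r_out, r_l):
    """Ideal current gain a_i reduced by input and output current division."""
    input_factor = r_s / (r_s + r_in)        # divider at the amplifier input
    output_factor = r_out / (r_out + r_l)    # divider at the amplifier output
    return a_i * input_factor * output_factor

# Example: Ai = 100, source 10 kohm, Rin 1 kohm, Rout 10 kohm, load 1 kohm.
print(loaded_gain(100.0, 10e3, 1e3, 10e3, 1e3))   # ~82.6 instead of 100
```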
Unilateral versus bilateral amplifiers
Figure 3 and the associated discussion refers to a unilateral amplifier. In a more general case where the amplifier is represented by a two-port network, the input resistance of the amplifier depends on its load, and the output resistance on the source impedance. The loading factors in these cases must employ the true amplifier impedances including these bilateral effects. For example, taking the unilateral current amplifier of Figure 3, the corresponding bilateral two-port network is shown in Figure 4 based upon h-parameters. Carrying out the analysis for this circuit, the current gain with feedback Afb is found to be Afb = Aloaded / (1 + β (RL/RS) Aloaded).
That is, the ideal current gain Ai is reduced not only by the loading factors, but due to the bilateral nature of the two-port by an additional factor 1 + β (RL/RS) Aloaded, which is typical for negative-feedback amplifier circuits. The factor β(RL/RS) is the current feedback provided by the voltage feedback source of voltage gain β V/V. For instance, for an ideal current source with RS = ∞ Ω, the voltage feedback has no influence, and for RL = 0 Ω, there is zero load voltage, again disabling the feedback.
References and notes
See also
Voltage divider
Resistor
Ohm's law
Thévenin's theorem
Voltage regulation
External links
Divider Circuits and Kirchhoff's Laws chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series.
University of Texas: Notes on electronic circuit theory
Analog circuits
Electric current | Current divider | Physics,Engineering | 1,308 |
18,123,022 | https://en.wikipedia.org/wiki/Miracle%20Dog | Miracle Dog: How Quentin Survived the Gas Chamber to Speak for Animals on Death Row is a non-fiction book written by Randy Grim. Published in February 2005 by Blue Ribbon Books, the book details the story of a dog named Quentin, who survived fifteen minutes in a carbon monoxide gas chamber at the St. Louis, Missouri animal shelter in 2003. Grim, the president and founder of Stray Rescue of St. Louis, adopted the dog and used his story to campaign against the use of the gas chamber for animal euthanasia and to support no-kill animal shelters. As a result of Grim's efforts, the St. Louis animal shelter stopped using the gas chamber in January 2005, switching to more humane euthanasia methods.
Reception
Ranny Green of The Seattle Times wrote of this book: "Through 30 years of reviewing pet books, I can't remember one that has left such a lasting impression..."
References
External links
Review at BookIdeas.com
Books about friendship
Books about animal rights
2005 non-fiction books
Individual dogs
Non-fiction books about dogs
Animal euthanasia | Miracle Dog | Chemistry | 222 |
33,739,475 | https://en.wikipedia.org/wiki/Mobilities | Mobilities is a contemporary paradigm in the social sciences that explores the movement of people (human migration, individual mobility, travel, transport), ideas (see e.g. meme) and things (transport), as well as the broader social implications of those movements. Mobility can also be thought of as the movement of people through social classes (social mobility) or income levels (income mobility).
A mobility "turn" (or transformation) in the social sciences began in the 1990s in response to the increasing realization of the historic and contemporary importance of movement on individuals and society. This turn has been driven by generally increased levels of mobility and new forms of mobility where bodies combine with information and different patterns of mobility. The mobilities paradigm incorporates new ways of theorizing about how these mobilities lie "at the center of constellations of power, the creation of identities and the microgeographies of everyday life." (Cresswell, 2011, 551)
The mobility turn arose as a response to the way in which the social sciences had traditionally been static, seeing movement as a black box and ignoring or trivializing "the importance of the systematic movements of people for work and family life, for leisure and pleasure, and for politics and protest" (Sheller and Urry, 2006, 208). Mobilities emerged as a critique of contradictory orientations toward both sedentarism and deterritorialisation in social science. People had often been seen as static entities tied to specific places, or as nomadic and placeless in a frenetic and globalized existence. Mobilities looks at movements and the forces that drive, constrain and are produced by those movements.
Several typologies have been formulated to clarify the wide variety of mobilities. Most notably, John Urry divides mobilities into five types: mobility of objects, corporeal mobility, imaginative mobility, virtual mobility and communicative mobility. Later, Leopoldina Fortunati and Sakari Taipale proposed an alternative typology taking the individual and the human body as a point of reference. They differentiate between ‘macro-mobilities’ (consistent physical displacements), ‘micro-mobilities’ (small-scale displacements), ‘media mobility’ (mobility added to the traditionally fixed forms of media) and ‘disembodied mobility’ (the transformation in the social order). The categories are typically considered interrelated, and therefore they are not exclusive.
Scope
While mobilities is commonly associated with sociology, contributions to the mobilities literature have come from scholars in anthropology, cultural studies, economics, geography, migration studies, science and technology studies, and tourism and transport studies. (Sheller and Urry, 2006, 207)
The eponymous journal Mobilities provides a list of typical subjects which have been explored in the mobilities paradigm (Taylor and Francis, 2011):
Mobile spatiality and temporality
Sustainable and alternative mobilities
Mobile rights and risks
New social networks and mobile media
Immobilities and social exclusions
Tourism and travel mobilities
Migration and diasporas
Transportation and communication technologies
Transitions in complex systems
Origins
Sheller and Urry (2006, 215) place mobilities in the sociological tradition by defining the primordial theorist of mobilities as Georg Simmel (1858–1918). Simmel's essays, "Bridge and Door" (Simmel, 1909 / 1994) and "The Metropolis and Mental Life" (Simmel, 1903 / 2001) identify a uniquely human will to connection, as well as the urban demands of tempo and precision that are satisfied with mobility.
The more immediate precursors of contemporary mobilities research emerged in the 1990s (Cresswell 2011, 551). Historian James Clifford (1997) advocated for a shift from deep analysis of particular places to the routes connecting them. Marc Augé (1995) considered the philosophical potential of an anthropology of "non-places" like airports and motorways that are characterized by constant transition and temporality. Sociologist Manuel Castells outlined a "network society" and suggested that the "space of places" is being surpassed by a "space of flows." Feminist scholar Caren Kaplan (1996) explored questions about the gendering of metaphors of travel in social and cultural theory.
The contemporary paradigm under the moniker "mobilities" appears to originate with the work of sociologist John Urry. In his book, Sociology Beyond Societies: Mobilities for the Twenty-First Century, Urry (2000, 1) presents a "manifesto for a sociology that examines the diverse mobilities of peoples, objects, images, information and wastes; and of the complex interdependencies between, and social consequences of, these diverse mobilities."
This is consistent with the aims and scope of the eponymous journal Mobilities, which "examines both the large-scale movements of people, objects, capital, and information across the world, as well as more local processes of daily transportation, movement through public and private spaces, and the travel of material things in everyday life" (Taylor and Francis, 2011).
In 2006, Mimi Sheller and John Urry published an oft-cited paper that examined the mobilities paradigm as it was just emerging, exploring its motivations, theoretical underpinnings, and methodologies. Sheller and Urry specifically focused on automobility as a powerful socio-technical system that "impacts not only on local public spaces and opportunities for coming together, but also on the formation of gendered subjectivities, familial and social networks, spatially segregated urban neighborhoods, national images and aspirations to modernity, and global relations ranging from transnational migration to terrorism and oil wars" (Sheller and Urry, 2006, 209). This was further developed by the journal Mobilities (Hannam, Sheller and Urry, 2006).
Mobilities can be viewed as an extension of the "spatial turn" in the arts and sciences in the 1980s, in which scholars began "to interpret space and the spatiality of human life with the same critical insight and interpretive power as have traditionally been given to time and history (the historicality of human life) on one hand, and to social relations and society (the sociality of human life) on the other" (Sheller and Urry, 2006, 216; Engel and Nugent, 2010, 1; Soja, 1999 / 2005, 261).
Engel and Nugent (2010) trace the conceptual roots of the spatial turn to Ernst Cassirer and Henri Lefebvre (1974), although Fredric Jameson appears to have coined the epochal usage of the term for the 1980s paradigm shift. Jameson (1988 / 2003, 154) notes that the concept of the spatial turn "has often seemed to offer one of the more productive ways of distinguishing postmodernism from modernism proper, whose experience of temporality -- existential time, along with deep memory -- it is henceforth conventional to see as dominant of the high modern."
For Oswin & Yeoh (2010), mobility seems to be inextricably intertwined with late modernity and the end of the nation-state. The sense of mobility makes us think of migratory and tourist flows, as well as of the infrastructure necessary for that displacement to take place.
P. Vannini (2012) opted to see mobility as a projection of existing cultural values, expectations and structures that denote styles of life. Mobility, after all, would not only generate effects on people's behaviour but also produce specific styles of life. Vannini explains convincingly that on Canada's coast the values of islanders defy the hierarchical order of populated cities in many respects. Islanders prioritize the social cohesion and trust of their communities over the alienation of mega-cities. There is a clear physical isolation that marks the boundaries between urbanity and rurality. From another view, nonetheless, this ideological dichotomy between authenticity and alienation leads residents to commercialize their spaces to outsiders. Although the tourism industry has been adopted in these communities as a form of activity, many locals historically migrated from populated urban centres.
Mobilities and transportation geography
The intellectual roots of mobilities in sociology distinguish it from traditional transportation studies and transportation geography, which have firmer roots in mid 20th century positivist spatial science.
Cresswell (2011, 551) presents six characteristics distinguishing mobilities from prior approaches to the study of migration or transport:
Mobilities often links science and social science to the humanities.
Mobilities often links across different scales of movement, while traditional transportation geography tends to focus on particular forms of movement at only one scale (such as local traffic studies or household travel surveys).
Mobilities encompasses the movement of people, objects, and ideas, rather than narrowly focusing on areas like passenger modal shift or freight logistics.
Mobilities considers both motion and "stopping, stillness and relative immobility."
Mobilities incorporates mobile theorization and methodologies to avoid the privileging of "notions of boundedness and the sedentary."
Mobilities often embraces the political and differential politics of mobility, as opposed to the apolitical, "objective" stance often sought by researchers associated with engineering disciplines.
Mobilities can be seen as a postmodern descendant of modernist transportation studies, with the influence of the spatial turn corresponding to a "post-structuralist agnosticism about both naturalistic and universal explanations and about single-voiced historical narratives, and to the concomitant recognition that position and context are centrally and inescapably implicated in all constructions of knowledge" (Cosgrove, 1999, 7; Warf and Arias, 2009).
Despite these ontological and epistemological differences, Shaw and Hesse (2010, 207) have argued that mobilities and transport geography represent points on a continuum rather than incompatible extremes. Indeed, traditional transport geography has not been wholly quantitative any more than mobilities is wholly qualitative. Sociological explorations of mobility can incorporate empirical techniques, while model-based inquiries can be tempered with richer understandings of the meanings, representations and assumptions inherently embedded in models.
Shaw and Sidaway (2010, 505) argue that even as research in the mobilities paradigm has attempted to reengage transportation and the social sciences, mobilities shares a fate similar to traditional transportation geography in still remaining outside the mainstream of the broader academic geographic community.
Theoretical underpinnings of mobilities
Sheller and Urry (2006, 215-217) presented six bodies of theory underpinning the mobilities paradigm:
The prime theoretical foundation of mobilities is the work of early 20th-century sociologist Georg Simmel, who identified a uniquely human "will to connection," and provided a theoretical connection between mobility and materiality. Simmel focused on the increased tempo of urban life, that "drives not only its social, economic, and infrastructural formations, but also the psychic forms of the urban dweller." Along with this tempo comes a need for precision in timing and location in order to prevent chaos, which results in complex and novel systems of relationships.
A second body of theory comes from the science and technology studies which look at mobile sociotechnical systems that incorporate hybrid geographies of human and nonhuman components. Automobile, rail or air transport systems involve complex transport networks that affect society and are affected by society. These networks can have dynamic and enduring parts. Non-transport information networks can also have unpredictable effects on encouraging or suppressing physical mobility (Pellegrino 2012).
A third body of theory comes from the postmodern conception of spatiality, with the substance of places being constantly in motion and subject to constant reassembly and reconfiguration (Thrift 1996).
A fourth body of theory is a "recentring of the corporeal body as an affective vehicle through which we sense place and movement, and construct emotional geographies". For example, the car is "experienced through a combination of senses and sensed through multiple registers of motion and emotion″ (Sheller and Urry 2006, 216).
A fifth body of theory incorporates how topologies of social networks relate to how complex patterns form and change. Contemporary information technologies and ways of life often create broad but weak social ties across time and space, with social life incorporating fewer chance meetings and more networked connections.
Finally, the last body of theory is the analysis of complex transportation systems that are "neither perfectly ordered nor anarchic." For example, the rigid spatial coupling, operational timings, and historical bindings of rail contrast with unpredictable environmental conditions and ever-shifting political winds. And, yet, "change through the accumulation of small repetitions...could conceivably tip the car system into the postcar system."
Mobilities methodologies
Mimi Sheller and John Urry (2006, 217-219) presented seven methodological areas often covered in mobilities research:
Analysis of the patterning, timing and causation of face-to-face co-presence
Mobile ethnography - participation in patterns of movement while conducting ethnographic research
Time-space diaries - subjects record what they are doing, at what times and in what places
Cyber-research - exploration of virtual mobilities through various forms of electronic connectivity
Study of experiences and feelings
Study of memory and private worlds via photographs, letters, images and souvenirs
Study of in-between places and transfer points like lounges, waiting rooms, cafes, amusement arcades, parks, hotels, airports, stations, motels, harbors
See also
Bicycle
Congestion
Home care
Hypermobility (travel)
Pedestrian
Public transport
Private transport
Transportation engineering
References
Social sciences
Space
Motion (physics) | Mobilities | Physics,Mathematics | 2,835 |
36,789,447 | https://en.wikipedia.org/wiki/The%20World%20of%20Abnormal%20Psychology | The World of Abnormal Psychology is an educational video series produced by Annenberg Media, which examines behavioral disorders in humans. The series was hosted by Dr. Philip Zimbardo of Stanford University, who was best known for his controversial Stanford prison experiment.
Overview
This series builds on Zimbardo's first series Discovering Psychology and is often shown on PBS stations in the United States. The series has been used in courses at seminaries, and as a resource for teachers. The American Psychological Association lists the series under Education and Psychology.
Episodes
The series has 13 episodes, each focusing on a different area of abnormal behavior.
Publications
The World of Abnormal Psychology, videotape (VHS (13 ea.), 60 minutes per episode, 1991–92), Annenberg/CPB Project
The World of Abnormal Psychology: Study Guide, book (3rd ed., 1999), Allyn & Bacon
The World of Abnormal Psychology: Faculty Guide, book (2nd ed., 1996), HarperCollins Publishers
Abnormal Psychology and Modern Life, book (9th ed., 1992), HarperCollins Publishers
References
Abnormal psychology
Mass media franchises introduced in 1992 | The World of Abnormal Psychology | Biology | 236 |
12,637,285 | https://en.wikipedia.org/wiki/Fazadinium%20bromide | Fazadinium bromide is a muscle relaxant which acts as a nicotinic acetylcholine receptor antagonist through neuromuscular blockade.
References
Bromides
Azo compounds
Imidazopyridines
Nicotinic antagonists | Fazadinium bromide | Chemistry | 52 |
22,145,766 | https://en.wikipedia.org/wiki/Early%20Algebra | Early Algebra is an approach to early mathematics teaching and learning. It is about teaching traditional topics in more profound ways. It is also an area of research in mathematics education.
Traditionally, algebra instruction has been postponed until adolescence. However, the findings of early algebra researchers show ways to teach algebraic thinking much earlier. The National Council of Teachers of Mathematics (NCTM) integrates algebra into its Principles and Standards starting from Kindergarten.
One of the major goals of early algebra is generalizing number and set ideas. It moves from particular numbers to patterns in numbers. This includes generalizing arithmetic operations as functions, as well as engaging children in noticing and beginning to formalize properties of numbers and operations such as the commutative property, identities, and inverses.
Students historically have had a very difficult time adjusting to algebra for a number of reasons. Researchers have found that by working with students on such ideas as developing rules for the use of letters to stand in for numbers and the true meaning of the equals symbol (it is a balance point, and does not mean "put the answer next"), children are much better prepared for formal algebra instruction.
Teacher professional development in this area consists of presenting common student misconceptions and then developing lessons to move students out of faulty ways of thinking and into correct generalizations. The use of true, false, and open number sentences can go a long way toward getting students thinking about the properties of number and operations and the meaning of the equals sign.
Research areas in early algebra include the use of representations, such as symbols, graphs and tables; the cognitive development of students; and viewing arithmetic as a part of algebraic conceptual fields.
Notes
References
Blanton, M. L. Algebra and the Elementary Classroom: Transforming Thinking, Transforming Practice. (Heinemann, 2008).
J. Kaput, D. Carraher, & M. Blanton (Eds.), Algebra in the Early Grades. (Lawrence Erlbaum and Associates, 2007).
Schliemann, A.D., Carraher, D.W., & Brizuela, B. Bringing Out the Algebraic Character of Arithmetic: From Children's Ideas to Classroom Practice. (Lawrence Erlbaum Associates, 2007).
Carraher, D., Schliemann, A.D., Brizuela, B., & Earnest, D. (2006). Arithmetic and Algebra in early Mathematics Education. Journal for Research in Mathematics Education, Vol 37.
National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. (Author, 2000)
External links
Tufts/TERC Early Algebra Project
Algebra education | Early Algebra | Mathematics | 535 |
44,457,499 | https://en.wikipedia.org/wiki/Wolfgang%20Kautek | Wolfgang Kautek is an Austrian Physical chemist and the head of the Physical chemistry department at the University of Vienna.
He is the President of the Erwin Schrödinger Society for Nanosciences (ESG) and the Chairman of the Research Group "Physical Chemistry" at the Austrian Chemical Society (GÖCh).
References
Physical chemists
Austrian chemists
Academic staff of the University of Vienna
Living people
Austrian physical chemists
Laser researchers
1953 births | Wolfgang Kautek | Chemistry | 91 |
33,329,308 | https://en.wikipedia.org/wiki/Toor%20%28Unix%29 | {{DISPLAYTITLE:toor (Unix)}}
Toor, the word "root" spelled backwards, is an alternative superuser account in Unix-like operating systems, particularly BSD and variants.
Purpose
In Unix, it is traditional to keep the root filesystem as small as reasonably possible, moving larger programs and rapidly changing data to other, optional parts of the system. This increases the likelihood that the system can be brought to a semi-usable state in the case of a partial system failure. It also means that the superuser account, necessary for repairing a broken system, should not depend on any programs outside of this small core. To this end, the root account is often configured with a shell which is small, efficient, and dependable, but awkward for daily use.
The toor account is intended as a solution to this problem. It is identical to root, but is configured to use a different, more featureful shell.
Alternatively, toor may be configured with the emergency shell, allowing root the freedom to use the featureful one.
Implementation
In a Unix-like system, each user has a user ID number, which is what the kernel uses to distinguish users and to manage user permissions. User ID #0 is reserved as the superuser account, and is given permission to do anything on the system.
Users log in by username, not by ID number, and a user's choice of login shell is also managed by name. This separation between name and number allows a given user ID to be associated with more than one username, each having its own shell.
Security considerations
The presence of a 'toor' account (or the presence of more than one account with a user ID of 0) triggers a warning in many security auditing systems. This is valuable, since if the system administrator did not intend for a second superuser account, then it may mean that the system has been compromised.
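A minimal sketch of such an audit check, written in Python against the standard `pwd` interface to the Unix password database (the script itself is illustrative, not part of any particular auditing suite):

```python
import pwd

def find_superuser_accounts():
    """Return every password-database entry whose numeric user ID is 0."""
    return [entry for entry in pwd.getpwall() if entry.pw_uid == 0]

if __name__ == "__main__":
    superusers = find_superuser_accounts()
    print("Accounts with UID 0:", ", ".join(e.pw_name for e in superusers))
    # More than one UID-0 account (e.g. both 'root' and 'toor') may be
    # intentional, but it is exactly the condition auditors flag, since
    # it can also indicate that the system has been compromised.
    if len(superusers) > 1:
        print("WARNING: multiple superuser accounts found")
```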
It may be argued that even an intentional 'toor' account is a security risk, since it provides a second point of attack for someone trying to illicitly gain superuser privileges on the system. However, if passwords are chosen and guarded carefully, the risk increase is minimal.
For example, NetBSD ships with a disabled 'toor' account, meaning that there is no password with which one can log into the system as 'toor'. This is not a security risk in itself, though it may generate security warnings as previously described. However, if the system is compromised, an administrator may be less likely to notice the enabling of a disabled account than the creation of a new one, especially if they have become accustomed to ignoring warnings about 'toor' from their (arguably misconfigured) security program.
References
System administration
Operating system security
Unix | Toor (Unix) | Technology | 571 |
13,583,602 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA76 | In molecular biology, SNORA76 (also known as ACA62) is a non-coding RNA (ncRNA) which modifies other small nuclear RNAs (snRNAs). It is a member of the H/ACA class of small nucleolar RNA that guide the sites of modification of uridines to pseudouridines.
This snoRNA was identified by computational screening, and its expression in mouse was experimentally verified by Northern blot and primer extension analysis. ACA62 is proposed to guide the pseudouridylation of 18S rRNA residues U34 and U105.
References
External links
Non-coding RNA | Small nucleolar RNA SNORA76 | Chemistry | 136 |
52,071,381 | https://en.wikipedia.org/wiki/NGC%20313 | NGC 313 is a triple star located in the constellation Pisces. It was discovered on November 29, 1850, by Bindon Stoney.
References
External links
Pisces (constellation) | NGC 313 | Astronomy | 45 |
14,874,679 | https://en.wikipedia.org/wiki/60S%20ribosomal%20protein%20L10a | 60S ribosomal protein L10a is a protein that in humans is encoded by the RPL10A gene.
Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L1P family of ribosomal proteins. It is located in the cytoplasm. The expression of this gene is downregulated in the thymus by cyclosporin-A (CsA), an immunosuppressive drug. Studies in mice have shown that the expression of the ribosomal protein L10a gene is downregulated in neural precursor cells during development. This gene used to be referred to as NEDD6 (neural precursor cell expressed, developmentally downregulated 6), but it has been renamed RPL10A (ribosomal protein 10a). As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
References
Further reading
Ribosomal proteins | 60S ribosomal protein L10a | Chemistry | 247 |
14,429,328 | https://en.wikipedia.org/wiki/Frizzled-10 | Frizzled-10 (Fz-10) is a protein that in humans is encoded by the FZD10 gene. Fz-10 has also been designated as CD350 (cluster of differentiation 350).
Function
This gene is a member of the frizzled gene family. Members of this family encode 7-transmembrane domain proteins that are receptors for the Wingless type MMTV integration site family of signaling proteins. Most frizzled receptors are coupled to the beta-catenin canonical signaling pathway. Array analysis has shown that expression of this intronless gene is significantly up-regulated in two cases of primary colon cancer.
References
Further reading
External links
Clusters of differentiation
G protein-coupled receptors | Frizzled-10 | Chemistry | 150 |
23,688,981 | https://en.wikipedia.org/wiki/C20H24N2O2 | {{DISPLAYTITLE:C20H24N2O2}}
The molecular formula C20H24N2O2 (molar mass: 324.42 g/mol, exact mass: 324.1838 u) may refer to:
Affinine
Quinidine
Quinine
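As a rough arithmetic check on the molar mass quoted above, the standard atomic weights of the constituent elements can be summed; a minimal sketch in Python (the atomic weights are approximate published values):

```python
# Approximate standard atomic weights in g/mol
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition):
    """Sum atomic weights over an {element: atom count} mapping."""
    return sum(ATOMIC_WEIGHTS[element] * count
               for element, count in composition.items())

# C20H24N2O2
print(round(molar_mass({"C": 20, "H": 24, "N": 2, "O": 2}), 2))  # 324.42
```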
Molecular formulas | C20H24N2O2 | Physics,Chemistry | 64 |
12,903,181 | https://en.wikipedia.org/wiki/Riser%20clamp | A riser clamp is a type of hardware used by mechanical building trades for pipe support in vertical runs of piping (risers) at each floor level. The devices are placed around the pipe, and integral fasteners are then tightened to clamp them onto the pipe. The friction between the pipe and riser clamp transfers the weight of the pipe through the riser clamp to the building structure. Risers are generally located at floor penetrations, particularly for continuous floor slabs such as concrete. They may also be located at some other interval as dictated by local building codes or at intermediate intervals to support plumbing which has been altered or repaired. Heavier piping types, such as cast iron, require more frequent support. Ordinarily, riser clamps are made of carbon steel and individually sized to fit certain pipe sizes.
There are at least two types of riser clamp: the two-bolt pipe clamp and the yoke clamp.
References
Piping
Plumbing | Riser clamp | Chemistry,Engineering | 201 |
7,538,464 | https://en.wikipedia.org/wiki/Rainbowing | Rainbowing is the process in which a dredging ship propels sand that has been claimed from the ocean floor in a high arc to a particular location. This is used for multiple purposes, ranging from building up a beach to prevent erosion to constructing new islands. The name is derived from the appearance of the arc, which closely resembles a brown-colored rainbow.
This technique was used extensively in the construction of the Palm Islands and The World, Dubai.
Process
The process of rainbowing begins with the excavation of sediment, typically sand, from the seabed by a dredger. Dredgers excavate the sediment using mechanical or hydraulic methods, or a combination of both. During the excavation process, large quantities of water are collected along with the sediment, creating a mixture called slurry. The slurry can then be utilised on-site or transported to where it will be deposited. The liquid characteristics of the slurry allow the dredger to transfer the slurry by ejecting it through the air in arcs.
Rainbowing nozzles
The projection of slurry for beach nourishment and other dredging uses is achieved through the use of nozzles which affect the output and trajectory of the slurry.
The diameter of the nozzle affects the output of the dredger and the distance that the slurry is projected. Smaller diameters, for instance, have less flow, leading to lower hourly output, but are able to project the slurry over a further distance due to a higher exit velocity. Jumbo dredgers today can easily achieve distances in excess of 150 metres, but at the cost of 30% extra discharge time.
Nozzles that are angled 30° from the horizontal are standard. Although 45° nozzle angles achieve longer distances from a ballistics perspective, 45° nozzles have been observed to create large craters. In addition, a high amount of sand flows back towards the dredger. 30° nozzles instead project the sand with a flatter trajectory, minimizing back flow while achieving a final distance comparable to that reached by a 45° nozzle.
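This trade-off can be illustrated with the ideal, drag-free projectile range formula $R = v^2 \sin(2\theta)/g$; a minimal sketch in Python, where the slurry exit velocity is an assumed value chosen only to make the numbers concrete:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def ideal_range(exit_velocity, angle_deg):
    """Drag-free projectile range for a given launch angle."""
    return exit_velocity ** 2 * math.sin(math.radians(2 * angle_deg)) / G

v = 40.0  # assumed slurry exit velocity in m/s (illustrative only)
for angle in (30, 45):
    print(f"{angle} degrees: {ideal_range(v, angle):.0f} m")
# 45 degrees maximizes the ideal range, but 30 degrees still reaches
# about 87% of it (sin 60 deg ~ 0.866) with a flatter trajectory,
# which reduces cratering and backflow toward the dredger.
```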
Other methods of disposing and transferring the slurry include pumping the slurry through pipelines or using natural forces such as wave currents.
Advantages
Since rainbowing transfers material by ejecting it through the air, the technique is useful for reclaiming areas that are too shallow for direct placement. In addition, rainbowing allows the dredger to dispose of excavated sediment on-site. This is useful for dredging operations such as creating trenches, since the dredger can simply cast the excavated sediment to the side as opposed to spending time dumping or transporting the collected sediment. This allows for a continuous trenching operation.
Environmental impact
Rainbowing, along with other dredging and reclamation methods, has various effects on the environment apart from vastly changing its geographical structure.
Throughout the dredging and nourishment process, plumes of fine sediment, which can take longer to settle, can remain suspended in the water for long periods of time. These clouds of fine sediment can have adverse effects on the ecosystem, asphyxiating fish and other fauna as well as blocking sunlight. As organisms die, the water becomes toxic as decomposed organic materials raise hydrogen sulfide levels. In such cases, it is often impossible for an ecosystem to revive; when recovery does occur, it often takes a couple of years. In addition, coral can be removed or become buried by the sediment.
References
External links
https://web.archive.org/web/20061117204621/http://channel.nationalgeographic.com/channel/totalmegastructures/photogallery_rainbowing_island.html
http://www.accessmylibrary.com/premium/0286/0286-11654915.html
Coastal construction
Ocean pollution | Rainbowing | Chemistry,Engineering,Environmental_science | 800 |
23,071,369 | https://en.wikipedia.org/wiki/TRIM62 | TRIM62, also called DEAR1 (for ductal epithelium–associated RING chromosome 1), is a protein in the tripartite motif family. In human it is encoded by the gene TRIM62. TRIM62 is involved in the morphogenesis of the mammary gland, and loss of TRIM62 gene expression in breast is associated with a higher risk of recurrence in early-onset breast cancer.
References
Developmental genes and proteins | TRIM62 | Chemistry,Biology | 98 |
50,924,839 | https://en.wikipedia.org/wiki/Vladimir%20Gennadievich%20Sprindzuk | Vladimir Gennadievich Sprindzuk (Russian Владимир Геннадьевич Спринджук, Belarusian Уладзімір Генадзевіч Спрынджук, 22 July 1936, Minsk – 26 July 1987) was a Soviet-Belarusian number theorist.
Education and career
Sprindzuk studied from 1954 at Belarusian State University and from 1959 at the University of Vilnius. There he received in 1963 his Ph.D. with Jonas Kubilius as primary advisor and Yuri Linnik as secondary advisor, with a thesis entitled (in Russian) "Метрические теоремы о диофантовых приближениях алгебраическими числами ограниченной степени" (Metric Theorems on Diophantine Approximations by Algebraic Numbers of Bounded Degree). In 1965 he received his Russian doctorate of sciences (Doctor Nauk) from the State University of Leningrad with a thesis entitled (in Russian) "Проблема Малера в метрической теории чисел" (The Mahler Problem in the Metric Theory of Numbers). In 1969 he became a professor and head of the academic division of number theory at the Mathematical Institute of the National Academy of Sciences of Belarus in Minsk and lectured at the Belarusian State University in Minsk. He was a visiting professor at the University of Paris, at the Polish Academy of Sciences and at the Slovak Academy of Sciences.
Sprindzuk's research deals with Diophantine approximation, Diophantine equations and transcendental numbers. While a first year undergraduate student, he published his first paper, in which he solved a problem of Aleksandr Khinchin, and wrote to Khinchin about the solution. Another important influence was the Leningrad number theorist Yuri Linnik, who was Sprindzuk's advisor for his Russian doctorate of sciences. In 1965 Sprindzuk proved a conjecture of Mahler, that almost all real numbers are S-numbers of Type 1 — Mahler had previously proved that almost all real numbers are S-numbers. Sprindzuk generalized an important theorem proved by Wolfgang M. Schmidt.
He was elected in 1969 a corresponding member and in 1986 a full member of the National Academy of Sciences of Belarus. Beginning in 1970 he was on the editorial staff of Acta Arithmetica. In 1970 he was an Invited Speaker at the ICM in Nice with talk New applications of analytic and p-adic methods in diophantine approximations.
Selected publications
Articles
Books
Mahler’s Problem in metric number theory. American Mathematical Society 1969 (translation from Russian original, Minsk 1967)
Metric theory of Diophantine approximations. Winston and Sons, Washington D.C. 1979 (translation from Russian original, published Nauka, Moscow 1977)
Classical Diophantine Equations. Springer, Lecture Notes in Mathematics vol. 1559, 1993 (translation from Russian original, Moscow 1982)
References
External links
Sprinzduk's publication list from numbertheory.org
Number theorists
Soviet mathematicians
1936 births
1987 deaths
Scientists from Minsk | Vladimir Gennadievich Sprindzuk | Mathematics | 725 |
65,894 | https://en.wikipedia.org/wiki/Electromotive%20force | In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other types of electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted ).
An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage).
In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop.
For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit.
Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon (see the distinction with potential difference below).
Overview
Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electrical grid as the lines of the magnetic field are shifted about and cut across the conductors.
In a battery, the charge separation that gives rise to a potential difference (voltage) between the terminals is accomplished by chemical reactions at the electrodes that convert chemical potential energy into electromagnetic potential energy. A voltaic cell can be thought of as having a "charge pump" of atomic dimensions at each electrode, driving charge from the lower-potential terminal to the higher-potential terminal against the electrostatic force.
In an electrical generator, a time-varying magnetic field inside the generator creates an electric field via electromagnetic induction, which creates a potential difference between the generator terminals. Charge separation takes place within the generator because electrons flow away from one terminal toward the other, until, in the open-circuit case, an electric field is developed that makes further charge separation impossible. The emf is countered by the electrical voltage due to charge separation. If a load is attached, this voltage can drive a current. The general principle governing the emf in such electrical machines is Faraday's law of induction.
History
In 1801, Alessandro Volta introduced the term "force motrice électrique" to describe the active agent of a battery (which he had invented around 1798).
This is called the "electromotive force" in English.
Around 1830, Michael Faraday established that chemical reactions at each of two electrode–electrolyte interfaces provide the "seat of emf" for the voltaic cell. That is, these reactions drive the current and are not an endless source of energy as the earlier obsolete theory thought. In the open-circuit case, charge separation continues until the electrical field from the separated charges is sufficient to arrest the reactions. Years earlier, Alessandro Volta, who had measured a contact potential difference at the metal–metal (electrode–electrode) interface of his cells, held the incorrect opinion that contact alone (without taking into account a chemical reaction) was the origin of the emf.
Notation and units of measurement
Electromotive force is often denoted by $\mathcal{E}$ or ℰ.
In a device without internal resistance, if an electric charge $q$ passing through that device gains an energy $W$ via work, the net emf for that device is the energy gained per unit charge: $\mathcal{E} = W/q$. Like other measures of energy per charge, emf uses the SI unit volt, which is equivalent to a joule (SI unit of energy) per coulomb (SI unit of charge).
Electromotive force in electrostatic units is the statvolt (in the centimeter gram second system of units equal in amount to an erg per electrostatic unit of charge).
Formal definitions
Inside a source of emf (such as a battery) that is open-circuited, a charge separation occurs between the negative terminal N and the positive terminal P. This leads to an electrostatic field $\mathbf{E}_\mathrm{cs}$ that points from P to N, whereas the emf of the source must be able to drive current from N to P when connected to a circuit.
This led Max Abraham to introduce the concept of a nonelectrostatic field $\mathbf{E}'$ that exists only inside the source of emf.
In the open-circuit case, $\mathbf{E}' = -\mathbf{E}_\mathrm{cs}$, while when the source is connected to a circuit the electric field inside the source changes but $\mathbf{E}'$ remains essentially the same.
In the open-circuit case, the conservative electrostatic field created by separation of charge exactly cancels the forces producing the emf. Mathematically:
$$\mathcal{E}_s = \int_N^P \mathbf{E}' \cdot \mathrm{d}\boldsymbol{\ell} = -\int_N^P \mathbf{E}_\mathrm{cs} \cdot \mathrm{d}\boldsymbol{\ell} = V_P - V_N,$$
where $\mathbf{E}_\mathrm{cs}$ is the conservative electrostatic field created by the charge separation associated with the emf, $\mathrm{d}\boldsymbol{\ell}$ is an element of the path from terminal N to terminal P, "$\cdot$" denotes the vector dot product, and $V$ is the electric scalar potential.
This emf is the work done on a unit charge by the source's nonelectrostatic field $\mathbf{E}'$ when the charge moves from N to P.
When the source is connected to a load, its emf is just
$$\mathcal{E}_s = \int_N^P \mathbf{E}' \cdot \mathrm{d}\boldsymbol{\ell}$$
and no longer has a simple relation to the electric field inside it.
In the case of a closed path in the presence of a varying magnetic field, the integral of the electric field around the (stationary) closed loop may be nonzero.
Then, the "induced emf" (often called the "induced voltage") in the loop is:
where is the entire electric field, conservative and non-conservative, and the integral is around an arbitrary, but stationary, closed curve through which there is a time-varying magnetic flux , and is the vector potential.
The electrostatic field does not contribute to the net emf around a circuit because the electrostatic portion of the electric field is conservative (i.e., the work done against the field around a closed path is zero, see Kirchhoff's voltage law, which is valid, as long as the circuit elements remain at rest and radiation is ignored).
That is, the "induced emf" (like the emf of a battery connected to a load) is not a "voltage" in the sense of a difference in the electric scalar potential.
If the loop is a conductor that carries current $i$ in the direction of integration around the loop, and the magnetic flux is due to that current, we have that $\Phi_B = L i$, where $L$ is the self inductance of the loop.
If in addition, the loop includes a coil that extends from point 1 to 2, such that the magnetic flux is largely localized to that region, it is customary to speak of that region as an inductor, and to consider that its emf is localized to that region.
Then, we can consider a different loop that consists of the coiled conductor from 1 to 2, and an imaginary line down the center of the coil from 2 back to 1.
The magnetic flux, and emf, in this second loop is essentially the same as that in the original loop:
$$\oint \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t} = -L\frac{\mathrm{d}i}{\mathrm{d}t}.$$
For a good conductor, the electric field within the coiled conductor itself is negligible, so we have, to a good approximation,
$$V_1 - V_2 = L\frac{\mathrm{d}i}{\mathrm{d}t},$$
where $V$ is the electric scalar potential along the centerline between points 1 and 2.
Thus, we can associate an effective "voltage drop" with an inductor (even though our basic understanding of induced emf is based on the vector potential rather than the scalar potential), and consider it as a load element in Kirchhoff's voltage law,
where now the induced emf is not considered to be a source emf.
This definition can be extended to arbitrary sources of emf and paths moving with velocity $\mathbf{v}$ through the electric field $\mathbf{E}$ and magnetic field $\mathbf{B}$:
$$\mathcal{E} = \oint \left[\mathbf{E} + \mathbf{v} \times \mathbf{B}\right] \cdot \mathrm{d}\boldsymbol{\ell} + \frac{1}{q}\oint \mathbf{F}_\mathrm{chemical} \cdot \mathrm{d}\boldsymbol{\ell} + \frac{1}{q}\oint \mathbf{F}_\mathrm{thermal} \cdot \mathrm{d}\boldsymbol{\ell},$$
which is a conceptual equation mainly, because the determination of the "effective forces" (written symbolically here as $\mathbf{F}_\mathrm{chemical}$ and $\mathbf{F}_\mathrm{thermal}$) is difficult.
The term
$$\oint \left(\mathbf{v} \times \mathbf{B}\right) \cdot \mathrm{d}\boldsymbol{\ell}$$
is often called a "motional emf".
In (electrochemical) thermodynamics
When multiplied by an amount of charge $\mathrm{d}Q$ the emf $\mathcal{E}$ yields a thermodynamic work term $\mathcal{E}\,\mathrm{d}Q$ that is used in the formalism for the change in Gibbs energy when charge is passed in a battery:
$$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P + \mathcal{E}\,\mathrm{d}Q,$$
where $G$ is the Gibbs free energy, $S$ is the entropy, $V$ is the system volume, $P$ is its pressure and $T$ is its absolute temperature.
The combination $(\mathcal{E}, Q)$ is an example of a conjugate pair of variables. At constant pressure the above relationship produces a Maxwell relation that links the change in open cell voltage with temperature (a measurable quantity) to the change in entropy when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
$$\left(\frac{\partial \mathcal{E}}{\partial T}\right)_Q = -\left(\frac{\partial S}{\partial Q}\right)_T$$
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is:
$$\Delta Q = -n_0 F,$$
where $n_0$ is the number of electrons/ion, and $F$ is the Faraday constant, and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by:
$$\Delta H = -n_0 F \left(\mathcal{E} - T\frac{\mathrm{d}\mathcal{E}}{\mathrm{d}T}\right),$$
where $\Delta H$ is the enthalpy of reaction. The quantities on the right are all directly measurable. Assuming constant temperature and pressure:
$$\Delta G = -n_0 F \mathcal{E},$$
which is used in the derivation of the Nernst equation.
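As a worked example of the last relation, consider the Daniell cell discussed later in this article, where two electrons are transferred per zinc ion; a minimal sketch in Python (the emf value is the cell's nominal figure):

```python
F = 96485.0  # Faraday constant in coulombs per mole

def gibbs_energy_change(n_electrons_per_ion, emf_volts):
    """Delta G = -n0 * F * emf, in joules per mole of reaction."""
    return -n_electrons_per_ion * F * emf_volts

# Daniell cell: n0 = 2 electrons per zinc ion, emf of about 1.10 V
dG = gibbs_energy_change(2, 1.10)
print(f"Delta G = {dG / 1000:.0f} kJ/mol")  # about -212 kJ per mole of zinc
```

The magnitude agrees with the roughly 213 kJ of electrical energy per mole (65.4 g) of zinc cited for the Daniell cell below.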
Distinction with potential difference
Although an electrical potential difference (voltage) is sometimes called an emf, they are formally distinct concepts:
Potential difference is a more general term that includes emf.
Emf is the cause of a potential difference.
In a circuit of a voltage source and a resistor, the sum of the source's applied voltage plus the ohmic voltage drop through the resistor is zero. But the resistor provides no emf, only the voltage source does:
For a circuit using a battery source, the emf is due solely to the chemical forces in the battery.
For a circuit using an electric generator, the emf is due solely to time-varying magnetic forces within the generator.
Both a 1 volt emf and a 1 volt potential difference correspond to 1 joule per coulomb of charge.
In the case of an open circuit, the electric charge that has been separated by the mechanism generating the emf creates an electric field opposing the separation mechanism. For example, the chemical reaction in a voltaic cell stops when the opposing electric field at each electrode is strong enough to arrest the reactions. A larger opposing field can reverse the reactions in what are called reversible cells.
The electric charge that has been separated creates an electric potential difference that can (in many cases) be measured with a voltmeter between the terminals of the device, when not connected to a load. The magnitude of the emf for the battery (or other source) is the value of this open-circuit voltage.
When the battery is charging or discharging, the emf itself cannot be measured directly using the external voltage because some voltage is lost inside the source.
It can, however, be inferred from a measurement of the current $I$ and potential difference $V$, provided that the internal resistance $r$ already has been measured: $\mathcal{E} = V + Ir$.
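A minimal sketch of this inference in Python, with all numerical values chosen purely for illustration:

```python
def emf_from_load_measurement(terminal_voltage, current, internal_resistance):
    """Infer the source emf from a discharge measurement: emf = V + I*r."""
    return terminal_voltage + current * internal_resistance

# Illustrative values: a discharging battery reads 11.8 V at its terminals
# while delivering 2 A, and its internal resistance was measured as 0.1 ohm.
print(emf_from_load_measurement(11.8, 2.0, 0.1), "V")  # 12.0 V
```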
"Potential difference" is not the same as "induced emf" (often called "induced voltage").
The potential difference (difference in the electric scalar potential) between two points A and B is independent of the path we take from A to B.
If a voltmeter always measured the potential difference between A and B, then the position of the voltmeter would make no difference.
However, it is quite possible for the measurement by a voltmeter between points A and B to depend on the position of the voltmeter, if a time-dependent magnetic field is present.
For example, consider an infinitely long solenoid using an AC current to generate a varying flux in the interior of the solenoid.
Outside the solenoid we have two resistors connected in a ring around the solenoid.
The resistor on the left is 100 Ω and the one on the right is 200 Ω; they are connected at the top and bottom at points A and B.
The induced voltage, by Faraday's law, is $\mathcal{E} = -\mathrm{d}\Phi_B/\mathrm{d}t$, so the current around the ring is $I = \mathcal{E}/(100\ \Omega + 200\ \Omega)$. Therefore, the voltage across the 100 Ω resistor is $100\ \Omega \times I$ and the voltage across the 200 Ω resistor is $200\ \Omega \times I$, yet the two resistors are connected on both ends; but $V_{AB}$ measured with the voltmeter to the left of the solenoid is not the same as $V_{AB}$ measured with the voltmeter to the right of the solenoid.
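A minimal numerical sketch of this example in Python, assuming for illustration an induced emf of 1 V around the ring:

```python
EMF = 1.0                        # assumed induced emf around the loop, volts
R_LEFT, R_RIGHT = 100.0, 200.0   # the two resistors, ohms

current = EMF / (R_LEFT + R_RIGHT)   # a single current circulates the ring

v_meter_left = current * R_LEFT      # voltmeter placed left of the solenoid
v_meter_right = current * R_RIGHT    # voltmeter placed right of the solenoid

# Both meters connect the same two points A and B, yet they disagree,
# because each reading depends on the changing flux enclosed by the
# loop formed by the meter and its leads.
print(f"{v_meter_left:.3f} V versus {v_meter_right:.3f} V")  # 0.333 vs 0.667
```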
Generation
Chemical sources
The question of how batteries (galvanic cells) generate an emf occupied scientists for most of the 19th century. The "seat of the electromotive force" was eventually determined in 1889 by Walther Nernst to be primarily at the interfaces between the electrodes and the electrolyte.
Atoms in molecules or solids are held together by chemical bonding, which stabilizes the molecule or solid (i.e. reduces its energy). When molecules or solids of relatively high energy are brought together, a spontaneous chemical reaction can occur that rearranges the bonding and reduces the (free) energy of the system. In batteries, coupled half-reactions, often involving metals and their ions, occur in tandem, with a gain of electrons (termed "reduction") by one conductive electrode and loss of electrons (termed "oxidation") by another (reduction-oxidation or redox reactions). The spontaneous overall reaction can only occur if electrons move through an external wire between the electrodes. The electrical energy given off is the free energy lost by the chemical reaction system.
As an example, a Daniell cell consists of a zinc anode (an electron collector) that is oxidized as it dissolves into a zinc sulfate solution. The dissolving zinc leaves behind its electrons in the electrode according to the oxidation reaction (s = solid electrode; aq = aqueous solution):
$$\mathrm{Zn_{(s)} \longrightarrow Zn^{2+}_{(aq)} + 2e^-}$$
The zinc sulfate is the electrolyte in that half cell. It is a solution which contains zinc cations $\mathrm{Zn^{2+}}$ and sulfate anions $\mathrm{SO_4^{2-}}$ with charges that balance to zero.
In the other half cell, the copper cations in a copper sulfate electrolyte move to the copper cathode to which they attach themselves as they adopt electrons from the copper electrode by the reduction reaction:
$$\mathrm{Cu^{2+}_{(aq)} + 2e^- \longrightarrow Cu_{(s)}}$$
which leaves a deficit of electrons on the copper cathode. The difference of excess electrons on the anode and deficit of electrons on the cathode creates an electrical potential between the two electrodes. (A detailed discussion of the microscopic process of electron transfer between an electrode and the ions in an electrolyte may be found in Conway.) The electrical energy released by this reaction (213 kJ per 65.4 g of zinc) can be attributed mostly to the 207 kJ weaker bonding (smaller magnitude of the cohesive energy) of zinc, which has filled 3d- and 4s-orbitals, compared to copper, which has an unfilled orbital available for bonding.
If the cathode and anode are connected by an external conductor, electrons pass through that external circuit (light bulb in figure), while ions pass through the salt bridge to maintain charge balance until the anode and cathode reach electrical equilibrium of zero volts as chemical equilibrium is reached in the cell. In the process the zinc anode is dissolved while the copper electrode is plated with copper. The salt bridge has to close the electrical circuit while preventing the copper ions from moving to the zinc electrode and being reduced there without generating an external current. It is not made of salt but of material able to wick cations and anions (a dissociated salt) into the solutions. The flow of positively charged cations along the bridge is equivalent to the same number of negative charges flowing in the opposite direction.
If the light bulb is removed (open circuit) the emf between the electrodes is opposed by the electric field due to the charge separation, and the reactions stop.
For this particular cell chemistry, at 298 K (room temperature), the emf $\mathcal{E}$ = 1.0934 V, with a temperature coefficient of $\mathrm{d}\mathcal{E}/\mathrm{d}T$ = −4.53×10⁻⁴ V/K.
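A minimal sketch of using this coefficient to extrapolate the cell emf linearly to a nearby temperature:

```python
E_AT_298K = 1.0934   # Daniell cell emf at 298 K, volts
DE_DT = -4.53e-4     # temperature coefficient, volts per kelvin

def daniell_emf(temperature_kelvin):
    """Linear extrapolation of the cell emf around 298 K."""
    return E_AT_298K + DE_DT * (temperature_kelvin - 298.0)

print(f"{daniell_emf(310):.4f} V")  # about 1.0880 V at 310 K
```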
Voltaic cells
Volta developed the voltaic cell about 1792, and presented his work March 20, 1800. Volta correctly identified the role of dissimilar electrodes in producing the voltage, but incorrectly dismissed any role for the electrolyte. Volta ordered the metals in a 'tension series', "that is to say in an order such that any one in the list becomes positive when in contact with any one that succeeds, but negative by contact with any one that precedes it." A typical symbolic convention in a schematic of this circuit ( –||– ) would have a long electrode 1 and a short electrode 2, to indicate that electrode 1 dominates. Volta's law about opposing electrode emfs implies that, given ten electrodes (for example, zinc and nine other materials), 45 unique combinations of voltaic cells (10 × 9/2) can be created.
Typical values
The electromotive force produced by primary (single-use) and secondary (rechargeable) cells is usually of the order of a few volts. Figures quoted for such cells are nominal, because emf varies according to the size of the load and the state of exhaustion of the cell.
Other chemical sources
Other chemical sources include fuel cells.
Electromagnetic induction
Electromagnetic induction is the production of a circulating electric field by a time-dependent magnetic field. A time-dependent magnetic field can be produced either by motion of a magnet relative to a circuit, by motion of a circuit relative to another circuit (at least one of these must be carrying an electric current), or by changing the electric current in a fixed circuit. The effect on the circuit itself, of changing the electric current, is known as self-induction; the effect on another circuit is known as mutual induction.
For a given circuit, the electromagnetically induced emf is determined purely by the rate of change of the magnetic flux through the circuit according to Faraday's law of induction.
An emf is induced in a coil or conductor whenever there is a change in the flux linkages. Depending on the way in which the changes are brought about, there are two types: when the conductor is moved in a stationary magnetic field to procure a change in the flux linkage, the emf is dynamically induced. The electromotive force generated by motion is often referred to as motional emf. When the change in flux linkage arises from a change in the magnetic field around the stationary conductor, the emf is statically induced. The electromotive force generated by a time-varying magnetic field is often referred to as transformer emf.
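For the simplest motional case, a straight conductor of length $\ell$ moving at speed $v$ perpendicular to a uniform field $B$ develops an emf $\mathcal{E} = B \ell v$; a minimal sketch in Python with illustrative values:

```python
def motional_emf(b_field_tesla, length_m, speed_m_per_s):
    """emf = B * l * v for a straight conductor moving perpendicular
    to a uniform magnetic field."""
    return b_field_tesla * length_m * speed_m_per_s

# Illustrative values: a 0.2 m rod sliding at 3 m/s through a 0.5 T field
print(motional_emf(0.5, 0.2, 3.0), "V")  # 0.3 V
```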
Contact potentials
When solids of two different materials are in contact, thermodynamic equilibrium requires that one of the solids assume a higher electrical potential than the other. This is called the contact potential. Dissimilar metals in contact produce what is also known as a contact electromotive force or Galvani potential. The magnitude of this potential difference is often expressed as a difference in Fermi levels in the two solids when they are at charge neutrality, where the Fermi level (a name for the chemical potential of an electron system) describes the energy necessary to remove an electron from the body to some common point (such as ground). If there is an energy advantage in taking an electron from one body to the other, such a transfer will occur. The transfer causes a charge separation, with one body gaining electrons and the other losing electrons. This charge transfer causes a potential difference between the bodies, which partly cancels the potential originating from the contact, and eventually equilibrium is reached. At thermodynamic equilibrium, the Fermi levels are equal (the electron removal energy is identical) and there is now a built-in electrostatic potential between the bodies.
The original difference in Fermi levels, before contact, is referred to as the emf.
The contact potential cannot drive steady current through a load attached to its terminals because that current would involve a charge transfer. No mechanism exists to continue such transfer and, hence, maintain a current, once equilibrium is attained.
One might inquire why the contact potential does not appear in Kirchhoff's law of voltages as one contribution to the sum of potential drops. The customary answer is that any circuit involves not only a particular diode or junction, but also all the contact potentials due to wiring and so forth around the entire circuit. The sum of all the contact potentials is zero, and so they may be ignored in Kirchhoff's law.
Solar cell
Operation of a solar cell can be understood from its equivalent circuit. Photons with energy greater than the bandgap of the semiconductor create mobile electron–hole pairs. Charge separation occurs because of a pre-existing electric field associated with the p-n junction. This electric field is created from a built-in potential, which arises from the contact potential between the two different materials in the junction. The charge separation between positive holes and negative electrons across the p–n diode yields a forward voltage, the photo voltage, between the illuminated diode terminals, which drives current through any attached load. Photo voltage is sometimes referred to as the photo emf, distinguishing between the effect and the cause.
Solar cell current–voltage relationship
Two internal current losses limit the total current available to the external circuit. The light-induced charge separation eventually creates a forward current through the cell's internal resistance in the direction opposite the light-induced current $I_L$. In addition, the induced voltage tends to forward bias the junction, which at high enough voltages will cause a recombination current in the diode opposite the light-induced current.
When the output is short-circuited, the output voltage is zeroed, and so the voltage across the diode is smallest. Thus, short-circuiting results in the smallest losses and consequently the maximum output current, which for a high-quality solar cell is approximately equal to the light-induced current $I_L$. Approximately this same current is obtained for forward voltages up to the point where the diode conduction becomes significant.
The current delivered by the illuminated diode to the external circuit can be simplified (based on certain assumptions) to:
$$I = I_L - I_0 \left( e^{V/(m V_T)} - 1 \right).$$
$I_0$ is the reverse saturation current. Two parameters that depend on the solar cell construction and to some degree upon the voltage itself are the ideality factor $m$ and the thermal voltage $V_T = kT/q$, which is about 26 millivolts at room temperature.
Solar cell photo emf
Solving the illuminated diode's above simplified current–voltage relationship for output voltage yields:
$$V = m V_T \ln\left( \frac{I_L - I}{I_0} + 1 \right),$$
which is plotted against $I$ in the figure.
The solar cell's photo emf has the same value as the open-circuit voltage $V_\mathrm{oc}$, which is determined by zeroing the output current $I$:
$$V_\mathrm{oc} = m V_T \ln\left( \frac{I_L}{I_0} + 1 \right).$$
It has a logarithmic dependence on the light-induced current $I_L$ and is where the junction's forward bias voltage is just enough that the forward current completely balances the light-induced current. For silicon junctions, it is typically not much more than 0.5 volts, while for high-quality silicon panels it can exceed 0.7 volts in direct sunlight.
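A minimal sketch evaluating this open-circuit voltage in Python; the light-induced current, saturation current and ideality factor below are assumed illustrative values, not measured data:

```python
import math

def open_circuit_voltage(i_light, i_saturation, ideality=1.0, v_thermal=0.026):
    """V_oc = m * V_T * ln(I_L / I_0 + 1) for the simplified diode model."""
    return ideality * v_thermal * math.log(i_light / i_saturation + 1.0)

# Illustrative silicon-like values: 30 mA light-induced current,
# 1e-12 A reverse saturation current, ideality factor 1.1
print(f"{open_circuit_voltage(0.030, 1e-12, ideality=1.1):.2f} V")  # ~0.69 V
```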
When driving a resistive load, the output voltage can be determined using Ohm's law and will lie between the short-circuit value of zero volts and the open-circuit voltage $V_\mathrm{oc}$. When that resistance is small enough that the operating point lies on the near-vertical part of the two illustrated curves, the solar cell acts more like a current generator rather than a voltage generator, since the current drawn is nearly fixed over a range of output voltages. This contrasts with batteries, which act more like voltage generators.
Other sources that generate emf
A transformer coupling two circuits may be considered a source of emf for one of the circuits, just as if it were caused by an electrical generator; this is the origin of the term "transformer emf".
For converting sound waves into voltage signals:
a microphone generates an emf from a moving diaphragm.
a magnetic pickup generates an emf from a varying magnetic field produced by an instrument.
a piezoelectric sensor generates an emf from strain on a piezoelectric crystal.
Devices that use temperature to produce emfs include thermocouples and thermopiles.
Any electrical transducer which converts a physical energy into electrical energy.
See also
Counter-electromotive force
Electric battery
Electrochemical cell
Electrolytic cell
Galvanic cell
Voltaic pile
References
Further reading
George F. Barker, "On the measurement of electromotive force". Proceedings of the American Philosophical Society Held at Philadelphia for Promoting Useful Knowledge, American Philosophical Society. January 19, 1883.
Andrew Gray, "Absolute Measurements in Electricity and Magnetism", Electromotive force. Macmillan and co., 1884.
Charles Albert Perkins, "Outlines of Electricity and Magnetism", Measurement of Electromotive Force. Henry Holt and co., 1896.
John Livingston Rutgers Morgan, "The Elements of Physical Chemistry", Electromotive force. J. Wiley, 1899.
"Abhandlungen zur Thermodynamik, von H. Helmholtz. Hrsg. von Max Planck". (Tr. "Papers to thermodynamics, on H. Helmholtz. Hrsg. by Max Planck".) Leipzig, W. Engelmann, Of Ostwald classical author of the accurate sciences series. New consequence. No. 124, 1902.
Theodore William Richards and Gustavus Edward Behr, jr., "The electromotive force of iron under varying conditions, and the effect of occluded hydrogen". Carnegie Institution of Washington publication series, 1906.
Henry S. Carhart, "Thermo-electromotive force in electric cells, the thermo-electromotive force between a metal and a solution of one of its salts". New York, D. Van Nostrand company, 1920.
Hazel Rossotti, "Chemical applications of potentiometry". London, Princeton, N.J., Van Nostrand, 1969.
Nabendu S. Choudhury, 1973. "Electromotive force measurements on cells involving beta-alumina solid electrolyte". NASA technical note, D-7322.
G. W. Burns, et al., "Temperature-electromotive force reference functions and tables for the letter-designated thermocouple types based on the ITS-90". Gaithersburg, MD : U.S. Dept. of Commerce, National Institute of Standards and Technology, Washington, Supt. of Docs., U.S. G.P.O., 1993.
Electromagnetism
Electrodynamics
Voltage | Electromotive force | Physics,Mathematics | 5,574 |
16,767,087 | https://en.wikipedia.org/wiki/Cure | A cure is a substance or procedure that ends a medical condition, such as a medication, a surgical operation, a change in lifestyle or even a philosophical mindset that helps end a person's sufferings; or the state of being healed, or cured. The medical condition could be a disease, mental illness, genetic disorder, or simply a condition a person considers socially undesirable, such as baldness or lack of breast tissue.
An incurable disease may or may not be a terminal illness; conversely, a curable illness can still result in the patient's death.
The proportion of people with a disease that are cured by a given treatment, called the cure fraction or cure rate, is determined by comparing disease-free survival of treated people against a matched control group that never had the disease.
Another way of determining the cure fraction and/or "cure time" is by measuring when the hazard rate in a diseased group of individuals returns to the hazard rate measured in the general population.
Inherent in the idea of a cure is the permanent end to the specific instance of the disease. When a person has the common cold, and then recovers from it, the person is said to be cured, even though the person might someday catch another cold. Conversely, a person that has successfully managed a disease, such as diabetes mellitus, so that it produces no undesirable symptoms for the moment, but without actually permanently ending it, is not cured.
Related concepts, whose meaning can differ, include response, remission and recovery.
Statistical model
In complex diseases, such as cancer, researchers rely on statistical comparisons of disease-free survival (DFS) of patients against matched, healthy control groups. This logically rigorous approach essentially equates indefinite remission with cure. The comparison is usually made through the Kaplan-Meier estimator approach.
The simplest cure rate model was published by Joseph Berkson and Robert P. Gage in 1952. In this model, the survival at any given time is equal to those that are cured plus those that are not cured, but who have not yet died or, in the case of diseases that feature asymptomatic remissions, have not yet re-developed signs and symptoms of the disease. When all of the non-cured people have died or re-developed the disease, only the permanently cured members of the population will remain, and the DFS curve will be perfectly flat. The earliest point in time that the curve goes flat is the point at which all remaining disease-free survivors are declared to be permanently cured. If the curve never goes flat, then the disease is formally considered incurable (with the existing treatments).
The Berkson and Gage equation is

S(t) = p + (1 − p) e^(−λt)

where S(t) is the proportion of people surviving at any given point in time, p is the proportion that are permanently cured, and e^(−λt) is an exponential curve (with rate λ) that represents the survival of the non-cured people.
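A minimal Python sketch of this mixture model follows, assuming an exponential survival term with rate λ for the non-cured group; the cure fraction and hazard used below are invented example values, not fitted data.

```python
import math

def berkson_gage_survival(t, cure_fraction, hazard):
    """Mixture survival: the cured fraction p never relapses, while the
    remaining (1 - p) follow an exponential curve exp(-hazard * t)."""
    p = cure_fraction
    return p + (1.0 - p) * math.exp(-hazard * t)

# Example: 40% permanently cured; non-cured median survival of 2 years.
p, lam = 0.40, math.log(2) / 2.0
for year in range(11):
    print(year, round(berkson_gage_survival(year, p, lam), 3))
# The printed curve flattens toward p = 0.40 as the exponential term decays,
# which is exactly the plateau used to declare the remaining survivors cured.
```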
Cure rate curves can be determined through an analysis of the data. The analysis allows the statistician to determine the proportion of people that are permanently cured by a given treatment, and also how long after treatment it is necessary to wait before declaring an asymptomatic individual to be cured.
Several cure rate models exist, such as the expectation-maximization algorithm and Markov chain Monte Carlo model. It is possible to use cure rate models to compare the efficacy of different treatments. Generally, the survival curves are adjusted for the effects of normal aging on mortality, especially when diseases of older people are being studied.
From the perspective of the patient, particularly one that has received a new treatment, the statistical model may be frustrating. It may take many years to accumulate sufficient information to determine the point at which the DFS curve flattens (and therefore no more relapses are expected). Some diseases may be discovered to be technically incurable, but also to require treatment so infrequently as to be not materially different from a cure. Other diseases may prove to have multiple plateaus, so that what was once hailed as a "cure" results unexpectedly in very late relapses. Consequently, patients, parents and psychologists developed the notion of psychological cure, or the moment at which the patient decides that the treatment was sufficiently likely to be a cure as to be called a cure. For example, a patient may declare himself to be "cured", and to determine to live his life as if the cure were definitely confirmed, immediately after treatment.
Related terms
Response: a partial reduction in symptoms after treatment.
Recovery: a restoration of health or functioning. A person who has been cured may not be fully recovered, and a person who has recovered may not be cured, as in the case of a person in a temporary remission or who is an asymptomatic carrier for an infectious disease.
Prevention: a way to avoid an injury, sickness, disability, or disease in the first place, and generally it will not help someone who is already ill (though there are exceptions). For instance, many babies and young children are vaccinated against polio (a highly infectious disease) and other infectious diseases, which prevents them from contracting polio. But the vaccination does not work on patients who already have polio. A treatment or cure is applied after a medical problem has already started.
Therapy: a therapy treats a problem, and may or may not lead to its cure. In incurable conditions, a treatment ameliorates the medical condition, often only for as long as the treatment is continued or for a short while after treatment is ended. For example, there is no cure for AIDS, but treatments are available to slow down the harm done by HIV and extend the treated person's life. Treatments do not always work; for example, chemotherapy is a treatment for cancer, but it may not work for every patient. In easily cured forms of cancer, such as childhood leukaemias, testicular cancer and Hodgkin lymphoma, cure rates may approach 90%. In other forms, treatment may be essentially impossible. A treatment need not be successful in 100% of patients to be considered curative. A given treatment may permanently cure only a small number of patients; so long as those patients are cured, the treatment is considered curative.
Examples
Cures can take the form of natural antibiotics (for bacterial infections), synthetic antibiotics such as the sulphonamides, or fluoroquinolones, antivirals (for a very few viral infections), antifungals, antitoxins, vitamins, gene therapy, surgery, chemotherapy, radiotherapy, and so on. Despite a number of cures being developed, the list of incurable diseases remains long.
1700s
Scurvy became curable (as well as preventable) with doses of vitamin C (for example, in limes) when James Lind published A Treatise on the Scurvy (1753).
1890s
Antitoxins to diphtheria and tetanus toxins were produced by Emil Adolf von Behring and his colleagues from 1890 onwards. The use of diphtheria antitoxin for the treatment of diphtheria was regarded by The Lancet as the "most important advance of the [19th] Century in the medical treatment of acute infectious disease".
1930s
Sulphonamides become the first widely available cure for bacterial infections.
Antimalarials were first synthesized, making malaria curable.
1940s
Bacterial infections became curable with the development of antibiotics.
2010s
Hepatitis C, a viral infection, became curable through treatment with antiviral medications.
See also
Eradication of infectious diseases
Preventive medicine
Remission (medicine)
Relapse, the reappearance of a disease
Spontaneous remission
References
Drugs
Medical terminology
Therapy | Cure | Chemistry | 1,595 |
16,027,304 | https://en.wikipedia.org/wiki/Interferometric%20modulator%20display | Interferometric modulator display (IMOD, trademarked mirasol) is a technology used in electronic visual displays that can create various colors via interference of reflected light. The color is selected with an electrically switched light modulator comprising a microscopic cavity that is switched on and off using driver integrated circuits similar to those used to address liquid crystal displays (LCD). An IMOD-based reflective flat panel display includes hundreds of thousands of individual IMOD elements each a microelectromechanical systems (MEMS)-based device.
In one state, an IMOD subpixel absorbs incident light and appears black to the viewer. In a second state, it reflects light at a specific wavelength through optical interference within its cavity. When not being addressed, an IMOD display consumes very little power. Unlike conventional back-lit liquid crystal displays, it is clearly visible in bright ambient light such as sunlight. IMOD prototypes as of mid-2010 could display 15 frames per second (fps), and in November 2011 Qualcomm demonstrated another prototype reaching 30 fps, suitable for video playback. The smartwatch Qualcomm Toq features this display at 40 fps.
Mirasol screens could produce 60 Hz video, but doing so quickly drained the battery. Devices that used the screen showed colors that looked washed out, so the technology never saw mainstream support.
Working principle
The basic elements of an IMOD-based display are microscopic devices that act essentially as mirrors that can be switched on or off individually. Each of these elements reflects only one exact wavelength of light, such as a specific hue of red, green or blue, when turned on, and absorbs light (appears black) when off. Elements are organised into a rectangular array in order to produce a display screen.
An array of elements that all reflect the same color when turned on produces a monochromatic display, for example black and red (in this example using IMOD elements that reflect red light when "on"). As each element reflects only a certain amount of light, grouping several elements of the same color together as subpixels allows different brightness levels for a pixel based on how many elements are reflective at a particular time.
Multiple color displays are created by using subpixels, each designed to reflect a specific different color. Multiple elements of each color are generally used to both give more combinations of displayable color (by mixing the reflected colors) and to balance the overall brightness of the pixel.
Because elements only use power in order to switch between on and off states (no power is needed to reflect or absorb light hitting the display once the element is either reflecting or absorbing), IMOD-based displays potentially use much less power than displays that generate light and/or need constant power to keep pixels in a particular state. Being a reflective display, they require an external light source (such as daylight or a lamp) to be readable, just like paper or other electronic paper technologies.
Details
A pixel in an IMOD-based display consists of one or more subpixels that are individual microscopic interferometric cavities similar in operation to Fabry–Pérot interferometers (etalons). While a simple etalon consists of two half-silvered mirrors, an IMOD comprises a reflective membrane which can move in relation to a semi-transparent thin film stack. With an air gap defined within this cavity, the IMOD behaves like an optically resonant structure whose reflected color is determined by the size of the airgap. Application of a voltage to the IMOD creates electrostatic forces which bring the membrane into contact with the thin film stack. When this happens the behavior of the IMOD changes to that of an induced absorber. The consequence is that almost all incident light is absorbed and no colors are reflected. It is this binary operation that is the basis for the IMOD's application in reflective flat panel displays. Since the display utilizes light from ambient sources, the display's brightness increases in high ambient environments (i.e. sunlight). In contrast, a back-lit LCD suffers from incident light.
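As a toy illustration of how the air-gap size selects the reflected color, the Python sketch below applies the idealized normal-incidence resonance condition 2nd = mλ for a bare gap; real IMOD stacks add phase shifts from the thin films, which this deliberately ignores.

```python
def resonant_wavelengths_nm(gap_nm, n_gap=1.0, max_order=4):
    """Wavelengths constructively reflected by an idealized cavity of
    optical thickness n_gap * gap_nm at normal incidence: 2*n*d = m*lambda."""
    return [2.0 * n_gap * gap_nm / m for m in range(1, max_order + 1)]

# An assumed ~325 nm air gap puts the first-order resonance near 650 nm (red);
# collapsing the gap detunes the cavity so no visible color is reflected.
print(resonant_wavelengths_nm(325))  # approx [650.0, 325.0, 216.7, 162.5]
```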
For a practical RGB color model (RGB) display, a single RGB pixel is built from several subpixels, because the brightness of an individual monochromatic element cannot be varied continuously. A monochromatic array of subpixels represents different brightness levels for each color, and for each pixel there are three such arrays: red, green and blue.
Development
The IMOD technology was invented by Mark W. Miles, a MEMS researcher and founder of Etalon, Inc., and (co-founder) of Iridigm Display Corporation. Qualcomm took over the development of this technology after its acquisition of Iridigm in 2004, and subsequently formed Qualcomm MEMS Technologies (QMT). Qualcomm has allowed commercialization of the technology under the trademark name "mirasol". This energy-efficient, biomimetic technology sees application and use in portable electronics such as e-book readers and mobile phones.
IMOD panel manufacturers include Qualcomm in conjunction with Foxlink, which established a joint venture with Sollink (高強光電) in 2009 to build a facility dedicated to manufacturing IMOD panels. Production began in January 2011, with the fabricated panels intended for devices such as e-readers.
As of 2015, the IMOD Mirasol display laboratory in Longtan, Taiwan, formerly run by Qualcomm, is now apparently run by Apple.
Uses
IMOD displays are now available in the commercial marketplace. QMT's displays, using IMOD technology, are found in the Acoustic Research ARWH1 Stereo Bluetooth headset device, the Showcare Monitoring system (Korea), the Hisense C108, and MP3 applications from Freestyle Audio and Skullcandy. In the mobile phone marketplace, Taiwanese manufacturers Inventec and Cal-Comp have announced phones with mirasol displays, and LG claims to be developing "one or more" handsets using mirasol technology. These products all have only two-color (black plus one other) "bi-chromic" displays. A multi-color IMOD display is used in the Qualcomm Toq smartwatch.
References
Bibliography
Display technology
Qualcomm | Interferometric modulator display | Engineering | 1,303 |
48,706,788 | https://en.wikipedia.org/wiki/Erica%20Klarreich | Erica Gail Klarreich is an American mathematician, journalist and science popularizer.
Early life and education
Klarreich's father was a professor of mathematics, and her mother was a mathematics teacher.
Klarreich obtained her Ph.D. in mathematics under the guidance of Yair Nathan Minsky at Stony Brook University in 1997.
Mathematics
As a mathematician, Klarreich proved that the boundary of the curve complex is homeomorphic to the space of ending laminations.
Popular science writing
As a popular science writer, Klarreich's work has appeared in publications such as Nature, Scientific American, New Scientist, and Quanta Magazine. She is one of the winners of the 2021 Joint Policy Board for Mathematics Communications Award for her popular science writing.
Selected publications
Mathematics
"The boundary at infinity of the curve complex and the relative Teichmüller space"
"Semiconjugacies between Kleinian group actions on the Riemann sphere"
Popular science
"Biologists join the dots", Nature, v. 413, n. 6855, pp. 450–452, 2001.
"Foams and honeycombs", American Scientist, v. 88, n. 2, pp. 152–161, 2000.
"Quantum cryptography: Can you keep a secret?", Nature, v. 418, n. 6895, pp. 270–272, 2002.
"Huygens's clocks revisited", American Scientist, v. 90, pp. 322–323, 2002.
References
External links
Klarreich's personal page
Living people
American geometers
20th-century American mathematicians
21st-century American mathematicians
Stony Brook University alumni
Place of birth missing (living people)
American science communicators
Quantum cryptography
Mathematical chemistry
20th-century American women mathematicians
21st-century American women mathematicians
Year of birth missing (living people) | Erica Klarreich | Chemistry,Mathematics | 383 |
33,449,620 | https://en.wikipedia.org/wiki/Histone-like%20nucleoid-structuring%20protein | Histone-like nucleoid-structuring protein (H-NS), is one of twelve nucleoid-associated proteins (NAPs) whose main function is the organization of genetic material, including the regulation of gene expression via xenogeneic silencing. H-NS is characterized by an N-terminal domain (NTD) consisting of two dimerization sites, a linker region that is unstructured and a C-terminal domain (CTD) that is responsible for DNA-binding. Though it is a small protein (15 kDa), it provides essential nucleoid compaction and regulation of genes (mainly silencing) and is highly expressed, functioning as a dimer or multimer. Change in temperature causes H-NS to be dissociated from the DNA duplex, allowing for transcription by RNA polymerase, and in specific regions lead to pathogenic cascades in enterobacteria such as Escherichia coli and the four Shigella species.
Structure
H-NS has a specific topology that allows it to condense bacterial DNA into a superhelical structure based on evidence from X-ray crystallography. The condensed superhelical structure has implicated H-NS in gene repression caused by the formation of oligomers. These oligomers form due to dimerization of two sites in the N-terminal domain of H-NS. For example, in bacterial species like Salmonella typhimurium, the NTD of H-NS contains dimerization sites in helices alpha 1, alpha 2 and alpha 3. Alpha helices 3 and 4 are then responsible for creating the superhelical structure of H-NS-DNA interactions by head to head association (Figure 2). H-NS also contains an unstructured linker region, also known as a Q-linker. The C-Terminal domain, also known as the DNA Binding Domain (DBD), shows high affinity for regions in DNA that are rich in Adenine and Thymine and present in a hook-like motif in a minor groove. The base stacking present in this AT rich region of the DNA allows for minor widening of the minor groove that is preferential for binding. Common DBD's include AACTA and TACTA regions which can appear hundreds of times throughout the genome. Within these AT-rich regions, the minor groove has a width of 3.5 Å, which is preferential for H-NS binding. In E. coli, it was observed that H-NS restructures the genome into microdomains in vivo. While the bacterial genome is split into four different macrodomains including Ori and Ter (macrodomain of E. coli and Shigella spp. in which H-NS is encoded), it is thought that H-NS plays a role in the formation of these small 10 kb microdomains throughout the genome.
Function
A major function of H-NS is to influence DNA topology (Figure 2). H-NS is responsible for formation of nucleofilaments along the DNA and DNA-DNA bridges. H-NS is known as a passive DNA bridger, meaning that it binds two distant segments of DNA and remains stationary, forming a loop. This DNA loop formation allows H-NS to control gene expression. Relief of suppression by H-NS can be achieved by the binding of another protein, or by changes in DNA topology which can occur due to changes in temperature and osmolarity, for example. The CTD binds to the bacterial DNA in such a way that inhibits the function of RNA polymerase. This is a common feature seen in horizontally acquired genes. Structural studies of H-NS use bacterial species such as E. coli and Shigella spp. because the C-Terminal Domain is completely conserved.
The process for formation of H-NS-DNA complexes begins with the CTD binding to a preferential site in the genome. This may be the result of the large amount of positively charged amino acid residues located within the linker region that causes the CTD to search for a binding site with high affinity. Once the CTD is bound to its preferential region, TpA step, the NTD's can oligomerize and form rigid nucleofilaments that, if favorable conditions exist, will more freely bind to one another to form DNA-bridges. This form of bridging is known as "passive bridging" and may not allow RNAP to proceed with transcription. The experiments used to support this method of DNA binding and gene silencing come from Atomic Force Microscopy and single-molecule studies in vitro.
All bacteria must be sensitive to changes in their physical environment to survive. These mechanisms allow for turning genes on or off depending on its extracellular environment. Many researchers believe that H-NS contributes to these sensory functions. H-NS has been observed to control around 60% of the temperature regulated genes and can dissociate from the DNA duplex at 37 °C. This particular sensitivity seen in H-NS allows for pathogenesis and is the main focus of study. Outside of a host, the temperature of 32 °C prevents dissociation of H-NS from the virulence plasmid in Shigella spp. in order to conserve energy for energetically costly production of proteins involved in pathogenesis. The presence of magnesium ions (Mg2+) has been shown to allow H-NS to form a slightly open to completely open conformational change in structure that will ultimately alter the interaction between the negatively charged NTD and positively charged CTD. Magnesium concentrations below 2 mM, allows for the formation of rigid nucleoprotein filaments and high concentrations promote the formation of H-NS DNA bridges. The charges seen in the NTD and CTD may explain how H-NS remains sensitive to changes in temperature and osmolarity (pH below 7.4). H-NS can also interact with other proteins and influence their function, for example it can interact with the flagellar motor protein FliG to increase its activity.
Clinical Significance
H-NS has a conserved role in the pathogenicity of gram-negative bacteria including Shigella spp., Escherichia coli, Salmonella spp., and many others. It is implicated in the transcription of the virF gene, the trigger of the virulence cascade that leads to bacillary dysentery, a disease affecting children mainly seen in developing countries. These two bacterial species contain a virulence plasmid that is responsible for invasion of host cells and is regulated by H-NS. Notably, almost 70% of the open reading frames (ORFs) of the specialized virulence plasmid in Shigella spp. are AT-rich, allowing for long-term regulation of this plasmid by H-NS.
Aforementioned, studies show that temperature sensitive H-NS will dissociate from bacterial DNA at 37 °C, triggering RNA polymerase to transcribe virF, the gene responsible for the expression of VirF. VirF is the main regulator of the virulence cascade and is expressed due to the temperature sensitive "hinge" region of the virF promoter changing conformation so that is no longer favorable for DNA-bridging by H-NS (Figure 3). Once VirF is expressed, it up regulates the production of icsA, functions to promote motility, and virB, encodes the next regulation protein in the Shigella cascade. As soon as VirB is expressed, it will disrupt H-NS for the rest of the virulence plasmid.
Shigella spp. contain "molecular backups", or paralogues, to H-NS that have been studied in detail due to their apparent assistance in organization of the virulence plasmid. StpA is a paralogue of H-NS that is conserved across the species but the other, Sfh is expressed solely in the S. flexneri mutant strain 2457T. This mutant strain is of much interest to researchers because it acts as a replacement for H-NS since 2457T does not contain the hns gene. The correlation between H-NS and its paralogues is poorly understood at this time. Due to importance of these paralogues in the absence of H-NS in the mutant, further research and focus on these paralogues could lead to promising antibacterial treatments.
References
Protein families
DNA-binding proteins | Histone-like nucleoid-structuring protein | Biology | 1,746 |
12,228,637 | https://en.wikipedia.org/wiki/C4H6 | {{DISPLAYTITLE:C4H6}}
The molecular formula C4H6 (molar mass: 54.09 g/mol) may refer to:
1,3-Butadiene
1,2-Butadiene
Bicyclobutane
Cyclobutene
Dimethylacetylene (2-butyne)
1-Methylcyclopropene
3-Methylcyclopropene
Methylenecyclopropane
Trimethylenemethane
1-butyne | C4H6 | Chemistry | 107 |
4,171,659 | https://en.wikipedia.org/wiki/Lug%20nut | A lug nut or wheel nut is a fastener, specifically a nut, used to secure a wheel on a vehicle. Typically, lug nuts are found on automobiles, trucks (lorries), and other large vehicles using rubber tires.
Design
A lug nut is a nut fastener with one rounded or conical (tapered) end, used on steel and most aluminum wheels. A set of lug nuts is typically used to secure a wheel to threaded wheel studs and thereby to a vehicle's axles.
Some designs (Audi, BMW, Mercedes-Benz, Saab, Volkswagen) use lug bolts or wheel bolts instead of nuts, which screw into a tapped (threaded) hole in the wheel's hub or brake drum or brake disc.
The conical lug's taper is normally 60 degrees (although 45 degrees is common for wheels designed for racing applications), and is designed to help center the wheel accurately on the axle, and to reduce the tendency for the nut to loosen due to fretting induced precession, as the car is driven. One popular alternative to the conical lug seating design is the rounded, hemispherical, or ball seat. Automotive manufacturers such as Audi, BMW, and Honda use this design rather than a tapered seat, but the nut performs the same function. Older style (non-ferrous) alloy wheels use nuts with a cylindrical shank slipping into the wheel to center it and a washer that applies pressure to clamp the wheel to the axle.
Wheel lug nuts may have different shapes. Aftermarket alloy and forged wheels often require specific lug nuts to match their mounting holes, so it is often necessary to get a new set of lug nuts when the wheels are changed.
There are four common lug nut types:
cone seat
bulge cone seat
under hub cap
spline drive.
The lug nut thread type varies between car brands and models. Examples of commonly used metric threads include:
M10×1.25 mm
M12 (1.25, 1.5 or 1.75 mm thread pitch, with M12x1.5 mm being the most common)
M14 (1.25, 1.5 or 2 mm pitch, with M14×1.5 mm being the most common)
M16×1.5 mm
Some older American cars use inch threads, for example 7/16″-20 (11.1 mm), 1/2″-20 (12.7 mm), or 9/16″-20 (14.3 mm).
Removal and installation
Lug nuts may be removed using a lug wrench, socket wrench, or impact wrench. If the wheel is to be removed, an automotive jack is used to raise the vehicle, along with wheel chocks. Wheels that have hubcaps or wheel covers need these removed beforehand, typically with a screwdriver, flatbar, or prybar. Lug nuts can be difficult to remove, as they may become frozen to the wheel stud. In such cases a breaker bar or repeated blows from an impact wrench can be used to free them. Alternating between tightening and loosening can free especially stubborn lug nuts.
Lug nuts must be installed in an alternating pattern, commonly referred to as a star pattern. This ensures a uniform distribution of load across the wheel mounting surface. When installing lug nuts, it is recommended to tighten them with a calibrated torque wrench. While a lug, socket, or impact wrench may be used to tighten lug nuts, the final tightening should be performed by a torque wrench, ensuring an accurate and adequate load is applied. Torque specifications vary by vehicle and wheel type. Both vehicle and wheel manufacturers provide recommended torque values which should be consulted when an installation is done. Failure to abide by the recommended torque value can result in damage to the wheel and brake rotor/drum. Additionally, under-tightened lug nuts may come loose with time.
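One common convention for such an alternating sequence can be generated programmatically, as the Python sketch below shows; it is illustrative only and does not replace the pattern specified in a vehicle's service manual.

```python
def star_pattern(n_lugs):
    """Return a criss-cross tightening order for n_lugs studs (0-indexed)."""
    if n_lugs % 2 == 0:
        # Even counts: alternate between a lug and the one directly opposite.
        order = []
        for i in range(n_lugs // 2):
            order += [i, i + n_lugs // 2]
        return order
    # Odd counts: stepping around the circle by two traces the classic star.
    return [(i * 2) % n_lugs for i in range(n_lugs)]

print(star_pattern(5))  # [0, 2, 4, 1, 3] -- the familiar 5-lug star
print(star_pattern(6))  # [0, 3, 1, 4, 2, 5] -- opposite pairs
```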
The tool size needed for removal and installation depends on the type of lug nut. The three most common hex sizes for lug nuts are 17 mm, 19 mm, and 21 mm, while 22 mm, 23 mm, 11/16 inch (17.5 mm), and 13/16 inch (20.6 mm) are less commonly used.
Detecting loose nuts
In order to allow early detection of loose lug nuts, some large vehicles are fitted with loose wheel nut indicators. The indicator spins with the nut so that loosening can be detected with a visual inspection.
Anti-theft nuts or bolts
In countries where the theft of alloy wheels is a serious problem, locking nuts (or bolts, as applicable) are available — or already fitted by the vehicle manufacturer — which require a special adaptor ("key") between the nut and the wrench to fit and remove. The key is normally unique to each set of nuts. Only one locking nut per wheel is normally used, so they are sold in sets of four. Most designs can be defeated using a hardened removal tool which uses a left-hand self-cutting thread to grip the locking nut, although more advanced designs have a spinning outer ring to frustrate such techniques. An older technique for removal was to simply hammer a slightly smaller socket over the locking wheel nut to be able to remove it. However, with the newer design of locking wheel nuts this is no longer possible. Removal nowadays requires special equipment that is not available to the general public. This helps to prevent thieves from obtaining the tools to be able to remove the lock nuts themselves.
History
In the United States, vehicles manufactured prior to 1975 by the Chrysler Corporation used left-hand and right-hand screw thread for different sides of the vehicle to prevent loosening. Most Buicks, Pontiacs, and Oldsmobiles used both left-handed and right-handed lug nuts prior to model year 1965. It was later realized that the taper seat performed the same function. Most modern vehicles use right-hand threads on all wheels.
See also
Center cap
Wheel sizing
References
External links
Nuts (hardware)
Vehicle parts | Lug nut | Technology | 1,230 |
67,455,126 | https://en.wikipedia.org/wiki/Bottini%20of%20Siena | The Bottini di Siena are a complex system of medieval underground aqueducts for the water supply of the city of Siena with a total length of . The system used to be the main water supply of the entire city of Siena until 1914 and nowadays continues to supply water to the fountains of Siena.
Structure of the underground aqueduct system
It is named after the Latin word buctinus, used for the first time in 1226, and the word botte (Italian for "barrels"), which describes the shape of the arched walls, mostly made of terracotta, composing the roofs of the underground tunnels of the aqueduct. The underground canal system of the Bottini (singular Bottino) consists of several waterways which supply the wells within the city walls of Siena and in their vicinity, bringing water from sources located several miles away. The Bottini divert part of the Tressa, Staggia and Arbia rivers towards the city of Siena, collecting rainwater through their permeable roofs along the way and carrying it into the city. This complex underground system was not only used to supply the city with drinking water, but also for the operation of many water-dependent medieval artisan industries (such as dyers and leather workers), for cleaning and fire prevention, and for irrigation and agriculture, which would otherwise have been impossible in a city like Siena, located kilometers away from the nearest watercourse.
The Bottini are not to be seen as a uniform canal system. In addition to the main veins Bottino maestro di Fonte Gaia and Bottino maestro di Fontebranda and their side arms (called ramo, plural rami) there are many independent Bottini that only feed individual fountains. The watercourses do not follow the course of the streets of Siena. Most of the canals (called gorello, plural gorelli) are about wide and are mostly located in brick-walled corridors that are about wide. The height of the corridors varies from . The Bottini are not closed off from the outside world; there are several air shafts in each Bottino (called smiraglio, plural smiragli, sometimes also called occhio, plural occhi).
The Bottino maestro di Fonte Gaia is supplied by three main tributaries from Colombaio (also del Castagno, with of maximum distance from Fonte Gaia), Michele a Quarto (also San Dalmazio) and Uopini, which is close to the Fontebecci fountain. Smaller tributaries are that of Vico Alto (before Fontebecci in the Ramo di Colombaio) and those of Acqua Calda, Marciano and Poggiarello (between Fontebecci and Porta Camollia). The entire route has a constant incline of 1 ‰ (1 m of difference in height over 1 km of length) and transports per second. The excavations took place starting from two different points: one started from the Piazza del Campo and headed north, while the other started from Santa Petronilla (near the Antiporto di Camollia) and proceeded both towards Fonte Gaia (south) and towards Fontebecci (north).
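Because the gradient is constant, the height lost along any branch is a one-line calculation, as the small Python sketch below shows for an arbitrary example length.

```python
def elevation_drop_m(length_km, gradient_permille=1.0):
    """Drop in metres for a channel with a constant per-mille gradient."""
    return length_km * gradient_permille

# A hypothetical 25 km stretch at the stated 1 permille falls 25 m.
print(elevation_drop_m(25.0))
```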
The Bottino maestro di Fontebranda is the shortest (most distant point north of Fontebranda), oldest and deepest of the two main canals. It starts north of the city walls and transports per second. The most important tributary is the Ramo di Chiarenna, other important tributaries are those of Santa Petronilla and San Prospero.
History
First excavations and birth of the medieval aqueduct system
The first underground watercourses already existed in the 4th century, during the Roman period, when Siena was still limited to the area of Castelvecchio. The Fontanella fountain is mentioned here in 394. The reason for the construction of the aqueduct system in the Middle Ages was mainly due to the shortage of water in the city of Siena, which experienced a period of strong population growth starting from the 11th century.
They were first documented with their Latin name in 1226 as Buctinus. Bricked Bottini were documented during the expansion works of Fontebranda in 1246 when thousands of stones were used. Further work on the tributaries took place in March 1250, here work was carried out on the wells of Val di Montone, Val di Follonica, Fontebranda, Pescaia and Vetrice, with that of Pescaia on the two main arms attached (Bottino di Fonte Gaia and Bottino di Fontebranda). In 1267 there were attempts to direct the waters of the Merse to Siena, but these plans failed shortly afterwards. After that, the focus was again on repairing the existing waterways. New water veins were found in 1274, which later led to the construction of the Fontenuova and Fonte d'Ovile fountains.
Development and evolution of the Bottini
In order to finally lead the water to the central Piazza del Campo, on December 16, 1334, the city government entrusted Jacopo di Vanni with bringing the water from the veins to the north into the city; already in 1343 it arrived in Piazza del Campo, the center of the life of the city. In 1343 the bottino reached Fontebecci and an attempt was made to connect it to the water of the river Staggia near Quercegrossa. From 1344, professional miners from Massa Marittima and Montieri, called Guerchi, were hired for the most difficult parts of the excavations; they received higher wages than the inexperienced Sienese workers. The first stones of Fonte Gaia were laid in April 1343, the fountain was consecrated in 1346 (with the water coming from Fontebecci) and then redesigned from 1409 to 1419 by Jacopo della Quercia. The Ramo di Uopini was completed in 1387. In order to improve the water quality, between 1437 and 1438 the Galazzoni, a system of basins that removes impurities from the water by decanting it, were built under the Prato di Porta Camollia. These basins are deep and contain at least . After 1466 no significant changes or extensions were made to the main structure of the Bottini system.
The first private branches to private households emerged from 1474 onwards. In fact, on July 14, 1474, Alessandro di Mariano Sozzini received permission from the city government to build a drain near the Pantaneto fountain to his private residence in Via Pantaneto at his own expense. In September of the same year Pietro Forteguerri also received permission to draw water from the fountain in Via del Casato to his house. Bartolo di Tura also received permission in 1474 to create a private connection. The aim of allowing the creation of private connections was to reduce illegal discharges, the immense extent of which was officially denounced as early as 1446.
From the Florentine conquest of Siena to the unification of Italy
The air shafts (smiragli) outside the city walls proved to be problematic in times of war. Already in the run-up to the Battle of Camollia (1526) the conspirator Lucio Aringhieri tried to bring troops into the city via the Bottini. In the run-up to the Fiorentine siege (1554–1555), the Bottini began to be walled up in March 1553 so that only water could flow under the barriers.
Throughout the period from the surrender of Siena in 1555 to the entry into operation of the Vivo aqueduct after the First World War, Siena continued to use the Bottini as its only source of water supply.
From September 1691, some private individuals requested and obtained connections to the municipal water supply through wells that collected the water from the gorello: based on how much they paid, they received the corresponding amount of water, measured by the municipality in "dadi". The dado, also called forellino, was a small hole in the center of a plate that blocked the junction channel and corresponded to about of water in 24 hours. People could have contracts for 1/2 dado, for 1, 2, 3 dadi, and so on.
The oldest planimetric map still in existence dates from 1768 and is now in the Siena State Archives. In July 1825, Giovanni Gani created a connection between the two main canals at an intersection near the Palazzo dei Diavoli. In order to supply water to the almost dry canal of Fonte Gaia, it was pumped with two pumps from the canal of Fontebranda, located twenty meters below. After the water flow in the Bottino of Fonte Gaia normalized, the connection was interrupted again in the same year. This system was used in 1835 and 1851 for the same reasons. In order to cope with the water shortage in the Fonte Gaia canal, restoration work took place from 1851 to 1868, during which several areas of the canal affected by landslides were cleared. The (modernized) pump approach was resumed in 1870, when steam pumps were installed at Vico Bello. However, these were moved in 1873 in the direction of the road to San Domenico; the length of the connecting pipe is . This connection also supplied the area of the Fortezza Santa Barbara fortress and was active until 1931.
The end of the Bottini as a water supply system for drinking water
At the end of the 19th century, the water supply via the Bottini was no longer considered sufficient in quantitative and hygienic terms. From 1885, 18 sources were examined, and from 1886 the rivers Arbia, Elsa and Masellone, as well as the Bozzone, Staggia and Tressa, were shortlisted and assessed for whether their water quality was adequate for the city supply. Ultimately, in 1895 the choice fell on the springs of the Vivo, which flow from Monte Amiata. From just below the source at Vivo d'Orcia to the city of Siena, an underground aqueduct was built across the municipal areas of Castiglione d'Orcia, Montalcino, San Quirico d'Orcia (near Bagno Vignoni) and Murlo. Called the Acquedotto del Vivo, it still supplies the city of Siena with drinking water today, alongside later supply lines (for example from the Ente and Fiora rivers). The aqueduct reached Porta San Marco on May 15, 1914, while the inner-city distribution system was completed in 1918.
Fountains
The fountains belonging to the Bottini water system are divided into two categories. The main wells (fonti maggiori) include the wells that have larger water inlets, the secondary wells (fonti minori) include the wells whose water quantity and importance is much lower.
Main fountains
Secondary fountains
Citations
General bibliography
Antonio Maria Baldi: Gli antichi bottini senesi. In: Leonardo Lombardi, Gioacchino Lena, Giulio Pazzagli (eds.): Tecnica di idraulica antica. Geologia dell'Ambiente, Supplemento al numero 4/2006 (Periodico della SIGEA, Società Italiana di Geologia Ambientale), Rome 2006 (online edition, PDF)
Comune di Siena (ed.): I Bottini. Acquedotti medievali senesi. Edizioni Gielle, Siena 1984
Comune di Siena, Santa Maria della Scala, Associazione La Diana (eds.): A ritrovar la Diana. Protagon Editori, Siena 2001
Duccio Balestracci, Laura Vigni, Armando Constantini: La memoria dell'Acqua. I bottini di Siena. Protagon Editori, Siena 2006
Acquedotto del Fiora/La Diana (eds.); Benedetto Bargagli Petrucci, Giacomo Luchini, Luca Luchini, Laura Vigni, Giacomo Zanibelli: Acqua per la città. Nel centenario dell'acquedotto del Vivo. Una tormentata avventura senese fra XIX e XX secolo. Tipografia senese, Siena 2014
Fabio Bargagli Petrucci: Le fonti di Siena e i loro aquedotti, note storiche dalle origini fino al MDLV. Siena 1906 (online edition at archive.org, PDF)
External links
Official website of the city of Siena
Associazione La Diana, website on the Bottini di Siena
Website of the Museo dell'Acqua
Enjoy Siena, page on the Bottini di Siena
Enjoy Siena, page on the Museo dell'Acqua
Bottini medievali senesi
La mia terra di Siena: Fonti e bottini di Siena
Fountains in Siena
History of construction
Populated places established in the 1st millennium BC
Roman sites of Tuscany
Siena
Water
World Heritage Sites in Italy | Bottini of Siena | Engineering,Environmental_science | 2,715 |
670,783 | https://en.wikipedia.org/wiki/Wake%20turbulence | Wake turbulence is a disturbance in the atmosphere that forms behind an aircraft as it passes through the air. It includes several components, the most significant of which are wingtip vortices and jet-wash, the rapidly moving gases expelled from a jet engine.
Wake turbulence is especially hazardous in the region behind an aircraft in the takeoff or landing phases of flight. During take-off and landing, an aircraft operates at a high angle of attack. This flight attitude maximizes the formation of strong vortices. In the vicinity of an airport, there can be multiple aircraft, all operating at low speed and low altitude; this provides an extra risk of wake turbulence with a reduced height from which to recover from any upset.
Definition
Wake turbulence is a type of clear-air turbulence. In the case of wake turbulence created by the wings of a heavy aircraft, the rotating vortex-pair lingers for a significant amount of time after the passage of the aircraft, sometimes more than a minute. One of these rotating vortices can seriously upset or even invert a smaller aircraft that encounters it, either in the air or on the ground.
In fixed-wing level flight
The vortex circulation is outward, upward, and around the wingtips when viewed from either ahead or behind the aircraft. Tests with large aircraft have shown that vortices remain spaced less than a wingspan apart, drifting with the wind, at altitudes greater than a wingspan from the ground. Tests have also shown that the vortices sink at a rate of several hundred feet per minute, slowing their descent and diminishing in strength with time and distance behind the generating aircraft.
At altitude, vortices sink at a rate of per minute and stabilize about below the flight level of the generating aircraft. Therefore, aircraft operating at altitudes greater than are considered to be at less risk.
When the vortices of larger aircraft sink close to the ground — within — they tend to move laterally over the ground at a speed of . A crosswind decreases the lateral movement of the upwind vortex and increases the movement of the downwind vortex.
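This superposition of crosswind and the vortices' own outward drift can be sketched numerically; in the Python example below the self-induced drift speed is an assumed placeholder (the article's figure is elided), so treat the numbers as purely illustrative.

```python
def vortex_ground_speeds(crosswind_kt, self_speed_kt=5.0):
    """Near the ground, each vortex of the pair drifts laterally outward at
    roughly self_speed_kt; a crosswind adds to one vortex's motion and
    subtracts from the other's."""
    upwind = crosswind_kt - self_speed_kt
    downwind = crosswind_kt + self_speed_kt
    return upwind, downwind

# A crosswind equal to the self-induced speed holds the upwind vortex over
# the runway (net drift ~0) while pushing the downwind vortex toward any
# parallel runway -- the hazard described above.
print(vortex_ground_speeds(5.0))  # (0.0, 10.0)
```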
Helicopters
Helicopters also produce wake turbulence. Helicopter wakes may be significantly stronger than those of a fixed-wing aircraft of the same weight. The strongest wake will occur when the helicopter is operating at slower speeds (20 to 50 knots). Light helicopters with two-blade rotor systems produce a wake as strong as heavier helicopters with more than two blades. The strong rotor wake of the Bell Boeing V-22 Osprey tiltrotor can extend further and has contributed to a crash.
Hazard avoidance
Wingtip devices may slightly lessen the power of wingtip vortices. However, such changes are not significant enough to change the distances or times at which it is safe to follow other aircraft.
Wake turbulence categories
ICAO mandates wake turbulence categories based upon the maximum takeoff weight (MTOW) of the aircraft. These are used for separation of aircraft during take-off and landing.
There are a number of separation criteria for take-off, landing, and en-route phases of flight based upon wake turbulence categories. Air Traffic Controllers will sequence aircraft making instrument approaches with regard to these criteria. The aircraft making a visual approach is advised of the relevant recommended spacing and are expected to maintain their separation.
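A controller-style lookup of such category-based minima might be sketched as below; the Python code and the distances in it are illustrative placeholders, not an authoritative copy of ICAO separation tables.

```python
# Hypothetical distance-based wake separation minima in nautical miles,
# keyed by (leader category, follower category). Placeholder values only.
SEPARATION_NM = {
    ("HEAVY", "HEAVY"): 4,
    ("HEAVY", "MEDIUM"): 5,
    ("HEAVY", "LIGHT"): 6,
    ("MEDIUM", "LIGHT"): 5,
}

def required_separation(leader, follower):
    """Pairs with no wake-specific minimum fall back to a generic 3 NM."""
    return SEPARATION_NM.get((leader.upper(), follower.upper()), 3)

print(required_separation("heavy", "light"))   # 6
print(required_separation("light", "heavy"))   # 3 (no wake-specific minimum)
```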
Parallel or crossing runways
During takeoff and landing, an aircraft's wake sinks toward the ground and moves laterally away from the runway when the wind is calm. A crosswind will tend to keep the upwind side of the wake in the runway area and may cause the downwind side to drift toward another runway. Since the wingtip vortices exist at the outer edge of an airplane's wake, this can be dangerous.
Staying at or above the leader's glide path
Warning signs
Uncommanded aircraft movements (such as wing rocking) may be caused by wake. This is why maintaining situational awareness is critical. Ordinary turbulence is not unusual, particularly in the approach phase. A pilot who suspects wake turbulence is affecting his or her aircraft should get away from the wake, execute a missed approach or go-around and be prepared for a stronger wake encounter. The onset of wake can be subtle and even surprisingly gentle. There have been serious accidents (see the next section) where pilots have attempted to salvage a landing after encountering moderate wake only to encounter severe wake turbulence that they were unable to overcome. Pilots should not depend on any aerodynamic warning, but if the onset of wake is occurring, immediate evasive action is vital.
Plate lines
In 2020, researchers looked into installing "plate lines" near the runway threshold to induce secondary vortices and shorten the vortex duration. In the trial installation at Vienna International Airport, they reported a 22%-37% vortex reduction.
Incidents involving wake turbulence
8 June 1966 – an XB-70 collided with an F-104. Though the true cause of the collision is unknown, it is believed that due to the XB-70 being designed to have enhanced wake turbulence to increase lift, the F-104 moved too close, therefore getting caught in the vortex and colliding with the wing (see main article).
30 May 1972 – A DC-9 crashed at the Greater Southwest International Airport while performing "touch and go" landings behind a DC-10. This crash prompted the FAA to create new rules for minimum following separation from "heavy" aircraft.
16 Jan 1987 – A Yakovlev Yak-40 crashed just after take-off in Tashkent. The flight took off just one minute fifteen seconds after an Ilyushin Il-76, thus encountering its wake vortex. The Yakovlev Yak-40 then banked sharply to the right, struck the ground, and caught fire. All nine people on board Aeroflot Flight 505 died.
6 February 1991 – A Boeing KC-135E Stratotanker, registered as 58-0013, suffered an accident when two of its four engines detached from the aircraft due to severe wake turbulence from another KC-135 and high winds. The pilots managed to execute an emergency landing at Prince Abdullah Air Base, Saudi Arabia, saving all four crew members on board.
15 December 1993 – A chartered aircraft with five people on board, including In-N-Out Burger president Rich Snyder, crashed several miles before John Wayne Airport in Orange County, California. The aircraft was following a Boeing 757 for landing when it became caught in its wake turbulence, rolled into a deep descent, and crashed. As a result of this and other incidents involving aircraft following behind a Boeing 757, the FAA now employs the separation rules of heavy aircraft for the Boeing 757.
20 September 1999 – A JAS 39A Gripen from Airwing F 7 Såtenäs crashed into Lake Vänern in Sweden during an air combat maneuvering exercise. After passing through the wake vortex of the other aircraft, the Gripen abruptly changed course. Before the Gripen impacted the ground, the pilot ejected from the aircraft and landed safely by parachute in the lake.
12 November 2001 – American Airlines Flight 587 crashed into the Belle Harbor neighborhood of Queens, New York, shortly after takeoff from John F. Kennedy International Airport. The accident was attributed to the first officer's misuse of the rudder in response to wake turbulence from a Japan Airlines Boeing 747, resulting in the overstressing and separation of the vertical stabilizer.
8 July 2008 – A United States Air Force PC-12 trainer crashed at Hurlburt Field, Fla., when the pilot tried to land too closely behind a larger AC-130U Spooky gunship and was caught in the gunship's wake turbulence. Air Force rules require at least a two-minute separation between slow-moving heavy planes like the AC-130U and small, light planes, but the PC-12 trailed the gunship by only about 40 seconds. As the PC-12 hit the wake turbulence, it suddenly rolled to the left and began to turn upside down. The instructor pilot stopped the roll, but before he could get the plane upright, the left wing struck the ground, sending the plane skidding across a field before stopping on a paved overrun.
3 November 2008 – The wake turbulence of an Airbus A380-800 caused a temporary loss of control of a Saab 340 on approach to a parallel runway during high crosswind conditions.
4 November 2008 – In the 2008 Mexico City plane crash, a Learjet 45 carrying Mexican Interior Secretary Juan Camilo Mouriño crashed near Paseo de la Reforma Avenue when turning for final approach to runway 05R at Mexico City International Airport. The airplane was flying behind a 767-300 and above a heavy helicopter. According to the Mexican government, the pilots were not told about the type of plane that was approaching before them, nor did they reduce to minimum approach speed.
9 September 2012 – A Robin DR 400 crashed after rolling 90 degrees in wake turbulence induced by the preceding Antonov An-2. Three were killed and one was severely injured.
28 March 2014 – An Indian Air Force C-130J-30 KC-3803 crashed near Gwalior, India, killing all five personnel aboard. The aircraft was conducting low level penetration training by flying at around when it ran into wake turbulence from another C-130J aircraft that was leading the formation, causing it to crash.
7 January 2017 – A private Bombardier Challenger 604 rolled three times in midair and dropped after encountering wake turbulence when it passed under an Airbus A380 over the Arabian Sea. Several passengers were injured, one seriously. Due to the G-forces experienced, the plane was damaged beyond repair and was consequently written off.
14 June 2018 – At 11:29 pm, Qantas passenger flight QF94, en route from Los Angeles to Melbourne, suffered a sudden freefall over the ocean after lift-off as a result of an intense wake vortex. The event lasted for about ten seconds, according to the passengers. The turbulence was caused by the wake of the previous Qantas flight QF12, which had departed only two minutes before flight QF94.
Measurement
Wake turbulence can be measured using several techniques. Currently, ICAO recognizes two methods of measurement: sound tomography and a high-resolution technique, the Doppler lidar, which is now commercially available. Techniques using optics can exploit the effect of turbulence on refractive index (optical turbulence) to measure the distortion of light that passes through the turbulent area and indicate the strength of that turbulence.
Audibility
Wake turbulence can occasionally, under the right conditions, be heard by ground observers. On a still day, the wake turbulence from heavy jets on landing approach can be heard as a dull roar or whistle. This is the strong core of the vortex. If the aircraft produces a weaker vortex, the breakup will sound like tearing a piece of paper. Often, it is first noticed some seconds after the direct noise of the passing aircraft has diminished. The sound then gets louder. Nevertheless, being highly directional, wake turbulence sound is easily perceived as originating a considerable distance behind the aircraft, its apparent source moving across the sky just as the aircraft did. It can persist for 30 seconds or more, continually changing timbre, sometimes with swishing and cracking notes, until it finally dies away.
In popular culture
In the 1986 film Top Gun, Lieutenant Pete "Maverick" Mitchell, played by Tom Cruise, suffers two flameouts caused by passing through the jetwash of another aircraft, piloted by fellow aviator Tom "Ice Man" Kazansky (played by Val Kilmer). As a result, he is put into an unrecoverable spin and is forced to eject, killing his RIO Nick "Goose" Bradshaw. In a subsequent incident, he is caught in an enemy fighter's jetwash, but manages to recover safely.
In the movie Pushing Tin, air traffic controllers stand just off the threshold of a runway while an aircraft lands in order to experience wake turbulence firsthand. However, the film dramatically exaggerates the effect of turbulence on persons standing on the ground, showing the protagonists being blown about by the passing aircraft. In reality, the turbulence behind and below a landing aircraft is too gentle to knock over a person standing on the ground. (In contrast, jet blast from an aircraft taking off can be extremely dangerous to people standing behind the aircraft.)
See also
Batchelor vortex
Eddy (fluid dynamics)
Wake (of boats)
Wingtip device
References
External links
Captain Meryl Getline explains "Heavy"
U.S. FAA, The Aeronautical Information Manual on Wake Turbulence
U.S. FAA, Pilot Controller Glossary, see Aircraft Classes
Wake Turbulence, An Invisible Enemy
Photographs of Wake turbulence
NASA Dryden – Wake Vortex Research
Aircraft aerodynamics
Aviation risks
Air traffic control
Turbulence
Aircraft wing design
ca:Estela | Wake turbulence | Chemistry | 2,588 |
15,214,284 | https://en.wikipedia.org/wiki/ZNF318 | Zinc finger protein 318 is a protein that in humans is encoded by the ZNF318 gene.
References
Further reading
External links
Transcription factors | ZNF318 | Chemistry,Biology | 30 |
67,065,244 | https://en.wikipedia.org/wiki/Fluoride%20nitrate | Fluoride nitrates are mixed anion compounds that contain both fluoride ions and nitrate ions. Compounds are known for some amino acids and for some heavy elements. Some transition metal fluorido complexes that are nitrates are also known. There are also fluorido nitrato complex ions known in solution.
List
References
Nitrates
Fluorides
Mixed anion compounds | Fluoride nitrate | Physics,Chemistry | 77 |
550,450 | https://en.wikipedia.org/wiki/Cristobalite | Cristobalite () is a mineral polymorph of silica that is formed at very high temperatures. It has the same chemical formula as quartz, SiO2, but a distinct crystal structure. Both quartz and cristobalite are polymorphs with all the members of the quartz group, which also include coesite, tridymite and stishovite. It is named after Cerro San Cristóbal in Pachuca Municipality, Hidalgo, Mexico.
It is used in dentistry as a component of alginate impression materials as well as for making models of teeth.
Properties
Metastability
Cristobalite is stable only above 1470 °C, but can crystallize and persist metastably at lower temperatures. The persistence of cristobalite outside its thermodynamic stability range occurs because the transition from cristobalite to quartz or tridymite is "reconstructive", requiring the breaking up and reforming of the silica framework. These frameworks are composed of SiO4 tetrahedra in which every oxygen atom is shared with a neighbouring tetrahedron, so that the chemical formula of silica is SiO2. The breaking of these bonds, required to convert cristobalite to tridymite and quartz, requires considerable activation energy and may not happen on a human time frame at room temperature. Framework silicates are also known as tectosilicates.
When devitrifying silica, cristobalite is usually the first phase to form, even when well outside its thermodynamic stability range. This is an example of Ostwald's step rule. The dynamically disordered nature of the β phase is partly responsible for the low enthalpy of fusion of silica.
Structures
There is more than one form of the cristobalite framework. At high temperatures, the structure is called β-cristobalite. It is in the cubic crystal system, space group Fd3m (No. 227, Pearson symbol cF104). It has the diamond structure but with linked tetrahedra of silicon and oxygen where the carbon atoms are in diamond. A chiral tetragonal form called α-cristobalite (space group either P41212, No. 92, or P43212, No. 96, at random) occurs on cooling below about 250 °C at ambient pressure and is related to the cubic form by static tilting of the silica tetrahedra in the framework. This transition is variously called the low–high or α–β transition. It may be termed "displacive"; i.e., it is not generally possible to prevent the cubic β form from becoming tetragonal by rapid cooling. Under rare circumstances the cubic form may be preserved if the crystal grain is pinned in a matrix that does not allow for the considerable spontaneous strain that is involved in the transition, which causes a change in shape of the crystal. This transition is highly discontinuous. Going from the α form to the β form causes an increase in volume of 3 or 4 percent. The exact transition temperature depends on the crystallinity of the cristobalite sample, which itself depends on factors such as how long it has been annealed at a particular temperature.
The cubic β phase consists of dynamically disordered silica tetrahedra. The tetrahedra remain fairly regular and are displaced from their ideal static orientations due to the action of a class of low-frequency phonons called rigid unit modes. It is the "freezing" of one of these rigid unit modes that is the soft mode for the α–β transition.
In β-cristobalite, there are right-handed and left-handed helices of tetrahedra (or of silicon atoms) parallel to all three axes. In the α–β phase transition, however, only the right-handed or the left-handed helix in one direction is preserved (the other becoming a two-fold screw axis), so only one of the three degenerate cubic crystallographic axes retains a fourfold rotational axis (actually a screw axis) in the tetragonal form. (That axis becomes the "c" axis, and the new "a" axes are rotated 45° compared to the other two old axes. The new "a" lattice parameter is shorter by a factor of approximately √2, so the α unit cell contains only 4 silicon atoms rather than 8.) The choice of axis is arbitrary, so various twins can form within the same grain. These different twin orientations, coupled with the discontinuous nature of the transition (volume and slight shape change), can cause considerable mechanical damage to materials in which cristobalite is present and that pass repeatedly through the transition temperature, such as refractory bricks.
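The cell arithmetic behind the halved unit cell can be checked directly; the following is an illustrative back-of-envelope relation, not a statement from the source, and the small spontaneous strain is ignored:

```latex
% a_beta: cubic cell edge of beta-cristobalite (8 Si atoms per cell).
% The tetragonal alpha cell keeps c roughly equal to a_beta, but its
% basal axes, rotated 45 degrees, shrink by a factor of sqrt(2).
\[
a_{\alpha} \approx \frac{a_{\beta}}{\sqrt{2}}, \qquad
V_{\alpha} \approx a_{\alpha}^{2}\,c \approx \frac{a_{\beta}^{2}}{2}\,a_{\beta} = \frac{V_{\beta}}{2}
\]
% Half the cell volume holds half the contents: 8 Si atoms become 4.
```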
Occurrence
Cristobalite occurs as white octahedra or spherulites in acidic volcanic rocks and in converted diatomaceous deposits in the Monterey Formation of the US state of California and similar areas.
The micrometre-scale spheres that make up precious opal exhibit some X-ray diffraction patterns that are similar to that of cristobalite, but lack any long-range order so they are not considered true cristobalite. In addition, the presence of structural water in opal makes it doubtful that opal consists of cristobalite.
Cristobalite is visible as the white inclusions in snowflake obsidian, a volcanic glass.
References
Further reading
American Geological Institute Dictionary of Geological Terms.
Durham, D. L., "Monterey Formation: Diagenesis". in: Uranium in the Monterey Formation of California. US Geological Survey Bulletin 1581-A, 1987.
Reviews in Mineralogy and Geochemistry, vol. 29., Silica: behavior, geochemistry and physical applications. Mineralogical Society of America, 1994.
R. B. Sosman, The Phases of Silica. (Rutgers University Press, 1965)
External links
Polymorphism (materials science)
Tetragonal minerals
Minerals in space group 92
Minerals in space group 96
Silica polymorphs
Silicon compounds
Silicon dioxide
Minerals in space group 227 | Cristobalite | Materials_science,Engineering | 1,270 |
7,715,111 | https://en.wikipedia.org/wiki/Nepenthes%20%C3%97%20truncalata | Nepenthes × truncalata (a blend of truncata and alata) is a natural hybrid involving N. alata and N. truncata. Like its two parent species, it is endemic to the Philippines, but limited in distribution by the natural range of N. truncata on Mindanao.
References
Mann, P. 1998. A trip to the Philippines. Carnivorous Plant Newsletter 27(1): 6–11.
McPherson, S.R. & V.B. Amoroso 2011. Field Guide to the Pitcher Plants of the Philippines. Redfern Natural History Productions, Poole.
CP Database: Nepenthes × truncalata
Carnivorous plants of Asia
truncalata
Nomina nuda
Flora of Mindanao | Nepenthes × truncalata | Biology | 158 |
2,921,691 | https://en.wikipedia.org/wiki/Niclosamide | Niclosamide, sold under the brand name Niclocide among others, is an anthelmintic medication used to treat tapeworm infestations, including diphyllobothriasis, hymenolepiasis, and taeniasis. It is not effective against other worms such as flukes or roundworms. It is taken by mouth.
Side effects include nausea, vomiting, abdominal pain, and itchiness. It may be used during pregnancy. It works by blocking glucose uptake and oxidative phosphorylation by the worm.
Niclosamide was first synthesized in 1958. It is on the World Health Organization's List of Essential Medicines. Niclosamide is not available for human use in the United States.
Side effects
Side effects include nausea, vomiting, abdominal pain, constipation, and itchiness. Rarely, dizziness, skin rash, drowsiness, perianal itching, or an unpleasant taste occur. For some of these reasons, praziquantel is a preferable and equally effective treatment for tapeworm infestation.
Niclosamide kills the pork tapeworm and results in its digestion, which may release a multitude of viable eggs and can lead to cysticercosis. A purge should therefore be given one to two hours after treatment. CNS cysticercosis is a life-threatening condition and may require brain surgery.
Mechanism of action
Niclosamide inhibits glucose uptake, oxidative phosphorylation, and anaerobic metabolism in the tapeworm.
Other applications
Niclosamide's metabolic effects are relevant to a wide range of organisms, and accordingly it has been applied as a control measure to organisms other than tapeworms. For example, it is an active ingredient in formulations such as Bayluscide for killing lamprey larvae, as a molluscicide, and as a general-purpose piscicide in aquaculture. Niclosamide has a short half-life in water under field conditions; this makes it valuable for ridding commercial fish ponds of unwanted fish, since it loses its activity soon enough to permit re-stocking within a few days of eradicating the previous population. Researchers have found that niclosamide is effective in killing invasive zebra mussels in cool waters.
Research
Niclosamide is under investigation as a potential treatment for certain types of cancer, bacterial infections, and viral infections.
In 2018, niclosamide was observed to be a potent activator of PTEN-induced kinase 1 in primary cortical neurons.
References
Further reading
External links
Anthelmintics
Chlorobenzene derivatives
Nitrobenzene derivatives
Salicylanilides
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Pesticides
Experimental cancer drugs | Niclosamide | Biology,Environmental_science | 587 |
2,108,537 | https://en.wikipedia.org/wiki/41%20Arietis | 41 Arietis (abbreviated 41 Ari) is a triple star system in the northern constellation of Aries. With an apparent visual magnitude of 3.63, this system is readily visible to the naked eye. It has an annual parallax shift of 19.69 mas, which indicates it is at a distance of about 166 light-years (51 parsecs) from the Sun.
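The quoted parallax fixes the distance directly; as a worked check, using only the 19.69 mas figure given above:

```latex
% Trigonometric parallax: p in arcseconds gives the distance d in parsecs.
\[
d = \frac{1}{p} = \frac{1}{0.01969\,\text{arcsec}} \approx 50.8\ \text{pc} \approx 166\ \text{light-years}
\]
```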
The system consists of a binary pair, designated 41 Arietis A, together with a third companion star, 41 Arietis D. (41 Arietis B and C form optical pairs with A, but are not physically related.) The components of A are themselves designated 41 Arietis Aa (formally named Bharani ) and Ab.
Nomenclature
41 Arietis is the system's Flamsteed designation. It does not possess a Greek-letter Bayer designation, since this system was once part of the now-obsolete constellation Musca Borealis, but is sometimes designated c Arietis. The designations of the two constituents as 41 Arietis A and D, and those of A's components, 41 Arietis Aa and Ab, derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Nicolas-Louis de Lacaille called the star Līliī Austrīnā ('southern of Lilium' in Latin) in 1757, as a star of the now-defunct constellation of Lilium (the Lily). To him 39 Arietis was Līliī Boreā, 'northern of Lilium'.
In Hindu astronomy, Bharani (भरणी bharaṇī) is the second nakshatra, or lunar mansion, corresponding to 35, 39 and 41 Arietis. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Bharani for the component 41 Arietis Aa on 30 June 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, 胃宿 (Wèi Sù), meaning Stomach (asterism), refers to an asterism consisting of 41, 35 and 39 Arietis. Consequently, the Chinese name for 41 Arietis itself is 胃宿三 (Wèi Sù sān, the Third Star of Stomach).
In Avestan, the star was known as Upa-paoiri, and it was associated with one of the yazatas.
Properties
The primary component is a B-type main sequence star with a stellar classification of B8 Vn. The suffix 'n' indicates 'nebulous' absorption lines in the star's spectrum, broadened by the Doppler effect of rapid rotation. It has a projected rotational velocity of 175 km/s. This rotation creates an equatorial bulge, making the equatorial radius 12% larger than the star's polar radius. It is a candidate member of the AB Doradus moving group and has an orbiting companion at an angular separation of 0.3 arcseconds.
References
External links
HR 838
Image 41 Arietis
Arietis, 41
Arietis, c
017573
013209
Spectroscopic binaries
Aries (constellation)
B-type main-sequence stars
Bharani
0838
Durchmusterung objects | 41 Arietis | Astronomy | 688 |
23,830,159 | https://en.wikipedia.org/wiki/Scheduled-task%20pattern | A scheduled-task pattern is a type of software design pattern used with real-time systems. It is not to be confused with the "scheduler pattern".
While the scheduler pattern delays access to a resource (be it a function, variable, or otherwise) only as long as absolutely needed, the scheduled-task pattern delays execution until a determined time. This is important in real-time systems, where tasks must start at predictable times in order to meet timing deadlines.
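A minimal sketch of the idea follows; the task, its 2-second delay, and the use of Python's standard sched module are illustrative assumptions, not taken from the article:

```python
import sched
import time

# Minimal sketch of the scheduled-task pattern (illustrative only).
scheduler = sched.scheduler(time.monotonic, time.sleep)

def log_sensor_reading():
    # Stand-in for work that must run at a determined time.
    print(f"task executed at t = {time.monotonic():.2f}s")

# The scheduled-task pattern: execution is delayed until a determined
# time (here, 2 seconds from now), rather than the task running as
# soon as it is requested.
scheduler.enter(2.0, 1, log_sensor_reading)
scheduler.run()  # blocks until every scheduled task has executed
```

In a hard real-time system the same idea would typically rest on an RTOS timer service rather than a general-purpose scheduler, but the structure, registering a task together with its release time, is the same.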
References
External links
See also
Command pattern
Memento pattern
Software design patterns | Scheduled-task pattern | Technology | 102 |
15,070,377 | https://en.wikipedia.org/wiki/ZNF451 | Zinc finger protein 451 is a nuclear protein that in humans is encoded by the ZNF451 gene.
References
Further reading
External links
Transcription factors | ZNF451 | Chemistry,Biology | 32 |
20,733,547 | https://en.wikipedia.org/wiki/Richard%20Veryard | Richard Veryard FRSA (born 1955) is a British computer scientist, author and business consultant, known for his work on service-oriented architecture and the service-based business.
Biography
Veryard attended Sevenoaks School from 1966 to 1972, where he attended classes by Gerd Sommerhoff. He received his MA Mathematics and Philosophy from Merton College, Oxford, in 1976, and his MSc Computing Science at the Imperial College London in 1977. Later he also received his MBA from the Open University in 1992.
Veryard started his career in industry working for Data Logic Limited, Middlesex, UK, where he first developed and taught public data analysis courses. After years of practical experience in this field, he wrote his first book on the topic in 1984. In 1987 he became an IT consultant with James Martin Associates (JMA), specializing in the practical problems of planning and implementing information systems. After the European operations of JMA were acquired by Texas Instruments, he became a Principal Consultant in the Software Business and a member of Group Technical Staff. At Texas Instruments he was one of the developers of IE\Q, a proprietary methodology for software quality management. Since 1997 he has been a freelance consultant under the flag of Veryard Projects Ltd. Since 2006 he has been a principal consultant at CBDi, a research forum for service-oriented architecture and engineering.
Veryard has taught courses at City University, Brunel University and the Copenhagen Business School, and is a Fellow of the Royal Society of Arts in London.
Work
Pragmatic data analysis, 1984
In "Pragmatic data analysis" (1984) Veryard presented data analysis as a branch of systems analysis, which shared the same principles. His position on data modelling would appear to be implicit in the term data analysis. He presented two philosophical attitudes towards data modeling, which he called "semantic relativism and semantic absolutism. According to the absolutist way of thinking, there is only one correct or ideal way of modeling anything: each object in the real world must be represented by a particular construct. Semantic relativism, on the other hand, believe that most things in the real world can be modeled in many different ways, using basic constructs".
Veryard further examined the problem of the discovery of classes and objects. This may proceed from a number of different models, that capture the requirements of the problem domain. Abbott (1983) proposed that each search starts from a textual description of the problem. Ward (1989) and Seidewitz and Stark (1986) suggested starting from the products of structured analysis, namely data flow diagrams. Veryard examined the same problem from the perspective of data modeling.
Veryard made the point that the modeler has some choice in whether to use an entity, relationship or attribute to represent a given universe of discourse (UoD) concept. This justifies a common position that "data models of the same UoD may differ, but the differences are the result of shortcomings in the data modeling language. The argument is that data modeling is essentially descriptive, but that current data modeling languages allow some choice in how the description is documented."
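As an illustrative sketch of this modelling choice (the car/colour example and all class and field names are hypothetical, not drawn from Veryard's text), the same universe-of-discourse fact, "a car has a colour", can be captured either as an attribute or as a related entity:

```python
from dataclasses import dataclass

# Option 1: colour as a simple attribute of the Car entity.
@dataclass
class Car:
    registration: str
    colour: str

# Option 2: colour as an entity in its own right, related to the car;
# useful if colours carry their own data, such as paint codes.
@dataclass
class Colour:
    name: str
    paint_code: str

@dataclass
class CarWithColourEntity:
    registration: str
    colour: Colour
```

On the relativist view, neither form is the one correct model; the choice depends on what the organization needs to record about colours.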
Economics of Information Systems and Software, 1991
In the 1991 book "The Economics of Information Systems and Software", edited by Veryard, experts from various areas, including business administration, project management, software engineering and economics, contribute their expertise on the economics of information systems and software, including the evaluation of benefits, types of information, and project costs and management.
Information Coordination, 1993
In the 1993 book "Information Coordination: The Management of Information Models, Systems, and Organizations" Veryard gives a snapshot of the state of the art around these subjects. "Maximizing the value of corporate data depends upon being able to manage information models both within and between businesses. A centralized information model is not appropriate for many organizations," Veryard explains.
His book "takes the approach that multiple information models exist and the differences and links between them have to be managed. Coordination is currently an area of both intensive theoretical speculation and of practical research and development. Information Coordination explains practical guidelines for information management, both from on-going research and from recent field experience with CASE tools and methods".
Enterprise Modelling Methodology
In the 1990s Veryard worked together in an Enterprise Computing Project and developed a version of Business Relationship Modelling specifically for Open Distributed Processing, under the name Enterprise Modelling Methodology/Open Distributed Processing (EMM/ODP). EMM/ODP proposed some new techniques and method extensions for enterprise modelling for distributed systems.
Component-based business
In 2001 Veryard introduced the concept of "component-based business". Component-based business relates to new business architectures, in which "an enterprise is configured as a dynamic network of components providing business services to one another". In the new millennium there has been "a phenomenal growth in this kind of new autonomous business services, fuelled largely by the internet and e-business".
The concept of "component-Based Business constitutes a radical challenge to traditional notions of strategy, planning, requirements, quality and change, and tries to help you improve how you think through the practical difficulties and opportunities of the component-based business". This applied to both hardware and software, and to business relationships.
Veryard's subsequent work on organic planning for SOA has been referenced by a number of authors.
Six Viewpoints of Business Architecture, 2013
In "Six Viewpoints of Business Architecture" Veryard describes business architecture as "a practice (or collection of practices) associated with business performance, strategy and structure."
He further describes the main task of the business architect:
The business architect is expected to take responsibility for some set of stakeholder concerns, in collaboration with a number of related business and architectural roles, including
• business strategy planning, business change management, business analysis, etc.
• business operations, business excellence, etc.
• enterprise architecture, solution architecture, data/process architecture, systems architecture, etc.
Conventional accounts of business architecture are often framed within a particular agenda - especially an IT-driven agenda. Many enterprise architecture frameworks follow this agenda, and this affects how they describe business architecture and its relationship with other architectures (such as IT systems architecture). Indeed, business architecture is often seen as little more than a precursor to system architecture - an attempt to derive systems requirements.
Publications
Richard Veryard. Pragmatic data analysis. Oxford : Blackwell Scientific Publications, 1984.
Richard Veryard (ed.). The Economics of information systems and software. Oxford : Butterworth-Heinemann, 1991.
Richard Veryard. Information modelling : practical guidance. New York : Prentice Hall, 1992.
Richard Veryard. Information coordination : the management of information models, systems, and organizations. New York : Prentice Hall, 1994.
Richard Veryard. Component-based business : plug and play. London : Springer, 2001.
Richard Veryard. Six Viewpoints of Business Architecture, 2013
Articles, papers, book chapters, etc., a selection:
Richard Veryard (2000). Reasoning about systems and their properties. In: Peter Henderson (ed.), Systems Engineering for Business Process Change. Springer-Verlag, 2002.
Richard Veryard. "Business-Driven SOA," CBDI Journal, May–June 2004.
References
External links
Richard Veryard Home page
List of recent publications by Richard Veryard.
1955 births
Living people
British computer scientists
Information systems researchers
Enterprise modelling experts
People educated at Sevenoaks School
Alumni of Merton College, Oxford
Alumni of the Department of Computing, Imperial College London | Richard Veryard | Technology | 1,520 |