id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
1,579,211
https://en.wikipedia.org/wiki/Vycor
Vycor is the brand name of Corning's high-silica, high-temperature glass. It provides very high thermal shock resistance. Vycor is approximately 96% silica and 4% boron trioxide, but unlike pure fused silica, it can be readily manufactured in a variety of shapes. Vycor can withstand prolonged use at 900 °C. Vycor products are made by a multi-step process. First, a relatively soft alkali-borosilicate glass is melted and formed by typical glassworking techniques into the desired shape. This is heat-treated, which causes the material to separate into two intermingled "phases" with distinct chemical compositions. One phase is rich in alkali and boric oxide and can be easily dissolved in acid. The other phase is mostly silica, which is insoluble. The glass object is then soaked in a hot acid solution, which leaches away the soluble glass phase, leaving an object which is mostly silica. At this stage, the glass is porous. Finally, the object is heated to more than 1200 °C, which consolidates the porous structure, making the object shrink slightly and become non-porous. The finished material is classified as a "reconstructed glass". For some applications the final step is skipped, leaving the glass porous. Such glass has a high affinity for water and makes an excellent getter for water vapour. It is widely used in science and engineering. Vycor has an extremely low coefficient of thermal expansion, just one quarter that of Pyrex. This property makes the material suitable for use in applications that demand very high dimensional stability, such as metrology instruments, and for products that need to withstand high thermal-shock loads. Vycor also transmits ultraviolet light down to about 250 nm and is used in some germicidal lamps. At a reference thickness of 1 mm, Vycor glass transmits approximately 90% of light from ~300 nm to 3100 nm. Immersing the porous glass in certain chemical solutions before the final consolidation step produces a colored glass that can withstand high temperatures without degrading. This is used for colored glass filters for various applications. Corning manufactures Vycor products for high-temperature applications, such as evaporating dishes. Porous Vycor is a prototypical matrix material for the study of confined liquid physics. Vycor can also be used for removal of 231Pa and 233Pa in fuel recycling. References External links Corning Inc. Manufacturer's website Momentive Performance Materials, Inc. Quartz and Low Softening Point Glass (LSPG) Manufacturer Phase separation in borosilicate and alkali earth silicate glasses Glass trademarks and brands Transparent materials
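To make the thermal-expansion comparison above concrete, here is a short worked example of linear expansion. The coefficient values and the temperature swing are illustrative assumptions (handbook figures put Vycor's coefficient near 7.5 × 10⁻⁷/°C and Pyrex's near 3.3 × 10⁻⁶/°C), not numbers taken from this article:

```latex
% Linear thermal expansion: \Delta L = \alpha L_0 \Delta T
% Assumed values: a bar of L_0 = 1\,\mathrm{m}, heated by \Delta T = 500\,^{\circ}\mathrm{C}
\Delta L_{\mathrm{Vycor}} \approx (7.5\times10^{-7}\,{}^{\circ}\mathrm{C}^{-1})(1\,\mathrm{m})(500\,^{\circ}\mathrm{C}) \approx 0.38\,\mathrm{mm}
\Delta L_{\mathrm{Pyrex}} \approx (3.3\times10^{-6}\,{}^{\circ}\mathrm{C}^{-1})(1\,\mathrm{m})(500\,^{\circ}\mathrm{C}) \approx 1.7\,\mathrm{mm}
```

The roughly fourfold difference is what makes the material attractive for metrology and thermal-shock applications.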
Vycor
[ "Physics" ]
566
[ "Physical phenomena", "Optical phenomena", "Materials", "Transparent materials", "Matter" ]
1,579,278
https://en.wikipedia.org/wiki/Business%20analyst
A business analyst (BA) is a person who processes, interprets and documents business processes, products, services and software through analysis of data. The role of a business analyst is to ensure business efficiency increases through their knowledge of both IT and business functions. Some tasks of a business analyst include creating detailed business analysis, budgeting and forecasting, business strategising, planning and monitoring, variance analysis, pricing, reporting and defining business requirements for stakeholders. The business analyst role is applicable to four key areas/levels of business functions – operational, project, enterprise and competitive focuses. Each of these areas of business analysis has a significant impact on business performance, and assists in enhancing profitability and efficiency in all stages of the business process, and across all business functions. Role Business analysis has been defined as "a disciplined approach for introducing change to organization" through management, processing, and interpretation of data in order to "identify and define the solution that will maximize the value delivered by an organization to its stakeholders". A business analyst's job description tends to include "creating detailed business analysis, outlining problems, opportunities and solutions for a business, budgeting and forecasting, planning and monitoring, variance and analysis, pricing, reporting, and defining business requirements and reporting back to stakeholders". There are many business activities in which the business analyst is involved. Some areas in which business analysts can have an important role are financial analysis, quality assurance, training, business policy and procedures, market analysis, organizational development and solution testing. More specifically, business analysts are required to use the data gathered for analysis to draw out greater meaning for the business. This can then be used to improve business performance by identifying areas for potential growth, reducing costs, understanding customer behavior, and observing economic trends and forecasts, and then reacting appropriately. Successful business analysts should influence the business environment by providing reliable guidance for future decision making, based on data which reflects the behaviour of the business in the past. Business analysts are essential at all levels of a business, as both tactical and strategic planning require analysts who help with "incremental improvements to products, business processes, and application". Business analysts have an increasing need to provide a business with sustainable solutions. The Business Analyst "plays a key role in making sustainable choices, providing direction to business and influencing demand for specific technologies". Business analysis practices have the opportunity to use business data in a positive way, which can support the transition to a sustainable world. Areas of business analysis Business focuses Due to the range of applications a business analyst can have, there are specific areas in which they can function. Kathleen B. Haas describes the requirement for business analysts in four areas of business – operations focus, project focus, enterprise focus, and competitive focus. Operations focus – business analysts are able to use big data to analyze the way in which a business's operations affect its ability to generate business value. 
Business analysts add value to the operational level of a business by enabling efficiency to be maximized through cost cuts, investing in better equipment, improving employee efficiency, and increasing production of popular products. Project focus – when a business analyst takes charge of a project, areas that are historically overlooked are more likely to be considered carefully. The business analyst has an essential role in projects, which includes "integrating strategic planning with portfolio planning for Information Systems and technology", inclusion of the possible effects of business decisions on future performance, and the use of modelling tools to demonstrate the "as-is" and "to-be" business to all employees across various levels of the business. Enterprise focus – a business analyst who works in this area of a business helps to "optimize development of innovative solutions" through the use of technology. Activities involved in an enterprise-focused business analyst's job include building current and future business architecture, conducting analyses of opportunities, problems and feasibility, proposing new projects to build solutions, validating forecasts and assumptions being made, conducting solution assessments and validation, and comparing planned and actual results of business plans. Competitive focus – the competitive environment is analyzed by business analysts "in order to develop a meaningful strategy" for all areas of a business. One of the main business functions to which this is relevant is marketing. By observing consumer behavior when interacting with a business's products and the products of its competitors, as well as the distinctiveness of brands in the consumer space, information about substitutability and product performance can be determined. Specific business analyst roles Business analyst skills can be applied to a variety of roles within business processes. Business analyst Business systems analyst Systems analyst Requirements engineer Process analyst Product analyst Product manager Product owner Enterprise analyst Business architect Management consultant Business intelligence analyst Data scientist Customer Relationship Management Business analysts can also work in areas relating to project management, product management, software development, quality assurance and interaction design. Skills and qualifications Skills Oral and written communication skills Facilitation, interpersonal and consultative skills Analytical thinking and problem solving Being detail-oriented and capable of delivering a high level of accuracy Organisational skills Knowledge of business structure Stakeholder analysis Requirements engineering Cost-benefit analysis Process modelling Understanding of networks, databases and other technology These skills are a combination of hard skills and soft skills. A business analyst should have knowledge in IT and/or business, but the combination of both of these fields is what makes a business analyst such a valuable asset to the business environment. As a minimum standard, a business analyst should have a "general understanding of how systems, products and tools work" in the business environment. Some IT employees may transfer from the area of IT into a business analyst role, as their skills are often applicable in both. There are broader categorized skill sets which business analysts require in the workplace. Mediation - business analysts provide a useful "liaison support role" between business professionals and IT professionals in the workplace. 
The business analyst role is an overlap of these two professions, and therefore the business analyst plays an essential role in communication and understanding between these two groups. Requirements elicitation - this refers to "analyzing and gathering the needs of both computer-based systems as well as the business". Successful requirements elicitation can help to improve quality and eliminate defective requirements at an early point in the product lifecycle, and can therefore minimize wastage and maximize business success. Solution designer - business analysts can contribute to the design of business functions and processes through the analysis of past performance and the identification of areas for improvement. Business modelling - forecasting, modelling and analyzing current and future business performance, functions and processes are essential to the business analyst role. These skills enable the business analyst to make educated business decisions. Business problem analysis - a business analyst must be able to analyze the issues a business is facing in order to determine how they impact business performance, and how the business can overcome these problems with maximum efficiency. Information System (IS) strategy evaluation - business analysts are required to continually monitor and control the strategic plans of a business, so that it is able to best meet its needs and goals. Part of this involves comparison with competitors and industry trends. Qualifications There are a number of qualifications that can lead to a career as a business analyst. Completing a bachelor's degree - this could be in information technology, business administration or economics. Completing a master's degree - master's degrees "help add more skills and significantly increase your salary". Examples of master's degrees which are relevant to business analysts include business analytics, business informatics, business intelligence & analytics, data science, management information systems or information technology. The combination of all these skills and qualifications provides the business environment with a deeper understanding of the behaviour of markets, products, competitors, economies, and operations within and around a business. Challenges A successful business analyst requires access to large amounts of data, and in the process of using this data they must be aware of challenges relating to data privacy, careful management of analytical resources, team success, and effective communication of results to external parties. Taking all these factors into account reduces the risk of inaccurate conclusions being drawn. Data privacy is an increasingly common issue, as social media and Big Data are becoming more prominent, and hence it is important for businesses to ensure that they handle and distribute only the necessary data to the appropriate employees. Management of analytical resources is necessary for business analysts to consider, as there are many ways in which a business can incur high initial costs in the process of analyzing data, and hence resources should be carefully managed so as not to erode business profits. Team functionality and success are important in all areas of business, and business analytics is no different. Business analysts work best in environments where group dynamics are balanced and teamwork is maximised to ensure the best conclusions are drawn from the data. Effectively communicating to external parties is an important challenge for business analysts. 
The language a business analyst uses in their everyday job is likely to be difficult for other groups within and beyond the business to understand. Hence, it is essential that the business considers how it communicates its conclusions to others. See also Business process reengineering Change management analyst Information technology References Business analysis Business terms Systems analysis Business occupations Computer occupations
Business analyst
[ "Technology" ]
1,843
[ "Computer occupations" ]
1,579,423
https://en.wikipedia.org/wiki/Reaction%20quotient
In chemical thermodynamics, the reaction quotient (Qr or just Q) is a dimensionless quantity that provides a measure of the relative amounts of products and reactants present in a reaction mixture for a reaction with well-defined overall stoichiometry at a particular point in time. Mathematically, it is defined as the ratio of the activities (or molar concentrations) of the product species over those of the reactant species involved in the chemical reaction, taking stoichiometric coefficients of the reaction into account as exponents of the concentrations. At equilibrium, the reaction quotient is constant over time and is equal to the equilibrium constant. A general chemical reaction in which α moles of a reactant A and β moles of a reactant B react to give ρ moles of a product R and σ moles of a product S can be written as α A + β B ⇌ ρ R + σ S. The reaction is written as an equilibrium even though, in many cases, it may appear that all of the reactants on one side have been converted to the other side. When any initial mixture of A, B, R, and S is made, and the reaction is allowed to proceed (either in the forward or reverse direction), the reaction quotient Qr, as a function of time t, is defined as Qr(t) = ({R}t^ρ {S}t^σ) / ({A}t^α {B}t^β), where {X}t denotes the instantaneous activity of a species X at time t. A compact general definition is Qr(t) = ∏j aj(t)^νj, where ∏j denotes the product across all j-indexed variables, aj(t) is the activity of species j at time t, and νj is the stoichiometric number (the stoichiometric coefficient multiplied by +1 for products and −1 for starting materials). Relationship to K (the equilibrium constant) As the reaction proceeds with the passage of time, the species' activities, and hence the reaction quotient, change in a way that reduces the free energy of the chemical system. The direction of the change is governed by the Gibbs free energy of reaction by the relation ΔrG = RT ln(Qr/K), where K is a constant independent of initial composition, known as the equilibrium constant. The reaction proceeds in the forward direction (towards larger values of Qr) when ΔrG < 0 or in the reverse direction (towards smaller values of Qr) when ΔrG > 0. Eventually, as the reaction mixture reaches chemical equilibrium, the activities of the components (and thus the reaction quotient) approach constant values. The equilibrium constant is defined to be the asymptotic value approached by the reaction quotient: K = lim(t→∞) Qr(t), at which point ΔrG = 0. The timescale of this process depends on the rate constants of the forward and reverse reactions. In principle, equilibrium is approached asymptotically at t → ∞; in practice, equilibrium is considered to be reached, in a practical sense, when concentrations of the equilibrating species no longer change perceptibly with respect to the analytical instruments and methods used. If a reaction mixture is initialized with all components having an activity of unity, that is, in their standard states, then Qr = 1 and ΔrG = −RT ln K. This quantity, ΔrG°, is called the standard Gibbs free energy of reaction. All reactions, regardless of how favorable, are equilibrium processes, though practically speaking, if no starting material is detected after a certain point by a particular analytical technique in question, the reaction is said to go to completion. In biochemistry In biochemistry, the reaction quotient is often referred to as the mass-action ratio with the symbol Γ. 
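The definitions above translate directly into a few lines of code. The following is a minimal sketch, not code from the article; the function names and the example species and numbers are all illustrative assumptions:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def reaction_quotient(activities, stoich):
    """Q_r = prod_j a_j(t)^nu_j, where nu_j is positive for products
    and negative for starting materials."""
    q = 1.0
    for species, nu in stoich.items():
        q *= activities[species] ** nu
    return q

def gibbs_free_energy_of_reaction(Q, K, T=298.15):
    """Delta_r G = RT ln(Q_r / K), in J/mol."""
    return R * T * math.log(Q / K)

# alpha A + beta B <=> rho R + sigma S with illustrative coefficients 1, 2, 1, 1
stoich = {"A": -1, "B": -2, "R": 1, "S": 1}
activities = {"A": 0.5, "B": 0.5, "R": 0.1, "S": 0.1}

Q = reaction_quotient(activities, stoich)
dG = gibbs_free_energy_of_reaction(Q, K=10.0)
direction = "forward" if dG < 0 else "reverse"
print(f"Q = {Q:.3g}, Delta_rG = {dG:.0f} J/mol ({direction} direction favored)")
```

Consistent with the text, Q < K here gives a negative ΔrG, so the reaction runs in the forward direction.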
Example The burning of octane, C8H18 + 25/2 O2 → 8 CO2 + 9 H2O, has a ΔrG° of approximately −240 kcal/mol, corresponding to an equilibrium constant of 10^175, a number so large that it is of no practical significance, since there are only ~5 × 10^24 molecules in a kilogram of octane. Significance and applications The reaction quotient plays a crucial role in understanding the direction and extent of a chemical reaction's progress towards equilibrium: Equilibrium condition: At equilibrium, the reaction quotient (Q) is equal to the equilibrium constant (K) for the reaction. This condition is represented as Q = K, indicating that the forward and reverse reaction rates are equal. Predicting reaction direction: If Q < K, the reaction will proceed in the forward direction to establish equilibrium. If Q > K, the reaction will proceed in the reverse direction to reach equilibrium. Extent of reaction: The difference between Q and K provides information about how far the reaction is from equilibrium. A larger difference indicates a greater driving force for the reaction to proceed towards equilibrium. Reaction kinetics: The reaction quotient can be used to study the kinetics of reversible reactions and determine rate laws, as it is related to the concentrations of reactants and products at any given time. Equilibrium constant determination: By measuring the concentrations of reactants and products at equilibrium, the equilibrium constant (K) can be calculated from the reaction quotient (Q = K at equilibrium). The reaction quotient is a powerful concept in chemical kinetics and thermodynamics, enabling the prediction of reaction directions, the extent of reaction progress, and the determination of equilibrium constants. It finds applications in various fields, including chemical engineering, biochemistry, and environmental chemistry, where understanding the behavior of reversible reactions is crucial. References External links Reaction quotient tutorials tutorial I No longer accessible as of November 2023 tutorial II tutorial III Equilibrium chemistry Physical chemistry
Reaction quotient
[ "Physics", "Chemistry" ]
1,170
[ "Equilibrium chemistry", "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
1,579,458
https://en.wikipedia.org/wiki/Pargeting
Pargeting (or sometimes pargetting) is a decorative or waterproof plastering applied to building walls. The term, if not the practice, is particularly associated with the English counties of Suffolk and Essex. In the neighbouring county of Norfolk, the term "pinking" is used. Patrick Leigh Fermor describes similar decorations on pre-World War II buildings in Linz, Austria. "Pargeted façades rose up, painted chocolate, green, purple, cream and blue. They were adorned with medallions in high relief and the stone and plaster scroll-work gave them a feeling of motion and flow." Pargeting derives from the word 'parget', a Middle English term that is probably derived from the Old French pargeter or parjeter, to throw about, or porgeter, to roughcast a wall. However, the term is more usually applied only to the decoration in relief of the plastering between the studwork on the outside of half-timber houses, or sometimes covering the whole wall. The devices were stamped on the wet plaster. This seems generally to have been done by sticking a number of pins in a board in certain lines or curves, and then pressing on the wet plaster in various directions, so as to form geometrical figures. Sometimes these devices are in relief, and in the time of Elizabeth I of England represent figures, birds and foliage. Fine examples can be seen at Ipswich, Maidstone, and Newark-on-Trent. The term is also applied to the lining of the inside of smoke flues to form an even surface for the passage of the smoke. See also Harl Parge coat Plasterwork Yeseria References External links Architectural elements Plastering Wallcoverings
Pargeting
[ "Chemistry", "Technology", "Engineering" ]
353
[ "Building engineering", "Coatings", "Architecture", "Architectural elements", "Plastering", "Components" ]
1,579,479
https://en.wikipedia.org/wiki/Pietra%20dura
Pietra dura, pietre dure or intarsia lapidary (see below), called parchin kari or parchinkari in the Indian Subcontinent, is a term for the inlay technique of using cut and fitted, highly polished colored stones to create images. It is considered a decorative art. The stonework, after the work is assembled loosely, is glued stone-by-stone to a substrate after having previously been "sliced and cut in different shape sections, and then assembled together so precisely that the contact between each section was practically invisible". Stability was achieved by grooving the undersides of the stones so that they interlocked, rather like a jigsaw puzzle, with everything held tautly in place by an encircling 'frame'. Many different colored stones, particularly marbles, were used, along with semiprecious, and even precious stones. It first appeared in Rome in the 16th century, reaching its full maturity in Florence. Pietra dura items are generally crafted on green, white or black marble base stones. Typically, the resulting panel is completely flat, but some examples where the image is in low relief were made, taking the work more into the area of hardstone carving. Related arts and terms Pietre dure is an Italian plural meaning "hard rocks" or hardstones; the singular pietra dura is also encountered in Italian. In Italian, but not in English, the term embraces all gem engraving and hardstone carving, which is the artistic carving of three-dimensional objects in semi-precious stone, normally from a single piece, for example in Chinese jade. The traditional convention in English has been to use the singular pietra dura just to denote multi-colored inlay work. However, in recent years there has been a trend to use "pietre dure" as a term for the same thing, but not for all of the techniques it covers, in Italian. But the title of a 2008 exhibition at the Metropolitan Museum of Art, New York, Art of the Royal Court: Treasures in Pietre Dure from the Palaces of Europe, used the full Italian sense of the term, probably because they thought that it had greater brand recognition. The material on the website speaks of objects such as a vase in lapis lazuli as being examples of "hardstone carving (pietre dure)". The Victoria & Albert Museum in London uses both versions on its website, but uses "pietra dura" ("A method of inlaying coloured marbles or semi-precious stones into a stone base, often in geometric or flower patterns....") in its "Glossary", which was evidently not consulted by the author of another page, where the reader is told: "Pietre dure (from the Italian 'hard stone') is made from finely sliced coloured stones, precisely matched, to create a pictorial scene or regular design". The English term "Florentine mosaic" is sometimes also encountered, probably developed by the tourist industry. Giovanni Montelatici (1864–1930) was an Italian Florentine artist whose brilliant work has been distributed across the world by tourists and collectors. It is distinct from mosaic in that the component stones are mostly much larger and cut to a shape suiting their place in the image, not all of roughly equal size and shape as in mosaic. In pietra dura, the stones are not cemented together with grout, and works in pietra dura are often portable. Nor should it be confused with micromosaics, a form of mosaic using very small tesserae of the same size to create images rather than decorative patterns, for Byzantine icons, and later for panels for setting into furniture and the like. 
For fixed inlay work on walls, ceilings, and pavements that do not meet the definition of mosaic, the better terms are intarsia or, in some specific applications, Cosmatesque. Similarly, for works that use larger pieces of stone or tile, opus sectile may be used. Pietra dura is essentially stone marquetry. As a high expression of lapidary art, it is closely related to the art of jewellery. It can also be considered a branch of sculpture because three-dimensionality can be achieved, as with a bas relief. History Pietra dura developed from the ancient Roman opus sectile, which at least in terms of surviving examples, was architectural, used on floors and walls, with both geometric and figurative designs. In the Middle Ages cosmatesque floors and small columns, etc. on tombs and altars continued to use inlays of different colours in geometric patterns. Byzantine art continued with inlaid floors, but also produced some small religious figures in hardstone inlays, for example in the Pala d'Oro in San Marco, Venice (though this mainly uses enamel). In the Italian Renaissance this technique again was used for images. The Florentines, who most fully developed the form, however, regarded it as 'painting in stone'. As it developed in Florence, the technique was initially called "opere di commessi" (approximately, "fitted-together works"). Medici Grand Duke Ferdinando I of Tuscany founded the Galleria di Lavori in 1588, now the Opificio delle pietre dure, for the purpose of developing this and other decorative forms. A multitude of varied objects were created. Table tops were particularly prized, and these tend to be the largest specimens. Smaller items in the form of medallions, cameos, wall plaques, panels inserted into doors or onto cabinets, bowls, jardinieres, garden ornaments, fountains, benches, etc. are all found. A popular form was to copy an existing painting, often of a human figure, such as a portrait of Pope Clement VIII. Examples are found in many museums. The medium was transported to other European centers of court art and remained popular into the 19th century. In particular, Naples became a noted center of the craft. By the 20th century, the medium was in decline, in part by the assault of modernism, and the craft had been reduced to mainly restoration work. In recent decades, however, the form has been revived, and receives state-funded sponsorship. Modern examples range from tourist-oriented souvenirs, including reproductions of 19th century style religious subjects (especially in Florence and Naples), to works copying or based on older designs used for luxurious decorative contexts, to works in a contemporary artistic idiom. Parchin kari By the early part of the 17th century, smaller objects produced by the Opificio were widely diffused throughout Europe, and as far east as the court of the Mughals in India, where the form was imitated and reinterpreted in a native style; its most sumptuous expression is found in the Taj Mahal. In Mughal India, pietra dura was known as Parchin kari, literally 'inlay' or 'driven-in' work. Because the Taj Mahal is one of the major tourist attractions, there is a flourishing industry of pietra dura artifacts in Agra. Notes External links Italian mosaic Decorative arts Visual arts media Visual arts materials Pavements Floors Taj Mahal Hardstone carving Italian words and phrases Islamic art
Pietra dura
[ "Engineering" ]
1,501
[ "Structural engineering", "Floors" ]
1,579,586
https://en.wikipedia.org/wiki/Physics%20First
Physics First is an educational program in the United States that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of biology. Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education). Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live. The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After this, students are then encouraged to take an 11th or 12th grade course in physics, which does use more advanced math, including vectors, geometry, and more involved algebra. There is a large overlap between the Physics First movement and the movement towards teaching conceptual physics - teaching physics in a way that emphasizes a strong understanding of physical principles over problem-solving ability. Criticism American public schools traditionally teach biology in the first year of high school, chemistry in the second, and physics in the third. The belief is that this order is more accessible, largely because biology can be taught with less mathematics, and will do the most toward providing some scientific literacy for the largest number of students. In addition, many scientists and educators argue that freshmen do not have an adequate background in mathematics to be able to fully comprehend a complete physics curriculum, and that therefore the quality of a physics education is lost. While physics requires knowledge of vectors and some basic trigonometry, many students in the Physics First program take the course in conjunction with geometry. They suggest that instead students first take biology and chemistry, which are less mathematics-intensive, so that by the time they are in their junior year, students will be advanced enough in mathematics with either an algebra 2 or pre-calculus education to be able to fully grasp the concepts presented in physics. 
Some take this argument even further, saying that at least calculus should be a prerequisite for physics. Others point out that, for example, secondary school students will never study the advanced physics that underlies chemistry in the first place. "[Inclined] planes (frictionless or not) didn't come up in ... high school chemistry class ... and the same can be said for some of the chemistry that really makes sense of biological phenomena." For physics to be relevant to a chemistry course, students have to develop a truly fundamental understanding of the concepts of energy, force, and matter, beyond the context of specific applications like the inclined plane. Footnotes External links American Association of Physics Teachers Listservs, incl. Physics First A Closer Look at Cross-Disciplinary Educational Sequences American Association of Physics Teachers on Physics First Project ARISE (American Renaissance in Science Education) AAPT Physics First Informational Guide (pdf file) Curricula Secondary education in the United States Education reform in the United States Physics education High school course levels
Physics First
[ "Physics" ]
830
[ "Applied and interdisciplinary physics", "Physics education" ]
1,579,633
https://en.wikipedia.org/wiki/Controlled%20ecological%20life-support%20system
Controlled (or closed) ecological life-support systems (acronym CELSS) are self-supporting life support systems for space stations and colonies, typically built around controlled closed ecological systems, such as the BioHome, BIOS-3, Biosphere 2, Mars Desert Research Station, and Yuegong-1. Original concept CELSS was pioneered by the Soviet Union during the famed "Space Race" in the 1950s–60s. Originated by Konstantin Tsiolkovsky and furthered by V.I. Vernadsky, the first forays into this science used closed, unmanned ecosystems, expanding into the research facility known as the BIOS-3. Then in 1965, manned experiments began in the BIOS-3. Rationale Human presence in space, thus far, has been limited to our own Earth–Moon system. Also, everything that astronauts would need in the way of life support (air, water, and food) has been brought with them. This may be economical for short missions of spacecraft, but it is not the most viable solution when dealing with the life support systems of a long-term craft (such as a generation ship) or a settlement. The aim of CELSS is to create a regenerative environment that can support and maintain human life via agricultural means. Components of CELSS Air revitalization In non-CELSS environments, air replenishment and processing typically consists of stored air tanks and scrubbers. The drawback to this method lies in the fact that upon depletion the tanks would have to be refilled; the scrubbers would also require replacement after they become ineffective. There is also the issue of processing toxic fumes, which come from the synthetic materials used in the construction of habitats. Therefore, the issue of how air quality is maintained requires attention; in experiments, it was found that the plants also removed volatile organic compounds offgassed by synthetic materials used thus far to build and maintain all man-made habitats. In CELSS, air is initially supplied by external supply, but is maintained by the use of foliage plants, which create oxygen in photosynthesis (aided by the waste byproduct of human respiration, CO2). Eventually, the main goal of a CELSS environment is to have foliage plants take over the complete and total production of oxygen needs; this would make the system a closed, instead of controlled, system. Food / consumables production As with all present forays into space, crews have had to store all consumables they require prior to launch. Typically, hard-food consumables were freeze dried so that the craft's weight could be reduced. Of course, in a self-sustaining ecosystem, a place for crops to grow would be set aside, allowing foods to be grown and cultivated. The larger the group of people, the more crops would have to be grown. As for water, experiments have shown that it would be derived from condensate in the air (a byproduct of air conditioning and vapors), as well as excess moisture from plants. It would then have to be filtered by some means, either by nature or by machine. Waste-water treatment Early space-flight had travelers either ejecting their wastes into space or storing it for a return trip. CELSS studied means of breaking down human wastes and, if possible, integrating the processed products back into the ecology. For instance, urine was processed into water, which was safe for use in toilets and watering plants. Wastewater treatment makes use of plants, particularly aquatic, to process the wastewater. 
It has been shown that the more waste is treated by the aquatic plants (or, more specifically, their root systems), the larger the aquatic plants grow. In tests, such as those done in the BioHome, the plants also made viable compost as a growth medium for crops. Closed versus controlled Closed systems are totally self-reliant, recycling everything indefinitely with no external interaction. The life of such a system is limited, as the entropy of a closed system can only increase with time. But if the otherwise closed system is allowed to accept high-temperature radiant energy from an external source (e.g., sunlight) and to reject low-temperature waste heat to deep space, it can continue indefinitely. An example of such a system is the Earth itself. Controlled systems, by contrast, depend on certain external interactions such as periodic maintenance. An example of such a system is the ISS. Notable CELSS projects BioHome Biosphere 2 BIOS-3 Biosphere J Controlled Environment Systems Research Facility Biotron Biotron Experimental Climate Change Research Facility ALS-NSCORT - Advanced Life Support - NASA Specialized Center of Research and Training Other types of regenerative ecological systems Bioregenerative life support system (acronym: BLSS) Environmental Control and Life Support System (acronym ECLSS) Engineered Closed/Controlled EcoSystem (acronym ECCES) Spome See also Life support system Closed ecological system Arcology References Ecological experiments Systems ecology Spacecraft life support systems
Controlled ecological life-support system
[ "Environmental_science" ]
1,045
[ "Environmental social science", "Systems ecology" ]
1,579,816
https://en.wikipedia.org/wiki/Impeller
An impeller, or impellor, is a driven rotor used to increase the pressure and flow of a fluid. It is the opposite of a turbine, which extracts energy from, and reduces the pressure of, a flowing fluid. Strictly speaking, propellers are a sub-class of impellers where the flow both enters and leaves axially, but in many contexts the term "impeller" is reserved for non-propeller rotors where the flow enters axially and leaves radially, especially when creating suction in a pump or compressor. In pumps An impeller is a rotating component of a centrifugal pump that accelerates fluid outward from the center of rotation, thus transferring energy from the motor that drives the pump to the fluid being pumped. The velocity achieved by the impeller transfers into pressure when the outward movement of the fluid is confined by the pump casing. An impeller is usually a short cylinder with an open inlet (called an eye) to accept incoming fluid, vanes to push the fluid radially, and a splined, keyed, or threaded bore to accept a drive shaft. It can be cheaper to cast an impeller and its spindle as one piece, rather than separately. This combination is sometimes referred to simply as the "rotor." Types Open An open impeller has a hub with attached vanes and is mounted on a shaft. The vanes do not have a wall, making open impellers slightly weaker than closed or semi-closed impellers. However, as the side plate is not fixed to the inlet side of the vane, the blade stresses are significantly lower. In pumps, the fluid enters the impeller's eye, where the vanes add energy and direct it to the discharge nozzle. A close clearance between the vanes and the pump volute or back plate prevents most of the fluid from flowing back. Wear on the bowl and vane edges can be compensated for by adjusting the clearance to maintain efficiency over time. Because the internal parts are visible, open impellers are easier to inspect for damage and maintain than closed impellers. They can also be more easily modified to change flow properties. Open impellers operate over a narrow range of specific speeds. Open impellers are usually faster and easier to maintain. For small pumps and those dealing with suspended solids, open impellers are generally used. Sand locking does not occur as easily as with the closed type. Semi-closed A semi-closed impeller has an additional back wall, giving it more strength. These impellers can pass mixed solid-liquid mixtures at the cost of reduced efficiency. Closed or shrouded The construction of closed impellers includes additional back and front walls on both sides of the vanes, which enhances their strength. This also reduces the thrust load on the shaft, increasing bearing life and reliability and reducing shafting cost. However, this more complicated design, including the use of additional wear rings, makes closed impellers more difficult to manufacture and more expensive than open impellers. A closed impeller's efficiency decreases as wear ring clearance increases with use. However, adjustment of impeller bowl clearance does not affect the wear on the vanes as critically as with an open impeller. Closed impellers can be used over a wider range of specific speeds than open impellers. They are generally used in large pumps and clear water applications. These impellers can't perform effectively with solids and become difficult to clean if clogged. Screw The screw impeller design uses an axially progressive channel that allows solids to be handled freely as the impeller rotates. 
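The energy transfer described above, in which impeller velocity becomes pressure head, is conventionally summarized by Euler's turbomachine equation. This is a standard textbook relation added here for illustration, not a formula given in the article itself:

```latex
% Euler's pump equation: theoretical head H imparted by the impeller
% u = blade (peripheral) speed, c_u = tangential component of the absolute
% fluid velocity; subscripts 1 and 2 denote impeller inlet and outlet;
% g = gravitational acceleration
H = \frac{u_2 \, c_{u2} - u_1 \, c_{u1}}{g}
```

Real pump heads fall below this theoretical value because of slip, recirculation, and friction losses.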
In centrifugal compressors The main part of a centrifugal compressor is the impeller. An open impeller has no cover and can therefore work at higher speeds. A compressor with a covered impeller can have more stages than one that has an open impeller. In water jets Some impellers are similar to small propellers but without the large blades. Among other uses, they are used in water jets to power high speed boats. Because impellers do not have large blades to turn, they can spin at much higher speeds than propellers. The water forced through the impeller is channeled by the housing, creating a water jet that propels the vessel forward. The housing is normally tapered into a nozzle to increase the speed of the water, which also creates a Venturi effect in which low pressure behind the impeller pulls more water towards the blades, tending to increase the speed. To work efficiently, an impeller must have a close fit with its housing. The housing is normally fitted with a replaceable wear ring, which tends to wear as sand or other particles are thrown against the housing side by the impeller. Vessels using impellers are normally steered by changing the direction of the water jet. Compare propeller and jet aircraft engines. In agitated tanks Impellers in agitated tanks are used to mix fluids or slurry in the tank. This can be used to combine materials in the form of solids, liquids and gas. Mixing the fluids in a tank is very important if there are gradients in conditions such as temperature or concentration. There are two types of impellers, depending on the flow regime created: Axial flow impeller Radial flow impeller Radial flow impellers impose essentially shear stress on the fluid, and are used, for example, to mix immiscible liquids or in general when there is a deformable interface to break. Another application of radial flow impellers is the mixing of very viscous fluids. Axial flow impellers impose essentially bulk motion and are used in homogenization processes, in which an increased fluid volumetric flow rate is important. Impellers can be further classified principally into three sub-types: Propellers Paddles Turbines Propellers Propellers are axial thrust-giving elements. These elements impart a very high degree of swirl in the vessel. The flow pattern generated in the fluid resembles a helix. In washing machines Some constructions of top loading washing machines use impellers to agitate the laundry during washing. Firefighting rank badge Fire services in the United Kingdom and many countries of the Commonwealth use a stylized depiction of an impeller as a rank badge. Officers wear one or more on their epaulettes or the collar of their firefighting uniform as an equivalent to the "pips" worn by the army and police. In air pumps Air pumps, such as the roots blower, use meshing impellers to move air through a system. Applications include blast furnaces, ventilation systems, and superchargers for internal combustion engines. In medicine Impellers are an integral part of axial-flow pumps, used in ventricular assist devices to augment or fully replace cardiac function. See also Axial fan design Bladelet (impeller) Centrifugal fan Rim-driven thruster Turbine References Pumps Marine propulsion Fluid dynamics
Impeller
[ "Physics", "Chemistry", "Engineering" ]
1,410
[ "Pumps", "Turbomachinery", "Chemical engineering", "Physical systems", "Hydraulics", "Marine engineering", "Piping", "Marine propulsion", "Fluid dynamics" ]
1,579,924
https://en.wikipedia.org/wiki/Wheelchair%20ramp
A wheelchair ramp is an inclined plane installed in addition to or instead of stairs. Ramps permit wheelchair users, as well as people pushing strollers, carts, or other wheeled objects, to more easily access a building, or navigate between areas of different height. Ramps for accessibility may predate the wheelchair and are found in ancient Greece. A wheelchair ramp can be permanent, semi-permanent or portable. Permanent ramps are designed to be bolted or otherwise attached in place. Semi-permanent ramps rest on top of the ground or a concrete pad and are commonly used for the short term. Permanent and semi-permanent ramps are usually of aluminum, concrete or wood. Portable ramps are usually aluminum and typically fold for ease of transport. Portable ramps are primarily intended for home and building use but can also be used with vans to load an unoccupied mobility device or to load an occupied mobility device when both the device and the passenger are easy to handle. Ramps must be carefully designed in order to be useful. In many places, laws dictate a ramp's minimum width and maximum slope. In general, gentler inclines are easier for wheelchair users to traverse and are safer in icy climates. However, they consume more space and require traveling a greater distance to go up. Hence, in some cases it is preferable to include an elevator or other type of wheelchair lift. In many countries, wheelchair ramps and other features to facilitate universal access are required by building code when constructing new facilities which are open to the public. Internationally, the United Nations Convention on the Rights of Persons with Disabilities mandates nations take action to "enable persons with disabilities to live independently and participate fully in all aspects of life." Among other requirements, it compels countries to institute "minimum standards and guidelines..." for accessibility. Design standards In the US, the Americans with Disabilities Act (ADA) requires a slope of no more than 1:12 for wheelchairs and scooters for business and public use, which works out to one foot of ramp length for each inch of rise. For example, a 30-inch rise requires a minimum of 30 feet of ramp. Additionally, the ADA limits the longest single span of ramp, prior to a rest or turn platform, to 30 feet. Ramps can be as long as needed, but no single run of ramp can exceed 30 feet. Residential applications usually are not required to meet ADA standards (the ADA is a commercial code). The UK's guidelines as recommended by the Disability Discrimination Act 1995 and Equality Act 2010 are a maximum of 1:12 for ramps (with exceptions for existing buildings): "Ramps should be as shallow as possible. The maximum permissible gradient is 1:12 [...], with the occasional exception in the case of short, steeper ramps when refitting existing buildings." In Hong Kong, wheelchair ramps may not exceed a 1:12 slope, except in some situations under the Barrier Free Access (BFA) terms. In South Africa 1:12 is the maximum slope unless the difference in level is less than 400 mm, in which case it is 1:10. [SANS 10400-S SS2(a)]. In Australia, the National Construction Code requires a wheelchair ramp to have a maximum incline of 1 in 8, meaning that for every eight units travelled horizontally, the ramp rises one unit. The wheelchair ramp must also have a minimum width of one metre.
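The slope arithmetic in these standards is simple to mechanize. Below is a minimal sketch; the function name and example figures are illustrative assumptions, not code from any of the standards:

```python
def min_ramp_length(rise, slope_ratio=12):
    """Minimum ramp length for a given rise at a 1:slope_ratio gradient.
    The result is in the same units as `rise` (inches, millimetres, etc.)."""
    return rise * slope_ratio

# ADA-style 1:12 gradient: a 30-inch rise needs 360 inches (30 feet) of ramp
print(min_ramp_length(30))                  # 360
# A steeper 1:8 gradient: a 200 mm rise needs 1600 mm of ramp
print(min_ramp_length(200, slope_ratio=8))  # 1600
```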
Vehicle ramps Vehicles such as buses, trams, taxis, cars and vans may include a ramp to facilitate entry and exit for all. These may be built-in or portable designs. Most major automotive companies offer rebates for portable ramps and mobility access equipment for new vehicles. Access to buses and trams may involve a retractable ramp. See also Adapted automobile Bridge plate (mechanism) Sidewalk curb wheelchair ramp Wheelchair lift References Accessible building Wheelchairs Public transport Passenger rail transport
Wheelchair ramp
[ "Engineering" ]
767
[ "Accessible building", "Architecture" ]
1,579,964
https://en.wikipedia.org/wiki/Balloon%20modelling
Balloon modelling or balloon twisting is the shaping of special modelling balloons into various shapes, often balloon animals. People who create balloon animals and other twisted balloon decoration sculptures are called twisters, balloon benders, and balloon artists. Twisters often perform in restaurants, at birthday parties, at fairs, and at public and private events or functions. Two primary design styles are "single balloon modelling", which restricts itself to the use of one balloon per model, and "multiple balloon modelling", which uses more than one balloon. Each style has its own set of challenges and skills, and most twisters practise both styles. Depending on the needs of the moment, they might easily move between the one-balloon or multiple approaches, or they might even incorporate additional techniques such as "weaving" and "stuffing". Modelling techniques have evolved to include a range of very complex moves, and a highly specialized vocabulary has emerged to describe the techniques involved and their resulting creations. Some twisters inflate their balloons with their own lungs, and for many years this was a standard and necessary part of the act. However, many now use a pump of some sort, whether it is a hand pump, an electric pump plugged in or run by a battery pack, or a compressed gas tank containing air or nitrogen. Twisters do not generally fill their creations with helium, as these designs will not usually float anyway. The balloons for twisting are too porous for helium and the designs are generally too heavy for their size for helium to lift. Origins The origins of balloon modelling are unknown. The 1975 book by "Jolly the Clown" Art Petri credits "Herman Bonnert from Pennsylvania at a magician's convention in 1939" as being the first balloon twister. Val Andrews, in Manual of Balloon Modeling, Vol. 1, An Encyclopedic Series, credits H.J. Bonnert of Scranton, Pennsylvania as being the "daddy of them all". Jim Church III states, "Frank Zacone from Youngstown, Ohio was doing a balloon act during the 1940s and had been doing the act for some time." Another candidate for first balloon twister is Henry Maar. Equipment Modellers will use an assortment of balloons, usually in various colors. Balloon sizes are usually identified by a number: the most common size of twisting balloons is called a "260", as it is approximately two inches in diameter and 60 inches long. Thus, a "260" is 2×60 inches and a "160" is 1×60 inches when fully blown up. Although these are the most common sizes used, there are dozens of other shapes available as well. The most common methods for inflation are air pumps similar to bicycle pumps, electric air compressors, and the mouth. Inflating a balloon with the mouth is difficult and can be dangerous. Particularly well-trained and talented twisters, however, can blow up several balloons at once, and some can even blow up 160s, which are much more difficult to mouth-inflate than the more common 260s, as their narrowness requires a great deal more strength and breath pressure to inflate. See also Ralph Dewey Notes External links 'Pop Art', Jonathan Allen, Cabinet, issue 37, 2010 Visual arts media Modelling Party equipment
Balloon modelling
[ "Chemistry" ]
667
[ "Balloons", "Fluid dynamics" ]
1,579,998
https://en.wikipedia.org/wiki/Temperature%20control
Temperature control is a process in which the change of temperature of a space (and the objects within it), or of a substance, is measured or otherwise detected, and the passage of heat energy into or out of the space or substance is adjusted to achieve a desired temperature. Thermoregulation is the act of keeping the body at a stable, regulated temperature that is suitable for the organism despite external temperature conditions. See also Heat exchanger Moving bed heat exchanger Thermal Control System Thermodynamic equilibrium Industrial automation Spacecraft thermal control External links Article about PID control by Bob Pease (from archive.org) References
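The external links above point to PID control; an even simpler illustration of the measure-and-adjust loop described here is a bang-bang (on/off) thermostat. The sketch below is illustrative only; every name and constant is an assumption, not something specified in the article:

```python
def thermostat_step(measured_temp, setpoint, heating, hysteresis=0.5):
    """One step of bang-bang temperature control: switch the heater based on
    the measured error. The hysteresis band prevents rapid on/off cycling."""
    if measured_temp < setpoint - hysteresis:
        return True    # too cold: turn the heat on
    if measured_temp > setpoint + hysteresis:
        return False   # too warm: turn the heat off
    return heating     # inside the band: keep the current state

# Toy simulation: a room that leaks heat toward 15 °C and warms while heated
temp, heating = 18.0, False
for _ in range(100):
    heating = thermostat_step(temp, setpoint=21.0, heating=heating)
    temp += (0.3 if heating else 0.0) - 0.05 * (temp - 15.0)
print(f"final temperature: {temp:.1f} °C, heater on: {heating}")
```

A PID controller, as in the linked article, replaces the on/off decision with a continuous output computed from the error, its integral, and its derivative.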
Temperature control
[ "Technology" ]
128
[ "Home automation", "Temperature control" ]
1,580,006
https://en.wikipedia.org/wiki/Grandmother%20hypothesis
The grandmother hypothesis is a hypothesis to explain the existence of menopause in human life history by identifying the adaptive value of extended kin networking. It builds on the previously postulated "mother hypothesis", which states that as mothers age, the costs of reproducing become greater, and energy devoted to those activities would be better spent helping existing offspring in their reproductive efforts. It suggests that by redirecting their energy onto those of their offspring, grandmothers can better ensure the survival of their genes through younger generations. By providing sustenance and support to their kin, grandmothers not only ensure that their genetic interests are met, but they also enhance their social networks which could translate into better immediate resource acquisition. This effect could extend past kin into larger community networks and benefit wider group fitness. Background An early explanation was presented by G. C. Williams, who was the first to posit that menopause might be an adaptation. Williams suggested that at some point it became more advantageous for women to redirect reproductive efforts into increased support of existing offspring. Since a female's dependent offspring would die as soon as she did, he argued, older mothers should stop producing new offspring and focus on those existing. In so doing, they would avoid the age-related risks associated with reproduction and thereby eliminate a potential threat to the continued survival of current offspring. The evolutionary reasoning behind this is driven by related theories. Kin selection Kin selection provides the framework for an adaptive strategy by which altruistic behavior is bestowed on closely related individuals because easily identifiable markers exist to indicate them as likely to reciprocate. Kin selection is implicit in theories regarding the successful propagation of genetic material through reproduction, as helping an individual more likely to share one's genetic material would better ensure the survival of at least a portion of it. Hamilton's rule suggests that individuals preferentially help those more related to them when costs to themselves are minimal. This is modeled mathematically as rb > c. Grandmothers would, therefore, be expected to forgo their own reproduction once the benefits of helping those individuals (b) multiplied by the relatedness to that individual (r) outweighed the costs of the grandmother not reproducing (c). Evidence of kin selection emerged as correlated with climate-driven changes, around 1.8–1.7 million years ago, in female foraging and food sharing practices. These adjustments increased juvenile dependency, forcing mothers to opt for a low-ranked, common food source (tubers) that required adult skill to harvest and process. Such demands constrained female IBIs (inter-birth intervals), thus providing an opportunity for selection to favor the grandmother hypothesis.
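A quick numerical illustration of Hamilton's rule as applied here; the cost and benefit comparison is an assumption for illustration only:

```latex
% Hamilton's rule: altruism is favored when rb > c.
% A grandmother shares r = 0.25 of her genes with a grandchild, so helping
% is favored whenever the benefit to the grandchild exceeds four times the
% cost to the grandmother:
rb > c \;\Longrightarrow\; 0.25\,b > c \;\Longleftrightarrow\; b > 4c
```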
This paternity uncertainty that males experience makes them less likely than females to invest, since it would be costly for males to provide sustenance to another male's offspring. This translates into the grandparental generation, as grandmothers should be much more likely than grandfathers to invest energy into the offspring of their children, and more so in the offspring of their daughters than of their sons. The grandmother effect Evolutionary theory dictates that all organisms invest heavily in reproduction in order to replicate their genes. According to parental investment theory, human females will invest heavily in their young because the number of mating opportunities available to them and the number of offspring they are able to produce in a given amount of time are fixed by the biology of their sex. The inter-birth interval (IBI) is a limiting factor in how many children a woman can have because of the extended developmental period that human children experience. Extended childhood, like the extended post-reproductive lifespan for females, is largely unique to humans. Because of this correlation, human grandmothers are well-poised to provide supplemental parental care to their offspring's children. Since their grandchildren still carry a portion of their genes, it is still in the grandmother's genetic interest to ensure those children survive to reproduction. Reproductive senescence The mismatch between the rates of degradation of somatic cells versus gametes in human females presents an unsolved paradox. Somatic cells decline more slowly, and humans invest more in somatic longevity relative to other species. Since natural selection has a much stronger influence on younger generations, deleterious mutations that act in later life become harder to select out of the population. In female placentals, the number of ovarian oocytes is fixed during embryonic development, possibly as an adaptation to reduce the accumulation of mutations; these oocytes then mature or degrade over the life course. At birth there are typically one million ova. However, by menopause, only approximately 400 eggs would have actually matured. In humans, the rate of follicular atresia increases at older ages (around 38–40), for reasons that are not known. In chimpanzees, our closest nonhuman genetic relatives, recent research indicates a menopausal age of roughly 50 in captive animals, similar to that of human females, with similar findings reflected in a study of the Ngogo (Uganda) wild chimpanzee community reported in October 2023. The report of the latter study questioned the grandmother hypothesis by observing that "...chimpanzees have very different living arrangements than humans. Older female chimpanzees typically do not live near their daughters or provide care for grandchildren, yet females at Ngogo often live past their childbearing years." Previously, chimpanzees and humans had been thought to share a very similar rate of oocyte atresia until about the age of 40, after which the rate in humans accelerates far beyond that in chimpanzees. The aging process in humans leaves a dilemma in that females live past their ability to reproduce. The question posed to evolutionary researchers then becomes, why do human bodies live on so robustly and for so long past their reproductive potential, and could there be an adaptive benefit to abandoning one's own attempts at reproduction to assist kin? 
Alloparenting The practice of dividing parenting responsibilities among non-parents affords females a great advantage in that they can dedicate more effort and energy toward having an increased number of offspring. While this practice is observed in several species, it has been an especially successful strategy for humans, who rely extensively on social networks. One observational study of the Aka foragers of Central Africa demonstrated how allomaternal investment toward an offspring increased specifically during times that the mother's investment in subsistence and economic activities increased. If the grandmother effect were true, post-menopausal women should continue to work after the cessation of fertility and use the proceeds to preferentially provision their kin. Studies of Hadza women have provided such evidence. Among the Hadza, a modern hunter-gatherer group in Tanzania, post-menopausal women often help their grandchildren by foraging for food staples that younger children are inefficient at acquiring successfully. Children, therefore, require the assistance of an adult to gain this crucial source of sustenance. Often, however, mothers are inhibited by the care of younger offspring and are less available to help their older children forage. In this regard, the Hadza grandmothers become vital to the care of existing grandchildren, and allow reproductive-age women to redirect energy from existing offspring into younger offspring or other reproductive efforts. However, some commentators felt that the role of Hadza men, who contribute 96% of the mean daily intake of protein, was ignored, though the authors have addressed this criticism in numerous publications. Other studies have also expressed reservations about the behavioral similarities between the Hadza and our ancestors. Maternal v. paternal grandmothers Because grandmothers should be expected to provide preferential treatment to the grandchildren they are most certain of being related to, there should be differences in the help they provide to each grandchild according to that relationship. Studies have found that not only does the maternal or paternal relationship of the grandparent affect whether or how much help a grandchild receives, but also what kind of help. Paternal grandmothers often had a detrimental effect on infant survival. Also, maternal grandmothers concentrate on offspring survival, whereas paternal grandmothers increase birth rates. These findings are consistent with ideas of parental investment and paternity uncertainty. Equally, a grandmother could be both a maternal and a paternal grandmother; in dividing her resources, a daughter's offspring should then be favored. Other studies have focused on the genetic relationship between grandmothers and grandchildren. Such studies have found that the effects of maternal or paternal grandmothers on grandsons or granddaughters may vary based on degree of genetic relatedness, with paternal grandmothers having positive effects on granddaughters but detrimental effects on grandsons, and paternity uncertainty may be less important than chromosome inheritance. Criticisms and alternative hypotheses Some critics have cast doubt on the hypothesis because while it addresses how grandparental care could have maintained longer female post-reproductive lifespans, it does not provide an explanation for how it would have evolved in the first place. 
One theory is that the number of caregivers has a positive relationship with the likelihood of offspring reaching adulthood, suggesting that grandparents who contribute to the care of their grandchildren are more likely to have their genes passed down. Some versions of the grandmother hypothesis asserted that it helped explain human longevity well past reproductive senescence. However, demographic data show that, historically, larger numbers of older people in a population correlated with smaller numbers of younger people. This suggests that at some point grandmothers were not helpful toward the survival of their grandchildren, and does not explain why the first grandmother would forgo her own reproduction to help her offspring and grandchildren. In addition, all variations on the mother or grandmother effect fail to explain male longevity with continued spermatogenesis. Another problem concerning the grandmother hypothesis is that it requires a history of female philopatry. Though some studies suggest that hunter-gatherer societies are patriarchal, mounting evidence shows that residence is fluid among hunter-gatherers and that married women in at least one patrilineal society visit their kin during times when kin-based support can be especially beneficial to a woman's reproductive success. One study does suggest, however, that maternal kin were essential to the fitness of sons as fathers in a patrilocal society. The hypothesis also fails to explain the detrimental effects of losing ovarian follicular activity. While continued post-menopausal synthesis of estrogen occurs in peripheral tissues through the adrenal pathways, these women undoubtedly face an increased risk of conditions associated with lower levels of estrogen: osteoporosis, osteoarthritis, Alzheimer's disease and coronary artery disease. However, cross-cultural studies of menopause have found that menopausal symptoms are quite variable among different populations, and that some populations of females do not recognize, and may not even experience, these "symptoms". This high level of variability in menopausal symptoms across populations brings into question the plausibility of menopause as a sort of "culling agent" to eliminate non-reproductive females from competition with younger, fertile members of the species. This also leaves the task of explaining the gap between the typical age of menopause onset and the life expectancy of female humans. See also The How and the Why – a play based on the controversy surrounding the Grandmother hypothesis and the evolution of human reproduction Postpartum confinement, a worldwide tradition in which the grandmothers care for the next generation Patriarch hypothesis References Gerontology Human evolution Menopause Middle age
Grandmother hypothesis
[ "Biology" ]
2,441
[ "Gerontology" ]
1,580,229
https://en.wikipedia.org/wiki/Hagioscope
A hagioscope or squint is an architectural term denoting a small splayed opening or tunnel at seated eye-level, through an internal masonry dividing wall of a church in an oblique direction (south-east or north-east), giving worshippers a view of the altar and therefore of the elevation of the host. Where worshippers were separated from the high altar not by a solid wall of masonry but by a transparent parclose screen, a hagioscope was not required as a good view of the high altar was available to all within the sectioned-off area concerned. Where a squint was made in an external wall so that lepers and other non-desirables could see the service without coming into contact with the rest of the populace, such openings are termed leper windows or lychnoscopes. Function Where the congregation of a church is united in the nave there is no use for a hagioscope. However, when parts of the congregation separated themselves, for purposes of social distinction, from the chancel or nave and from the main congregation by means of walls or other screens, such a need arose. In medieval architecture hagioscopes were often a low window in the chancel wall and were frequently protected by either a wooden shutter or iron bars. Hagioscopes are found on one or both sides of the chancel arch; in some cases a series of openings has been cut in the walls in an oblique line to enable a person standing in the porch (as in Bridgwater church, Somerset) to see the altar; in this case and in other instances such openings were sometimes provided for an attendant, who had to ring the Sanctus bell when the Host was elevated. Though rarely encountered in continental Europe, they are occasionally found to serve such purposes as allowing a monk in one of the vestries to follow the service and to communicate with the bell-ringers. Sometimes squints were placed to enable nuns or anchorites to observe the services without having to give up their isolation. Anchorites could view the Eucharist without being able to observe the congregation. The unusual design of the church of St Helen's in Bishopsgate, one of the largest surviving ancient churches of London, arose from its once having been two separate places of worship: a 13th-century parish church and the chapel of a Benedictine convent. On the convent side of St Helen's Church, a "squint" allowed the nuns to observe the parish masses; church records show that the squint in this case was not enough to restrain the nuns, who were eventually admonished to "abstain from kissing secular persons", a practice to which it seems they had become "too prone". Surviving examples Finland There is only one hagioscope in Finland, at Olavinlinna (St. Olaf's Castle), in the town of Savonlinna. Here, the squint has enabled some congregants to continue gathering at the dark, damp stone church tower through the dead of winter, despite forbidding temperatures and weather conditions. France In France, the hagioscope of Notre-Dame in Dives-sur-Mer, Normandy, has the inscription trou aux lépreux (leper window). Other hagioscopes are known at St. Laurent in Deauville, Normandy and at the old church of St. Maurice in Freyming-Merlebach, Lorraine. Germany In Germany, a number of hagioscopes still exist or were rediscovered in the 19th and 20th centuries. They are found mainly in Lower Saxony, which had a small population in the Middle Ages and only a few bigger cities. In cities lepers lived together in housing estates which often had their own chapels. In Georgsmarienhütte the hagioscope of church St. 
Johann belonged to the former Benedictine convent Kloster Oesede, founded in the 12th century and reconstructed in the early 1980s. Nearby in Bad Iburg a hagioscope was rediscovered at St. Clemens, church of former Benedictine monastery in the castle and monastery complex Schloss Iburg. Other hagioscopes in Lower Saxony are found in Bokelesch, Westoverledingen, Dornum, Midlum, Kirchwahlingen (Gemeinde Böhme) and Hankensbüttel. In Northrhine-Westphalia, St. Antonius-Kapelle in Gescher-Tungerloh-Capellen has a hagioscope. St. Antonius is used as Autobahn chapel at Bundesautobahn 31. Another hagioscope is found in St. Ulricus in Börninghausen. In Rhineland-Palatinate the church of St. Eligius-Hospital in Neuerburg has a hagioscope. In Baden-Württemberg there is a hagioscope in St. Peter und Paul, the Old Cemetery Church of Nusplingen. Ireland A leper's squint (now blocked up) is visible at St Mary's Cathedral, Limerick. Athenry Priory also once had a leper's squint Furness Church, 13th century Norman church near Naas St Mary's Church, Inis Cealtra has a stone opening believed to be a leper's squint Netherlands St. Vitus in Wetsens, and Jistrum, both in Friesland, have hagioscopes, as does the oldest church in the Netherlands, which stands in Oosterbeek. Sweden In Sweden, Bro Church near Visby on Gotland has a cross-shaped hagioscope. Another church on Gotland with a hagioscope is Atlingbo Church. Other hagioscopes are at the church of Vreta Abbey near Linköping, Granhult Kyrka in Uppvidinge and Husaby Kyrka in Götene. The wooden church in Granhult (Småland) has a hagioscope which can be closed. United Kingdom Churches in England with hagioscopes include: Church of St Mary the Virgin, Gamlingay, Cambridgeshire St Wynwallow's Church, Landewednack, Cornwall St Martin's Church, Liskeard, Cornwall St Anietus's Church, St Neot, Cornwall St Corentin's Church, Cury, Cornwall St Martin's Church, St Martin-by-Looe, Cornwall St Cyricius and St Julietta's Church, St Veep, Cornwall Church of St Sampson, South Hill, Cornwall Stoke Climsland Church, Cornwall St Petroc's Church, Trevalga, Cornwall St James' in Great Ormside, Cumbria St Mary's, Lytchett Matravers, Dorset (a particularly large example) St Mary and St Cuthbert, Chester-le-Street, Durham St Mary's Church, Easington, County Durham St Thomas à Becket Church, Lewes, East Sussex St. Laurence and All Saints Church, Eastwood, Essex St Andrew and St Bartholomew's Church, Ashleworth, Gloucestershire Church of the Holy Rood, Holybourne, Hampshire St Cuthbert's Church, Aldingham, Lancashire St Wilfrid's church, Ribchester, Lancashire has a squint on the north side permitting the high altar to be viewed from outside the church. St Nicholas's Church, Walcot, Lincolnshire St James The Less, Sulgrave, Northamptonshire St Mary's Church, Grendon, Northamptonshire (see picture in Gallery) St Cuthbert's Church, Beltingham, Northumberland Selby Abbey, Selby, North Yorkshire St Oswald's Church, Sowerby, North Yorkshire Holy Trinity Church, Goodramgate, York, North Yorkshire St Peter's in Upton, Nottinghamshire St Mary the Virgin Church in Henley-On-Thames, Oxfordshire St Nicholas's Church, Old Marston, Oxfordshire St Nicholas' Church, Kenilworth, Warwickshire St Mary's Church in Winchcombe, Gloucestershire St John the Baptist in Symondsbury, Dorset St. Cyriac's Church at Lacock, Wiltshire Gallery See also Passthrough (opening) References Windows Architectural elements Church architecture
Hagioscope
[ "Technology", "Engineering" ]
1,678
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
1,580,233
https://en.wikipedia.org/wiki/HD%20217107
HD 217107 (6 G. Piscium) is a yellow subgiant star approximately 65 light-years away from Earth in the constellation of Pisces (the Fish). Its mass is very similar to the Sun's, although it is considerably older. Two planets have been discovered orbiting the star: one is extremely close and completes an orbit every seven days, while the other is much more distant, taking fourteen years to complete an orbit. Distance, age, and mass HD 217107 is fairly close to the Sun: the Gaia astrometric satellite measured its parallax as 49.7846 milliarcseconds, which corresponds to a distance of 65.51 light years. Its apparent magnitude is 6.17, making it just barely visible to the naked eye under favourable conditions. Spectroscopic observations show that its spectral type is G7 or G8, which means its temperature is about 5,000 K. Its mass is thought to be roughly the same as the Sun's, although its estimated age of 7.7 billion years is rather older than the Sun's 4.6 billion years, and it is thought to be beginning to evolve away from the main sequence, having consumed almost all the hydrogen in its core in nuclear fusion reactions. Planetary system A study of the radial velocity of HD 217107 carried out in 1998 revealed that its motion along the line of sight varied over a 7.1-day cycle. The period and amplitude of this variation indicated that it was caused by a planetary companion in orbit around the star, with a minimum mass slightly greater than that of Jupiter. The companion planet was designated HD 217107 b. While most planets with orbital periods of less than 10 days have almost circular orbits, HD 217107 b has a somewhat eccentric orbit, and its discoverers hypothesized that this could be due to the gravitational influence of a second planet in the system at a distance of several astronomical units (AU). Confirmation of the existence of a second planet followed in 2005, when long-term observations of the star's radial velocity variations revealed a variation on a period of about eight years, caused by a planet with a mass at least twice that of Jupiter in a very eccentric orbit with a semimajor axis of about 4.3 AU. The second planet was designated HD 217107 c. See also List of exoplanets discovered before 2000 - HD 217107 b List of exoplanets discovered between 2000–2009 - HD 217107 c References External links Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona G-type subgiants 217107 113421 Pisces (constellation) Planetary systems with two confirmed planets Durchmusterung objects Piscium, 6 8734
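The 65.51 light-year figure follows directly from the quoted Gaia parallax. As an illustrative check (a standard parallax-to-distance conversion using 1 parsec ≈ 3.2616 light-years, not a value taken from the article's sources):

\[
d = \frac{1\,\text{arcsec}}{p} \;=\; \frac{1000}{49.7846}\ \text{pc} \;\approx\; 20.09\ \text{pc} \;\approx\; 20.09 \times 3.2616 \;\approx\; 65.5\ \text{light-years}
\]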
HD 217107
[ "Astronomy" ]
577
[ "Pisces (constellation)", "Constellations" ]
1,580,287
https://en.wikipedia.org/wiki/Spinning%20pinwheel
The spinning pinwheel is a type of progress indicator and a variation of the mouse pointer used in Apple's macOS to indicate that an application is busy. Officially, the macOS Human Interface Guidelines refer to it as the spinning wait cursor, but it is also known by other names. These include, but are not limited to, the spinning beach ball, the spinning wheel of death, and the spinning beach ball of death. History A wristwatch was used as the first wait cursor in early versions of the classic Mac OS. Apple's HyperCard first popularized animated cursors, including a black-and-white spinning quartered circle resembling a beach ball. The beach-ball cursor was also adopted to indicate running script code in the HyperTalk-like AppleScript. The cursors could be advanced by repeated HyperTalk invocations of "set cursor to busy". Wait cursors are activated by applications performing lengthy operations. Some versions of the Apple Installer used an animated "counting hand" cursor. Other applications provided their own theme-appropriate custom cursors, such as a revolving Yin Yang symbol, Fetch's running dog, Retrospect's spinning tape, and Pro Tools' tapping fingers. Apple provided the standard interfaces for animating cursors: originally the Cursor Utilities (SpinCursor, RotateCursor) and, in Mac OS 8 and later, the Appearance Manager (SetAnimatedThemeCursor). From NeXT Step to MacOS X NeXTStep 1.0 used a monochrome icon resembling a spinning magneto-optical disk. Some NeXT computers included an optical drive, which was often slower than a magnetic hard drive. This made it a common reason for the wait cursor to appear. When color support was added in NeXTStep 2.0, color versions of all icons were added. The wait cursor was updated to reflect the bright rainbow surface of these removable disks, and that icon remained, even when later machines began using hard disk drives as primary storage. Contemporary CD-ROM drives were even slower (at 1x, 150 kbit/s). With the arrival of Mac OS X, the wait cursor was often called the "spinning beach ball" in the press, presumably by authors not knowing its NeXT history or relating it to the HyperCard wait cursor. The two-dimensional appearance was kept essentially unchanged from NeXT to Rhapsody/Mac OS X Server 1.0 which otherwise had a user interface design resembling Mac OS 8/Platinum theme. This continued through Mac OS X 10.0/Cheetah and Mac OS X 10.1/Puma, which introduced the Aqua user interface theme. Mac OS X 10.2/Jaguar gave the cursor a glossy rounded "gumdrop" look in keeping with other OS X interface elements. In OS X 10.10, the entire pinwheel rotates (previously only the overlaying translucent layer moved). With OS X 10.11 El Capitan the spinning wait-cursor's design was updated. It now has less shadowing and has brighter, more solid colors to better match the design of the user interface and the colors also turn with the spinning, not just the texture. System usage In single-task operating systems like the original Macintosh operating system, the wait cursor might indicate that the computer was completely unresponsive to user input, or just indicate that response may temporarily be slower than usual due to disk access. This changed with multitasking operating systems such as System Software 5, where it is possible to switch to another application and continue to work there. Individual applications could also choose to display the wait cursor during long operations (and were often able to cancel this display with a keyboard command). 
After the transition to Mac OS X (macOS), the display of the wait cursor could only be controlled by the operating system, not by the application. This could indicate that the application was in an infinite loop, or just performing a lengthy operation and ignoring events. Each application has an event queue that receives events from the operating system (for example, key presses and mouse button clicks); if an application takes longer than 2 seconds to process the events in its event queue (regardless of the cause), the operating system displays the wait cursor whenever the cursor hovers over that application's windows. The icon is meant to indicate that the application is temporarily unresponsive, a state from which it should recover. It may also indicate that all or part of the application has entered an unrecoverable state or an infinite loop. During this time the user may be prevented from closing, resizing, or even minimizing the windows of the affected application (although moving the window is still possible in OS X, and previously hidden parts of the window are usually redrawn, even when the application is otherwise unresponsive). While one application is unresponsive, other applications typically remain usable. File system and network delays are another common cause. Guidelines, tools and methods for developers By default, events (and any actions they initiate) are processed sequentially, an arrangement intended for the trivial amount of processing most events require. If an operation takes too long, the application appears unresponsive and the spinning wait cursor is shown until the operation completes. Developers may prevent this by using separate threads for lengthy processing, allowing the application's main thread to continue responding to external events. However, this greatly increases the application's complexity. Another approach is to divide the work into smaller packets and use NSRunLoop or Grand Central Dispatch (see the sketch below). Bugs in applications, for instance an infinite loop or a deadlock, can cause them to stop responding to events; afflicted applications rarely recover. Problems with the virtual memory system, such as slow paging caused by a spun-down hard disk or disk read errors, will cause the wait cursor to appear across multiple applications, until the hard disk and virtual memory system recover. Instruments is an application that comes with the Mac OS X Developer Tools. Along with its other functions, it allows the user to monitor and sample applications that are either not responding or performing a lengthy operation. Each time an application does not respond and the spinning wait cursor is activated, Instruments can sample the process to determine which code is causing the application to stop responding. With this information, the developer can rewrite code to avoid the cursor being activated. Apple's guidelines suggest that developers try to avoid invoking the spinning wait cursor, and instead suggest using other user interface indicators, such as an asynchronous progress indicator. See also Windows wait cursor Spinning cursor Spinning wheel (throbber) Pointer (user interface) Notes References External links Pointers in macOS from Apple's website. Troubleshooting the "Spinning Beach Ball of Death" Excerpt from “Troubleshooting Mac OS X” book with some information on how to deal with Spinning Wait Cursor problems. Computer errors MacOS
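The threading advice above can be illustrated with a brief sketch. This is a minimal, hypothetical example rather than Apple sample code: the functions loadLargeFile and updateInterface are placeholder names, while DispatchQueue is the real Grand Central Dispatch API. Running the slow work on a background queue keeps the main thread free to drain its event queue, so the two-second threshold that triggers the wait cursor is never reached.

```swift
import Foundation

// Placeholder for a lengthy operation; run on the main thread it could
// easily exceed the ~2 second limit and trigger the spinning wait cursor.
func loadLargeFile(at url: URL) -> Data? {
    return try? Data(contentsOf: url)
}

// Placeholder for whatever view update the application performs.
func updateInterface(with data: Data?) {
    print("Loaded \(data?.count ?? 0) bytes")
}

func openDocument(at url: URL) {
    // Hand the slow work to a background queue so the main thread keeps
    // responding to key presses, clicks and window events.
    DispatchQueue.global(qos: .userInitiated).async {
        let data = loadLargeFile(at: url)
        // UI work must return to the main queue.
        DispatchQueue.main.async {
            updateInterface(with: data)
        }
    }
}
```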
Spinning pinwheel
[ "Technology" ]
1,472
[ "Computer errors" ]
1,580,554
https://en.wikipedia.org/wiki/Tiltmeter
A tiltmeter is a sensitive inclinometer designed to measure very small changes from the vertical level, either on the ground or in structures. Tiltmeters are used extensively for monitoring volcanoes, the response of dams to filling, the small movements of potential landslides, the orientation and volume of hydraulic fractures, and the response of structures to various influences such as loading and foundation settlement. Tiltmeters may be purely mechanical or incorporate vibrating-wire or electrolytic sensors for electronic measurement. A sensitive instrument can detect changes of as little as one arc second. Tiltmeters have a long, diverse history, somewhat parallel to the history of the seismometer. The very first tiltmeter was a long stationary pendulum. These were used in the very first large concrete dams, and are still in use today, augmented with newer technology such as laser reflectors. Although they had been used for other applications such as volcano monitoring, they have distinct disadvantages, such as their great length and sensitivity to air currents. Even in dams, they are slowly being replaced by the modern electronic tiltmeter. Volcano and Earth movement monitoring then used the water-tube, long baseline tiltmeter. In 1919, the physicist Albert A. Michelson noted that the most favorable arrangement to obtain high sensitivity and immunity from temperature perturbations is to use the equipotential surface defined by water in a buried half-filled water pipe. This was a simple arrangement of two water pots, connected by a long water-filled tube. Any change in tilt would be registered as a difference in water level between one pot and the other. Although extensively used throughout the world for Earth-science research, they have proven to be quite difficult to operate. For example, due to their high sensitivity to temperature differentials, these always have to be read in the middle of the night. The modern electronic tiltmeter, which is slowly replacing all other forms of tiltmeter, uses a simple bubble level principle, as used in the common carpenter level. An arrangement of electrodes senses the exact position of the bubble in the electrolytic solution, to a high degree of precision. Any small changes in the level are recorded using a standard datalogger. This arrangement is quite insensitive to temperature, and can be fully compensated, using built-in thermal electronics. A newer technology using microelectromechanical systems (MEMS) sensors enables tilt angle measuring tasks to be performed conveniently in both single and dual axis mode. Ultra-high precision 2-axis MEMS-driven digital inclinometer/tiltmeter instruments are available for speedy angle measurement applications and surface profiling requiring very high resolution and accuracy of one arc second. The 2-axis MEMS-driven inclinometers/tiltmeters can be digitally compensated and precisely calibrated for non-linearity and operating temperature variation, resulting in higher angular accuracy and stability over a wider angular measurement range and a broader operating temperature range. Further, digital display of readings can effectively prevent parallax error as experienced when viewing traditional 'bubble' vials located at a distance. The most dramatic application of tiltmeters is in the area of volcanic eruption prediction. 
Tilt records from the USGS show that the main volcano in Hawaii (Kilauea) has a pattern of filling the main chamber with magma and then discharging to a side vent: the tiltmeter registers swelling of the main chamber, then draining of that chamber, and then an eruption of the adjoining vent, with each peak in the tilt record corresponding to a recorded eruption. See also Dam safety system Differential GPS Geomechanic Inclinometer Remote sensing methods Rock mechanics Tilt test (geotechnical engineering) References Inclinometers Seismology instruments Volcanology Geological tools
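As a rough illustration of the MEMS approach described above, a tilt angle can be derived from the static gravity components reported by a 3-axis accelerometer using a standard trigonometric relation (a generic textbook formulation, not the calibration procedure of any particular instrument mentioned here):

\[
\theta_x = \arctan\!\left(\frac{a_x}{\sqrt{a_y^{2} + a_z^{2}}}\right)
\]

where a_x, a_y and a_z are the measured acceleration components and \theta_x is the tilt of the x-axis away from the horizontal; practical instruments add per-axis calibration and temperature compensation on top of this relation.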
Tiltmeter
[ "Technology", "Engineering" ]
785
[ "Seismology instruments", "Measuring instruments" ]
5,592,497
https://en.wikipedia.org/wiki/World%20Nuclear%20Association
World Nuclear Association is the international organization that promotes nuclear power and supports the companies that comprise the global nuclear industry. Its members come from all parts of the nuclear fuel cycle, including uranium mining, uranium conversion, uranium enrichment, nuclear fuel fabrication, plant manufacture, transport, and the disposal of used nuclear fuel, as well as electricity generation itself. Together, World Nuclear Association members are responsible for 70% of the world's nuclear power as well as the vast majority of world uranium, conversion and enrichment production. The Association says it aims to fulfill a dual role for its members: facilitating their interaction on technical, commercial and policy matters, and promoting wider public understanding of nuclear technology. It has a secretariat of around 30 staff. The Association was founded in 2001 on the basis of the Uranium Institute, itself founded in 1975. Membership World Nuclear Association continues to expand its membership, particularly in non-OECD countries where nuclear power is produced or where this option is under active consideration. Members are located in 44 countries representing 80% of the world's population. The annual subscription fee for an institutional member is based on its size and scale of activity. Upon receiving an inquiry or application, the Association's London-based secretariat determines the fee according to standardized criteria and informs the candidate organisation accordingly. The fee structure provides, in many cases, significant discounts for organisations located in countries outside the OECD. A low-fee non-commercial membership is available for organisations with a solely academic, research, policy or regulatory function. A list of current members is published on the World Nuclear Association website. Charter of Ethics World Nuclear Association has established a Charter of Ethics to serve as a common credo for its member organizations. This affirmation of values and principles is intended to summarize the responsibilities of the nuclear industry and the surrounding legal and institutional framework that has been constructed through international cooperation to fulfill U.S. President Dwight D. Eisenhower's vision of 'Atoms for Peace'. Leadership World Nuclear Association members appoint a Director General and elect a 20-member board of management. The current Director General is Sama Bilbao y León. The Chairman of the board is H.E. Mohamed Al Hammadi, Managing Director and Chief Executive Officer of Emirates Nuclear Energy Corporation. The Vice chairman is Philippe Knoche CEO at Orano. A board of management fulfills statutory duties pertaining to the organization's governance and sets World Nuclear Association policies and strategic objectives, subject to approval by the full membership. Activities and services Industry interaction An essential role of World Nuclear Association is to facilitate commercially valuable interaction among its members. Ongoing World Nuclear Association Working Groups, consisting of members and supported by the secretariat, share information and develop analysis on a range of technical, trade and environmental matters. 
These subjects include: Cooperation in reactor design, evaluation and licensing Radiological protection Industry economics Nuclear law Supply chain Transport of radioactive materials Waste management and decommissioning Capacity optimization Uranium mining standardization Construction risk management Security of the international fuel cycle Fuel market report working group When meeting to discuss industry issues, World Nuclear Association members are cautioned to avoid any topic that could potentially create even the impression of an attempt to set prices or engage in other anti-competitive behaviour. Accordingly, topics not discussed in meetings include terms of specific contracts; current or projected prices for products or services; allocation of markets; refusals to deal with particular suppliers or customers; or any similar matters that might impair competition within any segment of the nuclear industry. Meetings World Nuclear Association's annual Symposium in London provides a forum for speakers from the nuclear industry. The Association has previously presented an award for 'Distinguished Contribution to the Peaceful Worldwide Use of Nuclear Energy'. The Association also cooperates with the Nuclear Energy Institute on annual World Nuclear Fuel Cycle meetings for industry representatives concerned with nuclear fuel supply and in particular the uranium market. Representation World Nuclear Association represents the interests of the international nuclear industry at key international forums such as: International Atomic Energy Agency and Nuclear Energy Agency advisory committees on transport and all aspects of nuclear safety United Nations policy forums focused on sustainable development and climate change. (The Association was in attendance at the 2009 Copenhagen climate change talks and at COP26) International Commission on Radiological Protection and OSPAR deliberations on radiological protection. In contrast to earlier less structured forms of industry representation the Association provides a unified voice from a single body; encompassing all manner of industry expertise and perspectives. It is clear and unreserved in its purpose of promoting the maximum feasible use of safe nuclear power. Public information The World Nuclear Association public website provides an available, non-technical source of information on the global nuclear industry. The site presents reference documents, and a wide range of educational and explanatory papers which are constantly updated. Australian nuclear power advocate Ian Hore-Lacy served as the organization's Director of Public Information for 12 years, after working for six years at the now defunct Melbourne-based Uranium Information Centre. In the late 2000's, the information-disseminating role was assumed by World Nuclear Association and World Nuclear News (WNN). The Association supports WNN, the authoritative online news service intended to bring accurate and accessible information on developments in nuclear power to the Association's industry readers and the general public. Its output is free of charge and may be widely reproduced in accordance with WNN's copyright policy. The Association's reactor database contains information on past, present and future nuclear power reactors across the world. Other services World Nuclear Association is engaged in a number of other initiatives to promote the peaceful development of nuclear power. 
These include World Nuclear University (WNU), which is a global partnership between World Nuclear Association, the International Atomic Energy Agency, The OECD Nuclear Energy Agency and the World Association of Nuclear Operators committed to enhancing international education and leadership in the peaceful applications of nuclear science and technology. It runs a series of programmes designed to complement existing institutions of nuclear learning in their curriculum. The premier event on the WNU calendar is the Summer Institute, which runs each year in July and brings together speakers from industry and government to present on all aspects of nuclear power. It also runs five one-week courses per year with partner universities around the world intended to enhance knowledge of today's nuclear industry among students. See also Institute of Nuclear Materials Management International Atomic Energy Agency World Nuclear Transport Institute References External links World Nuclear Association Symposium World Nuclear Fuel Cycle Annual report Atoms for Peace International nuclear energy organizations International organisations based in London Nuclear industry organizations Organisations based in the City of Westminster
World Nuclear Association
[ "Engineering" ]
1,314
[ "International nuclear energy organizations", "Nuclear industry organizations", "Nuclear organizations" ]
5,593,024
https://en.wikipedia.org/wiki/Indiglo
Indiglo is a product feature on watches marketed by Timex, incorporating an electroluminescent panel as a backlight for even illumination of the watch dial. The brand is owned by Indiglo Corporation, which is in turn solely owned by Timex, and the name derives from the word indigo, as the original watches featuring the technology emitted a green-blue light. History Timex introduced the Indiglo technology in 1992 in their Ironman watch line and subsequently expanded its use to 70% of their watch line, including men's and women's watches, sport watches and chronographs. Casio introduced their version of electroluminescent backlight technology in 1995. The Indiglo name was later licensed to other companies, such as Austin Innovations Inc., for use on their electroluminescent products. From 2006 to 2011, the Timex Group marketed a line of high-end quartz watches under the TX Watch Company brand, using a proprietary six-hand, four-motor, microprocessor-controlled movement. To separate the brand from Timex, the movements had luxury features associated with a higher-end brand, e.g., sapphire crystals and stainless steel or titanium casework, and used hands treated with super-luminova luminescent pigment for low-light legibility, rather than Indiglo technology. When the Timex Group migrated the microprocessor-controlled, multi-motor, multi-hand technology to its Timex brand in 2012, it created a sub-collection marketed as Intelligent Quartz (IQ). The line employed the same movements and capabilities from the TX brand at a much lower price point, incorporating Indiglo technology rather than the super-luminova pigments. Design Indiglo backlights typically emit a distinct greenish-blue color and evenly light the entire display or dial. Certain Indiglo models, e.g., Timex Datalink USB, use a negative liquid-crystal display so that only the digits are illuminated, rather than the entire display. References External links How does an Indiglo watch work? at HowStuffWorks Overview of electroluminescent display technology, and the discovery of electroluminescence Electroluminescence, Edison Technology Center Products introduced in 1992 Luminescence Trademarks Watch brands Lighting Display technology Timex Group
Indiglo
[ "Chemistry", "Engineering" ]
483
[ "Electronic engineering", "Luminescence", "Molecular physics", "Display technology" ]
5,593,281
https://en.wikipedia.org/wiki/Wisconsin%20Integrally%20Synchronized%20Computer
The Wisconsin Integrally Synchronized Computer (WISC) was an early digital computer designed and built at the University of Wisconsin–Madison. Operational in 1954, it was the first digital computer in the state. Pioneering computer designer Gene Amdahl drafted the WISC's design as his PhD thesis. The computer was built over the period 1951-1954. It had 1,024 50-bit words (equivalent to about 6 KB) of drum memory, with an operation time of 1/15 second and throughput of 60 operations per second, which was achieved by an early form of instruction pipeline. It was capable of both fixed and floating point operation. It weighed about . The WISC is part of the permanent collection of the Computer History Museum. References External links Oral history interview with Gene M. Amdahl. Charles Babbage Institute, University of Minnesota, Minneapolis. Amdahl starts by describing his early life and education, recalling his experiences teaching in the Advanced Specialized Training Program during and after World War II. Amdahl discusses his graduate work at the University of Wisconsin and his direction of the design and construction of the Wisconsin Integrally Synchronized Computer. Describes his role in the design of several computers for IBM including the STRETCH, IBM 701, 701A, and IBM 704. He discusses his work with Nathaniel Rochester and IBM's management of the design process for computers. He also mentions his work with Ramo-Wooldridge, Aeronutronic, and Computer Sciences Corporation. Contains Gene Amdahl's PhD thesis and WISC User's Manual Photos: Early computers One-of-a-kind computers
Wisconsin Integrally Synchronized Computer
[ "Technology" ]
332
[ "Computing stubs" ]
5,593,464
https://en.wikipedia.org/wiki/TJ-2
TJ-2 (Type Justifying Program) was published by Peter Samson in May 1963 and is thought to be the first page layout program. Although it lacks page numbering, page headers and footers, TJ-2 is the first word processor to provide a number of essential typographic alignment and automatic typesetting features: Columnation, indentation, margins, justification, and centering Word wrap, page breaks and automatic hyphenation Tab stop simulation Developed from two earlier Samson programs, Justify and TJ-1, TJ-2 was written for the PDP-1 that was donated to the Massachusetts Institute of Technology in 1961 by Digital Equipment Corporation. Taking English text as input, TJ-2 aligns left and right margins, justifying the output using white space and word hyphenation. Text is marked-up with single lowercase characters combined with the PDP-1's overline character, carriage returns, and internal concise codes. The computer's six toggle switches control the input and output devices, enable and disable hyphenation and stop the session. Words can be hyphenated with a light pen on the computer's CRT display and from the session's dictionary in memory. On-screen hyphenation has SAVE and FORGET commands and OOPS, the undo. Comments in the code were quoted thirty years later: "The ways of God are just and can be justified to man" and "Girls who wear pants should be sure that the end justifies the jeans." TJ-2 was succeeded by TYPSET and RUNOFF, a pair of complementary programs written in 1964 for the CTSS operating system. TYPSET and RUNOFF soon evolved into runoff for Multics, which was in turn ported to Unix in the 1970s as roff. A similar program for the ITS PDP-6 and later the PDP-10 was TJ6. See also Colossal Typewriter Desktop publishing Expensive Typewriter Peter Samson Text editor Text Editor and Corrector (TECO) TYPSET and RUNOFF Notes References Transcription of the 1963 memo describing TJ-2, with annotations by Daniel P. B. Smith . Samson begins at 1:16:45. Desktop publishing software Text editors Typesetting software Typesetting Word processors History of software
TJ-2
[ "Technology" ]
476
[ "History of software", "History of computing" ]
5,593,595
https://en.wikipedia.org/wiki/Earth%20structure
An earth structure is a building or other structure made largely from soil. Since soil is a widely available material, it has been used in construction since prehistory. It may be combined with other materials, compressed and/or baked to add strength. Soil is still an economical material for many applications, and may have low environmental impact both during and after construction. Earth structure materials may be as simple as mud, or mud mixed with straw to make cob. Sturdy dwellings may be also built from sod or turf. Soil may be stabilized by the addition of lime or cement, and may be compacted into rammed earth. Construction is faster with pre-formed adobe or mudbricks, compressed earth blocks, earthbags or fired clay bricks. Types of earth structure include earth shelters, where a dwelling is wholly or partly embedded in the ground or encased in soil. Native American earth lodges are examples. Wattle and daub houses use a "wattle" of poles interwoven with sticks to provide stability for mud walls. Sod houses were built on the northwest coast of Europe, and later by European settlers on the North American prairies. Adobe or mud-brick buildings are built around the world and include houses, apartment buildings, mosques and churches. Fujian Tulous are large fortified rammed earth buildings in southeastern China that shelter as many as 80 families. Other types of earth structure include mounds and pyramids used for religious purposes, levees, mechanically stabilized earth retaining walls, forts, trenches and embankment dams. Soil Soil is created from rock that has been chemically or physically weathered, transported, deposited and precipitated. Soil particles include sand, silt and clay. Sand particles are the largest at in diameter and clay the smallest at less than in diameter. Both sand and silt are mostly inert rock particles, including quartz, calcite, feldspar and mica. Clays typically are phyllosilicate minerals with a sheet-like structure. The very small clay particles interact with each other physically and chemically. Even a small proportion of clay affects the physical properties of the soil much more than might be expected. Clays such as kaolinite do not expand or contract when wetted or dried, and are useful for brick-making. Others, such as smectites, expand or contract considerably when wet or dry, and are not suitable for building. Loam is a mix of sand, silt and clay in which none predominates. Soils are given different names depending on the relative proportions of sand, silt and clay such as "Silt Loam", "Clay Loam" and "Silty Clay". Loam construction, the subject of this article, referred to as adobe construction when it uses unfired clay bricks, is an ancient building technology. It was used in the early civilizations of the Mediterranean, Egypt and Mesopotamia, in the Indus, Ganges and Yellow river valleys, in Central and South America. As of 2005 about 1.5 billion people lived in houses built of loam. In recent years, interest in loam construction has revived in the developed world. It is seen as a way to minimize use of fossil fuels and pollution, particularly carbon dioxide, during manufacture, and to create a comfortable living environment through the high mass and high absorption of the material. The two main technologies are stamped or rammed earth, clay or loam, called pise de terre in French, and adobe, typically using sun-dried bricks made of a mud and straw mixture. Materials Earth usually requires some sort of processing for use in construction. 
It may be combined with water to make mud, straw may be added, some form of stabilizing material such as lime or cement may be used to harden the earth, and the earth may be compacted to increase strength. Mud Coursed mud construction is one of the oldest approaches to building walls. Moist mud is formed by hand to make the base of a wall, and allowed to dry. More mud is added and allowed to dry to form successive courses until the wall is complete. With puddled mud, a hand-made mud form is filled with wetter mud and allowed to dry. In Iran, puddled mud walls are called chine construction. Each course is about thick, and about high. Typically the technique is used for garden walls but not for house construction, presumably because of concern about the strength of walls made in this way. A disadvantage to the approach is that a lot of time can be spent waiting for each course to dry. Another technique, used in areas where wood is plentiful, is to build a wood-frame house and to infill it with mud, primarily to provide insulation. In parts of England a similar technique was used with cob. Cob Cob, sometimes referred to as "monolithic adobe", is a natural building material made from soil that includes clay, sand or small stones and an organic material such as straw. Cob walls are usually built up in courses, have no mortar joints and need 30% or more clay in the soil. Cob can be used as in-fill in post-and-beam buildings, but is often used for load bearing walls, and can bear up to two stories. A cob wall should be at least thick, and the ratio of width to height should be no more than one to ten. It will typically be plastered inside and out with a mix of lime, soil and sand. Cob is fireproof, and its thermal mass helps stabilize indoor temperatures. Tests have shown that cob has some resistance to seismic activity. However, building codes in the developed world may not recognize cob as an approved material. Sod or turf Cut sod bricks, called terrone in Spanish, can be used to make tough and durable walls. The sod is cut from soil that has a heavy mat of grass roots, which may be found in river bottom lands. It is stood on edge to dry before being used in construction. European settlers on the North American Prairies found that the sod least likely to deteriorate due to freezing or rain came from dried sloughs. Turf was once extensively used for the walls of houses in Ireland, Scotland and Iceland, where some turf houses may still be found. A turf house may last fifty years or longer if well-maintained in a cold climate. The Icelanders find that the best quality turf is the Strengur, the top of the grass turf. Stabilized earth Clay is usually hard and strong when dry, but becomes very soft when it absorbs water. The dry clay helps hold an earth wall together, but if the wall is directly exposed to rain, or to water leaking down from the roof, it may become saturated. Earth may be "stabilized" to make it more weather resistant. The practice of stabilizing earth by adding burnt lime is centuries old. Portland cement or bitumen may also be added to earth intended for construction which adds strength, although the stabilized earth is not as strong as fired clay or concrete. Mixtures of cement and lime, or pozzolana and lime, may also be used for stabilization. Preferably the sand content of the soil will be 65% – 75%. Soils with low clay content, or with no more than 15% non-expansive clay, are suitable for stabilized earth. The clay percentage may be reduced by adding sand, if available. 
If there is more than 15% clay it may take more than 10% cement to stabilize the soil, which adds to the cost. If earth contains little clay and holds 10% or more cement, it is in effect concrete. Cement is not particularly environmentally friendly, since the manufacturing process generates large amounts of carbon dioxide. Low-density stabilized earth will be porous and weak. The earth must therefore be compacted either by a machine that makes blocks or within the wall using the "rammed earth" technique. Rammed earth Rammed earth is a technique for building walls using natural raw materials such as earth, chalk, lime or gravel. A rammed earth wall is built by placing damp soil in a temporary form. The soil is manually or mechanically compacted and then the form is removed. Rammed earth is generally made without much water, and so does not need much time to dry as the building rises. It is susceptible to moisture, so must be laid on a course that stops rising dampness, must be roofed or covered to keep out water from above, and may need protection through some sort of plaster, paint or sheathing. In China, rammed earth walls were built by the Longshan people in 2600–1900 BC, during the period when cities first appeared in the region. Thick sloping walls made of rammed earth became a characteristic of traditional Buddhist monasteries throughout the Himalayas and became very common in northern Indian areas such as Sikkim. The technique spread to the Middle East, and to North Africa, and the city of Carthage was built of rammed earth. From there the technology was brought to Europe by the Romans. Rammed earth structures may be long lasting. Most of the Great Wall of China was made from rammed earth, as was the Alhambra in the Kingdom of Granada. In Northern Europe there are rammed earth buildings up to seven stories high and two hundred years old. Concrete The Romans made durable concrete strong enough for load-bearing walls. Roman concrete contains a rubble of broken bricks and rocks set in mortar. The mortar included lime and pozzolana, a volcanic material that contributed significantly to its strength. Roman concrete structures such as the Colosseum, completed in 80 AD, still stand. Their longevity may be explained by the fact that the builders used a relatively dry mix of mortar and aggregate and compacted it by pounding it down to eliminate air pockets. Although derived from earth products, concrete structures would not usually be considered earth structures. Building units Mud brick or adobe brick Mudbricks or Adobe bricks are preformed modular masonry units of sun-dried mud that were invented at different times in different parts of the world as civilization developed. Construction with bricks avoids the delays while each course of puddled mud dries. Wall murals show that adobe production techniques were highly advanced in Egypt by 2500 BC. Adobe construction is common throughout much of Africa today. Adobe bricks are traditionally made from sand and clay mixed with water to a plastic consistency, with straw or grass as a binder. The mud is prepared, placed in wooden forms, tamped and leveled, and then turned out of the mold to dry for several days. The bricks are then stood on end to air-cure for a month or more. In the southwest United States and Mexico adobe buildings had massive walls and were rarely more than two stories high. Adobe mission churches were never more than about . Since adobe surfaces are fragile, coatings are used to protect them. 
These coatings, periodically renewed, have included mud plaster, lime plaster, whitewash or stucco. Adobe walls were historically made by laying the bricks with mud mortar, which swells and shrinks at the same rate as the bricks when wetted or dried, heated or cooled. Modern adobe may be stabilized with cement and bonded with cement mortars, but cement mortars will cause unstabilized adobe bricks to deteriorate due to the different rates of thermal expansion and contraction. Compressed earth block Compressed earth blocks (CEB) were traditionally made by using a stick to ram soil into a wooden mold. Today they are usually made from subsoil compressed in a hand-operated or powered machine. In the developing world, manual machines can be a cost-effective solution for making uniform building blocks, while the more complex and expensive motorized machines are less likely to be appropriate. Although labor-intensive, CEB construction avoids the cost of buying and transporting materials. Block-making machines may form blocks that have interlocking shapes to reduce the requirement for mortar. The block may have holes or grooves so rods such as bamboo can be inserted to improve earthquake resistance. Suitable earth must be used, with enough clay to hold the block together and resist erosion, but not too much expansive clay. When the block has been made from stabilized earth, which contains cement, the concrete must be given perhaps three weeks to cure. During this time the blocks should be stacked and kept from drying out by sprinkling water over them. This may be a problem in hot, dry climates where water is scarce. Closely stacking the blocks and covering them with a polythene sheet may help reduce water loss. Earthbags Earthbag construction is a natural building technique that has evolved from historic military construction techniques for bunkers. Local subsoil of almost any composition can be used, although an adobe mix would be preferable. The soil is moistened so it will compact into a stable structure when packed into woven polypropylene or burlap sacks or tubes. Plastic mesh is sometimes used. Polypropylene (pp) sacks are most common, since they are durable when covered, cheap, and widely available. The bags are laid in courses, with barbed wire between each course to prevent slipping. Each course is tamped after it is laid. The structure in pp bags is similar to adobe but more flexible. With mesh tubing the structure is like rammed earth. Earthbags may be used to make dome-shaped or vertical wall buildings. With soil stabilization they may also be used for retaining walls. Fired clay brick The technique of firing clay bricks in a kiln dates to about 3500 BC. Fired bricks were being used to build durable masonry across Europe, Asia and North Africa by 1200 BC and still remain an important building material. Modern fired clay bricks are formed from clays or shales, shaped and then fired in a kiln for 8–12 hours at a temperature of 900–1150 °C. The result is a ceramic that is mainly composed of silica and alumina, with other ingredients such as quartz sand. The porosity of the brick depends on the materials and on the firing temperature and duration. The bricks may vary in color depending on the amount of iron and calcium carbonate in the materials used, and the amount of oxygen in the kiln. Bricks may decay due to crystallization of salts on the brick or in its pores, from frost action and from acidic gases. 
Bricks are laid in courses bonded with mortar, a combination of Portland cement, lime and sand. A wall that is one brick thick will include stretcher bricks with their long, narrow side exposed and header bricks crossing from side to side. There are various brickwork "bonds", or patterns of stretchers and headers, including the English, Dutch and Flemish bonds. Examples Earth sheltering Earth sheltering has been used for thousands of years to make energy-efficient dwellings. There are various configurations. At one extreme, an earth sheltered dwelling is completely underground, with perhaps an open courtyard to provide air and light. An earth house may be set into a slope, with windows or door openings in one or more of its sides, or the building may be on ground level, but with earth mounded against the walls, and perhaps with an earth roof. Pit houses made by Hohokam farmers between 100 and 900 AD, in what is now the southwest of the US, were bermed structures, partially embedded in south-facing slopes. Their successful design was used for hundreds of years. At Matmata, Tunisia, most of the ancient homes were built below ground level, and surrounded courtyards about square. The homes were reached through tunnels. Other examples of subterranean, semi-subterranean or cliff-based dwellings in both hot and cold climates are found in Turkey, northern China and the Himalayas, and the southwest USA. A number of Buddhist monasteries built from earth and other materials into cliff sides or caves in Himalayan areas such as Tibet, Bhutan, Nepal and northern India are often perilously placed. Starting in the 1970s, interest in the technique revived in developed countries. By setting an earth house into the ground, the house will be cooler in the warm season and warmer in the cool season. Native American earth lodge An earth lodge is a circular building made by some of the Native Americans of North America. It has wood post-and-beam construction and is dome-shaped. A typical structure would have four or more central posts planted in the ground and connected at the top by cross beams. The smoke hole would be left open in the center. Around the central structure there was a larger ring of shorter posts, also connected by cross beams. Rafters radiated from the central cross beams to the outside cross beams, and then split planks or beams formed the slanting or vertical side walls. The structure was covered by sticks and brush or grass, covered in turn by a heavy layer of earth or sod. Some groups plastered the whole structure with mud, which dried to form a shell. Wattle and daub Wattle and daub is an old building technique in which vines or smaller sticks are interwoven between upright poles, and then mud mixed with straw and grass is plastered over the wall. The technique is found around the world, from the Nile Delta to Japan, where bamboo was used to make the wattle. In Cahokia, now in Illinois, USA, wattle and daub houses were built with the floor lowered by below the ground. A variant of the technique is called bajareque in Colombia. In prehistoric Britain simple circular wattle and daub shelters were built wherever adequate clay was available. Wattle and daub is still found as the panels in timber-framed buildings. Generally the walls are not structural, and in the developed world the technique was replaced for interior use by lath and plaster, and then by gypsum wallboard. 
Prairie sod house European pioneer farmers in the prairies of North America, where there was no wood for construction, often made their first home in a dug-out cave in the side of a hill or ravine, with a covering over the entrance. When they had time, they would build a sod house. The farmer would use a plow to cut the sod into bricks, which were then piled up to form the walls. The sod strips were piled grass-side down, staggered in the same way as brickwork, in three side-by-side rows, resulting in a wall over thick. The sod wall was built around door and window frames, and the corners of the wall were secured by rods driven vertically through them. The roof was made with poles or brush, covered with prairie grass, and then sealed with a layer of sod. Sod houses were strong and often lasted many years, but they were damp and dirty unless the interior walls were plastered. The roofs tended to leak, and sometimes collapsed in a rainstorm. Mud brick buildings There are innumerable examples of mud brick or adobe building around the world. The walled city of Shibam in Yemen, designated a World Heritage Site in 1982, is known for its ten-story unreinforced mud-brick buildings. The Djinguereber Mosque of Timbuktu, Mali, was first built at the start of the 14th century AD (8th century AH) from round mud bricks and a stone-mud mixture, and was rebuilt several times afterwards, steadily growing in size. Further south in Mali, the Great Mosque of Djenné, a dramatic example of Sahel mudbrick architecture, was built in 1907, based on the design of an earlier Great Mosque first built on the site in 1280. Mudbrick requires maintenance, and the fundamentalist ruler Seku Amadu had let the previous mosque collapse. The Casa Grande Ruins, now a national monument in Arizona protected by a modern roof, is a massive four-story adobe structure built by Hohokam people between 1200 and 1450 AD. The first European to record the great house was a Jesuit priest, Father Eusebio Kino, who visited the site in 1694. At that time it had long been abandoned. By the time a temporary roof was installed in 1903, the adobe building had been standing empty and unmaintained for hundreds of years. Huaca de la Luna in what is now northern Peru is a large adobe temple built by the Moche people. The building went through a series of construction phases, growing eventually to a height of about , with three main platforms, four plazas and many smaller rooms and enclosures. The walls were covered by striking multi-colored murals and friezes; those visible today date from about 400–610 AD. Tulous A Fujian tulou is a type of rural dwelling of the Hakka people in the mountainous areas in southeastern Fujian, China. They were mostly built between the 13th and the 20th centuries. A tulou is a large, enclosed and fortified earth building, rectangular or circular, with very thick load-bearing rammed earth walls between three and five stories high. A tulou might house up to 80 families. Smaller interior buildings are often enclosed by these huge peripheral walls, which can contain halls, storehouses, wells and living areas. The structure resembles a small fortified city. The walls are formed by compacting earth mixed with stone, bamboo, wood and other readily available materials, and are to thick. The result is a well-lit, well-ventilated, windproof and earthquake-proof building that is warm in winter and cool in summer. 
Mounds and pyramids Ziggurats were elevated temple platforms constructed by the Sumerians between the end of the 4th millennium BC and the 2nd millennium BC, rising in a series of terraces to a temple up to above ground level. The Ziggurat of Ur contained about three million bricks, none more than in length, so construction would have been a huge project. The largest ziggurat was in Babylon, and is thought by some to be the Tower of Babel mentioned in the Bible. It was destroyed by Alexander the Great and only the foundations remain, but originally it stood high on a base about square. Sun-dried bricks were used for the interior and kiln-fired bricks for the facing. The bricks were held together by clay or bitumen. Many pre-Columbian Native American societies of ancient North America built large pyramidal earth structures known as platform mounds. Among the largest and best-known of these structures is Monks Mound at the site of Cahokia in what became Illinois, completed around 1100 AD, which has a base larger than that of the Great Pyramid at Giza. Many of the mounds underwent multiple episodes of mound construction at periodic intervals, some becoming quite large. They are believed to have played a central role in the mound-building peoples' religious life; documented uses include semi-public chief's house platforms, public temple platforms, mortuary platforms, charnel house platforms, earth lodge/town house platforms, residence platforms, square ground and rotunda platforms, and dance platforms. The Pyramid of the Sun in Teotihuacan, Mexico, was started in 100 AD. The stone-faced structure contains two million tons of rammed earth. Earthworks Earthworks are engineering works created through moving or processing quantities of soil or unformed rock. The material may be moved to another location and formed into a desired shape for a particular purpose. Levees, embankments and dams are types of earthwork. A levee, floodbank or stopbank is an elongated natural ridge or artificially constructed dirt-fill wall that regulates water levels. It is usually earthen and often runs parallel to the course of a river in its floodplain or along low-lying coastlines. Mechanically stabilized earth (MSE) retaining walls may be used for embankments. MSE walls combine a concrete leveling pad, wall facing panels, coping, soil reinforcement and select backfill. A variety of designs of wall facing panels may be used. After the leveling pad has been laid and the first row of panels has been placed and braced, the first layer of earth backfill is brought in behind the wall and compacted. The first set of reinforcements is then laid over the earth. The reinforcements, which may be tensioned polymer or galvanized metal strips or grids, are attached to the facing panels. This process is repeated with successive layers of panels, earth and reinforcements. The panels are thus tied into the earth embankment to make a stable structure with balanced stresses. Although construction using the basic principles of MSE has a long history, MSE was developed in its current form in the 1960s. The reinforcing elements used can vary but include steel and geosynthetics. The term MSE is usually used in the US to distinguish it from "Reinforced Earth", a trade name of the Reinforced Earth Company, but elsewhere Reinforced Soil is the generally accepted term. MSE construction is relatively fast and inexpensive, and although labor-intensive, it does not demand high levels of skill. It is therefore suitable for developing as well as developed countries. 
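The layered build-up of an MSE wall described above lends itself to a rough back-of-the-envelope check. The following Python sketch is not from the article and is not a design method; it only illustrates the standard Rankine estimate of the tension each reinforcement strip must resist at a given depth, and all parameter values are hypothetical.

```python
import math

def rankine_ka(phi_deg: float) -> float:
    """Rankine active earth pressure coefficient, Ka = tan^2(45 - phi/2)."""
    return math.tan(math.radians(45.0 - phi_deg / 2.0)) ** 2

def strip_tensions(wall_height_m, gamma_kn_m3, phi_deg, sv_m, sh_m):
    """Approximate peak tension (kN) in one reinforcement strip per layer.

    Each strip resists the horizontal earth pressure acting over its
    tributary area (vertical spacing sv x horizontal spacing sh).
    Surcharge loads, pullout checks and safety factors are ignored.
    """
    ka = rankine_ka(phi_deg)
    layers = []
    z = sv_m  # first reinforcement layer one vertical spacing below the top
    while z <= wall_height_m + 1e-9:
        sigma_h = ka * gamma_kn_m3 * z    # horizontal stress at depth z, kPa
        layers.append((round(z, 2), round(sigma_h * sv_m * sh_m, 1)))
        z += sv_m
    return layers

# Hypothetical wall: 6 m high, 18 kN/m^3 backfill, 34 degree friction angle,
# strips spaced 0.75 m vertically and 1.0 m horizontally.
for depth, tension in strip_tensions(6.0, 18.0, 34.0, 0.75, 1.0):
    print(f"depth {depth} m: strip tension ~ {tension} kN")
```

Real MSE design adds surcharge, pullout resistance and safety factors; the point of the sketch is only that strip tension grows roughly linearly with depth, which is why the lower layers of reinforcement carry the most load.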
Forts and trenches Earth has been used to construct fortifications for thousands of years, including strongholds and walls, often protected by ditches. Aerial photography in Europe has revealed traces of earth fortifications from the Roman era and later medieval times. Offa's Dyke is a huge earthwork that stretches along the disputed border between England and Wales. Little is known about its construction; it is traditionally attributed to King Offa of Mercia, who died in 796 AD. An early timber and earth fortification might later be succeeded by a brick or stone structure on the same site. Trenches were used by besieging forces to approach a fortification while protected from missiles. Sappers would build "saps", or trenches, that zig-zagged towards the fortress being attacked. They piled the excavated dirt to make a protective wall, or filled it into gabions (earth-filled baskets). The combined trench depth and gabion height might be . Sometimes the sap was a tunnel, dug several feet below the surface. Sappers were highly skilled and highly paid due to the extreme danger of their work. In the American Civil War (1861−1865) trenches were used for defensive positions throughout the struggle, but played an increasingly important role in the campaigns of the last two years. Military earthworks perhaps culminated in the vast network of trenches built during World War I (1914−1918) that stretched from Switzerland to the North Sea by the end of 1914. The two lines of trenches faced each other, manned by soldiers living in appalling conditions of cold, damp and filth. Conditions were worst in the Allied trenches. The Germans were more willing to accept the trenches as long-term positions, and used concrete blocks to build secure shelters deep underground, often with electrical lighting and heating. Embankment dams An embankment dam is a massive artificial water barrier. It is typically created by placing and compacting a semi-plastic mound of soil, sand, clay and/or rock in various combinations. It has a semi-permanent waterproof covering on its surface and a dense, waterproof core, which makes the dam resistant to surface and seepage erosion. The weight of the impounded water adds a downward thrust on the mass of the dam, increasing the load on its foundation. This added force helps seal the underlying foundation at the interface between the dam and its stream bed. Such a dam is composed of fragmented, independent material particles, bound together into a stable mass by friction and particle interlock rather than by a cementing substance. The Syncrude Mildred Lake Tailings Dyke in Alberta, Canada, is an embankment dam about long and from high. By volume of fill, as of 2001 it was believed to be the largest earth structure in the world. Structural issues Designing for Earthquakes Regions with low seismic risk are safe for most earth buildings, but historic construction techniques often cannot resist even medium earthquake levels effectively, because earth has three highly undesirable qualities as a seismic building material: it is relatively "weak, heavy and brittle". However, earthen buildings can be built to resist seismic loads. Key factors in improved seismic performance are soil strength, construction quality, robust layout and seismic reinforcement. Stronger soils make stronger walls. Adobe builders can test cured blocks for strength by dropping them from a specific height or by breaking them with a lever. 
Builders using in-place techniques like earthbag, cob, or rammed earth may prefer approximate crushing tests on smaller samples that can be oven-dried and crushed under a small lever. Builders must understand construction processes and be able to produce consistent quality for strong buildings. Robust layout means buildings that are more square than elongated and symmetrical rather than L-shaped, with no "soft" first stories (stories with large windows, or buildings set on unbraced columns). New Zealand's earthen building guidelines check for enough bracing wall length in each of the two principal directions, based on wall thickness, story height, bracing wall spacing, and the roof, loft and second story weight above earthen walls. Seismic-Resistant Construction Techniques Building techniques that are more ductile than brittle, like the contained earth type of earthbag, or tire walls of earthships, may better avoid collapse than brittle unreinforced earth. Contained gravel base courses may add base isolation potential. Wall containment can be added to techniques like adobe to resist loss of material that leads to collapse. Confined masonry, effective for adobe against quake forces of 0.3 g, may also be useful with other earthen masonry. Many types of reinforcement can increase wall strength, such as plastic or wire mesh and reinforcing rods of steel, fiberglass or bamboo. Earth resists compression well but is weak in tension. Tensile reinforcement must span potential damage points and be well-anchored to increase out-of-plane stability. Bond beams at wall tops are vital and must be well attached to walls. Builders should be aware that organic reinforcements embedded in walls may decay before the end of the building's life. Attachment details of reinforcement are critical to resist higher forces. The best adobe shear strength has come from horizontal reinforcement attached directly to vertical rebar spanning from footing to bond beam. Interlaced wood in earthen walls reduces quake damage if the wood is not damaged by dry rot or insects. Timber lacing includes the finely webbed dhajji type, among others. See also , sometimes considered earthen architecture , Chinese cave dwellings Notes References Citations Sources
Earth structure
[ "Engineering" ]
6,159
[ "Construction", "Earth structures" ]
5,593,864
https://en.wikipedia.org/wiki/Jonathan%20B.%20Tucker
Jonathan B. Tucker (August 2, 1954 – July 31, 2011) was an American political scientist and expert on chemical and biological weapons. Early life and education Tucker was born on August 2, 1954, in Boston, Massachusetts, to Deborah Tucker. Tucker earned a B.S. in biology from Yale University and a Ph.D. in political science (focusing on defense and arms control studies) from MIT. Career After finishing his studies, Tucker worked as an arms control specialist for the Congressional Office of Technology Assessment, the U.S. Arms Control & Disarmament Agency, and the U.S. State Department. He was an editor at High Technology and Scientific American magazines and wrote about military technologies, biotechnology, and biomedical research. Tucker was a UN biological weapons inspector in Iraq in February 1995. From 1996, he served as founding director of the Chemical and Biological Weapons Nonproliferation Program at the James Martin Center for Nonproliferation Studies of the Monterey Institute of International Studies, and then served as a senior fellow in its Washington office. He was a professional staff member for the bipartisan Commission on the Prevention of WMD Proliferation and Terrorism, which published World at Risk, a volume critical of US prevention strategies for post-9/11 terrorism. In 2010, Tucker spent a semester teaching and researching at the TU Darmstadt in Germany as an endowed professor of peace and security studies, and most recently was a senior fellow at the Federation of American Scientists in Washington, D.C. Death On July 31, 2011, Tucker was found dead in his home in Washington, D.C. He was 56. Published works Articles Books (editor) References External links 1954 births 2011 deaths Yale College alumni MIT School of Humanities, Arts, and Social Sciences alumni People related to biological warfare Academic staff of Technische Universität Darmstadt
Jonathan B. Tucker
[ "Biology" ]
370
[ "People related to biological warfare", "Biological warfare" ]
5,594,272
https://en.wikipedia.org/wiki/Slush
Slush, also called slush ice, is a slurry mixture of small ice crystals (e.g. snow) and liquid water. In the natural environment, slush forms when ice or snow melts or during mixed precipitation. Slush often mixes with dirt and other pollutants on the surface, resulting in a gray or muddy brown color. Solid ice or snow can often block the drainage of fluid water from slushy areas, so slush often goes through multiple freeze/thaw cycles before being able to completely drain and disappear. In areas where road salt is used to clear roadways, slush forms at lower temperatures in salted areas than it would ordinarily. This can produce a number of different consistencies over the same geographical area, with scattered salted areas covered with slush and others covered with frozen precipitation. Hazards Slush behaves as a non-Newtonian fluid: it acts like a mostly solid mass until its internal shear stresses rise beyond a specific threshold, beyond which it can very suddenly become fluid. This makes its behavior very difficult to predict. This mechanism underlies slush avalanches and their unpredictability, giving them hidden potential to become a natural hazard if caution is not taken. Slush can also be a problem on an aircraft runway, since the drag of excess slush acting on the aircraft's wheels can slow the aircraft during takeoff and prevent it from reaching takeoff speed, which can cause an accident such as the Munich air disaster. Slush on roads can also make roads slippery and increase the braking distances for cars and trucks, increasing the possibility of rear-end crashes and other road accidents. Slush can refreeze and become hazardous to vehicles and pedestrians. In some cases, though, slush can be beneficial: when snow falls onto slush, it partially melts and becomes slush on contact, which prevents roads from becoming too congested with snow or sleet. References Snow or ice weather phenomena Forms of water Water ice
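The yield-stress behavior described under Hazards is commonly idealized in rheology by the Bingham plastic model; associating slush with this model here is an illustrative assumption rather than a claim from the article:

\[
\dot{\gamma} = 0 \quad \text{for } \tau \le \tau_y, \qquad \tau = \tau_y + \mu_p \,\dot{\gamma} \quad \text{for } \tau > \tau_y
\]

where \(\tau\) is the applied shear stress, \(\tau_y\) the yield stress below which the material behaves as a solid, \(\mu_p\) the plastic viscosity, and \(\dot{\gamma}\) the shear rate. The sudden onset of flow once \(\tau\) exceeds \(\tau_y\) mirrors the abrupt fluidization that makes slush avalanches hard to predict.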
Slush
[ "Physics", "Chemistry" ]
402
[ "Forms of water", "Phases of matter", "Matter" ]
5,594,444
https://en.wikipedia.org/wiki/FlashCP
FlashCP was a copy prevention technology for the storage of electronic materials (e.g. e-books). Originally developed under the trademark "BookLocker", the technology was acquired by SanDisk in 2005 with the purchase of "MDRM", a privately held Israeli company. FlashCP was primarily used on USB flash drives to give students storage for copyrighted material while preventing them from sharing it. This was done through Windows software that had to be installed to use the FlashCP capability of the drive; that software interfaced with proprietary software on the flash drive itself. SanDisk manufactured a flash drive using the FlashCP technology, the 256MB Cruzer "Freedom Drive". References Digital rights management systems
FlashCP
[ "Technology" ]
146
[ "Computing stubs", "Computer hardware stubs" ]
5,594,657
https://en.wikipedia.org/wiki/Starrucca%20Viaduct
Starrucca Viaduct is a stone arch bridge that spans Starrucca Creek near Lanesboro, Pennsylvania, in the United States. Completed in 1848 at a cost of $320,000, it was at the time the world's largest stone railway viaduct and was thought to be the most expensive railway bridge as well. Still in use, the viaduct is listed on the National Register of Historic Places and is designated as a National Historic Civil Engineering Landmark. History 19th century The viaduct was designed by Julius W. Adams and James P. Kirkwood and built in 1847–48 by the New York and Erie Railroad, of locally-quarried random ashlar bluestone, except for three brick interior longitudinal spandrel walls and the concrete bases of the piers. This may have been the first structural use of concrete in American bridge construction. It was built to solve an engineering problem posed by the wide valley of Starrucca Creek. The railroad considered building an embankment, but abandoned the idea as impractical. The Erie Railroad was well-financed by British investors but, even with money available, most American contractors at the time were incapable of the task. Julius W. Adams, the superintending engineer of construction in the area, hired James P. Kirkwood, a civil engineer who had worked on the Long Island Rail Road. Accounts differ as to whether Kirkwood worked on the bridge himself, or whether Adams was responsible for the plans with Kirkwood working as a subordinate. The lead stonemason, Thomas Heavey, an Irish immigrant from County Offaly, worked on other projects for Kirkwood, primarily in New England. It took 800 workers, each paid about $1 per day, to complete the bridge in a year. The falsework for the bridge required more than half a million feet of cord and hewn timbers. The original single broad gauge track was replaced by two standard gauge tracks in 1886. The roadbed deck under the tracks was reinforced with a layer of concrete in 1958. The bridge has been in continual use for more than a century and a half. 20th century The viaduct was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1973 and was listed on the National Register of Historic Places in 1975. 21st century In 2005, the Norfolk Southern Railway leased the portion of the line from Port Jervis to Binghamton, New York to the Delaware Otsego Corporation, which operates it under the name Central New York Railway. The only railroad currently using it is Delaware Otsego's New York, Susquehanna and Western Railway. See also List of bridges documented by the Historic American Engineering Record in Pennsylvania List of Erie Railroad structures documented by the Historic American Engineering Record List of Pennsylvania state historical markers in Susquehanna County References American Society of Civil Engineers, Reston, VA. "Starrucca Viaduct." Historic Civil Engineering Landmarks. Accessed 2022-01-26. 
External links Bridges to the Future at Susquehanna County, Pennsylvania's website Starrucca Viaduct at ASCE Civil Engineering Landmarks Starrucca Viaduct at Bridges & Tunnels Solid as a Rock: The Starrucca Viaduct at Literary & Cultural Heritage Map of PA Bridges completed in 1848 Railroad bridges on the National Register of Historic Places in Pennsylvania Historic Civil Engineering Landmarks Viaducts in the United States Bridges in Susquehanna County, Pennsylvania Historic American Engineering Record in Pennsylvania Erie Railroad bridges National Register of Historic Places in Susquehanna County, Pennsylvania Stone arch bridges in the United States
Starrucca Viaduct
[ "Engineering" ]
705
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
5,595,712
https://en.wikipedia.org/wiki/Digital%20cross-connect%20system
A digital cross-connect system (DCS or DXC) is a piece of circuit-switched network equipment, used in telecommunications networks, that allows lower-level TDM bit streams, such as DS0 bit streams, to be rearranged and interconnected among higher-level TDM signals, such as DS1 bit streams. DCS units are available that operate on both older T-carrier/E-carrier bit streams and newer SONET/SDH bit streams. DCS devices can be used for "grooming" telecommunications traffic, switching traffic from one circuit to another in the event of a network failure, supporting automated provisioning, and other applications. Having a DCS in a circuit-switched network provides important flexibility that can otherwise only be obtained at higher cost using manual "DSX" cross-connect patch panels. DCS devices "switch" traffic, but they are not packet switches: they switch circuits, not packets, and the circuit arrangements they are used to manage tend to persist over very long time spans, typically months or longer, as compared to packet switches, which can route every packet differently and operate on micro- or millisecond time spans. DCS units are also sometimes colloquially called "DACS" units, after a proprietary brand name of DCS units created and sold by AT&T's Western Electric division, now Alcatel-Lucent. Modern digital access and cross-connect systems are not limited to the T-carrier system, and may accommodate high data rates such as those of SONET. Transmuxing Transmuxing (from "transcode multiplexing") is a conversion between two telecommunications signaling formats, typically between synchronous optical network (SONET) signals and various time-division multiplexing (TDM) signals. Transmuxing changes the "container" without changing the "contents." Transmuxing provides the carrier with the capability to move a telecommunications signal from one logical TDM circuit to another within SONET without physically breaking the TDM circuit down into its components and reconstructing it. There are two types of transmuxing: electrical transmuxing and optical transmuxing (sometimes called portless transmuxing). In electrical transmuxing, TDM signals (typically DS1/T1 or DS3) are brought in using copper connections, transmuxed to SONET and transported across the network until the reverse occurs. In optical transmuxing, TDM signals (DS1/T1, DS3, OCx) are brought in using fiber optics, transmuxed to SONET and transported across the network until the reverse occurs. In the U.S. and Japan, DS1/T1 signals are transmuxed into a SONET virtual tributary called a VT1.5. Traffic grooming Traffic grooming is the process of grouping smaller telecommunications signals into larger ones. This is typically done to minimize the number of connections and circuits needed, optimizing the total cost. In TDM, 24 DS0 signals are grouped into a DS1/T1 signal and 28 DS1/T1 signals are groomed into a DS3 signal. A single DS3 signal carries 44.736 Mbit/s of data (672 DS0) and can be sent using a single cable. Circuit switching Circuit switching is the process of redirecting data signals from one input location to another. Mixed traffic handling In a Central Office DCS system, all kinds of signals connect into a DCS. Common signals connecting to a DCS are electrical (DS1, DS3) and optical (OCx: OC3, OC12, OC48, and OC192). The DCS must be able to groom the traffic, economically and quickly, at the most efficient and desired levels. This is performed at the lowest level possible: the DS1 level (or VT1.5) is preferred. 
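The grooming arithmetic above (24 DS0 per DS1, 28 DS1 per DS3, hence 672 DS0 per DS3) can be made concrete with a short sketch. This Python example is illustrative only and not part of any DCS vendor's software; the function name is invented.

```python
import math

DS0_PER_DS1 = 24                          # 24 x 64 kbit/s channels per DS1/T1
DS1_PER_DS3 = 28                          # 28 DS1/T1s groomed into one DS3
DS0_PER_DS3 = DS0_PER_DS1 * DS1_PER_DS3   # 672, as stated in the article

def groom(ds0_circuits: int) -> dict:
    """How many DS1s and DS3s are needed to carry a given number of DS0s."""
    ds1 = math.ceil(ds0_circuits / DS0_PER_DS1)
    ds3 = math.ceil(ds1 / DS1_PER_DS3)
    fill = ds0_circuits / (ds3 * DS0_PER_DS3)   # fraction of DS3 capacity used
    return {"ds1_needed": ds1, "ds3_needed": ds3, "ds3_fill": round(fill, 3)}

print(groom(1500))   # {'ds1_needed': 63, 'ds3_needed': 3, 'ds3_fill': 0.744}
```

The fill figure shows why grooming matters: packing partially used DS1s together raises the utilization of the expensive higher-level facilities.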
A SONET 3/1 DCS will transmux and carry DS3 signals as STS-1 signals and groom TDM DS1/T1s using VT1.5 signals. The Central Office is where signals are generally switched and groomed, routing DS1s that need to be mapped to other optical or electrical signals in order to reach different equipment or to be sent along to other Central Offices. If an electrical DS3 is received, it is connected to an electrical transmux port in the DCS, where it is converted from a DS3 and demultiplexed back down to the DS1 level (28 DS1s); overhead is added to the DS1s to make them VT1.5s, and the VT1.5s are put into an STS-1 and sent to the DCS matrix as a VT mapped STS-1. If a DS3 is delivered to the Central Office inside an STS-1 (a DS3 mapped STS-1) carried in an OCx signal, the OCx is connected to the DCS, where the DS3 mapped STS-1 is optically transmuxed and converted to a VT mapped STS-1 inside the DCS, without terminating the electrical signal, and sent to the DCS matrix as a VT mapped STS-1. In the DCS VT matrix, the VT1.5s are groomed from any VT mapped STS-1 to any other VT mapped STS-1s that are provisioned in the DCS VT matrix. In diagram A, a Traverse DCS is shown receiving mixed traffic into I/O shelves. In those I/O shelves, the signals are prepared to be sent to the central matrix shelf as VT mapped STSs. In the case of receiving an electrical DS3, where 28 DS1s were muxed into a DS3 by means of an external M13 multiplexer (like a WideBank28 or TransAccess200), it will connect to an electrical transmux port on the I/O shelf to be electrically transmuxed. When a DS3 is connected to an I/O shelf via an optical OCx signal, the I/O shelf will optically transmux the DS3. All the VT mapped STSs from an I/O shelf are then sent to the central DCS matrix shelf, where VT1.5s (DS1s) are groomed directly from one VT mapped STS-1 to another in the VT matrix and sent back out to an I/O shelf for further routing. See also Optical cross-connect References Cisco Technical Glossary iQor MarketPlace web site Network architecture Telecommunications equipment Cross connect system
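Conceptually, the VT-level grooming described above amounts to maintaining a long-lived map from incoming timeslots to outgoing timeslots. The toy class below is a hypothetical illustration of that idea, not a model of any real DCS or its provisioning interface; all port names are invented.

```python
class CrossConnect:
    """Toy DCS: a persistent map of (port, VT1.5 slot) -> (port, VT1.5 slot)."""

    def __init__(self):
        self.circuits = {}

    def provision(self, src, dst):
        """Create a long-lived circuit grooming one VT1.5 from src to dst."""
        if src in self.circuits:
            raise ValueError(f"{src} is already cross-connected")
        self.circuits[src] = dst

    def route(self, src):
        """Follow the provisioned circuit for an incoming timeslot."""
        return self.circuits[src]

dcs = CrossConnect()
# Groom one DS1 (as a VT1.5) from slot 5 of one facility to slot 17 of another.
dcs.provision(("OC12-1", 5), ("OC48-2", 17))
print(dcs.route(("OC12-1", 5)))   # -> ('OC48-2', 17)
```

Unlike a packet switch's per-packet forwarding decisions, entries in such a map are provisioned once and then typically persist for months, which is the distinction the article draws.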
Digital cross-connect system
[ "Technology", "Engineering" ]
1,367
[ "Network architecture", "Digital systems", "Information systems", "Computer networks engineering" ]
5,595,823
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20Visual%20Studio%20add-ins
The following is a list of notable Microsoft Visual Studio Add-ins. Add-ins are software products designed to be used in conjunction with and extend Microsoft Visual Studio. There are many versions of Microsoft Visual Studio, so some of these products may not be compatible with all versions of the product. Managed add-ins are typically found in the following location on Windows Vista and higher: C:\Users\{username}\Documents\Visual Studio {version}\Addins. COM-based add-ins can be installed anywhere as their directories are specified via the registry. IDE add-ins Source control support AnkhSVN – Provides a free Subversion client for Visual Studio VisualSVN – Subversion integration for Visual Studio 2003/2005/2008/2010/2012/2013/2015/2017 VsTortoise – A free TortoiseSVN add-in for Microsoft Visual Studio 2008/2010/2012/2013 Refactoring and productivity Visual Assist X – Productivity plugin, like ReSharper. Notable for C++ support Other PVS-Studio – Static code analyzer for C#, C, C++, C++11, and C++/CX. Supports Visual Studio 2005/2008/2010/2012/2013/2015/2017. Designbox – Adds a toolbox that lets you associate initial property values with controls Koders – Adds a search plug-in to search the Koders database Reflector – A code browsing utility Dotfuscator – Provides tools to help prevent reverse engineering VSdocman – Visual Studio code commenter (XML doc comments) and API documentation generator. For VS 2010/2008/2005/2003/2002. XMLSpy – Integrates the XMLSpy IDE into Visual Studio for editing XML, XSLT, XSD, XQuery, XBRL, OOXML, etc. Liquid XML Studio – Integrates Liquid XML Tools: XML Schema Editor, WSDL Editor, XPath Expression Builder, and Web Services Test Client into Visual Studio 2005/2008/2010 AWS Toolkit for Visual Studio Code Language add-ins Ada programming language ActionScript 3, for building Flash applications Boo programming language Eiffel programming language F# programming language Oxygene programming language Python Tools for Visual Studio PHP Tools for Visual Studio References Microsoft Visual Studio Visual Studio Add-ins
List of Microsoft Visual Studio add-ins
[ "Technology" ]
481
[ "Computing-related lists", "Microsoft lists" ]
5,595,981
https://en.wikipedia.org/wiki/P21
p21Cip1 (alternatively p21Waf1), also known as cyclin-dependent kinase inhibitor 1 or CDK-interacting protein 1, is a cyclin-dependent kinase inhibitor (CKI) that is capable of inhibiting all cyclin/CDK complexes, though it is primarily associated with inhibition of CDK2. p21 represents a major target of p53 activity and thus is associated with linking DNA damage to cell cycle arrest. This protein is encoded by the CDKN1A gene, located on chromosome 6 (6p21.2) in humans. Function CDK inhibition p21 is a potent cyclin-dependent kinase inhibitor (CKI). The p21 (CIP1/WAF1) protein binds to and inhibits the activity of cyclin-CDK2, -CDK1, and -CDK4/6 complexes, and thus functions as a regulator of cell cycle progression at G1 and S phase. The binding of p21 to CDK complexes occurs through p21's N-terminal domain, which is homologous to the other CIP/KIP CDK inhibitors p27 and p57. Specifically, it contains a Cy1 motif in the N-terminal half and a weaker Cy2 motif in the C-terminal domain, which allow it to bind CDK in a region that blocks its ability to complex with cyclins and thus prevent CDK activation. Experiments looking at CDK2 activity within single cells have also shown p21 to be responsible for a bifurcation in CDK2 activity following mitosis: cells with high p21 enter a G0/quiescent state, whilst those with low p21 continue to proliferate. Follow-up work found evidence that this bistability is underpinned by double negative feedback between p21 and CDK2, where CDK2 inhibits p21 via ubiquitin ligase activity. PCNA inhibition p21 interacts with proliferating cell nuclear antigen (PCNA), a DNA polymerase accessory factor, and plays a regulatory role in S phase DNA replication and DNA damage repair. Specifically, p21 has a high affinity for the PIP-box binding region on PCNA; binding of p21 to this region is proposed to block the binding of processivity factors necessary for PCNA-dependent S-phase DNA synthesis, but not PCNA-dependent nucleotide excision repair (NER). As such, p21 acts as an effective inhibitor of S-phase DNA synthesis while permitting NER, leading to the proposal that p21 acts to preferentially select polymerase processivity factors depending on the context of DNA synthesis. Apoptosis inhibition The p21 protein has been reported to be specifically cleaved by CASP3-like caspases, cleavage which leads to a dramatic activation of CDK2 and may be instrumental in the execution of apoptosis following caspase activation. However, p21 may inhibit apoptosis and does not induce cell death on its own. The ability of p21 to inhibit apoptosis in response to replication fork stress has also been reported. Regulation p53 dependent response Studies of p53-dependent cell cycle arrest in response to DNA damage identified p21 as the primary mediator of downstream cell cycle arrest. Notably, El-Deiry et al. identified a protein, p21 (WAF1), which was present in cells expressing wild type p53 but not in those with mutant p53; moreover, constitutive expression of p21 led to cell cycle arrest in a number of cell types. Dulcic et al. also found that γ-irradiation of fibroblasts induced a p53- and p21-dependent cell cycle arrest, with p21 found bound to inactive cyclin E/CDK2 complexes. Work in mouse models also showed that whilst mice lacking p21 were healthy, spontaneous tumours developed, and G1 checkpoint control was compromised in cells derived from these mice. Taken together, these studies thus defined p21 as the primary mediator of p53-dependent cell cycle arrest in response to DNA damage. 
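The double negative feedback between p21 and CDK2 described above can be caricatured with a deliberately minimal toy model. The equations and parameter values below are invented for illustration and are not drawn from the cited single-cell studies; they only show how mutual antagonism can yield two coexisting steady states.

```python
# Toy double-negative feedback: p21 inhibits CDK2 activity (Hill function),
# while CDK2 activity accelerates p21 degradation (standing in for the
# ubiquitin-ligase route mentioned above). All numbers are invented.
s, d0, d1 = 1.0, 0.1, 1.0    # p21 synthesis; basal and CDK2-driven degradation
K, n, c_max = 3.0, 6, 1.0    # inhibition threshold, steepness, max CDK2 activity

def cdk2_activity(p21):
    return c_max / (1.0 + (p21 / K) ** n)

def simulate(p21_start, t_end=200.0, dt=0.01):
    """Forward-Euler integration of dp/dt = s - (d0 + d1*CDK2(p)) * p."""
    p = p21_start
    for _ in range(int(t_end / dt)):
        p += dt * (s - (d0 + d1 * cdk2_activity(p)) * p)
    return p

for p0 in (0.5, 5.0):
    p_final = simulate(p0)
    print(f"p21(0)={p0}: p21 -> {p_final:.2f}, CDK2 -> {cdk2_activity(p_final):.3f}")
# Two stable outcomes coexist: low p21 / high CDK2 (proliferation) and
# high p21 / low CDK2 (quiescence), i.e. bistability from mutual antagonism.
```

Starting below the threshold settles into the low-p21, high-CDK2 state, while starting above it settles into the high-p21, low-CDK2 state, mirroring the bifurcation into proliferating and quiescent cells described above.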
Recent work exploring p21 activation in response to DNA damage at a single-cell level has demonstrated that pulsatile p53 activity leads to subsequent pulses of p21, and that the strength of p21 activation is cell cycle phase dependent. Moreover, studies of p21 levels in populations of cycling cells not exposed to DNA damaging agents have shown that DNA damage occurring in mother cell S-phase can induce p21 accumulation over both mother G2 and daughter G1 phases, which subsequently induces cell cycle arrest; this is responsible for the bifurcation in CDK2 activity observed by Spencer et al. Degradation p21 is negatively regulated by ubiquitin ligases both over the course of the cell cycle and in response to DNA damage. Specifically, over the G1/S transition it has been demonstrated that the E3 ubiquitin ligase complex SCFSkp2 induces degradation of p21. Studies have also demonstrated that the E3 ubiquitin ligase complex CRL4Cdt2 degrades p21 in a PCNA-dependent manner over S-phase, which is necessary to prevent p21-dependent re-replication, as well as in response to UV irradiation. Recent work has found that in human cell lines SCFSkp2 degrades p21 towards the end of G1 phase, allowing cells to exit a quiescent state, whilst CRL4Cdt2 acts to degrade p21 at a much higher rate than SCFSkp2 over the G1/S transition and subsequently maintains low levels of p21 throughout S-phase. Clinical significance Cytoplasmic p21 expression can be significantly correlated with lymph node metastasis, distant metastases, advanced TNM stage (a cancer staging classification based on tumor size, nearby lymph node involvement, and distant metastasis), depth of invasion and overall survival (OS). A study of immunohistochemical markers in malignant thymic epithelial tumors showed that p21 expression negatively influenced survival and significantly correlated with WHO (World Health Organization) type B2/B3. When combined with low p27 and high p53, disease-free survival (DFS) decreases. p21 mediates the resistance of hematopoietic cells to infection with HIV by complexing with the HIV integrase and thereby aborting chromosomal integration of the provirus. HIV-infected individuals who naturally suppress viral replication have elevated levels of p21 and its associated mRNA. p21 expression affects at least two stages in the HIV life cycle inside CD4 T cells, significantly limiting production of new viruses. Metastatic canine mammary tumors display increased levels of p21 in the primary tumors and also in their metastases, despite increased cell proliferation. Mice that lack the p21 gene gain the ability to regenerate lost appendages. Interactions P21 has been shown to interact with: Nrf2, BCCIP, CIZ1, CUL4A, CCNE1, CDK, DDB1, DTL, GADD45A, GADD45G, HDAC, PCNA, PIM1, TK1, and TSG101. References Further reading External links Drosophila dacapo - The Interactive Fly Cell cycle regulators Tumor suppressor genes
P21
[ "Chemistry" ]
1,539
[ "Cell cycle regulators", "Signal transduction" ]
5,596,105
https://en.wikipedia.org/wiki/G1/S%20transition
The G1/S transition is a stage in the cell cycle at the boundary between the G1 phase, in which the cell grows, and the S phase, during which DNA is replicated. It is governed by cell cycle checkpoints to ensure cell cycle integrity, and the subsequent S phase can pause in response to improperly or partially replicated DNA. During this transition the cell makes decisions to become quiescent (enter G0), differentiate, make DNA repairs, or proliferate based on environmental cues and molecular signaling inputs. The G1/S transition occurs late in G1, and the absence or improper application of this highly regulated checkpoint can lead to cellular transformation and disease states such as cancer. During this transition, the G1 cyclin D-Cdk4/6 dimer phosphorylates retinoblastoma protein, releasing the transcription factor E2F, which then drives the transition from G1 to S phase. The G1/S transition is highly regulated by the transcription factor p53 in order to halt the cell cycle when DNA is damaged. It is a "point of no return" beyond which the cell is committed to dividing; in yeast this is called the Start point, and in multicellular eukaryotes it is termed the restriction point (R-Point). If a cell passes through the G1/S transition, it will continue through the cell cycle regardless of incoming mitogenic factors, due to the positive feedback loop of G1-S transcription. Positive feedback loops include the G1 cyclins and the accumulation of E2F in multicellular eukaryotes, and the accumulation of SBF in yeast cells. Cell cycle overview The cell cycle is a process in which an ordered set of events leads to the growth of a cell and its division into two daughter cells. The cell cycle is a cycle rather than a linear process because the two daughter cells produced repeat the cycle. This process contains two main phases: interphase, in which the cell grows and synthesizes a copy of its DNA, and the mitotic (M) phase, during which the cell separates its DNA and divides into two new daughter cells. Interphase is further broken down into the G1 (GAP 1), S (Synthesis) and G2 (GAP 2) phases, while the mitotic (M) phase is in turn broken down into mitosis and cytokinesis. Following cytokinesis, during G1 phase the cells monitor the environment for potential growth factors and grow larger; once they achieve a threshold size (an rRNA and overall protein content characteristic of a given cell type), they start progression through S phase. During S phase, the cell also duplicates the centrosome, or microtubule-organizing center, which is critical for DNA separation in the M phase. After complete synthesis of its DNA, the cell enters the G2 phase, where it continues to grow in preparation for mitosis. Following interphase, the cell transitions into mitosis, which contains four sub-stages: prophase, metaphase, anaphase, and telophase. In mitosis, DNA condenses into chromosomes, which are lined up and separated by the mitotic spindle. After the duplicated DNA is separated to opposite ends of the cell, the cytoplasm of the cell is split in two during cytokinesis, resulting in two daughter cells. The yeast cell cycle goes through similar stages; however, there is the additional factor of mating to consider. A haploid cell arrests in G1 if it has not passed Start and is exposed to enough mating pheromone, but will progress into S-phase otherwise. 
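The "point of no return" character of the transition, in which positive feedback makes commitment independent of further mitogenic input, can be illustrated with a minimal sketch. The update rule and numbers below are invented for illustration and do not correspond to any published model of E2F dynamics.

```python
# Caricature of the G1/S "point of no return". Once E2F-like activity
# crosses a threshold, self-amplifying feedback keeps it on even after
# the mitogenic signal is withdrawn. All names and numbers are invented.
def step(e2f, mitogen, k_mit=0.2, k_fb=1.2, deg=0.35, theta=0.5):
    feedback = k_fb * e2f if e2f > theta else 0.0  # feedback only above threshold
    e2f = e2f + k_mit * mitogen + feedback - deg * e2f
    return max(0.0, min(e2f, 5.0))                 # clamp to a bounded range

for stop in (3, 10):   # withdraw the growth signal early vs. after commitment
    e2f = 0.0
    for t in range(40):
        e2f = step(e2f, 1.0 if t < stop else 0.0)
    print(f"signal removed at t={stop}: final E2F activity = {e2f:.2f}")
# Early withdrawal lets activity decay to zero; later withdrawal leaves the
# loop locked on, mirroring commitment past the restriction point.
```

Before the threshold is crossed, activity depends entirely on the external signal; after it is crossed, the feedback term sustains activity on its own, which is the all-or-nothing behavior the following sections describe in molecular terms.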
Cell cycle regulation in mammalian cells As with most processes in the body, the cell cycle is highly regulated to prevent the synthesis of mutated cells and the uncontrolled cell division that leads to tumor formation. The cell cycle control system is biochemically based, so that the proteins of the mitosis promoting factor (MPF) control the transition from one phase to the next based on a series of checkpoints. MPF is a protein dimer made up of a cyclin and a cyclin-dependent kinase (Cdk), a serine/threonine kinase, which come together at different points in the cycle to control cell progression through the cycle. When cyclin binds to Cdk, the Cdk becomes activated and phosphorylates serines and threonines on other proteins, causing the activation and degradation of other proteins and allowing the cell to transition through the cell cycle. G1/S transition In mid to late G1 phase, cyclin D bound to Cdk4/6 activates the expression of the S phase cyclin-Cdk components; however, the cell does not want S phase cyclins to become active in G1. Therefore, an inhibitor protein, Sic1, is present that interacts with the dimer so that the S phase cyclin-Cdk dimer remains inactive until the cell is ready to move into S phase. After the cell has grown and is ready to synthesize DNA, G1 cyclin-Cdks phosphorylate the S phase cyclin inhibitor, signaling its ubiquitination, which results in the addition of ubiquitin groups to the inhibitor. Ubiquitination of the inhibitor signals the SCF/proteasome to degrade the inhibitor, releasing the S phase cyclin-Cdk and allowing it to become activated, and the cell moves into S phase. Once in S phase, cyclin-Cdks phosphorylate several factors on the replication complex, promoting DNA replication by causing inhibitory proteins to fall off of replication complexes or by activating components of the replication complex to induce the initiation of DNA replication. Retinoblastoma protein (pRB) and the G1/S transition Another dimer present during mid G1 is composed of retinoblastoma protein (pRb) and the transcription factor E2F. When pRb is bound to E2F, E2F is inactive. As cyclin D is synthesized and activates Cdk4/6, the cyclin-Cdk targets pRb for phosphorylation. Upon phosphorylation, pRb changes conformation so that E2F is released and activated, binding to upstream regions of genes and initiating their expression. Specifically, E2F drives the expression of other cyclins, including cyclins E and A, and of genes necessary for DNA replication. Cyclin E either phosphorylates more pRb, thereby completing its inactivation, to further activate E2F and promote the expression of more cyclin E, or it increases its own expression directly. These interactions create a positive feedback loop. Cyclin E also interacts with Cdk2, driving the cell cycle to progress from G1 to S phase. E2F also targets Skp2, an F-box protein that targets the CDK inhibitor p27 for degradation, creating a third positive feedback loop. These positive feedback loops are the key to creating the all-or-nothing, switch-like transition between G1 and S-phase, and their components cyclin E1, cyclin E2, Skp2, and E2F1 are all transcribed early at the G1/S transition. The role of retinoblastoma in tumor formation Retinoblastoma (Rb) is a cancer of the eye caused by a mutant pRb protein. When pRb is mutated it becomes nonfunctional and is not able to inhibit the expression of the transcription factor E2F. Therefore, E2F is always active, driving the cell cycle to progress from G1 to S phase. 
As a result, cell growth and division are unregulated, causing tumor formation in the eye. Cell cycle checkpoints To ensure proper cell division, the cell cycle utilizes numerous checkpoints to monitor cell progression and halt the cycle when processes go awry. These checkpoints include four DNA damage checkpoints, one unreplicated-DNA checkpoint at the end of G2, one spindle assembly checkpoint in mitosis, and a chromosome segregation checkpoint during mitosis. p53 as a regulator Between G1 and S phase, three DNA damage checkpoints occur to ensure proper growth and synthesis of DNA prior to cell division. DNA damage during G1, before entry into S phase, or during S phase results in activation of the ATM/ATR kinases. ATM/ATR then stabilize and activate the transcription factor p53 so that it can bind to upstream regions of genes, inducing the expression of proteins including p21CIP. p21CIP binds to and inhibits any cyclin-Cdk complexes present, halting the cycle until the DNA damage can be corrected. Additional processes at DNA damage checkpoints Of the four DNA damage checkpoints, two have an additional process for monitoring DNA damage other than activating p53. Before entry into S phase and during S phase, ATM/ATR also activate Chk1/2, which inhibit Cdc25A, a protein responsible for activating cyclin-Cdk dimers. Without cyclin-Cdk activation, the cell cannot transition through the cycle. These two checkpoints have additional processes for regulation because replicating damaged DNA in S phase can be deleterious to the cell and, more importantly, the organism. Cell cycle regulation in budding yeast Cell cycle regulation is just as important in yeast cells, which respond to nutrient and mating pheromone levels in their environment in order to grow, divide and reproduce appropriately. The Start point is the moment in the yeast cell cycle that determines the all-or-nothing commitment to undergo DNA replication. Input signals promote cyclin synthesis, which drives cyclin-dependent kinase (CDK) activity past a threshold that triggers a self-sustaining positive feedback loop and pushes the cell into S-phase. The switch is sharp because the genes involved in the positive feedback loop are transcribed before the other genes under the control of the same transcription factor. G1/S transition Cln3 binds and activates CDK1. The Cln3-CDK1 complex then inactivates the transcriptional inhibitor Whi5 by phosphorylation, which causes it to release from the transcription factor SBF. The time a yeast cell spends in G1 is related to its size at birth, with smaller cells spending longer in G1; this is partly governed by dilution of the inhibitor Whi5 as the cell grows. Once no longer inhibited, SBF goes on to weakly promote transcription of the downstream G1 cyclins CLN1 and CLN2. Cln1 and Cln2 form a positive feedback loop by further inactivating Whi5 and concurrently activating SBF as well as MBF, driving the expression of over 200 genes, including the S-phase cyclins that initiate DNA replication. An additional positive feedback loop is created by Swi4, a component of SBF that is itself a target of SBF. Together, this sudden activation of the G1 cyclin positive feedback loop defines the Start point that is key to making the G1/S transition distinct and abrupt. Once the CDK activity threshold is passed and feedback is activated, fluctuations in the upstream input signals no longer have an influence on the fate of the cell cycle. See also S-phase promoting factor References Cell cycle
G1/S transition
[ "Biology" ]
2,344
[ "Cell cycle", "Cellular processes" ]
5,596,641
https://en.wikipedia.org/wiki/Horse%20behavior
Horse behavior is best understood from the view that horses are prey animals with a well-developed fight-or-flight response. Their first reaction to a threat is often to flee, although sometimes they stand their ground and defend themselves or their offspring in cases where flight is untenable, such as when a foal is threatened. Nonetheless, because of their physiology, horses are also suited to a number of work and entertainment-related tasks. Humans domesticated horses thousands of years ago, and they have been used by humans ever since. Through selective breeding, some breeds of horses have been bred to be quite docile, particularly certain large draft horses. On the other hand, most light horse riding breeds were developed for speed, agility, alertness, and endurance, building on natural qualities that derive from their wild ancestors. Horses' instincts can be used to human advantage to create a bond between human and horse. These techniques vary, but are part of the art of horse training. The "fight-or-flight" response Horses evolved from small mammals whose survival depended on their ability to flee from predators (for example wolves, big cats, and bears). This survival mechanism still exists in the modern domestic horse. Humans have removed many predators from the life of the domestic horse; however, its first instinct when frightened is to escape. If running is not possible, the horse resorts to biting, kicking, striking or rearing to protect itself. Many of the horse's natural behavior patterns, such as herd-formation and social facilitation of activities, are directly related to their being a prey species. The fight-or-flight response involves nervous impulses which result in hormone secretions into the bloodstream. When a horse reacts to a threat, it may initially "freeze" in preparation to take flight. The fight-or-flight reaction begins in the amygdala, which triggers a neural response in the hypothalamus. The initial reaction is followed by activation of the pituitary gland and secretion of the hormone ACTH. The adrenal gland is activated almost simultaneously and releases the neurotransmitters epinephrine (adrenaline) and norepinephrine (noradrenaline). The release of these chemical messengers results in the production of the hormone cortisol, which increases blood pressure and blood sugar, and suppresses the immune system. Catecholamine hormones, such as epinephrine and norepinephrine, facilitate immediate physical reactions associated with a preparation for violent muscular action. The result is a rapid rise in blood pressure, resulting in an increased supply of oxygen and glucose for energy to the brain and skeletal muscles, the organs the horse most needs when fleeing from a perceived threat. However, the increased supply of oxygen and glucose to these areas comes at the expense of "non-essential" organs, such as the skin and abdominal organs. Once the horse has removed itself from immediate danger, the body is returned to more "normal" conditions via the parasympathetic nervous system. This is triggered by the release of endorphins into the brain, and it effectively reverses the effects of noradrenaline – metabolic rate, blood pressure and heart rate all decrease, and the increased oxygen and glucose being supplied to the muscles and brain are returned to normal. This is also known as the "rest and digest" state. As herd animals Horses are highly social herd animals that prefer to live in a group. 
An older theory of hierarchy in herds of horses is the "linear dominance hierarchy". Newer research shows that there is no "pecking order" in horse herds. Free-ranging wild horses communicate mostly via positive reinforcement and less via punishment. Horses are able to form companionship attachments not only to their own species, but with other animals as well, most notably humans. In fact, many domesticated horses will become anxious, flighty, and hard to manage if they are isolated. Horses kept in near-complete isolation, particularly in a closed stable where they cannot see other animals, may require a stable companion such as a cat, goat, or even a small pony or donkey, to provide company and reduce stress. When anxiety over separation occurs while a horse is being handled by a human, the horse is described as "herd-bound". However, through proper training, horses learn to be comfortable away from other horses, often because they learn to trust a human handler. Since it is not possible to form interspecies herds, humans cannot be part of a horse herd hierarchy and therefore can never take the place of "lead-mares" or "lead-stallions". Social organization in the wild Feral and wild horse "herds" are usually made up of several separate, small "bands" which share a territory. Size may range from two to 25 individuals, mostly mares and their offspring, with one to five stallions. Bands are organized on a harem model. Each band is led by a dominant mare (sometimes called the "lead mare" or the "boss mare"). The composition of bands changes as young animals are driven out of their natal band and join other bands, or as stallions challenge each other for dominance. In bands, there is usually a single "herd" or "lead" stallion, though occasionally a few less-dominant males may remain on the fringes of the group. The reproductive success of the lead stallion is determined in part by his ability to prevent other males from mating with the mares of his harem. The stallion also exercises protective behavior, patrolling around the band, and taking the initiative when the band encounters a potential threat. The stability of the band is not affected by size, but tends to be greater when there are subordinate stallions attached to the harem. In modern reintroduced populations of Przewalski's horse, the only remaining truly wild horse, family groups are formed by one adult stallion, one to three mares, and their common offspring, which stay in the family group until they are no longer dependent, usually at two or three years old. Hierarchical structure Horses have evolved to live in herds. As with many animals that live in large groups, establishment of a stable hierarchical system or "pecking order" is important to reduce aggression and increase group cohesion. This is often, but not always, a linear system. In non-linear hierarchies horse A may be dominant over horse B, who is dominant over horse C, yet horse C may be dominant over horse A. Dominance can depend on a variety of factors, including an individual's need for a particular resource at a given time. It can therefore be variable throughout the lifetime of the herd or individual animal. Some horses may be dominant over all resources and others may be submissive for all resources. This is not part of natural horse behavior. It is forced by humans keeping horses together in limited space with limited resources. 
So-called "dominant horses" are often horses with dysfunctional social skills, caused by human intervention in their early lives (weaning, stable isolation, etc.). Once a dominance hierarchy is established, horses more often than not will travel in rank order. Most young horses in the wild are allowed to stay with the herd until they reach sexual maturity, usually in their first or second year. Studies of wild herds have shown that the herd stallion will usually drive out both colts and fillies; this may be an instinct that prevents inbreeding. The fillies usually join another band soon afterward, and the colts driven out from several herds usually join in small "bachelor" groups until they are able to establish dominance over an older stallion in another herd. Role of the lead mare Contrary to popular belief, the herd stallion is not the "ruler" of a harem of females, though he usually engages in herding and protective behavior. Rather, the horse that tends to lead a wild or feral herd is most commonly a dominant mare. The mare "guides the herd to food and water, controls the daily routine and movement of the herd, and ensures the general wellbeing of the herd." A recent supplemental theory posits that there is "distributed leadership", and no single individual is a universal herd leader. A 2014 study of horses in Italy, described as "feral" by the researcher, observed that some herd movements may be initiated by any individual, although higher-ranked members are followed more often by other herd members. Role of the stallion Stallions tend to stay on the periphery of the herd, where they fight off both predators and other males. When the herd travels, the stallion is usually at the rear and apparently drives straggling herd members forward, keeping the herd together. Mares and lower-ranked males do not usually engage in this herding behavior. During the mating season, stallions tend to act more aggressively to keep the mares within the herd; however, most of the time, the stallion is relaxed and spends much of his time "guarding" the herd by scent-marking manure piles and urination spots to communicate his dominance as herd stallion. Ratio of stallions and mares Domesticated stallions, with human management, often mate with ("cover") more mares in a year than is possible in the wild. Traditionally, thoroughbred stud farms limited stallions to breeding with between 40 and 60 mares a year. By breeding mares only at the peak of their estrous cycle, a few thoroughbred stallions have mated with over 200 mares per year. With use of artificial insemination, one stallion could potentially sire thousands of offspring annually, though in practice, economic considerations usually limit the number of foals produced. Domesticated stallion behavior Some breeders keep horses in semi-natural conditions, with a single stallion amongst a group of mares. This is referred to as "pasture breeding." Young, immature stallions are kept in a separate "bachelor herd." While this has the advantage of less intensive labor for human caretakers, and full-time turnout (living in pasture) may be psychologically healthy for the horses, pasture breeding presents a risk of injury to valuable breeding stock, both stallions and mares, particularly when unfamiliar animals are added to the herd. It also raises questions of when or if a mare is bred, and may also raise questions as to the parentage of foals.
Therefore, keeping stallions in a natural herd is not common, especially on breeding farms mating multiple stallions to mares from other herds. Natural herds are more often kept on farms with closed herds, i.e. only one or a few stallions with a stable mare herd and few, if any, mares from other herds. Mature, domesticated stallions are commonly kept by themselves in a stable or small paddock. When stallions are stabled in a manner that allows visual and tactile communication, they will often challenge each other and sometimes attempt to fight. Therefore, stallions are often kept isolated from each other to reduce the risk of injury and disruption to the rest of the stable. If stallions are provided with access to paddocks, there is often a corridor between the paddocks so the stallions cannot touch each other. In some cases, stallions are released for exercise at different times of the day to ensure they do not see or hear each other. To avoid stable vices associated with isolation, some stallions are provided with a non-horse companion, such as a castrated donkey or a goat (the Godolphin Arabian was particularly fond of a barn cat). While many domesticated stallions become too aggressive to tolerate the close presence of any other male horse without fighting, some tolerate a gelding as a companion, particularly one that has a very calm temperament. One example of this was the racehorse Seabiscuit, who lived with a gelding companion named "Pumpkin". Other stallions may tolerate the close presence of an immature and less dominant stallion. Stallions and mares often compete together at horse shows and in horse races; however, stallions generally must be kept away from close contact with mares to avoid unintentional or unplanned matings, and away from other stallions to minimize fighting for dominance. When horses are lined up for award presentations at shows, handlers keep stallions at least one horse length from any other animal. Stallions can be taught to ignore mares or other stallions that are in close proximity while they are working. Stallions live peacefully in bachelor herds in the wild and in natural management settings. For example, the stallions in the New Forest (U.K.) live in bachelor herds on their winter grazing pastures. Some farms assert that, when stallions are managed as domesticated animals, carefully managed social contact benefits them. Well-tempered stallions intended to be kept together for a long period may be stabled in closer proximity, though this method of stabling is generally used only by experienced stable managers. An example of this is the stallions of the Spanish Riding School, which travel, train and are stabled in close proximity. In these settings, more dominant animals are kept apart by stabling a young or less dominant stallion in the stall between them. Dominance in domesticated herds Because domestication of the horse usually requires stallions to be isolated from other horses, either mares or geldings may become dominant in a domestic herd. Usually dominance in these cases is a matter of age and, to some extent, temperament. It is common for older animals to be dominant, though old and weak animals may lose their rank in the herd. There are also studies suggesting that a foal will "inherit" or perhaps imprint dominance behavior from its dam, and at maturity seek to obtain the same rank in a later herd that its mother held when the horse was young.
Studies of domesticated horses indicate that horses appear to benefit from a strong female presence in the herd. Groupings of all geldings, or herds where a gelding is dominant over the rest of the herd (for example, if the mares in the herd are quite young or of low status), may be more anxious as a group and less relaxed than those where a mare is dominant. Communication Horses communicate in various ways, including vocalizations such as nickering, squealing or whinnying; touch, through mutual grooming or nuzzling; smell; and body language. Horses use a combination of ear position, neck and head height, movement, and foot stomping or tail swishing to communicate. Discipline is maintained in a horse herd first through body language and gestures, then, if needed, through physical contact such as biting, kicking, nudging, or other means of forcing a misbehaving herd member to move. In most cases, the animal that successfully causes another to move is dominant, whether it uses only body language or adds physical reinforcement. Horses can interpret the body language of other creatures, including humans, whom they view as predators. If socialized to human contact, horses usually respond to humans as a non-threatening predator. Humans do not always understand this, however, and may behave in a way, particularly if using aggressive discipline, that resembles an attacking predator and triggers the horse's fight-or-flight response. On the other hand, some humans exhibit fear of a horse, and a horse may interpret this behavior as human submission to the authority of the horse, placing the human in a subordinate role in the horse's mind. This may lead the horse to behave in a more dominant and aggressive fashion. Human handlers are more successful if they learn to properly interpret a horse's body language and temper their own responses accordingly. Some methods of horse training explicitly instruct horse handlers to behave in ways that the horse will interpret as the behavior of a trusted leader in a herd, so that the horse more willingly complies with commands from a human handler. Other methods encourage operant conditioning to teach the horse to respond in a desired way to human body language, but also teach handlers to recognize the meaning of horse body language. Horses are not particularly vocal, but do have four basic vocalizations: the neigh or whinny, the nicker, the squeal and the snort. They may also make sighing, grunting or groaning noises at times. Ear position is often one of the most obvious behaviors that humans notice when interpreting horse body language. In general, a horse will direct the pinna of an ear toward the source of input it is also looking at. Horses have a narrow range of binocular vision, and thus a horse with both ears forward is generally concentrating on something in front of it. Similarly, when a horse turns both ears forward, the degree of tension in the horse's pinna suggests whether the animal is calmly attentive to its surroundings or tensely observing a potential danger. However, because horses have strong monocular vision, it is possible for a horse to position one ear forward and one ear back, indicative of similarly divided visual attention. This behavior is often observed in horses while working with humans, where they need to simultaneously focus attention on both their handler and their surroundings. A horse may also turn a pinna back when it notices something coming up behind it.
Due to the nature of a horse's vision, head position may indicate where the animal is focusing attention. To focus on a distant object, a horse will raise its head. To focus on an object close by, and especially on the ground, the horse will lower its nose and carry its head in a near-vertical position. Eyes rolled to the point that the white of the eye is visible often indicate fear or anger. Ear position, head height, and body language may change to reflect emotional status as well. For example, the clearest signal a horse sends is when both ears are flattened tightly back against the head, sometimes with eyes rolled so that the white of the eye shows, often indicative of pain or anger and frequently foreshadowing aggressive behavior that will soon follow. Sometimes ears laid back, especially when accompanied by a strongly swishing tail or stomping or pawing with the feet, are signals used by the horse to express discomfort, irritation, impatience, or anxiety. However, horses with ears slightly turned back but in a loose position may be drowsing, bored, fatigued, or simply relaxed. When a horse raises its head and neck, the animal is alert and often tense. A lowered head and neck may be a sign of relaxation, but depending on other behaviors may also indicate fatigue or illness. Tail motion may also be a form of communication. Slight tail swishing is often a tool to dislodge biting insects or other skin irritants. However, aggressive tail-swishing may indicate irritation, pain or anger. The tail tucked tightly against the body may indicate discomfort due to cold or, in some cases, pain. The horse may demonstrate tension or excitement by raising its tail, but also by flaring its nostrils, snorting, and intently focusing its eyes and ears on the source of concern. The horse does not use its mouth to communicate to the degree that it uses its ears and tail, but a few mouth gestures have meaning beyond that of eating, grooming, or biting at an irritation. Bared teeth, as noted above, are an expression of anger and an imminent attempt to bite. Horses, particularly foals, sometimes indicate appeasement of a more aggressive herd member by extending their necks and clacking their teeth. Horses making a chewing motion with no food in the mouth do so as a soothing mechanism, possibly linked to a release of tension, though some horse trainers view it as an expression of submission. Horses will sometimes extend their upper lip when scratched in a particularly good spot, and if their mouth touches something at the time, their lip and teeth may move in a mutual grooming gesture. A very relaxed or sleeping horse may have a loose lower lip and chin that may extend further out than the upper lip. The curled-lip flehmen response is most often seen in stallions, but is usually a response to the smell of another horse's urine and may be exhibited by horses of any sex. Horses also have assorted mouth motions that are a response to a bit or the rider's hands, some indicating relaxation and acceptance, others indicating tension or resistance. Sleep patterns Horses can sleep both standing up and lying down. They can sleep while standing, an adaptation from life as a prey animal in the wild. Lying down makes an animal more vulnerable to predators. Horses are able to sleep standing up because a "stay apparatus" in their legs allows them to relax their muscles and doze without collapsing.
In the front legs, the anatomy of the equine forelimb automatically engages the stay apparatus when the muscles relax. The horse engages the stay apparatus in the hind legs by shifting its hip position to lock the patella in place. At the stifle joint, a "hook" structure on the inside bottom end of the femur cups the patella and the medial patella ligament, preventing the leg from bending. Horses obtain needed sleep in many short periods of rest. This is to be expected of a prey animal that needs to be ready on a moment's notice to flee from predators. Horses may spend anywhere from four to fifteen hours a day in standing rest, and from a few minutes to several hours lying down. However, not all this time is the horse asleep; total sleep time in a day may range from several minutes to two hours. Horses require approximately two and a half hours of sleep, on average, in a 24-hour period. Most of this sleep occurs in many short intervals of about 15 minutes each. These short periods of sleep consist of five minutes of slow-wave sleep, followed by five minutes of rapid eye movement sleep (REM) and then another five minutes of slow-wave sleep. Horses must lie down to reach REM sleep. They only have to lie down for an hour or two every few days to meet their minimum REM sleep requirements. However, if a horse is never allowed to lie down, after several days it will become sleep-deprived, and in rare cases may suddenly collapse as it involuntarily slips into REM sleep while still standing. This condition differs from narcolepsy, which horses may also suffer from. Horses sleep better when in groups because some animals will sleep while others stand guard to watch for predators. A horse kept entirely alone may not sleep well because its instincts are to keep a constant eye out for danger. Eating patterns Horses have a strong grazing instinct, preferring to spend most hours of the day eating forage. Horses and other equids evolved as grazing animals, adapted to eating small amounts of the same kind of food all day long. In the wild, the horse adapted to eating prairie grasses in semi-arid regions and traveling significant distances each day in order to obtain adequate nutrition. Thus, they are "trickle eaters," meaning they need an almost constant supply of food to keep their digestive system working properly. Horses can become anxious or stressed if there are long periods of time between meals. When stabled, they do best when they are fed on a regular schedule; they are creatures of habit and easily upset by changes in routine. When horses are in a herd, their behavior is hierarchical; the higher-ranked animals in the herd eat and drink first. Low-status animals, which eat last, may not get enough food, and if there is little available feed, higher-ranking horses may keep lower-ranking ones from eating at all. Psychological disorders When confined with insufficient companionship, exercise or stimulation, horses may develop stable vices, an assortment of compulsive stereotypies considered bad habits, mostly psychological in origin, that include wood chewing, stall walking (walking in circles stressfully in the stall), wall kicking, "weaving" (rocking back and forth) and other problems. These have been linked to a number of possible causal factors, including a lack of environmental stimulation and early weaning practices. Research is ongoing to investigate the neurobiological changes involved in the performance of these behaviors.
See also Domestication of the horse Equus (genus) Glossary of equestrian terms Horse Horse breeding Horse care Horse training Sacking out Stable vices Equine intelligence Notes References Budiansky, Stephen. "The Nature of Horses". Free Press, 1997. McCall C.A (Professor of Animal Sciences, Auburn University) 2006, Understanding your horses’ behaviour, Alabama Co-operative Extension System, Alabama, viewed 21/10/13, External links The Horse Trust - Equine Clinical Animal Behaviour Hub Basics of Equine Behaviour - Equine Behaviour & Training Association Case Studies of Equine Behaviour - FAB Clinicians Ethology
Horse behavior
[ "Biology" ]
5,083
[ "Behavioural sciences", "Ethology", "Behavior" ]
5,597,441
https://en.wikipedia.org/wiki/Collagenase
Collagenases are enzymes that break the peptide bonds in collagen. They assist in destroying extracellular structures in the pathogenesis of bacteria such as Clostridium. They are considered a virulence factor, facilitating the spread of gas gangrene. They normally target the connective tissue in muscle cells and other body organs. Collagen, a key component of the animal extracellular matrix, is made through cleavage of pro-collagen by collagenase once it has been secreted from the cell. This stops large structures from forming inside the cell itself. In addition to being produced by some bacteria, collagenase can be made by the body as part of its normal immune response. This production is induced by cytokines, which stimulate cells such as fibroblasts and osteoblasts, and can cause indirect tissue damage. Therapeutic uses Collagenases have been approved for medical uses for: treatment of Dupuytren's contracture and Peyronie's disease (Xiaflex). wound healing (Santyl) cellulite (Qwo) The MEROPS M9 family This group of metallopeptidases constitutes the MEROPS peptidase family M9, subfamilies M9A and M9B (microbial collagenase, clan MA(E)). The protein fold of the peptidase domain for members of this family resembles that of thermolysin, the type example for clan MA, and the predicted active-site residues for members of this family and thermolysin occur in the motif HEXXH. Microbial collagenases have been identified from bacteria of both the Vibrio and Clostridium genera. Collagenase is used during bacterial attack to degrade the collagen barrier of the host during invasion. Vibrio bacteria are sometimes used in hospitals to remove dead tissue from burns and ulcers. Clostridium histolyticum is a pathogen that causes gas gangrene; nevertheless, the isolated collagenase has been used to treat bed sores. Collagen cleavage occurs at different bonds in Vibrio and Clostridium collagenases. Analysis of the primary structure of the gene product from Clostridium perfringens has revealed that the enzyme is produced with a stretch of 86 residues that contain a putative signal sequence. Within this stretch is found PLGP, an amino acid sequence typical of collagenase substrates. This sequence may thus be implicated in self-processing of the collagenase. Metalloproteases are the most diverse of the seven main types of protease, with more than 50 families identified to date. In these enzymes, a divalent cation, usually zinc, activates the water molecule. The metal ion is held in place by amino acid ligands, usually three in number. The known metal ligands are His, Glu, Asp, or Lys, and at least one other residue is required for catalysis, which may play an electrophilic role. Of the known metalloproteases, around half contain an HEXXH motif, which has been shown in crystallographic studies to form part of the metal-binding site. The HEXXH motif is relatively common, but can be more stringently defined for metalloproteases as 'abXHEbbHbc', where 'a' is most often valine or threonine and forms part of the S1' subsite in thermolysin and neprilysin, 'b' is an uncharged residue, and 'c' a hydrophobic residue. Proline is never found in this site, possibly because it would break the helical structure adopted by this motif in metalloproteases. Other uses Collagenases may be used for tenderizing meat in a manner similar to the widely used tenderizers papain, bromelain and ficain. See also Matrix metalloproteinase Gas gangrene References External links EC 3.4.24 Protein families
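The extended metal-binding motif described above lends itself to a simple sequence scan. Below is a minimal Python sketch; the residue classes and the example fragment are illustrative assumptions (not a curated definition), loosely encoding 'abXHEbbHbc' with 'a' as Val/Thr, 'b' as an uncharged residue, and 'c' as a hydrophobic residue, excluding proline per the description above.

```python
import re

# Illustrative residue classes (assumptions, not a curated definition).
# Proline is excluded from the uncharged class, since the text notes it
# is never found in this site.
UNCHARGED = "ACFGILMNQSTVWY"
HYDROPHOBIC = "AVILMFWY"

# Loose encoding of 'abXHEbbHbc': a = V/T, X = any residue.
MOTIF = re.compile(
    rf"[VT][{UNCHARGED}].HE[{UNCHARGED}][{UNCHARGED}]H[{UNCHARGED}][{HYDROPHOBIC}]"
)

def find_motifs(seq: str):
    """Return (position, match) pairs for extended-HEXXH-like sites in seq."""
    return [(m.start(), m.group()) for m in MOTIF.finditer(seq)]

# Hypothetical fragment containing one matching site, for illustration only.
fragment = "MKTAVVAGHELTHSLGAKDE"
print(find_motifs(fragment))  # -> [(5, 'VAGHELTHSL')]
```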
Collagenase
[ "Biology" ]
860
[ "Protein families", "Protein classification" ]
5,597,523
https://en.wikipedia.org/wiki/Desert%20exploration
Desert exploration is the deliberate and scientific exploration of deserts, the arid regions of the earth. It is only incidentally concerned with the culture and livelihood of native desert dwellers. People have struggled to live in deserts and the surrounding semi-arid lands for millennia. Nomads have moved their flocks and herds to wherever grazing is available, and oases have provided opportunities for a more settled way of life. Many, such as the Bushmen in the Kalahari, the Aborigines in Australia and various Indigenous peoples of the Americas, were originally hunter-gatherers. Many trade routes have been forged across deserts, especially across the Sahara Desert, and traditionally were used by caravans of camels carrying salt, gold, ivory and other goods. Large numbers of slaves were also taken northwards across the Sahara. Today, some mineral extraction also takes place in deserts, and the uninterrupted sunlight gives potential for the capture of large quantities of solar energy. Many people think of deserts as consisting of extensive areas of billowing sand dunes because that is the way they are often depicted on TV and in films, but deserts do not always look like this. Across the world, around 20% of desert is sand, varying from only 2% in North America to 30% in Australia and over 45% in Central Asia. Where sand does occur, it is usually in large quantities in the form of sand sheets or extensive areas of dunes. The following sections list deserts around the world, and their explorers. Expeditions are listed by their leaders; details of other expedition members may be found via the links. Africa Kalahari Desert Sahara Desert The Romans organized expeditions to cross the Sahara desert along five different routes. All these expeditions were supported by legionaries and had a mainly commercial purpose. One of the main reasons for the explorations was to obtain gold, using the camel to transport it: through the western Sahara, toward the Niger River and present-day Timbuktu. through the Tibesti mountains, toward Lake Chad and present-day Nigeria. through the Nile river, toward present-day Uganda. through the western coast of Africa, toward the Canary Islands and the Cape Verde islands. through the Red Sea, toward present-day Somalia and perhaps Tanzania. Michael Asher & Mariantonietta Peru – made the first recorded crossing of the Sahara from west to east, by camel and on foot, from Nouakchott, Mauritania, to Abu Simbel, Egypt, 1986–87, a distance of 4,500 miles (ref: The Modern Explorers, Thames & Hudson, London, 2013). Michael Asher lived for three years with the Kababish nomads in the Sudan. Eva Dickson – the first woman to cross the Sahara Desert by car. In 1932 she met Baron Bror von Blixen-Finecke, the former spouse of Karen Blixen, in Kenya, and they became lovers; the same year, on a bet, she drove by car from Nairobi to Stockholm. Heinrich Barth – crossed the Sahara during his travels in Africa and the Middle East during 1845–1847. James Richardson – explored the Sahara and Sudan; he died in the notorious hamada (a stony desert) in the Western Sahara. Friedrich Gerhard Rohlfs – German geographer and the first person to cross Africa from north to south. Named a place Regenfeld near Dakhla Oasis in southern Egypt after experiencing a rare occurrence of desert rain. Karl Alfred von Zittel – German palaeontologist who accompanied Rohlfs.
Henri Duveyrier – undertook a number of fossil-hunting explorations in the Sahara. Albert-Félix de Lapparent – explorer of the northern and western parts of the Sahara. Victor Loche – first identified the sand cat (Felis margarita) while exploring the North Sahara. Joseph Ritchie – sent to find the course of the River Niger and the location of Timbuktu. He died in Murzuk. Helen Thayer – 20th-century walker and explorer. Michiel Franken – first man to ride a sidecar (BMW i8) through the Sahara. Asia Arabian Desert The Arabian Desert has been populated since prehistory. The Rub' al Khali, or Empty Quarter, in its remote center is one of the largest continuous bodies of sand in the world. It was only recently explored by Europeans: Bertram Thomas in 1931 and St. John Philby in 1932: first documented journeys by Westerners. Wilfred Thesiger in 1946–50 crossed it several times and mapped large parts of it. In June 1950, a US Air Force expedition crossed the Rub' al Khali from Dhahran, Saudi Arabia, to central Yemen and back in trucks to collect specimens for the Smithsonian Institution and to test desert survival procedures. Youngho Nam (Korean) – in 2013 crossed 1,000 km on foot from Salalah, Oman, to Liwa, United Arab Emirates. Taklamakan Desert Xuanzang, a monk, in the 7th century. The archaeologist Aurel Stein in the 20th century. Charles Blackmore, 1993. Gobi Desert The Gobi Desert has a long history of human habitation, mostly by nomadic peoples. The Gobi Desert as a whole was known only very imperfectly to outsiders, as information was confined to observations by individual travelers engaging in their respective itineraries across the desert. Among the European and American explorers who contributed to the understanding of the Gobi, the most important were the following: Jean-François Gerbillon (1688–1698) Eberhard Isbrand Ides (1692–1694) Lorenz Lange (1727–1728 and 1736) Fuss and Alexander G. von Bunge (1830–1831) Hermann Fritsche (1868–1873) Pavlinov and Z.L. Matusovski (1870) Ney Elias (1872–1873) Nikolai Przhevalsky (1870–1872 and 1876–1877) Zosnovsky (1875) Mikhail V. Pevtsov (1878) Grigory Potanin (1877 and 1884–1886) Béla Széchenyi and Lajos Lóczy (1879–1880) The brothers Grigory and M. Y. Grum-Grshimailo (1889–1890) Pyotr Kuzmich Kozlov (1893–1894 and 1899–1900) Vsevolod I. Roborovsky (1894) Vladimir Obruchev (1894–1896) Karl Josef Futterer and Dr. Holderer (1896) Charles-Etienne Bonin (1896 and 1899) Sven Hedin (1897 and 1900–1901) K. Bogdanovich (1898) Ladyghin (1899–1900) and Katsnakov (1899–1900) Jacques Bouly de Lesdain and Martha Mailey, 1902 Roy Chapman Andrews from the American Museum of Natural History, who led several palaeontological expeditions to the Gobi Desert between 1922 and 1930 Zofia Kielan-Jaworowska, who led Polish-Mongolian palaeontological expeditions in the mid-1960s. Australia Central Australia – general term covering the arid regions in the Australian interior Jon Muir – made the first ever unassisted crossing of the Australian desert on foot Edward John Eyre – expeditions to Lake Eyre and the Flinders Ranges in the 1830s.
Charles Sturt – expeditions from Adelaide in the 1840s John McDouall Stuart – accompanied Sturt 1844–1845; expeditions 1859 & 1860 (South Australia), 1861–1862 (south-north crossing of Australia) Ludwig Leichhardt – expeditions 1844–1845 Moreton Bay to Port Essington, 1846–1847 and 1848 west from Moreton Bay, where the entire expedition vanished Burke and Wills (Robert O'Hara Burke and William John Wills) – south-north crossing of Australia 1860–1861, where both died on the return journey Augustus Gregory – searched for Leichhardt in 1858 Ernest Giles – expeditions 1872–1876 William Tietkens – expedition in 1889 Gibson Desert Ernest Giles – crossed the desert in 1874 Great Sandy Desert Great Victoria Desert John McDouall Stuart – skirted the desert in 1858 Ernest Giles – crossed the desert in 1875 Nullarbor Plain – desert plain on the western part of the south coast of Australia Edward John Eyre – expedition 1840–1841 Tanami Desert Simpson Desert and Sturt Stony Desert Charles Sturt – expedition 1844–1845 Cecil Madigan – expedition 1939 across the Simpson Desert Warren Bonython and Charles McCubbin – made the first north-to-south traverse on foot in 1973. They pulled a cart with supplies and used two air drops of water and supplies. Louis-Philippe Loncke – unsupported expedition 2008 across the Simpson Desert on foot from north to south Western Australia – a large and generally arid region Robert Austin – expedition 1854 Alexander Forrest – expeditions in the 1870s and 1880s John Forrest – expeditions in the 1870s and 1880s David Carnegie – expedition in 1896–97 Larry Wells – expedition in 1896–97 North America Before the European exploration of North America, tribes of Native Americans, such as the Mohave (in the Mojave desert), the Chemehuevi (in the Great Basin desert), and the Quechan (in the Colorado desert), were hunter-gatherers living in the California deserts. European explorers started exploring the deserts beginning in the 18th century. Francisco Garcés, a Franciscan friar, was the first explorer of the Colorado and Mojave deserts in 1776. Garcés recorded information about the original inhabitants of the deserts. Later, as American interests expanded into California, American explorers started probing the California deserts. Jedediah Smith travelled through the Great Basin and Mojave deserts in 1826, finally reaching the San Gabriel Mission. John C. Frémont explored the Great Basin, proving that water did not flow out of it to the ocean, and provided maps that the forty-niners used to get to California. See also Saharan explorers References https://deserts.fr Deserts Exploration
Desert exploration
[ "Biology" ]
2,033
[ "Deserts", "Ecosystems" ]
5,597,613
https://en.wikipedia.org/wiki/Directional-hemispherical%20reflectance
Directional-hemispherical reflectance is the reflectance of a surface under direct illumination (with no diffuse component). Directional-hemispherical reflectance is the integral of the bidirectional reflectance distribution function over all viewing directions. It is sometimes called "black-sky albedo". References See also Bi-hemispherical reflectance Electromagnetic radiation Climatology
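For reference, a common way to write this quantity in standard radiometric notation is sketched below; this follows the usual convention rather than a formula given in the article, and assumes f_r denotes the BRDF, ω_i the fixed illumination direction, ω_o a viewing direction, and θ_o the angle between ω_o and the surface normal.

```latex
% Directional-hemispherical reflectance: the BRDF integrated over the
% hemisphere \Omega of all viewing directions \omega_o, for fixed \omega_i.
R(\omega_i) \;=\; \int_{\Omega} f_r(\omega_i, \omega_o)\, \cos\theta_o \,\mathrm{d}\omega_o
```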
Directional-hemispherical reflectance
[ "Physics" ]
81
[ "Electromagnetic radiation", "Physical phenomena", "Radiation" ]
5,597,628
https://en.wikipedia.org/wiki/Bi-hemispherical%20reflectance
Bi-hemispherical reflectance is the reflectance of a surface under diffuse illumination (with no direct component). Bi-hemispherical reflectance is the integral of the bidirectional reflectance distribution function over all viewing and illumination directions of the hemisphere. It is sometimes called "white-sky albedo". See also Directional-hemispherical reflectance References Electromagnetic radiation
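In the same notation as the sketch above, the bi-hemispherical reflectance under perfectly diffuse (isotropic) illumination is commonly written as the double hemispherical integral below; the 1/π factor, which normalizes the cosine-weighted solid angle, is the standard convention and not a value taken from this article.

```latex
% Bi-hemispherical reflectance: BRDF integrated over both the illumination
% (\omega_i) and viewing (\omega_o) hemispheres, normalized by
% \int_\Omega \cos\theta \,\mathrm{d}\omega = \pi.
\rho \;=\; \frac{1}{\pi} \int_{\Omega_i} \int_{\Omega_o}
    f_r(\omega_i, \omega_o)\, \cos\theta_i \cos\theta_o \,\mathrm{d}\omega_o \,\mathrm{d}\omega_i
```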
Bi-hemispherical reflectance
[ "Physics" ]
84
[ "Electromagnetic radiation", "Physical phenomena", "Radiation" ]
5,597,800
https://en.wikipedia.org/wiki/Dermott%27s%20law
Dermott's law is an empirical formula for the orbital period of major satellites orbiting planets in the Solar System. It was identified by the celestial mechanics researcher Stanley Dermott in the 1960s and takes the form: T(n) = T(0) · C^n for n = 1, 2, 3, ... where T(n) is the orbital period of the nth satellite, T(0) is a constant of the order of days, and C is a constant of the satellite system in question. Specific values are: Jovian system: T(0) = 0.444 d, C = 2.03 Saturnian system: T(0) = 0.462 d, C = 1.59 Uranian system: T(0) = 0.760 d, C = 1.80 Such power laws may be a consequence of collapsing-cloud models of planetary and satellite systems possessing various symmetries; see Titius-Bode law. They may also reflect the effect of resonance-driven commensurabilities in the various systems. References Orbits Equations of astronomy
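As a quick check of the formula, the short Python sketch below evaluates T(n) = T(0)·Cⁿ with the Jovian constants quoted above and compares the results with the observed periods of the Galilean moons; the mapping of moons to values of n is my assumption for illustration, not part of the article.

```python
# Dermott's law: T(n) = T(0) * C**n, with the Jovian constants quoted above.
T0, C = 0.444, 2.03  # T0 in days, C dimensionless

# Observed sidereal periods (days) of the Galilean moons, assumed here to
# correspond to n = 2..5 purely for illustration.
observed = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155, "Callisto": 16.689}

for n, (moon, T_obs) in enumerate(observed.items(), start=2):
    T_pred = T0 * C**n
    print(f"n={n}  {moon:<9} predicted {T_pred:6.2f} d   observed {T_obs:6.2f} d")
```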
Dermott's law
[ "Physics", "Astronomy" ]
205
[ "Concepts in astronomy", "Astronomy stubs", "Equations of astronomy" ]
5,597,859
https://en.wikipedia.org/wiki/Commensurability%20%28astronomy%29
Commensurability is the property of two orbiting objects, such as planets, satellites, or asteroids, whose orbital periods are in a rational proportion. Examples include the 2:3 commensurability between the orbital periods of Neptune and Pluto, the 3:4 commensurability between the orbital periods of the Saturnian satellites Titan and Hyperion, the orbital periods associated with the Kirkwood gaps in the asteroid belt relative to that of Jupiter, and the 2:1 commensurability between Gliese 876 b and Gliese 876 c. Commensurabilities are normally the result of an orbital resonance, rather than being due to coincidence. See also Harmonic Ratio References Asteroids Celestial mechanics Orbits Planetary satellite systems
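A commensurability can be spotted numerically by approximating the ratio of two orbital periods with a small-integer fraction. The sketch below does this with Python's standard fractions module; the period values are approximate and used only for illustration.

```python
from fractions import Fraction

def commensurability(period_a: float, period_b: float, max_den: int = 10) -> Fraction:
    """Approximate the period ratio by a fraction with a small denominator."""
    return Fraction(period_a / period_b).limit_denominator(max_den)

# Approximate orbital periods of Neptune and Pluto, in years.
print(commensurability(164.8, 248.0))   # -> 2/3, the Neptune-Pluto resonance

# Approximate orbital periods of Titan and Hyperion, in days.
print(commensurability(15.95, 21.28))   # -> 3/4
```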
Commensurability (astronomy)
[ "Physics", "Astronomy" ]
156
[ "Classical mechanics stubs", "Classical mechanics", "Astrophysics", "Astronomy stubs", "Astrophysics stubs", "Celestial mechanics" ]
5,597,883
https://en.wikipedia.org/wiki/Cash-flow%20return%20on%20investment
Cash-flow return on investment (CFROI) is a valuation model that assumes the stock market sets prices based on cash flow, not on corporate performance and earnings. For the corporation, it is essentially internal rate of return (IRR). CFROI is compared to a hurdle rate to determine if an investment/product is performing adequately. The hurdle rate is the total cost of capital for the corporation, calculated as a mix of the cost of debt financing plus investors' expected return on equity investments. The CFROI must exceed the hurdle rate to satisfy both the debt financing and the investors' expected return. Michael J. Mauboussin, in his 2006 book More Than You Know: Finding Financial Wisdom in Unconventional Places, quoted an analysis by Credit Suisse First Boston that, measured by CFROI, the performance of companies tends to converge after five years in terms of their survival rates. The CFROI for a firm or a division can then be written as follows: CFROI = (gross cash flow − economic depreciation) / gross investment. The economic depreciation is the annuity that accumulates to the replacement cost of the assets over their life: economic depreciation = Kc · [kc / ((1 + kc)^n − 1)], where n is the expected life of the asset, Kc is the replacement cost in current dollars, and kc is the cost of capital. See also Return of capital References Further reading Financial ratios Investment indicators
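To make the annuity concrete, the sketch below computes the sinking-fund economic depreciation and a CFROI figure in Python. The sinking-fund form of the annuity follows the reconstruction above, and the input numbers are illustrative assumptions, not figures from any cited source.

```python
def economic_depreciation(replacement_cost: float, rate: float, life: int) -> float:
    """Sinking-fund annuity that accumulates to the replacement cost (Kc)
    over the asset's expected life of n years at the cost of capital kc."""
    return replacement_cost * rate / ((1 + rate) ** life - 1)

def cfroi(gross_cash_flow: float, gross_investment: float,
          replacement_cost: float, rate: float, life: int) -> float:
    """CFROI = (gross cash flow - economic depreciation) / gross investment."""
    ed = economic_depreciation(replacement_cost, rate, life)
    return (gross_cash_flow - ed) / gross_investment

# Illustrative numbers only: Kc = 1,000, cost of capital 8%, 10-year life.
ed = economic_depreciation(1_000, 0.08, 10)
print(f"economic depreciation: {ed:.2f} per year")          # ~69.03
print(f"CFROI: {cfroi(180, 1_200, 1_000, 0.08, 10):.2%}")   # ~9.25%
```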
Cash-flow return on investment
[ "Mathematics" ]
233
[ "Financial ratios", "Quantity", "Metrics" ]
5,598,607
https://en.wikipedia.org/wiki/Chemical%20%26%20Engineering%20News
Chemical & Engineering News (C&EN) is a weekly news magazine published by the American Chemical Society (ACS), providing professional and technical news and analysis in the fields of chemistry and chemical engineering. It includes information on recent news and research in these fields, career and employment information, business and industry news, government and policy news, funding in these fields, and special reports. The magazine is available to all members of the American Chemical Society. The ACS also publishes C&EN Global Enterprise (), an online resource that republishes articles from C&EN for easier online access to content. History The magazine was established in 1923, and has been on the internet since 1998. The editor-in-chief is Nick Ishmael Perkins. Abstracting and indexing The magazine is abstracted and indexed in Chemical Abstracts Service, Science Citation Index, and Scopus. References External links American Chemical Society academic journals Chemical engineering journals Engineering magazines Magazines established in 1923 Magazines published in Washington, D.C. Professional and trade magazines Science and technology magazines published in the United States Weekly magazines published in the United States
Chemical & Engineering News
[ "Chemistry", "Engineering" ]
226
[ "Chemical engineering", "Chemical engineering journals" ]
5,598,795
https://en.wikipedia.org/wiki/Tekin%20Dereli
Tekin Dereli (born November 30, 1949) is a Turkish theoretical physicist. Life and academic career He studied at Ankara Science High School and the Middle East Technical University. He was an associate professor and a Professor of Physics at Middle East Technical University (1984–1987, 1993–2001); professor at the Faculty of Science at Ankara University (1987–1993); and Leverhulme Visiting Professor at Lancaster University, UK (2000–2001). Since 2001, he has been a professor in the department of physics at Koç University. TÜBİTAK honored him with the TÜBİTAK Junior Science Prize in 1982 and the TÜBİTAK Science Prize in 1996. He was also awarded prestigious Turkish prizes for science by the Sedat Simavi Trust in 1989 and the METU Mustafa Parlar Foundation Science Award in 1993. He has been a member of the Turkish Academy of Sciences (TAS) since 1993. He is married with two children. Research interests His research interests are Yang-Mills gauge theories, supersymmetry, supergravity, quaternion and octonion algebras, spin structures, generalised theories of gravity, cosmological solutions, integrable systems and phase-space quantisation. References Biography at Koç University External links Koç University: Tekin Dereli 1949 births Living people People from Ankara Middle East Technical University alumni Academic staff of Middle East Technical University Academic staff of Ankara University Academic staff of Koç University Turkish physicists Theoretical physicists Recipients of TÜBİTAK Science Award METU Mustafa Parlar Foundation Science Award winners 20th-century physicists 21st-century physicists
Tekin Dereli
[ "Physics" ]
320
[ "Theoretical physics", "Theoretical physicists" ]
5,598,930
https://en.wikipedia.org/wiki/Inexact%20differential
An inexact differential or imperfect differential is a differential whose integral is path dependent. It is most often used in thermodynamics to express changes in path-dependent quantities such as heat and work, but is defined more generally within mathematics as a type of differential form. In contrast, an integral of an exact differential is always path independent, since the integral acts to invert the differential operator. Consequently, a quantity with an inexact differential cannot be expressed as a function of only the variables within the differential. That is, its value cannot be inferred just by looking at the initial and final states of a given system. Inexact differentials are primarily used in calculations involving heat and work because they are path functions, not state functions. Definition An inexact differential δF is a differential for which the integral over some two paths with the same end points is different. Specifically, there exist integrable paths γ₁ and γ₂ with the same start and end points such that ∫_{γ₁} δF ≠ ∫_{γ₂} δF. In this case, we denote the integrals as ΔF_{γ₁} and ΔF_{γ₂} respectively, to make explicit the path dependence of the change of the quantity we are considering as δF. More generally, an inexact differential is a differential form which is not an exact differential, i.e., a form δF such that δF ≠ df for all functions f. The fundamental theorem of calculus for line integrals requires path independence in order to express the values of a given vector field in terms of the partial derivatives of another function that is the multivariate analogue of the antiderivative. This is because there can be no unique representation of an antiderivative for inexact differentials, since their variation is inconsistent along different paths. This stipulation of path independence is a necessary addendum to the fundamental theorem of calculus because in one-dimensional calculus there is only one path in between two points defined by a function. Notation Thermodynamics Instead of the differential symbol d, the symbol δ is used, a convention which originated in the 19th-century work of German mathematician Carl Gottfried Neumann, indicating that Q (heat) and W (work) are path dependent, while U (internal energy) is not. Statistical Mechanics Within statistical mechanics, inexact differentials are often denoted with a bar through the differential operator, đ. In LaTeX the command "\rlap{\textrm{d}}{\bar{\phantom{w}}}" is an approximation, or simply "\dj" for a dyet character, which needs the T1 encoding. Mathematics Within mathematics, inexact differentials are usually just referred to more generally as differential forms, often written simply as ω. Examples Total distance When you walk from a point A to a point B along a line (without changing direction), your net displacement and total distance covered are both equal to the length of said line. If you then return to point A (without changing direction), then your net displacement is zero while your total distance covered is twice the length of the line. This example captures the essential idea behind the inexact differential in one dimension. Note that if we allowed ourselves to change direction, then we could take a step forward and then backward at any point in time in going from A to B, and in so doing increase the overall distance covered to an arbitrarily large number while keeping the net displacement constant. Reworking the above with differentials and taking the motion to be along the x-axis, the net distance differential is dx, an exact differential with antiderivative x. On the other hand, the total distance differential is |dx|, which does not have an antiderivative.
The path taken is one for which there exists a time t_c such that the position x(t) is strictly increasing before t_c and strictly decreasing afterward. Then dx is positive before t_c and negative afterward, yielding the integrals ∫ dx = x(1) − x(0) = 0 and ∫ |dx| = 2x(t_c) (for a walk that starts and ends at the origin), exactly the results we expected from the verbal argument before. First law of thermodynamics Inexact differentials show up explicitly in the first law of thermodynamics, dU = δQ − δW, where U is the energy, δQ is the differential change in heat and δW is the differential change in work (here taken as work done by the system). Based on the constants of the thermodynamic system, we are able to parameterize the average energy in several different ways. E.g., in the first stage of the Carnot cycle, a gas is heated by a reservoir, giving us an isothermal expansion of that gas. Some differential amount of heat δQ enters the gas. During the second stage, the gas is allowed to freely expand, outputting some differential amount of work δW. The third stage is similar to the first stage, except the heat is lost by contact with a cold reservoir, while the fourth stage is like the second except work is done onto the system by the surroundings to compress the gas. Because the overall changes in heat and work are different over different parts of the cycle, there is a nonzero net change in the heat and work, indicating that the differentials δQ and δW must be inexact differentials. Internal energy U is a state function, meaning its change can be inferred just by comparing two different states of the system (independently of its transition path), which we can therefore indicate with U₁ and U₂. Since we can go from state U₁ to state U₂ either by providing heat or work, such a change of state does not uniquely identify the amount of work done to the system or heat transferred, but only the change in internal energy ΔU = U₂ − U₁. Heat and work A fire requires heat, fuel, and an oxidizing agent. The energy required to overcome the activation energy barrier for combustion is transferred as heat into the system, resulting in changes to the system's internal energy. In a process, the energy input to start a fire may comprise both work and heat, such as when one rubs tinder (work) and experiences friction (heat) to start a fire. The ensuing combustion is highly exothermic, which releases heat. The overall change in internal energy does not reveal the mode of energy transfer and quantifies only the net work and heat. The difference between initial and final states of the system's internal energy does not account for the extent of the energy interactions transpired. Therefore, internal energy is a state function (i.e. an exact differential), while heat and work are path functions (i.e. inexact differentials) because integration must account for the path taken. Integrating factors It is sometimes possible to convert an inexact differential into an exact one by means of an integrating factor. The most common example of this in thermodynamics is the definition of entropy: dS = δQ_rev / T. In this case, δQ is an inexact differential, because its effect on the state of the system can be compensated by δW. However, when divided by the absolute temperature and when the exchange occurs at reversible conditions (therefore the rev subscript), it produces an exact differential: the entropy S is also a state function. Example Consider the inexact differential form δΦ = 2y dx + x dy. This must be inexact by considering going to the point (1, 1). If we first increase y and then increase x, then that corresponds to first integrating over y and then over x. Integrating over y first contributes 0 (since x = 0 along that leg) and then integrating over x contributes 2. Thus, along the first path we get a value of 2.
However, along the second path, where we first increase x and then increase y, we get a value of 0 + 1 = 1. We can make δΦ an exact differential by multiplying it by x, yielding x δΦ = 2xy dx + x² dy = d(x²y). And so x δΦ is an exact differential. See also Closed and exact differential forms for a higher-level treatment Differential (mathematics) Exact differential Exact differential equation Integrating factor for solving non-exact differential equations by making them exact Conservative vector field References External links Inexact Differential – from Wolfram MathWorld Exact and Inexact Differentials – University of Arizona Exact and Inexact Differentials – University of Texas Exact Differential – from Wolfram MathWorld Thermodynamics Multivariable calculus
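The worked example above can be verified symbolically. The sketch below uses SymPy to test exactness via the mixed-partials criterion (∂M/∂y = ∂N/∂x for a form M dx + N dy) and to confirm that multiplying by x produces the exact differential d(x²y); only standard SymPy calls are used.

```python
import sympy as sp

x, y = sp.symbols("x y")

def is_exact(M, N):
    """A form M dx + N dy is exact iff dM/dy == dN/dx (on a simply connected domain)."""
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# The inexact form from the example: 2y dx + x dy.
M, N = 2*y, x
print(is_exact(M, N))        # False: dM/dy = 2 but dN/dx = 1

# Multiplying by the integrating factor x gives 2xy dx + x^2 dy = d(x^2 y).
print(is_exact(x*M, x*N))    # True

# Check the potential function explicitly.
F = x**2 * y
print(sp.diff(F, x) == x*M, sp.diff(F, y) == x*N)   # True True
```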
Inexact differential
[ "Physics", "Chemistry", "Mathematics" ]
1,551
[ "Calculus", "Multivariable calculus", "Thermodynamics", "Dynamical systems" ]
5,599,330
https://en.wikipedia.org/wiki/Sensitivity%20and%20specificity
In medicine and statistics, sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a medical condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives: Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive. Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative. If the true status of the condition cannot be known, sensitivity and specificity can be defined relative to a "gold standard test" which is assumed correct. For all testing, both diagnoses and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivities will mean lower specificities and vice versa. A test which reliably detects the presence of a condition, resulting in a high number of true positives and low number of false negatives, will have a high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects. A test which reliably excludes individuals who do not have the condition, resulting in a high number of true negatives and low number of false positives, will have a high specificity. This is especially important when people who are identified as having a condition may be subjected to more testing, expense, stigma, anxiety, etc. The terms "sensitivity" and "specificity" were introduced by American biostatistician Jacob Yerushalmy in 1947. There are different definitions within laboratory quality control, wherein "analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymously to detection limit), and "analytical specificity" is defined as the ability of an assay to measure one particular organism or substance, rather than others. However, this article deals with diagnostic sensitivity and specificity as defined at top. Application to screening study Imagine a study evaluating a test that screens people for a disease. Each person taking the test either has or does not have the disease. The test outcome can be positive (classifying the person as having the disease) or negative (classifying the person as not having the disease). The test results for each subject may or may not match the subject's actual status. In that setting: True positive: Sick people correctly identified as sick False positive: Healthy people incorrectly identified as sick True negative: Healthy people correctly identified as healthy False negative: Sick people incorrectly identified as healthy After getting the numbers of true positives, false positives, true negatives, and false negatives, the sensitivity and specificity for the test can be calculated. If it turns out that the sensitivity is high then any person who has the disease is likely to be classified as positive by the test. On the other hand, if the specificity is high, any person who does not have the disease is likely to be classified as negative by the test. An NIH web site has a discussion of how these ratios are calculated. Definition Sensitivity Consider the example of a medical test for diagnosing a condition. 
Sensitivity (sometimes also named the detection rate in a clinical setting) refers to the test's ability to correctly detect ill patients out of those who do have the condition. Mathematically, this can be expressed as: sensitivity = TP / (TP + FN), i.e. the number of true positives divided by the number of individuals who actually have the condition. A negative result in a test with high sensitivity can be useful for "ruling out" disease, since it rarely misdiagnoses those who do have the disease. A test with 100% sensitivity will recognize all patients with the disease by testing positive. In this case, a negative test result would definitively rule out the presence of the disease in a patient. However, a positive result in a test with high sensitivity is not necessarily useful for "ruling in" disease. Suppose a 'bogus' test kit is designed to always give a positive reading. When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for detecting or "ruling in" the disease. The calculation of sensitivity does not take into account indeterminate test results. If a test cannot be repeated, indeterminate samples either should be excluded from the analysis (the number of exclusions should be stated when quoting sensitivity) or can be treated as false negatives (which gives the worst-case value for sensitivity and may therefore underestimate it). A test with a higher sensitivity has a lower type II error rate. Specificity Consider the example of a medical test for diagnosing a disease. Specificity refers to the test's ability to correctly reject healthy patients without a condition. Mathematically, this can be written as: specificity = TN / (TN + FP), i.e. the number of true negatives divided by the number of individuals who do not have the condition. A positive result in a test with high specificity can be useful for "ruling in" disease, since the test rarely gives positive results in healthy patients. A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result would definitively rule in the presence of the disease. However, a negative result from a test with high specificity is not necessarily useful for "ruling out" disease. For example, a test that always returns a negative test result will have a specificity of 100% because specificity does not consider false negatives. A test like that would return negative for patients with the disease, making it useless for "ruling out" the disease. A test with a higher specificity has a lower type I error rate. Graphical illustration The above graphical illustration is meant to show the relationship between sensitivity and specificity. The black, dotted line in the center of the graph is where the sensitivity and specificity are the same. As one moves to the left of the black dotted line, the sensitivity increases, reaching its maximum value of 100% at line A, and the specificity decreases. The sensitivity at line A is 100% because at that point there are zero false negatives, meaning that all the negative test results are true negatives. When moving to the right, the opposite applies: the specificity increases until it reaches the B line and becomes 100%, and the sensitivity decreases. The specificity at line B is 100% because the number of false positives is zero at that line, meaning all the positive test results are true positives. The middle solid line in both figures above, showing the level of sensitivity and specificity, is the test cutoff point.
As previously described, moving this line results in a trade-off between the level of sensitivity and specificity. The left-hand side of this line contains the data points that test below the cut-off point and are considered negative (the blue dots indicate the False Negatives (FN), the white dots True Negatives (TN)). The right-hand side of the line shows the data points that test above the cut-off point and are considered positive (red dots indicate False Positives (FP)). Each side contains 40 data points. For the figure that shows high sensitivity and low specificity, there are 3 FN and 8 FP. Using the fact that positive results = true positives (TP) + FP, we get TP = positive results − FP, or TP = 40 − 8 = 32. The number of sick people in the data set is equal to TP + FN, or 32 + 3 = 35. The sensitivity is therefore 32 / 35 = 91.4%. Using the same method, we get TN = 40 − 3 = 37, and the number of healthy people is 37 + 8 = 45, which results in a specificity of 37 / 45 = 82.2%. For the figure that shows low sensitivity and high specificity, there are 8 FN and 3 FP. Using the same method as the previous figure, we get TP = 40 − 3 = 37. The number of sick people is 37 + 8 = 45, which gives a sensitivity of 37 / 45 = 82.2%. There are 40 − 8 = 32 TN, and the number of healthy people is 32 + 3 = 35. The specificity therefore comes out to 32 / 35 = 91.4%. The red dots indicate the patients with the medical condition. The red background indicates the area where the test predicts the data point to be positive. There are 6 true positives and 0 false negatives in this figure (because every positive case is correctly predicted as positive). Therefore, the sensitivity is 100% (from TP / (TP + FN) = 6 / (6 + 0)). This situation is also illustrated in the previous figure where the dotted line is at position A (the left-hand side is predicted as negative by the model, the right-hand side is predicted as positive by the model). When the dotted line, the test cut-off line, is at position A, the test correctly predicts all the population of the true positive class, but it will fail to correctly identify the data points from the true negative class. Similar to the previously explained figure, the red dots indicate the patients with the medical condition. However, in this case, the green background indicates that the test predicts that all patients are free of the medical condition. The number of data points that are true negative is then 26, and the number of false positives is 0. This results in 100% specificity (from TN / (TN + FP) = 26 / (26 + 0)). Therefore, sensitivity or specificity alone cannot be used to measure the performance of the test. Medical usage In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate). If 100 patients known to have a disease were tested, and 43 test positive, then the test has 43% sensitivity. If 100 with no disease are tested and 96 return a completely negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their values are intrinsic to the test and do not depend on the disease prevalence in the population of interest. Positive and negative predictive values, but not sensitivity or specificity, are values influenced by the prevalence of disease in the population that is being tested.
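The prevalence dependence described above is easy to demonstrate numerically. The sketch below fixes sensitivity and specificity and derives the positive and negative predictive values at different prevalences; the input numbers are illustrative assumptions.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV for a test applied to a population with the given prevalence."""
    tp = sensitivity * prevalence            # fraction of population: true positives
    fn = (1 - sensitivity) * prevalence      # false negatives
    tn = specificity * (1 - prevalence)      # true negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    return tp / (tp + fp), tn / (tn + fn)

# Same test (90% sensitive, 95% specific) at two different prevalences:
# the PPV changes dramatically even though the test itself is unchanged.
for prev in (0.01, 0.30):
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"prevalence {prev:4.0%}:  PPV {ppv:5.1%}   NPV {npv:6.2%}")
```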
Misconceptions

It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed effective at ruling out a disease when negative. This has led to the widely used mnemonics SPPIN and SNNOUT, according to which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease (SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by the prevalence of the condition being tested, the test's sensitivity and its specificity. The SNNOUT mnemonic has some validity when the prevalence of the condition in question is extremely low in the tested sample.

The tradeoff between specificity and sensitivity is explored in ROC analysis as a trade-off between TPR and FPR (that is, recall and fallout). Giving them equal weight optimizes informedness = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information).

Sensitivity index

The sensitivity index or d′ (pronounced "dee-prime") is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with means μS and μN and standard deviations σS and σN, respectively, d′ is defined as:

d′ = (μS − μN) / σN

An estimate of d′ can also be found from measurements of the hit rate and false-alarm rate. It is calculated as: d′ = Z(hit rate) − Z(false alarm rate), where the function Z(p), p ∈ [0, 1], is the inverse of the cumulative Gaussian distribution. d′ is a dimensionless statistic. A higher d′ indicates that the signal can be more readily detected.

Confusion matrix

The relationship between sensitivity, specificity, and similar terms can be understood using the following table. Consider a group with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix (from which several derived metrics can be computed), as follows:

                      Test positive          Test negative
 Condition positive   true positive (TP)     false negative (FN)
 Condition negative   false positive (FP)    true negative (TN)

Estimation of errors in quoted sensitivity or specificity

Sensitivity and specificity values alone may be highly misleading. The 'worst-case' sensitivity or specificity must be calculated in order to avoid reliance on experiments with few results. For example, a particular test may easily show 100% sensitivity if tested against the gold standard four times, but a single additional test against the gold standard that gave a poor result would imply a sensitivity of only 80%. A common way to do this is to state the binomial proportion confidence interval, often calculated using a Wilson score interval. Confidence intervals for sensitivity and specificity can be calculated, giving the range of values within which the correct value lies at a given confidence level (e.g., 95%).
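As a concrete illustration of the worst-case issue, a Wilson score interval can be computed directly from its closed form. The sketch below (illustrative, not from the article) evaluates the four-out-of-four example above at roughly the 95% confidence level:

```python
import math

# Sketch: Wilson score interval for a proportion such as sensitivity.

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Return (low, high) bounds of the Wilson score interval."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

low, high = wilson_interval(4, 4)  # 4 correct results out of 4 trials
print(f"95% CI for an observed 4/4 sensitivity: {low:.2f} to {high:.2f}")
# -> roughly 0.51 to 1.00, despite the point estimate of 100%
```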
Terminology in information retrieval

In information retrieval, the positive predictive value is called precision, and sensitivity is called recall. Unlike the specificity vs. sensitivity tradeoff, these measures are both independent of the number of true negatives, which is generally unknown and much larger than the actual numbers of relevant and retrieved documents. This assumption of very large numbers of true negatives versus positives is rare in other applications.

The F-score can be used as a single measure of performance of the test for the positive class. The F-score is the harmonic mean of precision and recall:

F = 2 × (precision × recall) / (precision + recall)

In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer type II errors.

Terminology in genome analysis

Similarly to the domain of information retrieval, in the research area of gene prediction, the number of true negatives (non-genes) in genomic sequences is generally unknown and much larger than the actual number of genes (true positives). The convenient and intuitively understood term specificity in this research area has frequently been used with the mathematical formula for precision and recall as defined in biostatistics. The pair of thus defined specificity (as positive predictive value) and sensitivity (true positive rate) represent major parameters characterizing the accuracy of gene prediction algorithms. Conversely, the term specificity in the sense of true negative rate would have little, if any, application in the genome analysis research area.

See also

Notes

References

Further reading

External links
UIC Calculator
Vassar College's Sensitivity/Specificity Calculator
MedCalc Free Online Calculator
Bayesian clinical diagnostic model applet

Accuracy and precision Bioinformatics Biostatistics Cheminformatics Medical statistics Statistical ratios Statistical classification
Sensitivity and specificity
[ "Chemistry", "Engineering", "Biology" ]
3,214
[ "Biological engineering", "Bioinformatics", "Computational chemistry", "nan", "Cheminformatics" ]
5,599,414
https://en.wikipedia.org/wiki/Heteroduplex%20analysis
Heteroduplex analysis (HDA) is a method in biochemistry, used since 1992, for detecting point mutations in DNA (deoxyribonucleic acid). Heteroduplexes are dsDNA molecules that have one or more mismatched base pairs; homoduplexes, on the other hand, are dsDNA molecules that are perfectly paired. The method depends on the fact that heteroduplexes show reduced electrophoretic mobility relative to homoduplex DNA. Heteroduplexes are formed between different DNA alleles: in a mixture of amplified wild-type and mutant DNA, heteroduplexes form where mutant and wild-type strands anneal, while homoduplexes form where identical strands reanneal.

There are two types of heteroduplexes, depending on the type and extent of the mutation in the DNA. Small deletions or insertions create bulge-type heteroduplexes, which are stable and have been verified by electron microscopy. Single-base substitutions create less stable heteroduplexes called bubble-type heteroduplexes; because of their low stability, they are difficult to visualize by electron microscopy. HDA is widely used for rapid screening for the 3 bp p.F508del deletion in the CFTR gene.

References

Biochemistry methods Biochemistry Molecular biology
Heteroduplex analysis
[ "Chemistry", "Biology" ]
265
[ "Biochemistry methods", "Biochemistry", "nan", "Molecular biology" ]
5,599,416
https://en.wikipedia.org/wiki/Annexin%20A5%20affinity%20assay
In molecular biology, an annexin A5 affinity assay is a test to quantify the number of cells undergoing apoptosis. The assay uses the protein annexin A5 to tag apoptotic and dead cells, and the numbers are then counted using either flow cytometry or a fluorescence microscope. The annexin A5 protein binds to apoptotic cells in a calcium-dependent manner via phosphatidylserine-containing membrane surfaces; phosphatidylserine is usually present only on the inner leaflet of the membrane.

Background

Apoptosis is a form of programmed cell death that is used by the body to remove unwanted, damaged, or senescent cells from tissues. Removal of apoptotic cells is carried out via phagocytosis by white blood cells such as macrophages and dendritic cells. Phagocytic white blood cells recognize apoptotic cells by their exposure of negatively charged phospholipids (phosphatidylserine) on the cell surface. In normal cells, the negative phospholipids reside on the inner side of the cellular membrane while the outer surface of the membrane is occupied by uncharged phospholipids. After a cell has entered apoptosis, the negatively charged phospholipids are transported to the outer cell surface by a hypothetical protein known as scramblase. Phagocytic white blood cells express a receptor that can bind to and detect the negatively charged phospholipids on the apoptotic cell surfaces. After detection, the apoptotic cells are removed.

Detection of cell death with annexin A5

In healthy individuals, apoptotic cells are rapidly removed by phagocytes. However, in pathological processes, the removal of apoptotic cells may be delayed or even absent. Dying cells in tissue can be detected with annexin A5. Labeling annexin A5 with fluorescent or radioactive molecules makes it possible to detect binding of labeled annexin A5 to the cell surface of apoptotic cells. After binding to the phospholipid surface, annexin A5 assembles into a trimeric cluster. This trimer consists of three annexin A5 molecules that are bound to each other via non-covalent protein-protein interactions. The formation of annexin A5 trimers results in the formation of a two-dimensional crystal lattice on the phospholipid membrane. This clustering of annexin A5 on the membrane greatly increases the signal intensity of annexin A5 when labeled with a fluorescent or radioactive probe. Two-dimensional crystal formation is believed to cause internalization of annexin A5 through a novel process of endocytosis if it occurs on cells that are in the early phase of executing cell death. Internalization additionally amplifies the intensity of the annexin A5-stained cell.

Annexin A5 has been used to successfully detect apoptotic cells in vitro and in vivo. Pathological processes in which apoptosis occurs include inflammation, ischemic damage of the heart caused by myocardial infarction, apoptotic white blood cells and smooth muscle cells present in atherosclerotic plaques of blood vessels, transplanted organs that are rejected by the immune system, and tumour cells that are exposed to cytostatic drugs during chemotherapy. The non-invasive detection of diseased tissue with, for example, radioactively labeled annexin A5 is the goal of a recently developed line of research known as molecular imaging. Molecular imaging of cell death using radioactive annexin A5 can become of clinical significance for diagnosing the vulnerability of atherosclerotic plaques (unstable atherosclerosis), heart failure and transplant rejection, and for monitoring the efficacy of anti-cancer therapy.
References

Laboratory techniques Flow cytometry
Annexin A5 affinity assay
[ "Chemistry", "Biology" ]
794
[ "Flow cytometry", "nan" ]
5,599,809
https://en.wikipedia.org/wiki/Project%20Flower
Project Flower was a joint Israeli-Iranian military effort to develop advanced missile systems. It was one of six oil-for-arms contracts that the countries signed in April 1977.

History

In the mid-1970s, Iran sought to expand its missile capability, including through cooperation with Israel, with whom it had tacit military, economic and intelligence cooperation at the time. In April 1977, six oil-for-arms contracts were signed, among them Project Flower. On 18 July 1977, Iranian Vice Minister of War General Hassan Toufanian traveled in secret to Israel, where he met with Israeli Foreign Minister Moshe Dayan and Minister of Defense Ezer Weizmann. Iranian concerns over missile and nuclear developments in India and Pakistan were also discussed.

This project focused on the development of a longer-range Gabriel anti-ship missile and a future submarine-launched variant, and intended to reproduce an American-designed missile with Israeli-made parts that could be fitted with nuclear warheads. The missile incorporated American navigation and guidance equipment. The following year, Iran supplied Israel with $280 million worth of oil as a down payment. A team of Iranian experts began construction of a missile assembly facility near Sirjan, in south central Iran, and a missile test range near Rafsanjan.

On 11 February 1979, the monarchy of Mohammed Reza Pahlavi was overthrown in the Iranian Revolution, and Project Flower ended. The Israeli engineers and defense officials returned to Israel and all the blueprints and diagrams of the weapons systems were sent back via diplomatic courier. Yaakov Shapiro, the Defense Ministry official in charge of coordinating the negotiations with Iran from 1975 to 1978, recalls: "In Iran they treated us like kings. We did business with them on a stunning scale. Without the ties with Iran, we would not have had the money to develop weaponry that is today in the front line of the defense of the State of Israel."

Israel's Secret War in Iran book

Three decades after the overthrow of the Pahlavi monarchy and the closure of the project, a book titled "Israel's Secret War in Iran" was published by Ronen Bergman, a senior political and military analyst for Yedioth Ahronoth, Israel's largest paid newspaper. Bergman had a conversation with a senior official in the Israeli Ministry of Defense who worked on this project. According to the book, while Pahlavi's authorities were happy with the project and believed it was advancing away from the eyes of the US, Israel had carefully duped Pahlavi to secure its own interests: under Israel's plan, and contrary to what Iran's top officials thought, Iran would only obtain obsolete technology through the project.

See also
Iranian military industry
Iran–Israel relations
Iran's missile forces
Israel Defense Forces

References

External links
Joseph S. Bermudez, Jr., "Iran's Missile Development," The International Missile Bazaar: the New Supplier's Network (San Francisco: Westview Press, 1994), William C. Potter and Harlan W. Jencks, eds., p. 48.
"Minutes from Meeting Held in Tel Aviv between H. E. General M. Dayan, Foreign Minister of Israel, and H.E. General H. Toufanian, Vice Minister of War, Imperial Government of Iran," Top Secret Minutes from Israel's Ministry of Foreign Affairs, 18 July 1977, in Digital National Security Archive
Ronen Bergman, "5 billion Reasons to Talk to Iran," Haaretz, 19 March 1999

Iran–Israel relations 1977 in Iran 1977 in Israel Abandoned military projects Abandoned military projects of Israel
Project Flower
[ "Engineering" ]
718
[ "Military projects", "Abandoned military projects" ]
5,600,116
https://en.wikipedia.org/wiki/Ipomoea%20pes-caprae
Ipomoea pes-caprae, also known as bayhops, bay-hops, beach morning glory, railroad vine, or goat's foot, is a common pantropical creeping vine belonging to the family Convolvulaceae. It grows on the upper parts of beaches and endures salted air. It is one of the most common and most widely distributed salt-tolerant plants and provides one of the best known examples of oceanic dispersal. Its seeds float and are unaffected by salt water. Originally described by Linnaeus, it was placed in its current genus by Robert Brown in 1818.

Description

Ipomoea pes-caprae is a prostrate perennial, often covering large areas; its long-trailing stems, often several metres in length, root at the nodes and are glabrous. It has pink, fused petals with a darker centre. The fruit is a capsule containing 4 hairy seeds that float in water.

Distribution

This species can be found on the sandy shores of the tropical Atlantic, Pacific, and Indian Oceans. A similar species, Ipomoea imperati, with white flowers, has an even wider distribution on the world's beaches. I. pes-caprae is common on the sand dunes of Australia's upper north coast of New South Wales, and can also be found along the entire Queensland coastline.

Goat's foot is a primary sand stabilizer, being one of the first plants to colonise dunes. It grows on almost all parts of dunes but is usually found on the seaward slopes, sending long runners down towards the toe of the dune. The sprawling runners spread out from the woody rootstock, but the large two-lobed leaves are sparse and a dense cover on the sand is rarely achieved except in protected situations. This plant grows in association with sand Spinifex grass and is a useful sand binder, thriving under conditions of sandblast and salt spray.

Ipomoea pes-caprae has been observed growing in communities with some other tough species, studied for their endurance of difficult growing conditions on dunes:
Hydrocotyle bonariensis
Senecio crassiflorus
Juncus acutus

Together with Melanthera biflora, Portulaca oleracea and Digitaria ciliaris, Ipomoea pes-caprae is usually one of the first species to colonize degraded or altered environments in tropical zones of the planet.

Uses

In Australia, it is a commonly used Aboriginal medicine, applied as a poultice for stingray and stonefish stings. In Brazil, this plant – namely the subspecies brasiliensis – is known as salsa-da-praia in folk medicine, and is used to treat inflammation and gastrointestinal disorders. In the Philippines, the plant is known locally as Bagasua and is used to treat rheumatism, colic, oedema, whitlow, and piles.

Etymology

The epithet pes-caprae comes from the Latin 'pes' for foot and 'caprae' for goat, and refers to the resemblance of the outline of the leaf to the footprint of a goat.

References

External links

Halophytes pes-caprae Pantropical flora Plants described in 1818
Ipomoea pes-caprae
[ "Chemistry" ]
657
[ "Halophytes", "Salts" ]
5,600,138
https://en.wikipedia.org/wiki/Fertigation
Fertigation is the injection of fertilizers, soil amendments, water amendments and other water-soluble products into an irrigation system. Chemigation, the injection of chemicals into an irrigation system, is related to fertigation. The two terms are sometimes used interchangeably; however, chemigation is generally a more controlled and regulated process due to the nature of the chemicals used. Chemigation often involves insecticides, herbicides, and fungicides, some of which pose a health threat to humans, animals, and the environment.

Uses

Fertigation is practiced extensively in commercial agriculture and horticulture. Fertigation is also increasingly being used for landscaping as dispenser units become more reliable and easier to use. Fertigation is used to add additional nutrients or to correct nutrient deficiencies detected in plant tissue analysis. It is usually practiced on high-value crops such as vegetables, turf, fruit trees, and ornamentals.

Commonly used chemicals

Nitrogen is the most commonly used plant nutrient. Naturally occurring nitrogen (N2) is a diatomic molecule which makes up approximately 80% of the Earth's atmosphere. Most plants cannot directly consume diatomic nitrogen, therefore nitrogen must be supplied as a component of other chemical substances which plants can consume. Commonly, anhydrous ammonia, ammonium nitrate, and urea are used as bioavailable sources of nitrogen.

Other nutrients needed by plants include phosphorus and potassium. Like nitrogen, plants require these substances to live, but they must be contained in other chemical substances, such as monoammonium phosphate or diammonium phosphate, to serve as bioavailable nutrients. A common source of potassium is muriate of potash, which is chemically potassium chloride. A soil fertility analysis is used to determine which of the more stable nutrients should be used.

Fungicides are used on sod (or turf), as on golf courses and sod farms. One of the earliest was cyproconazole, marketed in 1995.

Advantages

The benefits of fertigation methods over conventional or drop-fertilizing methods include:
Increased nutrient absorption by plants.
Accurate placement of nutrient: where the water goes, the nutrient goes as well.
Ability to "micro dose", feeding the plants just enough so nutrients can be absorbed and are not left to be washed down to storm water next time it rains.
Reduction of fertilizer, chemicals, and water needed.
Reduced leaching of chemicals into the water supply.
Reduced water consumption due to the plant's increased root mass's ability to trap and hold water.
Application of nutrients can be controlled at the precise time and rate necessary.
Minimized risk of the roots contracting soil-borne diseases through the contaminated soil.
Reduction of soil erosion issues as the nutrients are pumped through the water drip system.
Leaching is often decreased through methods used to employ fertigation.

Disadvantages

Concentration of the solution may decrease as the fertilizer dissolves; this depends on equipment selection, and poorly selected equipment may lead to poor nutrient placement.
The water supply for fertigation is to be kept separate from the domestic water supply to avoid contamination.
Possible pressure loss in the main irrigation line.
The process is dependent on the water supply's non-restriction by drought rationing.

Methods used

Drip irrigation – Less wasteful than sprinklers. It is not only more efficient for fertilizer usage, but can also maximize nutrient uptake in plants like cotton. Drip irrigation using fertigation can also increase the yield and quality of fruit and flowers, especially in subsurface drip systems rather than above-surface drip tape.
Sprinkler systems – Increase leaf and fruit quality.
Continuous application – Fertilizer is supplied at a constant rate.
Three-stage application – Irrigation starts without fertilizers. Fertilizers are applied later in the process once the ground is wet, and the final stage clears fertilizers out of the irrigation system.
Proportional application – Injection rate is proportional to the water discharge rate (see the sketch below).
Quantitative application – Nutrient solution is applied in a calculated amount to each irrigation block.

Other methods of application include the lateral move, the traveler gun, and solid set systems.
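As a rough illustration of the proportional and quantitative items above, the required stock-solution injection rate can be estimated with a simple mass balance. The Python sketch below is hypothetical and not from the article; all names and numbers are invented, and it assumes the injected volume is negligible relative to the irrigation flow:

```python
# Hypothetical sketch: proportional fertigation dosing by mass balance.

def injection_rate_l_per_h(flow_l_per_h: float,
                           target_mg_per_l: float,
                           stock_mg_per_l: float) -> float:
    """Stock-solution injection rate (L/h) so that the irrigation water
    leaves the mixing point at roughly the target nutrient concentration.
    Assumes stock concentration >> target concentration."""
    return flow_l_per_h * target_mg_per_l / stock_mg_per_l

# 5,000 L/h irrigation flow, 150 mg/L target N, 100 g/L (100,000 mg/L) stock:
print(injection_rate_l_per_h(5_000, 150, 100_000))  # -> 7.5 L/h of stock
```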
System design

Fertigation assists the distribution of fertilizers for farmers. The simplest type of fertigation system consists of a tank with a pump, distribution pipes, capillaries, and a dripper pen. All systems should be placed on a raised or sealed platform, not in direct contact with the earth. Each system should also be fitted with chemical spill trays.

Because of the potential risk of contamination in the potable (drinking) water supply, a backflow prevention device is required for most fertigation systems. Backflow requirements may vary greatly; therefore, it is very important to understand the proper level of backflow prevention required by law. In the United States, the minimum backflow protection is usually determined by state regulation. Each city or town may set the level of protection required.

See also
Drip irrigation
Foliar feeding
Soil defertilisation
Sustainable agriculture
Water conservation
Fertilizer injector

References
Agricultural terminology Fertilizers Irrigation Lawn care Plant nutrition
Fertigation
[ "Chemistry" ]
1,614
[ "Fertilizers", "Soil chemistry" ]
5,600,364
https://en.wikipedia.org/wiki/Werner%20Sobek
Werner Sobek (born May 16, 1953) is a German architect and structural engineer.

Life

Werner Sobek was born in 1953 in Aalen, Germany. From 1974 to 1980, he studied structural engineering and architecture at the University of Stuttgart. From 1980 to 1986, he was a post-graduate fellow in the research project 'Wide-Span Lightweight Structures' at the University of Stuttgart, and he finished his PhD in structural engineering in 1987. In 1983, Sobek won the Fazlur Khan International Fellowship from the SOM Foundation.

In 1991, he became a professor at the Leibniz University Hannover (successor to Bernd Tokarz) and director of the Institute for Structural Design and Building Methods. In 1992 he founded his own company, Werner Sobek, which now has offices in Stuttgart, Frankfurt, London, Moscow, New York, and Dubai. The company has over 200 employees and works with all types of structures and materials. Its core areas of expertise are lightweight construction, high-rise construction, façade design, special constructions made from steel, glass, titanium, fabric and wood, as well as the design of sustainable buildings.

Since 1994, he has been a professor at the University of Stuttgart (successor to Frei Otto) and director of the Institute for Lightweight Structures and of the Central Laboratory for Structural Engineering. In 2000, he took over the chair of Jörg Schlaich and fused the Institute for Lightweight Structures and the Institute for Construction and Design into the Institute for Lightweight Structures and Conceptual Design (ILEK). Both in its research and teaching, the ILEK at the University of Stuttgart unites the aspect of design that is dominant in architecture with the focus on analysis and construction from structural engineering as well as materials science. On the basis of a goal-oriented and interdisciplinary approach, the institute is concerned with the conceptual development of all types of construction and load-bearing structures, using all types of materials. The areas of focus span construction with textiles and glass all the way to new structures in reinforced and prestressed concrete. From the individual details to the whole structure, the approach focuses on the optimisation of form and construction with respect to material and energy use, durability and reliability, recyclability and environmental sustainability. The results of this work are published in the bilingual (German/English) serial from the institute (IL) or published individually in special research reports on particular topics.

In 2008 Werner Sobek was appointed Mies van der Rohe Professor at the Illinois Institute of Technology in Chicago. In recognition of his manifold academic achievements, the Technical University of Dresden awarded him an honorary doctorate in 2009.

Werner Sobek is known for his environmentally sustainable and self-sufficient prototype houses such as R 128 and H16. A well-known concept study by Werner Sobek is "R-129", which uses a polyurethane skin on a carbon-fibre frame, giving it walls thinner than eggshells. His commitment to sustainability is also reflected in his involvement in the German Sustainable Building Council (DGNB, Deutsche Gesellschaft für Nachhaltiges Bauen), which he co-founded in July 2007. He was a member of the DGNB's board of directors until June 2013 and served as its president from April 2008 to June 2010.
In December 2011 he also founded the Stuttgart Institute of Sustainability (SIS), a nonprofit association aiming at the promotion of research on new sustainable building techniques. He presented a keynote address and co-presented the workshop "Reduce CO2 – With technology to zero emissions" at the 3rd International Holcim Forum 2010 in Mexico City, and he was on the jury of the global Holcim Awards 2012. His research into lightweight structures, materials and the associated techniques and technologies was awarded a Global Award for Sustainable Architecture in 2019.

Projects

Ecole Nationale d'Art Décoratif, Limoges, France
Interbank, Lima, Peru
Roof over the centre court of the Am Rothenbaum stadium, Hamburg, Germany
Suvarnabhumi Airport, Bangkok, Thailand
Sony Center, Berlin, Germany
House R 128, Stuttgart, Germany
Aktivhaus B10, Stuttgart, Germany

Publications (selection)

Sobek, Werner: Zum Entwerfen im Leichtbau. in: Bauingenieur, 70/1995, pp. 323–329.
Schittich, C.; Staib, G.; Balkow, D.; Schuler, M.; Sobek, W.: Glasbau Atlas. Basel/Boston/Berlin: Birkhaeuser, 1998.
Schulitz, H.C.; Sobek, W.; Habermann, K.J.: Stahlbau Atlas. Basel/Boston/Berlin: Birkhaeuser, 1999.
Sobek, W.; Kutterer, M.; Messmer, R.: Untersuchungen zum Schubverbund bei Verbundsicherheitsglas – Ermittlung des zeit- und temperaturabhängigen Schubmoduls von PVB. in: Bauingenieur. 75/2000, Nr. 1, pp. 41–46.
Sobek, W.; Haase, W.; Teuffel, P.: "Adaptive Systeme", Stahlbau 69(7), 2000, pp. 544–555.
Sobek, W.: Archi-Neering – Visions of an Architecture for the 21st Century. in: Glass Processing Days. Conference Proceedings 18 to 21 June 2001, Tampere, Finland. Tampere: Tamglass Ltd. Oy, 2001, pp. 331–334.
Sobek, W.; Sundermann, W.; Rehle, N.; Reinke, H.G.: Tragwerke für transparente Hochhäuser. in: Bauingenieur 76 (2001), pp. 326–335.
Sobek, W.; Teuffel, P.: "Adaptive Structures in Architecture and Structural Engineering", Smart Structures and Materials (SPIE Vol. 4330): Proceedings of the SPIE 8th Annual International Symposium, 4–8 March 2001, Newport Beach, CA, USA. Liu, S. C. (ed.), Bellingham: SPIE, pp. 36–45.
Sobek, W.; Teuffel, P.: Neue Entwicklungen im Leichtbau: Adaptive Tragwerke. in: Ingenieurbaukunst in Deutschland. Jahrbuch 2001. Hamburg: Junius, 2001.
Sobek, W.; Teuffel, P.: "Adaptive Lightweight Structures". in: Lightweight Structures in Civil Engineering: Proceedings of the International IASS Symposium, 24–28 June 2002, Warsaw, Poland. Ed. Obrebski, J. B., Wydawnicto Naukowe: Micro Publisher, pp. 203–210.
Sobek, Werner: Über Schachtelhalme, Türme und Hochhäuser. in: Der Traum vom Turm. Hochhäuser: Mythos – Ingenieurkunst – Baukultur. Katalog zur gleichnamigen Ausstellung im NRW-Forum Kultur und Wirtschaft Düsseldorf vom 6. November 2004 bis zum 20. Februar 2005. Ostfildern: Hatje-Cantz, 2004. pp. 42–57.
Sobek, Werner: Glass Structures. in: The Structural Engineer, Vol. 83, Nr. 7 (5. April 2005), pp. 32–36.
Sobek, Werner; Blandini, Lucio: Die "Glaskuppel". Prototyp einer rahmenlosen selbsttragenden Glasschale. in: Beratende Ingenieure 11/12 (2005). pp. 23–28.
Sobek, W.; Teuffel, P.; Weilandt, A.; Lemaitre, C.: "Adaptive and Lightweight", Adaptables2006, TU/e, International Conference On Adaptable Building Structures, Eindhoven, The Netherlands, 3–5 July 2006.
Sobek, Werner; Hagenmayer, Stephen; Duder, Michael; Winterstetter, Thomas: Die "Highlight Munich Business Towers" in München. Tragwerksplanung und statische Nachweise. in: Bautechnik 4/2006, pp. 247–253.
Sobek, Werner: Suvarnabhumi International Airport Bangkok – Tragwerk und Formfindung. in: Detail 7/8 (2006), pp. 818–919.
Sobek, Werner: Gedanken zu einer Reform der Bauingenieurausbildung. in: Bauen im Aufbruch?! Schriftenreihe der Stiftung Bauwesen (vol. 11). pp. 65–73.
Sobek, Werner: Das Mercedes-Benz Museum in Stuttgart. Die Tragwerksplanung – Komplexe Geometrie in 3-D. in: Detail 9 (2006), pp. 980–981.
Sobek, Werner; Kobler, Martin: Form und Gestaltung von Betonschalen. in: Beton Kalender 2007 vol. 2, pp. 1–18.
Sobek, Werner; Straub, Wolfgang; Ploch, Jan: Teilüberdeckelung einer innerstädtischen Bundesstraße mit Spannbetonfertigteilen. in: Beton- und Stahlbetonbau 2/2007, pp. 114–119.
Sobek, Werner; Reinke, Hans Georg; Berger, Tobias; Klein, Dietmar; Prasser, Patrick: Lufthansa Aviation Center. Die Neue Hauptverwaltung in Frankfurt. in: Beratende Ingenieure 1/2 (2007), pp. 18–21.
Sobek, Werner: Bauschaffen – auch im Sinn der Nachhaltigkeit. In: archplus 184 (Okt. 2007). pp. 88 f.
Sobek, Werner; Laufs, Wilfried; Schmid, Angelika; Rossier, Ed: Innovative Steel Structures for Museo del Acero in Mexico. In: Structural Engineering International 1/2008. pp. 15–19.
Sobek, Werner: Wie weiter wohnen? In: Werner Sobek & Bettina Hintze (ed.): Die besten Einfamilienhäuser – innovativ und flexibel. München: Callwey, 2008. pp. 8–13.
Sobek, Werner; Schmid, Angelika; Heinlein, Frank: Innovative Stahltragwerke für das Museo del Acero in Mexiko. In: Stahlbau 77 (2008), vol. 8. pp. 551–554.
Gertis, Karl; Hauser, Gerd; Sedlbauer, Klaus; Sobek, Werner: Was bedeutet "Platin"? Zur Entwicklung von Nachhaltigkeitsbewertungsverfahren. In: Bauphysik 30 (2008), vol. 4, pp. 244–256.
Sobek, Werner; Trumpf, Heiko; Stork, Lena; Weidler, Nik: The Hollaenderbruecke. Economic and architecturally sophisticated design employing steel and GFRP. In: Steel Construction 1 (2008), vol. 1, pp. 34–41.
Sobek, Werner; Schmid, Angelika; Heinlein, Frank: From Mill to Museum. In: Modern Steel Construction (June 2008). pp. 26–29.
Sobek, Werner; Straub, Wolfgang; Schmid, Angelika: Horizon Serono – Konstruktion des weltweit größten zu öffnenden Glasdaches und der darunterliegenden Forumfassade. In: Stahlbau 78 (2009), vol. 1. pp. 1–10.
Sobek, Werner: Engineered Glass. In: Michael Bell & Jeannie Kim (ed.): Engineered Transparency – The Technical, Visual, and Spatial Effects of Glass. New York: Princeton Architectural Press, 2009. pp. 169–182.
Sobek, Werner; Sedlbauer, Klaus; Schuster, Heide: Sustainable Building. In: Bullinger, Hans-Jörg (ed.): Technology Guide. Principles – Applications – Trends. Heidelberg: Springer, 2009. pp. 432–435.
Sobek, Werner: Vom Institut für Massivbau zum Institut für Leichtbau Entwerfen und Konstruieren. Das Institut nach der Emeritierung von Fritz Leonhardt. In: Joachim Kleinmanns & Christiane Weber (ed.): Fritz Leonhardt 1909–1999. Die Kunst des Konstruierens. The Art of Engineering. Stuttgart: Axel Menges, 2009. pp. 160–163.
Sobek, Werner: Das Deutsche Gütesiegel Nachhaltiges Bauen – ein neues Instrument zur Planung und Zertifizierung von Nachhaltigkeit. In: VDI Annual Edition 2009/2010 of Bauingenieur (vol. 84, September 2009). pp. 90–91.
Sobek, Werner; Tarazi, Frank: Ein weiterer Schritt hin zur entmaterialisierten Gebäudehülle – das Neue Verwaltungsgebäude der Europäischen Investitionsbank in Luxemburg. In: Bauingenieur vol. 85 (January 2010), pp. 29–35.
Sobek, Werner; Hinz, Holger: Der Neubau des Emil-Schumacher-Museums in Hagen. In: Stahlbau Spezial 2010 – Konstruktiver Glasbau. pp. 30–33.
Sobek, Werner: Wie weiter Bauen? Editorial. In: Beton- und Stahlbetonbau 105 (2010), vol. 4, p. 205.
Sobek, Werner; Hinz, Holger; Sundermann, Wolfgang: Die Sanaya Towers in Amman. Eine tragwerksplanerische Herausforderung. In: Beton- und Stahlbetonbau 105 (2010), vol. 4, pp. 244–247.
Sobek, Werner; Trumpf, Heiko; Heinlein, Frank: Recyclinggerechtes Konstruieren im Stahlbau. In: Stahlbau 79 (2010), vol. 6. pp. 424–433.

Awards

Fazlur Khan Award of the SOM (Skidmore, Owings and Merrill) Foundation
Hubert-Rüsch-Preis of the Deutscher Betonverein
DuPont Benedictus Award
Industrial Fabrics Association International (IFAI) Design Award
European Glulam Award
Fritz Schumacher Award of the Alfred Toepfer Stiftung F.V.S.
Building of the Year Award, Hamburg
Innovation Award "Architecture and Presentation"
Hugo Haering Award of the BDA (Association of German Architects)
Oscar Faber Award of the Institution of Structural Engineers, London (2006)
Auguste Perret Prize of the UIA – Union Internationale des Architectes
The Fazlur Khan Lifetime Achievement Medal of the CTBUH – Council on Tall Buildings and Urban Habitat
Prix Acier 2009, Stahlbau Zentrum Schweiz
Médaille de la Recherche et de la Technique 2010 of the French Académie d'Architecture
German Solar Award 2012
Order of Merit of the State of Baden-Württemberg (20 April 2013)
Nike 2013 (BDA Architecture Award) (21 June 2013)
Tsuboi Award of the IASS (International Association for Shell and Spatial Structures) (23 September 2013)
Global Award for Sustainable Architecture (13 May 2019)

Exhibitions

05/2010 – 06/2010 "Sketches for the Future. Werner Sobek and the ILEK" (Architecture Biennale 2010, Moscow/Russia)
03/2010 – 05/2010 "Designing the Future" (Goethe Institute La Paz and other sites in Bolivia (travelling exhibition))
11/2009 – 12/2009 "Skizzen für die Zukunft. Werner Sobek und das ILEK" (FH Kärnten, Spittal/Austria)
06/2009 – 09/2009 "Skizzen für die Zukunft. Werner Sobek und das ILEK" (Ringturm Gallery, Vienna/Austria)
05/2009 – 06/2010 "Designing the Future" (Goethe Institute Jakarta and other sites in Indonesia (travelling exhibition))
02/2005 "Merging Architecture and Engineering – Lightweight, Adaptivity and Transparency" (American University in Cairo/Egypt)
09/2004 – 11/2004 "Sobek und Seele – Glasfassaden mit hoher Performance" (Architekturmuseum Schwaben, Augsburg/Germany)
05/2004 – 08/2004 "show me the future – wege in die zukunft" (Pinakothek der Moderne, Munich/Germany)
11/2003 – 11/2004 "Beyond Materiality – China Tour" (Peking, Houa Zhong Keij, Beifang Jidaoda, Suzhou, Zhejiang, Fuzhou, Kanton, Shenzhen)
11/2003 "Beyond Materiality" (Tokyo Trade Fair/Japan)
09/2002 – 10/2002 "Beyond Materiality" (Aedes Gallery East, Berlin/Germany)
06/1999 – 11/1999 "Archi-neering" (Municipal Museum Leverkusen/Germany)

Further reading

Blaser, Werner: The Art of Engineering. Basel: Birkhaeuser, 1999.
Blaser, Werner; Heinlein, Frank: R128 by Werner Sobek. Bauen im 21. Jahrhundert. Basel: Birkhaeuser, 2002.
Morgan, Conway Lloyd: Show me the Future. Engineering and Design by Werner Sobek. Ludwigsburg: avedition, 2004.
Heinlein, Frank; Sostmann, Maren: Werner Sobek – Light Works. Ludwigsburg: avedition, 2007.
Stiller, Adolph (ed.): Sketches for the Future. Werner Sobek – Architecture and Construction: A Dialogue. Vienna: Muery Salzmann, 2010. p. 123.
References

External links
Werner Sobek | Engineering & Design
DIE ZEIT: Werner Sobek baut für die Zukunft: Seine Häuser sollen Energie sparen, keinen Müll erzeugen und das Klima retten (part 6 of the series 'Wer denkt für morgen?')

1953 births Living people 20th-century German architects Structural engineers 21st-century German architects People from Aalen University of Stuttgart alumni Recipients of the Order of Merit of Baden-Württemberg
Werner Sobek
[ "Engineering" ]
3,988
[ "Structural engineering", "Structural engineers" ]
5,600,372
https://en.wikipedia.org/wiki/SAP%20ERP
SAP ERP is enterprise resource planning software developed by the German company SAP SE. SAP ERP incorporates the key business functions of an organization. The latest version of SAP ERP (V.6.0) was made available in 2006. The most recent SAP enhancement package 8 for SAP ERP 6.0 was released in 2016. It is now considered legacy technology, having been superseded by SAP S/4HANA.

Functionality

Business processes included in SAP ERP are: Operations (sales & distribution, materials management, production planning, logistics execution, and quality management), Financials (financial accounting, management accounting, financial supply chain management), Human capital management (training, payroll, e-recruiting) and Corporate services (travel management, environment, health and safety, and real estate management).

Development

SAP ERP was built based on the former SAP R/3 software. SAP R/3, which was officially launched on 6 July 1992, consisted of various applications on top of SAP Basis, SAP's set of middleware programs and tools. All applications were built on top of the SAP Web Application Server. Extension sets were used to deliver new features and keep the core as stable as possible. The Web Application Server contained all the capabilities of SAP Basis.

An architecture change took place with the introduction of mySAP ERP in 2004. R/3 Enterprise was replaced with the introduction of ERP Central Component (SAP ECC). The SAP Business Warehouse, SAP Strategic Enterprise Management and Internet Transaction Server were also merged into SAP ECC, allowing users to run them under one instance. The SAP Web Application Server was wrapped into SAP NetWeaver, which was introduced in 2003. Architectural changes were also made to support an enterprise service architecture to transition customers to a service-oriented architecture.

The latest version, SAP ERP 6.0, was released in 2006. SAP ERP 6.0 has since been updated through SAP enhancement packages, the most recent of which, SAP enhancement package 8 for SAP ERP 6.0, was released in 2016.

Implementation

SAP ERP consists of several modules, including Financial Accounting (FI), Controlling (CO), Asset Accounting (AA), Sales & Distribution (SD), SAP Customer Relationship Management (SAP CRM), Material Management (MM), Production Planning (PP), Quality Management (QM), Project System (PS), Plant Maintenance (PM), Human Resources (HR), and Warehouse Management (WM). Traditionally, an implementation is split into five phases:
Phase 1 – Project Preparation
Phase 2 – Business Blueprint
Phase 3 – Realization
Phase 4 – Final Preparation
Phase 5 – Go Live Support

Deployment and maintenance costs

It is estimated that "for a Fortune 500 company, software, hardware, and consulting costs can easily exceed $100 million (around $50 million to $500 million). Large companies can also spend $50 million to $100 million on upgrades. Full implementation of all modules can take years", which also adds to the end price. Midsized companies (fewer than 1,000 employees) are more likely to spend around $10 million to $20 million at most, and small companies are not likely to need a fully integrated SAP ERP system unless they are likely to become midsized, in which case the midsized figures apply.

Independent studies have shown that deployment and maintenance costs of a SAP solution can vary depending on the organization.
For example, some point out that because of the rigid model imposed by SAP tools, a lot of customization code may have to be developed and maintained to adapt the software to the business process. Others have pointed out that a return on investment can only be obtained when there is both a sufficient number of users and sufficient frequency of use.

SAP Transport Management System

SAP Transport Management System (STMS) is a tool within SAP ERP systems to manage software updates, termed transports, on one or more connected SAP systems. The tool can be accessed from transaction code STMS. This should not be confused with SAP Transportation Management, a stand-alone module for facilitating logistics and supply chain management in the transportation of goods and materials.

SAP Enhancement Packages for SAP ERP 6.0 (SAP EhPs)

The latest version (SAP ERP 6.0) was made available in 2006. Since then, additional functionality for SAP ERP 6.0 has been delivered through SAP Enhancement Packages (EhP). These Enhancement Packages allow SAP ERP customers to manage and deploy new software functionality. Enhancement Packages are optional; customers choose which new capabilities to implement. SAP EhPs do not require a classic system upgrade.

The installation process of Enhancement Packages consists of two steps:
Technical installation of an Enhancement Package
Activation of new functions

The technical installation of business functions does not change the system behavior. The installation of new functionalities is decoupled from their activation, and companies can choose which business functions they want to activate. This means that even after installing a new business function, there is no change to existing functionality before activation. Activating a business function for one process will have no effect on users working with other functionalities.

Reaching its final stage, SAP ECC 6.0 concluded its evolution with Enhancement Package 8 (EhP8). EhP8 served as a foundation for the transition to SAP's new business suite, SAP S/4HANA.

See also
GuiXT
List of ERP software packages
SAP NetWeaver
SAP GUI
SOA
Secure Network Communications
Secure Sockets Layer
T-code
UK & Ireland SAP Users Group

References
External links

ERP software Computer-related introductions in 1972 Cloud applications Cloud platforms Automation software Accounting software Project management software Enterprise software Business software Human resource management software Customer relationship management software ERP
SAP ERP
[ "Technology", "Engineering" ]
3,209
[ "Cloud platforms", "Computing platforms", "Automation software", "Automation" ]
5,600,471
https://en.wikipedia.org/wiki/Minimum%20inhibitory%20concentration
In microbiology, the minimum inhibitory concentration (MIC) is the lowest concentration of a chemical, usually a drug, which prevents visible in vitro growth of bacteria or fungi. MIC testing is performed in both diagnostic and drug discovery laboratories. The MIC is determined by preparing a dilution series of the chemical, adding agar or broth, then inoculating with bacteria or fungi, and incubating at a suitable temperature. The value obtained is largely dependent on the susceptibility of the microorganism and the antimicrobial potency of the chemical, but other variables can affect results too. The MIC is often expressed in micrograms per milliliter (μg/mL) or milligrams per liter (mg/L).

In diagnostic labs, MIC test results are used to grade the susceptibility of microbes. These grades are assigned based on agreed-upon values called breakpoints. Breakpoints are published by standards development organizations such as the U.S. Clinical and Laboratory Standards Institute (CLSI), the British Society for Antimicrobial Chemotherapy (BSAC) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST). The purpose of measuring MICs and grading microbes is to enable physicians to prescribe the most appropriate antimicrobial treatment.

The first step in drug discovery is often the measurement of the MICs of biological extracts, isolated compounds or large chemical libraries against bacteria and fungi of interest. MIC values provide a quantitative measure of an extract's or compound's antimicrobial potency. The lower the MIC, the more potent the antimicrobial. When in vitro toxicity data are available, MICs can also be used to calculate selectivity index values, a measure of off-target to target toxicity.

History

After the discovery and commercialization of antibiotics, the microbiologist, pharmacologist, and physician Alexander Fleming developed the broth dilution technique, using the turbidity of the broth for assessment. This is commonly believed to be the conception point of minimum inhibitory concentrations. Later, in the 1980s, the Clinical and Laboratory Standards Institute consolidated the methods and standards for MIC determination and clinical usage. Because pathogens continue to evolve, and new drugs continue to be developed, the CLSI's MIC protocols are periodically updated to reflect these changes. The protocols and parameters set by the CLSI are considered to be the "gold standard" in the United States and are used by regulatory authorities, such as the FDA, to make evaluations.

Clinical usage

Nowadays, the MIC is used in antimicrobial susceptibility testing. The MIC is reported by providing the susceptibility interpretation next to each antibiotic. The different susceptibility interpretations are: "S" (susceptible or responding to a standard dosing regimen), "I" (intermediate or requiring increased exposure), and "R" (resistant). These interpretations were developed by the CLSI and EUCAST. There have been major discrepancies between the breakpoints from various European countries over the years, and between those from the CLSI and EUCAST.

In clinics, more often than not, the exact pathogen cannot be easily determined from the symptoms of the patient. Then, even if the pathogen is determined, different strains of a pathogen, such as Staphylococcus aureus, have varying levels of resistance to antimicrobials. As such, it is difficult to prescribe correct antimicrobials.
The MIC is determined in such cases by growing the pathogen isolate from the patient on a plate or in broth, which is later used in the assay. Thus, knowledge of the MIC provides a physician with valuable information for making a prescription. Accurate and precise usage of antimicrobials is also important in the context of multidrug-resistant bacteria. Microbes such as bacteria have been gaining resistance to antimicrobials they were previously susceptible to. Usage of incompatible levels of antimicrobials provides the selective pressure that has driven the direction and evolution of resistance of bacterial pathogens. This has been seen at sub-MIC levels of antibiotics. As such, it is increasingly important to determine the MIC in order to make the best choice in prescribing antimicrobials.

Methods

Broth dilution assay

There are three main reagents necessary to run this assay: the medium, an antimicrobial agent, and the microbe being tested. The most commonly used medium is cation-adjusted Mueller-Hinton broth, due to its ability to support the growth of most pathogens and its lack of inhibitors of common antibiotics. Depending on the pathogen and antibiotics being tested, the medium can be changed and/or adjusted. The antimicrobial is brought to the correct concentration by mixing stock antimicrobial with the medium. The adjusted antimicrobial is serially diluted into multiple tubes (or wells) to obtain a gradient. The dilution rate can be adjusted depending on the breakpoint and the practitioner's needs. The microbe, or the inoculating agent, must come from the same colony-forming unit, and must be at the correct concentration. This may be adjusted by incubation time and dilution. For verification, the positive control is plated in a hundred-fold dilution to count colony-forming units. The microbes inoculate the tubes (or plate) and are incubated for 16–20 hours. The MIC is generally determined by turbidity.
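To illustrate how a two-fold dilution gradient maps to a reported MIC, here is a small Python sketch (hypothetical, not from the article; the concentrations and growth readings are invented for illustration):

```python
# Hypothetical sketch: reading an MIC off a two-fold broth dilution series.

def dilution_series(start_ug_per_ml: float, n_wells: int) -> list[float]:
    """Two-fold serial dilution starting at start_ug_per_ml."""
    return [start_ug_per_ml / 2**i for i in range(n_wells)]

def read_mic(concentrations: list[float], growth: list[bool]) -> float | None:
    """MIC = lowest concentration with no visible growth (turbidity)."""
    inhibitory = [c for c, g in zip(concentrations, growth) if not g]
    return min(inhibitory) if inhibitory else None

concs = dilution_series(64.0, 8)  # 64, 32, 16, 8, 4, 2, 1, 0.5 ug/mL
growth = [False, False, False, False, True, True, True, True]  # per well
print(read_mic(concs, growth))   # -> 8.0 ug/mL
```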
See also Kirby–Bauer test Arthur Thomas Palin, pioneer in drinking water chlorination treatment and testing References Microbiology terms
Minimum inhibitory concentration
[ "Biology" ]
1,495
[ "Microbiology terms" ]
5,600,605
https://en.wikipedia.org/wiki/CRON-diet
The CRON-diet (Calorie Restriction with Optimal Nutrition) is a nutrient-rich, reduced-calorie diet developed by Roy Walford, Lisa Walford, and Brian M. Delaney. The CRON-diet involves calorie restriction in the hope that the practice will improve health and retard aging, while still attempting to provide the recommended daily amounts of various nutrients. Other names include CR-diet, Longevity diet, and Anti-Aging Plan. The Walfords and Delaney, among others, founded the CR Society International to promote the CRON-diet. Context There is no experimental evidence that calorie restriction can slow biological aging in humans. The biological mechanisms for the supposed antiaging effects are not determined, as of 2021. Origins The CRON-diet was developed from data Walford compiled during his participation in Biosphere 2 from 1991 to 1993. The subjects ate a diet low in fat and in calories but "nutrient-dense", derived from the food crops raised inside the Biosphere. Debate on effectiveness The writer Christopher Turner in The Telegraph reported that Walford claimed that the diet "will retard your rate of ageing, extend lifespan (up to perhaps 150 to 160 years, depending on when you start and how thoroughly you hold to it), and markedly decrease susceptibility to most major diseases." The same article noted however that the diet "failed to dramatically increase Walford's lifespan; he died in 2004 aged 79." A review of the effects of calorie restriction in humans by Anna Picca and colleagues in 2017 noted that direct evidence was limited to what had been "recorded from the members of the Calorie Restriction Society, who have imposed on themselves a regimen of severe CR with optimal nutrition (CRON), believing to extend in this way their healthy lifespan." The review noted that bone density was reduced but that bone strength was improved and maximal aerobic capacity per unit body mass was maintained or increased, while measures of quality of life including depression and physical function were improved. The review observed that one outcome had been the development of calorie restriction mimetic drugs which would be tested in clinical trials on humans. Reception The journalist Pagan Kennedy wrote an opinion piece for The New York Times, mentioning Walford's book The 120 Year Diet and his hope of living for more than 100 years on the CRON diet, noting that instead he died of Lou Gehrig's disease at age 79: her piece was titled "The Secret to a Longer Life? Don't Ask These Dead Longevity Researchers". The journalist Emily Yoffe tried the CRON-diet for Slate. She wrote that Walford had written a book about the diet called Beyond the 120 Year Diet, in which a "typical dinner" consisted of salad, lentils, brown rice, bulgur, a stalk of broccoli, and a glass of skimmed milk. Yoffe reported that after more than 2 months on the diet, she had not experienced the promised results: her "very poor" sleep had not improved much; her "energy" remained low; her "foggy mind" was still foggy. But she was pleased that she could once again wear loose-fitting pants. References Diets Nutrient-rich, low calorie diets Eating behaviors
CRON-diet
[ "Biology" ]
674
[ "Biological interactions", "Eating behaviors", "Behavior" ]
5,600,755
https://en.wikipedia.org/wiki/Quotition%20and%20partition
In arithmetic, quotition and partition are two ways of viewing fractions and division. In quotitive division one asks "how many parts are there?", while in partitive division one asks "what is the size of each part?" In general, a quotient c = a/b, where a, b, and c are integers or rational numbers, can be conceived of in either of two ways: Quotition: "How many parts of size b must be added to get a sum of a?" Partition: "What is the size of each of b equal parts whose sum is a?" For example, the quotient 6/2 = 3 can be conceived of as representing either of the decompositions: 6 = 2 + 2 + 2 (three parts of size two) or 6 = 3 + 3 (two parts of size three). In the rational number system used in elementary mathematics, the numerical answer is always the same no matter which way you put it, as a consequence of the commutativity of multiplication. Quotition Thought of quotitively, a division problem can be solved by repeatedly subtracting groups of the size of the divisor. For instance, suppose each egg carton fits 12 eggs, and the problem is to find how many cartons are needed to fit 36 eggs in total. Groups of 12 eggs at a time can be separated from the main pile until none are left, giving 3 groups: 36 = 12 + 12 + 12. If the last group is a remainder smaller than the divisor, it can be thought of as forming an additional smaller group. For example, if 45 eggs are to be put into 12-egg cartons, then after the first 3 cartons have been filled there are 9 eggs remaining, which only partially fill the 4th carton. The answer to the question "How many cartons are needed to fit 45 eggs?" is 4 cartons, since 45/12 = 3.75 rounds up to 4. Quotition is the concept of division most used in measurement. For example, measuring the length of a table using a measuring tape involves comparing the table to the markings on the tape. This is conceptually equivalent to dividing the length of the table by a unit of length, the distance between markings. Partition Thought of partitively, a division problem might be solved by sorting the initial quantity into a specific number of groups by adding items to each group in turn. For instance, a deck of 52 playing cards could be divided among 4 players by dealing the cards into 4 piles one at a time, eventually yielding piles of 52/4 = 13 cards each. If there is a remainder in solving a partition problem, the parts will end up with unequal sizes. For example, if 52 cards are dealt out to 5 players, then 3 of the players will receive 10 cards each, and 2 of the players will receive 11 cards each, since 52 = 3 × 10 + 2 × 11. See also List of partition topics References External links A University of Melbourne web page shows what to do when the fraction is a ratio of integers or rationals. Operations on numbers Division (mathematics)
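The two conceptions described above correspond to two different procedures, which can be made concrete in a few lines of code. A minimal Python sketch (the function names are illustrative, not standard terminology from the article's sources):

```python
# Quotitive division: repeatedly subtract groups of size b from a.
def quotitive(a: int, b: int) -> int:
    """How many parts of size b fit in a? (Remainder is ignored.)"""
    count = 0
    while a >= b:
        a -= b
        count += 1
    return count

# Partitive division: deal a items into b piles, one at a time.
def partitive(a: int, b: int) -> list[int]:
    """Size of each of b piles after dealing out a items."""
    piles = [0] * b
    for i in range(a):
        piles[i % b] += 1
    return piles

print(quotitive(36, 12))  # 3 -- cartons needed for 36 eggs
print(partitive(52, 4))   # [13, 13, 13, 13] -- cards per player
print(partitive(52, 5))   # [11, 11, 10, 10, 10] -- unequal piles
```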
Quotition and partition
[ "Mathematics" ]
576
[ "Arithmetic", "Operations on numbers" ]
7,293,292
https://en.wikipedia.org/wiki/Extrusion%20detection
Extrusion detection or outbound intrusion detection is a branch of intrusion detection aimed at developing mechanisms to identify successful and unsuccessful attempts to use the resources of a computer system to compromise other systems. Extrusion detection techniques focus primarily on the analysis of system activity and outbound traffic in order to detect malicious users, malware or network traffic that may pose a threat to the security of neighboring systems. While intrusion detection is mostly concerned with the identification of incoming attacks (intrusion attempts), extrusion detection systems try to prevent attacks from being launched in the first place. They implement monitoring controls at leaf nodes of the network—rather than concentrating them at choke points, e.g., routers—in order to distribute the inspection workload and to take advantage of the visibility a system has of its own state. The ultimate goal of extrusion detection is to identify attack attempts launched from an already compromised system in order to prevent them from reaching their target, thereby containing the impact of the threat. External links "Stopping Spam by Extrusion Detection" "Outbound Intrusion Detection" Data security
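As a rough illustration of the idea, outbound monitoring at a leaf node can be as simple as checking each outgoing connection against a local policy. This is a minimal sketch, assuming a hypothetical log of (destination, port) pairs; the allow-list and threshold are made-up policy values, not part of any real tool:

```python
# Minimal sketch of a leaf-node extrusion check: flag outbound traffic
# that falls outside a small allow-list, plus crude scan-like fan-out.
# ALLOWED_PORTS and MAX_DESTINATIONS are illustrative policy assumptions.

ALLOWED_PORTS = {53, 80, 443}   # DNS and web traffic only
MAX_DESTINATIONS = 100          # many distinct hosts suggests scanning

def check_outbound(log: list[tuple[str, int]]) -> list[str]:
    """log holds (destination_host, destination_port) pairs."""
    alerts, destinations = [], set()
    for host, port in log:
        destinations.add(host)
        if port not in ALLOWED_PORTS:
            alerts.append(f"unexpected outbound port {port} to {host}")
    if len(destinations) > MAX_DESTINATIONS:
        alerts.append(f"{len(destinations)} distinct destinations: possible scan")
    return alerts

print(check_outbound([("203.0.113.7", 443), ("203.0.113.8", 6667)]))
# -> ['unexpected outbound port 6667 to 203.0.113.8']
```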
Extrusion detection
[ "Engineering" ]
223
[ "Cybersecurity engineering", "Data security" ]
7,293,328
https://en.wikipedia.org/wiki/MSConfig
MSConfig (officially called System Configuration in Windows Vista, Windows 7, Windows 8, Windows 10, or Windows 11 and Microsoft System Configuration Utility in previous operating systems) is a system utility to troubleshoot the Microsoft Windows startup process. It can disable or re-enable software, device drivers and Windows services that run at startup, or change boot parameters. It is bundled with all versions of Microsoft Windows operating systems since Windows 98 except Windows 2000. Windows 95 and Windows 2000 users can download the utility as well, although it was not designed for them. Uses MSConfig is often used for speeding up the Microsoft Windows startup process of the machine. According to Microsoft, MSConfig was not meant to be used as a startup management program. Features MSConfig is a troubleshooting tool which is used to temporarily disable or re-enable software, device drivers or Windows services that run during startup process to help the user determine the cause of a problem with Windows. Some of its functionality varies by Windows versions: In Windows 98 and Windows Me, it can configure advanced troubleshooting settings pertaining to these operating systems. It can also launch common system tools. In Windows 98, it can back up and restore startup files. In Windows Me, it has also been updated with three new tabs called "Static VxDs", "Environment" and "International". The Static VxDs tab allows users to enable or disable static virtual device drivers to be loaded at startup, the Environment tab allows users to enable or disable environment variables, and the International tab allows users to set international language keyboard layout settings that were formerly set via the real-mode MS-DOS configuration files. A "Cleanup" button on the "Startup" tab allows cleaning up invalid or deleted startup entries. In Windows Me and Windows XP versions, it can restore an individual file from the original Windows installation set. On Windows NT-based operating systems prior to Windows Vista, it can set various BOOT.INI switches. In Windows XP and Windows Vista, it can hide all operating system services for troubleshooting. In Windows Vista and later, the tool allows configuring various switches for Windows Boot Manager and Boot Configuration Data. It also gained additional support for launching a variety of tools, such as system information, other configuration areas, such as Internet options, and the ability to enable/disable UAC. An update is available for Windows XP and Windows Server 2003 that adds the Tools tab. References Further reading Windows components Windows administration Configuration management Windows 98
MSConfig
[ "Engineering" ]
520
[ "Systems engineering", "Configuration management" ]
7,293,427
https://en.wikipedia.org/wiki/Solar%20gain
Solar gain (also known as solar heat gain or passive solar gain) is the increase in thermal energy of a space, object or structure as it absorbs incident solar radiation. The amount of solar gain a space experiences is a function of the total incident solar irradiance and of the ability of any intervening material to transmit or resist the radiation. Objects struck by sunlight absorb its visible and short-wave infrared components, increase in temperature, and then re-radiate that heat at longer infrared wavelengths. Though transparent building materials such as glass allow visible light to pass through almost unimpeded, once that light is converted to long-wave infrared radiation by materials indoors, it is unable to escape back through the window since glass is opaque to those longer wavelengths. The trapped heat thus causes solar gain via a phenomenon known as the greenhouse effect. In buildings, excessive solar gain can lead to overheating within a space, but it can also be used as a passive heating strategy when heat is desired. Window solar gain properties Solar gain is most frequently addressed in the design and selection of windows and doors. Because of this, the most common metrics for quantifying solar gain are used as a standard way of reporting the thermal properties of window assemblies. In the United States, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) and the National Fenestration Rating Council (NFRC) maintain standards for the calculation and measurement of these values. Shading coefficient The shading coefficient (SC) is a measure of the radiative thermal performance of a glass unit (panel or window) in a building. It is defined as the ratio of solar radiation at a given wavelength and angle of incidence passing through a glass unit to the radiation that would pass through a reference window of frameless Clear Float Glass. Since the quantities compared are functions of both wavelength and angle of incidence, the shading coefficient for a window assembly is typically reported for a single wavelength typical of solar radiation entering normal to the plane of glass. This quantity includes both energy that is transmitted directly through the glass as well as energy that is absorbed by the glass and frame and re-radiated into the space, and is given by the following equation: G(λ, θ) = T(λ, θ) + N · A(λ, θ). Here, λ is the wavelength of radiation and θ is the angle of incidence. "T" is the transmissivity of the glass, "A" is its absorptivity, and "N" is the fraction of absorbed energy that is re-emitted into the space. The overall shading coefficient is thus given by the ratio: SC = G(glazing in question) / G(reference glass). The shading coefficient depends on the radiation properties of the window assembly. These properties are the transmissivity "T", absorptivity "A", emissivity (which is equal to the absorptivity for any given wavelength), and reflectivity, all of which are dimensionless quantities that together sum to 1. Factors such as color, tint, and reflective coatings affect these properties, which is what prompted the development of the shading coefficient as a correction factor to account for this. ASHRAE's table of solar heat gain factors provides the expected solar heat gain for ⅛” clear float glass at different latitudes, orientations, and times, which can be multiplied by the shading coefficient to correct for differences in radiation properties. The value of the shading coefficient ranges from 0 to 1.
The lower the rating, the less solar heat is transmitted through the glass, and the greater its shading ability. In addition to glass properties, shading devices integrated into the window assembly are also included in the SC calculation. Such devices can reduce the shading coefficient by blocking portions of the glazing with opaque or translucent material, thus reducing the overall transmissivity. Window design methods have moved away from the Shading Coefficient and towards the Solar Heat Gain Coefficient (SHGC), which is defined as the fraction of incident solar radiation that actually enters a building through the entire window assembly as heat gain (not just the glass portion). The standard method for calculating the SHGC also uses a more realistic wavelength-by-wavelength method, rather than just providing a coefficient for a single wavelength like the shading coefficient does. Though the shading coefficient is still mentioned in manufacturer product literature and some industry computer software, it is no longer included as an option in industry-specific texts or model building codes. Aside from its inherent inaccuracies, another shortcoming of the SC is its counter-intuitive name, which suggests that high values equal high shading when in reality the opposite is true. Industry technical experts recognized the limitations of SC and pushed towards SHGC in the United States (and the analogous g-value in Europe) before the early 1990s. A conversion from SC to SHGC is not necessarily straightforward, as they each take into account different heat transfer mechanisms and paths (window assembly vs. glass-only). To perform an approximate conversion from SC to SHGC, multiply the SC value by 0.87. g-value The g-value (sometimes also called a Solar Factor or Total Solar Energy Transmittance) is the coefficient commonly used in Europe to measure the solar energy transmittance of windows. Despite having minor differences in modeling standards compared to the SHGC, the two values are effectively the same. A g-value of 1.0 represents full transmittance of all solar radiation while 0.0 represents a window with no solar energy transmittance. In practice though, most g-values will range between 0.2 and 0.7, with solar control glazing having a g-value of less than 0.5. Solar heat gain coefficient (SHGC) SHGC is the successor to the shading coefficient used in the United States and it is the ratio of transmitted solar radiation to incident solar radiation of an entire window assembly. It ranges from 0 to 1 and refers to the solar energy transmittance of a window or door as a whole, factoring in the glass, frame material, sash (if present), divided lite bars (if present) and screens (if present). The transmittance of each component is calculated in a similar manner to the shading coefficient. However, in contrast to the shading coefficient, the total solar gain is calculated on a wavelength-by-wavelength basis, where the directly transmitted portion of the solar heat gain coefficient is given by: T_sol = ∫ T(λ) E(λ) dλ / ∫ E(λ) dλ. Here T(λ) is the spectral transmittance at a given wavelength λ in nanometers and E(λ) is the incident solar spectral irradiance. When integrated over the wavelengths of solar short-wave radiation, this yields the total fraction of transmitted solar energy across all solar wavelengths. The product N · A is thus the portion of absorbed and re-emitted energy, calculated across all assembly components beyond just the glass. It is important to note that the standard SHGC is calculated only for an angle of incidence normal to the window.
However, this tends to provide a good estimate over a wide range of angles, up to 30 degrees from normal in most cases. SHGC can either be estimated through simulation models or measured by recording the total heat flow through a window with a calorimeter chamber. In both cases, NFRC standards outline the test procedure and the calculation of the SHGC. For dynamic fenestration or operable shading, each possible state can be described by a different SHGC. Though the SHGC is more realistic than the SC, both are only rough approximations when they include complex elements such as shading devices, which offer more precise control than glass treatments over when fenestration is shaded from solar gain. Solar gain in opaque building components Apart from windows, walls and roofs also serve as pathways for solar gain. In these components heat transfer is entirely due to absorptance, conduction, and re-radiation, since all transmittance is blocked in opaque materials. The primary metric in opaque components is the Solar Reflectance Index (SRI), which accounts for both solar reflectance (albedo) and emittance of a surface. Materials with a high SRI will reflect and emit a majority of heat energy, keeping them cooler than other exterior finishes. This is quite significant in the design of roofs, since dark roofing materials can often be as much as 50 °C hotter than the surrounding air temperature, leading to large thermal stresses as well as heat transfer to interior space. Solar gain and building design Solar gain can have either positive or negative effects depending on the climate. In the context of passive solar building design, the aim of the designer is normally to maximize solar gain within the building in the winter (to reduce space heating demand), and to control it in summer (to minimize cooling requirements). Thermal mass may be used to even out the fluctuations during the day, and to some extent between days. Control of solar gain Uncontrolled solar gain is undesirable in hot climates due to its potential for overheating a space. To minimize this and reduce cooling loads, several technologies exist for solar gain reduction. SHGC is influenced by the color or tint of glass and its degree of reflectivity. Reflectivity can be modified through the application of reflective metal oxides to the surface of the glass. Low-emissivity coating is another more recently developed option that offers greater specificity in the wavelengths reflected and re-emitted. This allows glass to block mainly short-wave infrared radiation without greatly reducing visible transmittance. In climate-responsive design for cold and mixed climates, windows are typically sized and positioned in order to provide solar heat gains during the heating season. To that end, glazing with a relatively high solar heat gain coefficient is often used so as not to block solar heat gains, especially on the sunny side of the house. SHGC also decreases with the number of glass panes used in a window. For example, in triple glazed windows, SHGC tends to be in the range of 0.33 - 0.47. For double glazed windows SHGC is more often in the range of 0.42 - 0.55. Different types of glass can be used to increase or to decrease solar heat gain through fenestration, but solar gain can also be more finely tuned by the proper orientation of windows and by the addition of shading devices such as overhangs, louvers, fins, porches, and other architectural shading elements.
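The wavelength-by-wavelength definition of the directly transmitted solar fraction above lends itself to simple numerical integration. A minimal sketch; the six spectral sample points are made-up placeholders, not a real glazing spectrum or solar irradiance curve:

```python
# Numerical sketch of the directly transmitted solar fraction:
#   T_sol = integral(T(lam) E(lam) dlam) / integral(E(lam) dlam)
# Sample values below are illustrative placeholders only.

def trapz(ys, xs):
    """Trapezoidal integration of samples ys over points xs."""
    return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

def transmitted_fraction(lam, T, E):
    weighted = [t * e for t, e in zip(T, E)]
    return trapz(weighted, lam) / trapz(E, lam)

lam = [300, 500, 700, 1100, 1700, 2500]       # wavelength, nm
T   = [0.10, 0.85, 0.80, 0.60, 0.30, 0.10]    # spectral transmittance
E   = [0.40, 1.90, 1.40, 0.60, 0.25, 0.05]    # solar spectral irradiance
print(round(transmitted_fraction(lam, T, E), 2))  # ~0.67 with these samples

# The full SHGC adds the absorbed-and-re-emitted portion (N * A) for each
# assembly component; an approximate conversion is SHGC ~ 0.87 * SC.
```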
Passive solar heating Passive solar heating is a design strategy that attempts to maximize the amount of solar gain in a building when additional heating is desired. It differs from active solar heating, which uses exterior water tanks with pumps to absorb solar energy; passive solar systems do not require energy for pumping and store heat directly in the structures and finishes of the occupied space. In direct solar gain systems, the composition and coating of the building glazing can also be manipulated to increase the greenhouse effect by optimizing their radiation properties, while their size, position, and shading can be used to optimize solar gain. Solar gain can also be transferred to the building by indirect or isolated solar gain systems. Passive solar designs typically employ large equator-facing windows with a high SHGC and overhangs that block sunlight in summer months and permit it to enter the window in the winter. When placed in the path of admitted sunlight, high thermal mass features such as concrete slabs or Trombe walls store large amounts of solar radiation during the day and release it slowly into the space throughout the night. When designed properly, this can modulate temperature fluctuations. Some of the current research into this subject area is addressing the tradeoff between opaque thermal mass for storage and transparent glazing for collection through the use of transparent phase change materials that both admit light and store energy without the need for excessive weight. See also Double-skin facade Heating degree day Insulated glazing Low-emissivity coatings References Atmospheric radiation Meteorological phenomena Low-energy building Glass engineering and science Glass architecture Glass physics Shading
Solar gain
[ "Physics", "Materials_science", "Engineering" ]
2,406
[ "Glass engineering and science", "Physical phenomena", "Earth phenomena", "Materials science", "Glass architecture", "Meteorological phenomena", "Glass physics", "Condensed matter physics" ]
7,294,919
https://en.wikipedia.org/wiki/Induction%20heater
An induction heater is a key piece of equipment used in all forms of induction heating. Typically an induction heater operates at either medium frequency (MF) or radio frequency (RF) ranges. Four main component systems form the basis of a modern induction heater: the control system, control panel, or ON / OFF switch (in some cases this system can be absent); the power unit (power inverter); the work head (transformer); and the heating coil (inductor). How it works Induction heating is a non-contact method of heating a conductive body by utilising a strong magnetic field. Supply (mains) frequency 50 Hz or 60 Hz induction heaters incorporate a coil directly fed from the electricity supply, typically for lower power industrial applications where lower surface temperatures are required. Some specialist induction heaters operate at 400 Hz, the aerospace power frequency. Induction heating should not be confused with induction cooking, as the two heating systems are physically quite different from each other. Notably, induction heating systems work by applying an alternating magnetic field to a ferrous material to induce an alternating current in the material, so exciting the atoms in the material and heating it up. Main equipment components An induction heater typically consists of three elements. Power unit Often referred to as the inverter or generator, this part of the system is used to take the mains frequency and increase it to anywhere between 10 Hz and 400 kHz. Typical output power of a unit system is from 2 kW to 500 kW. Work head This contains a combination of capacitors and transformers and is used to mate the power unit to the work coil. Work coil Also known as the inductor, the coil is used to transfer the energy from the power unit and work head to the work piece. Inductors range in complexity from a simple wound solenoid consisting of a number of turns of copper tube wound around a mandrel, to a precision item machined from solid copper, brazed and soldered together. As the inductor is the area where the heating takes place, coil design is one of the most important elements of the system and is a science in itself. Definitions Radio frequency (RF) induction generators work in the frequency range from 100 kHz up to 10 MHz. Most induction heating devices (with induction frequency control) have a frequency range of 100 kHz to 200 kHz. The output range typically incorporates 2.5 kW to 40 kW. Induction heaters in this range are used for smaller components and applications such as induction hardening an engine valve. MF induction generators work from 1 kHz to 10 kHz. The output range typically incorporates 50 kW to 500 kW. Induction heaters within these ranges are used on medium to larger components and applications such as the induction forging of a shaft. Mains (or supply) frequency induction coils are driven directly from the standard AC supply. Most mains-frequency induction coils are designed for single-phase operation, and are low-current devices intended for localised heating, or low-temperature surface area heating, such as in a drum heater. History The basic principle involved in induction heating was discovered by Michael Faraday as early as 1831. Faraday's work involved the use of a switched DC supply provided by a battery and two windings of copper wire wrapped around an iron core. It was noted that when the switch was closed a momentary current flowed in the secondary winding, which could be measured by means of a galvanometer.
If the circuit remained energized then the current ceased to flow. On opening the switch a current again flowed in the secondary winding, but in the opposite direction. Faraday concluded that since no physical link existed between the two windings, the current in the secondary coil must be caused by a voltage that was induced from the first coil, and that the current produced was directly proportional to the rate of change of the magnetic flux. Initially the principles were put to use in the design of transformers, motors and generators, where undesirable heating effects were controlled by the use of a laminated core. Early in the 20th century engineers started to look for ways to harness the heat-generating properties of induction for the purpose of melting steel. This early work used motor generators to create the medium frequency (MF) current, but the lack of suitable alternators and capacitors of the correct size held back early attempts. However, by 1927 the first MF induction melting system had been installed by EFCO in Sheffield, England. At around the same time engineers at Midvale Steel and The Ohio Crankshaft Company in America were attempting to use the surface-heating effect of the MF current to produce localized surface case hardening in crankshafts. Much of this work took place at the frequencies of 1920 and 3000 Hz, as these were the easiest frequencies to produce with the equipment available. As with many technology-based fields, it was the advent of World War II that led to huge developments in the utilization of induction heating in the production of vehicle parts and munitions. Over time, the technology advanced, and units in the 3 to 10 kHz frequency range with power outputs up to 600 kW became commonplace in induction forging and large induction hardening applications. The motor generator would remain the mainstay of MF power generation until the advent of high voltage semiconductors in the late 1960s and early 1970s. Early in the evolutionary process it became obvious to engineers that the ability to produce a higher radio frequency range of equipment would result in greater flexibility and open up a whole range of alternative applications. Methods were sought to produce these higher RF power supplies to operate in the 200 to 400 kHz range. Development in this particular frequency range has always mirrored that of the radio transmitter and television broadcasting industry, and indeed has often used component parts developed for this purpose. Early units utilised spark gap technology, but due to limitations the approach was rapidly superseded by the use of multi-electrode thermionic triode (valve) based oscillators. Indeed, many of the pioneers in the industry were also very involved in the radio and telecommunications industry, and companies such as Phillips, English Electric and Redifon were all involved in manufacturing induction heating equipment in the 1950s and 1960s. The use of this technology survived until the early 1990s, at which point the technology was all but replaced by power MOSFET and IGBT solid state equipment. However, there are still many valve oscillators in existence, and at extreme frequencies of 5 MHz and above they are often the only viable approach and are still produced. Mains frequency induction heaters are still widely used throughout manufacturing industry due to their relatively low cost and thermal efficiency compared to radiant heating, where piece parts or steel containers need to be heated as part of a batch process line.
Valve oscillator based power supply Due to its flexibility and potential frequency range, the valve oscillator based induction heater was until recent years widely used throughout industry. Readily available in powers from 1 kW to 1 MW and in a frequency range from 100 kHz to many MHz, this type of unit found widespread use in thousands of applications including soldering and brazing, induction hardening, tube welding and induction shrink fitting. The unit consists of three basic elements: High voltage DC power supply The DC (direct current) power supply consists of a standard air or water cooled step-up transformer and a high voltage rectifier unit capable of generating voltages typically between 5 and 10 kV to power the oscillator. The unit needs to be rated at the correct kilovolt-ampere (kVA) to supply the necessary current to the oscillator. Early rectifier systems featured valve rectifiers such as the GXU4 (a high-power, high-voltage, half-wave rectifier), but these were ultimately superseded by high voltage solid state rectifiers. Self exciting class 'C' oscillator The oscillator circuit is responsible for creating the elevated frequency electric current, which when applied to the work coil creates the magnetic field which heats the part. The basic elements of the circuit are an inductance (tank coil), a capacitance (tank capacitor) and an oscillator valve. Basic electrical principles dictate that if a voltage is applied to a circuit containing a capacitor and an inductor, the circuit will oscillate in much the same way as a swing which has been pushed. Continuing the swing analogy, if we do not push again at the right time the swing will gradually stop; the same is true of the oscillator. The purpose of the valve is to act as a switch which allows energy to pass into the oscillator at the correct time to maintain the oscillations. In order to time the switching, a small amount of energy is fed back to the grid of the triode, effectively blocking or firing the device so that it conducts at the correct time. This so-called grid bias can be derived either capacitively, conductively or inductively, depending on whether the oscillator is a Colpitts, Hartley, Armstrong tickler or Meissner design. Means of power control Power control for the system can be achieved by a variety of methods. Many latter-day units feature thyristor power control, which works by means of a full wave AC (alternating current) drive varying the primary voltage to the input transformer. More traditional methods include three phase variacs (autotransformers) or motorised Brentford type voltage regulators to control the input voltage. Another very popular method was to use a two part tank coil with a primary and secondary winding separated by an air gap. Power control was effected by varying the magnetic coupling of the two coils by physically moving them relative to each other.
This had the effect of limiting the diameter of the machine and therefore its power and the number of poles which can be physically accommodated, which in turn limits the maximum operating frequency. To overcome these limitations the induction heating industry turned to the inductor-generator. This type of machine features a toothed rotor constructed from a stack of punched iron laminations. The excitation and AC windings are both mounted on the stator; the rotor is therefore a compact solid construction which can be rotated at higher peripheral speeds than the standard AC generator above, thus allowing it to be greater in diameter for a given RPM. This larger diameter allows a greater number of poles to be accommodated and, when combined with complex slotting arrangements such as the Lorenz gauge condition or Guy slotting, allows the generation of frequencies from 1 to 10 kHz. As with all rotating electrical machines, high rotation speeds and small clearances are utilised to maximise flux variations. This necessitates that close attention is paid to the quality of the bearings utilised and the stiffness and accuracy of the rotor. Drive for the alternator is normally provided by a standard induction motor for convenience and simplicity. Both vertical and horizontal configurations are utilised and in most cases the motor rotor and generator rotor are mounted on a common shaft with no coupling. The whole assembly is then mounted in a frame containing the motor stator and generator stator. The whole construction is mounted in a cubicle which features a heat exchanger and water cooling systems as required. The motor-generator became the mainstay of medium frequency power generation until the advent of solid state technology in the early 1970s. In the early 1970s the advent of solid state switching technology saw a shift from the traditional methods of induction heating power generation. Initially this was limited to the use of thyristors for generating the 'MF' range of frequencies using discrete electronic control systems. State-of-the-art units now employ SCR (silicon-controlled rectifier), IGBT or MOSFET technologies for generating the 'MF' and 'RF' current. The modern control system is typically a digital microprocessor-based system utilising PIC and PLC (programmable logic controller) technology and surface mount manufacturing techniques for production of the printed circuit boards. Solid state now dominates the market, and units from 1 kW to many megawatts, at frequencies from 1 kHz to 3 MHz, including dual-frequency units, are now available. A whole range of techniques are employed in the generation of MF and RF power using semiconductors; the actual technique employed often depends on a complex range of factors. The typical generator will employ either a current-fed or a voltage-fed topology. The actual approach employed will be a function of the required power, frequency, individual application, the initial cost and subsequent running costs. Irrespective of the approach employed, however, all units tend to feature four distinct elements: AC to DC rectifier This takes the mains supply voltage at the supply frequency of 50 or 60 Hz and converts it to DC. This can supply a variable DC voltage, a fixed DC voltage or a variable DC current. In the case of variable systems, they are used to provide overall power control for the system. Fixed-voltage rectifiers need to be used in conjunction with an alternative means of power control.
This can be done by utilising a switch mode regulator or by using a variety of control methods within the inverter section. DC to AC inverter The inverter converts the DC supply to a single-phase AC output at the relevant frequency. This stage features the SCRs, IGBTs or MOSFETs and in most cases is configured as an H-bridge. The H-bridge has four legs, each with a switch, and the output circuit is connected across the centre of the devices. When the relevant two switches are closed, current flows through the load in one direction; these switches then open and the opposing two switches close, allowing current to flow in the opposite direction. By precisely timing the opening and closing of the switches, it is possible to sustain oscillations in the load circuit. Output circuit The output circuit has the job of matching the output of the inverter to that required by the coil. In its simplest form this can be a capacitor, or in some cases it will feature a combination of capacitors and transformers. Control system The control section monitors all the parameters in the load circuit and the inverter, and supplies switching pulses at the appropriate time to supply energy to the output circuit. Early systems featured discrete electronics with variable potentiometers to adjust switching times, current limits, voltage limits and frequency trips. However, with the advent of microcontroller technology, the majority of advanced systems now feature digital control. The voltage-fed inverter The voltage-fed inverter features a filter capacitor on the input to the inverter and a series resonant output circuit. The voltage-fed system is extremely popular and can be used with SCRs up to frequencies of 10 kHz, IGBTs up to 100 kHz and MOSFETs up to 3 MHz. A voltage-fed inverter with a series connection to a parallel load is also known as a third order system. Basically this is similar to the voltage-fed system above, but here the series-connected internal capacitor and inductor are connected to a parallel output tank circuit. The principal advantage of this type of system is the robustness of the inverter, due to the internal circuit effectively isolating the output circuit, making the switching components less susceptible to damage from coil flash-overs or mismatching. The current-fed inverter The current-fed inverter differs from the voltage-fed system in that it utilizes a variable DC input followed by a large inductor at the input to the inverter bridge. The power circuit features a parallel resonant circuit and can have operating frequencies typically from 1 kHz to 1 MHz. As with the voltage-fed system, SCRs are typically used up to 10 kHz, with IGBTs and MOSFETs being used at the higher frequencies. Suitable materials Suitable materials are those with high permeability (100–500) which are heated below the Curie temperature of that material. See also Induction forging Induction shrink fitting Induction hardening Induction heating Drum heater References Notes Bibliography External links Sheffield University undertakes fundamental and applied research on enabling induction heater technologies - University of Sheffield Induction soldering using induction heater technology example from TWI Animation showing heating rates derived from FEA of mains frequency Induction Drum Heater - LMK Thermosafe Ltd Comprehensive tutorial on the theory and operation of an induction heater, including schematics for a low and high power device capable of levitating metals. Heaters Industrial equipment
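Both the valve oscillator's tank circuit and the solid-state inverter's output circuit resonate at a frequency set by the tank inductance and capacitance, f = 1/(2π√(LC)). A minimal sketch with illustrative component values, not taken from any specific heater:

```python
# Resonant frequency of an induction-heating tank circuit:
#   f = 1 / (2 * pi * sqrt(L * C))
# Component values below are illustrative placeholders.

import math

def tank_resonant_frequency(L_henry: float, C_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# e.g. a 2 uH work coil with a 1.25 uF tank capacitor:
f = tank_resonant_frequency(2e-6, 1.25e-6)
print(f"{f / 1e3:.1f} kHz")  # ~100.7 kHz, in the RF induction range
```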
Induction heater
[ "Engineering" ]
3,478
[ "nan" ]
7,295,282
https://en.wikipedia.org/wiki/Coprinopsis%20atramentaria
Coprinopsis atramentaria, commonly known as the common ink cap, tippler's bane, or inky cap, is a species of fungus. Previously known as Coprinus atramentarius, it is the second best-known ink cap and former member of the genus Coprinus after C. comatus. The grey-brown cap is initially bell-shaped before opening, after which it flattens and disintegrates. The flesh is thin and the taste mild. It is widespread and common throughout the Northern Hemisphere. Clumps of mushrooms arise after rain from spring to autumn, commonly in urban and disturbed habitats such as vacant lots and lawns, as well as grassy areas. It can be eaten, but due to the presence of coprine within the mushroom, it is poisonous when consumed with alcohol, as it heightens the body's sensitivity to ethanol in a similar manner to the anti-alcoholism drug disulfiram. Taxonomy The common ink cap was first described by French naturalist Pierre Bulliard in 1786 as Agaricus atramentarius before being placed in the large genus Coprinus in 1838 by Elias Magnus Fries. The specific epithet is derived from the Latin word atramentum, "ink". The genus was formerly considered to be a large one with well over 100 species. However, molecular analysis of DNA sequences showed that most species belonged in the family Psathyrellaceae, distinct from the type species that belonged to the Agaricaceae. It was given its current binomial name in 2001 as a result, as this and other species were moved to the new genus Coprinopsis. The term "tippler's bane" is derived from its ability to create acute sensitivity to alcohol, similar to disulfiram (Antabuse). Other common names include common ink cap and inky cap. The black liquid that this mushroom releases after being picked was once used as ink. Description The greyish or brownish-grey cap is initially bell-shaped, is furrowed, and later splits. The colour is more brownish in the centre of the cap, which later flattens before melting. The very crowded gills are free; they are whitish at first but rapidly turn black and easily deliquesce. The short stipe is 1–2 cm in diameter, grey in colour, and lacks a ring. In young groups, the stems may be obscured by the caps. The spore print is black, and the almond-shaped spores measure 8–11 by 5–6 μm. The flesh is thin and pale grey in colour. Distribution and habitat Coprinopsis atramentaria occurs across the Northern Hemisphere, including Europe, North America, and Asia, but has also been found in Australia, where it has been recorded from such urban locations as the Royal Botanic Gardens in Sydney and around Lake Torrens, and also in South Africa. Recent finds of ink caps in Dunedin on New Zealand's South Island have been identified as C. atramentaria and Coprinellus micaceus, a related species which also releases, in the process known as deliquescence, a black ink once used for printing (though the exact species is still to be confirmed). Like many ink caps, it grows in tufts. It is commonly associated with buried wood and is found in grassland, meadows, disturbed ground, and open terrain from late spring to autumn. Fruiting bodies have been known to push their way up through asphalt and even tennis courts. It is also common in urban areas and appears in vacant lots, and tufts of fungi can be quite large and fruit several times a year. If dug up, the mycelium can often be found originating on buried dead wood.
Toxicity and uses Consuming Coprinopsis atramentaria within a few hours of alcohol results in a "disulfiram syndrome". This interaction has only been known since the early part of the twentieth century. Symptoms include facial reddening, nausea, vomiting, malaise, agitation, palpitations, and tingling in limbs, and arise five to ten minutes after consumption of alcohol. If no more alcohol is consumed, they will generally subside over two or three hours. Symptom severity is proportional to the amount of alcohol consumed, becoming evident when blood alcohol concentration reaches 5 mg/dl, and prominent at concentrations of 50–100 mg/dl. Disulfiram has, however, been known to cause myocardial infarction (heart attack). The symptoms can occur if even a small amount of alcohol is consumed up to three days after eating the mushrooms, although they are milder as more time passes. Rarely, a cardiac arrhythmia, such as atrial fibrillation on top of supraventricular tachycardia, may develop. Because of these effects, in some cases, the mushroom has been used to cure alcoholism. The fungus contains a cyclopropylglutamine compound called coprine. Its active metabolite, 1-aminocyclopropanol, blocks the action of an enzyme, acetaldehyde dehydrogenase, which breaks down acetaldehyde in the body. Acetaldehyde is an intermediate metabolite of ethanol and is responsible for most symptoms of a hangover; its effect on autonomic β receptors is responsible for the vasomotor symptoms. Treatment involves reassuring the patient that the often frightening symptoms will pass, rehydration (fluid replacement) for fluid loss from vomiting, and monitoring for cardiac arrhythmias. Large and prolonged doses of coprine were found to have gonadotoxic effects on rats and dogs in testing. See also List of Coprinopsis species Notes References Further reading Acetaldehyde dehydrogenase inhibitors Edible fungi Fungi described in 1786 Fungi of Europe Fungi of North America atramentaria Taxa named by Jean Baptiste François Pierre Bulliard Fungus species
Coprinopsis atramentaria
[ "Biology" ]
1,226
[ "Fungi", "Fungus species" ]
7,295,735
https://en.wikipedia.org/wiki/TiungSAT-1
TiungSAT-1 is the first Malaysian microsatellite. The satellite was developed through a technology transfer and training programme between Astronautic Technology Sdn Bhd (ATSB) of Malaysia and Surrey Satellite Technology Ltd. of the United Kingdom. TiungSAT-1 was launched aboard a Dnepr rocket from the Baikonur Cosmodrome, Kazakhstan, on 26 September 2000. In May 2002, the amateur radio payload on the satellite was designated Malaysian OSCAR-46, or MO-46. References External links NSSDC ID: 2000-057D, NSSDC Master Catalog Tiungsat 1 (MySat 1, Oscar 46, MO 46), skyrocket.de TIUNGSAT 1 Satellite details 2000-057D NORAD 26548, N2YO.com Satellites orbiting Earth Spacecraft launched in 2000 Amateur radio satellites Satellites of Malaysia
TiungSAT-1
[ "Astronomy" ]
173
[ "Astronomy stubs", "Spacecraft stubs" ]
7,297,179
https://en.wikipedia.org/wiki/Statistical%20potential
In protein structure prediction, statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank (PDB). The original method to obtain such potentials is the quasi-chemical approximation, due to Miyazawa and Jernigan. It was later followed by the potential of mean force (statistical PMF), developed by Sippl. Although the obtained scores are often considered as approximations of the free energy—thus referred to as pseudo-energies—this physical interpretation is incorrect. Nonetheless, they are applied with success in many cases, because they frequently correlate with actual Gibbs free energy differences. Overview Possible features to which a pseudo-energy can be assigned include: interatomic distances, torsion angles, solvent exposure, or hydrogen bond geometry. The classic application is, however, based on pairwise amino acid contacts or distances, thus producing statistical interatomic potentials. For pairwise amino acid contacts, a statistical potential is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of standard amino acids. The energy of a particular structural model is then the combined energy of all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined using statistics on amino acid contacts in a database of known protein structures (obtained from the PDB). History Initial development Many textbooks present the statistical PMFs as proposed by Sippl as a simple consequence of the Boltzmann distribution, as applied to pairwise distances between amino acids. This is incorrect, but a useful start to introduce the construction of the potential in practice. The Boltzmann distribution applied to a specific pair of amino acids is given by: P(r) = (1/Z) exp(−F(r)/kT), where r is the distance, k is the Boltzmann constant, T is the temperature and Z is the partition function, with Z = ∫ exp(−F(r)/kT) dr. The quantity F(r) is the free energy assigned to the pairwise system. Simple rearrangement results in the inverse Boltzmann formula, which expresses the free energy F(r) as a function of P(r): F(r) = −kT ln P(r) − kT ln Z. To construct a PMF, one then introduces a so-called reference state with a corresponding distribution Q_R and partition function Z_R, and calculates the following free energy difference: ΔF(r) = −kT ln(P(r)/Q_R(r)) − kT ln(Z/Z_R). The reference state typically results from a hypothetical system in which the specific interactions between the amino acids are absent. The second term involving Z and Z_R can be ignored, as it is a constant. In practice, P(r) is estimated from the database of known protein structures, while Q_R(r) typically results from calculations or simulations. For example, P(r) could be the conditional probability of finding the atoms of a valine and a serine at a given distance r from each other, giving rise to the free energy difference ΔF(r). The total free energy difference of a protein, ΔF_T, is then claimed to be the sum of all the pairwise free energies: ΔF_T = Σ_{i<j} ΔF(r_ij | a_i, a_j), where the sum runs over all amino acid pairs a_i, a_j (with i < j) and r_ij is their corresponding distance. In many studies Q_R does not depend on the amino acid sequence.
The main issues are: The wrong interpretation of this "potential" as a true, physically valid potential of mean force; The nature of the so-called reference state and its optimal formulation; The validity of generalizations beyond pairwise distances. Controversial analogy In response to the issue regarding the physical validity, the first justification of statistical PMFs was attempted by Sippl. It was based on an analogy with the statistical physics of liquids. For liquids, the potential of mean force is related to the radial distribution function g(r), which is given by: g(r) = P(r)/P*(r), where P(r) and P*(r) are the respective probabilities of finding two particles at a distance r from each other in the liquid and in the reference state. For liquids, the reference state is clearly defined; it corresponds to the ideal gas, consisting of non-interacting particles. The two-particle potential of mean force W(r) is related to g(r) by: W(r) = −kT ln g(r). According to the reversible work theorem, the two-particle potential of mean force W(r) is the reversible work required to bring two particles in the liquid from infinite separation to a distance r from each other. Sippl justified the use of statistical PMFs—a few years after he introduced them for use in protein structure prediction—by appealing to the analogy with the reversible work theorem for liquids. For liquids, g(r) can be experimentally measured using small angle X-ray scattering; for proteins, P(r) is obtained from the set of known protein structures, as explained in the previous section. However, as Ben-Naim wrote in a publication on the subject: [...] the quantities, referred to as "statistical potentials," "structure based potentials," or "pair potentials of mean force", as derived from the protein data bank (PDB), are neither "potentials" nor "potentials of mean force," in the ordinary sense as used in the literature on liquids and solutions. Moreover, this analogy does not solve the issue of how to specify a suitable reference state for proteins. Machine learning In the mid-2000s, authors started to combine multiple statistical potentials, derived from different structural features, into composite scores. For that purpose, they used machine learning techniques, such as support vector machines (SVMs). Probabilistic neural networks (PNNs) have also been applied for the training of a position-specific distance-dependent statistical potential. In 2016, the DeepMind artificial intelligence research laboratory started to apply deep learning techniques to the development of a torsion- and distance-dependent statistical potential. The resulting method, named AlphaFold, won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by correctly predicting the most accurate structure for 25 out of 43 free modelling domains. Explanation Bayesian probability Baker and co-workers justified statistical PMFs from a Bayesian point of view and used these insights in the construction of the coarse grained ROSETTA energy function. According to Bayesian probability calculus, the conditional probability of a structure X, given the amino acid sequence A, can be written as: P(X|A) = P(A|X) P(X) / P(A). P(X|A) is proportional to the product of the likelihood P(A|X) times the prior P(X). By assuming that the likelihood can be approximated as a product of pairwise probabilities, and applying Bayes' theorem, the likelihood can be written as: P(A|X) ≈ ∏_{i<j} P(a_i, a_j | r_ij) ∝ ∏_{i<j} P(r_ij | a_i, a_j) / P(r_ij), where the product runs over all amino acid pairs a_i, a_j (with i < j), and r_ij is the distance between amino acids i and j.
Obviously, the negative of the logarithm of the expression has the same functional form as the classic pairwise distance statistical PMFs, with the denominator playing the role of the reference state. This explanation has two shortcomings: it relies on the unfounded assumption that the likelihood can be expressed as a product of pairwise probabilities, and it is purely qualitative. Probability kinematics Hamelryck and co-workers later gave a quantitative explanation for the statistical potentials, according to which they approximate a form of probabilistic reasoning due to Richard Jeffrey and named probability kinematics. This variant of Bayesian thinking (sometimes called "Jeffrey conditioning") allows updating a prior distribution based on new information on the probabilities of the elements of a partition on the support of the prior. From this point of view, (i) it is not necessary to assume that the database of protein structures—used to build the potentials—follows a Boltzmann distribution, (ii) statistical potentials generalize readily beyond pairwise distances, and (iii) the reference ratio is determined by the prior distribution. Reference ratio Expressions that resemble statistical PMFs naturally result from the application of probability theory to solve a fundamental problem that arises in protein structure prediction: how to improve an imperfect probability distribution Q(X) over a first variable X using a probability distribution P(Y) over a second variable Y, with Y = f(X). Typically, X and Y are fine and coarse grained variables, respectively. For example, Q(X) could concern the local structure of the protein, while P(Y) could concern the pairwise distances between the amino acids. In that case, X could for example be a vector of dihedral angles that specifies all atom positions (assuming ideal bond lengths and angles). In order to combine the two distributions, such that the local structure will be distributed according to Q(X), while the pairwise distances will be distributed according to P(Y), the following expression is needed: P(X, Y) = (P(Y)/Q(Y)) Q(X), where Q(Y) is the distribution over Y implied by Q(X). The ratio in the expression corresponds to the PMF. Typically, Q(X) is brought in by sampling (typically from a fragment library), and not explicitly evaluated; the ratio, which in contrast is explicitly evaluated, corresponds to Sippl's PMF. This explanation is quantitative, and allows the generalization of statistical PMFs from pairwise distances to arbitrary coarse grained variables. It also provides a rigorous definition of the reference state, which is implied by Q(X). Applications Statistical potentials are used as energy functions in the assessment of an ensemble of structural models produced by homology modeling or protein threading. Many differently parameterized statistical potentials have been shown to successfully identify the native state structure from an ensemble of decoy or non-native structures. Statistical potentials are not only used for protein structure prediction, but also for modelling the protein folding pathway. See also Scoring functions for docking Discrete optimized protein energy CASP CAMEO3D Lennard-Jones potential Bond order potential Notes References Potentials Computational biology Bioinformatics Molecular modelling Protein structure
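As a concrete illustration of the inverse-Boltzmann construction above, the following sketch converts pairwise distance counts into pseudo-energies. The counts, bins, and kT value are made-up placeholders; a real potential would use statistics from the PDB together with a carefully formulated reference state:

```python
# Sketch of a pairwise distance statistical potential via the inverse
# Boltzmann relation: dF(r) = -kT * ln( P_obs(r) / Q_ref(r) ).
# Counts below are made-up placeholders, not real PDB statistics.

import math

kT = 0.59  # kcal/mol at ~300 K (only sets the energy scale)

def pmf(observed_counts, reference_counts):
    """Per-bin pseudo-energies from observed vs. reference distance counts."""
    p = [c / sum(observed_counts) for c in observed_counts]
    q = [c / sum(reference_counts) for c in reference_counts]
    return [-kT * math.log(pi / qi) for pi, qi in zip(p, q)]

# Distance bins (e.g. 3-4 A, 4-5 A, ...) for one amino-acid pair type:
obs = [40, 120, 90, 50]   # contacts seen for this pair in the database
ref = [60, 80, 80, 80]    # contacts expected in the reference state
print([round(e, 2) for e in pmf(obs, ref)])  # [0.24, -0.24, -0.07, 0.28]
# Negative values mark distances enriched relative to the reference state.
```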
Statistical potential
[ "Chemistry", "Engineering", "Biology" ]
1,966
[ "Biological engineering", "Molecular physics", "Bioinformatics", "Theoretical chemistry", "Molecular modelling", "Structural biology", "Computational biology", "Protein structure" ]
7,297,180
https://en.wikipedia.org/wiki/Digital%20Earth
Digital Earth is the name given to a concept by former US vice president Al Gore in 1998, describing a virtual representation of the Earth that is georeferenced and connected to the world's digital knowledge archives. Concept Original vision In a speech prepared for the California Science Center in Los Angeles on January 31, 1998, Gore described a digital future where schoolchildren - indeed all the world's citizens - could interact with a computer-generated three-dimensional spinning virtual globe and access vast amounts of scientific and cultural information to help them understand the Earth and its human activities. The greater part of this knowledge store would be free to all via the Internet, however a commercial marketplace of related products and services was envisioned to co-exist, in part in order to support the expensive infrastructure such a system would require. The origin of the idea can be traced back to Buckminster Fuller's Geoscope, a large spherical display to represent geographic phenomena. Many aspects of his proposal have been realized - for instance, virtual globe geo-browsers such as NASA World Wind, Google Earth and Microsoft's Bing Maps 3D for commercial, social and scientific applications. But the Gore speech outlined a truly global, collaborative linking of systems that has yet to happen. That vision has been continually interpreted and defined by the growing global community of interest described below. The Digital Earth imagined in the speech has been defined as an "organizing vision" to steer scientists and technologists towards a shared goal, promising substantial advances in many scientific and engineering areas, similar to the Information superhighway. An emerging view Two noteworthy excerpts from the Beijing Declaration on Digital Earth, ratified September 12, 2009 at the 6th International Symposium on Digital Earth in Beijing: "Digital Earth is an integral part of other advanced technologies including: earth observation, geo-information systems, global positioning systems, communication networks, sensor webs, electromagnetic identifiers, virtual reality, grid computation, etc. It is seen as a global strategic contributor to scientific and technological developments, and will be a catalyst in finding solutions to international scientific and societal issues." "Digital Earth should play a strategic and sustainable role in addressing such challenges to human society as natural resource depletion, food and water insecurity, energy shortages, environmental degradation, natural disasters response, population explosion, and, in particular, global climate change." Next-generation digital Earth A group of international geographic and environmental scientists from government, industry, and academia brought together by the Vespucci Initiative for the Advancement of Geographic Information Science, and the Joint Research Centre of the European Commission recently published "Next-Generation Digital Earth" a position paper that suggests its eight key elements: Not one Digital Earth, but multiple connected globes/infrastructures addressing the needs of different audiences: citizens, communities, policymakers, scientists, educationalists. Problem oriented: e.g. 
environment, health, societal benefit areas, and transparent on the impacts of technologies on the environment Allowing search through time and space to find similar/analogue situations with real time data from both sensors and humans (different from what existing GIS can do, and different from adding analytical functions to a virtual globe) Asking questions about change, identification of anomalies in space in both human and environmental domains (flag things that are not consistent with their surroundings in real time) Enabling access to data, information, services, and models as well as scenarios and forecasts: from simple queries to complex analyses across the environmental and social domains. Supporting the visualization of abstract concepts and data types (e.g. low income, poor health, and semantics) Based on open access, and participation across multiple technological platforms, and media (e.g. text, voice and multi-media) Engaging, interactive, exploratory, and a laboratory for learning and for multidisciplinary education and science. Key developments Significant progress towards Digital Earth has been achieved over the last decade as collected in a survey paper by Mahdavi-Amiri et al., including work in these categories: Spatial Data Infrastructure (SDI) The number of Spatial Data Infrastructures has grown steadily since the early 1990s, aided in part by interoperability standards maintained by the Open Geospatial Consortium and the International Organization for Standardization (ISO). Significant recent efforts to link and coordinate SDIs include Infrastructure for Spatial Information in Europe (INSPIRE) and the UNSDI Initiative of the UN Geographic Information Working Group (UNGIWG). Between 1998 and 2001, the NASA-chaired Interagency Digital Earth Working Group (IDEW) contributed to this growth with a particular focus on interoperability issues, giving rise to the Web Map Service standard among others. Geobrowsers The scientific use of geo-browser virtual globes such as Google Earth, NASA's World Wind, and ESRI's ArcGIS Explorer has grown significantly as their functionality has improved and with the KML format having become the de facto standard for globe visualizations. Numerous examples can be viewed at the Google Earth Outreach Showcase and at the World Wind Java Demo Applications and Applets. Sensor networks Geosensors are defined as "...any device receiving and measuring environmental stimuli that can be geographically referenced." Large scale networks of geosensors have been in place for many years, measuring Earth surface, hydrological and atmospheric phenomena. The advent of the Internet led to a large expansion of such networks, and efforts like the Global Earth Observation System of Systems (GEOSS) Initiative aim to connect them. Volunteered Geographic Information (VGI) The term Volunteered Geographic Information was coined in 2007 by geographer Michael Goodchild, referring to the rapidly growing volume of social and scientific georeferenced user-generated content being made available on the Web by both expert and non-expert individuals and groups. This phenomenon is seen as an emerging Geoweb that provides Application Programming Interfaces (APIs) to software developers and increasingly user-friendly web mapping software to both scientists and the public at large. 
International community The International Journal of Digital Earth is a peer-reviewed research journal, launched in 2008, concerned with the science and technology of Digital Earth and its applications in all major disciplines. The International Society for Digital Earth is a non-political, non-governmental and not-for-profit international organization, principally for promotion of academic exchange, science and technology innovation, education, and international collaboration. Several International Symposia on Digital Earth (ISDE) have been held. There have been seven ISDE symposia and three Digital Earth Summits. Proceedings for many of them are available. The 7th Symposium was held in Perth, Western Australia in 2011. The 4th Digital Earth Summit was held in Wellington, New Zealand in September 2012. Digital Earth Reference Model (DERM) The term Digital Earth Reference Model (DERM) was coined by Tim Foresman in connection with a vision for an all-encompassing geospatial platform as an abstract for information flow in support of Al Gore's vision for a Digital Earth. The Digital Earth reference model seeks to facilitate and promote the use of georeferenced information from multiple sources over the Internet. A digital Earth reference model defines a fixed global reference frame for the Earth using four principles of a digital system, namely: Discrete partitioning using regular or irregular cell mesh, tiling or grid; Data acquisition using signal processing theory (sampling and quantizing) for assigning binary values from continuous analog or other digital sources to the discrete cell partitions; An ordering or naming of cells that can provide both unique spatial indexing and geographic location address; A set of mathematical operations built on the indexing for algebraic, geometric, Boolean and image processing transforms, etc. The Open Geospatial Consortium has a spatial reference system standard based on the DERM called a Discrete Global Grid System (DGGS). According to OGC "a DGGS is a spatial reference system that uses a hierarchical tessellation of cells to partition and address the globe. DGGS are characterized by the properties of their cell structure, geo-encoding, quantization strategy and associated mathematical functions. The OGC DGGS standard supports the specification of standardized DGGS infrastructures that enable the integrated analysis of very large, multi-source, multi-resolution, multi-dimensional, distributed geospatial data. Interoperability between OGC DGGS implementations is anticipated through extension interface encodings of OGC Web Services." Thus, the DGGS is a discrete, hierarchical, information grid with an addressing (or indexing) scheme to assign unique addresses to each cell across the entire DGGS Domain. 
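As a toy sketch of the DGGS idea (hierarchical cells with unique, prefix-structured addresses), the following Python function assigns a quadtree-style cell address to a latitude/longitude point. It uses a plain lat/lon split rather than the equal-area cell geometries of real OGC DGGS implementations, so it illustrates only the addressing and indexing principle.

```python
def cell_address(lat: float, lon: float, depth: int) -> str:
    """Quadtree-style hierarchical address: one digit (0-3) per level.

    Toy illustration of DGGS-style addressing; a real DGGS uses a
    hierarchical tessellation of (typically equal-area) cells, not
    this simple equirectangular split.
    """
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    digits = []
    for _ in range(depth):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        quad = 0
        if lat >= lat_mid:
            lat_lo = lat_mid
            quad += 2
        else:
            lat_hi = lat_mid
        if lon >= lon_mid:
            lon_lo = lon_mid
            quad += 1
        else:
            lon_hi = lon_mid
        digits.append(str(quad))
    return "".join(digits)

# Nearby points share address prefixes; greater depth means finer cells.
print(cell_address(48.8566, 2.3522, 8))   # central Paris
print(cell_address(48.8606, 2.3376, 8))   # the Louvre, sharing a prefix
```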
These roots can be traced back to visionaries such as Buckminster Fuller who proposed development of a GeoScope half a century ago, analogous to a microscope to examine and improve our understanding of the planet Earth. From Fall 1998 until Fall 2000, NASA led the U.S. Digital Earth initiative in cooperation with its sister government agencies, including the Federal Geospatial Data Committee (FGDC). Attention to consensus development of standards, protocols and tools through cooperative test-bed initiatives was the primary process for advancement of this initiative within the government community. In 1999, NASA was selected to head a new Interagency Digital Earth Working Group (IDEW), due to its reputation for technology innovations and its focus on the study of planetary change. The new initiative was located in the NASA's Office of Earth Sciences. This titular focus was considered necessary to help align over 17 government agencies and keep sustainability and Earth oriented applications as a guiding principle for the Digital Earth enterprise. Components for development of 3-D Earth graphic-user-interfaces (GUIs) were placed into various technological sectors to stimulate cooperative development support. While initially limited to government personnel, industry and academia were early observers attending IDEW workshops to discuss topics such as, visualization, information fusion, standards and interoperability, advanced computational algorithms, digital libraries and museums. In March 2000, at a special IDEW meeting hosted by Oracle Corporation in Herndon, Virginia, industry representatives demonstrated several promising 3-D visualization prototypes. Within two years, these were captivating international audiences, including Kofi Annan and Colin Powell, in government, business, science, and mass media who began to purchase the early commercial geo-browsers. Just as the spectacular Apollo photography of Earthrise provided an inspiring Earth-centric image for new generations to appreciate the fragility of our biosphere, the 3-D Digital Earths began inspiring growing numbers of people to the possibility of better understanding and possibly saving our planet. Introduction of satellite data into commercially accessible spatial toolboxes significantly advanced the capacity to map, monitor, and manage our planet's resources and provide a unifying perspective on the Digital Earth vision. After Al Gore lost the 2000 presidential election, the incoming administration considered the programmatic moniker Digital Earth a political liability. Digital Earth was relegated to a minority status within the FGDC, used primarily to define 3-D visualization reference models. China In 1999, with the Chinese government's full backing, the inaugural International Symposium on Digital Earth in Beijing provided a venue for the extensive international support for implementing the Gore Digital Earth vision introduced a year earlier. Hundreds of digital earth cities created by governments and universities resulted. In China, Digital Earth became a metaphor for modernization and automation with computers, leading to its incorporation into a five-year modernization plan. Originating from China's satellite remote sensing community, Digital Earth prowess spread to a range of applications including flood predictions, dust cloud modeling, environmental assessments, and city planning. 
China has been omnipresent at all international Digital Earth conferences since and has recently founded the International Society for Digital Earth, one of the first NGOs created by the Chinese Academy of Sciences. In 2009, the International Symposium on Digital Earth returned to Beijing for its 6th meeting. United Nations In 2000, the United Nations Environment Programme (UNEP) advanced the Digital Earth to enhance decision-makers' access to information for then Secretary-General Kofi Annan and the United Nations Security Council. UNEP promoted use of web-based geospatial technologies with the ability to access the world's environmental information, in association with economic and social policy issues. A reorganization of UNEP's data and information resources was initiated in 2001, based on the GSDI/DE architecture for a network of distributed and interoperable databases creating a framework of linked servers. The design concept was based upon using a growing network of internet mapping software and database content with advanced capabilities to link GIS tools and applications. UNEP.net, launched in February 2001, provided UN staff with an unparalleled facility for accessing authoritative environmental data resources and a visible example to others in the UN community. However, a universal user interface for UNEP.net, suitable for members of the Security Council (that is, non-scientists), did not exist. UNEP began actively testing prototypes for a UNEP geo-browser in mid-2001, with a showcase for the African community displayed at the 5th African GIS Conference in Nairobi, Kenya, in November 2001. Keyhole Technology, Inc. (purchased by Google in 2004, its technology later becoming Google Earth) was contracted to develop and demonstrate the first full globe 3-D interactive Digital Earth using web-stream data from a distributed database located on servers around the planet. A concerted effort within the UN community, via the Geographic Information Working Group (UNGIWG), followed immediately, including purchase of early Keyhole systems by 2002. UNEP provided further public demonstrations for this early Digital Earth system at the World Summit on Sustainable Development in September 2002 at Johannesburg, South Africa. In seeking an engineering approach to system-wide development of the Digital Earth model, recommendations were made at the 3rd UNGIWG Meeting, June 2002, Washington, D.C. for creating a document on the Functional User Requirements for geo-browsers. This proposal was communicated to the ISDE Secretariat in Beijing and the organizing committee for the 3rd International Symposium on Digital Earth, and agreement was reached by the Chinese Academy of Sciences-sponsored Secretariat to host the first of the two Digital Earth geo-browser meetings. Japan Japan, led by Keio University and JAXA, has also played a prominent international role in Digital Earth, helping to create the Digital Asia Network with a secretariat located in Bangkok to promote regional cooperation and initiatives. Citizens in the Gifu Prefecture upload information to community-scale Digital Earth programs from their smartphones on topics ranging from first sightings of fireflies in spring to the location of blocked handicap access ramps. 
Events See also Destination Earth (European Union) Digital twin Geocode Geodesic grid Géoportail Geoweb Grid reference International Cartographic Association (ICA) International Society for Digital Earth (ISDE) Spatial index References Further reading External links Digital Earth technologies ADEPT - Alexandria Digital Earth Prototype (1999–2004) US Government Digital Earth Reference Model (DERM) Global Spatial Data Model (GSDM) Planetary Skin: A global platform for a new Era of Collaboration Digital marketing China PYXIS WorldView Studio: Digital Earth platform for spatial analysis and sharing map data Geographic data and information
Digital Earth
[ "Technology" ]
3,166
[ "Geographic data and information", "Data" ]
7,298,332
https://en.wikipedia.org/wiki/Oversize%20load
In road transport, an oversize load (or overweight load) is a load that exceeds the standard or ordinary legal size and/or weight limits for a truck to convey on a specified portion of road, highway, or other transport infrastructure, such as air freight or water freight. In Europe, it may be referred to as special transport or heavy and oversized transportation. There may also be load-per-axle limits. However, a load that exceeds the per-axle limits but not the overall weight limits is considered overweight. Examples of oversize/overweight loads include construction machines (cranes, front loaders, backhoes, etc.), pre-built homes, containers, and construction elements (bridge beams, generators, windmill propellers, rocket stages, and industrial equipment). Overview The legal dimensions and weights vary between countries and regions within a country. A vehicle which exceeds the legal dimensions usually requires a special permit, which requires extra fees to be paid in order for the oversize/overweight vehicle to legally travel on the roadways. The permit usually specifies a route the load must follow as well as the dates and times during which the load may travel. When a load cannot be dismantled into units that can be transported without exceeding the limitations in terms of the dimensions and/or mass, it is classified as an abnormal load. Another definition can be summarized as follows: an abnormal indivisible load ('AIL') is one which cannot be divided into two or more loads for transporting (on roads). Also, break bulk is used to define freight that cannot be loaded into any ocean container or that is too large for air cargo. Any road transport is framed by the CMR Convention (Convention on the Contract for the International Carriage of Goods by Road), which relates to various legal issues concerning transportation of cargo, predominantly by lorries, by road. Cargo loading and securement According to the Federal Motor Carrier Safety Administration and National Highway Traffic Safety Administration Large Truck Crash Causation Study, 7% of U.S. trucking accidents are caused by improper cargo securement or cargo shifts. Shifting cargo can cause the truck to destabilize, or the load can fall off completely, leading to serious public safety issues. Load shifting is prohibited by law, and it is the responsibility of the shipper, motor carrier, driver, receiver, and the securing device manufacturer to ensure the cargo is completely secured. International perspectives In a specific country, the roads are built in a way that allows a vehicle with dimensions within the standard legal limits to safely (though not necessarily easily) drive and turn. Roads that do not allow large vehicles may be marked with traffic signs. These may include per-axle load, height, width, or overall length limits. Europe Trucks must have special "convoi exceptionnel" signs and lights that warn of the oversized cargo. The escort car also has special signs, which depend on the country in which it operates. Special permits are issued by local authorities to allow a transporter to operate on a public road for a limited period and for a certain and given route. Heavy transport companies tend to focus on renewables, civil and infrastructure, offshore, oil and gas, heavy engineering and power generation industries. Other companies across Europe have also collaborated to form the Route To Space Alliance. 
The Netherlands Due to its strategic location, there are many Dutch-based special transport companies, but due to the relatively small size of the country, these companies, such as Van der Vlist, have often started to spread further afield to increase their market and take advantage of the freedom of movement offered through the EU. Romania In Romania, if the total dimensions (truck+load) exceed × (or if it does not fit into a tilt truck), then a transport is considered out of gauge. A table of maximum dimensions and weight as well as best practices is available for European countries on the following industry resource site. Romania has an active market for special transporters where, as mentioned above, companies such as Schnell Trans deal with international transportation projects. Trailers suitable for special loads have different characteristics depending on the number of axles, height from the ground to the platform, extensions or load capacity. Each of these trucks can carry loads such as trams, energy transformers, construction machines, metallic structures or wooden boxes/crates. United Kingdom An abnormal load is defined as a load with a weight of more than 44 tonnes; an axle load of more than 10 tonnes for a single non-driving axle and 11.5 tonnes for a single driving axle; a width of more than 2.9 metres; or a rigid length of more than 18.65 metres. Anyone wishing to transport an abnormal load must notify the police, highway authorities and any on-route bridge and structure owners such as Network Rail. National Highways operates a system known as "Electronic Service Delivery for Abnormal Loads" (ESDAL) for the purpose of supporting notifications. New Zealand In New Zealand, an oversize load is a vehicle and/or load that is wider than or higher than . Overlength limits vary depending on the type and the configuration of vehicle, but the overall maximum forward distance (i.e. the length from the front of the vehicle to the centre axis of the rear axle set) is , the overall maximum single vehicle length is (some buses can be longer), and the overall maximum combination length is . Loads must be indivisible, except when the vehicle is oversize itself, where it can carry divisible loads as long as the divisible load fits within the standard load limits. Permits are not required for oversize vehicles which are under long, under high, and fit within a set combination of width and forward distance; but they must comply with certain rules regarding piloting, travel times and obstructions. United States In the United States, an oversize load is a vehicle and/or load that is wider than . Each individual state has different requirements regarding height and length (most states are tall), and a driver must purchase a permit for each state he/she will be traveling through. In many states, a load must be considered "nondivisible" to qualify for a permit (i.e. an object which cannot be broken down into smaller pieces), although some states allow divisible loads to be granted permits. India In India, any load which protrudes beyond the platform of the vehicle, as defined in CMVR 1989, is considered ODC (Over Dimensional Cargo). A load with a height over 4 m, a width over 2.6 m, or a length over 12 m for a rigid vehicle (18 m for a tractor-trailer combination) needs state-specific permissions, but no load can exceed the GVW of the vehicle. 
Loads above 55 tons can only be moved on a HMT (hydraulic modular trailer) and puller tractor combination, for which a nationalized permission must be obtained via the MORTH (Ministry of Road Transport and Highways of India) portal, with an HMT payload of 18 tons per axle excluding the weight of the puller tractor. Loads not complying with the rules are fined individually by RTO (Regional Transport Office) officers: one fine for each of the three dimensions and one for weight. Signaling A pilot car driver may temporarily block traffic at intersections to ensure the safe passage of the truck. Hazards Oversize loads present a hazard to roadway structures as well as to road traffic. Because they exceed design clearances, there is a risk that such vehicles can hit bridges and other overhead structures. Over-height vehicle impacts are a frequent cause of damage to bridges, and truss bridges are particularly vulnerable, due to having critical support members over the roadway. An over-height load struck the overhead beams on the I-5 Skagit River bridge in 2013, which caused the bridge to collapse. Licensing Different countries have different approaches to licensing oversize/overweight loads. Licenses may be issued for a specific load, for a period of time, or to a specific company. In most jurisdictions, the permit specifies the exact route a vehicle must take, and includes clearance warnings. However, in some places, such as Washington state, drivers are responsible for choosing their own route. The carrier can choose to obtain the required permits themselves or go through a permit service. See also Loading gauge Structure gauge Trucking industry in the United States Maritime shipping Roll trailer References External links U.S. Government website European Best Practice Guidelines for Abnormal Road Transports road hazards road transport road transport in Europe transport by cargo Freight transport
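As a toy illustration of how such jurisdiction-specific limits are applied, here is a small Python sketch; the limit values are supplied by the caller because the real figures vary by country and state (the example numbers below merely echo figures quoted in this article and are illustrative only).

```python
from dataclasses import dataclass

@dataclass
class Limits:
    """Legal maxima for one jurisdiction (caller-supplied values)."""
    width_m: float
    height_m: float
    length_m: float
    gross_weight_t: float

def needs_permit(width_m: float, height_m: float, length_m: float,
                 weight_t: float, limits: Limits) -> bool:
    """True if the vehicle plus load exceeds any limit, i.e. is
    oversize/overweight and needs a special permit."""
    return (width_m > limits.width_m or height_m > limits.height_m
            or length_m > limits.length_m or weight_t > limits.gross_weight_t)

# Hypothetical jurisdiction using the 2.6 m / 4 m / 12 m / 44 t figures
# mentioned above, purely for illustration:
example = Limits(width_m=2.6, height_m=4.0, length_m=12.0, gross_weight_t=44.0)
print(needs_permit(3.2, 3.8, 11.0, 40.0, example))  # True: too wide
```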
Oversize load
[ "Technology" ]
1,722
[ "Road hazards" ]
7,299,372
https://en.wikipedia.org/wiki/Hollomon%E2%80%93Jaffe%20parameter
The Hollomon–Jaffe parameter (HP), also generally known as the Larson–Miller parameter, describes the effect of a heat treatment at a temperature for a certain time. This parameter is especially used to describe the tempering of steels, so that it is also called tempering parameter. Effect The effect of the heat treatment depends on its temperature and its time. The same effect can be achieved with a low temperature and a long holding time, or with a higher temperature and a short holding time. Formula In the Hollomon–Jaffe parameter, this exchangeability of time and temperature can be described by the following formula: $H_{p} = \frac{(T + 273.15)\,(C + \log_{10} t)}{1000}$. This formula is not consistent concerning the units; the parameters must be entered in a certain manner. T is in degrees Celsius. The argument of the logarithmic function has the unit hours. C is a parameter unique to the material used. The Hollomon parameter itself is unitless and realistic numeric values vary between 15 and 21. An equivalent formulation is $H_{p} = T\,(C + \log_{10} t)$, where T is in kilokelvins, t is in hours, and C is the same as above. Hollomon and Jaffe determined the value of C experimentally by plotting hardness versus tempering time for a series of tempering temperatures of interest and interpolating the data to obtain the time necessary to yield a number of different hardness values. This work was based on six different heats of plain carbon steels with carbon contents varying from 0.35%–1.15%. The value of C was found to vary somewhat for different steels and decrease linearly with the carbon content of a steel grade. Hollomon and Jaffe proposed that C = 19.5 for carbon and alloy steels with carbon contents of 0.25%–0.4%; and C = 15 for tool steels with carbon contents of 0.9%–1.2%. See also Zener–Hollomon parameter References Metal heat treatments
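As a worked example of the time–temperature exchangeability, here is a small Python sketch using the first formulation above; the value C = 19.5 follows the carbon/alloy-steel figure quoted in the text, and equating the parameter across two treatments is the usual way the formula is applied.

```python
import math

def hollomon_jaffe(temp_c: float, hours: float, c: float = 19.5) -> float:
    """Hollomon-Jaffe parameter; temp_c in degrees Celsius, t in hours."""
    return (temp_c + 273.15) * (c + math.log10(hours)) / 1000.0

def equivalent_time(hp: float, temp_c: float, c: float = 19.5) -> float:
    """Holding time (hours) at temp_c giving the same parameter hp."""
    return 10.0 ** (1000.0 * hp / (temp_c + 273.15) - c)

# A 2 h temper at 600 C, and the longer hold needed at 550 C for the
# same tempering effect:
hp = hollomon_jaffe(600.0, 2.0)        # about 17.3
print(hp, equivalent_time(hp, 550.0))  # about 32 h at 550 C
```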
Hollomon–Jaffe parameter
[ "Chemistry" ]
396
[ "Metallurgical processes", "Metal heat treatments" ]
7,299,728
https://en.wikipedia.org/wiki/Sociometer
Sociometer theory is a theory of self-esteem from an evolutionary psychological perspective which proposes that self-esteem is a gauge (or sociometer) of interpersonal relationships. This theoretical perspective was first introduced by Mark Leary and colleagues in 1995 and later expanded on by Kirkpatrick and Ellis. In Leary's research, the idea of self-esteem as a sociometer is discussed in depth. This theory was created as a response to psychological phenomena, i.e., social emotions, inter- and intra-personal behaviors, self-serving biases, and reactions to rejection. Based on this theory, self-esteem is a measure of effectiveness in social relations and interactions that monitors acceptance and/or rejection from others. With this, an emphasis is placed on relational value, which is the degree to which a person regards his or her relationship with another, and how it affects day-to-day life. Various studies and research have confirmed that if a person is deemed to have relational value, they are more likely to have higher self-esteem. The main concept of sociometer theory is that the self-esteem system acts as a gauge to measure the quality of an individual's current and forthcoming relationships. Furthermore, this measurement of self-esteem assesses these two types of relationships in terms of relational appreciation, which is how other people might view and value the relationships they hold with the individual. If an individual's relational appreciation shifts negatively, relational devaluation is experienced. Relational devaluation is felt as a threat to belongingness: the sociometer gauge highlights such threats, producing emotional distress that motivates action to regain relational appreciation and restore balance in the individual's self-esteem. According to Leary, there are five main groups associated with relational value that are classified as those affording the greatest impact on an individual. They are: 1) macro-level, i.e., communities, 2) instrumental coalitions, i.e., teams, committees, 3) mating relationships, 4) kin relationships, and 5) friendships. A study was conducted to see just how much people depend on peers, outside factors and relational values to regulate their lives. The objective of the study was to pick groups for an activity based on the evaluations given by the students. In the study, two groups were assigned. Both groups consisted of college students who submitted and were subjected to a peer evaluation. The difference was that the control group of students chose whether they 1) wanted to interact with the person or 2) wanted to dissociate from the person. When previously asked, some students stated that they were indifferent or did not care what others' opinions of them were. However, when the results were analyzed, there was a great deal of fluctuation in overall self-esteem. Those who were placed in the second group (of dissociation), receiving a low relational value, displayed a lowered self-esteem. As a result, this compromised the way they assessed the situation. In the first group, where perceived relational value was high, self-esteem was also high. This provides some evidence for an evolutionary basis in the fundamental human need for inclusion in a group, and the burden of being on the outskirts of social acceptance. 
Cameron and Stinson further review the sociometer theory definition, highlighting two key constructs of the concept: Specific experiences of social acceptance and rejection are internalised to form a representation of one's own worth and of the effort one is contributing as a social partner. The higher self-esteem someone has, the more they will perceive themselves as being valued by others. Individuals with low self-esteem question their value as a social partner, often letting their insecurities carry over into future relationships. Types of self-esteem relating to sociometer theory State self-esteem gauges the person's level of current relational appreciation and assesses how likely the individual is to be accepted and included versus rejected and excluded by other people in the immediate situation. The state self-esteem system monitors the person's behaviour and social environment for cues relevant to relational evaluation and responds with affective and motivational consequences when cues relevant to exclusion are detected. Trait self-esteem is a subjective measure of how likely an individual is to be accepted or rejected in a social situation. This form of self-esteem aids in the assessment of an individual in social situations, furthermore estimating whether current or future relationships would be respected and valued long-term. Global self-esteem is a stable, internal measure of self-esteem that an individual draws on to assess how different races, ethnicities and communities might regard them. Often, this internal measure of self-esteem is beneficial in restoring relational appreciation if an individual's self-esteem drops below normal levels. Domain-specific self-esteem is a measure by which an individual will examine their own accomplishments, such as in social, academic, and athletic situations, which could alter self-esteem. This domain-specific measure is an effective way to identify outliers in a current performance, instead of creating a misinformed trend (i.e., that you are consistently underperforming). Evidence in support Support for sociometer theory has come from an international, cross-sequential study conducted in the Netherlands, assessing self-esteem in 1599 seven- and eight-year-olds. The study examined firstly how self-esteem would develop in childhood, and secondly whether self-esteem develops alongside changing peer and family relationships. Inter-individual differences and intra-individual changes in social support over time were assessed. Mean-level self-esteem was found to remain stable in middle childhood and elicited no alterations. Additionally, both within- and between-person level assessments elicited positive reports of self-esteem, showing a more robust social network from self-report measures that participants completed after these assessments. Cameron and Stinson again showed evidence in support of sociometer theory by demonstrating that acceptance and rejection experiences can have a strong influence on self-esteem levels both in the short and long term. Self-esteem is responsive to social acceptance and rejection: State self-esteem (self-esteem in the immediate situation) is heavily responsive to both social acceptance and rejection. It is known that acceptance causes increases in state self-esteem and that rejection causes decreases. 
In a laboratory setting, these alterations are due to future projections of social rejection/acceptance or to remembering past experiences in which social rejection/acceptance occurred, which can allow an individual's self-esteem to subjectively deviate from normal levels. Global self-esteem is associated with perceptions of social worth: Social worth is a concept that can be utilised as a sociometer to measure self-esteem. Furthermore, this association is thought to exist because global self-esteem influences self-views in a top-down manner. In this top-down process, the traits that are normally associated with the sense of belongingness in social situations act to mediate self-esteem. Additionally, if global self-esteem influences self-views in this top-down manner, then an individual's perceptions of their belongingness and social worth should be positively correlated with global self-esteem. Self-esteem regulates responses to acceptance and rejection: Sociometer theory emphasises that a negative alteration in self-esteem should disrupt the self-esteem system balance, alerting the sociometer to these discrepancies and allowing for behaviour that restores this balance by restoring belongingness and an individual's self-worth in social situations. See also Assertiveness Body image Clinical depression Deep social mind Evolutionary psychology and culture Fear of negative evaluation Identity Inner critic Invisible support List of confidence tricks Optimism bias Outline of self Overconfidence effect Passiveness Performance anxiety Primate empathy Self image Self-affirmation Self-awareness Self-compassion Self-confidence Self-enhancement Self-esteem functions Self-esteem instability Self-evaluation maintenance theory Shyness Simulation theory of empathy Social anxiety Social phobia References External links Sociometer theory (at PsychWiki) Self Interpersonal relationships Evolutionary psychology
Sociometer
[ "Biology" ]
1,634
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
7,299,890
https://en.wikipedia.org/wiki/Structural%20semantics
Structural semantics (also structuralist semantics) is a linguistic school and paradigm that emerged in Europe from the 1930s, inspired by the structuralist linguistic movement started by Ferdinand de Saussure's 1916 work "Cours De Linguistique Generale" (A Course in General Linguistics). Examples of approaches within structural semantics are Lexical field theory (1931-1960s), relational semantics (from the 1960s by John Lyons) and componential analysis (from the 1960s by Eugenio Coseriu, Bernard Pottier and Algirdas Greimas). From the 1960s these approaches were incorporated into generative linguistics. Other prominent developers of structural semantics have been Louis Hjelmslev, Émile Benveniste, Klaus Heger, Kurt Baldinger and Horst Geckeler. Logical positivism asserts that structural semantics is the study of relationships between the meanings of terms within a sentence, and of how meaning can be composed from smaller elements. However, some critical theorists suggest that meaning is only divided into smaller structural units via its regulation in concrete social interactions; outside of these interactions, language may become meaningless. Structural semantics is the branch that marked the modern linguistics movement started by Ferdinand de Saussure at the turn of the 20th century in his posthumously published work "Cours De Linguistique Generale" (A Course in General Linguistics). He posits that language is a system of inter-related units and structures and that every unit of language is related to the others within the same system. His position later became the foundation for other theories such as componential analysis and relational predicates. Structuralism is a highly productive approach within semantics, as it explains the concordance in the meaning of certain words and utterances. The concept of sense relations as a means of semantic interpretation is an offshoot of this theory as well. Structuralism has revolutionized semantics, bringing it to its present state, and it also aids the correct understanding of other aspects of linguistics. Among the consequential fields of structuralism in linguistics are sense relations (both lexical and sentential). See also Prototype Semantics Cognitive Semantics Cognitive Linguistics Principle of compositionality Ferdinand de Saussure Algirdas Julien Greimas References Logical positivism Semantics Structuralism
Structural semantics
[ "Mathematics" ]
455
[ "Mathematical logic", "Logical positivism" ]
7,300,187
https://en.wikipedia.org/wiki/Tetraaminoethylene
In organic chemistry, tetraaminoethylene is a hypothetical organic compound with formula (H2N)2C=C(NH2)2 or C2H8N4. Like all geminal polyamines, this compound has never been synthesised and is believed to be extremely unstable. However, there are many stable compounds that can be viewed as derivatives of tetraaminoethylene, with various organic functional groups substituted for some or all hydrogen atoms. These compounds, which have the general formula (R2N)2C=C(NR2)2, are collectively called tetraaminoethylenes. Tetraaminoethylenes are important in organic chemistry as dimers of diaminocarbenes, a type of stable carbene with the general formula (R2N)2C:. Reactions Tetraaminoethylenes react with acids to give formamidinium salts. Tetraaminoethylenes react with oxygen to give urea derivatives (R2N)2C=O. A notorious example is the spontaneous reaction of tetrakis(dimethylamino)ethylene, ((CH3)2N)2C=C(N(CH3)2)2, in air with emission of a green-blue light, which was used by downed US Navy pilots to signal for help in World War II. References Enamines Hypothetical chemical compounds
Tetraaminoethylene
[ "Chemistry" ]
258
[ "Theoretical chemistry", "Hypotheses in chemistry", "Hypothetical chemical compounds" ]
7,300,379
https://en.wikipedia.org/wiki/Multiple%20sub-Nyquist%20sampling%20encoding
MUSE (Multiple sub-Nyquist Sampling Encoding), commercially known as Hi-Vision (a contraction of HIgh-definition teleVISION), was a Japanese analog high-definition television system, with design efforts going back to 1979. It used dot-interlacing and digital video compression to deliver 1125 line, 60 field-per-second (1125i60) signals to the home. The system was standardized as ITU-R recommendation BO.786 and specified by SMPTE 260M, using a colorimetry matrix specified by SMPTE 240M. As with other analog systems, not all lines carry visible information. On MUSE there are 1035 active interlaced lines, so this system is sometimes also referred to as 1035i. MUSE employed 2-dimensional filtering, dot-interlacing, motion-vector compensation and line-sequential color encoding with time compression to "fold" or compress an original 30 MHz bandwidth Hi-Vision source signal into just 8.1 MHz. Japan began broadcasting wideband analog HDTV signals in December 1988, initially with an aspect ratio of 2:1. The Sony HDVS high-definition video system was used to create content for the MUSE system, but did not record MUSE signals; it recorded uncompressed Hi-Vision signals. By the time of its commercial launch in 1991, digital HDTV was already under development in the United States. Hi-Vision MUSE was mainly broadcast by NHK through their BShi satellite TV channel, although other channels such as WOWOW, TV Asahi, Fuji Television, TBS Television, Nippon Television, and TV Tokyo also broadcast in MUSE. On May 20, 1994, Panasonic released the first MUSE LaserDisc player. There were also a number of players available from other brands like Pioneer and Sony. NHK continued broadcasting Hi-Vision in analog until 2007. Other channels had stopped soon after December 1, 2000 as they transitioned to digital HD signals in ISDB, Japan's digital broadcast standard. History MUSE was developed by NHK Science & Technology Research Laboratories in the 1980s as a compression system for Hi-Vision HDTV signals. Japanese broadcast engineers immediately rejected conventional vestigial sideband broadcasting. It was decided early on that MUSE would be a satellite broadcast format, as Japan economically supports satellite broadcasting. MUSE was transmitted at a frequency of 21 GHz or 12 GHz. Modulation research Japanese broadcast engineers had been studying the various HDTV broadcast types for some time. It was initially thought that SHF, EHF or optical fiber would have to be used to transmit HDTV due to the high bandwidth of the signal, and HLO-PAL would be used for terrestrial broadcast. HLO-PAL is a conventionally constructed composite signal (based on Y for luminance and C for chroma, like NTSC and PAL) and uses a phase alternating by line with half-line offset carrier encoding of the wideband/narrowband chroma components. Only the very lowest part of the wideband chroma component overlapped the high-frequency chroma. The narrowband chroma was completely separated from luminance. PAF, or phase alternating by field (like the first NTSC color system trial), was also experimented with, and it gave much better decoding results, but NHK abandoned all composite encoding systems. Because satellite transmission was to be used, frequency modulation (FM) had to be employed, which brings a power-limitation problem. FM incurs triangular noise, so if a subcarrier-based composite signal is used with FM, the demodulated chroma signal has more noise than the luminance. 
Because of this, they looked at other options, and decided to use component emission for satellite. At one point, it seemed that FCFE (Frame Conversion Fineness Enhanced), an I/P conversion compression system, would be chosen, but MUSE was ultimately picked. Separate transmission of Y and C components was explored. The MUSE format as ultimately transmitted uses separated component signalling. The improvement in picture quality was so great that the original test systems were recalled. One more power saving tweak was made: the lack of visual response to low frequency noise allows significant reduction in transponder power if the higher video frequencies are emphasised prior to modulation at the transmitter and de-emphasized at the receiver. Technical specifications MUSE's "1125 lines" are an analog measurement, which includes non-video scan lines taking place while a CRT's electron beam returns to the top of the screen to begin scanning the next field. Only 1035 lines have picture information. Digital signals count only the lines (rows of pixels) that have actual detail, so NTSC's 525 lines become 486i (rounded to 480 to be MPEG compatible), PAL's 625 lines become 576i, and MUSE would be 1035i. To convert the bandwidth of Hi-Vision MUSE into "conventional" lines-of-horizontal resolution (as is used in the NTSC world), multiply by 29.9 lines per MHz of bandwidth (NTSC and PAL/SECAM are 79.9 lines per MHz); this calculation of 29.9 lines per MHz works for all current HD systems including Blu-ray and HD-DVD. So, for MUSE, during a still picture, the lines of resolution would be 598 lines of luminance resolution per picture height. The chroma resolution is 209 lines. The horizontal luminance measurement approximately matches the vertical resolution of a 1080 interlaced image when the Kell factor and interlace factor are taken into account. 1125 lines was selected as a compromise: a line count between those of NTSC and PAL, which was then doubled. MUSE employs time-compression integration (TCI), another term for time-division multiplexing, which is used to carry luminance, chrominance, PCM audio and sync signals on one carrier signal/in one carrier frequency. However, TCI achieves multiplexing by compression of the contents in the time dimension; in other words, it transmits frames of video that are divided into regions, with chrominance compressed into the left of the frame and luminance compressed into the right of the frame, which must then be expanded and layered to create a visible image. This makes it different from NTSC, which carries luminance, audio and chrominance simultaneously on several carrier frequencies. Hi-Vision signals are analog component video signals with 3 channels, which were RGB initially and later YPbPr. The Hi-Vision standard aims to work with both RGB and YPbPr signals. 
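As a quick check of the resolution figures above, this sketch applies the 29.9 lines-per-MHz rule of thumb quoted in the text (the function and example values simply restate numbers from this article):

```python
def lines_of_resolution(bandwidth_mhz: float, lines_per_mhz: float = 29.9) -> float:
    """Approximate lines of horizontal resolution per picture height
    from video bandwidth, using the HD rule of thumb quoted above."""
    return bandwidth_mhz * lines_per_mhz

print(lines_of_resolution(20.0))  # 20 MHz luminance -> 598 lines
print(lines_of_resolution(7.0))   # ~7 MHz chrominance -> ~209 lines
```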
Key features of the MUSE system: Scanlines (total/active): 1,125/1,035; Pixels per line (fully interpolated): 1122 (still image)/748 (moving); Reference clock periods: 1920 per active line; Interlaced ratio: 2:1; Aspect ratio: 16:9; Refresh rate: 59.94 or 60 fields per second; Sampling frequency for broadcast: 16.2 MHz; Vector motion compensation: horizontal ±16 samples (32.4 MHz clock) per frame, vertical ±3 lines per field; Audio: "DANCE" discrete 2- or 4-channel digital audio system: 48 kHz/16-bit DPCM (2-channel stereo: 2 front channels) or 32 kHz/12-bit DPCM (4-channel surround: 3 front channels + 1 back channel); Audio compression format: DPCM with quasi-instantaneous companding; Required bandwidth: 27 MHz (usable bandwidth is 1/3 of this, 9 MHz, due to the use of FM modulation for transmission). Colorimetry The MUSE luminance signal encodes luminance $Y$, specified as the following mix of the original RGB color channels: $Y = 0.212R + 0.701G + 0.087B$. The chrominance signal encodes the $(B - Y)$ and $(R - Y)$ difference signals. By using these three signals ($Y$, $(B - Y)$ and $(R - Y)$), a MUSE receiver can retrieve the original RGB color components using the following matrix: $R = Y + (R - Y)$; $G = Y - 0.302(R - Y) - 0.124(B - Y)$; $B = Y + (B - Y)$. The system used a colorimetry matrix specified by SMPTE 240M (with coefficients corresponding to the SMPTE RP 145 primaries, also known as SMPTE-C, in use at the time the standard was created). The chromaticity of the primary colors and white point are: red $x = 0.630$, $y = 0.340$; green $x = 0.310$, $y = 0.595$; blue $x = 0.155$, $y = 0.070$; white point (D65) $x = 0.3127$, $y = 0.3290$. The luma ($Y$) function is specified as: $Y = 0.212R + 0.701G + 0.087B$. The blue color difference $(B - Y)$ is amplitude-scaled ($P_B$), according to: $P_B = (B - Y)/1.826$. The red color difference $(R - Y)$ is amplitude-scaled ($P_R$), according to: $P_R = (R - Y)/1.576$. Signal and Transmission MUSE is a 1125 line system (1035 visible), and is not pulse and sync compatible with the digital 1080 line system used by modern HDTV. Originally, it was a 1125 line, interlaced, 60 Hz system with a 5:3 (1.66:1) aspect ratio and an optimal viewing distance of roughly 3.3H. In 1989 this was changed to a 16:9 aspect ratio. For terrestrial MUSE transmission a bandwidth limited FM system was devised. A satellite transmission system uses uncompressed FM. Before MUSE compression, the Hi-Vision signal bandwidth is reduced from 30 MHz for luminance and chrominance to a pre-compression bandwidth of 20 MHz for luminance, while the pre-compression chrominance is carried on a 7.425 MHz carrier. The Japanese initially explored the idea of frequency modulation of a conventionally constructed composite signal. This would create a signal similar in structure to the composite video NTSC signal, with the luminance (Y) at the lower frequencies and the chrominance (C) above. Approximately 3 kW of power would be required in order to get 40 dB of signal to noise ratio for a composite FM signal in the 22 GHz band. This was incompatible with satellite broadcast techniques and bandwidth. To overcome this limitation, it was decided to use a separate transmission of Y and C. This reduces the effective frequency range and lowers the required power. Approximately 570 W (360 W for Y and 210 W for C) would be needed in order to get a 40 dB signal to noise ratio for a separate FM signal in the 22 GHz satellite band. This was feasible. There is one more power saving that appears from the character of the human eye. The lack of visual response to low frequency noise allows significant reduction in transponder power if the higher video frequencies are emphasized prior to modulation at the transmitter and then de-emphasized at the receiver. This method was adopted, with crossover frequencies for the emphasis/de-emphasis at 5.2 MHz for Y and 1.6 MHz for C. With this in place, the power requirements drop to 260 W (190 W for Y and 69 W for C). 
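To make the matrix arithmetic above concrete, here is a small Python sketch of the SMPTE 240M-style encode and decode; the function names are invented for this example, while the coefficients are the 240M values given in the text.

```python
def rgb_to_ypbpr(r: float, g: float, b: float) -> tuple:
    """SMPTE 240M-style encoding: luma plus scaled colour differences."""
    y = 0.212 * r + 0.701 * g + 0.087 * b
    pb = (b - y) / 1.826   # blue colour difference, amplitude-scaled
    pr = (r - y) / 1.576   # red colour difference, amplitude-scaled
    return y, pb, pr

def ypbpr_to_rgb(y: float, pb: float, pr: float) -> tuple:
    """Inverse matrix: a receiver recovers RGB from Y, Pb, Pr."""
    b = y + 1.826 * pb
    r = y + 1.576 * pr
    g = (y - 0.212 * r - 0.087 * b) / 0.701
    return r, g, b

# A round trip on an arbitrary colour recovers the original values:
print(ypbpr_to_rgb(*rgb_to_ypbpr(1.0, 0.5, 0.25)))
```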
Sampling systems and ratios The subsampling in a video system is usually expressed as a three part ratio. The three terms of the ratio are: the number of brightness (luma) samples, followed by the number of samples of each of the two color (chroma) difference components, $(B - Y)$ and $(R - Y)$, for each complete sample area. Traditionally the value for brightness is always 4, with the rest of the values scaled accordingly. A sampling of 4:4:4 indicates that all three components are fully sampled. A sampling of 4:2:2, for example, indicates that the two chroma components are sampled at half the horizontal sample rate of luma - the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third. MUSE implements a similar system as a means of reducing bandwidth, but instead of static sampling, the actual ratio varies according to the amount of motion on the screen. In practice, MUSE sampling will vary from approximately 4:2:1 to 4:0.5:0.25, depending on the amount of movement. Thus the red-green chroma component $(R - Y)$ has between one-half and one-eighth the sampling resolution of the luma component ($Y$), and the blue-yellow chroma $(B - Y)$ has half the resolution of red-green. 
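The J:a:b notation above maps to relative bandwidth in a mechanical way. This sketch computes the uncompressed-bandwidth fraction for a static sampling ratio; it illustrates the notation only, since MUSE itself varied the effective ratio per pixel with motion.

```python
def bandwidth_fraction(j: int, a: float, b: float) -> float:
    """Fraction of full 4:4:4 bandwidth for a J:a:b subsampling ratio.

    In a J-wide, 2-high block there are J*2 luma samples, plus (a + b)
    samples for each of the two chroma components.
    """
    samples = j * 2 + 2 * (a + b)   # samples actually transmitted
    full = j * 2 * 3                # 4:4:4 keeps all three channels
    return samples / full

print(bandwidth_fraction(4, 4, 4))  # 1.0   (no subsampling)
print(bandwidth_fraction(4, 2, 2))  # 0.667 (reduced by one-third)
print(bandwidth_fraction(4, 2, 0))  # 0.5   (4:2:0 halves the bandwidth)
```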
Audio subsystem MUSE had a discrete 2- or 4-channel digital audio system called "DANCE", which stood for Digital Audio Near-instantaneous Compression and Expansion. It used differential audio transmission (differential pulse-code modulation) that was not psychoacoustics-based like MPEG-1 Layer II. It used a fixed transmission rate of 1350 kbit/s. Like the PAL NICAM stereo system, it used near-instantaneous companding (as opposed to syllabic companding, as the dbx system uses) and non-linear 13-bit digital encoding at a 32 kHz sample rate. It could also operate in a 48 kHz 16-bit mode. The DANCE system was well documented in numerous NHK technical papers and in an NHK-published book issued in the USA called Hi-Vision Technology. The DANCE audio codec was superseded by Dolby AC-3 (a.k.a. Dolby Digital), DTS Coherent Acoustics (a.k.a. DTS Zeta 6x20 or ARTEC), MPEG-1 Layer III (a.k.a. MP3), MPEG-2 Layer I, MPEG-4 AAC and many other audio coders. The methods of this codec are described in an IEEE paper. Real world performance issues Unlike traditional interlaced video, where interlacing is done on a line-by-line basis (showing either odd or even lines of video at any one time, so that 2 fields of video complete a video frame), MUSE used a four-field dot-interlacing cycle, meaning it took four fields to complete a single MUSE frame. Dot interlacing is interlacing done on a pixel-by-pixel basis, halving both horizontal and vertical resolution to create each field of video, rather than line by line as in traditional interlaced video, which reduces only the vertical resolution per field. Thus, in MUSE, only stationary images were transmitted at full resolution. However, as MUSE lowers the horizontal and vertical resolution of material that varies greatly from frame to frame, moving images were blurred. Because MUSE used motion-compensation, whole camera pans maintained full resolution, but individual moving elements could be reduced to only a quarter of the full frame resolution. Because the mix between motion and non-motion was encoded on a pixel-by-pixel basis, it was not as visible as one might expect. Later, NHK came up with backwards compatible methods of MUSE encoding/decoding that greatly increased resolution in moving areas of the image as well as increasing the chroma resolution during motion. This so-called MUSE-III system was used for broadcasts starting in 1995 and a very few of the last Hi-Vision MUSE LaserDiscs used it (A River Runs Through It is one Hi-Vision LD that used it). During early demonstrations of the MUSE system, complaints were common about the decoder's large size, which led to the creation of a miniaturized decoder. Shadows and multipath still plague this analog frequency modulated transmission mode. Japan has since switched to a digital HDTV system based on ISDB, but the original MUSE-based BS Satellite channel 9 (NHK BS Hi-vision) was broadcast until September 30, 2007. Cultural and geopolitical impacts Internal reasons inside Japan that led to the creation of Hi-Vision: (1940s): The NTSC standard (as a 525 line monochrome system) was imposed by the US occupation forces. (1950s-1960s): Unlike Canada (which could have switched to PAL), Japan was stuck with the US TV transmission standard regardless of circumstances. (1960s-1970s): By the late 1960s many parts of the modern Japanese electronics industry had gotten their start by fixing the transmission and storage problems inherent in NTSC's design. (1970s-1980s): By the 1980s there was spare engineering talent available in Japan that could design a better television system. MUSE, as the US public came to know it, was initially covered in the magazine Popular Science in the mid-1980s. The US television networks did not provide much coverage of MUSE until the late 1980s, as there were few public demonstrations of the system outside Japan. Because Japan had its own domestic frequency allocation tables (that were more open to the deployment of MUSE) it became possible for this television system to be transmitted by Ku Band satellite technology by the end of the 1980s. The US FCC in the late 1980s began to issue directives that would allow MUSE to be tested in the US, providing it could be fit into a 6 MHz System-M channel. The Europeans (in the form of the European Broadcasting Union (EBU)) were impressed with MUSE, but could never adopt it because it is a 60 Hz TV system, not a 50 Hz system as is standard in Europe and the rest of the world (outside the Americas and Japan). The EBU development and deployment of B-MAC, D-MAC and much later on HD-MAC were made possible by Hi-Vision's technical success. In many ways MAC transmission systems are better than MUSE because of the total separation of colour from brightness in the time domain within the MAC signal structure. Like Hi-Vision, HD-MAC could not be transmitted in 8 MHz channels without substantial modification – and a severe loss of quality and frame rate. A 6 MHz version of Hi-Vision was experimented with in the US, but it too had severe quality problems, so the FCC never fully sanctioned its use as a domestic terrestrial television transmission standard. The US ATSC working group that had led to the creation of NTSC in the 1950s was reactivated in the early 1990s because of Hi-Vision's success. Many aspects of the DVB standard are based on work done by the ATSC working group; however, most of the impact is in support for 60 Hz (as well as 24 Hz for film transmission) and uniform sampling rates and interoperable screen sizes. Device support for Hi-Vision Hi-Vision LaserDiscs On May 20, 1994, Panasonic released the first MUSE LaserDisc player. 
There were a number of MUSE LaserDisc players available in Japan: Pioneer HLD-XØ, HLD-X9, HLD-1000, HLD-V500 and HLD-V700; and Sony HIL-1000, HIL-C1 and HIL-C2EX, the last two of which have OEM versions made by Panasonic, the LX-HD10 and LX-HD20. These players also supported standard NTSC LaserDiscs. Hi-Vision LaserDiscs are extremely rare and expensive. The HDL-5800 Video Disc Recorder recorded both high-definition still images and continuous video onto an optical disc and was part of the early analog wideband Sony HDVS high-definition video system, which supported the MUSE system. It was capable of recording HD still images and video onto either the WHD-3AL0 or the WHD-33A0 optical disc: the WHD-3AL0 in CLV mode (up to 10 minutes of video or 18,000 still frames per side), and the WHD-33A0 in CAV mode (up to 3 minutes of video or 5,400 still frames per side). These video discs were used for short video content such as advertisements and product demonstrations. The HDL-2000 was a full-band high-definition video disc player. Reel to reel VTRs Analog VTRs For recording Hi-Vision signals, three reel-to-reel analog VTRs were released: the Sony HDV-1000 (part of its HDVS line), the NEC TT 8-1000 and the Toshiba TVR-1000. These analog VTRs have a head drum angular speed of 3600 RPM and are similar to Type C VTRs. They output a video bandwidth of 30 MHz for luma and 7 MHz for each of the two chroma channels, with a signal-to-noise ratio of 41 dB, and they accept luma and chroma signals with video bandwidths of up to 30 MHz each. Video bandwidth is measured before FM modulation. Signals are recorded onto the tape using FM modulation. Linear tape speed is 483.1 mm/s and writing speed at the heads is 25.9 m/s. The head drum is 134.5 mm wide and has 4 video record heads, 4 video playback heads and 1 video erasing head. These machines could record for 45 minutes on 10.5-inch reels. Unlike conventional Type C VTRs, they are incapable of showing images while paused or while playing the tape at low speeds. However, they may be equipped with a frame store to capture images and display them while fast-forwarding or rewinding the tape. The video heads are made of Mn-Zn ferrite material; those used for recording have a gap of 0.7 microns and a width of 80 microns, and those for playback have a gap of 0.35 microns and a width of 70 microns. Audio is recorded on 3 linear tracks, and control signals on another linear track. Unlike conventional Type C videotape recorders, vertical blanking intervals are not recorded on the tape. Helical tracks carry groups of 4 signals or channels, arranged side by side and lengthwise: red chrominance, blue chrominance, and two channels of green chrominance with luminance information. Two channels for green chrominance plus luminance are used to increase the bandwidth of these signals that can be recorded on the tape. Each of these 4 signals has a video bandwidth of 10 MHz. The VTRs use cobalt-doped iron oxide tape for high coercivity, with capacity for 40 MHz of bandwidth at a head drum speed of 3600 RPM, which is sufficient for applying FM modulation to 10 MHz signals. To record 4 channels simultaneously in a single helical track, a separate, independent video head is required for each channel; the 4 video heads are grouped together to write a single helical track carrying 4 channels. Digital VTRs In 1987, technical standards for digital recording of Hi-Vision signals were released by NHK; Sony developed the HDD-1000 VTR as part of its HDVS line, and Hitachi developed the HV-1200 digital reel-to-reel VTR.
Audio is recorded digitally, in a manner similar to a DASH (Digital Audio Stationary Head) digital audio recorder, but several changes were made to synchronize the audio to the video. These digital VTRs can record 8 channels of digital audio on linear tracks (running horizontally along the entire length of the tape). According to the standards, these VTRs operate with a head drum speed of 7200 RPM to accommodate the higher signal bandwidths of digital modulation on the tape, which is also helped by the use of metal-alloy particle tape. They have a bit rate of 148.5 Mbit/s per video head, a linear tape speed of 805.2 mm/s and a writing speed at the heads of 51.5 m/s. They are similar to Type C VTRs and have a head drum 135 mm wide carrying 8 video playback, 8 video recording and 2 video erase heads, writing 37-micron-wide helical tracks. Output signal bandwidth is 30 MHz of video bandwidth for luma (Y) and 15 MHz of video bandwidth for chroma (Pb, Pr). Audio is recorded at a 48 kHz sampling rate, stored at 16 bits per sample, in linear tape tracks; the video sampling rate is 74.25 MHz for luma and 37.125 MHz for chroma, stored at 8 bits per sample. Signal-to-noise ratio is 56 dB for both chroma and luma. Video fields are divided into 16 helical tracks on the tape. Total video data rate is 1.188 Gbit/s. Cue signals are recorded onto 3 linear tape tracks. Video is recorded in groups of 4 tracks or channels, side by side lengthwise within each helical track, to allow for parallelization: high total data rates with relatively low data rates per head, and a reduced linear tape speed. Digital video signals are recorded line by line (1 row of pixels in every frame of video, or 1 line of video at a time) with ECC (error-correcting code) at the end of each line and between groups of vertical lines. A Reed–Solomon code is used for the ECC, and each line also carries an ID number for trick play such as slow motion and picture search/shuttle. Displays Hi-Vision requires a display capable of handling 30 MHz of video bandwidth simultaneously for each of the component video channels: R, G, B or Y, Pb and Pr. It was displayed on direct-view color CRTs and CRT projectors, and plasma displays and Talaria projectors were explored to determine their ability to display Hi-Vision images. Some TVs have built-in MUSE decoders. Cameras Cameras based on Saticon tubes, Plumbicon tubes, Harpicon tubes and CCD image sensors were used to capture footage in the Hi-Vision format. A prototype based on Vidicon tubes was also created. MUSE decoders A MUSE decoder is required for receiving MUSE broadcasts from satellites and for viewing content in the MUSE format. The decoder converts MUSE-format signals into Hi-Vision component video signals that can then be shown on a display. Video cassettes W-VHS allowed home recording of Hi-Vision programmes. UniHi For recording Hi-Vision video signals, NHK and 10 Japanese companies ("NEC, Matsushita Electric Industrial, Toshiba, Sharp, Sony, Hitachi, Sanyo Electric, JVC, Mitsubishi Electric, Canon") in 1989 released UniHi, a professional videocassette format. Recorders for the format were manufactured by Panasonic, Sony, NEC, and Toshiba. These machines were less expensive than their Type C counterparts. Both studio and portable versions were made. The head drum spins at 5400 RPM and uses tape that is 12.65 mm wide. It has a luminance (Y) bandwidth of 20 MHz and a chrominance (Pb, Pr) bandwidth of 7 MHz for video output. Video is recorded in analog form. The head drum is 76 mm wide.
It uses two video heads with azimuth recording and records each frame of video into 12 helical tracks; only 6 tracks are needed per field when recording interlaced video. Audio is recorded digitally as a PCM signal in a section of the helical tracks. Writing speed at the heads is 21.4 m/s. The tape also has 3 linear tracks, one each for audio, control and time code. Signal-to-noise ratio is 41 dB for luminance and 47 dB for chrominance. The tape is wrapped 180° around the head drum. Development began in 1987. It uses metal particle tape. It could record video for just over an hour (63 minutes). Linear tape speed is 120 mm/s. The cassette measures 205 mm (width) × 121 mm (depth) × 25 mm (height). Signals are recorded using time-compression integration, in groups of two signals lengthwise on each helical track. Grouping is used to increase the bandwidth that can be recorded on the tape. The cassette is intended to be airtight, with two flaps over the cassette's opening to protect the tape. This videocassette format was developed in order to reduce the size of HD recording equipment. The Sony version of the UniHi VTR, the HDV-10, was priced at over 90,000 US dollars. See also Analog high-definition television system HD-MAC, a planned high-definition analog video standard in Europe Clear-Vision The analog TV systems these systems were meant to replace: SECAM NTSC PAL Related standards: NICAM-like audio coding is used in the HD-MAC system. Chroma subsampling in TV indicated as 4:2:2, 4:1:1 etc... ISDB References External links Example of an early MUSE LaserDisc player Hi-Vision, MUSE, and the Optical Disc Television technology High-definition television Japanese inventions ISDB
Multiple sub-Nyquist sampling encoding
[ "Technology" ]
5,753
[ "Information and communications technology", "Television technology" ]
7,300,596
https://en.wikipedia.org/wiki/CrystEngCommunity
CrystEngCommunity is a virtual web community for people working in the field of crystal engineering. The website is owned by the Royal Society of Chemistry (RSC). CrystEngCommunity provides links to the main international research groups working in crystal engineering; occasional profiles (interviews) of crystal engineers; a conference diary that lists and links to international events in the field; and a terminology wiki, CrystEngWiki, for crystal engineering. The community also carries links to research articles on crystal engineering, including CrystEngSelects (a selection of recent articles of interest to crystal engineers from across the RSC journals Chemical Communications, CrystEngComm, Dalton Transactions, Journal of Materials Chemistry, New Journal of Chemistry and Organic & Biomolecular Chemistry); links to special CrystEngComm Discussion conference issues; and links to past crystal engineering articles from the RSC Journals Archive. Other useful features include downloadable wallpapers for PC desktops, book reviews and a compilation of useful weblinks for crystal engineers. The community has a particularly close association with the RSC's crystal engineering journal, CrystEngComm. See also CrystEngComm Dalton Transactions References External links CrystEngCommunity CrystEngWiki Royal Society of Chemistry Crystal engineering British science websites
CrystEngCommunity
[ "Chemistry" ]
274
[ "Royal Society of Chemistry" ]
7,300,829
https://en.wikipedia.org/wiki/Pi%20Josephson%20junction
A Josephson junction (JJ) is a quantum mechanical device which is made of two superconducting electrodes separated by a barrier (thin insulating tunnel barrier, normal metal, semiconductor, ferromagnet, etc.). A π Josephson junction is a Josephson junction in which the Josephson phase φ equals π in the ground state, i.e. when no external current or magnetic field is applied. Background The supercurrent Is through a Josephson junction is generally given by Is = Ic sin(φ), where φ is the phase difference of the superconducting wave functions of the two electrodes, i.e. the Josephson phase. The critical current Ic is the maximum supercurrent that can exist through the Josephson junction. In experiment, one usually passes some current through the Josephson junction and the junction reacts by changing the Josephson phase. From the above formula it is clear that the phase φ = arcsin(I/Ic), where I is the applied (super)current. Since the phase is 2π-periodic, i.e. φ and φ + 2πn are physically equivalent, without losing generality the discussion below refers to the interval 0 ≤ φ < 2π. When no current (I = 0) exists through the Josephson junction, e.g. when the junction is disconnected, the junction is in the ground state and the Josephson phase across it is zero (φ = 0). The phase can also be φ = π, also resulting in no current through the junction. It turns out that the state with φ = π is unstable and corresponds to the Josephson energy maximum, while the state φ = 0 corresponds to the Josephson energy minimum and is a ground state. In certain cases, one may obtain a Josephson junction where the critical current is negative (Ic < 0). In this case, the first Josephson relation becomes Is = −|Ic| sin(φ) = |Ic| sin(φ + π). The ground state of such a Josephson junction is φ = π and corresponds to the Josephson energy minimum, while the conventional state φ = 0 is unstable and corresponds to the Josephson energy maximum. Such a Josephson junction, with φ = π in the ground state, is called a π Josephson junction. π Josephson junctions have quite unusual properties. For example, if one connects (shorts) the superconducting electrodes with an inductance L (e.g. a superconducting wire), one may expect a spontaneous supercurrent circulating in the loop, passing through the junction and through the inductance clockwise or counterclockwise. This supercurrent is spontaneous and belongs to the ground state of the system. The direction of its circulation is chosen at random. This supercurrent will of course induce a magnetic field which can be detected experimentally. The magnetic flux passing through the loop will have a value between 0 and half of the magnetic flux quantum, i.e. between 0 and Φ0/2, depending on the value of the inductance L. Technologies and physical principles Ferromagnetic Josephson junctions. Consider a Josephson junction with a ferromagnetic Josephson barrier, i.e. the multilayers superconductor-ferromagnet-superconductor (SFS) or superconductor-insulator-ferromagnet-superconductor (SIFS). In such structures the superconducting order parameter inside the F-layer oscillates in the direction perpendicular to the junction plane. As a result, for certain thicknesses of the F-layer and temperatures, the sign of the order parameter may be +1 at one superconducting electrode and −1 at the other superconducting electrode. In this situation one gets a π Josephson junction. Note that inside the F-layer a competition of different solutions takes place, and the one with the lower energy wins out.
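To make the energy argument concrete, here is a small numerical sketch (an illustration under the standard Josephson energy relation, not drawn from this article's references): the junction's potential energy is proportional to Ic(1 − cos φ), so flipping the sign of Ic moves the minimum from φ = 0 to φ = π.

```python
import numpy as np

# Minimal sketch: Josephson potential energy, up to a positive constant,
# is U(phi) = Ic * (1 - cos(phi)). For Ic > 0 the minimum is at phi = 0;
# for a pi junction (Ic < 0) the minimum moves to phi = pi.
def josephson_energy(phi, ic):
    return ic * (1.0 - np.cos(phi))

phi = np.linspace(0.0, 2.0 * np.pi, 10001)
for ic in (+1.0, -1.0):  # conventional junction vs. pi junction
    ground = phi[np.argmin(josephson_energy(phi, ic))]
    print(f"Ic = {ic:+.0f}: ground-state phase = {ground:.3f} rad")
# Ic = +1: ground-state phase = 0.000 rad
# Ic = -1: ground-state phase = 3.142 rad
```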
Various ferromagnetic π junctions have been fabricated: SFS junctions with weak ferromagnetic interlayers; SFS junctions with strong ferromagnetic interlayers, such as Co, Ni, PdFe and NiFe; SIFS junctions; and S-Fi-S junctions. Josephson junctions with unconventional order parameter symmetry. Novel superconductors, notably high-temperature cuprate superconductors, have an anisotropic superconducting order parameter which can change its sign depending on the direction. In particular, a so-called d-wave order parameter has a value of +1 if one looks along the crystal axis a and −1 if one looks along the crystal axis b. If one looks along the ab direction (45° between a and b) the order parameter vanishes. By making Josephson junctions between d-wave superconducting films with different orientations, or between d-wave and conventional isotropic s-wave superconductors, one can get a phase shift of π. Nowadays there are several realizations of π Josephson junctions of this type: tri-crystal grain boundary Josephson junctions, tetra-crystal grain boundary Josephson junctions, d-wave/s-wave ramp zigzag Josephson junctions, tilt-twist grain boundary Josephson junctions, and p-wave based Josephson junctions. Superconductor–normal metal–superconductor (SNS) Josephson junctions with a non-equilibrium electron distribution in the N-layer. Superconductor–quantum dot–superconductor (S-QuDot-S) Josephson junctions (implemented with carbon nanotube Josephson junctions). Historical developments Theoretically, the possibility of creating a π Josephson junction was first discussed by Bulaevskii et al., who considered a Josephson junction with paramagnetic scattering in the barrier. Almost one decade later, the possibility of having a π Josephson junction was discussed in the context of heavy fermion p-wave superconductors. Experimentally, the first π Josephson junction was a corner junction made of yttrium barium copper oxide (d-wave) and Pb (s-wave) superconductors. The first unambiguous proof of a π Josephson junction with a ferromagnetic barrier was given only a decade later. That work used a weak ferromagnet consisting of a copper-nickel alloy (CuxNi1−x, with x around 0.5), optimized so that the Curie temperature was close to the superconducting transition temperature of the superconducting niobium leads. See also Josephson effect φ Josephson junction Semifluxon Fractional vortices Brian D. Josephson References Superconductivity Josephson effect
Pi Josephson junction
[ "Physics", "Materials_science", "Engineering" ]
1,369
[ "Josephson effect", "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
7,300,967
https://en.wikipedia.org/wiki/Automatically%20Tuned%20Linear%20Algebra%20Software
Automatically Tuned Linear Algebra Software (ATLAS) is a software library for linear algebra. It provides a mature open source implementation of BLAS APIs for C and FORTRAN 77. ATLAS is often recommended as a way to automatically generate an optimized BLAS library. While its performance often trails that of specialized libraries written for one specific hardware platform, it is often the first or even only optimized BLAS implementation available on new systems and is a large improvement over the generic BLAS available at Netlib. For this reason, ATLAS is sometimes used as a performance baseline for comparison with other products. ATLAS runs on most Unix-like operating systems and on Microsoft Windows (using Cygwin). It is released under a BSD-style license without advertising clause, and many well-known mathematics applications including MATLAB, Mathematica, Scilab, SageMath, and some builds of GNU Octave may use it. Functionality ATLAS provides a full implementation of the BLAS APIs as well as some additional functions from LAPACK, a higher-level library built on top of BLAS. In BLAS, functionality is divided into three groups called levels 1, 2 and 3. Level 1 contains vector operations of the form y ← αx + y, as well as scalar dot products and vector norms, among other things. Level 2 contains matrix-vector operations of the form y ← αAx + βy, as well as solving Tx = y for x with T being triangular, among other things. Level 3 contains matrix-matrix operations of the form C ← αAB + βC, such as the widely used General Matrix Multiply (GEMM) operation, as well as solving B ← αT−1B for triangular matrices T, among other things. Optimization approach The optimization approach is called Automated Empirical Optimization of Software (AEOS), which identifies four fundamental approaches to computer assisted optimization, of which ATLAS employs three: Parameterization—searching over the parameter space of a function, used for blocking factor, cache edge, etc. Multiple implementation—searching through various approaches to implementing the same function, e.g., for SSE support before intrinsics made them available in C code. Code generation—programs that write programs incorporating what knowledge they can about what will produce the best performance for the system. Optimization of the level 1 BLAS uses parameterization and multiple implementation. Every ATLAS level 1 BLAS function has its own kernel. Since it would be difficult to maintain thousands of cases in ATLAS, there is little architecture-specific optimization for Level 1 BLAS. Instead, multiple implementation is relied upon to allow compiler optimization to produce a high-performance implementation for the system. Optimization of the level 2 BLAS uses parameterization and multiple implementation. With n^2 data and n^2 operations to perform, the function is usually limited by bandwidth to memory, and thus there is not much opportunity for optimization. All routines in the ATLAS level 2 BLAS are built from two Level 2 BLAS kernels: GEMV—matrix by vector multiply update: y ← αAx + βy; and GER—general rank-1 update from an outer product: A ← αxy^T + A. Optimization of the level 3 BLAS uses code generation and the other two techniques. Since we have n^3 ops with only n^2 data, there are many opportunities for optimization. Level 3 BLAS Most of the Level 3 BLAS is derived from GEMM, so that is the primary focus of the optimization. Operations vs. data The intuition that the operations will dominate over the data accesses only works for roughly square matrices. The real measure should be some kind of surface-area-to-volume ratio.
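The following toy NumPy sketch illustrates the blocking idea that falls out of this surface-to-volume view. It is purely illustrative: ATLAS itself generates tuned C kernels, and the blocking factor NB here is an arbitrary stand-in for the value ATLAS searches for empirically.

```python
import numpy as np

# Illustrative sketch of cache blocking for GEMM: compute C += A @ B one
# NB x NB block at a time, so each block is reused while cache-resident.
# NB stands in for the blocking factor that ATLAS tunes at install time.
def blocked_gemm(A, B, C, NB=64):
    n, k = A.shape
    _, m = B.shape
    for i in range(0, n, NB):
        for j in range(0, m, NB):
            for p in range(0, k, NB):
                C[i:i+NB, j:j+NB] += A[i:i+NB, p:p+NB] @ B[p:p+NB, j:j+NB]
    return C

rng = np.random.default_rng(0)
A, B = rng.random((300, 200)), rng.random((200, 250))  # non-square on purpose
C = blocked_gemm(A, B, np.zeros((300, 250)))
assert np.allclose(C, A @ B)  # same result as an unblocked multiply
```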
This surface-to-volume distinction becomes important for very non-square matrices. Can it afford to copy? Copying the inputs allows the data to be arranged in a way that provides optimal access for the kernel functions, but this comes at the cost of allocating temporary space and an extra read and write of the inputs. So the first question GEMM faces is, can it afford to copy the inputs? If so, it will: put the inputs into block-major format with good alignment; take advantage of user-contributed kernels and cleanup; handle the transpose cases with the copy, making everything into TN (transpose - no-transpose); and deal with α in the copy. If not, it will: use the no-copy version; make no assumptions about the stride of matrices A and B in memory; handle all transpose cases explicitly; make no guarantee about the alignment of data; support α-specific code; and run the risk of TLB issues, bad strides, etc. The actual decision is made through a simple heuristic which checks for "skinny cases". Cache edge For second-level cache blocking a single cache edge parameter is used. The high level chooses an order to traverse the blocks: ijk, jik, ikj, jki, kij, kji. This need not be the same order in which the product is done within a block. Typically chosen orders are ijk or jik. For jik the ideal situation would be to copy A and the NB-wide panel of B. For ijk, the roles of A and B are swapped. Choosing the bigger of M or N for the outer loop reduces the footprint of the copy. But for large K, ATLAS does not even allocate such a large amount of memory. Instead it defines a parameter, Kp, to give the best use of the L2 cache, and panels are limited to Kp in length. It first tries a larger workspace allocation (in the jik case); if that fails, it tries a smaller one. (If that also fails, it uses the no-copy version of GEMM, but this case is unlikely for reasonable choices of cache edge.) Kp is a function of cache edge and NB. LAPACK When integrating the ATLAS BLAS with LAPACK, an important consideration is the choice of blocking factor for LAPACK. If the ATLAS blocking factor is small enough, the blocking factor of LAPACK could be set to match that of ATLAS. To take advantage of recursive factorization, ATLAS provides replacement routines for some LAPACK routines. These simply overwrite the corresponding LAPACK routines from Netlib. Need for installation Installing ATLAS on a particular platform is a challenging process which is typically done by a system vendor or a local expert and made available to a wider audience. For many systems, architectural default parameters are available; these are essentially saved searches plus the results of hand tuning. If the arch defaults work, they will likely get 10-15% better performance than the install search. On such systems the installation process is greatly simplified. References External links User contribution to ATLAS A Collaborative guide to ATLAS Development The FAQ has links to the Quick reference guide to BLAS and Quick reference to ATLAS LAPACK API reference Microsoft Visual C++ Howto for ATLAS C (programming language) libraries Fortran libraries Numerical linear algebra Numerical software Software using the BSD license
Automatically Tuned Linear Algebra Software
[ "Mathematics" ]
1,318
[ "Numerical software", "Mathematical software" ]
7,301,270
https://en.wikipedia.org/wiki/Deoxygenation
Deoxygenation is a chemical reaction involving the removal of oxygen atoms from a molecule. The term also refers to the removal of molecular oxygen (O2) from gases and solvents, a step in air-free technique and gas purifiers. As applied to organic compounds, deoxygenation is a component of fuels production as well as a type of reaction employed in organic synthesis, e.g. of pharmaceuticals. Deoxygenation of C-O bonds With replacement by H2 The main examples involving the replacement of an oxo group by two hydrogen atoms (A=O → AH2) are hydrogenolysis reactions. Typical examples use metal catalysts and H2 as the reagent. Conditions are typically more forcing than for hydrogenation. Stoichiometric reactions that effect deoxygenation include the Wolff–Kishner reduction for aryl ketones. The replacement of a hydroxyl group by hydrogen (A-OH → A-H) is the point of the Barton–McCombie deoxygenation and the Markó–Lam deoxygenation. Biomass valorization Deoxygenation is an important goal of the conversion of biomass to useful fuels and chemicals. Partial deoxygenation is effected by dehydration and decarboxylation. Other routes Oxygen groups can also be removed by the reductive coupling of ketones, as illustrated by the McMurry reaction. Epoxides can be deoxygenated using the oxophilic reagent produced by combining tungsten hexachloride and n-butyllithium, which generates the alkene. This reaction can proceed with loss or retention of configuration. Deoxygenation of N-O, P-O and S-O bonds N=O bonds Nitroaromatics are deoxygenated by strongly reducing silyl reagents such as N,N'-bis(trimethylsilyl)-4,4'-bipyridinylidene. P=O bonds Phosphorus occurs in nature as oxides, so producing the elemental form requires deoxygenation. The main method involves carbothermic reduction (i.e., carbon is the deoxygenating agent): 4 Ca5(PO4)3F + 18 SiO2 + 30 C → 3 P4 + 30 CO + 18 CaSiO3 + 2 CaF2. Oxophilic main group compounds are useful reagents for certain deoxygenations conducted on laboratory scale. The highly oxophilic reagent hexachlorodisilane (Si2Cl6) stereospecifically deoxygenates phosphine oxides. S=O bonds A chemical reagent for the deoxygenation of many sulfur and nitrogen oxo compounds is the combination trifluoroacetic anhydride/sodium iodide. For example, the sulfoxide diphenylsulfoxide is deoxygenated to the sulfide diphenylsulfide. The reaction mechanism is based on the activation of the sulfoxide by a trifluoroacetyl group and oxidation of iodide. Iodine is formed quantitatively in this reaction and therefore the reagent is used for the analytical detection of many oxo compounds. See also Degassing Preparation of stable carbenes Ocean deoxygenation Oxophilicity References Organic redox reactions Gas technologies
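As a quick aside, the carbothermic reduction of fluorapatite shown above can be checked for balance mechanically; the following sketch (illustrative only, with hand-entered formula compositions) counts each element on both sides:

```python
from collections import Counter

# Verify that the carbothermic reduction above is balanced by counting
# atoms of each element on each side of the equation.
def count_atoms(side):
    total = Counter()
    for coeff, atoms in side:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

left = [(4, {"Ca": 5, "P": 3, "O": 12, "F": 1}),  # Ca5(PO4)3F
        (18, {"Si": 1, "O": 2}),                   # SiO2
        (30, {"C": 1})]                            # C
right = [(3, {"P": 4}),                            # P4
         (30, {"C": 1, "O": 1}),                   # CO
         (18, {"Ca": 1, "Si": 1, "O": 3}),         # CaSiO3
         (2, {"Ca": 1, "F": 2})]                   # CaF2
assert count_atoms(left) == count_atoms(right)     # Ca 20, P 12, O 84, Si 18, C 30, F 4
```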
Deoxygenation
[ "Chemistry" ]
728
[ "Organic redox reactions", "Organic reactions" ]
7,301,525
https://en.wikipedia.org/wiki/Pugmill
A pugmill, pug mill, or commonly just pug, is a machine in which clay or other materials are extruded in a plastic state, or a similar machine for the trituration of ore. Industrial applications are found in pottery, bricks, cement and some parts of the concrete and asphalt mixing processes. A pugmill may be a fast continuous mixer. A continuous pugmill can achieve a thoroughly mixed, homogeneous mixture in a few seconds, and the right machines can be matched to the right application by taking into account the factors of agitation, drive assembly, inlet, discharge, cost and maintenance. Mixing materials at optimum moisture content requires the forced mixing action of the pugmill paddles, while soupy materials might be mixed in a drum mixer. A typical pugmill consists of a horizontal boxlike chamber with a top inlet and a bottom discharge at the other end, 2 shafts with opposing paddles, and a drive assembly. Some of the factors affecting mixing and residence time are the number and size of the paddles, paddle swing arc, overlap of left and right swing arcs, size of the mixing chamber, length of the pugmill floor, and the material being mixed. Common construction and industrial uses Road base – dense, well-graded aggregate, uniformly mixed, wetted, and densely compacted for building the foundation under a pavement. Lime addition to asphalt – lime may be added to the cold feed of an asphalt plant to strengthen the binding properties of the asphalt. Flyash conditioning – wetting fly ash in a pugmill to stabilize the ash so that it won't create dust. Some flyashes have cementitious properties when wetted and can be used to stabilize other materials. Waste stabilization – various waste streams are remediated with pugmills forcing the mixing of the wastes with remediation agents. Roller-compacted concrete (RCC), or rolled concrete – a special blend of concrete that has the same ingredients as conventional concrete but in different ratios. It has cement, water, and aggregates, but RCC is much drier and essentially has no slump. RCC is placed in a manner similar to paving, often by dump trucks or conveyors, spread by bulldozers or specially modified asphalt pavers. After placement it is compacted by vibratory rollers. The "stiff" nature of RCC may require a paddle-type pugmill to force the materials to mix completely and discharge easily. Ceramics pug mills, or commonly just "pugs", are not used to grind or mix; rather, they extrude clay bodies prior to shaping processes. Some can be fitted with a vacuum system that ensures the extruded clay bodies have no entrapped air. According to the 1913 edition of Webster's Dictionary, a clay pug mill typically consists of an upright shaft armed with projecting knives, which is caused to revolve in a hollow cylinder, tub, or vat, in which the clay body is placed.
Pugmill
[ "Physics", "Materials_science", "Engineering" ]
665
[ "Ceramic engineering", "Applied and interdisciplinary physics", "Materials science", "nan" ]
7,302,010
https://en.wikipedia.org/wiki/Receptor%E2%80%93ligand%20kinetics
In biochemistry, receptor–ligand kinetics is a branch of chemical kinetics in which the kinetic species are defined by different non-covalent bindings and/or conformations of the molecules involved, which are denoted as receptor(s) and ligand(s). Receptor–ligand binding kinetics also involves the on- and off-rates of binding. A main goal of receptor–ligand kinetics is to determine the concentrations of the various kinetic species (i.e., the states of the receptor and ligand) at all times, from a given set of initial concentrations and a given set of rate constants. In a few cases, an analytical solution of the rate equations may be determined, but this is relatively rare. However, most rate equations can be integrated numerically, or approximately, using the steady-state approximation. A less ambitious goal is to determine the final equilibrium concentrations of the kinetic species, which is adequate for the interpretation of equilibrium binding data. A converse goal of receptor–ligand kinetics is to estimate the rate constants and/or dissociation constants of the receptors and ligands from experimental kinetic or equilibrium data. The total concentrations of receptor and ligands are sometimes varied systematically to estimate these constants. Binding kinetics The binding constant is a special case of the equilibrium constant K. It is associated with the binding and unbinding reaction of receptor (R) and ligand (L) molecules, which is formalized as: {R} + {L} <=> {RL}. The reaction is characterized by the on-rate constant kon and the off-rate constant koff, which have units of 1/(concentration × time) and 1/time, respectively. In equilibrium, the forward binding transition {R} + {L} -> {RL} should be balanced by the backward unbinding transition {RL} -> {R} + {L}. That is, kon [R][L] = koff [RL], where [R], [L] and [RL] represent the concentration of unbound free receptors, the concentration of unbound free ligand and the concentration of receptor–ligand complexes. The binding constant, or the association constant, is defined by Ka = kon/koff = [RL]/([R][L]). Simplest case: single receptor and single ligand bind to form a complex The simplest example of receptor–ligand kinetics is that of a single ligand L binding to a single receptor R to form a single complex C: {R} + {L} <-> {C}. The equilibrium concentrations are related by the dissociation constant Kd = k−1/k1 = [R][L]/[C], where k1 and k−1 are the forward and backward rate constants, respectively. The total concentrations of receptor and ligand in the system are constant: Rtot = [R] + [C] and Ltot = [L] + [C]. Thus, only one concentration of the three ([R], [L] and [C]) is independent; the other two concentrations may be determined from Rtot, Ltot and the independent concentration. This system is one of the few systems whose kinetics can be determined analytically. Choosing [R] as the independent concentration and writing bare symbols for the concentrations for brevity (e.g., R for [R]), the kinetic rate equation can be written dR/dt = −k1 R L + k−1 C = −k1 R (Ltot − Rtot + R) + k−1 (Rtot − R). Dividing both sides by k1 and introducing the constant 2E = Rtot − Ltot − Kd, the rate equation becomes (1/k1) dR/dt = −R^2 + 2ER + Kd Rtot = −(R − R+)(R − R−), where the two equilibrium concentrations are given by the quadratic formula R± = E ± D, and D is defined as D = sqrt(E^2 + Rtot Kd). However, only the equilibrium concentration R+ = E + D is positive, corresponding to the equilibrium observed experimentally.
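Before completing the analytic route below, the rate equation can also be checked numerically. This sketch uses made-up rate constants and totals (illustrative values only, not drawn from any experiment):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate dR/dt = -k1*R*L + k_1*C with L and C eliminated via the
# conservation laws above. All numerical values are illustrative.
k1, k_1 = 1.0, 0.1          # on-rate (1/(conc*time)) and off-rate (1/time)
Rtot, Ltot = 1.0, 2.0       # total receptor and total ligand

def dR_dt(t, R):
    L = Ltot - Rtot + R     # free ligand from conservation
    C = Rtot - R            # complex from conservation
    return -k1 * R * L + k_1 * C

sol = solve_ivp(dR_dt, (0.0, 20.0), [Rtot])
E = 0.5 * (Rtot - Ltot - k_1 / k1)
D = np.sqrt(E**2 + Rtot * k_1 / k1)
print(sol.y[0, -1], E + D)  # the numerical endpoint approaches R_eq = E + D
```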
Separation of variables and a partial-fraction expansion yield the integrable ordinary differential equation dR / [(R − R+)(R − R−)] = −k1 dt, whose solution is (1/(2D)) ln[(R − R+)/(R − R−)] = −k1 t + constant or, equivalently, (R − R+)/(R − R−) = φ0 e^(−2 D k1 t), which covers both association and dissociation (the two cases differ only in the initial concentration R(0), i.e., in whether R approaches the equilibrium R+ from above or from below); solving for R gives R(t) = (R+ − R− φ0 e^(−2 D k1 t)) / (1 − φ0 e^(−2 D k1 t)), where the integration constant φ0 is defined φ0 = (R(0) − R+)/(R(0) − R−). From this solution, the corresponding solutions for the other concentrations C(t) and L(t) can be obtained. See also Binding potential Patlak plot Scatchard plot References Further reading D.A. Lauffenburger and J.J. Linderman (1993) Receptors: Models for Binding, Trafficking, and Signaling, Oxford University Press. (hardcover) and 0-19-510663-6 (paperback) Receptors Chemical kinetics
Receptor–ligand kinetics
[ "Chemistry" ]
828
[ "Receptors", "Chemical kinetics", "Chemical reaction engineering", "Signal transduction" ]
7,302,227
https://en.wikipedia.org/wiki/Iconophor
An iconophor, in graphic and print art, is a form of illustration (usually book illustration) characterized by the fact that the names of the objects represented in it start with a given letter. Its origins lie in medieval illuminations, though the abecedarian connection was usually coincidental in the Middle Ages. Etymologically, "iconophor" comes from Greek, meaning "bearer" or "transporter" (...phor) of an image (ikono...). It seems that the word was used formerly in Byzantine Christianity to denote an "icon bearer", in a direct or symbolic sense. The word iconophor was coined by Thora van Male, a French university professor; her book, Art Dico, presents a historical panorama of the use of such illustrations in French dictionaries. She is currently working on a website to facilitate the study of iconophors (ART DICO homepage). The principle of creating an art form around a letter and illustrating it with items whose names start with that letter is not a recent invention. Some of the artists who designed them are: 16th century: Hans Holbein (according to Anatole de Courde de Montaiglon, the letters M to Z of Holbein's Alphabet de la Mort are based on this principle); Paulini (certain passages of Ovid's Metamorphoses). 17th century: Claude Mellan (French); Giuseppe Maria Mitelli (Italian, Alfabeto in Sogno). The principle is also commonly used in ABC primers, and is often their very mainspring. In the late 17th century illustrations of this type started appearing in the ornamentation of French dictionaries. The earliest of these was César de Rochefort's Dictionnaire général et curieux (1685); the only iconophor in it is the ornamental initial letter at A, for Annunciation. The first lexicographical work to be ornamented with iconophors from A to Z was the supplement to Diderot and d'Alembert's Encyclopédie published in 1776. Congruent with the full title of the Encyclopédie itself (Encyclopédie ou dictionnaire raisonné des sciences, des arts et des métiers), many of the ornamental initial letters portray occupations; here, A for astronomer. When the illustrated press and illustrated books took off in western Europe in the 1830s, iconophoric ornamentation of dictionaries flourished as well. The first dictionary with ornamental head-pieces (as opposed to mere initial letters) from A to Z was Napoléon Landais' Dictionnaire général et grammatical (1834). These wood-engraved headpieces were produced in one of the large Parisian wood-engraving shops, Andrew, Best and Leloir. A for abondance, agriculture, aigle (eagle), alouettes (larks), amitié (friendship), ara (South American parrot), Arab, archer, arquebusier, astronomie, avare (scrooge), aveugle (blind person). Book design Illustration Typography
Iconophor
[ "Engineering" ]
665
[ "Book design", "Design" ]
7,302,228
https://en.wikipedia.org/wiki/Distributional%20semantics
Distributional semantics is a research area that develops and studies theories and methods for quantifying and categorizing semantic similarities between linguistic items based on their distributional properties in large samples of language data. The basic idea of distributional semantics can be summed up in the so-called distributional hypothesis: linguistic items with similar distributions have similar meanings. Distributional hypothesis The distributional hypothesis in linguistics is derived from the semantic theory of language usage, i.e. words that are used and occur in the same contexts tend to purport similar meanings. The underlying idea that "a word is characterized by the company it keeps" was popularized by Firth in the 1950s. The distributional hypothesis is the basis for statistical semantics. Although the distributional hypothesis originated in linguistics, it is now receiving attention in cognitive science, especially regarding the context of word use. In recent years, the distributional hypothesis has provided the basis for the theory of similarity-based generalization in language learning: the idea that children can figure out how to use words they have rarely encountered before by generalizing about their use from distributions of similar words. The distributional hypothesis suggests that the more semantically similar two words are, the more distributionally similar they will be in turn, and thus the more they will tend to occur in similar linguistic contexts. Whether or not this suggestion holds has significant implications for both the data-sparsity problem in computational modeling and the question of how children are able to learn language so rapidly given relatively impoverished input (this is also known as the problem of the poverty of the stimulus). Distributional semantic modeling in vector spaces Distributional semantics favors the use of linear algebra as a computational tool and representational framework. The basic approach is to collect distributional information in high-dimensional vectors, and to define distributional/semantic similarity in terms of vector similarity. Different kinds of similarities can be extracted depending on which type of distributional information is used to collect the vectors: topical similarities can be extracted by populating the vectors with information on which text regions the linguistic items occur in; paradigmatic similarities can be extracted by populating the vectors with information on which other linguistic items the items co-occur with. Note that the latter type of vectors can also be used to extract syntagmatic similarities by looking at the individual vector components. The basic idea of a correlation between distributional and semantic similarity can be operationalized in many different ways. There is a rich variety of computational models implementing distributional semantics, including latent semantic analysis (LSA), Hyperspace Analogue to Language (HAL), syntax- or dependency-based models, random indexing, semantic folding and various variants of the topic model. Distributional semantic models differ primarily with respect to the following parameters: Context type (text regions vs. linguistic items) Context window (size, extension, etc.) Frequency weighting (e.g. entropy, pointwise mutual information, etc.) Dimension reduction (e.g. random indexing, singular value decomposition, etc.) Similarity measure (e.g. cosine similarity, Minkowski distance, etc.)
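A minimal count-based sketch of this vector-space idea follows; the toy corpus, window size and weighting (raw counts, no dimension reduction) are all arbitrary illustrative choices:

```python
import numpy as np

# Build co-occurrence vectors from a toy corpus with a two-word context
# window, then compare words by cosine similarity.
corpus = ("the cat drinks milk . the dog drinks water . "
          "the cat chases the dog").split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))

window = 2
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            M[index[w], index[corpus[j]]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(M[index["cat"]], M[index["dog"]]))     # ~0.92: similar contexts
print(cosine(M[index["cat"]], M[index["drinks"]]))  # ~0.72: less similar
```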
Distributional semantic models that use linguistic items as context have also been referred to as word space, or vector space, models. Beyond Lexical Semantics While distributional semantics has typically been applied to lexical items—words and multi-word terms—with considerable success, not least due to its applicability as an input layer for neurally inspired deep learning models, lexical semantics, i.e. the meaning of words, will only carry part of the semantics of an entire utterance. The meaning of a clause, e.g. "Tigers love rabbits.", can only partially be understood from examining the meaning of the three lexical items it consists of. Distributional semantics can straightforwardly be extended to cover larger linguistic items such as constructions, with and without non-instantiated items, but some of the base assumptions of the model need to be adjusted somewhat. Construction grammar and its formulation of the lexical-syntactic continuum offer one approach for including more elaborate constructions in a distributional semantic model, and some experiments have been implemented using the random indexing approach. Compositional distributional semantic models extend distributional semantic models by explicit semantic functions that use syntactically based rules to combine the semantics of participating lexical units into a compositional model to characterize the semantics of entire phrases or sentences. This work was originally proposed by Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh of Oxford University in their 2008 paper, "A Compositional Distributional Model of Meaning". Different approaches to composition have been explored, including neural models, and are under discussion at established workshops such as SemEval. Applications Distributional semantic models have been applied successfully to the following tasks: finding semantic similarity between words and multi-word expressions; word clustering based on semantic similarity; automatic creation of thesauri and bilingual dictionaries; word sense disambiguation; expanding search requests using synonyms and associations; defining the topic of a document; document clustering for information retrieval; data mining and named-entity recognition; creating semantic maps of different subject domains; paraphrasing; sentiment analysis; modeling selectional preferences of words. Software S-Space SemanticVectors Gensim DISCO Builder Indra See also Conceptual space Co-occurrence Distributional–relational database Gensim Phraseme Random indexing Sentence embedding Statistical semantics Word2vec Word embedding People Scott Deerwester Susan Dumais J. R. Firth George Furnas Zellig Harris Thomas Landauer Magnus Sahlgren References Sources Reprinted in External links Zellig S. Harris Computational linguistics Semantics Language acquisition Semantic relations
Distributional semantics
[ "Technology" ]
1,165
[ "Natural language and computing", "Computational linguistics" ]
7,303,274
https://en.wikipedia.org/wiki/Novellus%20Systems
Novellus Systems Inc. was a company founded by Brad Mattson that developed, manufactured, sold, and serviced semiconductor equipment used in the fabrication of integrated circuits. It was a supplier of chemical vapor deposition (CVD), plasma-enhanced chemical vapor deposition (PECVD), physical vapor deposition (PVD), electrochemical deposition (ECD), ultraviolet thermal processing (UVTP), and surface preparation equipment used in the manufacturing of semiconductors. Novellus Systems was founded in 1984 and was headquartered in San Jose, California. The company maintained engineering and manufacturing facilities in Tualatin, Oregon and San Jose, California, as well as a component design and software development facility in Bangalore, India. In December 2011, Novellus agreed to be acquired by Lam Research for $3.3 billion. The acquisition was completed in June 2012. Product lines Novellus' product lines, which assisted semiconductor companies with manufacturing, were called ALTUS, ATHENA, GAMMA, INOVA, SABRE, SOLA, SPEED, VECTOR and SEQUEL. External links References Manufacturing companies based in San Jose, California Technology companies based in the San Francisco Bay Area Defunct semiconductor companies of the United States Electronics companies established in 1984 1984 establishments in California Companies formerly listed on the Nasdaq Equipment semiconductor companies Physical vapor deposition 2012 mergers and acquisitions Defunct computer companies of the United States
Novellus Systems
[ "Engineering" ]
273
[ "Equipment semiconductor companies", "Semiconductor fabrication equipment" ]
8,795,656
https://en.wikipedia.org/wiki/Copositive%20matrix
In mathematics, specifically linear algebra, a real symmetric matrix A is copositive if x^T A x ≥ 0 for every nonnegative vector x ≥ 0 (where the inequalities should be understood coordinate-wise). Some authors do not require A to be symmetric. The collection of all copositive matrices is a proper cone; it includes as a subset the collection of real positive-definite matrices. Copositive matrices find applications in economics, operations research, and statistics. Examples Every real positive-semidefinite matrix is copositive by definition. Every symmetric nonnegative matrix is copositive. This includes the zero matrix. The exchange matrix is copositive but not positive-semidefinite. Properties It is easy to see that the sum of two copositive matrices is a copositive matrix. More generally, any conical combination of copositive matrices is copositive. Let A be a copositive matrix. Then every principal submatrix of A is copositive as well; in particular, the entries on the main diagonal must be nonnegative, and the spectral radius ρ(A) is an eigenvalue of A. Every copositive matrix of order less than 5 can be expressed as the sum of a positive semidefinite matrix and a nonnegative matrix. A counterexample for order 5 is given by a copositive matrix known as the Horn matrix, whose rows are (1, −1, 1, 1, −1), (−1, 1, −1, 1, 1), (1, −1, 1, −1, 1), (1, 1, −1, 1, −1) and (−1, 1, 1, −1, 1). Characterization The class of copositive matrices can be characterized using principal submatrices. One such characterization is due to Wilfred Kaplan: a real symmetric matrix A is copositive if and only if no principal submatrix of A has an eigenvector v > 0 with associated eigenvalue λ < 0. Several other characterizations are presented in a survey by Ikramov, including: assume that all the off-diagonal entries of a real symmetric matrix A are nonpositive; then A is copositive if and only if it is positive semidefinite. The problem of deciding whether a matrix is copositive is co-NP-complete. References Matrices Convex analysis
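Since no polynomial-time test is expected, a practical and purely heuristic approach is random sampling over the simplex; the sketch below can refute copositivity when it finds a negative value, but a clean pass only suggests, never proves, that a matrix is copositive. The Horn matrix entries are as listed above; everything else is an illustrative choice.

```python
import numpy as np

# Heuristic check: sample nonnegative vectors on the probability simplex
# and test x^T A x >= 0. A negative value disproves copositivity; finding
# none is only evidence in favor, not a proof.
def seems_copositive(A, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.dirichlet(np.ones(A.shape[0]), size=trials)
    return bool(np.all(np.einsum("ij,jk,ik->i", X, A, X) >= 0))

horn = np.array([[ 1, -1,  1,  1, -1],
                 [-1,  1, -1,  1,  1],
                 [ 1, -1,  1, -1,  1],
                 [ 1,  1, -1,  1, -1],
                 [-1,  1,  1, -1,  1]])
print(seems_copositive(horn))        # True: consistent with copositivity
print(seems_copositive(-np.eye(3)))  # False: every simplex point violates it
```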
Copositive matrix
[ "Mathematics" ]
416
[ "Matrices (mathematics)", "Mathematical objects" ]
8,798,339
https://en.wikipedia.org/wiki/Supervaluationism
In philosophical logic, supervaluationism is a semantics for dealing with irreferential singular terms and vagueness. It allows one to apply the tautologies of propositional logic in cases where truth values are undefined. According to supervaluationism, a proposition can have a definite truth value even when its components do not. The proposition "Pegasus likes licorice", for example, is often interpreted as having no truth-value given the assumption that the name "Pegasus" fails to refer. If indeed reference fails for "Pegasus", then it seems as though there is nothing that can justify an assignment of a truth-value to any apparent assertion in which the term "Pegasus" occurs. The statement "Pegasus likes licorice or Pegasus doesn't like licorice", however, is an instance of the valid schema ("p or not-p"), so, according to supervaluationism, it should be true regardless of whether or not its disjuncts have a truth value; that is, it should be true in all interpretations. If, in general, something is true in all precisifications, supervaluationism describes it as "supertrue", while something false in all precisifications is described as "superfalse". Supervaluations were first formalized by Bas van Fraassen. Example abstraction Let v be a classical valuation defined on every atomic sentence of the language L and let At(x) be the number of distinct atomic sentences in a formula x. There are then at most 2^At(x) classical valuations defined on every sentence x. A supervaluation V is a function from sentences to truth values such that x is supertrue (i.e. V(x) = True) if and only if v(x) = True for every v. Likewise for superfalse. V(x) is undefined when there exist two valuations v and v* such that v(x) = True and v*(x) = False. For example, let Lp be the formal translation of "Pegasus likes licorice". There are then exactly two classical valuations v and v* on Lp, namely v(Lp) = True and v*(Lp) = False. So Lp is neither supertrue nor superfalse. See also Kripke semantics Sorites paradox Subvaluationism References External links Stanford Encyclopedia of Philosophy Supervaluationism as a response to vagueness Supervaluationism as a response to the Sorites Paradox Semantics Theories of deduction
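This definition is directly mechanizable. The sketch below (an illustration only, with sentences modelled as hypothetical Python functions over valuations) enumerates the 2^At(x) classical valuations and classifies a sentence:

```python
from itertools import product

# Enumerate every classical valuation of the atoms and classify a sentence
# as supertrue (true under all), superfalse (false under all) or undefined.
def classify(sentence, atoms):
    values = [sentence(dict(zip(atoms, bits)))
              for bits in product([True, False], repeat=len(atoms))]
    if all(values):
        return "supertrue"
    if not any(values):
        return "superfalse"
    return "undefined"

atoms = ["Lp"]  # Lp: "Pegasus likes licorice"
print(classify(lambda v: v["Lp"], atoms))                 # undefined
print(classify(lambda v: v["Lp"] or not v["Lp"], atoms))  # supertrue
print(classify(lambda v: v["Lp"] and not v["Lp"], atoms)) # superfalse
```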
Supervaluationism
[ "Mathematics" ]
533
[ "Theories of deduction" ]
8,798,387
https://en.wikipedia.org/wiki/Hubometer
A hubometer (from hub, center of a wheel; -ometer, measure of) or hubodometer is a device mounted on the axle of any land vehicle to measure the distance it has traveled, based on the rotations of the wheel hub. The whole device rotates with the wheel, except for an eccentrically mounted weight on an internal shaft. The weight remains pointing downwards, and drives the counting mechanism as the body of the hubometer rotates around it. Typical uses Hubometers are essential for semi-trailers, serving as the primary method to track the accumulated distance traveled throughout the vehicle's lifespan. They find application in buses, trucks, or trailers, particularly those whose tires are provided to the vehicle operator through an independent company under a "price per thousand kilometers" contract. In this arrangement, the tire company installs the hubometer to obtain accurate measurements of the distance covered. In New Zealand, hubodometers are used for the calculation of road user charges for HGVs powered by a fuel not taxed at source. Historical data At the Veeder Manufacturing Company in Hartford, Connecticut, cyclometers, hubodometers, and other scientific instruments were produced under contracts with the United States government. The cyclometer, designed in 1895 by Curtis Veeder, a bicycle enthusiast, measured the distance traveled by bicycles. He later adapted the invention into hubodometers measuring the distance traveled by automobiles, as well as hand-turned cyclometers for use by the US Weather Bureau. The Veeder Manufacturing Company produced these tools for use by the US government during World War One. The hubodometers would be placed on the wheel of an automobile to measure the distance traveled by counting the rotations of the wheel. Curtis Veeder acquired the Root Company of Bristol in 1928, forming the Veeder-Root Corporation, before retiring. It has remained in operation up to the present day. With his industrial wealth, Veeder constructed an intricate stone mansion on Elizabeth Street in Hartford; this mansion currently houses the Connecticut Museum of Culture and History. See also Odometer Speedometer Tachograph Tachometer Taximeter Hobbs meter Tach timer References Vehicle technology Measuring instruments Length, distance, or range measuring devices
Hubometer
[ "Technology", "Engineering" ]
463
[ "Vehicle technology", "Mechanical engineering by discipline", "Measuring instruments" ]
8,798,424
https://en.wikipedia.org/wiki/Tunguska%20%28The%20X-Files%29
"Tunguska" is the eighth episode of the fourth season of the American science fiction television series The X-Files. It premièred on the Fox network on . It was directed by Kim Manners, and written by Frank Spotnitz and series creator Chris Carter. "Tunguska" featured guest appearances by John Neville, Nicholas Lea and Fritz Weaver. The episode helped explore the series' overarching mythology. "Tunguska" earned a Nielsen household rating of 12.2, being watched by 18.85 million people in its initial broadcast. In the episode, FBI special agent Fox Mulder (David Duchovny) travels to Russia to investigate the source of a black oil contamination. His partner Dana Scully (Gillian Anderson) and assistant director Walter Skinner (Mitch Pileggi) are summoned to attend a United States Senate hearing on Mulder's whereabouts. "Tunguska" is a two-part episode, with the plot continuing in the next episode, "Terma". "Tunguska" was inspired by reports of evidence of extraterrestrial life possibly being found in the Allan Hills 84001 meteorite, while the gulag setting was inspired by the works of Aleksandr Solzhenitsyn. The story offered the writers a chance to expand the scale of the series' mythology globally, although production of the episode was described as troublesome and expensive. Plot The episode opens in medias res to Dana Scully (Gillian Anderson) as she is brought before a Senate select committee to be questioned about the whereabouts of Fox Mulder (David Duchovny). Scully refuses to answer the committee's questions and attempts to read a statement denouncing the conspiracy within the government. Senator Sorenson threatens to hold Scully in contempt of Congress. Ten days earlier, at Honolulu Airport, a courier returning from the Republic of Georgia (David Bloom) is searched by customs officers. One of the officers (Andy Thompson) removes a glass canister from the courier's briefcase and accidentally shatters it, exposing both men to the black oil. Meanwhile, in New York City, Mulder and Scully take part in an FBI raid against a domestic terrorist group. Mulder's tipster within the group is revealed to be Alex Krycek (Nicholas Lea), whom the terrorists released from the missile silo where he had been trapped. Krycek has turned against The Smoking Man (William B. Davis), and tells the distrustful agents that he can help expose him. Krycek leads the agents to Dulles International Airport, where they try to apprehend a second courier carrying a diplomatic pouch from Russia. The courier leads the agents on a pursuit through the airport, but drops the pouch before escaping. The pouch is revealed to carry a seemingly unremarkable rock. Mulder has Krycek confined at the high rise apartment of Assistant Director Walter Skinner before having the rock analyzed at NASA's Goddard Space Flight Center. Dr. Sacks, a NASA scientist, tells Mulder and Scully that the rock is a prehistoric meteorite fragment that might contain fossilized alien bacteria. Skinner is approached by the Smoking Man, who demands that the pouch be returned. The courier breaks into Skinner's apartment and searches for the pouch, only to be thrown off Skinner's patio by Krycek. Meanwhile, Dr. Sacks cuts into the fragment, but inadvertently releases the black oil inside; the organism penetrates the scientist's hazmat suit and puts him in a coma-like state. 
Mulder travels to New York to visit Marita Covarrubias (Laurie Holden), who reveals that the fragment originated from the Russian province of Krasnoyarsk and provides the documents needed to travel there. Mulder reluctantly brings along Krycek, who is fluent in Russian. In Charlottesville, Virginia, the Smoking Man is admonished by the Well-Manicured Man (John Neville) when the latter learns about Mulder's travels. Skinner and the agents are subpoenaed to appear before Senator Sorenson's panel over the missing pouch; when Skinner questions Scully about Mulder's whereabouts, she is not forthcoming. Meanwhile, as Mulder and Krycek hike through the forests of Krasnoyarsk, the former theorizes that the fragment may be tied to the Tunguska event, a mysterious cosmic impact that occurred in the area in 1908. The two men come across a slave labor camp, but are captured by the taskmasters and thrown into a gulag. Skinner and Scully meet with Senator Sorenson, who questions them on the death of the courier and the location of Agent Mulder. Mulder talks with a fellow prisoner, who tells him that innocent people have been captured and brought there to be subjected to experiments. Immediately afterwards, guards burst into the room and inject Mulder with an unknown substance. When Mulder awakens, he is bound with chicken wire in a large room along with many other prisoners. Black material is dumped onto his face, infecting him with the black oil. Production "Tunguska" and its follow-up "Terma" were conceived when the writers were looking for a "big and fun canvas" on which to tell stories. They decided to create a story connected to the Russian gulags, which led to the "natural" idea that the Russians were experimenting separately from the Syndicate to create a vaccine for the black oil. Series writer John Shiban felt it was natural to create an arms-race-like story between the United States and Russia, given that the Cold War had ended only a few years earlier. The writers desired to expand the series' mythology globally, a concept that continued into the fifth season and the series' 1998 feature film adaptation. The idea of a conspiracy with a global reach was first broached in the series' second season, and it was felt that this two-part story was a good place to expand upon it, allowing the production crew to "stretch the limits" of their resources and imagination. The inspiration for the oil-containing rocks was NASA's announcement of possible evidence of extraterrestrial life in the Allan Hills 84001 meteorite, while the gulag scenes were based on Aleksandr Solzhenitsyn's books The Gulag Archipelago and One Day in the Life of Ivan Denisovich. The scenes featuring the SWAT raid on a terrorist cell found to be harbouring Alex Krycek were filmed in a single night, requiring sixty individual film setups split between three camera crews working simultaneously. By dawn, only four of the sixty required shots had not been filmed, and these were later completed on a sound stage. Additional scenes shot for the episode featuring The Smoking Man and the Well-Manicured Man were cut due to time constraints. A scene featuring Scully briefing Skinner on the events of the episode was also cut, as it was felt that it was "redundant" within the narrative, repeating information that had already been shown to the audience. David Duchovny's father was present during production of the episode, leaving the actor to enjoy the shoot, although the crew described production as expensive and "stubbornly trouble-plagued".
"Tunguska" marked the fourth appearance in the series by Malcolm Stewart, who had previously appeared in "Pilot", "3" and "Avatar". Reception Ratings "Tunguska" premiered on the Fox network on . The episode earned a Nielsen household rating of 12.2 with an 18 share, meaning that roughly of all television-equipped households, and of households watching television, were tuned in to the episode. A total of viewers watched this episode during its original airing. Reviews "Tunguska" received mostly positive reviews from critics. Based on an advance viewing of the episode's script, Entertainment Weekly rated "Tunguska" an A−, praising the "arms race" plotline. Sarah Stegall, in The Munchkyn Zone, wrote positively of the entry and gave it a 5 out of 5 rating. Stegall highlighted the "taut storyline" and the "excellent direction". Writing for The A.V. Club, Emily VanDerWerff rated the episode a B, noting that the move to a global scale detracted from the series' overall relevance. VanDerWerff felt that "the action setpieces in this episode and the next one are really terrific", and praised William B. Davis' portrayal of The Smoking Man. However, she described "Tunguska" as being "one of the first really unfocused mythology episodes in the show's run", and found the plot of the episode to not be moving the series forward enough, noting that "for the first time, Mulder feels less like he's driving the action and more like he's a messenger boy". David Duchovny described this episode, along with "Terma", as being action-heavy and "lots of fun". Awards "Tunguska" received a nomination for a CAS Award by the Cinema Audio Society for Outstanding Achievement in Sound Mixing - Television Series. Footnotes Bibliography External links 1996 American television episodes Television episodes directed by Kim Manners Television episodes written by Chris Carter (screenwriter) Television episodes written by Frank Spotnitz Television episodes set in Hawaii Television episodes set in Maryland Television episodes set in New York (state) Television episodes set in Russia The X-Files season 4 episodes Television episodes set in Virginia Tunguska event
Tunguska (The X-Files)
[ "Physics" ]
1,926
[ "Unsolved problems in physics", "Tunguska event" ]
8,798,984
https://en.wikipedia.org/wiki/Ambroxol
Ambroxol is a drug that breaks up phlegm, used in the treatment of respiratory diseases associated with viscid or excessive mucus. Ambroxol is often administered as an active ingredient in cough syrup. It was patented in 1966 and came into medical use in 1979. Medical uses Ambroxol is indicated as "secretolytic therapy in bronchopulmonary diseases associated with abnormal mucus secretion and impaired mucus transport. It promotes mucus clearance, facilitates expectoration and eases productive cough, allowing patients to breathe freely and deeply". Many different formulations have been developed since the first marketing authorisation in 1978. Ambroxol is available as syrup, tablets, pastilles, dry powder sachets, inhalation solution, drops and ampules as well as effervescent tablets. Ambroxol also provides pain relief in acute sore throat. Pain is the hallmark of acute pharyngitis (sore throat). Sore throat is usually caused by a viral infection. The infection is self-limiting, and the patient normally recovers after a few days. What bothers the patient most is the continuous throat pain, which is greatest when swallowing. The main goal of treatment is thus to reduce pain. The main property of ambroxol for treating sore throat is its local anaesthetic effect, described first in the late 1970s, but explained and confirmed in more recent work. High-dose ambroxol, delivered via intravenous injection, reduces the mortality rate in paraquat poisoning by 31%. Side effects Studies and observations to date have not uncovered specific contraindications of ambroxol; however, caution is suggested for patients with gastric ulceration, and usage during the first trimester of pregnancy is not recommended. Mechanism of action The substance acts on mucous membranes, restoring the physiological clearance mechanisms of the respiratory tract (which play an important role in the body's natural defences) through several actions, including breaking up phlegm, stimulating mucus production, and stimulating synthesis and release of surfactant by type II pneumocytes. Surfactant acts as an anti-glue factor by reducing the adhesion of mucus to the bronchial wall, improving its transport and providing protection against infection and irritating agents. Ambroxol is a potent inhibitor of neuronal Na+ channels, explaining its anaesthetic effect. This property led to the development of a lozenge containing 20 mg of ambroxol. Clinical studies have demonstrated the efficacy of ambroxol in relieving pain in acute sore throat, with a rapid onset of action and an effect lasting at least three hours. Ambroxol is also anti-inflammatory, reducing redness in a sore throat. It reduces the release of inflammatory cytokines and histamines in cell cultures. It also acts as an antioxidant, scavenging free radicals and hypochlorous acid generated by neutrophils. These two actions explain its benefit in treating acute lung injury caused by paraquat. Ambroxol has recently been shown to increase activity of the lysosomal enzyme glucocerebrosidase. Because of this it may be a useful therapeutic agent for both Gaucher disease and Parkinson's disease. It was also recently shown that ambroxol triggers exocytosis of lysosomes by releasing calcium from acidic cellular calcium stores. This occurs by diffusion of ambroxol into lysosomes and lysosomal pH neutralization. 
This mechanism is most likely responsible for the mucolytic effects of the drug, but may also explain the reported activity in Gaucher and Parkinson's disease. Both ambroxol and its parent drug bromhexine have been shown to induce autophagy in several cell types, and ambroxol was shown to potentiate rifampicin therapy in a model of tuberculosis through host-directed effects. Ambroxol also enhances lung levels of a wide range of antibiotics. Brand names Ambroxol is the active ingredient of Muciclar (Italy), Mucosolvan, Mucobrox (Spain), Bisolvon (Switzerland), Cloxan (Mexico), Mucol, Lasolvan, Mucoangin, Surbronc, Brontex (Lithuania), Ambro (Kazakhstan), Ambolar, Inhalex, Mucolite (India), Fluibrox (Greece) and Lysopain. Ambroxol has been approved as a safe and effective substance by the European Medicines Agency, but has not been approved in the US by the Food and Drug Administration. Ambroxol is also not registered for use in Australia. References Expectorants Anilines Bromobenzene derivatives Amines Secondary alcohols
Ambroxol
[ "Chemistry" ]
1,017
[ "Amines", "Bases (chemistry)", "Functional groups" ]
8,799,355
https://en.wikipedia.org/wiki/Complementary%20sequences
For complementary sequences in biology, see complementarity (molecular biology). For integer sequences with complementary sets of members see Lambek–Moser theorem. In applied mathematics, complementary sequences (CS) are pairs of sequences with the useful property that their out-of-phase aperiodic autocorrelation coefficients sum to zero. Binary complementary sequences were first introduced by Marcel J. E. Golay in 1949. In 1961–1962 Golay gave several methods for constructing sequences of length $2^N$ and gave examples of complementary sequences of lengths 10 and 26. In 1974 R. J. Turyn gave a method for constructing sequences of length $mn$ from sequences of lengths $m$ and $n$ which allows the construction of sequences of any length of the form $2^N 10^K 26^M$. Later the theory of complementary sequences was generalized by other authors to polyphase complementary sequences, multilevel complementary sequences, and arbitrary complex complementary sequences. Complementary sets have also been considered; these can contain more than two sequences. Definition Let $(a_0, a_1, \ldots, a_{N-1})$ and $(b_0, b_1, \ldots, b_{N-1})$ be a pair of bipolar sequences, meaning that $a_k$ and $b_k$ have values +1 or −1. Let the aperiodic autocorrelation function of the sequence $x$ be defined by $R_x(k) = \sum_{j=0}^{N-k-1} x_j\, x_{j+k}$. Then the pair of sequences $a$ and $b$ is complementary if $R_a(k) + R_b(k) = 2N$ for $k = 0$, and $R_a(k) + R_b(k) = 0$ for $k = 1, \ldots, N-1$. Or, using the Kronecker delta, we can write $R_a(k) + R_b(k) = 2N\,\delta(k)$. So we can say that the sum of autocorrelation functions of complementary sequences is a delta function, which is an ideal autocorrelation for many applications like radar pulse compression and spread spectrum telecommunications. Examples As the simplest example we have sequences of length 2: (+1, +1) and (+1, −1). Their autocorrelation functions are (2, 1) and (2, −1), which add up to (4, 0). As the next example (sequences of length 4), we have (+1, +1, +1, −1) and (+1, +1, −1, +1). Their autocorrelation functions are (4, 1, 0, −1) and (4, −1, 0, 1), which add up to (8, 0, 0, 0). One example of length 8 is (+1, +1, +1, −1, +1, +1, −1, +1) and (+1, +1, +1, −1, −1, −1, +1, −1). Their autocorrelation functions are (8, −1, 0, 3, 0, 1, 0, 1) and (8, 1, 0, −3, 0, −1, 0, −1). An example of length 10 given by Golay is (+1, +1, −1, +1, −1, +1, −1, −1, +1, +1) and (+1, +1, −1, +1, +1, +1, +1, +1, −1, −1). Their autocorrelation functions are (10, −3, 0, −1, 0, 1,−2, −1, 2, 1) and (10, 3, 0, 1, 0, −1, 2, 1, −2, −1). Properties of complementary pairs of sequences Complementary sequences have complementary spectra. As the autocorrelation function and the power spectrum form a Fourier pair, complementary sequences also have complementary spectra. But as the Fourier transform of a delta function is a constant, we can write $S_a(f) + S_b(f) = C_S$, where $C_S$ is a constant. $S_a$ and $S_b$ are defined as the squared magnitude of the Fourier transform of the sequences. The Fourier transform can be a direct DFT of the sequences, a DFT of zero-padded sequences, or a continuous Fourier transform of the sequences, which is equivalent to the Z transform for $z = e^{j\omega}$, that is, on the unit circle. The CS spectra are upper bounded: as $S_a$ and $S_b$ are non-negative, we can also write $S_a(f) \le C_S$ and $S_b(f) \le C_S$. If either of the sequences of the CS pair is inverted (multiplied by −1) they remain complementary. 
More generally if either of the sequences is multiplied by $e^{j\varphi}$ they remain complementary; If either of the sequences is reversed they remain complementary; If either of the sequences is delayed they remain complementary; If the sequences are interchanged they remain complementary; If both sequences are multiplied by the same constant (real or complex) they remain complementary; If alternating bits of both sequences are inverted they remain complementary. In general for arbitrary complex sequences if both sequences are multiplied by $e^{j\pi k n/N}$ (where $k$ is a constant and $n$ is the time index) they remain complementary; A new pair of complementary sequences can be formed as [a b] and [a −b] where [..] denotes concatenation and a and b are a pair of CS; A new pair of sequences can be formed as {a b} and {a −b} where {..} denotes interleaving of sequences. A new pair of sequences can be formed as a + b and a − b. Golay pair A complementary pair $a$, $b$ may be encoded as polynomials $A(z) = a_0 + a_1 z + \cdots + a_{N-1} z^{N-1}$ and similarly for $B(z)$. The complementarity property of the sequences is equivalent to the condition $|A(z)|^2 + |B(z)|^2 = 2N$ for all $z$ on the unit circle, that is, $|z| = 1$. If so, $A$ and $B$ form a Golay pair of polynomials. Examples include the Shapiro polynomials, which give rise to complementary sequences of length a power of two. Applications of complementary sequences Multislit spectrometry Ultrasound measurements Acoustic measurements radar pulse compression Wi-Fi networks, 3G CDMA wireless networks OFDM communication systems Train wheel detection systems Non-destructive tests (NDT) Communications coded aperture masks are designed using a 2-dimensional generalization of complementary sequences. See also Binary Golay code (Error-correcting code) Gold sequences Kasami sequences Polyphase sequence Pseudorandom binary sequences (also called maximum length sequences or M-sequences) Ternary Golay code (Error-correcting code) Walsh-Hadamard sequences Zadoff–Chu sequence References Sequences and series Signal processing Pseudorandom number generators
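As a quick sanity check of the definition above, the following short Python sketch (an editorial illustration, not part of the original sources; the function names are our own) computes the aperiodic autocorrelations of the length-4 example pair and confirms that their sum is a delta function:

```python
def aperiodic_autocorrelation(x):
    """Return R_x(k) = sum_j x[j] * x[j+k] for k = 0 .. N-1."""
    N = len(x)
    return [sum(x[j] * x[j + k] for j in range(N - k)) for k in range(N)]

def is_complementary(a, b):
    """Check that R_a(k) + R_b(k) equals 2N for k = 0 and 0 otherwise."""
    assert len(a) == len(b)
    N = len(a)
    ra = aperiodic_autocorrelation(a)
    rb = aperiodic_autocorrelation(b)
    total = [x + y for x, y in zip(ra, rb)]
    return total[0] == 2 * N and all(t == 0 for t in total[1:])

# The length-4 Golay pair from the article:
a = [+1, +1, +1, -1]
b = [+1, +1, -1, +1]
print(aperiodic_autocorrelation(a))  # [4, 1, 0, -1]
print(aperiodic_autocorrelation(b))  # [4, -1, 0, 1]
print(is_complementary(a, b))        # True
```

The same check passes for the length-2, length-8 and length-10 pairs listed in the Examples section.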
Complementary sequences
[ "Mathematics", "Technology", "Engineering" ]
1,346
[ "Sequences and series", "Mathematical analysis", "Mathematical structures", "Telecommunications engineering", "Computer engineering", "Signal processing", "Mathematical objects" ]
8,801,379
https://en.wikipedia.org/wiki/Video%20tolling
Video tolling (sometimes referred to as video billing, toll by plate, pay by mail, or pay by plate) is a form of electronic toll collection that uses video or still images of a vehicle's license plate to identify a vehicle liable to pay a road toll. The system dispenses with collection of road tolls using road-side cash or payment card methods, and may be used in conjunction with "all electronic" open road tolling, to permit drivers without an RFID device (often referred to as a "Tag") to use the toll road. Technology In a video tolling system the license plate number can be extracted from an image either by using automatic number plate recognition (ANPR) technology or by manual data-entry clerks. As noted above, video tolling is often paired with "all electronic" open road tolling. An all electronic system is a toll collection point that does not permit cash payment, and vehicle identification and toll collection are done using RFID or other electronic means. When video tolling is used in conjunction with all electronic systems, a fee is frequently added to the toll to offset the higher cost of processing video tolls. There are two forms of video billing: "registered" and "unregistered" accounts. In registered video billing, the motorist first registers the vehicle's plates with the tolling agency prior to using the toll road. The toll system will then associate the plate images with the account and debit the amount of the toll from the account. Unregistered systems look up the vehicle registration information from a government motor vehicle registration database and send a bill to the address in the database. There may be an extra charge for the additional processing. (A toy sketch of both billing models appears at the end of this entry.) North America The first video tolling system in North America was on Highway 407 in the Greater Toronto Area. The 407 ETR system, which opened in 1997, has struggled somewhat with accuracy and customer service issues, and recently settled a lawsuit related to potential incorrect charges on the system. Video tolling systems are also being evaluated in the United States, and in 2006, the Texas Department of Transportation (TxDOT) deployed the first "video only" system on SH 121 north of Dallas, though not without controversy, as Cintra attempted to gain control of the construction and operation of the SH 121 Tollway; maintenance of the tollway was transferred to the North Texas Tollway Authority (NTTA) in 2008, and it was redesignated the Sam Rayburn Tollway. California Video tolling is used as a secondary enforcement measure for vehicles not equipped with a FasTrak RFID transponder on the toll roads, toll bridges, and high-occupancy toll lanes throughout California. When a vehicle does not have a transponder, or if a transponder is not detected at enforcement points, a violation enforcement system triggers a camera system that captures photos of the vehicle and its license plate for processing. Europe Austria In Austria, ASFINAG (Autobahnen- und Schnellstraßen-Finanzierungs-Aktiengesellschaft), the Austrian financing provider for and operator of motorways (Autobahnen) and expressways (Schnellstraßen), operates a video tolling system called Videomaut (video toll). It covers parts of three motorways which are subject to special tolls (Sondermaut) in addition to the regular tolls paid through a road-tax disc (Vignette) for vehicles with a total weight of no more than 3.5 metric tonnes. 
Availability Videomaut is available on the following routes: A9 Pyhrn Motorway (Pyhrnautobahn), between Spital/Pyhrn and Ardning and between St. Michael and Übelbach, toll plaza of Bosruck A10 Tauern Motorway (Tauernautobahn), between Flachau and Rennweg, toll plaza of St. Michael i.L. A13 Brenner Motorway (Brennerautobahn), between Brenner Pass and Schönberg, toll plaza of Schönberg Use and limitations To use Videomaut, the user must buy a Videomaut ticket, either online or in person at an ASFINAG customer service centre or authorised point of sale. The route, number of trips, and the number plate of the car to be used must be specified. A Videomaut ticket is valid for one year from the date it is issued. When approaching a toll plaza, the user must enter the lane marked Videomaut and travel no faster than 15 km/h. The car's number plate is read and identified automatically, and the vehicle is allowed to pass. If the number plate cannot be read, or no valid Videomaut ticket is associated with the number plate, the user is directed into a regular toll lane and must show a copy of the receipt for the Videomaut ticket as proof of purchase to the toll cashier. The toll cashier verifies the validity of the ticket and allows the vehicle to continue, or directs the driver to pay the special toll incurred if the ticket is invalid or has expired. Videomaut is available only to cars without a trailer which do not exceed a width of 2.3 metres and a total weight of 3.5 metric tonnes. Fees and discounts Videomaut tickets can be bought for a single or multiple passages, or for an unlimited number of passages for one year from the date the ticket is issued. Discounts are available to: drivers holding road-tax discs that are valid for one year, commuters, military and alternative civilian service members, and drivers with disabilities. Poland In Poland, systems called A4Go and AmberGo are available on two motorway sections: the A4 between Kraków and Katowice, and the A1 between Rusocin and Nowa Wieś. Payment is possible via several mobile applications for iOS and Android: Autopay, Skycash, AmberGo and A4Go. Australia In Australia, video tolling is a part of the e-TAG system, and is available nationwide. The system was originally developed in the early 1990s and has since been adopted by all electronically tolled roads, bridges and tunnels in Australia. See also Videomaut home page ASFINAG home page References Electronic toll collection Radio-frequency identification Articles containing video clips
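To make the "registered" and "unregistered" billing models from the Technology section concrete, here is a toy Python sketch; all data structures, names and fee values are invented for illustration and do not reflect any real agency's system:

```python
# Illustrative sketch of the two video-billing models.  REGISTERED_ACCOUNTS
# stands in for the agency's prepaid accounts; DMV_REGISTRY stands in for a
# government vehicle-registration database.
REGISTERED_ACCOUNTS = {"ABC123": {"balance": 25.00}}   # plate -> account
DMV_REGISTRY = {"XYZ789": "1 Main St, Springfield"}    # plate -> address

def process_plate(plate, toll, video_fee=0.50):
    """Debit a registered account, or bill the registered owner by mail."""
    if plate in REGISTERED_ACCOUNTS:
        REGISTERED_ACCOUNTS[plate]["balance"] -= toll
        return f"debited {toll:.2f} from account for {plate}"
    address = DMV_REGISTRY.get(plate)
    if address is None:
        return f"{plate}: flagged for manual review"
    # Unregistered plates typically incur an extra processing fee.
    return f"mailed bill of {toll + video_fee:.2f} to {address}"

print(process_plate("ABC123", 2.00))  # registered: account debited
print(process_plate("XYZ789", 2.00))  # unregistered: billed by mail, plus fee
```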
Video tolling
[ "Engineering" ]
1,326
[ "Radio-frequency identification", "Radio electronics" ]
8,801,791
https://en.wikipedia.org/wiki/Hawking%20%28birds%29
Hawking is a feeding strategy in birds involving catching flying insects in the air. The term usually refers to a technique of sallying out from a perch to snatch an insect and then returning to the same or a different perch, though it also applies to birds that spend almost their entire lives on the wing. This technique is called "flycatching" and some birds known for it are several families of "flycatchers": Old World flycatchers, monarch flycatchers, and tyrant flycatchers; however, some species known as "flycatchers" use other foraging methods, such as the grey tit-flycatcher. Other birds, such as swifts, swallows, and nightjars, also take insects on the wing in continuous aerial feeding. The term "hawking" comes from the similarity of this behavior to the way hawks take prey in flight, although, whereas raptors may catch prey with their feet, hawking is the behavior of catching insects in the bill. Many birds have a combined strategy of both hawking insects and gleaning them from foliage. Flycatching The various methods of taking insects have been categorized as: gleaning (perched bird takes prey from branch or tree trunk), snatching (flying bird takes prey from ground or branch), hawking (bird leaves perch and takes prey from air), pouncing (bird drops to ground and takes prey) and pursuing (flying bird takes insects from air). In hawking behavior, a bird will watch for prey from a suitable perch. When it spies potential prey, the bird will fly swiftly from its perch to catch the insect in its bill, then return to the perch or sometimes to a different perch. This maneuver is also called a "sally". Prey that is very small relative to the bird, such as gnats, may be consumed immediately while in flight, but larger prey, such as bees or moths, are usually brought back to a perch before being eaten. Sometimes the prey will attempt to escape, and this can result in a fluttering pursuit before the bird returns to the perch. Depending on the species of bird, there are observable variations on this behavior. Some species, such as the olive-sided flycatcher of North America and the ashy drongo of the Indian Subcontinent, tend to choose an exposed perch, such as a dead tree branch overlooking a clearing, whereas others, such as the North American Acadian flycatcher and the Asian small niltava, perch within the cover of foliage deep in a forest or woodland habitat. Many birds make use of a variety of tactics. A study of feeding behaviors in the family Tyrannidae categorized the following moves as ways of taking insect prey: aerial hawking (i.e. flycatching), perch-to-ground sallying, ground feeding (chasing after insects on the ground), perch-to-water sallying, sally-gleaning (which can involve a hover-glean or a rapid strike), and gleaning while perched. Some tyrant flycatchers, such as those that choose a prominent perch from which to hawk insects, have more of a tendency to return to the same perch after each sally, while others, particularly those of the forest interior, show less of this tendency. A similar pattern is seen in Great Britain, where there are but two flycatchers, the spotted flycatcher and the pied flycatcher. The spotted flycatcher is the specialist, and tends to return to the same perch after each sally. The pied flycatcher is more of a generalist, gleaning as well as flycatching, and changes perches often. Birds with the name "flycatcher" are not the only ones to engage in flycatching behavior. 
For example, Lewis's woodpecker feeds by flycatching. Some honeyeaters of Australasia employ hawking and gleaning as feeding tactics. Bee-eaters catch bees in a similar manner and return to the perch to remove the sting before consuming them. Furthermore, many small owls take insect prey on the wing; examples include the western screech owl of North America and the brown boobook of Asia. Sustained-flight feeding Continuous aerial feeding is a different way of hawking insects. It requires long wings and skillful flying, as in nightjars, swallows, and swifts. Swifts are the masters of aerial feeding; several species spend virtually their entire lives in the air (some non-mating common swifts have spent as much as 10 months in the air without landing), and have come to rely on insects as their main source of food. Swallows, though visually similar to swifts despite being unrelated, feed in a similar manner but less continuously, as they do not glide as much and stop to perch for a while between bouts of aerial feeding. This has to do with their prey: swifts fly higher in pursuit of smaller, lighter insects that are scattered by rising air currents, while swallows generally chase after medium-sized insects that are lower to the ground, such as flies. When swallows fly higher to go after smaller insects, they adjust their flight style to glide more, like a swift. Birds of the nightjar family employ a variety of moves for catching insects. The common nighthawk of North America flies in swift-like fashion on its long, slender, pointed wings. The common poorwill, on the other hand, flies low and perches low to the ground and will sally up into the air after insects. Opportunistic feeding Many other birds are known to engage in hawking as an opportunistic feeding technique or a supplemental source of nutrition: among these are the cedar waxwing, which mostly eats fruit but is also often observed hawking insects over streams; terns of the genus Chlidonias, such as the black tern, fly in search of insects, sometimes chasing after dragonflies in flight; and even large owls that normally feed on rodents will snatch flying insects when the opportunity arises. Physical adaptations Hawking insects, like any feeding strategy, must provide a bird with sufficient nourishment to make the expenditure of energy worthwhile. The strategies and tactics for feeding on airborne insects are inextricably related to the adaptations and lifestyles of the birds that employ them. Flight, especially flight driven by the muscle-powered flapping of wings, is a strenuous physical activity. Although a sally from a perch may look like a single, rapid movement to the human eye, actually the bird must perform several moves: it begins its take-off by pushing with its feet to get into the air, it flaps its wings to generate forward motion (thrust), pursues the prey item, turns in the air, flies back, and, with a final flurry of wings, lands on its perch. When a bird hawks insects, the prey must be substantial enough to pay off in terms of a biological energy budget. In other words, the bird must take in more energy in food than it is using up in the pursuit of food. Therefore, flycatchers tend to prefer insect prey of moderate size, such as flies, over smaller insects like gnats. 
For birds that live in a forest habitat or other setting where short bursts of flight are used in sallies or for getting from tree branch to tree branch, their short, rounded wings are suitable for the rapid flapping required to maneuver in tight spaces. Birds in more open settings that sally after larger insects like bees, such as kingbirds and bee-eaters, benefit from longer, more pointed wings, which are more efficient because they generate more lift and less drag. Swallows and swifts, which glide about in totally open spaces, have even longer wings. Another function of long, pointed wings is to enable these birds to turn quickly and smoothly in mid-glide. The wingtips create little vortices of air, within which the low air pressure creates additional lift on the wingtips. Furthermore, long, forked tails provide additional lift, stability, and steering ability, which is important for flying at slower speeds (swifts, though capable of flying very fast, actually must fly relatively slowly to intercept airborne insects). In fact, swifts have bodies so well adapted for flying that they are unable to perch on branches or land on the ground, and so they nest and roost on precipices such as rocky cliffs, behind waterfalls (as the black swift of North America and the great dusky swift of South America are known to do) or in chimneys, as in the case of the chimney swift. Bill size and shape is also important. Compared to the bills of birds specialized for gleaning, a relatively larger, broader bill is ideal for catching sizeable insects such as bees and flies. The presence of bristles near the bill (rictal bristles) in some flycatchers may be an adaptation for hawking insects; scientists are not sure of the function but they may help protect the eyes or they might actually help provide the bird sensory information as to the location of the prey. Swallows, swifts, and nightjars do not have large bills, but they have wide-gaping mouths. Some nightjars also have bristles around the bill (the common poorwill does, the common nighthawk does not). When different kinds of birds have the same adaptations, such similarities are not necessarily indicative of any familial relationship between bird species. Rather, they are the result of convergent evolution. Consider, for example, the marked resemblance in body size, shape, and coloration between flycatchers of several families, though these species are not closely related: the Asian brown flycatcher (of the Muscicapidae or Old World flycatcher family), Acadian flycatcher (of the Tyrannidae or tyrant flycatcher family) of the New World, and slaty monarch (of the Monarchidae or monarch flycatcher family), endemic to Fiji. All three use flycatching to acquire some or all of their food. But these three families belong to separate branches of the evolutionary tree of songbirds, which diverged in two branching events some 60 and 90 million years ago and continued to evolve independently in different parts of the world. Likewise, the similarities of swifts and swallows once led naturalists to conclude they were related, but it is now established that they are unrelated, and that the same lifestyle has led to the same adaptations. Ecological implications In temperate climates, the availability of flying insects as a food source is seasonal, and this is probably why many birds that rely on this food source during the breeding season migrate in winter. Migration is timed to the availability of the birds' preferred food. 
For instance, it has been observed in Great Britain that migrating swallows arrive earlier in the spring than swifts; this correlates with the later profusion of the small insects that swifts feed on. Weather also affects the availability of flying insects. Swallows, for example, are obliged to go where the insects are, and depending on the weather they may adjust their choice of prey or be forced to seek out prey in different locations. The preference for certain kinds of aerial insect as a food source seems to correlate with gregarious or colonial behavior versus territoriality. For birds that take advantage of swarming insects, which are by nature found in local concentrations, colonial breeding can be a successful strategy. An example is the cliff swallow of western North America. Its relative, the barn swallow, hunts larger, non-swarming insects, and is more solitary. Certain neotropical tyrant flycatchers will join mixed-species foraging flocks, as will some Asian drongos. Such flocks stir up flying insects, which can then be picked off in quick sallies. References Bird behavior Bird feeding
Hawking (birds)
[ "Biology" ]
2,439
[ "Behavior by type of animal", "Behavior", "Bird behavior" ]
8,801,957
https://en.wikipedia.org/wiki/Skylarking%20%28birds%29
Skylarking refers to the aerial displays, including song, made by various species of birds, such as Cassin's sparrow (Peterson 1990). Many skylarking displays are courtship displays, while some are territorial displays by the male. Birdwatchers have in some instances claimed that male birds use skylarking to avoid predators, the aim being that the predator will mistake the prey for another type of bird and end the pursuit; this survival tactic rarely works. References Birds Bird behavior
Skylarking (birds)
[ "Biology" ]
102
[ "Behavior by type of animal", "Behavior", "Animals", "Ethology", "Ethology stubs", "Birds", "Bird behavior" ]
8,802,065
https://en.wikipedia.org/wiki/Spread%20Toolkit
The Spread Toolkit is a computer software package that provides a high-performance group communication system resilient to faults across local and wide area networks. Spread functions as a unified message bus for distributed applications, and provides highly tuned application-level multicast, group communication, and point-to-point support. Spread services range from reliable messaging to fully ordered messages with delivery guarantees. The toolkit consists of a messaging server, and client libraries for many software development environments, including C/C++ libraries (with and without thread support), a Java class to be used by applets or applications, and interfaces for Perl, Python, and Ruby. Interfaces for many other software environments have been provided by third parties. In typical operation, each computer in a cluster runs its own instance of the Spread server, and client applications connect locally to that server process. The Spread servers, in turn, communicate with each other to pass messages to subscriber applications. It can also be configured so that clients distributed across the network all communicate with a Spread server process on one host. The Spread Toolkit is developed by Spread Concepts LLC, with substantial support from the Distributed Systems and Networks Lab (DSN) at Johns Hopkins University and the Experimental Networked Systems Lab at George Washington University. Partial funding was provided by the Defense Advanced Research Projects Agency (DARPA) and the National Security Agency (NSA). Bindings Bindings for Spread Toolkit exist for many languages and platforms: Ada C C++ C# Haskell Java Lua Microsoft Excel OCaml Perl PHP Python Ruby Squeak Scheme TCL References External links Middleware
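The connect, join, multicast and receive lifecycle of a Spread client described above can be sketched as follows. This is written in the style of the historical Python binding for Spread, but the module and method names are recalled from memory and should be treated as assumptions; check the binding's own documentation for the exact API.

```python
# Sketch of a Spread client lifecycle.  The module name and call signatures
# below are assumptions modeled on the historical Python binding for Spread;
# consult the binding's documentation before relying on them.
import spread  # assumed module name

# Connect to the local Spread daemon (conventionally "port@host") under a
# private connection name.
mbox = spread.connect("4803@localhost", "client-1", 0, 0)

mbox.join("demo-group")  # subscribe to a group

# Send one reliable message to the group.
mbox.multicast(spread.RELIABLE_MESS, "demo-group", b"hello, group")

msg = mbox.receive()  # blocks until the next message or membership event
print(msg)

mbox.leave("demo-group")
mbox.disconnect()
```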
Spread Toolkit
[ "Technology", "Engineering" ]
329
[ "Software engineering", "Middleware", "IT infrastructure" ]
8,802,094
https://en.wikipedia.org/wiki/Absorption%20cross%20section
In physics, absorption cross-section is a measure of the probability of an absorption process. More generally, the term cross-section is used in physics to quantify the probability of a certain particle-particle interaction, e.g., scattering, electromagnetic absorption, etc. (Note that light in this context is described as consisting of particles, i.e., photons.) A typical absorption cross-section has units of cm²⋅molecule⁻¹. In honor of the fundamental contribution of Maria Goeppert Mayer to this area, the unit for the two-photon absorption cross section is named the "GM". One GM is 10⁻⁵⁰ cm⁴⋅s⋅photon⁻¹. In the context of ozone shielding of ultraviolet light, absorption cross section is the ability of a molecule to absorb a photon of a particular wavelength and polarization. Analogously, in the context of nuclear engineering, it refers to the probability of a particle (usually a neutron) being absorbed by a nucleus. Although the units are given as an area, it does not refer to an actual size area, at least partially because the density or state of the target molecule will affect the probability of absorption. Quantitatively, the number of photons absorbed between the points $x$ and $x + dx$ along the path of a beam is the product of the number $N(x)$ of photons penetrating to depth $x$, times the number $n$ of absorbing molecules per unit volume, times the absorption cross section $\sigma$: $dN = -N(x)\,n\,\sigma\,dx$. The absorption cross-section is closely related to molar absorptivity and mass absorption coefficient. For a given particle and its energy, the absorption cross-section of the target material can be calculated from the mass absorption coefficient using: $\sigma = \mu_m M / N_A$, where: $\mu_m$ is the mass absorption coefficient, $M$ is the molar mass in g/mol, $N_A$ is the Avogadro constant. This is also commonly expressed as: $\sigma = \mu / n$, where: $\mu$ is the absorption coefficient, $n$ is the atomic number density. See also Cross section (physics) Photoionisation cross section Nuclear cross section Neutron cross section Mean free path Compton scattering Transmittance Attenuation Beer–Lambert law High energy X-rays Attenuation coefficient Absorption spectroscopy References Electromagnetism Nuclear physics Scattering, absorption and radiative transfer (optics)
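A worked example of the mass-absorption-coefficient conversion above, with illustrative input values (not measured data):

```python
# Convert a mass absorption coefficient into an absorption cross section
# using sigma = mu_m * M / N_A.  The input numbers are illustrative only.
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def cross_section_cm2(mu_m, molar_mass):
    """mu_m in cm^2/g, molar_mass in g/mol -> sigma in cm^2 per molecule."""
    return mu_m * molar_mass / N_A

# E.g. a hypothetical material with mu_m = 0.2 cm^2/g and M = 48 g/mol:
sigma = cross_section_cm2(0.2, 48.0)
print(f"sigma = {sigma:.3e} cm^2/molecule")  # ~1.6e-23 cm^2
```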
Absorption cross section
[ "Physics", "Chemistry" ]
439
[ "Electromagnetism", "Physical phenomena", " absorption and radiative transfer (optics)", "Scattering", "Fundamental interactions", "Nuclear physics" ]
8,802,261
https://en.wikipedia.org/wiki/0%20to%2060%20mph
The time it takes a vehicle to accelerate from 0 to 60 miles per hour (97 km/h or 27 m/s), often said as just "zero to sixty" or "nought to sixty", is a commonly used performance measure for automotive acceleration in the United States and the United Kingdom. In the rest of the world, 0 to 100 km/h (0 to 62.1 mph) is used. Present production model performance cars are capable of going from 0 to 60 mph in under 5 seconds, while some exotic supercars can do 0 to 60 mph in between 2 and 3 seconds. Motorcycles with engines under 500 cc have been able to achieve such figures since the 1990s. The quickest-accelerating automobile in 2015 was the Porsche 918 Spyder, a hybrid vehicle that takes 2.2 seconds to accelerate from 0 to 60 mph. In June 2021, the Tesla Model S Plaid was measured to accelerate from 0 to 60 mph in 1.98 seconds, not including the first foot of rollout. Methods Measuring the 0 to 60 mph speed of vehicles is usually done in a closed setting, such as a race track or a closed lot, by professional drivers. This is done to reduce risk to the drivers, their teams, and the public. The closed course is set up for test drives in order to reduce variables such as wind, weather, and traction. Each variable can have a dramatic impact on the friction of the track and the drag placed on the vehicle, which will influence the overall 0 to 60 time that is recorded. The crew sets up accurate and precise measuring tools that are attached to computers. These tools include Doppler radar guns and precise timing instruments that are synchronized. This means that the driver is not worried about keeping time or the exact moment the car hits 60 miles per hour. The driver focuses solely on driving straight and fast, with quick, professional gear shifting. The car is timed and recorded going in two separate and opposite directions. This practice eliminates variables such as wind, directional traction of the track, and driver performance. The two times are averaged together to achieve the commonly accepted 0 to 60 time. Jalopnik has said that launch control systems appearing on production exotic cars in the 2010s have made published 0 to 60 times invalid, since these cars have slower times from 5 mph to 60 mph. Some car magazines and manufacturers in the United States use a rolling-start allowance termed "1-foot rollout", which means that the timer is only started once the car has traveled one foot (30 cm), reducing the measured time by up to 0.3 seconds. See also List of fastest production cars by acceleration List of fastest production motorcycles by acceleration Vehicular metrics Motorcycle testing and measurement References Measurement Car performance
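To see where the "up to 0.3 seconds" figure for 1-foot rollout comes from, here is a rough Python estimate assuming constant acceleration from a standstill. This is a simplification: real cars accelerate hardest at launch, so the true saving is somewhat smaller for slower cars.

```python
# Estimate how much a 1-foot rollout shortens a measured 0-60 time,
# assuming constant acceleration from a standstill (a simplification).
import math

MPH_60 = 26.8224   # 60 mph in m/s
ROLLOUT = 0.3048   # 1 foot in metres

def rollout_saving(zero_to_sixty_s):
    a = MPH_60 / zero_to_sixty_s       # implied constant acceleration, m/s^2
    return math.sqrt(2 * ROLLOUT / a)  # time spent covering the first foot

for t in (2.0, 4.0, 6.0):
    print(f"{t:.1f} s car: rollout skips ~{rollout_saving(t):.2f} s")
# Output: ~0.21 s, ~0.30 s and ~0.37 s respectively; a mid-pack performance
# car gains roughly 0.3 s, consistent with the figure quoted above.
```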
0 to 60 mph
[ "Physics", "Mathematics" ]
549
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
8,802,414
https://en.wikipedia.org/wiki/CXCL1
The chemokine (C-X-C motif) ligand 1 (CXCL1) is a small peptide belonging to the CXC chemokine family that acts as a chemoattractant for several immune cells, especially neutrophils, as well as other non-hematopoietic cells, drawing them to the site of injury or infection, and plays an important role in the regulation of immune and inflammatory responses. It was previously called GRO1 oncogene, GROα, neutrophil-activating protein 3 (NAP-3) and melanoma growth stimulating activity, alpha (MGSA-α). CXCL1 was first cloned from a cDNA library of genes induced by platelet-derived growth factor (PDGF) stimulation of BALB/c-3T3 murine embryonic fibroblasts and named "KC" for its location in the nitrocellulose colony hybridization assay. This designation is sometimes erroneously believed to be an acronym and defined as "keratinocyte-derived chemokine". Rat CXCL1 was first reported when NRK-52E (normal rat kidney-52E) cells were stimulated with interleukin-1β (IL-1β) and lipopolysaccharide (LPS) to generate a cytokine that was chemotactic for rat neutrophils, cytokine-induced neutrophil chemoattractant (CINC). In humans, this protein is encoded by the gene CXCL1 and is located on human chromosome 4 among genes for other CXC chemokines. Structure and expression CXCL1 exists as both monomer and dimer, and both forms are able to bind the chemokine receptor CXCR2. However, CXCL1 is able to dimerize only at higher (micromolar) concentrations, and its concentrations are only nanomolar or picomolar under normal conditions, which means that wild-type (WT) CXCL1 is most likely monomeric, while dimeric CXCL1 is present only during infection or injury. The CXCL1 monomer consists of three antiparallel β-strands followed by a C-terminal α-helix, and this α-helix together with the first β-strand is involved in forming a dimeric globular structure. Under normal conditions, CXCL1 is not expressed constitutively. It is produced by a variety of immune cells, such as macrophages, neutrophils and epithelial cells, or the Th17 cell population. Moreover, its expression can also be induced indirectly by IL-1, TNF-α or IL-17 produced again by Th17 cells, and is triggered mainly by activation of NF-κB or C/EBPβ signaling pathways predominantly involved in inflammation and leading to production of other inflammatory cytokines. Function CXCL1 has a role potentially similar to that of interleukin-8 (IL-8/CXCL8). After binding to its receptor CXCR2, CXCL1 activates phosphatidylinositol-4,5-bisphosphate 3-kinase-γ (PI3Kγ)/Akt, MAP kinases such as ERK1/ERK2, or phospholipase C-β (PLCβ) signaling pathways. CXCL1 is expressed at higher levels during inflammatory responses, thus contributing to the process of inflammation. CXCL1 is also involved in the processes of wound healing and tumorigenesis. Role in cancer CXCL1 has a role in angiogenesis and arteriogenesis and thus has been shown to act in the process of tumor progression. Several studies have described the role of CXCL1 in the development of various tumors, such as breast cancer, gastric and colorectal carcinoma, or lung cancer. Also, CXCL1 is secreted by human melanoma cells, has mitogenic properties and is implicated in melanoma pathogenesis. Role in nervous system and sensitization CXCL1 plays a role in spinal cord development by inhibiting the migration of oligodendrocyte precursors. The CXCR2 receptor for CXCL1 is expressed in the brain and spinal cord by neurons and oligodendrocytes, and, during CNS pathologies such as Alzheimer's disease, multiple sclerosis and brain injury, also by microglia. 
An initial study in mice showed evidence that CXCL1 decreased the severity of multiple sclerosis and may offer a neuro-protective function. On the other hand, on the periphery, CXCL1 contributes to the release of prostaglandins and thus causes increased sensitivity to pain and drives nociceptive sensitization via recruitment of neutrophils to the tissue. Phosphorylation of ERK1/ERK2 kinases and activation of NMDA receptors leads to transcription of genes inducing chronic pain, such as c-Fos or cyclooxygenase-2 (COX-2). References External links Cytokines
CXCL1
[ "Chemistry" ]
1,077
[ "Cytokines", "Signal transduction" ]
8,802,504
https://en.wikipedia.org/wiki/Voltage%20graph
In graph theory, a voltage graph is a directed graph whose edges are labelled invertibly by elements of a group. It is formally identical to a gain graph, but it is generally used in topological graph theory as a concise way to specify another graph called the derived graph of the voltage graph. Typical choices of the groups used for voltage graphs include the two-element group $\mathbb{Z}_2$ (for defining the bipartite double cover of a graph), free groups (for defining the universal cover of a graph), d-dimensional integer lattices $\mathbb{Z}^d$ (viewed as a group under vector addition, for defining periodic structures in d-dimensional Euclidean space), and finite cyclic groups $\mathbb{Z}_n$ for n > 2. When $\Pi$ is a cyclic group, the voltage graph may be called a cyclic-voltage graph. Definition Formal definition of a $\Pi$-voltage graph, for a given group $\Pi$: Begin with a digraph G. (The direction is solely for convenience in notation.) A $\Pi$-voltage on an arc of G is a label of the arc by an element of $\Pi$. For instance, in the case where $\Pi = \mathbb{Z}_n$, the label is a number i (mod n). A $\Pi$-voltage assignment is a function $\alpha: E(G) \to \Pi$ that labels each arc of G with a $\Pi$-voltage. A $\Pi$-voltage graph is a pair $(G, \alpha)$ such that G is a digraph and $\alpha$ is a voltage assignment. The voltage group of a voltage graph $(G, \alpha)$ is the group $\Pi$ from which the voltages are assigned. Note that the voltages of a voltage graph need not satisfy Kirchhoff's voltage law, that the sum of voltages around a closed path is 0 (the identity element of the group), although this law does hold for the derived graphs described below. Thus, the name may be somewhat misleading. It results from the origin of voltage graphs as dual to the current graphs of topological graph theory. The derived graph The derived graph of a voltage graph $(G, \alpha)$ is the graph $\tilde G$ whose vertex set is $\tilde V = V \times \Pi$ and whose edge set is $\tilde E = E \times \Pi$, where the endpoints of an edge (e, k) such that e has tail v and head w are $(v, k)$ and $(w, k\alpha(e))$. Although voltage graphs are defined for digraphs, they may be extended to undirected graphs by replacing each undirected edge by a pair of oppositely ordered directed edges and by requiring that these edges have labels that are inverse to each other in the group structure. In this case, the derived graph will also have the property that its directed edges form pairs of oppositely oriented edges, so the derived graph may itself be interpreted as being an undirected graph. The derived graph is a covering graph of the given voltage graph. If no edge label of the voltage graph is the identity element, then the group elements associated with the vertices of the derived graph provide a coloring of the derived graph with a number of colors equal to the group order. An important special case is the bipartite double cover, the derived graph of a voltage graph in which all edges are labeled with the non-identity element of a two-element group. Because the order of the group is two, the derived graph in this case is guaranteed to be bipartite. Polynomial time algorithms are known for determining whether the derived graph of a $\mathbb{Z}^d$-voltage graph contains any directed cycles. Examples Any Cayley graph of a group $\Pi$, with a given set $\Gamma$ of generators, may be defined as the derived graph for a $\Pi$-voltage graph having one vertex and $|\Gamma|$ self-loops, each labeled with one of the generators in $\Gamma$. The Petersen graph is the derived graph for a $\mathbb{Z}_5$-voltage graph in the shape of a dumbbell with two vertices and three edges: one edge connecting the two vertices, and one self-loop on each vertex. One self-loop is labeled with 1, the other with 2, and the edge connecting the two vertices is labeled 0. 
More generally, the same construction allows any generalized Petersen graph GP(n,k) to be constructed as a derived graph of the same dumbbell graph with labels 1, 0, and k in the group $\mathbb{Z}_n$. The vertices and edges of any periodic tessellation of the plane may be formed as the derived graph of a finite graph, with voltages in $\mathbb{Z}^2$. Notes References Extensions and generalizations of graphs
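The dumbbell example above is easy to check computationally. The following Python sketch (an editorial illustration; the variable names are our own) builds the derived graph of the $\mathbb{Z}_5$ dumbbell voltage graph and verifies that it has the Petersen graph's parameters:

```python
# Build the derived graph of the dumbbell voltage graph over Z_5 (self-loops
# labeled 1 and 2, connecting edge labeled 0) and check that it has the
# Petersen graph's parameters: 10 vertices, 15 edges, 3-regular.
from collections import defaultdict

n = 5
# Arcs of the base digraph as (tail, head, voltage).
arcs = [("u", "u", 1), ("v", "v", 2), ("u", "v", 0)]

adj = defaultdict(set)
for tail, head, volt in arcs:
    for k in range(n):
        a, b = (tail, k), (head, (k + volt) % n)  # endpoints (v,k), (w,k+α(e))
        adj[a].add(b)
        adj[b].add(a)  # treat the derived graph as undirected

vertices = sorted(adj)
num_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
print(len(vertices), num_edges)                 # 10 15
print(all(len(adj[v]) == 3 for v in vertices))  # True (3-regular)
```

The two self-loops generate an outer 5-cycle and an inner pentagram, and the 0-labeled edge contributes the five spokes, which is exactly the usual drawing of the Petersen graph.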
Voltage graph
[ "Mathematics" ]
844
[ "Mathematical relations", "Graph theory", "Extensions and generalizations of graphs" ]
8,802,530
https://en.wikipedia.org/wiki/Ritalin%20class-action%20lawsuits
The Ritalin class-action lawsuits were a series of federal lawsuits in 2000, filed in five separate US states. All five lawsuits were dismissed by the end of 2002. The lawsuits alleged that the makers of methylphenidate (brand name Ritalin) and the American Psychiatric Association had conspired to invent and promote the disorder ADHD to create a highly profitable market for the drug. The lawsuits also alleged that CHADD (Children and Adults with Attention-Deficit/Hyperactivity Disorder) deliberately attempted to increase the supply of Ritalin and ease restrictions on the supply of Ritalin to help increase profits for Novartis. Previous lawsuits and history of class action Beginning in the 1980s, a series of lawsuits were filed based on the perceived harmful side effects of Ritalin. John Coale, who had participated in one of these lawsuits, joined what became an ever larger contingent of lawyers involved in what was then a growing series of Ritalin class-action lawsuits. In the late 1990s, there was a significant increase in production of Ritalin. A minority but vocal group of critics perceived that a crisis was at hand. Coale also expressed alarm: "They were giving this stuff away like candy". The Church of Scientology advocacy organization, Citizens Commission on Human Rights, and anti-psychiatry critics believed Ritalin to be highly dangerous and completely unnecessary. Coale seemed to share these beliefs, as he stated the purpose of the lawsuit to be "...to put [Ritalin] off the market." The St. Petersburg Times wrote at that time that Coale, like his wife Greta Van Susteren, was a practicing Scientologist. Richard Scruggs, like John Coale and a few other lawyers who participated in the Ritalin class-action lawsuits, had previously helped win landmark settlements from the asbestos and tobacco industries; Ritalin was to be the next major battleground. Scruggs would lead and also become a spokesman for the plaintiffs. He asserted that the Ritalin defendants "manufactured a disease" and that "it has been grossly over-prescribed. It is a huge risk." Peter Breggin, a noted psychiatrist and industry critic, was hired as a medical consultant by the firm and was also involved as a consultant in the other lawsuits. The first class action was filed in Texas by the law firm Waters & Kraus in 2000. They created a webpage called Ritalinfraud.com which had an online form to seek additional participants in class-action lawsuits. According to Breggin, plaintiffs' attorney Andy Waters had previously read his book Talking Back to Ritalin before filing his lawsuit. The firm believed that the improper conduct of Novartis rivaled the improper conduct of the tobacco and asbestos industries and that the drug company could be liable for billions of dollars. The firm claimed that Novartis specifically took the following steps to dramatically increase the sale of Ritalin. 
Actively promoting and supporting the concept that a significant percentage of children have a "disease" which required narcotic treatment/therapy; Actively promoting Ritalin as the "drug of choice" to treat children diagnosed with ADD and ADHD; Actively supporting groups such as defendant CHADD, both financially and with other means, so that such organizations would promote and support (as a supposed neutral party) the ever-increasing implementation of ADD/ADHD diagnoses as well as directly increasing Ritalin sales; Distributing misleading sales and promotional literature to parents, schools and other interested persons in a successful effort to further increase the number of diagnoses and the number of persons prescribed Ritalin. Novartis and APA respond A spokesperson for Novartis responded to the Texas suit: "Ritalin has been used safely and effectively in the treatment of millions of ADHD patients for over 40 years, and is the most studied drug prescribed for the disorder." The American Psychiatric Association stated that the allegation it had conspired with Novartis to create the ADHD diagnosis was "ludicrous and totally false," and said there existed "a mountain of scientific evidence to refute these meritless allegations." Outcome The first suit to be dismissed occurred in California in 2001. U.S. District Judge Rudi Brewster dismissed the suit under California's anti-SLAPP statute. A SLAPP (strategic lawsuit against public participation) is a form of litigation filed to intimidate and silence a less powerful critic by so severely burdening them with the cost of a legal defense that they abandon their criticism. The anti-SLAPP statute is designed to eliminate potential lawsuits that are in reality political actions by stopping them early in court procedures. Judge Brewster dismissed the suit, stating that the defendants' speech is "protected under both the United States and California Constitutions" and that plaintiffs "failed to state a cause of action." In addition to dismissing the suit, the court also ordered that the plaintiffs pay the legal fees for Novartis, the APA and CHADD. In the conclusion to one of the other lawsuits, Judge Tagla stated that "the allegations were fully without merit. Plaintiffs failed to provide any concrete statements to document their claims." By 2002, all five class-action lawsuits had been dismissed or withdrawn. A Novartis spokesperson stated: "...the fact that all five of the class action lawsuits have been dismissed sends a strong message that the decision of how to treat ADHD is between the parent, patient and physician, and has no place in the courts." See also List of class-action lawsuits References External links Original website set up by the Plaintiff's law firm Psychiatric News PBS series on ADHD Novartis press release Class action lawsuits Biology of attention deficit hyperactivity disorder Neuropharmacology
Ritalin class-action lawsuits
[ "Chemistry" ]
1,164
[ "Pharmacology", "Neuropharmacology" ]
8,803,454
https://en.wikipedia.org/wiki/Capability-based%20addressing
In computer science, capability-based addressing is a scheme used by some computers to control access to memory as an efficient implementation of capability-based security. Under a capability-based addressing scheme, pointers are replaced by protected objects (called capabilities) which specify both a location in memory, along with access rights which define the set of operations which can be carried out on the memory location. Capabilities can only be created or modified through the use of privileged instructions which may be executed only by either the kernel or some other privileged process authorised to do so. Thus, a kernel can limit the access of application code and other subsystems to the minimum necessary portions of memory (and disable write access where appropriate), without needing to use separate address spaces, and therefore without requiring a context switch when an access occurs. Practical implementations Two techniques are available for implementation: Require capabilities to be stored in a particular area of memory that cannot be written to by the process that will use them. For example, the Plessey System 250 required that all capabilities be stored in capability-list segments. Extend memory with an additional bit, writable only in supervisor mode, that indicates that a particular location is a capability. This is a generalization of the use of tag bits to protect segment descriptors in the Burroughs large systems, and it was used to protect capabilities in the IBM System/38. Capability addressing in the IBM System/38 and AS/400 The System/38 supported two types of object pointer: authorized pointers and unauthorized pointers, the former being the platform's implementation of capability-based addressing. Both types of pointer could only be manipulated using privileged instructions, and differed by whether object authorizations (i.e. access rights) were encoded in the contents of the pointer. Unauthorized pointers did not encode object authorizations, and required the operating system to check the object's authorization separately to determine if access to the object was allowed. Authorized pointers encoded object authorizations, meaning that possession of the pointer implied access, and the operating system was not required to verify authorization separately. Authorized pointers were irrevocable by design: altering an object's authorizations did not alter the authorizations encoded in any authorized pointers which already existed. Early versions of the OS/400 operating system for the AS/400 also supported authorized pointers, and by extension capability-based addressing. However, authorized pointers were removed in the V1R3 release of OS/400 as their irrevocable nature came to be seen as a security liability. All versions of OS/400 (later IBM i) since rely solely on unauthorized pointers, which do not support capability-based addressing. Chronology of systems adopting capability-based addressing 1969: System 250 – Plessey Company 1970–77: CAP computer – University of Cambridge Computer Laboratory 1978: System/38 – IBM 1980: Flex machine – Royal Signals and Radar Establishment (RSRE) Malvern 1981: Intel iAPX 432 – Intel 2014: CHERI (adds capabilities to existing ISAs for safer programming, even in C and C++) 2020: CHEx86 2022: ARM Morello (AArch64 with CHERI capabilities) References Further reading same document as report for US NIST External links Memory management Operating system security
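As a purely illustrative model of the idea (real systems enforce capabilities in hardware or in the kernel, not in application code), here is a toy Python sketch of a capability as a memory range paired with access rights:

```python
# Toy model of a capability: an unforgeable token pairing a memory range
# with access rights.  Purely illustrative; names are our own.
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen ~ "cannot be modified by the holder"
class Capability:
    base: int                    # start of the addressable range
    length: int                  # size of the range in bytes
    rights: frozenset            # e.g. {"read"} or {"read", "write"}

def check_access(cap, addr, op):
    """Allow op at addr only if it is inside the range and permitted."""
    in_bounds = cap.base <= addr < cap.base + cap.length
    return in_bounds and op in cap.rights

ro_cap = Capability(base=0x1000, length=256, rights=frozenset({"read"}))
print(check_access(ro_cap, 0x1010, "read"))   # True
print(check_access(ro_cap, 0x1010, "write"))  # False (no write right)
print(check_access(ro_cap, 0x2000, "read"))   # False (out of bounds)
```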
Capability-based addressing
[ "Technology" ]
667
[ "Capability systems", "Computer systems" ]
8,805,343
https://en.wikipedia.org/wiki/Skin%20allergy%20test
Skin allergy testing comprises a range of methods for medical diagnosis of allergies that attempts to provoke a small, controlled, allergic response. Methods A microscopic amount of an allergen is introduced to a patient's skin by various means: Skin prick test: pricking the skin with a needle or pin containing a small amount of the allergen. Skin scratch test: a deep dermal scratch is performed with the help of the blunt bottom of a lancet. Intradermal test: a tiny quantity of allergen is injected under the dermis with a hypodermic syringe. Skin scrape test: a superficial scrape is performed with the help of the back of a needle to remove the superficial layer of the epidermis. Patch test: applying a patch to the skin, where the patch contains the allergen. If an immune response is seen in the form of a rash, urticaria (hives), or anaphylaxis, it can be concluded that the patient has a hypersensitivity (or allergy) to that allergen. Further testing can be done to identify the particular allergen. The "skin scratch test", as it is called, is not commonly used, due to the increased likelihood of infection. On the other hand, the "skin scrape test" is painless, does not leave residual pigmentation, and does not have a risk of infection, since it is limited to the superficial layer of the skin. Some allergies are identified in a few minutes but others may take several days. In all cases where the test is positive, the skin will become raised, red, and appear itchy. The results are recorded, with larger wheals indicating that the subject is more sensitive to that particular allergen. A negative test does not conclusively rule out an allergy; occasionally, the concentration needs to be adjusted, or the body fails to elicit a response. Immediate reactions tests In the prick, scratch and scrape tests, a few drops of the purified allergen are gently pricked on to the skin surface, usually the forearm. This test is usually done in order to identify allergies to pet dander, dust, pollen, foods or dust mites. Intradermal injections are done by injecting a small amount of allergen just beneath the skin surface. The test is done to assess allergies to drugs like penicillin or bee venom. To ensure that the skin is reacting in the way it is supposed to, all skin allergy tests are also performed with positive controls like histamine and negative controls like glycerin. The majority of people do react to histamine and do not react to glycerin. If the skin does not react appropriately to these controls then it most likely will not react to the other allergens. These results are interpreted as falsely negative. Delayed reactions tests The patch test uses rectangles of special hypoallergenic adhesive tape with different allergens on them. The patch is applied to the skin, usually on the back. The allergens on the patch include latex, medications, preservatives, hair dyes, fragrances, resins, and various metals. Patch testing is used to detect allergic contact dermatitis but does not test for hives or food allergy. Skin end point titration Also called an intradermal test, skin end point titration (SET) uses an intradermal injection of allergens at increasing concentrations to measure allergic response. To prevent a severe allergic reaction, the test is started with a very dilute solution. After 10 minutes, the injection site is measured to look for growth of a wheal, a small swelling of the skin. Two millimeters of growth in 10 minutes is considered positive. 
If 2 mm of growth is noted, a second injection at a higher concentration is given to confirm the response. The end point is the concentration of antigen that causes an increase in the size of the wheal followed by confirmatory whealing. If the wheal grows larger than 13 mm, no further injections are given, since this is considered a major reaction (a toy sketch of this decision logic appears below).

Preparation
There are no major preparations required for skin testing. At the first consult, the subject's medical history is obtained and a physical examination is performed. All patients should bring a list of their medications, because some may interfere with the testing and others may increase the chance of a severe allergic reaction. Medications that commonly interfere with skin testing include the following:
Histamine antagonists like Allegra, Claritin, Benadryl, Zyrtec
Antidepressants like amitriptyline, doxepin
Antacids like Tagamet or Zantac

Patients who undergo skin testing should know that anaphylaxis can occur at any time, so if any of the following symptoms are experienced, a physician consultation is recommended immediately:
Low-grade fever
Lightheadedness or dizziness
Wheezing or shortness of breath
Extensive skin rash
Swelling of the face, lips or mouth
Difficulty swallowing or speaking

Contraindications
Even though skin testing may seem to be a benign procedure, it does have some risks, including swollen red bumps (hives) which may occur after the test. The hives usually disappear a few hours after the test; in rare cases they can persist for a day or two. These hives may be itchy and are best treated by applying an over-the-counter hydrocortisone cream. In very rare cases one may develop a full-blown allergic reaction. Physicians who perform skin tests always have equipment and medications available in case an anaphylactic reaction occurs. This is the main reason why people should not have skin testing performed at corner stores or by people who have no medical training.

Antihistamines, which are commonly used to treat allergy symptoms, interfere with skin tests, as they can prevent the skin from reacting to the allergens being tested. People who take an antihistamine need either to choose a different form of allergy test or to stop taking the antihistamine temporarily before the test. The period of time needed can range from a day or two to 10 days or longer, depending on the specific medication. Some medications not primarily used as antihistamines, including tricyclic antidepressants, phenothiazine-based antipsychotics, and several kinds of medications used for gastrointestinal disorders, can similarly interfere with skin tests.

People who have severe, generalized skin disease or an acute skin infection should not undergo skin testing, as uninvolved skin is needed for testing. Skin testing should also be avoided for people at a heightened risk of anaphylactic shock, including people who are known to be highly sensitive to even the smallest amount of allergen.

Besides skin tests, there are blood tests which measure a specific antibody in the blood. The IgE antibody plays a vital role in allergies, but its levels in blood do not always correlate with the allergic reaction. Many alternative health care practitioners perform a variety of provocation neutralization tests, but the vast majority of these tests have no validity and have never been proven to work scientifically.
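The titration rules above amount to a small decision procedure. The following is a minimal sketch of that logic, assuming wheal growth is recorded in millimetres after each successively stronger injection; the function name and data layout are hypothetical, and this is an illustration only, not a clinical protocol.

```python
# Minimal sketch of the SET decision rules described above (illustrative
# only, not a clinical protocol). Each entry is wheal growth in mm measured
# 10 minutes after an injection; injections go from weakest to strongest.
def set_endpoint(growths_mm):
    for i, growth in enumerate(growths_mm):
        if growth > 13:
            # a wheal growing beyond 13 mm is a major reaction: stop testing
            return ("major reaction", i)
        if growth >= 2:
            # 2 mm of growth in 10 minutes is positive; the endpoint must be
            # confirmed by whealing at the next, higher concentration
            if i + 1 < len(growths_mm) and growths_mm[i + 1] >= 2:
                return ("endpoint", i)
    return ("no endpoint", None)

print(set_endpoint([0, 1, 3, 4]))  # -> ('endpoint', 2): the third dilution
```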
See also
RAST test
Basophil activation
Allergies
Hypersensitivity
Dermatitis
Anaphylaxis
Prausnitz-Küstner test
Protein nitrogen unit
List of allergies

References

External links
The British Institute for Allergies
About.com - allergy tests
Skin Patches (Allergy Testing)

Allergology
Skin tests
Dermatologic procedures
Immunologic tests
Skin allergy test
[ "Biology" ]
1,579
[ "Immunologic tests" ]
8,805,458
https://en.wikipedia.org/wiki/Bustline
A bustline is an arbitrary line encircling the fullest part of the bust, or the body circumference at the bust. It is a body measurement: the circumference of a woman's torso at the level of the breasts. It is measured by keeping a measuring tape horizontal and wrapping it around the body so that it goes over the nipples and under the arms.

In relation to a woman's garment, the bustline is the outline or shape of a woman's bust, or the part of the garment which covers the breasts, such as a dress with a fitted bustline. The bustline is an important measure in women's clothing sizes. A full bust is another term for the bustline measure, but is also commonly used to refer to a breast with a cup size of at least C.

Other measures
These measures are usually taken together with the bustline when measuring for a woman's garment, especially if the garment is to be form-fitting at the bust. A high bust is a measure of a woman's torso above her breasts. An under-bust is the measure below the breasts. (A rough illustration of how these measures combine in sizing is sketched below.)

See also
Hemline
Neckline
Waistline

References

Anthropometry
Sizes in clothing
Body shape
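As a rough illustration of how the full-bust and under-bust measures described above are combined in practice, the sketch below applies one common US-style sizing convention, roughly one cup letter per inch of difference between the two measures. The letter sequence, function, and example values are illustrative assumptions, not part of the article; real sizing charts vary by manufacturer and region.

```python
# Illustrative only: one common US-style convention maps each inch of
# difference between the full-bust and under-bust measures to one cup
# letter. Real charts vary by brand and region.
def cup_size(bust_in, under_bust_in):
    letters = ["AA", "A", "B", "C", "D", "DD"]
    diff = round(bust_in - under_bust_in)
    return letters[max(0, min(diff, len(letters) - 1))]

print(cup_size(38, 35))  # 3-inch difference -> 'C', a "full bust"
```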
Bustline
[ "Physics", "Mathematics" ]
252
[ "Sizes in clothing", "Quantity", "Physical quantities", "Size" ]
8,805,625
https://en.wikipedia.org/wiki/Galaxy%20merger
Galaxy mergers can occur when two (or more) galaxies collide. They are the most violent type of galaxy interaction. The gravitational interactions between galaxies and the friction between the gas and dust have major effects on the galaxies involved, but the exact effects of such mergers depend on a wide variety of parameters such as collision angles, speeds, and relative size/composition, and are currently an extremely active area of research. Galaxy mergers are important because the merger rate is a fundamental measurement of galaxy evolution; it also provides astronomers with clues about how galaxies grew into their current forms over long stretches of time.

Description
During the merger, stars and dark matter in each galaxy become affected by the approaching galaxy. Toward the late stages of the merger, the gravitational potential begins changing so quickly that star orbits are greatly altered and lose any trace of their prior orbit. This process is called "violent relaxation". For example, when two disk galaxies collide, they begin with their stars in an orderly rotation in the planes of the two separate disks. During the merger, that ordered motion is transformed into random energy ("thermalized"). The resultant galaxy is dominated by stars that orbit the galaxy in a complicated and random network of interacting orbits, which is what is observed in elliptical galaxies.

Mergers are also sites of extreme amounts of star formation. The star formation rate (SFR) during a major merger can reach thousands of solar masses' worth of new stars each year, depending on the gas content of each galaxy and its redshift. Typical merger SFRs are less than 100 new solar masses per year. This is large compared to our galaxy, which makes only a few new stars each year (~2 new stars). Though stars almost never get close enough to actually collide in galaxy mergers, giant molecular clouds rapidly fall to the center of the galaxy, where they collide with other molecular clouds. These collisions then induce condensation of the clouds into new stars. We can see this phenomenon in merging galaxies in the nearby universe. Yet this process was more pronounced during the mergers that formed most of the elliptical galaxies we see today, which likely occurred 1–10 billion years ago, when there was much more gas (and thus more molecular clouds) in galaxies. Also, away from the center of the galaxy, gas clouds will run into each other, producing shocks which stimulate the formation of new stars in gas clouds.

The result of all this violence is that galaxies tend to have little gas available to form new stars after they merge. Thus, if a galaxy is involved in a major merger and a few billion years pass, the galaxy will have very few young stars (see Stellar evolution) left. This is what we see in today's elliptical galaxies: very little molecular gas and very few young stars. It is thought that this is because elliptical galaxies are the end products of major mergers, which use up the majority of gas during the merger, so that further star formation after the merger is quenched.

Galaxy mergers can be simulated in computers, to learn more about galaxy formation. Galaxy pairs initially of any morphological type can be followed, taking into account all gravitational forces, and also the hydrodynamics and dissipation of the interstellar gas, the star formation out of the gas, and the energy and mass released back into the interstellar medium by supernovae.
Such a library of galaxy merger simulations can be found on the GALMER website. A study led by Jennifer Lotz of the Space Telescope Science Institute in Baltimore, Maryland created computer simulations in order to better understand images taken by the Hubble Space Telescope. Lotz's team tried to account for a broad range of merger possibilities, from a pair of galaxies with equal masses joining to an interaction between a giant galaxy and a tiny one. The team also analyzed different orbits for the galaxies, possible collision impacts, and how galaxies were oriented to each other. In all, the group came up with 57 different merger scenarios and studied the mergers from 10 different viewing angles. One of the largest galaxy mergers ever observed consisted of four elliptical galaxies in the cluster CL0958+4702. It may form one of the largest galaxies in the Universe.

Categories
Galaxy mergers can be classified into distinct groups according to the properties of the merging galaxies, such as their number, their comparative size and their gas richness.

By number
Mergers can be categorized by the number of galaxies engaged in the process:
Binary merger
Two interacting galaxies merge.
Multiple merger
Three or more galaxies merge.

By size
Mergers can be categorized by the extent to which the largest involved galaxy is changed in size or form by the merger:
Minor merger
A merger is minor if one of the galaxies is significantly larger than the other(s). The larger galaxy will often "eat" the smaller - a phenomenon aptly named "galactic cannibalism" - absorbing most of its gas and stars with little other significant effect on the larger galaxy. Our home galaxy, the Milky Way, is thought to be currently absorbing several smaller galaxies in this fashion, such as the Canis Major Dwarf Galaxy, and possibly the Magellanic Clouds. The Virgo Stellar Stream is thought to be the remains of a dwarf galaxy that has been mostly merged with the Milky Way.
Major merger
A merger of two spiral galaxies of approximately the same size is major; if they collide at appropriate angles and speeds, they will likely merge in a fashion that drives away much of the dust and gas through a variety of feedback mechanisms that often include a stage with active galactic nuclei. This is thought to be the driving force behind many quasars. The result is an elliptical galaxy, and many astronomers hypothesize that this is the primary mechanism that creates ellipticals. One study found that large galaxies merged with each other on average once over the past 9 billion years; small galaxies coalesced with large galaxies more frequently. Note that the Milky Way and the Andromeda Galaxy are predicted to collide in about 4.5 billion years. The expected result of this merger would be major, as the two galaxies have similar sizes, and they would change from two "grand design" spiral galaxies to (probably) a giant elliptical galaxy.

By gas richness
Mergers can be categorized by the degree to which the gas (if any) carried within and around the merging galaxies interacts:
Wet merger
A wet merger is between gas-rich galaxies ("blue" galaxies). Wet mergers typically produce a large amount of star formation, transform disc galaxies into elliptical galaxies, and trigger quasar activity.
Dry merger
A merger between gas-poor galaxies ("red" galaxies) is called dry. Dry mergers typically do not greatly change the galaxies' star formation rates, but can play an important role in increasing stellar mass.
Damp merger
A damp merger occurs between the two galaxy types mentioned above ("blue" and "red" galaxies) if there is enough gas to fuel significant star formation but not enough to form globular clusters.
Mixed merger
A mixed merger occurs when gas-rich and gas-poor galaxies ("blue" and "red" galaxies) merge.

Merger history trees
In the standard cosmological model, any single galaxy is expected to have formed from a few or many successive mergers of dark matter haloes, in which gas cools and forms stars at the centres of the haloes, becoming the optically visible objects historically identified as galaxies during the twentieth century. Modelling the mathematical graph of the mergers of these dark matter haloes, and in turn the corresponding star formation, was initially treated either by analysing purely gravitational N-body simulations or by using numerical realisations of statistical ("semi-analytical") formulae.

At a 1992 observational cosmology conference in Milan, Roukema, Quinn and Peterson showed the first merger history trees of dark matter haloes extracted from cosmological N-body simulations. These merger history trees were combined with formulae for star formation rates and evolutionary population synthesis, yielding synthetic luminosity functions of galaxies (statistics of how many galaxies are intrinsically bright or faint) at different cosmological epochs. Given the complex dynamics of dark matter halo mergers, a fundamental problem in modelling a merger history tree is to define when a halo at one time step is a descendant of a halo at the previous time step. Roukema's group chose to define this relation by requiring the halo at the later time step to contain strictly more than 50 percent of the particles in the halo at the earlier time step; this guaranteed that between two time steps, any halo could have at most a single descendant (a toy illustration of this rule is sketched below). This galaxy formation modelling method yields rapidly calculated models of galaxy populations with synthetic spectra and corresponding statistical properties comparable with observations.

Independently, Lacey and Cole showed at the same 1992 conference how they used the Press–Schechter formalism combined with dynamical friction to statistically generate Monte Carlo realisations of dark matter halo merger history trees and the corresponding formation of the stellar cores (galaxies) of the haloes. Kauffmann, White and Guiderdoni extended this approach in 1993 to include semi-analytical formulae for gas cooling, star formation, gas reheating from supernovae, and the hypothesised conversion of disc galaxies into elliptical galaxies. Both the Kauffmann group and Okamoto and Nagashima later took up the N-body-simulation-derived merger history tree approach.

Examples
Some of the galaxies that are in the process of merging or are believed to have formed by merging are:
Antennae Galaxies
Mice Galaxies
Centaurus A
NGC 7318
Arp 273

Gallery

See also
Andromeda–Milky Way collision
Bulge (astronomy)
Galaxy formation and evolution
Interacting galaxy
List of stellar streams
Mass deficit
Pea galaxy

References

External links
Andromeda involved in galactic collision – NBC News
"GALMER: Galaxy Merger Simulations"

Galaxies
Galaxy merger
Impact events
Articles containing video clips
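The descendant rule from the merger history trees section above can be illustrated with a short sketch. Here halos are represented as sets of particle IDs; the data layout and function are illustrative assumptions, not the actual pipeline used by Roukema's group.

```python
# Toy illustration of the descendant rule described above: a halo at step
# t+1 is the descendant of a halo at step t if it contains strictly more
# than 50% of the earlier halo's particles. Because the majority is strict,
# each earlier halo can have at most one descendant.
def find_descendants(halos_t, halos_t1):
    """Both arguments map halo id -> set of particle ids."""
    links = {}
    for a_id, a_parts in halos_t.items():
        for b_id, b_parts in halos_t1.items():
            if 2 * len(a_parts & b_parts) > len(a_parts):
                links[a_id] = b_id  # strict majority: no other halo qualifies
                break
    return links

step_t  = {1: {10, 11, 12, 13}, 2: {20, 21, 22}}
step_t1 = {5: {10, 11, 12, 20, 21, 22}}   # halos 1 and 2 merge into halo 5
print(find_descendants(step_t, step_t1))  # -> {1: 5, 2: 5}
```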
Galaxy merger
[ "Astronomy" ]
1,994
[ "Astronomical events", "Impact events" ]
8,806,315
https://en.wikipedia.org/wiki/SSPC-SP13/NACE%20No.%206
SSPC-SP13/NACE No. 6 Surface Preparation of Concrete is a joint SSPC and NACE International standard that covers the preparation of concrete surfaces prior to the application of protective coating or lining systems. The standard is intended for use by specifiers, applicators, inspectors, and others who are responsible for defining a standard degree of cleanliness, strength, profile, and dryness of prepared concrete surfaces prior to the application of a protective coating system.

External links
SSPC: The Society for Protective Coatings
NACE International

Coatings
SSPC-SP13/NACE No. 6
[ "Chemistry" ]
111
[ "Coatings" ]
8,806,323
https://en.wikipedia.org/wiki/Home%20medical%20equipment
This article discusses the definitions and types of home medical equipment (HME), also known as durable medical equipment (DME), and durable medical equipment, prosthetics, orthotics and supplies (DMEPOS).

HME / DMEPOS
Home medical equipment is a category of devices used for patients whose care is being managed from a home or other private facility by a nonprofessional caregiver or family member. It is often referred to as "durable" medical equipment (DME), as it is intended to withstand repeated use by non-professionals or the patient, and is appropriate for use in the home. Medical supplies of an expendable nature, such as bandages, rubber gloves and irrigating kits, are not considered by Medicare to be DME.

Within the US medical and insurance industries, the following acronyms are used to describe home medical equipment:
DME: Durable Medical Equipment
HME: Home Medical Equipment
DMEPOS: Durable Medical Equipment, Prosthetics, Orthotics and Supplies

Types of home medical equipment
The following are representative examples of home medical equipment:
Air ionizer
Air purifier
Apnea monitor
Artificial limb
Bedpan
Cannula
Catheter
Colostomy bag
CPAP machine
Crutch
Diabetic shoes
Drug test
Enemas
Feeding tube
Glucose meter
Heating pad
Hospital bed
Infusion pump
Lift chair
Nasal cannula
Nebulizer
Oxygen concentrator
Oxygen cylinder
Patient lift
Pill splitter
Prosthetic device
Pulse oximeter
Traction splint
Walker
Ventilator
Wheelchair

Obtaining and using home medical equipment
For most home medical equipment to be reimbursed by insurance, a patient must have a doctor's prescription for the equipment needed. Some equipment, such as oxygen, is FDA-regulated and must be prescribed by a physician before purchase, whether insurance-reimbursed or otherwise. The physician may recommend a supplier for the home medical equipment, or the patient will have to research this on their own. HME / DMEPOS suppliers are located throughout the country, and some specialty shops can also be found on the internet.

There is no established typical size for HME / DMEPOS suppliers. Supply companies range from very large organizations, such as Walgreens, Lincare, and Apria, to smaller local companies operated by sole proprietors or families. A newer development in the home medical equipment arena is the advent of internet retailers, who have lower operating costs and so often sell equipment at lower prices than local "brick and mortar" suppliers, but lack the ability to offer in-home setup, equipment training and customer service. In all cases, however, strict rules and laws govern HME / DMEPOS suppliers that participate in Medicare and Medicaid programs. In addition to the rules outlined by the National Supplier Clearinghouse, a division of CMS (the Centers for Medicare and Medicaid Services), all Medicare DME suppliers must obtain and maintain accreditation from one of many approved accrediting bodies.

Once a patient or caregiver selects an appropriate HME / DMEPOS supplier, he or she presents the supplier with the prescription and the patient's insurance information. HME / DMEPOS suppliers maintain an inventory of products and equipment, so fulfillment of the prescription is rapid, much like a pharmacy. The HME / DMEPOS supplier is obligated to perform certain functions when providing home medical equipment.
These include:
Proper delivery and setup of the equipment
Ensuring the home environment is suitable and safe for proper usage of the equipment
Training the patient, family and caregivers on the proper usage and maintenance of the equipment
Informing the patient and/or caregiver of their rights and responsibilities

All HME / DMEPOS suppliers are required to comply with the Health Insurance Portability and Accountability Act (HIPAA) to protect patients' confidentiality and records.

Insurance
In the United States, home medical equipment is typically covered by a patient's healthcare insurance, including Medicare (Part B). In order to properly code home medical equipment for billing, the Healthcare Common Procedure Coding System (HCPCS) is utilized. As of 2014, under the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, providers of HME/DMEPOS are required to become third-party accredited to standards regulated by the Centers for Medicare and Medicaid Services (CMS) in order to continue eligibility under Medicare Part B. This effort aims to standardize and improve the quality of service provided to patients by home medical equipment suppliers.

See also
Medical device
Medical technology
Medical equipment
Loan closet
Medtrade – the largest international trade fair for HME in the US

References

Medical equipment
Medicare and Medicaid (United States)
Home medical equipment
[ "Biology" ]
951
[ "Medical equipment", "Medical technology" ]
8,806,722
https://en.wikipedia.org/wiki/Planar%20Hall%20sensor
The planar Hall sensor is a type of magnetic sensor based on the planar Hall effect of ferromagnetic materials. It measures the change in anisotropic magnetoresistance caused by an external magnetic field in the Hall geometry. As opposed to an ordinary Hall sensor, which measures field components perpendicular to the sensor plane, the planar Hall sensor responds to magnetic field components in the sensor plane.

Generally speaking, for ferromagnetic materials the resistance is larger when the current flows along the direction of magnetization than when it flows perpendicular to the magnetization vector. This creates an asymmetric electric field perpendicular to the current, which depends on the magnetization state of the sensor. Precisely controlling the magnetization state is the key to the operation of the planar Hall sensor. From fabrication, the magnetization is confined to one particular direction in zero applied field, and the application of a field perpendicular to this direction changes the magnetization state in such a way that the electronic readout is linear with respect to the magnitude of the applied field. This holds for applied fields smaller than a fourth of the intrinsic effective anisotropy field (see ref. 1 for details on the working principle; a toy model of the response is sketched below).

The planar Hall sensor has been demonstrated as a magnetic bead detector, and has been used to measure the Earth's field with nanotesla precision. As a magnetic bead sensor, the planar Hall sensor can be used as the sensing principle in a magnetic bioassay. In ref. 5, detection of influenza viruses was demonstrated using an immunoassay imitating a sandwich ELISA based on monoclonal antibodies.

References

Electrical components
Electric and magnetic fields in matter
Hall effect
Spintronics
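As a toy model of the in-plane response described above, the sketch below uses the standard anisotropic-magnetoresistance form of the planar Hall signal, V proportional to sin(theta)cos(theta), with coherent rotation giving sin(theta) = H/H_k for fields below the anisotropy field H_k. All parameter values and the function itself are illustrative assumptions, not values from the article or its references.

```python
import math

# Toy model (illustrative parameters): transverse planar Hall voltage
# V = (I * delta_rho / t) * sin(theta) * cos(theta), with coherent rotation
# giving sin(theta) = H / H_k for |H| < H_k (field applied in-plane,
# perpendicular to the easy axis).
def planar_hall_voltage(H, H_k=1.0, delta_rho=1e-9, current=1e-3, t=20e-9):
    s = max(-1.0, min(1.0, H / H_k))   # sin(theta), clipped at saturation
    c = math.sqrt(1.0 - s * s)         # cos(theta)
    return (current * delta_rho / t) * s * c

# The response stays close to linear for |H| below roughly H_k / 4:
for H in (0.05, 0.10, 0.20, 0.25, 0.50):
    print(f"H = {H:.2f} H_k  ->  V = {planar_hall_voltage(H):.3e} V")
```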
Planar Hall sensor
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
344
[ "Physical phenomena", "Electrical components", "Hall effect", "Spintronics", "Electric and magnetic fields in matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Electrical engineering", "Solid state engineering", "Components" ]
17,426,327
https://en.wikipedia.org/wiki/Droxidopa
Droxidopa, also known as L-threo-dihydroxyphenylserine (L-DOPS) and sold under the brand names Northera and Dops among others, is a sympathomimetic medication used in the treatment of hypotension (low blood pressure) and for other indications. It is taken by mouth.

Side effects of droxidopa include headache, dizziness, nausea, and hypertension, among others. Droxidopa is a synthetic amino acid precursor which acts as a prodrug to the neurotransmitter norepinephrine (noradrenaline). Hence, it acts as a non-selective agonist of the α- and β-adrenergic receptors. Unlike norepinephrine, but similarly to levodopa (L-DOPA), droxidopa is capable of crossing the protective blood–brain barrier (BBB).

Droxidopa was first described by 1971. It was approved for use in Japan in 1989 and was introduced in the United States in 2014.

Medical uses
Droxidopa is approved for use in the treatment of orthostatic hypotension, intradialytic hypotension (IDH; hemodialysis-induced hypotension), dizziness, and amyloid polyneuropathy. For hypotension, it is specifically used in the treatment of neurogenic orthostatic hypotension (NOH) in dopamine β-hydroxylase deficiency, as well as NOH associated with multiple system atrophy (MSA), familial amyloid polyneuropathy (FAP), and pure autonomic failure (PAF). The drug is also used off-label in the treatment of freezing of gait in Parkinson's disease.

Side effects
With over 20 years on the market, droxidopa has proven to have few side effects, most of which are mild. The most common side effects reported in clinical trials include headache, dizziness, nausea, hypertension and fatigue.

Pharmacology
Droxidopa is a prodrug of norepinephrine used to increase the concentration of this neurotransmitter in the body and brain. It is metabolized by aromatic L-amino acid decarboxylase (AAAD), also known as DOPA decarboxylase (DDC). Patients with NOH have depleted levels of norepinephrine, which leads to decreased blood pressure, or hypotension, upon orthostatic challenge. Droxidopa works by increasing the levels of norepinephrine in the peripheral nervous system (PNS), thus enabling the body to maintain blood flow upon and while standing. Droxidopa can also cross the blood–brain barrier (BBB), where it is converted to norepinephrine within the brain. Increased levels of norepinephrine in the central nervous system (CNS) may be beneficial to patients in a wide range of indications. Droxidopa can be coupled with a peripheral aromatic L-amino acid decarboxylase inhibitor (AAADI), also called a DOPA decarboxylase inhibitor, such as carbidopa (Lodosyn), to increase central norepinephrine concentrations while minimizing increases in peripheral levels.

Chemistry
Droxidopa, also known as (–)-threo-3-(3,4-dihydroxyphenyl)-L-serine (L-DOPS), is a substituted phenethylamine and is chemically analogous to levodopa (L-3,4-dihydroxyphenylalanine; L-DOPA). Whereas levodopa functions as a precursor and prodrug to dopamine, droxidopa is a precursor and prodrug of norepinephrine.

History
Droxidopa was first described in the scientific literature by 1971. It was developed by Sumitomo Pharmaceuticals for the treatment of hypotension, including NOH, and NOH associated with various disorders such as MSA, FAP, and PD, as well as IDH. The drug has been used in Japan and some surrounding Asian areas for these indications since 1989.
Following a merger with Dainippon Pharmaceuticals in 2006, Dainippon Sumitomo Pharma licensed droxidopa to Chelsea Therapeutics to develop and market it worldwide except in Japan, Korea, China, and Taiwan. In February 2014, the United States Food and Drug Administration approved droxidopa for the treatment of symptomatic neurogenic orthostatic hypotension.

Clinical trials
A systematic review and meta-analysis of clinical trials comparing the clinical use of droxidopa and midodrine found that midodrine was more likely to cause supine hypertension than droxidopa in patients with NOH. Midodrine was also found to be slightly more effective at raising blood pressure, but the difference was not statistically significant. Chelsea Therapeutics obtained orphan drug status (ODS) for droxidopa in the US for NOH, including NOH associated with PD, PAF, and MSA. In 2014, Chelsea Therapeutics was acquired by Lundbeck along with the rights to droxidopa, which was launched in the US in September 2014.

Society and culture

Names
Droxidopa is the generic name of the drug. Brand names of droxidopa include Dops and Northera.

Research
Droxidopa alone and in combination with carbidopa has been studied in the treatment of attention deficit hyperactivity disorder (ADHD). Droxidopa was under development for the treatment of ADHD, chronic fatigue syndrome, and fibromyalgia, but development for these indications was discontinued.

References

External links

Alpha-adrenergic agonists
Alpha-Amino acids
Antihypotensive agents
Beta-adrenergic agonists
Cardiac stimulants
Catecholamines
Monoamine precursors
Phenylethanolamines
Prodrugs
Sympathomimetics
Vasoconstrictors
Droxidopa
[ "Chemistry" ]
1,311
[ "Chemicals in medicine", "Prodrugs" ]
17,427,242
https://en.wikipedia.org/wiki/Light%20reflectance%20value
In architecture, light reflectance value (LRV) is a measure of the visible and usable light that is reflected from a surface when illuminated by a light source. The measurement is most commonly used by design professionals, such as architectural color consultants, architects, environmental graphic designers and interior designers. LRVs are frequently reported on paint chips or paint samples. The values are used by lighting designers to determine the number and type of light fixtures needed to provide proper lighting for interior spaces.

Guidance
Designers of buildings must comply with the building codes applicable to the structure under consideration. Since 2004, guidance has existed on access to and use of buildings. The guidance is particularly concerned with provisions to assist the disabled, including those who are visually impaired, and highlights the need for certain surfaces and features to contrast visually with their surroundings. Areas of particular interest are wall-to-ceiling and wall-to-floor junctions, exposed edges of sloping floors, seating and its surroundings, leading edges of doors, door opening furniture and door surfaces, sanitary fittings and grab bars. This is relevant to a wide range of non-residential buildings, such as hospitals, schools, hotels, and theatres.

Codes of practice
The British Standards Institute's guidance in the Regulations and in the relevant Code of Practice, BS 8300:2018, is that adequate visual contrast is provided if the light reflectance values of the contrasting areas differ by at least thirty points. The current British Standard for the measurement of LRV is BS 8493:2008+A1:2010. The Americans with Disabilities Act Standards for Accessible Design does not specify a light reflectance value for contrast on signage with words or pictograms; instead, § 703.5.1 provides that "characters shall contrast with their background with either light characters on a dark background or dark characters on a light background". The International Code Council follows the ADA approach and does not use a light reflectance value in the 2017 update to the standards for ICC A117.1, § 703.5.3.2. The United Nations Economic Commission for Europe requires a difference of sixty points between the LRVs for the contrast requirement of signage in "Railway Applications — Design for PRM Use - General Requirements — Part 1: Contrast". (A sketch of these contrast checks appears below.) Manufacturers are advised by the Guild of Architectural Ironmongery to publish the LRV for their products.

References
Bradshaw, Vaugh, P.E. Building Control Systems. New York: John Wiley & Sons, Inc. Second Edition, 1993.

External links
LRV - What is it? How is it used?
Measurement: http://www.lucideon.com/industries/construction/building-products/light-reflectance-value-testing [link is not valid]

Interior design
Color appearance phenomena
Lighting designers
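The point-difference rules quoted above reduce to a simple check. The sketch below is a minimal illustration of the BS 8300 thirty-point rule and the UNECE sixty-point signage rule; the function name and example LRVs are assumptions, not values from any standard.

```python
# Minimal sketch: two surfaces are treated as adequately contrasting if
# their LRVs (0-100 scale) differ by at least the chosen threshold:
# 30 points per the BS 8300 guidance, 60 points per the UNECE signage rule.
def adequate_contrast(lrv_a, lrv_b, threshold=30):
    return abs(lrv_a - lrv_b) >= threshold

print(adequate_contrast(81, 45))       # e.g. wall vs. door leaf -> True
print(adequate_contrast(70, 55))       # only 15 points apart -> False
print(adequate_contrast(85, 20, 60))   # UNECE signage rule -> True
```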
Light reflectance value
[ "Physics" ]
569
[ "Optical phenomena", "Physical phenomena", "Color appearance phenomena" ]
17,428,178
https://en.wikipedia.org/wiki/Deepwater%20drilling
Deepwater drilling, or deep well drilling, is the process of creating holes in the Earth's crust using a drilling rig for oil extraction under the deep sea. There are approximately 3400 deepwater wells in the Gulf of Mexico with depths greater than 150 meters.

For many years, deepwater drilling was not technologically or economically feasible, but with rising oil prices, more companies are investing in this sector. Major investors include Halliburton, Diamond Offshore, Transocean, Geoservices, and Schlumberger. The deepwater gas and oil market has been back on the rise since the 2010 Deepwater Horizon disaster, with total expenditures of around US$35 billion per year in the market and total global capital expenditures of US$167 billion in the past four years. Industry analysis by business intelligence company Visiongain estimated in 2011 that total expenditures in global deepwater infrastructure would reach US$145 billion.

A HowStuffWorks article explains how and why deepwater drilling is practiced. In the Deepwater Horizon oil spill of 2010, a large explosion occurred while a BP oil rig was drilling in deep waters, killing workers and spilling oil into the Gulf of Mexico.

History
Some of the earliest evidence of water wells is located in China. The Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text The Book of Changes, originally a divination text of the Western Zhou dynasty (1046–771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. Archaeological evidence and old Chinese documents reveal that the prehistoric and ancient Chinese had the aptitude and skills for digging deep water wells for drinking water as early as 6000 to 7000 years ago. A well excavated at the Hemedu excavation site is believed to have been built during the Neolithic era. The well was cased by four rows of logs with a square frame attached to them at the top of the well. Sixty additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation.

Types of deepwater drilling facilities
Drilling in deep waters can be performed by two main types of mobile deepwater drilling rigs: semi-submersible drilling rigs and drillships. Drilling can also be performed from a fixed-position installation such as a fixed platform, or from a floating platform such as a spar platform, a tension-leg platform, or a semi-submersible production platform.

Fixed platform
A fixed platform consists of a tall (usually steel) structure that supports a deck. Because the fixed platform is anchored to the sea floor, it is very costly to build. This type of platform can be installed in water depth up to .

Jack-up rig
Jack-up rigs are mobile units with a floating hull that can be moved around; once they arrive at the desired location, the legs are lowered to the seafloor and locked into place, and the platform is raised up out of the water. That makes this type of rig safer to work on, because weather and waves are not an issue.

Compliant tower platform
A compliant tower is a particular type of fixed platform. Both are anchored to the seafloor, and both have workplaces above the water surface. However, the compliant tower is taller and narrower and can operate in water depths up to 1 kilometer (3,000 feet).

Semi-submersible production platform
This platform is buoyant, meaning the bulk of it is floating above the surface.
However, the wellhead is typically located on the seafloor, so extra precautions must be taken to prevent a leak; a contributing cause of the 2010 oil spill disaster was a failure of the leak-preventing system. These rigs can operate anywhere from below the surface.

Tension-leg platform
The tension-leg platform consists of a floating structure held in place by tendons that run down to the seafloor. These rigs drill smaller deposits in narrower areas, making this a low-cost way to get a little oil, which attracts many companies. These rigs can drill anywhere from below the surface.

Subsea system
Subsea systems are actually wellheads, which sit on the seafloor and extract oil straight from the ground. They use pipes to force the oil back up to the surface, and can siphon oil to nearby platform rigs, a ship overhead, a local production hub, or even a faraway onshore site. This makes the subsea system very versatile and a popular choice for companies.

Spar platform
Spar platforms use a large cylinder to support the floating deck from the seafloor. On average, about 90% of a spar platform's structure is underwater. Most spar platforms are used at depths of up to 1 kilometer (3,000 feet), but new technology can extend them to function up to below the surface. That makes the spar platform one of the deepest drilling rigs in use today.

2010 Deepwater Horizon oil spill
On 20 April 2010, a BP deepwater oil rig (Deepwater Horizon) exploded, killing 11 workers and releasing 750,000 cubic meters (200 million gallons) of oil into the Gulf of Mexico. With those numbers, many scientists consider this disaster to be one of the worst environmental disasters in the history of the US. A large number of animal deaths resulted from the release of the oil: a Center study estimates that over 82,000 birds, about 6,000 sea turtles, and nearly 26,000 marine mammals were killed by either the initial explosion or the oil spill. Deepwater wellbore integrity has become an increasingly important topic in the field of petroleum engineering.

See also
General: Offshore drilling, Well drilling, Shallow water drilling, Extraction of petroleum, Age of Oil, Fossil fuel drilling (disambiguation), Energy development, Hubbert peak theory
Other: 2010 United States deepwater drilling moratorium, Submersible pump, IntelliServ, Petroleum industry in Mexico, Deepwater Horizon
People: Michael Klare, Jason Leopold

References

External articles
Deepwater Drilling: How It Works | Chevron | Video. chevron.com.
HowStuffWorks "Ultra Deep Water Oil Drilling". science.howstuffworks.com.
Rigzone – Deepwater Gulf of Mexico Drilling Activity to Keep Rising. rigzone.com. April 24, 2013.

Chinese inventions
Petroleum production
Petroleum industry
Deepwater drilling
[ "Chemistry" ]
1,300
[ "Chemical process engineering", "Petroleum", "Petroleum industry" ]
17,428,713
https://en.wikipedia.org/wiki/Customer%20experience
Customer experience, sometimes abbreviated to CX, is the totality of cognitive, affective, sensory, and behavioral responses of a customer during all stages of the consumption process, including the pre-purchase, consumption, and post-purchase stages. Dimensions of customer experience include senses, emotions, feelings, perceptions, cognitive evaluations, involvement, memories, spiritual components, and behavioral intentions. The pre-consumption anticipation experience can be described as the amount of pleasure or displeasure received from savoring future events, while the remembered experience is related to a recollection of memories about previous events and experiences of a product or service.

Definitions
According to Forrester Research (via Fast Company), the foundational elements of a remarkable customer experience consist of six key disciplines, beginning with strategy, customer understanding, design, measurement, governance and culture. A company's ability to deliver an experience that sets it apart in the eyes of its customers will increase the amount of consumer spending with the company and inspire loyalty to its brand. According to Jessica Sebor, "loyalty is now driven primarily by a company's interaction with its customers and how well it delivers on their wants and needs".

Barbara E. Kahn, Wharton's Professor of Marketing, has established an evolutional approach to customer experience as the third of four stages of any company in terms of its customer-centricity maturity. These progressive phases are:
Product orientation: Companies manufacture goods and offer them in the best way possible.
Market orientation: Some consideration of customer needs and segmentation arises, developing different marketing mix bundles for each one.
Customer experience: Adding to the other two factors some recognition of the importance of providing an emotionally positive experience to customers.
Authenticity: This is the most mature stage for companies. Products and services emerge from the real soul of the brand and connect naturally with clients and other stakeholders, for the long term.

In today's competitive climate, more than just low prices and innovative products are required to survive in the retail business. Customer experience involves every point of contact a customer has with a business and every interaction with its products or services. Customer experience has emerged as a vital strategy for all retail businesses facing competition. According to studies by Holbrook & Hirschman (1982), customer experience can be defined as the whole event that a customer comes into contact with when interacting with a certain business. This experience often affects the emotions of the customer. The whole experience occurs when the interaction takes place through the stimulation of goods and services consumed.

In 1994, Steve Haeckel and Lou Carbone further refined the original concept and collaborated on a seminal early article on experience management, titled "Engineering Customer Experiences", in which they defined experience as "the 'take-away' impression formed by people's encounters with products, services and businesses — a perception produced when humans consolidate sensory information". They argued that the new approach must focus on total experience as the key customer value proposition.
The type of experience seen through a marketing perspective is put forward by Pine & Gilmore, who state that an experience can be unique, meaning different individuals will not have the same level of experience, and that an experience may not be memorable to the person and therefore won't be remembered over a period of time. Certain types of experiences may involve different aspects of the individual person, such as emotional, physical, intellectual or even spiritual aspects.

Customer experience is the stimulation a company creates for the senses of consumers. The company and that particular brand can control the stimuli they give to the consumer's senses, and can then shape the consumer's reaction resulting from the stimulation process, producing the customer experience the company intends. Kotler et al. (2013, p. 283) say that customer experience is about "adding value for customers buying products and services through customer participation and connection, by managing all aspects of the encounter". The encounter includes touchpoints. Businesses can create and modify touchpoints so that they are suited to their consumers, which changes or enhances the customers' experience. Creating an experience for the customer can lead to greater brand loyalty and brand recognition in the form of logos, colour, smell, touch, taste, etc.

However, customer experience management, and in particular design for experiences, is not only relevant for the private sector but also increasingly important in the public sector, especially in the age of digitalization, where public service users cocreate value by integrating resources from multiple sources. In this context, organizations need not only to understand their service users but also the network of actors, and how public services fit into the wider value constellation and into people's activities.

Realms of customer experience
Customer experience is divided into realms and domains by various scholars. Pine and Gilmore introduced four realms of experience: esthetic, escapist, entertainment, and educational.
Entertainment realm: In this realm, businesses create experiences that captivate customers by providing entertainment and amusement. It goes beyond traditional products or services, aiming to engage and delight customers through memorable and immersive experiences.
Educational realm: This realm focuses on educating customers and enhancing their knowledge during their interactions with a brand. It involves providing valuable information, insights, and learning opportunities, fostering a sense of personal growth and understanding.
Esthetic realm: The esthetic realm emphasizes the visual and sensory aspects of the customer experience. It involves creating visually appealing and sensory-rich environments, products, or services that stimulate the senses and elicit positive emotional responses.
Escapist realm: In this realm, businesses offer customers an escape from their everyday lives. It involves creating experiences that transport customers to different worlds or realities, allowing them to temporarily disconnect from their usual routines and responsibilities.

Designing customer experience
There are many elements in the shopping experience associated with a customer's experience. Customer service, a brand's ethical ideals and the shopping environment are examples of factors that affect a customer's experience.
Understanding and effectively developing a positive customer experience has become a staple within businesses and brands seeking to combat growing competition (Andajani, 2015). Many consumers are well informed and are able to easily compare two similar products or services; consumers therefore look for experiences that can fulfil their intentions (Ali, 2015). A brand that can provide this gains a competitive advantage over its competition. A study by Ali (2015) found that developing a positive behavioural culture created a greater competitive advantage in the long term. He looked at the customer experience at resort hotels and discovered that providing the best hotel service was not sufficient. To optimise a customer's experience, management must also consider peace of mind and relaxation, recognition and escapism, involvement, and hedonics. The overall customer experience must be considered.

The development of a positive customer experience is important as it increases the chances of a customer making continued purchases and develops brand loyalty (Kim & Yu, 2016). Brand loyalty can turn customers into advocates, resulting in a long-term relationship between both parties (Ren, Wang & Lin, 2016). This promotes word of mouth and turns the customer into a touchpoint for the brand, and potential customers can develop opinions through another's experiences. Males and females respond differently to brands and will therefore experience the same brand differently. Males respond effectively to relational, behavioural and cognitive experiences, whereas females respond more strongly to behavioural, cognitive and affective experiences in relation to branded apps. If female consumers are the target market, an app advert focused on the emotion of the product will provide an effective customer experience (Kim & Yu, 2016).

Today, retail stores tend to exist in shopping areas such as malls or shopping districts; very few operate in areas alone (Tynan, McKechnie & Hartly, 2014). Customer experience is not limited to the purchase alone: it includes all activities that may influence a customer's experience with a brand (Andajani, 2015). Therefore, the reputation of the shopping centre in which a store is located will affect a brand's customer experience; this is an example of the shopping environment affecting a customer's experience. At the same time, it is important to provide a seamless, integrated experience that goes beyond individual transactions and enhances overall brand perception. A study by Hart, Stachow and Cadogan (2013) found that a consumer's opinion of a town centre can affect the opinion of the retail stores operating within it, both negatively and positively. They shared an example of a town centre's management team developing synergy between the surrounding location and the retail stores. A location bound with historical richness could provide an opportunity for the town centre and local businesses to connect at a deeper level with their customers. They suggested that town centre management and retail outlets should work cooperatively to develop an effective customer experience. This will result in all stores benefiting from customer retention and loyalty.

Another effective way to develop a positive customer experience is by actively engaging the customer in an activity. The human and physical components of an experience are very important (Ren, Wang & Lin, 2016). Customers are able to recall active, hands-on experiences much more effectively and accurately than passive activities.
This is because customers in these moments are, by definition, the 'experts of use'. Participants in one study were able to recount previous luxury driving experiences because of the high involvement those experiences entailed. However, high involvement can also have a negative effect on the customer's experience: just as active, hands-on experiences can greatly develop value creation, they can also greatly facilitate value destruction (Tynan, McKechnie & Hartly, 2014). This is related to a customer's satisfaction with their experience. By understanding what causes satisfaction or dissatisfaction with a customer's experience, management can appropriately implement changes within their approach (Ren, Wang & Lin, 2016). A study of the customer experience in budget hotels revealed interesting results: customer satisfaction was largely influenced by tangible and sensory dimensions, including cleanliness, shower comfort, and room temperature, to name a few. As budget hotels are cheap, customers expected the basic elements to be satisfactory and the luxury elements to be non-existent. If these dimensions did not reach an appropriate standard, satisfaction declined, resulting in a negative experience (Ren, Wang & Lin, 2016).

Customer experience management
Customer experience management (CEM or CXM) is the process that companies use to oversee and track all interactions with a customer during their relationship. This involves building a strategy around the needs of individual customers. According to Jeananne Rae, companies are realizing that "building great consumer experiences is a complex enterprise, involving strategy, integration of technology, orchestrating business models, brand management and CEO commitment". In 2020, the global CEM market was valued at $7.54 billion, and is expected to grow with a CAGR of 17.5% from 2021 to 2028. Top companies in the customer experience industry include:
Adobe
Clarabridge
Medallia
NetSuite
Oracle
Tech Mahindra
Zendesk

According to Bernd Schmitt, "the term 'customer experience management' represents the discipline, methodology and/or process used to comprehensively manage a customer's cross-channel exposure, interaction and transaction with a company, product, brand or service". Harvard Business Review blogger Adam Richardson says that a company must define and understand all dimensions of the customer experience in order to have long-term success. Although 80% of businesses state that they offer a "great customer experience", according to author James Allen, this contrasts with the 8% of customers expressing satisfaction with their experience. Allen asserts that for companies to meet the demands of providing an exceptional customer experience, they must be able to execute the "Three Ds":
designing the correct incentive for the correctly identified consumer, offered in an enticing environment
delivery: a company's ability to focus the entire team across various functions to deliver the proposed experience
development, which ultimately determines a company's success, with an emphasis on developing consistency in execution

CEM has been recognized as the future of the customer service and sales industry. Companies are using this approach to anticipate customer needs and adopt the mindset of the customer. CEM depicts a business strategy designed to manage the customer experience, giving benefits to both retailers and customers. CEM can be monitored through surveys, targeted studies, observational studies, or "voice of customer" research.
It captures the instant response of the customer to its encounters with the brand or company. Customer surveys, customer contact data, internal operations process and quality data, and employee input are all sources of "voice of customer" data that can be used to quantify the cost of inaction on customer experience issues. The aim of CEM is to optimize the customer experience by gaining the loyalty of current customers in a multi-channel environment and ensuring they are completely satisfied. It also aims to turn current customers into advocates who promote the business to potential customers, as a word-of-mouth form of marketing. However, common efforts at improving CEM can have the opposite effect.

Utilizing surroundings includes using visuals, displays and interactivity to connect with customers and create an experience (Kotler, et al. 2013, p. 283). CEM can be related to customer journey mapping, a concept pioneered by Ron Zemke and Chip Bell. Customer journey mapping is a design tool used to track customers' movements through the different touchpoints they have with the business in question. It maps out the first encounters people may have with the brand and shows the different routes people can take through the different channels of marketing (e.g. online, television, magazine, newspaper). Integrated marketing communications (IMC) is also used to manage the customer experience; IMC is about sending a consistent message across all platforms, which include advertising, personal selling, public relations, direct marketing, and sales promotion (Kotler et al. 2013, p. 495).

CEM holds great importance in terms of research, which shows that the academic literature is not as applicable and usable as the practice behind it. Typically, to make the best use of CEM and ensure its accuracy, the customer journey must be viewed from the actual perspective of customers, not that of the business or organization. There is no specific set of rules or steps to follow, as companies in their various industries will have different strategies. Therefore, development of the conceptual and theoretical aspects is needed, based on customers' perspectives on the brand experience; this can be seen in different scholarly research. The interest in CEM has increased so significantly because businesses are looking for competitive differentiation. Businesses want to be more profitable and see this as a means to do so; hence they want to offer a better experience to their customers and to manage this process efficiently. In order to gain success as a business, customers need to be understood, and in order to fully utilise the models used in practice, academic research can assist the practical aspect. This, along with recognising past customer experiences, can help manage future experiences.

A good indicator of customer satisfaction is the Net Promoter Score (NPS). This indicates, on a score out of ten, whether a customer would recommend a business to other people. Customers who give a score of nine or ten are called promoters and will recommend the given product to others; at the other end of the spectrum are detractors, those who give a score of zero to six. Subtracting the percentage of detractors from the percentage of promoters gives the measure of advocacy, the Net Promoter Score (a minimal calculation is sketched below). Businesses with higher scores are likely to be more successful and to give a better customer experience.
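The NPS calculation described above is simple enough to state directly. The following minimal sketch assumes ratings are integers from 0 to 10; the function name is an illustrative assumption.

```python
# Minimal sketch of the Net Promoter Score described above: ratings of 9-10
# count as promoters, 0-6 as detractors, 7-8 as passives; NPS is the
# percentage of promoters minus the percentage of detractors.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 9, 10, 8, 7, 6, 3]))  # (4 - 2) / 8 -> 25.0
```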
Not all aspects of CEM can be controlled by the business (e.g. other people and the influence they have), and there is not much substantial information to support CEM claims in terms of academic research. The use of artificial intelligence in customer experience has slowly been increasing in recent years, with chatbots often seen as the first phase of this development.

Managing the communication
The classical linear communication model involves one sender or source sending out a message that goes through the media (television, magazines) and then to the receiver. The classical linear model is a form of mass marketing that targets a large number of people, of whom only a few may become customers; this is a form of non-personal communication (Dahlen, et al. 2010, p. 39). The adjusted model shows the source sending a message either to the media or directly to one or more opinion leaders and/or opinion formers (a model, actress, credible source, trusted figure in society, or YouTuber/reviewer), who send a decoded message to the receiver (Dahlen et al. 2010, p. 39). The adjusted model is a form of interpersonal communication where feedback is almost instantaneous with receiving the message. The adjusted model means that there are many more platforms of marketing with the use of social media, which connects people through more touchpoints. Marketers use the digital experience to enhance the customer experience (Dahlen et al. 2010, p. 40). Enhancing digital experiences influences changes to CEM, the customer journey map and IMC. The adjusted model allows marketers to communicate a message designed specifically for the 'followers' of a particular opinion leader or opinion former, sending a personalised message and creating a digital experience.

Persuasion techniques
Persuasion techniques are used when trying to send a message in order for an experience to take place. Marcom Projects (2007) came up with five mind shapers to show how humans view things. The five mind shapers of persuasion include:
Frames – only showing what they want you to see (a paid ad post)
Setting and context – the objects surrounding items for sale
Filters – previous beliefs that shape thoughts after an interaction
Social influence – how the behaviours of others impact us
Belief (placebo effect) – the expectation

Mind shapers can be seen through the use of the adjusted communication model: it allows the source/sender to create a perception for the receiver (Dahlen, Lange, & Smith, 2010, p. 39). Mind shapers can take two routes of persuasion:
The central route requires a thought process to occur; the content of the message is important, and people think thoroughly about their reaction or reply. This can be seen in the purchase of homes, internet providers, or insurance.
The peripheral route does not require very much thought; the brain makes the connection. Marketers use recognisable cues like logos, colours and sounds. This type of marketing is used when the decision is about something simple, like choosing a drink or food (Petty & Cacioppo, 1981).

Marketers can use human thought processes and target these to create greater experiences; they can do so by making the process simpler and by creating interactive steps to help the process along (Campbell & Kirmani, 2000).

Customer relationship management
According to Das (2007), customer relationship management (CRM) is the "establishment, development, maintenance and optimization of long-term mutually valuable relationships between consumers and organizations".
The official definition of CRM by the Customer Relationship Management Research Center is "a strategy used to learn more about the customers' needs and behaviours in order to develop stronger relationships with them". The purpose of this strategy is to change the approach to customers and improve the experience for the consumer by making the supplier more aware of their buying habits and frequencies.

The D4 Company Analysis is an audit tool that considers the four aspects of strategy, people, technology and processes in the design of a CRM strategy. The analysis includes four main steps:

"Define the existing customer relationship management processes within the company.
Determine the perceptions of how the company manages its customer relationships, both internally and externally.
Design the ideal customer relationship management solutions relative to the company or industry.
Deliver a strategy for the implementation of the recommendations based on the findings".

User experience

In the classical marketing model, marketing is deemed to be a funnel: at the beginning of the process (in the "awareness" stage) there are many brands competing for the attention of the customer, and this number is reduced through the different purchasing stages. Marketing is an action of "pushing" the brand through a few touchpoints (for example through TV ads). Since the rise of the World Wide Web and smartphone applications, there are many more touchpoints, from new content-serving platforms (Facebook, Instagram, Twitter, YouTube, etc.), individual online presences (such as websites, forums, blogs, etc.) and dedicated smartphone applications. As a result, the process has become a type of "journey":

The number of brands does not decrease during the process of evaluating and purchasing a product.
Brands not taken into account in the "awareness" stage may be added during the evaluation or even the purchase stage.
Following the post-purchase stage, there is a return to the first step in the process, thus feeding brand awareness.

The channels associated with sales are multichannel in nature. Due to the growth and importance of social media and digital advancement, businesses need to understand these aspects to be successful in this era of customer journeys. With tools such as Facebook, Instagram and Twitter having such prominence, there is a constant stream of data that needs to be analysed to understand this journey. Business flexibility and responsiveness are vital in the ever-changing digital customer environment, as customers are constantly connected to businesses and their products. Customers are now instant product experts due to various digital outlets and form their own opinions on how and where to consume products and services. Businesses use customer values and create a plan to gain a competitive advantage, and use their knowledge of customers to guide the customer journey to their products and services.

Reflecting this shift in customer experience, in 2014 Wolny & Charoensuksai highlighted three behaviours that show how decisions can be made in the digital journey. The Zero Moment of Truth is the first interaction a customer has with a service or product; this moment affects whether the consumer explores the product further or not at all, and it can occur on any digital device. Showrooming highlights how a consumer will view a product in a physical store but then decide to exit the store empty-handed and buy online instead.
This consumer decision may be due to the ability to compare multiple prices online. At the opposite end of the spectrum is webrooming, where consumers research a product online for quality and price but then decide to purchase in store. These three channels need to be understood by businesses because customers expect businesses to be readily available to cater to their specific needs and purchasing behaviours.

Customer journey

In marketing, the notion of the customer journey portrays the process customers go through to establish a commercial relationship with a firm. The journey emphasizes touchpoints, the moments at which firms can interact with their current or potential customers. Managers use visualizations called customer journey mapping (CJM) to represent the sequences of interactions between firms and customers and to identify opportunities for interaction. Understanding CJM also allows corporations to reduce "friction", or potential issues for the customer. CJM has subsequently become one of the most widely used tools for service design and has been utilized as a tool for visualizing intangible services.

A customer journey map shows the story of the customer's experience. It not only identifies key interactions that the customer has with the organization, but also brings in the user's feelings, motivations, and questions at each of the touchpoints. Ultimately, a customer journey map has the objective of teaching organizations more about their customers. To map a customer journey, it is important to consider the company's customers (buyer persona), the customer journey's time frame, channels (telephone, email, in-app messages, social media, forums, recommendations), first actions (problem acknowledgment), and last actions (recommendations or subscription renewal).

Customer journey maps are good storytelling conduits – they communicate to the brand the journey, along with the emotional quotient, that the customer experiences at every stage of the buyer journey. Customer journey maps take into account people's mental models (how things should behave), the flow of interactions, and possible touchpoints. They may combine user profiles, scenarios, and user flows, and reflect the thought patterns, processes, considerations, paths, and experiences that people go through in their daily lives (a minimal sketch of such a map as a data structure is given below).
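As an illustration of the elements just listed, the following Python sketch models a journey map as a simple data structure. It is hypothetical throughout: the Touchpoint and JourneyMap classes, their field names, and the sample journey are illustrative choices, not a standard schema for journey mapping.

from dataclasses import dataclass, field

@dataclass
class Touchpoint:
    stage: str          # e.g. "awareness", "evaluation", "purchase"
    channel: str        # e.g. "social media", "email", "in-store"
    action: str         # what the customer does at this moment
    feeling: str        # the emotional quotient recorded for the stage
    questions: list[str] = field(default_factory=list)

@dataclass
class JourneyMap:
    persona: str        # the buyer persona whose journey is mapped
    time_frame: str     # e.g. "first 30 days"
    touchpoints: list[Touchpoint] = field(default_factory=list)

    def friction_points(self, negative=("frustrated", "confused")):
        """Surface touchpoints whose recorded feeling suggests friction."""
        return [t for t in self.touchpoints if t.feeling in negative]

# Hypothetical usage: map a short journey and look for friction.
journey = JourneyMap(
    persona="first-time subscriber",
    time_frame="first 30 days",
    touchpoints=[
        Touchpoint("awareness", "social media", "sees recommendation", "curious"),
        Touchpoint("evaluation", "website", "compares plans", "confused",
                   ["Which plan fits me?"]),
        Touchpoint("purchase", "in-app", "subscribes", "satisfied"),
    ],
)
print([t.stage for t in journey.friction_points()])  # ['evaluation']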
Benefits

Mapping the customer journey helps organizations understand how prospects and customers use the various channels and touchpoints, how the organization is perceived, and how the organization would like its customers' and prospects' experiences to be. By understanding the latter, it is possible to design an optimal experience that meets the expectations of major customer groups, achieves competitive advantage, and supports the attainment of desired customer experience objectives. Increased customer retention is another benefit of a carefully designed and executed customer experience strategy.

Journey mapping, or journey orchestration, has recently benefited from the growth of AI technology. Solutions have become available in the last decade which allow AI to enhance complex customer journeys. Until recently, all customer journey mapping was human-led, but artificial intelligence now plays a growing role in customer experience.

Sales experience

Retail environment factors include social features, design, and ambiance. These can result in enhanced pleasure while shopping, and thus a positive customer experience and a greater likelihood of the customer revisiting the store in the future. The same retail environment may produce varied outcomes and emotions, depending on what the consumer is looking for. For example, a crowded retail environment may be exciting for a consumer seeking entertainment, but create an impression of inattentive customer service and frustration for a consumer who needs help finding a specific product to meet an immediate need. Environmental stimuli such as lighting and music can influence a consumer's decision to stay longer in the store, thereby increasing the chances of purchasing. For example, a retail store may have dim lights and soothing music, which may lead a consumer to experience the store as relaxing and calming.

Today's consumers are consistently connected through the development of technological innovation in the retail environment. This has led to the increased use of digital-led experiences in the purchase journey, both in-store and online, that inspire and influence the sales process. For example, Rebecca Minkoff has installed smart mirrors in its fitting rooms that allow customers to browse for products that may complement what they are trying on. These mirrors also hold an extra feature: a self-checkout system in which the customer places the item on an RFID-powered table, which then sends the products to an iPad that is used to check out.

External and internal variables in a retail environment can also affect a consumer's decision to visit the store. External variables include window displays such as posters and signage, or product exposure that can be seen by the consumer from outside the store. Internal variables include flooring, decoration and design. These attributes of a retail environment can either encourage or discourage a consumer from approaching the store.

Sales experience is a subset of the customer experience. Whereas customer experience encompasses the sum of all interactions between an organization and a customer over the entire relationship, sales experience focuses exclusively on the interactions that take place during the sales process, up to the point that a customer decides to buy. Customer experience tends to be owned by the marketing function within an organization and therefore has little control over or focus on what happens before a customer decides to buy. Sales experience is concerned with the buyer's journey up to and including the point that the buyer makes a purchase decision. Sales is a very important touchpoint for overall customer experience, as this is where the most human interaction takes place.

See also
Consumer behavior
Customer satisfaction
Experience economy
Experience model
User experience

References

Brand management
Branding terminology
Customer service
Product management
Services marketing
Telecommunications systems