Dataset schema (column names and types): id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
36,479,878
https://en.wikipedia.org/wiki/Ergonomics
Ergonomics, also known as human factors or human factors engineering (HFE), is the application of psychological and physiological principles to the engineering and design of products, processes, and systems. Primary goals of human factors engineering are to reduce human error, increase productivity and system availability, and enhance safety, health, and comfort, with a specific focus on the interaction between the human and the equipment. The field is a combination of numerous disciplines, such as psychology, sociology, engineering, biomechanics, industrial design, physiology, anthropometry, interaction design, visual design, user experience, and user interface design. Human factors research employs methods and approaches from these and other knowledge disciplines to study human behavior and generate data relevant to the goals stated above. In studying and sharing learning on the design of equipment, devices, and processes that fit the human body and its cognitive abilities, the two terms "human factors" and "ergonomics" are essentially synonymous in their referent and meaning in current literature. The International Ergonomics Association defines ergonomics or human factors as the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and methods to design in order to optimize human well-being and overall system performance.

Human factors engineering is relevant to the design of such things as safe furniture and easy-to-use interfaces to machines and equipment. Proper ergonomic design is necessary to prevent repetitive strain injuries and other musculoskeletal disorders, which can develop over time and can lead to long-term disability. Human factors and ergonomics are concerned with the "fit" between the user, equipment, and environment, or "fitting a job to a person" or "fitting the task to the man". The field accounts for the user's capabilities and limitations in seeking to ensure that tasks, functions, information, and the environment suit that user. To assess the fit between a person and the technology used, human factors specialists or ergonomists consider the job (activity) being done and the demands on the user; the equipment used (its size, shape, and how appropriate it is for the task); and the information used (how it is presented, accessed, and changed). Ergonomics draws on many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology, cognitive psychology, industrial and organizational psychology, and space psychology.

Etymology
The term ergonomics (from the Greek ἔργον, meaning "work", and νόμος, meaning "natural law") first entered the modern lexicon when Polish scientist Wojciech Jastrzębowski used the word in his 1857 article (The Outline of Ergonomics; i.e. Science of Work, Based on the Truths Taken from the Natural Science). The French scholar Jean-Gustave Courcelle-Seneuil, apparently without knowledge of Jastrzębowski's article, used the word with a slightly different meaning in 1858. The introduction of the term to the English lexicon is widely attributed to British psychologist Hywel Murrell, at a 1949 meeting at the UK's Admiralty which led to the foundation of The Ergonomics Society. He used it to encompass the studies in which he had been engaged during and after World War II. The expression human factors is a predominantly North American term which has been adopted to emphasize the application of the same methods to non-work-related situations.
A "human factor" is a physical or cognitive property of an individual or social behavior specific to humans that may influence the functioning of technological systems. The terms "human factors" and "ergonomics" are essentially synonymous. Domains of specialization According to the International Ergonomics Association, within the discipline of ergonomics there exist domains of specialization. These comprise three main fields of research: physical, cognitive, and organizational ergonomics. There are many specializations within these broad categories. Specializations in the field of physical ergonomics may include visual ergonomics. Specializations within the field of cognitive ergonomics may include usability, human–computer interaction, and user experience engineering. Some specializations may cut across these domains: Environmental ergonomics is concerned with human interaction with the environment as characterized by climate, temperature, pressure, vibration, light. The emerging field of human factors in highway safety uses human factors principles to understand the actions and capabilities of road users – car and truck drivers, pedestrians, cyclists, etc. – and use this knowledge to design roads and streets to reduce traffic collisions. Driver error is listed as a contributing factor in 44% of fatal collisions in the United States, so a topic of particular interest is how road users gather and process information about the road and its environment, and how to assist them to make the appropriate decision. New terms are being generated all the time. For instance, "user trial engineer" may refer to a human factors engineering professional who specializes in user trials. Although the names change, human factors professionals apply an understanding of human factors to the design of equipment, systems and working methods to improve comfort, health, safety, and productivity. Physical ergonomics Physical ergonomics is concerned with human anatomy, and some of the anthropometric, physiological, and biomechanical characteristics as they relate to physical activity. Physical ergonomic principles have been widely used in the design of both consumer and industrial products for optimizing performance and preventing / treating work-related disorders by reducing the mechanisms behind mechanically induced acute and chronic musculoskeletal injuries / disorders. Risk factors such as localized mechanical pressures, force and posture in a sedentary office environment lead to injuries attributed to an occupational environment. Physical ergonomics is important to those diagnosed with physiological ailments or disorders such as arthritis (both chronic and temporary) or carpal tunnel syndrome. Pressure that is insignificant or imperceptible to those unaffected by these disorders may be very painful, or render a device unusable, for those who are. Many ergonomically designed products are also used or recommended to treat or prevent such disorders, and to treat pressure-related chronic pain. One of the most prevalent types of work-related injuries is musculoskeletal disorder. Work-related musculoskeletal disorders (WRMDs) result in persistent pain, loss of functional capacity and work disability, but their initial diagnosis is difficult because they are mainly based on complaints of pain and other symptoms. Every year, 1.8 million U.S. workers experience WRMDs and nearly 600,000 of the injuries are serious enough to cause workers to miss work. 
Certain jobs or work conditions cause a higher rate of worker complaints of undue strain, localized fatigue, discomfort, or pain that does not go away after overnight rest. These types of jobs often involve activities such as repetitive and forceful exertions; frequent, heavy, or overhead lifts; awkward work positions; or the use of vibrating equipment. The Occupational Safety and Health Administration (OSHA) has found substantial evidence that ergonomics programs can cut workers' compensation costs, increase productivity, and decrease employee turnover. Mitigation can include both short-term and long-term solutions, involving awareness training, positioning of the body, furniture and equipment, and ergonomic exercises. Sit-stand stations and computer accessories that provide soft surfaces for resting the palm, as well as split keyboards, are recommended. Additionally, resources within the HR department can be allocated to provide assessments to employees to ensure the above criteria are met. It is therefore important to gather data to identify the jobs or work conditions that are most problematic, using sources such as injury and illness logs, medical records, and job analyses. Innovative workstations being tested include sit-stand desks, height-adjustable desks, treadmill desks, pedal devices, and cycle ergometers. In multiple studies these new workstations resulted in decreased waist circumference and improved psychological well-being. However, a significant number of additional studies have seen no marked improvement in health outcomes. With the emergence of collaborative robots and smart systems in manufacturing environments, artificial agents can be used to improve the physical ergonomics of human co-workers. For example, during human–robot collaboration the robot can use biomechanical models of the human co-worker to adjust the working configuration and account for various ergonomic metrics, such as human posture, joint torques, arm manipulability, and muscle fatigue. The ergonomic suitability of the shared workspace with respect to these metrics can also be displayed to the human with workspace maps through visual interfaces. A toy numerical sketch of this cost-based idea follows below.
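The sketch below is a minimal, hypothetical illustration of the idea just described: a robot scoring a few candidate working configurations by a weighted ergonomic cost that combines posture deviation with a static torque estimate. All constants, weights, and names here are illustrative assumptions, not values or methods from the cited research.

```python
import math

# Toy ergonomic cost (illustrative assumptions, not the cited method):
# each candidate configuration is summarized by the forearm angle it would
# impose on the human, measured from vertical; the cost combines deviation
# from an assumed neutral posture with a crude static elbow-torque estimate.

G = 9.81                 # gravity (m/s^2)
FOREARM_M = 0.27         # assumed forearm length (m)
LOAD_KG = 2.0            # assumed mass of the shared part (kg)
NEUTRAL_DEG = 45.0       # assumed "comfortable" forearm angle from vertical
W_POSTURE, W_TORQUE = 1.0, 0.5   # arbitrary weights for the two metrics

def ergonomic_cost(angle_deg: float) -> float:
    posture_penalty = abs(angle_deg - NEUTRAL_DEG) / 90.0
    # static torque at the elbow: tau = m * g * l * sin(angle from vertical),
    # largest when the loaded forearm is horizontal (90 degrees)
    torque = LOAD_KG * G * FOREARM_M * math.sin(math.radians(angle_deg))
    return W_POSTURE * posture_penalty + W_TORQUE * torque

candidates = [15.0, 30.0, 45.0, 60.0, 90.0]   # candidate configurations
best = min(candidates, key=ergonomic_cost)
print(f"lowest-cost configuration: forearm at {best:.0f} deg from vertical")
```

A real system would replace this scalar cost with a full biomechanical model and update it continuously, but the structure is the same: evaluate candidate configurations against ergonomic metrics and choose the least costly one.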
Cognitive ergonomics
Cognitive ergonomics is concerned with mental processes, such as perception, emotion, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. Relevant topics include mental workload, decision-making, skilled performance, human reliability, work stress, and training as these may relate to human–system and human–computer interaction design.

Organizational ergonomics and safety culture
Organizational ergonomics is concerned with the optimization of socio-technical systems, including their organizational structures, policies, and processes. Relevant topics include human communication successes or failures in adaptation to other system elements, crew resource management, work design, work systems, design of working times, teamwork, participatory ergonomics, community ergonomics, cooperative work, new work programs, virtual organizations, remote work, and quality management. Safety culture within an organization of engineers and technicians has been linked to engineering safety, with cultural dimensions including power distance and ambiguity tolerance. Low power distance has been shown to be more conducive to a safety culture. Organizations with cultures of concealment or lack of empathy have been shown to have poor safety culture.

History
Ancient societies
Some have stated that human ergonomics began with Australopithecus prometheus (also known as "little foot"), a primate that created handheld tools out of different types of stone, clearly distinguishing between tools based on their ability to perform designated tasks. The foundations of the science of ergonomics appear to have been laid within the context of the culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th century BC used ergonomic principles in the design of its tools, jobs, and workplaces. One outstanding example of this can be found in the description Hippocrates gave of how a surgeon's workplace should be designed and how the tools he uses should be arranged. The archaeological record also shows that the early Egyptian dynasties made tools and household equipment that illustrated ergonomic principles.

Industrial societies
Bernardino Ramazzini was one of the first people to systematically study the illnesses that resulted from work, earning himself the nickname "father of occupational medicine". In the late 1600s and early 1700s Ramazzini visited many worksites, where he documented the movements of laborers and spoke to them about their ailments. He then published "De Morbis Artificum Diatriba" (Latin for Diseases of Workers), which detailed occupations, their common illnesses, and remedies. In the 19th century, Frederick Winslow Taylor pioneered the "scientific management" method, which proposed a way to find the optimum method of carrying out a given task. Taylor found that he could, for example, triple the amount of coal that workers were shoveling by incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was reached. Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the "time and motion study". They aimed to improve efficiency by eliminating unnecessary steps and actions. By applying this approach, the Gilbreths reduced the number of motions in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350 bricks per hour. However, this approach was rejected by Russian researchers who focused on the well-being of the worker. At the First Conference on Scientific Organization of Labour (1921), Vladimir Bekhterev and Vladimir Nikolayevich Myasishchev criticised Taylorism. Bekhterev argued that "The ultimate ideal of the labour problem is not in it [Taylorism], but is in such organisation of the labour process that would yield a maximum of efficiency coupled with a minimum of health hazards, absence of fatigue and a guarantee of the sound health and all round personal development of the working people." Myasishchev rejected Frederick Taylor's proposal to turn man into a machine, regarding dull, monotonous work as a temporary necessity only until a corresponding machine could be developed. He went on to suggest a new discipline of "ergology" to study work as an integral part of the re-organisation of work. The concept was taken up by Myasishchev's mentor, Bekhterev, in his final report on the conference, merely changing the name to "ergonology".

Aviation
Prior to World War I, the focus of aviation psychology was on the aviator himself, but the war shifted the focus onto the aircraft, in particular the design of controls and displays and the effects of altitude and environmental factors on the pilot. The war saw the emergence of aeromedical research and the need for testing and measurement methods.
Studies on driver behavior started gaining momentum during this period, as Henry Ford started providing millions of Americans with automobiles. Another major development during this period was the performance of aeromedical research. By the end of World War I, two aeronautical labs had been established, one at Brooks Air Force Base, Texas, and the other at Wright-Patterson Air Force Base outside Dayton, Ohio. Many tests were conducted to determine which characteristics differentiated the successful pilots from the unsuccessful ones. During the early 1930s, Edwin Link developed the first flight simulator. The trend continued, and more sophisticated simulators and test equipment were developed. Another significant development was in the civilian sector, where the effects of illumination on worker productivity were examined. This led to the identification of the Hawthorne Effect, which suggested that motivational factors could significantly influence human performance.

World War II marked the development of new and complex machines and weaponry, which made new demands on operators' cognition. It was no longer possible to adopt the Tayloristic principle of matching individuals to preexisting jobs. Now the design of equipment had to take into account human limitations and take advantage of human capabilities. The decision-making, attention, situational awareness, and hand-eye coordination of the machine's operator became key to the success or failure of a task. Substantial research was conducted to determine the human capabilities and limitations that equipment design had to accommodate. Much of this research picked up where the interwar aeromedical research had left off. An example is the study done by Fitts and Jones (1947), who studied the most effective configuration of control knobs to be used in aircraft cockpits. Much of this research carried over to other equipment, with the aim of making the controls and displays easier for operators to use. The entry of the terms "human factors" and "ergonomics" into the modern lexicon dates from this period. It was observed that fully functional aircraft flown by the best-trained pilots still crashed. In 1943, Alphonse Chapanis, a lieutenant in the U.S. Army, showed that this so-called "pilot error" could be greatly reduced when more logical and differentiable controls replaced confusing designs in airplane cockpits. After the war, the Army Air Force published 19 volumes summarizing what had been established from research during the war.

In the decades since World War II, human factors has continued to flourish and diversify. Work by Elias Porter and others within the RAND Corporation after WWII extended the conception of human factors. "As the thinking progressed, a new concept developed—that it was possible to view an organization such as an air-defense, man-machine system as a single organism and that it was possible to study the behavior of such an organism. It was the climate for a breakthrough." In the first 20 years after World War II, most activities were carried out by the "founding fathers": Alphonse Chapanis, Paul Fitts, and Small.

Cold War
The beginning of the Cold War led to a major expansion of defense-supported research laboratories, and many labs established during WWII started expanding. Most of the research following the war was military-sponsored, with large sums of money granted to universities to conduct research. The scope of the research also broadened from small equipment to entire workstations and systems.
Concurrently, many opportunities started opening up in civilian industry. The focus shifted from research to participation, through advice to engineers in the design of equipment. After 1965, the period saw a maturation of the discipline. The field expanded with the development of the computer and computer applications. The Space Age created new human factors issues, such as weightlessness and extreme g-forces. Tolerance of the harsh environment of space and its effects on the mind and body were widely studied.

Information age
The dawn of the Information Age has resulted in the related field of human–computer interaction (HCI). Likewise, the growing demand for and competition among consumer goods and electronics has resulted in more companies and industries including human factors in their product design. Using advanced technologies in human kinetics, body mapping, movement patterns, and heat zones, companies are able to manufacture purpose-specific garments, including full body suits, jerseys, shorts, shoes, and even underwear.

Organizations
Formed in 1946 in the UK, the oldest professional body for human factors specialists and ergonomists is The Chartered Institute of Ergonomics and Human Factors, formerly known as the Institute of Ergonomics and Human Factors and, before that, The Ergonomics Society. The Human Factors and Ergonomics Society (HFES) was founded in 1957. The Society's mission is to promote the discovery and exchange of knowledge concerning the characteristics of human beings that are applicable to the design of systems and devices of all kinds. The Association of Canadian Ergonomists – l'Association canadienne d'ergonomie (ACE) was founded in 1968. It was originally named the Human Factors Association of Canada (HFAC), with ACE (in French) added in 1984, and the consistent bilingual title adopted in 1999. According to its 2017 mission statement, ACE unites and advances the knowledge and skills of ergonomics and human factors practitioners to optimise human and organisational well-being. The International Ergonomics Association (IEA) is a federation of ergonomics and human factors societies from around the world. The mission of the IEA is to elaborate and advance ergonomics science and practice, and to improve the quality of life by expanding its scope of application and contribution to society. As of September 2008, the International Ergonomics Association had 46 federated societies and 2 affiliated societies. Human Factors Transforming Healthcare (HFTH) is an international network of HF practitioners who are embedded within hospitals and health systems. The goal of the network is to provide resources for human factors practitioners and healthcare organizations looking to successfully apply HF principles to improve patient care and provider performance. The network also serves as a collaborative platform for human factors practitioners, students, faculty, industry partners, and those curious about human factors in healthcare.

Related organizations
The Institute of Occupational Medicine (IOM) was founded by the coal industry in 1969. From the outset, the IOM employed an ergonomics staff to apply ergonomics principles to the design of mining machinery and environments. To this day, the IOM continues ergonomics activities, especially in the fields of musculoskeletal disorders, heat stress, and the ergonomics of personal protective equipment (PPE).
As in much of occupational ergonomics, the demands and requirements of an ageing UK workforce are a growing concern and interest to IOM ergonomists. SAE International, formerly the Society of Automotive Engineers (SAE), is a professional organization for mobility engineering professionals in the aerospace, automotive, and commercial vehicle industries. The Society is a standards development organization for the engineering of powered vehicles of all kinds, including cars, trucks, boats, aircraft, and others. SAE has established a number of standards used in the automotive industry and elsewhere. It encourages the design of vehicles in accordance with established human factors principles, and it is one of the most influential organizations with respect to ergonomics work in automotive design. The society regularly holds conferences which address topics spanning all aspects of human factors and ergonomics.

Practitioners
Human factors practitioners come from a variety of backgrounds, though predominantly they are psychologists (from the various subfields of industrial and organizational psychology, engineering psychology, cognitive psychology, perceptual psychology, applied psychology, and experimental psychology) and physiologists. Designers (industrial, interaction, and graphic), anthropologists, technical communication scholars, and computer scientists also contribute. Typically, an ergonomist will have an undergraduate degree in psychology, engineering, design, or health sciences, and usually a master's or doctoral degree in a related discipline. Though some practitioners enter the field of human factors from other disciplines, both M.S. and Ph.D. degrees in human factors engineering are available from several universities worldwide.

Sedentary workplace
Contemporary offices did not exist until the 1830s, with Wojciech Jastrzębowski's seminal work on ergonomics following in 1857 and the first published study of posture appearing in 1955. As the American workforce began to shift towards sedentary employment, the prevalence of work-related musculoskeletal disorders and related conditions began to rise. In 1900, 41% of the US workforce was employed in agriculture, but by 2000 that had dropped to 1.9%. This coincides with growth in desk-based employment (25% of all employment in 2000) and with the start of surveillance of non-fatal workplace injuries by OSHA and the Bureau of Labor Statistics in 1971. Sedentary behavior is waking behavior with an energy expenditure of roughly 1.0–1.5 metabolic equivalents (METs) that occurs in a sitting or reclining position. Adults older than 50 report spending more time sedentary, and for adults older than 65 this is often 80% of their waking time. Multiple studies show a dose-response relationship between sedentary time and all-cause mortality, with roughly a 3% increase in mortality per additional sedentary hour each day (a quick arithmetic check of this figure follows below). High quantities of sedentary time without breaks are correlated with a higher risk of chronic disease, obesity, cardiovascular disease, type 2 diabetes, and cancer. A large proportion of the overall workforce is now employed in low-physical-activity occupations. Sedentary behavior, such as spending long periods of time in seated positions, poses a serious threat of injury and additional health risks. Unfortunately, even though some workplaces make an effort to provide a well-designed environment for sedentary employees, any employee who sits for long periods will likely experience discomfort.
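To make the dose-response figure above concrete, the short sketch below compounds the reported 3% per additional daily sedentary hour multiplicatively. Treating the rate as compounding rather than linear is a simplifying assumption for illustration, not a claim from the cited studies.

```python
# Compound the reported ~3% all-cause mortality increase per additional
# daily sedentary hour (multiplicative compounding is an assumption here;
# the underlying studies may model the dose-response relationship differently).
PER_HOUR_RR = 1.03

for extra_hours in (1, 2, 4, 6):
    relative_risk = PER_HOUR_RR ** extra_hours
    print(f"{extra_hours} extra sedentary h/day -> relative risk ~{relative_risk:.3f}")
# e.g. 4 extra hours/day compounds to ~1.126, about a 12.6% higher risk.
```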
There are existing conditions that predispose both individuals and populations to an increased prevalence of sedentary lifestyles, including socioeconomic determinants, education levels, occupation, living environment, age (as mentioned above), and more. A study published in the Iranian Journal of Public Health examined socioeconomic factors and sedentary lifestyle effects for individuals in a working community. The study concluded that individuals who reported living in low-income environments were more inclined toward sedentary behavior than those who reported being of high socioeconomic status. Individuals with less education are also considered a high-risk group for sedentary lifestyles; however, each community is different and has different resources available that may vary this risk. Oftentimes, larger worksites are associated with increased occupational sitting. Those who work in environments classified as business and office jobs are typically more exposed to sitting and sedentary behavior while in the workplace. Additionally, full-time occupations with schedule flexibility are also included in that demographic and are more likely to involve frequent sitting throughout the workday.

Policy implementation
Obstacles to providing better ergonomic features to sedentary employees include cost, time, and effort, for both companies and employees. The evidence above helps establish the importance of ergonomics in a sedentary workplace, yet what is missing is enforcement and policy implementation. As the modern workplace becomes more technology-based, more jobs are becoming primarily seated, leading to a need to prevent chronic injuries and pain. This is becoming easier with the growing body of research showing that ergonomic tools save companies money by limiting the number of days missed from work and workers' compensation cases. The way to ensure that corporations prioritize these health outcomes for their employees is through policy and implementation. In the United States, there are no nationwide policies currently in place; however, a handful of big companies and states have adopted policies to ensure the safety of all workers. For example, the State of Nevada's risk management department has established a set of ground rules covering both agencies' responsibilities and employees' responsibilities. The agency responsibilities include evaluating workstations, using risk management resources when necessary, and keeping OSHA records.

Methods
Until recently, methods used to evaluate human factors and ergonomics ranged from simple questionnaires to more complex and expensive usability labs. Some of the more common human factors methods are listed below:
Ethnographic analysis: Using methods derived from ethnography, this process focuses on observing the uses of technology in a practical environment. It is a qualitative and observational method that focuses on "real-world" experience and pressures, and the usage of technology or environments in the workplace. The process is best used early in the design process.
Focus groups are another form of qualitative research in which one individual will facilitate discussion and elicit opinions about the technology or process under investigation. This can be on a one-to-one interview basis or in a group session.
Focus groups can be used to gain a large quantity of deep qualitative data, though due to the small sample size they can be subject to a higher degree of individual bias. They can be used at any point in the design process, as much depends on the exact questions to be pursued and the structure of the group, but they can be extremely costly.
Iterative design: Also known as prototyping, the iterative design process seeks to involve users at several stages of design in order to correct problems as they emerge. As prototypes emerge from the design process, these are subjected to other forms of analysis as outlined in this article, and the results are then taken and incorporated into the new design. Trends among users are analyzed, and products redesigned. This can become a costly process, and needs to begin as soon as possible in the design process, before designs become too concrete.
Meta-analysis: A supplementary technique used to examine a wide body of existing data or literature to derive trends or form hypotheses to aid design decisions. As part of a literature survey, a meta-analysis can be performed to discern a collective trend from individual variables.
Subjects-in-tandem: Two subjects are asked to work concurrently on a series of tasks while vocalizing their analytical observations. The technique is also known as "co-discovery", as participants tend to feed off each other's comments to generate a richer set of observations than is often possible with the participants working separately. This is observed by the researcher and can be used to discover usability difficulties. The process is usually recorded.
Surveys and questionnaires: A commonly used technique outside of human factors as well, surveys and questionnaires have the advantage that they can be administered to a large group of people at relatively low cost, enabling the researcher to gain a large amount of data. The validity of the data obtained is, however, always in question, as the questions must be written and interpreted correctly and are, by definition, subjective. Those who actually respond are in effect self-selecting as well, further widening the gap between the sample and the population.
Task analysis: A process with roots in activity theory, task analysis is a way of systematically describing human interaction with a system or process to understand how to match the demands of the system or process to human capabilities. The complexity of this process is generally proportional to the complexity of the task being analyzed, and so it can vary in cost and time involvement. It is a qualitative and observational process, best used early in the design process.
Human performance modeling: A method of quantifying human behavior, cognition, and processes; a tool used by human factors researchers and practitioners both for the analysis of human function and for the development of systems designed for optimal user experience and interaction.
Think aloud protocol: Also known as "concurrent verbal protocol", this is the process of asking a user to execute a series of tasks or use technology while continuously verbalizing their thoughts, so that a researcher can gain insight into the user's analytical process. It can be useful for finding design flaws that do not affect task performance but may have a negative cognitive effect on the user. It is also useful for drawing on experts to better understand procedural knowledge of the task in question. It is less expensive than focus groups, but tends to be more specific and subjective.
User analysis: This process is based around designing for the attributes of the intended user or operator, establishing the characteristics that define them, and creating a persona for the user. Best done at the outset of the design process, a user analysis will attempt to predict the most common users and the characteristics they would be assumed to have in common. This can be problematic if the design concept does not match the actual user, or if the identified characteristics are too vague to support clear design decisions. This process is, however, usually quite inexpensive and commonly used.
"Wizard of Oz": This is a comparatively uncommon technique, but it has seen some use in mobile devices. Based upon the Wizard of Oz experiment, this technique involves an operator who remotely controls the operation of a device in order to imitate the response of an actual computer program. It has the advantage of producing a highly changeable set of reactions, but can be quite costly and difficult to undertake.
Methods analysis is the process of studying the tasks a worker completes using a step-by-step investigation. Each task is broken down into smaller steps until each motion the worker performs is described. Doing so enables the analyst to see exactly where repetitive or straining tasks occur.
Time studies determine the time required for a worker to complete each task. Time studies are often used to analyze cyclical jobs. They are considered "event based" studies because time measurements are triggered by the occurrence of predetermined events.
Work sampling is a method in which the job is sampled at random intervals to determine the proportion of total time spent on a particular task. It provides insight into how often workers are performing tasks which might cause strain on their bodies (a minimal simulation of this method appears after this list).
Predetermined time systems are methods for analyzing the time spent by workers on a particular task. One of the most widely used predetermined time systems is Methods-Time Measurement. Other common work measurement systems include MODAPTS and MOST, and industry-specific applications based on predetermined time systems include Seweasy, MODAPTS, and GSD.
Cognitive walkthrough: This is a usability inspection method in which evaluators apply a user perspective to task scenarios to identify design problems. As applied to macroergonomics, evaluators are able to analyze the usability of work system designs to identify how well a work system is organized and how well the workflow is integrated.
Kansei method: This is a method that transforms consumers' responses to new products into design specifications. As applied to macroergonomics, this method can translate employees' responses to changes to a work system into design specifications.
High Integration of Technology, Organization, and People: This is a manual, step-by-step procedure for applying technological change to the workplace. It allows managers to be more aware of the human and organizational aspects of their technology plans, allowing them to efficiently integrate technology in these contexts.
Top modeler: This model helps manufacturing companies identify the organizational changes needed when new technologies are being considered for their process.
Computer-integrated Manufacturing, Organization, and People System Design: This model allows for evaluating computer-integrated manufacturing, organization, and people system design based on knowledge of the system.
Anthropotechnology: This method considers analysis and design modification of systems for the efficient transfer of technology from one culture to another.
Systems analysis tool: This is a method to conduct systematic trade-off evaluations of work-system intervention alternatives.
Macroergonomic analysis of structure: This method analyzes the structure of work systems according to their compatibility with unique sociotechnical aspects.
Macroergonomic analysis and design: This method assesses work-system processes by using a ten-step process.
Virtual manufacturing and response surface methodology: This method uses computerized tools and statistical analysis for workstation design.
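The following is a minimal simulation of the work-sampling method described in the list above. It assumes a hypothetical ground-truth schedule in which a worker spends 30% of the day on a straining task, then estimates that proportion from random spot checks; the point is that the estimate's error shrinks as the number of random observations grows.

```python
import random

# Minimal work-sampling simulation (illustrative only; the 30% figure and
# all names are assumptions, not data from any cited study).
random.seed(1)
TRUE_PROPORTION = 0.30   # assumed share of the day spent on a straining task

def spot_check() -> bool:
    """One random-interval observation: is the task underway right now?"""
    return random.random() < TRUE_PROPORTION

for n in (50, 500, 5000):
    hits = sum(spot_check() for _ in range(n))
    estimate = hits / n
    # standard error of a sample proportion: sqrt(p * (1 - p) / n)
    se = (estimate * (1 - estimate) / n) ** 0.5
    print(f"n={n:5d}: estimated time share = {estimate:.3f} +/- {se:.3f}")
```

This is why work sampling is attractive in practice: a few hundred cheap spot checks bound the time share of a straining task without continuous observation.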
Weaknesses
Problems related to measures of usability include the fact that measures of learning and retention of how to use an interface are rarely employed, and that some studies treat measures of how users interact with interfaces as synonymous with quality-in-use, despite an unclear relation. Although field methods can be extremely useful because they are conducted in the users' natural environment, they have some major limitations to consider: they usually take more time and resources than other methods; they demand very high effort in planning, recruiting, and executing; they involve much longer study periods and therefore require much goodwill from the participants; and, because the studies are longitudinal in nature, attrition can become a problem.

See also
ISO 9241
Journal of Occupational Health Psychology
Wojciech Jastrzębowski (1799–1882), a Polish pioneer of ergonomics
Canadian Society for Biomechanics

References

Further reading
Books
Thomas J. Armstrong (2008), Chapter 10: Allowances, Localized Fatigue, Musculoskeletal Disorders, and Biomechanics (not yet published)
Berlin C. & Adams C. (2017). Production Ergonomics: Designing Work Systems to Support Optimal Human Performance. London: Ubiquity Press.
Jan Dul and Bernard Weerdmeester, Ergonomics for Beginners. A classic introduction on ergonomics—original title: Vademecum Ergonomie (Dutch)—published and updated since the 1960s.
Valerie J Gawron (2000), Human Performance Measures Handbook, Lawrence Erlbaum Associates. A useful summary of human performance measures.
Liu, Y (2007). IOE 333. Course pack. Industrial and Operations Engineering 333 (Introduction to Ergonomics), University of Michigan, Ann Arbor, MI. Winter 2007.
Donald Norman, The Design of Everyday Things. An entertaining user-centered critique of nearly every gadget out there (at the time it was published).
Peter Opsvik (2009), "Re-Thinking Sitting". Interesting insights on the history of the chair and how we sit, from an ergonomic pioneer.
Computer Ergonomics & Work Related Upper Limb Disorder Prevention: Making The Business Case For Pro-active Ergonomics (Rooney et al., 2008)
Stephen Pheasant, Bodyspace. A classic exploration of ergonomics.
Alvin R. Tilley & Henry Dreyfuss Associates (1993, 2002), The Measure of Man & Woman: Human Factors in Design. A human factors design manual.
Kim Vicente, The Human Factor. Full of examples and statistics illustrating the gap between existing technology and the human mind, with suggestions to narrow it.
Wickens and Hollands (2000). Engineering Psychology and Human Performance. Discusses memory, attention, decision making, stress, and human error, among other topics.
Wilson & Corlett, Evaluation of Human Work. A practical ergonomics methodology.
(Warning: very technical and not a suitable 'intro' to ergonomics.)
Zamprotta, Luigi, La qualité comme philosophie de la production. Interaction avec l'ergonomie et perspectives futures, thèse de Maîtrise ès Sciences Appliquées – Informatique, Institut d'Etudes Supérieures L'Avenir, Brussels, année universitaire 1992–93, TIU Press, Independence, Missouri (USA), 1994.

Peer-reviewed journals (numbers in brackets are the ISI impact factor, followed by the date)
Behavior & Information Technology (0.915, 2008)
Ergonomics (0.747, 2001–2003)
Ergonomics in Design (-)
Applied Ergonomics (1.713, 2015)
Human Factors (1.37, 2015)
International Journal of Industrial Ergonomics (0.395, 2001–2003)
Human Factors and Ergonomics in Manufacturing (0.311, 2001–2003)
Travail Humain (0.260, 2001–2003)
Theoretical Issues in Ergonomics Science (-)
International Journal of Human Factors and Ergonomics (-)
International Journal of Occupational Safety and Ergonomics (-)

External links
Directory of Design Support Methods
Engineering Data Compendium of Human Perception and Performance
Index of Non-Government Standards on Human Engineering...
Index of Government Standards on Human Engineering...
NIOSH Topic Page on Ergonomics and Musculoskeletal Disorders
Office Ergonomics Information from European Agency for Safety and Health at Work
Human Factors Standards & Handbooks from the University of Maryland Department of Mechanical Engineering
Human Factors and Ergonomics Resources
Human Factors Engineering Collection, The University of Alabama in Huntsville Archives and Special Collections

Industrial engineering Occupational safety and health Posture
Ergonomics
[ "Engineering" ]
7,705
[ "Industrial engineering" ]
31,284,193
https://en.wikipedia.org/wiki/Challenge%20point%20framework
The challenge point framework, created by Mark A. Guadagnoli and Timothy D. Lee (2004), provides a theoretical basis to conceptualize the effects of various practice conditions in motor learning. This framework relates practice variables to the skill level of the individual, task difficulty, and information theory concepts. The fundamental idea is that "motor tasks represent different challenges for performers of different abilities" (Guadagnoli and Lee 2004, p. 212). Any task will present the individual with a certain degree of challenge. However, the learning potential arising from this level of task difficulty will differ based on the skill level of the performer, the task complexity, and the task environment. Importantly, though increases in task difficulty may increase learning potential, increased task difficulty is also expected to decrease performance. Thus, an optimal challenge point exists where learning is maximized and the detriment to performance in practice is minimized.

Importance and applications
Practice has been proposed as the most important factor for the "relatively permanent" improvement in the ability to perform motor skills (Adams 1964; Annett 1969; Fitts 1964; Magill 2001; Marteniuk 1976; Newell 1981; Salmoni et al. 1984; Schmidt and Lee 1999; Guadagnoli and Lee 2004). With all other variables held constant, skill increases with practice (Guadagnoli and Lee 2004). However, time devoted to practice can be made more efficient by careful consideration of practice conditions. The challenge point framework presents a theoretical perspective for considering the roles of the level of the performer, the complexity of the task, and the environment in regulating the learning potential during practice. Adjustment of these components to enhance motor learning can be applied to a variety of contexts, including rehabilitation (Descarreaux et al. 2010; Onla-or & Winstein 2008) and simulation-based health professions education (Gofton 2006).

History
The challenge point framework involves concepts generated through various lines of research, including information theory, communications theory, and information processing (Lintern and Gopher 1978; Marteniuk 1976; K.M. Newell et al. 1991; Wulf and Shea 2002). Specific notions borrowed from prior research that are important to understanding the theoretical framework include:
Learning is a problem-solving process in which the goal of an action represents the problem to be solved and the evolution of a movement configuration represents the performer's attempt to solve the problem (Miller et al. 1960, as cited by Guadagnoli and Lee 2004).
Sources of information available during and after each attempt to solve a problem are remembered and form the basis for learning, which is defined as a relatively permanent improvement in skill that results from practice (Guthrie 1952, as cited by Guadagnoli and Lee 2004).
Two sources of information are critical for learning: an action plan, a construct that invokes intention and ultimately results in a specific movement configuration on a given performance (Miller et al. 1960, as cited by Guadagnoli and Lee 2004; see motor control), and feedback, which may be inherent to the individual (e.g. vision) or available via external, augmented sources (e.g. verbal instruction).
Information is transmitted only when uncertainty is reduced (Shannon and Weaver 1949; Fitts 1954; Fitts and Posner 1967; Legge and Barber 1967; Marteniuk 1976; Miller 1956, as cited by Guadagnoli and Lee 2004).
Components
It follows from the description of the challenge point framework that: it is impossible to learn without information; learning is impaired by presentation of insufficient or excessive amounts of information; and learning is facilitated by an optimal amount of information, which depends on individual skill and task difficulty.

Information available and task difficulty
Learning is fundamentally a problem-solving process. It has been proposed that with practice, there is reduced information available to the participant because better expectations are formed (i.e. practice creates redundancy, and therefore less uncertainty; Marteniuk 1976). However, increasing functional task difficulty results in less certainty about the predicted success of the action plan and the nature of the feedback. At low levels of functional difficulty, the potential available information is low for performers of all skill levels. As functional task difficulty increases, the potential information available increases exponentially for beginners and less rapidly for intermediate and skilled performers. For experts, the potential information available increases only at the highest levels of functional task difficulty.

Task difficulty and skill
Task difficulty has received considerable attention in prior research (Fleishman and Quaintance 1984; Gentile 1998). Importantly, the challenge point framework does not define task difficulty explicitly. Instead, two broad categories encompass its elements:
Nominal task difficulty: difficulty due to the characteristics of the task only, reflecting a constant amount of task difficulty (e.g. the target of a throw being near versus far); this includes perceptual and motor performance requirements (Swinnen et al. 1992).
Functional task difficulty: difficulty due to the person performing the task and the environment (e.g. two individuals, a major league pitcher and an inexperienced ball thrower, are asked to throw a baseball as fast as they can to first base on two days, one sunny and the other windy).
Performance of a task with low nominal difficulty is expected to be high in all groups of performers (i.e. all skill levels). However, beginner performance is expected to decline rapidly as nominal difficulty increases, whereas intermediate and skilled performance will decline less rapidly, and expert performance is expected to decline only at the highest nominal difficulty levels. Although the "expert" skill level is useful for explaining this framework, one may argue that experts should have a high level of predicted success for all nominal task difficulties. It is possible that once expertise is attained, these individuals are able to predict the outcome of the ongoing task and modify ongoing processes in order to reach a suitable outcome (e.g. surgeons).

Optimal challenge points
The optimal challenge point represents the degree of functional task difficulty an individual of a specific skill level would need in order to optimize learning (Guadagnoli and Lee 2004). However, this learning depends on the amount of interpretable information. Therefore, although increases in task difficulty may increase learning potential, only so much information is interpretable, and task performance is expected to decrease. Thus, an optimal challenge point exists where learning is maximized and the detriment to performance in practice is minimized. A toy numerical model of this trade-off follows below.
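Guadagnoli and Lee state the framework conceptually rather than as equations, so the sketch below is only one possible formalization under stated assumptions: learning benefit peaks when functional difficulty matches the performer's information-processing capacity (modeled as a Gaussian-shaped curve), while expected practice performance falls linearly with difficulty. The functional forms, weights, and skill values are all illustrative.

```python
import math

# Toy model of the optimal challenge point (our assumptions, not the
# authors' equations). Skill and difficulty are both scaled to [0, 1].

def learning_potential(difficulty: float, skill: float, width: float = 0.25) -> float:
    """Interpretable information peaks where difficulty matches capacity."""
    return math.exp(-((difficulty - skill) ** 2) / (2 * width ** 2))

def practice_performance(difficulty: float) -> float:
    """Expected performance in practice degrades as difficulty rises."""
    return 1.0 - difficulty

def challenge_value(difficulty: float, skill: float, trade_off: float = 0.3) -> float:
    # weight learning benefit against the performance detriment in practice
    return learning_potential(difficulty, skill) + trade_off * practice_performance(difficulty)

for skill in (0.2, 0.5, 0.8):        # beginner, intermediate, expert
    grid = [d / 100 for d in range(101)]
    best = max(grid, key=lambda d: challenge_value(d, skill))
    print(f"skill={skill:.1f}: optimal functional difficulty ~ {best:.2f}")
```

Consistent with the framework, the optimum sits slightly below the difficulty that maximizes raw learning potential (because of the performance penalty) and shifts upward as skill increases.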
With increased practice, it is assumed that one's information-processing capabilities will increase (Marteniuk 1976). Therefore, the optimal challenge point will change as the individual's ability to use information changes, requiring further changes in the functional difficulty of the task to facilitate learning (Guadagnoli and Lee 2004).

Practice variables and framework predictions
Contextual interference (CI) and action planning
Predictions from the challenge point framework with respect to CI (refer to motor learning; Guadagnoli and Lee 2004, p. 219):
"For tasks with differing levels of nominal difficulty, the advantage of random practice (vs. blocked practice) for learning will be largest for tasks of lowest nominal difficulty and smallest for tasks of highest nominal difficulty".
"For individuals with differing skill levels, low levels of CI will be better for beginning skill levels and higher levels of CI will be better for more highly skilled individuals".

Knowledge of results (KR) and feedback information
Predictions from the challenge point framework with respect to KR (refer to motor learning; Guadagnoli and Lee 2004, p. 221):
"For tasks of high nominal difficulty, more frequent or immediate presentation of KR, or both, will yield the largest learning effect. For tasks of low nominal difficulty, less frequent or immediate presentation of KR, or both, will yield the largest learning effect".
"For tasks about which multiple sources of augmented information can be provided, the schedule of presenting the information will influence learning. For tasks of low nominal difficulty, a random schedule of augmented feedback presentation will facilitate learning as compared with a blocked presentation. For tasks high in nominal difficulty, a blocked presentation will produce better learning than a random schedule".

Memory Motor skills Nervous system Motor control
Challenge point framework
[ "Biology" ]
1,643
[ "Behavior", "Motor skills", "Nervous system", "Motor control", "Organ systems" ]
31,284,240
https://en.wikipedia.org/wiki/Thick%20bed%20mortar
Thick bed mortar is a traditional method for the installation of tile and stone in which the tile or stone is set into a mortar bed that has been packed over a surface.

History
The thick bed mortar method has been around for hundreds, if not thousands, of years. Historically, a sand/cement mixture was mixed with water to a fairly dry consistency and spread over either a portland cement–water paste (neat cement) or cement powder that had been dusted onto the surface and sprayed with water to create a slurry coat. The thick bed mortar would then be compacted and screeded (made flat and/or level) prior to installation of the tile or stone. As the slurry coat dried, it would bond the mortar bed to the concrete surface on which it was installed. Mortar beds were used underneath almost every tile or stone installation until the late 1950s, when a chemical engineer, Henry M. Rothberg, invented the technology that introduced latex to sand/cement mortar mixes and created a new industry based on thin bed adhesive installations by founding Laticrete International. Mortars used in this technique typically have a compressive strength ranging from about 400 psi (2.8 MPa) to 1,600 psi (11 MPa) when tested using ANSI testing procedures. However, with advancements in technology and materials, the potential strengths of the thick bed mortar system have increased. Quality-controlled manufacturing processes create thick bed mortar mixes which combine carefully graded, high-quality aggregates (sand) in a precise ratio with portland cement. This means that a consistent mix can be achieved without the need for a laborer to blend sand, cement, and possibly lime at the jobsite. These pre-packaged mortars also eliminate the problems caused by excessively damp sand, incorrect mixing ratios, variable quality of the raw materials, and piles of sand on a jobsite. Today's thick bed mortars can be fortified with a liquid latex or redispersible polymer, per the manufacturer's directions, to enhance the performance properties of the thick bed mortar.

See also
Thinset mortar
Ceramic tile

References

Masonry Cement
Thick bed mortar
[ "Engineering" ]
428
[ "Construction", "Masonry" ]
31,284,549
https://en.wikipedia.org/wiki/Whole%20Building%20Design%20Guide
The Whole Building Design Guide (WBDG) is a United States guidance resource, described by the Federal Energy Management Program as "a complete internet resource to a wide range of building-related design guidance, criteria and technology", and it meets the requirements in guidance documents for Executive Order 13123. The WBDG is based on the premise that to create a successful high-performance building, one must apply an integrated design and team approach in all phases of a project, including planning, design, construction, operations, and maintenance. The WBDG is managed by the National Institute of Building Sciences.

History
The WBDG was initially designed to serve U.S. Department of Defense (DOD) construction programs. A 2003 DOD memorandum named the WBDG the "sole portal to design and construction criteria produced by the U.S. Army Corps of Engineers (USACE), Naval Facilities Engineering Command (NAVFAC), and U.S. Air Force." Since then, the WBDG has expanded to serve all building industry professionals; the majority of its 500,000 monthly users are from the private sector. The WBDG draws information from the Construction Criteria Base and a privately owned database run by Information Handling Services. A significant amount of the Whole Building Design Guide content is organized by three categories: Design Guidance, Project Management, and Operations and Maintenance. It is structured to provide WBDG visitors first with a broad understanding and then with increasingly specific information targeted towards building industry professionals. The WBDG is the resource that federal agencies look to for policy and technical guidance on Federal High Performance and Sustainable Buildings. In addition, the WBDG contains online tools, the original Construction Criteria Base, Building Information Modeling guides and libraries, a database of select case studies, federal mandates, and other resources. The WBDG also provides over 70 online continuing education courses for architects and other building professionals, free of charge.

Development
Development of the WBDG is a collaborative effort among federal agencies, private sector companies, non-profit organizations, and educational institutions. The WBDG web site is maintained by the National Institute of Building Sciences with funding support from the DOD, the NAVFAC Engineering Innovation and Criteria Office, the U.S. Army Corps of Engineers, the U.S. Air Force, the U.S. General Services Administration (GSA), the U.S. Department of Veterans Affairs, the National Aeronautics and Space Administration (NASA), and the U.S. Department of Energy (DOE), and with the assistance of the Sustainable Buildings Industry Council (SBIC). A Board of Direction and an Advisory Committee consisting of representatives from over 25 participating federal agencies guide the development of the WBDG.

References

External links
Whole Building Design Guide
National Institute of Building Sciences

Building engineering Building technology Architecture websites Online databases Web portals Building information modeling
Whole Building Design Guide
[ "Engineering" ]
570
[ "Building engineering", "Building information modeling" ]
31,291,039
https://en.wikipedia.org/wiki/Artificial%20dielectrics
Artificial dielectrics are fabricated composite materials, often consisting of arrays of conductive shapes or particles in a nonconductive support matrix, designed to have specific electromagnetic properties similar to dielectrics. As long as the lattice spacing is smaller than a wavelength, these substances can refract and diffract electromagnetic waves, and they are used to make lenses, diffraction gratings, mirrors, and polarizers for microwaves. They were first conceptualized, constructed, and deployed for operation in the microwave frequency range in the 1940s and 1950s. The constructed medium, the artificial dielectric, has an effective permittivity and effective permeability, as intended. In addition, some artificial dielectrics may consist of irregular lattices, random mixtures, or a non-uniform concentration of particles.

Artificial dielectrics came into use with the radar microwave technologies developed between the 1940s and 1970s. The term "artificial dielectrics" came into use because these are macroscopic analogues of naturally occurring dielectrics; the difference is that in the artificial substance the role of the atoms or molecules is played by artificially constructed (human-made) scatterers. Artificial dielectrics were proposed because of the need for lightweight structures and components for various microwave delivery devices. Artificial dielectrics are a direct historical link to metamaterials.

Seminal work
The term artificial dielectric was originated by Winston E. Kock in 1948 when he was employed by Bell Laboratories. It described materials of practical dimensions that imitated the electromagnetic response of natural dielectric solids. The artificial dielectrics were born of a need for lightweight, low-loss materials for large and otherwise heavy devices.

Dielectric analog
Natural dielectrics, or natural materials, are a model for artificial dielectrics. When an electromagnetic field is applied to a natural dielectric, local responses and scattering occur on the atomic or molecular level. The macroscopic response of the material is then described as electric permittivity and magnetic permeability. However, for this macroscopic response to be valid, the scatterers must exhibit some degree of spatial ordering, and the applied field must have a wavelength longer than the lattice spacing. These conditions allow a macroscopic description expressed as electric permittivity and magnetic permeability. Manufacturing an artificial permittivity and permeability by manipulating the atoms themselves would require an impractical degree of precision. However, in the late 1940s, in the domain of long wavelengths such as radio frequencies and microwaves, it became possible to manufacture larger-scale, more accessible scatterers that mimic the local response of natural materials, along with a synthesized macroscopic response. In the radio frequency and microwave regions such artificial crystal lattice structures were assembled. The scatterers responded to an electromagnetic field like atoms and molecules in natural materials, and the media behaved much like dielectrics, with an effective media response. The scattering elements are designed to scatter the electromagnetic field in a prescribed manner, and the geometric shape of the elements – spheres, disks, conducting strips, etc. – contributes to the design parameters. A numerical illustration of one such effective-medium estimate follows below.
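One standard effective-medium estimate for such a structure, dilute conducting spheres in a host matrix, is the Maxwell Garnett mixing rule. This is a textbook result offered here for illustration, not a formula from the specific works cited in this article; it holds for small fill fractions and lattice spacings well below the operating wavelength.

```python
# Maxwell Garnett mixing rule for ideally conducting spherical inclusions:
# as the inclusion permittivity tends to infinity, the general rule reduces to
#   eps_eff = eps_host * (1 + 2f) / (1 - f),
# where f is the volume fill fraction (valid for dilute, sub-wavelength lattices).

def maxwell_garnett_conducting(eps_host: float, fill: float) -> float:
    if not 0 <= fill < 1:
        raise ValueError("fill fraction must be in [0, 1)")
    return eps_host * (1 + 2 * fill) / (1 - fill)

for f in (0.05, 0.10, 0.20):
    print(f"fill={f:.2f}: eps_eff = {maxwell_garnett_conducting(1.0, f):.3f}")
# fill=0.10 in vacuum gives eps_eff ~ 1.33: a light "dielectric" built from
# sparse metal spheres, which was precisely the appeal of Kock's lenses.
```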
Rodded medium The rodded medium (plasma medium) is also known as the wire mesh or wire grid. It is a square lattice of thin parallel wires. The initial research pertaining to this medium was conducted by J. Brown, K. E. Golden, and W. Rotman. Metamaterials Artificial dielectrics are a direct historical link to metamaterials. Further reading Brown, John, and Willis Jackson. "The properties of artificial dielectrics at centimetre wavelengths." Proceedings of the IEE – Part B: Radio and Electronic Engineering 102.1 (1955): 11–16. Golden, Kurt E. A study of artificial dielectrics. No. TDR-269 (4280-10)-4. Aerospace Corp. (1964), El Segundo, CA. Lalanne, Philippe, and Mike Hutley. "The optical properties of artificial media structured at a subwavelength scale." Encyclopedia of Optical Engineering (2003): 62–71. Rotman, Walter. "Plasma simulation by artificial dielectrics and parallel-plate media." IRE Transactions on Antennas and Propagation 10.1 (1962): 82–95. A Luneburg Lens for the SKA. Summary of the MNRF research project into the manufacture of a low-cost microwave refracting spherical lens for radioastronomy, which proposes the use of artificial dielectrics; a lens constructed of uniform spherical shells seems feasible. Collin, R. E., Field Theory of Guided Waves, 2nd ed., Wiley-IEEE, 1991 (Chapter 12). References External links An Artificial Dielectric (video lecture). Electromagnetics and Applications (Physics). Massachusetts Institute of Technology (MIT) Metamaterials Microwave technology Radio frequency propagation
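The rodded medium described above behaves, for fields polarized along the wires, like a dilute plasma with an effective plasma frequency. A rough sketch, assuming Pendry's thin-wire estimate for that frequency (published expressions differ in their logarithmic factors, so treat the numbers as indicative):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def wire_medium_permittivity(freq_hz, a, r):
    # Effective permittivity of a square lattice of thin parallel wires
    # (rodded/plasma medium) for the E-field along the wires:
    #   eps_eff(w) = 1 - wp^2 / w^2
    # with the thin-wire estimate wp^2 = 2*pi*c^2 / (a^2 * ln(a/r)),
    # where a is the lattice constant and r << a the wire radius.
    wp2 = 2.0 * math.pi * C0**2 / (a**2 * math.log(a / r))
    w = 2.0 * math.pi * freq_hz
    return 1.0 - wp2 / w**2

# Example: 5 mm lattice of 50 um wires at 10 GHz; the result is negative
# because 10 GHz lies below this lattice's effective plasma frequency.
print(wire_medium_permittivity(10e9, a=5e-3, r=50e-6))
```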
Artificial dielectrics
[ "Physics", "Materials_science", "Engineering" ]
1,068
[ "Physical phenomena", "Spectrum (physical sciences)", "Metamaterials", "Radio frequency propagation", "Electromagnetic spectrum", "Materials science", "Waves" ]
46,724,627
https://en.wikipedia.org/wiki/Project%2023000%20aircraft%20carrier
Project 23000 or Shtorm (Russian: Шторм, "Storm") is a proposal for an aircraft carrier designed by the Krylov State Research Center for the Russian Navy. The cost of the export version (Project 23000E) has been put at over US$5.5 billion, and as of 2017 development had been expected to take ten years. The project had not yet been approved and, given the financial costs, it was unclear whether it would be made a priority over other elements of Russian naval modernization. History The carrier is being considered for service with the Russian Navy's Northern Fleet as a replacement for the aircraft carrier Admiral Kuznetsov (a heavy aircraft cruiser in Russian classification), which was commissioned in 1991. The Nevskoye Design Bureau is also reported to be taking part in the development project. Although the creation of a new aircraft carrier, along with the Lider-class destroyers, has been postponed by Russian President Vladimir Putin, it is still mentioned in Russia's State Armament Programme for 2018–2027 released in May 2017. According to Russian officials, a new heavy aircraft carrier should be laid down between 2025 and 2030. In 2020, it was reported that, if built, the carrier might also be fitted with the proposed S-500 surface-to-air missiles. In early July 2016, the design of the aircraft carrier was offered to India for purchase. See also List of active Russian Navy ships List of ships of Russia by project number References Aircraft carriers of the Russian Navy Proposed aircraft carriers
Project 23000 aircraft carrier
[ "Engineering" ]
296
[ "Military projects", "Proposed aircraft carriers" ]
46,724,991
https://en.wikipedia.org/wiki/Argpyrimidine
Argpyrimidine is an organic compound with the chemical formula C11H18N4O3. It is an advanced glycation end-product formed from arginine and methylglyoxal through the Maillard reaction. Argpyrimidine has been studied for its role in food chemistry and for its potential involvement in diseases of aging and in diabetes mellitus. Synthesis Endogenous In vivo, argpyrimidine is synthesized through a methylglyoxal (MG)-mediated modification of an arginine residue in a protein. Methylglyoxal is formed through the polyol pathway, the degradation of triose phosphates from glycolysis, acetone metabolism, protein glycation, or lipid peroxidation. Methylglyoxal can then modify arginine, cysteine, or lysine amino acid residues within a protein. The modification of these side chains through the Maillard reaction forms advanced glycation end-products (AGEs). This occurs when there is an increase in blood sugar levels in the body: the free sugar compounds undergo alternate pathways, like advanced glycation, to produce AGEs. In the methylglyoxal-mediated Maillard reaction on arginine, a dihydroxy-imidazolidine intermediate is involved in the production of the argpyrimidine modification. Exogenous In vitro, argpyrimidine has been synthesized through incubation with methylglyoxal and other higher sugars at physiological conditions. In synthesis with other sugars, argpyrimidine was produced in progressively lower concentrations with glyceraldehyde, threose, ribose, ascorbic acid, and glucose and fructose, respectively. The argpyrimidine derivative produced in vitro through MG incubation with Nα-t-BOC-Arg (N-alpha-(tert-butoxycarbonyl)-L-arginine), an alpha-amine-protected amino acid derivative, proceeded via a reductone intermediate, 3-hydroxypentane-2,4-dione. This argpyrimidine product was found to be detectable by its blue fluorescence. Argpyrimidine also arises in food chemistry through the browning of food by the Maillard reaction. During this process, glycation occurs, adding carbohydrate modifications to proteins and lipids; adding the sugar components to the food adds to or changes the flavor of the food. This reaction is involved in the formation of most yeast-containing foods, including breads and fermented alcohols. The Maillard reaction occurs between the carbonyl group of a sugar and the amino group on a protein. These react to form an N-substituted glycosylamine, also known as a Schiff base. The Schiff base then undergoes an isomerization by an Amadori rearrangement to form a ketosamine, the Amadori product. The Amadori product can then undergo many further reactions to form various AGEs, which can in turn be modified into different products. Disease Research Diabetes Mellitus Argpyrimidine has been associated with diabetes mellitus because of its relationship with hyperglycemia in the body. Increased blood sugar is characteristic of diabetes. During times of high sugar concentration in the blood, the glucose derivative methylglyoxal can be synthesized in an alternate pathway to glycolysis, which then allows AGEs like argpyrimidine to be produced. Studies have linked the increase in AGEs to the characteristics of various diseases, including diabetes, cardiovascular disease, and neurodegeneration. Because of this, there has been increasing research regarding argpyrimidine's role in diabetes-related injury. Aging Similar to its association with diabetes, argpyrimidine is also a known biomarker for aging.
Through glycation of certain proteins, microglia and macrophages are activated in the brain, leading to aging-related diseases such as Alzheimer's disease. This glycation due to an increase in AGEs has also been linked to a release of cytokines and to increased oxidative stress, which raises oxidative damage to DNA, proteins, and other macromolecules in the body. The effects of protein glycation are due to the interaction between the AGEs and their receptors on cell surfaces. Antioxidants have been shown to slow the process of aging and age-related diseases by disrupting the interaction between AGEs and their receptors. See also N(6)-Carboxymethyllysine References Alpha-Amino acids Amino acid derivatives Aminopyrimidines Advanced glycation end-products
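As a quick check on the chemical formula C11H18N4O3 given at the start of the article, the molar mass can be computed from standard atomic weights (a back-of-the-envelope sketch):

```python
# Standard atomic weights (g/mol), rounded to three decimals
WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

# Argpyrimidine: C11 H18 N4 O3
composition = {"C": 11, "H": 18, "N": 4, "O": 3}
molar_mass = sum(WEIGHTS[el] * n for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")  # ~254.29 g/mol
```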
Argpyrimidine
[ "Chemistry", "Biology" ]
988
[ "Senescence", "Carbohydrates", "Advanced glycation end-products", "Biomolecules" ]
46,726,184
https://en.wikipedia.org/wiki/Vofopitant
Vofopitant (GR205171) is a drug which acts as an NK1 receptor antagonist. Like other NK1 antagonists it has antiemetic effects, and it also shows anxiolytic actions in animals. It was studied for applications such as the treatment of social phobia and post-traumatic stress disorder, but did not prove sufficiently effective to be marketed. See also NK1 receptor antagonist References Antiemetics NK1 receptor antagonists Amines Phenol ethers Tetrazoles Trifluoromethyl compounds Abandoned drugs
Vofopitant
[ "Chemistry" ]
116
[ "Pharmacology", "Functional groups", "Drug safety", "Amines", "Medicinal chemistry stubs", "Pharmacology stubs", "Bases (chemistry)", "Abandoned drugs" ]
46,728,418
https://en.wikipedia.org/wiki/Huwood%20power%20loader
The Huwood Power Loader was a mechanical device, roughly 6 ft by 2 ft by 1 ft in size and powered by a 10 hp engine, used to move cut coal from the coal face onto the conveyor. The machine was equipped with winches which used haulage ropes to drag the machine along the coal face, and it used both horizontal and rotary motions to shift the coal onto the conveyor. Pleasley Colliery, Derbyshire, introduced one of the first such loaders in 1950. See also Meco-Moore Cutter Loader Anderton Shearer References Mining equipment
Huwood power loader
[ "Engineering" ]
113
[ "Mining equipment" ]
46,733,119
https://en.wikipedia.org/wiki/Volumetric%20path%20tracing
Volumetric path tracing is a method for rendering images in computer graphics which was first introduced by Lafortune and Willems. This method enhances the rendering of the lighting in a scene by extending the path tracing method with the effect of light scattering. It is used for photorealistic rendering of participating media like fire, explosions, smoke, clouds, fog, or soft shadows. As in the path tracing method, a ray is followed backwards, beginning from the eye, until it reaches the light source. In volumetric path tracing, scattering events can additionally occur along the ray as it is traced: when a light ray travels through a participating medium, a certain amount of its energy is scattered by the medium. Description The algorithm is based on the volumetric rendering equation, which extends the rendering equation with a scattering term. It is composed of an absorption, an out-scattering, an emission, and an in-scattering part; the absorption and out-scattering together form the extinction term. The in-scattering part is the most expensive to calculate because it requires integrating the incoming radiance over all paths in the scene. Therefore, thousands of paths need to be traced to achieve a result of good quality without much noise. For easier handling, the in-scattering term can be split into two components: single scattering and multiple scattering. Algorithm In volumetric path tracing, a distance along the ray is sampled and compared with the distance to the nearest intersection of the ray with a surface. If the sampled distance is smaller, a scattering event occurs; in that case the path is evaluated and continued from the scattering point in the medium, not from the surface point the ray would otherwise hit. The rest of the procedure continues in the same manner until the light source is reached. Sampling A possible way of sampling distances is the ray marching method, which works similarly to ray tracing but operates on a distance field of the scene and advances in discrete steps. The scattering inside the medium can be determined by a phase function using importance sampling. For this, the Henyey–Greenstein phase function, a non-isotropic phase function for simulating the scattering of materials like oceans, clouds, or skin, can be applied. References Further reading Volumetric Path Tracing (March 2012). Cornell University. Volume light transport (March 2012). Cornell University. Efficient Volume Rendering in CUDA Path Tracer (2013). University of Southern California. Global illumination algorithms Computer graphics algorithms Monte Carlo methods
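The distance-sampling step and the phase-function sampling described above can be sketched as follows, assuming a homogeneous medium with extinction coefficient sigma_t (the function names and the homogeneity assumption are illustrative, not part of the original method description):

```python
import math
import random

def sample_free_path(sigma_t):
    # Distance to the next potential scattering event, drawn from the
    # exponential transmittance distribution p(t) = sigma_t * exp(-sigma_t * t).
    return -math.log(1.0 - random.random()) / sigma_t

def sample_henyey_greenstein(g):
    # Cosine of the scattering angle for the Henyey-Greenstein phase
    # function with anisotropy parameter g in (-1, 1).
    xi = random.random()
    if abs(g) < 1e-3:
        return 1.0 - 2.0 * xi  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

def propagate(sigma_t, g, distance_to_surface):
    # Compare the sampled free path with the distance to the nearest
    # surface: a shorter sample means the path scatters in the medium.
    t = sample_free_path(sigma_t)
    if t < distance_to_surface:
        cos_theta = sample_henyey_greenstein(g)
        return "volume_scatter", t, cos_theta  # continue from the scatter point
    return "surface_hit", distance_to_surface, None

print(propagate(sigma_t=0.5, g=0.6, distance_to_surface=3.0))
```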
Volumetric path tracing
[ "Physics" ]
493
[ "Monte Carlo methods", "Computational physics" ]
26,672,766
https://en.wikipedia.org/wiki/Comobatrachus
Comobatrachus (meaning "Como Bluff frog") is a dubious genus of extinct frog known only from the holotype, YPM 1863, part of the right humerus, found in Reed's Quarry 9 near Como Bluff, Wyoming, in the Late Jurassic-aged Morrison Formation. The holotype was commented on but not described by Moodie in 1912; although it was probably discovered alongside the holotype of Eobatrachus, it was not described by Othniel Charles Marsh when he named Eobatrachus in 1887. The type and only species, C. aenigmatis, was named and described in 1960. It was probably related to the contemporaneous Eobatrachus. References Mesozoic frogs Morrison fauna Nomina dubia Fossil taxa described in 1960
Comobatrachus
[ "Biology" ]
169
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
26,675,250
https://en.wikipedia.org/wiki/Aegean%20numerals
Aegean numbers were an additive sign-value numeral system used by the Minoan and Mycenaean civilizations. They are attested in the Linear A and Linear B scripts. They may have survived in the Cypro-Minoan script, where a single sign with the value "100" is attested so far on a large clay tablet from Enkomi. Unicode Aegean numbers are encoded in the Unicode block Aegean Numbers (U+10100–U+1013F). See also Linear A Linear B Greek numerals References External links Open source font for rendering Aegean numerals correctly - Google Noto Fonts Aegean languages in the Bronze Age Numeral systems Linear B Linear A
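Because the system is additive and sign-value, transliterating an integer is mechanical once the signs are known. A sketch, assuming the code point layout of the Unicode Aegean Numbers block (AEGEAN NUMBER ONE at U+10107, with separate signs for each digit value in the units, tens, hundreds, thousands, and ten-thousands places):

```python
# Base code points for 1, 10, 100, 1000 and 10000, assuming the layout of
# the Unicode "Aegean Numbers" block (AEGEAN NUMBER ONE = U+10107).
PLACE_BASES = [0x10107, 0x10110, 0x10119, 0x10122, 0x1012B]

def to_aegean(n: int) -> str:
    if not 1 <= n <= 99_999:
        raise ValueError("Aegean numerals cover 1 to 99,999")
    signs = []
    for base in PLACE_BASES:
        n, digit = divmod(n, 10)
        if digit:
            signs.append(chr(base + digit - 1))
    return "".join(reversed(signs))  # highest-value signs written first

print(to_aegean(1441))  # one 1000-sign, four 100-signs, four 10-signs, one 1-sign
```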
Aegean numerals
[ "Mathematics" ]
118
[ "Numeral systems", "Mathematical objects", "Numbers", "Number stubs" ]
26,680,317
https://en.wikipedia.org/wiki/MYO1G
Myosin IG, also known as myosin 1G and MYO1G, is a protein that in humans is encoded by the MYO1G gene. MYO1G is a member of the class I unconventional myosins. Its expression is highly restricted to hematopoietic tissues and cells. It localises exclusively to the plasma membrane, and this localisation is dependent on both the motor domain and the tail domain. In Jurkat T-cells, MYO1G regulates cell elasticity, possibly through an interaction between the plasma membrane and cortical actin. Function MYO1G is a plasma membrane-associated class I myosin (see MIM 601478) that is abundant in T and B lymphocytes and mast cells (Pierce et al., 2001 [PubMed 11544309]; Patino-Lopez et al., 2010 [PubMed 20071333]). [Supplied by OMIM, Jun 2010.] References Further reading Motor proteins
MYO1G
[ "Chemistry" ]
204
[ "Molecular machines", "Motor proteins" ]
26,684,024
https://en.wikipedia.org/wiki/Slot%20insulation
Slot insulation is the common name for the shielding material used around the rotor windings inside a power generator. In electric motors, slot insulation provides a barrier between the copper windings and the steel laminations for all stator, armature, and rotor products. This shielding material separates the rotor's electrically conductive winding from its body. Temperature ratings Due to their operating environment, slot insulation materials must be capable of withstanding high temperatures. In the 1970s, slot insulation materials often had a Class F temperature rating (an operational range up to 155 °C). Today, there are commercially available slot insulation materials with a Class H temperature rating (an operational range up to 180 °C). Their insulating properties come from a composite of laminate materials consisting of epoxy, aramid, and dielectric film. Notes References Payne, B. (2010) The Unsung Hero of Power Generation? The Power Generation Blog. Gerome Technologies Inc. Gerome Technologies U Slot Insulation Gibney, J. GE Generators, An Overview. PCT. Generator Rotor Slot Insulation. External links Power and Composite Structures. Example Slot Insulation Images Electrical generators
Slot insulation
[ "Physics", "Technology" ]
233
[ "Physical systems", "Electrical generators", "Machines" ]
26,684,187
https://en.wikipedia.org/wiki/Athinoula%20A.%20Martinos%20Center%20for%20Biomedical%20Imaging
The Athinoula A. Martinos Center for Biomedical Imaging, usually referred to as just the "Martinos Center," is a major hub of biomedical imaging technology development and translational research. The Center is part of the Department of Radiology at Massachusetts General Hospital and is affiliated with both Harvard University and MIT. Bruce Rosen is the Director of the Center and Monica Langone is the Administrative Director. The core technologies being developed and used at the Center are magnetic resonance imaging (MRI) and in vivo magnetic resonance spectroscopy (MRS), magnetoencephalography (MEG) and electroencephalography (EEG), optical imaging techniques (microscopy and near-infrared spectroscopy), positron emission tomography (PET), molecular imaging, medical image computing (MIC), health informatics, artificial intelligence in healthcare, and transcranial magnetic stimulation. A particular area of innovation at the Center is multimodal functional neuroimaging, which involves the integration of imaging technologies for neuroscience applications. Major areas of research at the Center include: psychiatric, neurologic and neurovascular disorders; basic and cognitive neuroscience; cardiovascular disease; cancer; and more. Scientific investigation and technology development are funded through government, industry and other research grants. The center is located in the Massachusetts General Hospital (MGH) East Campus in the Charlestown Navy Yard, 149 13th St., Charlestown, MA 02129. Separately, the Massachusetts Institute of Technology (MIT) is home to its own Martinos Imaging Center. The Martinos Center is home to approximately 120 faculty members and more than 100 postdoctoral research fellows and graduate students, and is a resource to hundreds of researchers and students throughout Boston, the United States and the world. The research faculty are basic scientists and clinicians interested in a broad range of biologically and medically important questions. They work in conjunction with physical scientists, computer scientists, and engineers to develop new imaging technologies and research applications, and to bring these developments to the sphere of medical care. Some of the prominent faculty at the Center include Bruce Rosen, Lawrence Wald, David Boas, Jacob Hooker, Julie C. Price, Peter Caravan, Anna Moore, Umar Mahmood, Randy Buckner, Matthew S. Rosen, Maria Angela Franceschini, Bruce Fischl and Marco Loggia. The Center includes investigators and their laboratories based at the MGH research campus in Charlestown, as well as numerous other researchers from various departments within MGH and other local, national and international institutions. Most Martinos Center-based faculty members have primary appointments in Radiology at MGH and Harvard, some with secondary appointments at MIT. Several of the investigators from other MGH departments and other institutions work at the Center, while even more conduct long- and short-term imaging studies at the Center while maintaining their base elsewhere. The center is a member of or collaborator with NCRR (and BIRN), NIDA, NIBIB, the National Cancer Institute, NINDS, NCCAM, ONDCP, and The MIND Institute. The center also has strategic corporate partnerships with Siemens Medical Solutions, Pfizer Inc., and Canon Inc. It is also a Harvard Catalyst site, and incorporates research projects from Boston University, McLean Hospital, and other Boston institutions.
At the MGH Navy Yard site, there are eight large-bore and five small-bore MRI scanning bays used primarily for research, including the high-gradient-field Human Connectome Project scanner, a 7 Tesla magnet for human imaging, and a combined PET-MRI. The Martinos Center also served as the site for the development of magnetoencephalography (MEG), and software development for analysis of MEG data is ongoing at the facility. New MRI and MRS sequences are developed in conjunction with Martinos, Harvard, and MIT faculty. In addition, the Center serves as a development site for new Siemens equipment, such as the 32-, 64-, and 128-channel MRI coils which were designed and prototyped there. References Laboratories in the United States Medical research institutes in Massachusetts Nuclear magnetic resonance Magnetic resonance imaging Massachusetts General Hospital Medical imaging research institutes Research institutes in Massachusetts
Athinoula A. Martinos Center for Biomedical Imaging
[ "Physics", "Chemistry" ]
926
[ "Nuclear magnetic resonance", "Magnetic resonance imaging", "Nuclear physics" ]
42,104,941
https://en.wikipedia.org/wiki/Kepler-45
Kepler-45, formerly known as KOI-254, is a star in the northern constellation of Cygnus. With an apparent visual magnitude of 16.88, this star is too faint to be seen with the naked eye. The star exhibits strong starspot activity, with 4.1% of its surface covered by starspots. Planetary system The "hot Jupiter"-class planet Kepler-45b, discovered in February 2011, is unusually massive for its M-class parent star. Its orbit is aligned within 11 degrees of the rotational axis of the star. The planet is strongly suspected to have optically thick rings, because its planetary shadow appears to be elongated. See also NGTS-1b References Cygnus (constellation) M-type main-sequence stars 254 Planetary transit variables Planetary systems with one confirmed planet J19312949+4103513
Kepler-45
[ "Astronomy" ]
191
[ "Cygnus (constellation)", "Constellations" ]
42,110,287
https://en.wikipedia.org/wiki/Concurrent%20testing
Research and literature on concurrency testing and concurrent testing typically focus on testing software and systems that use concurrent computing. The purpose is, as with most software testing, to understand the behaviour and performance of a software system that uses concurrent computing, particularly assessing the stability of a system or application during normal activity. Research and study of program concurrency started in the 1950s, with research and study of testing program concurrency appearing in the 1960s. Examples of problems that concurrency testing might expose are incorrect shared memory access and unexpected ordering of message or thread execution. Resource contention resolution, scheduling, deadlock avoidance, priority inversion, and race conditions are also highlighted. Selected history & approaches of testing concurrency Approaches to concurrency testing range from the limited unit-test level right up to the system-test level. Some approaches to the research and application of testing program/software concurrency have been: Execute a test once. This was considered to be ineffective for testing concurrency in a non-deterministic system and was equivalent to the testing of a sequential non-concurrent program on a system. Execution of the same test sequence multiple times. Considered likely to find some issues in non-deterministic software execution; this later became called non-deterministic testing. Deterministic testing. This is an approach that sets the system into a particular state so that code can be executed in a known order. Reachability testing. An attempt to test combinations of synchronisation sequences for a specified input (checking that shared variable access is not corrupted, effectively testing for race conditions on shared variables). The sequence is typically derived from non-deterministic test execution. Structural approaches / static analysis. Analysis of code structure and static analysis tools. An example was a heuristic approach; this led to code-checker development, for example jlint. Research has compared static analysis and code checkers for concurrency bugs. See also List of tools for static code analysis Multi-user approach. This is an approach to testing program concurrency by looking at multiple-user access, either serving different users or tasks simultaneously. Testing software and system concurrency should not be confused with stress testing, which is usually associated with loading a system beyond its defined limits. Testing of concurrent programs can exhibit problems when a system is performing within its defined limits, and most of the approaches above do not rely on overloading a system. Some literature states that testing of concurrency is a pre-requisite to stress testing. Lessons learned from concurrency bug characteristics study A study in 2008 analysed the bug databases of a selection of open-source software. It was thought to be the first real-world study of concurrency bugs. 105 bugs were classified as concurrency bugs and analysed, split as 31 deadlock bugs and 74 non-deadlock bugs. The study had several findings, for potential follow-up and investigation: Approximately one-third of the concurrency bugs cause crashes or hanging programs. Most non-deadlock concurrency bugs are atomicity or order violations, i.e. focusing on atomicity (protected use of shared data) or sequence will potentially find most non-deadlock bugs. Most concurrency bugs involve one or two threads, i.e. heavy simultaneous users/usage is not the trigger for these bugs.
There is a suggestion that pairwise testing may be effective at catching these types of bugs. Over 20% of the deadlock bugs (7/31) occurred with a single thread. Most deadlock concurrency bugs (30/31) involved only one or two resources, an implication that pairwise testing from a resource-usage perspective could be applied to reveal deadlocks. See also Software testing Scalability testing Load testing Software performance testing Scenario analysis Simulation Stress test (hardware) System testing References General References Software testing
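As an illustration of the non-deterministic approach listed above, the same racy test can be executed many times in the hope that some thread interleaving exposes an atomicity violation (a minimal Python sketch; the shortened switch interval is only a trick to make interleavings more frequent, and a given run may or may not fail, which is exactly the non-determinism this approach targets):

```python
import sys
import threading

sys.setswitchinterval(1e-6)  # force frequent thread switches to widen interleavings

def run_once(n_threads=2, n_increments=100_000):
    counter = {"value": 0}
    def work():
        for _ in range(n_increments):
            counter["value"] += 1  # read-modify-write: not atomic
    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["value"]

def test_counter_nondeterministically(repeats=20):
    # Execute the same test sequence multiple times; any run whose result
    # differs from the expected total reveals a lost update.
    expected = 2 * 100_000
    results = [run_once() for _ in range(repeats)]
    lost = [r for r in results if r != expected]
    assert not lost, f"lost updates in {len(lost)}/{repeats} runs: {lost}"

test_counter_nondeterministically()
```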
Concurrent testing
[ "Engineering" ]
735
[ "Software engineering", "Software testing" ]
42,113,563
https://en.wikipedia.org/wiki/Bachelor%20griller
A bachelor griller, mini oven or mini kitchen is a countertop kitchen appliance about the size of a microwave oven but which can instead grill, bake, broil or roast food. It generally incorporates one or two heating elements at the top and bottom of the appliance, has one or two hobs (burners) on the cooktop, or a ceramic hotplate, and may incorporate a rotisserie. It can be used to fry, bake and grill (broil) foods. It is an alternative to reheating prepackaged meals in a microwave oven. Modern bachelor grillers have control knobs to set cooking temperatures. These are steadystates, a combination of a potentiometer and a thermostat, which ensure that the temperature stays stable. History The expression is at least 100 years old, with early versions generally powered by gas. The expression derives from the stereotypical idea that a bachelor will not cook anything properly, if at all. It has also been used as (and may have originated as) a brand name: the 1905 Journal of Gas Lighting, Water Supply & Sanitary Improvement (page 410) describes "illustrations of the firm's "Welcome" and "Bachelor" grillers, their "Vulcan" cooker, and an assortment of brass fittings for gas". George Orwell used a bachelor griller in 1935 while sharing a flat with Rayner Heppenstall in Bloomsbury, London. See also List of stoves Toaster oven References External links Image of a gas-fired bachelor griller, circa 1910. Cooking appliances Home appliances Stoves Ovens
Bachelor griller
[ "Physics", "Technology" ]
329
[ "Physical systems", "Machines", "Home appliances" ]
43,558,650
https://en.wikipedia.org/wiki/Index%20%28statistics%29
In statistics and research design, an index is a composite statistic – a measure of changes in a representative group of individual data points, or in other words, a compound measure that aggregates multiple indicators. Indices – also known as indexes and composite indicators – summarize and rank specific observations. Much data in the fields of social sciences and sustainability are represented in various indices, such as the Gender Gap Index, the Human Development Index or the Dow Jones Industrial Average. The 'Report by the Commission on the Measurement of Economic Performance and Social Progress', written by Joseph Stiglitz, Amartya Sen, and Jean-Paul Fitoussi in 2009, suggests that these measures have experienced a dramatic growth in recent years due to three concurring factors: improvements in the level of literacy (including statistical literacy), the increased complexity of modern societies and economies, and the widespread availability of information technology. According to Earl Babbie, items in indices are usually weighted equally, unless there are some reasons against it (for example, if two items reflect essentially the same aspect of a variable, they could have a weight of 0.5 each). According to the same author, constructing the items involves four steps. First, items should be selected based on their content validity, unidimensionality, the degree of specificity in which a dimension is to be measured, and their amount of variance. Items should be empirically related to one another, which leads to the second step of examining their multivariate relationships. Third, index scores are designed, which involves determining score ranges and weights for the items. Finally, indices should be validated, which involves testing whether they can predict indicators related to the measured variable that were not used in their construction. A handbook for the construction of composite indicators (CIs) was published jointly by the OECD and the European Commission's Joint Research Centre in 2008. The handbook – officially endorsed by the OECD high-level statistical committee – describes ten recursive steps for developing an index: Step 1: Theoretical framework Step 2: Data selection Step 3: Imputation of missing data Step 4: Multivariate analysis Step 5: Normalisation Step 6: Weighting Step 7: Aggregating indicators Step 8: Sensitivity analysis Step 9: Link to other measures Step 10: Visualisation As suggested by the list, many modelling choices are needed to construct a composite indicator, which makes their use controversial; the delicate issue of assigning and validating the weights is a particular subject of discussion in the literature. A sociological reading of the nature of composite indicators is offered by Paul-Marie Boulanger, who sees these measures at the intersection of three movements: the democratisation of expertise, the concept that tackling societal and environmental issues needs more knowledge than the sole experts can provide (a line of thought connected to the concept of the extended peer community developed by post-normal science); the impulse toward the creation of a new public through a process of social discovery, which can be reconnected to the work of pragmatists such as John Dewey; and the semiotics of Charles Sanders Peirce, under which a CI is not just a sign or a number, but suggests an action or a behaviour. A subsequent work by Boulanger analyses composite indicators in light of the social system theories of Niklas Luhmann to investigate how different measurements of progress are or are not taken up.
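Steps 5 to 7 of the handbook (normalisation, weighting and aggregation) are straightforward to illustrate. A minimal sketch using min-max normalisation and a weighted arithmetic mean; the indicator names, values and weights are invented for illustration:

```python
def min_max(values):
    # Step 5: normalise each indicator to the [0, 1] range.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights):
    # Steps 6-7: weight the normalised indicators and aggregate them
    # with a weighted arithmetic mean, yielding one score per unit.
    names = list(indicators)
    normalised = {k: min_max(indicators[k]) for k in names}
    n_units = len(next(iter(indicators.values())))
    total_w = sum(weights[k] for k in names)
    return [sum(weights[k] * normalised[k][i] for k in names) / total_w
            for i in range(n_units)]

# Invented example: three indicators observed for four countries
data = {"literacy": [0.91, 0.84, 0.99, 0.77],
        "life_expectancy": [71, 68, 82, 64],
        "income": [12_000, 9_500, 41_000, 6_200]}
print(composite_index(data, weights={"literacy": 1, "life_expectancy": 1, "income": 1}))
```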
See also Index (economics) Scale (social sciences) References Quantitative research Statistical indicators
Index (statistics)
[ "Mathematics" ]
695
[ "Index numbers", "Mathematical objects", "Numbers" ]
43,559,240
https://en.wikipedia.org/wiki/Distinguished%20limit
In mathematics, a distinguished limit is an appropriately chosen scale factor used in the method of matched asymptotic expansions. External links Singular perturbation theory, Scholarpedia Differential equations Asymptotic analysis
Distinguished limit
[ "Mathematics" ]
45
[ "Mathematical analysis", "Mathematical analysis stubs", "Mathematical objects", "Differential equations", "Equations", "Asymptotic analysis" ]
43,563,154
https://en.wikipedia.org/wiki/Shared-use%20path
A shared-use path, mixed-use path or multi-use pathway is a path which is "designed to accommodate the movement of pedestrians and cyclists". Examples of shared-use paths include sidewalks designated as shared-use, bridleways and rail trails. A shared-use path typically has a surface of asphalt, concrete or firmly packed crushed aggregate. Shared-use paths differ from cycle tracks and cycle paths in that shared-use paths are designed to include pedestrians even if the primary anticipated users are cyclists. The path may also permit other uses such as inline skating; by contrast, motorcycles and mopeds are normally prohibited. Shared-use paths sometimes provide different lanes for users who travel at different speeds, to prevent conflicts between user groups on high-use trails. Shared-use paths are criticised for creating conflict between different users. The UK's Department for Transport deprecates this kind of route in denser urban environments. Types Bridleways In the UK, cyclists are legally permitted to cycle on bridleways (paths open to horse riders), but not on public footpaths. Therefore, bridleways are, in effect, a form of shared-use path. Segregated paths On segregated or divided paths, the path is split into a section for pedestrians and a section for cyclists. This may be achieved with a painted line or a different surface, and it may also be delineated with tactile paving for blind and visually impaired pedestrians. Research by the UK Department for Transport found that cyclists and pedestrians prefer wider non-segregated paths to narrower segregated paths (e.g. a 3 m wide shared path, compared with a 3 m path split into 1.5 m sections). Benefits The principal benefit of a shared-use path is saving space. This may be important in environmentally sensitive areas or on narrow streets, where a full cycle track may not be feasible. Issues Shared-use paths are criticised for creating conflict between pedestrians and cyclists and for generating complaints from pedestrians about cyclists' speed. The paths therefore do not properly take into account the different needs of different road users. For example, a study by the Institute for Chartered Engineers found that users of shared-use paths were confused about the nature of the path and about who has priority on them. Pedestrians are sometimes unsure how to behave on shared-use paths. The question arises whether the path is to be treated as a road (in which case pedestrians should face oncoming traffic) or a path (in which case pedestrians may walk wherever they choose). Shared-use paths alongside the highway often look like sidewalks to motorists. Therefore, in jurisdictions where pedestrians do not have priority at side roads, the priority situation at side roads on shared-use paths can be confusing, and often cyclists are required to give way to turning motorists. By country United Kingdom Before the January 2022 revision, the Highway Code gave no advice to pedestrians on how to share space with cyclists; there was also little guidance given to cyclists. (The 2023 edition covers both aspects.) The UK Department for Transport advises local authorities that cyclists and pedestrians should not be expected to share space on or alongside city streets. Sustrans gives advice for cyclists, walkers and runners using shared-use paths on the National Cycle Network. The Milton Keynes redway system is an example of a city-wide network of shared-use paths.
The network consists of an extensive system of shared-use paths that avoid the city's busy and fast grid roads (which run between neighbourhoods rather than through them). United States In the US, the 1999 AASHTO Guide for the Development of Bicycle Facilities defines a shared-use path as being physically separated from motor vehicular traffic by an open space or barrier. See also Cycling infrastructure List of rail trails Rail trail Shared-use trail section in Trail page References Cycling infrastructure Transport infrastructure
Shared-use path
[ "Physics" ]
779
[ "Physical systems", "Transport", "Transport infrastructure" ]
43,563,159
https://en.wikipedia.org/wiki/Rhino%20ferry
A rhino ferry is a barge constructed from several pontoons which are connected and equipped with outboard engines, used to transport heavy equipment and people. Rhino ferries were used extensively during the Normandy landings and in other theatres (Attu, Africa, Sicily, Italy); their low draft was well suited to shallow beaches, and they could also be used as piers when filled with water. An alternative to tank landing craft, they were operated by United States Navy Construction Battalions, ferrying their cargo from the outlying Landing Ships, Tank to the shore. For the Normandy invasion, components were shipped from the US; initial construction in the UK was by the USN Construction Battalions. Rhinos (and causeways, which used the same components) were also assembled by British Army Royal Engineers. References External links US Navy footage of Rhino barges in action, Normandy, June 11, 1944. Buoyancy devices Coastal construction Operation Overlord Allied logistics in the Western European Campaign (1944–1945)
Rhino ferry
[ "Engineering" ]
209
[ "Construction", "Coastal construction" ]
43,564,795
https://en.wikipedia.org/wiki/Crowdfunded%20satellites
Crowdfunded satellites are artificial satellites that have been funded by crowdfunding rather than more traditional methods of financing. Several crowdfunded satellites were launched in the 2010s, including SkyCube, KickSat and ArduSat, all of which resulted from successful Kickstarter campaigns, and the Russian Mayak, which used the Russian Boomstarter platform. Crowdfunded satellites are an example of public participation in research. References Satellites Citizen science Crowdfunded science
Crowdfunded satellites
[ "Astronomy" ]
99
[ "Satellites", "Outer space" ]
43,568,153
https://en.wikipedia.org/wiki/Cuprophane
Cuprophane is a membrane made of cellulose, commonly used for hemodialysis. Cuprophane is a synthetic non-biocompatible membrane. It has been associated with hemodialysis-associated amyloidosis. References Cellulose Membrane technology
Cuprophane
[ "Chemistry" ]
57
[ "Membrane technology", "Separation processes" ]
43,569,526
https://en.wikipedia.org/wiki/Vista%20Analysis
Vista Analysis (Norwegian: Vista Analyse AS) is a Norwegian research and consultancy firm with its main emphasis on economic research, policy analysis and advice, and evaluations. It was established in 2000, is headquartered at Frogner in Oslo, and focuses on climate, environment and energy, urban and rural development, international development, transport and communications, and welfare state research. The company is owned by several partners. The chairman of the board is the economist Steinar Strøm, a professor of economics at the University of Turin and the University of Oslo. The company is one of the largest firms providing independent policy analysis, evaluations and research for the Government of Norway. Its regular customers include the Ministry of Finance, the Ministry of Foreign Affairs, the Ministry of Petroleum and Energy, the Ministry of Trade and Industry, the Ministry of Transport and Communications, the Ministry of Justice and Public Security, the Ministry of the Environment, and the Norwegian Agency for Development Cooperation. The firm carried out the much-noted 2014 evaluation of the Norwegian ban on purchasing the services of prostitutes. Noted people Steinar Strøm Michael Hoel Sidsel Sverdrup Haakon Vennemo References External links Vista Analyse Research institutes in Norway Social science research institutes Economic research institutes Environmental research institutes Energy research institutes 2000 establishments in Norway Research institutes established in 2000 Companies based in Oslo
Vista Analysis
[ "Engineering", "Environmental_science" ]
275
[ "Energy research institutes", "Energy organizations", "Environmental research institutes", "Environmental research" ]
48,413,286
https://en.wikipedia.org/wiki/Bregman%E2%80%93Minc%20inequality
In discrete mathematics, the Bregman–Minc inequality, or Bregman's theorem, allows one to estimate the permanent of a binary matrix via its row or column sums. The inequality was conjectured in 1963 by Henryk Minc and first proved in 1973 by Lev M. Bregman. Further entropy-based proofs have been given by Alexander Schrijver and Jaikumar Radhakrishnan. The Bregman–Minc inequality is used, for example, in graph theory to obtain upper bounds for the number of perfect matchings in a bipartite graph. Statement The permanent of a square binary matrix $A = (a_{ij})$ of size $n$ with row sums $r_i = a_{i1} + \cdots + a_{in}$ for $i = 1, \ldots, n$ can be estimated by $\operatorname{per} A \leq \prod_{i=1}^n (r_i!)^{1/r_i}$. The permanent is therefore bounded by the product of the geometric means of the numbers from $1$ to $r_i$ for $i = 1, \ldots, n$. Equality holds if the matrix is a block diagonal matrix consisting of matrices of ones or results from row and/or column permutations of such a block diagonal matrix. Since the permanent is invariant under transposition, the inequality also holds for the column sums of the matrix accordingly. Application There is a one-to-one correspondence between a square binary matrix $A$ of size $n$ and a simple bipartite graph $G = (V \cup W, E)$ with equal-sized partitions $V = \{ v_1, \ldots, v_n \}$ and $W = \{ w_1, \ldots, w_n \}$ by taking $a_{ij} = 1 \Leftrightarrow \{ v_i, w_j \} \in E$. This way, each nonzero entry of the matrix $A$ defines an edge in the graph $G$ and vice versa. A perfect matching in $G$ is a selection of $n$ edges, such that each vertex of the graph is an endpoint of one of these edges. Each nonzero summand of the permanent of $A$, that is, each permutation $\sigma$ satisfying $a_{1\sigma(1)} \cdots a_{n\sigma(n)} = 1$, corresponds to a perfect matching of $G$. Therefore, if $M(G)$ denotes the set of perfect matchings of $G$, $|M(G)| = \operatorname{per} A$ holds. The Bregman–Minc inequality now yields the estimate $|M(G)| \leq \prod_{i=1}^n (d(v_i)!)^{1/d(v_i)}$, where $d(v_i)$ is the degree of the vertex $v_i$. Due to symmetry, the corresponding estimate also holds with $d(w_i)$ instead of $d(v_i)$. The number of possible perfect matchings in a bipartite graph with equal-sized partitions can therefore be estimated via the degrees of the vertices of either of the two partitions. Related statements Using the inequality of arithmetic and geometric means, the Bregman–Minc inequality directly implies the weaker estimate $\operatorname{per} A \leq \prod_{i=1}^n \frac{r_i + 1}{2}$, which was proven by Henryk Minc already in 1963. Another direct consequence of the Bregman–Minc inequality is a proof of the following conjecture of Herbert Ryser from 1960. Let $k$ be a divisor of $n$ and let $\Lambda(n,k)$ denote the set of square binary matrices of size $n$ with row and column sums equal to $k$; then $\max_{A \in \Lambda(n,k)} \operatorname{per} A = (k!)^{n/k}$. The maximum is thereby attained for a block diagonal matrix whose diagonal blocks are square matrices of ones of size $k$. A corresponding statement for the case that $k$ is not a divisor of $n$ is an open mathematical problem. See also Computing the permanent References External links Theorems in discrete mathematics
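For small matrices, the bound is easy to check numerically against a brute-force permanent (a sketch; the factorial-time permanent is only feasible for small $n$):

```python
from itertools import permutations
from math import factorial, prod

def permanent(A):
    # Brute-force permanent: sum over all permutations (O(n * n!)).
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def bregman_minc_bound(A):
    # Product over the rows of (r_i!)^(1/r_i), with r_i the i-th row sum.
    bound = 1.0
    for row in A:
        r = sum(row)
        if r > 0:
            bound *= factorial(r) ** (1.0 / r)
    return bound

# Block diagonal matrix of two 2x2 all-ones blocks: the permanent is 4 and
# the bound is (2!)^(1/2) for each of the 4 rows, i.e. 4, so equality holds.
A = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
print(permanent(A), bregman_minc_bound(A))
```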
Bregman–Minc inequality
[ "Mathematics" ]
541
[ "Mathematical theorems", "Mathematical problems", "Discrete mathematics", "Theorems in discrete mathematics" ]
48,415,829
https://en.wikipedia.org/wiki/Qcells
Hanwha Qcells (commonly known as simply Qcells) is a manufacturer of photovoltaic cells. The company is headquartered in Seoul, South Korea, after being founded in 1999 in Bitterfeld-Wolfen, Germany, where the company still has its engineering offices. Qcells was purchased out of bankruptcy in August 2012 by the Hanwha Group, a South Korean business conglomerate. Qcells now operates as a subsidiary of Hanwha Solutions, the group's energy and petrochemical company. Qcells has manufacturing facilities in the United States, Malaysia, and South Korea. The company was the sixth-largest producer of solar cells in 2019, with shipments totaling 7.3 gigawatts. History On 23 July 2001, the company produced its first working polycrystalline solar cell on its new production line in Thalheim. Qcells would grow to become one of the world's largest solar cell manufacturers, employing over 2,000 people and encouraging other companies to open facilities in the surrounding area, which would come to be known as Germany's "Solar Valley". The company went public on 5 October 2005, listing on the Frankfurt Stock Exchange. High share prices during the initial public offering poured money into the company and made the founders wealthy. Lemoine died in 2006, and shortly thereafter, Fest and Grunow left the company to go back into research. Only Milner remained, serving as the company's CEO. In 2005, Qcells established the CdTe PV manufacturer Calyxo. In November 2007, Qcells agreed a deal with Solar Fields, whose intellectual property and assets were merged into Calyxo's newly established subsidiary Calyxo USA. In 2011, Solar Fields took over Calyxo. In 2008, Qcells acquired a 17.9% stake in Renewable Energy Corporation; this stake was sold in 2009. In the same year, Qcells' subsidiary Sontor merged with the thin-film company Solarfilm. In June 2009, the company acquired Solibro, a joint venture it had established in 2006. Solibro manufactured thin-film solar cells based on copper-indium-gallium-diselenide. These modules were marketed until the sale of Solibro to Hanergy in 2012. Qcells was hit hard by the Great Recession in late 2008, with share prices slipping from over 80 euros to under 20. In response, the company laid off 500 employees. Milner resigned as CEO in early 2010, and by the end of the year, the company's finances appeared to stabilize. Just a few months later, in 2011, the global solar cell market crashed, with production overcapacity driving prices extremely low. Qcells saw sales slide by around 1 billion euros and ran a loss of 846 million euros, and on 3 April 2012 the company filed for bankruptcy. In August 2012, the Hanwha Group, a large South Korean business conglomerate, agreed to acquire Qcells, saying that it presented synergy opportunities. In 2010, Hanwha had purchased a 49.99% share in the Chinese manufacturer Solarfun, which had been renamed Hanwha SolarOne. SolarOne had been producing solar cells for Qcells under contract. Due to high costs, production in Germany ceased in 2015, with Hanwha moving the work to its SolarOne facilities in China and newly opened manufacturing facilities in Malaysia and South Korea. In 2019, Qcells opened its first manufacturing facility in the United States.
Hanwha has since worked to simplify the structure of its units: SolarOne was merged into Qcells in December 2014; Qcells was merged with the company's Advanced Materials (petrochemicals) group in 2018; Qcells & Advanced Materials acquired a solar company operated by the Hanwha Chemicals group in 2019; and in 2020 Hanwha Qcells & Advanced Materials merged with Hanwha Chemical to form the Hanwha Solutions group. In January 2023, Qcells made a commitment to invest more than $2.5 billion to build a fully integrated, silicon-based solar supply chain in the United States, from raw material to finished module, with full production expected by the end of 2024. In August 2024, Qcells received a conditional commitment for a future $1.45 billion loan from the US Department of Energy to help finance the construction of a fully integrated solar cell manufacturing facility north of Atlanta, Georgia. The loan guarantee was approved in part because Qcells had received an order from Microsoft for 12 gigawatts of solar panels through 2032, demonstrating a market for its product. The loan was finalized in December 2024. Qcells also operates a residential solar financing platform in the United States, EnFin, offering loans to those who choose to install PV systems in their homes. In August 2023, the U.S. Department of Commerce ruled that Qcells had not circumvented tariffs on Chinese-made goods, following an investigation involving multiple photovoltaic cell manufacturers. In July 2024, it was reported that Hanwha Qcells' factory in Dalton, Georgia, was importing cells made with Chinese wafers from TCL Zhonghuan Renewable Energy Technology Co. and Gokin Solar Co., wafer suppliers who source Xinjiang, China polysilicon from Daqo and GCL, both of which are on the UFLPA Entity List. However, the report stated that there was no evidence that components containing the banned polysilicon had turned up in Qcells panels. Large manufacturers have their own separate duty rates, and several big China-based producers received far lower rates than Hanwha Qcells. The Commerce Department calculated a subsidy rate of 14.72% for Hanwha Qcells products produced in Malaysia, based in part on government loans and below-market land provisions to the company in that country. Operations Qcells develops and produces monocrystalline silicon photovoltaic cells and solar panels. It produces and installs PV systems for commercial, industrial, and residential applications and provides EPC services for large-scale solar power plants. The company's engineering offices are located at the original headquarters in Thalheim, Germany. Production facilities are located in Dalton, Georgia, and Cartersville, Georgia, in the United States; Cyberjaya in Malaysia; and Jincheon in South Korea. See also List of photovoltaics companies Photovoltaic array Photovoltaics Theory of solar cells Thin-film cell References External links Technology companies established in 1999 Engineering companies of South Korea Manufacturing companies based in Seoul Photovoltaics manufacturers Thin-film cell manufacturers Silicon wafer producers Hanwha subsidiaries Companies formerly listed on the Nasdaq South Korean brands South Korean companies established in 1999
Qcells
[ "Engineering" ]
1,403
[ "Photovoltaics manufacturers", "Engineering companies" ]
48,416,304
https://en.wikipedia.org/wiki/Underlying%20event
In particle physics, underlying event (UE) refers to the additional interactions of two particle beams at a collision point beyond the main collision under study. Specifically, the term is used for hadron collider events which do not originate from the primary hard scattering (high-energy, high-momentum impact) process. The term was first defined in 2002. Further explanation Underlying events can be thought of as the remnants of scattering interactions. The UE may involve contributions from both "hard" and "soft" processes (here "soft" refers to interactions with low transverse-momentum (p_T) transfer). These are important both in the simulation of particle experiments (often using event generators) and in the interpretation and analysis of data, so as to filter out the desired signals. Features Contents of the UE include initial and final state radiation, beam-beam remnants, multiple parton interactions, pile-up, and noise. See also Minimum bias event Drell–Yan process and the underlying event References External links CMS measures the 'underlying event' in pp collisions Particle physics Scattering
Underlying event
[ "Physics", "Chemistry", "Materials_science" ]
218
[ "Condensed matter physics", "Scattering", "Particle physics", "Nuclear physics" ]
48,420,059
https://en.wikipedia.org/wiki/Dilatancy%20%28granular%20material%29
In soil mechanics, dilatancy or shear dilatancy is the volume change observed in granular materials when they are subjected to shear deformations. This effect was first described scientifically by Osborne Reynolds in 1885/1886 and is also known as Reynolds dilatancy; it was later brought into the field of geotechnical engineering. Unlike most other solid materials, a compacted dense granular material tends to dilate (expand in volume) as it is sheared. This occurs because the grains in a compacted state are interlocking and therefore do not have the freedom to move around one another. When stressed, a lever motion occurs between neighboring grains, which produces a bulk expansion of the material. On the other hand, when a granular material starts in a very loose state it may continuously compact instead of dilating under shear. A sample of a material is called dilative if its volume increases with increasing shear and contractive if the volume decreases with increasing shear. Dilatancy is a common feature of soils and sands. Its effect can be seen when the wet sand around the foot of a person walking on a beach appears to dry up: the deformation caused by the foot expands the sand under it, and the water in the sand moves to fill the new space between the grains. Phenomenon The phenomenon of dilatancy can be observed in a drained simple shear test on a sample of dense sand. In the initial stage of deformation, the volumetric strain decreases as the shear strain increases. But as the stress approaches its peak value, the volumetric strain starts to increase. After some more shear, the soil sample has a larger volume than when the test was started. The amount of dilation depends strongly on the initial density of the soil: in general, the denser the soil, the greater the amount of volume expansion under shear. It has also been observed that the angle of internal friction decreases as the effective normal stress is decreased. The relationship between dilation and internal friction is typically illustrated by the sawtooth model of dilatancy, where the angle of dilation is analogous to the angle made by the teeth to the horizontal. Such a model can be used to infer that the observed friction angle is equal to the dilation angle plus the friction angle for zero dilation. Why is dilatancy important? Because of dilatancy, the angle of friction increases as the confinement increases, until it reaches a peak value. After the peak strength of the soil is mobilized, the angle of friction abruptly decreases. As a result, geotechnical engineering of slopes, footings, tunnels, and piles in such soils has to consider the potential decrease in strength after the soil strength reaches this peak value. Poorly or uniformly graded, non-plastic silt with trace sand to sandy silt can be associated with challenges during construction, even when it is hard. These materials often appear to be granular because the silt is so coarse, and thus may be described as dense to very dense. Vertical excavations below the water table in these soil types exhibit short-term stability, similar to many dense sandy soil deposits, in part due to matric suction. However, as shearing of the soil occurs in the active wedge due to gravity forces, strength is lost and the rate of failure accelerates. This can be exacerbated by hydrostatic forces developing at the location(s) where water (drains to and) collects in tension cracks in or near the back of the active wedge.
Generally, retrogressive spalling manifests, often accompanied by piping / internal erosion. The use of appropriate filters is critical to managing these materials; a preferred filter might be a #4-sized clear gravel / coarse-grained sand, a commercial aggregate which is generally readily available. Some non-woven filter fabrics are also suitable. As with all filters, the D15 and D50 compatibility criteria should be checked. Dilatancy cut-off After extensive shearing, dilating materials arrive at a state of critical density where dilatancy has come to an end. This phenomenon of soil behaviour can be included in the Hardening Soil model by means of a dilatancy cut-off. In order to specify this behaviour, the initial void ratio, e_init, and the maximum void ratio, e_max, of the material must be entered as general parameters. As soon as the volume change results in a state of maximum void ratio, the mobilised dilatancy angle, psi_mob, is automatically set back to zero. See also Triaxial shear tests μ(I) rheology: one model of the rheology of a granular flow. References Soil mechanics
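The sawtooth model and the dilatancy cut-off described above lend themselves to a small numerical sketch (the parameter names follow the text; this is an illustration, not code from any particular constitutive model implementation):

```python
def peak_friction_angle(phi_cv, psi):
    # Sawtooth model: the observed (peak) friction angle equals the
    # zero-dilation friction angle plus the dilation angle (degrees).
    return phi_cv + psi

def mobilised_dilatancy(e, e_max, psi):
    # Dilatancy cut-off: once the void ratio e has grown to the maximum
    # void ratio e_max, the mobilised dilatancy angle is set back to zero.
    return psi if e < e_max else 0.0

# Dense sand example: phi_cv = 33 deg and psi = 8 deg give a peak angle of
# 41 deg, dropping back toward 33 deg once the critical void ratio is reached.
print(peak_friction_angle(33.0, 8.0))
print(mobilised_dilatancy(e=0.85, e_max=0.80, psi=8.0))  # past cut-off -> 0.0
```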
Dilatancy (granular material)
[ "Physics" ]
933
[ "Soil mechanics", "Applied and interdisciplinary physics" ]
52,218,453
https://en.wikipedia.org/wiki/Gradient-enhanced%20kriging
Gradient-enhanced kriging (GEK) is a surrogate modeling technique used in engineering. A surrogate model (alternatively known as a metamodel, response surface or emulator) is a prediction of the output of an expensive computer code. This prediction is based on a small number of evaluations of the expensive computer code. Introduction Adjoint solvers are now becoming available in a range of computational fluid dynamics (CFD) solvers, such as Fluent, OpenFOAM, SU2 and US3D. Originally developed for optimization, adjoint solvers are now finding more and more use in uncertainty quantification. Linear speedup An adjoint solver allows one to compute the gradient of the quantity of interest with respect to all design parameters at the cost of one additional solve. This, potentially, leads to a linear speedup: the computational cost of constructing an accurate surrogate decreases, and the resulting computational speedup scales linearly with the number of design parameters. The reasoning behind this linear speedup is straightforward. Assume we run $n$ primal solves and $n$ adjoint solves, at a total cost of $2n$. This results in $n(d+1)$ data, where $d$ is the number of design parameters: $n$ values for the quantity of interest and $d$ partial derivatives in each of the $n$ gradients. Now assume that each partial derivative provides as much information for our surrogate as a single primal solve. Then, the total cost of getting the same amount of information from primal solves only is $n(d+1)$. The speedup is the ratio of these costs: $\frac{n(d+1)}{2n} = \frac{d+1}{2}$. A linear speedup has been demonstrated for a fluid-structure interaction problem and for a transonic airfoil. Noise One issue with adjoint-based gradients in CFD is that they can be particularly noisy. When derived in a Bayesian framework, GEK allows one to incorporate not only the gradient information, but also the uncertainty in that gradient information. Approach When using GEK one takes the following steps: Create a design of experiment (DoE): The DoE or 'sampling plan' is a list of different locations in the design space. The DoE indicates which combinations of parameters one will use to sample the computer simulation. With Kriging and GEK, a common choice is to use a Latin hypercube sampling (LHS) design with a 'maximin' criterion. The LHS design is available in scripting languages like MATLAB or Python. Make observations: For each sample in our DoE one runs the computer simulation to obtain the Quantity of Interest (QoI). Construct the surrogate: One uses the GEK predictor equations to construct the surrogate conditional on the obtained observations. Once the surrogate has been constructed it can be used in different ways, for example for surrogate-based uncertainty quantification (UQ) or optimization. Predictor equations In a Bayesian framework, we use Bayes' Theorem to predict the Kriging mean and covariance conditional on the observations. When using GEK, the observations are usually the results of a number of computer simulations. GEK can be interpreted as a form of Gaussian process regression. Kriging We are interested in the output $x$ of our computer simulation, for which we assume the normal prior probability distribution $x \sim \mathcal{N}(\mu, P)$, with prior mean $\mu$ and prior covariance matrix $P$. The observations $y$ have the normal likelihood $y \sim \mathcal{N}(Hx, R)$, with $H$ the observation matrix and $R$ the observation error covariance matrix, which contains the observation uncertainties.
After applying Bayes' Theorem we obtain a normally distributed posterior probability distribution, with Kriging mean μ* = μ + K(y − Hμ) and Kriging covariance P* = P − KHP, where we have the gain matrix K = PHᵀ(HPHᵀ + R)⁻¹. In Kriging, the prior covariance matrix P is generated from a covariance function. One example of a covariance function is the Gaussian covariance P_ij = σ² exp(−Σ_k θ_k (x_{i,k} − x_{j,k})²), where we sum over the dimensions k and the x_i are the input parameters. The hyperparameters μ, σ and θ can be estimated from a Maximum Likelihood Estimate (MLE). Indirect GEK There are several ways of implementing GEK. The first method, indirect GEK, defines a small but finite step size h, and uses the gradient information to append synthetic data to the observations y. Indirect Kriging is sensitive to the choice of the step size h and cannot include observation uncertainties. Direct GEK (through prior covariance matrix) Direct GEK is a form of co-Kriging, where we add the gradient information as co-variables. This can be done by modifying the prior covariance P or by modifying the observation matrix H; both approaches lead to the same GEK predictor. When we construct direct GEK through the prior covariance matrix, we append the partial derivatives to y, and modify the prior covariance matrix such that it also contains the derivatives (and second derivatives) of the covariance function. The main advantages of direct GEK over indirect GEK are: 1) we do not have to choose a step size, 2) we can include observation uncertainties for the gradients in R, and 3) it is less susceptible to poor conditioning of the gain matrix K. Direct GEK (through observation matrix) Another way of arriving at the same direct GEK predictor is to append the partial derivatives to the observations y and include partial derivative operators in the observation matrix H. Gradient-enhanced kriging for high-dimensional problems (indirect method) Current gradient-enhanced kriging methods do not scale well with the number of sampling points due to the rapid growth in the size of the correlation matrix, where new information is added for each sampling point in each direction of the design space. Furthermore, they do not scale well with the number of independent variables due to the increase in the number of hyperparameters that need to be estimated. To address this issue, a gradient-enhanced surrogate model approach has been developed that drastically reduces the number of hyperparameters through the use of the partial least squares method while maintaining accuracy. In addition, this method is able to control the size of the correlation matrix by adding only relevant points defined through the information provided by the partial least squares method. This approach is implemented in the Surrogate Modeling Toolbox (SMT) in Python (https://github.com/SMTorg/SMT), and it runs on Linux, macOS, and Windows. SMT is distributed under the New BSD license. Augmented gradient-enhanced kriging (direct method) A universal augmented framework has been proposed to append derivatives of any order to the observations. This method can be viewed as a generalization of direct GEK that takes into account higher-order derivatives. Also, the observations and derivatives are not required to be measured at the same location under this framework. Example: Drag coefficient of a transonic airfoil As an example, consider the flow over a transonic airfoil. The airfoil is operating at a Mach number of 0.8 and an angle of attack of 1.25 degrees.
We assume that the shape of the airfoil is uncertain; the top and the bottom of the airfoil might have shifted up or down due to manufacturing tolerances. In other words, the shape of the airfoil that we are using might be slightly different from the airfoil that we designed. Reference results for the drag coefficient of the airfoil can be computed from a large number of CFD simulations. Note that the lowest drag, which corresponds to 'optimal' performance, is close to the undeformed 'baseline' design of the airfoil at (0,0). After designing a sampling plan and running the CFD solver at those sample locations, we obtain the Kriging surrogate model. The Kriging surrogate is close to the reference, but perhaps not as close as we would desire. The accuracy of this surrogate model can be improved further by including the adjoint-based gradient information and applying GEK. Applications GEK has found the following applications: 1993: Design problem for a borehole model test-function. 2002: Aerodynamic design of a supersonic business jet. 2008: Uncertainty quantification for a transonic airfoil with uncertain shape parameters. 2009: Uncertainty quantification for a transonic airfoil with uncertain shape parameters. 2012: Surrogate model construction for a panel divergence problem, a fluid-structure interaction problem. Demonstration of a linear speedup. 2013: Uncertainty quantification for a transonic airfoil with uncertain angle of attack and Mach number. 2014: Uncertainty quantification for the RANS simulation of an airfoil, with the model parameters of the k-epsilon turbulence model as uncertain inputs. 2015: Uncertainty quantification for the Euler simulation of a transonic airfoil with uncertain shape parameters. Demonstration of a linear speedup. 2016: Surrogate model construction for two fluid-structure interaction problems. 2017: Large review of gradient-enhanced surrogate models including many details concerning gradient-enhanced kriging. 2017: Uncertainty propagation for a nuclear energy system. 2020: Molecular geometry optimization. References Mathematical modeling Computational fluid dynamics
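The predictor equations above are compact enough to state in code. The following is a minimal numpy sketch of the plain Kriging predictor (posterior mean and covariance via the gain matrix) with a Gaussian covariance; the fixed hyperparameters, the implicit identity observation matrix and the test function are assumptions made for this example, and direct GEK would further append gradient observations to y and derivative blocks to P.

    import numpy as np

    def gauss_cov(xa, xb, sigma=1.0, theta=10.0):
        # Gaussian covariance: sigma^2 * exp(-theta * sum_k (xa_k - xb_k)^2)
        d2 = np.sum((xa[:, None, :] - xb[None, :, :]) ** 2, axis=-1)
        return sigma ** 2 * np.exp(-theta * d2)

    def kriging_predict(x_obs, y_obs, x_new, r=1e-10, mu=0.0):
        P_oo = gauss_cov(x_obs, x_obs)          # prior covariance of observations
        P_no = gauss_cov(x_new, x_obs)          # prior cross-covariance
        P_nn = gauss_cov(x_new, x_new)          # prior covariance at new points
        R = r * np.eye(len(x_obs))              # observation error covariance
        K = P_no @ np.linalg.inv(P_oo + R)      # gain matrix
        mean = mu + K @ (y_obs - mu)            # Kriging mean
        cov = P_nn - K @ P_no.T                 # Kriging covariance
        return mean, cov

    # Usage: five samples of a 1-D test function, prediction on a fine grid
    x_obs = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
    y_obs = np.sin(4.0 * x_obs).ravel()
    x_new = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
    mean, cov = kriging_predict(x_obs, y_obs, x_new)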
Gradient-enhanced kriging
[ "Physics", "Chemistry", "Mathematics" ]
1,878
[ "Mathematical modeling", "Computational fluid dynamics", "Applied mathematics", "Computational physics", "Fluid dynamics" ]
52,219,057
https://en.wikipedia.org/wiki/Luttinger%E2%80%93Ward%20functional
In solid state physics, the Luttinger–Ward functional, proposed by Joaquin Mazdak Luttinger and John Clive Ward in 1960, is a scalar functional of the bare electron-electron interaction and the renormalized one-particle propagator. In terms of Feynman diagrams, the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible diagrams, i.e., all diagrams without particles going in or out that do not fall apart if one removes two propagator lines. It is usually written as Φ[G] or Φ[G, U], where G is the one-particle Green's function and U is the bare interaction. The Luttinger–Ward functional has no direct physical meaning, but it is useful in proving conservation laws. The functional is closely related to the Baym–Kadanoff functional constructed independently by Gordon Baym and Leo Kadanoff in 1961. Some authors use the terms interchangeably; if a distinction is made, then the Baym–Kadanoff functional is identical to the two-particle irreducible effective action, which differs from the Luttinger–Ward functional by a trivial term. Construction Given a system characterized by an action S in terms of Grassmann fields, the partition function can be expressed as the path integral Z[J] over those fields, where J is a binary source field. By expansion in the Dyson series, one finds that Z[J] is the sum of all (possibly disconnected), closed Feynman diagrams. The linked-cluster theorem asserts that the effective action W[J] = ln Z[J] is the sum of all closed, connected, bare diagrams. W[J] in turn is the generating functional of the connected Green's functions; the two-particle connected Green's function, for example, is obtained from its second functional derivative. To pass to the two-particle irreducible (2PI) effective action, one performs a Legendre transform of W[J] to a new binary source field. One chooses the (at this point arbitrary, convex) propagator G as the source and obtains the 2PI functional, also known as the Baym–Kadanoff functional. Unlike the connected case, one more step is required to obtain a generating functional from the two-particle irreducible effective action because of the presence of a non-interacting part. By subtracting it, one obtains the Luttinger–Ward functional Φ[G], whose functional derivative yields the self-energy Σ. Along the lines of the proof of the linked-cluster theorem, one can show that Φ is the generating functional for the two-particle irreducible propagators. Properties Diagrammatically, the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible Feynman diagrams (also known as “skeleton” diagrams): The diagrams are closed as they do not have any external legs, i.e., no particles going in or out of the diagram. They are “bold” because they are formulated in terms of the interacting or bold propagator rather than the non-interacting one. They are two-particle irreducible since they do not become disconnected if we sever up to two fermionic lines. The Luttinger–Ward functional is related to the grand potential of the system. Φ is a generating functional for irreducible vertex quantities: the first functional derivative with respect to G gives the self-energy Σ = δΦ/δG, while the second derivative gives the partially two-particle irreducible four-point vertex. While the Luttinger–Ward functional exists, it can be shown to be not unique for Hubbard-like models. In particular, the irreducible vertex functions show a set of divergencies, which causes the self-energy to bifurcate into a physical and an unphysical solution.
Baym and Kadanoff showed that any approximation derived from a functional Φ[G] satisfies the conservation laws, thanks to Noether's theorem. This follows from the fact that the equation of motion of G responding to one-body external fields respects the space- and time-translational symmetries as well as the abelian gauge symmetry (phase symmetry), as long as the equation of motion is given by the derivative of Φ. Note that the reverse is also true. Based on the diagrammatic analysis, what Baym found is that the symmetry δΣ(1,2)/δG(3,4) = δΣ(3,4)/δG(1,2) is needed to satisfy the conservation laws. This is nothing but the complete-integrability condition, implying the existence of a functional Φ such that Σ = δΦ/δG (recall the complete-integrability condition for a differential form). Thus the remaining problem is how to determine Φ approximately. Such approximations are called conserving approximations. Some examples: The (fully self-consistent) GW approximation is equivalent to truncating Φ to the so-called ring diagrams, which consist of polarisation bubbles connected by interaction lines. Dynamical mean field theory is equivalent to taking only purely local diagrams into account, Φ[G_ij] ≈ Φ[G_ii], where i, j are lattice site indices. See also Luttinger's theorem Ward identity References Condensed matter physics Fermions
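The stationarity structure that underlies these statements can be summarised compactly. The following LaTeX block is a hedged sketch in one common convention (signs and normalisations differ between authors, so this is not the unique form):

    % One common convention; signs and normalisations vary between authors.
    \[
      \Sigma(1,2) = \frac{\delta \Phi[G]}{\delta G(2,1)}, \qquad
      \Omega[G] = \Phi[G] + \operatorname{Tr} \ln G
                - \operatorname{Tr}\!\left( G_0^{-1} G - 1 \right),
    \]
    \[
      \frac{\delta \Omega}{\delta G} = 0
      \quad\Longleftrightarrow\quad
      G^{-1} = G_0^{-1} - \Sigma .
    \]

At the stationary point the functional reproduces the Dyson equation, which is why conserving (Φ-derivable) approximations are constructed by truncating Φ and deriving Σ from it.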
Luttinger–Ward functional
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,015
[ "Matter", "Fermions", "Phases of matter", "Materials science", "Condensed matter physics", "Subatomic particles" ]
52,223,658
https://en.wikipedia.org/wiki/Maskin%20monotonicity
Maskin monotonicity is a desired property of voting systems suggested by Eric Maskin. Each voter reports his entire preference relation over the set of alternatives. The set of reports is called a preference profile. A social choice rule maps the preference profile to the selected alternative. Suppose that for a preference profile R the alternative a is chosen, and consider another preference profile R′ in which, for every voter, the position of a relative to each of the other alternatives either improves or stays the same as in R. Maskin monotonicity requires that a still be chosen at R′. Maskin monotonicity is a necessary condition for implementability in Nash equilibrium. Moreover, any social choice rule that satisfies Maskin monotonicity and another property called "no veto power" can be implemented in Nash equilibrium if there are three or more voters. See also Monotonicity (mechanism design) The monotonicity criterion in voting systems References Mechanism design Voting
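The definition lends itself to a brute-force check on small examples. The following Python sketch is illustrative only: the rule is supplied as a dictionary from profiles to alternatives, preferences are strict rankings (best first), and all names are assumptions made for the example.

    from itertools import permutations, product

    def keeps_or_improves(a, r_old, r_new, alternatives):
        # a's position relative to every other alternative improves or stays the same
        for b in alternatives:
            if b == a:
                continue
            if r_old.index(a) < r_old.index(b) and r_new.index(a) > r_new.index(b):
                return False
        return True

    def is_maskin_monotonic(rule, n_voters, alternatives):
        profiles = list(product(permutations(alternatives), repeat=n_voters))
        for p in profiles:
            a = rule[p]
            for q in profiles:
                if all(keeps_or_improves(a, p[i], q[i], alternatives)
                       for i in range(n_voters)):
                    if rule[q] != a:
                        return False  # a kept its standing yet was not chosen
        return True

    # Usage: a dictatorial rule (voter 0's top choice) passes the check
    alts = ("x", "y", "z")
    dictator = {p: p[0][0] for p in product(permutations(alts), repeat=2)}
    print(is_maskin_monotonic(dictator, 2, alts))  # True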
Maskin monotonicity
[ "Mathematics" ]
180
[ "Game theory", "Mechanism design" ]
52,229,069
https://en.wikipedia.org/wiki/Communicative%20planning
Communicative planning is an approach to urban planning that gathers stakeholders and engages them in a process to make decisions together in a manner that respects the positions of all involved. It is also sometimes called collaborative planning or the collaborative planning model among planning practitioners. History and theory Since the 1970s, communicative planning theory has formed based on several key understandings. These key points include the notions that communication and reasoning come in diverse forms, knowledge is socially constructed, and people’s diverse interests and preferences are formed out of their social contexts. Communicative theory also draws on Foucauldian analyses of power in that it recognizes that power relations exist in practice and have the ability to oppress individuals. Specific to a community and urban planning context, communicative theory acknowledges that planners' own actions, words, lived experiences, and communication styles have an effect on the planning process the planner is facilitating. Finally, communicative planning theory advances the idea that planning happens in everyday practice and social relations, and consensus-building can be used to organize people's thoughts and move past traditional ways of knowing and decision-making. In the 1990s, a number of planning scholars began writing about a new orientation to urban planning theory that moved away from the prevalent rational approach to planning. Judith Innes is credited with coining the term "communicative planning" in her article Planning Theory’s Emerging Paradigm: Communicative Action and Interactive Practice. Innes tries to bridge the gap between planning theories and planning in practice, and offers consensus-building as a tool for urban planners to create collaborative and engaging planning environments that allow different stakeholders to participate. Around the same time as this article was published, Patsy Healey also published a number of planning theory texts exploring communicative and collaborative planning. Drawing on the theory of Jürgen Habermas in particular, Healey's work focuses on the impact that communicative acts (which can be in spoken or written form) have on a community planning process. The emerging field of therapeutic planning is closely related to communicative planning. Therapeutic planning operates on the basis that communities can experience collective trauma, including from past planning processes, and that carefully facilitated community engagement can act as a catalyst for community-wide healing. Some planning practitioners use untraditional planning approaches, such as filmmaking and other artistic media, to engage community members in therapeutic planning processes. Scholars and texts This section provides a short list of works written by planning academics on the subject of collaborative planning. Communicative process and tools In a communicative planning process, planning practitioners play more of a facilitative role. They often act as a ‘knowledge mediator and broker’ to help reframe problems in order to promote more creative thinking about possible solutions. Throughout this process, information should be produced collectively by the full range of stakeholders who may be affected by the outcome of the process. In particular, all of the stakeholders should be involved in negotiating both the problem definition and the solution together.
In doing so, solutions to conflicts amongst stakeholders may be re-framed as ‘win-win’, as opposed to the ‘zero sum’ mindset which occurs when stakeholders are bargaining on the basis of their own fixed interests. Consensus-building is an important part of this collective meaning-making process, as information is discussed and validated within the group of stakeholders, resulting in information which holds more significance to the group. To aid in consensus-building efforts, power should be distributed amongst the stakeholders such that they are equals in the process. Openness and trust are also crucial for building consensus. The objectives, underlying assumptions, and positions of these stakeholders should be considered along with the uncertainties about future conditions, such as population growth, and decisions which are linked to other decisions. It is important to have the stakeholders identify this information for themselves, as it will help reduce the biases present in both analyses driven by only one future and position-based discussions, as well as bring to the forefront any conflicts between the underlying values of the stakeholders. By considering this broad range of information, commonalities between different stakeholders may be identified, which can help build consensus. However, this cannot guarantee consensus, as positions might in fact be too different. In order to deal with the challenges that arise from positions being very different and the increasing complexity of analysis required, new models of collaboration are needed which build on various principles of conflict management, including engaging early and engaging often. Case studies The Neighbourhood Revitalization Program (NRP) - 1990 In 1990, the city of Minneapolis, Minnesota launched a 20-year program designed to empower residents in local decision making and share community planning responsibilities among residential, government and private stakeholders. To combat the dwindling standard of living within Minneapolis neighbourhoods, the NRP was conceptualized as a means of involving citizens in the prioritization of revitalization efforts. The Minneapolis government divided 400 million dollars between 81 neighbourhood organizations who utilized the funding over two decades to assess priorities, reach consensus and implement neighbourhood improvement projects. Within the first decade of the NRP, 48% of funding was used for upgrading housing and 16% went towards job creation and economic developments. Other priorities included public safety, the preservation of green space and improving transportation infrastructure. Through the completion and adoption of 66 unique neighbourhood plans, stakeholders from various organizations including the general public, Minneapolis Public Library, Minneapolis Parks and Recreation, Public Works, Housing Inspection and Hennepin County all came together to articulate and agree upon feasible and mutually beneficial neighbourhood directives. With emphasis placed on citizen participation, municipal planners took on an advisory role and assisted neighbourhood planning organizations in encouraging participation, engaging a diverse audience and reviewing completed plans through a technical lens. Despite the creation of Participation Agreements which stood as formal commitments to holding an inclusive engagement process, the NRP has been criticized for a lack of representation from all neighbourhood members. 
While the NRP has been applauded for its communicative and collaborative values, critics point to cases of exclusion and the enormous amount of continuous time and energy required for its success as main drawbacks. Seattle's Neighbourhood Planning Program - 1994 In 1994, Seattle developed the Neighbourhood Planning Program (NPP) in response to outcry from the general public surrounding a lack of involvement in a recently completed comprehensive plan. The NPP intended to build a partnership between residents and the local government and provided neighbourhoods with the choice to create their own unique local plan or continue under the comprehensive plan. While these neighbourhood plans had to be consistent with the broad goals of the comprehensive plan, participating neighbourhoods were afforded the opportunity to identify their own priorities and provide a list of recommendations to the city. Initially, each participating neighbourhood was given 10,000 dollars to begin a communicative engagement process and identify a vision for their local community. Additional funding for the planning stage would not be awarded until the City felt as though enough stakeholders and community representatives had been included in the process. Once the visioning process was deemed to be inclusive and rigorous, the city provided each neighbourhood with between 60,000 and 100,000 dollars to develop a plan. In total, 38 neighbourhoods participated and developed their own neighbourhood plan for the municipality to follow. Before approving each neighbourhood plan, the municipality would hold public hearings in the neighbourhood to share the plan and ensure there was consensus among all the residents in the area. By 1999, the City had adopted these plans and began implementing the shared visions of each neighbourhood. Each plan varied significantly as each neighbourhood was afforded the opportunity to hire their own planner or consultants to assist them in the process. Planning professionals participated in the process mainly as mediators who helped guide participatory sessions and facilitated the consensus-building process. Between 20,000 and 30,000 residents participated directly in the NPP. The program has been recognized as a successful example of communicative planning and collaborative governance due to the high level of participation and the frequency with which consensus was genuinely reached.
Newer critiques argue collaborative planning is a way to maintain larger political and institutional systems while creating a process that only seems to better represent the public. They see collaborative planning as a way to keep neoliberals in power and political systems stable, rather than creating real changes to the governing system. References External links Urban planning Environmental social science Urban geography Human overpopulation Urban design
Communicative planning
[ "Engineering", "Environmental_science" ]
1,869
[ "Urban planning", "Environmental social science", "Architecture" ]
28,496,873
https://en.wikipedia.org/wiki/LISE%2B%2B
The program LISE++ is designed to predict the intensity and purity of radioactive ion beams (RIB) produced by in-flight separators. LISE++ also facilitates the tuning of experiments where its results can be quickly compared to on-line data. The program is constantly expanding and evolving from the feedback of its users around the world. Description The aim of LISE++ is to simulate the production of RIBs via some type of nuclear reaction (several are available in the program) between a beam of stable isotopes and a target. The program simulates the characteristics of the nuclear reactions based on well-established models, as well as the effects of the filtering device located downstream of the target used to create the RIBs. The LISE++ name is borrowed from the well-known evolution of the C programming language into C++, and is meant to indicate that the program is no longer limited to a fixed configuration as the original "LISE" program was, but can be configured to match any type of device, or add to an existing device, using the concept of modular blocks. Many physical phenomena are incorporated in this program, from reaction mechanism models, cross section systematics, electron stripping models and energy loss models to beam optics, just to list a few. The references for the calculations are available within the program itself (see the various option windows) and the user is encouraged to consult them for detailed information. The interface and algorithms are designed to provide a user-friendly environment allowing easy adjustments of the input parameters and quick calculations. Application The ability to predict as well as identify on-line the composition of RIBs is of prime importance. This has shaped the main functions of the program: predict the fragment separator settings necessary to obtain a specific RIB; predict the intensity and purity of the chosen RIB; simulate identification plots for on-line comparison; provide a highly user-friendly graphical environment; allow configuration for different fragment separators. The LISE++ package includes configuration files for most of the existing fragment and recoil separators found in the world (examples of fragment separators whose configurations are available in LISE++). Projectile fragmentation, fusion–evaporation, fusion–fission, Coulomb fission, abrasion–fission and two-body nuclear reaction models are included in this program and can be used as the production reaction mechanism to simulate experiments at beam energies above the Coulomb barrier. LISE++ can be used not only to forecast the yields and purities of radioactive beams, but also as an on-line tool for beam identification and tuning during experiments. Substantial progress has recently been made in ion-beam optics with the introduction of "elemental" blocks, which allow optical matrix calculations within LISE++. New types of configurations based on these blocks allow a detailed analysis of the transmission, useful for fragment separator design, and can be used for optics optimization based on user constraints. It can be configured to simulate the fragment separators of various research institutes by means of configuration files.
Utilities Many “satellite” tools have been incorporated into the LISE++ framework, which are accessible with buttons on the main toolbar and include: Physical calculator Relativistic Kinematics calculator Evaporation calculator Radiation Residue Calculator Units converter ISOL catcher utility Nuclide and Isomeric state Databases utilities Stripper foil lifetime utility The program PACE4 (fusion-evaporation code) by A. Gavron et al. Spectrometric calculator by J. Kantele The program CHARGE (charge state distribution code) by Th. Stöhlker et al. The program GLOBAL (charge-state distribution code) by W. E. Meyerhof et al. The program BI (search for 2-dimensional peaks) MOTER by H. A. Thiessen et al.: raytracing code with optimization capabilities operating under MS Windows See also Examples of Fragment separators at LISE++ A1900 @ NSCL/MSU (USA) LISE @ GANIL (France) FRS @ GSI (Germany) BigRIPS & RIPS @ RIBF/RIKEN (Japan) Accullina @ JINR (Russia) Simulation programs used to calculate the transport of ion beams MOCADI Beam TRANSPORT code COSY INFINITY References Physics software Scientific simulation software
LISE++
[ "Physics" ]
890
[ "Physics software", "Computational physics" ]
28,500,068
https://en.wikipedia.org/wiki/Infrared%20open-path%20detector
Infrared open-path gas detectors send out a beam of infrared light, detecting gas anywhere along the path of the beam. This linear 'sensor' is typically a few metres up to a few hundred metres in length. Open-path detectors can be contrasted with infrared point sensors. They are widely used in the petroleum and petrochemical industries, mostly to achieve very rapid gas leak detection for flammable gases at concentrations comparable to the lower flammable limit (typically a few percent by volume). They are also used, but so far to a lesser extent, in other industries where flammable concentrations can occur, such as in coal mining and water treatment. In principle the technique can also be used to detect toxic gases, for instance hydrogen sulfide, at the necessary parts-per-million concentrations, but the technical difficulties involved have so far prevented widespread adoption for toxic gases. Usually, there are separate transmitter and receiver units at either end of a straight beam path. Alternatively, the source and receiver are combined, and the beam bounced off a retroreflector at the far end of the measurement path. For portable use, detectors have also been made which use the natural albedo of surrounding objects in place of the retroreflector. The presence of a chosen gas (or class of gases) is detected from its absorption of a suitable infrared wavelength in the beam. Rain, fog etc. in the measurement path can also reduce the strength of the received signal, so it is usual to make a simultaneous measurement at one or more reference wavelengths. The quantity of gas intercepted by the beam is then inferred from the ratio of the signal losses at the measurement and reference wavelengths. The calculation is typically carried out by a microprocessor which also carries out various checks to validate the measurement and prevent false alarms. The measured quantity is the sum of all the gas along the path of the beam, sometimes termed the path-integral concentration of the gas. Thus the measurement has a natural bias (desirable in many applications) towards the total size of an unintentional gas release, rather than the concentration of the gas that has reached any particular point. Whereas the natural units of measurement for an Infrared point sensor are parts-per-million (ppm) or the percentage of the lower flammable limit (%LFL), the natural units of measurement for an open path detector are ppm.metres (ppm.m) or LFL.metres (LFL.m). For instance, the fire and gas safety system on an offshore platform in the North Sea typically has detectors set to a full-scale reading of 5LFL.m, with low and high alarms triggered at 1LFL.m and 3LFL.m respectively. Advantages and disadvantages versus fixed-point detectors An open path detector usually costs more than a single point detector, so there is little incentive for applications that play to a point detector's strengths: where the point detector can be placed at the known location of the highest gas concentration, and a relatively slow response is acceptable. The open path detector excels in outdoor situations where, even if the likely source of the gas release is known, the evolution of the developing cloud or plume is unpredictable. Gas will almost certainly enter an extended linear beam before finding its way to any single chosen point. Also, point detectors in exposed outdoor locations require weather shields to be fitted, increasing the response time significantly. 
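The ratio measurement described above amounts to a simple Beer–Lambert inversion once clean-air reference signals are stored. The following Python sketch is illustrative only; the absorbance coefficient and the signal values are made-up numbers, and a real instrument uses a calibration table because absorption varies across the filter passband.

    import numpy as np

    def path_integral_concentration(s_meas, s_ref, s_meas0, s_ref0, alpha=0.35):
        """Infer the gas quantity in the beam (e.g. in LFL.m) from signal ratios.

        s_meas, s_ref   -- received signals at the measurement and reference wavelengths
        s_meas0, s_ref0 -- clean-air signals recorded at installation
        Rain or fog attenuates both wavelengths, so the ratio cancels
        broadband losses and only the gas absorption remains.
        """
        ratio = (s_meas / s_meas0) / (s_ref / s_ref0)
        return -np.log(ratio) / alpha  # simple Beer-Lambert inversion

    # Usage: 20% extra loss at the measurement wavelength only
    print(path_integral_concentration(0.40, 0.50, 0.80, 0.80))  # about 0.64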
Open path detectors can also show a cost advantage in any application where a row of point detectors would be required to achieve the same coverage, for instance monitoring along a pipeline, or around the perimeter of a plant. Not only will one detector replace several, but the costs of installation, maintenance, cabling etc. are likely to be lower. Component parts In principle any source of infrared radiation could be used, together with an optical system of lenses or mirrors to form the transmitted beam. In practice the following sources have been used, always with some form of modulation to aid the signal processing at the receiver: An incandescent light bulb, modulated by pulsing the current powering the filament or by a mechanical chopper. For systems used outdoors, it is difficult for an incandescent source to compete with the intensity of sunlight when the sun shines directly into the receiver. Also, it is difficult to achieve modulation frequencies distinguishable from those that can be produced naturally, for instance by heat shimmer or by sunlight reflecting off waves at sea. A gas-discharge lamp is capable of exceeding the spectral power of direct sunlight in the infrared, especially when pulsed. Modern open path systems typically use a xenon flashtube powered by a capacitor discharge. Such pulsed sources are inherently modulated. A semiconductor laser provides a relatively weak source, but one that can be modulated at high frequency in wavelength as well as amplitude. This property permits various signal processing schemes based on Fourier analysis, of use when the absorption of the gas is weak but narrow in spectral linewidth. The precise wavelength passbands used must be isolated from the broad infrared spectrum. In principle any conventional spectrometer technique is possible, but the NDIR technique with multilayer dielectric filters and beamsplitters is most often used. These wavelength-defining components are usually located in the receiver, although one design has shared the task with the transmitter. At the receiver, the infrared signal strengths are measured by some form of infrared detector. Generally photodiode detectors are preferred, and are essential for the higher modulation frequencies, whereas slower photoconductive detectors may be required for longer wavelength regions. The signals are fed to low-noise amplifiers, then invariably subject to some form of digital signal processing. The absorption coefficient of the gas will vary across the passband, so the simple Beer–Lambert law cannot be applied directly. For this reason the processing usually employs a calibration table, applicable for a particular gas, type of gas, or gas mixture, and sometimes configurable by the user. Operating wavelengths The choice of infrared wavelengths used for the measurement largely defines the detector's suitability for a particular applications. Not only must the target gas (or gases) have a suitable absorption spectrum, the wavelengths must lie within a spectral window so the air in the beam path is itself transparent. These wavelength regions have been used: 3.4 μm region. All hydrocarbons and their derivatives absorb strongly, due to the C-H stretch mode of molecular vibration. It is commonly used in infrared point detectors where path lengths are necessarily short, and for open-path detectors requiring parts-per-million sensitivity. 
A disadvantage for many applications is that methane absorbs relatively weakly compared to heavier hydrocarbons, leading to large inconsistencies of calibration. For open-path detection of flammable concentrations the absorption for non-methane hydrocarbons is so strong that the measurement saturates, a significant gas cloud appearing 'black'. This wavelength region is beyond the transmission range of borosilicate glass, so windows and lenses must be made of more expensive materials and tend to be small in aperture. 2.3 μm region. All hydrocarbons and their derivatives have absorption coefficients appropriate for open path detection at flammable concentrations. A useful advantage in practical applications is that the detector's response to many different gases and vapours is relatively uniform when expressed in terms of the lower flammable limit. Borosilicate glass retains useful transmission in this wavelength region, allowing large aperture optics to be produced at moderate cost. 1.6 μm region. A wide range of gases absorb in the near-infrared. Typically the absorption coefficients are relatively weak, but light molecules show narrow, individually resolved spectral lines rather than broad bands. This results in relatively large values of the gradient and curvature of the absorption with respect to wavelength, enabling semiconductor laser-based systems to distinguish gas molecules very specifically; for instance hydrogen sulfide, or methane to the exclusion of heavier hydrocarbons. History The first open-path detector offered for routine industrial use, as distinct from research instruments built in small numbers, was the Wright and Wright 'Pathwatch' in the US, 1983. Acquired by Det-Tronics (Detector Electronics Corporation) in 1992, the detector operated in the 3.4 μm region with a powerful incandescent source and a mechanical chopper. It did not achieve large volume sales, mainly because of cost and doubts about long-term reliability with moving parts. Beginning in 1985, Shell Research in the UK was funded by Shell Natural Gas to develop an open-path detector with no moving parts. The advantages of the 2.3 μm wavelength were identified, and a research prototype was demonstrated. This design had a combined transmitter-receiver with a corner-cube retroreflector at 50 m. It used a pulsed incandescent lamp, PbS photoconductive detectors in the gas and reference channels, and an Intel 8031 microprocessor for signal processing. In 1987 Shell licensed this technology to Sieger-Zellweger (later Honeywell) who designed and marketed their industrial version as the 'Searchline', using a retro-reflective panel made up of multiple corner-cubes. This was the first open-path detector to be certified for use in hazardous areas and to have no moving parts. Later work by Shell Research used two alternately pulsed incandescent sources in the transmitter and a single PbS detector in the receiver, avoiding zero drifts caused by the variable responsivity of PbS detectors. This technology was offered to Sieger-Zellweger, and later licensed to PLMS, a company part-owned by Shell Ventures UK. The PLMS GD4001/2 in 1991 were the first detectors to achieve a truly stable zero without moving parts or software compensation of slow drifts. They were also the first infrared gas detectors of any kind to be certified intrinsically safe.
The Israeli company Spectronix (also Spectrex) made an important advance in 1996 with their SafEye, the first to use a flash tube source, followed by Sieger-Zellweger with their Searchline Excel in 1998. In 2001 the PLMS Pulsar, soon afterwards acquired by Dräger as their Polytron Pulsar, was the first detector to incorporate sensing to monitor the mutual alignment of the transmitter and receiver during both installation and routine operation. References Explosive atmospheres – Part 29-4: Gas detectors – Performance requirements of open-path detectors for flammable gases; IEC 60079-29-4 Explosive atmospheres. Gas detectors. Performance requirements of open-path detectors for flammable gases; EN 60079-29-4:2010 UK Health and Safety Executive, Fire and Explosion Strategy; http://www.hse.gov.uk/offshore/strategy/fgdetect.htm Optoelectronics Chemical engineering Fire prevention Oil platforms Offshore engineering Petroleum production Oil refineries Petroleum technology Safety equipment Detectors Gas sensors
Infrared open-path detector
[ "Chemistry", "Engineering" ]
2,242
[ "Oil platforms", "Structural engineering", "Offshore engineering", "Chemical engineering", "Oil refineries", "Petroleum technology", "Petroleum engineering", "Construction", "Petroleum", "Natural gas technology", "Oil refining", "nan" ]
39,368,443
https://en.wikipedia.org/wiki/AKNS%20system
In mathematics, the AKNS system is an integrable system of partial differential equations, introduced by and named after Mark J. Ablowitz, David J. Kaup, Alan C. Newell, and Harvey Segur from their publication in Studies in Applied Mathematics. Definition The AKNS system is a pair of partial differential equations for two complex-valued functions p and q of the two variables t and x. If p and q are complex conjugates this reduces to the nonlinear Schrödinger equation. Huygens' principle applied to the Dirac operator gives rise to the AKNS hierarchy. Applications to general relativity In October 2021, the dynamics of three-dimensional (extremal) black holes in general relativity with negative cosmological constant were shown to be equivalent to two independent copies of the AKNS system. This duality was established through the imposition of suitable boundary conditions on the Chern–Simons action. In this scheme, the involution of conserved charges of the AKNS system yields an infinite-dimensional commuting asymptotic symmetry algebra of gravitational charges. See also Huygens' principle References Integrable systems
AKNS system
[ "Physics", "Mathematics" ]
238
[ "Mathematical analysis", "Mathematical analysis stubs", "Integrable systems", "Theoretical physics", "Theoretical physics stubs" ]
39,368,969
https://en.wikipedia.org/wiki/Phases%20of%20fluorine
Fluorine forms diatomic molecules (F2) that are gaseous at room temperature with a density about 1.3 times that of air. Though sometimes cited as yellow-green, pure fluorine gas is actually a very pale yellow. The color can only be observed in concentrated fluorine gas when looking down the axis of long tubes, as it appears transparent when observed from the side in normal tubes or if allowed to escape into the atmosphere. The element has a "pungent" characteristic odor that is noticeable in concentrations as low as 20 ppb. Fluorine condenses to a bright yellow liquid at −188 °C (−307 °F), which is near the condensation temperatures of oxygen and nitrogen. The solid state of fluorine relies on van der Waals forces to hold molecules together, which, because of the small size of the fluorine molecules, are relatively weak. Consequently, the solid state of fluorine is more similar to that of oxygen or the noble gases than to those of the heavier halogens. Fluorine solidifies at −220 °C (−363 °F) into a cubic structure, called beta-fluorine. This phase is transparent and soft, with significant disorder of the molecules; its density is 1.70 g/cm3. At −228 °C (−378 °F) fluorine undergoes a solid–solid phase transition into a monoclinic structure called alpha-fluorine. This phase is opaque and hard, with close-packed layers of molecules, and is denser at 1.97 g/cm3. The solid-state phase change requires more energy than the melting-point transition and can be violent, shattering samples and blowing out sample holder windows. History Henri Moissan was the first to isolate the element in 1886, observing its gaseous phase. Eleven years later, Sir James Dewar first liquefied the element. For unclear reasons, Dewar measured a density for the liquid that was about 40% too small, an error that was not corrected until 1951. Solid fluorine received significant study in the 1920s and 1930s, but relatively little from then until the 1960s. The accepted crystal structure of alpha-fluorine, which still has some uncertainty, dates to a 1970 paper by Linus Pauling. Notes Citations Indexed references Further reading http://www.osti.gov/bridge/servlets/purl/4010212-0BbwUC/4010212.pdf (phase diagrams of the elements) http://jcp.aip.org/resource/1/jcpsa6/v47/i2/p740_s1?isAuthorized=no (sample holder blowout) NASA ADS: Solid Fluorine and Solid Chlorine: Crystal Structures and Intermolecular Forces by S. C. Nyburg Fluorine Phases of matter Allotropes
Phases of fluorine
[ "Physics", "Chemistry" ]
604
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Phases of matter", "Materials", "Matter" ]
39,374,174
https://en.wikipedia.org/wiki/Fermat%27s%20and%20energy%20variation%20principles%20in%20field%20theory
In general relativity, light is assumed to propagate in a vacuum along a null geodesic in a pseudo-Riemannian manifold. Besides the geodesic principle in classical field theory there exists Fermat's principle for stationary gravity fields. Fermat's principle In the case of a conformally stationary spacetime, a Fermat metric can be introduced; the conformal factor, which depends on the time and space coordinates, does not affect the lightlike geodesics apart from their parametrization. Fermat's principle for a pseudo-Riemannian manifold states that the light ray path between two fixed endpoints corresponds to stationary action, the action being evaluated along curves parametrized over an interval with those endpoints held fixed. Principle of stationary integral of energy In the principle of the stationary integral of energy for a light-like particle's motion, the pseudo-Riemannian metric coefficients are defined by a transformation. With a time coordinate and space coordinates with indices k, q = 1, 2, 3, the line element is written in a form containing a quantity that is assumed equal to 1. Solving the light-like interval equation under this condition gives two solutions expressed in terms of the elements of the four-velocity, one of which is singled out by the definitions made. With these definitions the energy takes a definite form, and in both cases, for the freely moving particle, the Lagrangian follows. Its partial derivatives give the canonical momenta and the forces. The momenta satisfy the energy condition for a closed system, which means that the conserved quantity is the energy of the system that combines the light-like particle and the gravitational field. The standard variational procedure of Hamilton's principle is applied to the action, which is the integral of the energy. Stationary action is conditional upon zero variational derivatives and leads to the Euler–Lagrange equations. After substitution of the canonical momenta and forces, these yield the equations of motion of the lightlike particle in free space, written in terms of the Christoffel symbols of the first kind. The energy integral variation and Fermat principles give identical curves for light in stationary space-times. Generalized Fermat's principle In the generalized Fermat's principle the time is used both as a functional and as a variable. Pontryagin's minimum principle of optimal control theory is applied to obtain an effective Hamiltonian for light-like particle motion in a curved spacetime, and the resulting curves are shown to be null geodesics. The stationary energy integral for a light-like particle in a gravity field and the generalized Fermat principle give identical velocities. The virtual displacements of the coordinates keep the path of the light-like particle null in the pseudo-Riemannian space-time, i.e. they do not lead to a local violation of Lorentz invariance, and they correspond to the variational principles of mechanics. Since the solutions produced by the generalized Fermat principle are equivalent to geodesics, the stationary energy integral principle also yields geodesics. The stationary energy integral principle gives a system of equations that has one equation more, which makes it possible to uniquely determine the canonical momenta of the particle and the forces acting on it in a given reference frame. Euler–Lagrange equations in contravariant form The equations can be transformed into a contravariant form, where the second term on the left-hand side is the change in the energy and momentum transmitted to the gravitational field when the particle moves in it.
The force vector for the principle of the stationary integral of energy takes a corresponding form. In general relativity, the energy and momentum of a particle are ordinarily associated with a contravariant energy-momentum vector. The corresponding quantities here do not form a tensor. However, for the photon in the Newtonian limit of the Schwarzschild field described by the metric in isotropic coordinates, they correspond to its passive gravitational mass, equal to twice the rest mass of the massive particle of equivalent energy. This is consistent with the result of Tolman, Ehrenfest and Podolsky for the active gravitational mass of the photon in the case of interaction between a directed flow of radiation and a massive particle, which was obtained by solving the Einstein–Maxwell equations. After replacing the affine parameter, the expression for the momenta is obtained in terms of the four-velocity. The equations with contravariant momenta are identical in form to the ones obtained from the Euler–Lagrange equations by raising the indices, and these in turn are identical to the geodesic equations, which confirms that the solutions given by the principle of the stationary integral of energy are geodesic. The corresponding quantities appear as tensors for linearized metrics. See also Fermat's principle Variational methods in general relativity References General relativity Variational principles
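For orientation, the special case of a static spacetime admits a compact statement of Fermat's principle. The following LaTeX block is a hedged sketch of that standard result, not a reconstruction of this article's original formulas: light rays extremise the coordinate travel time, equivalently they are geodesics of the optical (Fermat) metric.

    % Static metric: ds^2 = g_{00}\,dt^2 + g_{ij}\,dx^i dx^j with g_{00} < 0
    \[
      \delta t \;=\; \delta \int_{\gamma}
        \sqrt{\frac{g_{ij}\,\mathrm{d}x^{i}\,\mathrm{d}x^{j}}{-g_{00}}} \;=\; 0,
      \qquad
      \hat{g}_{ij} \;\equiv\; \frac{g_{ij}}{-g_{00}} .
    \]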
Fermat's and energy variation principles in field theory
[ "Physics", "Mathematics" ]
979
[ "Mathematical principles", "Variational principles", "General relativity", "Theory of relativity" ]
39,374,186
https://en.wikipedia.org/wiki/Residual%20sodium%20carbonate%20index
The residual sodium carbonate (RSC) index of irrigation water or soil water is used to indicate the alkalinity hazard for soil. The RSC index is used to find the suitability of the water for irrigation in clay soils, which have a high cation exchange capacity. When dissolved sodium is high in comparison with dissolved calcium and magnesium in water, clay soil swells or undergoes dispersion, which drastically reduces its infiltration capacity. In the dispersed soil structure, the plant roots are unable to spread deeper into the soil due to lack of moisture. However, unlike high-salinity water, high RSC index water does not enhance the osmotic pressure so as to impede the uptake of water by the plant roots. Irrigation of clay soils with high RSC index water leads to the formation of fallow alkali soils. RSC index formula RSC is expressed in meq/L units. RSC should not be higher than 1, and preferably less than +0.5, for the water to be considered usable for irrigation. The formula for calculating the RSC index is: RSC index = [HCO3 + CO3] − [Ca + Mg] RSC index = HCO3/61 + CO3/30 − Ca/20 − Mg/12 (in case the ionic concentrations are measured in mg/L or ppm as salts) While calculating the RSC index, the water quality present at the root zone of the crop should be considered, which would take into account the leaching factor in the field. Calcium present in dissolved form is also influenced by the partial pressure of dissolved CO2 at the plants' root zone in the field water. Natural water contamination Soda ash [Na2CO3] can be present in natural water from the weathering of basalt, which is an igneous rock. Lime [Ca(OH)2] can be present in natural water when rain water comes in contact with calcined minerals such as ash produced from the burning of calcareous coal or lignite in boilers. Anthropogenic use of soda ash also finally adds to the RSC of river water. Where the river water and ground water are repeatedly used in extensively irrigated river basins, the river water available in the lower reaches is often rendered unusable in agriculture due to a high RSC index or alkalinity. The salinity of the water need not be high. Softened water In industrial water treatment terminology, water quality with a high RSC index is synonymous with soft water, but it is chemically very different from naturally soft water, which has a very low ionic concentration. When calcium and magnesium salts are present in dissolved form in water, these salts precipitate on heat transfer surfaces, forming an insulating hard scale / coating which reduces the heat transfer efficiency of the heat exchangers. To avoid scaling in water-cooled heat exchangers, water is treated with lime and/or soda ash to remove the water hardness. The following chemical reactions take place in the lime–soda softening process, which precipitates the calcium and magnesium salts as calcium carbonate and magnesium hydroxide, both of which have very low solubility in water. CaSO4 + Na2CO3 → CaCO3↓ + Na2SO4 CaCl2 + Na2CO3 → CaCO3↓ + 2NaCl MgSO4 + Ca(OH)2 + Na2CO3 → Mg(OH)2↓ + CaCO3↓ + Na2SO4 MgCl2 + Ca(OH)2 + Na2CO3 → Mg(OH)2↓ + CaCO3↓ + 2NaCl 2NaHCO3 + Ca(OH)2 → CaCO3↓ + Na2CO3 + 2H2O Na2CO3 + Ca(OH)2 → CaCO3↓ + 2NaOH Ca(HCO3)2 + Ca(OH)2 → 2CaCO3↓ + 2H2O Mg(HCO3)2 + 2Ca(OH)2 → Mg(OH)2↓ + 2CaCO3↓ + 2H2O MgCO3 + Ca(OH)2 → Mg(OH)2↓ + CaCO3↓ The excess soda ash remaining after precipitating the calcium and magnesium salts is present as carbonates and bicarbonates of sodium, which impart a high pH or alkalinity to soil water.
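As a quick illustration of the RSC index formula given earlier, here is a minimal Python sketch; the sample concentrations are invented for the example, and the divisors are the equivalent weights quoted in the mg/L form of the formula.

    def rsc_index(hco3, co3, ca, mg):
        """Residual sodium carbonate index in meq/L, inputs in mg/L."""
        return hco3 / 61 + co3 / 30 - ca / 20 - mg / 12

    # Usage: water with 305 mg/L bicarbonate, no carbonate,
    # 40 mg/L calcium and 12 mg/L magnesium
    print(rsc_index(hco3=305, co3=0, ca=40, mg=12))  # 2.0 -> above 1, unsuitable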
Soda lakes The endorheic basin lakes are called soda or alkaline lakes when the water inflows contain high concentrations of Na2CO3. The pH of soda lake water is generally above 9, and sometimes the salinity is close to that of brackish water due to depletion of pure water by solar evaporation. Soda lakes are rich in algal growth due to the enhanced availability of dissolved CO2 in the lake water compared to fresh water or saline water lakes. Sodium carbonate and sodium hydroxide are in equilibrium with the available dissolved carbon dioxide, as given in the chemical reactions below: Na2CO3 + H2O <=> 2NaOH + CO2 NaHCO3 <=> NaOH + CO2 During the daytime, when sunlight is available, algae undergo photosynthesis, which absorbs CO2 and shifts the reactions towards NaOH formation; the reverse takes place during the night, when the release of CO2 from the respiration of the algae drives the reactions towards Na2CO3 and NaHCO3 formation. In soda lake waters, the carbonates of sodium act as a catalyst for algal growth by providing a favourable, higher concentration of dissolved CO2 during the daytime. Due to the fluctuation in dissolved CO2, the pH and alkalinity of the water also keep varying. See also Soil pH Environmental impact of irrigation Index of soil-related articles Agreti green vegetable Algae fuel Algaculture Gravitropism References Soil chemistry Types of soil Land reclamation Water quality indicators
Residual sodium carbonate index
[ "Chemistry", "Environmental_science" ]
1,180
[ "Soil chemistry", "Water quality indicators", "Water pollution" ]
39,375,034
https://en.wikipedia.org/wiki/Abell%20S1077
Abell S1077 is a galaxy cluster located in the constellation Piscis Austrinus. References Galaxy clusters Piscis Austrinus
Abell S1077
[ "Astronomy" ]
32
[ "Piscis Austrinus", "Galaxy clusters", "Astronomical objects", "Constellations" ]
39,375,353
https://en.wikipedia.org/wiki/C16H20N4O2
The molecular formula C16H20N4O2 (molar mass: 300.36 g/mol, exact mass: 300.1586 u) may refer to: Azapropazone BIA 10-2474 Molecular formulas
C16H20N4O2
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
39,376,462
https://en.wikipedia.org/wiki/Ab-polar%20current
Ab-polar current, an obsolete term sometimes found in 19th century meteorological literature, refers to any air current moving away from either the North Pole or the South Pole. In the Northern Hemisphere, this term indicates a northerly wind. The Latin prefix ab- means "from" or "away from". References Atmospheric dynamics
Ab-polar current
[ "Chemistry" ]
66
[ "Atmospheric dynamics", "Fluid dynamics" ]
39,378,034
https://en.wikipedia.org/wiki/Minimum%20rank%20of%20a%20graph
In mathematics, the minimum rank is a graph parameter for a graph G. It was motivated by the Colin de Verdière graph invariant. Definition The adjacency matrix of an undirected graph is a symmetric matrix whose rows and columns both correspond to the vertices of the graph. Its elements are all 0 or 1, and the element in row i and column j is nonzero whenever vertex i is adjacent to vertex j in the graph. More generally, a generalized adjacency matrix is any symmetric matrix of real numbers with the same pattern of nonzeros off the diagonal (the diagonal elements may be any real numbers). The minimum rank of G is defined as the smallest rank of any generalized adjacency matrix of the graph; it is denoted by mr(G). Properties Here are some elementary properties. The minimum rank of a graph is always at most equal to n − 1, where n is the number of vertices in the graph. For every induced subgraph H of a given graph G, the minimum rank of H is at most equal to the minimum rank of G. If a graph is disconnected, then its minimum rank is the sum of the minimum ranks of its connected components. The minimum rank is a graph invariant: isomorphic graphs necessarily have the same minimum rank. Characterization of known graph families Several families of graphs may be characterized in terms of their minimum ranks. For n ≥ 2, the complete graph Kn on n vertices has minimum rank one. The only graphs that are connected and have minimum rank one are the complete graphs. A path graph Pn on n vertices has minimum rank n − 1. The only n-vertex graphs with minimum rank n − 1 are the path graphs. A cycle graph Cn on n vertices has minimum rank n − 2. Let G be a 2-connected graph on n vertices. Then mr(G) = n − 2 if and only if G is a linear 2-tree. Graphs with minimum rank at most 2 can also be characterized in terms of the structure of their complements. Notes References Algebraic graph theory Graph invariants
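Two of the characterizations above are easy to verify numerically. The following numpy sketch is illustrative only; it exhibits particular generalized adjacency matrices rather than solving the minimum-rank problem in general.

    import numpy as np

    n = 5

    # Complete graph K_n: the all-ones matrix has the off-diagonal nonzero
    # pattern of K_n and rank 1, matching mr(K_n) = 1.
    ones = np.ones((n, n))
    print(np.linalg.matrix_rank(ones))  # 1

    # Path graph P_n: every generalized adjacency matrix is tridiagonal with
    # nonzero off-diagonal entries; taking zeros on the diagonal gives rank
    # n - 1, which is the minimum for P_n.
    path = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    print(np.linalg.matrix_rank(path))  # 4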
Minimum rank of a graph
[ "Mathematics" ]
408
[ "Graph theory", "Graph invariants", "Mathematical relations", "Algebra", "Algebraic graph theory" ]
53,605,449
https://en.wikipedia.org/wiki/Basis%20theorem%20%28computability%29
In computability theory, there are a number of basis theorems. These theorems show that particular kinds of sets always must have some members that are, in terms of Turing degree, not too complicated. One family of basis theorems concerns nonempty effectively closed sets (that is, nonempty Π01 sets in the arithmetical hierarchy); these theorems are studied as part of classical computability theory. Another family of basis theorems concerns nonempty lightface analytic sets (that is, Σ11 sets in the analytical hierarchy); these theorems are studied as part of hyperarithmetical theory. Effectively closed sets Effectively closed sets are a topic of study in classical computability theory. An effectively closed set is the set of all paths through some computable subtree of the binary tree 2<ω. These sets are closed, in the topological sense, as subsets of the Cantor space 2ω, and the complement of an effectively closed set is an effective open set in the sense of effective Polish spaces. Kleene proved in 1952 that there is a nonempty, effectively closed set with no computable point (Cooper 1999, p. 134). Basis theorems show that there must be points that are not "too far" from being computable, in an informal sense. A class C is a basis for effectively closed sets if every nonempty effectively closed set includes a member of C (Cooper 2003, p. 329). Basis theorems show that particular classes are bases in this sense. These theorems include (Cooper 1999, p. 134): The low basis theorem: each nonempty effectively closed set has a member that is of low degree. The hyperimmune-free basis theorem: each nonempty effectively closed set has a member that is of hyperimmune-free degree. The r.e. basis theorem: each nonempty effectively closed set has a member that is of recursively enumerable (r.e.) degree. Here, a set X is low if its Turing jump X′ has the same Turing degree as 0′, the degree of the halting problem. X has hyperimmune-free degree if every total X-computable function f is dominated by a total computable function g (meaning f(n) ≤ g(n) for all n). No two of the above three theorems can be combined for the set of consistent completions of PA (or just EFA; the Turing degrees are the same). The only r.e. Turing degree that computes a consistent completion of PA is 0′. However, the low basis theorem and the hyperimmune-free basis theorem can each be combined with cone avoidance, i.e. for every noncomputable X, we can choose a member (as in the theorem) that does not compute X. The theorems also relativize above an arbitrary real. Lightface analytic sets There are also basis theorems for lightface Σ11 sets. These basis theorems are studied as part of hyperarithmetical theory. One theorem is the Gandy basis theorem, which is analogous to the low basis theorem. The Gandy basis theorem shows that each nonempty Σ11 set has an element that is hyperarithmetically low, that is, its hyperjump has the same hyperdegree (and for the theorem, even the same Turing degree) as Kleene's set O. References Cooper, S. B. (1999). "Local degree theory", in Handbook of Computability Theory, E.R. Griffor (ed.), Elsevier, pp. 121–153. — (2003), Computability Theory, Chapman-Hall. External links Simpson, S. "A survey of basis theorems", slides from Computability Theory and Foundations of Mathematics, Tokyo Institute of Technology, February 18–20, 2013. Computability theory
Basis theorem (computability)
[ "Mathematics" ]
777
[ "Computability theory", "Mathematical logic" ]
53,606,150
https://en.wikipedia.org/wiki/Simon%20Birrell
Simon Birrell (born 26 July 1966) is a British entrepreneur, technologist and film maker. He was part of the team that invented ambient intelligence and who, with Eli Zelkha, coined the term. Biography Early life, education and career Born in 1966 in Bristol, UK. He graduated from Cambridge University in 1988 with a degree in Natural Sciences. He has been a founder or co-founder of three companies. Euro-Profile/i-Profile – a business intelligence company based out of Silicon Valley which was acquired by Virgo Capital (2008), Vemm Brazil, a publisher of consumer advice websites in Brazil which was acquired by QuinStreet (2015) and Silicon Artists, a Madrid-based entertainment technology company funded by Silicon Valley–based Tandem Computers. Ambient intelligence In 1998, Birrell was part of the team at Palo Alto Ventures that invented and developed the ambient intelligence concept and who, with Eli Zelkha, coined the term. It was presented by Roel Pieper of Philips at The Digital Living Room Conference on 22 June 1998. Since its invention in 1998, Ambient Intelligence labs have been formed at leading universities and ambient intelligence has become part of the core strategies of many of the world's leading technology companies, including Microsoft, Google, Amazon and IBM. Robotics and deep learning Birrell is researching deep learning and robotics at Cambridge University. He is the author of the blog Artificial Human Companions. Video games, virtual reality and other activities He developed some of the first video games for Richard Branson's Virgin Interactive in 1983. These included Bug Bomb – BBC Micro (1983), Microbe – BBC Micro (1983), High-Rise Horror – Commodore 64 (1984), Strangeloop – Commodore 64 (1985), Shogun – Commodore 64 / Amstrad (co-design). From 1993 to 1995, Birrell was the CTO of an early virtual reality company in Spain called Realidad Virtual S.L. At Realidad Virtual, he developed Pandora – the first Spanish online virtual reality platform for the Internet. Mundo de Estrellas (1998) was a distributed virtual reality environment for hospitalised children in Andalucia created by his company Silicon Artists. He is also a film maker and writer. As a film maker, he has directed two shorts and collaborated with cult filmmakers Jess Franco and Jose Ramon Larraz. Birrell authored a chapter in an MIT book on Information Design and co-authored a book on videogames. References External links Birrell's blog 1966 births Living people Alumni of the University of Cambridge British artificial intelligence researchers British businesspeople English filmmakers British video game designers Virtual reality pioneers
Simon Birrell
[ "Technology" ]
538
[ "Computing and society", "Ambient intelligence" ]
53,606,876
https://en.wikipedia.org/wiki/Jeffery%E2%80%93Hamel%20flow
In fluid dynamics Jeffery–Hamel flow is a flow created by a converging or diverging channel with a source or sink of fluid volume at the point of intersection of the two plane walls. It is named after George Barker Jeffery (1915) and Georg Hamel (1917), but it has subsequently been studied by many major scientists such as von Kármán and Levi-Civita, Walter Tollmien, F. Noether, W.R. Dean, Rosenhead, Landau, G.K. Batchelor etc. A complete set of solutions was described by Edward Fraenkel in 1962. Flow description Consider two stationary plane walls with a constant volume flow rate $Q$ injected/sucked at the point of intersection of the plane walls, and let the angle subtended by the two walls be $2\alpha$. Take the cylindrical coordinate system $(r,\theta,z)$ with $r=0$ representing the point of intersection, $\theta=0$ the centerline, and $(u,v,w)$ the corresponding velocity components. The resulting flow is two-dimensional if the plates are infinitely long in the axial $z$ direction, or if the plates are longer but finite and one neglects edge effects; for the same reason the flow can be assumed to be entirely radial, i.e., $u=u(r,\theta),\ v=0,\ w=0$. Then the continuity equation and the incompressible Navier–Stokes equations reduce to $\frac{\partial (ru)}{\partial r}=0$, $u\frac{\partial u}{\partial r} = -\frac{1}{\rho}\frac{\partial p}{\partial r} + \nu\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} - \frac{u}{r^2}\right]$, and $0 = -\frac{1}{\rho r}\frac{\partial p}{\partial\theta} + \frac{2\nu}{r^2}\frac{\partial u}{\partial\theta}$. The boundary conditions are the no-slip condition at both walls, $u(\pm\alpha)=0$, and the third condition is derived from the fact that the volume flux injected/sucked at the point of intersection is constant across a surface at any radius, $Q = \int_{-\alpha}^{\alpha} u\, r\, d\theta$. Formulation The first equation tells that $ru$ is just a function of $\theta$; the function $f$ is defined as $f(\theta) = ru/\nu$. Different authors define the function differently; Landau, for example, includes an extra numerical factor. But following Whitham and Rosenhead, the $\theta$-momentum equation becomes $\frac{1}{\rho}\frac{\partial p}{\partial\theta} = \frac{2\nu^2 f'(\theta)}{r^2}$. Integrating, the pressure can be written as $\frac{p}{\rho} = \frac{2\nu^2 f(\theta)}{r^2} + P(r)$, and substituting this into the $r$-momentum equation (to eliminate pressure) results in $f''' + 2ff' + 4f' = 0$. Integrating once, and then multiplying by $2f'$ and integrating again, $f'^2 = -\tfrac{2}{3}f^3 - 4f^2 + 2C_1 f + C_2$, where $C_1, C_2$ are constants to be determined from the boundary conditions. The above equation can be re-written conveniently with three other constants $a, b, c$ as roots of a cubic polynomial, $f'^2 = \tfrac{2}{3}(a-f)(f-b)(f-c)$, with only two constants being arbitrary; the third constant is always obtained from the other two because the sum of the roots is $a+b+c=-6$. The boundary conditions reduce to $f(\pm\alpha)=0$ and $f(0)=Re$, where $Re = u(r,0)\,r/\nu$ is the corresponding Reynolds number based on the centerline velocity. The solution can be expressed in terms of elliptic functions. For convergent flow ($Re<0$), the solution exists for all $Re$, but for the divergent flow ($Re>0$), the solution exists only for a particular range of $Re$. Dynamical interpretation The equation takes the same form as that of an undamped nonlinear oscillator with a cubic potential: one can pretend that $\theta$ is time, $f$ is displacement and $f'$ is velocity of a particle with unit mass; then the equation represents the energy equation ($T+V=E$, where $T=\tfrac12 f'^2$ and $V(f)=-\tfrac13(a-f)(f-b)(f-c)$) with zero total energy, and then it is easy to see that the potential energy satisfies $V \le 0$ during the motion. Since the particle starts at $f=0$ for $\theta=-\alpha$ and ends at $f=0$ for $\theta=+\alpha$, there are two cases to be considered. First case: $b$ and $c$ are complex conjugates and $a>0$. The particle starts at $f=0$ with finite positive velocity and attains $f=a$, where its velocity is zero and its acceleration is negative, and returns to $f=0$ at the final time. The particle motion represents pure outflow motion because $f>0$, and it is also symmetric about $\theta=0$. Second case: $a \ge b \ge c$, all constants are real. The motion from $f=0$ to $f=a$ to $f=0$ represents a pure symmetric outflow as in the previous case. The motion from $f=0$ to $f=b$ to $f=0$ with $f \le 0$ for all time ($b \le f \le 0$) represents a pure symmetric inflow. But also, the particle may oscillate between $b$ and $a$, representing both inflow and outflow regions, and the flow then no longer needs to be symmetric about $\theta=0$. The rich structure of this dynamical interpretation can be found in Rosenhead (1940). 
Pure outflow For pure outflow, since at , integration of governing equation gives and the boundary conditions becomes The equations can be simplified by standard transformations given for example in Jeffreys. First case are complex conjugates and leads to where are Jacobi elliptic functions. Second case leads to Limiting form The limiting condition is obtained by noting that pure outflow is impossible when , which implies from the governing equation. Thus beyond this critical conditions, no solution exists. The critical angle is given by where where is the complete elliptic integral of the first kind. For large values of , the critical angle becomes . The corresponding critical Reynolds number or volume flux is given by where is the complete elliptic integral of the second kind. For large values of , the critical Reynolds number or volume flux becomes . Pure inflow For pure inflow, the implicit solution is given by and the boundary conditions becomes Pure inflow is possible only when all constants are real and the solution is given by where is the complete elliptic integral of the first kind. Limiting form As Reynolds number increases ( becomes larger), the flow tends to become uniform(thus approaching potential flow solution), except for boundary layers near the walls. Since is large and is given, it is clear from the solution that must be large, therefore . But when , , the solution becomes It is clear that everywhere except in the boundary layer of thickness . The volume flux is so that and the boundary layers have classical thickness . References Fluid dynamics Flow regimes
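Because the closed-form solutions above involve elliptic integrals, in practice it is often easier to solve the boundary-value problem numerically. The sketch below is our own illustration, not part of the article: it integrates $f''' + 2ff' + 4f' = 0$ on the half-channel $[0,\alpha]$ with SciPy's solve_bvp, using the symmetry condition $f'(0)=0$ together with $f(0)=Re$ and $f(\alpha)=0$. The half-angle and Reynolds number are arbitrary assumed values; a converging flow ($Re<0$) is chosen since such solutions exist for all $Re$.

```python
import numpy as np
from scipy.integrate import solve_bvp

alpha = np.deg2rad(10.0)   # half-angle of the channel (assumed value)
Re = -50.0                 # centerline value f(0); negative = converging flow

# State y = (f, f', f''); the ODE is f''' = -2*f*f' - 4*f'
def rhs(theta, y):
    return np.vstack([y[1], y[2], -2.0 * y[0] * y[1] - 4.0 * y[1]])

# Symmetric solution on [0, alpha]: f(0) = Re, f'(0) = 0, f(alpha) = 0
def bc(ya, yb):
    return np.array([ya[0] - Re, ya[1], yb[0]])

theta = np.linspace(0.0, alpha, 101)
y0 = np.zeros((3, theta.size))
y0[0] = Re * (1.0 - (theta / alpha) ** 2)  # parabolic initial guess

sol = solve_bvp(rhs, bc, theta, y0)
print(sol.message)
print("wall slope f'(alpha) =", sol.y[1, -1])
```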
Jeffery–Hamel flow
[ "Chemistry", "Engineering" ]
1,019
[ "Piping", "Chemical engineering", "Flow regimes", "Fluid dynamics" ]
53,609,781
https://en.wikipedia.org/wiki/Abu%20Ja%27far%20ibn%20Habash
Abu Ja'far ibn Habash was a Persian astronomer. He was most likely a son of Habash al-Hasib. Since his father died after 864 AD at the age of 100, it can be concluded that he was active in the 3rd century AH (9th century AD). According to Ibn Nadim and Qifti, he wrote a book on the astrolabe, named al-ostorlab al-mosatah. References 9th-century Iranian astronomers Astronomers of the medieval Islamic world
Abu Ja'far ibn Habash
[ "Astronomy" ]
105
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
50,765,485
https://en.wikipedia.org/wiki/Spatiomap
A spatiomap is a document similar to a map, but based on an orthophoto. Often, some annotations are added to the orthophoto. Like a normal map, a spatiomap can display a north arrow, a scale bar and cartographical information such as the projection used. Spatiomaps are useful when other reliable sources are missing for a certain area and/or when a map must be produced in a very short time (e.g. for disaster management). Spatiomaps are frequently used during disaster relief. An image map or orthophotomap is a similar document, but is mostly regarded as an orthophotomosaic with some point, line or polygon layers of a traditional map drawn over the orthophoto. An image map resembles a standard general-purpose map but adds the use of an orthophotomosaic as a background. References Map types Geodesy Geography terminology
Spatiomap
[ "Mathematics" ]
198
[ "Applied mathematics", "Geodesy" ]
50,768,319
https://en.wikipedia.org/wiki/Multiple%20scattering%20theory
Multiple scattering theory (MST) is the mathematical formalism that is used to describe the propagation of a wave through a collection of scatterers. Examples are acoustical waves traveling through porous media, light scattering from water droplets in a cloud, or x-rays scattering from a crystal. A more recent application is to the propagation of quantum matter waves like electrons or neutrons through a solid. As pointed out by Jan Korringa, the origin of this theory can be traced back to an 1892 paper by Lord Rayleigh. An important mathematical formulation of the theory was made by Paul Peter Ewald. Korringa and Ewald acknowledged the influence on their work of the 1903 doctoral dissertation of Nikolai Kasterin, portions of which were published in German in the Proceedings of the Royal Academy of Sciences in Amsterdam under the sponsorship of Heike Kamerlingh Onnes. The MST formalism is widely used for electronic structure calculations as well as diffraction theory, and is the subject of many books. The multiple-scattering approach is the best way to derive one-electron Green's functions. These functions differ from the Green's functions used to treat the many-body problem, but they are the best starting point for calculations of the electronic structure of condensed matter systems that cannot be treated with band theory. The terms "multiple scattering" and "multiple scattering theory" are often used in other contexts. For example, Molière's theory of the scattering of fast charged particles in matter, or Glauber multiple scattering theory for high-energy particle multiple-scattering off nucleons in a nucleus, are also denominated that way. Mathematical formulation The MST equations can be derived with different wave equations, but one of the simplest and most useful ones is the Schrödinger equation for an electron moving in a solid. With the help of density functional theory, this problem can be reduced to the solution of a one-electron equation $\left[-\nabla^2 + V(\mathbf r)\right]\psi(\mathbf r) = E\,\psi(\mathbf r)$, where the effective one-electron potential, $V$, is a functional of the density of the electrons in the system. In the Dirac notation, the wave equation can be written as an inhomogeneous equation, $(E - H_0)|\psi\rangle = V|\psi\rangle$, where $H_0$ is the kinetic energy operator. The solution of the homogeneous equation is $|\phi\rangle$, where $(E - H_0)|\phi\rangle = 0$. A formal solution of the inhomogeneous equation is the sum of the solution of the homogeneous equation with a particular solution of the inhomogeneous equation, $|\psi\rangle = |\phi\rangle + G_0 V|\psi\rangle$, where $G_0 = (E - H_0)^{-1}$. This is the Lippmann–Schwinger equation, which can also be written $|\psi\rangle = |\phi\rangle + G_0 T|\phi\rangle$. The t-matrix is defined by $T = V + V G_0 T$. Suppose that the potential is the sum of $N$ non-overlapping potentials, $V = \sum_{i=1}^{N} v_i$. The physical meaning of this is that it describes the interaction of the electron with a cluster of atoms having nuclei located at positions $\mathbf R_i$. Define an operator $Q_i$ so that $T$ can be written as a sum $T = \sum_{i=1}^{N} Q_i$. Inserting the expressions for $V$ and $T$ into the definition of $T$ leads to $\sum_i Q_i = \sum_i \left[v_i + v_i G_0 \sum_j Q_j\right]$, so $Q_i = t_i + t_i G_0 \sum_{j \neq i} Q_j$, where $t_i = v_i + v_i G_0 t_i$ is the scattering matrix for one atom. Iterating this equation leads to $Q_i = t_i + t_i G_0 \sum_{j\neq i} t_j + t_i G_0 \sum_{j\neq i} t_j G_0 \sum_{k\neq j} t_k + \cdots$. The solution of the Lippmann–Schwinger equation can thus be written as the sum of an incoming wave on any site $i$ and the outgoing wave from that site, $|\psi\rangle = |\psi_i^{in}\rangle + |\psi_i^{out}\rangle$. The site that we have chosen to focus on can be any of the sites in the cluster. The incoming wave on this site is the incoming wave on the cluster and the outgoing waves from all the other sites, $|\psi_i^{in}\rangle = |\phi\rangle + \sum_{j\neq i} |\psi_j^{out}\rangle$. The outgoing wave from the site $i$ is defined as $|\psi_i^{out}\rangle = G_0 t_i |\psi_i^{in}\rangle$. These last two equations are the fundamental equations of multiple scattering. To apply this theory to x-ray or neutron diffraction we go back to the Lippmann–Schwinger equation, $|\psi\rangle = |\phi\rangle + \sum_i G_0 t_i |\psi_i^{in}\rangle$. 
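As a purely illustrative check of the operator identities above, one can discretize a toy one-dimensional Hamiltonian and verify numerically that the closed form $T = V(I - G_0 V)^{-1}$ satisfies the defining equation $T = V + V G_0 T$. All matrices and parameter values below are our own assumptions for the sketch, not quantities from the multiple scattering literature.

```python
import numpy as np

# Toy discretized "free" Hamiltonian: 1D tight-binding chain on n sites.
n = 50
H0 = -(np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))

# Two non-overlapping single-site "atomic" potentials v1 and v2.
V = np.zeros((n, n))
V[10, 10] = 2.0    # v1 (assumed strength)
V[30, 30] = -1.5   # v2 (assumed strength)

# Free Green's function at a complex energy; the small imaginary part
# keeps (E - H0) invertible.
E = 0.3 + 1e-6j
G0 = np.linalg.inv(E * np.eye(n) - H0)

# Closed-form t-matrix: T = V (I - G0 V)^(-1).
T = V @ np.linalg.inv(np.eye(n) - G0 @ V)

# Numerical check of the defining equation T = V + V G0 T.
print(np.allclose(T, V + V @ G0 @ T))  # -> True
```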
The scattering from a site is assumed to be very small, so or . The Born approximation is used to calculate the t-matrix, which simply means that is replaced with . A plane wave impinges on a site, and a spherical wave exits it. The outgoing wave from the crystal is determined by the constructive interference of the waves from the sites. Advances to this theory involve the inclusion of higher-order terms in the total scattering matrix , such as. These terms are particularly important in the scattering of charged particles treated by Molière. Multiple scattering theory of electronic states in solids In 1947, Korringa pointed out that the multiple scattering equations can be used to calculate stationary states in a crystal for which the number of scatterers goes to infinity. Setting the incoming wave on the cluster and the outgoing wave from the cluster to zero, he wrote the first multiple scattering as . A simple description of this process is that the electrons scatter from one atom to the other ad infinitum. Since the are bounded in space and do not overlap, there is an interstitial region between them within which the potential is a constant, usually taken to be zero. In this region, the Schrödinger equation becomes , where . The incoming wave on site can thus be written in the position representation , where the are undetermined coefficients and . The Green's function may be expanded in the interstitial region , and the outgoing Hankel function can be written . This leads to a set of homogeneous simultaneous equations that determines the unknown coefficients , which is a solution in principle of the multiple scattering equations for stationary states. This theory is very important for studies in condensed matter physics. Periodic solids, one atom per unit cell The calculation of stationary states is simplified considerably for periodic solids in which all of the potentials are the same, and the nuclear positions form a periodic array. Bloch's theorem holds for such a system, which means that the solutions of the Schrödinger equation may be written as a Bloch wave . It is more convenient to deal with a symmetric matrix for the coefficients, and this can be done by defining . These coefficients satisfy the set of linear equations , with the elements of the matrix being , and the are the elements of the inverse of the t-matrix. For a Bloch wave the coefficients depend on the site only through a phase factor, , and the satisfy the homogeneous equations , where and . Walter Kohn and Norman Rostoker derived this same theory using the Kohn variational method. It is called the Korringa–Kohn–Rostoker method (KKR method) for band theory calculations. Ewald derived a mathematically sophisticated summation process that makes it possible to calculate the structure constants, . The energy eigenvalues of the periodic solid for a particular , , are the roots of the equation . The eigenfunctions are found by solving for the with . The dimension of these matrix equations is technically infinite, but by ignoring all contributions that correspond to an angular momentum quantum number greater than , they have dimension . The justification for this approximation is that the matrix elements of the t-matrix are very small when and are greater than , and the elements of the inverse matrix are very large. In the original derivations of the KKR method, spherically symmetric muffin-tin potentials were used. 
Such potentials have the advantage that the inverse of the scattering matrix is diagonal in , where is the scattering phase shift that appears in the partial wave analysis in scattering theory. It is also easier to visualize the waves scattering from one atom to another, and can be used in many applications. The muffin-tin approximation is adequate for most metals in a close-packed arrangement. It cannot be used for calculating forces between atoms, or for important systems like semiconductors. Extensions of the theory It is now known that the KKR method can be used with space-filling non-spherical potentials. It can be extended to treat crystals with any number of atoms in a unit cell. There are versions of the theory that can be used to calculate surface states. The arguments that lead to a multiple scattering solution for the single-particle orbital can also be used to formulate a multiple scattering version of the single-particle Green's function which is a solution of the equation . The potential is the same one from density functional theory that was used in the preceding discussion. With this Green's function and the Korringa–Kohn–Rostoker method, the Korringa–Kohn–Rostoker coherent potential approximation (KKR-CPA) is obtained. The KKR-CPA is used to calculate the electronic states for substitutional solid-solution alloys, for which Bloch's theorem does not hold. The electronic states for an even wider range of condensed matter structures can be found using the locally self-consistent multiple scattering (LSMS) method, which is also based on the single-particle Green's function. References Scattering theory Quantum mechanics
Multiple scattering theory
[ "Physics", "Chemistry" ]
1,748
[ "Scattering", "Theoretical physics", "Quantum mechanics", "Scattering theory" ]
50,771,501
https://en.wikipedia.org/wiki/Overheating%20%28electricity%29
Overheating is a phenomenon of rising temperatures in an electrical circuit. Overheating causes damage to the circuit components and can cause fire, explosion, and injury. Damage caused by overheating is usually irreversible; the only way to repair it is to replace some components. Causes When overheating occurs, the temperature of the part rises above the operating temperature. Overheating can take place: if heat is produced in a greater amount than expected (such as in cases of short circuits, or applying more voltage than rated), or if heat dissipation is poor, so that normally produced waste heat does not drain away properly. Overheating may be caused by an accidental fault in the circuit (such as a short circuit or spark gap), or may result from a wrong design or manufacture (such as the lack of a proper heat dissipation system). Due to the accumulation of heat, the system reaches an equilibrium of heat accumulation vs. dissipation at a much higher temperature than expected. Preventive measures Use of circuit breaker or fuse Circuit breakers can be placed at portions of a circuit in series with the path of the current they will affect. If more current than expected goes through the circuit breaker, the circuit breaker "opens" the circuit and stops all current. A fuse is a common type of circuit breaker that relies directly on Joule overheating. A fuse is always placed in series with the path of the current it will affect. Fuses usually consist of a thin strand of wire of a definite material. When more than the rated current flows through the fuse, the wire melts and breaks the circuit. Use of heat-dissipating systems Many systems use ventilation holes or slits in the equipment enclosure to dissipate heat. Heat sinks are often attached to portions of the circuit that produce the most heat or are vulnerable to heat. Fans are also often used. Some high-voltage instruments are kept immersed in oil. In some cases, to remove unwanted heat, a cooling system like air conditioning or refrigerating heat pumps may be required. Control within circuit design Sometimes, special circuits are built for the purpose of sensing and controlling the temperature or voltage status. Devices such as thermistors, voltage-dependent resistors and thermostats, and sensors such as infrared thermometers, are used to modify the current under different conditions such as circuit temperature and input voltage. Proper manufacture For certain purposes in an item of electrical equipment, or a portion of it, materials of a definite type and size with proper ratings for voltage, current and temperature are used. The circuit resistance is never kept too low. Sometimes parts are placed inside the board and box at a proper distance from each other, to avoid heat damage and short-circuit damage. To prevent short circuits, appropriate types of electrical connectors and mechanical fasteners are used. 
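The equilibrium described under Causes can be made quantitative with a standard lumped thermal model: dissipated power P flows through a thermal resistance R_th to ambient, so the part settles at T = T_ambient + P·R_th. The snippet below is our own worked example; all component values are assumptions chosen for illustration.

```python
# Lumped thermal model: at equilibrium, heat generated equals heat
# dissipated, so the part settles at T = T_ambient + P * R_th.

I = 0.5           # current through the part, amperes (assumed value)
R = 10.0          # electrical resistance, ohms (assumed value)
R_th = 40.0       # thermal resistance to ambient, K/W (assumed value)
T_ambient = 25.0  # ambient temperature, degrees Celsius

P = I ** 2 * R              # Joule heating: 2.5 W
T = T_ambient + P * R_th    # steady-state temperature: 125 degC
print(f"dissipation {P:.1f} W -> steady state {T:.0f} degC")

# Doubling the current quadruples P, pushing the part to 425 degC,
# which is why fuses and thermal derating matter.
print(T_ambient + (2 * I) ** 2 * R * R_th)
```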
Gallery See also Active cooling Air-cooling with fan Computer cooling Conflagration Coolant Heat exchanger Heat pipe Heat pump Heat sink Heat spreader Oil cooling Radiator Thermal design power Thermal management of electronic devices and systems Thermal management of high-power LEDs Thermal resistance in electronics Thermal runaway Thermoelectric cooling Transformer oil Wire gauge References http://www.ufba.org.nz/images/documents/hazardsandsafeguards.pdf http://www.testequipmentdepot.com/application-notes/pdf/power-quality/case-study-the-overheating-transformer_an.pdf http://www.mirusinternational.com/downloads/hmt_faq10.pdf http://www.learnabout-electronics.org/Downloads/ac_theory_module11.pdf http://sound.whsites.net/xfmr.htm http://sound.whsites.net/xfmr-6.jpg http://ecmweb.com/site-files/ecmweb.com/files/uploads/2016/03/Electrical-Service-Meltdown-6.jpg Electricity Electrical engineering Electric heating Thermodynamics Safety Fire prevention Fire protection
Overheating (electricity)
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
875
[ "Building engineering", "Thermodynamics", "Fire protection", "Electrical engineering", "Dynamical systems" ]
33,894,241
https://en.wikipedia.org/wiki/ACS%20Synthetic%20Biology
ACS Synthetic Biology is a peer-reviewed scientific journal published by the American Chemical Society. It began publishing accepted articles in the Fall of 2011, with the first full monthly issue published in January 2012. It covers all aspects of synthetic biology, including molecular, systems, and synthetic research. The founding editor-in-chief is Christopher Voigt (Massachusetts Institute of Technology). Types of articles The journal publishes Letters: Short reports of original research focused on an individual finding Articles: Original research presenting findings of immediate, broad interest. Reviews: Expert perspectives and analyses of recently published research Technical Notes: Concise communications that focus on the characterization of new or interesting tools and websites Tutorials: Detailed descriptions of synthetic, computational, and systems methodologies References External links See also Systems and Synthetic Biology Academic journals established in 2012 Monthly journals Synthetic Biology English-language journals Biochemistry journals Synthetic biology
ACS Synthetic Biology
[ "Chemistry", "Engineering", "Biology" ]
176
[ "Synthetic biology", "Biological engineering", "Biochemistry journals", "Bioinformatics", "Molecular genetics", "Biochemistry literature" ]
33,899,473
https://en.wikipedia.org/wiki/SND%20Experiment
Spherical Neutral Detector (SND) is a detector for particle physics experiments, successor of the Neutral Detector (ND), created by the team of physicists in the Budker Institute of Nuclear Physics (BINP), Novosibirsk, Russia. There are three major periods in the evolution of the SND experiment: from 1995 to 2000, data collection at the e+e− storage ring VEPP-2M in the energy range 2E=0.4-1.4 GeV; from 2001 to 2008, upgrade of SND and of the storage ring VEPP-2M to VEPP-2000; and from 2009, data collection at the e+e− storage ring VEPP-2000 in the energy range 2E=1.0-2.0 GeV. Physics The previous experiment with ND (the predecessor of SND) showed that e+e− annihilation into final states with neutral particles in the energy range 2E=0.4-1.4 GeV is mediated by processes which need to be studied in more detail. In particular, the quark structure of the light scalar mesons can be studied in electric-dipole radiative decays; precise measurement of the e+e− annihilation cross section into hadrons is an important component in the determination of the muon anomalous magnetic dipole moment; precise measurement of the hadronic cross sections is necessary to study the radial excitations of the light vector mesons ρ, ω, and φ; and measurement of higher-order quantum electrodynamic (QED) processes is important for testing QED theory. This physics can be studied with a dedicated detector at higher statistics. For this purpose the SND was constructed with many improvements relative to ND: the solid angle coverage is up to 96% of 4π sr, the NaI(Tl) calorimeter has a uniform spherical shape with fine segmentation in azimuthal and polar angles and 3 layers in the radial direction, a drift chamber is used as a central tracker, and the external anti-coincidence flat scintillation counters are enhanced by a coordinate system made of arrays of streamer tubes. Experimental program The experimental program of the SND consists of the following items: Radiative decays; OZI and G-parity suppressed decays; Electromagnetic decays; e+e− annihilation into hadrons; Tests of QED processes; Search for rare decays; Search for C-even reactions. Detector The SND and its upgraded version are described in the references. The detector design is illustrated in the R-θ view and a 3-D plot of the NaI(Tl) calorimeter segmentation. The unique features of the SND and its sensitivity to neutral particles are defined by the state-of-the-art NaI(Tl) calorimeter. Results Data collected in the SND experiment from 1995 to 2000 correspond to an integrated luminosity of 30 pb−1 spread over the energy range 2E=0.4-1.4 GeV. Reviews of the results of this experiment are presented in the references, and the results are included in the PDG Review. The complete list of publications from SND also covers recent results of the experiment in the energy range 2E=1.0-2.0 GeV started in 2009. References External links SND experiment record on INSPIRE-HEP See also ND Experiment Budker Institute of Nuclear Physics Particle detector Experimental physics List of accelerators in particle physics Storage ring Meson List of particles Annihilation Сферический нейтральный детектор Институт ядерной физики СО РАН ВЭПП-2000 Particle detectors Particle experiments Experimental particle physics Particle physics facilities Budker Institute of Nuclear Physics
SND Experiment
[ "Physics", "Technology", "Engineering" ]
809
[ "Particle detectors", "Measuring instruments", "Experimental physics", "Particle physics", "Experimental particle physics" ]
33,902,476
https://en.wikipedia.org/wiki/Minusheet%20perfusion%20culture%20system
Minusheet perfusion culture system is used for advanced cell culture experiments in combination with adherent cells and to generate specialized tissues in combination with selected biomaterials, special tissue carriers and compatible perfusion culture containers. The technical development of the Minusheet perfusion culture system was driven by the idea to create under in vitro conditions an environment resembling as near as possible the situation of specialized tissues found within the organism. Basis of this invention is therefore individually selected biomaterials for optimal cell adhesion mounted in Minusheet tissue carriers. Moreover, to always offer fresh nutrition including respiratory gas and to simulate a tissue-specific fluid environment, the tissue carriers can be inserted into compatible perfusion culture containers. As a result, a variety of publications illustrates that tissues generated by this innovative approach exhibit an excellent and stable quality. Thus, on the one hand the system provides a highly adaptable basis for the culture of adherent cells and the generation of specialized tissues. On the other hand the Minusheet perfusion culture system is bridging a methodical gap between the conventional static 24 well culture plate and modern perfusion culture technology. Crucial generation of specialized tissues Specialized tissues in culture are urgently needed in regenerative medicine, tissue engineering, nanotechnology, biomaterial research and advanced toxicity testing of newly developed pharmaceuticals. However, it is often observed that raised tissues do not exhibit expected functional features. Instead dedifferentiation is observed [1-4]. These cell biological alterations arise after isolation of cells and proceed during static culture in a dish due to suboptimal fluid environment and minor adhesion on biomaterials. Further uncontrolled supply with nutrition and respiratory gas, an overshoot of metabolites and paracrine factors or missing rheological stress can increase the degree of dedifferentiation. In consequence, regarding an optimal generation of specialized tissues a powerful strategy has to exclude as much as possible harmful parameters, while factors supporting the process of tissue development must be intensified [5]. Selected biomaterials promote development within a tissue carrier Under natural conditions a prerequisite for an optimal tissue development is a cell-specific interaction with the extracellular matrix, while under in vitro conditions a substitute for the extracellular matrix has to be selected. However, the crucial problem is that a biomaterial can influence the development of functional features within a maturing tissue in a good and in a bad sense. In consequence, the suitability of a decellularized extracellular matrix, newly developed synthetic polymers, biodegradable scaffolds, ceramics or metal alloys cannot be predicted but must be tested. To meet parameters positively influencing cell adhesion and communication, the technical concept is based on a Minusheet tissue carrier (Fig. 1). By the help of this tool cell adhesion and development of tissue can be tested with individually selected biomaterials. These experiments can be performed first under static (Fig. 2) and then under dynamic (Fig. 3) culture conditions [6]. In both cases a Minusheet tissue carrier prevents damage but supports development of contained cells or tissues during experimentation. 
To stay compatible with a conventional 24 well culture plate a selected biomaterial must be punched in a diameter of 13 mm. In this format many materials are also commercially available. Further materials can be applied in form of filters, foils, nets, fleeces and scaffolds (Fig. 1a). For an easy handling and to prevent damage during development the selected specimens are placed in the base part of a Minusheet tissue carrier (Fig. 1b). Pressing down a tension ring the biomaterial is held in position (Fig. 1c). After mounting a tissue carrier is enveloped in a bag and sterilized. Cell seeding on a tissue carrier For cell seeding the mounted tissue carrier is transferred by a forceps in a 24 well culture plate (Fig. 2). To concentrate cells on top of a tissue carrier culture medium is added to a level so that the selected biomaterial is just wetted. Then an aliquot of cells is transferred by a pipette to the surface of the mounted biomaterial. A standard culture protocol with a tissue carrier can be initiated by seeding cells onto the upper side. When a tissue carrier is turned, cells can also be seeded on the other side so that co-culture experiments with two different cell types become possible. Not only single cells but also a thin slice of tissue can be mounted between two pieces of a woven net within a Minusheet tissue carrier. Further flexible materials such as collagen sheets can be used in a tissue carrier like the skin of a drum. Last but not least excellent results were obtained by mounting a polyester fleece as an artificial interstitium for spatial parenchyma development [5,6,8]. It is obvious that for each specialized tissue very individual spatial environments within a tissue carrier can be created. Compatible perfusion culture containers It has been shown that the static environment within a 24 well culture plate leads to a decrease of nutrition and hormones, an uncontrollable increase of metabolites and an overshoot of paracrine factors during time. Due to these reasons a Minusheet tissue carrier with adherent cells is used only for the short period of cell seeding in a 24 well culture plate. In consequence, after adhesion of cells the tissue carrier is transferred to a perfusion culture container to offer a dynamic fluid environment. To meet the individual requirements of specialized tissues a variety of perfusion culture containers was constructed (Fig. 3). Each of the perfusion culture containers has at least one inlet and one outlet for the transport of culture medium. A basic version of a container allows the simple bathing of cells respectively growing tissues under continuous medium transport (Fig. 4a). In a gradient container the tissue carrier is placed between the base and the lid so that both sides can be provided with individual media mimicking a typical environment for epithelia (Fig. 4b). A further culture container is made of a transparent lid and base allowing the microscopic observation during tissue development (Fig. 4c). In addition, a perfusion culture container can exhibit a flexible silicone lid. Applying force to this lid by an eccentric rotor simulates a mechanical load as required in cartilage and bone tissue engineering. Shaped tissues such as an auricle or different forms of cartilage can be generated with individual scaffolds in a special tissue engineering container. 
Finally, spatial extension of tubules derived from renal stem/progenitor cells is obtained within a perfusion container filled with an artificial interstitium made of polyester fleece. Finally, all of these containers are machined out of a special polycarbonate (Makrolon®) so that all of them can be autoclaved for multiple uses. Performance of perfusion culture experiments To maintain the necessary temperature of 37 °C within a perfusion culture container, a heating plate (MEDAX-Nagel, Kiel, Germany) and a cover lid (not shown) are used during performance of culture experiments over weeks (Fig. 5, 7). The transport of culture medium is best accomplished using a slowly rotating peristaltic pump (ISMATEC, IPC N8, Wertheim, Germany). It is able to deliver adjustable and exact pump rates between 0.1 and 5 mL per hour. On the passage from the storage bottle through the perfusion culture container medium is transported along a mounted tissue carrier to provide contained cells. The exact geometrical placement of the tissue carrier within a perfusion culture container guarantees during transport of medium provision with always fresh nutrition and respiratory gas from all sides. At the same time it prevents an unphysiological accumulation of metabolic products and an overshoot of paracrine factors. To maintain for the whole culture period this controlled environment, the metabolized medium is collected in a separate waste bottle. In consequence, medium is not recirculated. Stabilization of pH during perfusion culture Normally cell culture experiments are performed in a CO2 incubator. Also perfusion culture experiments can be performed in such an atmosphere. However, a much better solution is the performance of perfusion culture experiments under atmospheric air on a laboratory table, since it facilitates the complete handling. However, in this case the culture medium has to be adjusted to atmospheric air. Keeping media in a 5% CO2 atmosphere within an incubator always a relatively high amount of NaHCO3 is contained to maintain a constant pH between 7.2 and 7.4. If such a formulated medium is used for perfusion culture outside a CO2 incubator, the pH will shift from the physiological range to much more alkaline values due to the low content of CO2 (0.3%) in atmospheric air. For that reason any medium used for perfusion culture outside a CO2 incubator has to be stabilized by reducing the NaHCO3 concentration and/or by adding biological buffers such as HEPES (GIBCO/Invitrogen, Karlsruhe, Germany) or BUFFER ALL (Sigma-Aldrich-Chemie, München, Germany). The necessary amount can be easily determined by admixing increasing amounts of biological buffer solution to an aliquot of medium. Then the medium must equilibrate over night on a thermo plate at 37 °C under atmospheric air. For example, application of 50 mmol/L HEPES or an equivalent of BUFFER ALL (ca. 1%) to IMDM (Iscove’s Modified Dulbecco’s Medium, GIBCO/Invitrogen) will maintain a constant pH of 7.4 throughout long term perfusion culture under atmospheric air on a laboratory table. Availability of oxygen in medium To obtain in a perfusion culture experiment a high saturation of O2 a selected medium such as IMDM has to be transported through a gas permeable silicone tube. The use of a silicone tube provides a large surface for gas exchange by diffusion due to a thin wall (1 mm), the small inner diameter (1 mm) and its extended length (1 m). 
For example, analysis of IMDM (3024 mg/L NaHCO3, 50 mmol/L HEPES) equilibrated against atmospheric air during a standard perfusion culture experiment shows constant partial pressures of at least 160 mmHg O2 [7]. Modulation of oxygen content It has been shown that growing cells and tissues have very individual oxygen requirements. Due to this reason it is important that the content of oxygen can be adapted in individual perfusion culture experiments. The technical solution is a gas exchanger module containing a gas inlet and outlet (Fig. 6a). Further a spiral with a long thin-walled silicon tube for medium transport is mounted inside the module. Since the tube is highly gas-permeable, it guarantees optimal diffusion of gases between culture medium and internal atmosphere of the gas exchange module. In consequence, the desired gas atmosphere can be adjusted by a constant flow of a specific gas mixture through the module. This way the content of oxygen or any other gases can be modulated in the medium by diffusion. Applying this simple protocol it became possible to decrease the oxygen partial pressure within the transported medium during long term culture experiments under absolutely sterile conditions [7]. Elimination of harmful gas bubbles Performing perfusion culture experiments it always has to be considered that gas bubbles are forming during slow transport of culture medium. They arise during suction of medium in the storage bottle, during transport within the tube, during distribution within the culture container and during elimination on the way to the waste bottle. Due to unknown reasons gas bubbles accumulate especially at material transitions between tubes, connectors and perfusion containers. First these gas bubbles are so small that they cannot be observed with the human eye, but during ongoing transport of culture medium they increase in size and are able to form an embolus that massively impedes medium flow. Within a culture container gas bubbles are leading to a regional shortage of medium supply and are causing breaks in the fluid continuum so that massive fluid pressure changes result. In a gradient perfusion culture container, where two media are transported at exactly the same speed, embolic effects can lead to pressure differences destroying in turn the contained epithelial barrier [5,9]. To avoid the concentration of gas bubbles within a perfusion culture experiment, a gas expander module was developed (Fig. 6b). This module removes gas bubbles from the medium during transport. When medium is entering the gas expander module, it rises within a small reservoir and expands before it drops down a barrier. During this process gas bubbles are separated from the medium at the top of the gas expander module. In consequence, medium leaving the container is oxygen-saturated but free of gas bubbles [8,9]. Broad spectrum of applications In the last years numerous papers were published dealing with the Minusheet perfusion culture system. The wide spectrum illustrates that the modular system was applied to generate specialized tissues in excellent cell biological quality used in tissue engineering, biomaterial research and advanced pharmaceutical drug toxicity testing. A complete list of these applications is found in the data bank ‘Proceedings in perfusion culture’ (see 'External links'). 
As demonstrated by numerous patents (DE 39 23 279, DE 42 00 446, DE 42 08 805, DE 44 43 902, DE 19530 556, DE 196 48 876 C2, DE 199 52 847 B4, US 5 190 878, US 5 316 945, US 5 665 599, J 2847669, DE 10 2005 002 938, PA 10 2004 054 125.6, PA 10 2005 001 747.9, patents pending) Will W. Minuth has invented the presented Minusheet perfusion culture system. Numerous pilot experiments with the Minusheet perfusion culture system were performed in the last years by Lucia Denk and Will W. Minuth. The experimental work is presently focusing on the creation of an artificial polyester interstitium to repair injured renal parenchyma. In 1992 the Minusheet perfusion culture system received the Philip Morris research award ‘Challenge of the Future’ in Munich, Germany. The award was handed over by Henry Kissinger, Hans Joachim Friedrichs and Paul Müller. To introduce the Minusheet perfusion culture system on the market, Katharina Lorenz-Minuth founded non-profit orientated Minucells and Minutissue Vertriebs GmbH (D-93077 Bad Abbach/Germany). External links Proceedings in perfusion culture Philip Morris Stiftung Book dealing with the Minusheet perfusion culture References 1. Elaut G, Henkens T, Papeleu P, Snykers S, Vinken M, Vanhaecke T, Rogiers V (2006) Molecular mechanisms underlying the dedifferentiation process of isolated hepatocytes and their cultures. Curr Drug Metab 7(6):629-60. 2. Schuh E, Hofmann S, Stok K, Notbohm H, Müller R, Rotter N (2011) Chondrocyte redifferentiation in 3D: the effect of adhesion site density and substrate elasticity. J Biomed Mater Res A: DOI 10.1002/jbm.a.33226. 3. Zhang Y, Li TS, Lee ST, Wawrowsky KA, Cheng K, Galang G, Malliaras K, Abraham MR, Wang C, Marban E (2010) Dedifferentiation and proliferation of mammalian cadiomyocytes 2010 PLoS ONE 5(9):e12559. 4. Liu Y, Jiang X, Yu MK, Dong J, Zhang X, Tsang LL, Chung YW, Li T, Chan HC (2010) Switsching from bone marrow-derived neurons to epithelial cells through dedifferentiation and translineage redifferentiation. Cell Biol Int 34(11):1075-83. 5. Minuth WW and Denk L 2011 Advanced culture experiments with adherent cells. From single cells to specialized tissues in perfusion culture. ISBN Nr. 978-3-88246-330-9, http://epub.uni-regensburg.de/21484/ 6. Minuth WW, Denk L, Glashauser A 2010 A modular culture system for the generation of multiple specialized tissues. Biomaterials 31:2945-2954. 7. Strehl R, Schumacher K, Minuth WW 2004 Controlled respiratory gas delivery to embryonic renal explants in perfusion culture. Tissue Eng 10(7-8):1196-203. 8. Minuth WW, Strehl R, Schumacher K 2004 Tissue factory: conceptual design of a modular system for the in vitro generation of functional tissues. Tissue Eng 10:285-294. 9. Minuth WW, Denk L, Roessger A 2009 Gradient perfusion culture – Simulating a tissue-specific environment for epithelia in biomedicine. J Epithelial Biology & Pharmacology 2:1-13. Biomaterials Biomedicine Cell culture Molecular biology techniques
Minusheet perfusion culture system
[ "Physics", "Chemistry", "Biology" ]
3,566
[ "Biomaterials", "Biomedicine", "Model organisms", "Materials", "Molecular biology techniques", "Molecular biology", "Cell culture", "Matter", "Medical technology" ]
48,429,140
https://en.wikipedia.org/wiki/Tricholoma%20microcarpoides
Tricholoma microcarpoides is an agaric fungus of the genus Tricholoma. Found in Singapore, it was described as new to science in 1994 by English mycologist E.J.H. Corner. See also List of Tricholoma species References microcarpoides Fungi described in 1994 Fungi of Asia Taxa named by E. J. H. Corner Fungus species
Tricholoma microcarpoides
[ "Biology" ]
83
[ "Fungi", "Fungus species" ]
48,429,141
https://en.wikipedia.org/wiki/Tricholoma%20minutissimum
Tricholoma minutissimum is an agaric fungus of the genus Tricholoma. Found in the South Solomons, it was described as new to science in 1994 by English mycologist E.J.H. Corner. See also List of Tricholoma species References minutissimum Fungi described in 1994 Fungi of Oceania Taxa named by E. J. H. Corner Fungi without expected TNC conservation status Fungus species
Tricholoma minutissimum
[ "Biology" ]
93
[ "Fungi", "Fungus species" ]
48,430,030
https://en.wikipedia.org/wiki/Titanium%20Sponge%20Plant
Titanium Sponge Plant is an Indian manufacturing plant that produces titanium sponge, a material widely used in aeronautics, light defence vehicles, and other applications. It is located at Kerala Minerals and Metals Ltd (KMML), Chavara, Kollam district of Kerala. Notably, it is the only plant in the world that carries out the entire aerospace-grade titanium sponge manufacturing process under one roof. History The importance of establishing domestic titanium production was realized due to India's significant demand for titanium and magnesium alloys, which were predominantly imported from countries such as China, Russia, and Japan. Dr. APJ Abdul Kalam, a scientist and the former President of India, highlighted the issue in a speech at the Kerala Legislative Assembly. The plant was fully commissioned in August 2015. Establishment The successful implementation was achieved after about twenty years of continuous research by the Defence Metallurgical Research Laboratory (DMRL under DRDO). The project is funded by the Vikram Sarabhai Space Centre (VSSC under ISRO). Ranking India is the seventh country in the world to have such a complex titanium sponge plant with the technology to make titanium sponge, and the first to have carried out the entire process under one roof in an indigenous manner. The company, Kerala Minerals and Metals Ltd. (KMML), has also won awards for commercialising this technology. Design and capacity The plant has an intricate design for carrying out the manufacturing of titanium alloy wrought products and the fabrication of hardware. Titanium sponge is a porous form of titanium metal produced through the Kroll process, which includes leaching or heated vacuum distillation to make the metal almost 99.7% pure. Work is being done actively to increase the capacity of the TSP towards the proposed 10000 TPY. A memorandum of understanding has also been signed by KMML with the Steel Authority of India (SAIL) for a joint venture to prepare titanium sponge at large scale. India has the third largest reserves of titanium-containing minerals, and was the sixth largest country by titanium production in 2013. However, high-purity titanium sponge (defined as containing at least 99.7% titanium) is still imported as a raw material for aerospace applications from countries like Japan, Russia and China. Using the indigenously made titanium sponge, VSSC realized the aerospace-grade alloy, having formula Ti6Al4V, at Mishra Dhatu Nigam (Midhani) in Hyderabad. Future prospects Proposals for future work include magnesium recovery from MgCl2 (magnesium chloride), setting up an additional facility along similar lines, and expanding titanium production capacity from 500 MT to 1000 MT. References Manufacturing plants Titanium companies Aerospace materials
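For context, the Kroll process mentioned above reduces titanium tetrachloride with molten magnesium; the overall reaction below is standard chemistry rather than plant-specific detail, and it also shows where the MgCl2 targeted by the magnesium-recovery proposal comes from.

```latex
% Kroll process: magnesium reduction of titanium tetrachloride,
% carried out under an inert atmosphere; the Ti forms as porous "sponge".
\mathrm{TiCl_4 + 2\,Mg \longrightarrow Ti + 2\,MgCl_2}
```

In industrial practice the MgCl2 by-product is typically electrolyzed back into magnesium and chlorine for reuse, which is what a recovery facility would implement.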
Titanium Sponge Plant
[ "Engineering" ]
532
[ "Aerospace materials", "Aerospace engineering" ]
48,430,909
https://en.wikipedia.org/wiki/AlkD
AlkD (Alkylpurine glycosylase D) is an enzyme belonging to a family of DNA glycosylases that are involved in DNA repair. It was discovered by a team of Norwegian biologists from Oslo in 2006. It was isolated from the soil-dwelling Gram-positive bacterium Bacillus cereus, along with another enzyme, AlkC. AlkC and AlkD are most probably derived from the same protein, as indicated by their close resemblance. They are also found in other prokaryotes. Among eukaryotes, they are found only in single-celled species, such as Entamoeba histolytica and Dictyostelium discoideum. The enzyme specifically targets 7mG (7-methylguanine) in DNA, and is, therefore, unique among DNA glycosylases. It can also act on other methylpurines, with lower affinity. This indicates that the enzyme is specialized for locating and cutting out (excising) chemically modified bases from DNA, specifically at 7mG, whenever there are errors in replication. It accelerates the rate of 7mG hydrolysis 100-fold over spontaneous depurination. Thus, it protects the genome from harmful changes induced by chemical and environmental agents. Its crystal structure was described in 2008. It is the first HEAT repeat protein identified to interact with nucleic acids or to contain enzymatic activity. Structure AlkD is made up of 237 amino acids, and has a molecular size of 25 kDa. It is composed of a tandem array of helical repeats reminiscent of HEAT motifs, which are known to facilitate protein-protein interactions and had not previously been associated with DNA binding or catalytic activity. It is a single polypeptide chain folded into an α-helical domain. The entire protein is composed of HEAT repeat domains, similar to those found in other proteins. Twelve of the fourteen helices (αA-αN) pair in an antiparallel pattern, and form six tandemly repeated α-α motifs: αA/αC, αD/αE, αF/αG, αH/αI, αJ/αK, and αL/αM. These helical repeats are stacked into a superhelical solenoid in which helices B, C, E, G, I, K and M form a concave surface with an aromatic cleft at its center. Residues within this cleft are crucial for the base excision activity. The concave surface is positively charged and is presumed to be the binding site of DNA, as well as to provide protection against bacterial sensitivity to alkylating agents. Mechanism of action AlkD has a unique mechanism for base excision in DNA. Instead of interacting directly with the damaged (alkylated) DNA portion, it acts on the nearby undamaged region. It then induces flipping of the alkylated and opposing bases, accompanied by compression of the DNA stack. The exposed DNA portion can then be enzymatically removed, by hydrolysis of the 7mG. References DNA repair Hydrolases
AlkD
[ "Biology" ]
644
[ "Molecular genetics", "Cellular processes", "DNA repair" ]
37,930,030
https://en.wikipedia.org/wiki/C15H12N2O2
The molecular formula C15H12N2O2 (molar mass: 252.27 g/mol, exact mass: 252.0899 u) may refer to: Oxcarbazepine Phenytoin (PHT) Molecular formulas
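The quoted molar mass can be reproduced by summing standard atomic weights, as in this small check (our own illustration, not part of the entry):

```python
# Verify the molar mass of C15H12N2O2 from standard atomic weights.
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
composition = {"C": 15, "H": 12, "N": 2, "O": 2}

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")  # -> 252.27 g/mol
```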
C15H12N2O2
[ "Physics", "Chemistry" ]
55
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
37,930,059
https://en.wikipedia.org/wiki/C20H26N4O5S
The molecular formula C20H26N4O5S may refer to: Niperotidine Glisolamide Molecular formulas
C20H26N4O5S
[ "Physics", "Chemistry" ]
29
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
36,486,337
https://en.wikipedia.org/wiki/HPP%20model
The Hardy–Pomeau–Pazzis (HPP) model is a fundamental lattice gas automaton for the simulation of gases and liquids. It was a precursor to the lattice Boltzmann methods. From lattice gas automata, it is possible to derive the macroscopic Navier-Stokes equations. Interest in lattice gas automaton methods levelled off in the early 1990s, due to rising interest in the lattice Boltzmann methods. It was first introduced in papers published in 1973 and 1976 by Jean Hardy, Yves Pomeau and Olivier de Pazzis, whose initials give the model its name. The model can be used as a simple model for the movement of both gases and fluids. Model In this model, the lattice takes the form of a two-dimensional square grid, with particles capable of moving to any of the four adjacent grid points which share a common edge; particles cannot move diagonally. This means each grid point can be in one of sixteen possible states. Particles exist only on the grid points, never on the edges or surface of the lattice. Each particle has an associated direction (from one grid point to another immediately adjacent grid point). Each lattice grid cell can only contain a maximum of one particle for each direction, i.e., it contains a total of between zero and four particles. The following rules also govern the model: A single particle moves in a fixed direction until it experiences a collision. Two particles experiencing a head-on collision are deflected perpendicularly. Two particles experiencing a collision which isn't head-on simply pass through each other and continue in the same direction. Optionally, when a particle collides with the edge of the lattice it can rebound. The HPP model follows a two-stage update process. Collision step In this step, the above rules 2., 3., and 4. are checked and applied if any collisions have occurred. This results in head-on colliding particles changing direction, pass-through collisions continuing unchanged, and non-colliding particles simply remaining the same. Transport step The second step consists of each particle moving one lattice step in the direction it is currently travelling, which could have been changed by the above collision step. Formal definition The model operates on an infinite two-dimensional square lattice, where the four unit vectors $\mathbf c_1=(1,0)$, $\mathbf c_2=(0,1)$, $\mathbf c_3=(-1,0)$ and $\mathbf c_4=(0,-1)$ are associated with the direction indices $i=1,2,3,4$. Let $\mu$ be an allowed configuration, assigning to each lattice site $\mathbf x$ the occupation numbers $n_i(\mathbf x) \in \{0,1\}$. The function $n_i$ checks for the existence of a particle with a certain velocity, while $1-n_i$ does the opposite. The successor of the configuration can be calculated using the formulas from the original paper: $n_i^{t+1}(\mathbf x + \mathbf c_i) = n_i^t - n_i^t n_{i+2}^t (1-n_{i+1}^t)(1-n_{i+3}^t) + n_{i+1}^t n_{i+3}^t (1-n_i^t)(1-n_{i+2}^t)$, where all occupation numbers on the right-hand side are evaluated at $\mathbf x$ and the direction indices are taken modulo 4. Shortcomings The model is badly flawed, as momentum is always conserved separately along each horizontal and each vertical line. No energy is ever removed from the model, either by collisions or movement, so it will continue indefinitely. The HPP model lacked rotational invariance, which made the model highly anisotropic. This means, for example, that the vortices produced by the HPP model are square-shaped. Notes References (Chapter 2 on Lattice gas Cellular Automata) Computational fluid dynamics
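The two-stage update is straightforward to implement on a boolean occupation array. The following sketch is our own minimal implementation of the rules above, using periodic rather than reflecting boundaries for brevity; the lattice size, density, and seed are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64
# n[i] is the occupation lattice for direction i: 0=+x, 1=+y, 2=-x, 3=-y
n = rng.random((4, L, L)) < 0.2
shifts = [(0, 1), (-1, 0), (0, -1), (1, 0)]  # (row, col) move per direction

def step(n):
    # Collision: a head-on pair (i, i+2) with the perpendicular pair
    # empty is rotated into the perpendicular pair, and vice versa.
    swap = (n[0] & n[2] & ~n[1] & ~n[3]) | (n[1] & n[3] & ~n[0] & ~n[2])
    n = n ^ swap  # XOR broadcasts over all four channels where swap holds
    # Transport: every particle moves one site along its direction
    # (periodic boundaries via np.roll).
    return np.stack([np.roll(n[i], shifts[i], axis=(0, 1))
                     for i in range(4)])

for _ in range(100):
    n = step(n)
print("particles:", int(n.sum()))  # particle number is conserved
```

Flipping all four channels at a colliding site is equivalent to the collision rule: the occupied head-on pair empties and the empty perpendicular pair fills, so mass and total momentum are conserved exactly as the model requires.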
HPP model
[ "Physics", "Chemistry" ]
619
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
36,490,463
https://en.wikipedia.org/wiki/TyeA%20protein%20domain
In molecular biology, the protein domain TyeA is short for Translocation of Yops into eukaryotic cells A. It controls the release of Yersinia outer proteins (Yops) which help Yersinia evade the immune system. More specifically, it interacts with the bacterial protein YopN via hydrophobic residues located on the helices. Function This protein domain is involved in the control of Yop release. This helps it to evade the host's immune system. Yersinia spp. do this by injecting the effector Yersinia outer proteins (Yops) into the target cell. Also involved in Yop secretion are YopN and LcrG. TyeA is also required for translocation of YopE and YopH. TyeA interacts with YopN and with YopD, a component of the translocation apparatus. This shows the complex which recognizes eukaryotic cells and controls Yop secretion is also actively involved in translocation. Localisation Like YopN, TyeA is localized at the bacterial surface. Structure The structure of TyeA is composed of two pairs of parallel alpha-helices. Mechanism Association of TyeA with the C terminus of YopN is accompanied by conformational changes in both polypeptides that create order out of disorder: the resulting structure then serves as an impediment to type III secretion of YopN. References Protein domains
TyeA protein domain
[ "Biology" ]
299
[ "Protein domains", "Protein classification" ]
36,490,792
https://en.wikipedia.org/wiki/Marine%20Environmental%20Data%20and%20Information%20Network
The Marine Environmental Data and Information Network (MEDIN) is a United Kingdom organization created to curate marine environmental data. It is overseen by the UK government's Marine Science Co-ordination Committee. References Oceanography Scientific organisations based in the United Kingdom
Marine Environmental Data and Information Network
[ "Physics", "Environmental_science" ]
51
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
36,491,327
https://en.wikipedia.org/wiki/TACI-CRD2%20protein%20domain
In molecular biology, TACI-CRD2 represents the second cysteine-rich protein domain found in the TACI family of proteins. Members of this family are predominantly found in tumour necrosis factor receptor superfamily, member 13b (TACI), and are required for binding to the ligands APRIL and BAFF. TACI-CRD2 stands for Transmembrane Activator and CAML Interactor, Cysteine-Rich Domain 2. Function TACI functions as a negative regulator of BAFF function, given that loss of TACI expression results in the overproduction of B lymphocytes, a type of white blood cell that guards against infection. Cytokines can be grouped into a family on the basis of sequence, functional and structural similarities. Tumor necrosis factor (TNF) (also known as TNF-alpha or cachectin) is a cytotoxin derived from a form of white blood cell called monocytes. It is thought to cause tumour regression, septic shock and cachexia. The protein is synthesised as a prohormone with an unusually long and atypical signal sequence, which is absent from the mature secreted cytokine. A short hydrophobic stretch of amino acids serves to anchor the prohormone in lipid bilayers. Both the mature protein and a partially processed form of the hormone are secreted after cleavage of the propeptide. There are a number of different families of TNF, but all these cytokines seem to form homotrimeric (or heterotrimeric in the case of LT-alpha/beta) complexes that are recognised by their specific receptors. TACI is a member of the tumor necrosis factor receptor superfamily and has an important role as a regulator of B cell function. TACI binds its two ligands, APRIL and BAFF, with high affinity, and contains two cysteine-rich domains (CRDs) in its extracellular region. Formation A shorter form of TACI, lacking the N-terminal cysteine-rich domain (TACI-CRD1), is generated by alternative splicing. This shorter form is capable of ligand-induced cell signaling, showing that the second CRD alone (TACI-CRD2) retains full affinity for both ligands. Ligands The ligands are type II transmembrane protein cytokines that have various effects on immune cells, including acting as costimulatory molecules, apoptotic agents and growth factors. APRIL (also known as TNSF13A, TALL-2, and TRDL-1) is a TNF ligand that is overexpressed by some tumours. References Protein domains Protein families
TACI-CRD2 protein domain
[ "Biology" ]
557
[ "Protein families", "Protein domains", "Protein classification" ]
36,491,756
https://en.wikipedia.org/wiki/Platinum-based%20antineoplastic
Platinum-based antineoplastic drugs (informally called platins) are chemotherapeutic agents used to treat cancer. Their active moieties are coordination complexes of platinum. These drugs are used to treat almost half of people receiving chemotherapy for cancer. In this form of chemotherapy, commonly used drugs include cisplatin, oxaliplatin, and carboplatin, but several others have been proposed or are under development. Addition of platinum-based chemotherapy drugs to chemoradiation in women with early cervical cancer seems to improve survival and reduce the risk of recurrence. In total, these drugs can cause a combination of more than 40 specific side effects, including neurotoxicity, which is manifested by peripheral neuropathies such as polyneuropathy. Mechanism of action As studied mainly on cisplatin, but presumably for other members as well, platinum-based antineoplastic agents cause crosslinking of DNA as monoadducts, interstrand crosslinks, intrastrand crosslinks or DNA-protein crosslinks. They act mostly on the N-7 position of adjacent guanines, forming a 1,2-intrastrand crosslink. The resultant crosslinking inhibits DNA repair and/or DNA synthesis. This mechanism leads to specific patterns of damage in DNA, which can kill cancer cells but can also increase the risk of secondary tumors developing. Platinum-based antineoplastic agents are sometimes described as "alkylating-like" due to similar effects as alkylating antineoplastic agents, although they do not have an alkyl group. Examples Strategies for improving platinum-based anticancer drugs usually involve changes in the neutral spectator ligands, changes in the nature of the anions (halides vs various carboxylates), or changes in the oxidation state of the metal (Pt(II) vs Pt(IV)). Nanotechnology has been explored to deliver platinum more efficiently in the case of lipoplatin, which is introduced into the tumor sites, thereby reducing the chance of toxicity. Cisplatin was the first to be developed. Cisplatin is particularly effective against testicular cancer; the cure rate was improved from 10% to 85%. Similarly, the addition of cisplatin to adjuvant chemotherapy led to a marked increase in disease-free survival rates for patients with medulloblastoma - again, up to around 85%. This application of cisplatin was developed by pediatric oncologist Roger Packer in the early 1980s. References Medicinal inorganic chemistry Platinum compounds Chemotherapy
Platinum-based antineoplastic
[ "Chemistry" ]
529
[ "Medicinal inorganic chemistry", "Bioinorganic chemistry", "Medicinal chemistry" ]
36,493,328
https://en.wikipedia.org/wiki/C19H28N2O4
The molecular formula C19H28N2O4 may refer to: Carpindolol, a beta blocker Roxatidine acetate, a histamine H2 receptor antagonist drug Molecular formulas
C19H28N2O4
[ "Physics", "Chemistry" ]
60
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
36,493,464
https://en.wikipedia.org/wiki/LARMOR%20neutron%20microscope
The LARMOR neutron microscope is a microscope based on the principle of neutron scattering. It is named in honor of Joseph Larmor and the principle of Larmor precession, which is used to increase resolution and accuracy. It is located at the ISIS Neutron and Muon Source in Oxfordshire. Description LARMOR will be used to make high-precision, deep images of physical objects. Since neutrons bear no electrical charge, neutron beams can penetrate deeply into materials. By examining the few interactions that neutrons do have with the atoms they encounter and enhancing the imaging using Larmor precession, the microscope is predicted to create images with atom-level resolution. The microscope will allow for observation of magnetic materials, complex liquids and living specimens. An example of application of this research is improved electronics and charge storage in lithium-ion batteries. LARMOR is a joint project of the Delft University of Technology, the Eindhoven University of Technology, the University of Groningen and the Science and Technology Facilities Council's ISIS Neutron and Muon Source. It is funded jointly by the participating Dutch universities and the ISIS Neutron and Muon Source, and the Dutch NWO will contribute 2.3 million euros. One-third of the microscope's time will be reserved for research from the Netherlands. See also Neutron microscope Larmor precession Larmor website ISIS Neutron and Muon Source References Microscopes Science and technology in Oxfordshire Vale of White Horse
LARMOR neutron microscope
[ "Chemistry", "Technology", "Engineering" ]
292
[ "Microscopes", "Measuring instruments", "Microscopy" ]
36,494,909
https://en.wikipedia.org/wiki/C21H25NO
The molecular formula C21H25NO (molar mass: 307.429 g/mol) may refer to: Benzatropine, or benztropine Hepzidine 2β-Propanoyl-3β-(2-naphthyl)-tropane (or WF-23) Molecular formulas
C21H25NO
[ "Physics", "Chemistry" ]
84
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
45,400,612
https://en.wikipedia.org/wiki/Hyperloop%20One
Hyperloop One, known as Virgin Hyperloop until November 2022, was an American transportation technology company that worked to commercialize high-speed travel utilizing the Hyperloop concept, a variant of the vacuum train. The company was established on June 1, 2014, and reorganized and renamed on October 12, 2017. Hyperloop systems were intended to move cargo and passengers at airline speeds but at a fraction of the cost. They were designed to run suspended by magnetic systems in a partially evacuated tube. The original Hyperloop concept proposed to use a linear electric motor to accelerate and decelerate an air-bearing-levitated pod through a low-pressure tube. The vehicle was to glide silently with very low turbulence. The system was proposed to be entirely autonomous, quiet, direct-to-destination, and on-demand. It would have been built on elevated structures or in tunnels, free of at-grade crossings and requiring less right of way than high-speed rail or highways. Virgin Hyperloop made substantive technical changes to Elon Musk's initial proposal and chose not to pursue the Los Angeles–San Francisco notional route that Musk envisioned in his 2013 alpha-design white paper. It demonstrated a form of propulsion technology on May 11, 2016, at its test site in North Las Vegas. It completed a Development Loop (DevLoop) and on May 12, 2017, held its first full-scale test. The test combined Hyperloop components including vacuum, propulsion, levitation, sled, control systems, tube, and structures. On November 8, 2020, after more than 400 uncrewed tests, the firm conducted the first human trial at its test site in Las Vegas, Nevada. However, in February 2022, the company abandoned plans for human-rated travel and instead focused on freight, firing more than 100 employees, amounting to half its total workforce. In November of that year the company decided to rebrand, reverting to the name Hyperloop One. It was announced on December 21, 2023 that the company would cease operations on December 31, 2023 due to a number of factors, including financial challenges, high interest rates, the loss of initial backing and support, as well as its failure to secure any contracts for building a working hyperloop system; it began selling its assets and laying off remaining employees. According to The Verge, all of its intellectual property would shift to its majority stakeholder, major Dubai port operator DP World. History Origins The idea of trains in vacuum has been elaborated many times in the history of science and science fiction. The concept of Hyperloop transportation was first introduced by Robert H. Goddard in 1904. More recent plans for a version of the vacuum train called Hyperloop emerged from a conversation between Elon Musk and Iranian-American Silicon Valley investor Shervin Pishevar when they were flying together to Cuba on a humanitarian mission in January 2012. Pishevar asked Musk to elaborate on his hyperloop idea, which the industrialist had been mulling over for some time. Pishevar suggested using it for cargo, an idea Musk hadn't considered, but he did say he was considering open-sourcing the concept because he was too busy running SpaceX and Tesla. Pishevar pushed Musk to publish his ideas about the hyperloop, so that Pishevar could study them. On August 12, 2013, Musk released the Hyperloop Alpha white paper, generating widespread attention and enthusiasm.
In the months that followed, Pishevar incorporated Hyperloop Technologies, which would later be renamed Hyperloop One, and recruited the first board members, including David O. Sacks, Jim Messina, and Joe Lonsdale. Pishevar also recruited a cofounder, former SpaceX engineer Brogan BamBrogan. The firm set up shop in BamBrogan's garage in Los Angeles in November 2014. By January 2015, the firm had raised $9 million in venture capital from Pishevar's Sherpa Capital and investors such as Formation 8 and Zhen Fund, and was able to move into its campus in the Los Angeles Arts District. Forbes magazine put the firm on its February 2015 cover, landing the startup many fresh recruits and much new investor interest. In June 2015, Pishevar recruited former Cisco president Rob Lloyd as an investor and, eventually, the company's CEO. Funding and growth Between June 2015 and December 2015, the company continued to hire engineers and expand its downtown campus (then up to 75,000 square feet). In December 2015, Hyperloop Tech announced it would hold an open-air propulsion test at a new Test and Safety Site in Nevada. At the time, the company disclosed it had raised $37 million in financing to date and was completing a Series B round of $80 million, which closed in May 2016. In October 2016, the firm announced that it had raised another $50 million, led by an investment from 8VC and DP World. The propulsion open-air test, or POAT, was successfully held in North Las Vegas on May 11, 2016. The POAT sled accelerated to 134 mph (216 km/h) in 2.3 seconds, representing a crucial proof of concept. At the time, the renamed Hyperloop One announced it had secured partnerships with global engineering and design firms such as AECOM, SYSTRA, Arup, Deutsche Bahn, General Electric, and Bjarke Ingels. On November 10, 2016, Hyperloop One released its first system designs in collaboration with the Bjarke Ingels Group. On October 12, 2017, Hyperloop One and the Virgin Group announced that they had developed a strategic investment partnership, resulting in Richard Branson joining the board of directors. The global strategic partnership would focus on passenger and mixed-use cargo service, in addition to the creation of a new passenger division. Hyperloop One had raised $295 million by December 18, 2017, and subsequently was renamed Virgin Hyperloop One, with Branson becoming chairman of the board of directors. As of May 2019, the company had raised $400 million. In June 2020, the firm rebranded to Virgin Hyperloop, changing its logo and launching a new website. In October 2020, West Virginia governor Jim Justice announced that Virgin Hyperloop would be constructing a certification facility on land in Tucker and Grant Counties. About 800 acres owned by Western Pocahontas near Mount Storm was donated to the West Virginia University Foundation, and cooperation was expected from WVU, Marshall University, and the West Virginia Community and Technical College System. Focus on freight and layoffs In February 2022, the Financial Times reported that the company had laid off more than 100 employees, with the move allowing it to focus on cargo transport instead of passenger travel. In December 2022, a second round of layoffs was reported, focused on the firm's downtown Los Angeles staff and Las Vegas operational team. While Hyperloop One focused on freight, competitors continued to focus on a mix of freight and passenger travel.
The change in focus put construction of the West Virginia facility in question, until the company admitted in March 2023 that it had been cancelled. Test pods XP-1 After Hyperloop One began the construction of DevLoop in October 2016, the company successfully conducted the first full-system test using the levitating chassis without a passenger pod on May 12, 2017. On July 12, 2017, the company revealed images of its first-generation pod prototype to be used at the DevLoop test site in Nevada to test aerodynamics. The system-wide test integrated Hyperloop components including vacuum, propulsion, levitation, sled, control systems, tube, and structures. The company designed and built its first-generation full-scale test pod, named XP-1 (short for experimental pod one), to be used in the full-scale pod tests. The pod's motor was evolved from 500 motors that were built and tested in order to operate with resiliency in a near-vacuum environment. The pod was successfully tested for the first time on July 29, 2017, accelerating to its recorded top speed. The pod achieved 3,151 horsepower during the test inside the depressurized tube, with conditions similar to the atmosphere at high altitude above sea level. On August 2, 2017, Hyperloop One successfully tested its XP-1 passenger pod again; it traveled down the track before the brakes kicked in and it rolled to a stop. The XP-1 speed record was broken in August 2017 by WARR Hyperloop during the second Hyperloop Pod Competition; however, the pods in the competition were too small to carry passengers. XP-1 set the world speed record again during a test in December 2017. With that test, the company also demonstrated its airlock technology, which allowed the pod to be transferred into the depressurized tube. With this system, the XP-1 pod can be put in an airlock which takes a few minutes to depressurize before entering the already depressurized tube. Otherwise, the pod would need to enter the tube and wait for the 4-hour depressurization of the entire test tube. In 2018, WARR Hyperloop broke the XP-1 record again in the third Hyperloop Pod Competition, on a longer track. In the summer of 2019, the company took XP-1 on a roadshow to Ohio, Texas, Kansas, New York, Missouri, North Carolina, and Washington, D.C. XP-2 For the company's passenger testing, it created a new vehicle, dubbed "experimental pod 2", or XP-2. The vehicle was designed by Bjarke Ingels Group and Kilo Design. On November 8, 2020, after more than 400 uncrewed tests, the firm conducted the first human trial, with Josh Giegel, its co-founder and CTO, and Sara Luchian, Director of Passenger Experience, as the first passengers, at its DevLoop test site in Las Vegas, Nevada. The test was conducted in a near-vacuum environment of 100 pascals. In March 2021, Virgin Hyperloop announced that the vehicle would be on display at the Smithsonian Arts and Industries Building in late 2021. Following successful passenger testing, Virgin Hyperloop unveiled its commercial vehicle design in January 2021. Designed in collaboration with Seattle-based design firm Teague, each vehicle was planned to seat about 28 passengers but to transport thousands of passengers per hour in convoys. Funding Hyperloop One had raised over $485 million as of May 2019.
Its investors include Sherpa Capital, Formation 8, 137 Ventures, DP World, Khosla Ventures, Caspian Venture Capital, Fast Digital, Western Technology Investment, Zhen Fund, GE Ventures, and SNCF. Management The board of directors included Richard Branson (chairman), Justin Fishner-Wolfson, Sultan Ahmed Bin Sulayem, Rob Lloyd, Josh Giegel, Bill Shor, Yuvraj Narayan, Anatoly Braverman, and Emily White as a strategic adviser. Former board members include Peter Diamandis, Jim Messina, who as of July 2018 served as strategic adviser, former Morgan Stanley executive Jim Rosenthal, Joe Lonsdale, the co-founder Shervin Pishevar, who took a leave of absence from Hyperloop One in December 2017 after multiple women accused him of sexual misconduct, and Ziyavudin Magomedov, a Russian billionaire who was arrested on embezzlement charges in 2018. On November 8, 2018, Sultan Ahmed bin Sulayem succeeded Richard Branson as chairman. In February 2021, co-founder Josh Giegel was named CEO, before being replaced by CFO Raja Narayanan in October 2021. The firm announced an intent to accelerate the scheduled fielding of production systems from the early 2030s to the mid-2020s, and that the planned initial project would transport freight between the cities of Dubai and Abu Dhabi in the United Arab Emirates. Planned cooperation In June 2016 the company announced a memorandum of understanding with the Summa Group and the Russian government to construct a hyperloop in Moscow, and subsequently completed feasibility studies in Moscow and in the Far East. In August 2016, the firm announced a deal with the world's third largest ports operator, DP World, to develop a cargo offloader system at Jebel Ali in Dubai. On November 8, 2016, the firm announced it had signed a deal with Dubai's Roads and Transport Authority (RTA) to conduct feasibility studies on potential passenger and cargo hyperloop routes in the United Arab Emirates. By April 2017, the firm had feasibility studies underway in the United Arab Emirates, Finland, Sweden, the Netherlands, Switzerland, Moscow, and the UK. On September 1, 2017, the firm signed a letter of intent with Estonia to cooperate on the Helsinki–Tallinn Tunnel. In February 2018, the Virgin Group signed an "intent agreement" with the government of Maharashtra state in India to build a hyperloop transportation system between Mumbai and Pune. In August 2019, the government deemed hyperloop a public infrastructure project and approved the Virgin Hyperloop-DP World Consortium as the Original Project Proponent (OPP), recognizing hyperloop technology alongside other more traditional forms of mass transit. The Principal Scientific Adviser to the Government of India, K. VijayRaghavan, set up a Consultative Group on Future of Transportation (CGFT) to explore the regulatory path for hyperloop. On July 19, 2018, an Ohio regional planning commission announced it was investigating the use of hyperloop between airports and potentially between Chicago, Columbus, and Pittsburgh; in May 2020 the commission released the results of its Midwest Connect feasibility study, which found that the route would create $300 billion in overall economic benefits and reduce emissions by 2.4 million tons. In July 2018, Texas officials announced that the state would explore hyperloop technology for a route connecting Dallas, Austin, San Antonio, and Laredo. In June 2019, the firm announced an ongoing collaboration with the Sam Fox School of Washington University in St. Louis to explore proposals for the Missouri Hyperloop.
In October 2019, Missouri became the first US state to conduct a hyperloop feasibility study, exploring a route between Kansas City and St. Louis. In December 2019, the State Government of Punjab, India, signed an MoU with the firm to explore a route connecting the Amritsar-Ludhiana-Chandigarh corridor. In February 2020, the firm signed a partnership agreement with Saudi Arabia to conduct a pre-feasibility study. In September 2020, Virgin Hyperloop signed a partnership agreement with Bangalore International Airport Limited to conduct a feasibility study for a proposed corridor from BLR Airport. Hyperloop One Global Challenge In 2016, the firm launched its Hyperloop One Global Challenge to find the locations for, develop, and construct the world's first hyperloop networks. In January 2017, the firm announced the 35 semifinalist routes (spread over 17 countries) and held a series of events showcasing the semifinalists: Vision for India in February, Vision for America in April and Vision for Europe in June. On September 14, 2017, Hyperloop One announced the 10 winning routes; the winners were to be invited to work closely with the firm on viability studies to try to bring their respective loops from proposal to reality. Lawsuits In July 2016, the CTO and co-founder Brogan BamBrogan left the company, later filing a lawsuit with three other former employees alleging breach of fiduciary duty and misuse of corporate resources. On July 19, 2016, Hyperloop One filed a counterclaim against the four former employees, alleging they staged a failed coup of the company, in the process breaching agreements around fiduciary duty, non-competes, proprietary information, and non-disparagement, as well as intentional interference with contractual relations. On November 18, 2016, both parties agreed to settle the lawsuit. Terms were confidential and not disclosed. BamBrogan and other former Hyperloop One and SpaceX employees went on to found Arrivo, another hyperloop company (defunct in 2018). References External links Alan James about Baltic Sea Hyperloop One ring Transformed connections & enhanced cohesion. Example opportunities for Europe, 7 June 2017 2014 establishments in California 2023 disestablishments in California American companies established in 2014 American companies disestablished in 2023 Hyperloop Technology companies based in Greater Los Angeles Technology companies of the United States Transport companies established in 2014 Transport companies disestablished in 2023 Transportation companies based in California Transportation companies of the United States Virgin Group
Hyperloop One
[ "Technology", "Engineering" ]
3,469
[ "Vacuum systems", "Hyperloop", "Transport systems" ]
45,404,640
https://en.wikipedia.org/wiki/Blohm%20%26%20Voss%20BV%20237
The Blohm & Voss BV 237 was a German proposed dive bomber with an unusual asymmetric design based on the Blohm & Voss BV 141. Design and development In 1942, the Luftwaffe was interested in replacing the venerable but ageing Junkers Ju 87, and Dr. Richard Vogt's design team at Blohm & Voss began work on project P 177. The dive bomber version would have had a one-man crew with two fixed forward-firing MG 151 cannon and two rear-firing MG 131 machine guns, carrying bombs. A two-seat ground attack version was also proposed with two fixed forward-firing MG 151 cannon and three forward-firing MK 103 cannon, with six bombs. A final B-1 type was to incorporate a Junkers Jumo 004B turbojet engine in a third nacelle slung underneath the wing, between the piston engine and the cockpit. In early 1943 the B&V design, now called the BV 237, was shown to Hitler, and he ordered it into production. However, the order was not carried out. In the summer, Allied bombing raids over Hamburg caused no damage to the Blohm and Voss facilities, but the Ministry of Aviation ordered all developmental work stopped. Work continued later, and it was determined that construction could begin in mid-1945, but plans for a pre-production A-0 series were abandoned, leaving the project at the pre-production stage near the end of 1944, with only a wooden mock-up completed. Variants P.177: Original project which led to the BV 237. BV 237 (single seat): A single-seat Sturzkampfflugzeug (dive bomber) armed with two fixed forward-firing MG 151 cannon and two rear-firing MG 131 machine guns, carrying bombs. BV 237 (2-seat): A two-seat Schlachtflugzeug (ground attack) aircraft armed with two fixed forward-firing MG 151 cannon and three forward-firing MK 103 cannon, with six bombs. BV 237 B-1: A proposed mixed-power version with a podded Junkers Jumo 004B underslung between the BMW 801 nacelle and the fuselage. See also List of German aircraft projects, 1939–45 References BV 237 Asymmetrical aircraft
Blohm & Voss BV 237
[ "Physics" ]
470
[ "Asymmetrical aircraft", "Symmetry", "Asymmetry" ]
45,404,713
https://en.wikipedia.org/wiki/UCL%20Australia
UCL Australia was an international campus of University College London, located on Victoria Square in Adelaide, South Australia. It had three parts: the School of Energy and Resources (SERAus), the International Energy Policy Institute (IEPI) and a branch of UCL's Mullard Space Science Laboratory. UCL Australia described its university community as "welcoming, dynamic and influential." The campus closed in December 2017. History In December 2008, Professor Michael Worton (Academic & International UCL Vice-Provost) said of the establishment of UCL Australia that the university was "committed to working to solve real-world problems and we relish the opportunity to work not only with the South Australian Government but also with Santos and a range of other Australian and international energy companies through our presence in Adelaide." UCL Australia established key corporate partnerships with two major resource and energy companies operating in South Australia: Santos and BHP. Santos' South Australian interests included onshore and offshore oil and gas developments, while BHP Billiton's interest was concentrated on the expansion of the Olympic Dam mine, the world's largest known deposit of uranium. Its campus was established in the Torrens Building on Victoria Square, Adelaide, after the Government of South Australia committed A$4 million to refurbishing the building. The building also houses an international campus of Carnegie Mellon University. In 2010, UCL Australia completed its first full academic year. Agreements between the partners were negotiated by Adelaide lawyer and public servant Pamela Martin. Closure In January 2015, UCL Australia announced that its campus would close within three years but agreed to support currently enrolled students through their degrees and courses. The agreement with the Government of South Australia and Santos expired in 2017. The UCL Adelaide satellite campus closed in December 2017, with academic staff and students transferring to the University of South Australia. UniSA and UCL went on to offer joint Master of Science qualifications in Data Science (international) and Sustainable Energy Systems. Research In 2012, research undertaken at UCL Australia included efforts to address problems in water processing for coal seam gas (coal bed methane), design evaporative cooling systems for buildings using sea water and develop integrated energy systems for sustainable wine production. In 2015, UCL Australia's research was focused on the following areas: Shale and other unconventional gas The low carbon economy Electricity markets, transmission and renewables Adding value to resources Community engagement and governance Environmental and resource monitoring School of Energy & Resources (SERAus) The UCL School of Energy and Resources was established in partnership with the Government of South Australia and oil and gas company Santos. It was established in 2009, with its first full academic year commencing in 2010. Its objective was to develop management capability to help the resources and energy sector meet the challenges of energy security, affordability and regulation, sustainability, environmental impact and climate change.
Research In 2015, research projects undertaken at the School of Energy and Resources included: Reliability and resilience of smart grid technologies and architecture Design and optimization of water distribution networks Monitoring of environmental impacts from dredging and port development International regulation of offshore energy exploration and exploitation Scholarships The School of Energy & Resources offered incentives for student enrolment, initially awarding 10 Santos scholarships to students wishing to undertake a Master of Science in Energy and Resources. The scholarships covered full tuition fees and provided each recipient an additional $25,000 annual stipend. In 2016, scholarships were still being offered, with each scholarship "worth" up to $114,500 over two years, comprising full tuition plus an A$50,000 tax-free stipend. International Energy Policy Institute The International Energy Policy Institute (IEPI) was housed on the Adelaide campus of University College London, Australia. In 2011, UCL signed a five-year $10 million partnership with BHP Billiton to establish the International Energy Policy Institute in Adelaide and an Institute for Sustainable Resources in London. The Institute was created to address challenges of complexity and sensitivity in the energy policy field through intensive research. Stefaan Simons was appointed the inaugural BHP Billiton Chair of Energy Policy. His directorship of the Institute commenced on 1 September 2012. The Institute was seeded by donations from oil and gas company Santos and the resource multi-national BHP. Research Research at IEPI was focused on upstream (exploration and production) issues, acknowledging the Asia Pacific region's influence on global coal, nuclear and gas markets, and its growing uptake of renewable energy. The Institute complements and contrasts with the downstream (consumer) focus of the UCL Energy Institute, which is based in London. Research undertaken at IEPI followed four themes: adding value to energy resources fossil, nuclear and renewable energy futures community engagement climate strategies In 2015, projects at the IEPI included: Adding value to global uranium resources The impact of climate policies on Australia's steel manufacturing sector Energy epidemiology – demand response management Engaging regional communities in climate action plans and sustainable energy futures The prospects for a shale gas revolution in Australia Alternative uses for coal – do they make sense? Staff Notable staff of the IEPI included Emeritus Professor Anthony "Tony" Owen, Visiting Professor Timothy "Tim" Stone CBE (non-executive director of Horizon Nuclear Power) and Honorary Reader James "Jim" Voss (former managing director of Pangea Resources). Grote Lecture series UCL Australia presented a series of lectures, most of which were accessible to the general public, covering a range of subjects and presenters. Governance UCL Australia's governance structure included a management team, an academic board and an advisory board. In April 2016, its Academic Board's membership included representatives from the Australian School of Petroleum at the University of Adelaide, the School of Engineering at the University of South Australia, the Dean of Brunel University London and the CTO of Aveillant in the UK.
Its Advisory Board members included representatives of University College London, BHP Billiton, Santos Ltd, Cheung Kong Infrastructure Holdings, the Department of the Premier and Cabinet (South Australia), South Australia's Economic Development Board (Tanya Monro), the University of South Australia and former Australian politicians Jane Lomax-Smith and Martin Ferguson. Nuclear industrial development In 2011, former Federal minister Alexander Downer addressed UCL students to discuss the nuclear industry. Prior to his presentation he told the media of his support for the establishment of a nuclear waste dump in South Australia, and described a possible future scenario in which a nuclear power plant could power a seawater desalination plant in order to provide water for BHP Billiton's Olympic Dam mine. In 2012, Stefaan Simons was appointed the inaugural Director of the International Energy Policy Institute, and the BHP Billiton Chair of Energy Policy. Simons has acknowledged that asking "whether Australia could, and should, develop a nuclear power service industry based on uranium enrichment and fuel rod manufacture for the global market" is a key theme of the Institute's work. In a 2013 article entitled Is it time for nuclear energy for Australia? Simons proposed that goals of securing energy supply, maintaining economic growth and mitigating impacts of climate change could all be advanced by including nuclear in a "low-emission energy mix" for Australia. On UCL's role in the process he wrote: "University College London's International Energy Policy Institute (IEPI), based at its Australia campus in Adelaide, undertakes economic, regulatory and policy research on how Australia could develop a nuclear energy industry and manage its externalities, including decommissioning and waste." In late 2013, UCL staff and students contributed to conference papers investigating the subject of nuclear submarine development in Australia. Papers entitled What would it take for Australia to develop a nuclear-powered submarine capability? and From subs to Mines: What would it take for Australia to develop a nuclear-powered submarine capability? were presented in Brisbane, Australia and at the AIChE Annual Meeting in San Francisco, USA, respectively. The subject was further explored in 2014 with the presentation of a conference paper entitled Selecting Nuclear-Powered Submarines in Australia: Nuclear Waste Consideration at a Waste Management conference (WM2014) in Phoenix, Arizona. In 2014, former Federal resources and energy minister Martin Ferguson was appointed as chairman of the UCL Australia board. Ferguson is an advocate for nuclear power in Australia. UCL Australia's Chief Executive David Travers said of Ferguson's appointment: "UCL doesn't want to be large in Australia, but we do want to be influential and welcome Martin to the team to help us achieve these goals." Also in 2014, James "Jim" Voss, a senior nuclear engineer and Fellow of the UK Nuclear Institute, was appointed Honorary Reader at UCL Australia's International Energy Policy Institute. He had previously served in the Executive Office of the President of the United States under two Presidents and advised senior government officials in other countries. He is also a former Managing Director of Pangea Resources, the proponent of a late-1990s proposal to establish a nuclear waste dump in Australia.
Research conducted at UCL in 2014 included several studies investigating the prospect of expanding nuclear industrial activity in Australia and South Australia. These included work by staff members Dr Michel Berthelemy and Dr Tim Stone on nuclear fuel cycle strategies and work by UCL students investigating nuclear fuel leasing opportunities. Student research subjects included The legal merits of an Australian Nuclear Fuel Leasing scheme by Owen Sharpe, and The World's first integrated nuclear fuel leasing in South Australia? A proposed business model and its economic appraisal by Iwan Setiyono Ko. After graduating, Sharpe was recruited to South Australia's Department of the Premier and Cabinet as a Senior Policy Officer. In March 2014, briefings on nuclear fuel leasing were given by UCL staff to Parsons Brinkerhoff, Deloitte and Babcock. In May a further briefing on the subject was given by Martin Ferguson at a confidential event. On 4 December 2014, Stefaan Simons and Tim Stone presented a conference paper entitled The international management of spent nuclear fuel at the Nuclear Industries Association Annual Meeting in London, United Kingdom. In April 2015, Visiting Professor Dr Timothy Stone was appointed to the Expert Advisory Committee of the Nuclear Fuel Cycle Royal Commission, an inquiry initiated at the request of the Government of South Australia. UCL Australia established a Nuclear Working Group "to share scientific knowledge in relation to the main issues identified by the Royal Commission; to assist and facilitate the process leading up to informed community decisions". Group members included: Magnus Nyden (Head), Christian Ekberg, Paola Lettieri, Jonathan Mirrlees-Black, Michael Pollitt, Tim Stone, Pam Sykes, Geraldine Thomas, Jim Voss and Max Zanin. See also List of universities in Australia References External links UCL Australia - School of Energy & Resources UCL Australia - International Energy Policy Institute Nuclear power in Australia Petroleum engineering schools Universities in South Australia History of University College London
UCL Australia
[ "Engineering" ]
2,198
[ "Petroleum engineering", "Petroleum engineering schools", "Engineering universities and colleges" ]
45,410,087
https://en.wikipedia.org/wiki/Feedback%20suppressor
A feedback suppressor is an audio signal processing device which is used in the signal path in a live sound reinforcement system to prevent or suppress audio feedback. Digital feedback reduction is the application of digital techniques to sound reinforcement in order to reduce audio feedback and increase headroom. Operation Feedback suppressors use three main methods to control feedback: frequency shifting, adaptive filtering and automatic notch filtering. Frequency shifting is the oldest feedback suppression technique, dating back to the 1960s. This technique works by introducing a varying shift in frequency to the system response. It is typically implemented using a frequency mixer. Only a modest improvement in gain before feedback is achieved, and the technique creates noticeable pitch distortion in music program material. The adaptive filter approach works by modeling the transfer function of the sound reinforcement system and subtracting the reinforced sound from the inputs to the system, in the same way that an echo canceller removes echoes from a communications system. Parametric equalization and notch filters are commonly used by sound engineers to manually control feedback. A feedback suppressor using the automatic notch technique listens for the onset of feedback and automatically inserts a notch filter into the signal path at the frequency of the detected feedback. Feedback suppressors use several techniques for detecting feedback, from non-invasive harmonic analysis of a potential feedback signal to more invasive adaptive filtering and speculative placement of notch filters. The automatic notch technique is the most popular method and has the advantage that the sound is not colored until the system is at risk of feedback. References Sound recording technology Audio engineering
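As a rough illustration of the automatic notch technique described above, the following Python sketch detects a dominant narrow spectral peak and inserts a notch filter at that frequency. The detection threshold, Q factor and function names are illustrative assumptions, not the algorithm of any particular commercial unit.

```python
# Minimal automatic-notch sketch: flag a narrow peak that towers over the
# average spectrum, then notch it out. Assumes a mono float signal x at
# sample rate fs.
import numpy as np
from scipy.signal import iirnotch, lfilter

def detect_feedback(x, fs, threshold_db=20.0):
    """Return a candidate feedback frequency in Hz, or None.

    Feedback tends to appear as a single narrow tone; here we flag the
    strongest bin if it exceeds the mean magnitude by threshold_db.
    """
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    peak = int(np.argmax(spectrum))
    mean_level = np.mean(spectrum) + 1e-12
    if 20.0 * np.log10(spectrum[peak] / mean_level) > threshold_db:
        return freqs[peak]
    return None

def suppress(x, fs, q=30.0):
    """Insert a narrow notch at the detected feedback frequency."""
    f0 = detect_feedback(x, fs)
    if f0 is None or f0 <= 0.0:
        return x  # no feedback detected; leave the signal uncolored
    b, a = iirnotch(f0, q, fs)
    return lfilter(b, a, x)
```

The high Q keeps the notch narrow, which is why the article notes that the sound is not audibly colored until the system is actually at risk of feedback.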
Feedback suppressor
[ "Technology", "Engineering" ]
297
[ "Electrical engineering", "Recording devices", "Audio engineering", "Sound recording technology" ]
30,027,748
https://en.wikipedia.org/wiki/Magic%20pipe
A magic pipe is a surreptitious change to a ship's oily water separator (OWS), or other waste-handling equipment, which allows waste liquids to be discharged in contravention of maritime pollution regulations. Such equipment alterations may allow hundreds of thousands of gallons of contaminated water to be discharged untreated, causing extensive pollution of marine waters. Manipulation techniques The pipe may be improvised aboard ship, from available hoses and pumps, to discharge untreated waste water directly into the sea. As ships are required to keep records of waste and its treatment, magic pipe cases often involve falsification of these records. The pipe is ironically called "magic" because it bypasses the ship's oily water separator and goes directly overboard. Hence, it can make untreated bilge water "magically disappear". Often the pipe can be easily disconnected and stored in a different location aboard the ship so that state and regulatory officers cannot detect its use. The use of magic pipes continues to this day, as do efforts to improve bilge water treatment so as to make the use of magic pipes unnecessary. Legal ramifications In the United States, magic pipe cases often attract large fines for shipping lines, and prison sentences for crew. Cases are often brought to light by whistle blowers, including a 2016 case involving Princess Cruises, which resulted in a record US$40 million fine. In April 2021 a ship engineer on the Zao Galaxy, an oil tanker, was convicted of intentionally dumping oily bilge water in February 2019 and submitting false paperwork in an attempt to conceal the crime. The engineer may receive a substantial prison sentence and fine. The ship operator was fined US$1.65 million and ordered to "implement a comprehensive Environmental Compliance Plan." On older OWS systems, bypass pipes were fitted with regulatory approval. Such approved pipes are no longer fitted on newer vessels. In some serious emergencies ships' crews are allowed to discharge untreated bilge water overboard, but they need to declare these emergencies in the ship's records and oil record book. Unregistered discharges violate the MARPOL 73/78 international pollution control treaty. Motivation and responsibility The problem is worsened by a lack of facilities in developing countries; some port reception facilities do not allow for oily water to be discharged easily and cost-effectively. Crew members, engineers, and ship owners can receive huge fines and even imprisonment if they continue to use a magic pipe to pollute the environment. Ultimately, some engineers use the magic pipe manipulation technique because of: lack of training; lack of shore-side assistance with regard to bilge water treatment; or simple disregard of the ocean environment. Proper process The oily bilge waste comes from a ship's engines and fuel systems. The waste is required to be offloaded when a ship is in port and either burned in an incinerator or taken to a waste management facility. On rare occasions, bilge water can be discharged into the ocean, but only after almost all oil is separated out. See also International Maritime Organization – Regulatory agency Marpol Annex I – Detailed implementation of Marpol 73/78 Oil–water separator (general) Oil content meter Oil discharge monitoring equipment References Deception Shipping and the environment Water pollution Watercraft components Piping
Magic pipe
[ "Chemistry", "Engineering", "Environmental_science" ]
670
[ "Building engineering", "Chemical engineering", "Water pollution", "Mechanical engineering", "Piping" ]
30,028,958
https://en.wikipedia.org/wiki/DPHM-RS
DPHM-RS (Semi-Distributed Physically based Hydrologic Model using Remote Sensing and GIS) is a semi-distributed hydrologic model developed at the University of Alberta, Canada. Model description The semi-distributed DPHM-RS (Semi-Distributed Physically based Hydrologic Model using Remote Sensing and GIS) sub-divides a river basin into a number of sub-basins and computes evapotranspiration, soil moisture and surface runoff at the sub-basin scale using energy and rainfall forcing data. It consists of six basic components: interception of rainfall, evapotranspiration, soil moisture, saturated subsurface flow, surface flow and channel routing, as described in Biftu and Gan. The interception of precipitation from the atmosphere by the canopy is modeled using the Rutter Interception Model. The land surface evaporation and vegetation transpiration are computed separately using the Two Source Model of Shuttleworth and Gurney, which is based on the energy balance above the canopy, within the canopy and at the soil surface. This model solves the non-linear energy-balance equations for the canopy, surface and air temperatures at canopy height, evaporation from the soil surface and transpiration from vegetation. A soil profile of three homogeneous layers (active, transmission and saturated layers) is used to model the soil moisture on the basis of the water balance between layers. The active layer is 15–30 cm thick and simulates the rapid changes of soil moisture content under high-frequency atmospheric forcing. The transmission zone lies between the base of the active layer and the top of the capillary fringe, and so it characterizes the seasonal (rather than transient) changes of soil moisture. In modeling the unsaturated flow component of soil water, the water transport is assumed to be vertical and non-interactive between sub-basins. The lower boundary of the unsaturated zone is the top of the capillary fringe, controlled by the local average groundwater table derived from the catchment average water table and a topographic soil index which includes the spatial variability of the topographic and soil parameters. Starting with an observed value from the wells surrounding the modeled basin, the temporal changes in the average groundwater depth are based on a water balance analysis for the whole catchment, and the rate of change of the average groundwater table is assumed to equal the rate of change of the local water table. After simulating the soil moisture, the saturation excess and Hortonian infiltration excess for vegetated and bare soil are computed to generate the surface runoff for each sub-basin. Philip's equation is used to compute the infiltration capacity of the soil, and the surface runoff is distributed temporally using a time lag response function obtained from a reference rainfall excess of 1 cm depth applied to each grid cell within the sub-basin for one time step. Then, for each grid cell, which has the resolution of the digital elevation model (DEM) used, the flow is routed according to the kinematic wave equation from cell to cell based on eight possible flow directions until the total runoff water for the sub-basin is completely routed. The resulting runoff becomes a lateral inflow to the stream channel within the sub-basin, and these flows are routed through the drainage network by the Muskingum-Cunge routing method, whose variable parameters are evaluated by an iterative four-point approach.
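Since the model derives Hortonian (infiltration-excess) runoff from Philip's equation, a short sketch of that step may be useful. This is a minimal illustration assuming the common two-term form of Philip's equation, with sorptivity and conductivity values that are illustrative, not calibrated DPHM-RS parameters.

```python
# Philip two-term infiltration capacity: f(t) = (1/2) S t^(-1/2) + K,
# valid for t > 0; infiltration-excess runoff is rainfall above capacity.
import numpy as np

def philip_infiltration_capacity(t, sorptivity, k_sat):
    """Infiltration capacity f(t) in the same units as k_sat (e.g. cm/h)."""
    t = np.asarray(t, dtype=float)
    return 0.5 * sorptivity / np.sqrt(t) + k_sat

def hortonian_excess(rain_rate, t, sorptivity, k_sat):
    """Infiltration-excess (Hortonian) runoff rate: rainfall above capacity."""
    capacity = philip_infiltration_capacity(t, sorptivity, k_sat)
    return np.maximum(0.0, rain_rate - capacity)

# Example: a 3 cm/h storm on a soil with S = 1.2 cm/h^0.5 and K = 0.4 cm/h;
# early in the event the soil absorbs more, so excess grows with time.
hours = np.array([0.25, 0.5, 1.0, 2.0])
print(hortonian_excess(3.0, hours, sorptivity=1.2, k_sat=0.4))
```

In the full model this excess would then be lagged by the sub-basin response function and routed cell to cell, as described above.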
See also Environmental engineering is a broad category hydrogeology fits into, Groundwater energy balance: groundwater flow equations based on the energy balance. Fault zone hydrogeology: field specifically analyzing hydrogeology in fault zones. Hydrology (agriculture) Isotope hydrology is often used to understand sources and travel times in groundwater systems. SahysMod is a spatial agro-hydro-salinity model with groundwater flow in a polygonal network. Water cycle, hydrosphere and water resources are larger concepts which hydrogeology is a part of. References Hydrology
DPHM-RS
[ "Chemistry", "Engineering", "Environmental_science" ]
788
[ "Hydrology", "Environmental engineering" ]
30,030,151
https://en.wikipedia.org/wiki/Flight%20dynamics
Flight dynamics, in aviation and spacecraft applications, is the study of the performance, stability, and control of vehicles flying through the air or in outer space. It is concerned with how forces acting on the vehicle determine its velocity and attitude with respect to time. For a fixed-wing aircraft, its changing orientation with respect to the local air flow is represented by two critical angles, the angle of attack of the wing ("alpha") and the angle of attack of the vertical tail, known as the sideslip angle ("beta"). A sideslip angle will arise if an aircraft yaws about its centre of gravity or if the aircraft sideslips bodily, i.e. the centre of gravity moves sideways. These angles are important because they are the principal source of changes in the aerodynamic forces and moments applied to the aircraft. Spacecraft flight dynamics involve three main forces: propulsive (rocket engine), gravitational, and atmospheric resistance. Propulsive force and atmospheric resistance have significantly less influence over a given spacecraft than gravitational forces do. Aircraft Flight dynamics is the science of air-vehicle orientation and control in three dimensions. The critical flight dynamics parameters are the angles of rotation about the aircraft's three principal axes through its center of gravity, known as roll, pitch and yaw. Aircraft engineers develop control systems for a vehicle's orientation (attitude) about its center of gravity. The control systems include actuators, which exert forces in various directions and generate rotational forces or moments about the center of gravity of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw. For example, a pitching moment comes from a vertical force applied at a distance forward or aft of the center of gravity of the aircraft, causing the aircraft to pitch up or down. Roll, pitch and yaw refer, in this context, to rotations about the respective axes starting from a defined equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle, equivalent to a level heeling angle on a ship. Yaw is known as "heading". A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is streamlined from nose to tail to reduce drag, making it advantageous to keep the sideslip angle near zero, though aircraft are deliberately "side-slipped" when landing in a cross-wind, as explained in slip (aerodynamics). Spacecraft and satellites The forces acting on space vehicles are of three types: propulsive force (usually provided by the vehicle's engine thrust); gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and drag (when flying in the atmosphere of the Earth or another body, such as Mars or Venus). The vehicle's attitude must be controlled during powered atmospheric flight because of its effect on the aerodynamic and propulsive forces. There are other reasons, unrelated to flight dynamics, for controlling the vehicle's attitude in non-powered flight (e.g., thermal control, solar power generation, communications, or astronomical observation).
The flight dynamics of spacecraft differ from those of aircraft in that the aerodynamic forces are of very small, or vanishingly small, effect for most of the vehicle's flight, and cannot be used for attitude control during that time. Also, most of a spacecraft's flight time is usually unpowered, leaving gravity as the dominant force. See also References Aerospace engineering Aerodynamics Spaceflight concepts
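Because the article describes attitude as rotations about three principal axes, a short sketch of how roll, pitch and yaw compose into a single rotation matrix may be helpful. This is a minimal illustration using the common aerospace Z-Y-X (yaw, pitch, roll) Euler sequence; the function name and any angle values are illustrative, not taken from a specific flight-dynamics library.

```python
# Build a body-to-Earth direction cosine matrix from yaw, pitch, roll
# (radians), composed in the Z-Y-X order common in aerospace work.
import numpy as np

def body_to_earth(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])  # yaw about z
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])  # pitch about y
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])  # roll about x
    return rz @ ry @ rx

# Example: 10 degrees of each angle applied to a body-frame x-axis vector.
v_body = np.array([1.0, 0.0, 0.0])
print(body_to_earth(*np.radians([10.0, 10.0, 10.0])) @ v_body)
```

The matrix maps body-frame vectors into the Earth frame; its transpose performs the inverse mapping.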
Flight dynamics
[ "Chemistry", "Engineering" ]
742
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
30,031,074
https://en.wikipedia.org/wiki/Lovelock%20theory%20of%20gravity
In theoretical physics, Lovelock's theory of gravity (often referred to as Lovelock gravity) is a generalization of Einstein's theory of general relativity introduced by David Lovelock in 1971. It is the most general metric theory of gravity yielding conserved second order equations of motion in an arbitrary number of spacetime dimensions D. In this sense, Lovelock's theory is the natural generalization of Einstein's general relativity to higher dimensions. In three and four dimensions (D = 3, 4), Lovelock's theory coincides with Einstein's theory, but in higher dimensions the theories are different. In fact, for D > 4 Einstein gravity can be thought of as a particular case of Lovelock gravity since the Einstein–Hilbert action is one of several terms that constitute the Lovelock action. Lagrangian density The Lagrangian of the theory is given by a sum of dimensionally extended Euler densities, and it can be written as follows: $\mathcal{L} = \sqrt{-g}\, \sum_{n=0}^{t} \alpha_n\, \mathcal{R}^{(n)}$, with $\mathcal{R}^{(n)} = \frac{1}{2^n}\, \delta_{\mu_1 \nu_1 \cdots \mu_n \nu_n}^{\alpha_1 \beta_1 \cdots \alpha_n \beta_n}\, \prod_{r=1}^{n} R^{\mu_r \nu_r}{}_{\alpha_r \beta_r}$, where $R^{\mu\nu}{}_{\alpha\beta}$ represents the Riemann tensor, and where the generalized Kronecker delta $\delta$ is defined as the antisymmetric product $\delta^{\mu_1 \cdots \mu_p}_{\nu_1 \cdots \nu_p} = p!\, \delta^{\mu_1}_{[\nu_1} \cdots \delta^{\mu_p}_{\nu_p]}$. Each term $\mathcal{R}^{(n)}$ in $\mathcal{L}$ corresponds to the dimensional extension of the Euler density in 2n dimensions, so that these only contribute to the equations of motion for n < D/2. Consequently, without loss of generality, t in the equation above can be taken to be $D/2$ for even dimensions and $(D-1)/2$ for odd dimensions. Coupling constants The coupling constants αn in the Lagrangian have dimensions of $[\text{length}]^{2n-D}$, although it is usual to normalize the Lagrangian density in units of the Planck scale. Expanding the product in $\mathcal{L}$, the Lovelock Lagrangian takes the form $\mathcal{L} = \sqrt{-g}\, \big( \alpha_0 + \alpha_1 R + \alpha_2 \big( R^2 + R_{\mu\nu\alpha\beta} R^{\mu\nu\alpha\beta} - 4 R_{\mu\nu} R^{\mu\nu} \big) + \alpha_3\, \mathcal{O}(R^3) \big)$, where one sees that coupling α0 corresponds to the cosmological constant Λ, while αn with n ≥ 2 are coupling constants of additional terms that represent ultraviolet corrections to Einstein theory, involving higher order contractions of the Riemann tensor Rμναβ. In particular, the second order term is precisely the quadratic Gauss–Bonnet term, which is the dimensionally extended version of the four-dimensional Euler density. Equations of motion By noting that the term with 2n = D is a topological constant whose variation vanishes identically, it can be dropped from the Lagrangian, and varying the remaining Lovelock Lagrangian with respect to the metric yields the equations of motion $\sum_{n=0}^{t} \alpha_n\, G^{(n)}_{\mu\nu} = 0$, where each Lovelock tensor $G^{(n)}_{\mu\nu}$ is built from the same antisymmetrized products of n Riemann tensors, is divergence-free, and reduces to the Einstein tensor for n = 1. Other contexts Because the Lovelock action contains, among others, the quadratic Gauss–Bonnet term (i.e. the four-dimensional Euler characteristic extended to D dimensions), it is usually said that Lovelock theory resembles string-theory-inspired models of gravity. This is because a quadratic term is present in the low energy effective action of heterotic string theory, and it also appears in six-dimensional Calabi–Yau compactifications of M-theory. In the mid-1980s, a decade after Lovelock proposed his generalization of the Einstein tensor, physicists began to discuss the quadratic Gauss–Bonnet term within the context of string theory, with particular attention to its property of being ghost-free in Minkowski space. The theory is known to be free of ghosts about other exact backgrounds as well, e.g. about one of the branches of the spherically symmetric solution found by Boulware and Deser in 1985. In general, Lovelock's theory represents a very interesting scenario to study how the physics of gravity is corrected at short distance due to the presence of higher order curvature terms in the action, and in the mid-2000s the theory was considered a testing ground to investigate the effects of introducing higher-curvature terms in the context of the AdS/CFT correspondence.
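For concreteness, the contribution of the quadratic Gauss–Bonnet term to the field equations, often called the Lanczos tensor, can be written out explicitly. This is a standard result quoted here as an illustration; it is not recovered from this article's stripped formulas.

```latex
% Variation of the Gauss--Bonnet density \mathcal{G} gives the Lanczos tensor
% H_{\mu\nu}, which vanishes identically in D = 4 (where \mathcal{G} is topological).
\begin{aligned}
H_{\mu\nu} &= 2\left( R R_{\mu\nu} - 2 R_{\mu\alpha} R^{\alpha}{}_{\nu}
  - 2 R^{\alpha\beta} R_{\mu\alpha\nu\beta}
  + R_{\mu}{}^{\alpha\beta\gamma} R_{\nu\alpha\beta\gamma} \right)
  - \tfrac{1}{2}\, g_{\mu\nu}\, \mathcal{G}, \\
\mathcal{G} &= R^{2} - 4\, R_{\mu\nu} R^{\mu\nu}
  + R_{\mu\nu\alpha\beta} R^{\mu\nu\alpha\beta}.
\end{aligned}
```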
See also Lovelock's theorem f(R) gravity Gauss–Bonnet gravity Curtright field Horndeski's theory Notes References Theories of gravity String theory Spacetime
Lovelock theory of gravity
[ "Physics", "Astronomy", "Mathematics" ]
783
[ "Astronomical hypotheses", "Vector spaces", "Theoretical physics", "Theory of relativity", "Space (mathematics)", "Theories of gravity", "Spacetime", "String theory" ]
31,293,970
https://en.wikipedia.org/wiki/Bean%27s%20critical%20state%20model
Bean's critical state model, introduced by C. P. Bean in 1962, gives a macroscopic explanation of the irreversible magnetization behavior (hysteresis) of hard Type-II superconductors. Assumptions Hard superconductors often exhibit hysteresis in magnetization measurements. C. P. Bean postulated for the Shubnikov phase an extraordinary shielding process due to the microscopic structure of the materials. He assumed lossless transport with a critical current density Jc(B) (Jc(B→0) = const. and Jc(B→∞) = 0). An external magnetic field is shielded in the Meissner phase (H < Hc1) in the same way as in a soft superconductor. In the Shubnikov phase (Hc1 < H < Hc2), the critical current flows below the surface within a depth necessary to reduce the field in the inside of the superconductor to Hc1. Explanation of the irreversible magnetization To understand the origin of the irreversible magnetization, assume a hollow cylinder in an external magnetic field parallel to the cylinder axis. In the Meissner phase, a screening current flows within the London penetration depth. Exceeding Hc1, vortices start to penetrate into the superconductor. These vortices are pinned at the surface (Bean–Livingston barrier). In the region below the surface which is penetrated by vortices, a current flows with density Jc. At low fields (H < H0), the vortices do not reach the inner surface of the hollow cylinder and the interior stays field-free. For H > H0, the vortices penetrate the whole cylinder and a magnetic field appears in the interior, which then increases with increasing external field. Now consider what happens if the external field is then decreased: due to induction, an opposed critical current is generated at the outer surface of the cylinder, keeping the magnetic field inside constant for H0 < H < H1. For H > H1, the opposed critical current penetrates the whole cylinder and the inner magnetic field starts to decrease with decreasing external field. When the external field vanishes, a remnant internal magnetic field occurs (comparable to the remanent magnetization of a ferromagnet). With an opposed external field H0, the internal magnetic field finally reaches 0 T (H0 corresponds to the coercive field of a ferromagnet). Extensions Bean assumed a constant critical current density, meaning that H << Hc2. Kim et al. extended the model by assuming 1/Jc(H) proportional to H, yielding excellent agreement between theory and measurements on Nb3Sn tubes. Different geometries have to be considered, as the irreversible magnetization depends on the sample geometry. References Superconductivity Magnetic hysteresis
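A small numerical sketch may make the critical-state assumption concrete. For a zero-field-cooled slab of half-width a in an increasing parallel field, the Bean model gives an internal field profile that decays linearly into the sample with slope μ0Jc; the geometry and parameter values below are illustrative, not from Bean's paper.

```python
# Bean-model field profile in a superconducting slab of half-width a:
# B(x) = max(0, B_applied - mu0 * Jc * (a - |x|)) for a zero-field-cooled
# slab in an increasing parallel field (constant Jc assumed).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def bean_profile(x, b_applied, jc, a):
    """Internal field B(x), |x| <= a, under the constant-Jc Bean assumption."""
    x = np.asarray(x, dtype=float)
    return np.maximum(0.0, b_applied - MU0 * jc * (a - np.abs(x)))

# Full-penetration field B* = mu0 * Jc * a: above it the whole slab carries Jc.
jc, a = 1e9, 1e-3              # A/m^2 and m, illustrative hard-superconductor values
print("B* =", MU0 * jc * a)    # about 1.26 T
print(bean_profile(np.linspace(-a, a, 5), b_applied=0.8, jc=jc, a=a))
```

Sweeping the applied field up and down and keeping track of the frozen-in linear profiles reproduces the hysteresis loop that the model was designed to explain.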
Bean's critical state model
[ "Physics", "Materials_science", "Engineering" ]
602
[ "Physical phenomena", "Physical quantities", "Superconductivity", "Materials science", "Magnetic hysteresis", "Condensed matter physics", "Hysteresis", "Electrical resistance and conductance" ]
31,294,123
https://en.wikipedia.org/wiki/Psychopharmacology%20revolution
The psychopharmacology revolution covers the introduction of various psychiatric drugs into clinical practice as well as their continued development. Although not exclusively limited to the 1950s, the literature tends to suggest that this decade was a particularly fruitful time for CNS drug discovery, and it has been referred to as a "golden era". Chlorpromazine The history of chlorpromazine can be traced back to the work of BASF, which was creating dyes around the turn of the 20th century (cf. methylene blue). It was found that attaching basic side chains to the tricyclic phenothiazine residue resulted in compounds that functioned as reliable antihistamines. Henri Laborit first used chlorpromazine to treat the anxiety of patients prior to surgery. He noted the so-called "indifference" that this agent causes and suggested that it be used on agitated psychotic patients. Chlorpromazine has H1, M1, and α1 receptor antagonist activity. This causes sedation and anticholinergic effects, as well as orthostatic hypotension. It also functions as a blocker of D2 receptors, although it is much weaker and less selective than haloperidol in this respect. Blockade of the D2 receptors is thought to underlie the antipsychotic effect of the typical antipsychotics. However, in the case of atypicals such as clozapine and risperidone, blockade of 5HT2A receptors is thought to also account for an important part of their pharmacology. Minor manipulations of the chemical structure of chlorpromazine were used to create novel antipsychotic agents such as thioridazine and trifluoperazine. Imipramine Minor chemical manipulations of the structure of chlorpromazine led to the first tricyclic antidepressant (TCA), imipramine (Tofranil), whose structure is iminodibenzyl (dibenzazepine) based. Imipramine was first used on agitated psychotic patients, but it was shown that in the majority of cases their condition did not improve and actually worsened slightly. However, it was noted that a few of the patients who were depressed became more animated, so its use in the treatment of depression became apparent. Due to the chemical similarity of imipramine to chlorpromazine, this agent also functions as an H1, M1, and α1 receptor antagonist. Imipramine is also known to function as a fast sodium channel blocker, which is said to account for the cardiotoxicity of this agent. The collective effect of imipramine on these receptors is not thought to contribute to its therapeutic activity in the treatment of depression, although it is believed to account for most of its side effects. The usefulness of the TCAs in treating depression is thought to stem from their ability to inhibit the uptake of the neurotransmitters serotonin (5-HT) and noradrenaline (NA). It was proposed that designing agents that were more selective for 5-HT and/or NA would lower the incidence of side effects. This in turn led to the development and discovery of the SSRIs and SNRIs. Iproniazid The so-called golden era also covers the discovery of the first monoamine oxidase inhibitor, iproniazid (Marsilid), which is hydrazine based. Like imipramine, this also was used in the treatment of depression. Iproniazid was the result of a failed medicinal chemistry attempt to improve on the anti-tubercular activity of isoniazid. It was first given to patients with tuberculosis, in whom a wholly unexpected improvement in mood was noticed. 
Nathan Kline coined the term "psychic energizer" to account for this effect and proposed that such agents be used in the treatment of depression. Iproniazid is no longer used because it caused an unacceptable incidence of jaundice. Nevertheless, related agents such as phenelzine and isocarboxazid are still on the market. In addition, tranylcypromine, a non-hydrazine irreversible inhibitor of MAO, is also available. A limitation of these agents is their potential to cause hypertensive reactions, so their safety is not guaranteed. However, it seems that the selective inhibitor of the B isoform of MAO, selegiline, is much less likely to cause hypertension. Theory of mood disorders The investigations into the mechanism of activity of these agents that followed their discovery led to the proposal of the "chemical imbalance" of neurotransmitters theory of mood disorders, which is supposed to account for the pathophysiology and/or pathogenesis of these states. It follows that these so-called "imbalances" can be corrected by the judicious application of appropriately selected psychotropic medication(s). An excess of dopamine was cited as the cause of schizophrenia, whereas deficiencies of noradrenaline and serotonin were cited as the cause of depression. The discovery of reserpine was also of great significance to the development of the monoamine theory of depression. Prior to the 1950s Prior to the introduction of these agents, the management of mental disorders in America relied mainly on "psychoanalytic" methods said to derive from a "Freudian" understanding of the subject area. Apparently, there was great resistance to the use of medicine in the treatment of mental disorders prior to the 1950s. It is, however, known that various other agents including amphetamine and opium have documented use in the history of treating depression, and that barbiturates, lithium salts, bromide salts, various anticholinergic alkaloids, as well as opium, were all used in the history of the treatment of schizophrenia. References External links DRUGS OF THE PSYCHOPHARMACOLOGICAL REVOLUTION IN CLINICAL PSYCHIATRY Psychopharmacology
Psychopharmacology revolution
[ "Chemistry" ]
1,248
[ "Psychopharmacology", "Pharmacology" ]
31,294,625
https://en.wikipedia.org/wiki/Median%20center%20of%20the%20United%20States%20population
The median center of U.S. population is determined by the United States Census Bureau from the results of each census. The Bureau defines it as the point at which half the population lives to the north and half to the south, and half to the east and half to the west. As of the 2020 U.S. census, this places roughly 165.7 million Americans on each side of a longitude line passing through a location in Gibson County, Indiana, and the same number on each side of a latitude line through the same point. During the 20th century the median center of U.S. population moved roughly southwest, from a location in Randolph County, Indiana to a location in Daviess County, Indiana. The majority of this southwest shift happened in the second half of the century, as the center shifted within a narrow circular band between 1900 and 1950 – all within roughly of the 1900 starting point in Randolph County. See also Mean center of the United States population Center of population Geographic center of the United States Geographic center of the contiguous United States References Demographic history of the United States Center of population
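Computationally, a median center as defined above is just a pair of weighted medians. The sketch below (illustrative only; the place coordinates and populations are made-up values, not census figures) shows the calculation.

```python
import numpy as np

# Median centre of a population: the meridian and parallel that each split
# the total population in half, i.e. population-weighted medians of the
# longitudes and latitudes. Data below are invented for illustration.

def weighted_median(values, weights):
    order = np.argsort(values)
    v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

lons = [-87.6, -95.4, -74.0, -118.2]   # hypothetical place longitudes
lats = [41.9, 29.8, 40.7, 34.1]        # hypothetical place latitudes
pops = [2.7, 2.3, 8.5, 3.9]            # populations in millions (hypothetical)

print("median centre:", weighted_median(lons, pops), weighted_median(lats, pops))
```

Unlike the mean center, each coordinate here depends only on the ordering of places along its axis, which is why the median center drifts far less between censuses.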
Median center of the United States population
[ "Physics", "Mathematics" ]
197
[ "Point (geometry)", "Geometric centers", "Center of population", "Symmetry" ]
31,298,171
https://en.wikipedia.org/wiki/NINA%20%28accelerator%29
NINA (Northern Institute's Nuclear Accelerator) was a particle accelerator located at Daresbury Laboratory, UK, that was used for particle physics and as a source of synchrotron radiation. Introduction Given UK government approval in 1962, NINA was a 70.19 m, 4 GeV electron synchrotron built in 1964 at the Daresbury Laboratory site in Cheshire, England to study particle physics. This was the first facility at this site and gave birth to the second UK national laboratory (after Rutherford Appleton Laboratory). NINA was first brought into operation in December 1966, when an energy of 4.5 GeV was achieved. It started regular running in January 1967, for investigations into the targeting and placement of external photon beamlines. In February, the first high energy photon beam was brought into the Manchester experimental area. The Daresbury and Liverpool experimental areas had beams by March/April 1967. As at other particle physics accelerators, scientists had been using the synchrotron radiation produced by NINA for its unique properties. By 1975, over 50 scientists with affiliations to more than 16 institutions were at work on NINA exploiting this by-product of the particle accelerator. This led to the conversion of the NINA ring into a dedicated source of synchrotron radiation at a cost of £3M at 1974 prices. The particle physics programme was to be exported to CERN, where a 400 GeV machine was at the time proposed. Whilst the majority of NINA was reused onsite for the new Synchrotron Radiation Source (SRS), some parts were repurposed at other facilities, including the 90 ton choke which became a key part of the operation of the ISIS neutron source at the Rutherford Appleton Laboratory. Specifications NINA's design energy was 4 GeV and was reached in 1966. By the time NINA was closed it had been upgraded to 6 GeV. The synchrotron contained 40 electromagnets, and initial acceleration was performed by a 40 MeV linac in a tunnel outside the ring. References Particle accelerators Particle physics facilities Particle experiments Research institutes in Cheshire Synchrotron radiation facilities
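The quoted machine parameters allow a quick back-of-envelope check (a sketch with assumptions: it treats the stated 70.19 m as the ring circumference, which the text does not say explicitly, and ignores the fact that electrons are only approximately at the speed of light at injection).

```python
# Back-of-envelope NINA parameters. All derived numbers are rough
# illustrations; 70.19 m is ASSUMED to be the ring circumference.

C = 70.19            # ring circumference, m (assumed)
c = 2.998e8          # speed of light, m/s
m_e = 0.511e-3       # electron rest energy, GeV

f_rev = c / C                 # revolution frequency for v ~ c
gamma_inj = 0.040 / m_e       # Lorentz factor at 40 MeV (linac injection)
gamma_top = 4.0 / m_e         # Lorentz factor at the 4 GeV design energy

print(f"revolution frequency ~ {f_rev / 1e6:.2f} MHz")
print(f"gamma ~ {gamma_inj:.0f} at injection, ~ {gamma_top:.0f} at 4 GeV")
```

The large Lorentz factor at top energy (several thousand) is what made NINA such a bright by-product source of synchrotron radiation.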
NINA (accelerator)
[ "Materials_science" ]
423
[ "Materials testing", "Synchrotron radiation facilities" ]
31,300,435
https://en.wikipedia.org/wiki/Sliding%20criterion%20%28geotechnical%20engineering%29
The sliding criterion (discontinuity) is a tool for easily estimating the shear strength properties of a discontinuity in a rock mass, based on visual and tactile (i.e. by feeling) characterization of the discontinuity. The shear strength of a discontinuity is important in, for example, tunnel, foundation, or slope engineering, and the stability of natural slopes is also often governed by the shear strength along discontinuities. The sliding-angle is based on the ease with which a block of rock material can move over a discontinuity and hence is comparable to the tilt-angle as determined with the tilt test, but on a larger scale. The sliding criterion has been developed for stresses that would occur in slopes between , hence, in the order of maximum . The sliding criterion is based on back analyses of slope instability and earlier work of ISRM and Laubscher. The sliding criterion is part of the Slope Stability Probability Classification (SSPC) system for slope stability analyses. Sliding-angle The sliding-angle (in degrees) is calculated from the following four parameters: Rl = roughness large scale; Rs = roughness small scale; Im = infill material in the discontinuity; Ka = karst, i.e. the presence of karst (solution) features along the discontinuity. (The values for the parameters are listed in table 1 and explained below.) Roughness large scale (Rl) The roughness large scale (Rl) is based on visual comparison of the trace (with a length of about 1 m) or surface (with an area of about 1 x 1 m2) of a discontinuity with the example graphs in figure 1. This results in a descriptive term: wavy, slightly wavy, curved, slightly curved, or straight. The corresponding factor for Rl is listed in table 1. The roughness large scale (Rl) contributes to the friction along the discontinuity only when the walls on both sides of the discontinuity are fitting, i.e. the asperities on both discontinuity walls match. If the discontinuity is non-fitting, the factor Rl = 0.75. Roughness small scale (Rs) The roughness small scale (Rs) is established visually and tactilely (by feeling). The first term (rough, smooth, or polished) is established by feeling the surface of the discontinuity: rough hurts when the fingers are moved over the surface with some (little) force; smooth offers some resistance to the fingers; polished feels about as smooth as the surface of glass. The second term is established visually. The trace (with a length of about 0.2 m) or surface (with an area of about 0.2 x 0.2 m2) of a discontinuity is compared with the example graphs in figure 2; this gives stepped, undulating, or planar. The tactile and visual terms together give a combined term, and the corresponding factor is listed in table 1. The visual part of the roughness small scale (Rs) contributes to the friction along the discontinuity only if the walls on both sides of the discontinuity are fitting, i.e. the asperities on both discontinuity walls match. If the discontinuity is non-fitting, the visual part of the roughness small scale (Rs) should be taken as planar for the calculation of the sliding-angle, and hence the roughness small scale (Rs) can only be rough planar, smooth planar, or polished planar. Infill in discontinuity (Im) Infill material in a discontinuity often has a marked influence on the shear characteristics. The different options for infill material are listed in table 1, and a short explanation of each option follows below. 
Cemented discontinuity or cemented infill A cemented discontinuity or a discontinuity with cemented infill has a higher shear strength than a non-cemented discontinuity if the cement or cemented infill is bonded to both discontinuity walls. Note that if the cement or cement bonds are stronger than the surrounding intact rock, the discontinuity ceases to be a mechanical plane of weakness and the 'sliding-angle' has no validity. No infill No infill describes a discontinuity that may have coated walls but no other infill. Non-softening infill Non-softening infill material is material that does not change its shear characteristics under the influence of water or of shear displacement. The material may break but no greasing effect will occur. The material particles can roll, but this is considered to be of minor influence because, after small displacements, the material particles generally will still be very angular. This is further sub-divided into coarse, medium, and fine, according to the size of the grains in the infill material or the size of the grains or minerals in the discontinuity wall. The larger of the two should be used for the description. The thickness of the infill can be very thin, sometimes not more than a dust coating. Softening infill Softening infill material will, under the influence of water or displacement, attain a lower shear strength and will act as a lubricating agent. This is further sub-divided into coarse, medium, and fine, according to the size of the grains in the infill material or the size of the grains or minerals in the discontinuity wall. The larger of the two should be used for the description. The thickness of the infill can be very thin, sometimes not more than a dust coating. Gouge infill Gouge infill means a relatively thick and continuous layer of infill material, mainly consisting of clay but possibly containing rock fragments. The clay surrounds the rock fragments completely or partly, so that these are not in contact with both discontinuity walls. A sub-division is made between infill thinner and thicker than the amplitude of the roughness of the discontinuity walls. If the thickness is less than the amplitude of the roughness, the shear strength will be influenced by the wall material and the discontinuity walls will be in contact after a certain displacement. If the infill is thicker than the amplitude, the friction of the discontinuity is fully governed by the infill. Flowing material infill Very weak and uncompacted infill flows out of the discontinuity under its own weight or as a consequence of a very small trigger force (such as water pressure, or vibrations due to traffic or the excavation process). Karst (Ka) The presence of solution (karst) features along the discontinuity. See also Discontinuity (Geotechnical engineering) Shear strength (Discontinuity) Slope stability probability classification (SSPC) Tilt test (Geotechnical engineering) References Further reading Landslide analysis, prevention and mitigation Mining engineering Rock mass classification Rocks Tunnel construction
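The parameter-based structure lends itself to a small table-driven implementation. The sketch below is illustrative only: the numeric factors are placeholders standing in for table 1, which is not reproduced in the text (only Rl = 0.75 for non-fitting discontinuities is stated there), and the final scaling of the combined factor to a sliding-angle in degrees follows the article's relation, which is likewise not reproduced here.

```python
# Illustrative sketch of the sliding criterion's lookup-table structure.
# All numeric factors are PLACEHOLDERS, not the real table 1 values.

RL = {"wavy": 1.00, "slightly wavy": 0.95, "curved": 0.85,
      "slightly curved": 0.80, "straight": 0.75}          # hypothetical
RS = {"rough stepped": 0.95, "smooth undulating": 0.70,
      "polished planar": 0.50}                            # hypothetical
IM = {"no infill": 1.00, "softening fine": 0.55}          # hypothetical
KA = {"none": 1.00, "karst present": 0.90}                # hypothetical

def condition_factor(rl, rs, im, ka, fitting=True):
    """Product of the four discontinuity factors; non-fitting walls force
    Rl to 0.75, as stated in the article."""
    rl_val = RL[rl] if fitting else 0.75
    return rl_val * RS[rs] * IM[im] * KA[ka]

tc = condition_factor("slightly wavy", "rough stepped", "no infill", "none")
print(f"combined factor = {tc:.3f}")  # then scaled to a sliding-angle in degrees
```

The point of the structure is that each field observation maps to one multiplicative factor, so a worse condition on any single parameter directly lowers the estimated sliding-angle.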
Sliding criterion (geotechnical engineering)
[ "Physics", "Engineering", "Environmental_science" ]
1,486
[ "Mining engineering", " prevention and mitigation", "Environmental soil science", "Physical objects", "Rocks", "Matter", "Landslide analysis" ]
42,117,977
https://en.wikipedia.org/wiki/TDR%20moisture%20sensor
A spatial TDR moisture sensor employs time-domain reflectometry (TDR) to measure moisture content indirectly, based on its correlation with the electric and dielectric properties of materials such as soil, agrarian products, snow, wood or concrete. Measurement usually involves inserting a sensor into the substance to be tested and then applying either Standard Waveform Analysis, to determine the average moisture content along the sensor, or Profile Analysis, to provide moisture content at discrete points along the sensor. Spatial localization can be achieved by appropriate installation of several sensors. Standard waveform analysis In the waveform analysis a sensor (usually a probe) is placed in the material to be tested. The sensor contains a waveguide consisting of two, three, or more parallel wires, which is connected via a coaxial cable to a voltage pulse generator that sends precisely defined voltage pulses into the sensor. As the pulse travels along the waveguide, its progress varies depending on the moisture content of the material being examined. When the pulse reaches the end of the waveguide it is reflected. This reflection is visualised in a TDR waveform using an oscilloscope connected to the sensor. The rate of travel of the pulse in the probe is measured and related to moisture content, with slower travel indicating an increase in moisture. By measuring the time from the initial pulse until the reflection is received, the average moisture content and relative permittivity of the sample can be calculated by using an equivalent circuit as a reference. Standard waveform analysis can be used either manually (hand-held instruments) or automatically for monitoring moisture content in several areas such as hydrology, agriculture and construction. Profile analysis Standard Waveform Analysis is unable to provide a spatial moisture profile. More sophisticated methods such as Profile Analysis are required. This method uses a variety of techniques to add spatial information to the measurement results. Reconstruction algorithm: One approach is to model the pulse propagation in the waveguide and calibrate the model against laboratory measurement. By comparing real sample measurements to the model, the moisture distribution can be inferred. The usefulness of this method is limited by the complexity of the algorithms, the limited amplitude resolution and interference in the TDR equipment. Alteration of cross section: Altering the cross section of the waveguide alters the pulse reflections and creates artificial reflections at each alteration of the cross section. This enables segmentation of the waveguide by applying a different cross section to each segment. However, the difficulty of distinguishing the artificial pulse reflections from real variations prevents the use of this technique for automated data analysis. Subdivision: The waveguide is subdivided into segments by using PIN diodes. Each segment provides its own pulse reflection, thus showing the moisture content in that segment alone. This enables the moisture content to be mapped to the individual segments and therefore shows the spatial moisture distribution. As the length of the waveguide increases the reflections become weaker and eventually disappear. This limits the use of this method, as do the influence of the diode circuit on the signal and the manufacturing costs associated with the complexity of the waveguide compared with other methods. Length variation: This method uses several waveguides with different lengths mounted parallel to each other. 
As a separate waveguide must be connected for each area, the costs of this method are very high. Profile analysis allows fully automatic measurement and monitoring of spatial moisture content, and thus leak monitoring of building foundations, landfill barriers and geological repositories in salt mines. See also Time-domain reflectometer Transmission line References Further reading Cataldo, Andrea / De Benedetto, Egidio / Cannazza, Giuseppe (2011). Broadband Reflectometry for Enhanced Diagnostics and Monitoring Applications. Springer Press. External links Electronic engineering Hydrology Measurement Semiconductor analysis Soil physics Water
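The waveform-analysis step described above reduces to converting a measured two-way travel time into an apparent permittivity and then into a volumetric water content. The sketch below is illustrative: the probe length and travel time are made-up values, and the widely used Topp et al. (1980) polynomial for mineral soils is a common calibration that the article itself does not name.

```python
# Convert a TDR two-way travel time into volumetric water content.
# Illustrative sketch; probe length and travel time are hypothetical,
# and the Topp et al. (1980) calibration is an assumed choice.

C = 2.998e8  # speed of light in vacuum, m/s

def apparent_permittivity(t_two_way_s, probe_len_m):
    """Relative permittivity from the two-way travel time along the probe."""
    v = 2.0 * probe_len_m / t_two_way_s   # propagation velocity in the soil
    return (C / v) ** 2

def topp_water_content(eps):
    """Volumetric water content (m3/m3), Topp et al. (1980) for mineral soils."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

L = 0.30        # probe length, m (hypothetical)
t = 9.0e-9      # measured two-way travel time, s (hypothetical)
eps = apparent_permittivity(t, L)
print(f"eps_r ~ {eps:.1f}, theta ~ {topp_water_content(eps):.3f} m3/m3")
```

Because water's permittivity (~80) dwarfs that of dry soil (~3-5) and air (1), even modest moisture changes shift the travel time measurably, which is what makes the method work.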
TDR moisture sensor
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering", "Environmental_science" ]
747
[ "Hydrology", "Applied and interdisciplinary physics", "Physical quantities", "Computer engineering", "Quantity", "Soil physics", "Measurement", "Size", "Electronic engineering", "Environmental engineering", "Water", "Electrical engineering" ]
42,119,079
https://en.wikipedia.org/wiki/Electric%20tug
An electric tug is a battery-powered and pedestrian-operated machine used to move heavy loads on wheels. The machines form part of the material-handling equipment field that, amongst others, also covers forklift trucks, overhead cranes and pallet jacks. Although electric tug is perhaps the most commonly used term, suppliers and customers regularly use a range of other names, such as towing tractor, battery-powered tug, electric hand truck, electric tugger and pedestrian-operated tug. The tugs move loads across a single level. They do not lift the load clear of the ground, which is why the load must be on wheels. If the load itself does not have wheels, it is placed on a wheeled platform often referred to as a trolley, bogie or skate. The tug connects to this wheeled platform just as a forklift truck picks up a pallet to move a load placed on it. In most cases a steel coupling (male) attached to the machine itself connects to a corresponding coupling (female) bolted to the load's bogie. A second bogie or multiple bogies will each have identical female couplings attached to them so that a single male coupling attached to the machine can move them all without alterations. Operation An electric tug relies on the principle of tractive effort. The machine, once secured to the bogie, will lift a portion of the load while ensuring the load's wheels remain on the ground. This is achieved via the machine's hydraulic mast, which is designed to create downforce on the drive wheel immediately beneath it. It is the traction generated from this process that allows the tug to move very large and heavy objects. As a tug does not lift its load clear of the ground, it does not have to conform to the Lifting Operations and Lifting Equipment Regulations 1998 (LOLER); therefore, an operator does not need a licence to operate it. Applications Electric tugs are used in many work sectors. Some common applications include: Retail: to move heavy roll cages from a delivery vehicle's tail lift to the supermarket's storeroom, or long trains of empty roll cages. Healthcare: to move bariatric beds, waste bins (including multiple bins at once), linen cages and gas bottles. Pharmaceutical: to move chromatography columns within laboratories. Supermarkets and airports: to move long trains of empty luggage trolleys. Horticulture and agriculture: to move heavy materials such as top soil or to harvest crops in polytunnels. Construction: to move heavy building materials or to access construction sites where diggers and movers cannot due to size restrictions. Food and beverage: to move large, heavy mixing bowls full of product. Waste handling: to move containers, waste bins and wheelie bins. Manufacturing and assembly: Automotive: to move heavy products such as vehicles down a production line. Glass: to move heavy stillages used to hold glass through production. Wind turbines: to move turbine blades up to 50 metres in length through production. Aerospace: to move wing assemblies, invar tooling, turnover jigs etc. Boat building: to move luxury yachts on cradles through production. Brick and ceramic: to move product into and out of a kiln. Modular buildings: to move buildings through production and completed buildings into storage. Cable and wire reels: to move unwieldy reels in production. Rail: to move loads mounted on rails, such as in railway maintenance depots. References Electric vehicles Material-handling equipment Industrial equipment
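A rough way to see why the downforce transfer matters (a back-of-envelope sketch with invented coefficients and masses, not manufacturer data): the tug can only pull as hard as the friction at its drive wheel allows, while the force needed is set by the load's rolling resistance, which is far smaller than the load's weight.

```python
# Back-of-envelope tractive-effort check for an electric tug.
# All numbers are hypothetical illustrations, not manufacturer data.

G = 9.81  # m/s^2

def required_pull(load_kg, rolling_coeff):
    """Force (N) needed to keep a wheeled load rolling on level ground."""
    return rolling_coeff * load_kg * G

def available_traction(tug_kg, transferred_kg, friction_coeff):
    """Friction limit (N) at the drive wheel: tug weight plus the share of
    the load's weight transferred onto the drive wheel by the mast."""
    return friction_coeff * (tug_kg + transferred_kg) * G

load = 10_000    # kg, load on wheels (hypothetical)
tug = 450        # kg, tug mass (hypothetical)
transfer = 400   # kg of the load borne by the drive wheel (hypothetical)

need = required_pull(load, rolling_coeff=0.01)
have = available_traction(tug, transfer, friction_coeff=0.6)
print(f"need {need:.0f} N, traction limit {have:.0f} N ->",
      "ok" if have > need else "wheel slips")
```

With these illustrative numbers a 450 kg machine comfortably moves a ten-tonne wheeled load, which is the point of lifting part of the load onto the drive wheel rather than lifting it clear of the ground.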
Electric tug
[ "Engineering" ]
692
[ "nan" ]
42,121,937
https://en.wikipedia.org/wiki/International%20Fertilizer%20Development%20Center
The International Fertilizer Development Center (known as IFDC) is a science-based public international organization working to alleviate global hunger by introducing improved agricultural practices and fertilizer technologies to farmers and by linking farmers to markets. Headquartered in Muscle Shoals, Alabama, USA, the organization has projects in over 25 countries. History IFDC was established in part because, by 1975, the Tennessee Valley Authority's National Fertilizer Development Center (NFDC) was receiving more requests for international assistance than its staff could fulfill alongside its domestic programs. A year earlier at the Sixth Special Session of the United Nations General Assembly, U.S. Secretary of State Henry Kissinger in his speech "The Challenge of Interdependence" urged the creation of an international fertilizer institute and promised U.S. contribution through facilities, technology and expertise. The result of Kissinger's urging was the International Fertilizer Development Center, a non-profit organization incorporated under the state laws of Alabama, which began its service by answering the international requests once fielded by the NFDC. In March 1977, U.S. President Jimmy Carter designated IFDC a public international organization "entitled to enjoy the privileges, exemptions, and immunities conferred by the International Organizations Immunities Act." Funding IFDC receives funding from various bilateral and multilateral development agencies, private enterprises, foundations and an assortment of other organizations. Additionally, the Center receives long-term revenue through donor-funded market development projects involving the transfer of policy and technology improvements to emerging economies. President and CEO Albin Hubscher's work in international agricultural development has spanned both the public and private sectors. Most recently, Hubscher served as Interim Corporate Service Director for the International Livestock Research Institute (ILRI). He has also held roles as CFO – Director of Finance for the CGIAR System Organization from 2015 to 2018 and as Deputy Director General for the Centro Internacional de Agricultura Tropical (CIAT) from 2007 to 2015. Within the private sector, Hubscher held positions at Ciba-Geigy Corporation and Novartis in Colombia, Switzerland, and the United States. Hubscher earned a degree in industrial/processing engineering from Fachhochschule Nordwestschweiz. He has also completed several management and leadership training programs and workshops in the private and non-profit sectors, including from the Harvard Business School. Board of Directors Dr. Jimmy Cheek, Chairperson of the Board, United States Rudy Rabbinge, Co-Vice Chairperson, The Netherlands Rhoda Peace Tumusiime, Co-Vice Chairperson, Uganda M. Peter McPherson, Chairperson Emeritus, United States Dr. Josué Dioné, Mali Charlotte Hebebrand, United States Douglas Horswill, Canada Dr. Mark E. Keenum, United States Dr. Steven Leath, United States William P. O'Neill Jr., United States Esin Mete, Turkey R.S. Paroda, India Jason Scarpone, United States Albin Hubscher, IFDC President and CEO, United States Patrick J. 
Murphy, United States, Ex-Officio Member Divisions East and Southern Africa Division Active Countries: Burundi, Democratic Republic of the Congo, Ethiopia, Kenya, Mozambique, Rwanda, South Sudan, Tanzania, Uganda, Zambia The East and Southern Africa Division (ESAFD) of IFDC works in areas where previous farming techniques are no longer adequate for the growing populations they serve. ESAFD works to improve farmers' access to quality seeds and fertilizers, as well as to markets where they can recover their costs. The effort also educates farmers in Integrated Soil Fertility Management (ISFM) to improve soil conditions. North and West Africa Division Active Countries: Benin, Burkina Faso, Cape Verde, Chad, Côte d'Ivoire, Gambia, Ghana, Guinea, Guinea Bissau, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, Togo The North and West Africa Division (NWAFD) of IFDC covers a region of Africa of about 520 million people, more than half of whom are directly affected by its programs. These programs include demonstration fields where farmers receive hands-on training and experience with new and specialized fertilizer, seed, crop protection and irrigation research. Through the use of voucher programs called "smart subsidies," farmers can receive quality supplies in a timely manner and be supported at harvest time. EurAsia Division Active Countries: Bangladesh, Myanmar The EurAsia Division (EAD) of IFDC focuses on countries with little land suitable for farming, where the quality and quantity of farmers' crop yields steadily decrease over time. EAD hopes to reverse these issues by addressing specific financial, market, and agricultural needs. The division teaches farmers about Fertilizer Deep Placement (FDP), a method which has previously raised crop yields by 20 percent and decreased nitrogen losses by 40 percent. Office of Programs The Office of Programs conducts research and development projects dedicated to improving fertilizer efficiency. It offers consultation to national governments as well as private sector organizations with regard to critical domains such as supply/demand and policy issues. Nations Currently Served by IFDC Nations Previously Served by IFDC Research and development By 2050, 60 percent more food will need to be grown annually to keep up with a rapidly growing population. According to Vaclav Smil, man-made nitrogen fertilizers are keeping about 40% of the world's population alive. IFDC conducts research to identify the most efficient use of fertilizer raw materials and develops processes to use these materials in the sustainable and cost-effective manufacture of various fertilizer products. In Bangladesh, for example, IFDC introduced Urea Deep Placement (UDP) technology, a briquetted form of urea applied into the soil, which increases farmer incomes by an average of 20% and decreases nitrogen loss by up to 30%. Applied research also includes the development of more efficient cropping technologies, decision support tools and the agronomic evaluation of these products and processes to ensure their long-term viability in a free-market environment. Fertilizer Deep Placement During the mid to late 1980s, IFDC began research in India on several fertilizer types, one being the IFDC-developed fertilizer deep placement (FDP) technology, which was shown at the time to decrease nitrogen losses by 9% on sorghum crops. In 1986, the Center introduced FDP in Bangladesh, where IFDC has promoted the technology ever since. Farmers are now using the technology on 1.7 million acres in that country alone. 
In 2007, IFDC began a new FDP campaign, spreading the technology to sub-Saharan Africa. FDP involves "briquetting" nitrogen fertilizer by compacting prilled fertilizer into 1-3 gram briquettes. The briquettes (either urea- or NPK-based) are then placed in a plant's root zone, as opposed to the traditional application method of broadcasting. Trials have shown that FDP and UDP (when only urea is used) can increase crop production up to 36 percent, reduce fertilizer use by up to 38 percent, and reduce nitrogen losses by up to 40 percent. The technology, mainly promoted in lowland flooded rice, showed promising results in reducing nitrogen runoff, so in 2012, IFDC began research in Bangladesh to quantify GHG emissions produced from using FDP. Through the USAID-funded Accelerating Agricultural Productivity Improvement project, which integrated the U.S. government's Global Climate Change Initiative into its Feed the Future Initiative, research is currently underway. Peak Phosphorus Phosphorus is a key component of fertilizer, along with nitrogen and potassium. Anticipating peak phosphate, the point at which production of phosphate rock begins to decline as resources dwindle, researchers estimated that world phosphorus supplies would be used up by 2030 if mined and processed at present rates. Depletion of this material would be a major complication for the fertilizer production sector. In 2010, IFDC geologist Steven Van Kauwenburgh estimated the world's supply of phosphate rock at 60 billion metric tons in the publication World Phosphate Rock Reserves and Resources. By his estimates, global resources of phosphate rock suitable to produce phosphate rock concentrate, phosphoric acid, phosphate fertilizers and other phosphate-based products will be available for several hundred years. His estimate exceeded previous estimates of the U.S. Geological Survey (USGS) by 44 billion tons. Upon review and intense scrutiny of the information in the report, the USGS revised its world phosphate rock reserve and resource numbers to more closely reflect those stated in the report. Areas of Expertise Capacity Building IFDC trains farmers to participate effectively in a global market while facing challenges in their specific local communities. This training covers both hands-on farming techniques and commercial agribusiness concepts. Competitive Agricultural Systems and Enterprises (CASE) CASE consolidates local stakeholders to encourage innovation and growth while also developing a commodity value chain and involving public and private entities. IFDC developed CASE in 2004 to further promote agricultural intensification and strengthen the integration of farmers and local entrepreneurs. Decision Support Tools (DSTs) DSTs help farmers apply agricultural research based on geography and markets by using crop modeling and analyses of soil, weather and market information to increase yields and profits. IFDC has aided in the development of several tools, including the Decision Support System for Agrotechnology Transfer (DSSAT). Fertilizer Deep Placement (FDP) FDP compacts prilled fertilizer into 1-3 gram briquettes that are placed in the plant's root zone rather than broadcast, as described under Research and development above. 
Integrated Soil Fertility Management (ISFM) ISFM adapts agricultural practices to specific areas and their respective conditions to maximize agricultural efficiency and productivity. Market Development Market development efforts consist of developing output markets in which farmers can sell their surplus produce, which in turn can create an input market from which farmers can buy necessary supplies such as seeds, fertilizers and crop protection products. Public-Private Partnerships (PPPs) PPPs accomplish tasks that neither public sector institutions nor private sector organizations could accomplish individually. Initiatives Africa Fertilizer Summit On June 9–13, 2006, heads of state and governments gathered in Abuja, Nigeria, for the Africa Fertilizer Summit and called for the elimination of all taxes and tariffs on fertilizer in the historic "Abuja Declaration on Fertilizer for an African Green Revolution". Summit participants also agreed in the Abuja Declaration on 12 resolutions designed to increase fertilizer use five-fold in 10 years. IFDC helped organize and implement the Summit. Dr. Amit Roy, then president and CEO of IFDC, stated in a corporate report address on the Summit: "The obstacles to agricultural development in Africa are enormous and long-standing. Human, institutional and research capacity, as well as physical infrastructure, must be built to enable Africa to compete effectively. Policies should be changed to encourage business investment. Furthermore, as history has demonstrated, countries must take charge of their own futures if they are to build better futures for their children." The Summit was attended by 1,100 participants including five African heads of state, 15 ministers of agriculture, 17 members of the Summit's Eminent Persons Advisory Committee, and hundreds of leaders of international organizations, agricultural research centers and private sector companies. The Abuja Declaration was written at the conclusion of the Africa Fertilizer Summit on June 13, 2006, in Abuja, Nigeria. Global Transdisciplinary Processes for Sustainable Phosphorus Management (Global TraPs) The Global TraPs initiative brings together experts from a multitude of fields to build knowledge on how humans can move towards using phosphorus in a sustainable manner. The multi-stakeholder initiative is headed by Dr. Amit H. Roy, then IFDC president and CEO, and Dr. Roland W. Scholz, of Fraunhofer IWKS. More than 200 other partners worldwide participate in the project. Recently, Global TraPs published a Springer book titled Sustainable Phosphorus Management: A Global Transdisciplinary Roadmap. The book discusses the economic scarcity of phosphorus and ways to increase efficiency and reduce environmental impacts of anthropogenic phosphorus flows at every stage of production, supply and use. Virtual Fertilizer Research Center (VFRC) The VFRC was an IFDC research initiative designed to create and disseminate the "next generation" of fertilizers. The initiative, through a virtual network, engaged universities, public and private research laboratories and the global fertilizer and agribusiness industries in the development of new fertilizers. Past work focused on biological solutions to plant and human nutrition. 
See also Nitrate City, Alabama Nitrate Plant Number 1 Reservation Subdivision References External links Africa Fertilizer Summit Proceedings Global TraPs Website VFRC Website Agricultural organizations based in the United States International development organizations Non-profit organizations based in Alabama Muscle Shoals, Alabama Fertilizers 1974 establishments in Alabama
International Fertilizer Development Center
[ "Chemistry" ]
2,748
[ "Fertilizers", "Soil chemistry" ]
42,122,318
https://en.wikipedia.org/wiki/Copper%20zinc%20antimony%20sulfide
Copper zinc antimony sulfide is a semiconductor. References Semiconductor materials
Copper zinc antimony sulfide
[ "Chemistry" ]
15
[ "Semiconductor materials" ]
26,692,812
https://en.wikipedia.org/wiki/Ketipramine
Ketipramine (G-35,259), also known as ketimipramine or ketoimipramine, is a tricyclic antidepressant (TCA) that was tested in clinical trials for the treatment of depression in the 1960s but was never marketed. It differs from imipramine in chemical structure only by the addition of a ketone group to the azepine ring, and is approximately equivalent to imipramine in effectiveness as an antidepressant. It was one of the drugs prescribed by Roland Kuhn in a series of unethical experiments that tested drugs on children without informed consent and without proper approval at the psychiatric hospital in Münsterlingen, Switzerland. See also Tricyclic antidepressant References Dimethylamino compounds Antidepressants Dibenzazepines Ketones Tricyclic antidepressants
Ketipramine
[ "Chemistry" ]
184
[ "Ketones", "Functional groups", "Drug safety", "Abandoned drugs" ]
26,694,015
https://en.wikipedia.org/wiki/Super-resolution%20microscopy
Super-resolution microscopy is a series of techniques in optical microscopy that allow such images to have resolutions higher than those imposed by the diffraction limit, which is due to the diffraction of light. Super-resolution imaging techniques rely on the near-field (photon-tunneling microscopy as well as those that use the Pendry Superlens and near field scanning optical microscopy) or on the far-field. Among techniques that rely on the latter are those that improve the resolution only modestly (up to about a factor of two) beyond the diffraction-limit, such as confocal microscopy with closed pinhole or aided by computational methods such as deconvolution or detector-based pixel reassignment (e.g. re-scan microscopy, pixel reassignment), the 4Pi microscope, and structured-illumination microscopy technologies such as SIM and SMI. There are two major groups of methods for super-resolution microscopy in the far-field that can improve the resolution by a much larger factor: Deterministic super-resolution: the most commonly used emitters in biological microscopy, fluorophores, show a nonlinear response to excitation, which can be exploited to enhance resolution. Such methods include STED, GSD, RESOLFT and SSIM. Stochastic super-resolution: the chemical complexity of many molecular light sources gives them a complex temporal behavior, which can be used to make several nearby fluorophores emit light at separate times and thereby become resolvable in time. These methods include super-resolution optical fluctuation imaging (SOFI) and all single-molecule localization methods (SMLM), such as SPDM, SPDMphymod, PALM, FPALM, STORM, and dSTORM. On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, W.E. Moerner and Stefan Hell for "the development of super-resolved fluorescence microscopy", which brings "optical microscopy into the nanodimension". The different modalities of super-resolution microscopy are increasingly being adopted by the biomedical research community, and these techniques are becoming indispensable tools to understanding biological function at the molecular level. History By 1978, the first theoretical ideas had been developed to break the Abbe limit, which called for using a 4Pi microscope as a confocal laser-scanning fluorescence microscope where the light is focused from all sides to a common focus that is used to scan the object by 'point-by-point' excitation combined with 'point-by-point' detection. However the publication from 1978 had drawn an improper physical conclusion (i.e. a point-like spot of light) and had completely missed the axial resolution increase as the actual benefit of adding the other side of the solid angle. Some of the following information was gathered (with permission) from a chemistry blog's review of sub-diffraction microscopy techniques. In 1986, a super-resolution optical microscope based on stimulated emission was patented by Okhonin. Super-resolution techniques Photon tunneling microscopy (PTM) Local enhancement / ANSOM / optical nano-antennas Near-field optical random mapping (NORM) microscopy Near-field optical random mapping (NORM) microscopy is a method of optical near-field acquisition by a far-field microscope through the observation of nanoparticles' Brownian motion in an immersion liquid. NORM uses object surface scanning by stochastically moving nanoparticles. Through the microscope, nanoparticles look like symmetric round spots. 
The spot width is equivalent to the point spread function (~ 250 nm) and is defined by the microscope resolution. Lateral coordinates of the given particle can be evaluated with a precision much higher than the resolution of the microscope. By collecting the information from many frames one can map out the near field intensity distribution across the whole field of view of the microscope. In comparison with NSOM and ANSOM this method does not require any special equipment for tip positioning and has a large field of view and a depth of focus. Due to the large number of scanning "sensors" one can achieve image acquisition in a shorter time. 4Pi A 4Pi microscope is a laser-scanning fluorescence microscope with an improved axial resolution. The typical value of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy. The improvement in resolution is achieved by using two opposing objective lenses, both of which are focused to the same geometric location. Also, the difference in optical path length through each of the two objective lenses is carefully minimized. By this, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides, and the reflected or emitted light can be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle that is used for illumination and detection is increased and approaches the ideal case, where the sample is illuminated and detected from all sides simultaneously. Up to now, the best quality in a 4Pi microscope has been reached in conjunction with STED microscopy in fixed cells and RESOLFT microscopy with switchable proteins in living cells. Structured illumination microscopy (SIM) Structured illumination microscopy (SIM) enhances spatial resolution by collecting information from frequency space outside the observable region. This process is done in reciprocal space: the Fourier transform (FT) of an SI image contains superimposed additional information from different areas of reciprocal space; with several frames where the illumination is shifted by some phase, it is possible to computationally separate and reconstruct the FT image, which has much more resolution information. The reverse FT returns the reconstructed image to a super-resolution image. SIM could potentially replace electron microscopy as a tool for some medical diagnoses. These include diagnosis of kidney disorders, kidney cancer, and blood diseases. Although the term "structured illumination microscopy" was coined by others in later years, Guerra (1995) first published results in which light patterned by a 50 nm pitch grating illuminated a second grating of pitch 50 nm, with the gratings rotated with respect to each other by the angular amount needed to achieve magnification. Although the illuminating wavelength was 650 nm, the 50 nm grating was easily resolved. This showed a nearly 5-fold improvement over the Abbe resolution limit of 232 nm that should have been the smallest obtained for the numerical aperture and wavelength used. In further development of this work, Guerra showed that super-resolved lateral topography is attained by phase-shifting the evanescent field. Several U.S. patents were issued to Guerra individually, or with colleagues, and assigned to the Polaroid Corporation. Licenses to this technology were procured by Dyer Energy Systems, Calimetrics Inc., and Nanoptek Corp. 
for use of this super-resolution technique in optical data storage and microscopy. Spatially modulated illumination (SMI) One implementation of structured illumination is known as spatially modulated illumination (SMI). Like standard structured illumination, the SMI technique modifies the point spread function (PSF) of a microscope in a suitable manner. In this case however, "the optical resolution itself is not enhanced"; instead structured illumination is used to maximize the precision of distance measurements of fluorescent objects, to "enable size measurements at molecular dimensions of a few tens of nanometers". The Vertico SMI microscope achieves structured illumination by using one or two opposing interfering laser beams along the axis. The object being imaged is then moved in high-precision steps through the wave field, or the wave field itself is moved relative to the object by phase shifts. This results in an improved axial size and distance resolution. SMI can be combined with other super resolution technologies, for instance with 3D LIMON or LSI-TIRF as a total internal reflection interferometer with laterally structured illumination (this last instrument and technique is essentially a phase-shifted photon tunneling microscope, which employs a total internal reflection light microscope with phase-shifted evanescent field (Guerra, 1996). This SMI technique allows one to acquire light-optical images of autofluorophore distributions in sections from human eye tissue with previously unmatched optical resolution. Use of three different excitation wavelengths (488, 568, and 647 nm), enables one to gather spectral information about the autofluorescence signal. This has been used to examine human eye tissue affected by macular degeneration. Biosensing Biosensing is crucial for understanding the activities of cellular components in cell biology. Genetically encoded sensors have transformed this field and typically consist of two parts: the sensing domain, which detects cellular activity or interactions, and the reporting domain, which produces measurable signals. There are two main types of sensors: FRET-based sensors using two fluorophores for precise quantification but with some limitations, and single-fluorophore biosensors that are smaller, faster, and allow for multiplexed experiments, but may have challenges in obtaining absolute values and detecting response saturation. Various microscopy methods, including super-resolution optical fluctuation imaging, have been used to quantify and monitor biological activities in real time. Examples include calcium, pH, and voltage sensing. Greenwald et al. offer a more comprehensive overview of these applications. Deterministic functional techniques REversible Saturable OpticaL Fluorescence Transitions (RESOLFT) microscopy is an optical microscopy with very high resolution that can image details in samples that cannot be imaged with conventional or confocal microscopy. Within RESOLFT the principles of STED microscopy and GSD microscopy are generalized. Also, there are techniques with other concepts than RESOLFT or SSIM. For example, fluorescence microscopy using the optical AND gate property of nitrogen-vacancy center, or super-resolution by Stimulated Emission of Thermal Radiation (SETR), which uses the intrinsic super-linearities of the Black-Body radiation and expands the concept of super-resolution beyond microscopy. 
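As a sense of scale for the resolution claims above and in the sections that follow, the Abbe limit these techniques beat is straightforward to evaluate (a minimal sketch with typical values; note the 650 nm / NA 1.4 case reproduces the 232 nm figure quoted earlier in the text).

```python
# Abbe's lateral diffraction limit d = lambda / (2 NA) for a few
# illustrative wavelength / numerical-aperture combinations.

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable lateral distance for a conventional microscope."""
    return wavelength_nm / (2.0 * numerical_aperture)

for wl, na in [(488, 1.4), (561, 1.4), (650, 1.4)]:
    print(f"lambda = {wl} nm, NA = {na}: d ~ {abbe_limit_nm(wl, na):.0f} nm")
```

Everything from STED to single-molecule localization is, in one way or another, a strategy for extracting information below this roughly 200-250 nm floor.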
Stimulated emission depletion (STED) Stimulated emission depletion microscopy (STED) uses two laser pulses: the excitation pulse for excitation of the fluorophores to their fluorescent state, and the STED pulse for the de-excitation of fluorophores by means of stimulated emission. In practice, the excitation laser pulse is first applied, whereupon a STED pulse soon follows (STED without pulses using continuous wave lasers is also used). Furthermore, the STED pulse is modified in such a way that it features a zero-intensity spot that coincides with the excitation focal spot. Due to the non-linear dependence of the stimulated emission rate on the intensity of the STED beam, all the fluorophores around the focal excitation spot will be in their off state (the ground state of the fluorophores). By scanning this focal spot, one retrieves the image. The full width at half maximum (FWHM) of the point spread function (PSF) of the excitation focal spot can theoretically be compressed to an arbitrary width by raising the intensity of the STED pulse, according to equation (1): $\Delta r = \Delta / \sqrt{1 + I_{\max}/I_s}$ (1), where Δr is the lateral resolution, Δ is the FWHM of the diffraction-limited PSF, Imax is the peak intensity of the STED laser, and Is is the threshold intensity needed in order to achieve saturated emission depletion. The main disadvantage of STED, which has prevented its widespread use, is that the machinery is complicated. On the one hand, the image acquisition speed is relatively slow for large fields of view because of the need to scan the sample in order to retrieve an image. On the other hand, it can be very fast for smaller fields of view: recordings of up to 80 frames per second have been shown. Due to the large Is value associated with STED, there is a need for a high-intensity pulse, which may cause damage to the sample. Ground state depletion (GSD) Ground state depletion microscopy (GSD microscopy) uses the triplet state of a fluorophore as the off-state and the singlet state as the on-state, whereby an excitation laser is used to drive the fluorophores at the periphery of the focal spot into the triplet state. This is much like STED, where the off-state is the ground state of the fluorophores, which is why equation (1) also applies in this case. The Is value is smaller than in STED, making super-resolution imaging possible at a much lower laser intensity. Compared to STED, though, the fluorophores used in GSD are generally less photostable, and the saturation of the triplet state may be harder to realize. Saturated structured illumination microscopy (SSIM) Saturated structured-illumination microscopy (SSIM) exploits the nonlinear dependence of the emission rate of fluorophores on the intensity of the excitation laser. By applying a sinusoidal illumination pattern with a peak intensity close to that needed in order to saturate the fluorophores in their fluorescent state, one retrieves moiré fringes. The fringes contain high order spatial information that may be extracted by computational techniques. Once the information is extracted, a super-resolution image is retrieved. SSIM requires shifting the illumination pattern multiple times, effectively limiting the temporal resolution of the technique. In addition there is a need for very photostable fluorophores, due to the saturating conditions, which inflict radiation damage on the sample and restrict the possible applications for which SSIM may be used. 
Examples of this microscopy are shown under the section Structured illumination microscopy (SIM): images of cell nuclei and mitotic stages recorded with 3D-SIM microscopy. Stochastic functional techniques Localization microscopy Single-molecule localization microscopy (SMLM) summarizes all microscopical techniques that achieve super-resolution by isolating emitters and fitting their images with the point spread function (PSF). Normally, the width of the point spread function (~ 250 nm) limits resolution. However, given an isolated emitter, one is able to determine its location with a precision limited only by its intensity, according to equation (2): $\Delta_{\mathrm{loc}} \approx \Delta / \sqrt{N}$ (2), where Δloc is the localization precision, Δ is the FWHM (full width at half maximum) of the PSF, and N is the number of collected photons. This fitting process can only be performed reliably for isolated emitters (see Deconvolution), and interesting biological samples are so densely labeled with emitters that fitting is impossible when all emitters are active at the same time. SMLM techniques solve this dilemma by activating only a sparse subset of emitters at the same time, localizing these few emitters very precisely, deactivating them and activating another subset. Considering background and camera pixelation, and using a Gaussian approximation for the point spread function (Airy disk) of a typical microscope, the theoretical localization precision proposed by Thompson et al. and fine-tuned by Mortensen et al. is $\sigma^2 = \frac{\sigma_{PSF}^2 + a^2/12}{N_{sig}}\left(\frac{16}{9} + \frac{8\pi\,(\sigma_{PSF}^2 + a^2/12)\,N_{bg}}{N_{sig}\,a^2}\right),$ where * σ is the Gaussian standard deviation of the center locations of the same molecule if measured multiple times, e.g. frames of a video (unit m), * σPSF is the Gaussian standard deviation of the point spread function, whose FWHM follows the Ernst Abbe equation d = λ/(2 N.A.) (unit m), * a is the size of each image pixel (unit m), * Nsig is the photon count of the total PSF over all pixels of interest (unitless), * Nbg is the average background photon count per pixel (dark counts already removed), which is approximated as the square of the Gaussian standard deviation of the Poisson-distributed background noise of each pixel over time, or the standard deviation of all pixels with background noise only, σbg2; the larger the σbg2, the better the approximation (e.g. good for σbg2 > 10, excellent for σbg2 > 1000) (unitless). * The resolution FWHM is ~2.355 times the Gaussian standard deviation. Generally, localization microscopy is performed with fluorophores. Suitable fluorophores (e.g. for STORM) reside in a non-fluorescent dark state for most of the time and are activated stochastically, typically with an excitation laser of low intensity. A readout laser stimulates fluorescence and bleaches or photoswitches the fluorophores back to a dark state, typically within 10–100 ms. In Points Accumulation for Imaging in Nanoscale Topography (PAINT), the fluorophores are nonfluorescent before binding and become fluorescent afterwards. The photons emitted during the fluorescent phase are collected with a camera, and the resulting image of the fluorophore (which is distorted by the PSF) can be fitted with very high precision, even on the order of a few ångströms. Repeating the process several thousand times ensures that all fluorophores can go through the bright state and are recorded. A computer then reconstructs a super-resolved image. The desirable traits of fluorophores used for these methods, in order to maximize the resolution, are that they should be bright. That is, they should have a high extinction coefficient and a high quantum yield. 
They should also possess a high contrast ratio (the ratio between the number of photons emitted in the light state and the number of photons emitted in the dark state). Also, a densely labeled sample is desirable, according to the Nyquist criterion. The multitude of localization microscopy methods differ mostly in the type of fluorophores used. Spectral precision distance microscopy (SPDM) A single, tiny source of light can be located much better than the resolution of a microscope usually allows: although the light will produce a blurry spot, computer algorithms can be used to accurately calculate the center of the blurry spot, taking into account the point spread function of the microscope, the noise properties of the detector, and so on. However, this approach does not work when there are too many sources close to each other: the sources then all blur together. Spectral precision distance microscopy (SPDM) is a family of localization techniques in fluorescence microscopy which gets around the problem of there being many sources by measuring just a few sources at a time, so that each source is "optically isolated" from the others (i.e., separated by more than the microscope's resolution, typically ~200–250 nm). This "optical isolation" requires that the particles under examination have different spectral signatures, so that it is possible to look at light from just a few molecules at a time by using the appropriate light sources and filters. This achieves an effective optical resolution several times better than the conventional optical resolution, which is represented by the half-width of the main maximum of the effective point image function. The structural resolution achievable using SPDM can be expressed in terms of the smallest measurable distance between two punctiform particles of different spectral characteristics ("topological resolution"). Modeling has shown that under suitable conditions regarding the precision of localization, particle density, etc., the "topological resolution" corresponds to a "space frequency" that, in terms of the classical definition, is equivalent to a much improved optical resolution. Molecules can also be distinguished in even more subtle ways based on fluorescence lifetime and other techniques. An important application is in genome research (the study of the functional organization of the genome). Another important area of use is research into the structure of membranes. SPDMphymod Localization microscopy for many standard fluorescent dyes like GFP, Alexa dyes, and fluorescein molecules is possible if certain photo-physical conditions are present. With this so-called physically modifiable fluorophore (SPDMphymod) technology, a single laser wavelength of suitable intensity is sufficient for nanoimaging, in contrast to other localization microscopy technologies that need two laser wavelengths when special photo-switchable/photo-activatable fluorescent molecules are used. A further example of the use of SPDMphymod is the analysis of Tobacco mosaic virus (TMV) particles or the study of virus–cell interaction. SPDMphymod relies on singlet–triplet state transitions: it is crucial that this process is ongoing, with the effect that a single molecule first enters a very long-lived reversible dark state (with a half-life of as much as several seconds), from which it returns to a fluorescent state, emitting many photons for several milliseconds, before it falls into a very long-lived, so-called irreversible dark state.
SPDMphymod microscopy uses fluorescent molecules that emit light at the same spectral frequency but with different spectral signatures based on their flashing characteristics. By combining two thousand images of the same cell, it is possible, using laser optical precision measurements, to record localization images with significantly improved optical resolution. Standard fluorescent dyes already successfully used with the SPDMphymod technology are GFP, RFP, YFP, Alexa 488, Alexa 568, Alexa 647, Cy2, Cy3, Atto 488 and fluorescein. Cryogenic optical localization in 3D (COLD) Cryogenic optical localization in 3D (COLD) is a method that allows localizing multiple fluorescent sites within a single small- to medium-sized biomolecule with Angstrom-scale resolution. The localization precision in this approach is enhanced because the slower photochemistry at low temperatures leads to a higher number of photons that can be emitted from each fluorophore before photobleaching. Consequently, cryogenic stochastic localization microscopy achieves the sub-molecular resolution required to resolve the 3D positions of several fluorophores attached to a small protein. By employing algorithms known from electron microscopy, the 2D projections of fluorophores are reconstructed into a 3D configuration. COLD brings fluorescence microscopy to its fundamental limit, depending on the size of the label. The method can also be combined with other structural biology techniques—such as X-ray crystallography, magnetic resonance spectroscopy, and electron microscopy—to provide valuable complementary information and specificity. Binding-activated localization microscopy (BALM) Binding-activated localization microscopy (BALM) is a general concept for single-molecule localization microscopy (SMLM): super-resolved imaging of DNA-binding dyes based on modifying the properties of DNA and the dye. By careful adjustment of the chemical environment—leading to local, reversible DNA melting and hybridization control over the fluorescence signal—DNA-binding dye molecules can be introduced. Intercalating and minor-groove-binding DNA dyes can be used to register and optically isolate only a few DNA-binding dye signals at a time. DNA structure fluctuation-assisted BALM (fBALM) has been used to image nanoscale differences in nuclear architecture, with an anticipated structural resolution of approximately 50 nm. Recently, the significant enhancement of the fluorescence quantum yield of NIAD-4 upon binding to an amyloid was exploited for BALM imaging of amyloid fibrils and oligomers. STORM, PALM, and FPALM Stochastic optical reconstruction microscopy (STORM), photoactivated localization microscopy (PALM), and fluorescence photoactivation localization microscopy (FPALM) are super-resolution imaging techniques that use sequential activation and time-resolved localization of photoswitchable fluorophores to create high-resolution images. During imaging, only an optically resolvable subset of fluorophores is activated to a fluorescent state at any given moment, such that the position of each fluorophore can be determined with high precision by finding the centroid positions of the single-molecule images of a particular fluorophore. One subset of fluorophores is subsequently deactivated, and another subset is activated and imaged.
Iteration of this process allows numerous fluorophores to be localized and a super-resolution image to be constructed from the image data. These three methods were published independently over a short period of time, and their principles are identical. STORM was originally described using Cy5 and Cy3 dyes attached to nucleic acids or proteins, while PALM and FPALM were described using photoswitchable fluorescent proteins. In principle any photoswitchable fluorophore can be used, and STORM has been demonstrated with a variety of different probes and labeling strategies. Using stochastic photoswitching of single fluorophores, such as Cy5, STORM can be performed with a single red laser excitation source. The red laser both switches the Cy5 fluorophore to a dark state by formation of an adduct and subsequently returns the molecule to the fluorescent state. Many other dyes have also been used with STORM. In addition to single fluorophores, dye pairs consisting of an activator fluorophore (such as Alexa 405, Cy2, or Cy3) and a photoswitchable reporter dye (such as Cy5, Alexa 647, Cy5.5, or Cy7) can be used with STORM. In this scheme, the activator fluorophore, when excited near its absorption maximum, serves to reactivate the photoswitchable dye to the fluorescent state. Multicolor imaging has been performed by using different activation wavelengths to distinguish dye pairs, depending on the activator fluorophore used, or by using spectrally distinct photoswitchable fluorophores, either with or without activator fluorophores. Photoswitchable fluorescent proteins can be used as well. Highly specific labeling of biological structures with photoswitchable probes has been achieved with antibody staining, direct conjugation of proteins, and genetic encoding. STORM has also been extended to three-dimensional imaging using optical astigmatism, in which the elliptical shape of the point spread function encodes the x, y, and z positions for samples up to several micrometers thick, and has been demonstrated in living cells. To date, the spatial resolution achieved by this technique is ~20 nm in the lateral dimensions and ~50 nm in the axial dimension, and the temporal resolution is as fast as 0.1–0.33 s. Points accumulation for imaging in nanoscale topography (PAINT) Points accumulation for imaging in nanoscale topography (PAINT) is a single-molecule localization method that achieves stochastic single-molecule fluorescence by molecular adsorption/absorption and photobleaching/desorption. The first dye used was Nile red, which is nonfluorescent in aqueous solution but fluorescent when inserted into a hydrophobic environment, such as micelles or living cell walls. The concentration of the dye is kept small, at the nanomolar level, so that the molecules' sorption to the diffraction-limited area occurs on the millisecond timescale. The stochastic binding of single dye molecules (probes) to an immobilized target can be spatially and temporally resolved under a typical widefield fluorescence microscope. Each dye is photobleached to return the field to a dark state, so the next dye can bind and be observed. The advantage of this method, compared to other stochastic methods, is that in addition to obtaining the super-resolved image of the fixed target, it can measure the dynamic binding kinetics of the diffusing probe molecules in solution to the target. Combining a 3D super-resolution technique (e.g.
the double-helix point spread function developed in the Moerner group), photo-activated dyes, power-dependent active intermittency, and points accumulation for imaging in nanoscale topography, SPRAIPAINT (SPRAI = Super resolution by PoweR-dependent Active Intermittency) can super-resolve live cell walls. PAINT works by maintaining a balance between the dye adsorption/absorption and photobleaching/desorption rates. This balance can be estimated with statistical principles. The adsorption or absorption rate of a dilute solute to a surface or interface in a gas or liquid solution can be calculated using Fick's laws of diffusion. The photobleaching/desorption rate can be measured for a given solution condition and illumination power density. DNA-PAINT has been further extended to use regular dyes, where the dynamic binding and unbinding of a dye-labeled DNA probe to a fixed DNA origami is used to achieve stochastic single-molecule imaging. DNA-PAINT is no longer limited to environment-sensitive dyes and can measure both the adsorption and the desorption kinetics of the probes to the target. The method uses the camera blurring effect of moving dyes. When a regular dye is diffusing in the solution, its image on a typical CCD camera is blurred because of its relatively fast motion and the relatively long camera exposure time, contributing to the fluorescence background. However, when it binds to a fixed target, the dye stops moving, and a sharp image of the point spread function can be recorded. The term for this method is mbPAINT ("mb" standing for motion blur). When a total internal reflection fluorescence (TIRF) microscope is used for imaging, the excitation depth is limited to ~100 nm from the substrate, which further reduces the fluorescence background from blurred dyes near the substrate and from the bulk solution. Very bright dyes can be used for mbPAINT, which gives typical single-frame spatial resolutions of ~20 nm and single-molecule kinetic temporal resolutions of ~20 ms under relatively mild photoexcitation intensities, which is useful in studying the molecular separation of single proteins. The temporal resolution has been further improved (20 times) using a rotational phase mask placed in the Fourier plane during data acquisition and resolving the distorted point spread function that contains temporal information. The method was named super temporal-resolved microscopy (STReM). Label-free localization microscopy Optical resolution of cellular structures in the range of about 50 nm can be achieved, even in label-free cells, using the localization microscopy method SPDM. By using two different laser wavelengths, SPDM reveals cellular objects which are not detectable under conventional fluorescence wide-field imaging conditions, besides yielding a substantial resolution improvement of autofluorescent structures. As a control, the positions of the detected objects in the localization image match those in the bright-field image. Label-free super-resolution microscopy has also been demonstrated using the fluctuations of a surface-enhanced Raman scattering signal on a highly uniform plasmonic metasurface. Direct stochastic optical reconstruction microscopy (dSTORM) dSTORM uses the photoswitching of a single fluorophore. In dSTORM, fluorophores are embedded in a reducing and oxidizing buffering system (ROXS) and fluorescence is excited.
Sometimes, stochastically, the fluorophore will enter a triplet or some other dark state that is sensitive to the oxidation state of the buffer, from which it can be made to fluoresce again, so that single-molecule positions can be recorded. Development of the dSTORM method occurred at three independent laboratories at about the same time; the method was also called "reversible photobleaching microscopy" (RPM) and "ground state depletion microscopy followed by individual molecule return" (GSDIM), besides the now generally accepted moniker dSTORM. Software for localization microscopy Localization microscopy depends heavily on software that can precisely fit the point spread function (PSF) to millions of images of active fluorophores within a few minutes. Since the classical analysis methods and software suites used in the natural sciences are too slow to computationally solve these problems, often taking hours of computation for processing data measured in minutes, specialised software programs have been developed. Many of these localization software packages are open source; they are listed at the SMLM Software Benchmark. Once molecule positions have been determined, the locations need to be displayed, and several algorithms for display have been developed. Super-resolution optical fluctuation imaging (SOFI) It is possible to circumvent the need for PSF fitting inherent in single-molecule localization microscopy (SMLM) by directly computing the temporal autocorrelation of pixels. This technique is called super-resolution optical fluctuation imaging (SOFI) and has been shown to be more precise than SMLM when the density of concurrently active fluorophores is very high. Omnipresent localization microscopy (OLM) Omnipresent localization microscopy (OLM) is an extension of single-molecule localization microscopy (SMLM) techniques that allows high-density single-molecule imaging with an incoherent light source (such as a mercury arc lamp) and a conventional epifluorescence microscope setup. A short burst of deep-blue excitation (at 350–380 nm, instead of 405 nm) enables a prolonged reactivation of molecules, for a resolution of 90 nm on test specimens. Finally, correlative STED and SMLM imaging can be performed on the same biological sample using a simple imaging medium, which can provide a basis for a further enhanced resolution. These findings can democratize super-resolution imaging and help any scientist to generate high-density single-molecule images even with a limited budget. Resolution enhancement by sequential imaging (RESI) Resolution enhancement by sequential imaging (RESI) is an extension of DNA-PAINT that can achieve theoretically unlimited resolution. Rather than using one label type to identify a given target species, copies of the same target are labeled with orthogonal DNA sequences. Upon sequential (i.e. separated) imaging, localization clouds that would overlap in conventional SMLM can be (1) resolved and (2) combined into a single "super" localization, the precision of which scales with the underlying number of localizations. As the number of achievable localizations in DNA-PAINT is unlimited, so is the theoretical resolution of RESI. Overlaying the RESI localizations from the underlying imaging rounds creates a composite, highly resolved image.
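The core fitting step performed by the localization software described above can be illustrated with a minimal Python sketch: a symmetric 2D Gaussian, a common approximation of the PSF as noted earlier, is fitted by least squares to one simulated single-molecule image. The pixel grid, photon counts, and starting guesses below are invented for illustration; production packages use faster, more robust estimators such as maximum-likelihood fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, x0, y0, sigma, amplitude, background):
    # Symmetric 2D Gaussian approximation of the microscope PSF.
    x, y = coords
    g = amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return (g + background).ravel()

# Simulate one emitter on an 11x11-pixel camera region (values assumed).
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(11), np.arange(11))
truth = (5.3, 4.7, 1.3, 200.0, 10.0)           # x0, y0, sigma, amplitude, background
image = rng.poisson(gaussian_2d((x, y), *truth).reshape(11, 11))

# Least-squares fit of the model to the noisy image.
popt, _ = curve_fit(gaussian_2d, (x, y), image.ravel().astype(float),
                    p0=(5.0, 5.0, 1.5, float(image.max()), float(image.min())))
print("fitted centre (pixels):", popt[0], popt[1])

# Rough photon-limited precision from the rule of thumb delta_loc = FWHM / sqrt(N).
n_signal = image.sum() - popt[4] * image.size   # photons above fitted background
print("approx. precision (pixels):", 2.355 * popt[2] / np.sqrt(n_signal))
```

With the assumed counts, the fitted center lands within a small fraction of a pixel of the true position, which is the essence of super-resolution by localization.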
Combination of techniques 3D light microscopical nanosizing (LIMON) microscopy Light MicrOscopical Nanosizing (3D LIMON) images, using the Vertico SMI microscope, are made possible by the combination of SMI and SPDM, whereby first the SMI and then the SPDM process is applied. The SMI process determines the center of particles and their spread in the direction of the microscope axis. While the center of particles/molecules can be determined with a precision of 1–2 nm, the spread around this point can be determined down to an axial diameter of approximately 30–40 nm. Subsequently, the lateral position of the individual particle/molecule is determined using SPDM, achieving a precision of a few nanometers. As a biological application in the 3D dual-color mode, the spatial arrangement of Her2/neu and Her3 clusters was determined. The positions of the protein clusters in all three directions could be determined with an accuracy of about 25 nm. Integrated correlative light and electron microscopy Combining a super-resolution microscope with an electron microscope enables the visualization of contextual information, with the labelling provided by fluorescence markers. This overcomes the problem of the black backdrop that the researcher is left with when using only a light microscope. In an integrated system, the sample is measured by both microscopes simultaneously. Enhancing of techniques using neural networks Recently, owing to advancements in artificial intelligence computing, deep-learning neural networks (such as generative adversarial networks, GANs) have been used for super-resolution enhancement of photographic images taken with optical microscopes, enhancing images acquired at 40x magnification to the quality of 100x images. Resolution increases from 20x with an optical microscope to 1500x, comparable to a scanning electron microscope, have been reported via a neural lens. These techniques have applications in super-resolving images from positron-emission tomography and fluorescence microscopy. See also Correlative light-electron microscopy Deconvolution Multifocal plane microscopy (MUM) Photoactivatable probes Photoactivated localization microscopy (PALM) Stimulated emission depletion microscope (STED) Super-resolution imaging Video super resolution References Further reading Microscopy German inventions Cell imaging Laboratory equipment Optical microscopy Fluorescence techniques
Super-resolution microscopy
[ "Chemistry", "Biology" ]
7,582
[ "Optical microscopy", "Cell imaging", "Fluorescence techniques", "Microscopy" ]
26,696,317
https://en.wikipedia.org/wiki/Cover%20meter
A cover meter is an instrument to locate rebars and measure the exact concrete cover. Rebar detectors are less sophisticated devices that can only locate metallic objects below the surface. Due to its cost-effective design, the pulse-induction method is one of the most commonly used solutions. Method The pulse-induction method is based on electromagnetic pulse induction technology to detect rebars. Coils in the probe are periodically charged by current pulses and thus generate a magnetic field. Eddy currents are produced on the surface of any electrically conductive material within the magnetic field; these in turn induce a magnetic field in the opposite direction. The resulting change in voltage can be utilized for the measurement. Rebars that are closer to the probe or of larger size produce a stronger magnetic field. Modern rebar detectors use different coil arrangements to generate several magnetic fields. Advanced signal processing supports not only the localization of rebars but also the determination of the cover and the estimation of the bar diameter. This method is unaffected by all non-conductive materials such as concrete, wood, plastics, bricks, etc. However, any kind of conductive material within the magnetic field will influence the measurement. Advantages of the pulse-induction method: high accuracy; not influenced by moisture and heterogeneities of the concrete; unaffected by environmental influences; low cost. Disadvantages of the pulse-induction method: limited detection range; minimum bar spacing depends on cover depth. Standards BS 1881:204 Testing concrete. Recommendations on the use of electromagnetic covermeters DGZfP:B2: Guideline for the verification of reinforcement and measurement of concrete cover in reinforced and prestressed concrete ("für Bewehrungsnachweis und Überdeckungsmessung bei Stahl- und Spannbeton") DIN 1045: Guideline Concrete, reinforced and prestressed concrete structures ACI Concrete Practices Non Destructive testing 228.2R-2.51: Covermeters Application Early diagnosis and analysis of seemingly healthy concrete cover and reinforcement status allows pre-emptive corrosion control measures to reduce unwanted risks to structural safety. The Bundesanstalt für Materialforschung und -prüfung (Federal Institute for Materials Research and Testing, Germany) has developed a sensor-equipped robotic system to accelerate the collection of several criteria used for diagnostics. Besides ultrasonics, ground-penetrating radar, concrete resistance and potential field measurements, the eddy current method implemented in the Profometer 5 was used to measure the concrete cover. See also Metal detector Reinforced concrete Concrete cover References Concrete Corrosion Measuring instruments
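Instrument makers do not publish their signal-processing internals, so the following Python sketch is only a hypothetical illustration of the calibration-curve idea behind such cover measurements: amplitudes recorded at known cover depths on a test block are fitted to an assumed inverse-power model, which is then inverted to estimate the cover for a new reading. The model form, the exponent, and all numbers are invented assumptions, not a vendor algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def signal_model(cover_mm, k, offset):
    # Assumed model: amplitude falls off as an inverse power of distance.
    return k / (cover_mm + offset) ** 3

# Hypothetical calibration data from a test block with known covers.
depths = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # mm
signals = np.array([950.0, 210.0, 80.0, 38.0, 21.0])  # arbitrary units

(k, offset), _ = curve_fit(signal_model, depths, signals, p0=(1e6, 5.0))

def estimate_cover(signal):
    # Invert the fitted curve to read cover depth from a field measurement.
    return (k / signal) ** (1.0 / 3.0) - offset

print(f"estimated cover for a 120 a.u. reading: {estimate_cover(120.0):.1f} mm")
```

Real cover meters combine several coil geometries and corrections for bar diameter, but the fit-then-invert pattern above captures why calibration on known covers matters.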
Cover meter
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
493
[ "Structural engineering", "Metallurgy", "Corrosion", "Measuring instruments", "Electrochemistry", "Concrete", "Materials degradation" ]
26,699,451
https://en.wikipedia.org/wiki/Mechanophilia
Mechanophilia (or mechaphilia) is a paraphilia involving a sexual attraction to machines such as bicycles, motorcycles, cars, helicopters, and airplanes. Mechanophilia is treated as a crime in some nations, with perpetrators being placed on a sex-offenders' register after prosecution. Motorcycles are often portrayed as sexualized fetish objects to those who desire them. Incidents In 2015, a man in Thailand was caught on CCTV masturbating on the front end of a Porsche. In 2008, an American named Edward Smith admitted to 'having sex' with 1000 cars, and the helicopter used in the television show Airwolf. Art, culture and design Mechanophilia has been used to describe important works of the early modernists, including the Eccentric Manifesto (1922), written by Leonid Trauberg, Sergei Yutkevich, Grigori Kozintsev and other members of the Factory of the Eccentric Actor, a modernist avant-garde movement that spanned Russian futurism and constructivism. The term has entered into the realms of science fiction and popular fiction. Scientifically, in Biophilia: The Human Bond with Other Species by Edward O. Wilson, Wilson is quoted describing mechanophilia, the love of machines, as "a special case of biophilia", whereas psychologists such as Erich Fromm would see it as a form of necrophilia. Designers such as Francis Picabia and Filippo Tommaso Marinetti have been said to have exploited the sexual attraction of automobiles. Culturally, critics have described it as "all-pervading" within contemporary Western society, seeming to overwhelm society and all too often its better judgment. Although not all such uses are sexual in intent, the terms are also used for a specifically erotogenic fixation on machinery, taken to its extreme in hardcore pornography such as Fucking Machines. This mainly involves women being sexually penetrated by machines for male consumption, which is seen as being at the limits of current sexual biopolitics. Arse Elektronika, an annual conference organized by the Austrian arts-and-philosophy collective monochrom, has propagated a DIY/feminist approach to sex machines. Authors have drawn a connection between mechanophilia and masculine militarisation, citing the works of animator Yasuo Ōtsuka and Studio Ghibli. The 1973 French film La Grande Bouffe includes a scene of a man and a car copulating, to fatal effect. David Cronenberg's 1996 film Crash concerns a cult of people fascinated by car crashes. The 2021 French film and Palme d'Or winner Titane depicts scenes of a mechanophilic woman having sex with cars. Documentaries My Car is My Lover (2008) See also Gynoid Object sexuality I'm in Love with My Car Sex robot References Further reading Aggrawal, Anil (2009). Forensic and Medico-Legal Aspects of Sexual Crimes and Unusual Sexual Practices. Boca Raton, Florida: CRC Press. p. 376. Wilson, Edward O. (1984). Biophilia: The Human Bond with Other Species. Cambridge, Massachusetts: Harvard University Press. p. 116. Machines Paraphilias
Mechanophilia
[ "Physics", "Technology", "Engineering" ]
672
[ "Physical systems", "Machines", "Mechanical engineering" ]
26,700,564
https://en.wikipedia.org/wiki/Harmonic%20differential
In mathematics, a real differential one-form ω on a surface is called a harmonic differential if ω and its conjugate one-form, written as ω∗, are both closed. Explanation Consider the case of real one-forms defined on a two-dimensional real manifold. Moreover, consider real one-forms that are the real parts of complex differentials. Let ω = A dx + B dy, and formally define the conjugate one-form to be ω∗ = A dy − B dx. Motivation There is a clear connection with complex analysis. Let us write a complex number z in terms of its real and imaginary parts, say x and y respectively, i.e. z = x + iy. Since ω + iω∗ = (A − iB)(dx + i dy), from the point of view of complex analysis, the quotient (ω + iω∗)/dz tends to a limit as dz tends to 0. In other words, the definition of ω∗ was chosen for its connection with the concept of a derivative (analyticity). Another connection with the complex unit is that (ω∗)∗ = −ω (just as i² = −1). For a given function f, let us write ω = df, i.e. ω = (∂f/∂x) dx + (∂f/∂y) dy, where ∂ denotes the partial derivative. Then (df)∗ = (∂f/∂x) dy − (∂f/∂y) dx. Now d((df)∗) is not always zero; indeed d((df)∗) = Δf dx dy, where Δf = ∂²f/∂x² + ∂²f/∂y². Cauchy–Riemann equations As we have seen above, we call the one-form ω harmonic if both ω and ω∗ are closed. This means that ∂A/∂y = ∂B/∂x (ω is closed) and ∂A/∂x = −∂B/∂y (ω∗ is closed). These are called the Cauchy–Riemann equations on A − iB. Usually they are expressed in terms of u(x, y) + iv(x, y) as ∂u/∂x = ∂v/∂y and ∂v/∂x = −∂u/∂y. Notable results A harmonic differential (one-form) is precisely the real part of an (analytic) complex differential. To prove this one shows that u + iv satisfies the Cauchy–Riemann equations exactly when u + iv is locally an analytic function of x + iy. Of course an analytic function w(z) = u + iv is the local derivative of something (namely ∫w(z) dz). The harmonic differentials ω are (locally) precisely the differentials df of solutions f to Laplace's equation Δf = 0. If ω is a harmonic differential, so is ω∗. See also De Rham cohomology References Mathematical analysis
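A short worked example (standard calculus, added here only for illustration) confirms the definitions above:

```latex
% f(x,y) = x^2 - y^2 is harmonic, and df is a harmonic differential:
\[
  df = 2x\,dx - 2y\,dy, \qquad (df)^{*} = 2x\,dy + 2y\,dx .
\]
% Both forms are closed:
\[
  d(df) = 0, \qquad
  d\big((df)^{*}\big) = (2 - 2)\, dx \wedge dy = 0 = \Delta f \, dx \wedge dy ,
\]
% consistent with \Delta f = 2 - 2 = 0. By contrast, for g(x,y) = x^2 + y^2:
\[
  d\big((dg)^{*}\big) = \Delta g \, dx \wedge dy = 4\, dx \wedge dy \neq 0 ,
\]
% so dg is closed but (dg)^{*} is not, and dg is not a harmonic differential.
```

The second computation also illustrates the identity d((df)∗) = Δf dx dy stated above: the failure of (df)∗ to be closed is measured exactly by the Laplacian of f.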
Harmonic differential
[ "Mathematics" ]
419
[ "Mathematical analysis" ]
26,702,096
https://en.wikipedia.org/wiki/Matsumoto%20zeta%20function
In mathematics, Matsumoto zeta functions are a type of zeta function introduced by Kohji Matsumoto in 1990. They are given by an Euler product of the form φ(s) = ∏p Ap(p^−s)^−1, where the product runs over the primes p and each Ap is a polynomial. References Zeta and L-functions
Matsumoto zeta function
[ "Mathematics" ]
50
[ "Number theory stubs", "Number theory" ]
49,662,809
https://en.wikipedia.org/wiki/Whole%20genome%20bisulfite%20sequencing
Whole genome bisulfite sequencing is a next-generation sequencing technology used to determine the DNA methylation status of single cytosines by treating the DNA with sodium bisulfite before high-throughput DNA sequencing. The DNA methylation status at various genes can reveal information regarding gene regulation and transcriptional activities. This technique was developed in 2009 along with reduced representation bisulfite sequencing, after bisulfite sequencing became the gold standard for DNA methylation analysis. Whole genome bisulfite sequencing measures single-cytosine methylation levels genome-wide and directly estimates the ratio of molecules methylated rather than enrichment levels. Currently, this technique has recognized and tested approximately 95% of all cytosines in known genomes. With the improvement of library preparation methods and next-generation sequencing technology over the past decade, whole genome bisulfite sequencing has become an increasingly widespread and informative method for analyzing DNA methylation in epigenome-wide studies. History Prior to the development of whole genome bisulfite sequencing, genome methylation analysis relied heavily on early, non-specific differential methods such as paper chromatography, high-performance liquid chromatography, and thin-layer chromatography to analyze methylation profiles. These methods were limited by the inability to amplify methylated DNA via the polymerase chain reaction in vitro without loss of methylation status. As a result, many of these early methods relied on detecting and analyzing naturally occurring methylated cytosines in vivo rather than chemically methylated cytosines. In 1970, a breakthrough occurred when it was discovered that treating DNA with sodium bisulfite deaminates cytosine residues into uracil. In the following decade, this discovery led to the revelation that unmethylated cytosine reacts much faster to sodium bisulfite treatment than does 5-methylcytosine. This difference in reaction rates created the possibility of identifying chemical changes in DNA as an easily detectable genetic marker. Whole genome bisulfite sequencing was derived as a combination of this bisulfite treatment and next-generation sequencing technology, such as shotgun sequencing. The whole genome sequencing technique was first applied to DNA methylation mapping at single-nucleotide resolution in Arabidopsis thaliana in 2008, and shortly after, in 2009, the first single-base-resolution DNA methylation map of the entire human genome was created using whole genome bisulfite sequencing. Since its development, many variant protocols of whole genome bisulfite sequencing have been developed, aiming to improve the efficiency and efficacy of its single-base mapping. As the costs of next-generation sequencing have decreased, whole genome bisulfite sequencing has become more widely used in clinical and experimental research, and multiple public datasets of genomic data have been established. Method The following steps are derived from one potential workflow of conventional whole genome bisulfite sequencing: target DNA extraction, bisulfite conversion, library amplification, and bioinformatics analysis. However, various sequencing systems and analysis tools often adapt the technical parameters and order of the following step processes in order to optimize assay coverage and efficacy.
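The principle that underlies the whole workflow, namely that an unmethylated C reads as T after conversion and PCR while a methylated C stays C, can be made concrete with a minimal Python sketch. The sequence and methylation states below are invented purely for illustration.

```python
# Toy model of bisulfite conversion followed by PCR readout.
def bisulfite_convert(sequence, methylated_positions):
    """Return the sequence as it would read after bisulfite treatment and PCR."""
    out = []
    for i, base in enumerate(sequence):
        if base == "C" and i not in methylated_positions:
            out.append("T")      # unmethylated C deaminated to U, amplified as T
        else:
            out.append(base)     # 5-methylcytosine resists conversion
    return "".join(out)

original = "ACGTCCGATC"
converted = bisulfite_convert(original, methylated_positions={1})
print(converted)                 # "ACGTTTGATT": only the C at index 1 survives

# Comparing the converted read against the reference reveals methylation:
# every reference C that still reads C was methylated.
calls = [i for i, (r, c) in enumerate(zip(original, converted)) if r == "C" and c == "C"]
print("methylated cytosines at positions:", calls)
```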
DNA extraction Library preparation protocols undergo DNA fragmentation, end repair, dA-tailing, and adapter ligation prior to bisulfite treatment and library amplification. Standard fragmentation under high-throughput technology such as the Illumina Genome Analyzer and Solexa requires nebulization to generate fragments that range from 0 to 1200 base pairs. After fragmentation, end-repair enzymes and complementary adapters are applied to the DNA in an end-prep polymerase chain reaction and an adapter ligation reaction, respectively. Size selection occurs before the DNA is treated with sodium bisulfite. Conventional methods of eukaryotic DNA preparation during sequencing use a wide range of DNA input amounts, varying from as little as 10 ng for novel NGS library alternatives, such as the tagmentation approach, to as much as 500–1000 ng of DNA as sample input. Bisulfite conversion The adapter-ligated DNA sample is treated with sodium bisulfite, a chemical compound that converts unmethylated cytosines into uracil, at low pH and high temperature. The chemical reaction is depicted in Figure 1, where sulfonation occurs at the carbon-6 position of cytosine to produce the intermediate cytosine sulfonate. This intermediate then undergoes irreversible hydrolytic deamination to create uracil sulfonate. Under alkaline conditions, uracil sulfonate desulfonates to generate uracil. This enables methylation detection by distinguishing the methylated cytosines (5-methylcytosine), which resist bisulfite treatment, from uracil. During amplification by polymerase chain reaction, the uracils are converted into thymines. Methylated cytosines are still recognized as cytosines. Their locations are then identified by comparison of the bisulfite-treated and original DNA sequences. Following bisulfite treatment, purification of the sample is required to remove unwanted products including bisulfite salts. Library amplification In order to amplify the epigenome library, bisulfite-treated DNA is primed to generate DNA with a specific tagging sequence. The 3' end of this sequence is then tagged again, creating DNA fragments with markers on either end. These fragments are amplified in a final polymerase chain reaction, after which the library is ready for sequencing-by-synthesis. This is demonstrated in Figure 2, in which the high-throughput sequencing system developed by the biotechnology company Illumina performs comprehensive assays based on sequencing-by-synthesis of base pairs. Bioinformatics analysis Following library amplification, a series of analyses can be performed on the expanded library to determine various methylation characteristics or map a genome-wide methylation profile. One such analysis aligns the new reads against the reference genome in order to directly compare locations of methylated cytosines and C-T mismatches. This requires software such as SOAP for side-by-side comparison of the genomes. Another potential sequencing analysis is methylated cytosine calling, which computes methylated cytosine ratios by mapping probabilities based on read quality. This helps determine methylated cytosine locations across the genome. Finally, global trends of the methylome can be analyzed by calculating the distribution ratios of CG, CHG, and CHH in methylated cytosines across the genome. These ratios can reflect features of whole genome methylation maps of certain species.
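As a toy illustration of the methylated-cytosine-calling step described above, the following Python sketch computes per-site methylation ratios from counts of C-supporting and T-supporting reads. The counts and the depth threshold are invented; real callers additionally model read quality and mapping probability.

```python
# Per-cytosine methylation calling from aligned bisulfite reads: reads
# showing C support methylation, reads showing T support conversion.
def methylation_ratio(c_count, t_count, min_depth=5):
    """Fraction of molecules methylated at one cytosine, or None when
    coverage is below the minimum depth for a reliable call."""
    depth = c_count + t_count
    if depth < min_depth:
        return None
    return c_count / depth

site_counts = {              # position -> (reads with C, reads with T); invented
    1042: (27, 3),           # heavily methylated site
    2310: (2, 28),           # mostly unmethylated site
    5077: (1, 2),            # too shallow to call
}
for pos, (c, t) in sorted(site_counts.items()):
    ratio = methylation_ratio(c, t)
    print(pos, "insufficient coverage" if ratio is None else f"{ratio:.0%} methylated")
```

Because the ratio is computed per molecule rather than per enrichment signal, this directly reflects the "ratio of molecules methylated" that distinguishes whole genome bisulfite sequencing from enrichment-based assays.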
Applications Due to its ability to screen methylation status at single-nucleotide resolution across a given genome, whole genome bisulfite sequencing has become increasingly promising in aiding fundamental epigenomics research, novel hypotheses on DNA methylation, and investigations in future large-scale epidemiological studies. DNA methylation The whole genome bisulfite sequencing technique is capable of sensitive cytosine-methylation detection within specific sequences across an entire genome, which increases its potential to identify specific DNA methylation sites and their relation to certain gene expression patterns. The use of whole genome bisulfite sequencing to create the first human DNA methylome in 2009 also helped identify a significant ratio of non-CG methylation. As a result, multiple single-base resolution methylomes of the human genome continue to be produced in order to identify the role of intragenic DNA methylation in gene expression and regulation. Future studies aim to use whole genome bisulfite sequencing to investigate the role DNA methylation plays in diverse cellular processes such as cellular differentiation, embryogenesis, X-inactivation, genomic imprinting, and tumorigenesis. Single-nucleotide maps have already been sequenced for two human cell lines, H1 human embryonic stem cells and IMR90 fetal lung fibroblasts, in order to study patterns of non-CG methylation in human cells. Developmental biology Whole genome bisulfite sequencing has also been applied to developmental biology studies, in which non-CG methylation was discovered to be prevalent in pluripotent stem cells and oocytes. This technique helped researchers discover that non-CG methylation accumulates during oocyte growth and covers over half of all methylation in mouse germinal vesicle oocytes. Similarly, in plants, whole genome bisulfite sequencing was used to examine CG, CHH, and CHG methylation. It was then discovered that the plant germline conserves CG and CHG methylation while losing CHH methylation in microspores and sperm cells. Other fields The unlimited resources provided by the approach of an entire genome have spurred many novel hypotheses on how whole genome bisulfite sequencing could be used in other various fields including disease diagnosis and forensic science. Studies have shown that whole genome bisulfite sequencing can detect abnormal methylation, or more specifically hypermethylated suppressor genes, as often seen in cancers including leukemia. Additionally, whole genome bisulfite sequencing has been applied to blood spot samples in forensic investigations to generate high-quality DNA methylation analyses from dried stains. Limitations Technical concerns The widespread use of whole genome bisulfite sequencing has been primarily limited by its high cost, complex data output, and minimum required coverage. Due to the high amount and subsequent cost of DNA input, many studies using whole genome bisulfite sequencing assays are conducted with few or no biological replicates. For human samples, the US National Institutes of Health (NIH) Roadmap Epigenomics Project recommends a minimum of 30x sequencing coverage, approximately 80 million aligned, high-quality reads, to achieve accurate results.
Consequently, large-scale studies for genome-wide methylation profiling remain less cost-effective, often requiring the entire genome to be re-sequenced multiple times for every experiment. Current studies are being conducted to reduce the conventional minimum coverage requirements while maintaining mapping accuracy. Finally, the technique is also limited by the complexity of the data and the lack of sufficiently advanced analytical tools for the downstream computational requirements. The current bioinformatics requirements for accurate data interpretation are ahead of existing technology, which stalls the accessibility of sequencing results to the general public. Biases and over-representation of DNA methylation Additionally, there are biological limitations concerning various steps in the standard protocol, particularly in the library preparation method. One of the biggest concerns is the potential for bias in the base composition of sequences and over-representation of methylated DNA data following bioinformatics analyses. Bias can arise from multiple unintended effects of bisulfite conversion, including DNA degradation. This degradation can cause uneven sequence coverage by misrepresenting genomic sequences and overestimating 5-methylcytosine values. Additionally, the bisulfite conversion process only distinguishes unmethylated cytosine from 5-methylcytosine. As a result, specificity between 5-methylcytosine and 5-hydroxymethylcytosine is limited. Another potential source of bias arises from polymerase chain reaction amplification of the library, which affects sequences with highly skewed base compositions due to high rates of polymerase sequence errors in high-AT-content, bisulfite-converted DNA. See also Reduced representation bisulfite sequencing DNA methylation Shotgun sequencing ChIP-sequencing References DNA sequencing
Whole genome bisulfite sequencing
[ "Chemistry", "Biology" ]
2,383
[ "Molecular biology techniques", "DNA sequencing" ]
46,736,435
https://en.wikipedia.org/wiki/Poincar%C3%A9%E2%80%93Miranda%20theorem
In mathematics, the Poincaré–Miranda theorem is a generalization of the intermediate value theorem, from a single function in a single dimension, to functions in n dimensions. It says the following: Consider n continuous, real-valued functions of n variables, f1, ..., fn. Assume that for each variable xi, the function fi is nonpositive when xi = 0 and nonnegative when xi = 1. Then there is a point in the n-dimensional unit cube [0, 1]^n at which all functions are simultaneously equal to 0. The theorem is named after Henri Poincaré — who conjectured it in 1883 — and Carlo Miranda — who in 1940 showed that it is equivalent to the Brouwer fixed-point theorem. It is sometimes called the Miranda theorem or the Bolzano–Poincaré–Miranda theorem. Intuitive description To illustrate the Poincaré–Miranda theorem for n = 2 functions, consider a pair of functions (f1, f2) whose domain of definition is [0, 1]² (i.e., the unit square), where f1 is negative on the left boundary and positive on the right boundary, while f2 is negative on the lower boundary and positive on the upper boundary. When we go from left to right along any path, we must pass through a point at which f1 is 0. Therefore, there must be a "wall" separating the left from the right, along which f1 is 0. Similarly, there must be a "wall" separating the top from the bottom, along which f2 is 0. These walls must intersect in a point at which both functions are 0. Generalizations The simplest generalization, as a matter of fact a corollary, of this theorem is the following one. For every variable xi, let ai be any value between the maximum of fi on the face xi = 0 and the minimum of fi on the face xi = 1. Then there is a point in the unit cube at which fi = ai for all i. This statement can be reduced to the original one by a simple translation of axes, where the xi are the coordinates in the domain of the function and the yi = fi(x) are the coordinates in the codomain. By using topological degree theory it is possible to prove yet another generalization. The Poincaré–Miranda theorem has also been generalized to infinite-dimensional spaces. See also The Steinhaus chessboard theorem is a discrete theorem that can be used to prove the Poincaré–Miranda theorem. References Further reading Topology Real analysis Fixed-point theorems
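The theorem can also be illustrated numerically. In the Python sketch below, two hand-picked functions satisfying the sign conditions for n = 2 are scanned over a grid of the unit square to locate an approximate common zero; the functions are arbitrary examples chosen only for the demonstration.

```python
import numpy as np

# f1 <= 0 on the face x1 = 0 and f1 >= 0 on x1 = 1; likewise f2 in x2.
f1 = lambda x1, x2: x1 - 0.5 + 0.2 * np.sin(3 * x2)
f2 = lambda x1, x2: x2 - 0.5 - 0.2 * np.cos(3 * x1)

grid = np.linspace(0.0, 1.0, 1001)
x1, x2 = np.meshgrid(grid, grid)
values = np.abs(f1(x1, x2)) + np.abs(f2(x1, x2))

# The theorem guarantees a simultaneous zero exists; grid search finds it.
i, j = np.unravel_index(np.argmin(values), values.shape)
print("approximate common zero:", x1[i, j], x2[i, j])
print("f1, f2 there:", f1(x1[i, j], x2[i, j]), f2(x1[i, j], x2[i, j]))
```

The printed residuals are close to zero, as the theorem predicts for any continuous pair of functions meeting the boundary sign conditions.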
Poincaré–Miranda theorem
[ "Physics", "Mathematics" ]
487
[ "Theorems in mathematical analysis", "Fixed-point theorems", "Theorems in topology", "Topology", "Space", "Geometry", "Spacetime" ]
46,737,135
https://en.wikipedia.org/wiki/SPEARpesticides
SPEARpesticides (SPEcies At Risk) is a trait-based biological indicator system for streams which quantitatively links pesticide contamination to the composition of macroinvertebrate communities. The approach uses species traits that characterize the ecological requirements posed by pesticide contamination in running waters. Therefore, it is highly specific and only slightly influenced by other environmental factors. SPEARpesticides is linked to the quality classes of the EU Water Framework Directive (WFD). History SPEARpesticides was first developed for Central Germany and later updated. It has been adapted and validated for streams and mesocosms worldwide, with studies in Argentina, Australia, Denmark, Finland, France, Germany, Kenya, Switzerland, the USA and Russia, as well as in mesocosm experiments, and it provides the first ecotoxicological approach to specifically determine the ecological effects of pesticides on aquatic invertebrate communities. Calculation SPEARpesticides estimates pesticide effects and contamination. The calculation is based on monitoring data of invertebrate communities as ascertained for the EU Water Framework Directive (WFD). A simplified version of SPEARpesticides is included in the ASTERICS software for assessing the ecological quality of rivers. A detailed analysis is enabled by the free SPEAR Calculator, which provides the most recent information on species traits and allows specific user settings. The SPEARpesticides index is computed as the relative abundance of vulnerable 'SPecies At Risk' (SPEAR) to be affected by pesticides. Relevant species traits comprise the physiological sensitivity towards pesticides, generation time, migration ability and exposure probability. The indicator value of SPEARpesticides at a sampling site is calculated as follows: SPEARpesticides [%] = 100 · (Σi=1..n log(4xi + 1) · y) / (Σi=1..n log(4xi + 1)), with n = number of taxa; xi = abundance of taxon i; y = 1 if taxon i is classified as SPEAR-sensitive; y = 0 if taxon i is classified as SPEAR-insensitive. An application is available for the calculation. Web address of SPEAR calculator References Bioindicators Water quality indicators Pesticides Water pollution
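A minimal Python sketch of the calculation, using the formula as reconstructed above, follows. The taxa, abundances and at-risk classifications are invented for illustration; real classifications come from the SPEAR trait database distributed with the SPEAR Calculator.

```python
import math

sample = [
    # (taxon, abundance x_i, at-risk classification y)
    ("Taxon A (sensitive mayfly)",   120, 1),
    ("Taxon B (sensitive amphipod)",  60, 1),
    ("Taxon C (tolerant midge)",     300, 0),
    ("Taxon D (tolerant snail)",      15, 0),
]

# Log-transformed abundances of at-risk taxa relative to all taxa,
# following the index formula reconstructed in the text above.
numerator = sum(math.log10(4 * x + 1) * y for _, x, y in sample)
denominator = sum(math.log10(4 * x + 1) for _, x, y in sample)
spear_percent = 100.0 * numerator / denominator

print(f"SPEARpesticides = {spear_percent:.1f}% species at risk")
```

The log transform damps the influence of a single very abundant taxon, which is why the index tracks community composition rather than raw counts.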
SPEARpesticides
[ "Chemistry", "Biology", "Environmental_science" ]
406
[ "Pesticides", "Bioindicators", "Toxicology", "Environmental chemistry", "Water pollution", "Water quality indicators", "Biocides" ]
43,570,208
https://en.wikipedia.org/wiki/TNO%20intestinal%20model
TNO (gastro-)Intestinal Models ("TIM") is a system of models mimicking the digestive tract. The system was developed by TNO, the Netherlands Organisation for Applied Scientific Research. The models are dynamic, computer-controlled, multi-compartmental systems with adjustable parameters for the physiological conditions of the stomach and intestine. Temperature, peristalsis, bile secretion, and the secretion of saliva, stomach and pancreas enzymes are all fully adjustable. The TIM systems are used to study the behavior of oral products during transit through the stomach, the small intestine and the large intestine. Commonly performed studies concern the digestibility of food and food components, and the bioaccessibility for absorption of pharmaceutical compounds, proteins, fat, minerals and (water- and fat-soluble) vitamins. There are different models for the stomach and small intestine (TIM-1 and Tiny-TIM) and a model simulating the physiological conditions of the colon (TIM-2). The TIM-1 system consists of a stomach compartment and three compartments for the small intestine: the duodenum, jejunum and ileum. The Tiny-TIM system consists of a stomach compartment and a single compartment for the small intestine. Samples can be harvested for analysis from any compartment of these models at any time. TIM-2 simulates the colon, containing the microbiota as found in the human colon. This model serves as a tool to study the fermentation of non-digestible food components (fibers and prebiotics) and the release of drugs specifically targeted to the colon. References External links Website of TNO Triskelion YouTube movie about TIM pharma YouTube movie about TIM food and nutrition Digestive system Drug discovery Alternatives_to_animal_testing
TNO intestinal model
[ "Chemistry", "Biology" ]
367
[ "Digestive system", "Animal testing", "Life sciences industry", "Drug discovery", "Alternatives to animal testing", "Organ systems", "Medicinal chemistry" ]
43,571,383
https://en.wikipedia.org/wiki/Two-variable%20logic
In mathematical logic and computer science, two-variable logic is the fragment of first-order logic where formulae can be written using only two different variables. This fragment is usually studied without function symbols. Decidability Some important problems about two-variable logic, such as satisfiability and finite satisfiability, are decidable. This result generalizes results about the decidability of fragments of two-variable logic, such as certain description logics; however, some fragments of two-variable logic enjoy a much lower computational complexity for their satisfiability problems. By contrast, for the three-variable fragment of first-order logic without function symbols, satisfiability is undecidable. Counting quantifiers The two-variable fragment of first-order logic with no function symbols is known to be decidable even with the addition of counting quantifiers, and thus of uniqueness quantification. This is a more powerful result, as counting quantifiers for high numerical values are not expressible in that logic. Counting quantifiers actually improve the expressiveness of finite-variable logics, as they allow one to say that there is a node with n neighbors, namely by the formula ∃x ∃^{≥n}y E(x, y). Without counting quantifiers, n + 1 variables are needed for the same formula. Connection to the Weisfeiler–Leman algorithm There is a strong connection between two-variable logic and the Weisfeiler–Leman (or color refinement) algorithm. Given two graphs, any two nodes have the same stable color in color refinement if and only if they have the same type, that is, they satisfy the same formulas in two-variable logic with counting. References Model theory Systems of formal logic
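The color refinement algorithm mentioned above is easy to state in code. The following Python sketch iteratively refines node colors by the multiset of neighbor colors until the coloring is stable; the example graph is arbitrary and chosen only for illustration.

```python
from collections import Counter

def color_refinement(adjacency):
    """1-dimensional Weisfeiler-Leman: nodes end with the same stable color
    exactly when they satisfy the same formulas of two-variable logic with
    counting."""
    colors = {v: 0 for v in adjacency}              # start with a single color
    while True:
        # New signature = old color plus the multiset of neighbors' colors.
        signatures = {
            v: (colors[v],
                tuple(sorted(Counter(colors[u] for u in adjacency[v]).items())))
            for v in adjacency
        }
        # Canonically renumber the signatures to small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adjacency}
        if new_colors == colors:
            return colors                           # stable coloring reached
        colors = new_colors

# A path on four nodes: the two endpoints receive one color (degree 1),
# the two middle nodes another (degree 2).
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(color_refinement(path))   # {1: 0, 2: 1, 3: 1, 4: 0}
```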
Two-variable logic
[ "Mathematics" ]
354
[ "Mathematical logic", "Model theory" ]
43,571,557
https://en.wikipedia.org/wiki/Fragment%20%28logic%29
In mathematical logic, a fragment of a logical language or theory is a subset of this logical language obtained by imposing syntactical restrictions on the language. Hence, the well-formed formulae of the fragment are a subset of those in the original logic. However, the semantics of the formulae in the fragment and in the logic coincide, and any formula of the fragment can be expressed in the original logic. The computational complexity of tasks such as satisfiability or model checking for the logical fragment can be no higher than that of the same tasks in the original logic, as there is a reduction from the first problem to the other. An important problem in computational logic is to determine fragments of well-known logics, such as first-order logic, that are as expressive as possible yet still decidable, or, more strongly, of low computational complexity. The field of descriptive complexity theory aims at establishing a link between logics and computational complexity theory by identifying logical fragments that exactly capture certain complexity classes. References Mathematical logic
Fragment (logic)
[ "Mathematics" ]
203
[ "Mathematical logic" ]
43,576,000
https://en.wikipedia.org/wiki/Temporary%20adjustments%20of%20theodolites
Temporary adjustments are a set of operations which are performed on a theodolite to make it ready for taking observations. These include its initial setting up on a tripod or other stand, centering, levelling up and focusing of the eyepiece. Initial setting The initial setting operation includes fixing the theodolite on a tripod, along with approximate levelling and centering over the station mark. For setting up the instrument, the tripod is placed over the station with its legs widely spread, so that the centre of the tripod head lies above the station point and the head is approximately level (by eye estimation). The instrument is then fixed to the tripod by screwing through the trivet. The height of the instrument should be such that the observer can see through the telescope conveniently. After this, a plumb bob is suspended from the bottom of the instrument, and it should approximately align with the station mark. Centering Centering means bringing the vertical axis of the theodolite exactly over the station mark. Exact centering is done by using the shifting head of the instrument. For this, first the screw-clamping ring of the sliding head is loosened, and the upper plate of the shifting head is slid over the lower one until the plumb bob is exactly over the station mark. After exact centering, the screw-clamping ring is tightened. This can be done by means of a forced centering plate or tribrach. An optical or laser plummet is normally used for the most accurate setting. The centering and levelling of the instrument are interactive and iterative; a re-levelling may change the centering, so each error is eliminated successively until it is negligible. Levelling Levelling of an instrument is done to adjust its vertical axis with respect to the apparent force of gravity at the station. For two spirit vials at right angles: Bring one of the level tubes parallel to any two of the foot screws, by rotating the upper part of the instrument. The bubble is brought to the centre of the level tube by rotating both of these foot screws either inward or outward. The bubble moves in the same direction as the left thumb. The bubble of the other level tube is then brought to the centre of that level tube by rotating the third foot screw either inward or outward. [In step 1 itself, the other plate level will be parallel to the line joining the third foot screw and the centre of the line joining the previous two foot screws.] Repeat step 2 and step 3 in the same quadrant till both bubbles remain central. By rotating the upper part of the instrument through 180°, the level tube is brought parallel to the first two foot screws in reverse order. The bubble will remain in the centre if the instrument is in permanent adjustment. Otherwise, repeat the whole process starting from step 1 to step 5. The same principle applies for a bulls-eye level: Bring the level parallel to any two of the foot screws, by rotating the upper part of the instrument. The bubble is brought to the centre by rotating both of these foot screws either inward or outward. Rotate the upper part of the instrument through 180°, so the level is over the remaining foot screw. The bubble will remain in the centre if the instrument is in permanent adjustment. If not, adjust this screw to halve the error. Then rotate back through 180° and check the error. Adjust those screws to halve the residual error. Continue until the bubble is always central on the ring.
Focusing To obtain an accurate, clear sighting, the cross hairs should be in focus; adjust the eyepiece to do this. Focusing of the eyepiece lens For focusing of the eyepiece, point the telescope to the sky or hold a piece of white paper in front of the telescope. Move the eyepiece in and out until a distinct, sharp black image of the cross-hairs is seen. This confirms proper focusing. To clearly view the object being sighted, focus the objective lens. Focusing of the objective lens This is done for each independent observation to bring the image of the object into the plane of the cross hairs. It includes the following steps: First, direct the telescope towards the object for observation. Next, turn the focusing screw until the image of the object appears clear and sharp as the observer looks through the properly focused eyepiece. If focusing has been done properly, there will be no parallax, i.e., there will be no apparent movement of the image relative to the cross hairs if the observer moves his eye from side to side or from top to bottom. See also Permanent adjustments of theodolite Tape (surveying) Ranging rods Survey camp Local attraction References Surveying Civil engineering Engineering academics
Temporary adjustments of theodolites
[ "Engineering" ]
953
[ "Construction", "Surveying", "Civil engineering" ]
43,579,996
https://en.wikipedia.org/wiki/Permanent%20adjustments%20of%20theodolites
The permanent adjustments of theodolites are made to establish a fixed relationship between the instrument's fundamental lines. The fundamental lines or axes of a transit theodolite include the following: Vertical axis Axis of plate levels Axis of telescope Line of collimation Horizontal axis Axis of altitude bubble These adjustments, once made, last for a long time. They are important for the accuracy of observations taken with the instrument. The permanent adjustments in the case of a transit theodolite are: Horizontal axis adjustment. The horizontal axis must be perpendicular to the vertical axis. Vertical circle index adjustment. The vertical circle, read on its vernier, must read zero when the line of collimation is horizontal. Adjustment of altitude level. The axis of the altitude level must be parallel to the line of collimation. Collimation adjustment. The line of collimation or line of sight should coincide with the axis of the telescope. The line of sight should also be perpendicular to the horizontal axis at its intersection with the vertical axis. Also, the optical axis, the axis of the objective slide, and the line of sight should coincide. Adjustment of horizontal plate levels. The axis of the plate levels must be perpendicular to the vertical axis. See also Temporary adjustments of theodolite Ranging rods Tape (surveying) References Surveying Civil engineering
Permanent adjustments of theodolites
[ "Engineering" ]
261
[ "Construction", "Surveying", "Civil engineering" ]
39,379,960
https://en.wikipedia.org/wiki/De-extinction
De-extinction (also known as resurrection biology, or species revivalism) is the process of generating an organism that either resembles or is an extinct species. There are several ways to carry out the process of de-extinction. Cloning is the most widely proposed method, although genome editing and selective breeding have also been considered. Similar techniques have been applied to certain endangered species in the hope of boosting their genetic diversity. Of the three, cloning is the only method that would provide an animal with the same genetic identity. There are benefits and drawbacks to the process of de-extinction, ranging from technological advancements to ethical issues.

Methods

Cloning
Cloning is a commonly suggested method for the potential restoration of an extinct species. It can be done by extracting the nucleus from a preserved cell of the extinct species and transferring it into an enucleated egg of that species' nearest living relative. The egg can then be implanted in a host from the extinct species' nearest living relative. This method can only be used when a preserved cell is available, meaning it would be most feasible for recently extinct species. Cloning has been used by scientists since the 1950s. One of the best-known clones is Dolly the sheep. Dolly was born in the mid-1990s and lived normally until the abrupt midlife onset of health complications resembling premature aging, which led to her death. Other cloned animal species include domestic cats, dogs, pigs, and horses.

Genome editing
Genome editing has been advancing rapidly with the help of the CRISPR/Cas systems, particularly CRISPR/Cas9. The CRISPR/Cas9 system was originally discovered as part of the bacterial immune system. Viral DNA injected into the bacterium becomes incorporated into the bacterial chromosome at specific regions called clustered regularly interspaced short palindromic repeats, otherwise known as CRISPR. Since the viral DNA lies within the chromosome, it is transcribed into RNA, and Cas9 binds to this RNA. Cas9 can then recognize the matching foreign sequence and cleave it. This discovery was crucial because the Cas protein can be used as molecular scissors in the genome-editing process (a toy illustration of the targeting step appears below). By using cells from a species closely related to the extinct species, genome editing can play a role in the de-extinction process. Germ cells may be edited directly, so that the eggs and sperm produced by the extant parent species will produce offspring of the extinct species, or somatic cells may be edited and transferred via somatic cell nuclear transfer. The result is an animal which is not completely the extinct species, but rather a hybrid of the extinct species and the closely related, non-extinct species. Because it is possible to sequence and assemble the genome of extinct organisms from highly degraded tissue, this technique enables scientists to pursue de-extinction in a wider array of species, including those for which no well-preserved remains exist. However, the older and more degraded the tissue from the extinct species is, the more fragmented the resulting DNA will be, making genome assembly more challenging.

Back-breeding
Back breeding is a form of selective breeding. Whereas conventional selective breeding selects for traits that advance the breed, back breeding selects for an ancestral characteristic that has become rare in the species.
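The targeting step described under genome editing can be caricatured as string matching. The fragment below is a toy illustration only: the genome and guide sequences are invented, and real Cas9 targeting involves far more than exact matching. It relies on two widely reported facts about SpCas9: it uses a roughly 20-nucleotide guide adjacent to an "NGG" PAM, and it cuts about 3 base pairs upstream of the PAM.

```python
import re

def find_cas9_cut_sites(genome, guide):
    """Return cut positions where `guide` is immediately followed by an NGG PAM."""
    sites = []
    for m in re.finditer(re.escape(guide) + r"(?=[ACGT]GG)", genome):
        sites.append(m.end() - 3)  # blunt cut ~3 bp on the 5' side of the PAM
    return sites

# Hypothetical sequences for the example, not real data:
genome = "TTACGGATCATGCTAGCTTAGCATCGTGGAGCTAGCTAGC"
guide = "ATCATGCTAGCTTAGCATCG"  # 20-nt spacer
print(find_cas9_cut_sites(genome, guide))  # -> [23]
```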
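The back-breeding approach just introduced can likewise be illustrated with a toy selection model. The sketch below is an illustrative assumption, not a published breeding plan: it uses a simple one-locus fitness ratio, with `s` standing for the strength of artificial selection against the derived allele.

```python
def backbreed(p, s, generations):
    """Track the ancestral-allele frequency p under selection strength s."""
    history = [p]
    for _ in range(generations):
        p = p / (p + (1 - p) * (1 - s))  # relative fitness 1 vs 1 - s
        history.append(p)
    return history

for p0 in (0.0, 0.01, 0.2):
    final = backbreed(p0, s=0.5, generations=15)[-1]
    print(f"starting frequency {p0:>4}: after 15 generations -> {final:.3f}")
# The p0 = 0.0 case shows the limitation: if the ancestral allele is
# entirely absent, no amount of selective breeding can recover it.
```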
Back breeding can recreate the traits of an extinct species, but the genome will differ from that of the original species. The method is also contingent on the ancestral trait still being present in the population at some frequency, as the second sketch above illustrates. Back breeding is thus a form of artificial selection: the deliberate selective breeding of domestic animals in an attempt to achieve a breed whose phenotype resembles a wild-type ancestor, usually one that has gone extinct.

Iterative evolution
A natural process of de-extinction is iterative evolution. This occurs when a species becomes extinct, but after some time a different species evolves into an almost identical creature. For example, the Aldabra rail was a flightless bird that lived on the island of Aldabra. It had evolved some time in the past from the flighted white-throated rail, but became extinct about 136,000 years ago when an unknown event caused sea levels to rise. About 100,000 years ago, sea levels dropped and the island reappeared with no fauna. The white-throated rail recolonized the island but soon evolved into a flightless species physically identical to the extinct one.

Herbarium specimens for de-extincting plants
Not all extinct plants have herbarium specimens that contain seeds. For those that do, there is ongoing discussion on how to coax barely alive embryos back to life. See Judean date palm and tsori.

In-vitro fertilisation and artificial insemination
In-vitro fertilisation and artificial insemination are assisted reproduction technologies commonly used to treat infertility in humans. However, they are also viable options for de-extinction in cases of functional extinction, where all remaining individuals are of the same sex, incapable of reproducing naturally, or suffering from low genetic diversity, as with the northern white rhinoceros, Yangtze giant softshell turtle, Hyophorbe amaricaulis, baiji, and vaquita. For example, viable embryos created from the preserved sperm of deceased males and ova from living females can be implanted into a surrogate of a related species.

Advantages of de-extinction
The technologies being developed for de-extinction could lead to large advances in various fields. Advances in the genetic technologies used to improve the cloning process for de-extinction could be used to prevent endangered species from becoming extinct. By studying revived, previously extinct animals, cures for diseases could be discovered. Revived species may support conservation initiatives by acting as "flagship species" that generate public enthusiasm and funds for conserving entire ecosystems. Prioritising de-extinction could also improve current conservation strategies: conservation measures would initially be necessary to reintroduce a species into the ecosystem, until the revived population could sustain itself in the wild. Reintroduction of an extinct species could also help improve ecosystems that have been degraded by human development. It may further be argued that reviving species driven to extinction by humans is an ethical obligation.

Disadvantages of de-extinction
The reintroduction of extinct species could have a negative impact on extant species and their ecosystems. The extinct species' ecological niche may have been filled in its former habitat, making it an invasive species. This could lead to the extinction of other species through competition for food or other forms of competitive exclusion.
It could also endanger prey species by adding a predator to an environment that previously had few predators. If a species has been extinct for a long period of time, the environment it is reintroduced into could be wildly different from the one in which it was able to survive, and changes brought about by human development could mean that the species would not survive if reintroduced into that ecosystem. A species could also become extinct again after de-extinction if the reasons for its original extinction remain a threat: the woolly mammoth, for example, might be hunted for its ivory, just as elephants are, and could go extinct again. Likewise, if a species is reintroduced into an environment harbouring a disease to which it has no immunity, it could be wiped out by a disease that extant species can survive. De-extinction is also a very expensive process. Bringing back one species can cost millions of dollars, and the money would most likely come from current conservation budgets. Those efforts could be weakened if funding is diverted from conservation into de-extinction, meaning that critically endangered species could start to go extinct faster because the resources needed to maintain their populations are no longer available. Also, since cloning techniques cannot perfectly replicate a species as it existed in the wild, reintroduction may not bring about positive environmental benefits: the revived animals may not fill the same role in the food chain that they did before and therefore may not restore damaged ecosystems.

Current candidate species for de-extinction

Woolly mammoth
The existence of preserved soft tissue and DNA from woolly mammoths (Mammuthus primigenius) has led to the idea that the species could be recreated by scientific means. Two methods have been proposed to achieve this. The first would use the cloning process; however, even the most intact mammoth samples have yielded little usable DNA because of their conditions of preservation, and there is not enough intact DNA to guide the production of an embryo. The second method would involve artificially inseminating an elephant egg cell with preserved mammoth sperm. The resulting offspring would be a hybrid of the mammoth and its closest living relative, the Asian elephant. After several generations of cross-breeding these hybrids, an almost pure woolly mammoth could be produced. However, sperm cells of modern mammals are typically viable for up to 15 years after deep-freezing, which could hinder this method. Whether a hybrid embryo would be carried through the two-year gestation is unknown; in one case, an Asian elephant and an African elephant produced a live calf named Motty, but it died of defects at less than two weeks old. In 2008, a Japanese team found usable DNA in the brains of mice that had been frozen for 16 years; they hope to use similar methods to find usable mammoth DNA. In 2011, Japanese scientists announced plans to clone mammoths within six years. In March 2014, the Russian Association of Medical Anthropologists reported that blood recovered from a frozen mammoth carcass in 2013 would provide a good opportunity for cloning the woolly mammoth.
Another way to create a living woolly mammoth would be to introduce genes from the mammoth genome into the genome of its closest living relative, the Asian elephant, creating hybridized animals with the notable adaptations the mammoth had for living in a much colder environment than modern-day elephants. This is currently being done by a team led by Harvard geneticist George Church. The team has made changes to the elephant genome, adding the genes that gave the woolly mammoth its cold-resistant blood, longer hair, and extra layer of fat. According to geneticist Hendrik Poinar, a revived woolly mammoth or mammoth–elephant hybrid may find suitable habitat in the tundra and taiga forest ecozones. George Church has hypothesized positive effects that bringing back the woolly mammoth could have on the environment, such as the potential for reversing some of the damage caused by global warming. He and his fellow researchers predict that mammoths would eat the dead grass, allowing the sun to reach the spring grass; their weight would break through dense, insulating snow and let cold air reach the soil; and their habit of felling trees would increase the absorption of sunlight. In an editorial condemning de-extinction, Scientific American pointed out that the technologies involved could have secondary applications, specifically to help species on the verge of extinction regain their genetic diversity.

Pyrenean ibex
The Pyrenean ibex (Capra pyrenaica pyrenaica) was a subspecies of Iberian ibex that lived on the Iberian Peninsula. While it was abundant through medieval times, over-hunting in the 19th and 20th centuries led to its demise. In 1999, only a single female named Celia was left alive, in Ordesa National Park. Scientists captured her, took a tissue sample from her ear, collared her, and released her back into the wild, where she lived until she was found dead in 2000, crushed by a fallen tree. In 2003, scientists used the tissue sample to attempt to clone Celia and resurrect the extinct subspecies. They successfully transferred nuclei from her cells into domestic goat egg cells and impregnated 208 female goats, but only one pregnancy came to term. The baby ibex that was born had a lung defect and lived for only seven minutes before suffocating. Nevertheless, her birth was seen as a triumph and is considered the first de-extinction. In late 2013, scientists announced that they would again attempt to resurrect the Pyrenean ibex. A problem to be faced, in addition to the many challenges of reproducing a mammal by cloning, is that only females can be produced by cloning the female individual Celia, and no males exist for those females to reproduce with. This could potentially be addressed by breeding female clones with the closely related Southeastern Spanish ibex, gradually creating a hybrid animal that would eventually bear more resemblance to the Pyrenean ibex than to the Southeastern Spanish ibex.

Aurochs
The aurochs (Bos primigenius) was widespread across Eurasia, North Africa, and the Indian subcontinent during the Pleistocene, but only the European aurochs (B. p. primigenius) survived into historical times. The species features heavily in European cave paintings, such as those at Lascaux and Chauvet in France, and was still widespread during the Roman era.
Following the fall of the Roman Empire, overhunting of the aurochs by nobility caused its population to dwindle to a single group in the Jaktorów forest in Poland, where the last wild individual died in 1627. However, because the aurochs is ancestral to most modern cattle breeds, it could be brought back through selective breeding or back breeding. The first attempt at this was by Heinz and Lutz Heck using modern cattle breeds, which resulted in the creation of Heck cattle. This breed has been introduced to nature preserves across Europe; however, it differs strongly from the aurochs in physical characteristics, and some modern attempts aim to create an animal nearly identical to the aurochs in morphology, behavior, and even genetics. Several projects aim to create a cattle breed similar to the aurochs by selectively breeding primitive cattle breeds over a course of twenty years, producing a self-sufficient bovine grazer living in herds of at least 150 animals in rewilded nature areas across Europe; examples are the Tauros Programme and the separate Taurus Project. The Tauros Programme is partnered with the organization Rewilding Europe to help revert some European natural ecosystems to their prehistoric form. A competing project to recreate the aurochs is the Uruz Project by the True Nature Foundation, which aims to recreate the aurochs through a more efficient breeding strategy using genome editing, in order to decrease the number of generations of breeding needed and to quickly eliminate undesired traits from the population of aurochs-like cattle. It is hoped that aurochs-like cattle will reinvigorate European nature by restoring the ecological role of a keystone species and bringing back the biodiversity that disappeared following the decline of European megafauna, as well as creating new economic opportunities related to European wildlife viewing. Sometime in 2025, the Tauros Programme and Rewilding Europe plan to release their aurochs into the wild in select areas of Europe and to have the species recognised as a protected wildlife species again. In 2026, these animals are planned to be reintroduced to parts of the Scottish Highlands.

Quagga
The quagga (Equus quagga quagga) is a subspecies of the plains zebra that was distinct in being striped on its face and upper torso while its rear abdomen was a solid brown. It was native to South Africa but was wiped out in the wild by overhunting for sport; the last individual died in 1883 in the Amsterdam Zoo. However, since it is technically the same species as the surviving plains zebra, it has been argued that the quagga could be revived through artificial selection. The Quagga Project aims to breed a similar form of zebra by the selective breeding of plains zebras, a process also known as back breeding. It also aims to release these animals into the Western Cape once an animal that fully resembles the quagga is achieved, which could have the benefit of eradicating introduced species of trees such as the Brazilian pepper tree, Tipuana tipu, Acacia saligna, bugweed, camphor tree, stone pine, cluster pine, weeping willow and Acacia mearnsii.

Thylacine
The thylacine (Thylacinus cynocephalus), commonly known as the Tasmanian tiger, was native to the Australian mainland, Tasmania and New Guinea. It is believed to have become extinct in the 20th century. The thylacine had become extremely rare or extinct on the Australian mainland before British settlement of the continent.
The last known thylacine died at the Hobart Zoo on September 7, 1936. He is believed to have died as the result of neglect: locked out of his sheltered sleeping quarters, he was exposed to a rare occurrence of extreme Tasmanian weather, with extreme heat during the day and freezing temperatures at night. Official protection of the species by the Tasmanian government had been introduced on July 10, 1936, roughly 59 days before the last known specimen died in captivity. In December 2017, it was announced in the journal Nature Ecology and Evolution that the full nuclear genome of the thylacine had been successfully sequenced, marking the completion of the critical first step toward de-extinction, a process that began in 2008 with the extraction of DNA samples from a preserved pouch specimen. The thylacine genome was reconstructed using the genome editing method, with the Tasmanian devil serving as the reference for the assembly of the full nuclear genome. Andrew J. Pask of the University of Melbourne has stated that the next step toward de-extinction will be to create a functional genome, which will require extensive research and development; he estimates that a full attempt to resurrect the species may be possible as early as 2027. In August 2022, the University of Melbourne and Colossal Biosciences announced a partnership to accelerate de-extinction of the thylacine via genetic modification of one of its closest living relatives, the fat-tailed dunnart. In October 2024, a 99.9% complete genome of the thylacine was created from a well-preserved skull estimated to be 110 years old. This allowed the full genome of the species to be constructed in January 2025; in the same month, Colossal Biosciences and the University of Melbourne developed an artificial marsupial womb to further accelerate the de-extinction of the thylacine and the conservation of endangered marsupials.

Passenger pigeon
The passenger pigeon (Ectopistes migratorius) numbered in the billions before being wiped out by unsustainable commercial hunting and habitat loss during the early 20th century. The non-profit Revive & Restore obtained passenger pigeon DNA from museum specimens and skins; however, this DNA is degraded because of its age. For this reason, simple cloning would not be an effective way to perform de-extinction for this species, because parts of the genome would be missing. Instead, Revive & Restore focuses on identifying mutations in the DNA that would cause a phenotypic difference between the extinct passenger pigeon and its closest living relative, the band-tailed pigeon. In doing so, they can determine how to modify the DNA of the band-tailed pigeon so that its traits mimic those of the passenger pigeon. In this sense, the de-extinct passenger pigeon would not be genetically identical to the extinct passenger pigeon, but it would have the same traits. In 2015, de-extinct passenger pigeon hybrids were forecast to be ready for captive breeding by 2025 and for release into the wild by 2030. In October 2024, Revive & Restore collaborated with the Applied Ecological Institute to simulate forest disturbances in the American state of Wisconsin to see how trees would react to reintroduced passenger pigeons. The original 2025 goal was not met, with the new goal for reviving the species for captive breeding set for between 2029 and 2032. However, it could take decades for the species to be reintroduced to the wild.
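The comparison step Revive & Restore describes, finding the variants that separate an extinct species from its closest living relative, can be caricatured as a diff over aligned sequences. The sketch below is an illustration only: the sequences are invented, real pipelines align whole genomes, and deciding which variants are functionally relevant is the hard part.

```python
def candidate_edits(extinct_seq, relative_seq):
    """List positions where pre-aligned, equal-length sequences differ
    ('-' marks an alignment gap and is skipped)."""
    return [
        (i, rel, ext)
        for i, (ext, rel) in enumerate(zip(extinct_seq, relative_seq))
        if ext != rel and "-" not in (ext, rel)
    ]

# Hypothetical aligned fragments, not real pigeon data:
passenger = "ATGGCTAAGCTA-CGTTACGA"
band_tailed = "ATGGCTGAGCTAACGTCACGA"
for pos, frm, to in candidate_edits(passenger, band_tailed):
    print(f"position {pos}: edit {frm} -> {to} in the band-tailed pigeon genome")
```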
Bush moa
The bush moa, also known as the little bush moa or lesser moa (Anomalopteryx didiformis), is a slender species of moa, slightly larger than a turkey, that went extinct abruptly around 500–600 years ago following the arrival and proliferation of the Māori people in New Zealand and the introduction of Polynesian dogs. Scientists at Harvard University assembled the first nearly complete genome of the species from toe bones, bringing the species a step closer to de-extinction. The New Zealand politician Trevor Mallard has also previously suggested bringing back a medium-sized species of moa. The proxy for the species would likely be the emu.

Maclear's rat
Maclear's rat (Rattus macleari), also known as the Christmas Island rat, was a large rat endemic to Christmas Island in the Indian Ocean. It is believed Maclear's rat may have been responsible for keeping the population of the Christmas Island red crab in check. The accidental introduction of black rats by the Challenger expedition is thought to have infected Maclear's rats with a disease (possibly a trypanosome), which caused the species' decline; the last recorded sighting was in 1903. In March 2022, researchers found that Maclear's rat shared about 95% of its genes with the living brown rat, sparking hopes of bringing the species back to life. Although scientists were largely successful in using CRISPR technology to edit the DNA of the living species to match that of the extinct one, a few key genes were missing, meaning resurrected rats would not be genetically pure replicas.

Dodo
The dodo (Raphus cucullatus) was a flightless bird endemic to the island of Mauritius in the Indian Ocean. Its numbers declined rapidly due to various factors: a lack of fear, owing to isolation from significant predators; predation by humans and introduced invasive species such as pigs, dogs, cats, rats, and crab-eating macaques; competition for food with invasive species; habitat loss; and the bird's naturally slow reproduction. The last widely accepted recorded sighting was in 1662. Since then, the bird has become a symbol of extinction and is often cited as the primary example of man-made extinction. In January 2023, Colossal Biosciences announced a project to revive the dodo, alongside their previously announced projects to revive the woolly mammoth and thylacine, in hopes of restoring biodiversity to Mauritius and changing the dodo's status from a symbol of extinction to one of de-extinction.

Steller's sea cow
Steller's sea cow was a sirenian endemic to the Bering Sea between Russia and the United States, though it had a much larger range during the Pleistocene. First described by Georg Wilhelm Steller in 1741, it was hunted to extinction within 27 years, its buoyancy making it an easy target for humans hunting it for meat and fat, in addition to an already low population caused by climate change. In 2021, the nuclear genome of the species was sequenced. In late 2022, a group of Russian scientists funded by Sergei Bachin began a project to revive the giant sirenian and reintroduce it to its 18th-century range in order to restore its kelp forest ecosystem. Arctic Sirenia plans to revive the species through genome editing of the dugong, but an artificial womb would be needed to gestate a live animal, owing to the lack of an adequate surrogate species. Ben Lamm of Colossal Biosciences has also expressed a desire to revive the species once his company develops an artificial womb.
Northern white rhinoceros
The northern white rhinoceros or northern white rhino (Ceratotherium simum cottoni) is a subspecies of the white rhinoceros endemic to East and Central Africa south of the Sahara. Due to widespread, uncontrollable poaching and civil warfare in its former range, the subspecies' numbers dropped quickly over the course of the late 1900s and early 2000s. Unlike the majority of potential candidates for de-extinction, the northern white rhinoceros is not extinct but functionally extinct: it is believed to be extinct in the wild, with only two known females left, Najin and Fatu, who reside at the Ol Pejeta Conservancy in Kenya. The BioRescue team, in collaboration with Colossal Biosciences, plans to implant 30 northern white rhinoceros embryos, made from egg cells collected from Najin and Fatu and preserved sperm from dead males, into female southern white rhinoceroses by the end of 2024.

Ivory-billed woodpecker
The ivory-billed woodpecker (Campephilus principalis) is the largest woodpecker endemic to the United States, with a subspecies in Cuba. The species' numbers have declined since the late 1800s due to logging and hunting. Like the northern white rhinoceros, the ivory-billed woodpecker is not completely extinct but functionally extinct, with occasional sightings suggesting that 50 or fewer individuals are left. In October 2024, Colossal Biosciences announced the Colossal Foundation, a non-profit dedicated to the conservation of extant species, with its first projects being the Sumatran rhinoceros, vaquita, red wolf, pink pigeon, northern quoll, and ivory-billed woodpecker. Colossal plans to revive or rediscover the species through genome editing of its closest living relatives, such as the pileated woodpecker, and by using drones and AI to identify any remaining individuals in the wild.

Heath hen
The heath hen (Tympanuchus cupido cupido) was a subspecies of greater prairie chicken endemic to the heathland barrens of coastal North America. It is even speculated that the pilgrims' first Thanksgiving featured this bird as the main course instead of wild turkey. Due to overhunting encouraged by its perceived abundance, the population was extinct in mainland North America by 1870, leaving about 300 individuals on Martha's Vineyard. Despite conservation efforts, the subspecies became extinct in 1932 following the disappearance and presumed death of Booming Ben, its final known member. In the summer of 2014, the non-profit organisation Revive & Restore held a meeting with the community of Martha's Vineyard to announce a project to revive the heath hen in hopes of restoring and maintaining the sandplain grasslands. On April 8, 2020, germ cells were collected from greater prairie chicken eggs at Texas A&M.

Yangtze giant softshell turtle
The Yangtze giant softshell turtle (Rafetus swinhoei) is a softshell turtle endemic to China and Vietnam and possibly the largest living freshwater turtle. Due to factors such as habitat loss, wildlife trafficking, trophy hunting, and the Vietnam War, the species' population has been reduced to only three known male individuals, rendering it functionally extinct, like the northern white rhinoceros and ivory-billed woodpecker. One captive individual lives at Suzhou Zoo in China, and two wild individuals live at Dong Mo Lake in Vietnam.
Efforts to save the species from extinction through various means of assisted reproduction in captivity have been ongoing since 2009, led by Suzhou Zoo and the Turtle Survival Alliance. Despite efforts to breed the turtles naturally, the eggs laid by the final known female were all infertile and unviable. In May 2015, artificial insemination was performed on the species for the first time. In July of the same year, the female laid 89 eggs, but as in all previous natural attempts, they were all unviable. In April 2019, the female individual at the zoo died after another failed artificial insemination attempt. In 2020, a female was discovered in the wild, reigniting hope for the survival of the species; however, this individual was found dead in early 2023. Several searches across China and Vietnam are currently underway to locate female individuals to breed with the final known males or to undergo artificial insemination.

Future potential candidates for de-extinction
A "De-extinction Task Force" was established in April 2014 under the auspices of the Species Survival Commission (SSC) and charged with drafting a set of Guiding Principles on Creating Proxies of Extinct Species for Conservation Benefit, to position the IUCN SSC on the rapidly emerging technological feasibility of creating a proxy of an extinct species.

Avians
Giant moa – The tallest birds to have ever lived, though not as heavy as the elephant bird. Both the northern and southern species became extinct by 1500 due to overhunting by the Polynesian settlers and Māori in New Zealand.
Elephant bird – The heaviest birds to have ever lived, the elephant birds were driven to extinction by the early colonization of Madagascar. Ancient DNA has been obtained from the eggshells but may be too degraded for use in de-extinction.
Carolina parakeet – One of the few parrots indigenous to North America, it was driven to extinction by habitat destruction, overhunting, competition from introduced honeybees, and persecution over crop damage, and was declared extinct following the death of its final known member, Incas, in 1918. Hundreds of specimens with viable DNA still exist in museums around the world, making it a prime candidate for revival. In 2019, the full genome of the Carolina parakeet was sequenced.
Great auk – A flightless bird native to the North Atlantic, similar to the penguin. The great auk went extinct in the 1800s due to overhunting by humans for food. The last two known great auks lived on an island near Iceland and were clubbed to death by sailors; there have been no known sightings since. The great auk has been identified as a good candidate for de-extinction by Revive & Restore, a non-profit organization. Because the great auk is extinct, it cannot be cloned, but its DNA can be used to alter the genome of its closest relative, the razorbill, and the hybrids bred to create a species very similar to the original great auk. The plan is to introduce them back into their original habitat, which they would share with razorbills and puffins, which are also at risk of extinction. This would help restore biodiversity and that part of the ecosystem. Colossal Biosciences has also expressed interest in reviving the species.
Imperial woodpecker – A large, possibly extinct woodpecker endemic to Mexico that has not been seen since 1956 owing to habitat destruction and hunting. The federal government of Mexico has considered the species extinct since 2001, 47 years after the last widely accepted sighting.
However, conservation plans exist should the species be rediscovered or revived.
Cuban macaw – A colourful macaw that was native to Cuba and Isla de la Juventud. It became extinct in the late 19th century due to overhunting, the pet trade, and habitat loss.
Labrador duck – A duck that was native to North America. It became extinct in the late 19th century due to colonisation of its former range combined with an already naturally low population. It is also the first known endemic North American bird species to become extinct following the Columbian Exchange.
Huia – A species of Callaeidae that was native to New Zealand. It became extinct in 1907 due to overhunting by both the Māori and European settlers, habitat loss, and predation by introduced invasive species. In 1999, students of Hastings Boys' High School proposed the de-extinction through cloning of the huia, the school's emblem. The Ngāti Huia tribe approved of the idea, and the de-extinction process would have been performed by the University of Otago with $100,000 in funding from a California-based internet startup. However, due to the poor state of the DNA in the specimens at the Museum of New Zealand Te Papa Tongarewa, a complete huia genome could not be created, making this method of de-extinction improbable.
Moho – An entire genus of songbirds that were native to the islands of Hawaii. The genus became extinct in 1987 following the extinction of its last surviving species, the Kauaʻi ʻōʻō. The reasons for the genus' decline were overhunting for their plumage, habitat loss caused by both the colonisation of Hawaii and natural disasters, mosquito-borne diseases, and predation by introduced invasive species.

Mammals
Caribbean monk seal – A species of monk seal that was native to the Caribbean. It became extinct in 1952 due to poaching and starvation caused by overfishing of its natural prey.
Bluebuck – A species of antelope that was native to South Africa. It was hunted to extinction by Europeans by 1799 or 1800, having had a naturally low population, much like the Labrador duck. In 2024, the nuclear genome of the species was sequenced by the University of Potsdam and Colossal Biosciences. Colossal Biosciences has also expressed interest in reviving the species in the future.
Tarpan – A population of free-ranging horses in Europe that went extinct in 1909. Much like the aurochs, there have been many attempts to breed tarpan-like horses from domestic horses, the first being by the Heck brothers, resulting in the Heck horse. Though it is not a genetic copy, it is claimed to bear many similarities to the tarpan. Other attempts have also been made: a breeder named Harry Hegardt bred a line of horses from American Mustangs, and other breeds of supposedly tarpan-like horses include the Konik and Strobel's horse.
Baiji – A freshwater dolphin native to the Yangtze River in China. Unlike most potential candidates for de-extinction, the baiji is not completely extinct but functionally extinct, with a small wild population threatened by entanglement in nets, collisions with boats, and pollution of the Yangtze River; occasional sightings continue, the most recent in 2024. There are plans to help save the species if a living specimen is found.
Vaquita – The smallest cetacean to have ever lived, endemic to the upper Gulf of California in Mexico.
Similar to the baiji, the vaquita is not completely extinct but functionally extinct, with an estimated eight or fewer individuals left due to entanglement in gillnets set to poach totoaba, a fish whose swim bladder is highly valued on black markets for its perceived medicinal properties. In October 2024, Colossal Biosciences launched the Colossal Foundation, a non-profit dedicated to the conservation of extant species, with one of its first projects being the vaquita. In addition to using technology to monitor the final remaining individuals, they aim to collect tissue samples from vaquitas in order to revive the species if it does become extinct in the near future.

Pleistocene megafauna
Irish elk – The largest deer to have ever lived, formerly inhabiting Eurasia from present-day Ireland to present-day Siberia during the Pleistocene. It became extinct 5–10 thousand years ago, due to suspected overhunting.
Cave lion – A species of Panthera endemic to Eurasia and northwest North America during the Pleistocene. It is estimated that the species died out 14–15 thousand years ago due to climate change and low genetic diversity. The discovery of preserved cubs in the Sakha Republic ignited a project to clone the animal.
Cave bear – A species of bear that was endemic to Eurasia during the Pleistocene. It is estimated to have become extinct 24 thousand years ago due to climate change and suspected competition with early humans.
Cave hyena – A species or subspecies of hyena that was endemic to Eurasia during the Pleistocene. It is estimated that it died out 31 thousand years ago due to competition with early humans and other carnivores and the decreased availability of prey.
Dire wolf – A large canine that was endemic to the Americas during the late Pleistocene and early Holocene. It is estimated that the species became extinct 9,500 years ago during the Quaternary extinction event due to competition with other carnivores and early humans, the extinction of its prey, and climate change. In 1988, the Dire Wolf Project emerged with the goal of reviving the species through the back breeding of domesticated dogs, similar to efforts to revive the aurochs and quagga; however, these animals resemble their extinct relative only physically, not genetically. Colossal Biosciences has also expressed interest in reviving the species through genome editing rather than back breeding.
Castoroides – An entire genus of giant beavers endemic to North America during the Pleistocene. It is unknown how the genus died out, but some suggest that climate change and competition were factors. Beth Shapiro of Colossal Biosciences has expressed interest in reviving a species from this genus.
Steppe bison – The ancestor of all modern bison in North America, formerly found from Western Europe to eastern Beringia in North America during the Late Pleistocene. The discovery of a mummified steppe bison some 9,000 years old could help scientists clone the ancient bison species back, even though the steppe bison would not be the first to be "resurrected". Russian and South Korean scientists are collaborating to clone the steppe bison using DNA preserved in an 8,000-year-old tail, with wood bison, which have themselves been introduced to Yakutia to fulfil a similar niche, as surrogates.
Ground sloths – An extremely diverse group of sloths native to the Americas during the Pleistocene, some growing to the size of modern elephants.
The group died out 11 thousand years ago due to climate change, and some suspect that their size and slowness made them easy targets for early humans.
Woolly rhinoceros – A species of rhinoceros that was endemic to northern Eurasia during the Pleistocene. It is believed to have become extinct as a result of both climate change and overhunting by early humans. In November 2023, scientists managed to sequence the woolly rhinoceros's genome from cave hyena faeces, in addition to the DNA available from frozen specimens. However, the woolly rhinoceros's closest living relative is the critically endangered Sumatran rhinoceros, with an estimated 80 individuals left in the wild, which presents ethical dilemmas similar to those surrounding the woolly mammoth.
Miracinonyx – Also known as American cheetahs, an entire genus of felines that were native to North America during the Pleistocene. It is unknown how the genus went extinct, but some suggest it died out for the same reasons as other North American megafauna: climate change, loss of prey, and competition with early humans and other carnivores.
Columbian mammoth – A species of mammoth that was endemic to North America, across what are now the United States and northern Mexico. The species became extinct 12 thousand years ago during the Quaternary extinction event due to climate change, overhunting by early humans, and habitat loss.
Mastodon – An entire genus of proboscideans that were native to North America from the Miocene to the early Holocene. Like the Columbian mammoth, the genus became extinct about 11,795 to 11,345 years ago due to climate change, overhunting by early humans, and habitat loss.
Arctodus – An entire genus of short-faced bears endemic to North America during the Pleistocene. It is estimated that they became extinct 12 thousand years ago following the death of the last members of Arctodus simus, due to climate change and low genetic diversity. Beth Shapiro of Colossal Biosciences has expressed interest in reviving one of the two species of the genus.

Amphibians
Gastric-brooding frog – An entire genus of ground frogs that were native to Queensland, Australia. They became extinct in the mid-1980s, primarily due to chytridiomycosis. In 2013, scientists in Australia successfully created a living embryo from non-living preserved genetic material, and hope that by using somatic-cell nuclear transfer methods they can produce an embryo that survives to the tadpole stage.

Insects
Xerces blue – A species of butterfly that was native to the Sunset District of San Francisco in the American state of California. It is estimated that the species became extinct in the early 1940s due to the urbanization of its former habitat. Similar species, such as Glaucopsyche lygdamus and the Palos Verdes blue, have been released into the Xerces blue's former range to substitute for its role. On April 15, 2024, the non-profit organisation Revive & Restore announced the early stages of plans to potentially revive the species.

Plants
Paschalococos – A genus of coccoid palm trees that were native to Easter Island, Chile. It is believed to have become extinct around 1650, based on its disappearance from the pollen record.
Hyophorbe amaricaulis – A species of palm of the order Arecales that is native to the island of Mauritius. Unlike the majority of potential candidates, this palm is not completely extinct but functionally extinct: it is extinct in the wild, with only one known specimen left, in the Curepipe Botanic Gardens.
In 2010, there was an attempt to revive the species through in vitro germination, in which embryos were isolated from seeds and grown in tissue culture, but the resulting seedlings lived for only three months.

Successful de-extinctions

Judean date palm
The Judean date palm is a date palm native to Judea that is estimated to have become extinct around the 15th century due to climate change and human activity in the region. In 2005, preserved seeds found in the 1960s excavations of Herod the Great's palace were given to Sarah Sallon by Bar-Ilan University after she proposed an initiative to germinate ancient seeds. Sallon then challenged her friend Elaine Solowey, of the Center for Sustainable Agriculture at the Arava Institute for Environmental Studies, with the task of germinating them. Solowey managed to revive several of the seeds after hydrating them in a common household baby-bottle warmer and treating them with ordinary fertiliser and growth hormones. The first plant grown was named Methuselah, after the longest-lived man in the Bible (the father of Lamech). In 2012, there were plans to crossbreed the male palm with what was considered its closest living relative, the Hayani date of Egypt, to generate fruit by 2022; however, two female Judean date palms have since been sprouted. By 2015 Methuselah had produced pollen that has been used successfully to pollinate female date palms. In June 2021, one of the female plants, Hannah, produced dates. The harvested fruits are being studied to determine their properties and nutritional value. The de-extinct Judean date palms are currently at a kibbutz in Ketura, Israel.

Rastreador Brasileiro
The Rastreador Brasileiro (Brazilian Tracker) is a large scent hound from Brazil that was bred in the 1950s to hunt jaguars and wild pigs. It was declared extinct and delisted by the Fédération Cynologique Internationale (FCI) and the Confederação Brasileira de Cinofilia in 1973, following tick-borne diseases and subsequent poisoning by the insecticides used in an attempt to get rid of the ectoparasites. In the early 2000s, a group named Grupo de Apoio ao Resgate do Rastreador Brasileiro (Brazilian Tracker Rescue Support Group), dedicated to reviving the breed and having it relisted by the Confederação Brasileira de Cinofilia, began work to locate dogs in Brazil carrying the genetics of the extinct breed in order to breed a purebred Rastreador Brasileiro. In 2013, the breed was restored through preservation breeding from descendants of the last original dogs and was relisted by the FCI.

Floreana giant tortoise
The Floreana giant tortoise (Chelonoidis niger niger) is a subspecies of the Galápagos tortoise endemic to Floreana Island, Ecuador, believed to have become extinct by 1850 due to overexploitation, predation, and habitat degradation by sailors and invasive species such as feral livestock, rodents, and stray dogs and cats. A deliberate wildfire started in 1820 by Thomas Chappel, a crew member of the Essex, is also cited as a reason for the subspecies' initial decline. In 2012, Floreana and Volcán Wolf tortoise hybrids were discovered on Isabela Island; these tortoises were allegedly imported to, or abandoned on, the island in the early 19th century, allowing them to hybridise with the native subspecies. In 2017, a breeding programme was established to revive the subspecies through back breeding the hybrids to regain their genetic purity.
As of 2025, 400 Floreana giant tortoises have been hatched on Santa Cruz Island, with plans to release them into the wild on Floreana Island after the successful extirpation of invasive species there. However, the IUCN has yet to update the status of the subspecies, owing to the lack of a genetically pure specimen, and the de-extinct subspecies has yet to reproduce naturally in the wild.

Unknown Commiphora
In 2010, Sarah Sallon of the Arava Institute for Environmental Studies grew a seed found in 1986 excavations of a cave in the northern Judean desert. The specimen, named Sheba, reached maturity in 2024 and is believed to be an entirely new species of Commiphora; many believe it may be the tsori, or Judean balsam, a plant said in the Bible to have healing properties.

Montreal melon
The Montreal melon, also known as the Montreal market muskmelon, the Montreal nutmeg melon, and in French as melon de Montréal (melon of/from Montreal), is a cultivar of melon native to Canada and traditionally grown around the Montreal area. Despite its status as a delicacy on the east coast of North America, the Montreal melon disappeared from farms and was presumed extinct by the 1920s, owing to urbanisation in the region and the cultivar being ill-suited to agribusiness. In 1996, seeds of the lost melon were discovered in a seed bank in the American state of Iowa. Since then, the plant has been reintroduced to its former range by local gardeners.

See also Breeding back Preservation breeding Cryoconservation of animal genetic resources Endangered species Functional extinction Endling Holocene extinction List of introduced species Pleistocene Park Pleistocene rewilding Colossal Biosciences Arava Institute for Environmental Studies

References

Further reading Pilcher, Helen (2016). Bring Back the King: The New Science of De-extinction. Bloomsbury Press.

External links TEDx DeExtinction, a March 15, 2013 conference sponsored by the Revive & Restore project of the Long Now Foundation, supported by TEDx and hosted by the National Geographic Society, which helped popularize public understanding of the science of de-extinction; video proceedings, a meeting report, and links to press coverage are freely available. De-Extinction: Bringing Extinct Species Back to Life, an April 2013 article by Carl Zimmer for National Geographic magazine reporting on the 2013 conference.

Cloning Conservation biology Evolution of the biosphere Extinction Science fiction themes
De-extinction
[ "Engineering", "Biology" ]
9,629
[ "De-extinction", "Evolution of the biosphere", "Behavior", "Reproduction", "Cloning", "Genetic engineering", "Breeding", "Conservation biology" ]
39,381,650
https://en.wikipedia.org/wiki/NodeXL
NodeXL is a network analysis and visualization software package for Microsoft Excel 2007/2010/2013/2016. The package is similar to other network visualization tools such as Pajek, UCINet, and Gephi. It offers layout options (including ring/circle layouts), mapping of vertex and edge properties, and customizable visual attributes and tags. NodeXL enables researchers to compute social network analysis metrics such as centrality, degree, and clustering, as well as to monitor relational data and describe the overall structure of a relational network. When applied to Twitter data, it has been used to show, through data mining, the full network of users participating in a public discussion and that network's internal structure. It allows social network analysis (SNA) to emphasize relationships rather than isolated individuals or organizations, allowing interested parties to investigate the two-way dialogue between organizations and the public. SNA also provides a flexible measurement system and parameter selection to identify the influential nodes in a network, using measures such as in-degree and out-degree centrality. The software combines network visualization, social network analysis features, access to social media network data importers, advanced network metrics, and automation.

Codebase
NodeXL is a set of prebuilt class libraries using a custom Windows Presentation Foundation control. Additional .NET assemblies can be developed as "plug-ins" to import data from outside data providers. Currently implemented data providers for NodeXL include YouTube, Twitter, Wikipedia (the MediaWiki understructure), web hyperlinks, and Microsoft Exchange Server.

Contributors
NodeXL is a collaborative effort of a number of individuals from different universities and other organizations, who form the NodeXL Team. Microsoft Research established a NodeXL research project on November 20, 2008.

Features
NodeXL is intended for users with little or no programming experience, allowing them to collect, analyze, and visualize a variety of networks. NodeXL integrates into Microsoft Excel 2007, 2010, 2013, 2016, 2019 and 365 and opens as a workbook with a variety of worksheets containing the elements of a graph structure, such as edges and nodes. NodeXL can also import a variety of graph formats, such as edge lists, adjacency matrices, GraphML, UCINet .dl, and Pajek .net.

Data import
NodeXL Pro imports UCINet and GraphML files, as well as Excel spreadsheets containing edge lists or adjacency matrices, into NodeXL workbooks. NodeXL Pro also allows for the quick collection of social media data via a set of import tools which can collect network data from e-mail, Twitter, YouTube, and Flickr. NodeXL asks for the user's permission before collecting any personal data and focuses on the collection of publicly available data, such as Twitter statuses and follow relationships for users who have made their accounts public. These features allow NodeXL users to get working on relevant social media data immediately and to integrate social media data collection and analysis into a single tool.

Data representation
NodeXL workbooks contain four worksheets: Edges, Vertices, Groups, and Overall Metrics. The relevant data about entities in the graph, and the relationships between them, are located in the appropriate worksheet in row format. For example, the Edges worksheet contains a minimum of two columns, and each row has a minimum of two elements, corresponding to the two vertices that make up an edge in the graph.
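NodeXL itself runs inside Excel, but the worksheet model just described, an Edges sheet of vertex pairs plus per-vertex metric columns, can be sketched outside Excel. The fragment below uses Python's networkx as a stand-in (an illustration only; this is not NodeXL code, and the vertex names are invented):

```python
import networkx as nx

# Rows of a minimal "Edges" worksheet (vertex names invented for the example):
edges = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"),
]
g = nx.Graph(edges)

# Columns NodeXL-style tools would append to a "Vertices" worksheet:
metrics = {
    "degree": dict(g.degree()),
    "clustering": nx.clustering(g),
    "betweenness": nx.betweenness_centrality(g),
}
for v in g:
    print(v, {name: round(col[v], 3) for name, col in metrics.items()})
```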
Graph metrics, and edge and vertex visual properties, appear as additional columns in the respective worksheets. This representation allows the user to leverage the Excel spreadsheet to quickly edit existing node properties and to generate new ones, for instance by applying Excel formulas to existing columns.

Graph analysis
NodeXL Pro contains a library of commonly used graph metrics: centrality, clustering coefficient, and diameter. NodeXL differentiates between directed and undirected networks. NodeXL Pro also implements a variety of community detection algorithms that allow the user to automatically discover clusters in their social networks.

Graph visualization
NodeXL generates an interactive canvas for visualizing graphs. The project allows users to pick from several well-known force-directed graph drawing layout algorithms, such as Fruchterman–Reingold and Harel–Koren. NodeXL allows the user to multi-select, drag, and drop nodes on the canvas and to manually edit their visual properties (size, color, and opacity). In addition, NodeXL enables users to map the visual properties of nodes and edges to the metrics it calculates and, in general, to any column in the Edges and Vertices worksheets. Using the YouTube API, NodeXL enables researchers to retrieve videos and viewers' comments, along with titles, keywords, descriptions, and user IDs. The collected data can then be visualized via algorithms and methods such as the Harel–Koren fast multiscale algorithm, the Clauset–Newman–Moore algorithm, treemaps, and force-directed layouts.

Research
NodeXL has been used by news outlets such as Foreign Policy to visualize the structure of conversations about political topics, as well as by organizations like the World Bank to analyze voting data. NodeXL has been used as an analytical tool in dozens of research papers in the social, information, and computer sciences, and has been the focus of research in human–computer interaction, data mining, and data visualization.

See also Graph drawing Social network analysis software File formats GraphML (NodeXL Pro only) Geographic Data Files GEXF Pajek Related software Cytoscape Gephi Notes References Further reading Hansen, DL, Smith, MA, Shneiderman, B (2011) EventGraphs: Charting Collections of Conference Connections. In Forty-Fourth Annual Hawaii International Conference on System Sciences (HICSS). Also see EventGraph SlideShare Presentation Hansen, D, Rotman, D, Bonsignore, E, Milic-Frayling, N, Rodrigues, E, Smith, M, Shneiderman, B. Do You Know the Way to SNA?: A Process Model for Analyzing and Visualizing Social Media Data. HCIL-2009-17 Tech Report. External links NodeXL project on Twitter Resources Video overview Introductory slides List of research papers using NodeXL 2000 software Network theory Free application software Graph drawing software Microsoft free software Microsoft Research Software using the MS-PL license Windows-only free software
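The analysis and visualization pipeline described above, computing metrics, detecting communities, and mapping metrics onto visual properties, can also be approximated with networkx equivalents. The sketch below is an illustration under stated assumptions: NodeXL exposes these features as menu options rather than through this API, and the Clauset–Newman–Moore algorithm appears in networkx as greedy modularity maximization.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.karate_club_graph()              # a stock example network

pos = nx.spring_layout(g, seed=42)      # Fruchterman-Reingold force-directed layout
communities = greedy_modularity_communities(g)  # Clauset-Newman-Moore-style clustering

# Map a computed metric onto a visual property, as NodeXL's autofill columns do:
size_by_degree = {v: 100 + 40 * d for v, d in g.degree()}

print(f"{len(communities)} communities found")
print("node 0 position:", pos[0], "drawn size:", size_by_degree[0])
```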
NodeXL
[ "Mathematics" ]
1,313
[ "Network theory", "Mathematical relations", "Graph theory" ]